url stringlengths 14 2.42k | text stringlengths 100 1.02M | date stringlengths 19 19 | metadata stringlengths 1.06k 1.1k |
|---|---|---|---|
https://www.trustudies.com/question/916/Q3-If-tan-A-B-sqrt-3-and-tan-A-B-frac-1-sqrt-3-br-0-A-B-le-90-A-B-find-A-and-B/ | Q3. If $$tan (A + B) \ = \ \sqrt{3}$$ and $$tan (A – B) \ = \ \frac{1}{ \sqrt{3}}$$
$$0° < \ A+B \ \le \ 90° \ ; \ A \ > \ B$$ , find A and B.
$$tan (A + B) \ = \ \sqrt{3}$$
=> $$\ tan(A+B) \ = \ tan60°$$
$$\ A+B \ = \ 60°$$ ............(1)
$$tan (A – B) \ = \ \frac{1}{ \sqrt{3}}$$
=> $$\ tan(A-B) \ = \ tan30°$$
$$A – B \ = \ 30°$$...............(2)
$$\angle$$A = 45° and $$\angle$$B = 15° | 2020-10-23 10:35:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9729362726211548, "perplexity": 6205.94224604645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881369.4/warc/CC-MAIN-20201023102435-20201023132435-00648.warc.gz"} |
http://stackoverflow.com/questions/3367393/how-to-control-the-dimension-size-of-a-plot-with-ggplot2 | # How to control the dimension / size of a plot with ggplot2
I am using ggplot2 (respectively qplot) to generate a report with Sweave. Now I need some help with the adjustment of the size of the plot. I use the following Sweave code to include it.
\begin{figure}[htbp]
\begin{center}
<<fig=true,echo=false>>=
print(mygraph)
@
\caption{MyCaption}
\end{center}
\end{figure}
If I add a width argument (like shown below) to plot is squeezed down, but not really scaled down.
<<fig=true,echo=false,width=3>>=
If I use ggsave() instead, I could use a scale argument and influence the size of the resulting .pdf file. Is there a way to influence the dimensions of the plot without saving it (since the .pdf is generated by Sweave anyway) ? Is there anything I need to add to my qplot code?
mygraph=qplot(date,value,data=graph1,geom="line",colour=variable,xlab="",ylab="")
+ scale_y_continuous(limits = c(-0.3,0.3))
Thx for any suggestions in advance!
-
Instead of doing this within ggplot2, add the following LaTeX code before the code chunk where you print the graph.
\SweaveOpts{width=x, height=y}
x and y are height and width in inches.
If there is a particular aspect ratio you would like your plot to be, you can set this in ggplot2 with opts(). Unless I have some other reason, I usually try to keep my plots scaled to the golden ratio, per Tufte's suggestions. Usually I have
...
SweaveOpts{width=8, height=5}
...
<<label = "makeplot", echo = F>>=
p <- ggplot(mpg, aes(displ, hwy)) +
geom_point()+
opts(aspect.ratio = 2/(1+sqrt(5)) )
@
...
\begin{figure}[htbp]
\begin{center}
<<fig=true,echo=false>>=
print(p)
@
\caption{MyCaption}
\end{center}
\end{figure}
-
Great answer! (accepted). +1 for anticipating that it's not solely a Sweave problem, but an aspect.ratio thing. – Matt Bannert Jul 30 '10 at 8:35
Scatterplots are usually best with a square aspect ratio - a priori there is not usually reason to favour one variable over the other. – hadley Jul 30 '10 at 17:28
The Sweave options width and height influence the dimensions of the PDF file but not the size of the figures in the document. Put something like
\setkeys{Gin}{width=0.4\textwidth}
after \begin{document} to get smaller plots.
Source: Sweave manual, sec. 4.1.2
- | 2015-01-29 10:52:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8422535061836243, "perplexity": 2445.0104676963124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121833101.33/warc/CC-MAIN-20150124175033-00177-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://www.atnf.csiro.au/vlbi/dokuwiki/doku.php/lbaops/lbamay2012/recentchanges?rev=1450417121&do=diff | # ATNF VLBI Wiki
### Site Tools
lbaops:lbamay2012:recentchanges
# Differences
This shows you the differences between two versions of the page.
— lbaops:lbamay2012:recentchanges [2015/12/18 16:38] (current) Line 1: Line 1: + * [[lbaops:lbamay2012:re07bholog]] . . . May 18, 2012, at 08:17 PM by [[.:.:.:.:~breid]]: [==] + * [[lbaops:lbamay2012:re07bhhlog]] . . . May 18, 2012, at 07:09 PM by [[.:.:~hartrao]]: [==] + * [[lbaops:lbamay2012:re07b]] . . . May 18, 2012, at 02:49 PM by [[lbaops:~chris:phillips]]: [==] + * [[lbaops:lbamay2012:re07aholog]] . . . May 18, 2012, at 01:12 PM by [[~breid]]: [==] + * [[lbaops:lbamay2012:re07atilog]] . . . May 11, 2012, at 07:44 PM by [[.:.:~shoriuchi]]: [==] + * [[lbaops:lbamay2012:re07ahhlog]] . . . May 11, 2012, at 06:36 PM by [[~hartrao]]: [==] + * [[lbaops:lbamay2012:re03ac]] . . . May 11, 2012, at 04:10 PM by [[lbaops:~chris:phillips]]: [==] + * [[lbaops:lbamay2012:re07a]] . . . May 10, 2012, at 03:33 PM by [[lbaops:~chris:phillips]]: [==] + * [[lbaops:lbamay2012:v444bwwlog]] . . . May 08, 2012, at 09:32 AM by [[.:.:~nzobservers]]: [==] + * [[lbaops:lbamay2012:vc174wwlog]] . . . May 08, 2012, at 09:32 AM by [[~nzobservers]]: [==] + * [[lbaops:lbamay2012:vc175tilog]] . . . May 03, 2012, at 07:55 AM by [[~shoriuchi]]: [==] + * [[lbaops:lbamay2012:v444batlog]] . . . May 02, 2012, at 03:12 AM by [[.:.:.:~guest]]: [==] + * [[lbaops:lbamay2012:v444bholog]] . . . May 02, 2012, at 02:41 AM by [[~guest]]: [==] + * [[lbaops:lbamay2012:v444bcdlog]] . . . May 02, 2012, at 02:39 AM by [[~guest]]: [==] + * [[lbaops:lbamay2012:v444bkelog]] . . . May 01, 2012, at 08:11 PM by [[~breid]]: [==] + * [[lbaops:lbamay2012:v444byglog]] . . . May 01, 2012, at 08:10 PM by [[~breid]]: [==] + | 2020-08-14 05:50:49 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8754781484603882, "perplexity": 7752.149283817246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739177.25/warc/CC-MAIN-20200814040920-20200814070920-00357.warc.gz"} |
http://iaaras.ru/en/library/paper/880/ | Search
• Papers
## Synthetic procedure of simple-type controller induction motor control system
Transactions of IAA RAS, issue 25, 10–15 (2012)
Keywords: induction motor, AC drives, controller, dynamics, adapting, criteria for the quality of control, dynamic characteristics.
### Abstract
The article covers synthetic procedure of laws of control over a servo induction motor drive on the basis of multi-variant analysis of possible minimum complexity structures having low susceptibility to parametric oscillation. An example of implementing the synthetic procedure of structures and controllers allowing simple technical solution and ensuring high technical characteristics of the electrical drive given unstable parameters of a mechanical part, such as rigidity, power transmission damping coefficient, load inertia moment, resistance force moment.
### Citation
Text
BibTeX
RIS
E. V. Alexandrov, M. I. Panin, L. A. Povalyaev. Synthetic procedure of simple-type controller induction motor control system // Transactions of IAA RAS. — 2012. — Issue 25. — P. 10–15. @article{alexandrov2012, abstract = {The article covers synthetic procedure of laws of control over a servo induction motor drive on the basis of multi-variant analysis of possible minimum complexity structures having low susceptibility to parametric oscillation. An example of implementing the synthetic procedure of structures and controllers allowing simple technical solution and ensuring high technical characteristics of the electrical drive given unstable parameters of a mechanical part, such as rigidity, power transmission damping coefficient, load inertia moment, resistance force moment.}, author = {E.~V. Alexandrov and M.~I. Panin and L.~A. Povalyaev}, issue = {25}, journal = {Transactions of IAA RAS}, keyword = {induction motor, AC drives, controller, dynamics, adapting, criteria for the quality of control, dynamic characteristics}, pages = {10--15}, title = {Synthetic procedure of simple-type controller induction motor control system}, url = {http://iaaras.ru/en/library/paper/880/}, year = {2012} } TY - JOUR TI - Synthetic procedure of simple-type controller induction motor control system AU - Alexandrov, E. V. AU - Panin, M. I. AU - Povalyaev, L. A. PY - 2012 T2 - Transactions of IAA RAS IS - 25 SP - 10 AB - The article covers synthetic procedure of laws of control over a servo induction motor drive on the basis of multi-variant analysis of possible minimum complexity structures having low susceptibility to parametric oscillation. An example of implementing the synthetic procedure of structures and controllers allowing simple technical solution and ensuring high technical characteristics of the electrical drive given unstable parameters of a mechanical part, such as rigidity, power transmission damping coefficient, load inertia moment, resistance force moment. UR - http://iaaras.ru/en/library/paper/880/ ER - | 2020-12-05 23:14:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23082157969474792, "perplexity": 8455.827821975778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141750841.83/warc/CC-MAIN-20201205211729-20201206001729-00274.warc.gz"} |
https://eduzip.com/ask/question/if-true-then-enter-1-and-if-false-then-enter-0can-two-acute-angle-521134 | Mathematics
# If true then enter $1$ and if false then enter $0$Can two acute angles be supplementary?
##### SOLUTION
No.
Two acute angles can never be supplementary.
Since $0$ degrees<Acute angles <$90$ degrees,
So the Sum of two such angles will always be less than 180 degrees.
You're just one step away
One Word Medium Published on 09th 09, 2020
Questions 120418
Subjects 10
Chapters 88
Enrolled Students 86
#### Realted Questions
Q1 Subjective Medium
In the given figure, $\angle ABD =54^o$ and $\angle BCD=43^o$, calculate
$\angle BDA$
Asked in: Mathematics - Lines and Angles
1 Verified Answer | Published on 09th 09, 2020
Q2 Single Correct Medium
In figure, $\displaystyle \angle POR={ a }^{ o }$ and $\displaystyle \angle QOR={ b }^{ o }$. If $\displaystyle { a }^{ o }-{ b }^{ o }={ 60}^{ o }$. Find angle $\displaystyle { a }^{ o }$ and $\displaystyle { b }^{ o }$.
• A. $\displaystyle \angle a={ 70 }^{ o },\angle b={ 110 }^{ o }$
• B. $\displaystyle \angle a={ 90 }^{ o },\angle b={ 90 }^{ o }$
• C. $\displaystyle \angle a={ 60 }^{ o },\angle b={ 120 }^{ o }$
• D. $\displaystyle \angle a={ 120 }^{ o },\angle b={ 60 }^{ o }$
Asked in: Mathematics - Lines and Angles
1 Verified Answer | Published on 23rd 09, 2020
Q3 Subjective Medium
Use the following figure to find:
$BC$, if $AB=7.2\ cm$.
Asked in: Mathematics - Lines and Angles
1 Verified Answer | Published on 23rd 09, 2020
Q4 One Word Medium
In the figure $\angle BAC = 75^{\circ}$, $\angle ABC =35^{\circ}$. Find the measures of $\angle BAZ$
Asked in: Mathematics - Lines and Angles
1 Verified Answer | Published on 09th 09, 2020
Q5 Subjective Medium
Can we have two obtuse angles whose sum is
(a) A reflex angle? Why or why not?
(b) A complete angle? Why or why not?
Asked in: Mathematics - Lines and Angles
1 Verified Answer | Published on 09th 09, 2020 | 2022-01-21 07:46:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5510575771331787, "perplexity": 13637.454973535308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302740.94/warc/CC-MAIN-20220121071203-20220121101203-00190.warc.gz"} |
https://indico.cern.ch/event/686555/contributions/2977537/ | # ICHEP2018 SEOUL
Jul 4 – 11, 2018
COEX, SEOUL
Asia/Seoul timezone
## Spectroscopy of the first electrons from the KATRIN tritium source
Jul 7, 2018, 5:45 PM
15m
103 (COEX, Seoul)
### 103
#### COEX, Seoul
Parallel Neutrino Physics
### Speaker
Dr Magnus Schlösser (Karlsruhe Institute of Technology, Insitute of Technical Physics, Tritium Laboratory Karlsruhe)
### Description
Neutrinos are by far the lightest particles in the Universe. According to the Standard Model of Particle Physics neutrinos should be massless. However, the existence of their mass has been proven experimentally by the observation of neutrino mass oscillations. The KArlsruhe TRitium Neutrino (KATRIN) experiment at the Karlsruhe Institute of Technology aims for a direct neutrino mass determination with a sensitivity of 200 meV/c$^2$ (90% C.L.).
The measurement will be performed by precise spectroscopy of the tritium-β-decay electrons near the kinematic endpoint of 18.6 keV. That is achieved by employing a high-resolution (ΔE < 1 eV) MAC-E-type high-pass energy filter coupled to a high-luminosity ($10^{11}$ Bq) windowless gaseous tritium source which is supplied by the closed gas processing loop of the Tritium Laboratory Karlsruhe (TLK) at throughput of 40 g of T$_2$ per day.
In autumn 2016, the First Light commissioning campaign took place, in which photoelectrons generated from KATRIN's rear wall were guided through the complete beamline (source and spectrometers) and were detected successfully on the detector. During the subsequent experimental stage in summer 2018, gaseous metastable Kr-83m was injected into the KATRIN source section. Furthermore, a condensed Kr-83m source was deployed in the transport section. By using both sources, first high-resolution spectroscopy of electrons from radioactive origin has been performed with KATRIN (arXiv:1802.04167). From this campaign, we could demonstrate many aspects of the high-resolution spectroscopy capability of the KATRIN setup and perform a highly accurate calibration of the energy scale of KATRIN from the mono-energetic conversion electrons from Kr-83m (arXiv:1802.05227).
After the demonstration of the high-resolution performance of the KATRIN spectrometers, in spring 2018, the first injection of tritium into the KATRIN source section is scheduled. The principal aim of this campaign is to demonstrate the stability of the tritium source at an activity of about 1% (~$10^9$ Bq) of the nominal level, which is maintained by a complex tritium loop at the TLK. This stability investigation is crucial in order to operate the tritium source at high isotopic purity (>95%) and a stability of 0.1% during upcoming neutrino mass runs (with a total measurement time of three years).
This talk presents the ambitious goals of KATRIN and the complex setup designed to reach them. The fruitful achievements of the successful Krypton campaign will be summarized and an insight into the results from the first ever high-resolution spectroscopy with tritium beta-decay electrons by KATRIN is given.
### Primary authors
Dr Magnus Schlösser (Karlsruhe Institute of Technology, Insitute of Technical Physics, Tritium Laboratory Karlsruhe) | 2022-09-28 09:58:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5186734199523926, "perplexity": 4569.948447894227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335190.45/warc/CC-MAIN-20220928082743-20220928112743-00576.warc.gz"} |
https://socratic.org/questions/how-do-you-write-in-simplest-radical-form-the-coordinates-of-point-a-if-a-is-on--5 | # How do you write in simplest radical form the coordinates of point A if A is on the terminal side of angle in standard position whose degree measure is theta: OA=0.5, theta=180^circ?
Apr 7, 2018
$A \left(- 0.5 , \frac{\sqrt{3}}{2}\right)$
#### Explanation:
.
$x$-coordinate of point $A$ is $= O A = 0.5$
$y$-coordinate of point $A$ is $A B$
$\cos \angle A O B = \frac{O A}{O B} = \frac{0.5}{1} = 0.5$
$\angle A O B = \arccos \left(0.5\right) = \frac{\pi}{3}$
$\sin \angle A O B = \frac{A B}{O B} = \frac{A B}{1} = A B$
$\sin \left(\frac{\pi}{3}\right) = \frac{\sqrt{3}}{2}$
$A B = \frac{\sqrt{3}}{2}$
Therefore,
$A \left(- 0.5 , \frac{\sqrt{3}}{2}\right)$ | 2021-06-18 09:02:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 13, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8823390007019043, "perplexity": 897.8585614899683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487635920.39/warc/CC-MAIN-20210618073932-20210618103932-00148.warc.gz"} |
https://aliquote.org/post/ubuntu-22-04/ | aliquote
< a quantity that can be divided into another a whole number of time />
In a recent post, I said that I will probably upgrade my Ubuntu soon or later. The day after I published it I finally did it. It took me a few hours to upgrade my laptop full of useless applications and libraries installed all along the last year, but finally I got the welcome screen before lunch. There are many reviews available on the web, so I will just mention what I did and what I find the most interesting updates for my use cases.
• There are important updates to the kernel, glibc and various other system libraries, but I noticed that now clang and Python are versions 14.0 and 3.10.4. Python 3.10 is supposed to be faster than previous versions, but I didn’t run any benchmarks so far. Nothing new regarding Neovim: still at version 0.6 from the official apt repositories, so I’m using the Neovim PPA for the unstable version.
• The window manager has been improved, IMHO. The calendar widget is able to display all events stored in Gnome calendar, there’s a processor scheduler that can alternate between energy saving and high performance policy, and virtual desktops are now displayed horizontally rather than vertically. Gnome screen capture got a few enhancements as well.
• I deleted a bunch of unused or defunct applications, including Docker and bitlbee. I also manually cleanup up the old LLVM files that came with the previous installation of Ubuntu 20.04 and that were not cleaned up during the upgrade process. Finally, I re enabled Tracker, which I disabled a while ago, and some default settings, which I also disabled for i3 and Regolith desktop. I did not reinstall the Snap store. In fact, I deleted everything related to snap when I configured my laptop the first time, and then never looked back. At this point, I don’t even know how to reinstall snap on my machine.
• There were some quircks here and there: IPython was not happy with the missing Qt backend for matplotlib, so I now use matplotlib.use("GTK4Cairo") instead, which is not a bad idea after all. Of course, switching to Python 3.10 instead of 3.8 also means that I had to reinstall every little applications that sit in my \$HOME/.local/bin folder, mostly from pip3 install --user. To remap the CAPS lock key to , I was previously invoking setxkbmap -option caps:escape in my shell init scripts. This no longer works under Wayland, but I discovered that you can just ask Gnome shell to do that for you: gsettings set org.gnome.desktop.input-sources xkb-options "['caps:escape']".
• I spent another couple hours fixing my Hugo theme, which stopped working with version 0.92. I am now aware of the fact: after each update of Hugo something goes wrong. I had not encountered such problem the last two years since I was on the 20.04 LTS, which means no update at all for most applications. However, I tor my hair out for a long time with rolling release on Apple Homebrew. Anyway, I fixed it, again, and now I hope we are quiet for some long months.
• I also managed to install the Nordic theme, which is beautiful (especially compared to the default dark theme in Ubuntu) and reminds me of the one I had when I was using Regolith desktop. I didn’t reinstalled it yet, since I want write to try the default settings yet another time again.
♪ Big Spider’s Back • Black Chow | 2022-12-04 19:35:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21179117262363434, "perplexity": 2147.3678350887}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710978.15/warc/CC-MAIN-20221204172438-20221204202438-00016.warc.gz"} |
http://math.stackexchange.com/questions/170380/admissible-ordinals/413522 | a little question about admissible sets: Is every $\mathfrak{M}$-admissible ordinals an admissible ordinal ? where $\mathfrak{M}$ is a $L$-structure over $L=\{R_1,\dots,R_k \}$.
Thanks.
-
Yes, admissibility relativizes downward. For a transitive structure to be admissible, it must be amenable, and satisfy $\Sigma_0$-collection. Both are conditions that keep holding if you "remove parameters". This is obvious for amenability. For collection, a little argument is needed. If you have access to Devlin's "Constructibility" book, this is at the beginning of II.7. (Sorry, I do not currently have time to flesh out details.) – Andres Caicedo Jul 13 '12 at 16:03
Unrelated: is your name supposed to be a pig-Latinization of "Barwise"? – Quinn Culver Jul 13 '12 at 16:36
Yes, admissibility relativizes downward. For a transitive structure to be admissible, it must be amenable, and satisfy $\Sigma_0$-collection. Both are conditions that keep holding if you "remove parameters". This is obvious for amenability. For collection, a little argument is needed. If you have access to Devlin's "Constructibility" book, this is at the beginning of II.7. (Andres Caicedo) | 2014-10-26 04:48:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.868277370929718, "perplexity": 1909.7735813424827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119655159.46/warc/CC-MAIN-20141024030055-00173-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/2641945/can-you-hear-the-pins-fall-from-bowling-game-scores | # Can you hear the pins fall from bowling game scores?
Let $\mathbb T=\{1,\dotsc,10\}$ represent the ten pins in a standard game of bowling.
Given two sets of pins $T\subseteq S\subseteq \mathbb T$, let's write $p_{S\to T}$ to represent the conditional probability that given the current pins up are $S$, after a single bowl by a certain player, the pins up are $T$. For example, $p_{\mathbb T\to\varnothing}$ is the probability that the player bowls a strike, $p_{\mathbb T\to\mathbb T}$ is the probability of a gutter ball, and $p_{\{7,10\}\to\varnothing}$ is the probability of picking up a spare after the most infamous split.
Let us say a pinfall model is a tuple of all these probabilities $p_{S\to T}$. Such a model has a lot of parameters: one can count that the number of different $p_{S\to T}$s is $$\sum_{S\subseteq \mathbb T} \sum_{T \subseteq S} 1 = \sum_{S\subseteq \mathbb T} 2^{\lvert S\rvert} = \sum_{i=0}^{10} \binom{10}{i} 2^i = (2+1)^{10} = 59\,049.$$ (There are other more direct ways of counting these parameters. Also, because these probabilities come from $2^{10}$ separate probability distributions $p_{S\to\diamond}$, the number of degrees of freedom is actually $3^{10}-2^{10} = 58\,025.$)
Using a pinfall model, one can simulate a full (single-player) game of bowling, in the usual way one would expect for a Markov model. There is some amount of detail elided here, because the rules of bowling are tricky (especially the final frame) and a single game might involve anywhere from 11 to 21 throws. Note that the fundamental assumption of this setup is that every single throw is independent, and that the player never tires nor changes their strategy.
If we focus only on the final score of the game (using traditional scoring), every pinfall model produces a distribution $q$ on the 301 possible scores $0, \dotsc, 300$. For example, the probability of a perfect game $q_{300}$ is $p_{\mathbb T\to\varnothing}^{12}$, while the probability of a scoreless game $q_{0}$ is $p_{\mathbb T\to\mathbb T}^{20}$. If you work out the details, one can see that this map $f\colon \mathbb R^{59049} \to \mathbb R^{301}$ from a pinfall model $(p_{S\to T})$ to a score distribution $(q_s)$ is a polynomial map!
One might wonder how much about the pinfall model we can recover from the distribution of scores. Some things are easy: we can definitely get $p_{\mathbb T\to\varnothing}$ and $p_{\mathbb T\to \mathbb T}$ from the reasoning above involving $q_{300}$ and $q_0$. (One might say that you can "hear" how often a player gets a strike or a gutter ball simply from hearing enough of their final game scores.) However, other things are impossible: the dimension of the codomain of $f$ is only a few hundred, so we have no hope of getting most of the myriads of parameters. In particular, we're not going to be able to get $p_{\{7,10\}\to\varnothing}$.
How many independent dimensions in total can we recover from the score distribution? Phrased mathematically, what is the dimension of the image under $f$ of the pinfall models†, considered as a semialgebraic set or a submanifold?
In other words, how many independent degrees of freedom are there in a distribution of bowling scores (in this model)? Note that a degree of freedom here might correspond directly to a parameter from the original model, but more likely is some kind of derived quantity, like "the probability of a spare" $\sum_{\varnothing\subsetneq S\subseteq \mathbb T} (p_{\mathbb T\to S} \cdot p_{S\to\varnothing})$ or "the probability of a 9 on the first bowl" $\sum_{S\subseteq \mathbb T, \lvert \mathbb T\smallsetminus S\rvert = 9} (p_{\mathbb T\to S})$.
Edit to add: As discussed parenthetically above, the space of valid pinfall models is the subset of the full $59\,049$-dimensional space, where each group of parameters $p_{S\to\diamond}$ is a valid probability distribution. I don't care about the image under $f$ of "bad models", which don't correspond to distributions as they should, in part because $f$ does not make sense there.
• One can ask the same question under the simplified en.wikipedia.org/wiki/Ten-pin_bowling#World_Bowling_scoring or "current frame scoring". The scoring of each frame is independent and there are 21 possible scores per frame, so the answer should be these 21-1 = 20 degrees of freedom. The total game score then is a tenfold convolution. – A. Rex Feb 8 '18 at 16:00
• It should be possible to determine this by considering the linearized behavior in the neighborhood of a single pinfall model (provided it's a sufficiently generic one), shouldn't it? Just ("just") calculate the ~50000 x 300 matrix of partial derivatives and determine its rank? – mjqxxxx Oct 29 '18 at 2:18
• @mjqxxxx: Yes. And even if it's not a generic point, that's always a lower bound. – A. Rex Oct 29 '18 at 11:18
• @mjqxxxx: It should be easier to factor through the 55-dimensional space I describe in my partial answer. Then the matrix is only 55 x 300 or so. – A. Rex Nov 1 '18 at 0:51
Here's some partial progress on the problem. It was already mentioned in the problem that $$300$$ is an upper bound on the number of recoverable dimensions, because that is the dimension of the codomain of $$f$$. However, one can prove a much tighter bound.
Specifically, if $$0\le s \le 10$$, let $$p_{10\to s}$$ represent the probability that after a single bowl with all ten pins up, $$s$$ pins remain up. In terms of the original problem, we have $$p_{10\to s} = \sum_{S\subseteq \mathbb T, \lvert S\rvert = s} p_{\mathbb T\to S}.$$ This is a probability distribution with ten degrees of freedom.
Furthermore, if $$0\le t\le s \le 10$$, let $$p_{s\to t}$$ represent the conditional probability that given the first bowl resulted in $$s$$ pins up, the second bowl leaves $$t$$ pins up. In terms of the original problem, we have $$p_{s\to t} = \left(\sum_{T\subseteq S\subseteq \mathbb T, \lvert S\rvert = s, \lvert T\rvert =t} p_{\mathbb T\to S} p_{S\to T}\right)\bigg/p_{10\to s}.$$ (If $$p_{10\to s} = 0$$, it won't matter what $$p_{s\to t}$$ is.) Note that we should actually restrict $$0< s < 10$$; indeed, the original problem does not allow for a different strategy on the second bowl with all ten pins up. Accordingly, $$p_{10\to s}$$ means "the same thing" on the first and second bowl. Meanwhile, the situation $$s=0$$ will never call for a second bowl.
Similarly to before, $$p_{s\to t}$$ is a probability distribution with $$s$$ degrees of freedom, so overall all of these distributions $$p_{s\to t}$$ contain $$10+9+\dotsb+1 =55$$ degrees of freedom.
Moreover, these probability distributions are sufficient to compute $$f$$. In other words, the map $$f$$ factors through this space. The way to see this is that in terms of scoring, it suffices to "simulate" the game via the number of pins up. The Markov chain from the original problem (which was not explicitly described) can be simulated without actually keeping track of which specific pins stay up. So for example, we would start by simulating a single bowl by drawing from $$p_{10\to s}$$. If the result is a strike ($$s=0$$), we record that accordingly and move on to the next frame. If the result is anything else, we simulate a second bowl by drawing from $$p_{s\to t}$$. By construction, we have taken into account the marginal distribution of the first bowl when considering the effect of the second bowl. And so forth. The final frame is a little tricky, but works fine because we are able to simulate a single bowl, not only a complete frame "$$p_{10\to s\to t}$$".
Unfortunately, this does not prove a lower bound because we do not consider whether it is possible to recover all of these dimensions from the image of $$f$$. Accordingly, 55 is a tighter upper bound to the original problem, without a matching lower bound.
• All the $p_{10\to s}$ except $p_{10\to10}$ follow from scores 291 to 300. – Empy2 Oct 25 '18 at 12:21
• Then, by looking at scores 0 to 10, you can find $\sum_s p_{10\to s\to t}$ for all $t$ – Empy2 Oct 25 '18 at 12:49
• @Empy2: the score of 10 is at least a little tricky to interpret, especially because of the possibility of a strike or spare in the final frame. – A. Rex Oct 25 '18 at 13:40
• If you got 10 points including a strike or spare, you got no other points. The strike probability follows from 300, and no-spare no-strike games can be worked out from scores 0-9, so that lets you find the probability of a spare. – Empy2 Oct 25 '18 at 14:24
• @Empy2: yes, indeed. There is some trickiness because getting "no other points" contributes the nontrivial factor of $9 p_{\mathbb T\to\mathbb T}^{18} + p_{\mathbb T\to\mathbb T}^{20}$ for a strike but the different and still nontrivial factor $9 p_{\mathbb T\to\mathbb T}^{18} + p_{\mathbb T\to\mathbb T}^{19}$ for a spare. – A. Rex Oct 25 '18 at 16:10 | 2019-08-23 15:41:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 27, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7213064432144165, "perplexity": 329.719901393014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318894.83/warc/CC-MAIN-20190823150804-20190823172804-00368.warc.gz"} |
https://askcryp.to/t/resource-topic-2021-1025-efficient-information-theoretic-multi-party-computation-over-non-commutative-rings/15360 | # [Resource Topic] 2021/1025: Efficient Information-Theoretic Multi-Party Computation over Non-Commutative Rings
Welcome to the resource topic for 2021/1025
Title:
Efficient Information-Theoretic Multi-Party Computation over Non-Commutative Rings
Authors: Daniel Escudero, Eduardo Soria-Vazquez
Abstract:
We construct the first efficient MPC protocol that only requires black-box access to a non-commutative ring R. Previous results in the same setting were efficient only either for a constant number of corruptions or when computing branching programs and formulas. Our techniques are based on a generalization of Shamir’s secret sharing to non-commutative rings, which we derive from the work on Reed Solomon codes by Quintin, Barbier and Chabot (IEEE Transactions on Information Theory, 2013). When the center of the ring contains a set A = \{\alpha_0, \ldots, \alpha_n\} such that \forall i \neq j, \alpha_i - \alpha_j \in R^*, the resulting secret sharing scheme is strongly multiplicative and we can generalize existing constructions over finite fields without much trouble. Most of our work is devoted to the case where the elements of A do not commute with all of R, but they just commute with each other. For such rings, the secret sharing scheme cannot be linear on both sides" and furthermore it is not multiplicative. Nevertheless, we are still able to build MPC protocols with a concretely efficient online phase and black-box access to $R$. As an example we consider the ring $\mathcal{M}_{m\times m}(\mathbb{Z}/2^k\mathbb{Z})$, for which when $m > \log(n+1)$, we obtain protocols that require around $\lceil\log(n+1)\rceil/2$ less communication and $2\lceil\log(n+1)\rceil$ less computation than the state of the art protocol based on Circuit Amortization Friendly Encodings (Dalskov, Lee and Soria-Vazquez, ASIACRYPT 2020). In this setting with a less commutative" A, our black-box preprocessing phase has a less practical complexity of \poly(n). Due to this, we additionally provide specialized, concretely efficient preprocessing protocols for R = \mathcal{M}_{m\times m}(\mathbb{Z}/2^k\mathbb{Z}) that exploit the structure of the matrix ring.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites. | 2023-03-20 21:53:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7986369729042053, "perplexity": 908.2448358373344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943562.70/warc/CC-MAIN-20230320211022-20230321001022-00311.warc.gz"} |
http://cnx.org/content/m14013/latest/?collection=col10325/latest | # Connexions
You are here: Home » Content » Freshman Engineering Problem Solving with MATLAB » Programming With M-Files: For-Loop Exercises
### Recently Viewed
This feature requires Javascript to be enabled.
Inside Collection (Course):
Course by: Darryl Morrell. E-mail the author
# Programming With M-Files: For-Loop Exercises
Module by: Darryl Morrell. E-mail the author
Summary: This module provide several practice exercises on the use of for-loops.
## Exercise 1
Frequency is a defining characteristic of many physical phenomena including sound and light. For sound, frequency is perceived as the pitch of the sound. For light, frequency is perceived as color.
The equation of a cosine wave with frequency f f cycles/second is
y=cos2πft y 2 π f t
(1)
Create an m-file script to plot the cosine waveform with frequency f=2 f 2 cycles/s for values of t t between 0 and 4.
## Exercise 2
Suppose that we wish to plot (on the same graph) the cosine waveform in Exercise 1 for the following frequencies: 0.7, 1, 1.5, and 2. Modify your solution to Exercise 1 to use a for-loop to create this plot.
### Solution A
The following for-loop is designed to solve this problem:
t=0:.01:4;
hold on
for f=[0.7 1 1.5 2]
y=cos(2*pi*f*t);
plot(t,y);
end
When this code is run, it plots all of the cosine waveforms using the same line style and color, as shown in Figure 1. The next solution shows one rather complicated way to change the line style and color.
### Solution B
The following code changes the line style of each of the cosine plots.
fs = ['r-';'b.';'go';'y*']; %Create an array of line style strings
x=1; %Initialize the counter variable x
t=0:.01:4;
hold on
for f=[0.7 1 1.5 2]
y=cos(2*pi*f*t);
plot(t,y,fs(x,1:end)); %Plot t vs y with the line style string indexed by x
x=x+1; %Increment x by one
end
xlabel('t');
ylabel('cos(2 pi f t)')
title('plots of cos(t)')
legend('f=0.7','f=1','f=1.5','f=2')
This code produces the plot in Figure 2. Note that this plot follows appropriate engineering graphics conventions-axes are labeled, there is a title, and there is a legend to identify each plot.
## Exercise 3
Suppose that you are building a mobile robot, and are designing the size of the wheels on the robot to achieve a given travel speed. Denote the radius of the wheel (in inches) as r r, and the rotations per second of the wheel as w w. The robot speed s s (in inches/s) is related to r r and w w by the equation
s=2πrw s 2 π r w
(2)
On one graph, create plots of the relationship between s s and w w for values of r r of 0.5in, 0.7in, 1.6in, 3.2in, and 4.0in.
## Exercise 4
### Multiple Hypotenuses
Consider the right triangle shown in Figure 3. Suppose you wish to find the length of the hypotenuse c c of this triangle for several combinations of side lengths a a and b b ; the specific combinations of a a and b b are given in Table 1. Write an m-file to do this.
Table 1: Side Lengths
a a b b
1 1
1 2
2 3
4 1
2 2
### Solution
This solution was created by Heidi Zipperian:
a=[1 1 2 4 2]
b=[1 2 3 1 2]
for j=1:5
c=sqrt(a(j)^2+b(j)^2)
end
A solution that does not use a for loop was also created by Heidi:
a=[1 1 2 4 2]
b=[1 2 3 1 2]
c=sqrt(a.^2+b.^2)
## Content actions
PDF | EPUB (?)
### What is an EPUB file?
EPUB is an electronic book format that can be read on a variety of mobile devices.
For detailed instructions on how to download this content's EPUB to your specific device, click the "(?)" link.
PDF | EPUB (?)
### What is an EPUB file?
EPUB is an electronic book format that can be read on a variety of mobile devices.
For detailed instructions on how to download this content's EPUB to your specific device, click the "(?)" link.
#### Collection to:
My Favorites (?)
'My Favorites' is a special kind of lens which you can use to bookmark modules and collections. 'My Favorites' can only be seen by you, and collections saved in 'My Favorites' can remember the last module you were on. You need an account to use 'My Favorites'.
| A lens I own (?)
#### Definition of a lens
##### Lenses
A lens is a custom view of the content in the repository. You can think of it as a fancy kind of list that will let you see content through the eyes of organizations and people you trust.
##### What is in a lens?
Lens makers point to materials (modules and collections), creating a guide that includes their own comments and descriptive tags about the content.
##### Who can create a lens?
Any individual member, a community, or a respected organization.
##### What are tags?
Tags are descriptors added by lens makers to help label content, attaching a vocabulary that is meaningful in the context of the lens.
| External bookmarks
#### Module to:
My Favorites (?)
'My Favorites' is a special kind of lens which you can use to bookmark modules and collections. 'My Favorites' can only be seen by you, and collections saved in 'My Favorites' can remember the last module you were on. You need an account to use 'My Favorites'.
| A lens I own (?)
#### Definition of a lens
##### Lenses
A lens is a custom view of the content in the repository. You can think of it as a fancy kind of list that will let you see content through the eyes of organizations and people you trust.
##### What is in a lens?
Lens makers point to materials (modules and collections), creating a guide that includes their own comments and descriptive tags about the content.
##### Who can create a lens?
Any individual member, a community, or a respected organization.
##### What are tags?
Tags are descriptors added by lens makers to help label content, attaching a vocabulary that is meaningful in the context of the lens.
| External bookmarks | 2014-03-08 02:01:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21248793601989746, "perplexity": 2759.7299369568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999652570/warc/CC-MAIN-20140305060732-00009-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://gateoverflow.in/969/gate2003-86 | 1.7k views
Consider the set of relations shown below and the SQL query that follows.
Students: (Roll_number, Name, Date_of_birth)
Courses: (Course_number, Course_name, Instructor)
Select distinct Name
and Courses.Instructor = 'Korth'
Which of the following sets is computed by the above query?
1. Names of students who have got an A grade in all courses taught by Korth
2. Names of students who have got an A grade in all courses
3. Names of students who have got an A grade in at least one of the courses taught by Korth
4. None of the above
C. Names of the students who have got an A grade in at least one of the courses taught by Korth.
selected
0
can someone explain why it is not A???
0
option a is said if person tuaght by all couses teach by korth then only slect but query slect if a person there who join course taught by korth need not all course.
In short ur case fail when studnt attend all couse taught by korth expect one... ( if korth taught n course then student join n-1 ).
+1
how do we evaluate the predicate??all conditions are neccessary right??there is "and" between the conditions so all should be true.According to me the above query must select all tuples where korth is the instructor of the course and grade is A.May be i am not able to understand how predicate can be solved.Plz help.
0
yes, you are correct. And that is option C rt?
0
i am sorry bt i just can't understand how it is option c.
0
make any table with random input and for one student assign all korth subject and for remaining student assign one only korth ..u get the ans why not a is ans..
ans will be sid of studnt who register for any course of korth (one or more)
+3
Also as distinct is used, whether student take single course of 10000, only one result will come out. So C is correct !
+8
option a is telling us to select a student only if he gets an A grade in all the subjects taught by KORTH.
so student need to get A grade in all subjects taught by korth.
for eg.
suppose Korth teaches 3 subjects Dbms, OS and Algo and a student1 gets A grade in DBMS and OS but not in Algo. so student1 won't be selected if u choose option A.
but this is not correct.
If a student gets A grade in any sub. taught by Korth should be selected.
so the option is C
+1
i have one question . please give a query for students who will be selected for getting grade 'A' in all the subjects taught by korth (so that it'd make sense)?
anyone plz ??
0
i think then you need to apply group by course instructor =korth having grades.grade=A
+2
@khush No. Never use having unless we need a group property like MAX or a property of attribute used in group by.
Select distinct Name
from Students
where Students.Roll_number NOT IN
and Courses.Instructor = 'Korth'
and Grades.Grade != 'A');
0
thanx arjun sir, just clear my one more silly question :P :P (just to clarify),
1. if we have 10 tuples each in all the 3 relations , then the above two queries during its computation , produces an intermediate cartesian product, that has 1000 tuples and then applies these conditions right ??
2. is it anyway possible to see that intermediate cartesian product result(using the db tools like sql fiddle), so that we could understand sql more clearly (because thats how we get confused between ("at least" , "any" and "all " keywords)??
0
1. yes, or do for 100 first apply conditions and then do next join. Query plan selection is a difficult problem and I do not remember the common strategies.
2. Sorry, no idea. I have seen some DBMS like MSSQL providing some intermediate results but as a novice it wasnt easy to use. May be someone expert in SQL can help.
+1
thnx again
0
@Arjun why not option a)?
0
^Because Korth can instruct more than one course, and nowhere in the query it says which one course. So it will take all the courses by Korth. | 2018-05-24 08:39:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2716597020626068, "perplexity": 3123.7936317049453}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866107.79/warc/CC-MAIN-20180524073324-20180524093324-00403.warc.gz"} |
https://www.aimsciences.org/article/doi/10.3934/proc.2001.2001.14 | # American Institute of Mathematical Sciences
2001, 2001(Special): 14-21. doi: 10.3934/proc.2001.2001.14
## Stochastic behavior of asymptotically expanding maps
1 Department of Mathematics, University of Porto, 4099-002 Porto, Portugal
Received July 2000 Published November 2013
Citation: José F. Alves. Stochastic behavior of asymptotically expanding maps. Conference Publications, 2001, 2001 (Special) : 14-21. doi: 10.3934/proc.2001.2001.14
[1] Izumi Takagi, Conghui Zhang. Existence and stability of patterns in a reaction-diffusion-ODE system with hysteresis in non-uniform media. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020400 [2] Hao Wang. Uniform stability estimate for the Vlasov-Poisson-Boltzmann system. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 657-680. doi: 10.3934/dcds.2020292 [3] Reza Chaharpashlou, Abdon Atangana, Reza Saadati. On the fuzzy stability results for fractional stochastic Volterra integral equation. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020432 [4] Neil S. Trudinger, Xu-Jia Wang. Quasilinear elliptic equations with signed measure. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 477-494. doi: 10.3934/dcds.2009.23.477 [5] Ugo Bessi. Another point of view on Kusuoka's measure. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020404 [6] Fabio Camilli, Giulia Cavagnari, Raul De Maio, Benedetto Piccoli. Superposition principle and schemes for measure differential equations. Kinetic & Related Models, 2021, 14 (1) : 89-113. doi: 10.3934/krm.2020050 [7] Imam Wijaya, Hirofumi Notsu. Stability estimates and a Lagrange-Galerkin scheme for a Navier-Stokes type model of flow in non-homogeneous porous media. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 1197-1212. doi: 10.3934/dcdss.2020234 [8] Harrison Bray. Ergodicity of Bowen–Margulis measure for the Benoist 3-manifolds. Journal of Modern Dynamics, 2020, 16: 305-329. doi: 10.3934/jmd.2020011 [9] Mark F. Demers. Uniqueness and exponential mixing for the measure of maximal entropy for piecewise hyperbolic maps. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 217-256. doi: 10.3934/dcds.2020217 [10] Giulia Luise, Giuseppe Savaré. Contraction and regularizing properties of heat flows in metric measure spaces. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 273-297. doi: 10.3934/dcdss.2020327 [11] Skyler Simmons. Stability of broucke's isosceles orbit. Discrete & Continuous Dynamical Systems - A, 2021 doi: 10.3934/dcds.2021015 [12] Yanan Li, Zhijian Yang, Na Feng. Uniform attractors and their continuity for the non-autonomous Kirchhoff wave models. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021018 [13] Russell Ricks. The unique measure of maximal entropy for a compact rank one locally CAT(0) space. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 507-523. doi: 10.3934/dcds.2020266 [14] Michiel Bertsch, Flavia Smarrazzo, Andrea Terracina, Alberto Tesei. Signed Radon measure-valued solutions of flux saturated scalar conservation laws. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3143-3169. doi: 10.3934/dcds.2020041 [15] Sugata Gangopadhyay, Constanza Riera, Pantelimon Stănică. Gowers $U_2$ norm as a measure of nonlinearity for Boolean functions and their generalizations. Advances in Mathematics of Communications, 2021, 15 (2) : 241-256. doi: 10.3934/amc.2020056 [16] Eduard Feireisl, Elisabetta Rocca, Giulio Schimperna, Arghir Zarnescu. Weak sequential stability for a nonlinear model of nematic electrolytes. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 219-241. doi: 10.3934/dcdss.2020366 [17] Meihua Dong, Keonhee Lee, Carlos Morales. Gromov-Hausdorff stability for group actions. Discrete & Continuous Dynamical Systems - A, 2021, 41 (3) : 1347-1357. doi: 10.3934/dcds.2020320 [18] Scipio Cuccagna, Masaya Maeda. A survey on asymptotic stability of ground states of nonlinear Schrödinger equations II. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020450 [19] Gloria Paoli, Gianpaolo Piscitelli, Rossanno Sannipoli. A stability result for the Steklov Laplacian Eigenvalue Problem with a spherical obstacle. Communications on Pure & Applied Analysis, 2021, 20 (1) : 145-158. doi: 10.3934/cpaa.2020261 [20] Hongguang Ma, Xiang Li. Multi-period hazardous waste collection planning with consideration of risk stability. Journal of Industrial & Management Optimization, 2021, 17 (1) : 393-408. doi: 10.3934/jimo.2019117
Impact Factor:
## Metrics
• HTML views (0)
• Cited by (0)
• on AIMS | 2021-01-17 15:49:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5092863440513611, "perplexity": 9739.43148800024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703513062.16/warc/CC-MAIN-20210117143625-20210117173625-00364.warc.gz"} |
https://en.wikipedia.org/wiki/Multipolar_exchange_interaction | # Multipolar exchange interaction
Magnetic materials with strong spin-orbit interaction, such as: LaFeAsO,[1] PrFe4P12[2] , YbRu2Ge2,[3] UO2,[4] NpO2,[5] Ce1−xLaxB6,[6] URu2Si2[7] and many other compounds, are found to have magnetic ordering constituted by high rank multipoles, e.g. quadruple, octople, etc.[8] Due to the strong spin-orbit coupling, multipoles are automatically introduced to the systems when the total angular momentum quantum number J is larger than 1/2. If those multipoles are coupled by some exchange mechanisms, those multipoles could tend to have some ordering as conventional spin 1/2 Heisenberg problem. Except the multipolar ordering, many hidden order phenomena are believed closely related to the multipolar interactions [5][6][7]
## Tensor Operators Expansion
### Basic Concepts
Consider a quantum mechanical system with Hilbert space spanned by ${\displaystyle |j,m_{j}\rangle }$, where ${\displaystyle j}$ is the total angular momentum and ${\displaystyle m_{j}}$ is its projection on the quantization axis. Then any quantum operators can be represented using the basis set ${\displaystyle \lbrace |j,m_{j}\rangle \rbrace }$ as a matrix with dimension ${\displaystyle (2j+1)}$. Therefore, one can define ${\displaystyle (2j+1)^{2}}$ matrices to completely expand any quantum operator in this Hilbert space. Taking J=1/2 as an example, a quantum operator A can be expanded as
${\displaystyle A={\begin{bmatrix}1&2\\3&4\end{bmatrix}}=1{\begin{bmatrix}1&0\\0&0\end{bmatrix}}+2{\begin{bmatrix}0&1\\0&0\end{bmatrix}}+3{\begin{bmatrix}0&0\\1&0\end{bmatrix}}+4{\begin{bmatrix}0&0\\0&1\end{bmatrix}}=1L_{1,1}+2L_{1,2}+3L_{2,1}+4L_{2,2}}$
Obviously, the matrices: ${\displaystyle L_{ij}=|i\rangle \langle j|}$ forms a basis set in the operator space. Any quantum operator defined in this Hilbert can be expended by ${\displaystyle \lbrace L_{ij}\rbrace }$ operators. In the following, let's call these matrices as a super basis to distinguish the eigen basis of quantum states. More speifically the above super basis ${\displaystyle \lbrace L_{ij}\rbrace }$ can be called transition super basis because it describes the transition between states ${\displaystyle |i\rangle }$ and ${\displaystyle |j\rangle }$. In fact, this is not the only super basis that does the trick. We can also use Pauli matrices plus identity matrix to form a super basis
${\displaystyle A={\begin{bmatrix}1&2\\3&4\end{bmatrix}}={\frac {5}{2}}{\begin{bmatrix}1&0\\0&1\end{bmatrix}}+{\frac {i}{2}}{\begin{bmatrix}0&i\\-i&0\end{bmatrix}}+{\frac {3}{2}}{\begin{bmatrix}-1&0\\0&1\end{bmatrix}}+{\frac {5}{2}}{\begin{bmatrix}0&1\\1&0\end{bmatrix}}={\frac {5}{2}}I+{\frac {i}{2}}\sigma _{y}+{\frac {3}{2}}\sigma _{z}+{\frac {5}{2}}\sigma _{x}}$
Since the rotation properties of ${\displaystyle \sigma _{x},\sigma _{y},\sigma _{z}}$ follow the same rules as the rank 1 tensor of cubic harmonics ${\displaystyle T_{x},T_{y},T_{z}}$ and the identity matrix ${\displaystyle I}$ follows the same rules as the rank 0 tensor ${\displaystyle T_{s}}$, the basis set ${\displaystyle \lbrace I,\sigma _{x},\sigma _{y},\sigma _{z}\rbrace }$ can be called cubic super basis. Another commonly used super basis is spherical harmonic super basis which is built by replacing the ${\displaystyle \sigma _{x},\ \sigma _{y}}$ to the raising and lowering operators ${\displaystyle \lbrace I,\sigma _{-1},\sigma _{0},\sigma _{+1}\rbrace }$
${\displaystyle A={\begin{bmatrix}1&2\\3&4\end{bmatrix}}={\frac {5}{2}}{\begin{bmatrix}1&0\\0&1\end{bmatrix}}+2{\begin{bmatrix}0&1\\0&0\end{bmatrix}}+{\frac {3}{2}}{\begin{bmatrix}-1&0\\0&1\end{bmatrix}}-3{\begin{bmatrix}0&0\\-1&0\end{bmatrix}}={\frac {5}{2}}I+2\sigma _{+1}+{\frac {3}{2}}\sigma _{0}-3\sigma _{-1}}$
Again, ${\displaystyle \sigma _{-1},\sigma _{0},\sigma _{+1}}$ share the same rotational properties as rank 1 spherical harmonic tensors ${\displaystyle Y_{-1}^{1},Y_{0}^{1},Y_{-1}^{1}}$, so it is called spherical super basis.
Because atomic orbitals ${\displaystyle s,p,d,f}$ are also described by spherical or cubic harmonic functions, one can imagine or visualize these operators using the wave functions of atomic orbitals although they are essentially matrices not spatial functions.
If we extend the problem to ${\displaystyle J=1}$, we will need 9 matrices to form a super basis. For transition super basis, we have ${\displaystyle \lbrace L_{ij};i,j=1\sim 3\rbrace }$. For cubic super basis, we have ${\displaystyle \lbrace T_{s},T_{x},T_{y},T_{z},T_{xy},T_{yz},T_{zx},T_{x^{2}-y^{2}},T_{3z^{2}-r^{2}}\rbrace }$. For spherical super basis, we have ${\displaystyle \lbrace Y_{0}^{0},Y_{-1}^{1},Y_{0}^{1},Y_{-1}^{1},Y_{-2}^{2},Y_{-1}^{2},Y_{0}^{2},Y_{1}^{2},Y_{2}^{2}\rbrace }$. In group theory, ${\displaystyle T_{s}/Y_{0}^{0}}$ are called scalar or rank 0 tensor, ${\displaystyle T_{x,yz,}/Y_{-1,0,+1}^{1}}$ are called dipole or rank 1 tensors, ${\displaystyle T_{xy,yz,zx,x^{2}-y^{2},3z^{2}-r^{2}}/Y_{-2,-1,0,+1,+2}^{2}}$ are called quadrupole or rank 2 tensors.[8]
The example tells us, for a ${\displaystyle J}$-multiplet problem, one will need all rank ${\displaystyle 0\sim 2J}$ tensor operators to form a complete super basis. Therefore, for a ${\displaystyle J=1}$ system, its density matrix must have quadrupole components. This is the reason why a ${\displaystyle J>1/2}$ problem will automatically introduce high-rank multipoles to the system [9]
### Formal Definitions
matrix elements and the real part of corresponding harmonic functions of cubic operator basis in J=1 case.[9]
A general definition of spherical harmonic super basis of a ${\displaystyle J}$-multiplet problem can be expressed as [8]
${\displaystyle Y_{K}^{Q}(J)=\sum _{MM^{\prime }}(-1)^{J-M}(2K+1)^{2}\times \left({\begin{matrix}J&J&K^{^{\prime }}\\M^{^{\prime }}&M&Q\end{matrix}}\right)|JM\rangle \langle JM^{^{\prime }}|,}$
where the parentheses denote a 3-j symbol; K is the rank which ranges ${\displaystyle 0\sim 2J}$; Q is the projection index of rank K which ranges from −K to +K. A cubic harmonic super basis where all the tensor operators are hermitian can be defined as
${\displaystyle T_{K}^{Q}={\frac {1}{\sqrt {2}}}[(-1)^{Q}Y_{K}^{Q}(J)+Y_{K}^{-Q}(J)]}$
${\displaystyle T_{K}^{-Q}={\frac {i}{\sqrt {2}}}[Y_{K}^{-Q}(J)-(-1)^{Q}Y_{K}^{-Q}(J)]}$
Then, any quantum operator ${\displaystyle A}$ defined in the ${\displaystyle J}$-multiplet Hilbert space can be expanded as
${\displaystyle A=\sum _{K,Q}\alpha _{K}^{Q}Y_{K}^{Q}=\sum _{K,Q}\beta _{K}^{Q}T_{K}^{Q}=\sum _{i,j}\gamma _{i,j}L_{i,j}}$
where the expansion coefficients can be obtained by taking the trace inner product, e.g. ${\displaystyle \alpha _{K}^{Q}=Tr[AY_{K}^{Q\dagger }]}$. Apparently, one can make linear combination of these operators to form a new super basis that have different symmetries.
### Multi-exchange Description
Using the addition theorem of tensor operators, the product of a rank n tensor and a rank m tensor can generate a new tensor with rank n+m ~ |n-m|. Therefore, a high rank tensor can be expressed as the product of low rank tensors. This convention is useful to interpret the high rank multipolar exchange terms as a "multi-exchange" process of dipoles (or pseudospins). For example, for the spherical harmonic tensor operators of ${\displaystyle J=1}$ case, we have
${\displaystyle Y_{2}^{-2}=2Y_{1}^{-1}Y_{1}^{-1}}$
${\displaystyle Y_{2}^{-1}={\sqrt {2}}(Y_{1}^{-1}Y_{1}^{0}+Y_{1}^{0}Y_{1}^{-1})}$
${\displaystyle Y_{2}^{0}={\sqrt {24}}/6(Y_{1}^{-1}Y_{1}^{+1}+2Y_{1}^{0}Y_{1}^{0}+Y_{1}^{+1}Y_{1}^{-1})}$
${\displaystyle Y_{2}^{+1}={\sqrt {2}}(Y_{1}^{0}Y_{1}^{+1}+Y_{1}^{+1}Y_{1}^{0})}$
${\displaystyle Y_{2}^{+2}=2Y_{1}^{+1}Y_{1}^{+1}}$
If so, a quadrupole-quadrupole interaction (see next section) can be considered as a two steps dipole-dipole interaction. For example, ${\displaystyle Y_{2_{i}}^{+2_{i}}Y_{2_{j}}^{-2_{j}}=4Y_{1_{i}}^{+1_{i}}Y_{1_{i}}^{+1_{i}}Y_{1_{j}}^{-1_{j}}Y_{1_{j}}^{-1_{j}}}$, so the one step quadrupole transition ${\displaystyle Y_{2_{i}}^{+2_{i}}}$ on site ${\displaystyle i}$ now becomes a two steps of dipole transition ${\displaystyle Y_{1_{i}}^{+1_{i}}Y_{1_{i}}^{+1_{i}}}$. Hence not only inter-site-exchange but also intra-site-exchange terms appear (so called multi-exchange). If ${\displaystyle J}$ is even larger, one can expect more complicated intra-site-exchange terms would appear. However, one has to note that it is not a perturbation expansion but just a mathematical technique. The high rank terms are not necessarily smaller than low rank terms. In many systems, high rank terms are more important than low rank terms.[8]
## Multipolar Exchange Interactions
Examples of dipole-dipole and quadrupole-quadrupole exchange interactions in J=1 case. Blue arrow means the transition comes with a ${\displaystyle \pi }$phase shift.[9]
There are four major mechanisms to induce exchange interactions between two magnetic moments in a system:[8] 1). Direct exchange 2). RKKY 3). Superexchange 4). Spin-Lattice. No matter which one is dominated, a general form of the exchange interaction can be written as[9]
${\displaystyle H=\sum _{ij}\sum _{KQ}C_{K_{i}K_{j}}^{Q{i}Q_{j}}T_{K_{i}}^{Q_{i}}T_{K_{j}}^{Q_{j}}}$
where ${\displaystyle i,j}$ are the site indexes and ${\displaystyle C_{K_{i}K_{j}}^{Q{i}Q_{j}}}$ is the coupling constant that couples two multipole moments ${\displaystyle T_{K_{i}}^{Q_{i}}}$ and ${\displaystyle T_{K_{j}}^{Q_{j}}}$. One can immediately find if ${\displaystyle K}$ is restricted to 1 only, the Hamiltonian reduces to conventional Heisenberg model.
An important feature of the multipolar exchange Hamiltonian is its anisotropy.[9] The value of coupling constant ${\displaystyle C_{K_{i}K_{j}}^{Q{i}Q_{j}}}$ is usually very sensitive to the relative angle between two multipoles. Unlike conventional spin only exchange Hamiltonian where the coupling constants are isotropic in a homogeneous system, the highly anisotropic atomic orbitals (recall the shape of the ${\displaystyle s,p,d,f}$ wave functions) coupling to the system's magnetic moments will inevitably introduce huge anisotropy even in a homogeneous system. This is one of the main reasons that most multipolar orderings tend to be non-colinear.
## Antiferromagnetism of Multipolar Moments
Flipping the phases of multipoles [9]
AFM ordering chains of different multipoles.[9]
Unlike magnetic spin ordering where the antiferromagnetism can be defined by flipping the magnetization axis of two neighbor sites from a ferromagnetic configuration, flipping of the magnetization axis of a multipole is usually meaningless. Taking a ${\displaystyle T_{yz}}$ moment as an example, if one flips the z-axis by making a ${\displaystyle \pi }$ rotation toward the y-axis, it just changes nothing. Therefore, a suggested definition[9] of antiferromagnetic multipolar ordering is to flip their phases by ${\displaystyle \pi }$, i.e. ${\displaystyle T_{yz}\rightarrow e^{i\pi }T_{yz}=-T_{yz}}$. In this regard, the antiferromagnetic spin ordering is just a special case of this definition, i.e. flipping the phase of a dipole moment is equivalent to flipping its magnetization axis. As for high rank multipoles, e.g. ${\displaystyle T_{yz}}$, it actually becomes a ${\displaystyle \pi /2}$ rotation and for ${\displaystyle T_{3z^{2}-r^{2}}}$ it is even not any kind of rotation.
## Compute Coupling Constants
Calculation of multipolar exchange interactions remains a challenging issue in many aspects. Although there were many works based on fitting the model Hamiltonians with experiments, predictions of the coupling constants based on first-principle schemes remain lacking. Currently there are two studies implemented first-principles approach to explore multipolar exchange interactions. An early study was developed in 80's. It is based on a mean field approach that can greatly reduce the complexity of coupling constants induced by RKKY mechanism, so the multipolar exchange Hamiltonian can be described by just a few unknown parameters and can be obtained by fitting with experiment data.[10] Later on, a first-principles approach to estimate the unknown parameters was further developed and got good agreements with a few selected compounds, e.g. cerium momnpnictides.[11] Another first-principle approach was also proposed recently.[9] It maps all the coupling constants induced by all static exchange mechanisms to a series of DFT+U total energy calculations and got agreement with uranium dioxide.
## References
1. ^ F. Cricchio, O. Granas, and L. Nordstrom, Phys. Rev. B. 81, 140403 (2010); R. S. Gonnelli, D. Daghero, M. Tortello, G. A. Ummarino, V. A. Stepanov, J. S. Kim, and R. K. Kremer, Phys. Rev. B 79, 184526 (2009)
2. ^ A. Kiss and Y. Kuramoto, J. Phys. Soc. Jpn. 74, 2530 (2005); H. Sato, T. Sakakibara, T. Tayama, T. Onimaru, H. Sugawara, and H. Sato, J. Phys. Soc. Jpn. 76, 064701 (2007)
3. ^ T. Takimoto and P. Thalmeier, Phys. Rev. B 77, 045105 (2008)
4. ^ S.-T. Pi, R. Nanguneri, and S. Savrasov, Phys. Rev. Lett. 112, 077203 (2014); P. Giannozzi and P. Erdos, J. Mag. Mag Mater. 67, 75 (1987). V. S. Mironov, L. F. Chibotaru, and A. Ceulemans, Adv. Quan. Chem. 44, 599 (2003); S. Carretta, P. Santini, R. Caciuffo, and G. Amoretti, Phys. Rev. Lett. 105, 167201 (2010); R. Caciuffo, P. Santini, S. Carretta, G. Amoretti, A. Hiess, N. Magnani, L. P. Regnault, and G. H. Lander, Phys. Rev. B 84, 104409 (2011)
5. ^ a b P. Santini and G. Amoretti, Phys. Rev. Lett. 85, 2188 (2000); P. Santini, S. Carretta, N. Magnani, G. Amoretti, and R. Caciuffo, Phys. Rev. Lett. 97, 207203 (2006); K. Kubo and T. Hotta, Phys. Rev. B 71, 140404 (2005)
6. ^ a b D. Mannix, Y. Tanaka, D. Carbone, N. Bernhoeft, and S. Kunii, Phys. Rev. Lett. 95, 117206 (2005)
7. ^ a b P. Chandra, P. Coleman, J. A. Mydosh, and V. Tripathi, Nature (London) 417, 831 (2002); Francesco Cricchio, Fredrik Bultmark, Oscar Granas, and Lars Nordstrom, Phys. Rev. Lett. 103, 107202 (2009); Hiroaki Ikeda, Michi-To Suzuki, Ryotaro Arita, Tetsuya Takimoto, Takasada Shibauchi, and Yuji Matsuda, Nat. Phys. 8, 528 (2012); A. Kiss and P. Fazekas, Phys. Rev. B 71, 054415 (2005); J. G. Rau and H.-Y. Kee, Phys. Rev. B 85, 245112 (2012)
8. R. Caciuffo et al., Rev. Mod. Phys. 81, 807 (2009)
9. S.-T. Pi, R. Nanguneri, and S. Savrasov, Phys. Rev. Lett. 112, 077203 (2014); S.-T. Pi, R. Nanguneri, and S. Savrasov, Phys. Rev. B 90, 045148 (2014)
10. ^ R. Siemann and B. R. Cooper, Phys. Rev. Lett. 44, 1015 (1980)
11. ^ J. M. Wills and B. R. Cooper, Phys. Rev. B 42, 4682 (1990) | 2018-11-15 17:50:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 71, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7469163537025452, "perplexity": 1378.7090707334366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742793.19/warc/CC-MAIN-20181115161834-20181115183834-00328.warc.gz"} |
https://mathematica.stackexchange.com/questions/171076/using-space-to-denote-multiplication?noredirect=1 | # Using space to denote multiplication [duplicate]
When I want to impress a Mathematica novice I show him how we can use spaces to denote multiplication (telling him also to be careful not to forget the space as in xy:-)!).
As far as I know this feature (or the feature for the exponent that suppresses the ^) does not exist in other CAS or programming languages (I am not sure about Maple) at least without importing packages.
So, what gives this ability of Mathematica? What is different in the implementation of multiplication? It has to do with Front End?
• I think my question is different. I know about Mathematica's different road comparing with other CAS or languages w.r.t. such staff. In fact, I have the book A Beginner's Guide to Mathematica, version 4 (Gray and Glyn) that addresses similar issues. I am not interested in WHY, I rather want to know WHAT gives this ability of Mathematica? It has to do with the multiplication implementation or it is the power of Front End? – Dimitris Apr 14 '18 at 0:06
• "I am not sure about Maple" in Maple, it is a little confusing, since Maple actually have two languages. The document mode and the worksheet mode. In worksheet Maple language, you need *, in document mode Maple lanuage, you do not one between symbols. A space will work. When using Maple, I avoid the document mode. – Nasser Apr 14 '18 at 1:16
• I think it is the difference in syntax, namely using [] for function calls. That is WHAT allows a parser to be constructed that can unambiguously interpret the juxtaposition of expressions as multiplication. (That is the point of the duplicate.) The "power" is in the syntax. Previous designers of programming languages weren't bold enough to break with standard mathematical notation tradition (my opinion). – Michael E2 Apr 14 '18 at 1:19
• Ok. I got the point☺! Duplicate accepted! – Dimitris Apr 14 '18 at 1:24 | 2020-02-19 01:38:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6592263579368591, "perplexity": 1142.478390419569}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143963.79/warc/CC-MAIN-20200219000604-20200219030604-00420.warc.gz"} |
http://zbmath.org/?q=an:1235.54024 | # zbMATH — the first resource for mathematics
##### Examples
Geometry Search for the term Geometry in any field. Queries are case-independent. Funct* Wildcard queries are specified by * (e.g. functions, functorial, etc.). Otherwise the search is exact. "Topological group" Phrases (multi-words) should be set in "straight quotation marks". au: Bourbaki & ti: Algebra Search for author and title. The and-operator & is default and can be omitted. Chebyshev | Tschebyscheff The or-operator | allows to search for Chebyshev or Tschebyscheff. "Quasi* map*" py: 1989 The resulting documents have publication year 1989. so: Eur* J* Mat* Soc* cc: 14 Search for publications in a particular source with a Mathematics Subject Classification code (cc) in 14. "Partial diff* eq*" ! elliptic The not-operator ! eliminates all results containing the word elliptic. dt: b & au: Hilbert The document type is set to books; alternatively: j for journal articles, a for book articles. py: 2000-2015 cc: (94A | 11T) Number ranges are accepted. Terms can be grouped within (parentheses). la: chinese Find documents in a given language. ISO 639-1 language codes can also be used.
##### Operators
a & b logic and a | b logic or !ab logic not abc* right wildcard "ab c" phrase (ab c) parentheses
##### Fields
any anywhere an internal document identifier au author, editor ai internal author identifier ti title la language so source ab review, abstract py publication year rv reviewer cc MSC code ut uncontrolled term dt document type (j: journal article; b: book; a: book article)
Generalized coupled fixed point theorems for mixed monotone mappings in partially ordered metric spaces. (English) Zbl 1235.54024
Let $X$ be a complete metric space with metric $d$, which is partially ordered. A mapping $F:X×X\to X$ is called mixed monotone if $F\left(x,y\right)$ is monotone nondecreasing in $x$ and monotone nonincreasing in $y$. A pair $\left(\overline{x},\overline{y}\right)\in X×X$ is called a coupled fixed point of $F$ if $F\left(\overline{x},\overline{y}\right)=\overline{x}$, $F\left(\overline{y},\overline{x}\right)=\overline{y}$. The main result of the paper is the following theorem.
Theorem. Let $X$ be a partialy ordered complete metric space, let $F:X×X\to X$ be mixed monotone and such that
(i) There is a constant $k\in \left[0,1\right)$ such that for each $x\ge u$, $y\le v$
$d\left(F\left(x,y\right),F\left(u,v\right)\right)+d\left(F\left(y,x\right),F\left(v,u\right)\right)\le k\left[d\left(x,u\right)+d\left(y,v\right)\right]·$
(ii) There exist ${x}_{0},{y}_{0}\in X$ with
${x}_{0}\le F\left({x}_{0},{y}_{0}\right)\phantom{\rule{1.em}{0ex}}\text{and}\phantom{\rule{1.em}{0ex}}{y}_{0}\le F\left({y}_{0},{x}_{0}\right)$
or
${x}_{0}\ge F\left({x}_{0},{y}_{0}\right)\phantom{\rule{1.em}{0ex}}\text{and}\phantom{\rule{1.em}{0ex}}{y}_{0}\le F\left({y}_{0},{x}_{0}\right)·$
Then $F$ has a coupled fixed point $\left(\overline{x},\overline{y}\right)$.
The author also gives conditions under which there exists a unique coupled fixed point. Finally, he applies this theorems to the periodic boundary value problem
${u}^{\text{'}}=h\left(t,u\right),\phantom{\rule{1.em}{0ex}}t\in \left(0,T\right),\phantom{\rule{1.em}{0ex}}u\left(0\right)=u\left(T\right)$
with $h\left(t,u\right)=f\left(t,u\right)+g\left(t,u\right)$.
##### MSC:
54H25 Fixed-point and coincidence theorems in topological spaces 54E50 Complete metric spaces 54F05 Linearly, generalized, and partial ordered topological spaces 34B15 Nonlinear boundary value problems for ODE | 2014-03-10 17:33:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 23, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8815810680389404, "perplexity": 3684.3271675140027}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010916587/warc/CC-MAIN-20140305091516-00094-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://www.maplesoft.com/support/help/Maple/view.aspx?path=Units/GetSystems | Units - Maple Programming Help
Home : Support : Online Help : Science and Engineering : Units : Manipulating Systems : Units/GetSystems
Units
GetSystems
list all systems of units
Calling Sequence GetSystems()
Description
• The GetSystems() function returns an expression sequence of all systems of units.
Examples
Note: In Maple 2015 and later versions, units are not surrounded by double brackets.
> $\mathrm{with}\left(\mathrm{Units}\right):$
> $L≔\mathrm{GetSystems}\left(\right)$
${L}{:=}{\mathrm{Atomic}}{,}{\mathrm{CGS}}{,}{\mathrm{EMU}}{,}{\mathrm{ESU}}{,}{\mathrm{FPS}}{,}{\mathrm{MKS}}{,}{\mathrm{MTS}}{,}{\mathrm{SI}}$ (1)
> $\mathbf{for}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}i\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathbf{in}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}L\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathbf{do}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathrm{print}\left(\mathrm{system:},i,\mathrm{value:},\mathrm{convert}\left(32.23⟦'\mathrm{newton}'⟧,'\mathrm{system}',i\right)\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathbf{end do}$
${\mathrm{system:}}{,}{\mathrm{Atomic}}{,}{\mathrm{value:}}{,}{3.912014603}{}{{10}}^{{8}}{}⟦\frac{{\mathrm{E0}}}{{\mathrm{a0}}}⟧$
${\mathrm{system:}}{,}{\mathrm{CGS}}{,}{\mathrm{value:}}{,}{3.22300000}{}{{10}}^{{6}}{}⟦{\mathrm{dyn}}⟧$
${\mathrm{system:}}{,}{\mathrm{EMU}}{,}{\mathrm{value:}}{,}{3.22300000}{}{{10}}^{{6}}{}⟦{\mathrm{dyn}}⟧$
${\mathrm{system:}}{,}{\mathrm{ESU}}{,}{\mathrm{value:}}{,}{3.22300000}{}{{10}}^{{6}}{}⟦{\mathrm{dyn}}⟧$
${\mathrm{system:}}{,}{\mathrm{FPS}}{,}{\mathrm{value:}}{,}{233.1200364}{}⟦{\mathrm{poundal}}⟧$
${\mathrm{system:}}{,}{\mathrm{MKS}}{,}{\mathrm{value:}}{,}{32.23}{}⟦{N}⟧$
${\mathrm{system:}}{,}{\mathrm{MTS}}{,}{\mathrm{value:}}{,}{0.03223000000}{}⟦{\mathrm{sn}}⟧$
${\mathrm{system:}}{,}{\mathrm{SI}}{,}{\mathrm{value:}}{,}{32.23}{}⟦{N}⟧$ (2) | 2016-09-25 08:52:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 12, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9237592816352844, "perplexity": 1674.2330775728399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660158.72/warc/CC-MAIN-20160924173740-00192-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://reference.wolframcloud.com/language/ref/SemidefiniteOptimization.html | # SemidefiniteOptimization
SemidefiniteOptimization[f,cons,vars]
finds values of variables vars that minimize the linear objective f subject to semidefinite constraints cons.
SemidefiniteOptimization[c,{a0,a1,,ak}]
finds a vector that minimizes the quantity subject to the linear matrix inequality constraint .
SemidefiniteOptimization[,"prop"]
specifies what solution property "prop" should be returned.
# Details and Options
• SemidefiniteOptimization is also known as semidefinite programming (SDP).
• Semidefinite optimization is a convex optimization problem that can be solved globally and efficiently with real, integer or complex variables.
• Semidefinite optimization finds that solves the primal problem:
• minimize subject to constraints where
• The matrices must be symmetric matrices.
• Mixed-integer semidefinite optimization finds and that solve the problem:
• minimize subject to constraints where
• When the objective function is real valued, SemidefiniteOptimization solves problems with by internally converting to real variables , where and . Linear matrix inequalities may be specified with Hermitian matrices .
• The variable specification vars should be a list with elements giving variables in one of the following forms:
• v variable with name and dimensions inferred v∈Reals real scalar variable v∈Integers integer scalar variable v∈Complexes complex scalar variable v∈ℛ vector variable restricted to the geometric region v∈Vectors[n,dom] vector variable in or v∈Matrices[{m,n},dom] matrix variable in or
• The constraints cons can be specified by:
• LessEqual scalar inequality GreaterEqual scalar inequality VectorLessEqual vector inequality VectorGreaterEqual vector inequality Equal scalar or vector equality Element convex domain or region element
• With SemidefiniteOptimization[f,cons,vars], parameter equations of the form parval, where par is not in vars and val is numerical or an array with numerical values, may be included in the constraints to define parameters used in f or cons. »
• The primal minimization problem has a related maximization problem that is the Lagrangian dual problem. The dual maximum value is always less than or equal to the primal minimum value, so it provides a lower bound. The dual maximizer provides information about the primal problem, including sensitivity of the minimum value to changes in the constraints. »
• The semidefinite optimization has a dual: »
• maximize subject to constraints where
• The possible solution properties "prop" include:
• "PrimalMinimizer" a list of variable values that minimizes the objective function "PrimalMinimizerRules" values for the variables vars={v1,…} that minimize "PrimalMinimizerVector" the vector that minimizes "PrimalMinimumValue" the primal minimum value "DualMaximizer" the matrix that maximizes "DualMaximumValue" the dual maximum value "DualityGap" the difference between the dual and primal optimal values "Slack" matrix that converts inequality constraints to equality "ConstraintSensitivity" sensitivity of to constraint perturbations "ObjectiveVector" the linear objective vector "ConstraintMatrices" the list of constraint matrices {"prop1","prop2",…} several solution properties
• The following options may be given:
• MaxIterations Automatic maximum number of iterations to use Method Automatic the method to use PerformanceGoal \$PerformanceGoal aspects of performance to try to optimize Tolerance Automatic the tolerance to use for internal comparisons
• The option Method->method may be used to specify the method to use. Available methods include:
• Automatic choose the method automatically "CSDP" CSDP semidefinite optimization solver "DSDP" DSDP semidefinite optimization solver "SCS" SCS splitting conic solver "MOSEK" Commercial MOSEK convex optimization solver
• Computations are limited to MachinePrecision.
# Examples
open allclose all
## Basic Examples(3)
Minimize subject to the linear matrix inequality constraint :
The optimal point is where is smallest within the region defined by the constraints:
Minimize subject to the linear matrix inequality constraint :
Use the equivalent formulation with the objective vector and constraint matrices:
Minimize subject to :
Use the equivalent formulation with the objective vector and constraint matrices:
## Scope(29)
### Basic Uses(13)
Minimize subject to constraints :
Minimize when the matrix is positive semidefinite:
Find the solution:
Minimize subject to the linear matrix inequality constraint :
The left-hand side of the constraint can be given in evaluated form:
Express the problem with the objective vector and constraint matrices:
Use a vector variable :
Use a vector variable and to avoid unintended threading:
Use a vector variable and parameter equations to avoid unintended threading:
Use a vector variable and Indexed[x,i] to specify individual components:
Use Vectors[n] to specify the dimension of a vector variable when it is ambiguous:
Several linear inequality constraints can be expressed with VectorGreaterEqual:
Use v>= or \[VectorGreaterEqual] to enter the vector inequality sign :
An equivalent form using scalar inequalities:
Use a vector variable and vector inequality:
Specify non-negative constraints using NonNegativeReals ():
An equivalent form using vector inequalities:
Second-order cone constraints of the form can be used:
"NormCone" constraints of the form can be used:
### Integer Variables(4)
Specify integer domain constraints using Integers:
Specify integer domain constraints on vector variables using Vectors[n,Integers]:
Specify non-negative integer domain constraints using NonNegativeIntegers ():
Specify non-positive integer domain constraints using NonPositiveIntegers ():
### Complex Variables(2)
Specify complex variables using Complexes:
In linear matrix inequalities, the constraint matrices can be Hermitian or real symmetric:
The variables in linear matrix inequalities need to be real for the sum to remain Hermitian:
### Primal Model Properties(3)
Minimize the function subject to the constraint :
Get the primal minimizer as a vector:
Get the minimal value:
Extract the objective vector:
Extract the constraint matrices:
Use the extracted objective vector and constraint matrices for direct input:
The slack for an inequality at the minimizer is given by :
Extract the minimizer and constraint matrices:
Verify that the slack matrix satisfies :
### Dual Model Properties(3)
Minimize subject to :
The dual problem is to maximize subject to :
The primal minimum value and the dual maximum value coincide because of strong duality:
That is the same as having a duality gap of zero. In general, at optimal points:
Construct the dual problem using constraint matrices extracted from the primal problem:
Extract the objective vector and constraint matrices:
The dual problem is to maximize subject to :
Get the dual maximum value and dual maximizer directly using solution properties:
The "DualMaximumValue" is:
The "DualMaximizer" can be obtained with:
### Sensitivity Properties(4)
Use "ConstraintSensitivity" to find the change in optimal value due to constraint perturbations:
The sensitivity is a matrix:
Consider new constraints where is the perturbation:
The approximate new optimal value is:
Compare to directly solving the perturbed problem:
The optimal value changes according to the signs of the sensitivity matrix elements:
At negative sensitivity element position, a positive perturbation will decrease the optimal value:
At positive sensitivity element position, a positive perturbation will increase the optimal value:
Express the perturbed constraints symbolically using Sylvester's criterion for semidefiniteness:
With this form, the minimum value can be found exactly as a function of the parameters around 0:
Make a symmetric matrix with the derivatives of the minimum with respect to the parameters:
This is the sensitivity to perturbation given by the "ConstraintSensitivity" property:
The constraint sensitivity can also be obtained as the negative of the dual maximizer:
## Options(8)
### Method(5)
The default method "CSDP" is an interior point method:
"DSDP" is an alternative interior point method:
"SCS" uses a splitting conic solver method:
Different methods have different default tolerances, which affects the accuracy and precision:
Compute exact and approximate solutions:
"CSDP" and "DSDP" have default tolerances of :
"SCS" has a default tolerance of :
When the default method "CSDP" produces a message, try "DSDP" first:
In this case, "DSDP" succeeds in finding a good solution:
"SCS" with a default tolerance of 10-3 is an alternative method to try:
The quality of the result with "SCS" can often be improved with a smaller Tolerance:
### PerformanceGoal(1)
The default value of the option PerformanceGoal is \$PerformanceGoal:
Use PerformanceGoal"Quality" to get a more accurate result:
Use PerformanceGoal"Speed" to get a result faster, but at the cost of quality:
Compare the timings:
The "Speed" goal gives a less accurate result:
### Tolerance(2)
A smaller Tolerance setting gives a more precise result:
Compute the exact minimum value with Minimize:
Compute the error in the minimum value with different Tolerance settings:
Visualize the change in minimum value error with respect to tolerance:
A smaller Tolerance setting gives a more precise result, but may take longer to compute:
A smaller tolerance takes longer:
The tighter tolerance gives a more precise answer:
## Applications(33)
### Basic Modeling Transformations(13)
Maximize subject to . Solve a maximization problem by negating the objective function:
Negate the primal minimum value to get the corresponding maximal value:
Find that minimizes the largest eigenvalue of a symmetric matrix that depends linearly on the decision variables , . The problem can be formulated as linear matrix inequality, since is equivalent to where is the eigenvalue of . Define the linear matrix function :
A real symmetric matrix can be diagonalized with an orthogonal matrix so . Hence iff . Since any , taking , , hence iff . Numerically simulate to show that these formulations are equivalent:
The resulting problem:
Run a Monte Carlo simulation to check the plausibility of the result:
Find that maximizes the smallest eigenvalue of a symmetric matrix that depends linearly on the decision variables . Define the linear matrix function :
The problem can be formulated as linear matrix inequality, since is equivalent to where is the eigenvalue of . To maximize , minimize :
Run a Monte Carlo simulation to check the plausibility of the result:
Find that minimizes the difference between the largest and the smallest eigenvalues of a symmetric matrix that depends linearly on the decision variables . Define the linear matrix function :
The problem can be formulated as linear matrix inequality, since is equivalent to where is the eigenvalue of . Solve the resulting problem:
In this case, the minimum and maximum eigenvalues coincide and the difference is 0:
Minimize the largest by absolute value eigenvalue of a linear in symmetric matrix :
The largest eigenvalue satisfies The largest by absolute value negative eigenvalue of is the largest eigenvalue of and satisfies :
Find that minimizes the largest singular value of a linear in matrix :
The largest singular value of is the square root of the largest eigenvalue of and from a preceding example it satisfies or equivalently :
Plot the result:
Minimize . Using an auxiliary variable , transform the problem to minimize such that . This is the same as :
A Schur complement condition says that if , a block matrix iff . Thus for and for , since then must be 0:
Use the constraint directly and it will automatically convert into semidefinite form:
Minimize subject to , assuming when . Using the auxiliary variable , the objective is to minimize such that :
Check that implies :
Using the Schur complement condition, iff . Use for constructing the constraints to avoid threading:
For quadratic sets , which include ellipsoids, quadratic cones and paraboloids, determine whether , where are symmetric matrices, are vectors and scalars:
Assuming that the sets are full dimensional, the S-procedure says that iff there exists some non-negative number such that Visually see that there exists a non-negative :
Since λ0, :
Minimize subject to . Convert the objective into a linear function with the additional constraint , which is equivalent to :
Minimize subject to . Convert the objective into a linear function using and the additional constraints :
Minimize subject to . Convert the objective into a linear function with the additional constraint , which is equivalent to :
Minimize subject to , where is a nondecreasing function, by instead minimizing . The primal minimizer will remain the same for both problems. Consider minimizing , subject to :
The true minimum value can be obtained by applying to the minimum value of :
### Data-Fitting Problems(5)
Find the coefficients of a fifth-order polynomial by minimizing that fits a discrete dataset:
Select the polynomial bases and construct the input matrix using DesignMatrix and output vector:
Using an auxiliary variable , the objective is transformed to minimize such that , which is equivalent to as shown under Basic Modeling Transformations:
Compare fit with data:
Find an approximating function to discrete data that varies on a logarithmic scale by minimizing using Chebyshev bases:
Select Chebyshev basis functions and compute their values at the random data points:
Since the data is on a logarithmic scale, direct data-fitting is not ideal. Instead, transform the problem to minimize . Using auxiliary variable , minimize such that . This constraint is equivalent to :
Find the coefficients of the approximating function:
The resulting fit is:
Visualize the fit:
The data-fitting can also be obtained directly using the function Fit. However, without the log transformation, there are significant oscillations in the approximating function:
Represent a given bivariate polynomial in terms of sum-of-squares polynomial :
The objective is to find such that , where is a vector of monomials:
Construct the symmetric matrix :
Find the polynomial coefficients of and and make sure they are equal:
Find the elements of :
The quadratic term , where is a lower-triangular matrix obtained from the Cholesky decomposition of :
Compare the sum-of-squares polynomial to the given polynomial:
Cardinality constrained least squares: minimize such that has at most nonzero elements:
Let be a decision vector such that if , then is nonzero. The decision constraints are:
To model constraint when , chose a large constant such that :
Using an auxiliary variable , the objective is transformed to minimize such that , which is equivalent to :
Solve the cardinality constrained least-squares problem:
The subset selection can also be done more efficiently with Fit using regularization. First, find the range of regularization parameters that uses at most basis functions:
Find the nonzero terms in the regularized fit:
Find the fit with just these basis terms:
Find the best subset of functions from a candidate set of functions to approximate given data:
The approximating function will be :
A maximum of 5 basis functions are to be used in the final approximation:
The coefficients associated with functions that are not chosen must be zero:
Find the best subset of functions:
Compare the resulting approximating with the given data:
### Geometry Problems(6)
Find the smallest disk centered at of radius that encloses a set of points:
For each point , the constraint must be satisfied. This constraint is equivalent to . Use Inactive when forming the constraints:
Find the enclosing disk by minimizing the radius :
Visualize the enclosing region:
The minimal area bounding disk can also be found using BoundingRegion:
Find the smallest ellipse parametrized as that encompasses a set of points by minimizing the area:
For each point , the constraint must be satisfied:
The area is proportional to . Applying the monotone function Log, the function to minimize is . This in turn is equivalent to minimizing :
Convert the parameterized ellipse into the explicit form :
A bounding ellipse, not necessarily minimal area, can be found using BoundingRegion:
The optimal ellipse has a smaller area:
Find the smallest ellipsoid parametrized as that encompasses a set of points in 3D by minimizing the volume:
For each point , the constraint must be satisfied:
Minimizing the volume is equivalent to minimizing , which is equivalent to minimizing :
Convert the parameterized ellipse into the explicit form :
A bounding ellipsoid, not necessarily minimum volume, can also be found using BoundingRegion:
Find the maximum area ellipse parametrized as that can be fitted into a convex polygon:
Each segment of the convex polygon can be represented as intersections of half-planes :
Applying the parametrization to the half-planes gives . The term . Thus, the constraints are , which is equivalent to :
Minimizing the area is equivalent to minimizing , which is equivalent to minimizing :
Convert the parameterized ellipse into the explicit form as :
Find the center and radius of a disk given by that encloses three ellipses of the form :
Using S-procedure, it can be shown that the disk contains the ellipses iff :
The goal is to minimize the radius given by . Using auxiliary variable , the objective is to minimize such that , which can be written as :
Find the center and radius of the disk:
The disk is given by:
Convert the quadratic form of the ellipse to the explicit form :
Visualize the result:
Test whether an ellipsoid is a subset of another ellipsoid of the form :
Using S-procedure, it can be shown that ellipse 2 is a subset of ellipse 1 iff :
Check if the condition is satisfied:
Convert the ellipsoids into explicit form and confirm that ellipse 2 is within ellipse 1:
Move ellipsoid 2 such that it overlaps with ellipsoid 1:
A test now shows that the problem is infeasible, indicating that ellipsoid 2 is not a subset of ellipsoid 1:
### Classification Problems(2)
Find an ellipse that separates two groups of points and :
For separation, set 1 must satisfy and set 2 must satisfy :
Find the coefficients of the separating ellipsoid:
Visualize the result:
Find an ellipse that is as close as possible to a circle that separates two groups of points and :
For separation, set 1 must satisfy and set 2 must satisfy :
For the ellipsoid to be as close as possible to a circle, the constraint :
Find the coefficients of the separating ellipsoid by minimizing :
Visualize the result:
### Graph Problems(3)
The Lovász number, computable using semidefinite optimization, is used as a bound for hard compute graph invariants:
The Lovász number is an upper bound for the Shannon capacity of a graph:
According to the Lovász sandwich theorem:
The Lovász number for a graph is given by , where , and for . It can be written in a dual semidefinite form: , subject to and and 0 elsewhere:
Compare to the exact Lovász number values from GraphData:
Find the approximate result for when the exact result is not available:
Find for a random graph:
The max-cut problem determines a subset of the vertices of a graph, for which the sum of the weights of the edges that cross from to its complement is maximized. Let for and for . Maximize , where and is the Laplacian matrix of the graph:
For smaller cases, the max-cut problem can be solved exactly, but this is impractical for larger graphs since in general the problem has NP-complete complexity:
The problem minimizes , where is a symmetric rank-1 positive semidefinite matrix, with for each , equivalent to , where is the matrix with at the diagonal position and 0 everywhere else. To make the solution practical, solve a relaxed problem where the rank-1 condition is eliminated. For such , a cut is constructed by randomized rounding: decompose , let be a uniformly distributed random vector of the unit norm and let . For demonstration, a function is defined that shows the relaxed value, the rounded value and the graph with the vertices in shown as red:
Find an approximate max cut using the previously shown procedure, and compare with the exact result:
Find the max cut for a grid graph:
Find the max cut for a random graph:
Compare timings for the relaxed and exact algorithms for a Peterson graph:
Find a subset of specified graph with vertices such that the sum of the weights of the edges that cross from to its complement is maximized. Specify the graph:
The objective is to maximize , where is a symmetric rank-1 positive semidefinite matrix and is the Laplacian matrix of the graph:
Let for and for ; then and :
Drop the rank-1 matrix assumption and solve the resulting max-cut problem:
Extract the subsets and :
Display the subsets on the graph:
### Control & Dynamic Systems Problems(3)
Show that a linear dynamical system will be asymptotically stable for any initial condition. The system is said to be stable iff there exists a positive definite matrix such that where is called the Lyapunov function:
Differentiating the Lyapunov function gives . Therefore, the stability conditions are :
Find a matrix :
The eigenvalues of are all negative, making the matrix negative definite, which proves stability:
Since the analytic solution to the system is , numerically verify that any system will go to zero for any initial condition. Take for this simulation:
Find a controller , such that the closed-loop system is stable:
Using Lyapunov stability theorem, the objective is to find a matrices such that the stability constraints are satisfied. Letting , the first constraint becomes a proper semidefinite constraint :
Find the matrices :
The control matrix can be computed as :
The closed-loop system is stable if the real parts of the eigenvalues of are negative:
Perform a numerical run to see that the system is stable:
Find a Lyapunov function of the dynamical system :
The objective is to find such that , where is a vector of monomials:
Construct the matrix :
For stability, :
Match the coefficients such that they are all positive for and negative for :
Find the matrix :
The Lyapunov function is given by:
Visualize the Lyapunov function. The minimal location of the function matches with the location of the attractor:
### Structural Optimization Problems(1)
Design a minimum-weight truss that is anchored on one end of the wall and must withstand a load on the other end:
The truss can be modeled using links and nodes. Each node is connected to a neighboring node by a link. Specify the node positions :
Specify the nodes that are anchored to the wall:
Specify the node at which the load is applied:
Specify the nodes that are connected to each other through a link and compute the length of each link:
Visualize the unoptimized truss:
The links are circular bars. Each link must be formed from one out of a group of bars of cross sections . Let be a decision vector for each link , such that if , then bar is selected. For link , the area is then defined as . The objective is to minimize the weight:
The bar selection constraint is:
Only one bar must be selected for each link. The binary constraint is:
Find the indices of the nodes that are not anchored:
The stiffness matrix of the system is given by , where is the total number of nodes, is the number of nodes that are anchored and is the set of all the links. The vector if ; if , else 0:
Let be the force vector for the entire system. At each of the nodes that is not anchored, the force is . The node where force is applied is :
Let be the maximum allowable deflection at any node. Let be displacement of node ; then is satisfied if , where is the stiffness matrix associated with link :
Collect all the variables:
Find the optimal structure of the truss:
Extract the links that are part of the optimal truss:
Visualize the optimal truss. The links that are part of the optimal truss are colored-coded based on the rod area being used:
## Properties & Relations(8)
SemidefiniteOptimization gives the global minimum of the objective function:
Plot the objective function with the minimum value over the feasible region:
Minimize gives global exact results for semidefinite problems:
Compare to SemidefiniteOptimization:
NMinimize can be used to get inexact results using global methods:
FindMinimum can be used to obtain inexact results using local methods:
ConicOptimization is more general than SemidefiniteOptimization:
SecondOrderConeOptimization is a special case of SemidefiniteOptimization:
QuadraticOptimization is a special case of SemidefiniteOptimization:
Use auxiliary variable and minimize with additional constraint :
LinearOptimization is a special case of SemidefiniteOptimization:
## Possible Issues(5)
The constraints at the optimal point are satisfied up to some tolerance:
With default options, the constraint violation tolerance is 10-8:
The minimum value of an empty set or infeasible problem is defined to be :
The minimizer is Indeterminate:
The minimum value for an unbounded set or unbounded problem is :
The minimizer is Indeterminate:
Dual related solution properties for mixed-integer problems may not be available:
Although the constraint matrices can be Hermitian, the variables need to be real:
Vectors[n] automatically evaluates to Vectors[n,Complexes]:
For problems with no complex numbers in the specification, the vector variable v Vectors[n] is considered real valued; otherwise, you need to explicitly give the domain as Vectors[n,Reals]:
Wolfram Research (2019), SemidefiniteOptimization, Wolfram Language function, https://reference.wolfram.com/language/ref/SemidefiniteOptimization.html (updated 2020).
#### Text
Wolfram Research (2019), SemidefiniteOptimization, Wolfram Language function, https://reference.wolfram.com/language/ref/SemidefiniteOptimization.html (updated 2020).
#### CMS
Wolfram Language. 2019. "SemidefiniteOptimization." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2020. https://reference.wolfram.com/language/ref/SemidefiniteOptimization.html.
#### APA
Wolfram Language. (2019). SemidefiniteOptimization. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/SemidefiniteOptimization.html
#### BibTeX
@misc{reference.wolfram_2022_semidefiniteoptimization, author="Wolfram Research", title="{SemidefiniteOptimization}", year="2020", howpublished="\url{https://reference.wolfram.com/language/ref/SemidefiniteOptimization.html}", note=[Accessed: 05-December-2022 ]}
#### BibLaTeX
@online{reference.wolfram_2022_semidefiniteoptimization, organization={Wolfram Research}, title={SemidefiniteOptimization}, year={2020}, url={https://reference.wolfram.com/language/ref/SemidefiniteOptimization.html}, note=[Accessed: 05-December-2022 ]} | 2022-12-05 22:07:25 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8545932769775391, "perplexity": 1101.1021702760497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711045.18/warc/CC-MAIN-20221205200634-20221205230634-00603.warc.gz"} |
https://study.com/academy/answer/what-is-the-solution-of-the-linear-quadratic-system-of-equations-y-x-2-plus-5x-3-y-x-2.html | What is the solution of the linear-quadratic system of equations? y=x^2+5x-3, y?x=2
Question:
What is the solution of the linear-quadratic system of equations?
{eq}y = x^2 + 5x - 3 \\ y - x = 2 {/eq}
Consistent System of Equations:
In order for a system of equations to be consistent they should have a solution that is common for the both of them. This is a set of coordinates that will make both equations true. Graphically, it would be the coordinates of the point or points they intersect.
The system of equations being considered are:
{eq}y = x^2 + 5x - 3 \ \ \rm (Eq. 1) \\ \it y - x = 2 \ \ \rm (Eq. 2) {/eq}
One solution would be by substitution. From Eq. 2, an identity for {eq}y {/eq} would be:
{eq}\begin{align} y - x &= 2 \\ y = x + 2 \ \ \rm (Eq. 3) \end{align} {/eq}
Substituting Eq. 3 in Eq. 1 and solving for the possibel values of {eq}x {/eq}:
{eq}\begin{align} y &= x^2 + 5x - 3 \\ x + 2 &= x^2 + 5x - 3 \\ 0 &= x^2 + 5x - 3 -x -2 \\ 0 &= x^2 +4x -5 \end{align} {/eq}
Factoring:
{eq}(x +5 )(x - 1)= 0 {/eq}
In order for this equation to be true then either factor would have to be zero. To find the values of {eq}x {/eq} that would make this true is:
{eq}\begin{align} x + 5 &= 0 \\ x &= -5 \\ \\ x -1 &= 0 \\ x &= 1 \end{align} {/eq}
Using these values in Eq. 3 to find the values of {eq}y {/eq}:
{eq}\begin{align} x = -5 \\ y = x + 2 \\ y = -5 + 2 \\ y = -3 \\ \\ x = 1 \\ y = x + 2 \\ y = 1 + 2 \\ y = 3 \end{align} {/eq}
The solutions for the linear-quadratic system of equations {eq}y = x^2 + 5x - 3 \ \ \rm and \it y - x = 2 \ \ \rm {/eq} are the points (-5, -3) and (1,3). | 2020-03-29 19:09:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 284.50064278501316}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370495413.19/warc/CC-MAIN-20200329171027-20200329201027-00497.warc.gz"} |
https://studyqas.com/write-0-4-as-a-fraction-in-simplest-form/ | # Write 0.4% as a fraction in simplest form.
Write 0.4% as a fraction in simplest form.
## This Post Has 6 Comments
1. naleyah says:
Wouldn't 0.4% as a fraction be 1/250 because:
0.4/100= 2/5/100
2/5/100= 1/250
I'm not sure but I hope this helped
2. koolja3 says:
0.4
4/10
dividing by 2 both numerator and denominator we get
2/5
3. lezapancakes13 says:
4/10 but in simplest form it becomes 2/5.
4. luvcherie18 says:
To find the decimal form of a percentage, you divide by 100. If you have a decimal, then to divide by 100 just move the decimal 2 places to the left.
0.4% = .004
Now to turn it into a fraction, just read the place value...
four one thousandths = 4/1000
Now reduce... 4/1000 = 2/500 = 1/250
The simplest form of 0.4% as a fraction is... 1/250
5. dnjames01 says:
Using the place value chart, we can see that the decimal 0.4 is four tenths, so we can write 0.4 as the fraction $\frac{4}{10}$. Notice however that the fraction is not in lowest terms. We will need to divide the numerator and the denominator by the greatest common factor of 4 and 10 which is 2. Therefore, 0.4 can be written as the fraction $\frac{2}{5}$, which is in lowest terms.
$Write 0.4 as a fraction in simplest form$
6. evaeh says:
Zero and four tenths | 2023-03-28 15:06:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8436318039894104, "perplexity": 1578.2683378601807}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00383.warc.gz"} |
https://learn.careers360.com/ncert/question-a-balloon-which-always-remains-spherical-on-inflation-is-being-inflated-by-pumping-in-900-cubic-centimetres-of-gas-per-second-find-the-rate-at-which-the-radius-of-the-balloon-increases-when-the-radius-is-15-cm/ | # 8. A balloon, which always remains spherical on inflation, is being inflated by pumping in 900 cubic centimetres of gas per second. Find the rate at which the radius of the balloon increases when the radius is 15 cm.
Given = $\frac{dV}{dt} = 900 \ cm^{3}/s$
To find = $\frac{dr}{dt}$ at r = 15 cm
Solution:-
Volume of sphere(V) = $\frac{4}{3}\pi r^{3}$
$\frac{dV}{dt} = \frac{dV}{dr}.\frac{dr}{dt} = \frac{d(\frac{4}{3}\pi r^{3})} {dr}.\frac{dr}{dt} = \frac{4}{3}\pi\times 3r^{2} \times \frac{dr}{dt}$
$\frac{dV}{dt}= 4 \pi r^{2} \times \frac{dr}{dt}$
$\frac{dr}{dt} = \frac{\frac{dV}{dt}}{4\pi r^{2}} = \frac{900}{4\pi \times(15)^{2}} = \frac{900}{900\pi} = \frac{1}{\pi} \ cm/s$
Hence, the rate at which the radius of the balloon increases when the radius is 15 cm is $\frac{1}{\pi} \ cm/s$
## Related Chapters
### Preparation Products
##### JEE Main Rank Booster 2021
This course will help student to be better prepared and study in the right direction for JEE Main..
₹ 13999/- ₹ 9999/-
##### Rank Booster NEET 2021
This course will help student to be better prepared and study in the right direction for NEET..
₹ 13999/- ₹ 9999/-
##### Knockout JEE Main April 2021 (Easy Installments)
An exhaustive E-learning program for the complete preparation of JEE Main..
₹ 4999/-
##### Knockout NEET May 2021
An exhaustive E-learning program for the complete preparation of NEET..
₹ 22999/- ₹ 14999/- | 2020-10-20 09:28:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475330114364624, "perplexity": 6514.945124083061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107871231.19/warc/CC-MAIN-20201020080044-20201020110044-00620.warc.gz"} |
https://www.snapxam.com/calculators/one-variable-linear-equations-calculator | # One-variable linear equations Calculator
Go!
1
2
3
4
5
6
7
8
9
0
x
y
(◻)
◻/◻
2
e
π
ln
log
lim
d/dx
d/dx
>
<
>=
<=
sin
cos
tan
cot
sec
csc
asin
acos
atan
acot
asec
acsc
sinh
cosh
tanh
coth
sech
csch
asinh
acosh
atanh
acoth
asech
acsch
### Difficult Problems
1
Example
$x²+3x+12=0$
2
Moving the term $x²$ to the other side of the equation with opposite sign
$12+3x=x²\left(-1\right)$
3
Subtract $12$ from both sides of the equation
$3x=x²\left(-1\right)-12$
4
Multiply both sides of the equation by
$x=\left(x²\left(-1\right)-12\right)\cdot \frac{1}{3}$
5
Multiply $\left(x²\left(-1\right)+-12\right)$ by $\frac{1}{3}$
$x=x²\left(-\frac{1}{3}\right)-4$
### Struggling with math?
Access detailed step by step solutions to millions of problems, growing every day! | 2018-12-14 09:59:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7064458727836609, "perplexity": 1309.6419591978029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825512.37/warc/CC-MAIN-20181214092734-20181214114234-00077.warc.gz"} |
https://americadocsdhxz.web.app/fourier-transform-tables-pdf-725.html | # Fourier transform tables pdf
## Use the table of Fourier transforms (Table) and the table of properties (Table) to find the Fourier transforms of each of the signals in Problem. Time Domain Signal .
A.5 Examples of Fourier Transform Pairs. 1. Rectangle function and sinc function For the rectangle function, rect(x), and the sinc function, sinc(x), defined by.
## Table of Discrete-Time Fourier Transform Pairs
11.7 Fourier Integral. 11.8 Fourier Cosine and Sine Transforms. 11.9 Fourier Transform. Discrete and Fast Fourier Transforms. 11.10 Tables of Transforms. 8 Oct 2008 Frequency content of aperiodic signals: the Fourier transform. 3. The inverse The signal x(t) can be recovered from its Fourier transform. X(ω) = F[x(t)] We've seen an example of this with the transform pairs pτ (t) ↔ τ sinc ( is periodic of period 2ℓ, and compute its Fourier coefficients from the A table of some of the most important properties is provided at the end of these notes. corresponding signal g(t) may be obtained by the inverse Fourier transform a summary of a number of frequently-used Fourier transform properties in Table 1. A.5 Examples of Fourier Transform Pairs. 1. Rectangle function and sinc function For the rectangle function, rect(x), and the sinc function, sinc(x), defined by. 1.5 Examples of Fourier Transforms . . . . . . . . . . . . . . . . . 2 The Fourier Transform. 22 The pairs (x, k) and (t, ω) are referred to as conjugate variables. In either. So far, we have concentrated on the discrete Fourier transform. Table 1. The classes of Fourier transforms*. Periodic. Aperiodic. Continuous. Discrete aperiodic.
Poularikas A. D. “Fourier Transform”. The Handbook of Formulas and Tables for Signal Processing. Ed. Alexander D. Poularikas. Boca Raton: CRC Press LLC, Summary table: Fourier transforms with various combinations of continuous/ discrete time and frequency variables. – Notations: • CTFT: continuous time FT. Fourier transforms and spatial frequencies in 2D. • Definition the 1D Fourier analysis with which you are familiar. Some important Fourier Transform Pairs Cos & Sin: It turns out that Fourier transform pairs are well defined not only for nice functions, such as square integrable functions, but also for distributions such The Fourier transforms of these functions satisfy certain dispersion relations due to their a type of complementarity between a function and its Fourier transform which gives rise to See the corresponding entry in Table 7.1, where the factor a -1/2 in (7.34) is R), it follows that pdf(p)/dp' e LP(R) for 0
9. Fourier Series and Fourier Transforms The Fourier transform is one of the most important tools for analyzing functions. The basic underlying idea is that a function f(x) can be expressed as a linear combination of elementary functions (speci cally, sinusoidal waves). The coe cients in this linear combi- Chapter 1 The Fourier Transform Chapter 1 The Fourier Transform 1.1 Fourier transforms as integrals There are several ways to de ne the Fourier transform of a function f: R ! C. In this section, we de ne it … Fourier Transform: Important Properties Fourier Transform: Important Properties Yao Wang Polytechnic University Basic properties of Fourier transforms Duality, Delay, Freq. Shifting, Scaling Convolution property Multiplication property Differentiation property Table of Fourier Transforms x(t) = cos( ωct) ⇔ X
## https://see.stanford.edu/materials/lsoftaee261/book-fall-07.pdf appropriate word, for in the approach we'll take the Fourier transform emerges as we pass from periodic the two formulas, something you don't see for Fourier series.
Fourier transform tables. READ. TABLE 3.1Short Table of Fourier Transformsg(t) G(f)e-atu(t)a + {21Cfa>O2 eatu(-t)a - j21Cfa>O3 e-altl2a~+~)24 te-at u(t) a>O(a + Signal, Fourier transform unitary, angular frequency, Fourier transform unitary, ordinary frequency, Remarks. g ( t ) ≡ {\displaystyle g(t)\!\equiv \!} g(t)\!\equiv\! Theorem 25. Suppose a function f satis es Dirichlet conditions. Then the fourier series of f converges to f at points where f is continuous. The fourier series 28 Aug 2016 X(t). This suggests that there should be a way to invert the Fourier Transform, that we can come back from X(f) to x CT Fourier Transform Pairs. signal (function of t), $\longrightarrow$, Fourier transform (function of f). CTFT of a unit impulse, $\delta (t)\$, $1 \$. CTFT of a Use the table of Fourier transforms (Table) and the table of properties (Table) to find the Fourier transforms of each of the signals in Problem. Time Domain Signal . Table of Fourier Transform Pairs - ETH Z
The basic idea behind all those horrible looking formulas is rather simple, even Equations 2 and 4 are called Fourier transform pairs, and they exist if.
#### Table of Discrete-Time Fourier Transform Pairs: Discrete-Time Fourier Transform : X( ) =. X1 n=1. x[n]e j n. Inverse Discrete-Time Fourier Transform : x[n] = 1 2ˇ. Z. 2ˇ. X( )ej td : x[n] X( ) condition anu[n] 1 1 ae j. jaj<1 (n+ 1)anu[n] 1 (1 ae j.
8 Oct 2008 Frequency content of aperiodic signals: the Fourier transform. 3. The inverse The signal x(t) can be recovered from its Fourier transform. X(ω) = F[x(t)] We've seen an example of this with the transform pairs pτ (t) ↔ τ sinc ( | 2022-07-06 02:05:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9379961490631104, "perplexity": 1401.283430039611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104655865.86/warc/CC-MAIN-20220705235755-20220706025755-00749.warc.gz"} |
https://en-academic.com/dic.nsf/enwiki/11638094 | # Dextrorotation and levorotation
Dextrorotation and levorotation
Dextrorotation and levorotation (also spelled laevorotation)[1] refer, respectively, to the properties of rotating plane polarized light clockwise (for dextrorotation) or counterclockwise (for levorotation), seen by an observer whom the light is approaching. A compound with dextrorotation is called dextrorotatory or dextrorotary[2], while a compound with levorotation is called levorotatory or levorotary[2].
Compounds with these properties are said to have optical activity and consist of chiral molecules. If a chiral molecule is dextrorotary, its enantiomer will be levorotary, and vice-versa. In fact, the enantiomers will rotate polarized light the same number of degrees, but in opposite directions.
It is not possible to determine whether a given chiral molecule will be levorotatory or dextrorotatory directly from its configuration, except via detailed computer modeling.[3] In particular, both "R" and "S" stereocenters have the ability to be dextrorotatory or laevorotatory.
## Chirality prefixes
### The prefixes "(+)-", "(–)-", "d-", "l-", "D-", and "L-"
A dextrorotary compound is often prefixed "(+)-" or "d-". Likewise, a levorotary compound is often prefixed "(–)-" or "l-". These "d-" and "l-" prefixes should not be confused with the "D-" and "L-" prefixes based on the actual configuration of each enantiomer, with the version synthesized from naturally occurring (+)-glyceraldehyde being considered the D- form. For example, nine of the nineteen L-amino acids commonly found in proteins are dextrorotatory (at a wavelength of 589 nm), and D-fructose is also referred to as levulose because it is levorotatory. See the article: Chirality (chemistry).
### The prefixes "(R)-" and "(S)-"
The R and S prefixes are different from the preceding ones in that the labels R and S characterize a specific stereocenter, not a whole molecule. A molecule with just one stereocenter can be labeled R or S, but a molecule with multiple stereocenters needs more than one label, for example (2R,3S).
If there is a pair of enantiomers, each with one stereocenter, then one enantiomer is R and the other is S, and likewise one enantiomer is levorotary and the other is dextrorotary. However, there is no general correlation between these two labels. In some cases the R enantiomer is the dextrorotary enantiomer, and in other cases the R enantiomer is the levorotary enantiomer. The relationship can only be determined on a case-by-case basis with detailed computer modeling[3] or experimental measurements.
## Specific rotation
A standard measure of the degree to which a compound is dextrorotary or levorotary is the quantity called the specific rotation [α]. Dextrorotary compounds have a positive specific rotation, while levorotary compounds have negative. Two enantiomers have equal and opposite specific rotations.
The formula for specific rotation is:
$[\alpha] = \frac{\alpha}{c \cdot l}$
where: [α] = specific rotation
α = observed rotation
c = concentration of the solution of an enantiomer
l = length of the tube (Polarimeter tube) in decimeters
The degree of rotation of plane-polarized light depends on the number of chiral molecules that it encounters on its way through the tube of polarimeter (thus, the length of the tube and concentration of the enantiomer). In many cases, it also depends on the temperature and the wavelength of light that is employed.
## Other terminology
The equivalent French terms are dextrogyre and levogyre. These are occasionally (but very infrequently) used in English.[4]
## References
1. ^ The first word component dextro- comes from Latin word for dexter "right (as opposed to left)". Laevo- or levo- comes from the Latin for laevus, "left side."
2. ^ a b Solomons, T.W. Graham, and Graig B. Fryhle. Organic Chemistry. 8th. Hoboken: John Wiley & Sons, Inc., 2004.
3. ^ a b See, for example, this paper, "Determination of absolute configuration using ab initio calculation of optical rotation", by Stephens et al.
4. ^ For example: Farnesyltransferase inhibitors in cancer therapy, edited by Sebti and Hamilton, p126
Wikimedia Foundation. 2010.
### Look at other dictionaries:
• Levorotation and dextrorotation — Dextrorotation and levorotation (also spelled laevorotation)The first word component comes from Latin word for right (as opposed to left) . Laevo or levo comes from the Latin for left. ] refer, respectively, to the properties of rotating plane… … Wikipedia
• Prefixes, medical — Medical words are often put together, cobbled from two or more building blocks. Among these building blocks are the prefixes. Examples of prefixes used in medicine include: {{}}a : Prefix much employed in the health sciences indicating “not,… … Medical dictionary
• Optical rotation — (optical activity) is the turning of the plane of linearly polarized light about the direction of motion as the light travels through certain materials. It occurs in solutions of chiral molecules such as sucrose (sugar), solids with rotated… … Wikipedia
• L-DOPA — Systematic (IUPAC) name … Wikipedia | 2023-01-30 09:04:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6748471856117249, "perplexity": 4562.362675301293}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00343.warc.gz"} |
https://weber.itn.liu.se/~aidvi05/cplusplus/my-docs/Functions/parameter-passing.html | # Passing arguments to a function
In this section we discuss how the value of the actual arguments in a function call are passed to the function. Consider again the function sum.
First, it is important to keep in mind that functions only have access to the local variables defined in the function’s body (e.g. result) and the functions formal arguments (e.g. n).
Consider the following main which calls sum(a) (line $8$).
Function sum does not have access to variable a because this variable is defined in the main. How does then function sum have access to the value of variable a? The answer is that call-by-value is used when function sum is called.
## Call-by-value
Call-by-value is a technique used when a function is called that copies the value of the actual arguments into the variables representing the formal arguments of the function. In other words, the formal arguments of the function receive a copy of the corresponding actual arguments in the function call. The correspondence between formal and actual arguents is done by position in the argument list, i.e. the first actual argument is copied into the first formal argument, the second actual argument is copied into the second formal argument, and so on.
Consider the example above and the function call in line $8$ of the main. Then, the formal argument n of function sum receives a copy of the int stored in the actual parameter a. Note that the function does not have access to variable a. Function sum has only access to a copy of a which is stored in the variable n.
Thus, using call-by-value implies that the function can modify the value stored in the formal argument, n, without modifying the actual argument, a. For instance, function sum could be written as follows, without running the risk of modifying the actual argument a.
A disadvantage of call-by-value method is that it requires copying which consumes both memory space and time. Thus, call-by-value can be quite inefficient when applied to arguments that occupy many bytes. Another limitation of call-by-value is illustrated by the following example.
Assume one wants to write a function that swaps the values of two variables. This function is actually a very central function in many libraries, like the C++-standard library, because many important algorithms rely on swapping values of two variables (such as sorting functions). Let’s then make our first attempt to write such function. The function below would not accomplish this task.
void swap(int x, int y) {
int temp = x;
x = y;
y = temp;
}
Consider the following code excerpt which calls function swap given above.
int main() {
int a = 6;
int b = 8;
swap(a,b); // variables a and b are not modified: a stores 6 and b stores 8
std::cout << "a = " << a
<< " b = " << b << "\n";
}
Since call-by-value is used, the value of the actual arguments a and b are copied into the formal arguments of the function, x and y, repectively. The function swap does not have access to the variables a and b. Instead, the function swaps the values in the variables x and y (i.e. the copies of a and b are swapped). The function swap provided above is just useless, though it compiles (and executes).
So, how can one write in C++ a function that swaps the values of two variables? The answer is to use call-by-reference, instead of call-by-value, to pass the arguments a and b to the function.
## Call-by-reference
Call-by-reference is a technique used when a function is called where the formal arguments of the function refer to the actual arguments. In other words, the formal parameters of the function give access the the variables representing the actual parameters used in the function call. Consequently, the function called can modify the actual parameters which are variables outside the function.
Let’s re-write the function swap but now we use call-by-reference, instead of call-by-value.
Observe the code carefully. Nothing changed in the function’s body, i.e. it’s still the same logic. The only modification is in the list of formal arguments of the function:
void swap(int& x, int& y);
The ampersand (&) is used before the formal arguments x and y. Consider again the function call swap(a, b).
int main() {
int a = 6;
int b = 8;
swap(a,b); // variables a and b are modified: a stores 8 and b stores 6
std::cout << "a = " << a
<< " b = " << b << "\n";
}
The formal arguments, x and y, refer to the actual arguments a and b, respectively. The execution of the statement x = y; (line $4$) takes into account that x and y are kind of special variables that refer to other variables outside the function swap and the value of the variable referred by y (i.e. y refers to variable b) is stored in the variable referred by x (i.e. x refers to variable a). Thus, the statement x = y; (line $4$ of function swap) effectively modifies the value of the actual parameter a. A similar idea applies to statement y = temp; (line $5$) stating that the value stored in variable temp is copied into the variable referred by y (thus, the value stored in b is modified). The figure below illustrates this idea.
At first sight, the concept of a variable referring to another variable may seem quite vague (almost illusive). You may wonder but how does a variable refer to another variable? How is this implemented by the computer? Well, that’s business for the compilers and compilers may implement this concept in different ways. For instance, the formal arguments of function swap may receive the memory address of the actual arguments, i.e. x stores the memory address of variable a and y stores the memory address of variable b. In this way, function swap can access the actual arguments, a and b, which are variables defined outside the function. Indeed, a powerful method to pass information to functions!! But this power has also its dangers, if misused.
Consider again function sum but now using call-by-reference.
Consider also the following main which includes a call to function sum. This piece of code will display “sum(272) = 136” which is not the correct output we would expect (the correct output is “sum(16) = 136”).
int main() {
int a = 16;
int result = sum(a); // Ohoops, variable a has unexpectedly been modified: a stores 272
std::cout << "sum(" << a << ")=" << result;
}
The bug lies in the fact that the formal argument n gives access to variable a. Thus, the statement n *= (n + 1); in the function (line $4$) modifies the value of the variable referred by n, i.e. variable a is modified.
## Good programming practices
The table below compares and contrasts both methods used in C++ to pass arguments to functions: call-by-value and call-by-reference. Similar methods also appear in other languages such as C#, Java, PHP, Javascript, and Python.
Call-by-value Call-by-reference
Meaning A copy of the actual arguments is passed to the function. The formal arguments refer to the actual arguments. Usually, the memory address of the actual arguments is passed to the function.
Advantages Actual arguments cannot be changed accidentally by the function. Efficiency. It does not require to make copies of variables.
Disadvantages Copying variables with many bytes consumes extra time and space. Accidental changes in a formal argument also affect the value of a variable outside the function.
The C++ core guidelines describes good programming practices about the use of both methods and the C++ community expects the programmers to follow these good practices. The table below gives a simple (yet incomplete) summary of some of those good practices.
Call-by-value Call-by-reference
Use when type T is cheap to copy, i.e. T occupies few bytes (like int, double, bool), and one wants to ensure that the function does not modify the actual parameters. Use if the function should update the value stored in the variable referred by x.
In C++, if an argument x is passed by reference to a function f, then programmers assume that calling f will modify x.
One may wonder how to handle the case when a variable x occupying many bytes is passed to a function, though the function is not meant to modify x. This point is discussed in the next section. | 2022-11-26 08:48:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.371764600276947, "perplexity": 859.5045003390804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706285.92/warc/CC-MAIN-20221126080725-20221126110725-00005.warc.gz"} |
https://campus.datacamp.com/courses/cluster-analysis-in-r/hierarchical-clustering-2?ex=7 | # Validating the clusters
In the plot below you see the clustering results of the same lineup data you've previously worked with but with some minor modifications in the clustering steps.
• The left plot was generated using a k=2 and method = 'average'
• The right plot was generated using a k=3 and method = 'complete'
If our goal is to correctly assign each player to their correct team then based on what you see in the above plot and what you know about the data set which of the statements below are correct? | 2019-12-12 12:30:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25926369428634644, "perplexity": 415.0101897568019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540543252.46/warc/CC-MAIN-20191212102302-20191212130302-00427.warc.gz"} |
https://theanets.readthedocs.io/en/stable/api/generated/theanets.layers.recurrent.SCRN.html | # theanets.layers.recurrent.SCRN¶
class theanets.layers.recurrent.SCRN(rate='vector', **kwargs)
Simple Contextual Recurrent Network layer.
Notes
A Simple Contextual Recurrent Network incorporates an explicitly slow-moving hidden context layer with a simple recurrent network.
The update equations in this layer are largely those given by [Mik15], pages 4 and 5, but this implementation adds a bias term for the output of the layer. The update equations are thus:
$\begin{split}\begin{eqnarray} s_t &=& r \odot x_t W_{xs} + (1 - r) \odot s_{t-1} \\ h_t &=& \sigma(x_t W_{xh} + h_{t-1} W_{hh} + s_t W_{sh}) \\ o_t &=& g\left(h_t W_{ho} + s_t W_{so} + b\right). \\ \end{eqnarray}\end{split}$
Here, $$g(\cdot)$$ is the activation function for the layer and $$\odot$$ is elementwise multiplication. The rate values $$r$$ are computed using $$r = \sigma(\hat{r})$$ so that the rate values are limited to the open interval (0, 1). $$\sigma(\cdot)$$ is the logistic sigmoid.
Parameters
• xs — matrix connecting inputs to state units (called B in the paper)
• xh — matrix connecting inputs to hidden units (A)
• sh — matrix connecting state to hiddens (P)
• hh — matrix connecting hiddens to hiddens (R)
• ho — matrix connecting hiddens to output (U)
• so — matrix connecting state to output (V)
• b — vector of output bias values (not in original paper)
Additionally, if rate is specified as 'vector' (the default), then we also have:
• r — vector of learned rate values for the state units
Outputs
• out — the post-activation state of the layer
• pre — the pre-activation state of the layer
• hid — the state of the layer’s hidden units
• state — the state of the layer’s state units
• rate — the rate values of the state units
References
[Mik15] (1, 2) T. Mikolov, A. Joulin, S. Chopra, M. Mathieu, & M. Ranzato (ICLR 2015) “Learning Longer Memory in Recurrent Neural Networks.” http://arxiv.org/abs/1412.7753
__init__(rate='vector', **kwargs)
Methods
__init__([rate]) add_bias(name, size[, mean, std]) Helper method to create a new bias vector. add_weights(name, nin, nout[, mean, std, ...]) Helper method to create a new weight matrix. connect(inputs) Create Theano variables representing the outputs of this layer. find(key) Get a shared variable for a parameter by name. initial_state(name, batch_size) Return an array of suitable for representing initial state. log() Log some information about this layer. output_name([name]) Return a fully-scoped name for the given layer output. setup() to_spec() Create a specification dictionary for this layer. transform(inputs) Transform inputs to this layer into outputs for the layer.
Attributes
input_size For networks with one input, get the input size. num_params Total number of learnable parameters in this layer. params A list of all parameters in this layer.
transform(inputs)
Transform inputs to this layer into outputs for the layer.
Parameters: inputs : dict of theano expressions Symbolic inputs to this layer, given as a dictionary mapping string names to Theano expressions. See base.Layer.connect(). outputs : dict of theano expressions A map from string output names to Theano expressions for the outputs from this layer. This layer type generates a “pre” output that gives the unit activity before applying the layer’s activation function, a “hid” output that gives the post-activation values before applying the rate mixing, and an “out” output that gives the overall output. updates : sequence of update pairs A sequence of updates to apply to this layer’s state inside a theano function. | 2019-09-18 22:11:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49630871415138245, "perplexity": 4407.350059335194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573368.43/warc/CC-MAIN-20190918213931-20190918235931-00421.warc.gz"} |
http://mathoverflow.net/revisions/90315/list | MathOverflow will be down for maintenance for approximately 3 hours, starting Monday evening (06/24/2013) at approximately 9:00 PM Eastern time (UTC-4).
2 added 5 characters in body
Let $W$ be a vector space of dimension $n$ containing $V$. Let $\alpha$ be an endomorphism of $V^{\otimes n}$ commuting with the action of ${\rm GL}(V)$. Suppose that $\alpha$ can be extended to an endomorphism $\beta$ of $W^{\otimes n}$ that commutes with the action of ${\rm GL}(W)$. Then, by the argument given by David Speyer in the question, there exist scalars $c_\sigma \in \mathbf{C}$ such that
$$\beta = \sum_{\sigma \in S_n} c_\sigma \sigma$$
and this also expresses $\alpha$ as a linear combination of place permutations of the tensor factors. (As I noted in my comment, this expression is, in general, far from unique.)
Any proof that such an extension exists must use the semisimplicity of $\mathbf{C}S_n$, since otherwise we get an easy proof of general Schur-Weyl duality. If we assume that ${\rm GL}(W)$ is acts as the full ring of $S_n$-invariant endomorphisms of $W^{\otimes n}$ then a fairly short proof is possible. I think it is inevitable that it uses many of the same ideas as the double-centralizer theorem. A more direct proof would be very welcome.
Let $U$ be a simple $\mathbf{C}S_n$-module appearing in $V^{\otimes n}$. Let
$$X = U_1 \oplus \cdots \oplus U_a \oplus U_{a+1} \oplus \cdots \oplus U_b$$
be the largest submodule of $W^{\otimes n}$ that is a direct sum of simple $\mathbf{C}S_n$-modules isomorphic to $U$. We may choose the decomposition so that $X \cap V^{\otimes n} = U_1 \oplus \cdots \oplus U_a$. Each projection map $W^{\otimes n} \rightarrow U_i$ is $S_n$-invariant, and so is induced by a suitable linear combination of elements of ${\rm GL}(W)$. Hence each $U_i$ for $1 \le i \le a$ is $\alpha$-invariant. Similarly, for each pair $i$, $j$ there is a isomorphism $U_i \cong U_j$ induced by ${\rm GL}(W)$; these isomorphisms are unique up to scalars (by Schur's Lemma). Using these isomorphisms we get a unique ${\rm GL}(W)$-invariant extension of $\alpha$ to $X$.
Finally let $W^{\otimes n} = C \oplus D$ where $C$ is the sum of all simple $\mathbf{C}S_n$-submodules of $W^{\otimes n}$ isomorphic to a submodule of $V^{\otimes n}$ and $D$ is a complementary $\mathbf{C}S_n$-submodule. The previous paragraph extends $\alpha$ to a map $\beta$ defined on $C$. The projection map $W^{\otimes n} \rightarrow D$ is $S_n$-invariant and so is induced by ${\rm GL}(W)$. Hence we can set $\beta(D) = 0$ and obtain a ${\rm GL}(W)$-invariant extension $\beta : W^{\otimes n} \rightarrow W^{\otimes n}$ of $\alpha$.
1
Let $W$ be a vector space of dimension $n$ containing $V$. Let $\alpha$ be an endomorphism of $V^{\otimes n}$ commuting with the action of ${\rm GL}(V)$. Suppose that $\alpha$ can be extended to an endomorphism $\beta$ of $W^{\otimes n}$ that commutes with the action of ${\rm GL}(W)$. Then, by the argument given by David Speyer in the question, there exist scalars $c_\sigma \in \mathbf{C}$ such that
$$\beta = \sum_{\sigma \in S_n} c_\sigma \sigma$$
and this also expresses $\alpha$ as a linear combination of place permutations of the tensor factors. (As I noted in my comment, this expression is, in general, far from unique.)
Any proof that such an extension exists must use the semisimplicity of $\mathbf{C}S_n$, since otherwise we get an easy proof of general Schur-Weyl duality. If we assume that ${\rm GL}(W)$ is the full ring of $S_n$-invariant endomorphisms of $W^{\otimes n}$ then a fairly short proof is possible. I think it is inevitable that it uses many of the same ideas as the double-centralizer theorem. A more direct proof would be very welcome.
Let $U$ be a simple $\mathbf{C}S_n$-module appearing in $V^{\otimes n}$. Let
$$X = U_1 \oplus \cdots \oplus U_a \oplus U_{a+1} \oplus \cdots \oplus U_b$$
be the largest submodule of $W^{\otimes n}$ that is a direct sum of simple $\mathbf{C}S_n$-modules isomorphic to $U$. We may choose the decomposition so that $X \cap V^{\otimes n} = U_1 \oplus \cdots \oplus U_a$. Each projection map $W^{\otimes n} \rightarrow U_i$ is $S_n$-invariant, and so is induced by a suitable linear combination of elements of ${\rm GL}(W)$. Hence each $U_i$ for $1 \le i \le a$ is $\alpha$-invariant. Similarly, for each pair $i$, $j$ there is a isomorphism $U_i \cong U_j$ induced by ${\rm GL}(W)$; these isomorphisms are unique up to scalars (by Schur's Lemma). Using these isomorphisms we get a unique ${\rm GL}(W)$-invariant extension of $\alpha$ to $X$.
Finally let $W^{\otimes n} = C \oplus D$ where $C$ is the sum of all simple $\mathbf{C}S_n$-submodules of $W^{\otimes n}$ isomorphic to a submodule of $V^{\otimes n}$ and $D$ is a complementary $\mathbf{C}S_n$-submodule. The previous paragraph extends $\alpha$ to a map $\beta$ defined on $C$. The projection map $W^{\otimes n} \rightarrow D$ is $S_n$-invariant and so is induced by ${\rm GL}(W)$. Hence we can set $\beta(D) = 0$ and obtain a ${\rm GL}(W)$-invariant extension $\beta : W^{\otimes n} \rightarrow W^{\otimes n}$ of $\alpha$. | 2013-06-19 09:19:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9691818952560425, "perplexity": 70.41036330029881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708546926/warc/CC-MAIN-20130516124906-00093-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://artmedclinic.pl/industrielle/Oct/1320.html | Welcome to the broken dawn
Section6: Electromagnetic Radiation Potential formulation of Maxwell equations Now we consider a general solution of Maxwell’s equations. Namely we are interested how the sources (charges and currents) generate electric and magnetic fields. For simplicity we restrict our considerations to the vacuum. In this case Maxwell’s equations have ...
The general solution to the electromagnetic wave equation is a linear superposition of waves of the form $${\displaystyle \mathbf {E} (\mathbf {r} ,t)=g(\phi (\mathbf {r} ,t))=g(\omega t-\mathbf {k} \cdot \mathbf {r} )}$$ $${\displaystyle \mathbf {B} (\mathbf {r} ,t)=g(\phi (\mathbf {r} ,t))=g(\omega t-\mathbf {k} \cdot \mathbf {r} )}$$for virtually any well-behaved function g of dimensionless argument φ, where ω is the angular frequency (in radians per second), and k = (kx, ky, kz) is the wave vector (in radians per
28–3 The dipole radiator. As our fundamental “law” of electromagnetic radiation, we are going to assume that ( 28.6) is true, i.e., that the electric field produced by an accelerating charge which is moving nonrelativistically at a very large distance approaches that form.
Electromagnetic Wave Problems (4) Solution in detail below: First we need to consider what would be the best equation to use. Obviously, the equation involving energy change, Plank's constant, and frequency is the best way to go. Next, we need to figure out what we are solving for.
Aug 24, 2018 · Electromagnetic wave equation describes the propagation of electromagnetic waves in a vacuum or through a medium. The electromagnetic wave equation is a second order partial differential equation. ... Infrared radiation is used for night vision and is used in security camera.
Electromagnetic radiation is an electric and magnetic disturbance traveling through space at the speed of light (2.998 × 108 m/s). It contains neither mass nor charge but travels in packets of radiant energy called photons, or quanta. Examples of EM radiation include radio waves and microwaves, as well as infrared, ultraviolet, gamma, and x ...
Aug 08, 2014 · The equation that relates wavelength, frequency, and speed of light is c = lambda*nu c = 3.00xx10^8 "m/s" (the speed of light in a vacuum) lambda = wavelength in meters nu = frequency in Hertz (Hz) or 1/"s" or "s"^(-1)". So basically the wavelength times the frequency of an electromagnetic wave equals the speed of light. FYI, lambda is the Greek letter lambda , and nu is the Greek letter nu ...
Radiation is the rate of heat transfer through the emission or absorption of electromagnetic waves. The rate of heat transfer depends on the surface area and the fourth power of the absolute temperature: $\displaystyle\frac{Q}{t}=\sigma eA{T}^{4}\\$, where σ = 5.67 × 10 −8 J/s ⋅ m 2 ⋅ K 4 is the Stefan-Boltzmann constant ...
James Clerk Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave. Maxwell's equations were confirmed by Heinrich Hertz through experiments with radio waves. Maxwell realized that since a lot of physics is symmetrical and mathematically artistic in a way
## 28 Electromagnetic Radiation - The Feynman Lectures on ...
28–3 The dipole radiator. As our fundamental “law” of electromagnetic radiation, we are going to assume that ( 28.6) is true, i.e., that the electric field produced by an accelerating charge which is moving nonrelativistically at a very large distance approaches that form.
## Module 3 - The Electromagnetic Radiation - Problems ...
Electromagnetic Wave Problems (4) Solution in detail below: First we need to consider what would be the best equation to use. Obviously, the equation involving energy change, Plank's constant, and frequency is the best way to go. Next, we need to figure out what we are solving for.
equation speed of light = frequency x wavelength Electromagnetic waves . Wave Model of Electromagnetic Radiation The EM wave consists of two fluctuating fields—one electric (E) and the ... is in the form of electromagnetic radiation (EMR).
## How can I calculate the wavelength of electromagnetic ...
Aug 09, 2014 · The equation that relates wavelength, frequency, and speed of light is c = lambda*nu c = 3.00xx10^8 "m/s" (the speed of light in a vacuum) lambda = wavelength in meters nu = frequency in Hertz (Hz) or 1/"s" or "s"^(-1)". So basically the wavelength times the frequency of an electromagnetic wave equals the speed of light. FYI, lambda is the Greek letter lambda , and nu is the Greek letter nu ...
electromagnetic radiation This page is a basic introduction to the electromagnetic spectrum sufficient for chemistry students interested in UV-visible absorption spectroscopy. If you are looking for any sort of explanations suitable for physics courses, then I'm afraid this isn't the right place for you.
## Light: Electromagnetic waves, the electromagnetic spectrum ...
The electromagnetic spectrum is comprised of all the varieties of radiation in the universe. Gamma rays have the highest frequency, whereas radio waves have the lowest. Visible light is approximately in the middle of the spectrum, and comprises a very small fraction of the overall spectrum. The electromagnetic spectrum.
## Radiation from electric dipole 8-10-10
Radiation from electric dipole moment Masatsugu Sei Suzuki and Itsuko S. Suzuki Department of Physics, State University of New York at Binghamton, Binghamton, NY 13902-6000 (Date: August 11, 2010) Maxwell's equations imply that all classical electromagnetic radiation is ultimately generated by accelerating electrical charges.
## Light And Electromagnetic Radiation - MCAT Content
MCAT Content / Light And Electromagnetic Radiation. Classification of electromagnetic spectrum, photon energy E = hf Concept of Interference; Young Double-slit Experiment Other diffraction phenomena, X-ray diffraction Polarization of light: linear and circular ...
## Light and electromagnetic radiation questions (practice ...
Questions pertaining to light and electromagnetic radiation If you're seeing this message, it means we're having trouble loading external resources on our website. If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked.
## Radiation Heat Transfer - Engineering ToolBox
Radiation Heat Transfer Calculator. This calculator is based on equation (3) and can be used to calculate the heat radiation from a warm object to colder surroundings. Note that the input temperatures are in degrees Celsius. ε - emissivity coefficient. t h - object hot temperature (o C) t c - surroundings cold temperature (o C) A c - object ...
## electromagnetic radiation | Spectrum, Examples, & Types ...
electromagnetic radiation, in classical physics, the flow of energy at the universal speed of light through free space or through a material medium in the form of the electric and magnetic fields that make up electromagnetic waves such as radio waves, visible light, and gamma rays.In such a wave, time-varying electric and magnetic fields are mutually linked with each other at right angles and ...
What is the electromagnetic wave equation? ... If the wavelength of a beam of electromagnetic radiation increases by a factor of 2, then its frequency must. decrease by half. The intensity of radiation ____ in ____ proportion to the square of the distance of the object from the source. decreases; inverse.
## (PDF) Self-focusing of electromagnetic radiation in ...
This system of equations is a generalization of the equations derived by Berezhi- ani and Mahajan (1994), where the possibility of finding soliton solutions in e-p-i SELF-FOCUSING OF ELECTROMAGNETIC RADIATION 243 plasmas was investigated.
## Electromagnetic radiation - Simple English Wikipedia, the ...
Electromagnetic Waves from Maxwell's Equations Archived 2007-07-10 at the Wayback Machine on Project PHYSNET. Conversion of frequency to wavelength and back - electromagnetic, radio and sound waves; eBooks on Electromagnetic radiation and RF; The Science of Spectroscopy Archived 2019-03-23 at the Wayback Machine - supported by NASA ...
## Module 3 - The Electromagnetic Radiation
Electromagnetic waves can be described by their wavelengths, energy, and frequency. All three describe a different property of light, yet they are related to each other mathematically. The two equations below show the relationships: Equation 1.
equation speed of light = frequency x wavelength Electromagnetic waves . Wave Model of Electromagnetic Radiation The EM wave consists of two fluctuating fields—one electric (E) and the ... is in the form of electromagnetic radiation (EMR).
## Basic Electromagnetic Wave Properties
Feb 28, 2016 · The wavelength of light, and all other forms of electromagnetic radiation, is related to the frequency by a relatively simple equation: n = c/l. where c is the speed of light (measured in meters per second), n is the frequency of the light in hertz (Hz), and l is the wavelength of the light measured in meters. From this relationship one can ...
## CHEM 101 - Electromagnetic radiation and waves
Sep 11, 2018 · Electromagnetic (EM) radiation. Visible light is a particular form of electromagnetic (EM) radiation; Other familiar forms of energy transmission, such as radio, microwaves, infrared radiation, ultraviolet (UV) light, and X-rays are all different forms of EM radiation; All EM radiation can be described as waves.
## Chapter 4 Electromagnetic Radiation - DTU
Electromagnetic Radiation Whether it be molecules, the waves of the sea or myriads of stars, elements of nature form overall structures. Peter Haarby describing Inge Lise Westman’s paintings In the previous chapter we found a number of correspondences between quantum elds and Maxwell’s equations. In particular, we found that the electromag-
## Electromagnetic Radiation Explained - study
Sep 28, 2021 · Electromagnetic radiation is the propagation of electromagnetic waves (light) through space. It is created by the motion of charged particles and
## LearnEMC - Introduction to Electromagnetic Radiation
Electromagnetic Radiation. Radiated coupling results when electromagnetic energy is emitted from a source, propagates to the far-field, and induces voltages and currents in another circuit. Unlike common impedance coupling, no conducted path is required. Unlike electric and magnetic field coupling, the victim circuit is not in the ...
## Radiation – The Physics Hypertextbook
Heat radiation (as opposed to particle radiation) is the transfer of internal energy in the form of electromagnetic waves. For most bodies on the Earth, this radiation lies in the infrared region of the electromagnetic spectrum. One of the first to recognize that heat radiation is related to light was the English astronomer William Herschel ...
## Electromagnetic Spectrum Calculator • Magnetostatics ...
Electromagnetic radiation is the flow of energy in the form of periodic oscillations of electric and magnetic fields that can propagate through a vacuum at the speed of light or through any medium that is transparent to them at a speed less than the speed of light.
What is the electromagnetic wave equation? ... If the wavelength of a beam of electromagnetic radiation increases by a factor of 2, then its frequency must. decrease by half. The intensity of radiation ____ in ____ proportion to the square of the distance of the object from the source. decreases; inverse.
## Practice Calculating Energy of Electromagnetic Waves ...
Let's practice using the energy equation to determine the energies of different electromagnetic radiations. Example 1 Determine the energy associated with an x-ray whose frequency is 3 x 10 17 hertz.
## Electromagnetic fields vs electromagnetic radiation
Electromagnetic Radiation. The rules for the relationship between electric and magnetic fields work out so that you can get propagating waves of electric and magnetic fields traveling through space. Very roughly speaking, the changing electric field creates a changing magnetic field, which creates a changing electric field, etc, and the whole ...
## Electromagnetic waves
The radiation pressure on an object that reflects the radiation is therefore twice the radiation pressure on an object that absorbs the radiation. Photons. Electromagnetic waves transport energy and momentum across space. The energy and momentum transported by an electromagnetic wave are not continuously distributed over the wave front.
## INTRODUCTION The Electromagnetic Spectrum
Electromagnetic radiation is the messen-ger, or the signal from sender to receiver. The sender could be a TV station, a star, or the burner on a stove. The receiver could be a TV set, an eye, or an X-ray film. In each case, the sender gives off or reflects some kind of electromagnetic radiation. All these different kinds of electromagnetic ... | 2022-05-19 14:31:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7824506163597107, "perplexity": 632.1549544381578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529538.2/warc/CC-MAIN-20220519141152-20220519171152-00335.warc.gz"} |
http://projecteuclid.org/euclid.pjm/1103037323 | ## Pacific Journal of Mathematics
### On the graph structure of convex polyhedra in $n$-space.
M. L. Balinski
#### Article information
Source
Pacific J. Math. Volume 11, Number 2 (1961), 431-434.
Dates
First available in Project Euclid: 14 December 2004
http://projecteuclid.org/euclid.pjm/1103037323
Mathematical Reviews number (MathSciNet)
MR0126765
Zentralblatt MATH identifier
0103.39602
Subjects
Primary: 52.10
Secondary: 90.10
#### Citation
Balinski, M. L. On the graph structure of convex polyhedra in $n$-space. Pacific J. Math. 11 (1961), no. 2, 431--434. http://projecteuclid.org/euclid.pjm/1103037323.
#### References
• [1] Michel L. Balinski, An Algorithm for Finding all Vertices of Convex Polyhedral Sets, doctoral dissertation, Princeton Universty, June, 1959. J. Soc. Indust. Applied Math., 9 (1961), 72-88.
• [2] T. A. Brown, HamiltonianPaths on Convex Polyhera, unpublished note, The RAND Corporation, August 1960, (included while in press).
• [3] G. B. Dantzig and D. R. Fulkerson, On the Max-Flow Min-Cut Theorem ofNetworks, paper 12 of Liner Inequalities and Related Systems, H. W. Kuhn and A. W. Tucker (eds.) Annals of Mathematics No. 38, Princeton University Press, Princeton, N.J., 1956.
• [4] G. A. Dirac, Some theorems of abstract graphs, Proc. London Math. Soc, Series 3, Vol. II (1952), 69-81.
• [5] L. R. Ford, Jr. and D. R. Fulkerson, Maximal flow through a network, Canadian Math., 8 (1956), 399-404.
• [6] A. W. Tucker, Linear Inequalities and Convex Polyhedral Sets, Proceedings of the Second Symposium in Linear Programming, Bureau of Standards, Washington, D. C, January 27-29, 1955, pp. 569-602.
• [7] W. T. Tutte, On Hamiltonian circuits, Journal London Math. Soc, 21 (1946), 98-101.
• [8] Hassler Whitney, Congruent graphs and the connectivity of graphs, Amer. J. Math., 54 (1932), 150-168. | 2016-09-30 18:34:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37966904044151306, "perplexity": 3776.9932922836674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662321.90/warc/CC-MAIN-20160924173742-00212-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/350087/enumerate-with-custom-strings | # Enumerate with custom strings
I'm trying to get a list of custom labels. Here is what I've tried:
\begin{enumerate}[Exercise (1)]
\item first
\item second
\end{enumerate}
This results in each item being labeled as: Exerc#se (#)
Where # is the number of the item in the list. How can I make the "i" in exercise not be pattern matched?
Using a \ doesn't work, it causes syntax error.
• Are you using the enumerate package or enumitem with the shortlabels option. Please provide a compilable document!
– user31729
Jan 23, 2017 at 19:21
• I am using the enumerate package. Thank you for the help Jan 23, 2017 at 19:22
• please always use a complete document, so people don't have to guess. Jan 23, 2017 at 19:23
• Use \begin{enumerate}[{Exercise} (1)] ...
– user31729
Jan 23, 2017 at 19:23
• for the case you have, my old enumerate package is fine, but in general enumitem as in @ChristianHupfer's answer is rather more flexible. Jan 23, 2017 at 19:30
As documented in the package documentation, letters inside {} are never taken as counter templates so
[{Exercise }(1)]
should hide Exercise
For completeness, here's a possible way to solve this with enumitem (a little bit more complex)
\documentclass{article}
\usepackage{enumitem}
\begin{document}
\begin{enumerate}[font={\bfseries},label={Exercise (\arabic*)}]
\item first
\item second
\end{enumerate}
\end{document}
• The screenshot is from a version without font={\bfseries}
– user31729
Jan 23, 2017 at 19:33 | 2022-08-19 07:31:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6504876613616943, "perplexity": 3144.8919569516042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573630.12/warc/CC-MAIN-20220819070211-20220819100211-00384.warc.gz"} |
https://math.stackexchange.com/questions/2417730/inequality-originated-from-order-statistics | # Inequality originated from order statistics
The following question is given in V.K Rohatgi problem 7.2.4.
Let $x_1,x_2,...,x_n$ be real numbers, and let $x_{(n)} = \max(x_1,x_2,...,x_n)$ for $n\geq 2$, and $x_{(1)} = \min(x_1,x_2,...,x_n)$. Show that for any set of real numbers $a_1,a_2,...,a_n$ such that $\sum_{i=1}^n a_i = 0$, the following inequality holds: $$\left|\sum_{i=1}^n a_i.x_i\right| \leq \frac{1}{2}(x_{(n)} - x_{(1)})\sum_{i=1}^n |a_i|$$
• you meant $\sum_{i=1}^{n}$ likely – phdmba7of12 Sep 5 '17 at 14:16
• yeah, my bad i'll correct it asap – AshishSinha5 Sep 5 '17 at 14:18
We assume without loss of generality that all the $a_i$ are non-zero.
Note that $a_i+|a_i|=0$ if $a_i< 0$ and $a_i+|a_i|=2a_i$ if $a_i> 0$.
Similarly, $a_i-|a_i|=0$ if $a_i> 0$ and $a_i-|a_i|=2a_i$ if $a_i< 0$.
Therefore, $\sum_i(|a_i|+a_i)=2\sum_{i, a_i> 0} a_i$ and $\sum_i(|a_i|-a_i)=-2\sum_{i, a_i< 0} a_i$. Using the hypothesis $\sum_i a_i = 0$, we get $$\sum_i|a_i| = 2\sum_{i, a_i> 0} a_i=-2\sum_{i, a_i< 0} a_i$$
If $a_i<0$, $a_ix_{(n)}\leq a_ix_i\leq a_ix_{(1)}$, hence $$x_{(n)}\sum_{i, a_i< 0} a_i \leq \sum_{i, a_i< 0}a_ix_i \leq x_{(1)}\sum_{i, a_i< 0} a_i$$
Similarly, $$x_{(1)}\sum_{i, a_i> 0} a_i \leq \sum_{i, a_i> 0}a_ix_i \leq x_{(n)}\sum_{i, a_i> 0} a_i$$
Summing these last two inequalities, $$x_{(1)}\sum_{i, a_i> 0} a_i + x_{(n)}\sum_{i, a_i< 0} a_i\leq \sum_{i}a_ix_i \leq x_{(1)}\sum_{i, a_i< 0} a_i + x_{(n)}\sum_{i, a_i> 0} a_i$$
But $\displaystyle x_{(1)}\sum_{i, a_i< 0} a_i + x_{(n)}\sum_{i, a_i> 0} a_i = x_{(1)}\left(-\frac{1}2 \sum_i |a_i| \right)+x_{(n)}\left(\frac{1}2 \sum_i |a_i| \right) = \frac {x_{(n)}-x_{(1)}}2 \sum_i |a_i|$
Similarly, $\displaystyle x_{(1)}\sum_{i, a_i> 0} a_i + x_{(n)}\sum_{i, a_i< 0} a_i = -\frac {x_{(n)}-x_{(1)}}2 \sum_i |a_i|$
Hence $$-\frac {x_{(n)}-x_{(1)}}2 \sum_i |a_i|\leq \sum_{i}a_ix_i \leq \frac {x_{(n)}-x_{(1)}}2 \sum_i |a_i|$$
Lastly, $$\left|\sum_{i} a_ix_i\right| \leq \frac{x_{(n)} - x_{(1)}}{2}\sum_{i} |a_i|$$
• @Glen_b what is bothering you ? – Gabriel Romon Sep 6 '17 at 6:05
• @Glen_b oh yeah, of course, my bad ! – Gabriel Romon Sep 6 '17 at 7:18 | 2019-09-23 13:05:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.975813627243042, "perplexity": 469.3150226581587}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576965.71/warc/CC-MAIN-20190923125729-20190923151729-00354.warc.gz"} |
http://math.stackexchange.com/questions/395127/invariant-submanifolds | # Invariant submanifolds
Let $M$ be a smooth manifold, and let $N$ be a submanifold. Let $V$ be a smooth vector field on $M$ which generates a flow $\Phi_t$ on $M$. My intuition tells me (perhaps modulo some technical assumptions) that the following is true:
If $V(p)$ is tangent to $N$ for all $p\in N$, then $N$ is an invariant submanifold of $\Phi_t$.
Is this true? What sorts of technical assumptions would I need to worry about to make the statement rigorous? I imagine, for example, that there could be global topological issues so that perhaps the statement only holds locally.
Is there a good (basic) reference on invariant submanifolds?
-
It will work if the submanifold is compact. To prove it, you can remark that $V$ induces a vector field on $N$, and the integral curves on $N$ will still be integral curves on the ambient manifold $M$. – Olivier Bégassat May 18 '13 at 8:33
You can get by with closed -- slightly weaker than compact. – Ryan Budney May 18 '13 at 8:35
Hmm ok. Would y'all mind commenting on what could go wrong if the submanifold were not closed? – joshphysics May 18 '13 at 15:40
Let $M = \mathbb R^2$ and $V = \partial/\partial x^1$. Then $V$ is tangent to the submanifold $N = (-\infty,0)\times\{0\}$, but $N$ is not invariant under the flow. – Jack Lee May 18 '13 at 15:44
@JackLee Ah ok thanks. – joshphysics May 18 '13 at 15:59
I am a physicist, not a mathematician, so my answer may lack some of the rigor you expect. Still, since no one else has taken a stab at this problem in the last year, I will shed what little light I can.
The reference you are looking for may be Sophus Lie's 1884 Differential Invariant Paper. I was introduced to it in Oak Ridge in 1992 by Dr. Lawrence Dresner, an applied mathematician in Magnetics Division of the Y-12 lab. Lie's work was being used to solve nonlinear PDE pertaining to superconductor stability problems in the Tokamak fusion reactor.
Sophus Lie was an 18th century Norwegian mathematician, but his style was rooted in the 17th century, so his work can be somewhat dense. I recommend the translation by M Ackerman with the comments and additional material by Robert Hermann. It was published in English in 1976 by MATH SCI PRESS, ISBN 0-915692-13-9. The original paper is "On Differential Invariants", S. Lie, Math. Annallen, Vol. 24 (1884), 537-578. I am just a student, and not really qualified to speak with authority on this issue, but as I understand it Lie's basic premise is precisely your statement.
Sophus Lie was trying to do for differential equations what Evariste Galois did for polynomials. A Lie Group is a group that preserves the structure of the smooth manifold. Lie Group stabilizers, or "differential invariants" as he called them, form an embedded, invariant submanifold. Because of this, DEQ's may be rewritten in terms of group stabilizers. Because a function of stabilizers is itself a stabilizer, this puts the DEQ into the kernel of the map with the Lie algebra of the group which is used to find a solution. As Hermann says in the preface, "The key idea is that one should study the structure of the orbit space of a symmetry group on the space of solutions."
Again, pardon the lack of rigor, but perhaps you might benefit from a practical example. If a projectile is fired into a fluid the force of friction is proportional to the square of the velocity, so deriving an equation for how far a projectile penetrates a fluid as a function of time can be a challenge.
$$F=-\alpha v^2 \rightarrow m\ddot{y}=-\alpha \dot{y}^2$$Here y is the penetration distance, m is mass and $\alpha$ is the drag coefficient, a unitless constant dependent on projectile shape and the density and viscosity of the fluid. ($\dot{y}=\frac{dy}{dt}$ and $\ddot{y}=\frac{d^2y}{dt^2}$) Also, y(0)=0 and $\dot{y}(0)=v_o$, the initial velocity of the projectile.
This DEQ is invariant to the Lie group G(t,y)=$(\lambda t, \lambda^\beta y)\lambda_o=1$ Note that if $t'=\lambda t$ and $y'=\lambda^\beta y$, $\dot{y}'=\frac{dy'}{dt'}=\frac{\lambda^\beta dy}{\lambda dt}=\lambda^{\beta -1}\dot{y}$ and $\ddot{y}'=\lambda^{\beta -2}\ddot{y}$. Applying these to the DEQ, $$m\lambda^{\beta -2}\ddot{y}=-\alpha \lambda^{2\beta -2}\dot{y}^2$$ For invariance, $\beta =0$. Now find the infinitestimal transformations of the primed variables and use the method of characteristics to find the stabilizers (differential invariants).
$$\bigg(\frac{dt'}{d\lambda}\bigg)_{\lambda _o=1}=t$$ $$\bigg(\frac{dy'}{d\lambda}\bigg)_{\lambda _o=1}=\beta y$$ $$\bigg(\frac{d\dot{y}'}{d\lambda}\bigg)_{\lambda _o=1}=(\beta -1)\dot{y}$$ $$\bigg(\frac{d\ddot{y}'}{d\lambda}\bigg)_{\lambda _o=1}=(\beta -2)\ddot{y}$$ $$d\lambda=\frac{dt}{t}=\frac{dy}{\beta y}=\frac{d\dot{y}}{(\beta -1)\dot{y}}=\frac{d\ddot{y}}{(\beta -2)\ddot{y}}$$ $$\frac{dt}{t}=\frac{dy}{\beta y}\rightarrow \beta ln t=ln y + \mu \rightarrow \mu=\frac{y}{t^\beta}\bigg|_{\beta =0}=y$$ $$\frac{dt}{t}=\frac{d\dot{y}}{(\beta -1)\dot{y}} \rightarrow \nu=\frac{\dot{y}}{t^{\beta -1}}\bigg|_{\beta =0}=t\dot{y}$$ $$\frac{dt}{t}=\frac{d\ddot{y}}{(\beta -2)\ddot{y}} \rightarrow \eta=\frac{\ddot{y}}{t^{\beta -2}}\bigg|_{\beta =0}=t^2\ddot{y}$$ These constants of integration, $\mu$, $\nu$, and $\eta$, are group stabilizers for the group of differential equations, a subset of the group of polynomials. There are an infinite number of them but we only need the first three for our DEQ. These differential invariants form an embedded submanifold in the solution space, which means the DEQ may be rewritten in terms of the invariants. Multiplying the DEQ by $t^2$, we have $$mt^2\ddot{y}=-\alpha(t\dot{y})^2 \rightarrow m\eta=-\alpha \nu^2$$ Since $$t\frac{d\nu}{dt}=\eta-(\beta-1)\nu$$ we can use this expression of the Lie algebra to solve our DEQ. $$t\frac{d\nu}{dt}=\nu -\frac{\alpha}{m}\nu^2 \rightarrow \frac{d\nu}{\nu-\frac{\alpha}{m}\nu^2}=\frac{dt}{t}$$ This separation of variables is a direct result of the group invariance. With a little fractional decomposition, some integration, substitution of $\nu=t\dot{y}$, more integration and application of initial conditions, you get $$y=\frac{m}{\alpha}ln\bigg(1+\frac{\alpha}{m}v_o t\bigg)$$This solution is easily verified and provides accurate results.
Again, the applied math may not be exactly what you are looking for, but hopefully it will spark some thoughts. On page 102 of the afore-mentioned book is Lie's Theorem 4.4.1. "Every infinite continuous group determines an infinite sequence of differential invariants, which can be defined as the solutions of complete systems." In the following comments by Hermann, he states "As far as I can tell, Lie (in the spirit of the 18th century) assumes that everything is suitably 'general', and that these facts are self evident. As far as I know, they are not proved to this day!" What was true for Hermann in 1976 may still hold. The method obviously works so the proof should exist, but since nobody has answered your post in a year it may be possible that such a valuable proof is still out there waiting to be discovered.
- | 2014-08-22 04:05:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8399456739425659, "perplexity": 286.11704543596323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500822560.65/warc/CC-MAIN-20140820021342-00256-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://www.shaalaa.com/question-bank-solutions/for-frequency-distribution-standard-deviation-computed-applying-formula-statistics-statistics-concept_56006 | Department of Pre-University Education, KarnatakaPUC Karnataka Science Class 11
# For a Frequency Distribution Standard Deviation is Computed by Applying the Formula - Mathematics
MCQ
For a frequency distribution standard deviation is computed by applying the formula
#### Options
• $\sigma = \sqrt{\frac{\Sigma f d^2}{\Sigma f} - \left( \frac{\Sigma f d}{\Sigma f} \right)^2}$
• $\sigma = \sqrt{\left( \frac{\Sigma f d}{\Sigma f} \right)^2 - \frac{\Sigma f d^2}{\Sigma f}}$
• $\sigma = \sqrt{\frac{\Sigma f d^2}{\Sigma f} - \frac{\Sigma fd}{\Sigma f}}$
• $\sqrt{\left( \frac{\Sigma fd}{\Sigma f} \right)^2 - \frac{\Sigma f d^2}{\Sigma f}}$
#### Solution
$\sigma = \sqrt{\frac{\Sigma f d^2}{\Sigma f} - \left( \frac{\Sigma f d}{\Sigma f} \right)^2}$
Concept: Statistics - Statistics Concept
Is there an error in this question or solution?
#### APPEARS IN
RD Sharma Class 11 Mathematics Textbook
Chapter 32 Statistics
Q 2 | Page 50 | 2021-03-07 15:53:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7188431620597839, "perplexity": 9721.774631791941}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178377821.94/warc/CC-MAIN-20210307135518-20210307165518-00267.warc.gz"} |
https://timesofcbd.com/cbd-infused-drink-mix-maker-oleo-raises-1-5-million-in-convertible-note-round/ | Connect with us
# CBD-Infused Drink Mix Maker OLEO Raises $1.5 Million in Convertible Note Round Published on OLEO is a consumer CBD company founded in Seattle, Washington. The company recently made headway by raising$1.5 million in a convertible note round.
Skyler Bissell, the co-founder and CEO of OLEO, shared with BevNET that the financing serves as a “bridge round” from the 2016 seed and to the next funding round, which is yet o be determined. He also stated that he sees “dramatic expansion on the horizon” for the brand, and that such growth will be targeted at retailers and drug stores. Whether such retailers and stores embrace CBD is not a matter of if, and not when.
He added, “There is hesitancy [for CBD products in drug] due to a number of unknown factors in terms of the regulatory environment. That said, many retailers are willing to test and are taking steps to move quicker. The biggest factor is that there’s just a natural fit when it comes to dietary supplements of all kinds and a drug store. So we see drug as one of the biggest channels that will continue to be activated for OLEO, as well as grocery and fitness retailers.”
OLEO’s products, including its beverage line, is available at retailers around the country, and they are available online as well. The products comes in 6ct multipacks in various flavors, such as coconut, raspberry, passionfruit, and tangerine. The formulas enable users to enjoy the benefits of CBD whether they are at work, at the gym, or on the go.
Elison Grey is one of our up and coming researchers who has a Bachelor’s Degree in Literature and Humanities, as well as being a Licensed Clinical Massage Therapist. She finds comfort in writing about overall health and wellness, diet and nutrition, natural medicine and alternative therapy methods. | 2022-07-05 18:55:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17654182016849518, "perplexity": 3457.714647548417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104597905.85/warc/CC-MAIN-20220705174927-20220705204927-00363.warc.gz"} |
https://gitlab.haskell.org/shayne-fletcher-da/ghc/-/blame/2e43779c758294571bdf5ef6f2be440487d8e196/compiler/specialise/SpecConstr.lhs | SpecConstr.lhs 70 KB
simonpj@microsoft.com committed Nov 19, 2010 1 2 3 4 5 ToDo [Nov 2010] ~~~~~~~~~~~~~~~ 1. Use a library type rather than an annotation for ForceSpecConstr 2. Nuke NoSpecConstr simonpj committed Feb 28, 2001 6 7 8 9 10 11 12 % % (c) The GRASP/AQUA Project, Glasgow University, 1992-1998 % \section[SpecConstr]{Specialise over constructors} \begin{code} module SpecConstr( ian@well-typed.com committed Nov 02, 2012 13 specConstrProgram simonpj@microsoft.com committed Oct 18, 2010 14 15 16 #ifdef GHCI , SpecConstrAnnotation(..) #endif simonpj committed Feb 28, 2001 17 18 19 20 21 ) where #include "HsVersions.h" import CoreSyn simonpj@microsoft.com committed Feb 09, 2007 22 23 import CoreSubst import CoreUtils ian@well-typed.com committed Nov 02, 2012 24 25 import CoreUnfold ( couldBeSmallEnoughToInline ) import CoreFVs ( exprsFreeVars ) rl@cse.unsw.edu.au committed Oct 29, 2009 26 import CoreMonad ian@well-typed.com committed Nov 02, 2012 27 import Literal ( litIsLifted ) rl@cse.unsw.edu.au committed Oct 29, 2009 28 import HscTypes ( ModGuts(..) ) ian@well-typed.com committed Nov 02, 2012 29 import WwLib ( mkWorkerArgs ) simonpj@microsoft.com committed Oct 18, 2010 30 import DataCon ian@well-typed.com committed Nov 02, 2012 31 import Coercion hiding( substTy, substCo ) simonpj@microsoft.com committed Aug 21, 2008 32 import Rules ian@well-typed.com committed Nov 02, 2012 33 import Type hiding ( substTy ) simonpj@microsoft.com committed Oct 02, 2008 34 import Id ian@well-typed.com committed Nov 02, 2012 35 import MkCore ( mkImpossibleExpr ) simonpj@microsoft.com committed Sep 29, 2007 36 import Var simonpj committed Feb 28, 2001 37 38 import VarEnv import VarSet Simon Marlow committed May 11, 2007 39 import Name simonpj@microsoft.com committed May 25, 2010 40 import BasicTypes ian@well-typed.com committed Nov 02, 2012 41 42 43 import DynFlags ( DynFlags(..) ) import StaticFlags ( opt_PprStyle_Debug ) import Maybes ( orElse, catMaybes, isJust, isNothing ) simonpj@microsoft.com committed Nov 19, 2009 44 import Demand ian@well-typed.com committed Nov 02, 2012 45 import DmdAnal ( both ) rl@cse.unsw.edu.au committed Oct 29, 2009 46 import Serialized ( deserializeWithData ) simonpj@microsoft.com committed Feb 09, 2007 47 import Util 48 import Pair simonpj committed Feb 28, 2001 49 50 import UniqSupply import Outputable simonmar committed Apr 29, 2002 51 import FastString simonpj@microsoft.com committed Aug 15, 2006 52 import UniqFM Ian Lynagh committed Jan 24, 2008 53 import MonadUtils ian@well-typed.com committed Nov 02, 2012 54 import Control.Monad ( zipWithM ) Ian Lynagh committed Jul 24, 2009 55 import Data.List simonpj@microsoft.com committed Oct 18, 2010 56 57 58 59 60 61 62 63 64 -- See Note [SpecConstrAnnotation] #ifndef GHCI type SpecConstrAnnotation = () #else import TyCon ( TyCon ) import GHC.Exts( SpecConstrAnnotation(..) ) #endif simonpj committed Feb 28, 2001 65 66 67 \end{code} ----------------------------------------------------- ian@well-typed.com committed Nov 02, 2012 68 Game plan simonpj committed Feb 28, 2001 69 70 71 ----------------------------------------------------- Consider ian@well-typed.com committed Nov 02, 2012 72 73 74 drop n [] = [] drop 0 xs = [] drop n (x:xs) = drop (n-1) xs simonpj committed Feb 28, 2001 75 76 77 78 After the first time round, we could pass n unboxed. This happens in numerical code too. Here's what it looks like in Core: ian@well-typed.com committed Nov 02, 2012 79 80 81 82 83 84 drop n xs = case xs of [] -> [] (y:ys) -> case n of I# n# -> case n# of 0 -> [] _ -> drop (I# (n# -# 1#)) xs simonpj committed Feb 28, 2001 85 86 87 88 Notice that the recursive call has an explicit constructor as argument. Noticing this, we can make a specialised version of drop ian@well-typed.com committed Nov 02, 2012 89 90 91 RULE: drop (I# n#) xs ==> drop' n# xs drop' n# xs = let n = I# n# in ...orig RHS... simonpj committed Feb 28, 2001 92 93 94 Now the simplifier will apply the specialisation in the rhs of drop', giving ian@well-typed.com committed Nov 02, 2012 95 96 97 98 99 drop' n# xs = case xs of [] -> [] (y:ys) -> case n# of 0 -> [] _ -> drop (n# -# 1#) xs simonpj committed Feb 28, 2001 100 ian@well-typed.com committed Nov 02, 2012 101 Much better! simonpj committed Feb 28, 2001 102 103 104 105 We'd also like to catch cases where a parameter is carried along unchanged, but evaluated each time round the loop: ian@well-typed.com committed Nov 02, 2012 106 f i n = if i>0 || i>n then i else f (i*2) n simonpj committed Feb 28, 2001 107 108 109 110 Here f isn't strict in n, but we'd like to avoid evaluating it each iteration. In Core, by the time we've w/wd (f is strict in i) we get ian@well-typed.com committed Nov 02, 2012 111 112 113 114 115 116 f i# n = case i# ># 0 of False -> I# i# True -> case n of n' { I# n# -> case i# ># n# of False -> I# i# True -> f (i# *# 2#) n' simonpj committed Feb 28, 2001 117 118 119 At the call to f, we see that the argument, n is know to be (I# n#), and n is evaluated elsewhere in the body of f, so we can play the same ian@well-typed.com committed Nov 02, 2012 120 trick as above. simonpj@microsoft.com committed Aug 10, 2006 121 122 123 124 125 Note [Reboxing] ~~~~~~~~~~~~~~~ We must be careful not to allocate the same constructor twice. Consider ian@well-typed.com committed Nov 02, 2012 126 127 f p = (...(case p of (a,b) -> e)...p..., ...let t = (r,s) in ...t...(f t)...) simonpj@microsoft.com committed Aug 10, 2006 128 129 At the recursive call to f, we can see that t is a pair. But we do NOT want to make a specialised copy: ian@well-typed.com committed Nov 02, 2012 130 f' a b = let p = (a,b) in (..., ...) simonpj@microsoft.com committed Aug 10, 2006 131 132 133 134 135 136 because now t is allocated by the caller, then r and s are passed to the recursive call, which allocates the (r,s) pair again. This happens if (a) the argument p is used in other than a case-scrutinsation way. (b) the argument to the call is not a 'fresh' tuple; you have to ian@well-typed.com committed Nov 02, 2012 137 look into its unfolding to see that it's a tuple simonpj@microsoft.com committed Aug 10, 2006 138 139 140 Hence the "OR" part of Note [Good arguments] below. simonpj@microsoft.com committed Nov 29, 2006 141 ALTERNATIVE 2: pass both boxed and unboxed versions. This no longer saves simonpj@microsoft.com committed Aug 10, 2006 142 143 144 145 146 147 148 149 allocation, but does perhaps save evals. In the RULE we'd have something like f (I# x#) = f' (I# x#) x# If at the call site the (I# x) was an unfolding, then we'd have to rely on CSE to eliminate the duplicate allocation.... This alternative doesn't look attractive enough to pursue. simonpj committed Feb 28, 2001 150 ian@well-typed.com committed Nov 02, 2012 151 ALTERNATIVE 3: ignore the reboxing problem. The trouble is that simonpj@microsoft.com committed Nov 29, 2006 152 153 the conservative reboxing story prevents many useful functions from being specialised. Example: ian@well-typed.com committed Nov 02, 2012 154 155 156 foo :: Maybe Int -> Int -> Int foo (Just m) 0 = 0 foo x@(Just m) n = foo x (n-m) simonpj@microsoft.com committed Nov 29, 2006 157 158 159 Here the use of 'x' will clearly not require boxing in the specialised function. The strictness analyser has the same problem, in fact. Example: ian@well-typed.com committed Nov 02, 2012 160 f p@(a,b) = ... simonpj@microsoft.com committed Nov 29, 2006 161 162 163 164 165 166 167 168 169 If we pass just 'a' and 'b' to the worker, it might need to rebox the pair to create (a,b). A more sophisticated analysis might figure out precisely the cases in which this could happen, but the strictness analyser does no such analysis; it just passes 'a' and 'b', and hopes for the best. So my current choice is to make SpecConstr similarly aggressive, and ignore the bad potential of reboxing. simonpj committed Feb 28, 2001 170 simonpj@microsoft.com committed Jun 27, 2006 171 172 Note [Good arguments] ~~~~~~~~~~~~~~~~~~~~~ simonpj committed Feb 28, 2001 173 174 So we look for ian@well-typed.com committed Nov 02, 2012 175 * A self-recursive function. Ignore mutual recursion for now, simonpj committed Feb 28, 2001 176 177 178 179 because it's less common, and the code is simpler for self-recursion. * EITHER ian@well-typed.com committed Nov 02, 2012 180 a) At a recursive call, one or more parameters is an explicit simonpj committed Feb 28, 2001 181 constructor application ian@well-typed.com committed Nov 02, 2012 182 183 AND That same parameter is scrutinised by a case somewhere in simonpj committed Feb 28, 2001 184 185 186 187 188 189 the RHS of the function OR b) At a recursive call, one or more parameters has an unfolding that is an explicit constructor application ian@well-typed.com committed Nov 02, 2012 190 191 AND That same parameter is scrutinised by a case somewhere in simonpj committed Feb 28, 2001 192 the RHS of the function ian@well-typed.com committed Nov 02, 2012 193 AND simonpj@microsoft.com committed Aug 10, 2006 194 Those are the only uses of the parameter (see Note [Reboxing]) simonpj committed Feb 28, 2001 195 196 simonpj@microsoft.com committed May 23, 2006 197 198 What to abstract over ~~~~~~~~~~~~~~~~~~~~~ simonpj committed Feb 28, 2001 199 200 201 There's a bit of a complication with type arguments. If the call site looks like ian@well-typed.com committed Nov 02, 2012 202 f p = ...f ((:) [a] x xs)... simonpj committed Feb 28, 2001 203 204 205 then our specialised function look like ian@well-typed.com committed Nov 02, 2012 206 f_spec x xs = let p = (:) [a] x xs in ....as before.... simonpj committed Feb 28, 2001 207 208 209 210 211 212 213 214 215 This only makes sense if either a) the type variable 'a' is in scope at the top of f, or b) the type variable 'a' is an argument to f (and hence fs) Actually, (a) may hold for value arguments too, in which case we may not want to pass them. Supose 'x' is in scope at f's defn, but xs is not. Then we'd like ian@well-typed.com committed Nov 02, 2012 216 f_spec xs = let p = (:) [a] x xs in ....as before.... simonpj committed Feb 28, 2001 217 218 219 220 221 222 223 Similarly (b) may hold too. If x is already an argument at the call, no need to pass it again. Finally, if 'a' is not in scope at the call site, we could abstract it as we do the term variables: ian@well-typed.com committed Nov 02, 2012 224 f_spec a x xs = let p = (:) [a] x xs in ...as before... simonpj committed Feb 28, 2001 225 226 227 So the grand plan is: ian@well-typed.com committed Nov 02, 2012 228 229 * abstract the call site to a constructor-only pattern e.g. C x (D (f p) (g q)) ==> C s1 (D s2 s3) simonpj committed Feb 28, 2001 230 ian@well-typed.com committed Nov 02, 2012 231 * Find the free variables of the abstracted pattern simonpj committed Feb 28, 2001 232 ian@well-typed.com committed Nov 02, 2012 233 234 * Pass these variables, less any that are in scope at the fn defn. But see Note [Shadowing] below. simonpj committed Feb 28, 2001 235 236 237 238 239 240 241 NOTICE that we only abstract over variables that are not in scope, so we're in no danger of shadowing variables used in "higher up" in f_spec's RHS. simonpj@microsoft.com committed May 23, 2006 242 243 244 245 246 247 248 Note [Shadowing] ~~~~~~~~~~~~~~~~ In this pass we gather up usage information that may mention variables that are bound between the usage site and the definition site; or (more seriously) may be bound to something different at the definition site. For example: ian@well-typed.com committed Nov 02, 2012 249 250 f x = letrec g y v = let x = ... in ...(g (a,b) x)... simonpj@microsoft.com committed May 23, 2006 251 ian@well-typed.com committed Nov 02, 2012 252 Since 'x' is in scope at the call site, we may make a rewrite rule that simonpj@microsoft.com committed May 23, 2006 253 looks like ian@well-typed.com committed Nov 02, 2012 254 255 RULE forall a,b. g (a,b) x = ... But this rule will never match, because it's really a different 'x' at simonpj@microsoft.com committed May 23, 2006 256 257 258 259 260 261 262 263 264 the call site -- and that difference will be manifest by the time the simplifier gets to it. [A worry: the simplifier doesn't *guarantee* no-shadowing, so perhaps it may not be distinct?] Anyway, the rule isn't actually wrong, it's just not useful. One possibility is to run deShadowBinds before running SpecConstr, but instead we run the simplifier. That gives the simplest possible program for SpecConstr to chew on; and it virtually guarantees no shadowing. simonpj@microsoft.com committed Aug 15, 2006 265 266 Note [Specialising for constant parameters] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ simonpj@microsoft.com committed Aug 10, 2006 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 This one is about specialising on a *constant* (but not necessarily constructor) argument foo :: Int -> (Int -> Int) -> Int foo 0 f = 0 foo m f = foo (f m) (+1) It produces lvl_rmV :: GHC.Base.Int -> GHC.Base.Int lvl_rmV = \ (ds_dlk :: GHC.Base.Int) -> case ds_dlk of wild_alH { GHC.Base.I# x_alG -> GHC.Base.I# (GHC.Prim.+# x_alG 1) T.$wfoo :: GHC.Prim.Int# -> (GHC.Base.Int -> GHC.Base.Int) -> GHC.Prim.Int# T.$wfoo = \ (ww_sme :: GHC.Prim.Int#) (w_smg :: GHC.Base.Int -> GHC.Base.Int) -> case ww_sme of ds_Xlw { __DEFAULT -> ian@well-typed.com committed Nov 02, 2012 288 289 290 case w_smg (GHC.Base.I# ds_Xlw) of w1_Xmo { GHC.Base.I# ww1_Xmz -> T.$wfoo ww1_Xmz lvl_rmV }; simonpj@microsoft.com committed Aug 10, 2006 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 0 -> 0 } The recursive call has lvl_rmV as its argument, so we could create a specialised copy with that argument baked in; that is, not passed at all. Now it can perhaps be inlined. When is this worth it? Call the constant 'lvl' - If 'lvl' has an unfolding that is a constructor, see if the corresponding parameter is scrutinised anywhere in the body. - If 'lvl' has an unfolding that is a inlinable function, see if the corresponding parameter is applied (...to enough arguments...?) Also do this is if the function has RULES? ian@well-typed.com committed Nov 02, 2012 306 Also simonpj@microsoft.com committed Aug 10, 2006 307 simonpj@microsoft.com committed Aug 15, 2006 308 309 Note [Specialising for lambda parameters] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ simonpj@microsoft.com committed Aug 10, 2006 310 311 312 313 314 315 316 317 318 319 320 321 322 foo :: Int -> (Int -> Int) -> Int foo 0 f = 0 foo m f = foo (f m) (\n -> n-m) This is subtly different from the previous one in that we get an explicit lambda as the argument: T.$wfoo :: GHC.Prim.Int# -> (GHC.Base.Int -> GHC.Base.Int) -> GHC.Prim.Int# T.$wfoo = \ (ww_sm8 :: GHC.Prim.Int#) (w_sma :: GHC.Base.Int -> GHC.Base.Int) -> case ww_sm8 of ds_Xlr { __DEFAULT -> ian@well-typed.com committed Nov 02, 2012 323 324 325 326 327 328 329 330 case w_sma (GHC.Base.I# ds_Xlr) of w1_Xmf { GHC.Base.I# ww1_Xmq -> T.$wfoo ww1_Xmq (\ (n_ad3 :: GHC.Base.Int) -> case n_ad3 of wild_alB { GHC.Base.I# x_alA -> GHC.Base.I# (GHC.Prim.-# x_alA ds_Xlr) }) }; simonpj@microsoft.com committed Aug 10, 2006 331 332 333 334 335 336 337 338 339 340 341 342 343 0 -> 0 } I wonder if SpecConstr couldn't be extended to handle this? After all, lambda is a sort of constructor for functions and perhaps it already has most of the necessary machinery? Furthermore, there's an immediate win, because you don't need to allocate the lamda at the call site; and if perchance it's called in the recursive call, then you may avoid allocating it altogether. Just like for constructors. Looks cool, but probably rare...but it might be easy to implement. simonpj@microsoft.com committed Oct 05, 2006 344 345 346 Note [SpecConstr for casts] ~~~~~~~~~~~~~~~~~~~~~~~~~~~ ian@well-typed.com committed Nov 02, 2012 347 Consider simonpj@microsoft.com committed Oct 05, 2006 348 349 350 351 352 353 354 355 data family T a :: * data instance T Int = T Int foo n = ... where go (T 0) = 0 go (T n) = go (T (n-1)) ian@well-typed.com committed Nov 02, 2012 356 357 The recursive call ends up looking like go (T (I# ...) cast g) simonpj@microsoft.com committed Oct 05, 2006 358 359 360 So we want to spot the construtor application inside the cast. That's why we have the Cast case in argToPat 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 Note [Local recursive groups] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For a *local* recursive group, we can see all the calls to the function, so we seed the specialisation loop from the calls in the body, not from the calls in the RHS. Consider: bar m n = foo n (n,n) (n,n) (n,n) (n,n) where foo n p q r s | n == 0 = m | n > 3000 = case p of { (p1,p2) -> foo (n-1) (p2,p1) q r s } | n > 2000 = case q of { (q1,q2) -> foo (n-1) p (q2,q1) r s } | n > 1000 = case r of { (r1,r2) -> foo (n-1) p q (r2,r1) s } | otherwise = case s of { (s1,s2) -> foo (n-1) p q r (s2,s1) } If we start with the RHSs of 'foo', we get lots and lots of specialisations, most of which are not needed. But if we start with the (single) call in the rhs of 'bar' we get exactly one fully-specialised copy, and all the recursive calls go to this fully-specialised copy. Indeed, the original ian@well-typed.com committed Nov 02, 2012 380 function is later collected as dead code. This is very important in 381 382 383 specialising the loops arising from stream fusion, for example in NDP where we were getting literally hundreds of (mostly unused) specialisations of a local function. simonpj@microsoft.com committed Oct 05, 2006 384 simonpj@microsoft.com committed Feb 07, 2011 385 386 387 388 389 390 391 392 In a case like the above we end up never calling the original un-specialised function. (Although we still leave its code around just in case.) However, if we find any boring calls in the body, including *unsaturated* ones, such as letrec foo x y = ....foo... in map foo xs then we will end up calling the un-specialised function, so then we *should* ian@well-typed.com committed Nov 02, 2012 393 use the calls in the un-specialised RHS as seeds. We call these "boring simonpj@microsoft.com committed Feb 07, 2011 394 395 396 call patterns, and callsToPats reports if it finds any of these. simonpj@microsoft.com committed Jan 13, 2009 397 398 399 400 401 402 403 404 405 406 407 408 409 Note [Do not specialise diverging functions] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Specialising a function that just diverges is a waste of code. Furthermore, it broke GHC (simpl014) thus: {-# STR Sb #-} f = \x. case x of (a,b) -> f x If we specialise f we get f = \x. case x of (a,b) -> fspec a b But fspec doesn't have decent strictnes info. As it happened, (f x) :: IO t, so the state hack applied and we eta expanded fspec, and hence f. But now f's strictness is less than its arity, which breaks an invariant. simonpj@microsoft.com committed Oct 18, 2010 410 411 412 413 414 415 416 417 418 419 420 Note [SpecConstrAnnotation] ~~~~~~~~~~~~~~~~~~~~~~~~~~~ SpecConstrAnnotation is defined in GHC.Exts, and is only guaranteed to be available in stage 2 (well, until the bootstrap compiler can be guaranteed to have it) So we define it to be () in stage1 (ie when GHCI is undefined), and '#ifdef' out the code that uses it. See also Note [Forcing specialisation] rl@cse.unsw.edu.au committed Feb 15, 2010 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 Note [Forcing specialisation] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ With stream fusion and in other similar cases, we want to fully specialise some (but not necessarily all!) loops regardless of their size and the number of specialisations. We allow a library to specify this by annotating a type with ForceSpecConstr and then adding a parameter of that type to the loop. Here is a (simplified) example from the vector library: data SPEC = SPEC | SPEC2 {-# ANN type SPEC ForceSpecConstr #-} foldl :: (a -> b -> a) -> a -> Stream b -> a {-# INLINE foldl #-} foldl f z (Stream step s _) = foldl_loop SPEC z s where simonpj@microsoft.com committed Nov 17, 2010 436 437 438 foldl_loop !sPEC z s = case step s of Yield x s' -> foldl_loop sPEC (f z x) s' Skip -> foldl_loop sPEC z s' rl@cse.unsw.edu.au committed Feb 15, 2010 439 440 441 Done -> z SpecConstr will spot the SPEC parameter and always fully specialise simonpj@microsoft.com committed Nov 17, 2010 442 443 444 445 446 447 448 449 450 foldl_loop. Note that * We have to prevent the SPEC argument from being removed by w/w which is why (a) SPEC is a sum type, and (b) we have to seq on the SPEC argument. * And lastly, the SPEC argument is ultimately eliminated by SpecConstr itself so there is no runtime overhead. simonpj@microsoft.com committed Nov 19, 2010 451 This is all quite ugly; we ought to come up with a better design. rl@cse.unsw.edu.au committed Feb 15, 2010 452 453 ForceSpecConstr arguments are spotted in scExpr' and scTopBinds which then set simonpj@microsoft.com committed Nov 19, 2010 454 455 456 457 458 459 460 sc_force to True when calling specLoop. This flag does three things: * Ignore specConstrThreshold, to specialise functions of arbitrary size (see scTopBind) * Ignore specConstrCount, to make arbitrary numbers of specialisations (see specialise) * Specialise even for arguments that are not scrutinised in the loop (see argToPat; Trac #4488) rl@cse.unsw.edu.au committed Feb 15, 2010 461 rl@cse.unsw.edu.au committed Nov 27, 2010 462 463 464 465 This flag is inherited for nested non-recursive bindings (which are likely to be join points and hence should be fully specialised) but reset for nested recursive bindings. simonpj@microsoft.com committed Nov 17, 2010 466 467 468 469 470 471 What alternatives did I consider? Annotating the loop itself doesn't work because (a) it is local and (b) it will be w/w'ed and I having w/w propagating annotation somehow doesn't seem like a good idea. The types of the loop arguments really seem to be the most persistent thing. simonpj@microsoft.com committed Nov 19, 2010 472 Annotating the types that make up the loop state doesn't work, simonpj@microsoft.com committed Nov 17, 2010 473 474 475 476 477 478 either, because (a) it would prevent us from using types like Either or tuples here, (b) we don't want to restrict the set of types that can be used in Stream states and (c) some types are fixed by the user (e.g., the accumulator here) but we still want to specialise as much as possible. simonpj@microsoft.com committed Nov 19, 2010 479 480 481 482 483 484 ForceSpecConstr is done by way of an annotation: data SPEC = SPEC | SPEC2 {-# ANN type SPEC ForceSpecConstr #-} But SPEC is the *only* type so annotated, so it'd be better to use a particular library type. simonpj@microsoft.com committed Nov 18, 2010 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 Alternatives to ForceSpecConstr ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Instead of giving the loop an extra argument of type SPEC, we also considered *wrapping* arguments in SPEC, thus data SPEC a = SPEC a | SPEC2 loop = \arg -> case arg of SPEC state -> case state of (x,y) -> ... loop (SPEC (x',y')) ... S2 -> error ... The idea is that a SPEC argument says "specialise this argument regardless of whether the function case-analyses it. But this doesn't work well: * SPEC must still be a sum type, else the strictness analyser eliminates it * But that means that 'loop' won't be strict in its real payload This loss of strictness in turn screws up specialisation, because we may end up with calls like loop (SPEC (case z of (p,q) -> (q,p))) Without the SPEC, if 'loop' was strict, the case would move out and we'd see loop applied to a pair. But if 'loop' isn' strict this doesn't look like a specialisable call. simonpj@microsoft.com committed Nov 19, 2010 508 509 Note [NoSpecConstr] ~~~~~~~~~~~~~~~~~~~ simonpj@microsoft.com committed Feb 01, 2011 510 The ignoreDataCon stuff allows you to say simonpj@microsoft.com committed Nov 19, 2010 511 512 513 514 515 516 517 {-# ANN type T NoSpecConstr #-} to mean "don't specialise on arguments of this type. It was added before we had ForceSpecConstr. Lacking ForceSpecConstr we specialised regardless of size; and then we needed a way to turn that *off*. Now that we have ForceSpecConstr, this NoSpecConstr is probably redundant. (Used only for PArray.) simonpj@microsoft.com committed Aug 15, 2006 518 ----------------------------------------------------- ian@well-typed.com committed Nov 02, 2012 519 Stuff not yet handled simonpj@microsoft.com committed Aug 15, 2006 520 521 522 523 ----------------------------------------------------- Here are notes arising from Roman's work that I don't want to lose. simonpj@microsoft.com committed Jun 27, 2006 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 Example 1 ~~~~~~~~~ data T a = T !a foo :: Int -> T Int -> Int foo 0 t = 0 foo x t | even x = case t of { T n -> foo (x-n) t } | otherwise = foo (x-1) t SpecConstr does no specialisation, because the second recursive call looks like a boxed use of the argument. A pity. $wfoo_sFw :: GHC.Prim.Int# -> T.T GHC.Base.Int -> GHC.Prim.Int#$wfoo_sFw = \ (ww_sFo [Just L] :: GHC.Prim.Int#) (w_sFq [Just L] :: T.T GHC.Base.Int) -> ian@well-typed.com committed Nov 02, 2012 539 540 541 542 543 544 545 546 547 548 case ww_sFo of ds_Xw6 [Just L] { __DEFAULT -> case GHC.Prim.remInt# ds_Xw6 2 of wild1_aEF [Dead Just A] { __DEFAULT -> $wfoo_sFw (GHC.Prim.-# ds_Xw6 1) w_sFq; 0 -> case w_sFq of wild_Xy [Just L] { T.T n_ad5 [Just U(L)] -> case n_ad5 of wild1_aET [Just A] { GHC.Base.I# y_aES [Just L] ->$wfoo_sFw (GHC.Prim.-# ds_Xw6 y_aES) wild_Xy } } }; 0 -> 0 simonpj@microsoft.com committed Jun 27, 2006 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 Example 2 ~~~~~~~~~ data a :*: b = !a :*: !b data T a = T !a foo :: (Int :*: T Int) -> Int foo (0 :*: t) = 0 foo (x :*: t) | even x = case t of { T n -> foo ((x-n) :*: t) } | otherwise = foo ((x-1) :*: t) Very similar to the previous one, except that the parameters are now in a strict tuple. Before SpecConstr, we have $wfoo_sG3 :: GHC.Prim.Int# -> T.T GHC.Base.Int -> GHC.Prim.Int#$wfoo_sG3 = \ (ww_sFU [Just L] :: GHC.Prim.Int#) (ww_sFW [Just L] :: T.T GHC.Base.Int) -> case ww_sFU of ds_Xws [Just L] { __DEFAULT -> ian@well-typed.com committed Nov 02, 2012 569 570 571 572 573 574 575 576 577 578 case GHC.Prim.remInt# ds_Xws 2 of wild1_aEZ [Dead Just A] { __DEFAULT -> case ww_sFW of tpl_B2 [Just L] { T.T a_sFo [Just A] -> $wfoo_sG3 (GHC.Prim.-# ds_Xws 1) tpl_B2 --$wfoo1 }; 0 -> case ww_sFW of wild_XB [Just A] { T.T n_ad7 [Just S(L)] -> case n_ad7 of wild1_aFd [Just L] { GHC.Base.I# y_aFc [Just L] -> $wfoo_sG3 (GHC.Prim.-# ds_Xws y_aFc) wild_XB --$wfoo2 } } }; simonpj@microsoft.com committed Jun 27, 2006 579 580 581 582 0 -> 0 } We get two specialisations: "SC:$wfoo1" [0] __forall {a_sFB :: GHC.Base.Int sc_sGC :: GHC.Prim.Int#} ian@well-typed.com committed Nov 02, 2012 583 584 Foo.$wfoo sc_sGC (Foo.T @ GHC.Base.Int a_sFB) = Foo.$s$wfoo1 a_sFB sc_sGC ; simonpj@microsoft.com committed Jun 27, 2006 585 "SC:$wfoo2" [0] __forall {y_aFp :: GHC.Prim.Int# sc_sGC :: GHC.Prim.Int#} ian@well-typed.com committed Nov 02, 2012 586 587 Foo.$wfoo sc_sGC (Foo.T @ GHC.Base.Int (GHC.Base.I# y_aFp)) = Foo.$s$wfoo y_aFp sc_sGC ; simonpj@microsoft.com committed Jun 27, 2006 588 589 But perhaps the first one isn't good. After all, we know that tpl_B2 is simonpj@microsoft.com committed Aug 10, 2006 590 591 a T (I# x) really, because T is strict and Int has one constructor. (We can't unbox the strict fields, becuase T is polymorphic!) simonpj@microsoft.com committed Jun 27, 2006 592 simonpj committed Feb 28, 2001 593 %************************************************************************ ian@well-typed.com committed Nov 02, 2012 594 %* * simonpj committed Feb 28, 2001 595 \subsection{Top level wrapper stuff} ian@well-typed.com committed Nov 02, 2012 596 %* * simonpj committed Feb 28, 2001 597 598 599 %************************************************************************ \begin{code} rl@cse.unsw.edu.au committed Oct 29, 2009 600 601 602 603 604 specConstrProgram :: ModGuts -> CoreM ModGuts specConstrProgram guts = do dflags <- getDynFlags us <- getUniqueSupplyM rl@cse.unsw.edu.au committed Dec 04, 2009 605 annos <- getFirstAnnotations deserializeWithData guts rl@cse.unsw.edu.au committed Oct 29, 2009 606 607 let binds' = fst $initUs us (go (initScEnv dflags annos) (mg_binds guts)) return (guts { mg_binds = binds' }) simonpj committed Feb 28, 2001 608 where ian@well-typed.com committed Nov 02, 2012 609 go _ [] = return [] 610 go env (bind:binds) = do (env', bind') <- scTopBind env bind twanvl committed Jan 17, 2008 611 612 binds' <- go env' binds return (bind' : binds') simonpj committed Feb 28, 2001 613 614 615 616 \end{code} %************************************************************************ ian@well-typed.com committed Nov 02, 2012 617 %* * simonpj committed Mar 05, 2001 618 \subsection{Environment: goes downwards} ian@well-typed.com committed Nov 02, 2012 619 %* * simonpj committed Feb 28, 2001 620 621 622 %************************************************************************ \begin{code} Ian Lynagh committed Jun 12, 2012 623 data ScEnv = SCE { sc_dflags :: DynFlags, ian@well-typed.com committed Nov 02, 2012 624 625 626 sc_size :: Maybe Int, -- Size threshold sc_count :: Maybe Int, -- Max # of specialisations for any one fn -- See Note [Avoiding exponential blowup] 627 628 sc_force :: Bool, -- Force specialisation? -- See Note [Forcing specialisation] simonpj committed Feb 28, 2001 629 ian@well-typed.com committed Nov 02, 2012 630 631 sc_subst :: Subst, -- Current substitution -- Maps InIds to OutExprs simonpj@microsoft.com committed Feb 09, 2007 632 ian@well-typed.com committed Nov 02, 2012 633 634 635 sc_how_bound :: HowBoundEnv, -- Binds interesting non-top-level variables -- Domain is OutVars (*after* applying the substitution) simonpj@microsoft.com committed Feb 09, 2007 636 ian@well-typed.com committed Nov 02, 2012 637 638 639 sc_vals :: ValueEnv, -- Domain is OutIds (*after* applying the substitution) -- Used even for top-level bindings (but not imported ones) rl@cse.unsw.edu.au committed Oct 29, 2009 640 Ian Lynagh committed Mar 20, 2010 641 sc_annotations :: UniqFM SpecConstrAnnotation ian@well-typed.com committed Nov 02, 2012 642 } simonpj committed Mar 05, 2001 643 simonpj@microsoft.com committed May 10, 2007 644 645 --------------------- -- As we go, we apply a substitution (sc_subst) to the current term ian@well-typed.com committed Nov 02, 2012 646 type InExpr = CoreExpr -- _Before_ applying the subst simonpj@microsoft.com committed Feb 01, 2010 647 type InVar = Var simonpj@microsoft.com committed May 10, 2007 648 ian@well-typed.com committed Nov 02, 2012 649 type OutExpr = CoreExpr -- _After_ applying the subst simonpj@microsoft.com committed May 10, 2007 650 651 652 653 type OutId = Id type OutVar = Var --------------------- ian@well-typed.com committed Nov 02, 2012 654 type HowBoundEnv = VarEnv HowBound -- Domain is OutVars simonpj@microsoft.com committed Aug 15, 2006 655 simonpj@microsoft.com committed May 10, 2007 656 --------------------- ian@well-typed.com committed Nov 02, 2012 657 658 659 660 type ValueEnv = IdEnv Value -- Domain is OutIds data Value = ConVal AltCon [CoreArg] -- _Saturated_ constructors -- The AltCon is never DEFAULT | LambdaVal -- Inlinable lambdas or PAPs simonpj committed Mar 05, 2001 661 simonpj@microsoft.com committed Aug 05, 2007 662 663 instance Outputable Value where ppr (ConVal con args) = ppr con <+> interpp'SP args ian@well-typed.com committed Nov 02, 2012 664 ppr LambdaVal = ptext (sLit "") simonpj@microsoft.com committed May 04, 2006 665 simonpj@microsoft.com committed May 10, 2007 666 --------------------- Ian Lynagh committed Mar 20, 2010 667 initScEnv :: DynFlags -> UniqFM SpecConstrAnnotation -> ScEnv rl@cse.unsw.edu.au committed Dec 04, 2009 668 initScEnv dflags anns Ian Lynagh committed Jun 12, 2012 669 670 = SCE { sc_dflags = dflags, sc_size = specConstrThreshold dflags, ian@well-typed.com committed Nov 02, 2012 671 sc_count = specConstrCount dflags, 672 sc_force = False, ian@well-typed.com committed Nov 02, 2012 673 674 675 sc_subst = emptySubst, sc_how_bound = emptyVarEnv, sc_vals = emptyVarEnv, rl@cse.unsw.edu.au committed Dec 04, 2009 676 sc_annotations = anns } simonpj committed Feb 28, 2001 677 ian@well-typed.com committed Nov 02, 2012 678 679 data HowBound = RecFun -- These are the recursive functions for which -- we seek interesting call patterns simonpj committed Mar 01, 2001 680 ian@well-typed.com committed Nov 02, 2012 681 682 | RecArg -- These are those functions' arguments, or their sub-components; -- we gather occurrence information for these simonpj committed Mar 01, 2001 683 simonpj committed Aug 24, 2001 684 685 686 687 instance Outputable HowBound where ppr RecFun = text "RecFun" ppr RecArg = text "RecArg" 688 689 690 scForce :: ScEnv -> Bool -> ScEnv scForce env b = env { sc_force = b } simonpj@microsoft.com committed Feb 09, 2007 691 692 693 694 lookupHowBound :: ScEnv -> Id -> Maybe HowBound lookupHowBound env id = lookupVarEnv (sc_how_bound env) id scSubstId :: ScEnv -> Id -> CoreExpr simonpj@microsoft.com committed Dec 24, 2009 695 scSubstId env v = lookupIdSubst (text "scSubstId") (sc_subst env) v simonpj@microsoft.com committed Feb 09, 2007 696 697 698 699 scSubstTy :: ScEnv -> Type -> Type scSubstTy env ty = substTy (sc_subst env) ty 700 701 702 scSubstCo :: ScEnv -> Coercion -> Coercion scSubstCo env co = substCo (sc_subst env) co simonpj@microsoft.com committed Feb 09, 2007 703 704 zapScSubst :: ScEnv -> ScEnv zapScSubst env = env { sc_subst = zapSubstEnv (sc_subst env) } simonpj committed Mar 05, 2001 705 simonpj@microsoft.com committed Feb 09, 2007 706 extendScInScope :: ScEnv -> [Var] -> ScEnv ian@well-typed.com committed Nov 02, 2012 707 -- Bring the quantified variables into scope simonpj@microsoft.com committed Feb 09, 2007 708 extendScInScope env qvars = env { sc_subst = extendInScopeList (sc_subst env) qvars } simonpj@microsoft.com committed Nov 29, 2006 709 ian@well-typed.com committed Nov 02, 2012 710 -- Extend the substitution simonpj@microsoft.com committed Jan 17, 2008 711 712 713 714 715 extendScSubst :: ScEnv -> Var -> OutExpr -> ScEnv extendScSubst env var expr = env { sc_subst = extendSubst (sc_subst env) var expr } extendScSubstList :: ScEnv -> [(Var,OutExpr)] -> ScEnv extendScSubstList env prs = env { sc_subst = extendSubstList (sc_subst env) prs } simonpj@microsoft.com committed Feb 09, 2007 716 717 718 719 extendHowBound :: ScEnv -> [Var] -> HowBound -> ScEnv extendHowBound env bndrs how_bound = env { sc_how_bound = extendVarEnvList (sc_how_bound env) ian@well-typed.com committed Nov 02, 2012 720 [(bndr,how_bound) | bndr <- bndrs] } simonpj@microsoft.com committed Feb 09, 2007 721 722 extendBndrsWith :: HowBound -> ScEnv -> [Var] -> (ScEnv, [Var]) ian@well-typed.com committed Nov 02, 2012 723 extendBndrsWith how_bound env bndrs simonpj@microsoft.com committed Feb 09, 2007 724 = (env { sc_subst = subst', sc_how_bound = hb_env' }, bndrs') simonpj@microsoft.com committed Aug 16, 2006 725 where simonpj@microsoft.com committed Feb 09, 2007 726 (subst', bndrs') = substBndrs (sc_subst env) bndrs ian@well-typed.com committed Nov 02, 2012 727 728 hb_env' = sc_how_bound env extendVarEnvList [(bndr,how_bound) | bndr <- bndrs'] simonpj@microsoft.com committed Feb 09, 2007 729 730 extendBndrWith :: HowBound -> ScEnv -> Var -> (ScEnv, Var) ian@well-typed.com committed Nov 02, 2012 731 extendBndrWith how_bound env bndr simonpj@microsoft.com committed Feb 09, 2007 732 = (env { sc_subst = subst', sc_how_bound = hb_env' }, bndr') simonpj@microsoft.com committed Mar 17, 2006 733 where simonpj@microsoft.com committed Feb 09, 2007 734 735 736 737 738 (subst', bndr') = substBndr (sc_subst env) bndr hb_env' = extendVarEnv (sc_how_bound env) bndr' how_bound extendRecBndrs :: ScEnv -> [Var] -> (ScEnv, [Var]) extendRecBndrs env bndrs = (env { sc_subst = subst' }, bndrs') ian@well-typed.com committed Nov 02, 2012 739 740 where (subst', bndrs') = substRecBndrs (sc_subst env) bndrs simonpj@microsoft.com committed Feb 09, 2007 741 742 743 extendBndr :: ScEnv -> Var -> (ScEnv, Var) extendBndr env bndr = (env { sc_subst = subst' }, bndr') ian@well-typed.com committed Nov 02, 2012 744 745 where (subst', bndr') = substBndr (sc_subst env) bndr simonpj@microsoft.com committed Feb 09, 2007 746 simonpj@microsoft.com committed Aug 05, 2007 747 extendValEnv :: ScEnv -> Id -> Maybe Value -> ScEnv simonpj@microsoft.com committed Jan 17, 2008 748 extendValEnv env _ Nothing = env simonpj@microsoft.com committed Aug 05, 2007 749 extendValEnv env id (Just cv) = env { sc_vals = extendVarEnv (sc_vals env) id cv } simonpj@microsoft.com committed Feb 09, 2007 750 simonpj@microsoft.com committed Jan 31, 2011 751 extendCaseBndrs :: ScEnv -> OutExpr -> OutId -> AltCon -> [Var] -> (ScEnv, [Var]) simonpj@microsoft.com committed Feb 09, 2007 752 -- When we encounter ian@well-typed.com committed Nov 02, 2012 753 754 -- case scrut of b -- C x y -> ... simonpj@microsoft.com committed Oct 02, 2008 755 756 757 758 -- we want to bind b, to (C x y) -- NB1: Extends only the sc_vals part of the envt -- NB2: Kill the dead-ness info on the pattern binders x,y, since -- they are potentially made alive by the [b -> C x y] binding simonpj@microsoft.com committed Jan 31, 2011 759 760 extendCaseBndrs env scrut case_bndr con alt_bndrs = (env2, alt_bndrs') simonpj@microsoft.com committed Feb 09, 2007 761 where simonpj@microsoft.com committed Jan 31, 2011 762 763 live_case_bndr = not (isDeadBinder case_bndr) env1 | Var v <- scrut = extendValEnv env v cval ian@well-typed.com committed Nov 02, 2012 764 | otherwise = env -- See Note [Add scrutinee to ValueEnv too] simonpj@microsoft.com committed Feb 03, 2011 765 env2 | live_case_bndr = extendValEnv env1 case_bndr cval simonpj@microsoft.com committed Jan 31, 2011 766 767 768 769 770 771 772 | otherwise = env1 alt_bndrs' | case scrut of { Var {} -> True; _ -> live_case_bndr } = map zap alt_bndrs | otherwise = alt_bndrs simonpj@microsoft.com committed Feb 09, 2007 773 cval = case con of ian@well-typed.com committed Nov 02, 2012 774 775 776 777 778 779 780 781 DEFAULT -> Nothing LitAlt {} -> Just (ConVal con []) DataAlt {} -> Just (ConVal con vanilla_args) where vanilla_args = map Type (tyConAppArgs (idType case_bndr)) ++ varsToCoreExprs alt_bndrs zap v | isTyVar v = v -- See NB2 above simonpj@microsoft.com committed Jan 31, 2011 782 783 | otherwise = zapIdOccInfo v rl@cse.unsw.edu.au committed Oct 29, 2009 784 simonpj@microsoft.com committed Oct 18, 2010 785 786 decreaseSpecCount :: ScEnv -> Int -> ScEnv -- See Note [Avoiding exponential blowup] ian@well-typed.com committed Nov 02, 2012 787 decreaseSpecCount env n_specs simonpj@microsoft.com committed Oct 18, 2010 788 789 790 = env { sc_count = case sc_count env of Nothing -> Nothing Just n -> Just (n div (n_specs + 1)) } ian@well-typed.com committed Nov 02, 2012 791 792 -- The "+1" takes account of the original function; -- See Note [Avoiding exponential blowup] simonpj@microsoft.com committed Oct 18, 2010 793 794 795 796 --------------------------------------------------- -- See Note [SpecConstrAnnotation] ignoreType :: ScEnv -> Type -> Bool simonpj@microsoft.com committed Feb 01, 2011 797 ignoreDataCon :: ScEnv -> DataCon -> Bool simonpj@microsoft.com committed Oct 18, 2010 798 799 800 forceSpecBndr :: ScEnv -> Var -> Bool #ifndef GHCI ignoreType _ _ = False simonpj@microsoft.com committed Feb 01, 2011 801 ignoreDataCon _ _ = False simonpj@microsoft.com committed Oct 18, 2010 802 803 804 805 forceSpecBndr _ _ = False #else /* GHCI */ simonpj@microsoft.com committed Feb 01, 2011 806 ignoreDataCon env dc = ignoreTyCon env (dataConTyCon dc) simonpj@microsoft.com committed Oct 18, 2010 807 rl@cse.unsw.edu.au committed Oct 29, 2009 808 ignoreType env ty Simon Peyton Jones committed Aug 03, 2011 809 810 811 = case tyConAppTyCon_maybe ty of Just tycon -> ignoreTyCon env tycon _ -> False rl@cse.unsw.edu.au committed Oct 29, 2009 812 simonpj@microsoft.com committed Oct 18, 2010 813 814 815 ignoreTyCon :: ScEnv -> TyCon -> Bool ignoreTyCon env tycon = lookupUFM (sc_annotations env) tycon == Just NoSpecConstr rl@cse.unsw.edu.au committed Dec 03, 2009 816 rl@cse.unsw.edu.au committed Feb 15, 2010 817 forceSpecBndr env var = forceSpecFunTy env . snd . splitForAllTys . varType$ var rl@cse.unsw.edu.au committed Dec 03, 2009 818 819 820 821 822 823 824 825 826 827 828 forceSpecFunTy :: ScEnv -> Type -> Bool forceSpecFunTy env = any (forceSpecArgTy env) . fst . splitFunTys forceSpecArgTy :: ScEnv -> Type -> Bool forceSpecArgTy env ty | Just ty' <- coreView ty = forceSpecArgTy env ty' forceSpecArgTy env ty | Just (tycon, tys) <- splitTyConApp_maybe ty , tycon /= funTyCon Ian Lynagh committed Mar 20, 2010 829 = lookupUFM (sc_annotations env) tycon == Just ForceSpecConstr rl@cse.unsw.edu.au committed Dec 03, 2009 830 831 832 || any (forceSpecArgTy env) tys forceSpecArgTy _ _ = False simonpj@microsoft.com committed Oct 18, 2010 833 #endif /* GHCI */ simonpj committed Mar 05, 2001 834 835 \end{code} simonpj@microsoft.com committed Jan 31, 2011 836 837 838 839 840 841 842 843 844 845 846 847 Note [Add scrutinee to ValueEnv too] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Consider this: case x of y (a,b) -> case b of c I# v -> ...(f y)... By the time we get to the call (f y), the ValueEnv will have a binding for y, and for c y -> (a,b) c -> I# v BUT that's not enough! Looking at the call (f y) we see that y is pair (a,b), but we also need to know what 'b' is. ian@well-typed.com committed Nov 02, 2012 848 So in extendCaseBndrs we must *also* add the binding simonpj@microsoft.com committed Jan 31, 2011 849 850 851 852 853 854 b -> I# v else we lose a useful specialisation for f. This is necessary even though the simplifier has systematically replaced uses of 'x' with 'y' and 'b' with 'c' in the code. The use of 'b' in the ValueEnv came from outside the case. See Trac #4908 for the live example. simonpj@microsoft.com committed Feb 01, 2010 855 856 857 858 859 860 Note [Avoiding exponential blowup] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The sc_count field of the ScEnv says how many times we are prepared to duplicate a single function. But we must take care with recursive specialiations. Consider ian@well-typed.com committed Nov 02, 2012 861 862 let $j1 = let$j2 = let $j3 = ... in simonpj@microsoft.com committed Feb 01, 2010 863 ...$j3... ian@well-typed.com committed Nov 02, 2012 864 in simonpj@microsoft.com committed Feb 01, 2010 865 ...$j2... ian@well-typed.com committed Nov 02, 2012 866 in simonpj@microsoft.com committed Feb 01, 2010 867 868 869 870 871 872 873 874 875 876 ...$j1... If we specialise $j1 then in each specialisation (as well as the original) we can specialise$j2, and similarly $j3. Even if we make just *one* specialisation of each, becuase we also have the original we'll get 2^n copies of$j3, which is not good. So when recursively specialising we divide the sc_count by the number of copies we are making at this level, including the original. simonpj committed Mar 05, 2001 877 878 %************************************************************************ ian@well-typed.com committed Nov 02, 2012 879 %* * simonpj committed Mar 05, 2001 880 \subsection{Usage information: flows upwards} ian@well-typed.com committed Nov 02, 2012 881 %* * simonpj committed Mar 05, 2001 882 %************************************************************************ simonpj committed Feb 28, 2001 883 simonpj committed Mar 05, 2001 884 \begin{code} simonpj committed Feb 28, 2001 885 886 data ScUsage = SCU { ian@well-typed.com committed Nov 02, 2012 887 888 889 scu_calls :: CallEnv, -- Calls -- The functions are a subset of the -- RecFuns in the ScEnv simonpj committed Feb 28, 2001 890 ian@well-typed.com committed Nov 02, 2012 891 892 scu_occs :: !(IdEnv ArgOcc) -- Information on argument occurrences } -- The domain is OutIds simonpj committed Feb 28, 2001 893 simonpj@microsoft.com committed Feb 09, 2007 894 type CallEnv = IdEnv [Call] simonpj@microsoft.com committed Aug 05, 2007 895 type Call = (ValueEnv, [CoreArg]) ian@well-typed.com committed Nov 02, 2012 896 897 -- The arguments of the call, together with the -- env giving the constructor bindings at the call site simonpj committed Mar 05, 2001 898 simonpj@microsoft.com committed Jan 17, 2008 899 900 nullUsage :: ScUsage nullUsage = SCU { scu_calls = emptyVarEnv, scu_occs = emptyVarEnv } simonpj committed Feb 28, 2001 901 simonpj@microsoft.com committed Feb 09, 2007 902 903 904 combineCalls :: CallEnv -> CallEnv -> CallEnv combineCalls = plusVarEnv_C (++) simonpj@microsoft.com committed Jan 17, 2008 905 906 combineUsage :: ScUsage -> ScUsage -> ScUsage combineUsage u1 u2 = SCU { scu_calls = combineCalls (scu_calls u1) (scu_calls u2), ian@well-typed.com committed Nov 02, 2012 907 scu_occs = plusVarEnv_C combineOcc (scu_occs u1) (scu_occs u2) } simonpj committed Feb 28, 2001 908 simonpj@microsoft.com committed Jan 17, 2008 909 combineUsages :: [ScUsage] -> ScUsage simonpj committed Feb 28, 2001 910 911 912 combineUsages [] = nullUsage combineUsages us = foldr1 combineUsage us simonpj@microsoft.com committed Jan 17, 2008 913 914 915 lookupOccs :: ScUsage -> [OutVar] -> (ScUsage, [ArgOcc]) lookupOccs (SCU { scu_calls = sc_calls, scu_occs = sc_occs }) bndrs = (SCU {scu_calls = sc_calls, scu_occs = delVarEnvList sc_occs bndrs}, simonpj@microsoft.com committed Aug 15, 2006 916 917 [lookupVarEnv sc_occs b orElse NoOcc | b <- bndrs]) ian@well-typed.com committed Nov 02, 2012 918 919 data ArgOcc = NoOcc -- Doesn't occur at all; or a type argument | UnkOcc -- Used in some unknown way simonpj@microsoft.com committed Aug 15, 2006 920 ian@well-typed.com committed Nov 02, 2012 921 | ScrutOcc -- See Note [ScrutOcc] simonpj@microsoft.com committed Feb 01, 2011 922 (DataConEnv [ArgOcc]) -- How the sub-components are used simonpj committed Feb 28, 2001 923 ian@well-typed.com committed Nov 02, 2012 924 type DataConEnv a = UniqFM a -- Keyed by DataCon simonpj@microsoft.com committed Aug 16, 2006 925 simonpj@microsoft.com committed Feb 01, 2011 926 927 {- Note [ScrutOcc] ~~~~~~~~~~~~~~~~~~~ simonpj@microsoft.com committed Oct 05, 2006 928 929 An occurrence of ScrutOcc indicates that the thing, or a cast version of the thing, is *only* taken apart or applied. simonpj@microsoft.com committed Aug 16, 2006 930 simonpj@microsoft.com committed Oct 05, 2006 931 Functions, literal: ScrutOcc emptyUFM simonpj@microsoft.com committed Aug 16, 2006 932 933 934 935 936 Data constructors: ScrutOcc subs, where (subs :: UniqFM [ArgOcc]) gives usage of the *pattern-bound* components, The domain of the UniqFM is the Unique of the data constructor ian@well-typed.com committed Nov 02, 2012 937 The [ArgOcc] is the occurrences of the *pattern-bound* components simonpj@microsoft.com committed Aug 16, 2006 938 of the data structure. E.g. ian@well-typed.com committed Nov 02, 2012 939 data T a = forall b. MkT a b (b->a) simonpj@microsoft.com committed Aug 16, 2006 940 941 942 A pattern binds b, x::a, y::b, z::b->a, but not 'a'! -} simonpj@microsoft.com committed Aug 15, 2006 943 944 instance Outputable ArgOcc where Ian Lynagh committed Apr 12, 2008 945 ppr (ScrutOcc xs) = ptext (sLit "scrut-occ") <> ppr xs ian@well-typed.com committed Nov 02, 2012 946 947 ppr UnkOcc = ptext (sLit "unk-occ") ppr NoOcc = ptext (sLit "no-occ") simonpj@microsoft.com committed Aug 15, 2006 948 simonpj@microsoft.com committed Feb 01, 2011 949 950 951 evalScrutOcc :: ArgOcc evalScrutOcc = ScrutOcc emptyUFM simonpj@microsoft.com committed Nov 29, 2006 952 -- Experimentally, this vesion of combineOcc makes ScrutOcc "win", so simonpj@microsoft.com committed Nov 24, 2006 953 954 -- that if the thing is scrutinised anywhere then we get to see that -- in the overall result, even if it's also used in a boxed way simonpj@microsoft.com committed Nov 29, 2006 955 -- This might be too agressive; see Note [Reboxing] Alternative 3 simonpj@microsoft.com committed Jan 17, 2008 956 combineOcc :: ArgOcc -> ArgOcc -> ArgOcc ian@well-typed.com committed Nov 02, 2012 957 958 combineOcc NoOcc occ = occ combineOcc occ NoOcc = occ simonpj@microsoft.com committed Aug 15, 2006 959 combineOcc (ScrutOcc xs) (ScrutOcc ys) = ScrutOcc (plusUFM_C combineOccs xs ys) simonpj@microsoft.com committed Feb 01, 2011 960 combineOcc UnkOcc (ScrutOcc ys) = ScrutOcc ys ian@well-typed.com committed Nov 02, 2012 961 combineOcc (ScrutOcc xs) UnkOcc = ScrutOcc xs simonpj@microsoft.com committed Aug 15, 2006 962 963 964 965 966 combineOcc UnkOcc UnkOcc = UnkOcc combineOccs :: [ArgOcc] -> [ArgOcc] -> [ArgOcc] combineOccs xs ys = zipWithEqual "combineOccs" combineOcc xs ys simonpj@microsoft.com committed Jan 17, 2008 967 setScrutOcc :: ScEnv -> ScUsage -> OutExpr -> ArgOcc -> ScUsage Thomas Schilling committed Jul 20, 2008 968 -- _Overwrite_ the occurrence info for the scrutinee, if the scrutinee simonpj@microsoft.com committed Feb 09, 2007 969 -- is a variable, and an interesting variable Simon Marlow committed Nov 02, 2011 970 971 setScrutOcc env usg (Cast e _) occ = setScrutOcc env usg e occ setScrutOcc env usg (Tick _ e) occ = setScrutOcc env usg e occ simonpj@microsoft.com committed Feb 09, 2007 972 setScrutOcc env usg (Var v) occ simonpj@microsoft.com committed Jan 17, 2008 973 | Just RecArg <- lookupHowBound env v = usg { scu_occs = extendVarEnv (scu_occs usg) v occ } ian@well-typed.com committed Nov 02, 2012 974 975 976 | otherwise = usg setScrutOcc _env usg _other _occ -- Catch-all = usg simonpj committed Feb 28, 2001 977 978 979 \end{code} %************************************************************************ ian@well-typed.com committed Nov 02, 2012 980 %* * simonpj committed Feb 28, 2001 981 \subsection{The main recursive function} ian@well-typed.com committed Nov 02, 2012 982 %* * simonpj committed Feb 28, 2001 983 984 %************************************************************************ simonpj committed Mar 05, 2001 985 986 987 The main recursive function gathers up usage information, and creates specialised versions of functions. simonpj committed Feb 28, 2001 988 \begin{code} simonpj@microsoft.com committed Jan 17, 2008 989 scExpr, scExpr' :: ScEnv -> CoreExpr -> UniqSM (ScUsage, CoreExpr) ian@well-typed.com committed Nov 02, 2012 990 991 -- The unique supply is needed when we invent -- a new name for the specialised function and its args simonpj committed Feb 28, 2001 992 simonpj@microsoft.com committed Feb 09, 2007 993 994 995 996 scExpr env e = scExpr' env e scExpr' env (Var v) = case scSubstId env v of ian@well-typed.com committed Nov 02, 2012 997 998 Var v' -> return (mkVarUsage env v' [], Var v') e' -> scExpr (zapScSubst env) e' simonpj@microsoft.com committed Feb 09, 2007 999 twanvl committed Jan 17, 2008 1000 scExpr' env (Type t) = return (nullUsage, Type (scSubstTy env t)) 1001 scExpr' env (Coercion c) = return (nullUsage, Coercion (scSubstCo env c)) twanvl committed Jan 17, 2008 1002 scExpr' _ e@(Lit {}) = return (nullUsage, e) Simon Marlow committed Nov 02, 2011 1003 1004 scExpr' env (Tick t e) = do (usg,e') <- scExpr env e return (usg, Tick t e') twanvl committed Jan 17, 2008 1005 scExpr' env (Cast e co) = do (usg, e') <- scExpr env e 1006 return (usg, Cast e' (scSubstCo env co)) simonpj@microsoft.com committed Jan 17, 2008 1007 scExpr' env e@(App _ _) = scApp env (collectArgs e) twanvl committed Jan 17, 2008 1008 1009 1010 scExpr' env (Lam b e) = do let (env', b') = extendBndr env b (usg, e') <- scExpr env' e return (usg, Lam b' e') simonpj@microsoft.com committed Feb 09, 2007 1011 ian@well-typed.com committed Nov 02, 2012 1012 1013 1014 1015 1016 1017 scExpr' env (Case scrut b ty alts) = do { (scrut_usg, scrut') <- scExpr env scrut ; case isValue (sc_vals env) scrut' of Just (ConVal con args) -> sc_con_app con args scrut' _other -> sc_vanilla scrut_usg scrut' } simonpj committed Feb 28, 2001 1018 where ian@well-typed.com committed Nov 02, 2012 1019 1020 1021 1022 1023 1024 1025 sc_con_app con args scrut' -- Known constructor; simplify = do { let (_, bs, rhs) = findAlt con alts orElse (DEFAULT, [], mkImpossibleExpr ty) alt_env' = extendScSubstList env ((b,scrut') : bs zip trimConArgs con args) ; scExpr alt_env' rhs } sc_vanilla scrut_usg scrut' -- Normal case simonpj@microsoft.com committed Feb 09, 2007 1026 = do { let (alt_env,b') = extendBndrWith RecArg env b ian@well-typed.com committed Nov 02, 2012 1027 -- Record RecArg for the components simonpj@microsoft.com committed Feb 09, 2007 1028 ian@well-typed.com committed Nov 02, 2012 1029 1030 ; (alt_usgs, alt_occs, alts') <- mapAndUnzip3M (sc_alt alt_env scrut' b') alts simonpj@microsoft.com committed Feb 09, 2007 1031 ian@well-typed.com committed Nov 02, 2012 1032 1033 1034 1035 1036 ; let scrut_occ = foldr combineOcc NoOcc alt_occs scrut_usg' = setScrutOcc env scrut_usg scrut' scrut_occ -- The combined usage of the scrutinee is given -- by scrut_occ, which is passed to scScrut, which -- in turn treats a bare-variable scrutinee specially simonpj@microsoft.com committed Feb 09, 2007 1037 ian@well-typed.com committed Nov 02, 2012 1038 1039 ; return (foldr combineUsage scrut_usg' alt_usgs, Case scrut' b' (scSubstTy env ty) alts') } simonpj@microsoft.com committed Feb 09, 2007 1040 simonpj@microsoft.com committed Jan 31, 2011 1041 1042 sc_alt env scrut' b' (con,bs,rhs) = do { let (env1, bs1) = extendBndrsWith RecArg env bs ian@well-typed.com committed Nov 02, 2012 1043 1044 1045 1046 1047 1048 1049 (env2, bs2) = extendCaseBndrs env1 scrut' b' con bs1 ; (usg, rhs') <- scExpr env2 rhs ; let (usg', b_occ:arg_occs) = lookupOccs usg (b':bs2) scrut_occ = case con of DataAlt dc -> ScrutOcc (unitUFM dc arg_occs) _ -> ScrutOcc emptyUFM ; return (usg', b_occ combineOcc scrut_occ, (con, bs2, rhs')) } simonpj@microsoft.com committed Feb 09, 2007 1050 1051 scExpr' env (Let (NonRec bndr rhs) body) ian@well-typed.com committed Nov 02, 2012 1052 | isTyVar bndr -- Type-lets may be created by doBeta simonpj@microsoft.com committed Jan 17, 2008 1053 = scExpr' (extendScSubst env bndr rhs) body simonpj@microsoft.com committed Feb 09, 2007 1054 ian@well-typed.com committed Nov 02, 2012 1055 1056 1057 | otherwise = do { let (body_env, bndr') = extendBndr env bndr ; (rhs_usg, rhs_info) <- scRecRhs env (bndr',rhs) simonpj@microsoft.com committed Feb 01, 2010 1058 ian@well-typed.com committed Nov 02, 2012 1059 1060 1061 ; let body_env2 = extendHowBound body_env [bndr'] RecFun -- Note [Local let bindings] RI _ rhs' _ _ _ = rhs_info simonpj@microsoft.com committed Oct 25, 2010 1062 1063 body_env3 = extendValEnv body_env2 bndr' (isValue (sc_vals env) rhs') ian@well-typed.com committed Nov 02, 2012 1064 ; (body_usg, body') <- scExpr body_env3 body simonpj@microsoft.com committed Oct 25, 2010 1065 rl@cse.unsw.edu.au committed Nov 27, 2010 1066 1067 -- NB: For non-recursive bindings we inherit sc_force flag from -- the parent function (see Note [Forcing specialisation]) ian@well-typed.com committed Nov 02, 2012 1068 1069 1070 ; (spec_usg, specs) <- specialise env (scu_calls body_usg) rhs_info simonpj@microsoft.com committed Feb 01, 2010 1071 (SI [] 0 (Just rhs_usg)) 1072 ian@well-typed.com committed Nov 02, 2012 1073 1074 1075 1076 ; return (body_usg { scu_calls = scu_calls body_usg delVarEnv bndr' } combineUsage rhs_usg combineUsage spec_usg, mkLets [NonRec b r | (b,r) <- specInfoBinds rhs_info specs] body') } 1077 simonpj@microsoft.com committed Feb 09, 2007 1078 1079 -- A *local* recursive group: see Note [Local recursive groups] simonpj@microsoft.com committed Feb 09, 2007 1080 scExpr' env (Let (Rec prs) body) ian@well-typed.com committed Nov 02, 2012 1081 1082 1083 = do { let (bndrs,rhss) = unzip prs (rhs_env1,bndrs') = extendRecBndrs env bndrs rhs_env2 = extendHowBound rhs_env1 bndrs' RecFun rl@cse.unsw.edu.au committed Dec 03, 2009 1084 force_spec = any (forceSpecBndr env) bndrs' rl@cse.unsw.edu.au committed Feb 15, 2010 1085 -- Note [Forcing specialisation] 1086 ian@well-typed.com committed Nov 02, 2012 1087 1088 ; (rhs_usgs, rhs_infos) <- mapAndUnzipM (scRecRhs rhs_env2) (bndrs' zip rhss) ; (body_usg, body') <- scExpr rhs_env2 body | 2021-01-19 06:34:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7788601517677307, "perplexity": 8079.39771300579}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517966.39/warc/CC-MAIN-20210119042046-20210119072046-00080.warc.gz"} |
https://codegolf.stackexchange.com/questions/11901/chess960-position-generator?noredirect=1 | Chess960 position generator
Context
Chess960 (or Fischer Random Chess) is a variant of chess invented and advocated by former World Chess Champion Bobby Fischer, publicly announced on June 19, 1996 in Buenos Aires, Argentina. It employs the same board and pieces as standard chess; however, the starting position of the pieces on the players' home ranks is randomized
Rules
• White pawns are placed on the second rank as in standard chess
• All remaining white pieces are placed randomly on the first rank
• The bishops must be placed on opposite-color squares
• The king must be placed on a square between the rooks.
• Black's pieces are placed equal-and-opposite to White's pieces.
For all the people that would like to post answers...
you have to make a Chess960 positions generator, capable of randomly generate one of the 960 positions following the rules described above (it has to be capable of outputting any of the 960, hardcoding one position is not accepted!), and you only need to output the white rank one pieces.
Example output:
rkrbnnbq
where:
• k king
• q queen
• b bishop
• n knight
• r rook
This will be code golf, and the tie breaker will be the upvotes.
• When you say that it has to be capable of outputting any of the 960 positions, do they have to be equiprobable? Jun 20, 2013 at 10:05
• Interesting, I haven't really thought of that... I mean ideally it should be, I think... The answers so far offer this quality, ...right ? Jun 20, 2013 at 14:08
• The two which are written in languages which have builtins that shuffle uniformly do; the two GolfScript ones are close but not quite uniform. Jun 20, 2013 at 14:15
• I would say that close is good enough Jun 20, 2013 at 14:25
• This question inspired me to ask codegolf.stackexchange.com/questions/12322/… Aug 17, 2013 at 11:26
Ruby 1.9, 67 65 characters
Ah, the old "keep randomizing until you generate something valid" technique...
$_=%w(r r n n b b q k).shuffle*''until/r.*k.*r/&&/b(..)*b/$><<$_ (In Ruby 2.0, %w(r r n n b b q k) could be 'rrnnbbqk'.chars) • In 1.9.3 you can spare the ~ with the cost of a warning, when available. pastebin.com/nuE9zWSw Jun 20, 2013 at 6:56 • @manatwork that's great, thanks! Jun 20, 2013 at 8:57 • the "keep randomizing until you generate something valid" technique is still much faster than the "shuffle the list of possibilities, filter and take first" technique that purely functional languages like APL tend to produce :-) Jun 20, 2013 at 9:56 • @Daniero that's definitely what the $_ variable is. It works because ruby has some neat methods such as Kernel#chop that work like the equivalent String#chop method but with $_ as their receiver. This saves a lot of time when (for example) you're writing a read/process/write loop using ruby -n or ruby -p. Jun 20, 2013 at 22:43 • @GigaWatt no. The former matches if there's an even number of characters between some two B's. The latter matches only if the B'S are at the ends. Jun 23, 2013 at 12:39 GolfScript 60 49 ;'qbbnnxxx'{{9rand*}$.'b'/1=,2%}do'x'/'rkr'1/]zip
(shortened to 49 chars thanks to Peter Taylor's great tips)
Online test here.
An explanation of the code:
;'qbbnnxxx' # push the string 'qbbnnxxx' on the clean stack
{
{9rand*}$# shuffle the string .'b'/1=,2% # count the number of places between the 'b's # (including the 'b's themselves) # if this count is even, the bishops are on # squares of different colors, so place a 0 # on the stack to make the do loop stop }do # repeat the procedure above until a # good string is encountered 'x'/ # split the string where the 'x's are 'rkr'1/]zip # and put 'r', 'k' and then 'r' again # where the 'x's used to be • Your method for checking that there's an even number of letters between the bs seems very long. How about .'b'/1=,2%? Jun 20, 2013 at 10:10 • And you can avoid discarding failed attempts by pulling the 'qbbnnxxx' out of the loop and reshuffling the same string. Jun 20, 2013 at 10:15 • @PeterTaylor Thank you for the great tips. For the "count between 'b's" issue I felt that there should be a shorter way, but I just couldn't find it. Jun 20, 2013 at 11:23 GolfScript (49 48 chars, or 47 for upper-case output) 'bbnnrrkq'{{;9rand}$.'b'/1=,1$'r'/1='k'?)!|1&}do This uses the standard technique of permuting randomly until we meet the criteria. Unlike w0lf's GolfScript solution, this does both checks on the string, so it is likely to run through the loop more times. Using upper case allows saving one char: 'BBNNRRKQ'{{;9rand}$.'B'/1=,1\$'R'/1=75?)!|1&}do
J, 56 characters
{.(#~'(?=.*b(..)*b).*r.*k.*r.*'&rxeq"1)'kqbbnnrr'A.~?~!8
it takes several seconds on my machine due to the inefficient algorithm. Some speed may be gained by adding ~.(remove duplicates) before 'kqbbnnrr'.
explanation:
• ?~!8 deals 8! random elements from 0 ... 8!
• 'kqbbnnrr'A.~ uses them as anagram indexes to the string kqbbnnrr.
• (#~'...'&rxeq"1)' filters them by the regex in quotes.
• {. means "take the first element"
K,69
(-8?)/[{~*(*/~~':{m=_m:x%2}@&x="b")&(&x="k")within&"r"=x};"rrbbnnkq"]
Python, 105 chars
Basically chron's technique, minus the elegant Ruby stuff.
import re,random
a='rrbbnnkq'
while re.search('b.(..)*b|r[^k]*r',a):a=''.join(random.sample(a,8))
print a
Thanks to Peter Taylor for the shortening of the regex.
• not s('b(..)*b',a) seems like a long-winded way of saying s('b.(..)*b',a). Also, sample may be one character shorter than shuffle, but it requires an extra argument. Jun 22, 2013 at 20:39
• You're right about the regex, Peter. Thanks! Shuffle returns None though, so it's no good :( Jun 22, 2013 at 20:41
• Missed the forest for the trees. You don't need two regexes, because you're checking the same string and or is equivalent to regex alternation (|). Saves 13 chars. Jul 4, 2013 at 20:58
• @PeterTaylor Good catch! thanks. Jul 5, 2013 at 6:19 | 2022-06-27 22:41:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29502347111701965, "perplexity": 3841.3813601293637}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103341778.23/warc/CC-MAIN-20220627195131-20220627225131-00030.warc.gz"} |
https://mersenneforum.org/showthread.php?s=d97ff5f646614d459febf623f5580c6d&t=15508&page=9 | mersenneforum.org A Python Diver for GMP-ECM...
Register FAQ Search Today's Posts Mark Forums Read
2016-06-10, 05:38 #89
cgy606
Feb 2012
32×7 Posts
Quote:
Originally Posted by WraithX Recently I noticed that the ecm.py script would crash if it was given factorial or primorial input strings. This is because the python "eval" function can't handle these characters. So, instead of writing my own equation parser to figure out how many digits are in these input strings, I've just grabbed the output from ecm.exe to see how many digits it reports are in the input number. So, Announcing ecm.py v0.35: Code: Fixed: - ecm.py no longer calculates number of digits on its own, it reads this information from the ecm executable. This fixed a problem where the python "eval" function would crash when it encountered factorial or primorial characters. ie, you can now do: echo "140!+1" | python ecm.py 1e6 and it will work correctly, without crashing. - also fixed the output when the ecm binary is not found. It will no longer print out the misleading "ECM_BIN_PATH", it will print out "ECM_PATH" to match the variable name in the python code.
Great, I have been factoring factorial +/- numbers.
2016-06-10, 06:06 #90
cgy606
Feb 2012
32·7 Posts
Quote:
So some food for thought on how to proceed. Clearly we need not be concerned about case '1' as in principal that is already implemented... we check the factor found and the cofactor for primality, if they both pass, kill the script and drink a beer.
Case '2' is the most fruitful of our efforts. The 'best' way to proceed IMHO is to kill all the threads running (assuming that the script starts and stops at roughly the same time at the start of each curve, the way that yafu works), test the primality of each factor. If the larger one is composite (WLOG let us assume that the smaller factor is the one found), then the script determines how many curves have been completed (at the current B1/B2 values), calculates how many curves remain in order to complete the original input, and then reschedules the remain number of curves given the number of threads being used. To illustrate:
Factoring C170 B1 = 3e6 B2 =### threads = 4 total curves remaining = 2352
factor found prp37 (curve 221 thread = 1 sigma = ###)
composite cofactor C134
Factoring C134 B1 = 3e6 B2 =### threads = 4 total curves = 1468
I think you get the idea...
Case '3' is nothing more than a transpose of case '2'. We found a "small" composite factor and a large probable prime factor. In principal we could reschedule the curves like we did in case '2' but their is probably a better way to factor this number, which I will explain in case '4'
For case '4' we find 2 composite cofactors. Let's assume for the sake of argument that one is larger (i.e. more decimal digits than the other). We could continue factoring that one in the same fashion as we did in case '2', but let's turn our attention to the smaller one. What does it mean when ecm finds two composite cofactors? Usually it means that smaller factors were not eliminated from the beginning (i.e. with some other factoring method like trial division or rho P+1/P-1) and thus B1 was selected so high that in a single curve, it effectively found 2 factors and not one (we would like to claim this was intentional but no one would believe this statement)! Anyways, if it finds a factor of N digits, then the smallest cofactor of this composite number can have at most ~N/2 digits. But how large is this composite cofactor of the original number we are factoring expected to be? Well, the current ecm record is 82 digits (I think). For the sake of argument, let's be a little conservative and assume that somebody out their runs a curve at B1 = 25e9 (or something crazy like that) on a C300 number and finds a C90 and a C210 (lucky!!!). Clearly, one of the C90 factors should have been found be about 5k curves at B1 = 11e6 (on average of course). In principal we could run ecm on the C90 until B1 = 11e6 bounds or we could let SIQS or some other factoring algorithm hack at it (perhaps even trial division given that maybe even smaller factors were not eliminated from the C300 to begin with). Anyways, the story I am trying to paint here is that if two composites are found, it basically means that a very large B1 bound was selected while at the same time small factors were not eliminated. Given that this cofactor is not large (less than 90 digits or so), we should focus on the larger (and more important cofactor to continuing factoring on) and reschedule the remaining curves for that guy analogous to case '2'.
I hope this makes sense...
2016-06-18, 20:25 #91
WraithX
Mar 2006
2·35 Posts
Announcing ecm.py v0.36...
Announcing ecm.py v0.36:
Code:
New Feature:
- Added the ability for ecm.py to perform the remaining number of requested curves on any composite factors found.
(You can activate this by setting "find_one_factor_and_stop = 0", it is 1 by default)
I've added the ability for ecm.py to continue working on any composite factors found, it will perform the remaining number of requested curves. I've run quite a few tests locally and it seems to work well. However, if you do run into any problems, please let me know.
Attached Files
ecm-py_v0.36.zip (16.2 KB, 166 views)
2016-07-10, 04:24 #92
WraithX
Mar 2006
2·35 Posts
Announcing ecm.py v0.38...
Announcing ecm.py v0.38:
Code:
New feature:
- Added the ability for ecm.py to resume a GMP-ECM (compatible) save file, and
it will evenly distribute the resume lines across several instances of GMP-ECM
Calling this can be as simple as:
ecm.py -resume resume.txt
Or you can use additional options, like:
Code:
ecm.py -threads 3 -out output.txt -maxmem 300 -pollfiles 60 -resume resume.txt
------------------------------------------------------------------------------
Which would spread the resume lines from resume.txt across 3 instances of gmp-ecm,
and give each one the command line option "-maxmem 100" ( = 300/3)
and poll the output files every 60 seconds to look for factors, or see if a gmp-ecm instance has finished
and save all gmp-ecm output to the file output.txt
* Like always, you can specify the "threads" and "pollfiles" options inside the script
Here is a description of this new feature, which can also be found in the script:
Code:
# If we are using the "-resume" feature of gmp-ecm, we will make some assumptions about the job...
# 1) This is designed to be a _simple_ way to speed up resuming ecm by running several resume jobs in parallel.
# ie, we will not try to replicate all resume capabilities of gmp-ecm
# 2) If we find identical lines in our resume file, we will only resume one of them and skip the others
# - If this happens, we will print out a notice to the user (if VERBOSE >= v_normal) so they know what is going on
# 3) We will use the B1 value in the resume file, and not resume with higher values of B1
# 4) We will let gmp-ecm determine which B2 value to use, which can be affected by "-maxmem" and "-k"
# 5) We will try to split up the resume work evenly between the threads.
# - We will put total/num_threads resume lines into each file, and total%num_threads files will each get one extra line.
# At the end of a job or when restarting a job, we will write any completed resume lines out to a "finished file"
# This "finished file" will be used to help us keep track of work done, in case we are interrupted and need to (re)resume later
# We will query the output files once every poll_file_delay seconds.
# resume_job_<filename>_inp_t00.txt # input resume file for use by gmp-ecm in thread 0
# resume_job_<filename>_inp_t01.txt # input resume file for use by gmp-ecm in thread 1
# ...etc...
# resume_job_<filename>_out_t00.txt # output file for resume job of gmp-ecm in thread 0
# resume_job_<filename>_out_t01.txt # output file for resume job of gmp-ecm in thread 1
# ...etc...
# resume_job_<filename>_finished.txt # file where we write out each resume line that we have finished with gmp-ecm
# where <filename> is based on the resume file name, but with any "." characters replaced by a dash.
I know this skips over v0.37. I have created a version 0.37 with similar functionality, but it put each resume line into its own file (one at a time, not all at once) and would give that input file to gmp-ecm to resume, and save the output to another file. Once that resume line was finished processing, it would delete both the input and output files, and then move on to the next resume line. So, if a resume file had 1000 lines to resume, then the script would created/delete 1000 input files and 1000 output files. I didn't want to tax any filesystems by creating/deleting so many files, so I rewrote it as detailed above.
Attached Files
ecm-py_v0.38.zip (22.7 KB, 168 views)
2016-07-10, 05:29 #93 wombatman I moo ablest echo power! May 2013 3·5·7·17 Posts This is AWESOME. Thanks for making this update.
2016-07-14, 12:45 #94 swellman Jun 2012 24×7×29 Posts +1 Fantastic functionality. Love it!
2016-08-03, 06:14 #95 UBR47K Aug 2015 4416 Posts Is there anyway to specify B1 value when using the "-resume" switch? I'd like to use GMP-ECM for stage 2 with Prime95 stage 1 results.txt
2016-08-05, 23:46 #96 cgy606 Feb 2012 32×7 Posts I tried running the script on a resume file produced from a gpu stage 1 run. I am getting the following error: python ecm.py -threads 8 -resume gpu.save -> ___________________________________________________________________ -> | Running ecm.py, a Python driver for distributing GMP-ECM work | -> | on a single machine. It is copyright, 2011-2016, David Cleaver | -> | and is a conversion of factmsieve.py that is Copyright, 2010, | -> | Brian Gladman. Version 0.38 (Python 2.6 or later) 7th Jul 2016 | -> |_________________________________________________________________| -> Resuming work from resume file: gpu.save -> Spreading the work across 8 thread(s) ->============================================================================= -> Working on the number(s) in the resume file: gpu.save -> Using up to 8 instances of GMP-ECM... -> Found 1024 unique resume lines to work on. -> Will start working on the 1024 resume lines. Traceback (most recent call last): File "ecm.py", line 2393, in parse_ecm_options(sys.argv, set_args = True, first = True) File "ecm.py", line 2235, in parse_ecm_options run_ecm_resume_job() File "ecm.py", line 1850, in run_ecm_resume_job threadList = [[i, '', 0, '', '', [], False] for i in xrange(intNumThreads)] NameError: name 'xrange' is not defined Any ideas about what is going wrong?
2016-08-06, 00:19 #97 VBCurtis "Curtis" Feb 2005 Riverside, CA 507110 Posts Looks like you didn't give B1 or B2 parameters to ecm.py. When I do stage 2 from a GPU'ed stage 1, I put on the command line the same B1 value I ran Stage 1 on (note you can put a higher one here, and it'll use the CPU to extend B1 before starting stage 2).
2016-08-06, 00:28 #98
cgy606
Feb 2012
32×7 Posts
Quote:
Originally Posted by VBCurtis Looks like you didn't give B1 or B2 parameters to ecm.py. When I do stage 2 from a GPU'ed stage 1, I put on the command line the same B1 value I ran Stage 1 on (note you can put a higher one here, and it'll use the CPU to extend B1 before starting stage 2).
The command line input that the ecm.py creator posted didn't indicate a B1 or B2 value. I tried it byadding the B1 and B2 values at the end of the command line, no effect:
python ecm.py -threads 8 -resume gpu.save 11e6 35133391030
-> ___________________________________________________________________
-> | Running ecm.py, a Python driver for distributing GMP-ECM work |
-> | on a single machine. It is copyright, 2011-2016, David Cleaver |
-> | and is a conversion of factmsieve.py that is Copyright, 2010, |
-> | Brian Gladman. Version 0.38 (Python 2.6 or later) 7th Jul 2016 |
-> |_________________________________________________________________|
-> Resuming work from resume file: gpu.save
->=============================================================================
-> Working on the number(s) in the resume file: gpu.save
-> Using up to 8 instances of GMP-ECM...
-> Found 1024 unique resume lines to work on.
-> Will start working on the 1024 resume lines.
Traceback (most recent call last):
File "ecm.py", line 2393, in <module>
parse_ecm_options(sys.argv, set_args = True, first = True)
File "ecm.py", line 2235, in parse_ecm_options
run_ecm_resume_job()
File "ecm.py", line 1850, in run_ecm_resume_job
threadList = [[i, '', 0, '', '', [], False] for i in xrange(intNumThreads)]
NameError: name 'xrange' is not defined
Last fiddled with by cgy606 on 2016-08-06 at 00:29
2016-08-06, 01:18 #99 UBR47K Aug 2015 22×17 Posts Try running with python2. That error happens when you try to run the script with python 3
Similar Threads Thread Thread Starter Forum Replies Last Post kelzo Programming 3 2016-11-27 05:16 daxmick Programming 2 2014-02-10 01:45 Xyzzy Programming 20 2009-09-08 15:51 yqiang GMP-ECM 2 2007-04-22 00:14 a216vcti Programming 7 2005-10-30 00:37
All times are UTC. The time now is 04:07.
Sat Dec 4 04:07:51 UTC 2021 up 133 days, 22:36, 0 users, load averages: 0.86, 1.04, 1.09 | 2021-12-04 04:07:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25720953941345215, "perplexity": 4190.3917496973445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362930.53/warc/CC-MAIN-20211204033320-20211204063320-00305.warc.gz"} |
https://www.examrace.com/NTA-UGC-NET/NTA-UGC-NET-Updates/NEWS-NTA-NET-Paper-1-26-June-2019-Morning-Shift-Part-2.htm | # NTA NET Paper-1 26 June 2019 Morning Shift Part 2-With Answers and Explanations at Doorsteptutor.com (Download PDF)
()
NET June 2019 Exam, candidates must practice the most frequently appearing questions of different sections of the exam. Logical & Mathematical Reasoning section tests the candidates’ ability to think and problem solving skills. The questions asked in this question are mainly the brain teasers and sometimes can be quite tricky to answer. It covers both Analytical and Mathematical Reasoning topics. For explanations and solutions to these questions don’t forget to visit www.doorsteptutor.com
Start Passage
Geography seeks to understand the physical and human organization of the surface of the Earth. In the field of geography, inter-related themes are frequently seen. These are scale, pattern and process. Scale is defined as the level of structure on organisation at which a phenomenon is studied. Pattern is defined as the variation observed in a phenomenon studied at a particular scale. The third theme, process, further connects the first two. Process is defined as the description of how the factors that affect a phenomenon act to product a pattern at a particular scale. For instance, when a passenger on an aircraft looks out of the window, the view changes according to the scale. At the global scale when the aircraft maintains its height, he can see the chunks of clouds in all their pattern, the sun or the moon, as per the time. When the aircraft loses a little height, passengers can see the land and water masses in their different colours and the shape of land masses. At the continental scale, the passengers can see the shapes of the land features and how they are distributed. The pattern emerges as the variation of land and water and the proportion of each. Looking carefully, passengers can note how each land mass aligns with the others and how each mountain bears the signs of the process through which it emerged. The processes in a geography change in a regular and repetitive manner. One instance of this is the annual solar cycle of the sun and the earth. Most systems in nature display time cycles that are organised in a rhythm of their own. As these time cycles and natural processes are always active, the environment of the earth is always in a state of dynamism. This environmental change is not only the result of natural process but also the result of human activity. Physical geography works towards understanding the interaction between man and nature and also the results of this interaction in order to manage global climate change better.
Q 11. The alignment of landmass with other elements can be seen by a passenger on a flight on a:
(1) Global scale
(2) Continental scale
(3) Local scale
(4) Time scale
Q12. In geography, pattern studies the variation observed in a phenomenon at
(1) a particular scale
(2) any scale
(3) every scale
(4) most scales
Q 13. The time cycles of the system of nature follow their own
(1) Path
(2) Rhythm
(3) Process
(4) Cycle
Q 14. Physical geography studies the results of the interaction between man and nature in order to
(1) Understand global climate change
(2) Study the impact of man’s activities on nature
(3) Address the issue of global climate change
(4) Reduce man-animal conflict
Q 15. The view seen by a passenger looking out of the window of an aircraft will be affected by the
(1) Process
(2) Pattern
(3) Scale
(4) Rhythm
End Passage
Q16. Sanjay sold an article at a loss of $25%.$ If the selling price had been increased by Rs. 175, there would have been a gain of $10%$. What was the cost price of the article?
(1) Rs. $350$
(2) Rs. $400$
(3) Rs. $500$
(4) Rs. 7$50$
Q 17. Identify the reasoning in the following argument:
‘Pre active stage of classroom teaching is important just as pre-learning preparation stage of communication’
(1) Hypothetical
(2) Deductive
(3) Inductive
(4) Analogical
Q 18. Mass media do not have pre-determined functions for everyone and people use them the way they like. This is suggestive of the fact that,
(1) Audiences are active
(2) Content is of little significance
(3) Content lacks plurality
(4) Audiences are homogeneous
Q19. The average of 35 raw scores is 18. The average of first seventeen of them is 14 and that of last seventeen is 20. Find the eighteenth raw score.
(1) $42$
(2) $46$
(3) $52$
(4) $56$
Q 20. When subject and predicate of both the premises are the same but they differ in quality only, it is known as
(2) Contraries
(3) Subaltern
(4) Super-altern
Q 21. The proposition ‘No red is black’ is equivalent to which of the following propositions?
(1) No black is red
(2) All red are black
(3) Some red are not black
(4) Red is not black
Select the correct answer from the options given below:
(1) (i), (ii), (iii) and (iv)
(2) (iii) only
(3) (i) and (iv)
(4) (iv) only
Q 22. If REASON is coded as 5 and GOVERNMENT is coded as 9, then what is the code for ACCIDENT?
(1) 6
(2) 7
(3) 8
(4) 9
Q 23. A train leaves Agra at 5 a. m. and reaches Delhi at 9 a. m. Another train leaves Delhi at 7 a. m. and reaches Agra at 10: 30 a. m. At what time do the two trains cross each other?
(1) 6: 36 a. m.
(2) 6: 56 a. m.
(3) 7 a. m.
(4) 7: 56 a. m.
Q 24. Given below are two premises with four conclusions drawn from them. Which of the following conclusions could be validly drawn from the premises?
(a) Some bags are tables
(b) All bags are chairs
Conclusions:
(i) Some tables are chairs
(ii) No chair is table
(iii) Some chairs are bags
(iv) Some bags are not tables
Select the correct answer from the options given below:
(1) (i) and (iii)
(2) (ii), (iii) and (iv)
(3) (i) and (iv)
(4) (ii) only | 2019-12-07 17:36:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4472844898700714, "perplexity": 1974.9642363604046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540500637.40/warc/CC-MAIN-20191207160050-20191207184050-00323.warc.gz"} |
https://zbmath.org/?q=an:1168.35452 | # zbMATH — the first resource for mathematics
A parabolic two-phase obstacle-like equation. (English) Zbl 1168.35452
Summary: For the parabolic obstacle-problem-like equation $\Delta u - \partial _tu=\lambda _{+}\chi _{\{u>0\}} - \lambda _{ - }\chi _{\{u<0\}},$ where $$\lambda _{+}$$ and $$\lambda _{ - }$$ are positive Lipschitz functions, we prove in arbitrary finite dimension that the free boundary $$\partial \{u>0\}\cup \partial \{u<0\}$$ is in a neighborhood of each “branch point” the union of two Lipschitz graphs that are continuously differentiable with respect to the space variables. The result extends the elliptic case [the authors, Int. Math. Res. Not. 2007, No. 8, Article ID rnm026 (2007; Zbl 1175.35157)] to the parabolic case. There are substantial difficulties in the parabolic case due to the fact that the time derivative of the solution is in general not a continuous function. Our result is optimal in the sense that the graphs are in general not better than Lipschitz, as shown by a counter-example.
##### MSC:
35R35 Free boundary problems for PDEs 35K10 Second-order parabolic equations 35B65 Smoothness and regularity of solutions to PDEs
Zbl 1175.35157
Full Text:
##### References:
[1] Apushkinskaya, D.E.; Ural’tseva, N.N.; Shahgholian, H., Lipschitz property of the free boundary in the parabolic obstacle problem, Saint |St. Petersburg math. J., 15, 3, 375-391, (2004) · Zbl 1072.35201 [2] Caffarelli, Luis A., A monotonicity formula for heat functions in disjoint domains, (), 53-60 · Zbl 0808.35042 [3] Caffarelli, Luis; Petrosyan, Arshak; Shahgholian, Henrik, Regularity of a free boundary in parabolic potential theory, J. amer. math. soc., 17, 4, 827-869, (2004), (electronic) · Zbl 1054.35142 [4] Duvaut, G.; Lions, J.-L., Inequalities in mechanics and physics, Grundlehren math. wiss., vol. 219, (1976), Springer-Verlag Berlin, translated from the French by C.W. John · Zbl 0331.35002 [5] Anders Edquist, Arshak Petrosyan, A parabolic almost monotonicity formula, preprint · Zbl 1139.35045 [6] Krylov, N.V.; Safonov, M.V., A property of the solutions of parabolic equations with measurable coefficients, Izv. akad. nauk SSSR ser. mat., 44, 1, 161-175, (1980), 239 [7] Ladyženskaja, O.A.; Solonnikov, V.A.; Ural’ceva, N.N., Linear and quasilinear equations of parabolic type, Transl. math. monogr., vol. 23, (1967), Amer. Math. Soc. Providence, RI, translated from the Russian by S. Smith [8] Lieberman, Gary M., Second order parabolic differential equations, (1996), World Sci. Publ. River Edge, NJ · Zbl 0884.35001 [9] Shahgholian, Henrik, $$C^{1, 1}$$ regularity in semilinear elliptic problems, Comm. pure appl. math., 56, 2, 278-281, (2003) · Zbl 1258.35098 [10] Shahgholian, Henrik; Uraltseva, Nina; Weiss, Georg S., Global solutions of an obstacle-problem-like equation with two phases, Monatsh. math., 142, 1-2, 27-34, (2004) · Zbl 1057.35098 [11] Shahgholian, Henrik; Uraltseva, Nina; Weiss, Georg S., The two-phase membrane problem—regularity in higher dimensions, Int. math. res. not., (2007) · Zbl 1175.35157 [12] Shahgholian, Henrik; Weiss, Georg S., The two-phase membrane problem—an intersection-comparison approach to the regularity at branch points, Adv. math., 205, 2, 487-503, (2006) · Zbl 1104.35074 [13] Weiss, G.S., Self-similar blow-up and Hausdorff dimension estimates for a class of parabolic free boundary problems, SIAM J. math. anal., 30, 3, 623-644, (1999), (electronic) · Zbl 0922.35193
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2022-01-25 14:48:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7514240145683289, "perplexity": 2573.194080807341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304835.96/warc/CC-MAIN-20220125130117-20220125160117-00574.warc.gz"} |
https://chemistry.stackexchange.com/questions/73972/carbonic-acid-in-water | # Carbonic acid in water [closed]
If we add a weak acid such as carbonic acid in water it weakly dissociates to form bicarbonate and hydrogen ions . Now water already has some hydrogen ions, having a ph=7.
So now indirectly we added hydrogen ions and base (bicarbonate ) ions to water which already has a hydrogen ion concentration. The resulting solution is acidic . This means that in the addition of equimolar hydrogen ions and bicarbonate ions( base) hydrogen ions dominate.
The only reason which I can think of is this: lets say there were 1000 hydrogen ions in water, now addition of 100 hydrogen ions and 100 bicarbonate ions will cause a net increase in hydrogen ion concentration because bicarbonate wont be able to counteract the increase in hydrogen ions because bicarbonate, when acting as a base, will be in equilibrium with carbonic acid. I.e. 100 bicarbonate ions won be able to neutralize 100 hydrogen ions present in the solution, and because of their equilibrium they will ionize only say 50 and overall, and there will be an increase of 50 hydrogen ions.
Is my explanation correct?
$$\ce{H2CO3 <--> H+ + HCO3-}$$
Since the pH of the solution is $\mathrm{-log[H+]}$, the pH of pure water will decrease upon the addition of carbonic acid. | 2020-10-29 02:25:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6378504037857056, "perplexity": 1465.5573656631793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107902683.56/warc/CC-MAIN-20201029010437-20201029040437-00061.warc.gz"} |
https://ergodicity.net/2012/02/10/ita-workshop-2012-talks/ | # ITA Workshop 2012 : Talks
The ITA Workshop finished up today, and I know I promised some blogging, but my willpower to take notes kind of deteriorated during the week. For today I’ll put some pointers to talks I saw today which were interesting. I realize I am heavily blogging about Berkeley folks here, but you know, they were interesting talks!
Nadia Fawaz talked about differential privacy for continuous observations : in this model you see $x_1, x_2, x_3, \ldots$ causally and have to estimate the running sum. She had two modifications, one in which you only want a windowed running sum, say for $W$ past values, and one in which the privacy constraint decays and expires after a window of time $W$, so that values $W$ time steps in the past do not have to be protected at all. This yields some differences in the privacy-utility tradeoff in terms of the accuracy of computing the function.
David Tse gave an interesting talk about sequencing DNA via short reads as a communication problem. I had actually had some thoughts along these lines earlier because I am starting to collaborate with my friend Tony Chiang on some informatics problems around next generation sequencing. David wanted to know how many (noiseless) reads $N$ you need to take of a genome of of length $G$ using reads of length $L$. It turns out that the correct scaling in this model is $L/\log G$. Some scaling results were given in a qualitative way, but I guess the quantitative stuff is being written up still.
Michael Jordan talked about the “big data bootstrap” (paper here). You have $n$ data points, where $n$ is huge. The idea is to subsample a set of size $b$ and then do bootstrap estimates of size $n$ on the subsample. I have to read the paper on this but it sounds fascinating.
Anant Sahai talked about how to look at some decentralized linear control problems as implicitly doing some sort of network coding in the deterministic model. One way to view this is to identify unstable modes in the plant as communicating with each other using the controllers as relays in the network. By structurally massaging the control problem into a canonical form, they can make this translation a bit more formal and can translate results about linear stabilization from the 80s into max-flow min-cut type results for network codes. This is mostly work by Se Yong Park, who really ought to have a more complete webpage.
Paolo Minero talked about controlling a linear plant over a rate-limited communication link whose capacity evolves according to a Markov chain. What are the conditions on the rate to ensure stability? He made a connection to Markov jump linear systems that gives the answer in the scalar case, but the necessary and sufficient conditions in the vector case don’t quite match. I always like seeing these sort of communication and control results, even though I don’t work in this area at all. They’re just cool.
There were three talks on consensus in the morning, which I will only touch on briefly. Behrouz Touri gave a talk about part of his thesis work, which was on the Hegselman-Krause opinion dynamics model. It’s not possible to derive a Lyapunov function for this system, but he found a time-varying Lyapunov function, leading to an analysis of the convergence which has some nice connections to products of random stochastic matrices and other topics. Ali Jadbabaie talked about work with Pooya Molavi on non-Bayesian social learning, which combines local Bayesian updating with DeGroot consensus to do distributed learning of a parameter in a network. He had some new sufficient conditions involving disconnected networks that are similar in flavor to his preprint. José Moura talked about distributed Kalman filtering and other consensus meets sensing (consensing?) problems. The algorithms are similar to ones I’ve been looking at lately, so I will have to dig a bit deeper into the upcoming IT Transactions paper.
Advertisements | 2018-03-23 20:24:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4120141863822937, "perplexity": 587.9494828990798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648594.80/warc/CC-MAIN-20180323200519-20180323220519-00431.warc.gz"} |
https://www.omnicalculator.com/math/quaternion | # Quaternion Calculator
By Maciej Kowalski, PhD candidate
Last updated: May 28, 2021
Welcome to Omni's quaternion calculator, where we'll deal with this mysterious extension of complex numbers: the quaternions. In short, we represent them using four real values, each corresponding to one of the basic unity quaternions: `1`, `i`, `j`, and `k`. To some, they may seem like an artificial creation to make math even more tricky than it already is. To others, they are a useful tool in 3D geometry: mainly to study quaternion rotation. Unfortunately, this doesn't kill the "tricky" part, and, e.g., quaternion multiplication is not a straightforward operation.
But let's not get ahead of ourselves! We begin with where every math topic begins: with an introduction. In our case, it's the quaternion definition.
## The quaternion definition
Quaternions are an extension of complex numbers. They were first introduced by Sir William Hamilton, who used them to describe several properties of the three-dimensional space (for more, see the dedicated section).
However, nowadays, most often, we introduce them from the algebraic point of view. Let's take a look at the formal quaternion definition.
💡 Quaternions are expressions of the form `q = a + b*i + c*j + d*k`, where `a`, `b`, `c`, and `d` are arbitrary real numbers and `i`, `j`, and `k` are base elements sometimes called the basic unity quaternions.
By the above quaternion definition, we see that the space is spanned by four base elements: `1`, `i`, `j`, and `k`. The three letters don't stand for any particular value: they simply denote independent base vectors. Nevertheless, if `i` seems familiar, it well should! In fact, if the coefficients of `j` and `k` (i.e., `c` and `d` in the above quaternion definition) are both zero, then we obtain the (well-known) complex numbers! As such, we indeed obtain their extension.
More or less, Hamilton's idea was to have one expression whose individual parts (i.e., the base elements' coefficients) describe the distinct directions in a three-dimensional space. From there, he only needed to introduce operations between these new thingies. On the one hand, we'd like them to satisfy some properties and form a nice algebraic structure. On the other, the operations should be usable and have real-life explanations. After all, quaternions originated from geometry — a very real-life area of mathematics.
Now that we know what a quaternion is, we can go further. The subsequent sections describe the basic rules governing these numbers. Note how Omni's quaternion calculator lets you find all values and objects we mention. For more information on how to use the tool, see the last section.
This one's easy. Recall from the above section that quaternions are spanned by four base elements: `1`, `i`, `j`, and `k`. As such, if we want to add or subtract two such expressions, we do it the same way as in any vector space: move from one base element to the other and add or subtract the respective coefficients of the two quaternions. And since these coefficients are simply real numbers, it boils down to the very basics of mathematics which we learned in primary school.
`(a + b*i + c*j + d*k) + (e + f*i + g*j + h*k) = (a + e) + (b + f)*i + (c + g)*j + (d + h)*k`,
`(a + b*i + c*j + d*k) - (e + f*i + g*j + h*k) = (a - e) + (b - f)*i + (c - g)*j + (d - h)*k`.
Plain sailing, wouldn't you say? Such addition satisfies all the reasonable properties: it's associative, commutative, and every quaternion `q` has its opposite `-q` such that `q + (-q) = (-q) + q = 0`.
Unfortunately, the story is a bit more difficult for multiplication and even more so for division.
## Quaternion multiplication and division
In real and complex numbers, we got used to the distributive property of multiplication over addition, so we'd like to have a similar law here. After all, quaternions are sums of four elements: the real part and the `i`, `j`, and `k` parts. In other words, we want to have:
`(a + b*i + c*j + d*k) * (e + f*i + g*j + h*k)`
`= a * (e + f*i + g*j + h*k)`
`+ b*i * (e + f*i + g*j + h*k)`
`+ c*j * (e + f*i + g*j + h*k)`
`+ d*k * (e + f*i + g*j + h*k)`.
And indeed, it is so. However, we now have to explain how quaternity multiplication works on the individual parts, i.e., on the basic unity quaternions. The following table contains the answer:
·1 ·i ·j ·k
1 i j k
i -1 k -j
j -k -1 i
k j -i -1
For completeness, let's write the individual products below:
`1 * i = i * 1 = i`, `1 * j = j * 1 = j`, `1 * k = k * 1 = k`,
`i² = j² = k² = -1`,
`i * j = k`, `j * k = i`, `k * i = j`,
`j * i = -k`, `k * j = -i`, `i * k = -j`.
In particular, we see that quaternion multiplication is not commutative. However, it is associative, which means the quaternion space is not a field (unlike, e.g., the real or complex numbers). Instead, it's a division ring since every non-zero quaternion has an inverse, as we'll see in the next section.
Coming back to the formula for the product of two quaternions, we can now use the rules given above to write:
`(a + b*i + c*j + d*k) * (e + f*i + g*j + h*k)`
`= a*e - b*f - c*g - d*h`
`+ (a*f + b*e + c*h - d*g) * i`
`+ (a*g - b*h + c*e + d*f) * j`
`+ (a*h + b*g - c*f + d*e) * k`.
The operation is often called the Hamilton product in honor of its creator.
Recall from the first section that the idea behind the quaternion definition was for the `4`-tuples to represent the three-dimensional space. A keen eye may observe that if we multiply two quaternions whose real parts are zero (i.e., `a = e = 0` above), then we get precisely the same formula as that of the cross product of two three-dimensional vectors with `i`, `j`, and `k` corresponding to the elementary basis `(1,0,0)`, `(0,1,0)`, and `(0,0,1)`, respectively.
As for dividing quaternions, the matter is even more tricky. As you may know, algebraic structures don't have the operation of division, per se. The expression `x / y` is simply an abbreviation of `x * y⁻¹`. What is more, in real or complex numbers, we have `x / y = x * y⁻¹ = y⁻¹ * x`.
On the other hand, we've already mentioned that quaternion multiplication is not commutative, so in general, `x * y⁻¹` is not the same as `y⁻¹ * x`. Therefore, Omni's quaternion calculator returns both possibilities for the two operations, i.e., `x * y` and `y * x` for multiplication, and `x * y⁻¹` and `y⁻¹ * x` for division.
Either way, division requires finding `y⁻¹`, i.e., the multiplicative inverse of a quaternion. And how do we do that?
## The magnitude, conjugate, inverse, and matrix representation
The magnitude (or norm), conjugate, inverse, and matrix representation are all assigned to a single quaternion. Below, we explain them one by one.
1. The magnitude (norm) of a quaternion
If the name rings a bell, and you feel there was something similar connected with vectors, you're on the right track. As mentioned in the first section, quaternions are simply `4`-tuples of real numbers: same as vectors of a four-dimensional Euclidean space. Those vectors all had their magnitude, which basically described their length. So why don't we repeat the reasoning here?
Recall that the magnitude of a vector (in whichever dimension) is the square root of the sum of its coordinates' squares. Analogously, the magnitude (or norm) of a quaternion is:
`‖a + b*i + c*j + d*k‖ = √(a² + b² + c² + d²)`.
Note how, by definition, the value is always a non-negative real number. What is more, it's equal to zero if and only if the quaternion is zero, i.e., `0 + 0*i + 0*j + 0*k`.
Lastly, observe that if we divide a quaternion by its norm (i.e., divide each of its coefficients), we'll obtain a new quaternion whose magnitude is `1`. This simple fact will become quite useful very soon.
2. The conjugate of a quaternion
Recall how the conjugate of a complex number gave the number with the same real part but the opposite imaginary part. This time, we're dealing with expressions of the form `a + b*i + c*j + d*k` that have one real part (i.e., `a`) and three non-real ones (i.e., `b`, `c`, and `d`). Nevertheless, the concept stays the same: we keep the first unchanged and flip the signs in the others.
To be precise, the quaternion conjugate formula states that the conjugate of `a + b*i + c*j + d*k` is `a - b*i - c*j - d*k`.
We denote the conjugate of a quaternion `q` by `q` (as above), but some sources also use `qᵗ`, `q̃`, or `q*`.
Also, observe that if we multiply a quaternion by its conjugate in whichever order, we'll obtain a difference of squares:
`q * q = (a + bi + cj + dk) * (a - bi - cj - dk)`
`= a² - (b*i + c*j + d*k)²`.
If we expand it according to the rules from the above section, we'll get:
`q * q = q * q = a² + b² + c² + d² = ‖q‖²`.
Again, a simple fact that will prove very useful.
3. The inverse of a quaternion
Let's begin with the obvious: only non-zero quaternions have an inverse. Secondly, even with quaternion multiplication not being commutative, every non-zero `q` has its inverse `q⁻¹` such that `q * q⁻¹ = q⁻¹ * q = 1`, where `1 = 1 + 0*i + 0*j + 0*k`.
To find the reciprocal, recall that for every quaternion `q`, we have `q * q = q * q = ‖q‖²`. For a non-zero `q`, this gives `q * (q / ‖q‖²) = (q / ‖q‖²) * q = 1`, which means that:
`q⁻¹ = q / ‖q‖²`.
(Note how in the above section, we mentioned some issues with dividing quaternions. However, here, we divide by a positive real number, i.e., divide each of the quaternions coefficients. In other words, everything is fine.)
4. Matrix representation
Complex numbers are pairs of real numbers, and quaternions are `4`-tuples of real numbers. And just as we can represent the prior as `2 x 2` matrices, we can have the latter as `4 x 4` ones. The matrix representation basically takes the quaternion's coefficients and puts them in the right places with the right signs (which come from the relations between unity quaternions). To be precise, we can write `q = a + b*i + c*j + d*k` as:
⌈ a -b -c -d ⌉ | b a -d c | | c d a -b | ⌊ d -c b a ⌋
In fact, alternatively, we can also use an equivalent `2 x 2` matrix representation with complex entries:
⌈ a + b*i c + d*i ⌉ ⌊ -c + d*i a - b*i ⌋
To motivate the concept, let us mention a couple of its properties: the magnitude `‖q‖` is equal to the square root of the matrix' determinant, and the conjugate `q` corresponds to the conjugate transpose.
Alright, we've learned the basic operations, including quaternions, but so far, we've only seen formulas that may seem useless by themselves. Now, it's time for some practical uses! We'll get back to geometry, learn to convert a quaternion to a rotation matrix, and see how it all works in the three-dimensional space.
## 3D geometry: quaternion rotation
The three-dimensional Euclidean space consists of points `(x, y, z)` with arbitrary real numbers as coordinates. If we see it as a vector space, it's spanned by three elements: `(1, 0, 0)`, `(0, 1, 0)`, and `(0, 0, 1)`, which we call the elementary basis. On the other hand, there are four basic unity quaternions: `1`, `i`, `j`, and `k`. However, if we think of the non-real ones as analogous to the elementary euclidean vectors, we can use them for quaternion rotation.
To have rotation, we need an axis around which we rotate and an angle. The prior will simply be a straight line, but for calculations, we don't really need any line equations. In fact, a single non-zero vector will suffice: it uniquely defines a line by setting its direction.
Clearly, we can define a line using many different vectors as long as they are scaled versions of one another. Therefore, for our purposes, we'll stick to unit vectors (i.e., those of length `1`). You can transform any non-zero vector `v` into a unit one by dividing it (i.e., dividing all its coordinates) by its length: `v / |v|`. Obviously, we could alter the formulas to allow other vectors, but it's far simpler to write them with this simple assumption. Nevertheless, note that Omni's quaternion calculator allows non-unit ones: it simply converts them itself.
Now, let's cover two topics connecting quaternions to rotation, one by one.
1. Rotation around an axis
Suppose a unit vector `vₐ = (xₐ, yₐ, zₐ)` defines a line in a three-dimensional space.
Then, we can represent the rotation around the line by an angle `θ` by the quaternion:
`q = cos(θ/2) + (xₐ*i + yₐ*j + zₐ*k) * sin(θ/2)`
`= cos(θ/2) + (xₐ * sin(θ/2)) * i + (yₐ * sin(θ/2)) * j + (zₐ * sin(θ/2)) * k`.
Alternatively, we can recall that a three-dimensional space consists of triples of real numbers. Therefore, we can choose to write such triples as matrices with three rows and one column. Then, if we write the `q` above as `q = a + b*i + c*j + d*k`, we can represent the operation with a `3 x 3` matrix, i.e., we can change a quaternion to a rotation matrix given by the following formula (note how this is a very different concept than the matrix representation in the above section):
⌈ 1 - 2*(c² + d²) 2*(b*c - a*d) 2*(b*d + a*c) ⌉ | 2*(b*c + a*d) 1 - 2*(b² + d²) 2*(c*d - a*b) | ⌊ 2*(b*d - a*c) 2*(c*d + a*b) 1 - 2*(b² + c²) ⌋
1. Vector rotation
Point 1. tells us how to "store" the data and the idea of rotating around an axis. Point 2. tells us what we get when we apply the rotation to a given vector `v = (x, y, z)`.
The first option is to use the rotation matrix `R` above and multiply it by `v` (represented as a one-column matrix). Then, the vector `v' = R * v` is the result of rotating `v` around the axis given by `vₐ = (xₐ, yₐ, zₐ)` by an angle `θ`.
Alternatively, we can use quaternions and quaternion multiplication. However, before we begin, we need to understand `v` as a quaternion: after all, right now, it's simply a triple of real numbers.
Define a quaternion `qᵥ = 0 + x*i + y*j + z*k`. In this language, we the rotated vector `v'` from conjugating `qᵥ` by the rotation quaternion `q` from point 1. To be precise, if we set `qᵥ' = q * qᵥ * q⁻¹`, then the result will take the form `qᵥ' = 0 + x'*i + y'*j + z'*k`, and from there we read off the vector: `v' = (x', y', z')`. In fact, in this case, the rotation quaternion `q` is unitary (i.e., has magnitude `1`), so `q⁻¹ = q`, meaning we could have used the conjugation instead of inverse above.
That concludes the theoretical part. Let's now make good use of the knowledge of what a quaternion is and all the formulas and take on an example or two.
## Example: using the quaternion calculator
The above sections took us through all the functionalities of Omni's quaternion calculator. Arguably, there are quite a few, so let's try out a couple to get a taste. We'll find:
1. The result of quaternion multiplication of `q₁ = 2 - i + 3j + k` and `q₂ = 5 - 4i + k`, and
2. The quaternion of rotation corresponding to the line given by the vector `vₐ = (1, 0, -1)` and the angle `θ = 60°`.
Obviously, we'll apply the formulas we've learned. However, before we do that, we let the quaternion calculator do the talking and work out the solution for us.
1. To multiply two quaternions using our tool, we begin by telling the calculator what we want from it. In our case, this means choosing "product" from the list in the variable field "I want to find the…" That will trigger two sections underneath, each dedicated to one of the quaternions. Note how at the top of them, we can see the symbolic representation that is used below: `q₁ = a + bi + cj + dk` and `q₂ = e + fi + hj + hk`. Looking back at the example, we input:
`a = 2`, `b = -1`, `c = 3`, `d = 1`,
`e = 5`, `f = -4`, `g = 0`, `h = 1`.
(Note how we have `b = -1`, `d = 1`, and `h = 1` even though there were no numbers in the corresponding places in `q₁` and `q₂` above. That is because, by convention, we don't write `1`s in front of variables. Also, observe that we input `g = 0` since `q₂` has no summand with the unity quaternion `j`: that means its coefficient is, in fact, `0`.)
Once we write the last entry, the quaternion calculator will spit out the answer underneath.
2. Again, we begin by stating what we need. This time, we choose "quaternion of rotation" under "I want to find the…" This will trigger two sections: one for the coordinates of the vector defining the rotation axis and one for the angle. The prior is written symbolically as `vₐ = (xₐ, yₐ, zₐ)`, and the latter simply by `θ`. Looking back at the example, we input:
`xₐ = 1`, `yₐ = 0`, `zₐ = -1`, `θ = 60°`.
(Note how the tool lets you change the angle unit if needed.) The moment we give the last value, the quaternion calculator presents the answer underneath. Note how apart from the rotation quaternion, it also converts the quaternion to a rotation matrix.
Alright, we've played around a bit; now it's time to get our hands dirty. Surely, we won't be as fast as the calculator, but let's go through the calculations nevertheless.
1. Here, we simply follow the quaternion multiplication rules described in the dedicated section, i.e., use the distributive property of multiplication over addition, and compute the products of basic unity quaternions.
`q₁ * q₂ = (2 - i + 3j + k) * (5 - 4i + k)`
`= 2 * (5 - 4i + k) - i * (5 - 4i + k) + 3j * (5 - 4i + k) + k * (5 - 4i + k)`
`= 10 - 8i + 2k - 5i - 4 + j + 15j + 12k + 3i + 5k - 4j - 1`
`= 5 - 10i + 12j + 19k`.
2. To find the quaternion of rotation, we begin by finding the unitary equivalent of the vector `v`. For that, we start with its length:
`|v| = √(1² + 0² + (-1)²) = √(1 + 0 + 1) = √2`,
and use the appropriate formula:
`v / |v| = (1 / √2, 0 / √2, -1 / √2) = (0.5√2, 0, -0.5√2)`.
Now, it's enough to use the formula from the dedicated section:
`q = cos(60° / 2) + (0.5√2 * i + 0 * j - 0.5√2 * k) * sin(60° / 2)`
`= 0.5√3 + (0.5√2 * i - 0.5√2 * k) * 1/2`
`= 0.5√3 + 0.25√2 * i - 0.25√2 * k`.
Lastly, if we choose to convert the quaternion to a rotation matrix, the same section provides the formula to do so:
⌈ 1 - 2 * (0² + (-√2/4)²) 2 * (√2/4 * 0 - √3/2 * (-√2/4)) 2 * (√2/4 * (-√2/4) + √3/2 * 0) ⌉ | 2 * (√2/4 * 0 + √3/2 * (-√2/4)) 1 - 2 * ((√2/4)² + (-√2/4)²) 2 * (0 * (-√2/4) - √3/2 * √2/4) | ⌊ 2 * (√2/4 * (-√2/4) - √3/2 * 0) 2 * (0 * (-√2/4) + √3/2 * √2/4) 1 - 2 * ((√2/4)² + 0²) ⌋
which is:
⌈ 0.75 √6/4 -0.25 ⌉ | -√6/4 0.5 -√6/4 | ⌊ -0.25 √6/4 0.75 ⌋
Phew, that was some work, wouldn't you say? And the sine and cosine were fairly simple, so it can even longer in general. Oh, it's a good thing we have Omni's quaternion calculator to spare us the hassle!
## FAQ
### How do I use quaternions for rotation?
To use quaternions for rotation, you need to:
1. Identify the vector defining the axis of rotation.
2. If needed, find its unit equivalent.
3. The quaternion of rotation is `q = cos(θ/2) + (xₐ*i + yₐ*j + zₐ*k) * sin(θ/2)`, where:
• `vₐ = (xₐ, yₐ, zₐ)` is the unit vector defining the axis; and
• `θ` is the angle by which you rotate.
4. If needed, rotate `v` using the formula `qᵥ' = q * qᵥ * q⁻¹`, where:
• `v = (x, y, z)` is the vector you rotate;
• `q` is as in step 3;
• `q⁻¹` is the multiplicative inverse of `q`;
• `qᵥ = x*i + y*j + z*k`;
• if `qᵥ' = 0 + x'*i + y'*j + z'*k`, then `v' = (x', y', z')`; and
• `v'` is the result of rotating `v`.
### How do I multiply quaternions?
To multiply quaternions, you need to:
1. Use the distributive property of multiplication over addition.
2. Multiply basic unity quaternions according to the rules:
• `1 * i = i * 1 = i`, `1 * j = j * 1 = j`, `1 * k = k * 1 = k`;
• `i² = j² = k² = -1`;
• `i * j = k`, `j * k = i`, `k * i = j`; and
• `j * i = -k`, `k * j = -i`, `i * k = -j`.
3. Simplify by combining common summands.
### Are all rotations expressible as unit quaternions?
Yes, we can express any rotation around an axis in a three-dimensional space by a unit quaternion. By default, the formulas are for axes going through the origin, but you can always compose the rotation with a translation to suit your purposes.
### Are the quaternions a field?
No. Quaternion multiplication is not commutative, which is a must-have for a structure to be a field. However, all other field properties are satisfied, so, in the end, quaternions form a division ring.
### Are the quaternions commutative?
No, or, to be precise, quaternion multiplication isn't commutative. On the other hand, quaternion addition does commute.
Maciej Kowalski, PhD candidate
I want to find the
sum
.
q₁ = a + bi + cj + dk
Real part (a)
The i coefficient (b)
The j coefficient (c)
The k coefficient (d)
q₂ = e + fi + hj + hk
Real part (e)
The i coefficient (f)
The j coefficient (g)
The k coefficient (h)
People also viewed…
### Arithmetic sequence
The arithmetic sequence calculator finds the nᵗʰ term and the sum of a sequence with a common difference.
### Flat vs. round Earth
Omni's not-flat Earth calculator helps you perform three experiments that prove the world is round.
### Grams to cups
The grams to cups calculator converts between cups and grams. You can choose between 20 different popular kitchen ingredients or directly type in the product density.
### Reference angle
The reference angle calculator finds the corresponding angle in the first quadrant. | 2021-09-17 16:42:39 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.951002299785614, "perplexity": 807.5702859010411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055684.76/warc/CC-MAIN-20210917151054-20210917181054-00267.warc.gz"} |
https://support.bioconductor.org/p/69541/ | Check DESeq2 contrasts for exon vs intron changes
1
0
Entering edit mode
Jake ▴ 90
@jake-7236
Last seen 9 months ago
United States
I am trying to look at post-transcriptional regulation using exon and intron reads as discussed in this paper (http://www.nature.com/nbt/journal/vaop/ncurrent/full/nbt.3269.html). Essentially they look for changes in exon reads between two samples and changes in intron reads between two samples and classify transcripts as post-transcriptionally regulated when there is a discrepancy between changes in exons and changes in introns.
They use a linear model ~ region + condition + region:condition. Where region is either exon(ex) or intron(in) and condition is treatment or control. I can get post-transcriptionally regulated genes from the default results(), but I also want to plot ∆exons vs ∆introns for treatment vs control and I wanted to check that the contrasts I'm using are correct.
layout <- data.frame(row.names = colnames(countMatrix),
condition = c(rep('control',3), rep('treatment',3)),
region = rep(c("ex","in"),each=ncol(cntEx)))
dds <- DESeqDataSetFromMatrix(countData = countMatrix, colData = layout, design = ~ region*condition)
dds <- DESeq(dds, betaPrior = FALSE)
results <- results(dds, alpha=0.1)
results_exons <- results(dds, contrast=c('condition','treatment','control'))
results_introns <- results(dds, contrast=list(c('condition_treatment_vs_control','regionin.conditiontreatment')))
plot(results_exons$log2FoldChange, results_introns$log2FoldChange)
deseq2 linear model • 1.3k views
0
Entering edit mode
@mikelove
Last seen 14 hours ago
United States
hi Jake,
The contrasts look right, but I'm wondering, how do you prepare the countMatrix? One row per gene? And then counts in exons for some columns, counts for introns in other columns? Then you have some pairing information about exons and introns from the same sample no? Also, can you print the layout? What is ncol(cntEx)? I'm confused what the layout actually looks like.
0
Entering edit mode
There is one row per gene in the count matrix. I counted exons (cntEx) and introns (cntIn) separately in feautureCounts and then combined them into one matrix.
cnt <- cbind(Ex=as.data.frame(cntEx[genes.selected,]), In=cntIn[genes.selected,])
countMatrix <- as.matrix(cnt)
Below is the layout:
condition region
cell_1_rep1.bam cell_1 ex
cell_1_rep2.bam cell_1 ex
cell_1_rep3.bam cell_1 ex
cell_2_rep1.bam cell_2 ex
cell_2_rep2.bam cell_2 ex
cell_2_rep3.bam cell_2 ex
cell_1_rep1.bam cell_1 in
cell_1_rep2.bam cell_1 in
cell_1_rep3.bam cell_1 in
cell_2_rep1.bam cell_2 in
cell_2_rep2.bam cell_2 in
cell_2_rep3.bam cell_2 in
1
Entering edit mode
Yes, then I'd use your code above. It is possible to also include a term for rep here (by adding a single term condition:rep), but then it makes the ∆exons and ∆introns main effects only for the reference rep, so this would just confuse things. I'd go with what you have.
0
Entering edit mode
Actually, adding such a term might confer quite some gain in power. This is how we do it in DEXSeq. The standard DEXSeq design formula, translated to the present setting, would read:
~ sample + region + condition:region
where sample is a factor with levels cell_1_rep1, cell_1_rep2, cell_1_rep3, cell_2_rep1, cell_2_rep2, and cell_2_rep3. (I'm assuming here that sample cell_2_rep1 is not closer to cell_1_rep1 than to the other cell1 replicates.)
With this, the intercept is the exon expression strength for each sample cell_1_rep1, the sample main effects are the differences between exon expression in the other samples to cell_1_rep1, the region main effect is the log ratio of intron to exon counts for cell_1 and the interaction effect is the logarithm of the double ratio
( cell_2_introns / cell_2_exons ) / ( cell_1_introns / cell_1_exons ).
Mike, please correct me if I'm wrong. (It's late here.)
0
Entering edit mode
I have a naive follow up question. I thought that most of the differential expression software gains power by additional replicates because they can better calculate the range of expression (between replicates) of a given gene in a sample, it can then determine if the range of expression (between replicates) in another sample is significantly different. By making each replicate essentially it's own sample by cell_#_rep#, don't you lose the replicate information?
1
Entering edit mode
It's a good question, you gain power in this case because you remove some of the unexplained variance. The simplest example of how this works is the paired t-test, where the baseline for each individual are accounted for, and the test focuses on the differences. Even if the baselines have a wide range, and so a simple t-test would not reject the null, if the differences are consistent, the paired t-test will detect the difference. This is the same principal here, or when batches are accounted for in a blocked experimental design.
0
Entering edit mode
If I understand this right, doing it this way would tell me if across all samples the intron count differs significantly from the exon count for any given gene. However, if I want to know does the ratio of intron count to exon count for gene X differ between cell type 1 and cell type 2, I couldn't use this set up and would have to use my original design? Thanks
0
Entering edit mode
Yes, for actually extracting the condition effect, the design with each rep makes it difficult. You could average the effect for all the reps in one condition compared to the average of all the others using contrast=list(), and listValues. But this is not exactly equal to the condition effect in your original design. There are often multiple options in setting up a design and depending on priorities, one chooses one or the other. If you were only interested in testing the interaction term, I would recommend including 'condition:rep' which is equivalent to Simon's suggestion. Or maybe Simon has a better way for you to get the condition effect.
0
Entering edit mode
Yes, I agree that this is preferable for power (I think it's equivalent to adding condition:rep). But then Jake wants to plot the treatment vs control effect for exons and for introns. So pulling out that effect requires building the right contrast which will involve these sample terms I think. | 2021-09-24 05:47:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6167577505111694, "perplexity": 5435.319938394677}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057504.60/warc/CC-MAIN-20210924050055-20210924080055-00154.warc.gz"} |
http://oize.ihuq.pw/analysis-vs-calculus.html | ## Analysis Vs Calculus
io’s AP Calculus AB score calculator was created to inspire you as you prepare for the upcoming exam. There's no Math Analysis at my school though, so I dont really know what it is. com, Elsevier’s leading platform of peer-reviewed scholarly literature. In connection with the apparent return to Chalmers of (a bit of) the BodyandSoul mathematics education reform project, which was run at Chalmers 1998 - 2006 but was disrupted in 2007 when I moved to KTH, it may be of some interest to compare BodyandSoul as Computational or Constructive Calculus with the current standard of Analytical Calculus as presented by e. The sequence was Alg I/ Geometry/ AlgII/Trig / Calculus. Technical Analysis is the forecasting of future financial price movements based on an examination of past price movements. is a continuous function on the closed interval (i. PART 1: INTRODUCTION TO TENSOR CALCULUS A scalar eld describes a one-to-one correspondence between a single scalar number and a point. See wiki pages. This wikibook aims to be a high quality calculus textbook through which users can master the discipline. The result of this average rate of change. A Tutorial Introduction to the Lambda Calculus Raul Rojas FU Berlin, WS-97/98 Abstract This paper is a short and painless introduction to the calculus. 0) You must be able to perform the following procedures on your calculator: 1. 1 STOCHASTIC (ITO) INTEGRATION The building block of stochastic calculus is stochastic integration with respect to standard Brownian motion1. Renal stones are a common cause of blood in the urine and pain in the abdomen, flank, or groin. Calculus BC). Union workers were told attendance wasn’t mandatory, but if they didn’t come, they would lose some of their expected pay. The department offers a three-term sequence in calculus, MATH 112, 115, and 120. Honors Calculus II (4-0-4) Prerequisite: MATH 10850 Corequisite: MATH 12860 Required of honors mathematics majors. In fact, it's got some amazing applications outside the classroom. While both AP Calculus courses are designed to be college-level classes, Calculus AB is designed to cover the equivalent of one semester of college calculus over the span of a year. You may use your textbook and notes. Introduction to Discrete Mathematics (4). Precalculus and Discrete Mathematics. It consists of the traditional calculus topics of differentiation, differential equations and integration, together with far-reaching, powerful extensions of these that play a major role in applications to physics and engineering. Math for Everyone. Young children struggle with left and right, directionality, counting reliably, number-amount associations, memory of numbers and quantitative information, memory of instructions, short-term memory (working memory), time awareness, telling time, time management, schedules, organization, sequencing, procedures for arithmetic, place value, memory of addition and multiplication facts, memory of. This is a calculus textbook at the college Freshman level based on Abraham Robinson's infinitesimals, which date from 1960. MATH 10860. Current Math Schedule ; The schedules above list only those classes offered in the specified semester. Calculus Prerequisites. They are to be used only on positive series. Get Answer to BRANDS AS GROWTH PLATFORMSNo problem is more critical to CEO’s than generating profitable growth. The word calculus (plural calculi) is a Latin word, meaning originally "small pebble" (this meaning is kept in medicine). Fractional order Mechanics why, what and when. The calculus of scalar valued. I went even further so that I could understand better. Multivariable calculus: Linear approximation and Taylor's theorems, Lagrange multiples and constrained optimization, multiple integration and vector analysis including the theorems of Green, Gauss, and Stokes. This is a textbook for an introductory course in complex analysis. Player Impact Plus-Minus (PIPM for short) is a plus-minus metric that I have been building out for nearly six months. ∃ t ∈ r (Q(t)) = ”there exists” a tuple in t in. Dirichlet is best known for his work on number theory and analysis. Also from The Trillia Group are Mathematical Analysis I, and Mathematical Analysis II, by Elias Zakon. Hours subject to change. Brands growprimarily through product dev. Take any causal problem for which you know the answer in advance, submit it for analysis through the do-calculus and marvel with us at the power of the calculus to deliver the correct result in just 3–4 lines of derivation. This is called finite differences. Indefinite limits and expressions, evaluations of). There was this post on Quora by Erick Watson a while ago that caught our collective attention at QuestionPro, because it was so illustrative of the difference between calculus and statistics. Analysis versus topology 1. carbonate-C Fractions | Various biomaterials (e. Such topics and discussions will cover ideas and concepts encountered when dealing with vector analysis. Note that Calculus II is a prerequisite for Accelerated Multivariable Calculus, and both Calculus II and Calculus III are prerequisites for Calculus IV. AMS 315: Data Analysis. Honors Calculus: A third semester calculus course for students of greater aptitude and motivation. On top of that we will need to choose the new series in such a way as to give us an easy limit to compute for $$c$$. Calculus classroom demonstrations. Statistics is the discipline that concerns the collection, organization, displaying, analysis, interpretation and presentation of data. Precalculus review and Calculus preview - Shows Precalculus math in the exact way you'll use it for Calculus - Also gives a preview to many Calculus concepts. In SL, 40 teaching hours are recommended while it is 48 hours for HL. The same analysis holds if we place 3 in the last position, so that the total number of odd numbers is 2. I went even further so that I could understand better. It has been used for our undergraduate complex analysis course here at Georgia Tech and at a few other places that I know of. Let S be a surface in xyz space. Include your name, PID, time/date availability, and a brief description of your advising inquiry. My course removes the frustration out of business calculus and makes things clear. Homework 5 Due on: July 19, 2010. Analysis versus topology 1. (Invited Speakers) Applied Fractional Calculus (AFC) @ UCMerced Weekly Meeting. Boost your test scores with easy to understand online courses that take the struggle out of learning calculus. 01 Accelerated Calculus II: 5. Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. , pros and cons, advantages and disadvantages) associated with a particular choice. As we saw on the previous page, if a local maximum or minimum occurs at a point then the derivative is zero (the slope of the function is zero or horizontal). When drawing a vector in 3-space, where you position the vector is unimportant; the vector's essential properties are just its magnitude. Finite math and precalculus are math classes that you can take below the calculus level. REDISH Department of Physics, University of Maryland College Park, MD, 20742-4111 USA Mathematics is an essential element of physics problem solving, but experts often fail to appreciate exactly how they use it. Boost your test scores with easy to understand online courses that take the struggle out of learning calculus. bone collagen) have been analyzed for stable. DBMS - Tuple Relational Calculus - DBMS Tuple Relational Calculus - DBMS video tutorials - Introduction, Database System Applications, Database System Versus File System, Data Abstraction, Instances And Schemas, Database Users And User Interfaces, Database Administrator, Data Models, Database Languages, Database System Structure, Database Design With E-R Model, Entity And Entity Set, Attribute. 3 Conservative Vector Fields and the Fundamental Theorem for Line Integrals 15. The calculus of scalar valued. Introduction To Mathematical Analysis John E. Class Size and Teacher Effects on Student Achievement and Dropout Rates in University-Level Calculus Introduction It is widely believed by faculty, administrators, and students that university students are better served in small classes than in large ones. Points of Inflection. The articles are coordinated to the topics of Larson Calculus. The beginner should note that I have avoided blocking the entrance to the concrete facts of the differential and integral calculus by discussions of fundamental matters, for which he is not yet ready. PatrickJMT: making FREE and hopefully useful math videos for the world! Get my latest book. On Mathematics Education: Algebra vs. In particular, I’ll explain how the causal calculus can sometimes (but not always!) be used to infer causation from a set of data, even when a randomized controlled experiment is not possible. Many schools do not give credit for both Advanced Calculus and Calculus III because they are so similar. Older terms are infinitesimal analysis or mathematical analysis. MATH 230H Honors Calculus and Vector Analysis (4) This course is the third in a sequence of three calculus courses designed for students in engineering, science, and related fields. SA is a post-optimality procedure with no power of influencing the solution. The new 2016 SAT Contents vs. In this section we consider double integrals over more general regions. The Second Derivative Test relates the concepts of critical points, extreme values, and concavity to give a very useful tool for determining whether a critical point on the graph of a function is a relative minimum or maximum. Position vs Displacement vs Total Distance Traveled,. CliffsNotes is the original (and most widely imitated) study guide. Vector Calculus: grad div and curl. Calculus for Life Sciences I: 4: Math 007B: Calculus for Life Sciences II: 4: Math 008A: Introduction to College Mathematics for Science: 5: Math 008B: Introduction to College Mathematics for Science : 5: Math 009A: First Year Calculus: 4: Math 009B: First Year Calculus II: 4: Math 009C: First Year Calculus III: 4: Math 010A: Calculus of. time (in seconds) and add an appropriate best fit curve. The stones themselves are called renal caluli. There are differences between qualitative data. Indeed, most college ranking and evaluation systems reward. Differentiation and Integration are two building blocks of calculus. Calculus I for Biological Sciences: 148 • Calculus II for Biological Sciences: 150 • Functions, Trigonometry and Linear Systems: 151 • Engineering Mathematics I: 152 • Engineering Mathematics II: 166: Topics in Contemporary Mathematics II: 167 • Explorations in Mathematics: 170 • Freshman Mathematics Laboratory: 171 • Analytic. I discovered that there is a lot of commonality between these two courses, and I can see how there is some confusion. 1140 West Mission Road San Marcos, CA 92069 (760) 744-1150. It provides rich, adaptive mathematics instructional programs for students at all levels of ability. Roman Reigns may be performing good deeds, but it's not appearing as WWE intends. Introduction to concepts and methods of calculus for students with little or no previous calculus experience. The word calculus (plural calculi) is a Latin word, meaning originally "small pebble" (this meaning is kept in medicine). Hey r/math, I'm really interested in the way math is taught at a college level in the anglophone world. The new 2016 SAT Contents vs. The calculus of scalar valued. 515 Dewey Decimal Classification 515 582 515 Analysis Including analysis and calculus combined with other branches of mathematics; general aspects of analysis (e. com learn the basics of calculus quickly. Course offerings during the summer vary from year to year. A calculus (plural calculi), often called a stone, is a concretion of material, usually mineral salts, that forms in an organ or duct of the body. Most of the tests below have worked solutions (solution keys) available as a separate document. Stochastic Programming: Sensitivity analysis (SA) and Stochastic Programming (SP) formulations are the two major approaches used for dealing with uncertainty. The word calculus (plural calculi) is a Latin word, meaning originally "small pebble" (this meaning is kept in medicine). Calculus and its History for Teachers: 3. The double integral is given by To derive this formula we slice the three-dimensional region into slices parallel to the. Another one from The Trillia Group is An Introduction to the Theory of Numbers by Leo Moser. I"ve also added some review quizzes on the bottom of this page. I don9t think a year has passed in which I have not had students who earned a higher grade from me than they received in math analysis. A study of real analysis allows for an appreciation of the many interconnections with other mathematical areas. - Linear functions have the same rate of change no matter where we start. (3) (MA 0003 is a developmental course designed to prepare a student for university mathematics courses at the level of MA 1313 College Algebra: credit received for this course will not be applicable toward a degree). Science and Engineering students will also have an advanced. A typical engineering-oriented course in ordinary differential equations focuses on solving initial value problems (IVP): first by elementary methods, then with power series (if nobody updated the syllabus since 1950s), then with the Laplace transform. To run regression analysis in Microsoft Excel, follow these instructions. Differential calculus makes it possible to compute the limits of a function in many cases when this is not feasible by the simplest limit theorems (cf. Calculus BC will take your learning further, throwing in more advanced concepts. I've been having troubles with numerical integration over implicit regions, so I checked in a simple example if the result coincides with integrating with a DiracDelta function, and found this rather. This site is a part of the JavaScript E-labs learning objects for decision making. Precalculus and Discrete Mathematics. Differential calculus is extensively applied in many fields of mathematics, in particular in geometry. It really depends on your current level of math. Analysis, as others have pointed, will likely have a lot of proofs, or may be a class on proofs entirely. In mathematics education, calculus denotes courses of elementary mathematical analysis, which are mainly devoted to the study of functions and limits. Quantitative Chemical Analysis or quantitative chemistry is performed at Laboratory Testing Inc. Regression analysis can be very helpful for analyzing large amounts of data and making forecasts and predictions. You Can Turn Your Calculus Grade Around. I have tried to be somewhat rigorous about proving. Analyses is the plural of analysis. Calculus of kidney with calculus of ureter. MathArticles. The applications of this calculation range from macroeconomic tax models to signal analysis. Students, teachers, parents, and everyone can find solutions to their math problems instantly. Calculus (differentiation and integration) was developed to improve this understanding. The department offers a three-term sequence in calculus, MATH 112, 115, and 120. Differential calculus with applications to life sciences. Show me how to get started. One-Variable Calculus: Differentiability of Functions • Slope of a Linear Function: The slope of a linear function f measures how much f(x) changes for each unit increase in x. I am teaching Pre Calculus with a book Math Analysis. Algebra is simple to understand and can be used in everyday life, but calculus being complicated has its applications in professional fields only. The word calculus (plural calculi) is a Latin word, meaning originally "small pebble" (this meaning is kept in medicine). com is a free math website that explains math in a simple way, and includes lots of examples, from Counting through Calculus. How to sketch the graphs of polynomials. Browse By Program. View 30 photos for 3823 Calculus Dr, Dallas, TX 75244 a 4 bed, 3 bath, 2,408 Sq. Analysis, as others have pointed, will likely have a lot of proofs, or may be a class on proofs entirely. Newton's analysis involved taking ratios of infinitesimals. So in a calculus context, or you can say in an economics context, if you can model your cost as a function of quantity, the derivative of that is the marginal cost. He maintained that historical research. 750 Chapter 11 Limits and an Introduction to Calculus The Limit Concept The notion of a limit is a fundamental concept of calculus. Calculus I and II). “Calculus certainly isn’t an appropriate course for every student,” Lord says, “but there are quantitative analysis and reasoning skills to be gained by anyone who takes on the challenge of Calculus. As a result, certain properties of polynomials are very "power-like. Many schools do not give credit for both Advanced Calculus and Calculus III because they are so similar. Work on your own and do not discuss the problems with your classmates or anyone else. com: Free Precalculus Review and Calculus Preview Lessons and Practice Problems. The analytical tutorials may be used to further develop your skills in solving problems in calculus. I took the SAT Test Specifications and compared them to the SVSD’s Algebra 2 and Pre-Calculus textbooks. y = 2 - 3x is a function 2. Prerequisite: MA 241 with grade of C- or better or AP Calculus credit, or Higher Level IB credit. Boost your test scores with easy to understand online courses that take the struggle out of learning calculus. Calculus and Higher Math calculus: Branch of mathematics concerned with the study of such concepts as the rate of change of one variable quantity with respect to another, the slope of a curve at a prescribed point, the computation of the maximum and minimum values of functions, and the calculation of the area bounded by curves. Of course neither Leibniz nor Newton thought in terms of functions, but both always thought in terms of graphs. Like weather forecasting, technical analysis does not result in absolute predictions about the future. The only background required of the reader is a good knowledge of advanced calculus and linear algebra. If you plan to complete one of our introductory courses, review of prerequisite material is available through this link. Request PDF on ResearchGate | Isotope Analysis of Dental Calculus to Study Paleodiet: organic-C vs. Thomas, Ross L. 01 Accelerated Calculus II: 5. It is a Procedural language. It is suitable for a one-semester course, normally known as "Vector Calculus", "Multivariable Calculus", or simply "Calculus III". The Difference Between Calculus AB and BC. Escondido Education Center. , properties of functions, operations on functions,. Bowdler 1 Using The TI-Nspire Calculator in AP Calculus (Version 3. Thomas, Ross L. Let me know if you need to determine what videos, articles, and practice exercises you haven't done yet. Honors Calculus II (4-0-4) Prerequisite: MATH 10850 Corequisite: MATH 12860 Required of honors mathematics majors. i'm going in 9th grade and doing math in the university of buffalo but pre-calculus is doing dificult multilayered calculater problems where u need ways to use balancer equasions to switch the meaning without changing the problem. I"ve also added some review quizzes on the bottom of this page. Request PDF on ResearchGate | Isotope Analysis of Dental Calculus to Study Paleodiet: organic-C vs. This introductory course is designed to expose students to many of the new developments in Electrical Engineering, especially those on-going in the Department. Advanced Calculus (30 Meg PDF with index) Semi-classical Analysis (2 Meg PDF) Shlomo Sternberg, Harvard University, Department of Mathematics, One Oxford. Newton, on the other hand, wrote more for himself and, as a consequence, tended to use whatever notation he thought of on the day. In this sense the differential calculus is reduced to set theory. Calculus I for Biological Sciences: 148 • Calculus II for Biological Sciences: 150 • Functions, Trigonometry and Linear Systems: 151 • Engineering Mathematics I: 152 • Engineering Mathematics II: 166: Topics in Contemporary Mathematics II: 167 • Explorations in Mathematics: 170 • Freshman Mathematics Laboratory: 171 • Analytic. Calculus has extensive use in Physics in all of its domains. Vectors can be defined in any number of dimensions, though we focus here only on 3-space. Honors Calculus: A third semester calculus course for students of greater aptitude and motivation. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. These slides do not do justice to the history of calculus, nor do they explain calculus to someone who does not already know it, but hopefully they highlight the fact that the history of calculus is interesting, and give some historical background for the material in an introductory real analysis course A Very Brief History of Calculus. Click on secondary education division or higher education division. Finney is a very important book in undergraduate programs. Calculus I and II). Fractional Calculus Analytic Functions, The Magnus Effect, and Wings Fourier Transforms and Uncertainty Propagation of Pressure and Waves The Virial Theorem Causality and the Wave Equation Integrating the Bell Curve Compressor Stalls and Mobius Transformations Dual Failures with General Densities Phase, Group, and Signal Velocity. You Can Turn Your Calculus Grade Around. AMS 315: Data Analysis. Our graduates - bachelors, Masters, and Doctorates - pursue a wide variety of careers within education, industry, and government. Introduction to Calculus for Business and Economics I. 0 Au Sp 2153 Calculus III: 4. The University of Houston's College of Engineering presents this series about the machines that make our civilization run, and the people whose ingenuity created them. Let us generalize these concepts by assigning n-squared numbers to a single point or n-cubed numbers to a single. Standard topics such as limits, differentiation and integration are covered, as well as several others. Ito's Lemma is a stochastic analogue of the chain rule of ordinary calculus. Feel free to modify these and use them for your own exams. Complex analysis. (ex) 40 thousand dollars L'Hospital's Rule It's good for forms 1. Topics include metric spaces, completeness, compactness, total derivatives, partial derivatives, inverse function theorem, implicit function theorem, Riemann integrals in several variables, Fubini. A real function, that is a function from real numbers to real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. DifferenceBetween. This causal calculus is a set of three simple but powerful algebraic rules which can be used to make inferences about causal relationships. Introduction. For the 2015-2016 school year students who utilized Albert resources for AP Calculus AB overtook the national pass average by 12. 0) You must be able to perform the following procedures on your calculator: 1. Also from The Trillia Group are Mathematical Analysis I, and Mathematical Analysis II, by Elias Zakon. com is a free math website that explains math in a simple way, and includes lots of examples, from Counting through Calculus. Why is precalculus hard? Precalculus, which is a combination of trigonometry and math analysis, bridges the gap to calculus, but it can feel like a potpourri of concepts at times. Calculus and Statistics. Learn for free about math, art, computer programming, economics, physics, chemistry, biology, medicine, finance, history, and more. BodyandSoul vs Standard Calculus 0 BodyandSoul is Constructive or Computational Calculus while Standard Calculus as presented in the standard text book Calculus: A Complete Course by Adams and Essex, can be described as Analytic or Symbolic Calculus. Boost your test scores with easy to understand online courses that take the struggle out of learning calculus. Calculate the percent variation in the density values. How to solve a Business Calculus' problem 1. See wiki pages. The most general advice is to watch what your professor writes. How to solve a Business Calculus' problem 1. Fractional order Mechanics why, what and when. In both cases, they study the examples to determine how the different systems operate and the function of each component. Algebra is simple to understand and can be used in everyday life, but calculus being complicated has its applications in professional fields only. Soil & Materials Engineers, Inc. Include your name, PID, time/date availability, and a brief description of your advising inquiry. The remaining 25 (83%) articles involved multivariable analyses; logistic regression (21 of 30, or 70%) was the most prominent type of analysis used, followed by linear regression (3 of 30, or 10%). We know that calculus, the study of how things change, is an important branch of mathematics. Typically differential calculus is taught first, and integral calculus follows, although the opposite o. A very clear way to see how calculus helps us interpret economic information and relationships is to compare total, average, and marginal functions. Honors Calculus: A third semester calculus course for students of greater aptitude and motivation. For Newton the calculus was geometrical while Leibniz took it towards analysis. I therefore invite my colleagues David Cox and Nanny Wermuth to familiarize themselves with the miracles of do-calculus. Perform statistical calculations on raw data - powered by WebMath. But it does seem possible to skip. MATH 230H Honors Calculus and Vector Analysis (4) This course is the third in a sequence of three calculus courses designed for students in engineering, science, and related fields. Calculus 4 comments This time I am going to take a break from heavy use of $\mathrm\LaTeX$ like I used to do in my earlier posts. Points of Inflection. Learn about how the Research department actively supports the College Board mission. Mathematics is a subject with many facets and many applications. Calculus and Higher Math calculus: Branch of mathematics concerned with the study of such concepts as the rate of change of one variable quantity with respect to another, the slope of a curve at a prescribed point, the computation of the maximum and minimum values of functions, and the calculation of the area bounded by curves. I therefore cast them out…. Degrees - Trig Functions at Any Angle Homework: Study!!!! End of Semester Calculus Video Project due Monday 12/16 Friday 12/6 Mastery Model Quizzes - Evaluating Trig Functions - Inverse Trig Functions - Radians vs. Use our breakeven analysis calculator to determine if you may make a profit. PatrickJMT: making FREE and hopefully useful math videos for the world! Get my latest book. Plot the graph of a function within an arbitrary viewing window, 2. The applications of this calculation range from macroeconomic tax models to signal analysis. Part of Calculus Workbook For Dummies Cheat Sheet. Precalculus and Discrete Mathematics. Calculus classroom demonstrations. “Calculus certainly isn’t an appropriate course for every student,” Lord says, “but there are quantitative analysis and reasoning skills to be gained by anyone who takes on the challenge of Calculus. When considering functions made up of the sums, differences, products or quotients of different sorts of functions (polynomials, exponentials and logarithms), or different powers of the same sort of function we say that one function dominates the other. The following topics are presented with applications in the business world: functions, graphs, limits, exponential and logarithmic functions, differentiation, integration, techniques and applications of integration, partial derivatives, optimization, and the. There are differences between qualitative data. Help With Your Math Homework. Uses some home works solution to show how to used MathCAD. Description: This award-winning text carefully leads the student through the basic topics of Real Analysis. BodyandSoul vs Standard Calculus 0 BodyandSoul is Constructive or Computational Calculus while Standard Calculus as presented in the standard text book Calculus: A Complete Course by Adams and Essex, can be described as Analytic or Symbolic Calculus. Free math problem solver answers your calculus homework questions with step-by-step explanations. It contains information up through the end of May 2015. Chart Analysis This section describes the various kinds of financial charts that we provide here at StockCharts. Namely, I wanted a book written by someone who actually knows how to write how-to books instead of by a mathematician writing something that will make sense to other mathematicians. There is a third possibility. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. The new 2016 SAT Contents vs. Post-16, calculus entails the study of change and is commonly divided into two major branches: differential calculus (or differentiation) and integral calculus (or integration). Will not serve as prerequisite for MATH 265 or MATH 266. And once they start researching, beginners frequently find well-intentioned but disheartening advice, like the following: You need to master math. Don't forget unit of the answer. ) Click on a topic below to go to problems on that topic: 1. Click here for audio of Episode 1375. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4. Dear Jeff, We've developed a new product, are about to take it to market, and need to. The University of Houston's College of Engineering presents this series about the machines that make our civilization run, and the people whose ingenuity created them. Newton's analysis involved taking ratios of infinitesimals. The sequence was Alg I/ Geometry/ AlgII/Trig / Calculus. COURSE DESCRIPTION. I think it's possible to go from Math Analysis to Calculus AB (it depends on your ability). Examples: 1. A typical engineering-oriented course in ordinary differential equations focuses on solving initial value problems (IVP): first by elementary methods, then with power series (if nobody updated the syllabus since 1950s), then with the Laplace transform. Vector Calculus: grad div and curl. Students who have not taken calculus at Yale and who wish to enroll in calculus must take the mathematics online placement examination; a link to the online examination and additional information are available on the departmental website. Precalculus Review / Calculus Preview at Cool math. Calculus and Analytic Geometry by George B. The catalog entries for a given semester list all courses that could possibly be offered by the department in that semester; not all of these are. Mathematics is a subject with many facets and many applications. Now I think I have two main alternatives. Differential calculus is extensively applied in many fields of mathematics, in particular in geometry. if thats what math analysis is then there u go if not then thats the differance. The area under a curve between two points can be found by doing a definite integral between the two points. 100) in its various versions covers fundamentals of mathematical analysis: continuity, differentiability, some form of the Riemann integral, sequences and series of numbers and functions, uniform convergence with applications to interchange of limit operations, some point-set topology, including some work in Euclidean n-space. Newton, on the other hand, wrote more for himself and, as a consequence, tended to use whatever notation he thought of on the day. Differentials and Derivatives in Leibniz's Calculus 5. On top of that we will need to choose the new series in such a way as to give us an easy limit to compute for $$c$$. primary focus is on the latter group, the potential users of convex optimization, and not the (less numerous) experts in the field of convex optimization. There is a third possibility. MATH 10860. Standards. A study of real analysis allows for an appreciation of the many interconnections with other mathematical areas. Advanced Calculus (30 Meg PDF with index) Semi-classical Analysis (2 Meg PDF) Shlomo Sternberg, Harvard University, Department of Mathematics, One Oxford. The prerequisites are the standard courses in single-variable calculus (a. By Mark Ryan. Antonyms for calculus. Position vs Displacement vs Total Distance Traveled,. The flux across S is the volume of fluid crossing S per unit time. This causal calculus is a set of three simple but powerful algebraic rules which can be used to make inferences about causal relationships. Using the same velocity-graph as in section two above, answer these questions regarding how far the cart traveled, its average speeds during each interval, and its displacement. Key courses in the major are 111A (group theory and modern algebra) and 105A (Analysis: calculus made rigorous). Technical Analysis is the forecasting of future financial price movements based on an examination of past price movements. Whether you have questions about the universe or a molecule compound or what biome you live in, Sciencing. ∃ t ∈ r (Q(t)) = ”there exists” a tuple in t in. - Radians vs. MA 242 Calculus III 4. This is a calculus textbook at the college Freshman level based on Abraham Robinson's infinitesimals, which date from 1960. View Notes - 08 - calculus vs topology from MATH 4540 at Cornell University. Especially, there is this whole concept of calculus as a separate subject from real analysis, which doesn't exist where i am from (continental europe, germany). Definition of strategic analysis: The process of developing strategy for a business by researching the business and the environment in which it operates. These slides do not do justice to the history of calculus, nor do they explain calculus to someone who does not already know it, but hopefully they highlight the fact that the history of calculus is interesting, and give some historical background for the material in an introductory real analysis course A Very Brief History of Calculus. You need all of the following: – Calculus – Differential equations […] The post The real prerequisite for machine learning isn't math, it's data analysis appeared first on SHARP SIGHT LABS. Trigonometry & Calculus - powered by WebMath. Calculus and Analytic Geometry by George B. PART 1: INTRODUCTION TO TENSOR CALCULUS A scalar eld describes a one-to-one correspondence between a single scalar number and a point. MAT 214 Numbers, Equations, and Proofs An introduction to classical number theory, to prepare for higher-level courses in the department. | 2019-11-14 08:12:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.481389582157135, "perplexity": 1136.950719193187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668334.27/warc/CC-MAIN-20191114081021-20191114105021-00051.warc.gz"} |
https://www.reoclik.com/mathematiques/trigonometrie/definition-de-base-des-fonctions-trigonometriques-7.html | Curiosity, learning and homework help
24/05/2022
Language: FR | ENG
:
### MEMBERS
Come to discuss on the forum!■ FAST and FREE signup. ■ 😀 Access to discussion forums 😀 Help for HOMEWORKS, support in COMPUTER SCIENCE, help for learning FRENCH and ENGLISH, discussion on your INTERESTS and HOBBIES...
### Looking for an English version of this section?
This section is in French because the contents are in French. If you prefer the English version of this section, click on the link below.
However, the contents in the French and the English sections are not necessarily the same!
Définition de base des fonctions trigonométriques
La fonction cosinus
Valeurs particulières de la fonction cosinus (cos) :
$$\displaystyle cos(0^{\circ})=cos(0)=1$$
$$\displaystyle cos(30^{\circ})=cos(\frac{\pi}{6})=\frac{\sqrt{3}}{2}$$
$$\displaystyle cos(45^{\circ})=cos(\frac{\pi}{4})=\frac{\sqrt{2}}{2}$$
$$\displaystyle cos(60^{\circ})=cos(\frac{\pi}{3})=\frac{1}{2}$$
$$\displaystyle cos(90^{\circ})=cos(\frac{\pi}{2})=0$$
$$\displaystyle cos(x + 2\pi)=cos(x)$$
La fonction sinus
Valeurs particulières de la fonction sinus (sin) :
$$\displaystyle sin(0^{\circ})=sin(0)=0$$
$$\displaystyle sin(30^{\circ})=sin(\frac{\pi}{6})=\frac{1}{2}$$
$$\displaystyle sin(45^{\circ})=sin(\frac{\pi}{4})=\frac{\sqrt{2}}{2}$$
$$\displaystyle sin(60^{\circ})=sin(\frac{\pi}{3})=\frac{\sqrt{3}}{2}$$
$$\displaystyle sin(90^{\circ})=sin(\frac{\pi}{2})=1$$
$$\displaystyle sin(x + 2\pi)=sin(x)$$
La fonction tangente
Valeurs particulières de la fonction tangente (tan) :
$$\displaystyle tan(0^{\circ})=tan(0)=0$$
$$\displaystyle tan(30^{\circ})=tan(\frac{\pi}{6})=\frac{\sqrt{3}}{3}$$
$$\displaystyle tan(45^{\circ})=tan(\frac{\pi}{4})=1$$
$$\displaystyle tan(60^{\circ})=tan(\frac{\pi}{3})=\sqrt{3}$$
La fonction tangente n'est pas définie en $$\displaystyle \frac{\pi}{2}$$ ($$\displaystyle 90^{\circ}$$).
$$\displaystyle tan(x + \pi)=tan(x)$$
Relations entre les fonctions trigonométriques
Principales relations entre les fonctions trigonométriques :
$$\displaystyle cos^2(x)+sin^2(x)=1$$
$$\displaystyle tan(x)=\frac{sin(x)}{cos(x)}$$
$$\displaystyle \frac{1}{cos^2(x)}=1+tan^2(x)$$
$$\displaystyle cotan(x)=\frac{1}{tan(x)}$$
$$\displaystyle cotan(x)=\frac{cos(x)}{sin(x)}$$
$$\displaystyle \frac{1}{sin^2(x)}=1+cotan^2(x)$$ | 2022-05-25 19:42:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49856477975845337, "perplexity": 6624.282656548586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662593428.63/warc/CC-MAIN-20220525182604-20220525212604-00105.warc.gz"} |
http://roguelikedeveloper.blogspot.ru/2016/10/texclipse-luatex-2016.html | ## Tuesday, 11 October 2016
### Texclipse & LuaTex 2016
Just some quick notes for fixing up Texclipse to use LuaTex 2016.
1. Remove the --src-specials option from the Build options.
2. Add the following 2 lines as early as you can in your document:
"\usepackage{luatex85}
\def\pgfsysdriver{pgfsys-pdftex.def}"
3. Font paths don't appear to be supported in Windows. Instead copy the fonts into the fonts directory. | 2017-11-17 23:18:56 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8292365074157715, "perplexity": 11466.041657578218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804019.50/warc/CC-MAIN-20171117223659-20171118003659-00198.warc.gz"} |
https://datascience.stackexchange.com/questions/90067/looking-for-binary-class-datasets-with-high-class-imbalance-that-also-have-intr | # Looking for binary class datasets with high class imbalance, that also have intra-class imbalance in the minority class
For a college project I want to compare a few variants of SMOTE in terms of how much they improve classification of the minority class, over using random oversampling.
I have a specific interest in the idea that the minority class may contain small disjuncts that may themselves exhibit imbalance within the class.
I am already looking at the credit card fraud dataset on Kaggle (https://www.kaggle.com/mlg-ulb/creditcardfraud)
Can anyone please point me towards other datasets that have the following kinds of properties:
• a reasonably large number of examples (ideally at least a few thousand)
• have only two class labels
• are highly imbalanced, i.e. the minority class is severely under-represented
• ideally the minority examples would have some intra-class imbalance too
Or even better, is there any kind of good search tool out there for finding datasets based on these kinds of characteristics?
The imblearn.datasets package (documentation is here) has a function called fetch_datasets() which is described as:
fetch_datasets allows to fetch 27 datasets which are imbalanced and binarized | 2023-03-28 01:46:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3007337152957916, "perplexity": 1792.7772122240146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948756.99/warc/CC-MAIN-20230328011555-20230328041555-00687.warc.gz"} |
https://api-project-1022638073839.appspot.com/questions/how-do-you-integrate-x-2-8-x-2-5x-6-using-partial-fractions | # How do you integrate (x^2+8)/(x^2-5x+6) using partial fractions?
Sep 2, 2017
$\int \frac{{x}^{2} + 8}{{x}^{2} - 5 x + 6} \mathrm{dx} = x + 17 \ln \left\mid x - 3 \right\mid - 12 \ln \left\mid x - 2 \right\mid + C$
#### Explanation:
Factorize the denominator:
${x}^{2} - 5 x + 6 = \left(x - 2\right) \left(x - 3\right)$
Before performing partial fraction decomposition of the rational function, the numerator must have lower degree than the denominator, so split the function as:
$\frac{{x}^{2} + 8}{{x}^{2} - 5 x + 6} = \frac{{x}^{2} - 5 x + 6 + 5 x + 2}{{x}^{2} - 5 x + 6} = 1 + \frac{5 x + 2}{{x}^{2} - 5 x + 6}$
Now:
$\frac{5 x + 2}{{x}^{2} - 5 x + 6} = \frac{A}{x - 2} + \frac{B}{x - 3}$
$\frac{5 x + 2}{{x}^{2} - 5 x + 6} = \frac{A \left(x - 3\right) + B \left(x - 2\right)}{\left(x - 2\right) \left(x - 3\right)}$
$5 x + 2 = A x - 3 A + B x - 2 B$
$5 x + 2 = \left(A + B\right) x - \left(3 A + 2 B\right)$
$\left\{\begin{matrix}A + B = 5 \\ 3 A + 2 B = - 2\end{matrix}\right.$
$\left\{\begin{matrix}A = - 12 \\ B = 17\end{matrix}\right.$
So:
$\frac{5 x + 2}{{x}^{2} - 5 x + 6} = \frac{17}{x - 3} - \frac{12}{x - 2}$
$\frac{{x}^{2} + 8}{{x}^{2} - 5 x + 6} = 1 + \frac{17}{x - 3} - \frac{12}{x - 2}$
$\int \frac{{x}^{2} + 8}{{x}^{2} - 5 x + 6} \mathrm{dx} = \int \mathrm{dx} + 17 \int \frac{\mathrm{dx}}{x - 3} - 12 \int \frac{\mathrm{dx}}{x - 2}$
$\int \frac{{x}^{2} + 8}{{x}^{2} - 5 x + 6} \mathrm{dx} = x + 17 \ln \left\mid x - 3 \right\mid - 12 \ln \left\mid x - 2 \right\mid + C$
Sep 2, 2017
$x + 17 \ln | x - 3 | - 12 \ln | x - 2 | + C$
#### Explanation:
Dividing by the denominator leads to
{x^2+8}/{x^2−5x+6} = 1+{5x+2}/{x^2−5x+6}
Since x^2−5x+6 = (x-3)(x-2) we can write
{5x+2}/{x^2−5x+6} = A/{x-3}+B/{x-2}
Multiplying both sides by x^2−5x+6 leads to
$5 x + 2 = A \left(x - 2\right) + B \left(x - 3\right)$
Substituting $x = 3$ and $x = 2$ in turn leads to the result $A = 17 , B = - 12$, so that
{x^2+8}/{x^2−5x+6} = 1+17/{x-3}-12/{x-2}
Integrating this leads to the quoted result quickly! | 2021-10-19 18:32:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 23, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.96867436170578, "perplexity": 2382.978107438281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585280.84/warc/CC-MAIN-20211019171139-20211019201139-00461.warc.gz"} |
https://www.jack-yin.com/status/3784.html | ## 每日一句-2021-05-12
As long as we have memories, yesterday remains. As long as we have hopes, tomorrow awaits.
微信赞赏 支付宝赞赏
【上一篇】
【下一篇】 | 2022-01-29 01:33:55 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8902592062950134, "perplexity": 10722.946214473106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299894.32/warc/CC-MAIN-20220129002459-20220129032459-00689.warc.gz"} |
http://mathhelpforum.com/calculus/22068-series-taylor.html | # Math Help - Series/Taylor
1. ## Series/Taylor
Evaluate the integral as a power series. What is the radius of convergence?
1) integral of [ ln(1-t) / t ] dt
Evaluate the indefinite integral as an infinite series
2) integral of [(e^x-1)/x ]dx
Can someone show me how these are done? thank you
2. Originally Posted by xfyz
2) integral of [(e^x-1)/x ]dx
Can someone show me how these are done? thank you
$e^x = 1+x+x^2/2!+x^3/3!+...$
So,
$e^x - 1 = x+x^2/2!+x^3/3!+...$
Thus,
$(e^x-1)/x = 1 + x/2!+x^2/3!+...$
Now integrate term-by-term and what do you get?
3. Originally Posted by ThePerfectHacker
$e^x = 1+x+x^2/2!+x^3/3!+...$
So,
$e^x - 1 = x+x^2/2!+x^3/3!+...$
Thus,
$(e^x-1)/x = 1 + x/2!+x^2/3!+...$
Now integrate term-by-term and what do you get?
After I integrate each term, am i done? This is where i get confused in this whole series topic. After i am finished integrating that
x + x^2 / 2*2! + x^4 / 4*3! + ... am i done?
4. Originally Posted by xfyz
After I integrate each term, am i done?
Yes there is no other way to integrate this function. The answer that you got is the anti-derivative just in infinite series form. | 2016-05-25 21:45:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9828710556030273, "perplexity": 1524.9297069094646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275328.63/warc/CC-MAIN-20160524002115-00004-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://socratic.org/questions/when-using-the-hofmann-voltameter-which-of-the-two-gases-collected-are-positivel#623333 | # When using the Hofmann Voltameter, which of the two gases collected are positively or negatively charged?
May 31, 2018
Slightly confusing question if the solution you are using is just slightly acidic water (as it often is.)
#### Explanation:
The reason why the question confuses is me is that the two gases collected (hydrogen and oxygen) are both neutral (uncharged) diatomic gases. I think you meant to ask how do you know which gas collect at the anode (positive side) and which at the cathode (negative side), yes?
If that is the case it is the oxygen that collects at the anode, losing two electrons per atom ($2 {O}^{2 -} \rightarrow 4 {e}^{-} + {O}_{2}$) and the converse at the cathode ($2 {H}^{+} + 2 {e}^{-} \rightarrow {H}_{2}$)
Any help?
Forgot to say, you collect twice the volume of ${H}_{2}$ as you are electrolysing water, whose formula is ${H}_{2} O$. | 2022-01-29 05:11:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.812722384929657, "perplexity": 1216.3849428129724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299927.25/warc/CC-MAIN-20220129032406-20220129062406-00253.warc.gz"} |
http://openstudy.com/updates/50baea03e4b0bcefefa02b9f | A community for students. Sign up today
Here's the question you clicked on:
kris2685 2 years ago (-4,7) and (4,2) find the slope of the line containing the given pair of points. if the slope is undefined state this.
• This Question is Closed
1. kropot72
Use the following equation to find the slope m: $m=\frac{y-y _{1}}{x-x _{1}}$
Ask your own question
Sign Up
Find more explanations on OpenStudy
Privacy Policy | 2015-08-02 18:34:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49591901898384094, "perplexity": 1878.905456158171}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989178.64/warc/CC-MAIN-20150728002309-00061-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://zbmath.org/?q=an%3A0773.39001 | ## Oscillation criteria for second order difference equation.(English)Zbl 0773.39001
Consider the difference equation $$\Delta^ 2x_{k-1}+b_ kx_ k=0$$, $$k\geq 1$$, where $$\Delta x_ k=x_{k+1}-x_ k$$ and $$b_ k\geq 0$$ with infinitely many positive terms. This equation is said to be oscillatory if it admits a solution $$\{x_ k\}^ \infty_ 0$$ with the property that for any $$N\geq 1$$, there exists an $$n\geq N$$, such that $$x_ kx_{k+1}\leq 0$$; and the equation is said to be nonoscillatory otherwise. In this paper, several necessary, sufficient conditions for the equation being oscillatory or nonoscillatory are proved. One of the results looks like the following: If the difference is nonoscillatory, then $${\varliminf_{n\to\infty}n\sum^ \infty_{k=n+1}b_ k\leq 1/4}$$; and if the difference equation is oscillatory, then $$\varlimsup_{n\to\infty}n\sum^ \infty_{k=n+1}b_ k\geq 1/4$$.
### MSC:
39A10 Additive difference equations 39A12 Discrete version of topics in analysis | 2022-05-27 21:59:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7046119570732117, "perplexity": 269.0060248505632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663006341.98/warc/CC-MAIN-20220527205437-20220527235437-00436.warc.gz"} |
http://scidb-py.readthedocs.io/en/stable/operations.html | # Basic Math on SciDB array objects¶
Operations on SciDBArray objects generally return new SciDBArray objects. The general idea is to promote function composition involving SciDBArray objects without moving data between SciDB and Python.
The scidbpy package provides quite a few common operations including subsetting, pointwise application of scalar functions, aggregations, and pointwise and matrix arithmetic.
Standard numpy attributes like shape, ndim and size are defined for SciDBArray objects:
>>> X = sdb.random((5, 10))
>>> X.shape
(5, 10)
>>> X.size
50
>>> X.ndim
2
Many SciDB-specific attributes are also defined, including chunk_size, chunk_overlap, and sdbtype,
>>> X.chunk_size
[1000, 1000]
>>> X.chunk_overlap
[0, 0]
>>> X.sdbtype
sdbtype('<f0:double>')
SciDBArrays also contain a datashape object, which encapsulates much of the interface between Python and SciDB data, including the full array schema:
>>> Xds = X.datashape
>>> Xds.schema
'<f0:double> [i0=0:4,1000,0,i1=0:9,1000,0]'
## Scalar functions of SciDBArray objects (aggregations)¶
The package exposes the following aggregations:
Name Description
min() minimum value
max() maximum value
sum() sum of values
var() variance of values
stdev() standard deviation of values
std() standard deviation of values
avg() average/mean of values
mean() average/mean of values
count() count of nonempty cells
approxdc() fast estimate of the number of distinct values
Examples: Minimum Aggregates
Each operation can be computed across the entire array, or across specified dimensions by passing the index or indices of the desired dimensions. For example:
>>> np.random.seed(0)
>>> X = sdb.from_array(np.random.random((5, 3)))
>>> X.toarray()
array([[ 0.5488135 , 0.71518937, 0.60276338],
[ 0.54488318, 0.4236548 , 0.64589411],
[ 0.43758721, 0.891773 , 0.96366276],
[ 0.38344152, 0.79172504, 0.52889492],
[ 0.56804456, 0.92559664, 0.07103606]])
Here we’ll find the minimum of all values in the array. The returned result is a new SciDBArray, so we select the first element:
>>> X.min()[0]
0.071036058197886942
Like numpy, passing index 0 gives us the minimum within every column:
>>> X.min(0).toarray()
array([ 0.38344152, 0.4236548 , 0.07103606])
Passing index 1 gives us the minimum within every row:
>>> X.min(1).toarray()
array([ 0.5488135 , 0.4236548 , 0.43758721, 0.38344152, 0.07103606])
Note that the convention for specifying aggregate indices here is designed to match numpy, and is opposite the convention used within SciDB. To recover SciDB-style aggregates, you can use the scidb_syntax flag:
>>> X.min(1, scidb_syntax=True).toarray()
array([ 0.38344152, 0.4236548 , 0.07103606])
Further Examples
These operations return new SciDBArray objects consisting of scalar values. Here are a few examples that materialize their results to Python:
>>> tridiag.count()[0]
28
>>> tridiag.sum()[0]
20.0
>>> tridiag.var()[0]
1.6190476190476193
Note that a count of nonempty cells is also directly available from the nonempty() function:
>>> tridiag.nonempty()
28
A related function is nonnull(), which counts the number of nonempty cells which do not contain a null value. In this case, the result is the same as nonempty():
>>> tridiag.nonnull()
28
## Pointwise application of scalar functions¶
The package exposes SciDB scalar-valued scalar functions that can be applied element-wise to SciDB arrays:
Function Description
sin() Trigonometric sine
asin() Trigonometric arc-sine / inverse sine
cos() Trigonometric cosine
acos() Trigonometric arc-cosine / inverse cosine
tan() Trigonometric tangent
atan() Trigonometric arc-tangent / inverse tagent
exp() Natural exponent
log() Natural logarithm
log10() Base-10 logarithm
sqrt() Square root
ceil() Ceiling function
floor() Floor function
is_nan() Test for NaN values
All trigonometric functions assume arguments are given in radians. Here is a simple example that compares a computation in SciDB with a local one (using the ‘tridiag array defined in the last examples):
>>> sin_tri = sdb.sin(tridiag)
>>> np.linalg.norm(sin_tri.toarray() - np.sin(tridiag.toarray()))
0.0
## Shape and layout functions¶
Arrays may be transposed and their data re-arranged into new shapes with the usual transpose() and reshape() functions:
>>> tri_reshape = tridiag.reshape((20,5))
>>> tri_reshape.shape
(20, 5)
>>> tri_reshape.transpose().shape
(5, 20)
>>> tri_reshape.T.shape # shortcut for transpose
(5, 20)
## Arithmetic¶
The package defines elementwise operations on all arrays and linear algebra operations on matrices and vectors. Scalar multiplication is supported.
Element-wise sums and products:
>>> np.random.seed(1)
>>> X = sdb.from_array(np.random.random((10, 10)))
>>> Y = sdb.from_array(np.random.random((10, 10)))
>>> S = X + Y
>>> D = X - Y
>>> M = 2 * X
>>> (S + D - M).sum()[0]
-1.1102230246251565e-16
We can combine operations as well:
>>> Z = 0.5 * (X + X.T)
There are also linear algebra operations (matrix-matrix product, matrix-vector product) using the dot() function:
>>> XY = sdb.dot(X, Y)
>>> XY1 = sdb.dot(X, Y[:,1])
>>> XTX = sdb.dot(X.T, X)
Numpy broadcasting conventions are generally followed in operations involving differently-sized SciDBArray objects. Consider the following example that centers a matrix by subtracting its column average from each column.
First we create a test array with 5 columns:
>>> np.random.seed(0)
>>> X = sdb.from_array(np.random.random((10, 5)))
Now create a vector of column means:
>>> xcolmean = X.mean(0)
>>> xcolmean.shape
(5,)
Subtract these means from the columns – this is a broadcasting operation:
>>> XC = X - xcolmean
To check that the columns are now centered, we compute the column mean of XC:
>>> XC.mean(1).toarray()
array([ -2.22044605e-17, 4.44089210e-17, -1.11022302e-17,
1.11022302e-16, -3.33066907e-17])
The broadcasting operation which creates XC is implemented using a join operation along dimension 1.
## Lazy Evaluation¶
When possible, SciDB-Py defers actual database computation until data are needed. It does this by using lazy arrays, which are references to as-yet unevaluated SciDB queries. Many array methods actually return lazy arrays:
>>> x = sdb.random((3,4))
>>> x.name # an array in the database
'py1102522658694_00001'
>>> y = x.mean(0)
>>> y.name # not yet in the database
'aggregate(py1102522658694_00001,avg(f0),i1)'
Note that y’s name doesn’t refer to an array in the database, but rather a query on x. Lazy arrays can also be identified by their non-null query attribute:
>>> y.query
'aggregate(py1102522658694_00001,avg(f0),i1)'
>>> x.query is None
True
Calling eval() forces lazy-arrays to be evaluated (it has no effect on non-lazy arrays):
>>> y.eval()
>>> y.name
'py1102522658694_00014'
In most cases you don’t need to worry about whether an array is lazy or not – lazy arrays have all the same methods as regular arrays, and normally the difference is transparent to the user. However, lazy arrays can be more efficient with regard to compound queries. Consider an equation like the law of cosines:
c2 = a ** 2 + b ** 2 - 2 * a * b * sdb.cos(C)
This equation involves creating 7 intermediate data products:
• t1 = a ** 2
• t2 = b ** 2
• t3 = 2 * a
• t4 = t3 * b
• t5 = sdb.cos(C)
• t6 = t4 * t5
• t7 = t1 + t2
• c2 = t7 - t6
If a, b, and C are large SciDBArrays, this involves many round-trip communiciations to the databse, several passes over the data, and the storage of 7 arrays. Lazy arrays reduce this overhead by representing some of these temporary arrays as unevaluated sub-queries. Passing larger queries to SciDB at once also gives the database more opportunity to optimize the final query, performing the computation in fewer passes over the data.
In some situations it’s necessary or more efficient to force evaluation of lazy arrays (often places where an array appears several times in a complex query). Some SciDB-Py methods perform this evaluation internally. You should also consider calling eval()` on lazy arrays if you think the unevaluated queries are becoming too cumbersome. | 2017-10-24 01:53:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2937741279602051, "perplexity": 7246.040750945638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187827853.86/warc/CC-MAIN-20171024014937-20171024034937-00880.warc.gz"} |
https://gigaom.com/2010/11/29/stealth-kurion-emerges-to-turn-nuclear-waste-into-glass/ | # Stealth Kurion Emerges to Turn Nuclear Waste Into Glass
Turning nuclear waste into glass — called vitrification — is the generally accepted way of dealing with nuclear waste. Engineering giant Bechtel is building the world’s largest vitrification plant in Hanford, Washington for the Department of Energy. But a startup called Kurion emerged from stealth on Monday with a plan to modularize that vitrification nuclear waste management process, making it cheaper, faster and more efficient.
At least that’s the idea. Josh Wolfe, a partner with Lux Capital that invested in Kurion along with Firelake Capital, explained to me in an interview that Kurion’s process called the “Modular Vitrification System (MVS),” “brings the technology to the waste tanks, instead of taking the waste to a massive centralized treatment plant.” “Our technology flips the vitrification process on its head,” said Wolfe, “making vitrification an order of magnitude less expensive.”
In addition Kurion says it has developed a better vitrification pre-treatment process — basically the first step in treating the nuclear waste and turning it into glass. Vitrification essentially permanently encapsulates nuclear waste, and while it’s still radioactive, the waste can be stored and transported more easily.
Kurion has hit some key milestones, which is why it has started talking now after two years in development (I only first read about them back in April). The company says that it has completed small scale testing of its technology, and has moved into “a long series of tests on simulated waste streams,” scheduled to start this month. Kurion also says it has a contract with engineering firm CH2MHill to test out its tech to manage uranium metal bearing sludges at the Hanford site.
Nuclear waste management is a problem that hasn’t seen a whole lot of innovation over the past few decades. Wolfe said that $1 out of every$4 from the Department of Energy’s budget goes toward nuclear waste management, so there is a sizable opportunity to help the DOE cut that expense.
Kurion sees two potential types of customers: 1). engineering companies like Bechtel (that get money from the DOE to manage waste) and 2). commercial utilities that control a third of the nuclear fleet. Bechtel will always be a major player in this market, said Wolfe, and they could be a potential partner or even acquirer some day.
Wolfe emphasized to me that the Kurion investment was the only way Lux Capital could envision backing nuclear from a VC perspective. “We wanted to be totally agnostic about whether there would be a nuclear renaissance or a decline,” said Wolfe. If more nuclear plants are built, or more are decommissioned, the waste will need to be managed somehow. “Win lose or draw nuclear waste is a big market,” said Wolfe.
Bill Gates has noticed the nuclear waste management market, too. Gates has invested in TerraPower (along with Vinod Khosla), which has developed a traveling wave nuclear reactor design that can run on its own waste product.
For more research on cleantech financing check out GigaOM Pro (subscription required):
Image courtesy of Bechtel. | 2021-06-20 18:39:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3479980230331421, "perplexity": 5015.789322072078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488253106.51/warc/CC-MAIN-20210620175043-20210620205043-00241.warc.gz"} |
https://www.txcorp.co.uk/images/docs/vsim/latest/html/VSimCustomization/TxPyUtils.html | # TxPyUtils User Guide
## TxPyUtils Introduction
TxPyUtils is a collection of Python classes providing middleware functionality. The most relevant to users of VSim is the TxOptParse class, for handling the input to analysis scripts and the interaction with the VSimComposer analysis tab.
## Requirements
TxPyUtils has been developed and is tested using Python 2.7. It is known to work with Python 2.6. It works in the embedded Python environment provided by the analysis tab. It depends on the (now deprecated) optparse standard Python module and the readPre module. TxPyUtils in its current form is not compatible with Python 3, due to the deprecation of optparse.
## Usage
TxPyUtils is included in VSimComposer’s embedded Python environment. Outside this environment, to use TxPyUtils, ensure TxPyUtils.py, optparse.py and readPre.py are in your PYTHONPATH. Then simply use:
import TxPyUtils
Typical usage would involve the setting up of a list of options and passing them to the TxOptParse class:
options = []
options.append(('-s','--simulationName','Base simulation name, i.e., the input \
file name without extension.', 'string', None, True))
options.append(('-S','--speciesName', 'Name of the species to analyze', \
'string', None, True))
options.append(('-T','--threshold','Particle densities below this will be \
ignored','float',None,False))
ops = TxPyUtils.TxOptParse(desc=description, ops=options, hasOverwriteOption=True)
If you are using a script written for VSim 7.2 or before, please note that TxOptParse now expects to be told if you do not wish to include an option in your analysis script to overwrite any previous data that it finds with the same filename. You may then read the variables into your script as follows:
baseName = ops['simulationName']
speciesName = ops['speciesName']
threshold = float(ops['threshold'])
overwrite = ops['overwrite']
For each argument we want to read from the command line or from input boxes in the analysis pane into its script, we set up six options set up for TxOptParse as follows.
parameter number parameter purpose example allowed values
1 short argument name ‘-s’ string with a single alphanumeric character following a hyphen
2 long argument name ‘–simulationName string starting with two hyphens, no spaces and only alphanumeric characters
3 description ‘Base simulation name’ any string delimited by single or double quotes, without escape characters
4 argument type, alows for limited parsing by our analysis pane ‘string’ ‘float’ ‘int’ only one of those values mentioned to the left here
5 default value 2, ‘_history’, None See examples. Should be of the correct type
6 is value required? True or False (ie Boolean) Only True or False
Users are encouraged to check the scripts provided by Tech-X for further examples of usage.
This interface is likely to be replaced in future releases of VSim, so please use with care. | 2019-10-19 10:56:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3415697515010834, "perplexity": 5483.397345269558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986692723.54/warc/CC-MAIN-20191019090937-20191019114437-00335.warc.gz"} |
http://ringtheory.herokuapp.com/theorems/theorem/14/ | # Theorem detail
## Alias: Maschke's Theorem (on when a group algebra is semisimple.)
Statement: For any ring $R$ and group $G$, the group ring $R[G]$ is semisimple iff $R$ is semisimple and $|G|$ is a unit in $R$.
### Reference(s)
• T.-Y. Lam. A first course in noncommutative rings. (2013) @ Theorem 6.1 p 80 | 2019-02-18 11:11:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7087691426277161, "perplexity": 710.5559810948621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484928.52/warc/CC-MAIN-20190218094308-20190218120308-00456.warc.gz"} |
https://nbviewer.ipython.org/github/gpschool/dss15/blob/master/regression.ipynb | # Linear Algebra and Linear Regression¶
## Sum of Squares Error¶
The sum of squares error is computed by looking at the difference between a prediction, given by the inner product, $\mathbf{x}_i^\top\mathbf{w}$, and the desired value, $y_i$, and summing. $$E_{i,j}(\mathbf{x}_i, \mathbf{w}) = \left(\mathbf{x}_i^\top \mathbf{w} - y_{i}\right)^2,$$
Minimizing it was first proposed by Legendre in 1805. His book, which was on the orbit of comets, is available on google books, we can take a look at the relevant page by calling the code below.
In [8]:
import pods
Of course, the main text is in French, but the key part we are interested in can be roughly translated as
"In most matters where we take measures data through observation, the most accurate results they can offer, it is almost always leads to a system of equations of the form $$E = a + bx + cy + fz + etc .$$ where a, b, c, f etc are the known coefficients and x , y, z etc are unknown and must be determined by the condition that the value of E is reduced, for each equation, to an amount or zero or very small."
He continues
"Of all the principles that we can offer for this item, I think it is not broader, more accurate, nor easier than the one we have used in previous research application, and that is to make the minimum sum of the squares of the errors. By this means, it is between the errors a kind of balance that prevents extreme to prevail, is very specific to make known the state of the closest to the truth system. The sum of the squares of the errors $E^2 + \left.E^\prime\right.^2 + \left.E^{\prime\prime}\right.^2 + etc$ being \begin{align*} &(a + bx + cy + fz + etc)^2 \
• &(a^\prime + b^\prime x + c^\prime y + f^\prime z + etc ) ^2\
• &(a^{\prime\prime} + b^{\prime\prime}x + c^{\prime\prime}y + f^{\prime\prime}z + etc )^2 \
• & etc \end{align*} if we wanted a minimum, by varying x alone, we will have the equation ..."
This is the earliest know printed version of the problem of least squares. The notation, however, is a little awkward for mordern eyes. In particular Legendre doesn't make use of the sum sign, $$\sum_{i=1}^3 z_i = z_1 + z_2 + z_3$$ nor does he make use of the inner product.
In our notation, if we were to do linear regression, we would need to subsititue: \begin{align*} a &\leftarrow y_1-c, \\ a^\prime &\leftarrow y_2-c,\\ a^{\prime\prime} &\leftarrow y_3 -c,\\ \text{etc.} \end{align*} to introduce the data observations $\{y_i\}_{i=1}^{n}$ alongside $c$, the offset. We would then introduce the input locations \begin{align*} b & \leftarrow x_1,\\ b^\prime & \leftarrow x_2,\\ b^{\prime\prime} & \leftarrow x_3\\ \text{etc.} \end{align*} and finally the gradient of the function $$x \leftarrow -m.$$ The remaining coefficients ($c$ and $f$) would then be zero. That would give us \begin{align*} &(y_1 - (mx_1+c))^2 \
• &(y_2 -(mx_2 + c))^2\
• &(y_3 -(mx_3 + c))^2 \
• & \text{etc.} \end{align*} which we would write in the modern notation for sums as $$\sum_{i=1}^n (y_i-(mx_i + c))^2$$ which is recognised as the sum of squares error for a linear regression.
This shows the advantage of modern summation operator, $\sum$, in keeping our mathematical notation compact. Whilst it may look more complicated the first time you see it, understanding the mathematical rules that go around it, allows us to go much further with the notation.
Inner products (or dot products) are similar. They allow us to write $$\sum_{i=1}^q u_i v_i$$ in a more compact notation, $\mathbf{u}\cdot\mathbf{v}.$
Here we are using bold face to represent vectors, and we assume that the individual elements of a vector $\mathbf{z}$ are given as a series of scalars $$\mathbf{z} = \begin{bmatrix} z_1\\ z_2\\ \vdots\\ z_n \end{bmatrix}$$ which are each indexed by their position in the vector.
## Linear Algebra¶
Linear algebra provides a very similar role, when we introduce linear algebra, it is because we are faced with a large number of addition and multiplication operations. These operations need to be done together and would be very tedious to write down as a group. So the first reason we reach for linear algebra is for a more compact representation of our mathematical formulae.
### Running Example: Olympic Marathons¶
Now we will load in the Olympic marathon data. This is data of the olympic marath times for the men's marathon from the first olympics in 1896 up until the London 2012 olympics.
In [9]:
data = pods.datasets.olympic_marathon_men()
x = data['X']
y = data['Y']
You can see what these values are by typing:
In [10]:
print(x)
print(y)
[[ 1896.]
[ 1900.]
[ 1904.]
[ 1908.]
[ 1912.]
[ 1920.]
[ 1924.]
[ 1928.]
[ 1932.]
[ 1936.]
[ 1948.]
[ 1952.]
[ 1956.]
[ 1960.]
[ 1964.]
[ 1968.]
[ 1972.]
[ 1976.]
[ 1980.]
[ 1984.]
[ 1988.]
[ 1992.]
[ 1996.]
[ 2000.]
[ 2004.]
[ 2008.]
[ 2012.]]
[[ 4.47083333]
[ 4.46472926]
[ 5.22208333]
[ 4.15467867]
[ 3.90331675]
[ 3.56951267]
[ 3.82454477]
[ 3.62483707]
[ 3.59284275]
[ 3.53880792]
[ 3.67010309]
[ 3.39029111]
[ 3.43642612]
[ 3.20583007]
[ 3.13275665]
[ 3.32819844]
[ 3.13583758]
[ 3.0789588 ]
[ 3.10581822]
[ 3.06552909]
[ 3.09357349]
[ 3.16111704]
[ 3.14255244]
[ 3.08527867]
[ 3.10265829]
[ 2.99877553]
[ 3.03392977]]
Note that they are not pandas data frames for this example, they are just arrays of dimensionality $n\times 1$, where $n$ is the number of data.
The aim of this lab is to have you coding linear regression in python. We will do it in two ways, once using iterative updates (coordinate ascent) and then using linear algebra. The linear algebra approach will not only work much better, it is easy to extend to multiple input linear regression and non-linear regression using basis functions.
### Plotting the Data¶
You can make a plot of $y$ vs $x$ with the following command:
In [11]:
%matplotlib inline
import pylab as plt
plt.plot(x, y, 'rx')
plt.xlabel('year')
plt.ylabel('pace in min/km')
Out[11]:
<matplotlib.text.Text at 0x10dfa17d0>
### Maximum Likelihood: Iterative Solution¶
Now we will take the maximum likelihood approach we derived in the lecture to fit a line, $y_i=mx_i + c$, to the data you've plotted. We are trying to minimize the error function:
$$E(m, c) = \sum_{i=1}^n(y_i-mx_i-c)^2$$
with respect to $m$, $c$ and $\sigma^2$. We can start with an initial guess for $m$,
In [12]:
m = -0.4
c = 80
Then we use the maximum likelihood update to find an estimate for the offset, $c$.
### Coordinate Descent¶
We will try optimising the model by an approach known as coordinate descent. In coordinate descent, we choose to move one parameter at a time. Ideally, we design an algorithm that at each step moves the parameter to its minimum value. At each step we choose to move the individual parameter to its minimum.
To find the minimum, we look for the point in the curve where the gradient is zero. This can be found by taking the gradient of $E(m,c)$ with respect to the parameter.
#### Update for Offset¶
Let's consider the parameter $c$ first. The gradient goes nicely through the summation operator, and we obtain $$\frac{\text{d}E(m,c)}{\text{d}c} = -\sum_{i=1}^n 2(y_i-mx_i-c).$$ Now we want the point that is a minimum. A minimum is an example of a stationary point, the stationary points are those points of the function where the gradient is zero. They are found by solving the equation for $\frac{\text{d}E(m,c)}{\text{d}c} = 0$. Substituting in to our gradient, we can obtain the following equation, $$0 = -\sum_{i=1}^n 2(y_i-mx_i-c)$$ which can be reorganised as follows, $$c^* = \frac{\sum_{i=1}^n(y_i-m^*x_i)}{n}.$$ The fact that the stationary point is easily extracted in this manner implies that the solution is unique. There is only one stationary point for this system. Traditionally when trying to determine the type of stationary point we have encountered we now compute the second derivative, $$\frac{\text{d}^2E(m,c)}{\text{d}c^2} = 2n.$$ The second derivative is positive, which in turn implies that we have found a minimum of the function. This means that setting $c$ in this way will take us to the lowest point along that axes.
In [13]:
# set c to the minimum
c = (y - m*x).mean()
print c
786.019771145
#### Update for Slope¶
Now we have the offset set to the minimum value, in coordinate descent, the next step is to optimise another parameter. Only one further parameter remains. That is the slope of the system.
Now we can turn our attention to the slope. We once again peform the same set of computations to find the minima. We end up with an update equation of the following form.
$$m^* = \frac{\sum_{i=1}^n (y_i - c)x_i}{\sum_{i=1}^n x_i^2}$$
Communication of mathematics in data science is an essential skill, in a moment, you will be asked to rederive the equation above. Before we do that, however, we will briefly review how to write mathematics in the notebook.
### $\LaTeX$ for Maths¶
These cells use Markdown format. You can include maths in your markdown using $\LaTeX$ syntax, all you have to do is write your answer inside dollar signs, as follows:
To write a fraction, we write $\frac{a}{b}$, and it will display like this $\frac{a}{b}$. To write a subscript we write $a_b$ which will appear as $a_b$. To write a superscript (for example in a polynomial) we write $a^b$ which will appear as $a^b$. There are lots of other macros as well, for example we can do greek letters such as $\alpha, \beta, \gamma$ rendering as $\alpha, \beta, \gamma$. And we can do sum and intergral signs as $\sum \int \int$.
You can combine many of these operations together for composing expressions.
### Gradient With Respect to the Slope¶
In python we can write down the update equation for the slope as follows.
In [14]:
m = ((y - c)*x).sum()/(x**2).sum()
print m
-0.3998724073
We can have a look at how good our fit is by computing the prediction across the input space. First create a vector of 'test points',
In [15]:
import numpy as np
x_test = np.linspace(1890, 2020, 130)[:, None]
Now use this vector to compute some test predictions,
In [16]:
f_test = m*x_test + c
Now plot those test predictions with a blue line on the same plot as the data,
In [17]:
plt.plot(x_test, f_test, 'b-')
plt.plot(x, y, 'rx')
Out[17]:
[<matplotlib.lines.Line2D at 0x10e670b90>]
The fit isn't very good, we need to iterate between these parameter updates in a loop to improve the fit, we have to do this several times,
In [18]:
for i in np.arange(10):
m = ((y - c)*x).sum()/(x*x).sum()
c = (y-m*x).sum()/y.shape[0]
print(m)
print(c)
-0.398725964251
783.527379727
And let's try plotting the result again
In [19]:
f_test = m*x_test + c
plt.plot(x_test, f_test, 'b-')
plt.plot(x, y, 'rx')
Out[19]:
[<matplotlib.lines.Line2D at 0x10e68dcd0>]
Clearly we need more iterations than 10! In the next question you will add more iterations and report on the error as optimisation proceeds.
### Question¶
How many iterations do you need to get a good solution? Can you plot the objective function after each iteration? Why is the solution so slow?
## Multiple Input Solution with Linear Algebra¶
You've now seen how slow it can be to perform a coordinate ascent on a system. Another approach to solving the system (which is not always possible, particularly in non-linear systems) is to go direct to the minimum. To do this we need to introduce linear algebra. We will represent all our errors and functions in the form of linear algebra.
As we mentioned above, linear algebra is just a shorthand for performing lots of multiplications and additions simultaneously. What does it have to do with our system then? Well the first thing to note is that the linear function we were trying to fit has the following form: $$f(x) = mx + c$$ the classical form for a straight line. From a linear algebraic perspective we are looking for multiplications and additions. We are also looking to separate our parameters from our data. The data is the givens remember, in French the word is données literally translated means givens that's great, because we don't need to change the data, what we need to change are the parameters (or variables) of the model. In this function the data comes in through $x$, and the parameters are $m$ and $c$.
What we'd like to create is a vector of parameters and a vector of data. Then we could represent the system with vectors that represent the data, and vectors that represent the parameters.
We look to turn the multiplications and additions into a linear algebraic form, we have one multiplication ($m\times c$ and one addition ($mx + c$). But we can turn this into a inner product by writing it in the following way, $$f(x) = m \times x + c \times 1,$$ in other words we've extracted the unit value, from the offset, $c$. We can think of this unit value like an extra item of data, because it is always given to us, and it is always set to 1 (unlike regular data, which is likely to vary!). We can therefore write each input data location, $\mathbf{x}$, as a vector $$\mathbf{x} = \begin{bmatrix} 1\\ x\end{bmatrix}.$$
Now we choose to also turn our parameters into a vector. The parameter vector will be defined to contain $$\mathbf{w} = \begin{bmatrix} c \\ m\end{bmatrix}$$ because if we now take the inner product between these to vectors we recover $$\mathbf{x}\cdot\mathbf{w} = 1 \times c + x \times m = mx + c$$ In numpy we can define this vector as follows
In [20]:
# define the vector w
w = np.zeros(shape=(2, 1))
w[0] = m
w[1] = c
This gives us the equivalence between original operation and an operation in vector space. Whilst the notation here isn't a lot shorter, the beauty is that we will be able to add as many features as we like and still keep the seame representation. In general, we are now moving to a system where each of our predictions is given by an inner product. When we want to represent a linear product in linear algebra, we tend to do it with the transpose operation, so since we have $\mathbf{a}\cdot\mathbf{b} = \mathbf{a}^\top\mathbf{b}$ we can write $$f(\mathbf{x}_i) = \mathbf{x}_i^\top\mathbf{w}.$$ Where we've assumed that each data point, $\mathbf{x}_i$, is now written by appending a 1 onto the original vector $$\mathbf{x}_i = \begin{bmatrix} 1 \\ x_i \end{bmatrix}$$
## Design Matrix¶
We can do this for the entire data set to form a design matrix $\mathbf{X}$,
$$\mathbf{X} = \begin{bmatrix} \mathbf{x}_1^\top \\\ \mathbf{x}_2^\top \\\ \vdots \\\ \mathbf{x}_n^\top \end{bmatrix} = \begin{bmatrix} 1 & x_1 \\\ 1 & x_2 \\\ \vdots & \vdots \\\ 1 & x_n \end{bmatrix},$$
which in numpy can be done with the following commands:
In [21]:
X = np.hstack((np.ones_like(x), x))
print(X)
[[ 1.00000000e+00 1.89600000e+03]
[ 1.00000000e+00 1.90000000e+03]
[ 1.00000000e+00 1.90400000e+03]
[ 1.00000000e+00 1.90800000e+03]
[ 1.00000000e+00 1.91200000e+03]
[ 1.00000000e+00 1.92000000e+03]
[ 1.00000000e+00 1.92400000e+03]
[ 1.00000000e+00 1.92800000e+03]
[ 1.00000000e+00 1.93200000e+03]
[ 1.00000000e+00 1.93600000e+03]
[ 1.00000000e+00 1.94800000e+03]
[ 1.00000000e+00 1.95200000e+03]
[ 1.00000000e+00 1.95600000e+03]
[ 1.00000000e+00 1.96000000e+03]
[ 1.00000000e+00 1.96400000e+03]
[ 1.00000000e+00 1.96800000e+03]
[ 1.00000000e+00 1.97200000e+03]
[ 1.00000000e+00 1.97600000e+03]
[ 1.00000000e+00 1.98000000e+03]
[ 1.00000000e+00 1.98400000e+03]
[ 1.00000000e+00 1.98800000e+03]
[ 1.00000000e+00 1.99200000e+03]
[ 1.00000000e+00 1.99600000e+03]
[ 1.00000000e+00 2.00000000e+03]
[ 1.00000000e+00 2.00400000e+03]
[ 1.00000000e+00 2.00800000e+03]
[ 1.00000000e+00 2.01200000e+03]]
### Writing the Objective with Linear Algebra¶
When we think of the objective function, we can think of it as the errors where the error is defined in a similar way to what it was in Legendre's day $y_i - f(\mathbf{x}_i)$, in statistics these errors are also sometimes called residuals. So we can think as the objective and the prediction function as two separate parts, first we have, $$E(\mathbf{w}) = \sum_{i=1}^n (y_i - f(\mathbf{x}_i; \mathbf{w}))^2,$$ where we've made the function $f(\cdot)$'s dependence on the parameters $\mathbf{w}$ explicit in this equation. Then we have the definition of the function itself, $$f(\mathbf{x}_i; \mathbf{w}) = \mathbf{x}_i^\top \mathbf{w}.$$ Let's look again at these two equations and see if we can identify any inner products. The first equation is a sum of squares, which is promising. Any sum of squares can be represented by an inner product, $$a = \sum_{i=1}^{k} b^2_i = \mathbf{b}^\top\mathbf{b},$$ so if we wish to represent $E(\mathbf{w})$ in this way, all we need to do is convert the sum operator to an inner product. We can get a vector from that sum operator by placing both $y_i$ and $f(\mathbf{x}_i; \mathbf{w})$ into vectors, which we do by defining $$\mathbf{y} = \begin{bmatrix}y_1\\y_2\\ \vdots \\ y_n\end{bmatrix}$$ and defining $$\mathbf{f}(\mathbf{x}_1; \mathbf{w}) = \begin{bmatrix}f(\mathbf{x}_1; \mathbf{w})\\f(\mathbf{x}_2; \mathbf{w})\\ \vdots \\ f(\mathbf{x}_n; \mathbf{w})\end{bmatrix}.$$ The second of these is actually a vector-valued function. This term may appear intimidating, but the idea is straightforward. A vector valued function is simply a vector whose elements are themselves defined as functions, i.e. it is a vector of functions, rather than a vector of scalars. The idea is so straightforward, that we are going to ignore it for the moment, and barely use it in the derivation. But it will reappear later when we introduce basis functions. So we will, for the moment, ignore the dependence of $\mathbf{f}$ on $\mathbf{w}$ and $\mathbf{X}$ and simply summarise it by a vector of numbers $$\mathbf{f} = \begin{bmatrix}f_1\\f_2\\ \vdots \\ f_n\end{bmatrix}.$$ This allows us to write our objective in the folowing, linear algebraic form, $$E(\mathbf{w}) = (\mathbf{y} - \mathbf{f})^\top(\mathbf{y} - \mathbf{f})$$ from the rules of inner products.
But what of our matrix $\mathbf{X}$ of input data? At this point, we need to dust off matrix-vector multiplication. Matrix multiplication is simply a convenient way of performing many inner products together, and it's exactly what we need to summarise the operation $$f_i = \mathbf{x}_i^\top\mathbf{w}.$$ This operation tells us that each element of the vector $\mathbf{f}$ (our vector valued function) is given by an inner product between $\mathbf{x}_i$ and $\mathbf{w}$. In other words it is a series of inner products. Let's look at the definition of matrix multiplication, it takes the form $$\mathbf{c} = \mathbf{B}\mathbf{a}$$ where $\mathbf{c}$ might be a $k$ dimensional vector (which we can intepret as a $k\times 1$ dimensional matrix), and $\mathbf{B}$ is a $k\times k$ dimensional matrix and $\mathbf{a}$ is a $k$ dimensional vector ($k\times 1$ dimensional matrix).
The result of this multiplication is of the form $$\begin{bmatrix}c_1\\c_2 \\ \vdots \\ a_k\end{bmatrix} = \begin{bmatrix} b_{1,1} & b_{1, 2} & \dots & b_{1, k} \\ b_{2, 1} & b_{2, 2} & \dots & b_{2, k} \\ \vdots & \vdots & \ddots & \vdots \\ b_{k, 1} & b_{k, 2} & \dots & b_{k, k} \end{bmatrix} \begin{bmatrix}a_1\\a_2 \\ \vdots\\ c_k\end{bmatrix} = \begin{bmatrix} b_{1, 1}a_1 + b_{1, 2}a_2 + \dots + b_{1, k}a_k\\ b_{2, 1}a_1 + b_{2, 2}a_2 + \dots + b_{2, k}a_k \\ \vdots\\ b_{k, 1}a_1 + b_{k, 2}a_2 + \dots + b_{k, k}a_k\end{bmatrix}$$ so we see that each element of the result, $\mathbf{a}$ is simply the inner product between each row of $\mathbf{B}$ and the vector $\mathbf{c}$. Because we have defined each element of $\mathbf{f}$ to be given by the inner product between each row of the design matrix and the vector $\mathbf{w}$ we now can write the full operation in one matrix multiplication, $$\mathbf{f} = \mathbf{X}\mathbf{w}.$$
In [22]:
f = np.dot(X, w) # np.dot does matrix multiplication in python
Combining this result with our objective function, $$E(\mathbf{w}) = (\mathbf{y} - \mathbf{f})^\top(\mathbf{y} - \mathbf{f})$$ we find we have defined the model with two equations. One equation tells us the form of our predictive function and how it depends on its parameters, the other tells us the form of our objective function.
In [23]:
resid = (y-f)
E = np.dot(resid.T, resid) # matrix multiplication on a single vector is equivalent to a dot product.
print "Error function is:", E
Error function is: [[ 6.34574157e+13]]
## Objective Optimisation¶
Our model has now been defined with two equations, the prediction function and the objective function. Next we will use multivariate calculus to define an algorithm to fit the model. The separation between model and algorithm is important and is often overlooked. Our model contains a function that shows how it will be used for prediction, and a function that describes the objective function we need to optimise to obtain a good set of parameters.
The model linear regression model we have described is still the same as the one we fitted above with a coordinate ascent algorithm. We have only played with the notation to obtain the same model in a matrix and vector notation. However, we will now fit this model with a different algorithm, one that is much faster. It is such a widely used algorithm that from the end user's perspective it doesn't even look like an algorithm, it just appears to be a single operation (or function). However, underneath the computer calls an algorithm to find the solution. Further, the algorithm we obtain is very widely used, and because of this it turns out to be highly optimised.
Once again we are going to try and find the stationary points of our objective by finding the stationary points. However, the stationary points of a multivariate function, are a little bit more complext to find. Once again we need to find the point at which the derivative is zero, but now we need to use multivariate calculus to find it. This involves learning a few additional rules of differentiation (that allow you to do the derivatives of a function with respect to vector), but in the end it makes things quite a bit easier. We define vectorial derivatives as follows, $$\frac{\text{d}E(\mathbf{w})}{\text{d}\mathbf{w}} = \begin{bmatrix}\frac{\text{d}E(\mathbf{w})}{\text{d}w_1}\\\frac{\text{d}E(\mathbf{w})}{\text{d}w_2}\end{bmatrix}.$$ where $\frac{\text{d}E(\mathbf{w})}{\text{d}w_1}$ is the partial derivative of the error function with respect to $w_1$.
Differentiation through multiplications and additions is relatively straightforward, and since linear algebra is just multiplication and addition, then its rules of diffentiation are quite straightforward too, but slightly more complex than regular derivatives.
### Matrix Differentiation¶
We will need two rules of differentiation. The first is diffentiation of an inner product. By remebering that the inner product is made up of multiplication and addition, we can hope that its derivative is quite straightforward, and so it proves to be. We can start by thinking about the definition of the inner product, $$\mathbf{a}^\top\mathbf{z} = \sum_{i} a_i z_i,$$ which if we were to take the derivative with respect to $z_k$ would simply return the gradient of the one term in the sum for which the derivative was non zero, that of $a_k$, so we know that $$\frac{\text{d}}{\text{d}z_k} \mathbf{a}^\top \mathbf{z} = a_k$$ and by our definition of multivariate derivatives we can simply stack all the partial derivatives of this form in a vector to obtain the result that $$\frac{\text{d}}{\text{d}\mathbf{z}} \mathbf{a}^\top \mathbf{z} = \mathbf{a}.$$ The second rule that's required is differentiation of a 'matrix quadratic'. A scalar quadratic in $z$ with coefficient $c$ has the form $cz^2$. If $\mathbf{z}$ is a $k\times 1$ vector and $\mathbf{C}$ is a $k \times k$ matrix of coefficients then the matrix quadratic form is written as $\mathbf{z}^\top \mathbf{C}\mathbf{z}$, which is itself a scalar quantity, but it is a function of a vector.
#### Matching Dimensions in Matrix Multiplications¶
There's a trick for telling that it's a scalar result. When you are doing maths with matrices, it's always worth pausing to perform a quick sanity check on the dimensions. Matrix multplication only works when the dimensions match. To be precise, the 'inner' dimension of the matrix must match. What is the inner dimension. If we multiply two matrices $\mathbf{A}$ and $\mathbf{B}$, the first of which has $k$ rows and $\ell$ columns and the second of which has $p$ rows and $q$ columns, then we can check whether the multiplication works by writing the dimensionalities next to each other, $$\mathbf{A} \mathbf{B} \rightarrow (k \times \underbrace{\ell)(p}_\text{inner dimensions} \times q) \rightarrow (k\times q).$$ The inner dimensions are the two inside dimensions, $\ell$ and $p$. The multiplication will only work if $\ell=p$. The result of the multiplication will then be a $k\times q$ matrix: this dimensionality comes from the 'outer dimensions'. Note that matrix multiplication is not commutative. And if you change the order of the multiplication, $$\mathbf{B} \mathbf{A} \rightarrow (\ell \times \underbrace{k)(q}_\text{inner dimensions} \times p) \rightarrow (\ell \times p).$$ firstly it may no longer even work, because now the condition is that $k=q$, and secondly the result could be of a different dimensionality. An exception is if the matrices are square matrices (e.g. same number of rows as columns) and they are both symmetric. A symmetric matrix is one for which $\mathbf{A}=\mathbf{A}^\top$, or equivalently, $a_{i,j} = a_{j,i}$ for all $i$ and $j$.
You will need to get used to working with matrices and vectors applying and developing new machine learning techniques. You should have come across them before, but you may not have used them as extensively as we will now do in this course. You should get used to using this trick to check your work and ensure you know what the dimension of an output matrix should be. For our matrix quadratic form, it turns out that we can see it as a special type of inner product. $$\mathbf{z}^\top\mathbf{C}\mathbf{z} \rightarrow (1\times \underbrace{k) (k}_\text{inner dimensions}\times k) (k\times 1) \rightarrow \mathbf{b}^\top\mathbf{z}$$ where $\mathbf{b} = \mathbf{C}\mathbf{z}$ so therefore the result is a scalar, $$\mathbf{b}^\top\mathbf{z} \rightarrow (1\times \underbrace{k) (k}_\text{inner dimensions}\times 1) \rightarrow (1\times 1)$$ where a $(1\times 1)$ matrix is recognised as a scalar.
This implies that we should be able to differentiate this form, and indeed the rule for its differentiation is slightly more complex than the inner product, but still quite simple, $$\frac{\text{d}}{\text{d}\mathbf{z}} \mathbf{z}^\top\mathbf{C}\mathbf{z}= \mathbf{C}\mathbf{z} + \mathbf{C}^\top \mathbf{z}.$$ Note that in the special case where $\mathbf{C}$ is symmetric then we have $\mathbf{C} = \mathbf{C}^\top$ and the derivative simplifies to $$\frac{\text{d}}{\text{d}\mathbf{z}} \mathbf{z}^\top\mathbf{C}\mathbf{z}= 2\mathbf{C}\mathbf{z}.$$
### Differentiating the Objective¶
First, we need to compute the full objective by substituting our prediction function into the objective function to obtain the objective in terms of $\mathbf{w}$. Doing this we obtain $$E(\mathbf{w})= (\mathbf{y} - \mathbf{X}\mathbf{w})^\top (\mathbf{y} - \mathbf{X}\mathbf{w}).$$ We now need to differentiate this quadratic form to find the minimum. We differentiate with respect to the vector $\mathbf{w}$. But before we do that, we'll expand the brackets in the quadratic form to obtain a series of scalar terms. The rules for bracket expansion across the vectors are similar to those for the scalar system giving, $$(\mathbf{a} - \mathbf{b})^\top (\mathbf{c} - \mathbf{d}) = \mathbf{a}^\top \mathbf{c} - \mathbf{a}^\top \mathbf{d} - \mathbf{b}^\top \mathbf{c} + \mathbf{b}^\top \mathbf{d}$$ which substituting for $\mathbf{a} = \mathbf{c} = \mathbf{y}$ and $\mathbf{b}=\mathbf{d} = \mathbf{X}\mathbf{w}$ gives $$E(\mathbf{w})= \mathbf{y}^\top\mathbf{y} - 2\mathbf{y}^\top\mathbf{X}\mathbf{w} + \mathbf{w}^\top\mathbf{X}^\top\mathbf{X}\mathbf{w}$$ where we used the fact that $\mathbf{y}^\top\mathbf{X}\mathbf{w}= \mathbf{w}^\top\mathbf{X}^\top\mathbf{y}$. Now we can use our rules of differentiation to compute the derivative of this form, which is, $$\frac{\text{d}}{\text{d}\mathbf{w}}E(\mathbf{w})=- 2\mathbf{X}^\top \mathbf{y} + 2\mathbf{X}^\top\mathbf{X}\mathbf{w},$$ where we have exploited the fact that $\mathbf{X}^\top\mathbf{X}$ is symmetric to obtain this result.
## Update Equation for Global Optimum¶
Once again, we need to find the minimum of our objective function. Using our likelihood for multiple input regression we can now minimize for our parameter vector $\mathbf{w}$. Firstly, just as in the single input case, we seek stationary points by find parameter vectors that solve for when the gradients are zero, $$\mathbf{0}=- 2\mathbf{X}^\top \mathbf{y} + 2\mathbf{X}^\top\mathbf{X}\mathbf{w},$$ where $\mathbf{0}$ is a vector of zeros. Rearranging this equation we find the solution to be $$\mathbf{w} = \left[\mathbf{X}^\top \mathbf{X}\right]^{-1} \mathbf{X}^\top \mathbf{y}$$ where $\mathbf{A}^{-1}$ denotes matrix inverse.
### Solving the Multivariate System¶
The solution for $\mathbf{w}$ is given in terms of a matrix inverse, but computation of a matrix inverse requires, in itself, an algorithm to resolve it. You'll know this if you had to invert, by hand, a 3\times 3 matrix in high school. From a numerical stability perspective, it is also best not to compute the matrix inverse directly, but rather to ask the computer to solve the system of linear equations given by $$\mathbf{X}^\top\mathbf{X} \mathbf{w} = \mathbf{X}^\top\mathbf{y}$$ for $\mathbf{w}$. This can be done in numpy using the command
In [24]:
np.linalg.solve?
so we can obtain the solution using
In [25]:
w = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))
print w
[[ 2.88952457e+01]
[ -1.29806477e-02]]
We can map it back to the liner regression and plot the fit as follows
In [26]:
m = w[1]; c=w[0]
f_test = m*x_test + c
print(m)
print(c)
plt.plot(x_test, f_test, 'b-')
plt.plot(x, y, 'rx')
[-0.01298065]
[ 28.89524574]
Out[26]:
[<matplotlib.lines.Line2D at 0x10e731250>]
## Multivariate Linear Regression¶
A major advantage of the new system is that we can build a linear regression on a multivariate system. The matrix calculus didn't specify what the length of the vector $\mathbf{x}$ should be, or equivalently the size of the design matrix.
### Movie Body Count Data¶
Let's load back in the movie body count data.
In [27]:
data = pods.datasets.movie_body_count()
movies = data['Y']
Let's remind ourselves of the features we've been provided with.
In [28]:
print ', '.join(movies.columns)
Film, Year, Body_Count, MPAA_Rating, Genre, Director, Actors, Length_Minutes, IMDB_Rating
Now we will build a design matrix based on the numeric features: year, Body_Count, Length_Minutes in an effort to predict the rating. We build the design matrix as follows:
## Relation to Single Input System¶
In [33]:
select_features = ['Year', 'Body_Count', 'Length_Minutes']
X = movies.loc[:, select_features]
X['Eins'] = 1 # add a column for the offset
y = movies.loc[:, ['IMDB_Rating']]
Now let's perform a linear regression. But this time, we will create a pandas data frame for the result so we can store it in a form that we can visualise easily.
In [34]:
import pandas as pd
w = pd.DataFrame(data=np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y)), # solve linear regression here
index = X.columns, # columns of X become rows of w
columns=['regression_coefficient']) # the column of X is the value of regression coefficient
We can check the residuals to see how good our estimates are
In [35]:
(y - np.dot(X, w)).hist()
Out[35]:
array([[<matplotlib.axes._subplots.AxesSubplot object at 0x10ea75910>]], dtype=object)
Which shows our model hasn't yet done a great job of representation, because the spread of values is large. We can check what the rating is dominated by in terms of regression coefficients.
In [36]:
w
Out[36]:
regression_coefficient
Year -0.016280
Body_Count -0.000995
Length_Minutes 0.025386
Eins 36.508363
Although we have to be a little careful about interpretation because our input values live on different scales, however it looks like we are dominated by the bias, with a small negative effect for later films (but bear in mind the years are large, so this effect is probably larger than it looks) and a positive effect for length. So it looks like long earlier films generally do better, but the residuals are so high that we probably haven't modelled the system very well.
## Lecture on Multivariate Regression¶
In [37]:
from IPython.display import YouTubeVideo
Out[37]:
## Lecture on Maximum Likelihoood¶
In [38]:
from IPython.display import YouTubeVideo
Out[38]:
### Numerical Issues¶
Fitting a linear regression can also come with numerical issues. Numerical issues are problems that arise when the mathematics is implemented on the computer. For example, numbers in the computer are represented as floating point, which means they only have a certain precision. A particular problem in linear regression is that we compute $$\mathbf{X}^\top \mathbf{X}$$ which we can rewrite as $$\mathbf{X}^\top \mathbf{X} = \sum_{i=1}^n \mathbf{x}_i \mathbf{x}_i^\top$$ to make it clear that its computation requires a sum over $n$ values, where $n$ is our number of data. If $n$ is very large then the matrix computed is very large. Solving the system with such a large matrix can lead to numerical instability because the precision with which the matrix can be represented is poor.
We've already seen we can perform a solve instead of a matrix inverse to improve numerical stability, but we can actually do even better.
A QR-decomposition of a matrix factorises it into a matrix which is an orthogonal matrix $\mathbf{Q}$, so that $\mathbf{Q}^\top \mathbf{Q} = \mathbf{I}$. And a matrix which is upper triangular, $\mathbf{R}$. Below we use the properties of the QR decomposition to derive a new manipulation for fitting the model $$\mathbf{X}^\top \mathbf{X} \boldsymbol{\beta} = \mathbf{X}^\top \mathbf{y}$$ $$(\mathbf{Q}\mathbf{R})^\top (\mathbf{Q}\mathbf{R})\boldsymbol{\beta} = (\mathbf{Q}\mathbf{R})^\top \mathbf{y}$$ $$\mathbf{R}^\top (\mathbf{Q}^\top \mathbf{Q}) \mathbf{R} \boldsymbol{\beta} = \mathbf{R}^\top \mathbf{Q}^\top \mathbf{y}$$ $$\mathbf{R}^\top \mathbf{R} \boldsymbol{\beta} = \mathbf{R}^\top \mathbf{Q}^\top \mathbf{y}$$ $$\mathbf{R} \boldsymbol{\beta} = \mathbf{Q}^\top \mathbf{y}$$ This is a more numerically stable solution because it removes the need to compute $\mathbf{X}^\top\mathbf{X}$ as an intermediate. Computing $\mathbf{X}^\top\mathbf{X}$ is a bad idea because it involves squaring all the elements of $\mathbf{X}$ and thereby potentially reducing the numerical precision with which we can represent the solution. Operating on $\mathbf{X}$ directly preserves the numerical precision of the model.
This can be more particularly seen when we begin to work with basis functions in the next session. Some systems that can be resolved with the QR decomposition can not be resolved by using solve directly.
In [39]:
import scipy as sp
Q, R = np.linalg.qr(X)
w = sp.linalg.solve_triangular(R, np.dot(Q.T, y))
w = pd.DataFrame(w, index=X.columns)
w
Out[39]:
0
Year -0.016280
Body_Count -0.000995
Length_Minutes 0.025386
Eins 36.508363
In [ ]: | 2022-01-20 08:56:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8674089312553406, "perplexity": 372.9282415128538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301730.31/warc/CC-MAIN-20220120065949-20220120095949-00509.warc.gz"} |
https://gmatclub.com/forum/a-telephone-number-contains-10-digit-including-a-3-digit-59697.html?kudos=1 | GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 07 Dec 2019, 14:46
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# A telephone number contains 10 digit, including a 3-digit
Author Message
TAGS:
### Hide Tags
Director
Joined: 22 Nov 2007
Posts: 854
A telephone number contains 10 digit, including a 3-digit [#permalink]
### Show Tags
Updated on: 02 Jul 2013, 11:02
2
49
00:00
Difficulty:
95% (hard)
Question Stats:
52% (02:44) correct 48% (02:46) wrong based on 448 sessions
### HideShow timer Statistics
A telephone number contains 10 digit, including a 3-digit area code. Bob remembers the area code and the next 5 digits of the number. He also remembers that the remaining digits are not 0, 1, 2, 5, or 7. If Bob tries to find the number by guessing the remaining digits at random, the probability that he will be able to find the correct number in at most 2 attempts is closest to which of the following ?
A. 1/625
B. 2/625
C. 4/625
D. 25/625
E. 50/625
Originally posted by marcodonzelli on 05 Feb 2008, 22:26.
Last edited by Bunuel on 02 Jul 2013, 11:02, edited 1 time in total.
Edited the question and added the OA.
CEO
Status: GMATINSIGHT Tutor
Joined: 08 Jul 2010
Posts: 2977
Location: India
GMAT: INSIGHT
Schools: Darden '21
WE: Education (Education)
A telephone number contains 10 digit, including a 3-digit [#permalink]
### Show Tags
16 Jun 2015, 22:07
7
5
elizaanne wrote:
The total number of possibilities for the phone numbers is 25,
Therefore the probability of him getting on the firs try is 1/25
Here's where I differ from other posters: I would say that the probability of him getting it right on the second try would be:
(24/25)(1/24)
This is because the probability of him getting it wrong on the first try is 24/24, because there are 24 wrong answers and 1 right one. After that, however, he's already eliminated one possible wrong answer by trying and failing, so the total number of possibilities is now 24. That means he has a 1/24 chance of getting it right after trying one and failing.
This makes the total probability 1/25+1/25, which is exactly 50/625
You are absolutely correct in your approach. However there seems to be a Typo error on the highlighted part above which should be 24/25
To Answer such question in just Two step you may consider taking Unfavorable case and calculate the favorable at second step
e.g. Probability of him not finding the correct Number in two steps = (24/25)*(23/24) = (23/25)
i.e. Probability of him finding the correct Number in two steps = 1- (23/25) = (2/25) which is same as 50/625
_________________
Prosper!!!
GMATinsight
Bhoopendra Singh and Dr.Sushma Jha
e-mail: info@GMATinsight.com I Call us : +91-9999687183 / 9891333772
Online One-on-One Skype based classes and Classroom Coaching in South and West Delhi
http://www.GMATinsight.com/testimonials.html
ACCESS FREE GMAT TESTS HERE:22 ONLINE FREE (FULL LENGTH) GMAT CAT (PRACTICE TESTS) LINK COLLECTION
Manager
Joined: 02 Jan 2008
Posts: 116
### Show Tags
06 Feb 2008, 00:18
16
9
I am getting E (as closest)
Remaining numbers to fill last two digits (3,4,6,8,9): Total 5
Probability of choosing right numbers in two places = 1/5 * 1/5 = 1/25
Probability of not choosing right numbers in two places = 1-1/25 = 24/25
--
At most two attempts: 1) Wrong-1st Attempt, Right - 2nd, 2) Right - 1st Attempt
1) = 24/25 * 1/25 = 24/625
2) = 1/25
Add 1 and 2, 24/625 + 1/25 = 49/625 ~ 50/625
##### General Discussion
Intern
Joined: 04 Oct 2011
Posts: 8
### Show Tags
04 Oct 2011, 15:03
9
I think it is 1/25 (Correct in first attempt) + 24/25*1/24 (Correct in 2nd Attempt, it is 1/24 coz he wont repeat the wrong number again.) = 2/25 = 50/625 (Answer E)
Senior Manager
Joined: 09 Aug 2006
Posts: 405
### Show Tags
12 Feb 2008, 10:01
3
1
jackychamp wrote:
I have a question in this,
We have a pool of 5 numbers to choose from out of which we have to select 1.
So, prob of getting correct is 1/5.
which means probability of not correct is 4/5.
So, getting wrong at both places is 4/5 * 4/5 = 16/25.
Therefore getting right at both places is 1 - 16/25 = 9/25.
Can someone tell me where I am going wrong.
-Jack
You are missing out that you have to choose 2 correct numbers.
So, probabilty of 1st correct # is 1/5
probabilty of 2nd correct # is 1/5
So the probability of choosing correct combination is 1/5* 1/5 = 1/25.
Choosing wrong combination : 1 - 1/25 = 24/25
Now, we hv 2 cases.
case 1 : 1st attempt is wrong and second attempt is right.
24/25*1/25 = 24/625
case 2 : hit the eye in 1st attempt :
1/5
Total probability :
24/625 + 1/25 = 49/625 ~ 50/625
_________________
-Unstoppable force moves immovable objects...-
Intern
Joined: 11 Jan 2008
Posts: 47
### Show Tags
08 Feb 2008, 17:16
2
I have a question in this,
We have a pool of 5 numbers to choose from out of which we have to select 1.
So, prob of getting correct is 1/5.
which means probability of not correct is 4/5.
So, getting wrong at both places is 4/5 * 4/5 = 16/25.
Therefore getting right at both places is 1 - 16/25 = 9/25.
Can someone tell me where I am going wrong.
-Jack
Manager
Joined: 26 Jan 2008
Posts: 214
### Show Tags
10 Feb 2008, 11:40
2
1
jackychamp wrote:
I have a question in this,
We have a pool of 5 numbers to choose from out of which we have to select 1.
So, prob of getting correct is 1/5.
which means probability of not correct is 4/5.
So, getting wrong at both places is 4/5 * 4/5 = 16/25.
Therefore getting right at both places is 1 - 16/25 = 9/25.
Can someone tell me where I am going wrong.
-Jack
Good question!
[Probability (getting both digits right)] is NOT the same as [1 - Probability (getting both digits wrong)]
Since if either digit is wrong, the guessed number is wrong.
[Probability (getting both digits right)] = 1 - [Probability (getting both digits wrong) + Probability (getting either digit wrong)]
= 1 - [4/5 + (4/5 * 1/5) + (1/5 * 4/5)]
= 1/25
Now you have two tries to guess the correct number.
Probability (getting it right the first time) = 1/25
Probability (getting it wrong the first time, and right the second time) = (1 - 1/25) * 1/25 = 24/625
Add the two together, you get 49/625, closest to (E)
_________________
SVP
Joined: 29 Aug 2007
Posts: 1804
### Show Tags
10 Feb 2008, 12:27
2
marcodonzelli wrote:
A telephone number contains 10 digit, including a 3-digit area code. Bob remembers the area coe and the next 5 digits of the number. He also remembers that the remaining digits are not 0,1,2,5, or 7. If Bob tries to find the number by guessing the remaining digits at random, the probability that he will be able to find the correct number in at most 2 attempts is closest to which of the following ?
1/625
2/625
4/625
25/625
50/625
its E. 50/625 = 2/25
the telephone number is: xxx-xxxxx-ab
we need to find the digits ab:
0, 1, 2, 5, and 7 cannot be in digits ab.
remaining integers 3, 4, 6, 8, and 9 can be chosen for ab.
no of ways the integers can be put in place of ab = 5x5 = 25
prob = 1/25 + 1/25 = 2/25. which is E.
Intern
Joined: 17 Dec 2012
Posts: 1
Re: A telephone number contains 10 digit, including a 3-digit [#permalink]
### Show Tags
02 Jul 2013, 10:56
2
1
remaining digits are 3.4.6.8.9
ways to choose last two digits = 5*5 = 25
P(choosing correct number in 1st attempt) = 1/25
P(choosing wrong in 1st attempt) = 24/25
P(choosing right in 2nd attempt) = 1/24 {i am assuming he will not try wrong number again}
therefore total probability = 1/25 + 24/25*1/25 = 2/25 = 50/625 (E).... i don't think this question requires rounding off.... Correct me if i missed any step
Director
Status: Everyone is a leader. Just stop listening to others.
Joined: 22 Mar 2013
Posts: 699
Location: India
GPA: 3.51
WE: Information Technology (Computer Software)
Re: A telephone number contains 10 digit, including a 3-digit [#permalink]
### Show Tags
07 Aug 2013, 08:02
2
1
I tried to solve this question in following manner:
Last 2 digit can be chosen out of 3 4 6 8 9
means 5x5 = 25 Numbers can be constructed.
Case 1: Bob able to dial the number in first attempt : 1/25
Case 2: Bob able to dial the number in second attempt :
24/25 x 1/24 = 1/25
case 1 + case 2 = 2/25 which is same as 50/625.
I have seen someone solved second case as 24/25 x 1/25 which is not right.
bcz in second attempt bob will have only 24 choices not 25.
thats why they got 49/625 or approx answer.
Intern
Joined: 17 May 2014
Posts: 36
Re: A telephone number contains 10 digit, including a 3-digit [#permalink]
### Show Tags
17 May 2014, 08:05
2
JusTLucK04 wrote:
I guess he is hitting randomly here and the probability that he chooses a number at any point of time is equal..
1/25+24/25*1/25= 49/625
It should be explicitly stated if he is trying numbers in an order/not repeating...I think
I think most of the answers are missing a point. Let me try to put it across:
Total number of possible numbers are : 5x5 = 25
Correct number =1
Case 1: When he gets it right in first attempt: P(E1) = 1/25
Case 2: He gets 1st attempt wrong and second right:
When he gets it wrong then the probability of getting wrong is 24/25.
Now there are 24 cases with him and he chooses the right one this time.
Probability of right case is 1/24
Thus, P(E2) = 24/25 x 1/24
=1/25
Probability of getting it right in at most two cases = P(E1) + P(E2)
= 1/25 + 1/25
= 2/25
= 50/625
Option (E) is therefore right as most of you mentioned but the method employed was wrong.
Cheers!!!
Intern
Joined: 21 Apr 2014
Posts: 39
Re: A telephone number contains 10 digit, including a 3-digit [#permalink]
### Show Tags
16 Jun 2015, 21:25
2
1
The total number of possibilities for the phone numbers is 25,
Therefore the probability of him getting on the firs try is 1/25
Here's where I differ from other posters: I would say that the probability of him getting it right on the second try would be:
(24/25)(1/24)
This is because the probability of him getting it wrong on the first try is 24/24, because there are 24 wrong answers and 1 right one. After that, however, he's already eliminated one possible wrong answer by trying and failing, so the total number of possibilities is now 24. That means he has a 1/24 chance of getting it right after trying one and failing.
This makes the total probability 1/25+1/25, which is exactly 50/625
_________________
Eliza
GMAT Tutor
bestgmatprepcourse.com
Director
Joined: 03 Sep 2006
Posts: 614
### Show Tags
06 Feb 2008, 01:02
1
3,4,6,8,9 are the digits to be arranged in two positions X Y
total number of possible ways = 5*5 = 25
Case A:
X( ok) Y( ok ): (1/5)*(1/5) = 1/25
Case B:
First attempt
X(not ok) Y (not ok): (1- 1/25) = 24/25
Second attempt
X( ok) Y( ok ): (1/5)*(1/5) = 1/25
= 24/625
Case C:
First attempt
X( ok) Y(not ok) : (1/5)*(1- 1/5) = 4/25
Second attempt
X( ok) Y( ok ): (1/5)*(1/5) = 1/25
=4/625
Case D:
First attempt
X(not ok) Y(ok) : (1/5)*(1- 1/5) = 4/25
Second attempt
X( ok) Y( ok ): (1/5)*(1/5) = 1/25
=4/625
Total = (1/25) + ( 24/625 ) + ( 4/625 ) + ( 4/625 )
=
I think I am missing some case!
Manager
Joined: 26 Jan 2008
Posts: 214
### Show Tags
06 Feb 2008, 01:12
1
1
marcodonzelli wrote:
A telephone number contains 10 digit, including a 3-digit area code. Bob remembers the area coe and the next 5 digits of the number. He also remembers that the remaining digits are not 0,1,2,5, or 7. If Bob tries to find the number by guessing the remaining digits at random, the probability that he will be able to find the correct number in at most 2 attempts is closest to which of the following ?
1/625
2/625
4/625
25/625
50/625
5 numbers in each of the last two positions
==> total of 25 2-digit numbers
==> probability of getting it right in one guess = 1/25
Total Probability = P(get it right the first try) + P(get it right the second try)
= 1/25 + [24/25 * 1/25]
= 49/625, closest to (E)
_________________
SVP
Joined: 29 Mar 2007
Posts: 1839
### Show Tags
06 Feb 2008, 01:32
1
2
marcodonzelli wrote:
A telephone number contains 10 digit, including a 3-digit area code. Bob remembers the area coe and the next 5 digits of the number. He also remembers that the remaining digits are not 0,1,2,5, or 7. If Bob tries to find the number by guessing the remaining digits at random, the probability that he will be able to find the correct number in at most 2 attempts is closest to which of the following ?
1/625
2/625
4/625
25/625
50/625
The prob of getting it right in one attempt is 1/5*1/5 = 1/25
This is already bigger than ABC so elim. Also bigger than D so elim.
E is left.
You can do it this way as well 1/5*1/5 -> 1/25 and thus 24/25 is not getting it
1/25+24/25(1/25) --> 49/625
Senior Manager
Joined: 12 Aug 2015
Posts: 280
Concentration: General Management, Operations
GMAT 1: 640 Q40 V37
GMAT 2: 650 Q43 V36
GMAT 3: 600 Q47 V27
GPA: 3.3
WE: Management Consulting (Consulting)
A telephone number contains 10 digit, including a 3-digit [#permalink]
### Show Tags
02 Nov 2015, 11:15
1
so Bob is given 2 opportunities to decipher the remaining 2 digits of the code, for this task he has 5 digits to choose from.
IMPORTANT notes to bear in mind before calculation:
- picking a number for place 9 is independent from picking the number for place 10 - i.e. digits can repeat
- if Bob wastes his 1st opportunity he will have better chances to pick the right combination during his second attempt - because by that time he has already proved that one combination is invalid.
So we have all the info needed to do simple calculation:
(1) $$\frac{1}{5}$$*$$\frac{1}{5}$$ = $$\frac{1}{25}$$ - this is the independent event that he would win on first attempt
OR
(2.1) $$\frac{4}{5}$$*$$\frac{4}{5} = [m]\frac{16}{25}$$ - he fails to choose correctly on first attempt because he picks either of the 4 wrong digits for the both places
(2.2) $$\frac{1}{4}$$*$$\frac{1}{4}$$=$$\frac{1}{16}$$ - he finally chooses the correct digit out of the remaining 4 for each place independently!
AND
(2.3) Multiply the above two iterations to complete the chances for the second scenario: $$\frac{16}{25}$$*$$\frac{1}{16}$$=$$\frac{1}{25}$$
(3) Sum up $$\frac{1}{25}$$+$$\frac{1}{25}$$=$$\frac{2}{25}$$=$$\frac{50}{625}$$
Senior Manager
Joined: 10 Jul 2013
Posts: 282
Re: A telephone number contains 10 digit, including a 3-digit [#permalink]
### Show Tags
07 Aug 2013, 09:14
marcodonzelli wrote:
A telephone number contains 10 digit, including a 3-digit area code. Bob remembers the area code and the next 5 digits of the number. He also remembers that the remaining digits are not 0, 1, 2, 5, or 7. If Bob tries to find the number by guessing the remaining digits at random, the probability that he will be able to find the correct number in at most 2 attempts is closest to which of the following ?
A. 1/625
B. 2/625
C. 4/625
D. 25/625
E. 50/625
solution:
Digits are available = 3,4,6,8,9
so two spaces can be filled by 5 numbers = 5 * 5 = 25 ways
chances of failing = 24
At most 2 attempts = success at 1st attempt or success at 2nd attempt.
= 1/25 + 24/25 * 1/25 = 49/625 =~ 50/625 (E) Answer
SVP
Joined: 06 Sep 2013
Posts: 1545
Concentration: Finance
Re: A telephone number contains 10 digit, including a 3-digit [#permalink]
### Show Tags
05 Feb 2014, 15:17
1
marcodonzelli wrote:
A telephone number contains 10 digit, including a 3-digit area code. Bob remembers the area code and the next 5 digits of the number. He also remembers that the remaining digits are not 0, 1, 2, 5, or 7. If Bob tries to find the number by guessing the remaining digits at random, the probability that he will be able to find the correct number in at most 2 attempts is closest to which of the following ?
A. 1/625
B. 2/625
C. 4/625
D. 25/625
E. 50/625
Good question +1
Probability of at least in 2 attempts = Prob in 1 attempt + Prob of Not 1 atempt * Prob of 2nd attempt
Therefore probability of 1 attempt = 1/25
Now this is where the problem gets tricky
Prob of not 1 could be
He found 1
OR He found the other
OR He found neither
So we have 3 scenarios that we'll need to add up
1st scenario: 1/5^3 * 4/5
2nd scenario: 1/5^3 * 4/5
3rd scenario: 4/5^2 * 1/5^2
Now adding all the scenarios = 24/625
Adding 1st and 2nd grand scenario = 1/25 + 24 /625 = 49 / 625.
The closest answer choice is E, cause the problem says approximately
Hope its clear
J
Senior Manager
Joined: 17 Sep 2013
Posts: 318
Concentration: Strategy, General Management
GMAT 1: 730 Q51 V38
WE: Analyst (Consulting)
Re: A telephone number contains 10 digit, including a 3-digit [#permalink]
### Show Tags
13 May 2014, 14:21
I guess he is hitting randomly here and the probability that he chooses a number at any point of time is equal..
1/25+24/25*1/25= 49/625
It should be explicitly stated if he is trying numbers in an order/not repeating...I think
Senior Manager
Joined: 17 Sep 2013
Posts: 318
Concentration: Strategy, General Management
GMAT 1: 730 Q51 V38
WE: Analyst (Consulting)
Re: A telephone number contains 10 digit, including a 3-digit [#permalink]
### Show Tags
18 May 2014, 04:55
mittalg wrote:
JusTLucK04 wrote:
I guess he is hitting randomly here and the probability that he chooses a number at any point of time is equal..
1/25+24/25*1/25= 49/625
It should be explicitly stated if he is trying numbers in an order/not repeating...I think
I think most of the answers are missing a point. Let me try to put it across:
Total number of possible numbers are : 5x5 = 25
Correct number =1
Case 1: When he gets it right in first attempt: P(E1) = 1/25
Case 2: He gets 1st attempt wrong and second right:
When he gets it wrong then the probability of getting wrong is 24/25.
Now there are 24 cases with him and he chooses the right one this time.
Probability of right case is 1/24
Thus, P(E2) = 24/25 x 1/24
=1/25
Probability of getting it right in at most two cases = P(E1) + P(E2)
= 1/25 + 1/25
= 2/25
= 50/625
Option (E) is therefore right as most of you mentioned but the method employed was wrong.
Cheers!!!
Whats wrong with my method..Can you please elaborate?..Wasn't able to catch you here
I mean to imply that he is not taking a note of the numbers hit earlier and is just going about randomly..as nothing has been mentioned
Anyways this is trivial and GMAC will be more specific..Skip it
Re: A telephone number contains 10 digit, including a 3-digit [#permalink] 18 May 2014, 04:55
Go to page 1 2 Next [ 22 posts ]
Display posts from previous: Sort by | 2019-12-07 21:46:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7367103099822998, "perplexity": 2523.540794644107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540502120.37/warc/CC-MAIN-20191207210620-20191207234620-00270.warc.gz"} |
https://cob.silverchair.com/dev/article/138/19/4291/44589/Combinatorial-activation-and-concentration?searchresult=1 | Despite years of study, the precise mechanisms that control position-specific gene expression during development are not understood. Here, we analyze an enhancer element from the even skipped (eve) gene, which activates and positions two stripes of expression (stripes 3 and 7) in blastoderm stage Drosophila embryos. Previous genetic studies showed that the JAK-STAT pathway is required for full activation of the enhancer, whereas the gap genes hunchback (hb) and knirps (kni) are required for placement of the boundaries of both stripes. We show that the maternal zinc-finger protein Zelda (Zld) is absolutely required for activation, and present evidence that Zld binds to multiple non-canonical sites. We also use a combination of in vitro binding experiments and bioinformatics analysis to redefine the Kni-binding motif, and mutational analysis and in vivo tests to show that Kni and Hb are dedicated repressors that function by direct DNA binding. These experiments significantly extend our understanding of how the eve enhancer integrates positive and negative transcriptional activities to generate sharp boundaries in the early embryo.
Establishing temporal and spatial gene expression patterns drives the organization of complex body plans during development. At the transcriptional level, the cis-regulatory DNA elements of patterning genes integrate pre-existing asymmetric patterning information to generate expression patterns with increasingly sharp boundaries, which result in the precise placement of cells with different fates (Arnone and Davidson, 1997; Arnosti, 2003). Although hundreds of patterning elements have been discovered over the past twenty-five years, very few have been extensively studied at the molecular level.
The early Drosophila embryo develops as a syncytium, where positional information is in the form of transcription factor gradients. Along the anterior-posterior (AP) axis, the first gradients are maternal in origin and long-range in function, diffusing from mRNAs that are localized at or near the poles of the developing oocyte (Driever and Nusslein-Volhard, 1988; Wang and Lehmann, 1991). Maternal gradients regulate the zygotic expression of the gap genes, each of which is expressed in one or two broad domains at specific positions along the AP axis. The gap genes form gradients that act over shorter ranges and overlap at their edges, and, together with the maternal gradients, establish the seven-striped expression patterns of the pair-rule genes, including even skipped (eve) (Frasch et al., 1987; Macdonald et al., 1986). Individual eve stripes are controlled by modular cis-regulatory elements that respond in unique ways to the maternal and gap protein gradients (Fujioka et al., 1999; Goto et al., 1989; Harding et al., 1989; Small et al., 1992; Small et al., 1996).
The focus of this study is on a detailed characterization of a 511 bp eve regulatory element (eve 3+7) that drives strong expression of stripe 3 and much weaker expression of stripe 7 (Small et al., 1996). Genetic and mutagenesis experiments have shown that the ubiquitously activated JAK-STAT pathway is required for activation of this element (Hou et al., 1996; Yan et al., 1996), whereas the boundaries of the stripes are formed by Hunchback (Hb)- and Knirps (Kni)-mediated repression (Clyde et al., 2003; Small et al., 1996; Yu and Small, 2008). However, other mechanisms of activation must exist because mutations that remove components of the JAK-STAT pathway do not completely abolish eve expression. Also, although Hb and Kni are known to be crucial for forming stripe boundaries, how these proteins function at the molecular level is not clear.
Here, we describe experiments that critically test both activation and repression mechanisms. Our studies show that the ubiquitous maternal protein Zelda (Zld; Vielfaltig – FlyBase) is required for JAK-STAT-mediated activation of the eve 3+7 response. We also use a reiterative series of biochemical and bioinformatics analyses to redefine the DNA binding motif for Kni, and show that direct binding by Kni can account for all repressive activity on this element in the region between the two stripes. Finally, we present evidence that DNA binding sites for Hb are crucial for forming the outside boundaries of the two-stripe pattern. These results provide a firm molecular basis for understanding how repressor gradients function to differentially position multiple expression boundaries.
### Transgenes, fly stocks and embryo staining procedures
The eve 3+7-lacZ reporter was described elsewhere (Small et al., 1996). To generate the deletion mutants and all but one of the substitution mutants in the eve 3+7 element, site-directed mutagenesis was performed using specific oligonucleotides and the Muta-Gene Phagemid Kit (Bio-Rad 170-3581). The eve 3+7-lacZ reporter containing substitution mutations in all 11 Kni binding sites (27 single-point changes in 11 sites, see Fig. 4F) was purchased from Integrated DNA Technologies (IDT). P-element-mediated transformation was used to generate transgenic lines containing all reporter constructs, and at least four independent lines were assayed by in situ hybridization with a lacZ probe for each construct. There was very little variation in the expression patterns between individual lines containing the same construct.
The generation of the zld294 null allele and the protocol used for obtaining germline clones in this background were described elsewhere (Liang et al., 2008).
All embryos, except those in Fig. 7, were stained using enzymatic methods (Jiang et al., 1991). Embryos in Fig. 7 were simultaneously stained for Hb and Kni proteins and lacZ mRNA as previously described (Wu et al., 2001).
### Yeast one-hybrid analysis
To generate yeast reporters, four tandem copies of either the wild-type conserved sequence (5′-AACGCTCTACTTACCTGCAATT-3′; the sequence conserved between D. melanogaster and D. picticornis is underlined) or a mutant version (5′-AACGCTCACTAGTAGTGCAATT-3′) were cloned between the EcoRI and XbaI sites of the yeast integration and reporter vector pHISi (Matchmaker One-Hybrid System; Clontech K1603-1).
The wild-type and mutant reporters were linearized with XhoI and integrated into the genome of the yeast strain YM4271 (MATa, ura3-52, his3-200, ade2-101, ade5, lys2-801, leu2-3, trp1-901, tyr1-501, gal4512, gal80538, ade5::hisG) using the small-scale lithium acetate yeast transformation protocol (Clontech PT3024-1). 3-amino-1,2,4-triazole (3-AT) sensitivity was determined for strain YM4271[pHISi-WT] by plating 1×103 colonies onto SD minimal medium without histidine (Clontech 8606-1) and supplemented with 0, 1, 2.5, 5, 15, 45 or 60 mM 3-AT. Growth was completely inhibited by 5 mM 3-AT.
The wild-type reporter strain was transformed with 40 μg of a 0- to 6-hour D. melanogaster Oregon R poly(A)+ embryonic cDNA library constructed using the λACT phage (Yu et al., 1999) and selected in SD minimal medium with –His/–Leu dropout supplement (Clontech 8609-1) in the presence of 10 mM 3-AT, following the large-scale yeast transformation protocol (Clontech PT1031-1). The efficiency of transformation was ∼4.2×104 transformants/μg DNA. Twenty-three clones were isolated within 5 days of transformation, from which the plasmids were recovered and sequenced. Nine clones corresponded to full-length kni cDNA, three clones corresponded to two different zld cDNAs (covering amino acids 1195-1596 and 1293-1596), whereas for the other 11 clones we found a single hit for each. Purified DNA samples from the 14 independent cDNAs were used to transform the wild-type reporter strain, but only transformants containing kni and zld cDNAs grew in the presence of 10 mM and 45 mM 3-AT (see Fig. S1 in the supplementary material). The results from this experiment suggest that the other clones were false positives. Purified plasmids from the two independent kni and zld clones were then used to transform the mutant yeast reporter strain. Both zld cDNA plasmids failed to activate the mutant reporter, whereas the kni cDNA plasmids gave transformants even in the presence of 45 mM 3-AT (see Fig. S1 in the supplementary material).
### Protein expression and purification
Different kni fragments (see Fig. S2 in the supplementary material) were subcloned as KpnI-XbaI inserts into a modified pGEX-6P-3 vector (Amersham Biosciences 27-4599-01). All fragments were PCR amplified from clone N741 (Nauber et al., 1988), which contains a full-length kni cDNA, and the final plasmids were sequenced to confirm the correct reading frame and insert integrity. The GST-ZldC expression plasmid was a gift from Nikolai Kirov (Liang et al., 2008). To generate pET-Zld, the entire zld coding region was PCR amplified from Canton S genomic DNA using PfuUltra II Fusion HS DNA Polymerase (Stratagene 600670) and the following primers: 5′-CGCGGGTACCATGACGAGCATTAAGACCGAGATGCC-3′ and 5′-CGCGTCTAGATCAGTAGAGCTCTATGCTCTTCTC-3′ (KpnI and XbaI sites in bold, start and stop codons underlined). The PCR program used was: 95°C for 2 minutes; 30 cycles of 95°C for 20 seconds, 55°C for 20 seconds, 72°C for 90 seconds; followed by 72°C for 3 minutes. A 4.8 kb band was gel purified, digested with KpnI and XbaI and ligated to a modified pET15b vector (Novagen 69661-3) restricted with KpnI and NheI.
Expression plasmids were transformed into E. coli BL21 CodonPlus (DE3) competent cells (Stratagene 230245), and a single colony was grown in 3 ml LB Amp100 medium (which contains 100 μg/ml ampicillin) until early exponential phase (∼5 hours). Then, 1 ml of this starter culture was expanded in 300 ml prewarmed LB Amp100 and grown to an OD600 of 0.4-0.6. Protein expression was induced by adding isopropyl-β-d-thiogalactoside to a final concentration of 0.1 mM and incubating the cultures for 3 hours at 37°C or for 6-24 hours at 18-20°C.
For the purification of GST-tagged proteins, cells were centrifuged at 7700 g for 10 minutes at 4°C and resuspended in 30 ml lysis buffer (100 mM KCl, 25 mM HEPES pH 7.9, 5 mM DTT, 20% glycerol, 1 mM benzamidine, 1 mM Na2S2O5, 3 μM pepstatin A, 1 mM PMSF) containing 0.1 mg/ml lysozyme and kept on ice for 15 minutes. Cells were disrupted by sonication with four to six cycles of 30 seconds (50% duty cycle, output 3-4) using a Sonic Dismembrator Model 550 (Fisher Scientific) equipped with a microtip. The cell lysate was centrifuged at 4°C for 10 minutes at 12,000 g, and the supernatant was transferred to a fresh tube containing 1 ml 50% slurry of glutathione Sepharose 4B beads (Amersham Biosciences 17-0756-01) prewashed in 1×PBS (140 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, 1.8 mM KH2PO4) and pre-equilibrated in lysis buffer. Tubes were secured onto a rotator and incubated for 2 hours at 4°C with gentle agitation. Beads were sedimented by centrifugation (500 g) and the supernatant was carefully discarded. Beads were then washed three times with the same volume of lysis buffer and transferred to a 2-ml microcentrifuge tube. Bound proteins were eluted from the beads by adding 1 ml elution buffer (20 mM reduced glutathione and 50 mM HEPES pH 7.9 in lysis buffer) per ml of slurry bed volume, mixing gently to resuspend the beads, and incubating for 30 minutes at 4°C with gentle rotation. Beads were sedimented and the supernatant was transferred to a fresh tube. Protein concentration was determined by the Bradford assay using BSA as a standard. For purification of GST-Kni1-429, GST-Kni1-340 and GST-Zld-C fragments, it was necessary to include detergents in order to obtain soluble proteins. For this purpose, the above protocol was integrated with one previously published (Frangioni and Neel, 1993) to include the addition of N-laurylsarkosine to a final concentration of 1.5% immediately before sonication and the addition of Triton X-100 to a final concentration of 1% after sonication. The two detergents were not included in the successive steps.
For purification of His-tagged, full-length Zld, cells were collected and sonicated as described above, but a different lysis buffer was used: 300 mM NaCl, 25 mM HEPES pH 7.9, 1 mM EDTA, 10 mM imidazole, 10% glycerol, 15 mM β-mercaptoethanol, 1 mM benzamidine, 1 mM Na2S2O5, 10 μM pepstatin A, 1 mM PMSF. The cell lysate was centrifuged at 4°C for 10 minutes at 12,000 g, and the supernatant was transferred to a fresh tube containing 0.7 ml 50% slurry of Ni-NTA agarose beads (Qiagen 30210) prewashed in 1×PBS and pre-equilibrated in lysis buffer. Incubation and washings were as described above. Bound proteins were eluted from the sedimented beads by adding 0.5 ml elution buffer (250 mM imidazole in lysis bufffer) and incubating the resuspended beads for 30 minutes at 4°C with gentle rotation. Beads were then sedimented and the supernatant transferred to a new tube.
### Electrophoretic mobility shift assay (EMSA)
Oligos used for EMSA experiments were obtained from IDT (see Table S1 in the supplementary material). To generate double-stranded (ds) fragments, 7.5 μl of each complementary oligonucleotide (100 pM/μl) were combined in a 0.5 ml Eppendorf tube together with 165 μl H2O and 20 μl annealing buffer (200 mM Tris-Cl pH 7.5, 20 mM MgCl2, 500 mM NaCl). The tube was placed in a 95°C heat block, which was then cooled gradually to room temperature. Larger fragments from the eve 2 enhancer (see Fig. 3E,F) were generated by PCR amplification (for primers, see Table S1 in the supplementary material).
For 5′ end-labeling, 1 μl (∼50 ng) dsDNA oligonucleotide was incubated for 30 minutes at 37°C with 1 μl T4 polynucleotide kinase in the presence of 1 μl [γ-32P]ATP (MPI 35020), 1 μl 50 mM DDT, 1 μl 10× PNK buffer and 5 μl H2O. To remove most of the unincorporated [γ-32P]ATP, 70 μl 10 mM Tris (pH 8.0) were added to the reaction and the entire sample was loaded onto an Illustra MicroSpin G25 spin column (GE Healthcare 27-5325-01). Samples were then desiccated to a small volume (∼10 μl), mixed with loading buffer and purified on a 15% native polyacrylamide gel. Bands were excised, transferred to Eppendorf tubes containing 0.5 ml elution buffer (0.5 M ammonium acetate, 10 mM magnesium acetate), and mixed for several hours at room temperature. The buffer was transferred to a low-retention tube, and DNA was precipitated in the presence of 3 μl PelletPaint co-precipitant (Novagen 69049-3), 55 μl 3 M sodium acetate (pH 5.2) and 1 ml 100% ethanol, and resuspended in 40 μl 10 mM Tris (pH 8.0). To measure the specific activity, 1 μl was transferred to a 20 ml Wheaton liquid scintillation vial (Fisher 03-341-73) containing 20 μl H2O. Then, 10 ml liquid scintillation cocktail (MPI 882470) were added and the vial was loaded into a liquid scintillation counter (Beckman LS6000 LL). The probe was diluted to 20,000 cpm/μl using 10 mM Tris (pH 8.0).
For DNA-binding reactions, 1 μl of affinity-purified protein (0.1-2 μg) was incubated for 30 minutes at room temperature with 20 μl freshly prepared binding buffer (100 mM KCl, 10% glycerol, 20 mM HEPES pH 8.0, 1 mM DTT, 5 mM MgCl2,1 μM ZnSO4, 1 mM benzamidine, 1 mM Na2S2O5, 3 μM pepstatin A, 0.1 mg/ml BSA, 1 mM PMSF), 10 mg/ml poly[d(I-C)] and 1 μl purified probe (20,000 cpm/μl). The entire binding reaction was loaded onto a pre-run 4.5% native polyacrylamide gel and run at 150 V for ∼2 hours. The gel was dried and exposed to autoradiography film with an intensifying screen at –80°C for 12-16 hours.
### The maternal zinc-finger protein Zelda is required for STAT-mediated activation of the eve 3+7 enhancer
Previous studies suggested that the JAK-STAT pathway is required for activation of the eve 3+7 enhancer. However, loss-of-function mutants lacking JAK-STAT components show only a partial loss of eve 3 expression (Hou et al., 1996; Yan et al., 1996), suggesting that other factors/pathways must be involved in enhancer activation. To identify such factors, we tested two motifs in the 3′ region of the eve 3+7 enhancer that are perfectly conserved between D. melanogaster and D. picticornis (Sackerson, 1995) by deleting them in the context of a reporter gene that also contained the eve 2 enhancer as an internal control for expression levels (Fig. 1A). Deleting both sequences (not shown) or only the upstream sequence (Fig. 1E) caused a strong reduction in expression levels of both stripe 3 and 7. By contrast, deleting the downstream motif caused a significant reduction in the stripe 7 response, but did not detectably alter the levels of stripe 3 expression (Fig. 1G).
We extended our analysis of the upstream motif by performing a linker-scanning mutagenesis experiment to identify specific base pairs required for activation of both stripes (Fig. 1B,D,F,H). These experiments identified a 9 bp region that is crucial for activation of stripes 3 and 7. Interestingly, this motif contains the sequence CAGGTAA, which is among five sequences (referred to collectively as the TAGteam DNA motif) that are over-represented in the regulatory regions of a number of early zygotically active genes (De Renzis et al., 2007; ten Bosch et al., 2006).
The ubiquitous maternal protein Zelda (Zld) binds specifically to TAGteam sites and is required for activation of a large number of genes early in development (Liang et al., 2008). The eve expression pattern is significantly disrupted, but not completely abolished, in embryos derived from germline clones lacking Zld (zld M embryos), as compared with wild-type embryos (see Fig. S3 in the supplementary material), suggesting that other factors might bind this site. To identify such factors, we performed a yeast one-hybrid screen using a library obtained from 0- to 6-hour embryos (see Materials and methods). Out of ∼1.7 million colonies screened, we identified clones for only two transcription factors: Zld and Kni (see Materials and methods and Fig. S1 in the supplementary material). We next transformed a yeast strain containing an identical reporter with substitution mutations in the 9 bp region required for enhancer activation in vivo with purified plasmids expressing these two proteins. These mutations completely prevented activation in yeast by Zld (see Fig. S1 in the supplementary material), which suggests that Zld binds specifically to the sequences required for activation of the eve 3+7 reporter gene. By contrast, the kni clones tested maintained activation of the mutated yeast construct, suggesting that any Kni-binding activity in the mutated sequence was still intact (see below).
Fig. 1.
Deletion and mutation analyses of two conserved regions in the eve 3+7 enhancer. (A) The reporter gene, showing the eve 3+7 enhancer and the eve 2 enhancer, separated by a 300 bp spacer sequence that ensures their autonomy (Small et al., 1993), cloned upstream of the eve basal promoter driving lacZ expression. (B) DNA sequence variants tested in the context of the reporter gene in A. The conserved regions between D. melanogaster and D. picticornis are underlined; three deletions (D1-D3) and three substitution mutants (M1-M3) are shown. (C-H) Expression of eve 3+7-lacZ reporters containing the sequence variants shown in B. Stripe numbers are shown in the wild-type (WT) panel (C). Deletion of the upstream motif (D2) caused a reduction of stripe 3 and 7 (E) similar to that observed when both motifs were deleted (D1, data not shown), whereas deletion of the downstream motif (D3) did not alter expression of stripe 3 and had a modest effect on stripe 7 (G). Mutations in the CAGGTAA site present in the reverse sequence at the 3′ end of the first motif caused reductions in the expression of both stripe 3 and 7 (F,H), whereas a mutant that does not alter this site (M1) did not cause any detectable effect on reporter gene expression (D).
Fig. 1.
Deletion and mutation analyses of two conserved regions in the eve 3+7 enhancer. (A) The reporter gene, showing the eve 3+7 enhancer and the eve 2 enhancer, separated by a 300 bp spacer sequence that ensures their autonomy (Small et al., 1993), cloned upstream of the eve basal promoter driving lacZ expression. (B) DNA sequence variants tested in the context of the reporter gene in A. The conserved regions between D. melanogaster and D. picticornis are underlined; three deletions (D1-D3) and three substitution mutants (M1-M3) are shown. (C-H) Expression of eve 3+7-lacZ reporters containing the sequence variants shown in B. Stripe numbers are shown in the wild-type (WT) panel (C). Deletion of the upstream motif (D2) caused a reduction of stripe 3 and 7 (E) similar to that observed when both motifs were deleted (D1, data not shown), whereas deletion of the downstream motif (D3) did not alter expression of stripe 3 and had a modest effect on stripe 7 (G). Mutations in the CAGGTAA site present in the reverse sequence at the 3′ end of the first motif caused reductions in the expression of both stripe 3 and 7 (F,H), whereas a mutant that does not alter this site (M1) did not cause any detectable effect on reporter gene expression (D).
Fig. 2.
Zld binds to the CAGGTAA motif and is required for activation of the eve 3+7 enhancer. (A,B) Electrophoretic mobility shift assays (EMSAs) using His-tagged, full-length Zld (gray), a GST fusion to a C-terminal Zld fragment (amino acids 1240-1470, GST-ZldC, black), or GST (white) are shown. Proteins were incubated with a 22 bp oligonucleotide containing the CAGGTAA sequence (WT) or a mutated version thereof (MUT; see Table S1 in the supplementary material). Numbers above the rectangles indicate the amount of protein (μg) used in each lane. The lower shifted band in the GST-ZldC lane in A (asterisk) is most likely a degradation product because it is barely detectable when using a fresh protein aliquot (B), but becomes more prominent after repeated freeze-thaw cycles. (C) The eve 3+7 reporter gene (Small et al., 1996). The position of the CAGGTAA motif (Z) is shown. (D,E) RNA expression of the eve 3+7-lacZ reporter in a wild-type Drosophila embryo (D) and in an embryo lacking maternal expression of Zld (zld M) (E).
Fig. 2.
Zld binds to the CAGGTAA motif and is required for activation of the eve 3+7 enhancer. (A,B) Electrophoretic mobility shift assays (EMSAs) using His-tagged, full-length Zld (gray), a GST fusion to a C-terminal Zld fragment (amino acids 1240-1470, GST-ZldC, black), or GST (white) are shown. Proteins were incubated with a 22 bp oligonucleotide containing the CAGGTAA sequence (WT) or a mutated version thereof (MUT; see Table S1 in the supplementary material). Numbers above the rectangles indicate the amount of protein (μg) used in each lane. The lower shifted band in the GST-ZldC lane in A (asterisk) is most likely a degradation product because it is barely detectable when using a fresh protein aliquot (B), but becomes more prominent after repeated freeze-thaw cycles. (C) The eve 3+7 reporter gene (Small et al., 1996). The position of the CAGGTAA motif (Z) is shown. (D,E) RNA expression of the eve 3+7-lacZ reporter in a wild-type Drosophila embryo (D) and in an embryo lacking maternal expression of Zld (zld M) (E).
We next examined the role of Zld in eve 3+7 enhancer activation. First, electrophoretic mobility shift assays (EMSAs) were performed using His-tagged, full-length Zld and GST-ZldC, which contains a cluster of four zinc-fingers near the C-terminus (Liang et al., 2008). Both proteins bound specifically to the CAGGTAA site in vitro and failed to bind to the mutated site (Fig. 2A,B). Because of the similarities in binding and technical difficulties in expressing and purifying significant amounts of soluble full-length protein, we used the truncated ZldC protein in subsequent experiments. GST-ZldC binding to the CAGGTAA site was completely abolished by competition with unlabeled wild-type oligonucleotide and was not affected by competition with the mutant oligonucleotide (Fig. 2B). We then crossed the eve 3+7 reporter construct into zld M embryos, expecting a reduction in expression levels similar to that caused by mutating the Zld site (Fig. 1). Surprisingly, expression of the wild-type construct was completely abolished in these embryos (Fig. 2E). This result indicates that Zld is absolutely required for eve 3+7 activation, and suggests that the JAK-STAT pathway is insufficient for enhancer activation on its own.
As a negative control, we crossed a reporter gene containing only the eve 2 enhancer into zld M embryos. This construct contains none of the previously identified TAGteam sites, but nonetheless its expression was completely abolished in embryos lacking Zld (Fig. 3C). This result, along with the stronger than expected effect on eve 3+7, suggest that Zld plays a more prominent role in eve activation than previously thought. One possibility is that these enhancers contain Zld binding sites that do not match the TAGteam sequences. Alternatively, activation by Zld might involve mechanisms that are independent of DNA binding. We tested the first hypothesis by performing EMSA on a series of four 150 bp fragments that span the eve 2 enhancer element as well as a 100 bp fragment containing four mutated Zld sites, which served as a negative control (Fig. 3F; data not shown). All four fragments showed some Zld-binding activity in vitro, but binding affinities to these fragments seemed somewhat lower than that of the CAGGTAA site from the eve 3+7 enhancer (Fig. 3F, lane 14). To identify putative Zld binding sites in the eve 2 element, we tested a panel of ten overlapping 21 bp probes that span fragment 4 (P32-P41). One of the probes (P41) showed clear binding (data not shown); its 5′ end includes the sequence CAGGCAA, which differs by just one nucleotide from one of the TAGteam sequences. A probe containing this site in the middle (P42) showed a similar binding activity (Fig. 3G). We then searched the eve 2 sequence for other variants of TAGteam motifs and identified three additional Zld sites (Fig. 3H; data not shown). These experiments suggest that activation of the eve 2 enhancer involves specific binding of Zld to non-canonical sites.
### Kni binding is required for setting the inside boundaries of eve stripes 3 and 7
The abdominal Kni expression domain is positioned in the region between eve stripes 3 and 7, and very low levels of ectopically expressed Kni protein efficiently repress these stripes (Clyde et al., 2003; Struffi et al., 2004). In addition, the genetic removal of kni causes a complete derepression in this region, and DNAse I footprint assays have shown that there are at least five Kni binding sites in the minimal stripe element (Small et al., 1996).
In previous work, we attempted to mutate the footprinted Kni binding sites, but these mutations caused only minor effects on the stripe pattern (data not shown). These results suggest that either the tested mutations did not remove all Kni-binding activity or that Kni-mediated repression is indirect. To identify additional sites, we used a position weight matrix (PWM) derived from seventeen footprinted Kni sites in the literature (Fig. 4B) (Lifanov et al., 2003). This PWM predicted 12 Kni sites in the minimal enhancer, and substitution mutations in the core motifs of six of these sites led to a modest posterior expansion of the stripe into the region between stripes 3 and 4 (Clyde et al., 2003). However, subsequent attempts to completely remove Kni-binding activity caused no further expansions (data not shown). A possible explanation for these results is that the predicted sites do not reflect the true binding capabilities of Kni. Therefore, we tested the predicted sites one by one in gel shift experiments using an affinity-purified Kni protein fragment containing its DNA-binding domain and nuclear localization signal fused to GST (GST-Kni1-105). In 8 of 12 cases, the predicted sites did not show strong binding and in some cases the PWM score did not correlate with the binding affinity observed (data not shown). Also, the conserved binding motif adjacent to the Zld site seemed to bind strongly to Kni in the one-hybrid experiment (see Fig. S1 in the supplementary material), but was not predicted by the PWM. This Kni site was tested directly and found to bind specifically in EMSA experiments (P25, Fig. 5). The poor match between predicted and confirmed Kni sites prompted us to scan the entire eve 3+7 element for more binding sites using overlapping oligonucleotides (Fig. 4A). These experiments identified 11 fragments of various lengths, each of which exhibited Kni-binding activity (Fig. 5), but it was not possible to align common sequence motifs among these fragments.
Fig. 3.
Zld is required for activation of the eve stripe 2 enhancer and binds to non-canonical TAGteam sites. (A) The eve 2-lacZ reporter gene (Small et al., 1992). (B,C) RNA expression of the eve 2-lacZ reporter in a wild-type Drosophila embryo (B) and in a zld M embryo (C). (D) A summary of verified Zld binding sites. Canonical sites (black) and new sites discovered here (blue) are shown. (E) The eve 2 element, showing the positions of non-canonical Zld binding sites (green boxes). PCR fragments (F1-4) and oligonucleotide probes (P32-49) used for EMSA experiments (in F-H) are indicated. (F-H) EMSA experiments using affinity-purified GST-ZldC. Lanes are labeled according to the schematic in E. Boxes above the lanes indicate the amount of GST-ZldC used in each reaction: small box, 50 ng; medium, 200 ng; tall, 800 ng.
Fig. 3.
Zld is required for activation of the eve stripe 2 enhancer and binds to non-canonical TAGteam sites. (A) The eve 2-lacZ reporter gene (Small et al., 1992). (B,C) RNA expression of the eve 2-lacZ reporter in a wild-type Drosophila embryo (B) and in a zld M embryo (C). (D) A summary of verified Zld binding sites. Canonical sites (black) and new sites discovered here (blue) are shown. (E) The eve 2 element, showing the positions of non-canonical Zld binding sites (green boxes). PCR fragments (F1-4) and oligonucleotide probes (P32-49) used for EMSA experiments (in F-H) are indicated. (F-H) EMSA experiments using affinity-purified GST-ZldC. Lanes are labeled according to the schematic in E. Boxes above the lanes indicate the amount of GST-ZldC used in each reaction: small box, 50 ng; medium, 200 ng; tall, 800 ng.
We then obtained a different PWM derived from a single SELEX experiment performed by the Berkeley Drosophila Transcription Network Project (Fig. 4C). Despite the low specificity of this PWM, it predicted twenty sites in the eve 3+7 enhancer. Strikingly, 11 of the predicted sites mapped to each of the 11 Kni-binding fragments identified in vitro, whereas the other nine mapped to regions that did not bind. By considering only those predicted sites that bound, we generated a more specific PWM (Fig. 4D), which was substantially different from that predicted from footprinted sites in the literature (Fig. 4B). Interestingly, Noyes et al. (Noyes et al., 2008) published a new PWM for Kni based on data from a bacterial one-hybrid experiment (Fig. 4E), and this is remarkably similar to that derived from our direct Kni binding studies. The similar PWMs showed three bases (positions 1, 6 and 9) that are nearly invariant. Changing two or three of these bases completely abolished binding to nine of the 11 sites (Fig. 5), suggesting that these PWMs accurately identify bases required for effective Kni binding. We introduced mutations into each of the 11 predicted sites (Fig. 4F) in the context of an eve 3+7-lacZ transgene and tested its activity in vivo. These mutations converted the striped, wild-type pattern (Fig. 4G) into a single broad expression domain in the posterior part of the embryo (Fig. 4H). This pattern is virtually indistinguishable from that driven by the wild-type enhancer in kni mutants, suggesting that the invariant bases in the predicted Kni sites are crucial for Kni activity in vivo.
### Hb binding is required for positioning the outside boundaries of eve stripes 3 and 7
The gap protein Hb is expressed maternally and zygotically in a dynamic pattern that includes expression throughout the anterior half of the embryo and a broad stripe in posterior regions (Tautz et al., 1987). Previous studies suggested that Hb acts as a repressor that forms the anterior boundary of eve stripe 3 and the posterior boundary of stripe 7 (Clyde et al., 2003; Small et al., 1996; Yu and Small, 2008). Misexpression of Hb causes a dose-dependent reduction of these stripes, and there are numerous Hb binding sites in the eve 3+7 element (Stanojevic et al., 1989). However, recent computational work suggests that Hb might act in both the activation and repression of the stripe (Papatsenko and Levine, 2008). According to this model, high levels of Hb repress expression, whereas lower levels are involved in activation. This is consistent with a similar model for Hb-mediated regulation of the gap gene Krüppel (Kr), which is expressed in a central domain (Schulz and Tautz, 1994).
A computational scan of the eve 3+7 enhancer predicts nine Hb binding sites with relatively high PWM scores (Fig. 6A) (Lifanov et al., 2003). There is excellent agreement between the positions of these sites and previous footprint studies of Hb binding to the eve promoter region (Stanojevic et al., 1989). To test the role of Hb in eve 3+7 regulation, we generated specific substitution mutations in predicted Hb binding sites and then tested them in reporter gene assays in vivo (Fig. 6). Mutations in four predicted sites led to an anterior derepression of stripe 3 expression and to a strengthening and posterior expansion of stripe 7 (Fig. 6C), results which are qualitatively similar to the expansions observed in hb mutants (Small et al., 1996). This is consistent with the hypothesis that high levels of Hb are required for effective repression of the enhancer (Clyde et al., 2003; Yu and Small, 2008). We next made mutations in all nine predicted Hb sites in an effort to completely abolish all Hb-binding activity from the enhancer (M9 Hb, Fig. 6A). If Hb is involved in both repression and activation, the complete removal of Hb binding might be expected to cause a reduction in stripe levels. Instead, this construct caused expansions that were very similar to those observed after mutating four sites (Fig. 6D).
Fig. 4.
Kni represses the eve 3+7 enhancer through direct binding to 11 sites. (A) The 511 bp eve 3+7 enhancer (gray bar) showing the positions of DNA probes (P1-P31) used in gel shift assays. Kni sites identified in the EMSA experiments shown in Fig. 5 are indicated as red boxes. (B-E) Position weight matrices (PWMs) derived from various collections of Kni binding site datasets (see text). (F) Sequences of Kni binding sites in the eve 3+7 enhancer (left), and point mutations of those sites that were tested in vivo (right). Sites are numbered by their position in the 511 bp enhancer; reverse strand sites are indicated (r). PWM scores are shown for wild-type sites and their mutated counterparts. (G,H) lacZ expression patterns driven by the wild-type eve 3+7 reporter gene (G) and an identical reporter that contains mutations in all 11 Kni binding sites (H). Such mutations cause complete derepression in the region between the stripes.
Fig. 4.
Kni represses the eve 3+7 enhancer through direct binding to 11 sites. (A) The 511 bp eve 3+7 enhancer (gray bar) showing the positions of DNA probes (P1-P31) used in gel shift assays. Kni sites identified in the EMSA experiments shown in Fig. 5 are indicated as red boxes. (B-E) Position weight matrices (PWMs) derived from various collections of Kni binding site datasets (see text). (F) Sequences of Kni binding sites in the eve 3+7 enhancer (left), and point mutations of those sites that were tested in vivo (right). Sites are numbered by their position in the 511 bp enhancer; reverse strand sites are indicated (r). PWM scores are shown for wild-type sites and their mutated counterparts. (G,H) lacZ expression patterns driven by the wild-type eve 3+7 reporter gene (G) and an identical reporter that contains mutations in all 11 Kni binding sites (H). Such mutations cause complete derepression in the region between the stripes.
The experiments described here significantly refine our understanding of how the eve 3+7 enhancer functions in the early embryo. In particular, we showed that the maternal zinc-finger protein Zld is absolutely required for STAT-mediated enhancer activation, and that the gap proteins Kni and Hb establish stripe boundaries by directly binding to multiple sites within the enhancer (Fig. 7A,B; see Fig. S4 in the supplementary material).
### The mechanism of enhancer activation
When first activated in late nuclear cycle 13, the minimal eve 3+7 enhancer drives weak stochastic expression in a broad central pattern (Fig. 7C), which refines in cycle 14 to a stripe that is about four nuclei wide (Fig. 7D). By contrast, stripe 7 expression, which is visible by enzymatic staining methods (Fig. 1C-H, Fig. 2D, Fig. 4G), is nearly undetectable using fluorescence in situ hybridization. Previous work showed that stripe 7 shares regulatory information with stripe 3 but is also controlled by sequences located between the minimal stripe 3+7 and stripe 2 enhancers (Small et al., 1996), and possibly by sequences within and downstream of the stripe 2 enhancer (Janssens et al., 2006). Thus, stripe 7 is unique among the eve stripes in that it is not regulated by a discrete modular element. Previous work showed that the terminal gap gene tailless (tll) is required for activation of eve 7. However, since the Tll protein probably functions as a dedicated repressor (Haecker et al., 2007), it is likely that activation of eve 7 by Tll occurs indirectly, through repression of one or more repressors (Janssens et al., 2006).
The ubiquitous maternal protein Zld is required for the in vivo function of both the eve 3+7 and eve 2 enhancers, which are activated by the JAK-STAT pathway and Bicoid (Bcd), respectively. Zld was previously shown to bind to five sequence motifs (TAGteam sites) that are over-represented in the regulatory regions of early developmental genes (De Renzis et al., 2007; ten Bosch et al., 2006). Our mutations of the single TAGteam site in the eve 3+7 enhancer caused a reduction in expression (Fig. 1F,H), but zld M embryos showed complete abolishment of eve 3+7-lacZ reporter gene expression (Fig. 2E). Also, the eve 2 enhancer, which does not contain any canonical TAGteam sites, is nonetheless inactive in zld M embryos (Fig. 3C). We show here that this enhancer contains at least four variants of the TAGteam sites (Fig. 3D), which suggests that Zld binding to non-canonical sites is crucial for its function in embryogenesis. ChIP-Chip data show that Zld binding extends throughout much of the eve 5′ and 3′ regulatory regions (C. Nien, H. Liang and C.R., unpublished).
The implication of such broad binding and the requirement for Zld for activation of two eve enhancers are consistent with its proposed role as a global activator of zygotic transcription (Liang et al., 2008). How might this work? One possibility is that there are cooperative interactions between Zld and the other activators of these stripes. A non-exclusive alternative is that Zld binding creates a permissive environment in broad regions of the genome, possibly by changing the chromatin configuration and making it more likely that the other activator proteins can bind. However, it is important to note that eve expression is not completely abolished in zld M embryos, so at least some eve regulatory elements could function in the absence of Zld. Future experiments will be required to further characterize the role of Zld in the regulation of the entire eve locus.
Fig. 5.
Mutational analysis of Kni binding in the eve 3+7 enhancer. Data are shown for wild-type probes that exhibit Kni-binding activity (P2, P5, etc.), which are labeled as in Fig. 4A. For sequences of all probes and mutated versions, see Table S1 in the supplementary material. Note that different sites bind GST-Kni1-105 with very different apparent affinities (e.g. compare P3 with P19). For each site, two or three nucleotides were mutated (e.g. P25M1, P5M1) and were tested in parallel with the wild type. Such mutations effectively abolished binding in all but two cases (P4 and P29) (see text).
Fig. 5.
Mutational analysis of Kni binding in the eve 3+7 enhancer. Data are shown for wild-type probes that exhibit Kni-binding activity (P2, P5, etc.), which are labeled as in Fig. 4A. For sequences of all probes and mutated versions, see Table S1 in the supplementary material. Note that different sites bind GST-Kni1-105 with very different apparent affinities (e.g. compare P3 with P19). For each site, two or three nucleotides were mutated (e.g. P25M1, P5M1) and were tested in parallel with the wild type. Such mutations effectively abolished binding in all but two cases (P4 and P29) (see text).
### Mechanisms of repression of the eve 3+7 enhancer
The genetic removal of kni causes a broad expansion of eve 3+7-lacZ expression in posterior regions of the embryo (Small et al., 1996), and ectopic Kni causes a strong repression of both stripes (Clyde et al., 2003; Struffi et al., 2004). Interestingly, the posterior boundary of eve stripe 3 is positioned in regions with extremely low levels of Kni protein (Fig. 7D). If the stripe 3 posterior boundary is solely formed by Kni, the enhancer must be exquisitely sensitive to its repression, possibly through the high number of sites in the eve 3+7 enhancer. Previous attempts to mutate sites based on computational predictions failed to mimic the genetic loss of kni, so here we used a biochemical approach to identify Kni sites in an unbiased manner. Our EMSA analyses identified 11 Kni sites, and the PWM derived from these sites alone is very similar to the Kni matrix derived in a bacterial one-hybrid study (Noyes et al., 2008). Thus, our studies provide biochemical support for the bacterial one-hybrid method as an accurate predictor of the DNA-binding activity of this particular protein.
We further showed that specific point mutations abolish binding to nine of the 11 sites, and when these mutations were tested in a reporter gene they caused an expansion that is indistinguishable from that detected in kni mutants (Small et al., 1996). This result strongly suggests that Kni-mediated repression involves direct binding to the eve 3+7 enhancer, and that Kni alone can account for all repressive activity in nuclei that lie in the region between stripes 3 and 7. However, our work does not address the exact mechanism of Kni-mediated repression. The simplest possibility is that Kni competes with activator proteins for binding to overlapping or adjacent sites (Levine and Manley, 1989). We consider this mechanism unlikely because only one of the 11 Kni sites overlaps with an activator site. Also, the in vivo misexpression of a truncated Kni protein (Kni 1-105) that contains only the DNA-binding domain and the nuclear localization signal has no discernible effect on the endogenous eve expression pattern, whereas a similar misexpression of Kni 1-330 or Kni 1-429 strongly represses eve 3+7 (P. Struffi, PhD thesis, Michigan State University, 2004) (Struffi et al., 2004).
Whereas Kni-mediated repression forms the inside boundaries of the eve 3+7 pattern, forming the outside boundaries is dependent on Hb, which abuts the anterior boundary of stripe 3 and overlaps with stripe 7 (Fig. 7D). Both stripes expand towards the poles of the embryo in zygotic hb mutants (Small et al., 1996), and these expansions are mimicked by mutations in four or all nine Hb sites within the eve 3+7 enhancer (Fig. 6C,D). Further anterior expansions of the pattern are prevented by an unknown Bcd-dependent repressor (X) and the Torso (Tor)-dependent terminal system (Fig. 7A). Indeed, eve 3+7-lacZ expression expands all the way to the anterior tip in mutants that remove bcd and the terminal system (Small et al., 1996).
Our mutational analyses suggest that Hb is a dedicated repressor of the eve 3+7 enhancer, and argue against a dual role in which high Hb levels repress, whereas lower concentrations activate, transcription (Papatsenko and Levine, 2008). One caveat is that activation of the stripe might occur via maternal Hb in the absence of zygotic expression. However, triple mutants that remove zygotic hb, kni and tor, a terminal system component, show eve 3+7 enhancer expression that extends from ∼75% embryo length (100% is the anterior pole) to the posterior pole (Small et al., 1996). It is extremely unlikely that the maternal Hb gradient, which is not perturbed in this mutant combination, could activate expression throughout the posterior region. We propose that any activating role for Hb on this enhancer is indirect and might occur by repressing kni, which helps to define a space where the concentrations of both repressors are sufficiently low for activation to occur. kni expands anteriorly in hb mutants and is very sensitive to repression by ectopic Hb (Clyde et al., 2003; Yu and Small, 2008), consistent with an indirect role in activation. A similar mechanism has been shown to be important for the correct positioning of eve stripe 2 (Wu et al., 1998). In this case, the anterior Giant (Gt) domain appears to be required for eve 2 activation, but it does so by strongly repressing Kr, thus creating space for activation in the region between Gt and Kr.
Fig. 6.
Hb represses the eve 3+7 enhancer through direct binding to nine sites. (A) Sequences of Hb binding sites in the eve 3+7 enhancer (wild type) and point mutations (red letters) of those sites that were tested in vivo. Sites are numbered by their position in the 511 bp enhancer; reverse strand sites are indicated (r). The Hb PWM was based on published binding sites (Lifanov et al., 2003). Scores are shown for wild-type sites and their mutated counterparts, and sites mutated in individual reporter genes are indicated on the right. (B-D) lacZ expression patterns driven by the wild-type eve 3+7 reporter gene (B) and an identical reporter that contains mutations in four (C) or nine (D) binding sites. Mutations in Hb sites cause anterior expansions of stripe 3 expression.
Fig. 6.
Hb represses the eve 3+7 enhancer through direct binding to nine sites. (A) Sequences of Hb binding sites in the eve 3+7 enhancer (wild type) and point mutations (red letters) of those sites that were tested in vivo. Sites are numbered by their position in the 511 bp enhancer; reverse strand sites are indicated (r). The Hb PWM was based on published binding sites (Lifanov et al., 2003). Scores are shown for wild-type sites and their mutated counterparts, and sites mutated in individual reporter genes are indicated on the right. (B-D) lacZ expression patterns driven by the wild-type eve 3+7 reporter gene (B) and an identical reporter that contains mutations in four (C) or nine (D) binding sites. Mutations in Hb sites cause anterior expansions of stripe 3 expression.
Fig. 7.
Combinatorial activation and repression of the Drosophila eve 3+7 enhancer. (A) Model for the regulatory interactions that establish eve stripe 3 and 7 expression. Ubiquitous activators are shown as downward facing arrows along the length of the embryo, whereas the positions of the stripes are shown along with the repressive factors that prevent their activation in various regions of the embryo. (B) Schematic of the 511 bp enhancer showing relative positions of all verified binding sites. X' refers to an unknown Bcd-dependent repressor. dSTAT is also known as Mrl or Stat92E. (C,D) RNA expression of the eve 3+7-lacZ reporter gene (blue) along with Hb (red) and Kni (green) protein expression patterns in a late cycle 13 embryo (C) and an early nuclear cycle 14 embryo (D). Note that the early expression of stripe 3 is broad, but then refines to a narrow stripe. The stripe 7 response driven by the minimal element used here is always weak compared with stripe 3 and is not visible in these embryos.
Fig. 7.
Combinatorial activation and repression of the Drosophila eve 3+7 enhancer. (A) Model for the regulatory interactions that establish eve stripe 3 and 7 expression. Ubiquitous activators are shown as downward facing arrows along the length of the embryo, whereas the positions of the stripes are shown along with the repressive factors that prevent their activation in various regions of the embryo. (B) Schematic of the 511 bp enhancer showing relative positions of all verified binding sites. X' refers to an unknown Bcd-dependent repressor. dSTAT is also known as Mrl or Stat92E. (C,D) RNA expression of the eve 3+7-lacZ reporter gene (blue) along with Hb (red) and Kni (green) protein expression patterns in a late cycle 13 embryo (C) and an early nuclear cycle 14 embryo (D). Note that the early expression of stripe 3 is broad, but then refines to a narrow stripe. The stripe 7 response driven by the minimal element used here is always weak compared with stripe 3 and is not visible in these embryos.
The correct ordering of gene expression boundaries along the AP axis is crucial for establishing the Drosophila body plan. All gap genes analyzed so far seem to function as repressors that differentially position multiple boundaries (Andrioli et al., 2004; Clyde et al., 2003; Langeland et al., 1994; Struffi et al., 2004; Wu et al., 1998; Yu and Small, 2008). However, it is still unclear how differential sensitivity is achieved at the molecular level. Simple correlations of binding site number and affinity with boundary positioning cannot explain the exquisite differences in the sensitivity of individual enhancers, suggesting that they do more than `count' binding sites and that specific arrangements of repressor and activator sites might control this process. The experiments described here better define the binding characteristics of both Hb and Kni and provide a firm foundation for future experiments designed to decipher the regulatory logic that controls differential sensitivity.
Funding
This work was supported by NIH grants RO1 GM51946 and RO1 GM63024 to S.S. and C.R., respectively, and was conducted in a facility constructed with support from Research Facilities Improvement Grant C06 RR-15518-01 from the National Center for Research Resources, National Institutes of Health. Deposited in PMC for release after 12 months.
We thank Vikram Vasisht, Arthi Palaniappan, Zhe Xu, Cristina Pomilla, Hsuan-Ni Lin and Adam Parè for technical assistance; Hsiao-lan Liang for the zld294 allele; Nikolai Kirov and Tenzin Gocha for GST-ZldC; Nobuo Ogawa and Mark Biggin for the PWM used to align Kni binding sites; Leslie Pick for the Drosophila embryonic cDNA library; and Hongtao Chen and Jerry Huang for comments on the manuscript. We also thank three anonymous reviewers for suggestions that improved the manuscript.
Andrioli
L. P.
,
Oberstein
A. L.
,
M. S.
,
Yu
D.
,
Small
S.
(
2004
).
Groucho-dependent repression by Sloppy-paired 1 differentially positions anterior pair-rule stripes in the Drosophila embryo
.
Dev. Biol.
276
,
541
-
551
.
Arnone
M. I.
,
Davidson
E. H.
(
1997
).
The hardwiring of development: organization and function of genomic regulatory systems
.
Development
124
,
1851
-
1864
.
Arnosti
D. N.
(
2003
).
Analysis and function of transcriptional regulatory elements: insights from Drosophila
.
Annu. Rev. Entomol.
48
,
579
-
602
.
Clyde
D. E.
,
M. S.
,
Wu
X.
,
Pare
A.
,
Papatsenko
D.
,
Small
S.
(
2003
).
A self-organizing system of repressor gradients establishes segmental complexity in Drosophila
.
Nature
426
,
849
-
853
.
De Renzis
S.
,
Elemento
O.
,
Tavazoie
S.
,
Wieschaus
E. F.
(
2007
).
Unmasking activation of the zygotic genome using chromosomal deletions in the Drosophila embryo
.
PLoS Biol.
5
,
e117
.
Driever
W.
,
Nusslein-Volhard
C.
(
1988
).
A gradient of bicoid protein in Drosophila embryos
.
Cell
54
,
83
-
93
.
Frangioni
J. V.
,
Neel
B. G.
(
1993
).
Solubilization and purification of enzymatically active glutathione S-transferase (pGEX) fusion proteins
.
Anal. Biochem.
210
,
179
-
187
.
Frasch
M.
,
Hoey
T.
,
Rushlow
C.
,
Doyle
H.
,
Levine
M.
(
1987
).
Characterization and localization of the even-skipped protein of Drosophila
.
EMBO J.
6
,
749
-
759
.
Fujioka
M.
,
Emi-Sarker
Y.
,
Yusibova
G. L.
,
Goto
T.
,
Jaynes
J. B.
(
1999
).
Analysis of an even-skipped rescue transgene reveals both composite and discrete neuronal and early blastoderm enhancers, and multi-stripe positioning by gap gene repressor gradients
.
Development
126
,
2527
-
2538
.
Goto
T.
,
Macdonald
P.
,
Maniatis
T.
(
1989
).
Early and late periodic patterns of even skipped expression are controlled by distinct regulatory elements that respond to different spatial cues
.
Cell
57
,
413
-
422
.
Haecker
A.
,
Qi
D.
,
Lilja
T.
,
Moussian
B.
,
Andrioli
L. P.
,
Luschnig
S.
,
Mannervik
M.
(
2007
).
Drosophila brakeless interacts with atrophin and is required for tailless-mediated transcriptional repression in early embryos
.
PLoS Biol.
5
,
e145
.
Harding
K.
,
Hoey
T.
,
Warrior
R.
,
Levine
M.
(
1989
).
Autoregulatory and gap gene response elements of the even-skipped promoter of Drosophila
.
EMBO J.
8
,
1205
-
1212
.
Hou
X. S.
,
Melnick
M. B.
,
Perrimon
N.
(
1996
).
Marelle acts downstream of the Drosophila HOP/JAK kinase and encodes a protein similar to the mammalian STATs
.
Cell
84
,
411
-
419
.
Janssens
H.
,
Hou
S.
,
Jaeger
J.
,
Kim
A. R.
,
Myasnikova
E.
,
Sharp
D.
,
Reinitz
J.
(
2006
).
Quantitative and predictive model of transcriptional control of the Drosophila melanogaster even skipped gene
.
Nat. Genet.
38
,
1159
-
1165
.
Jiang
J.
,
Kosman
D.
,
Ip
Y. T.
,
Levine
M.
(
1991
).
The dorsal morphogen gradient regulates the mesoderm determinant twist in early Drosophila embryos
.
Genes Dev.
5
,
1881
-
1891
.
Langeland
J. A.
,
Attai
S. F.
,
Vorwerk
K.
,
Carroll
S. B.
(
1994
).
Positioning adjacent pair-rule stripes in the posterior Drosophila embryo
.
Development
120
,
2945
-
2955
.
Levine
M.
,
Manley
J. L.
(
1989
).
Transcriptional repression of eukaryotic promoters
.
Cell
59
,
405
-
408
.
Liang
H. L.
,
Nien
C. Y.
,
Liu
H. Y.
,
Metzstein
M. M.
,
Kirov
N.
,
Rushlow
C.
(
2008
).
The zinc-finger protein Zelda is a key activator of the early zygotic genome in Drosophila
.
Nature
456
,
400
-
403
.
Lifanov
A. P.
,
Makeev
V. J.
,
Nazina
A. G.
,
Papatsenko
D. A.
(
2003
).
Homotypic regulatory clusters in Drosophila
.
Genome Res.
13
,
579
-
588
.
Macdonald
P. M.
,
Ingham
P.
,
Struhl
G.
(
1986
).
Isolation, structure, and expression of even-skipped: a second pair-rule gene of Drosophila containing a homeo box
.
Cell
47
,
721
-
734
.
Nauber
U.
,
Pankratz
M. J.
,
Kienlin
A.
,
Seifert
E.
,
Klemm
U.
,
Jackle
H.
(
1988
).
Abdominal segmentation of the Drosophila embryo requires a hormone receptor-like protein encoded by the gap gene knirps
.
Nature
336
,
489
-
492
.
Noyes
M. B.
,
Meng
X.
,
Wakabayashi
A.
,
Sinha
S.
,
Brodsky
M. H.
,
Wolfe
S. A.
(
2008
).
A systematic characterization of factors that regulate Drosophila segmentation via a bacterial one-hybrid system
.
Nucleic Acids Res.
36
,
2547
-
2560
.
Papatsenko
D.
,
Levine
M. S.
(
2008
).
Dual regulation by the Hunchback gradient in the Drosophila embryo
.
105
,
2901
-
2906
.
Sackerson
C.
(
1995
).
Patterns of conservation and divergence at the even-skipped locus of Drosophila
.
Mech. Dev.
51
,
199
-
215
.
Schulz
C.
,
Tautz
D.
(
1994
).
Autonomous concentration-dependent activation and repression of Kruppel by hunchback in the Drosophila embryo
.
Development
120
,
3043
-
3049
.
Small
S.
,
Blair
A.
,
Levine
M.
(
1992
).
Regulation of even-skipped stripe 2 in the Drosophila embryo
.
EMBO J.
11
,
4047
-
4057
.
Small
S.
,
Arnosti
D. N.
,
Levine
M.
(
1993
).
Spacing ensures autonomous expression of different stripe enhancers in the even-skipped promoter
.
Development
119
,
762
-
772
.
Small
S.
,
Blair
A.
,
Levine
M.
(
1996
).
Regulation of two pair-rule stripes by a single enhancer in the Drosophila embryo
.
Dev. Biol.
175
,
314
-
324
.
Stanojevic
D.
,
Hoey
T.
,
Levine
M.
(
1989
).
Sequence-specific DNA-binding activities of the gap proteins encoded by hunchback and Kruppel in Drosophila
.
Nature
341
,
331
-
335
.
Struffi
P.
,
M.
,
Kulkarni
M.
,
Arnosti
D. N.
(
2004
).
Quantitative contributions of CtBP-dependent and -independent repression activities of Knirps
.
Development
131
,
2419
-
2429
.
Tautz
D.
,
Lehmann
R.
,
Schnurch
H.
,
Schuh
R.
,
Seifert
E.
,
Kienlin
A.
,
Jones
K.
,
Jackle
H.
(
1987
).
Finger protein of novel structure encoded by hunchback, a second member of the gap class of Drosophila segmentation genes
.
Nature
327
,
383
-
389
.
ten Bosch
J. R.
,
Benavides
J. A.
,
Cline
T. W.
(
2006
).
The TAGteam DNA motif controls the timing of Drosophila pre-blastoderm transcription
.
Development
133
,
1967
-
1977
.
Wang
C.
,
Lehmann
R.
(
1991
).
Nanos is the localized posterior determinant in Drosophila
.
Cell
66
,
637
-
647
.
Wu
X.
,
Vakani
R.
,
Small
S.
(
1998
).
Two distinct mechanisms for differential positioning of gene expression borders involving the Drosophila gap protein giant
.
Development
125
,
3765
-
3774
.
Wu
X.
,
Vasisht
V.
,
Kosman
D.
,
Reinitz
J.
,
Small
S.
(
2001
).
Thoracic patterning by the Drosophila gap gene hunchback
.
Dev. Biol.
237
,
79
-
92
.
Yan
R.
,
Small
S.
,
Desplan
C.
,
Dearolf
C. R.
,
Darnell
J. E.
Jr
(
1996
).
Identification of a Stat gene that functions in Drosophila development
.
Cell
84
,
421
-
430
.
Yu
D.
,
Small
S.
(
2008
).
Precise registration of gene expression boundaries by a repressive morphogen in Drosophila
.
Curr. Biol.
18
,
868
-
876
.
Yu
Y.
,
Yussa
M.
,
Song
J.
,
Hirsch
J.
,
Pick
L.
(
1999
).
A double interaction screen identifies positive and negative ftz gene regulators and ftz-interacting proteins
.
Mech. Dev.
83
,
95
-
105
.
Competing interests statement
The authors declare no competing financial interests. | 2022-05-18 05:52:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4962913990020752, "perplexity": 9715.163495954994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521152.22/warc/CC-MAIN-20220518052503-20220518082503-00704.warc.gz"} |
https://motls.blogspot.com/2011/03/hide-decline-ii-1400-1550-covered-up.html?showComment=1301811063773&m=1 | ## Saturday, March 26, 2011
### Hide the decline II: 1400-1550 covered up
Steve McIntyre managed to uncover a piece of scientific forgery that looks even more serious than the original "hide the decline" trick:
Hide the Decline: Sciencemag #3
The summary of the story is very simple. The Briffa-Osborn 1999 reconstruction of the climate depended on a variable called "yrmxd" in a computer code. You can set it to any year and the program will cover up the whole history of your proxies before the year "yrmxd". The variable was set to 1550 instead of the correct 1402 and the result looked like this:
Click to zoom in.
Look at the brightly shining pink lines - because both of these segments have been erased in the final paper. Those guys have censored the "distracting" decline of the temperature obtained from the trees after 1960 (they have masked the so-called "divergence problem"): this is the original "hide the decline" scandal.
But as you can see, they have also hidden a 3 times longer period, 1402-1550, which was arguably even more inconvenient because the trees indicate a faster warming in the 15th century than in the 20th century. By this cosmetic surgery, they have eliminated pretty much 1/2 of their data - on both sides - because they didn't support the predetermined conclusion and only picked the 1/2 that could be used as a part of the hockey stick.
No justification has been given for the truncation - and in fact, the fact that the truncation has been done remained a secret in the paper.
I suspect that the whole alarmist paleoclimatological community has been well aware of this 15th century problem - data clearly disagreeing with any form of a hockey stick. My reason for this broader statement is that the censorship seems to influence the same period as the aptly named "censored" directory by Mann that was ultimately erased from MBH98.
MBH98 came before Briffa-Osborn 1999 so you may try to guess which mann is the most likely primordial originator of this fabrication.
It seems pretty likely that this or a very similar fraud affects pretty much every single climate reconstruction in the literature going back at least to 1400 by anyone who has failed to explicitly denounce Michael Mann as a pile of f*ces. Those people - if they deserve to be called in this way at all - have been lying and deceiving everyone for decades and they should be sent to Guantanamo Bay. ;-)
Well, Tim Ball has offered a better verdict: the mann should be moved from Penn State to the State Pen. A hilarious quote! Needless to say, the mann has sued Tim Ball. If Ball loses, I will try to find addresses of oil companies and urge them to pay Ball 10 times the money he will lose.
Now, let me add that I am far from certain that the Earth has seen a warming trend - or even a huge warming trend - in the 15th century. In fact, other climate reconstructions, including one from Craig Loehle who is no alarmist, indicate that the 15th century already saw a cooling before the little ice age. However, what I am certain about is that there have been sizable temperature variabilities in the pre-industrial era and the alarmist movement has been working hard to deprecate them.
Via Steve McIntyre and Anthony Watts
1. Lubos,
Great summary. This paleo 'group', Wegman's report show the tangled web of this 'Team', should, at the least, be shunned by all the climate scientists.
That Eric Steig recently jumped on Mann's bandwagon and used his corrupt statistical methods ruined a promising young scientist.
I just don't understand how any budding scientist could even consider anyone of Mann et al (The Team) as a potential mentor and adviser.
Imagine if engineers started behaving like climate scientists, hiding data, refusing to submit calculations and such. They would all lose their registration. Nothing could ever get built.
Barbers need a license to take a pair of scissors to my head. They have to demonstrate some minimum modicum of competency.
Shouldn't climate scientists?
2. This raises a question for me. Is there a ~600yr solar cycle affecting primarily the red and IR spectrum.
3. (Or magnetic or UV. Something that would primarily affect plants. I'm thinking a magnetic affect, reducing incoming solar radiation at the same time surface temperature increases due to release of stored heat.)
4. Hello,
Forgive me if this question is inappropriate.
Recently I questioned the omission of the data and was told that the removal of the data had already been investigated by three separate panels who looked into the so called "climate-gate scandal". The person said that the data was clearly erroneous proxy data that conflicted with either known instrument data or other proxies that gave a stronger, more consistent signal. In other words, the scientists removed bad data.
Do you know where I might find the published investigations of these panels?
thanks.
5. Dear Janet,
I am afraid that someone was trying to deceive you. I watch those things pretty carefully and it seems pretty clear that the discovery that the 1400-1550 period was censored out in this way is pretty new, so no panels could have studied it in the past.
So I think that the people whom you talked to just wanted to say "we have had the universal whitewash panels that have removed any sin from the alarmist climatologists' bodies." Well, at most Jesus Christ could do such a thing.
So what they may have possibly referred to was the 1960-2000 decline which clearly shows that that the trees are/were not good global temperature proxies because the temperatures were increasing, not dropping.
Concerning the 15th century, I still think it's more likely that it was cooling during the 15th century but as the pink censored proxies help to show, this question is arguably as uncertain as many other similar questions.
Even if a panel, or whatever indimidating word one uses, were studying the question in the way you suggest, it's very unlikely that they could have a legitimate argument of the sort you sketch. If several groups of proxies qualitatively disagree, it can still be any group that is closer to the truth. And if one tried hard, one could find many other proxies that behave in the same way as the "so far minority". One simply can't choose the better proxies by "majority votes" among proxies - it's a self-evidently flawed strategy especially because the "majorities" and "minorities" depend on subjective selections.
Cheers
LM
6. Disagreeing with a thesis is not a reason to remove data. Data is removed when it it's known to have been corrupted (by specific mechanisms).
Lobos, to continue my riff, more simply there could be 6 to 8 hundred year oceanic events that are triggered by or happen to coincide with 200 yr solar cycles.
7. I think the proper conclusion to draw here is that the tree ring series is a very poor proxy for temp and should no longer be used in reconstructions. The "team" tried to explain away the modern hidden decline with some dubious explanation as to why temp and tree rings tracked well up to a point and then diverged. When you add this new divergence problem, I think the logical conclusion is that the proxy is worthless.
8. How many tree series are actually involved in this study?
9. Uk courts have knocked down a case for deformation against the RSPB, Judge said the court is no place for scentific dissagreements to be resolved.
http://www.telegraph.co.uk/earth/wildlife/8408156/Birdwatchers-lose-RSPB-defamation-case.html
10. @janet
In no way inappropriate. I would point you directly to them, if I had the URLs to hand. I think, however, you would be disappointed in the reports. A few pages long, they exonerate Prof. Dr. Phil Jones et al on very flimsy 'evidence', often no more than a verbal assurance that there was no misfeasance. There remain questions about the composition of the panels, their terms of reference, what evidence they took (in one case only that selected by the university itself) &c., &c., &c.
For the full saga, try
http://wattsupwiththat.com/climategate/
I must apologise for not having something more succint, but the full story is there.
11. @jason.....
Does it matter, if they're all Mann-made (fiddled, 'adjusted', tortured to within ....)
12. For Reader "Janet" at #4:
Check www.co2science.org
There are now 900+ studies involving varous techiques, hundreds of organizations and 40+ countries and new confirming results practically every month showing that the MWP was as warm,likely warmer, and spanned 400+ years.
This is part of the "bad data" that the IPCC folks casually cast aside. Not only that, but also the subsequent "little ice age".
The power & available funds went to their head. They tried to pull off the old Marx comics quip:
Are you gonna believe me, or your own eyes?
13. Call me stupid but I don't really understand what exactly was done here. I know in the original "hide the decline" trick they simply replaced the proxy data with thermometer data, but what did they replace the proxy data with in this other "hide the decline" trick? Also, do all the reconstructions use this same trick or just the Briffa one?
14. Dr. Motl,
Forgive me for commenting again, but after reading a NASA release (http://www.nasa.gov/topics/earth/features/cooling-plant-growth.html) which indicated that plant growth slowed global warming by creating a new negative feedback in response to increased CO2, I was left wondering the following....
In both the post-1960, and pre-1550 data, we see that the tree response appears contradictory to what other proxies indicate. In other words, the plants responded as if there was global cooling (slowing growth) post-1960, and as if there was global warming (increased growth) pre-1550.
If increased (or decreased) plant growth acts as a natural cooling (or warming) mechanism, this would help explain the divergence problem.
At this point the question for me is what feedback mechanism (or mechanisms) triggered the plants response.
I was thinking along the same lines as aaron. I was wondering if sunspot activity, or the solar cycle, might play a role here.
Am I completely clueless and off my rocker with my understanding of the situation? If so, please don't waste your time addressing my question.
Thank you again for your kindness and tolerance.
Cheers,
15. I wonder why you go to so much trouble publicizing this 'scandal'
when there are countless thousands of worthy studies proving the veracity of climate change science?
Here in Europe nobody would dram of questioning a whole army of peer reviewed scientists, sooner deny that electricity exists!
Could I be right in suspecting a subconscious fear of losing the 'American Dream'? I read somewhere that almost all right wing Americans suffer from climate denier syndrome... just a thought... WHY?
16. Hi Mike,
there are no papers showing "veracity" of the current climate science.
A simple way to see that you are a hopelessly blinded Marxist ideological asshole is to see that you immediately start to attack American right-wingers like me, not noticing that I am Czech, living in the very middle of Europe.
That proves that you haven't bothered to read even the first line of this page - one with the name and location of the author. You don't want to read anything of it. You have decided to fight against things like "American Right Wing" and "American Dream" and no amount of rational arguments can ever stop you from that - you're as blinded a weapon as a member of Al Qaeda.
Nasty individuals like you have to be stopped by a physical force.
Cheers
LM | 2021-04-14 07:37:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4203750789165497, "perplexity": 1974.0792489467872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077336.28/warc/CC-MAIN-20210414064832-20210414094832-00006.warc.gz"} |
https://socratic.org/questions/how-many-moles-of-cu-are-in-10-0-grams-of-cu | # How many moles of Cu are in 10.0 grams of Cu?
(10.0*cancelg)/(63.55*cancelg*mol^-1 $=$ ?? $m o l$
In $63.55$ $g$ of copper metal there are $1$ $m o l$ of $C u$ atoms. We divide the mass by the molar mass to get the number of moles. | 2020-01-28 03:22:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6330166459083557, "perplexity": 629.1622021575445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251773463.72/warc/CC-MAIN-20200128030221-20200128060221-00259.warc.gz"} |
https://artofproblemsolving.com/wiki/index.php?title=1977_USAMO_Problems/Problem_5&oldid=70802 | # 1977 USAMO Problems/Problem 5
## Problem
If $a,b,c,d,e$ are positive numbers bounded by $p$ and $q$, i.e, if they lie in $[p,q], 0 < p$, prove that $$(a+b +c +d +e)\left(\frac{1}{a} +\frac {1}{b} +\frac{1}{c} + \frac{1}{d} +\frac{1}{e}\right) \le 25 + 6\left(\sqrt{\frac {p}{q}} - \sqrt {\frac{q}{p}}\right)^2$$ and determine when there is equality.
## Solution
By applying the Cauchy-Schwarz Inequality in the form $((\sqrt{a})^2 +(\sqrt{b})^2 +(\sqrt{c})^2 +(\sqrt{d})^2 +(\sqrt{e})^2) \left( \left( \dfrac{1}{(\sqrt{a})^2} \right) + \left( \dfrac{1}{(\sqrt{b})^2} \right) + \left( \dfrac{1}{(\sqrt{c})^2} \right) + \left( \dfrac{1}{(\sqrt{d})^2} \right) + \left( \dfrac{1}{(\sqrt{e})^2} \right) \right) \le \left( \dfrac{\sqrt{a}}{\sqrt{a}} + \dfrac{\sqrt{b}}{\sqrt{b}} + \dfrac{\sqrt{c}}{\sqrt{c}} + \dfrac{\sqrt{d}}{\sqrt{d}} + \dfrac{\sqrt{e}}{\sqrt{e}} \right)^2 = (5)^2 = 25$, we can easily reduce the given inequality to $0 \le 6\left(\sqrt{\frac {p}{q}} - \sqrt {\frac{q}{p}}\right)^2$, which is true by the Trivial Inequality. We see that equality is achieved when $\left(\sqrt{\frac {p}{q}} - \sqrt {\frac{q}{p}}\right)^2 = 0 \rightarrow \left(\sqrt{\frac {p}{q}} - \sqrt {\frac{q}{p}}\right)=0 \rightarrow \sqrt{\frac {p}{q}}=\sqrt {\frac{q}{p}}$, which is achieved when $p=q$.
## See Also
1977 USAMO (Problems • Resources) Preceded byProblem 4 Followed byLast Question 1 • 2 • 3 • 4 • 5 All USAMO Problems and Solutions
The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.
Invalid username
Login to AoPS | 2021-03-01 17:19:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8984134197235107, "perplexity": 829.7122231467475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362741.28/warc/CC-MAIN-20210301151825-20210301181825-00601.warc.gz"} |
https://physics.stackexchange.com/questions/205981/eigenvalue-for-the-creation-operator-for-a-coherent-state/205982 | # Eigenvalue for the creation operator for a coherent state [closed]
For a coherent state $$|\alpha\rangle=e^{-\frac{|\alpha|^{2}}{2}}\sum_{n}\frac{\alpha^{n}}{\sqrt{n!}}|n\rangle$$ I can't solve the eigenvalue problem for $\hat{a}^{\dagger}|\alpha\rangle$ where $\hat{a}^{\dagger}$ is the creation operator. I can only get this far
\begin{align} \hat{a}^{\dagger}|\alpha\rangle&=e^{-\frac{|\alpha|^{2}}{2}}\sum_{n}\frac{\alpha^{n}}{\sqrt{n!}}\hat{a}^{\dagger}|n\rangle\\ &=e^{-\frac{|\alpha|^{2}}{2}}\sum_{n}\frac{\alpha^{n}}{\sqrt{n!}}\sqrt{n+1}|n+1\rangle \end{align}
Ultimately, I want to calculate $\langle \alpha |a\hat{a}^{\dagger}|\alpha\rangle$, but I don't know $\hat{a}^{\dagger}|\alpha\rangle$.
## closed as off-topic by DanielSank, user10851, Danu, ACuriousMind♦, MartinSep 10 '15 at 12:06
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "Homework-like questions should ask about a specific physics concept and show some effort to work through the problem. We want our questions to be useful to the broader community, and to future users. See our meta site for more guidance on how to edit your question to make it better" – DanielSank, Community, Danu, ACuriousMind, Martin
If this question can be reworded to fit the rules in the help center, please edit the question.
• You know the result that a creation operator acts on an eigenstate $|n>$, then, just sum over all the results. – qfzklm Sep 9 '15 at 3:51
• I did. But then, I stuck from there afterward. – TBBT Sep 9 '15 at 3:55
• @qfzklm does the result simplify much? e.g. is the resulting sum a well-known one? – innisfree Sep 9 '15 at 4:02
• @TBBT what result do you expect to find? $|\alpha\rangle$ isn't an eigenfunction of the creation operator - you won't find something $\propto |\alpha\rangle$. Do you want to know whether the sum simplifies? – innisfree Sep 9 '15 at 4:11
• @TBBT try using commutation relations to get an $a |\alpha \rangle$? – zeldredge Sep 9 '15 at 4:38
A coherent state is, amongst other interesting things, an eigenstate of the annihilation operator. It is not an eigenstate of the creation operator; hence, I'm not sure this "eigenvalue problem" makes much sense.
This is easy to realize. You can quickly see that $\langle0|a^\dagger|\alpha\rangle=0$, whereas $\langle0|\alpha\rangle\neq0$.
If you really want to find $\langle \alpha | a a^\dagger|\alpha\rangle$ in e.g. $$\langle x^2 \rangle \propto \langle \alpha | (a+a^\dagger)(a+a^\dagger)|\alpha\rangle$$ you can commute the operators $a$ and $a^\dagger$ with the rule $[a,a^\dagger] = 1$, such that $$\langle \alpha | a a^\dagger|\alpha\rangle = \langle \alpha | a^\dagger a|\alpha\rangle + 1 = 1 + |\alpha|^2$$ You can also verify this the long way around by acting the the operators.
• You are correct. However, I can't still calculate, $\langle\alpha|aa^{\dagger}|\alpha\rangle$. I have to solve something in my homework that look like this,$\langle\alpha|aa|\alpha\rangle+\langle\alpha|a^{\dagger}a^{\dagger}|\alpha\rangle+\langle\alpha|aa^{\dagger}|\alpha\rangle+\langle\alpha|a^{\dagger}a|\alpha\rangle$ – TBBT Sep 9 '15 at 4:30
• I can solve three out of four terms above, but not the $\langle\alpha|aa^{\dagger}|\alpha\rangle$ – TBBT Sep 9 '15 at 4:35
• I think that's quite straight-forward - just commute the operators. See my edited answer. – innisfree Sep 9 '15 at 4:43
• Wow, can't believe I couldn't see that. Thank you so much! – TBBT Sep 9 '15 at 6:12
To add to Innisfree's correct answer, I'd like to emphasize something that the OP does not seem to know and that is that the creation operator has no eigenvectors (nor, therefore, eigenvalues). It is easy to see this: write a general state as a row vector $(\psi_0,\,\psi_1,\,\cdots)$ of superposition weights for the number states $|0\rangle,\,|1\rangle,\,\cdots$ and in this notation, our eigenvalue equation (in $\lambda$) for $a^\dagger$ is:
$$a^\dagger (\psi_0,\,\psi_1,\,\cdots) =(0,\,\psi_0,\,\sqrt{2} \psi_1,\,\sqrt{3} \psi_2,\,\cdots) = \lambda (\psi_0,\,\psi_1,\,\cdots)$$
whence we get $\lambda \,\psi_n=\sqrt{n}\psi_{n-1}$ and $\lambda\,\psi_0=0$. If $\lambda = 0$ it follows straight away that $\psi_n=0\,\forall\,n\in\mathbb{N}$. If $\lambda\neq 0$, then $\psi_0=0$, whence (by induction through $\psi_n = \sqrt{n}\psi_{n-1}/\lambda$) $\psi_n=0\,\forall\,n\in\mathbb{N}$. There is therefore no normalizable superposition of number states that is an eigenvector for $a^\dagger$. It's therefore not surprising that the OP was having difficulty!
• This is also in my answer ;) it is the same as the comment "You can quickly see..." – innisfree Sep 9 '15 at 5:27
• @innisfree Ah, sorry, I missed that. I just wanted to emphasize a bit more to the OP (because I can recall at one stage assuming that the creation operator would have an eigenvector). Doubtless he/she will remember the nonexistence of creation operator eigenvectors after today. – WetSavannaAnimal Sep 9 '15 at 5:50
• No worries, it is helpful that you stressed it a bit more than I did. – innisfree Sep 9 '15 at 7:23
• "you can quickly see" is good, but sometimes it's helpful to have things spelled out fully. I found this answer useful. – Dom Sep 9 '15 at 10:16
Using the definition of the creation operator, $a^\dagger = c(m\omega \hat x - i\hat p)$ where $c$ is a constant, and $\hat p = -i\hbar\partial_x$, you can write the eigenvalue problem in the position representation as $$(m\omega x - \hbar\partial_x)\psi = \alpha\psi.$$ You can solve this differential equation to find $$\psi = C\exp(m\omega x^2/\hbar - \alpha x/\hbar)$$ which is clearly not normalizable. Hence the creation operator has no normalizable eigenstates. | 2019-11-20 21:12:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9020422101020813, "perplexity": 586.2857537534215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670601.75/warc/CC-MAIN-20191120185646-20191120213646-00428.warc.gz"} |
https://docs.eyesopen.com/toolkits/csharp/graphsimtk/OEGraphSimClasses/OEFingerPrint.html | OEFingerPrint¶
class OEFingerPrint : public OESystem::OEBitVector
OEFingerPrint class is used to encode molecular properties. An OEFingerPrint object is a typed bitvector (OEBitVector). The type of an OEFingerPrint object is set when it is initialized.
The following methods are publicly inherited from OEBitVector:
Constructors¶
OEFingerPrint()
Default constructor that creates an OEFingerPrint object with an uninitialized type, zero pointer (OEFPTypeBase).
OEFingerPrint(const OEFingerPrint &rhs)
Copy constructor.
operator=¶
OEFingerPrint &operator=(const OEFingerPrint &rhs)
Assignment operator that copies the data of the rhs OEFingerPrint object into the left-hand side OEFingerPrint object.
operator==¶
bool operator==(const OEFingerPrint& rhs) const
Two OEFingerPrint objects are considered to be equivalent only if they have the same fingerprint type (OEFPTypeBase) and have identical bit-vectors (OEBitVector).
operator!=¶
bool operator!=(const OEFingerPrint& rhs) const
Two OEFingerPrint objects are considered to be different if either they have different fingerprint types (OEFPTypeBase) or they have different bit-vectors (OEBitVector).
operator bool¶
bool IsValid()
Returns whether the OEFingerPrint has been initialized, i.e., has a valid type.
GetFPTypeBase¶
const OEFPTypeBase *GetFPTypeBase() const
Returns a const pointer to the fingerprint type (OEFPTypeBase) of the OEFingerPrint object. This method will return 0 if the OEFingerPrint object has not been initialized.
SetFPTypeBase¶
void SetFPTypeBase(const OEFPTypeBase *t)
Sets the fingerprint type of a OEFingerPrint object.
The following functions set the type of the fingerprint when an OEFingerPrint object is initialized:
Warning
Use this method with caution.
The type of a OEFingerPrint object (that is an OEFPTypeBase) encodes how the fingerprint is generated. Changing this type can mean that the information represented by bitvector of the fingerprint will be misinterpreted. | 2019-02-20 14:36:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17527376115322113, "perplexity": 10127.338685527351}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247495001.53/warc/CC-MAIN-20190220125916-20190220151916-00295.warc.gz"} |
https://socratic.org/questions/how-do-you-find-the-next-two-terms-of-the-geometric-sequence-405-135-45 | # How do you find the next two terms of the geometric sequence 405, 135, 45,...?
Nov 4, 2016
The GP is $405 , 135 , 45 , 15 , 5$
#### Explanation:
GP: " "405, 135, 45, ?. ?
You need to know what value the first term has been multiplied by to give the second term.
Is the same value multiplied by ${T}_{2}$ to give ${T}_{3}$?
This value is called the common ratio ($r$).
${T}_{1} \times r \rightarrow {T}_{2} \text{ and } {T}_{2} \times r \rightarrow {T}_{3}$
$r = {T}_{2} / {T}_{1} \text{ and } r = {T}_{3} / {T}_{2}$
$r = \frac{135}{405} = \frac{1}{3} \text{ and } r = \frac{45}{135} = \frac{1}{3}$
[Note: $\times \frac{1}{3}$ is the same as $\div 3$, but we always multiply in a G.P.]
${T}_{4} = 45 \times \frac{1}{3} = 15$
${T}_{5} = 15 \times \frac{1}{3} = 5$
The GP is $405 , 135 , 45 , 15 , 5$ | 2020-09-25 16:37:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 13, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.541140615940094, "perplexity": 690.838092233791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400227524.63/warc/CC-MAIN-20200925150904-20200925180904-00014.warc.gz"} |
https://www.sparrho.com/item/black-holes-submerged-in-anti-de-sitter-space/1cb3e79/ | # Black holes submerged in Anti-de Sitter space
Research paper by Hideki Ishihara, Satsuki Matsuno, Haruki Nakamura
Indexed on: 26 May '18Published on: 26 May '18Published in: arXiv - General Relativity and Quantum Cosmology
#### Abstract
Suppose a one-dimensional isometry group acts on a space, we can consider a submergion induced by the isometry, namely we obtain an orbit space by identification of points on the orbit of the group action. We study the causal structure of the orbit space for Anti-de Sitter space (AdS) explicitely. In the case of AdS\$_3\$, we found a variety of black hole structure, and in the case of AdS\$_5\$, we found a static four-dimensional black hole, and a spacetime which has two-dimensional black hole as a submanifold. | 2021-01-25 20:38:03 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8688087463378906, "perplexity": 1422.7152059430593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703644033.96/warc/CC-MAIN-20210125185643-20210125215643-00354.warc.gz"} |
https://codegolf.stackexchange.com/questions/199118/what-does-the-shortest-pseudo-random-algorithm-look-like | # What does the shortest pseudo-random algorithm look like? [closed]
After seeing some impossibly short solutions in this SE, I began to wonder what code someone could come up with to build a pseudo-random number generator. Now given that I am new to programming, I have absolutely no idea what such generator should look like, or what language to write them in. But that did not stop me from trying some goofy things like writing PRNG, listing primes or prime tests.
So here's the deal, the challenge is to write a pseudo-random number generator that does the following things:
1.Accept a seed in the form of string, number, bits, or whatever, and generate a pseudo-random sequence based on that string. The initial seed should default to 0 if nothing is provided.
2.Accept some input N and generate a pseudo random sequence of length N. Straight forward enough. You can choose to name this whatever you want, or not at all.
3.Be deterministic, the same seed should generate the same sequence each time.
4.I have absolutely no idea how this is tested but the results should be reasonably random. So the results from your generator should be close to uniform distribution within your arbitrary sample space.
5.You can choose any mathematical formula/operation for your pseudo random generation, but it should update its internal state in a deterministic way and use that new internal state for the generation of another number.
You may not outsource your random number generation to any source, be it built in or online. Every variable update/ operation should be explicitly ordered. (things like random.randint() is banned, sorry I only know python)
I can't come up with some inventive way of scoring, so shortest program wins.
• define "reasonably random". 1234567890123456... looks okay to me. – John Dvorak Feb 7 at 9:57
• That's too trivial. Just take a bad RNG algorithm, like Lehmer generator with small constants, and implement it. The only way it would be interesting, is if you clearly set up a threshold for a qualifying level of randomness or a period. – Andriy Makukha Feb 7 at 10:01
• I don't think a uniform distribution is a good criteria for randomness.. I heard that humans are bad at random for this reason, they tend to make distribution uniform and avoid repetitions, where random should do the contrary to a statical extend – Kaddath Feb 7 at 10:14
• For uniformness I guess f(seed) = seed++ works perfectly if seed is allowed to overflow from 2^64-1 to 0... – my pronoun is monicareinstate Feb 7 at 10:20
• Python 3, 15 bytes lambda x,s=0: 4 it is random, I swear – RGS Feb 7 at 10:45 | 2020-03-29 23:09:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4106053411960602, "perplexity": 1227.7870836595825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496227.25/warc/CC-MAIN-20200329201741-20200329231741-00326.warc.gz"} |
http://math.stackexchange.com/tags/dimension-theory/hot | # Tag Info
16
Yours is a very interesting and subtle question, which often generates confusion. First let us give a name to the property you are interested in: a ring $A$ will be said to satisfy (DIM) if for all $\mathfrak p \in \operatorname{Spec}(A)$ we have $$\operatorname{height}(\mathfrak p) +\dim A/\mathfrak p=\dim(A) \quad \quad (\text{DIM})$$ The main ...
14
Quoting from Abbot's book, "Understanding Analysis": There is a sensible agreement that a point has dimension zero, a line segment has dimension one, a square has dimension two, and a cube has dimension three. Without attempting a formal definition of dimension (of which there are several), we can nevertheless get a sense of how one might be defined by ...
13
If the person is in a Möbius strip, then it seems we are assuming he is $2$-dimensional. Suppose he has with him two identical circles split into sectors of $120^{\circ}$, and each sector is colored a different color. Notice being $2$-dimensional, he can rotate this circle but not reflect it, so the two circles are identical up to a rotation. Now, let him ...
9
I don't know if this is on Cover's list, but maybe it should be: For $n=2$ and $3$, any tiling of ${\mathbb R}^n$ by unit $n$-cubes has two with a complete facet in common. But it's not true for $n \ge 10$: see http://arxiv.org/pdf/math.MG/9210222.pdf
9
The most basic surprise, in my opinion, is that the ratio of the volume of the unit sphere to the volume of the cube circumscribing that sphere tends to 0 as the dimension of the space tends to $\infty$. In other words, a high-dimensional sphere takes up almost no space in the cube that circumscribes it. See pp.4--5 in ...
9
Negative dimension is actually much easier to talk about than complex dimension. Super vector spaces are a natural collection of objects that can have negative dimension; given a super vector space $(V_0, V_1)$ we can define its dimension to be $\dim V_0 - \dim V_1$, and this definition has many nice properties; see this blog post, for example. More ...
9
Your ideal is generated by binomials, so one can be smart about it. There is an algebra map $\mathbb C[x_1,x_2,x_3,x_4]\to \mathbb C[s, t]$ such that $x_1\mapsto s^3$, $x_2\mapsto s^2t$, $x_3\mapsto st^2$ and $x_4\mapsto t^3$, and this map maps your ideal $\mathfrak p$ to zero, so it induces $\phi:\mathbb C[x_1,x_2,x_3,x_4]/\mathfrak p\to \mathbb C[s, t]$. ...
7
suppose $P_0, \ldots, P_n$ are $n+1$ polynomials, of degree less than $d$. Then by multiplying the $P_i$ among themselves up to $k$ times, you can build at least about $k^{n+1}/(n+1)!$ polynomials of the form $\prod P_i^{\alpha_i}$ of degree less than $dk$. But the dimension of the vector space of polynomials of degree less than $dk$ in $K[X_1,\ldots X_n]$ ...
7
This should never be true for a reasonable definition of dimension (for example the dimension of a manifold). A lower-dimensional thing should have measure zero in a higher-dimensional thing, so removing it shouldn't change the dimension of the higher-dimensional thing. The correct version of the "naive equation" is that the Cartesian product of an ...
7
There are formal ways to define dimension, indeed, lots of them, depending on the context. As already noted in a comment, your description of why a circle is one-dimensional is not really correct though. The one-dimensionality of a circle is not a function of the fact that a circle itself can be described by a single number (such as radius), but that to ...
6
There are many generalizations of the usual notion of dimension, and they are there to capture different properties. Having said that, the intuition behind the dimension is that it describes the number of degrees of freedom you have, e.g. A point on a line has one degree of freedom. A point on a plane has two degrees of freedom. A point of two-dimensional ...
5
Here, adapted from an example and a problem in Engelking and with lots of blanks filled in, is an example of a zero-dimensional Tikhonov space with a subspace $-$ in fact a closed subspace $-$ of dimension greater than $0$. The first step is to construct a zero-dimensional Tikhonov space $X$ that is not strongly zero-dimensional; this construction is ...
5
$\omega^\omega$ can be visualized, in what I think is a fairly nice way in a static 2D image featured on the wikipedia page for ordinal number: Also, if you're willing to allow dynamic visualizations, then Stephen Brooks's transfinite number line goes well past $\epsilon_0$ (to $\Gamma_0$), as well as providing a more linear (if colorful) look at ...
5
Hausdorff outer measure is defined for all sets, and then we use the definition of Caratheodory to restrict it to a subalgebra of "measurable" sets to get the Hausdorff measure. In $\mathbb R^n$, the $n$-dimensional Hausdorff outer measure is the same (up to a constant factor) as $n$-dimensional Lebesgue outer measure, so they have the same measurable sets ...
5
Big surprise: our brains evolved in a three-dimensional environment, and so that is what they are best suited for thinking about. It's easy to visualize because we literally see it all the time. Thinking in higher dimensions is harder because we have no (little?) direct experience with them, so there is not a clear prototype for most people to use as a ...
4
Let's stick to separable metric spaces; without separability the Hausdorff dimension is always infinite. The following theorem can be found in Dimension Theory by Hurewicz and Wallman: $\inf\{\dim_{\mathcal H} Y : Y \text{ homeomorphic to } X \} = \dim_{\mathcal T} X$ where $\dim_{\mathcal T}X$ is the topological dimension of $X$. Hence, the spaces ...
4
We expect a normally-distributed random variable to take values close to the mean, and in low dimensions it does. But in high dimensions, it does not. The volume of a thin hyperspherical shell increases so rapidly as its radius increases that even though the variable has greatest probability density near the mean, most of the probability mass is far from ...
4
Concentration of measure phenomena provide great examples of how our intuition based on low-dimensional space is unreliable in high-dimensions. Compare unit balls in the metric spaces $\mathbb R^n$ endowed with, resp. the Euclidean metric $L_2$, versus $L_1$, and $L_{\infty}$. The unit balls of $L_2$ are bounded by "round" spheres and are sandwiched ...
4
The first thing to say is that there are very few restrictions on a two-dimensional section of the Leech lattice. I will get to those. The jpeg below is such a section. The intersections of the green lines are lattice points, and you can see each green fundamental parallelogram. The blue hexagons are the Voronoi cells. I drew in a bunch of red segments of ...
4
You are correct. There are several ways to show that it is impossible, and it basically boils down to the fact that no open subset of ${\bf R}^n$ is homeomorphic to an open subset of ${\bf R}^m$ if $n<m$ (as $U\cap V$ would be in your case). You can assume without loss of generality that the sets in question are connected. One way to show it is to use ...
4
What is the rule or condition to be a 2D or 3D picture? This is a subtle question! In a certain sense, the picture is two-dimensional, since it's displayed on a computer screen. :) That's a trivially literal remark, yet not without mathematical content. A closely related question is, "Does there exist a physical object that looks like the picture from ...
4
Chapter 8 of Eisenbud has a short history of dimension in algebraic geometry, even giving axioms for a theory of dimension. The historical order seems to be transcendence degree (think meromorphic functions on a Riemann surface), Krull dimension, then Hilbert functions. In particular, Eisenbud mentions that, though one might suspect differently, the most ...
3
The question is too broad to provide specific examples. If you want a reference, I highly recommend Real-Time Rendering. Chapters that would interest you: Chapter 4, Transforms: Covers matrix transformations and operations, including object rotation. It also covers quaternions which are the usual alternatives to matrices. One of the advantages of ...
3
One notion of complex dimension that has been used extensively has to do with self-similar sets. A $t$-neighborhood (i.e. points within distance $t$) of such a set may have volume $v(t)$ bounded above and below by constant multiples of $t^d$, where $d$ is the dimension of the boundary and $t$ is small, but such that $t^{-d} v(t)$ is oscillatory and ...
3
Question 1: It is a theorem that $\mathrm{dim}\ A[x]=\mathrm{dim}\ A+1$ for any Noetherian ring $A$, where $\mathrm{dim}$ denotes Krull dimension. Thus $\mathrm{dim}\ \mathbb C[x_1,x_2,x_3,x_4]=4$, as $\mathrm{dim}\ \mathbb C=0$ trivially. The easiest way to compute the dimension of $R$ is to verify that P=\langle x_1x_3-x_2^2,x_2 x_4-x_3^2,x_1x_4-x_2 ...
3
The dimension is two. Note that the vectors $u=\left[ \begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \\ \end{array} \right]$ and $v= \left[ \begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \\ \end{array} \right]$ are in the null space of $A-I_4=\begin{bmatrix} 0 & 0 & 0 & -2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -1 ... 3 These definitions are not the same in general, if$M$is not f.g. Consider the module$\mathbb Q_p/\mathbb Z_p$over$\mathbb Z_p$. Its annihilator is$0$, so the first definition gives dimension$1$. On the other hand, its support is the closed point of Spec$\mathbb Z_p$, and so the second definition gives dimension$0$. If you are reading an article ... 3 Wikipedia has some useful information. In$\mathbb{R}^d$,$d$-dimensional Hausdorff measure is equal to Lebesgue measure (up to scaling). So once you have defined it for Borel sets, you can extend it to Lebesgue-measurable sets in the same way: any Lebesgue-measurable set is of the form$A = B \cup C$where$B$is Borel and$C\$ is Lebesgue measurable with ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2014-03-17 08:50:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9053452610969543, "perplexity": 228.85956564431314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678705051/warc/CC-MAIN-20140313024505-00074-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/244879/am-i-properly-inserting-legends-in-groupplots-labels-multiply-defined?noredirect=1 | # Am I properly inserting legends in groupplots? Labels “multiply defined”
I have a few places in my document where I use groupplot to insert several figures, but I only want one legend as it's common to them all. I've done this using legend to name=foo and then, after the figure, \ref{foo}
The whole thing looks like this (which is not an MWE)
\begin{figure}
\centering
\begin{tikzpicture}
\begin{groupplot}[
legend columns=-1,
legend entries={{\tiny ++Cost},{\tiny ++FTE},{\tiny ++Resources},{\tiny Hold All},{\tiny Random},{\tiny Come and Go}},
legend to name=CombinedLegendAlpha2,
group style={
group size=3 by 1,
xlabels at=edge bottom,
ylabels at=edge left
},
legend style={draw=none},
legend style={at={(0.98,0.825)}},
xlabel = {\footnotesize $\alpha_{++}$},
ylabel = {\footnotesize Avg Portfolio Value},
]
\nextgroupplot[title={\scriptsize Empirical CDF},
y tick label style={
font=\tiny,
/pgf/number format/.cd,
fixed,
fixed zerofill,
precision=0,
/tikz/.cd
},
footnotesize,
x tick label style={
font=\tiny,
/pgf/number format/.cd,
fixed,
fixed zerofill,
precision=1,
/tikz/.cd
}]
\addplot+[black, mark=o,line join=round, mark repeat=10] table[col sep=comma, y=PlusPlusCost, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=x,line join=round, mark repeat=10] table[col sep=comma, y=PlusPlusFTE, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=|,line join=round, mark repeat=10] table[col sep=comma, y=PlusPlusResources, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=square,line join=round, mark repeat=10] table[col sep=comma, y=HoldAll, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=star,line join=round, mark repeat=10] table[col sep=comma, y=Random, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=otimes,line join=round, mark repeat=10] table[col sep=comma, y=ComeAndGo, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\nextgroupplot[title={\scriptsize Triangular CDF},
y tick label style={
font=\tiny,
/pgf/number format/.cd,
fixed,
fixed zerofill,
precision=0,
/tikz/.cd
},
footnotesize,
x tick label style={
font=\tiny,
/pgf/number format/.cd,
fixed,
fixed zerofill,
precision=0,
/tikz/.cd
}]
\addplot+[black, mark=o,line join=round, mark repeat=10] table[col sep=comma, y=PlusPlusCostTri, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=x,line join=round, mark repeat=10] table[col sep=comma, y=PlusPlusFTETri, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=|,line join=round, mark repeat=10] table[col sep=comma, y=PlusPlusResourcesTri, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=square,line join=round, mark repeat=10] table[col sep=comma, y=HoldAll, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=star,line join=round, mark repeat=10] table[col sep=comma, y=Random, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=otimes,line join=round, mark repeat=10] table[col sep=comma, y=ComeAndGo, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\nextgroupplot[title={\scriptsize LN/Exponential CDF},
y tick label style={
font=\tiny,
/pgf/number format/.cd,
fixed,
fixed zerofill,
precision=0,
/tikz/.cd
},
footnotesize,
x tick label style={
font=\tiny,
/pgf/number format/.cd,
fixed,
fixed zerofill,
precision=0,
/tikz/.cd
}]
\addplot+[black, mark=o,line join=round, mark repeat=10] table[col sep=comma, y=PlusPlusCostLN, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=x,line join=round, mark repeat=10] table[col sep=comma, y=PlusPlusFTEExpo, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=|,line join=round, mark repeat=10] table[col sep=comma, y=PlusPlusResourcesExpo, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=square,line join=round, mark repeat=10] table[col sep=comma, y=HoldAll, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=star,line join=round, mark repeat=10] table[col sep=comma, y=Random, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\addplot+[black, mark=otimes,line join=round, mark repeat=10] table[col sep=comma, y=ComeAndGo, x=CostAlpha]{PlusPlusMethodsAlpha.csv};
\end{groupplot}
\end{tikzpicture}
\ref{CombinedLegendAlpha2}
\caption{Triage++ Performance}
\label{PlusPlusAlpha}
\end{figure}
This works fine, and produces the output I want. For instance, the above code produces
When I build the file though, I get warnings that my labels (in this case, CombinedLegendAlpha2) are multiply defined. I know these are warnings, not errors, but they still bug me. Is there a more appropriate way to accomplish this? I'm very much a Latex novice, so it would not surprise me at all if my solution is hackish....
• May be your are using the name CombinedLegendAlpha2 at more than one place. Just search. – user11232 May 14 '15 at 6:32
• Nope, CombinedLegendAlpha2 appears two times in the whole document: once in the legend to name and then in the \ref. I get warnings on every legend where I use legend to name – jerH May 14 '15 at 11:57
Indeed your legend to name is defined multiple times.
Whatever you put in the \begin{groupplot}[#1] as #1 will be put on all axis plots.
Doing:
\begin{groupplot}[/tikz/font=\small,...]
\nextgroupplot
...
\nextgroupplot
...
\end{groupplot}
is thus equivalent to:
\begin{axis}[/tikz/font=\small,...]
...
\end{axis}
\begin{axis}[/tikz/font=\small,at=<below>,...]
...
\end{axis}
\begin{axis}[/tikz/font=\small,at=<below>,...]
...
\end{axis}
more or less.
Whatever you put in groupplot is global. Instead move the once used keys into their respective \nextgroupplot[#1] to limit their extend. The top keys passed to the groupplot environment is intended much similarly as the scope environment.
• Your answer makes sense, but I'm surprised that this is the case for legends, where it makes sense to define the total legend in the total groupplot options instead of the nextgroupplot options – darthbith May 18 '15 at 19:28
• To be frank, I am quite surprised you do not have 3 legend boxes. Furthermore I just do not get how you get the legend box below the plots with legend style={at={(0.98,0.825)}}. Nevertheless it might not always make sense in grouped plots to have a common legend. It can be used for that, but it is general to allow different legends in each \nextgroupplot. So I would not say that common legends should be a general feature of the grouped plots. I always encourage users to supply legends in the plot they wish the legend box shown, this is a simple reflection of the abstraction allowed. – zeroth May 18 '15 at 20:22
• However, your example is far from a MWE... So I will not try to decipher that ;) – zeroth May 18 '15 at 20:27
• It is not my example :-) But I have seen the same warning when using the \label & \ref facilities to plot a common legend for a groupplot... I agree it is useful to have the ability to specify each legend individually, but it would be nice not to give a warning when having a common legend is desired. That said, I don't know how everything is put together, and maybe its not possible – darthbith May 18 '15 at 20:30
• There are a bunch of corner cases to take into account. Again I would advice you to move it down to the respective ≠xtgroupplot\nextgroupplot or do, A.sty≤={≤≥ndstuff...}A/.style={legend stuff...} and put AA in the ≠xtgroupplot\nextgroupplot. In that way it seems common. (I know, just an idea if you want the stuff collected). – zeroth May 18 '15 at 20:41 | 2019-06-26 18:46:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6931032538414001, "perplexity": 10472.063874686823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000414.26/warc/CC-MAIN-20190626174622-20190626200622-00055.warc.gz"} |
https://matthewhr.wordpress.com/2013/01/25/compactness-and-infinite-dimensional-hilbert-spaces/ | ## Compactness and Infinite-Dimensional Hilbert Spaces
It is well-known that any finite-dimensional normed space $\mathbb{X}$ is isomorphic to $\mathbb{R}^{n}$, where $n=\dim(\mathbb{X})$. Therefore, the Heine-Borel theorem tells us that closed and bounded subsets of $\mathbb{X}$ are compact. We can say that $\mathbb{X}$ has the Heine-Borel property. Perhaps less well-known to beginning students is that an infinite-dimensional normed space $\mathbb{X}$ need not have the Heine-Borel property. Taking our cue from Big Rudin Chapter 4 Exercise 7, we prove in this post that no infinite-dimensional Hilbert space has the Heine-Borel property.
Let $\mathbb{H}$ be an infinite-dimensional Hilbert space, and let $E:=\left\{e_{n}\right\}_{n=1}^{\infty}$ be an orthonormal system. Then $E$ is closed and bounded but not compact (thus, disproving the Heine-Borel theorem in infinite-dimensions).
Proof. Clearly, $E$ is bounded by the constant $1$. To see that $E$ is closed, suppose $\left\{e_{n_{k}}\right\}_{k=1}^{\infty}$ is some sequence in $E$ such that $e_{n_{k}}\rightarrow x\in\mathbb{H}$. I claim that $e_{n_{k}}=e_{N}$ for all $k$ sufficiently large. Otherwise, it follows from Bessel’s inequality that
$\displaystyle 0=\lim_{k\rightarrow\infty}\langle{x,e_{n_{k}}}\rangle=\left\|x\right\|^{2}=\lim_{k\rightarrow\infty}\left\|e_{n_{k}}\right\|^{2}=1$,
which is a contradiction. This proves that $x\in E$. To see that $E$ is not compact, observe that the sequence $(e_{n})_{n=1}^{\infty}$ is not Cauchy since by orthonormality,
$\displaystyle\left\|e_{n}-e_{m}\right\|^{2}=\langle{e_{n}-e_{m},e_{n}-e_{m}}\rangle=\langle{e_{n},e_{n}}\rangle+\langle{e_{m},e_{m}}\rangle=2$,
so that $(e_{n})$ contains no convergent subsequences. $\Box$
Let $(\delta_{n})_{n=1}^{\infty}$ be a sequence of positive numbers, and let $S$ be the closure of the set
$\displaystyle\left\{x\in\mathbb{H}:x=\sum_{n=1}^{N}c_{n}e_{n}, \left|c_{n}\right|\leq\delta_{n}\right\}$
$S$ is compact if and only if $\sum_{n=1}^{\infty}\delta_{n}^{2}<\delta$.
Proof. If $S$ is compact, then $S$ is bounded and therefore there exists some positive constant $M > 0$ such that
$\displaystyle\left\|\sum_{n=1}^{N}\delta_{n}e_{n}\right\|^{2}=\sum_{n=1}^{N}\left|\delta_{n}\right|^{2}\leq M,\indent\forall N\in\mathbb{Z}^{\geq 1}$,
which implies that $\sum_{n=1}^{\infty}\delta_{n}^{2}\leq M$.
Now suppose that $\sum_{n=1}^{\infty}\delta_{n}^{2}<\infty$. Let $(x_{n})_{n=1}^{\infty}$ be some sequence in $S$. I claim that $\left|\langle{x_{n},e_{k}}\rangle\right|\leq\delta_{k}$, for each $k,n\in\mathbb{Z}^{\geq 1}$. Indeed, fix $n\in\mathbb{Z}^{\geq 1}$. By definition of $S$, there exists a finite linear combination $\sum_{k=1}^{N}c_{k}e_{k}$ such that $\left|c_{k}\right|\leq\delta_{k}$ and $\left\|\sum_{j=1}^{N}c_{j}e_{j}-x_{n}\right\|<\epsilon$. By the triangle inequality and Cauchy-Schwarz,
$\displaystyle\left|\langle{x_{n},e_{k}}\rangle\right|\leq\left|\left\langle{x_{n}-\sum_{j=1}^{N}c_{j}e_{j},e_{k}}\right\rangle\right|+\left|\left\langle{\sum_{j=1}^{N}c_{j}e_{j},e_{k}}\right\rangle\right|<\epsilon+\delta_{k}$
Since $\epsilon>0$ was arbitrary, we see that $\left|\langle{x_{n},e_{k}}\rangle\right|\leq\delta_{k}$, $\left\|x_{n}\right\|\leq\sum_{k=1}^{\infty}\delta_{k}^{2}$, and $(\langle{x_{n},e_{k}}\rangle)_{n=1}^{\infty}$ is a bounded sequence in $\mathbb{C}$, for each $k\geq 1$. By the Bolzano-Weierstrass theorem, there exists a covergent subsequence $(\langle{x_{n,1},e_{1}}\rangle)_{n=1}^{\infty}$. By induction, we can construct nested subsequences $(x_{n,1})_{n=1}^{\infty}\supset\cdots\supset (x_{n,k})_{n=1}^{\infty}$ such that
$\displaystyle(\langle{x_{n,j},e_{j}}\rangle)_{n=1}^{\infty}$ converges to limit $\displaystyle\alpha_{j}$, for each $\displaystyle 1\leq j\leq k$
Set $y_{n}:=x_{n,n}$. Since $\sum_{k=1}^{\infty}\left|\alpha_{k}\right|^{2}\leq\sum_{k=1}^{\infty}\delta_{k}^{2}$, the series $x:=\sum_{k=1}^{\infty}\alpha_{k}e_{k}$ converges in $\mathbb{H}$. I claim that the sequence $(y_{n})_{n=1}^{\infty}$ converges to $x$. Indeed, by uniform convergence we can evaluate the limit term-by-term to obtain
$\displaystyle\lim_{n\rightarrow\infty}\sum_{k=1}^{\infty}\langle{y_{n},e_{k}}\rangle e_{k}=\sum_{k=1}^{\infty}\left(\lim_{n\rightarrow\infty}\langle{y_{n},e_{k}}\rangle \right)e_{k}=\sum_{k=1}^{\infty}\alpha_{k}e_{k}=x$,
where we use the continuity of the inner product to obtain the penultimate equality. $\Box$
Lastly, we remark that $\mathbb{H}$ is not a locally compact space by Riesz’s lemma. | 2017-09-23 11:20:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 65, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9806745052337646, "perplexity": 44.60790179147511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689624.87/warc/CC-MAIN-20170923104407-20170923124407-00005.warc.gz"} |
http://crypto.stackexchange.com/questions?page=26&sort=newest | All Questions
320 views
How do I create a short signature? (e.g. less than 100 bytes)
I want to put information into a QR code and have it signed so that the authenticity of the information can be verified. But QR codes have limited storage capacity and when I used e.g. gpg I get ...
284 views
What is h in this RSA variant?
I am trying to implement a proposed improved algorithm of RSA . Here the author has increased the number of exponents. However I am unable to understand what $h$ is in the Key generation step. Can ...
405 views
Decrypt AES-128 with key file but missing IV? [duplicate]
I want to decrypt a file that has been encrypted using AES-128 in CBC mode using OpenSSL. I got the “.key” file – which is 32 digits – but when I try to decrypt with OpenSSL, the program asks me for ...
143 views
Encrypting the same message using different schemes
$E_1$ and $E_2$ are IND-CPA secure encryption schemes. $E$ is defined as: $k_1,k_2 \leftarrow K_1 \times K_2$ . $E_{k_1,k_2}(m) \leftarrow E_{1,k_1}(m)||E_{2,k_2}(m)$. Hope the notations are in an ...
112 views
55 views
Expand 1-n Oblivious Transfer to retreave only an item I, for which permissions exist
To my understanding in a 1-n Oblivious Transfer the Sender has some Values $X_1...X_N$ and the receiver wants to retreave some $X_I$, with $1 \le I \le N$. Can one now somehow guarantee, that the ...
57 views
Operation sequence authentication shared storage
Let's say there is a shared storage: USB flash drive, external HDD, or whatever you like. I'll refer to it as disk. Also, there are multiple parties, let's say Alice and Bob (may be thousands of them) ...
69 views
Testing hardware random number generators? [closed]
I would like to know how to test hardware random number generators. What techniques, tools or tricks to solve the problem ? Any practical difficulties, implementation complexities etc.
345 views
TLS/SSL's usage of Non-Ephemeral DH vs DHE
These questions revolve around DH and ECDH vs DHE and ECDHE. Specifically within the context of TLS/SSL. There are three ...
51 views
Re-encryption mixnet
Can an Elgamal-based re-encryption mechanism be used in a free route mixnet topology? Until 2006 it was not possible, but I couldn't find the later research results or achievements. Can anyone suggest ...
59 views
What is the Geometric Generalised T' Method?
This page by Nicolas T. Courtois mentions Geometric Generalised T' Method. It is described as an advanced geometric algorithm, never published, for finding extra linearly independent equations at ...
139 views
Using the same private key for two ECC key pairs
Let $(d_1,Q_1)$ and $(d_2,Q_2)$ be ECC key pairs over two different elliptic curves (say NIST P-224 and NIST P-256). According to the Elliptic Curve Discrete Logarithm Problem (ECDLP), if the private ...
218 views
How is the curve equation used in ECC?
I have a hard time learning exactly how the elliptic curve equation is used in the ECC. $$y^2 = x^3+ax+b$$ If someone knows and could explain to me in simple steps how this is done or a link to it ...
117 views
HKDF vs TLS PRF. Which of the one is better? [closed]
Which of the one (HKDF or TLS1.1+ PRF) is more secure and why?
205 views
Is it possible to weaken a bitcoin private key by “using” it elsewhere?
What are the increased possibilities (if any) of being able to crack a private key given the following: The associated bitcoin (ECDSA Secp256k1-based) public key is known. The private key has been ...
40 views
Assuming the alert protocol is encrypted AFTER a session has been established and the structure below, how does the client know whether to expect encrypted data or not? ...
198 views
Why the need to hash before signing small data?
I’ve got two questions: I’m doing the following: Data : uuid + int + nonce Signature: ECDSA(sha256(Data) ) To verify the signature: ...
69 views
Is Encryption without knowing the input directly possible at all?
I know the question is rather unusual, but let me clarify what I'm searching for: There is one person, lets call her $Alice$. $Alice$ has $n$ plaintexts $t_1..t_n$ and $n$ public keys $pk_1..pk_n$ of ...
84 views
What might be assumed about a PRF if the key has been chosen?
The defining feature of a PRF $f:\{0,1\}^k\times\{0,1\}^s\mapsto\{0,1\}^*$ is that, if the first parameter is selected at random, it should be indistinguishable from a function ...
266 views
Cryptographic system with double keys with reversible order
While reading Shamir, Rivest and Adleman's paper on "Mental Poker", I've met a mention of system such that $E_a(E_b(x)) = E_b(E_a(x))$, without however disclosing details on it, with $E_a(x)$ being ...
33 views
Is it possible to weaken a bitcoin private key by 'using' it elsewhere? [duplicate]
What are the increased possibilities (if any) of being able to crack a private key given the following: The associated bitcoin (ECDSA Secp256k1-based) public key is known. The private key has been ...
79 views
Generating shared secret random permutation
There are three blind card game players. Each player does not trust any other, even prejudicing other two players may not be blind, or there may be others in the room, peeking at their cards. In ...
76 views
Is it possible to encrypt data in such a way that it cannot be decrypted with the key used to encrypt it but can be decrypted with another key
I'm very new to cryptography and encryption so I apologize if the answer to this is well known, but I can't find anything in my research. The crux of the problem is the need to store data on an ...
46 views
ECDSA signature verifiable 1-way transformations
Alice signs a message $m$ with her private key, yielding a signature ($r$,$s$). I want to prove to someone else that I have this signature, but I don't want them to have the knowledge of what ...
72 views
Perl DES PCBC as protection against decryption/crypt analysis
Is error propagation in DES PCBC a good method to prevent decryption/crypt analysis by third parties? If I had a very large file encrypted with Perl's DES with the PCBC option, and then removed the ...
70 views
Implementing AugPAKE over ECC
The AugPAKE spec says it can be implemented over elliptic curves. This sounds very promising, but they don't actually back that claim. Can this really be achieved? If so, how would one go about ...
40 views
Can CBC encryption have separate keys for separate blocks of plain text?
Can we have separate key for separate blocks which we get while operating a plain text message using CBC encryption model? If yes, what are it's advantage? Please enlighten me.
82 views
Use cases for “online” authenticated encryption?
What is the the point of an "online" mode for an authenticated cipher? I understand what "online" means in this context. However, I have trouble coming up with applications that would benefit from ...
82 views
Times of nested algorithms in proofs of security
Proofs of security may be constructed such that an adversary $A$ is used to construct an adversary $A'$. The reduction/algorithm which uses $A$ has to perform a number of computations in order to ...
66 views
How to make a crypto-currency robust to 51% attacks?
For Example: In Bitcoin it is theoretically possible to double spend if you control > 50% of the computational power in the network. Also it is possible to apply "selfish mining" with far less than ...
69 views
Is it secure to choose d in a RSA key pair?
An RSA key pair consists of the private key $(n,d)$ and a public key $(n,e)$ such that $de \equiv 1 \bmod{\lambda(n)}$. Usually one chooses a small $e$ and computes $d$ by inverting it modulo ...
84 views
Any field in a PKI certificate where some text info can be stored?
I want to store 5-10 lines of Text Info in a PKI Certificate. All I want is that when using common tools like openssl command line or certutil from Microsoft, this text info should be displayed as is. ...
120 views
Why does Fortuna RNG use double SHA-256?
In the paper for Fortuna the authors say that you can use any good digest algorithm (obviously as long as its output is 256 bit) and then they recommend double SHA-256. Why? What's the benefit? What ...
62 views
Near preimages, applicable to Bitcoin?
Bitcoin mining relies on generating a smaller hash than the so-called target (a function of the so-called difficultly), thus is vulnerable to a truncated preimage attack (you just need to obtain a ...
123 views
Side-channel attacks against ECDH for Weierstrass normal form curves
I hear a lot about why Montgomery curves are used in ECC, and one reason is that the same algorithm can be used to do both point addition and doubling (this is not true for the Weierstrass normal ...
103 views
Do we need symmetric cryptosystems?
One question I had to answer in my crypto exam today was: Do we need symmetric cryptosystems? As it stands, that's probably a debatable question, so I'd like to reformulate this as: Are ...
94 views
Is substitution with random prefix codes secure? [closed]
The following paper interests me very much: D. W. Gillman, M. Mohtashemi, R. I. Rivest, On breaking a Huffman code. IEEE Trans. Inf. Th. Vol.42 (1996), pp 972-976. However, unfortunately due to my ...
101 views
m ∈ Zn \Z*n, RSA works but not secure
If you happen happen to have a message m ∈ Zn \ Z*n, RSA works but not secure. How likely is it going to happen? |n|=1024 bits |p| = 512 bits |q| = 512 bits.
44 views
Encryption of numeric value using playfair
I am studying in CSE and in my recent exam paper, A question was asked as: Construct a playfair cipher for Plaintext: semester5 and key:technology. Generally as we are taught yet, there is no ...
128 views
AES CBC MAC splicing attack
I'm trying to implement AES CBC MAC splicing attack in Python, the idea is: given a message M, its tag Tm (MAC(M) = T), a new message N and its tag Tn we build a message such as: M||N = (M1, ..., Mn, ...
193 views
Is there a time-space tradeoff attack for breaking symmetrical cryptos?
Is there any known techniques for using time-space tradeoff for speeding up symmetrical crypto breaking? Kind of like rainbow tables speed up breaking hashes by using huge precomputed tables. Is ...
116 views
Security proof of FO(Fujisaki-Okamoto) hybrid encryption
The proof of FO hybrid encryption is hard to understand. $\:$ Especially, how does the challenger respond to the decryption queries when the challenger can only have some encryption queries? Can ...
253 views
LFSR get output from characteristic polynomial?
Say you have a characteristic polynomial of an LFSR: $$f(X) = X^4 + X^3 + 1$$ How can I use this function f to get the output of the LFSR, given some initial state? Obviously I can create the LFSR ...
221 views
How useful is NIST's Randomness Beacon for cryptographic use?
NIST have just launched a new service called the NSANIST Randomness Beacon. It has been met with some initial skepticism. Perhaps the cryptography community would have used it before June 2013 when ... | 2014-10-26 03:49:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7457559108734131, "perplexity": 2364.4642613658207}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119654793.48/warc/CC-MAIN-20141024030054-00172-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://mathoverflow.net/questions/178262/derivability-properties-of-the-distance-function-in-a-finsler-manifold | # Derivability properties of the distance function in a Finsler Manifold
We know that, in a Riemannian manifold, the geodesic distance between a point O and a point P, when we fix O, is a function of P that is $C^\infty$ everywhere on a local neighborhood, except in P=O. If we consider the squarred function, we have $C^\infty$ everywhere.
(There is a topic on that subject)
Now, in Finsler's work in 1918, and in Riemann's habilitation thesis in 1854, the authors consider a Finsler's manifold of a particular kind. They consider that the metric is given by the $(2n)^{th}$ root of an homogeneous form of degree $2n$:
$ds (X, dx)=^{2n}\sqrt{g_{\mu_1, \cdots, \mu_{2n}}(X).dx^{\mu_1}\cdots dx^{\mu_{2n}}}$.
My question is:
What can we say, in this framework, about the differentiability properties of the geodesic distance, and of the $2n^{th}$ power of the geodesic distance?
You can see this easily in the case of a Finsler metric on $\mathbb{R}^n$ that is translation invariant, i.e., the Finsler function is $F(x,\dot x) = f(\dot x)$ where $f:\mathbb{R}^n\to\mathbb{R}$ has the properties that $f$ is homogeneous of degree $1$ and smooth away from $0\in\mathbb{R}^n$ and that $f^2$ is strictly convex away from the origin but not smooth there. Then the geodesic distance function from the origin in standard coordinates is just $d(0,x) = f(x)$.
• Thank you for your reference to Shen's book. Concerning your answer, I am not sure to understand. It seems to me that the square of the distance function is always differentiable (of derivative=0) because the local squarred Finsler metric is an homogeneous function of order 2, and because the exponential is $C^1$. Therefore, I thought that the question was pertinent only for second order derivatives (and higer orders). – Julien Bernard Aug 11 '14 at 14:33
• Thanks to you, now I see what is happening and I can precise my question. The reason why we take the squarred distance to get the smooth property, in the Riemannian case, is that the metric is of the type $F(X, dx)=\sqrt{g(X)_{\alpha\beta}dx^\alpha dx^beta}$. – Julien Bernard Aug 13 '14 at 6:46
• Now, if we take another classical Finsler's metric:$F(X, dx)=^4\sqrt{g(X)_{\alpha\beta\gamma\delta}dx^\alpha dx^\beta dx^\gamma dx^\delta}$, is it true that the fourth power of the geodesic distance is $C\infty$? – Julien Bernard Aug 13 '14 at 6:52 | 2020-04-02 22:27:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9369175434112549, "perplexity": 246.06675477763508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370508367.57/warc/CC-MAIN-20200402204908-20200402234908-00277.warc.gz"} |
https://physics.stackexchange.com/questions/144125/para-and-ortho-hydrogen-angular-momentum-values | # Para and ortho hydrogen angular momentum values
In Wikipedia, it is said that:
Orthohydrogen, with symmetric nuclear spin functions, can only have rotational wavefunctions that are antisymmetric with respect to permutation of the two protons. Conversely, parahydrogen with an antisymmetric nuclear spin function, can only have rotational wavefunctions that are symmetric with respect to permutation of the two protons.
Then it seems to suggest that the symmetric wavefunctions can only have even total orbital angular momentum; while the antisymmetric ones can only have odd.
However, this seems to me to be an argument entirely analogous to the one for two-electron wavefunctions in Helium for example. But there the total orbital angular momentum can be even or odd for both symmetric and antisymmetric wavefunctions.
Why is this?
You are right that the argumentation for the existence of ortho and para hydrogen is identical to that for the existence of ortho and para helium, that is, applying the permutation operator on a wave function for two identical particles should result in an eigenvalue of $+1$ or $-1$ if one considers bosons or fermions, respectively.
Whereas in helium you apply the permutation to the electrons, in H$_2$ you interchange the hydrogen nuclei. In addition to the electronic and spin degrees of freedom, the atoms also have rotational and vibrational degrees of freedom and corresponding wave functions. After interchange of the two H nuclei, the total wave function $\phi_\text{tot}=\phi_\text{el}\phi_\text{vib}\phi_\text{rot}\phi_\text{ns}$ should by antisymmetric as hydrogen has nuclear spin $I=\tfrac{1}{2}$.
The vibrational wave function for a diatomic molecule is always symmetric, while the symmetry of the rotational state is $(-1)^J$. The electronic ground state of hydrogen is $X\,^1\Sigma_g^+$. The symmetry of the electronic ground state is given by the product of $g$ and $+$ and therefore symmetric. As in helium, we can construct 3 symmetric and 1 antisymmetric spin functions.
See the table below.
$\phi_\text{tot}\qquad \phi_\text{el} \qquad \phi_\text{vib} \qquad \phi_\text{rot}\qquad \phi_\text{ns}\\ a\qquad\quad s \qquad \quad s \quad \quad(-1)^J \quad s\text{ or }a$
So if you pick the antisymmetric nuclear spin function (para), the only way to make the overall wave function antisymmetric is to choose $J$ even, while for the even spin functions (ortho), $J$ should be odd.
Note that in homonuclear molecules with zero nuclear spin, such as O$_2$, there exists only a symmetric nuclear spin function and half of the rotational states are missing. | 2020-04-02 23:11:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9205148220062256, "perplexity": 387.9976388184943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370508367.57/warc/CC-MAIN-20200402204908-20200402234908-00037.warc.gz"} |
https://lisp-stat.dev/docs/_print/ | Documentation
This section contains user documentation for Lisp-Stat. It is designed for technical users who wish to understand how to use Lisp-Stat to perform statistical analysis.
Other content such as marketing material, case studies, and community updates are in the About and Community pages.
1 - What is Lisp-Stat?
A statistical computing environment written in Common Lisp
Lisp-Stat is a domain specific language (DSL) for statistical analysis and machine learning. It is targeted at statistics practitioners with little or no experience in programming.
Relationship to XLISP-Stat
Although inspired by Tierney’s XLisp-Stat, this is a reboot in Common Lisp. XLisp-Stat code is unlikely to run except in trivial cases. Existing XLisp-Stat libraries can be ported with the assistance of the XLS-Compat system.
Core Systems
Lisp-Stat is composed of several systems (projects), each independently useful and brought together under the Lisp-Stat umbrella. Dependencies between systems have been minimised to the extent possible so you can use them individually without importing all of Lisp-Stat.
Data-Frame
A data frame is a data structure conceptually similar to a R data frame. It provides column-centric storage for data sets where each named column contains the values for one variable, and each row contains one set of observations. For data frames, we use the ‘tibble’ from the tidyverse as inspiration for functionality.
Data frames can contain values of any type. If desired, additional attributes, such as the numerical type, unit and other information may be attached to the variable for convenience or efficiency. For example you could specify a unit of length, say m/s (meters per second), to ensure that mathematical operations on that variable always produce lengths (though the unit may change).
DFIO
The Data Frame I/O system provides input and output operations for data frames. A data frame may be written to and read from files, strings or streams, including network streams or relational databases.
Select
Select is a facility for selecting portions of sequences or arrays. It provides:
• An API for making selections (elements selected by the Cartesian product of vectors of subscripts for each axis) of array-like objects. The most important function is select. Unless you want to define additional methods for select, this is pretty much all you need from this library.
• An extensible DSL for selecting a subset of valid subscripts. This is useful if, for example, you want to resolve column names in a data frame in your implementation of select, or implementing filtering based on row values.
Array Operations
This library is a collection of functions and macros for manipulating Common Lisp arrays and performing numerical calculations with them. The library provides shorthand codes for frequently used operations, displaced array functions, indexing, transformations, generation, permutation and reduction of columns. Array operations may also be applied to data frames, and data frames may be converted to/from arrays.
Special Functions
This library implements numerical special functions in Common Lisp with a focus on high accuracy double-float calculations. These functions are the basis for the statistical distributions functions, e.g. gamma, beta, etc.
Cephes
Cephes.cl is a CFFI wrapper over the Cephes Math Library, a high quality C implementation of statistical functions. We use this both for an accuracy check (Boost uses these to check its accuracy too), and to fill in the gaps where we don’t yet have common lisp implementations of these functions.
Numerical Utilities
Numerical Utilities is the base system that most others depend on. It is a collection of packages providing:
• num=, et. al. comparison operators for floats
• simple arithmetic functions, like sum and l2norm
• element-wise operations for arrays and vectors
• intervals
• special matrices and shorthand for their input
• sample statistics
• Chebyshev polynomials
• univariate root finding
• horner’s, simpson’s and other functions for numerical analysis
Lisp-Stat
This is the top level system that uses the other packages to create a statistical computing environment. It is also the location for the ‘unified’ interface, where the holes are plugged with third party packages. For example cl-mathstats contains functionality not yet in Lisp-Stat, however its architecture does not lend itself well to incorporation via an ASDF depends-on, so as we consolidate the libraries, missing functionality will be placed in the Lisp-Stat system. Eventually parts of numerical-utilities, especially the statistics functions, will be relocated here.
Acknowledgements
Tamas Papp was the original author of many of these libraries. Starting with relatively clean, working, code that solves real-world problems was a great start to the development of Lisp-Stat.
Get Started
Examples
R Users
2 - Getting Started
Install to plotting in five minutes
If you have a working installation of SBCL, Google Chrome and Quicklisp you can be up and running in 5 minutes.
Prerequisites
• Steel Bank Common Lisp (SBCL)
• MacOS, Linux or Windows 10+
• Quicklisp
• Chrome, Firefox or Edge
First load Lisp-Stat, Plot and sample data. We will use Quicklisp for this, which will both download the system if it isn’t already available, and compile and load it.
Lisp-Stat
(ql:quickload :lisp-stat)
Plotting
(ql:quickload :plot/vega)
Data
(data :vgcars)
View
Print the vgcars data-frame (showing the first 25 rows by default)
(print-data vgcars)
;; ORIGIN YEAR ACCELERATION WEIGHT_IN_LBS HORSEPOWER DISPLACEMENT CYLINDERS MILES_PER_GALLON NAME
;; USA 1970-01-01 12.0 3504 130 307.0 8 18.0 chevrolet chevelle malibu
;; USA 1970-01-01 11.5 3693 165 350.0 8 15.0 buick skylark 320
;; USA 1970-01-01 11.0 3436 150 318.0 8 18.0 plymouth satellite
;; USA 1970-01-01 12.0 3433 150 304.0 8 16.0 amc rebel sst
;; USA 1970-01-01 10.5 3449 140 302.0 8 17.0 ford torino
;; USA 1970-01-01 10.0 4341 198 429.0 8 15.0 ford galaxie 500
;; USA 1970-01-01 9.0 4354 220 454.0 8 14.0 chevrolet impala
;; USA 1970-01-01 8.5 4312 215 440.0 8 14.0 plymouth fury iii
;; USA 1970-01-01 10.0 4425 225 455.0 8 14.0 pontiac catalina
;; USA 1970-01-01 8.5 3850 190 390.0 8 15.0 amc ambassador dpl
;; Europe 1970-01-01 17.5 3090 115 133.0 4 NIL citroen ds-21 pallas
;; USA 1970-01-01 11.5 4142 165 350.0 8 NIL chevrolet chevelle concours (sw)
;; USA 1970-01-01 11.0 4034 153 351.0 8 NIL ford torino (sw)
;; USA 1970-01-01 10.5 4166 175 383.0 8 NIL plymouth satellite (sw)
;; USA 1970-01-01 11.0 3850 175 360.0 8 NIL amc rebel sst (sw)
;; USA 1970-01-01 10.0 3563 170 383.0 8 15.0 dodge challenger se
;; USA 1970-01-01 8.0 3609 160 340.0 8 14.0 plymouth 'cuda 340
;; USA 1970-01-01 8.0 3353 140 302.0 8 NIL ford mustang boss 302
;; USA 1970-01-01 9.5 3761 150 400.0 8 15.0 chevrolet monte carlo
;; USA 1970-01-01 10.0 3086 225 455.0 8 14.0 buick estate wagon (sw)
;; Japan 1970-01-01 15.0 2372 95 113.0 4 24.0 toyota corona mark ii
;; USA 1970-01-01 15.5 2833 95 198.0 6 22.0 plymouth duster
;; USA 1970-01-01 15.5 2774 97 199.0 6 18.0 amc hornet
;; USA 1970-01-01 16.0 2587 85 200.0 6 21.0 ford maverick ..
Show the last few rows:
(tail vgcars)
;; ORIGIN YEAR ACCELERATION WEIGHT_IN_LBS HORSEPOWER DISPLACEMENT CYLINDERS MILES_PER_GALLON NAME
;; USA 1982-01-01 17.3 2950 90 151 4 27 chevrolet camaro
;; USA 1982-01-01 15.6 2790 86 140 4 27 ford mustang gl
;; Europe 1982-01-01 24.6 2130 52 97 4 44 vw pickup
;; USA 1982-01-01 11.6 2295 84 135 4 32 dodge rampage
;; USA 1982-01-01 18.6 2625 79 120 4 28 ford ranger
;; USA 1982-01-01 19.4 2720 82 119 4 31 chevy s-10
Statistics
Look at a few statistics on the data set.
(mean vgcars:acceleration) ; => 15.5197
The summary command, that works in data frames or individual variables, summarises the variable. Below is a summary with some variables elided.
LS-USER> (summary vgcars)
"ORIGIN": 254 (63%) x "USA", 79 (19%) x "Japan", 73 (18%) x "Europe"
"YEAR": 61 (15%) x "1982-01-01", 40 (10%) x "1973-01-01", 36 (9%) x "1978-01-01", 35 (9%) x "1970-01-01", 34 (8%) x "1976-01-01", 30 (7%) x "1975-01-01", 29 (7%) x "1971-01-01", 29 (7%) x "1979-01-01", 29 (7%) x "1980-01-01", 28 (7%) x "1972-01-01", 28 (7%) x "1977-01-01", 27 (7%) x "1974-01-01"
ACCELERATION (1/4 mile time)
n: 406
missing: 0
min=8
q25=13.67
q50=15.45
mean=15.52
q75=17.17
max=24.80
WEIGHT-IN-LBS (Weight in lbs)
n: 406
missing: 0
min=1613
q25=2226
q50=2822.50
mean=2979.41
q75=3620
max=5140
...
Plot
Create a scatter plot specification comparing horsepower and miles per gallon:
(plot:plot
(vega:defplot hp-mpg
(:title "Horsepower vs. MPG"
:description "Horsepower vs miles per gallon for various cars"
:data ,vgcars
:mark :point
:encoding (:x (:field :horsepower :type :quantitative)
:y (:field :miles-per-gallon :type :quantitative)))))
2.1 - Installation
Installing and configuring Lisp-Stat
New to Lisp
If you are a Lisp newbie and want to get started as fast as possible, then Portacle is your best option. Portacle is a multi-platform IDE for Common Lisp that includes Emacs, SBCL, Git, Quicklisp, all configured and ready to use.
Users new to lisp should also consider going through the Lisp-Stat basic tutorial, which guides you step-by-step through the basics of working with Lisp as a statistics practitioner.
If you currently use emacs for other languages, you can configure emacs for Common Lisp.
Experienced with Lisp
We assume an experienced user will have their own Emacs and lisp implementation and will want to install according to their own tastes and setup. The repo links you need are below, or you can install with clpm or quicklisp.
Prerequisites
All that is needed is an ANSI Common Lisp implementation. Development is done with Genera and SBCL. Other platforms should work, but will not have been tested, nor can we offer support (maintaining & testing on multiple implementations requires more resources than the project has available). Note that CCL is not in good health, and there are a few numerical bugs that remain unfixed. A shame, as we really liked CCL.
Installation
The easiest way to install Lisp-Stat is via Quicklisp, a library manager for Common Lisp. It works with your existing Common Lisp implementation to download, install, and load any of over 1,500 libraries with a few simple commands.
Quicklisp is like a package manager in Linux. It can load packages from the local file system, or download them if required. If you have quicklisp installed, you can use:
(ql:quickload :lisp-stat)
Quicklisp is good at managing the project dependency retrieval, but most of the time we use ASDF because of its REPL integration. You only have to use Quicklisp once to get the dependencies, then use ASDF for day-to-day work.
You can install additional Lisp-Stat modules in the same way. For example to install the SQLDF module:
(ql:quickload :sqldf)
Once you have obtained Lisp-Stat via Quicklisp, you can load in one of two ways:
• ASDF
• Quicklisp
(asdf:load-system :lisp-stat)
If you are using emacs, you can use the slime shortcuts to load systems by typing , and then load-system in the mini-buffer. This is what the Lisp-Stat developers use most often, the shortcuts are a helpful part of the workflow.
(ql:quickload :lisp-stat)
Quicklisp uses the same ASDF command as above to load Lisp-Stat.
Updating Lisp-Stat
When a new release is announced, you can update via Quicklisp like so:
(ql:update-dist "lisp-stat")
IDEs
There are a couple of IDE’s to consider:
Emacs
Emacs, with the slime package is the most tested IDE and the one the authors use. If you are using one of the starter lisp packages mentioned in the getting started section, this will have been installed for you. Otherwise, slime/swank is available in quicklisp and clpm.
Jupyter Lab
Jupyter Lab and common-lisp-jupyter provide an environment similar to RStudio for working with data and performing analysis. The Lisp-Stat analytics examples use Jupyter Lab to illustrate worked examples based on the book, Introduction to the Practice of Statistics.
Visual Studio Code
This is a very popular IDE, with improving support for Common Lisp. If you already use this editor, it is worth investigating to see if the Lisp support is sufficient for you to perform an analysis.
Documentation
You can install the info manuals into the emacs help system and this allows searching and browsing from within the editing environment. To do this, use the install-info command. As an example, on my MS Windows 10 machine, with MSYS2/emacs installation:
install-info --add-once select.info /c/msys64/mingw64/share/info/dir
installs the select manual at the top level of the info tree. You can also install the common lisp hyperspec and browse documentation for the base Common Lisp system. This really is the best way to use documentation whilst programming Common Lisp and Lisp-Stat. See the emacs external documentation and “How do I install a piece of Texinfo documentation?” for more information on installing help files in emacs.
See getting help for information on how to access Info documentation as you code. This is the mechanism used by Lisp-Stat developers because you don’t have to leave the emacs editor to look up function documentation in a browser.
Initialization file
You can put customisations to your environment in either your implementation’s init file, or in a personal init file and load it from the implementation’s init file. For example, I keep my customisations in #P"~/ls-init.lisp" and load it from SBCL’s init file ~/.sbclrc in a Lisp-Stat initialisation section like this:
;;; Lisp-Stat
Settings in your personal lisp-stat init file override the system defaults.
Here’s an example ls-init.lisp file that loads some common R data sets:
(defparameter *default-datasets*
'("tooth-growth" "plant-growth" "usarrests" "iris" "mtcars")
"Data sets loaded as part of personal Lisp-Stat initialisation.
Available in every session.")
(map nil #'(lambda (x)
(data x))
*default-datasets*)
With this init file, you can immediately access the data sets in the *default-datasets* list defined above, e.g.:
(head iris)
;; X2 SEPAL-LENGTH SEPAL-WIDTH PETAL-LENGTH PETAL-WIDTH SPECIES
;; 0 1 5.1 3.5 1.4 0.2 setosa
;; 1 2 4.9 3.0 1.4 0.2 setosa
;; 2 3 4.7 3.2 1.3 0.2 setosa
;; 3 4 4.6 3.1 1.5 0.2 setosa
;; 4 5 5.0 3.6 1.4 0.2 setosa
;; 5 6 5.4 3.9 1.7 0.4 setosa
Try it out
(asdf:load-system :lisp-stat)
Change to the Lisp-Stat user package:
(in-package :ls-user)
(data :sg-weather)
Find the sample mean and median:
(mean sg-weather:precipitation) ;=> .0714
(median sg-weather:max-temps) ;=> 31.55
Get Started
Examples
R Users
2.2 - Site Organisation
How this manual is organised
This manual is organised by audience. The overview and getting started sections are applicable to all users. Other sections are focused on statistical practitioners, developers or users new to Common Lisp.
Examples
This part of the documentation contains worked examples of statistical analysis and plotting. It has less explanatory material, and more worked examples of code than other sections. If you have a common use-case and want to know how to solve it, look here.
Tutorials
This section contains tutorials, primers and ‘vignettes’. Typically tutorials contain more explanatory material, whilst primers are short-form tutorials on a particular system.
System manuals
The manuals are written at a level somewhere between an API reference and a core task. They document, with text and examples, the core APIs of each system. These are useful references for power users, developers and if you need to go a bit beyond the core tasks.
Reference
The reference manuals document the API for each system. These are typically used by developers building extensions to Lisp-Stat.
Resources
Common Lisp and statistical resources, such as books, tutorials and website. Not specific to Lisp-Stat, but useful for statistical practitioners learning Lisp.
Contributing
This section describes how to contribute to Lisp-Stat. There are both ideas on what to contribute, as well as instructions on how to contribute. Also note the section on the top right of all the documentation pages, just below the search box:
If you see a mistake in the documentation, please use the Create documentation issue link to go directly to github and report the error.
2.3 - Getting Help
Ways to get help with Lisp-Stat
There are several ways to get help with Lisp-Stat and your statistical analysis. This section describes way to get help with your data objects, with Lisp-Stat commands to process them, and with Common Lisp.
We use the algolia search engine to index the site. This search engine is specialised to work well with documentation websites like this one. If you’re looking for something and can’t find it in the navigation panes, use the search box:
Apropos
If you’re not quite sure what you’re looking for, you can use the apropos command. You can do this either from the REPL or emacs. Here are two examples:
LS-USER> (apropos "remove-if")
SB-SEQUENCE:REMOVE-IF (fbound)
SB-SEQUENCE:REMOVE-IF-NOT (fbound)
REMOVE-IF (fbound)
REMOVE-IF-NOT (fbound)
This works even better using emacs/slime. If you use the slime command sequence C-c C-d a, (all the slime documentation commands start with C-c C-d) emacs will ask you for a string. Let’s say you typed in remove-if. Emacs will open a buffer like the one below with all the docs strings for similar functions or variables:
Restart from errors
Common lisp has what is called a condition system, which is somewhat unique. One of the features of the condition system is something call restarts. Basically, one part of the system can signal a condition, and another part of it can handle the condition. One of the ways a signal can handled with is by providing various restarts. Restarts are handled by the debugger, and many users new to Common Lisp tend to shy away from the debugger (this is common to other languages too). In Common Lisp the debugger is both for developers and users.
Well written Lisp programs will provide a good set of restarts for commonly encountered situations. As an example, suppose we are plotting a data set that has a large number of data points. Experience has shown that greater than 50,000 data points can cause browser performance issues, so we’ve added a restart to warn you, seen below:
Here you can see we have options to take all the data, take n (that the user will provide) or take up to the maximum recommended number. Always look at the options offered to you by the debugger and see if any of them will fix the problem for you.
Describe data
You can use the describe command to print a description of just about anything in the Lisp environment. Lisp-Stat extends this functionality to describe data. For example:
LS-USER> (describe 'mtcars)
LS-USER::MTCARS
[symbol]
MTCARS names a special variable:
Value: #<DATA-FRAME (32 observations of 12 variables)
Documentation:
Description
The data was extracted from the 1974 Motor Trend US magazine, and
comprises fuel consumption and 10 aspects of automobile design and
performance for 32 automobiles (1973–74 models).
Note
Henderson and Velleman (1981) comment in a footnote to Table 1:
‘Hocking [original transcriber]'s noncrucial coding of the Mazda's
rotary engine as a straight six-cylinder engine and the Porsche's
flat engine as a V engine, as well as the inclusion of the diesel
Mercedes 240D, have been retained to enable direct comparisons to
Source
Henderson and Velleman (1981), Building multiple regression models
interactively. Biometrics, 37, 391–411.
Variables:
Variable | Type | Unit | Label
-------- | ---- | ---- | -----------
MODEL | STRING | NIL | NIL
MPG | DOUBLE-FLOAT | M/G | Miles/(US) gallon
CYL | INTEGER | NA | Number of cylinders
DISP | DOUBLE-FLOAT | IN3 | Displacement (cu.in.)
HP | INTEGER | HP | Gross horsepower
DRAT | DOUBLE-FLOAT | NA | Rear axle ratio
WT | DOUBLE-FLOAT | LB | Weight (1000 lbs)
QSEC | DOUBLE-FLOAT | S | 1/4 mile time
VS | CATEGORICAL | NA | Engine (0=v-shaped, 1=straight)
AM | CATEGORICAL | NA | Transmission (0=automatic, 1=manual)
GEAR | CATEGORICAL | NA | Number of forward gears
CARB | CATEGORICAL | NA | Number of carburetors
Documentation
The documentation command can be used to read the documentation of a function or variable. Here’s how to read the documentation for the Lisp-Stat mean function:
LS-USER> (documentation 'mean 'function)
"The mean of elements in OBJECT."
You can also view the documentation for variables or data objects:
LS-USER> (documentation '*ask-on-redefine* 'variable)
"If non-nil the system will ask the user for confirmation
before redefining a data frame"
Emacs inspector
When Lisp prints an interesting object to emacs/slime, it will be displayed in orange text. This indicates that it is a presentation, a special kind of object that we can manipulate. For example if you type the name of a data frame, it will return a presentation object:
Now if you right click on this object you’ll get the presentation menu:
From this menu you can go to the source code of the object, inspect & change values, describe it (as seen above, but within an emacs window), and copy it.
Slime inspector
The slime inspector is an alternative inspector for emacs, with some additional functionality.
Slime documentation
Slime documentation provides ways to browse documentation from the editor. We saw one example above with apropos. You can also browse variable and function documentation. For example if you have the cursor positioned over a function:
(show-data-frames)
and you type C-c C-d f (describe function at point), you’ll see this in an emacs window:
#<FUNCTION SHOW-DATA-FRAMES>
[compiled function]
Lambda-list: (&KEY (HEAD NIL) (STREAM *STANDARD-OUTPUT*))
Derived type: (FUNCTION (&KEY (:HEAD T) (:STREAM T)) *)
Documentation:
Print all data frames in the current environment in
reverse order of creation, i.e. most recently created first.
If HEAD is not NIL, print the first six rows, similar to the
Source file: s:/src/data-frame/src/defdf.lisp
Other help
You can also get help from the Lisp-Stat community, the user mailing list, github or stackoverflow
How to start your first project
Lisp-Stat includes a project template that you can use as a guide for your own projects.
Use the template
To get started, go to the project template
1. Click Use this template
2. Select a name for your new project and click Create repository from template
3. Make your own local working copy of your new repo using git clone, replacing https://github.com/me/example.git with your repo’s URL: git clone --depth 1 https://github.com/me/example.git
4. You can now edit your own versions of the project’s source files.
This will clone the project template into your own github repository so you can begin adding your own files to it.
Directory Structure
By convention, we use a directory structure that looks like this:
...
├── project
| ├── data
| | ├── foo.csv
| | ├── bar.json
| | └── baz.tsv
| └── src
| | └── analyse.lisp
| | └── baz.tsv
| └── tests
| | ├── test.lisp
| └── doc
| | ├── project.html
...
data
Often your project will have sample data used for examples illustrating how to use the system. Such example data goes here, as would static data files that your system includes, for example post codes (zip codes). For some projects, we keep the project data here too. If the data is obtained over the network or a data base, login credentials and code related to that is kept here. Basically, anything neccessary to obtain the data should be kept in this directory.
src
The lisp source code for loading, cleaning and analysing your data. If you are using the template for a Lisp-Stat add-on package, the source code for the functionality goes here.
tests
Tests for your code. We recommend either CL-UNIT2 or PARACHUTE for test frameworks.
docs
Generated documentation goes here. This could be both API documentation and user guides and manuals. If an index.html file appears here, github will automatically display it’s contents at project.github.io, if you have configured the repository to display documentation that way.
If you’ve cloned the project template into your local Common Lisp directory, ~/common-lisp/, then you can load it with (ql:quickload :project). Lisp will download and compile the neccessary dependencies and your project will be loaded. The first thing you’ll want to do is to configure your project.
First, change the directory and repository name to suit your environment and make sure git remotes are working properly. Save yourself some time and get git working before configuring the project further.
ASDF
The project.asd file is the Common Lisp system definition file. Rename this to be the same as your project directory and edit its contents to reflect the state of your project. To start with, don’t change any of the file names; just edit the meta data. As you add or rename source code files in the project you’ll update the file names here so Common Lisp will know that to compile. This file is analgous to a makefile in C – it tells lisp how to build your project.
Initialisation
If you need project-wide initialisation settings, you can do this in the file src/init.lisp. The template sets up a logical path name for the project:
(defun setup-project-translations ()
(setf (logical-pathname-translations "PROJECT")
(("DATA;**;*.*.*" ,(merge-pathnames "data/**/*.*" (asdf:system-source-directory 'project))))))
(setup-project-translations)
To use it, you’ll modify the directories and project name for your project, and then call (setup-project-translations) in one of your lisp initialisation files (either ls-init.lisp or .sbclrc). By default, the project data directory will be set to a subdirectory below the main project directory, and you can access files there with PROJECT:DATA;mtcars.csv for example. When you configure your logical pathnames, you’ll replace “PROJECT” with your projects name.
We use logical style pathnames throughout the Lisp-Stat documentation, even if a code level translation isn’t in place.
Basic workflow
The project templates illustrates the basic steps for a simple analysis.
The first step is to load data. The PROJECT:SRC;load file shows creating three data frames, from three different sources: CSV, TSV and JSON. Use this as a template for loading your own data.
Cleanse data
load.lisp also shows some simple cleansing, adding labels, types and attributes, and transforming (recoding) a variable. You can follow these examples for your own data sets, with the goal of creating a data frame from your data.
Analyse
PROJECT:SRC;analyse shows taking the mean and standard deviation of the mpg variable of the loaded data set. Your own analysis will, of course, be different. The examples here are meant to indicate the purpose. You may have one or more files for your analysis, including supporting functions, joining data sets, etc.
Plot
Plotting can be useful at any stage of the process. It’s inclusion as the third step isn’t intended to imply a particular importance or order. The file PROJECT:SRC;plot shows how to plot the information in the disasters data frame.
Save
Finally, you’ll want to save your data frame after you’ve got it where you want it to be. You can save project in a ‘native’ format, a lisp file, that will preserve all your meta data and is editable, or a CSV file. You should only use a CSV file if you need to use the data in another system. PROJECT:SRC;save shows how to save your work.
3 - Examples
Using Lisp-Stat in the real world
One of the best ways to learn Lisp-Stat is to see examples of actual work. This section contains examples of performing statistical analysis, derived from the book Introduction to the Practices of Statistics (2017) by Moore, McCabe and Craig and plotting from the Vega-Lite example gallery.
3.1 - Plotting
Example plots
The plots here show equivalents to the Vega-Lite example gallery. These examples show how to plot ‘raw’ data frame data.
Preliminaries
(asdf:load-system :plot/vega)
and change to the Lisp-Stat user package:
(in-package :ls-user)
The examples in this section use the vega-lite data sets. Load them all now:
(vega:load-vega-examples)
Bar charts
Bar charts are used to display information about categorical variables.
Simple bar chart
In this simple bar chart example we’ll demonstrate using literal embedded data in the form of a plist. Later you’ll see how to use a data-frame as a data source.
(plot:plot
(vega:defplot simple-bar-chart
(:mark :bar
:data (:a #(A B C D E F G H I) :b #(28 55 43 91 81 53 19 87 52))
:encoding (:x (:field :a :type :nominal :axis ("labelAngle" 0))
:y (:field :b :type :quantitative)))))
Grouped bar chart
(plot:plot
(vega:defplot grouped-bar-chart
'(:mark :bar
:data (:category #(A A A B B B C C C)
:group #(x y z x y z x y z)
:value #(0.1 0.6 0.9 0.7 0.2 1.1 0.6 0.1 0.2))
:encoding (:x (:field :category)
:y (:field :value :type :quantitative)
:x-offset (:field :group)
:color (:field group)))))
Stacked bar chart
For this example, we’ll use the Seattle weather example from the Vega website. Load it into a data frame like so:
(defdf seattle-weather (read-csv vega:seattle-weather))
;=> #<DATA-FRAME (1461 observations of 6 variables)>
We’ll use a data-frame as the data source via the Common Lisp backquote mechanism. The spec list begins with a backquote () and then the data frame is inserted with a comma (,). We’ll use this pattern frequently.
(plot:plot
(vega:defplot stacked-bar-chart
(:mark :bar
:data ,seattle-weather
:encoding (:x (:time-unit :month
:field :date
:type :ordinal
:title "Month of the year")
:y (:aggregate :count
:type :quantitative)
:color (:field :weather
:type :nominal
:title "Weather type"
:scale (:domain #("sun" "fog" "drizzle" "rain" "snow")
:range #("#e7ba52", "#c7c7c7", "#aec7e8", "#1f77b4", "#9467bd")))))))
Population pyramid
Vega calls this a diverging stacked bar chart. It is a population pyramid for the US in 2000, created using the stack feature of vega-lite. You could also create one using concat.
First, load the population data if you haven’t done so:
(defdf population (vega:read-vega vega:population))
;=> #<DATA-FRAME (570 observations of 4 variables)>
Note the use of read-vega in this case. This is because the data in the Vega example is in an application specific JSON format (Vega, of course).
(plot:plot
(vega:defplot pyramid-bar-chart
(:mark :bar
:data ,population
:width 300
:height 200
:transform #((:filter "datum.year == 2000")
(:calculate "datum.sex == 2 ? 'Female' : 'Male'" :as :gender)
(:calculate "datum.sex == 2 ? -datum.people : datum.people" :as :signed-people))
:encoding (:x (:aggregate :sum
:field :signed-people
:title "population")
:y (:field :age
:axis nil
:sort :descending)
:color (:field :gender
:scale (:range #("#675193" "#ca8861"))))
:config (:view (:stroke nil)
:axis (:grid :false)))))
Histograms & density
Basic
For this simple histogram example we’ll use the IMDB film rating data set.
(plot:plot
(vega:defplot imdb-plot
(:mark :bar
:data ,imdb
:encoding (:x (:bin (:maxbins 8) :field :imdb-rating)
:y (:aggregate :count)))))
Relative frequency
Use a relative frequency histogram to compare data sets with different numbers of observations.
The data is binned with first transform. The number of values per bin and the total number are calculated in the second and the third transform to calculate the relative frequency in the last transformation step.
(plot:plot
(vega:defplot relative-frequency-histogram
(:title "Relative Frequency"
:data ,vgcars
:transform #((:bin t
:field :horsepower
:as #(:bin-horsepower :bin-horsepower-end))
(:aggregate #((:op :count
:as "Count"))
:groupby #(:bin-horsepower :bin-horsepower-end))
(:joinaggregate #((:op :sum
:field "Count"
:as "TotalCount")))
(:calculate "datum.Count/datum.TotalCount"
:as :percent-of-total))
:mark (:type :bar :tooltip t)
:encoding (:x (:field :bin-horsepower
:title "Horsepower"
:bin (:binned t))
:x2 (:field :bin-horsepower-end)
:y (:field :percent-of-total
:type "quantitative"
:title "Relative Frequency"
:axis (:format ".1~%"))))))
2D histogram scatterplot
If you haven’t already loaded the imdb data set, do so now:
(defparameter imdb
(plot:plot
(vega:defplot histogram-scatterplot
(:mark :circle
:data ,imdb
:encoding (:x (:bin (:maxbins 10) :field imdb-rating)
:y (:bin (:maxbins 10) :field :rotten-tomatoes-rating)
:size (:aggregate :count)))))
Stacked density
(plot:plot
(vega:defplot stacked-density
(:title "Distribution of Body Mass of Penguins"
:width 400
:height 80
:data ,penguins
:mark :bar
:transform #((:density |BODY-MASS-(G)|
:groupby #(:species)
:extent #(2500 6500)))
:encoding (:x (:field :value
:type :quantitative
:title "Body Mass (g)")
:y (:field :density
:type :quantitative
:stack :zero)
:color (:field :species
:type :nominal)))))
Note the use of the multiple escape characters (|) surrounding the field BODY-MASS-(G). This is required because the JSON data set has parenthesis in the variable names, and these are reserved characters in Common Lisp. The JSON importer wrapped these in the escape character.
Scatter plots
Basic
A basic Vega-Lite scatterplot showing horsepower and miles per gallon for various cars.
(plot:plot
(vega:defplot hp-mpg
(:title "Horsepower vs. MPG"
:data ,vgcars
:mark :point
:encoding (:x (:field :horsepower :type "quantitative")
:y (:field :miles-per-gallon :type "quantitative")))))
Colored
In this example we’ll show how to add some additional information to the cars scatter plot to show the cars origin. The Vega-Lite example shows that we have to add two new directives to the encoding of the plot:
(plot:plot
(vega:defplot hp-mpg-plot
(:title "Vega Cars"
:data ,vgcars
:mark :point
:encoding (:x (:field :horsepower :type "quantitative")
:y (:field :miles-per-gallon :type "quantitative")
:color (:field :origin :type "nominal")
:shape (:field :origin :type "nominal")))))
With this change we can see that the higher horsepower, lower efficiency, cars are from the USA, and the higher efficiency cars from Japan and Europe.
Text marks
The same information, but further indicated with a text marker. This Vega-Lite example uses a data transformation.
(plot:plot
(vega:defplot colored-text-hp-mpg-plot
(:title "Vega Cars"
:data ,vgcars
:transform #((:calculate "datum.origin[0]" :as "OriginInitial"))
:mark :text
:encoding (:x (:field :horsepower :type "quantitative")
:y (:field :miles-per-gallon :type "quantitative")
:color (:field :origin :type "nominal")
:text (:field "OriginInitial" :type "nominal")))))
Notice here we use a string for the field value and not a symbol. This is because Vega is case sensitive, whereas Lisp is not. We could have also used a lower-case :as value, but did not to highlight this requirement for certain Vega specifications.
Mean & SD overlay
These graph types are broken in Vega-Lite when using embedded data. See issue 8280. The JSON output by Plot is exactly the same, so only use this with URL data.
This Vega-Lite scatterplot with mean and standard deviation overlay demonstrates the use of layers in a plot.
Lisp-Stat equivalent
(plot:plot
(vega:defplot mean-hp-mpg-plot
(:title "Vega Cars"
:data ,vgcars
:layer #((:mark :point
:encoding (:x (:field :horsepower :type "quantitative")
:y (:field :miles-per-gallon
:type "quantitative")))
(:mark (:type :errorband :extent :stdev :opacity 0.2)
:encoding (:y (:field :miles-per-gallon
:type "quantitative"
:title "Miles per Gallon")))
(:mark :rule
:encoding (:y (:field :miles-per-gallon
:type "quantitative"
:aggregate :mean)))))))
Linear regression
(plot:plot
(vega:defplot linear-regression
(:data ,imdb
:layer #((:mark (:type :point
:filled t)
:encoding (:x (:field :rotten-tomatoes-rating
:type :quantitative
:title "Rotten Tomatoes Rating")
:y (:field :imdb-rating
:type :quantitative
:title "IMDB Rating")))
(:mark (:type :line
:color "firebrick")
:transform #((:regression :imdb-rating
:on :rotten-tomatoes-rating))
:encoding (:x (:field :rotten-tomatoes-rating
:type :quantitative
:title "Rotten Tomatoes Rating")
:y (:field :imdb-rating
:type :quantitative
:title "IMDB Rating")))
(:transform #((:regression :imdb-rating
:on :rotten-tomatoes-rating
:params t)
(:calculate "'R²: '+format(datum.rSquared, '.2f')"
:as :r2))
:mark (:type :text
:color "firebrick"
:x :width
:align :right
:y -5)
:encoding (:text (:type :nominal
:field :r2)))))))
Loess regression
(plot:plot
(vega:defplot loess-regression
(:data ,imdb
:layer #((:mark (:type :point
:filled t)
:encoding (:x (:field :rotten-tomatoes-rating
:type :quantitative
:title "Rotten Tomatoes Rating")
:y (:field :imdb-rating
:type :quantitative
:title "IMDB Rating")))
(:mark (:type :line
:color "firebrick")
:transform #((:loess :imdb-rating
:on :rotten-tomatoes-rating))
:encoding (:x (:field :rotten-tomatoes-rating
:type :quantitative
:title "Rotten Tomatoes Rating")
:y (:field :imdb-rating
:type :quantitative
:title "IMDB Rating")))))))
Residuals
A dot plot showing each movie in the database, and the difference from the average movie rating. The display is sorted by year to visualize everything in sequential order. The graph is for all films before 2019.
(plot:plot
(vega:defplot residuals
(:data ,(filter-rows imdb '(and (not (eql imdb-rating :na))
(local-time:timestamp< release-date (local-time:parse-timestring "2019-01-01"))))
:transform #((:joinaggregate #((:op :mean ;we could do this above using alexandria:thread-first
:field :imdb-rating
:as :average-rating)))
(:calculate "datum['imdbRating'] - datum.averageRating"
:as :rating-delta))
:mark :point
:encoding (:x (:field :release-date
:type :temporal
:title "Release Date")
:y (:field :rating-delta
:type :quantitative
:title "Rating Delta")
:color (:field :rating-delta
:type :quantitative
:scale (:domain-mid 0)
:title "Rating Delta")))))
Query
The cars scatterplot allows you to see miles per gallon vs. horsepower. By adding sliders, you can select points by the number of cylinders and year as well, effectively examining 4 dimensions of data. Drag the sliders to highlight different points.
(plot:plot
(vega:defplot scatter-queries
(:data ,vgcars
:transform #((:calculate "year(datum.year)" :as :year))
:layer #((:params #((:name :cyl-year
:value #((:cylinders 4
:year 1799))
:select (:type :point
:fields #(:cylinders :year))
:bind (:cylinders (:input :range
:min 3
:max 8
:step 1)
:year (:input :range
:min 1969
:max 1981
:step 1))))
:mark :circle
:encoding (:x (:field :horsepower
:type :quantitative)
:y (:field :miles-per-gallon
:type :quantitative)
:color (:condition (:param :cyl-year
:field :origin
:type :nominal)
:value "grey")))
(:transform #((:filter (:param :cyl-year)))
:mark :circle
:encoding (:x (:field :horsepower
:type :quantitative)
:y (:field :miles-per-gallon
:type :quantitative)
:color (:field :origin
:type :nominal)
:size (:value 100)))))))
(plot:plot
(:data ,vgcars
:mark :point
:transform #((:calculate "'https://www.google.com/search?q=' + datum.name", :as :url))
:encoding (:x (:field :horsepower
:type :quantitative)
:y (:field :miles-per-gallon
:type :quantitative)
:color (:field :origin
:type :nominal)
:tooltip (:field :name
:type :nominal)
:href (:field :url
:type :nominal)))))
Strip plot
The Vega-Lite strip plot example shows the relationship between horsepower and the number of cylinders using tick marks.
(plot:plot
(vega:defplot strip-plot
(:title "Vega Cars"
:data ,vgcars
:mark :tick
:encoding (:x (:field :horsepower :type :quantitative)
:y (:field :cylinders :type :ordinal)))))
1D strip plot
(plot:plot
(vega:defplot 1d-strip-plot
(:title "Seattle Precipitation"
:data ,seattle-weather
:mark :tick
:encoding (:x (:field :precipitation :type :quantitative)))))
Bubble plot
This Vega-Lite example is a visualization of global deaths from natural disasters. A copy of the chart from Our World in Data.
(plot:plot
(vega:defplot natural-disaster-deaths
(:title "Deaths from global natural disasters"
:width 600
:height 400
:data ,(filter-rows disasters '(not (string= entity "All natural disasters")))
:mark (:type :circle
:opacity 0.8
:stroke :black
:stroke-width 1)
:encoding (:x (:field :year
:type :temporal
:axis (:grid :false))
:y (:field :entity
:type :nominal
:axis (:title ""))
:size (:field :deaths
:type :quantitative
:title "Annual Global Deaths"
:legend (:clip-height 30)
:scale (:range-max 5000))
:color (:field :entity
:type :nominal
:legend nil)))))
Note how we modified the example by using a lower case entity in the filter to match our default lower case variable names. Also note how we are explicit with parsing the year field as a temporal column. This is because, when creating a chart with inline data, Vega-Lite will parse the field as an integer instead of a date.
Line plots
Simple
(plot:plot
(vega:defplot simple-line-plot
(:title "Google's stock price from 2004 to early 2010"
:data ,(filter-rows stocks '(string= symbol "GOOG"))
:mark :line
:encoding (:x (:field :date
:type :temporal)
:y (:field :price
:type :quantitative)))))
Point markers
By setting the point property of the line mark definition to an object defining a property of the overlaying point marks, we can overlay point markers on top of line.
(plot:plot
(vega:defplot point-mark-line-plot
(:title "Stock prices of 5 Tech Companies over Time"
:data ,stocks
:mark (:type :line :point t)
:encoding (:x (:field :date
:time-unit :year)
:y (:field :price
:type :quantitative
:aggregate :mean)
:color (:field :symbol
:type nominal)))))
Multi-series
This example uses the custom symbol encoding for variables to generate the proper types and labels for x, y and color channels.
(plot:plot
(vega:defplot multi-series-line-chart
(:title "Stock prices of 5 Tech Companies over Time"
:data ,stocks
:mark :line
:encoding (:x (:field stocks:date)
:y (:field stocks:price)
:color (:field stocks:symbol)))))
Step
(plot:plot
(vega:defplot step-chart
(:title "Google's stock price from 2004 to early 2010"
:data ,(filter-rows stocks '(string= symbol "GOOG"))
:mark (:type :line
:interpolate "step-after")
:encoding (:x (:field stocks:date)
:y (:field stocks:price)))))
Stroke-dash
(plot:plot
(vega:defplot stroke-dash
(:title "Stock prices of 5 Tech Companies over Time"
:data ,stocks
:mark :line
:encoding (:x (:field stocks:date)
:y (:field stocks:price)
:stroke-dash (:field stocks:symbol)))))
Confidence interval
Line chart with a confidence interval band.
(plot:plot
(vega:defplot line-chart-ci
(:data ,vgcars
:encoding (:x (:field :year
:time-unit :year))
:layer #((:mark (:type :errorband
:extent :ci)
:encoding (:y (:field :miles-per-gallon
:type :quantitative
:title "Mean of Miles per Gallon (95% CIs)")))
(:mark :line
:encoding (:y (:field :miles-per-gallon
:aggregate :mean)))))))
Area charts
Simple
(plot:plot
(vega:defplot area-chart
(:title "Unemployment across industries"
:width 300
:height 200
:data ,unemployment-ind
:mark :area
:encoding (:x (:field :date
:time-unit :yearmonth
:axis (:format "%Y"))
:y (:field :count
:aggregate :sum
:title "count")))))
Stacked
Stacked area plots
(plot:plot
(vega:defplot stacked-area-chart
(:title "Unemployment across industries"
:width 300
:height 200
:data ,unemployment-ind
:mark :area
:encoding (:x (:field :date
:time-unit :yearmonth
:axis (:format "%Y"))
:y (:field :count
:aggregate :sum
:title "count")
:color (:field :series
:scale (:scheme "category20b"))))))
Horizon graph
A horizon graph is a technique for visualising time series data in a manner that makes comparisons easier based on work done at the UW Interactive Data Lab. See Sizing the Horizon: The Effects of Chart Size and Layering on the Graphical Perception of Time Series Visualizations for more details on Horizon Graphs.
(plot:plot
(vega:defplot horizon-graph
(:title "Horizon graph with 2 layers"
:width 300
:height 50
:data (:x ,(aops:linspace 1 20 20)
:y #(28 55 43 91 81 53 19 87 52 48 24 49 87 66 17 27 68 16 49 15))
:encoding (:x (:field :x
:scale (:zero :false
:nice :false))
:y (:field :y
:type :quantitative
:scale (:domain #(0 50))
:axis (:title "y")))
:layer #((:mark (:type :area
:clip t
:orient :vertical
:opacity 0.6))
(:transform #((:calculate "datum.y - 50"
:as :ny))
:mark (:type :area
:clip t
:orient :vertical)
:encoding (:y (:field "ny"
:type :quantitative
:scale (:domain #(0 50)))
:opacity (:value 0.3))))
:config (:area (:interpolate :monotone)))))
With overlay
Area chart with overlaying lines and point markers.
(plot:plot
(vega:defplot area-with-overlay
:data ,(filter-rows stocks '(string= symbol "GOOG"))
:mark (:type :area
:line t
:point t)
:encoding (:x (:field stocks:date)
:y (:field stocks:price)))))
Note the use of the variable symbols, e.g. stocks:price to fill in the variable’s information instead of :type :quantitative :title ...
Stream graph
(plot:plot
(vega:defplot stream-graph
(:title "Unemployment Stream Graph"
:width 300
:height 200
:data ,unemployment-ind
:mark :area
:encoding (:x (:field :date
:time-unit "yearmonth"
:axis (:domain :false
:format "%Y"
:tick-size 0))
:y (:field count
:aggregate :sum
:axis null
:stack :center)
:color (:field :series
:scale (:scheme "category20b"))))))
Tabular plots
Table heatmap
(plot:plot
(vega:defplot table-heatmap
(:data ,vgcars
:mark :rect
:encoding (:x (:field vgcars:cylinders)
:y (:field vgcars:origin)
:color (:field :horsepower
:aggregate :mean))
:config (:axis (:grid t :tick-band :extent)))))
Heatmap with labels
Layering text over a table heatmap
(plot:plot
(vega:defplot heatmap-labels
(:data ,vgcars
:transform #((:aggregate #((:op :count :as :num-cars))
:groupby #(:origin :cylinders)))
:encoding (:x (:field :cylinders
:type :ordinal)
:y (:field :origin
:type :ordinal))
:layer #((:mark :rect
:encoding (:color (:field :num-cars
:type :quantitative
:title "Count of Records"
:legend (:direction :horizontal
(:mark :text
:encoding (:text (:field :num-cars
:type :quantitative)
:color (:condition (:test "datum['numCars'] < 40"
:value :black)
:value :white))))
:config (:axis (:grid t
:tick-band :extent)))))
Histogram heatmap
(plot:plot
(vega:defplot heatmap-histogram
(:data ,imdb
:transform #((:and #((:field :imdb-rating :valid t)
(:field :rotten-tomatoes-rating :valid t))))
:mark :rect
:width 300
:height 200
:encoding (:x (:bin (:maxbins 60)
:field :imdb-rating
:type :quantitative
:title "IMDB Rating")
:y (:bin (:maxbins 40)
:field :rotten-tomatoes-rating
:type :quantitative
:title "Rotten Tomatoes Rating")
:color (:aggregate :count
:type :quantitative))
:config (:view (:stroke :transparent)))))
Circular plots
Pie chart
(plot:plot
(vega:defplot pie-chart
(:data (:category ,(aops:linspace 1 6 6)
:value #(4 6 10 3 7 8))
:mark :arc
:encoding (:theta (:field :value
:type :quantitative)
:color (:field :category
:type :nominal)))))
Donut chart
(plot:plot
(vega:defplot donut-chart
(:data (:category ,(aops:linspace 1 6 6)
:value #(4 6 10 3 7 8))
:encoding (:theta (:field :value
:type :quantitative)
:color (:field :category
:type :nominal)))))
This radial plot uses both angular and radial extent to convey multiple dimensions of data. However, this approach is not perceptually effective, as viewers will most likely be drawn to the total area of the shape, conflating the two dimensions. This example also demonstrates a way to add labels to circular plots.
(plot:plot
(:data (:value #(12 23 47 6 52 19))
:layer #((:mark (:type :arc
:stroke "#fff"))
(:mark (:type :text
:encoding (:text (:field :value
:type :quantitative))))
:encoding (:theta (:field :value
:type :quantitative
:stack t)
:scale (:type :sqrt
:zero t
:range-min 20))
:color (:field :value
:type :nominal
:legend nil)))))
Transformations
Normally data transformations should be done in Lisp-Stat with a data frame. These examples illustrate how to accomplish transformations using Vega-Lite.
Difference from avg
(plot:plot
(vega:defplot difference-from-average
(:data ,(filter-rows imdb '(not (eql imdb-rating :na)))
:transform #((:joinaggregate #((:op :mean ;we could do this above using alexandria:thread-first
:field :imdb-rating
:as :average-rating)))
(:filter "(datum['imdbRating'] - datum.averageRating) > 2.5"))
:layer #((:mark :bar
:encoding (:x (:field :imdb-rating
:type :quantitative
:title "IMDB Rating")
:y (:field :title
:type :ordinal
:title "Title")))
(:mark (:type :rule :color "red")
:encoding (:x (:aggregate :average
:field :average-rating
:type :quantitative)))))))
Frequency distribution
Cumulative frequency distribution of films in the IMDB database.
(plot:plot
(vega:defplot cumulative-frequency-distribution
(:data ,imdb
:transform #((:sort #((:field :imdb-rating))
:window #((:op :count
:field :count as :cumulative-count))
:frame #(nil 0)))
:mark :area
:encoding (:x (:field :imdb-rating
:type :quantitative)
:y (:field :cumulative-count
:type :quantitative)))))
Layered & cumulative histogram
(plot:plot
(vega:defplot layered-histogram
(:data ,(filter-rows imdb '(not (eql imdb-rating :na)))
:transform #((:bin t
:field :imdb-rating
:as #(:bin-imdb-rating :bin-imdb-rating-end))
(:aggregate #((:op :count :as :count))
:groupby #(:bin-imdb-rating :bin-imdb-rating-end))
(:sort #((:field :bin-imdb-rating))
:window #((:op :sum
:field :count :as :cumulative-count))
:frame #(nil 0)))
:encoding (:x (:field :bin-imdb-rating
:type :quantitative
:scale (:zero :false)
:title "IMDB Rating")
:x2 (:field :bin-imdb-rating-end))
:layer #((:mark :bar
:encoding (:y (:field :cumulative-count
:type :quantitative
:title "Cumulative Count")))
(:mark (:type :bar
:color "yellow"
:opacity 0.5)
:encoding (:y (:field :count
:type :quantitative
:title "Count")))))))
Layering averages
Layering averages over raw values.
(plot:plot
(vega:defplot layered-averages
(:data ,(filter-rows stocks '(string= symbol "GOOG"))
:layer #((:mark (:type :point
:opacity 0.3)
:encoding (:x (:field :date
:time-unit :year)
:y (:field :price
:type quantitative)))
(:mark :line
:encoding (:x (:field :date
:time-unit :year)
:y (:field :price
:aggregate :mean)))))))
Error bars
Confidence interval
Error bars showing confidence intervals.
(plot:plot
(vega:defplot error-bar-ci
(:data ,barley
:encoding (:y (:field :variety
:type :ordinal
:title "Variety"))
:layer #((:mark (:type :point
:filled t)
:encoding (:x (:field :yield
:aggregate :mean
:type :quantitative
:scale (:zero :false)
:title "Barley Yield")
:color (:value "black")))
(:mark (:type :errorbar :extent :ci)
:encoding (:x (:field :yield
:type :quantitative
:title "Barley Yield")))))))
Standard deviation
Error bars showing standard deviation.
(plot:plot
(vega:defplot error-bar-sd
(:data ,barley
:encoding (:y (:field :variety
:type :ordinal
:title "Variety"))
:layer #((:mark (:type :point
:filled t)
:encoding (:x (:field :yield
:aggregate :mean
:type :quantitative
:scale (:zero :false)
:title "Barley Yield")
:color (:value "black")))
(:mark (:type :errorbar :extent :stdev)
:encoding (:x (:field :yield
:type :quantitative
:title "Barley Yield")))))))
Box plots
Min/max whiskers
A vertical box plot showing median, min, and max body mass of penguins.
(plot:plot
(vega:defplot box-plot-min-max
(:data ,penguins
:mark (:type :boxplot
:extent "min-max")
:encoding (:x (:field :species
:type :nominal
:title "Species")
:y (:field |BODY-MASS-(G)|
:type :quantitative
:scale (:zero :false)
:title "Body Mass (g)")
:color (:field :species
:type :nominal
:legend nil)))))
Tukey
A vertical box plot showing median and lower and upper quartiles of the distribution of body mass of penguins.
(plot:plot
(vega:defplot box-plot-tukey
(:data ,penguins
:mark :boxplot
:encoding (:x (:field :species
:type :nominal
:title "Species")
:y (:field |BODY-MASS-(G)|
:type :quantitative
:scale (:zero :false)
:title "Body Mass (g)")
:color (:field :species
:type :nominal
:legend nil)))))
Summaries
Box plot with pre-computed summaries. Use this pattern to plot summaries done in a data-frame.
(plot:plot
(vega:defplot box-plot-summaries
(:title "Body Mass of Penguin Species (g)"
:lower #(2850 2700 3950)
:q1 #(3350 3487.5 4700)
:median #(3700 3700 5000)
:q3 #(4000 3950 5500)
:upper #(4775 4800 6300)
:outliers #(#() #(2700 4800) #()))
:encoding (:y (:field :species
:type :nominal
:title null))
:layer #((:mark (:type :rule)
:encoding (:x (:field :lower
:type :quantitative
:scale (:zero :false)
:title null)
:x2 (:field :upper)))
(:mark (:type :bar :size 14)
:encoding (:x (:field :q1
:type :quantitative)
:x2 (:field :q3)
:color (:field :species
:type :nominal
:legend null)))
(:mark (:type :tick
:color :white
:size 14)
:encoding (:x (:field :median
:type :quantitative)))
(:transform #((:flatten #(:outliers)))
:mark (:type :point :style "boxplot-outliers")
:encoding (:x (:field :outliers
:type :quantitative)))))))
Layered
Rolling average
Plot showing a 30 day rolling average with raw values in the background.
(plot:plot
(vega:defplot moving-average
(:width 400
:height 300
:data ,seattle-weather
:transform #((:window #((:field :temp-max
:op :mean
:as :rolling-mean))
:frame #(-15 15)))
:encoding (:x (:field :date
:type :temporal
:title "Date")
:y (:type :quantitative
:axis (:title "Max Temperature and Rolling Mean")))
:layer #((:mark (:type :point :opacity 0.3)
:encoding (:y (:field :temp-max
:title "Max Temperature")))
(:mark (:type :line :color "red" :size 3)
:encoding (:y (:field :rolling-mean
:title "Rolling Mean of Max Temperature")))))))
Histogram w/mean
(plot:plot
(vega:defplot histogram-with-mean
(:data ,imdb
:layer #((:mark :bar
:encoding (:x (:field :imdb-rating
:bin t
:title "IMDB Rating")
:y (:aggregate :count)))
(:mark :rule
:encoding (:x (:field :imdb-rating
:aggregate :mean
:title "Mean of IMDB Rating")
:color (:value "red")
:size (:value 5)))))))
Interactive
This section demonstrates interactive plots.
Scatter plot matrix
This Vega-Lite interactive scatter plot matrix includes interactive elements and demonstrates creating a SPLOM (scatter plot matrix).
(defparameter vgcars-splom
(vega::make-plot "vgcars-splom"
vgcars
("$schema" "https://vega.github.io/schema/vega-lite/v5.json" :title "Scatterplot Matrix for Vega Cars" :repeat (:row #(:horsepower :acceleration :miles-per-gallon) :column #(:miles-per-gallon :acceleration :horsepower)) :spec (:data (:url "/data/vgcars-splom-data.json") :mark :point :params #((:name "brush" :select (:type "interval" :resolve "union" :on "[mousedown[event.shiftKey], window:mouseup] > window:mousemove!" :translate "[mousedown[event.shiftKey], window:mouseup] > window:mousemove!" :zoom "wheel![event.shiftKey]")) (:name "grid" :select (:type "interval" :resolve "global" :translate "[mousedown[!event.shiftKey], window:mouseup] > window:mousemove!" :zoom "wheel![!event.shiftKey]") :bind :scales)) :encoding (:x (:field (:repeat "column") :type "quantitative") :y (:field (:repeat "row") :type "quantitative" :axis ("minExtent" 30)) :color (:condition (:param "brush" :field :origin :type "nominal") :value "grey")))))) (plot:plot vgcars-splom) This example is one of those mentioned in the plotting tutorial that uses a non-standard location for the data property. Weather exploration This graph shows an interactive view of Seattle’s weather, including maximum temperature, amount of precipitation, and type of weather. By clicking and dragging on the scatter plot, you can see the proportion of days in that range that have sun, rain, fog, snow, etc. (plot:plot (vega:defplot weather-exploration (:title "Seattle Weather, 2012-2015" :data ,seattle-weather :vconcat #(;; upper graph (:encoding (:color (:condition (:param :brush :title "Weather" :field :weather :type :nominal :scale (:domain #("sun" "fog" "drizzle" "rain" "snow") :range #("#e7ba52", "#a7a7a7", "#aec7e8", "#1f77b4", "#9467bd"))) :value "lightgray") :size (:field :precipitation :type :quantitative :title "Precipitation" :scale (:domain #(-1 50))) :x (:field :date :time-unit :monthdate :title "Date" :axis (:format "%b")) :y (:field :temp-max :type :quantitative :scale (:domain #(-5 40)) :title "Maximum Daily Temperature (C)")) :width 600 :height 300 :mark :point :params #((:name :brush :select (:type :interval :encodings #(:x)))) :transform #((:filter (:param :click)))) ;; lower graph (:encoding (:color (:condition (:param :click :field :weather :scale (:domain #("sun", "fog", "drizzle", "rain", "snow") :range #("#e7ba52", "#a7a7a7", "#aec7e8", "#1f77b4", "#9467bd"))) :value "lightgray") :x (:aggregate :count) :y (:field :weather :title "Weather")) :width 600 :mark :bar :params #((:name :click :select (:type :point :encodings #(:color)))) :transform #((:filter (:param :brush)))))))) Interactive scatterplot (plot:plot (vega:defplot global-health (:title "Global Health Statistics by Country and Year" :data ,gapminder :width 800 :height 500 :layer #((:transform #((:filter (:field :country :equal "afghanistan")) (:filter (:param :year))) :mark (:type :text :font-size 100 :x 420 :y 250 :opacity 0.06) :encoding (:text (:field :year))) (:transform #((:lookup :cluster :from (:key :id :fields #(:name) :data (:values #(("id" 0 "name" "South Asia") ("id" 1 "name" "Europe & Central Asia") ("id" 2 "name" "Sub-Saharan Africa") ("id" 3 "name" "America") ("id" 4 "name" "East Asia & Pacific") ("id" 5 "name" "Middle East & North Africa")))))) :encoding (:x (:field :fertility :type :quantitative :scale (:domain #(0 9)) :axis (:tick-count 5 :title "Fertility")) :y (:field :life-expect :type :quantitative :scale (:domain #(20 85)) :axis (:tick-count 5 :title "Life Expectancy"))) :layer #((:mark (:type :line :size 4 :color "lightgray" :stroke-cap "round") :encoding (:detail (:field :country) :order (:field :year) :opacity (:condition (:test (:or #((:param :hovered :empty :false) (:param :clicked :empty :false))) :value 0.8) :value 0))) (:params #((:name :year :value #((:year 1955)) :select (:type :point :fields #(:year)) :bind (:name :year :input :range :min 1955 :max 2005 :step 5)) (:name :hovered :select (:type :point :fields #(:country) :toggle :false :on :mouseover)) (:name :clicked :select (:type :point :fields #(:country)))) :transform #((:filter (:param :year))) :mark (:type :circle :size 100 :opacity 0.9) :encoding (:color (:field :name :title "Region"))) (:transform #((:filter (:and #((:param :year) (:or #((:param :clicked :empty :false) (:param :hovered :empty :false))))))) :mark (:type :text :y-offset -12 :font-size 12 :font-weight :bold) :encoding (:text (:field :country) :color (:field :name :title "Region"))) (:transform #((:filter (:param :hovered :empty :false)) (:filter (:not (:param :year)))) :layer #((:mark (:type :text :y-offset -12 :font-size 12 :color "gray") :encoding (:text (:field :year))) (:mark (:type :circle :color "gray")))))))))) Crossfilter Cross-filtering makes it easier and more intuitive for viewers of a plot to interact with the data and understand how one metric affects another. With cross-filtering, you can click a data point in one dashboard view to have all dashboard views automatically filter on that value. Click and drag across one of the charts to see the other variables filtered. (plot:plot (vega:defplot cross-filter (:title "Cross filtering of flights" :data ,flights-2k :transform #((:calculate "hours(datum.date)", :as "time")) ;what does 'hours' do? :repeat (:column #(:distance :delay :time)) :spec (:layer #((:params #((:name :brush :select (:type :interval :encodings #(:x)))) :mark :bar :encoding (:x (:field (:repeat :column) :bin (:maxbins 20)) :y (:aggregate :count) :color (:value "#ddd"))) (:transform #((:filter (:param :brush))) :mark :bar :encoding (:x (:field (:repeat :column) :bin (:maxbins 20)) :y (:aggregate :count)))))))) 3.2 - Statistics Examples of statistical analysis These notebooks describe how to undertake statistical analyses introduced as examples in the Ninth Edition of Introduction to the Practices of Statistics (2017) by Moore, McCabe and Craig. The notebooks are organised in the same manner as the chapters of the book. The data comes from the site IPS9 in R by Nicholas Horton. To run the notebooks you will have to install a third-party library, common-lisp-jupyter. See the cl-jupyter installation page for how to perform the installation. After installing cl-jupyter, clone the IPS repository into your ~/common-lisp/ directory. Looking at data Chapter 1 – Distributions : Exploratory data analysis using plots and numbers 4 - Tutorials End to end demonstrations of statistical analysis These learning tutorials demonstrate how to perform end-to-end statistical analysis of sample data using Lisp-Stat. Sample data is provided for both the examples and the optional exercises. By completing these tutorials you will understand the tasks required for a typical statistical workflow. 4.1 - Basics An introduction to the basics of LISP-STAT Preface This document is intended to be a tutorial introduction to the basics of LISP-STAT and is based on the original tutorial for XLISP-STAT written by Luke Tierney, updated for Common Lisp and the 2021 implementation of LISP-STAT. LISP-STAT is a statistical environment built on top of the Common Lisp general purpose programming language. The first three sections contain the information you will need to do elementary statistical calculations and plotting. The fourth section introduces some additional methods for generating and modifying data. The fifth section describes some features of the user interface that may be helpful. The remaining sections deal with more advanced topics, such as interactive plots, regression models, and writing your own functions. All sections are organized around examples, and most contain some suggested exercises for the reader. This document is not intended to be a complete manual. However, documentation for many of the commands that are available is given in the appendix. Brief help messages for these and other commands are also available through the interactive help facility described in Section 5.1 below. Common Lisp (CL) is a dialect of the Lisp programming language, published in ANSI standard document ANSI INCITS 226-1994 (S20018) (formerly X3.226-1994 (R1999)). The Common Lisp language was developed as a standardized and improved successor of Maclisp. By the early 1980s several groups were already at work on diverse successors to MacLisp: Lisp Machine Lisp (aka ZetaLisp), Spice Lisp, NIL and S-1 Lisp. Common Lisp sought to unify, standardize, and extend the features of these MacLisp dialects. Common Lisp is not an implementation, but rather a language specification. Several implementations of the Common Lisp standard are available, including free and open-source software and proprietary products. Common Lisp is a general-purpose, multi-paradigm programming language. It supports a combination of procedural, functional, and object-oriented programming paradigms. As a dynamic programming language, it facilitates evolutionary and incremental software development, with iterative compilation into efficient run-time programs. This incremental development is often done interactively without interrupting the running application. Using this Tutorial The best way to learn about a new computer programming language is usually to use it. You will get most out of this tutorial if you read it at your computer and work through the examples yourself. To make this tutorial easier the named data sets used in this tutorial have been stored in the file basic.lisp in the LS:DATASETS;TUTORIALS folder of the system. To load this file, execute: (load #P"LS:DATASETS;TUTORIALS;basic") at the command prompt (REPL). The file will be loaded and some variables will be defined for you. Why LISP-STAT Exists There are three primary reasons behind the decision to produce the LISP-STAT environment. The first is speed. The other major languages used for statistics and numerical analysis, R, Python and Julia are all fine languages, but with the rise of ‘big data’ and large data sets, require workarounds for processing large data sets. Furthermore, as interpreted languages, they are relatively slow when compared to Common Lisp, that has a compiler that produces native machine code. Not only does Common Lisp provide a compiler that produces machine code, it has native threading, a rich ecosystem of code libraries, and a history of industrial deployments, including: • Credit card authorization at AMEX (Authorizers Assistant) • US DoD logistics (and more, that we don’t know of) • CIA and NSA are big users based on Lisp sales • DWave and Rigetti use lisp for programming their quantum computers • Apple’s Siri was originally written in Lisp • Amazon got started with Lisp & C; so did Y-combinator • Google’s flight search engine is written in Common Lisp • AT&T used a stripped down version of Symbolics Lisp to process CDRs in the first IP switches Python and R are never (to my knowledge) deployed as front-line systems, but used in the back office to produce models that are executed by other applications in enterprise environments. Common Lisp eliminates that friction. Availability Source code for LISP-STAT is available in the Lisp-Stat github repository. The Getting Started section of the documentation contains instructions for downloading and installing the system. Disclaimer LISP-STAT is an experimental program. Although it is in daily use on several projects, the corporate sponsor, Symbolics Pte Ltd, takes no responsibility for losses or damages resulting directly or indirectly from the use of this program. LISP-STAT is an evolving system. Over time new features will be introduced, and existing features that do not work may be changed. Every effort will be made to keep LISP-STAT consistent with the information in this tutorial, but if this is not possible the reference documentation should give accurate information about the current use of a command. Starting and Finishing Once you have obtained the source code or pre-built image, you can load Lisp-Stat using QuickLisp. If you do not have quicklisp, stop here and get it. It is the de-facto package manager for Common Lisp and you will need it. This is what you will see if loading using the Slime IDE: CL-USER> (asdf:load-system :lisp-stat) To load "lisp-stat": Load 1 ASDF system: lisp-stat ; Loading "lisp-stat" .................................................. .................................................. [package num-utils]............................... [package num-utils]............................... [package dfio.decimal]............................ [package dfio.string-table]....................... ..... (:LISP-STAT) CL-USER> You may see more or less output, depending on whether dependent packages have been compiled before. If this is your first time running anything in this implementation of Common Lisp, you will probably see output related to the compilation of every module in the system. This could take a while, but only has to be done once. Once completed, to use the functions provided, you need to make the LISP-STAT package the current package, like this: (in-package :ls-user) #<PACKAGE "LS-USER"> LS-USER> The final LS-USER> in the window is the Slime prompt. Notice how it changes when you executed (in-package). In Slime, the prompt always indicates the current package, *package*. Any characters you type while the prompt is active will be added to the line after the final prompt. When you press return, LISP-STAT will try to interpret what you have typed and will print a response. For example, if you type a 1 and press return then LISP-STAT will respond by simply printing a 1 on the following line and then give you a new prompt: LS-USER> 1 1 LS-USER> If you type an expression like (+ 1 2), then LISP-STAT will print the result of evaluating the expression and give you a new prompt: LS-USER> (+ 1 2) 3 LS-USER> As you have probably guessed, this expression means that the numbers 1 and 2 are to be added together. The next section will give more details on how LISP-STAT expressions work. In this tutorial I will sometimes show interactions with the program as I have done here: The LS-USER> prompt will appear before lines you should type. LISP-STAT will supply this prompt when it is ready; you should not type it yourself. In later sections I will omit the new prompt following the result in order to save space. Now that you have seen how to start up LISP-STAT it is a good idea to make sure you know how to get out. The exact command to exit depends on the Common Lisp implementation you use. For SBCL, you can type the expression LS-USER> (exit) In other implementations, the command is quit. One of these methods should cause the program to exit and return you to the IDE. In Slime, you can use the , short-cut and then type sayoonara. The Basics Before we can start to use LISP-STAT for statistical work we need to learn a little about the kind of data LISP-STAT uses and about how the LISP-STAT listener and evaluator work. Data LISP-STAT works with two kinds of data: simple data and compound data. Simple data are numbers 1 ; an integer -3.14 ; a floating point number #C(0 1) ; a complex number (the imaginary unit) logical values T ; true nil ; false strings (always enclosed in double quotes) "This is a string 1 2 3 4" and symbols (used for naming things; see the following section) x x12 12x this-is-a-symbol Compound data are lists (this is a list with 7 elements) (+ 1 2 3) (sqrt 2) or vectors #(this is a vector with 7 elements) #(1 2 3) Higher dimensional arrays are another form of compound data; they will be discussed below in Section 9, “Arrays”. All the examples given above can be typed directly into the command window as they are shown here. The next subsection describes what LISP-STAT will do with these expressions. The Listener and the Evaluator A session with LISP-STAT basically consists of a conversation between you and the listener. The listener is the window into which you type your commands. When it is ready to receive a command it gives you a prompt. At the prompt you can type in an expression. You can use the mouse or the backspace key to correct any mistakes you make while typing in your expression. When the expression is complete and you type a return the listener passes the expression on to the evaluator. The evaluator evaluates the expression and returns the result to the listener for printing.1 The evaluator is the heart of the system. The basic rule to remember in trying to understand how the evaluator works is that everything is evaluated. Numbers and strings evaluate to themselves: LS-USER> 1 1 LS-USER> "Hello" "Hello" LS-USER> Lists are more complicated. Suppose you type the list (+ 1 2 3) at the listener. This list has four elements: the symbol + followed by the numbers 1, 2 and 3. Here is what happens: > (+ 1 2 3) 6 > This list is evaluated as a function application. The first element is a symbol representing a function, in this case the symbol + representing the addition function. The remaining elements are the arguments. Thus the list in the example above is interpreted to mean “Apply the function + to the numbers 1, 2 and 3”. Actually, the arguments to a function are always evaluated before the function is applied. In the previous example the arguments are all numbers and thus evaluate to themselves. On the other hand, consider LS-USER> (+ (* 2 3) 4) 10 LS-USER> The evaluator has to evaluate the first argument to the function + before it can apply the function. Occasionally you may want to tell the evaluator not to evaluate something. For example, suppose we wanted to get the evaluator to simply return the list (+ 1 2) back to us, instead of evaluating it. To do this we need to quote our list: LS-USER> (quote (+ 1 2)) (+ 1 2) LS-USER> quote is not a function. It does not obey the rules of function evaluation described above: Its argument is not evaluated. quote is called a special form – special because it has special rules for the treatment of its arguments. There are a few other special forms that we will need; I will introduce them as they are needed. Together with the basic evaluation rules described here these special forms make up the basics of the Lisp language. The special form quote is used so often that a shorthand notation has been developed, a single quote before the expression you want to quote: LS-USER> '(+ 1 2) ; single quote shorthand This is equivalent to (quote (+ 1 2)). Note that there is no matching quote following the expression. By the way, the semicolon ; is the Lisp comment character. Anything you type after a semicolon up to the next time you press return is ignored by the evaluator. Exercises For each of the following expressions try to predict what the evaluator will return. Then type them in, see what happens and try to explain any differences. 1. (+ 3 5 6) 2. (+ (- 1 2) 3) 3. ’(+ 3 5 6) 4. ’( + (- 1 2) 3) 5. (+ (- (* 2 3) (/ 6 2)) 7) 6. ’x Remember, to quit from LISP-STAT type (exit), quit or use the IDE’s exit mechanism. Elementary Statistical Operations This section introduces some of the basic graphical and numerical statistical operations that are available in LISP-STAT. First Steps Statistical data usually consists of groups of numbers. Devore and Peck [@DevorePeck Exercise 2.11] describe an experiment in which 22 consumers reported the number of times they had purchased a product during the previous 48 week period. The results are given as a table: 0 2 5 0 3 1 8 0 3 1 1 9 2 4 0 2 9 3 0 1 9 8 To examine this data in LISP-STAT we represent it as a list of numbers using the list function: (list 0 2 5 0 3 1 8 0 3 1 1 9 2 4 0 2 9 3 0 1 9 8) Note that the numbers are separated by white space (spaces, tabs or even returns), not commas. The mean function can be used to compute the average of a list of numbers. We can combine it with the list function to find the average number of purchases for our sample: (mean '(0 2 5 0 3 1 8 0 3 1 1 9 2 4 0 2 9 3 0 1 9 8)) ; => 3.227273 The median of these numbers can be computed as (median '(0 2 5 0 3 1 8 0 3 1 1 9 2 4 0 2 9 3 0 1 9 8)) ; => 2 It is of course a nuisance to have to type in the list of 22 numbers every time we want to compute a statistic for the sample. To avoid having to do this I will give this list a name using the def special form 2: (def purchases (list 0 2 5 0 3 1 8 0 3 1 1 9 2 4 0 2 9 3 0 1 9 8)) ; PURCHASES Now the symbol purchases has a value associated with it: Its value is our list of 22 numbers. If you give the symbol purchases to the evaluator then it will find the value of this symbol and return that value: LS-USER> purchases (0 2 5 0 3 1 8 0 3 1 1 9 2 4 0 2 9 3 0 1 9 8) We can now easily compute various numerical descriptive statistics for this data set: LS-USER> (mean purchases) 3.227273 LS-USER> (median purchases) 2 LS-USER> (sd purchases) 3.2795 LS-USER> (interquartile-range purchases) 4 LISP-STAT also supports elementwise arithmetic operations on vectors of numbers. Technically, overriding, or ‘shadowing’ any of the Common Lisp functions is undefined. This is usually an euphuism for ‘something really bad will happen’, so the vector functions are located in the package elmt and prefixed by e to distinguish them from the Common Lisp variants, e.g. e+ for addition, e* for multiplication, etc. Presently these functions work only on vectors, so we’ll define a new purchases variable as a vector type: (def purchases-2 #(0 2 5 0 3 1 8 0 3 1 1 9 2 4 0 2 9 3 0 1 9 8)) The # symbol tells the listener to interpret the list as a vector, much like the ' signals a list. Now we can add 1 to each of the purchases: LS-USER> (e+ 1 purchases-2) (1 3 6 1 4 2 9 1 4 2 2 10 3 5 1 3 10 4 1 2 10 9) and after adding 1 we can compute the natural logarithms of the results: LS-USER> (elog (e+ 1 purchases-2)) (0 1.098612 1.791759 0 1.386294 0.6931472 2.197225 0 1.386294 0.6931472 0.6931472 2.302585 1.098612 1.609438 0 1.098612 2.302585 1.386294 0 0.6931472 2.302585 2.197225) Exercises For each of the following expressions try to predict what the evaluator will return. Then type them in, see what happens and try to explain any differences. 1. (mean (list 1 2 3)) 2. (e+ #(1 2 3) 4) 3. (e* #(1 2 3) #(4 5 6)) 4. (e+ #(1 2 3) #(4 5 7)) Summary Statistics Devore and Peck [@DevorePeck page 54, Table 10] give precipitation levels recorded during the month of March in the Minneapolis - St. Paul area over a 30 year period. Let’s enter these data into LISP-STAT with the name precipitation: (def precipitation #(.77 1.74 .81 1.20 1.95 1.20 .47 1.43 3.37 2.20 3.30 3.09 1.51 2.10 .52 1.62 1.31 .32 .59 .81 2.81 1.87 1.18 1.35 4.75 2.48 .96 1.89 .90 2.05)) In typing the expression above I have inserted return and tab a few times in order to make the typed expression easier to read. The tab key indents the next line to a reasonable point to make the expression more readable. Here are some numerical summaries: LS-USER> (mean precipitation) 1.685 LS-USER> (median precipitation) 1.47 LS-USER> (standard-deviation precipitation) 1.0157 LS-USER> (interquartile-range precipitation) 1.145 The distribution of this data set is somewhat skewed to the right. Notice the separation between the mean and the median. You might want to try a few simple transformations to see if you can symmetrize the data. Square root and log transformations can be computed using the expressions (esqrt precipitation) and (elog precipitation) You should look at plots of the data to see if these transformations do indeed lead to a more symmetric shape. The means and medians of the transformed data are: LS-USER> (mean (esqrt precipitation)) 1.243006 LS-USER> (median (esqrt precipitation)) 1.212323 LS-USER> (mean (elog precipitation)) 0.3405517 LS-USER> (median (elog precipitation)) 0.384892 Generating and Modifying Data This section briefly summarizes some techniques for generating random and systematic data. Generating Random Data The state of the internal random number generator can be “randomly” reseeded, and the current value of the generator state can be saved. The mechanism used is the standard Common Lisp mechanism. The current random state is held in the variable *random-state*. The function make-random-state can be used to set and save the state. It takes an optional argument. If the argument is NIL or omitted make-random-state returns a copy of the current value of *random-state*. If the argument is a state object, a copy of it is returned. If the argument is t a new, “randomly” initialized state object is produced and returned. 3 Forming Subsets and Deleting Cases The select function allows you to select a single element or a group of elements from a list or vector. For example, if we define x by (def x (list 3 7 5 9 12 3 14 2)) then (select x i) will return the ith element of x. Common Lisp, like the language C, but in contrast to FORTRAN, numbers elements of list and vectors starting at zero. Thus the indices for the elements of x are 0, 1, 2, 3, 4, 5, 6, 7 . So LS-USER> (select x 0) 3 LS-USER> (select x 2) 5 To get a group of elements at once we can use a list of indices instead of a single index: LS-USER> (select x (list 0 2)) (3 5) If you want to select all elements of x except element 2 you can use the expression (remove 2 (iota 8)) as the second argument to the function select: LS-USER> (remove 2 (iota 8)) (0 1 3 4 5 6 7) LS-USER> (select x (remove 2 (iota 8))) (3 7 9 12 3 14 2) Combining Lists & Vectors At times you may want to combine several short lists or vectors into a single longer one. This can be done using the append function. For example, if you have three variables x, y and z constructed by the expressions (def x (list 1 2 3)) (def y (list 4)) (def z (list 5 6 7 8)) then the expression (append x y z) will return the list (1 2 3 4 5 6 7 8). For vectors, we use the more general function concatenate, which operates on sequences, that is objects of either list or vector: LS-USER> (concatenate 'vector #(1 2) #(3 4)) #(1 2 3 4) Notice that we had to indicate the return type, using the 'vector argument to concatenate. We could also have said 'list to have it return a list, and it would have coerced the arguments to the correct type. Modifying Data So far when I have asked you to type in a list of numbers I have been assuming that you will type the list correctly. If you made an error you had to retype the entire def expression. Since you can use cut & paste this is really not too serious. However it would be nice to be able to replace the values in a list after you have typed it in. The setf special form is used for this. Suppose you would like to change the 12 in the list x used in the Section 4.3 to 11. The expression (setf (select x 4) 11) will make this replacement: LS-USER> (setf (select x 4) 11) 11 LS-USER> x (3 7 5 9 11 3 14 2) The general form of setf is (setf form value) where form is the expression you would use to select a single element or a group of elements from x and value is the value you would like that element to have, or the list of the values for the elements in the group. Thus the expression (setf (select x (list 0 2)) (list 15 16)) changes the values of elements 0 and 2 to 15 and 16: LS-USER> (setf (select x (list 0 2)) (list 15 16)) (15 16) LS-USER> x (15 7 16 9 11 3 14 2) As a result, if we change an element of (the item referred to by) x with setf then we are also changing the element of (the item referred to by) y, since both x and y refer to the same item. If you want to make a copy of x and store it in y before you make changes to x then you must do so explicitly using, say, the copy-list function. The expression (defparameter y (copy-list x)) will make a copy of x and set the value of y to that copy. Now x and y refer to different items and changes to x will not affect y. Useful Shortcuts This section describes some additional features of LISP-STAT that you may find useful. Getting Help On line help is available for many of the functions in LISP-STAT 4. As an example, here is how you would get help for the function iota: LS-USER> (documentation 'iota 'function) "Return a list of n numbers, starting from START (with numeric contagion from STEP applied), each consecutive number being the sum of the previous one and STEP. START defaults to 0 and STEP to 1. Examples: (iota 4) => (0 1 2 3) (iota 3 :start 1 :step 1.0) => (1.0 2.0 3.0) (iota 3 :start -1 :step -1/2) => (-1 -3/2 -2) " Note the quote in front of iota. documentation is itself a function, and its argument is the symbol representing the function iota. To make sure documentation receives the symbol, not the value of the symbol, you need to quote the symbol. Another useful function is describe that, depending on the Lisp implementation, will return documentation and additional information about the object: LS-USER> (describe 'iota) ALEXANDRIA:IOTA [symbol] IOTA names a compiled function: Lambda-list: (ALEXANDRIA::N &KEY (ALEXANDRIA::START 0) (STEP 1)) Derived type: (FUNCTION (UNSIGNED-BYTE &KEY (:START NUMBER) (:STEP NUMBER)) (VALUES T &OPTIONAL)) Documentation: Return a list of n numbers, starting from START (with numeric contagion from STEP applied), each consecutive number being the sum of the previous one and STEP. START defaults to 0 and STEP to 1. Examples: (iota 4) => (0 1 2 3) (iota 3 :start 1 :step 1.0) => (1.0 2.0 3.0) (iota 3 :start -1 :step -1/2) => (-1 -3/2 -2) Inline proclamation: INLINE (inline expansion available) Source file: s:/src/third-party/alexandria/alexandria-1/numbers.lisp If you are not sure about the name of a function you may still be able to get some help. Suppose you want to find out about functions related to the normal distribution. Most such functions will have “norm” as part of their name. The expression (apropos 'norm) will print the help information for all symbols whose names contain the string “norm”: ALEXANDRIA::NORMALIZE ALEXANDRIA::NORMALIZE-AUXILARY ALEXANDRIA::NORMALIZE-KEYWORD ALEXANDRIA::NORMALIZE-OPTIONAL ASDF/PARSE-DEFSYSTEM::NORMALIZE-VERSION (fbound) ASDF/FORCING:NORMALIZE-FORCED-NOT-SYSTEMS (fbound) ASDF/FORCING:NORMALIZE-FORCED-SYSTEMS (fbound) ASDF/SESSION::NORMALIZED-NAMESTRING ASDF/SESSION:NORMALIZE-NAMESTRING (fbound) CL-INTERPOL::NORMAL-NAME-CHAR-P (fbound) CL-PPCRE::NORMALIZE-VAR-LIST (fbound) DISTRIBUTIONS::+NORMAL-LOG-PDF-CONSTANT+ (bound, DOUBLE-FLOAT) DISTRIBUTIONS::CDF-NORMAL% (fbound) DISTRIBUTIONS::COPY-LEFT-TRUNCATED-NORMAL (fbound) DISTRIBUTIONS::COPY-R-LOG-NORMAL (fbound) DISTRIBUTIONS::COPY-R-NORMAL (fbound) DISTRIBUTIONS::DRAW-LEFT-TRUNCATED-STANDARD-NORMAL (fbound) DISTRIBUTIONS::LEFT-TRUNCATED-NORMAL (fbound) DISTRIBUTIONS::LEFT-TRUNCATED-NORMAL-ALPHA (fbound) DISTRIBUTIONS::LEFT-TRUNCATED-NORMAL-LEFT (fbound) DISTRIBUTIONS::LEFT-TRUNCATED-NORMAL-LEFT-STANDARDIZED (fbound) DISTRIBUTIONS::LEFT-TRUNCATED-NORMAL-M0 (fbound) DISTRIBUTIONS::LEFT-TRUNCATED-NORMAL-MU (fbound) DISTRIBUTIONS::LEFT-TRUNCATED-NORMAL-P (fbound) DISTRIBUTIONS::LEFT-TRUNCATED-NORMAL-SIGMA (fbound) DISTRIBUTIONS::MAKE-LEFT-TRUNCATED-NORMAL (fbound) DISTRIBUTIONS::MAKE-R-LOG-NORMAL (fbound) DISTRIBUTIONS::MAKE-R-NORMAL (fbound) DISTRIBUTIONS::QUANTILE-NORMAL% (fbound) DISTRIBUTIONS::R-LOG-NORMAL-LOG-MEAN (fbound) ... Let me briefly explain the notation used in the information printed by describe regarding the arguments a function expects 5. This is called the lambda-list. Most functions expect a fixed set of arguments, described in the help message by a line like Args: (x y z) or Lambda-list: (x y z) Some functions can take one or more optional arguments. The arguments for such a function might be listed as Args: (x &optional y (z t)) or Lambda-list: (x &optional y (z t)) This means that x is required and y and z are optional. If the function is named f, it can be called as (f x-val), (f x-val y-val) or (f x-val y-val z-val). The list (z t) means that if z is not supplied its default value is T. No explicit default value is specified for y; its default value is therefore NIL. The arguments must be supplied in the order in which they are listed. Thus if you want to give the argument z you must also give a value for y. Another form of optional argument is the keyword argument. The iota function for example takes arguments Args: (N &key (START 0) (STEP 1)) The n argument is required, the START argument is an optional keyword argument. The default START is 0, and the default STEP is 1. If you want to create a sequence eight numbers, with a step of two) use the expression (iota 8 :step 2) Thus to give a value for a keyword argument you give the keyword 6 for the argument, a symbol consisting of a colon followed by the argument name, and then the value for the argument. If a function can take several keyword arguments then these may be specified in any order, following the required and optional arguments. Finally, some functions can take an arbitrary number of arguments. This is denoted by a line like Args: (x &rest args) The argument x is required, and zero or more additional arguments can be supplied. In addition to providing information about functions describe also gives information about data types and certain variables. For example, LS-USER> (describe 'complex) COMMON-LISP:COMPLEX [symbol] COMPLEX names a compiled function: Lambda-list: (REALPART &OPTIONAL (IMAGPART 0)) Declared type: (FUNCTION (REAL &OPTIONAL REAL) (VALUES NUMBER &OPTIONAL)) Derived type: (FUNCTION (T &OPTIONAL T) (VALUES (OR RATIONAL (COMPLEX SINGLE-FLOAT) (COMPLEX DOUBLE-FLOAT) (COMPLEX RATIONAL)) &OPTIONAL)) Documentation: Return a complex number with the specified real and imaginary components. Known attributes: foldable, flushable, unsafely-flushable, movable Source file: SYS:SRC;CODE;NUMBERS.LISP COMPLEX names the built-in-class #<BUILT-IN-CLASS COMMON-LISP:COMPLEX>: Class precedence-list: COMPLEX, NUMBER, T Direct superclasses: NUMBER Direct subclasses: SB-KERNEL:COMPLEX-SINGLE-FLOAT, SB-KERNEL:COMPLEX-DOUBLE-FLOAT Sealed. No direct slots. COMPLEX names a primitive type-specifier: Lambda-list: (&OPTIONAL (SB-KERNEL::TYPESPEC '*)) shows the function, type and class documentation for complex, and LS-USER> (documentation 'pi 'variable) PI [variable-doc] The floating-point number that is approximately equal to the ratio of the circumference of a circle to its diameter. shows the variable documentation for pi7. Listing and Undefining Variables After you have been working for a while you may want to find out what variables you have defined (using def). The function variables will produce a listing: LS-USER> (variables) CO HC RURAL URBAN PRECIPITATION PURCHASES NIL LS-USER> If you are working with very large variables you may occasionally want to free up some space by getting rid of some variables you no longer need. You can do this using the undef-var function: LS-USER> (undef-var 'co) CO LS-USER> (variables) HC RURAL URBAN PRECIPITATION PURCHASES NIL LS-USER> More on the Listener Common Lisp provides a simple command history mechanism. The symbols -, , *, **, +, ++, and +++ are used for this purpose. The top level reader binds these symbols as follows: - the current input expression + the last expression read ++ the previous value of + +++ the previous value of ++ the result of the last evaluation * the previous value of ** the previous value of * The variables , * and ** are probably most useful. For example, if you read a data-frame but forget to assign the resulting object to a variable: LS-USER> (read-csv rdata:mtcars) WARNING: Missing column name was filled in #<DATA-FRAME (32 observations of 12 variables)> you can recover it using one of the history variables: (defparameter mtcars *) ; MTCARS The symbol MTCARS now has the data-frame object as its value. Like most interactive systems, Common Lisp needs a system for dynamically managing memory. The system used depends on the implementation. The most common way (SBCL, CCL) is to grab memory out of a fixed bin until the bin is exhausted. At that point the system pauses to reclaim memory that is no longer being used. This process, called garbage collection, will occasionally cause the system to pause if you are using large amounts of memory. Loading Files The data for the examples and exercises in this tutorial, when not loaded from the network, have been stored on files with names ending in .lisp. In the LISP-STAT system directory they can be found in the folder Datasets. Any variables you save (see the next subsection for details) will also be saved in files of this form. The data in these files can be read into LISP-STAT with the load function. To load a file named randu.lisp type the expression (load #P"LS:DATASETS;RANDU.LISP") or just (load #P"LS:DATASETS;randu") If you give load a name that does not end in .lisp then load will add this suffix. Saving Your Work Save a Session If you want to record a session with LISP-STAT you can do so using the dribble function. The expression (dribble "myfile") starts a recording. All expressions typed by you and all results printed by LISP-STAT will be entered into the file named myfile. The expression (dribble) stops the recording. Note that (dribble "myfile") starts a new file by the name myfile. If you already have a file by that name its contents will be lost. Thus you can’t use dribble to toggle on and off recording to a single file. dribble only records text that is typed, not plots. However, you can use the buttons displayed on a plot to save in SVG or PNG format. The original HTML plots are saved in your operating system’s TEMP directory and can be viewed again until the directory is cleared during a system reboot. Saving Variables Variables you define in LISP-STAT only exist for the duration of the current session. If you quit from LISP-STAT your data will be lost. To preserve your data you can use the savevar function. This function allows you to save one a variable into a file. Again a new file is created and any existing file by the same name is destroyed. To save the variable precipitation in a file called precipitation type (savevar 'precipitation "precipitation") Do not add the .lisp suffix yourself; save will supply it. To save the two variables precipitation and purchases in the file examples.lisp type 8. (savevar '(purchases precipitation) "examples") The files precipitation.lisp and examples.lisp now contain a set of expression that, when read in with the load command, will recreate the variables precipitation and purchases. You can look at these files with an editor like the Emacs editor and you can prepare files with your own data by following these examples. Reading Data Files The data files we have used so far in this tutorial have contained Common Lisp expressions. LISP-STAT also provides functions for reading raw data files. The most commonly used is read-csv. (read-csv stream) where stream is a Common Lisp stream with the data. Streams can be obtained from files, strings or a network and are in comma separated value (CSV) format. The parser supports delimiters other than comma. The character delimited reader should be adequate for most purposes. If you have to read a file that is not in a character delimited format you can use the raw file handling functions of Common Lisp. User Initialization File Each Common Lisp implementation provides a way to execute initialization code upon start-up. You can use this file to load any data sets you would like to have available or to define functions of your own. LISP-STAT also has an initialization file, ls-init.lisp, in your home directory. Typically you will use the lisp implementation initialization file for global level initialization, and ls-init.lisp for data related customizations. See the section Initialization file in the manual for more information. Defining Functions & Methods This section gives a brief introduction to programming LISP-STAT. The most basic programming operation is to define a new function. Closely related is the idea of defining a new method for an object. 9 Defining Functions You can use the Common Lisp language to define functions of your own. Many of the functions you have been using so far are written in this language. The special form used for defining functions is called defun. The simplest form of the defun syntax is (defun fun args expression) where fun is the symbol you want to use as the function name, args is the list of the symbols you want to use as arguments, and expression is the body of the function. Suppose for example that you want to define a function to delete a case from a list. This function should take as its arguments the list and the index of the case you want to delete. The body of the function can be based on either of the two approaches described in Section 4.3 above. Here is one approach: (defun delete-case (x i) (select x (remove i (iota (- (length x) 1))))) I have used the function length in this definition to determine the length of the argument x. Note that none of the arguments to defun are quoted: defun is a special form that does not evaluate its arguments. Unless the functions you define are very simple you will probably want to define them in a file and load the file into LISP-STAT with the load command. You can put the functions in the implementation’s initialization file or include in the initialization file a load command that will load another file. The version of Common Lisp for the Macintosh, CCL, includes a simple editor that can be used from within LISP-STAT. Matrices and Arrays LISP-STAT includes support for multidimensional arrays. In addition to the standard Common Lisp array functions, LISP-STAT also includes a system called array-operations. An array is printed using the standard Common Lisp format. For example, a 2 by 3 matrix with rows (1 2 3) and (4 5 6) is printed as #2A((1 2 3)(4 5 6)) The prefix #2A indicates that this is a two-dimensional array. This form is not particularly readable, but it has the advantage that it can be pasted into expressions and will be read as an array by the LISP reader.10 For matrices you can use the function print-matrix to get a slightly more readable representation: LS-USER> (print-matrix '#2a((1 2 3)(4 5 6)) *standard-output*) 1 2 3 4 5 6 NIL The select function can be used to extract elements or sub-arrays from an array. If A is a two dimensional array then the expression (select a 0 1) will return element 1 of row 0 of A. The expression (select a (list 0 1) (list 0 1)) returns the upper left hand corner of A. References Bates, D. M. and Watts, D. G., (1988), Nonlinear Regression Analysis and its Applications, New York: Wiley. Becker, Richard A., and Chambers, John M., (1984), S: An Interactive Environment for Data Analysis and Graphics, Belmont, Ca: Wadsworth. Becker, Richard A., Chambers, John M., and Wilks, Allan R., (1988), The New S Language: A Programming Environment for Data Analysis and Graphics, Pacific Grove, Ca: Wadsworth. Becker, Richard A., and William S. Cleveland, (1987), “Brushing scatterplots,” Technometrics, vol. 29, pp. 127-142. Betz, David, (1985) “An XLISP Tutorial,” BYTE, pp 221. Betz, David, (1988), “XLISP: An experimental object-oriented programming language,” Reference manual for XLISP Version 2.0. Chaloner, Kathryn, and Brant, Rollin, (1988) “A Bayesian approach to outlier detection and residual analysis,” Biometrika, vol. 75, pp. 651-660. Cleveland, W. S. and McGill, M. E., (1988) Dynamic Graphics for Statistics, Belmont, Ca.: Wadsworth. Cox, D. R. and Snell, E. J., (1981) Applied Statistics: Principles and Examples, London: Chapman and Hall. Dennis, J. E. and Schnabel, R. B., (1983), Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Englewood Cliffs, N.J.: Prentice-Hall. Devore, J. and Peck, R., (1986), Statistics, the Exploration and Analysis of Data, St. Paul, Mn: West Publishing Co. McDonald, J. A., (1982), “Interactive Graphics for Data Analysis,” unpublished Ph. D. thesis, Department of Statistics, Stanford University. Oehlert, Gary W., (1987), “MacAnova User’s Guide,” Technical Report 493, School of Statistics, University of Minnesota. Press, Flannery, Teukolsky and Vetterling, (1988), Numerical Recipes in C, Cambridge: Cambridge University Press. Steele, Guy L., (1984), Common Lisp: The Language, Bedford, MA: Digital Press. Stuetzle, W., (1987), “Plot windows,” J. Amer. Statist. Assoc., vol. 82, pp. 466 - 475. Tierney, Luke, (1990) LISP-STAT: Statistical Computing and Dynamic Graphics in Lisp. Forthcoming. Tierney, L. and J. B. Kadane, (1986), “Accurate approximations for posterior moments and marginal densities,” J. Amer. Statist. Assoc., vol. 81, pp. 82-86. Tierney, Luke, Robert E. Kass, and Joseph B. Kadane, (1989), “Fully exponential Laplace approximations to expectations and variances of nonpositive functions,” J. Amer. Statist. Assoc., to appear. Tierney, L., Kass, R. E., and Kadane, J. B., (1989), “Approximate marginal densities for nonlinear functions,” Biometrika, to appear. Weisberg, Sanford, (1982), “MULTREG Users Manual,” Technical Report 298, School of Statistics, University of Minnesota. Winston, Patrick H. and Berthold K. P. Horn, (1988), LISP, 3rd Ed., New York: Addison-Wesley. Appendix A: LISP-STAT Interface to the Operating System A.1 Running System Commands from LISP-STAT The run-program function can be used to run UNIX commands from within LISP-STAT. This function takes a shell command string as its argument and returns the shell exit code for the command. For example, you can print the date using the UNIX date command: LS-USER> (uiop:run-program "date" :output *standard-output*) Wed Jul 19 11:06:53 CDT 1989 0 The return value is 0, indicating successful completion of the UNIX command. 1. It is possible to make a finer distinction. The reader takes a string of characters from the listener and converts it into an expression. The evaluator evaluates the expression and the printer converts the result into another string of characters for the listener to print. For simplicity I will use evaluator to describe the combination of these functions. ↩︎ 2. def acts like a special form, rather than a function, since its first argument is not evaluated (otherwise you would have to quote the symbol). Technically def is a macro, not a special form, but I will not worry about the distinction in this tutorial. def is closely related to the standard Lisp special forms setf and setq. The advantage of using def is that it adds your variable name to a list of def‘ed variables that you can retrieve using the function variables. If you use setf or setq there is no easy way to find variables you have defined, as opposed to ones that are predefined. def always affects top level symbol bindings, not local bindings. It cannot be used in function definitions to change local bindings. ↩︎ 3. The generator used is Marsaglia’s portable generator from the Core Math Libraries distributed by the National Bureau of Standards. A state object is a vector containing the state information of the generator. “Random” reseeding occurs off the system clock. ↩︎ 4. Help is available both in the REPL, and online at https://lisp-stat.dev/ ↩︎ 5. The notation used corresponds to the specification of the argument lists in Lisp function definitions. See Section 8{reference-type=“ref” reference=“Fundefs”} for more information on defining functions. ↩︎ 6. Note that the keyword :title has not been quoted. Keyword symbols, symbols starting with a colon, are somewhat special. When a keyword symbol is created its value is set to itself. Thus a keyword symbol effectively evaluates to itself and does not need to be quoted. ↩︎ 7. Actually pi represents a constant, produced with defconst. Its value cannot be changed by simple assignment. ↩︎ 8. I have used a quoted list ’(purchases precipitation) in this expression to pass the list of symbols to the savevar function. A longer alternative would be the expression (list ’purchases ’precipitation). ↩︎ 9. The discussion in this section only scratches the surface of what you can do with functions in the XLISP language. To see more examples you can look at the files that are loaded when XLISP-STAT starts up. For more information on options of function definition, macros, etc. see the XLISP documentation and the books on Lisp mentioned in the references. ↩︎ 10. You should quote an array if you type it in using this form, as the value of an array is not defined. ↩︎ 4.2 - Data Frame A Data Frame Primer Load data frame (ql:quickload :data-frame) Load data We will use one of the example data sets from R, mtcars, for these examples. First, switch into the Lisp-Stat package: (in-package :ls-user) Now load the data: (data :mtcars-example) ;; WARNING: Missing column name was filled in ;; T Examine data Lisp-Stat’s printing system is integrated with the Common Lisp Pretty Printing facility. To control aspects of printing, you can use the built in lisp pretty printing configuration system. By default Lisp-Stat sets *print-pretty* to nil. Basic information Type the name of the data frame at the REPL to get a simple one-line summary. mtcars ;; #<DATA-FRAME MTCARS (32 observations of 12 variables) ;; Motor Trend Car Road Tests> Printing data By default, the head function will print the first 6 rows: (head mtcars) ;; X1 MPG CYL DISP HP DRAT WT QSEC VS AM GEAR CARB ;; 0 Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4 ;; 1 Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4 ;; 2 Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1 ;; 3 Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1 ;; 4 Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2 ;; 5 Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1 and tail the last 6 rows: (tail mtcars) ;; X1 MPG CYL DISP HP DRAT WT QSEC VS AM GEAR CARB ;; 0 Porsche 914-2 26.0 4 120.3 91 4.43 2.140 16.7 0 1 5 2 ;; 1 Lotus Europa 30.4 4 95.1 113 3.77 1.513 16.9 1 1 5 2 ;; 2 Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.5 0 1 5 4 ;; 3 Ferrari Dino 19.7 6 145.0 175 3.62 2.770 15.5 0 1 5 6 ;; 4 Maserati Bora 15.0 8 301.0 335 3.54 3.570 14.6 0 1 5 8 ;; 5 Volvo 142E 21.4 4 121.0 109 4.11 2.780 18.6 1 1 4 2 print-data can be used to print the whole data frame: (print-data mtcars) ;; X1 MPG CYL DISP HP DRAT WT QSEC VS AM GEAR CARB ;; 0 Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4 ;; 1 Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4 ;; 2 Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1 ;; 3 Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1 ;; 4 Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2 ;; 5 Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1 ;; 6 Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4 ;; 7 Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2 ;; 8 Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2 ;; 9 Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4 ;; 10 Merc 280C 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4 ;; 11 Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3 ;; 12 Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3 ;; 13 Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3 ;; 14 Cadillac Fleetwood 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4 ;; 15 Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4 ;; 16 Chrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4 ;; 17 Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1 ;; 18 Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2 ;; 19 Toyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1 ;; 20 Toyota Corona 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1 ;; 21 Dodge Challenger 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2 ;; 22 AMC Javelin 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2 ;; 23 Camaro Z28 13.3 8 350.0 245 3.73 3.840 15.41 0 0 3 4 .. The two dots “..” at the end indicate that output has been truncated. Lisp-Stat sets the default for pretty printer *print-lines* to 25 rows and output more than this is truncated. If you’d like to print all rows, set this value to nil, (setf *print-lines* nil) Notice the column named X1. This is the name given to the column by the data reading function. Note the warning that was issued during the import. Missing columns are named X1, X2, …, Xn in increasing order for the duration of the Lisp-Stat session. This column is actually the row name, so we’ll rename it: (rename! mtcars 'model 'x1) The keys of a data frame are symbols, so you need to quote them to prevent the reader from trying to evaluate them to a value. Note that your row may be named something other than X1, depending on whether or not you have loaded any other data frames with variable name replacement. Also note: the ! at the end of the function name. This is a convention indicating a destructive operation; a copy will not be returned, it’s the actual data that will be modified. Now let’s view the results: (head mtcars) ;; MODEL MPG CYL DISP HP DRAT WT QSEC VS AM GEAR CARB ;; 0 Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4 ;; 1 Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4 ;; 2 Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1 ;; 3 Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1 ;; 4 Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2 ;; 5 Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1 Column names To see the names of the columns, use the column-names function: (column-names mtcars) ;; => ("MODEL" "MPG" "CYL" "DISP" "HP" "DRAT" "WT" "QSEC" "VS" "AM" "GEAR" "CARB") Remember we mentioned that the keys (column names) are symbols? Compare the above to the keys of the data frame: (keys mtcars) ;; => #(MODEL MPG CYL DISP HP DRAT WT QSEC VS AM GEAR CARB) These symbols are printed without double quotes. If a function takes a key, it must be quoted, e.g. 'mpg and not mpg or "mpg" Dimensions We saw the dimensions above in basic information. That was a printed for human consumption. To get the values in a form suitable for passing to other functions, use the dims command: (aops:dims mtcars) ;; => (32 12) Common Lisp specifies dimensions in row-column order, so mtcars has 32 rows and 12 columns. Basic Statistics Minimum & Maximum To get the minimum or maximum of a column, say mpg, you can use several Common Lisp methods. Let’s see what mpg looks like by typing the name of the column into the REPL: mtcars:mpg ;; => #(21 21 22.8d0 21.4d0 18.7d0 18.1d0 14.3d0 24.4d0 22.8d0 19.2d0 17.8d0 16.4d0 17.3d0 15.2d0 10.4d0 10.4d0 14.7d0 32.4d0 30.4d0 33.9d0 21.5d0 15.5d0 15.2d0 13.3d0 19.2d0 27.3d0 26 30.4d0 15.8d0 19.7d0 15 21.4d0) You could, for example, use something like this to find the minimum: (reduce #'min mtcars:mpg) ;; => 10.4d0 or the Lisp-Stat function sequence-maximum to find the maximum (sequence-maximum mtcars:mpg) ;; => 33.9d0 or perhaps you’d prefer alexandria:extremum, a general-purpose tool to find the minimum in a different way: (extremum mtcars:mpg #'<) ;; => 10.4d0 The important thing to note is that mtcars:mpg is a standard Common Lisp vector and you can manipulate it like one. Mean & standard deviation (mean mtcars:mpg) ;; => 20.090625000000003d0 (standard-deviation mtcars:mpg) ;; => 5.932029552301219d0 Summarise You can summarise a column with the summarize-column function: (summarize-column 'mtcars:mpg) MPG (Miles/(US) gallon) n: 32 missing: 0 min=10.40 q25=15.40 q50=19.20 mean=20.09 q75=22.80 max=33.90 or the entire data frame: LS-USER> (summary mtcars) ( MPG (Miles/(US) gallon) n: 32 missing: 0 min=10.40 q25=15.40 q50=19.20 mean=20.09 q75=22.80 max=33.90 CYL (Number of cylinders) 14 (44%) x 8, 11 (34%) x 4, 7 (22%) x 6, DISP (Displacement (cu.in.)) n: 32 missing: 0 min=71.10 q25=120.65 q50=205.87 mean=230.72 q75=334.00 max=472.00 HP (Gross horsepower) n: 32 missing: 0 min=52 q25=96.00 q50=123 mean=146.69 q75=186.25 max=335 DRAT (Rear axle ratio) n: 32 missing: 0 min=2.76 q25=3.08 q50=3.70 mean=3.60 q75=3.95 max=4.93 WT (Weight (1000 lbs)) n: 32 missing: 0 min=1.51 q25=2.54 q50=3.33 mean=3.22 q75=3.68 max=5.42 QSEC (1/4 mile time) n: 32 missing: 0 min=14.50 q25=16.88 q50=17.71 mean=17.85 q75=18.90 max=22.90 VS (Engine (0=v-shaped, 1=straight)) ones: 14 (44%) AM (Transmission (0=automatic, 1=manual)) ones: 13 (41%) GEAR (Number of forward gears) 15 (47%) x 3, 12 (38%) x 4, 5 (16%) x 5, CARB (Number of carburetors) 10 (31%) x 4, 10 (31%) x 2, 7 (22%) x 1, 3 (9%) x 3, 1 (3%) x 6, 1 (3%) x 8, ) Recall that the column named model is treated specially, notice that it is not included in the summary. You can see why it’s excluded by examining the column’s summary: LS-USER>(pprint (summarize-column 'mtcars:model))) 1 (3%) x "Mazda RX4", 1 (3%) x "Mazda RX4 Wag", 1 (3%) x "Datsun 710", 1 (3%) x "Hornet 4 Drive", 1 (3%) x "Hornet Sportabout", 1 (3%) x "Valiant", 1 (3%) x "Duster 360", 1 (3%) x "Merc 240D", 1 (3%) x "Merc 230", 1 (3%) x "Merc 280", 1 (3%) x "Merc 280C", 1 (3%) x "Merc 450SE", 1 (3%) x "Merc 450SL", 1 (3%) x "Merc 450SLC", 1 (3%) x "Cadillac Fleetwood", 1 (3%) x "Lincoln Continental", 1 (3%) x "Chrysler Imperial", 1 (3%) x "Fiat 128", 1 (3%) x "Honda Civic", 1 (3%) x "Toyota Corolla", 1 (3%) x "Toyota Corona", 1 (3%) x "Dodge Challenger", 1 (3%) x "AMC Javelin", 1 (3%) x "Camaro Z28", 1 (3%) x "Pontiac Firebird", 1 (3%) x "Fiat X1-9", 1 (3%) x "Porsche 914-2", 1 (3%) x "Lotus Europa", 1 (3%) x "Ford Pantera L", 1 (3%) x "Ferrari Dino", 1 (3%) x "Maserati Bora", 1 (3%) x "Volvo 142E", Columns with unique values in each row aren’t very interesting. Saving data To save a data frame to a CSV file, use the write-csv method. Here we save mtcars into the Lisp-Stat datasets directory, including the column names: (write-csv mtcars #P"LS:DATA;mtcars.csv" :add-first-row t) 4.3 - Plotting The basics of plotting Overview The plot system provides a way to generate specifications for plotting applications. Examples of plotting packages include gnuplot, plotly and vega/vega-lite. Plot includes a back end for Vega-Lite; this tutorial will teach you how to encode Vega-Lite plot specifications using Common Lisp. For help on Vega-Lite, see the Vega-Lite tutorials. For the most part, you can transcribe a Vega-Lite specification directly into Common Lisp and adapt it for your own plots. Preliminaries Load Vega-Lite Load Vega-Lite and network libraries: (asdf:load-system :plot/vega) and change to the Lisp-Stat user package: (in-package :ls-user) Load example data The examples in this section use the vega-lite data sets. Load them all now: (vega:load-vega-examples) Anatomy of a spec Plot takes advantage of the fact that Vega-Lite’s JSON specification is very close to that of a plist. If you are familiar with Common Lisp’s ASDF system, then you will be familiar with plot’s way of specifying graphics (plot was modeled on ASDF). Let’s look at a Vega-Lite scatterplot example: { "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
"description": "A scatterplot showing horsepower and miles per gallons for various cars.",
"data": {"url": "data/cars.json"},
"mark": "point",
"encoding": {
"x": {"field": "Horsepower", "type": "quantitative"},
"y": {"field": "Miles_per_Gallon", "type": "quantitative"}
}
}
and compare it with the equivalent Lisp-Stat version:
(plot:plot
(vega:defplot hp-mpg
(:title "Vega Cars Horsepower vs. MPG"
:description "Horsepower vs miles per gallon for various cars"
:data ,vgcars
:mark :point
:encoding (:x (:field :horsepower :type :quantitative)
:y (:field :miles-per-gallon :type :quantitative)))))
You can try plotting this now: click on the copy button in the upper right corner of the code box and paste it into the REPL. You should see a window open with the plot displayed:
The data property
The data property is similar to a data-frame, but for plotting. Most, but not all, specifications have a single, top level data property, e.g.
"data": {"url": "data/cars.json"}
for plots with a single top level data property, Lisp-Stat allows you to use a data-frame, plist or data-frame transformation directly as the value for the data property. For instance in the anatomy of a spec example we saw a data-frame inserted verbatim as a value. We can also use a plist, as in this grouped bar chart example:
(vega:defplot grouped-bar-chart
'(:mark :bar
:data (:category #(A A A B B B C C C)
:group #(x y z x y z x y z)
:value #(0.1 0.6 0.9 0.7 0.2 1.1 0.6 0.1 0.2))
:encoding (:x (:field :category)
:y (:field :value :type :quantitative)
:x-offset (:field :group)
:color (:field :group))))
finally, since a data-frame transformation returns a data-frame, we can insert the results as the data value, as in this plot of residuals:
(:data ,(filter-rows imdb '(and (not (eql imdb-rating :na))
(lt:timestamp<
release-date
(lt:parse-timestring "2019-01-01"))))
:transform #((:joinaggregate #((:op :mean
:field :imdb-rating
:as :average-rating)))
where we remove :NA and any release-date after 2018.
Data transformations
Vega has transformations as well, but are a bit clumsy compared to those in Lisp-Stat. Sometimes though, you’ll need them because a particular transformation is not something you want to do to your data-frame. You can mix transformations in a single plot, as we saw above in the residuals plot.
Plot specifications
Transformations
A plot specification is a plist. A nested plist to be exact (or, perhaps more correctly, a tree). This means that we can use Common Lisp tree/list functions to manipulate it.
If you look carefully at the examples, you’ll note they use a backquote () instead of a normal list quote ('). This is the mechanism that Common Lisp macros use to rewrite code before compilation, and we can use the same mechanism to rewrite our Vega-Lite specifications before encoding them.
The simplest, and most common, feature is insertion, like we did above. By placing a comma (,) before the name of the data frame, we told the backquote system to insert the value of the data frame instead of the word (vgcars) in the example.
There’s a lot more you can do with the backquote mechanism. We won’t say any more here, as it’s mostly a topic for advanced users. It’s important for you to know it’s there though.
Properties
properties are the keys in key/value pairs. This is true whether discussing a plist or JSON specification. Vega-Lite is case sensitive and Common Lisp is not, so there are a few rules you need to be aware of constructing plot specifications.
Keys vs. values
Plot uses yason to transform a plist plot specification to JSON. When yason encodes a spec there are two functions of importance:
• *symbol-encoder*
• *symbol-key-encoder*
The former encodes values, and the latter encodes keys. In Plot, both of these are bound to a custom function encode-symbol-as-metadata. This function does more than just encode meta data though, it also handles naming conventions.
This won’t mean much in your day-to-day usage, but you do need to be aware of the difference between encoding a key and a value. There are some values that the encoder can’t work with, and in those cases you’ll need to use text.
Finally, remember that the symbol encoders are just a convenience to make things more lisp-like. You can build a plot specification, both keys and values, entirely from text if you wish.
Encoding symbols
JavaScript identifiers are incompatible with Common Lisp identifiers, so we need a way to translate between them. plot uses Parenscript symbol conversion for this. This is one of the reasons for specialised symbol encoders. Let’s look at the difference between the standard yason encoder and the one provided by plot (Parenscript):
LS-USER> (ps:symbol-to-js-string :x-offset)
"xOffset"
LS-USER> (yason:encode-symbol-as-lowercase :x-offset)
"x-offset"
LS-USER>
That difference is significant to Vega-Lite, where identifiers with a - are not allowed. Vega is also case sensitive, so if a key is xOffset, xoffset will not work. Fortunately Parenscript’s symbol conversion is just what we need. It will automatically capitalise the words following a dash, so x-offset becomes xOffset.
Symbols can also be used for value fields, and these are more forgiving. As long as you are consistent, and keep in mind that a behind the scenes conversion is happening, you can use lisp-like identifiers. Where this mostly comes into play is when you are using Vega transforms, as in the residuals example:
(:data ,(filter-rows imdb '(and (not (eql imdb-rating :na))
(lt:timestamp<
release-date
(lt:parse-timestring "2019-01-01"))))
:transform #((:joinaggregate #((:op :mean
:field :imdb-rating
:as :average-rating)))
(:calculate "datum['imdbRating'] - datum.averageRating"
:as :rating-delta))
Notice that we used :imdb-rating as the field name for the joinaggregate, however in the calculate part of the transform we used the converted name imdbRating; that’s because by the time the transform is run, the conversion will have already happened. When we use :as we are assigning a name, when we use datum, we are telling Vega to look for a name, and since this is done in a text field, plot won’t convert the names it finds inside text strings.
Finally, remember that the Parenscript transformation is also run on variable/column names. You can see that we referred to imdb-rating in the filter. If you get confused, run (keys <data-frame>) and think about how ps:symbol-to-js-string would return the keys. That’s what Vega will use as the column names.
This is more complicated to explain than to use. See the examples for best practice patterns. You’ll probably only need to be aware of this when doing transforms in Vega.
Variable symbols
When you define a data frame using the defdf macro, Lisp-Stat sets up an environment for that data set. Part of that environment includes configuring a package with a symbol for each variable in the data set. These symbols have properties that describe the data variable, such as unit, label, type, etc. plot can make use of this information when creating plots. Here’s a previous example, where we do not use variable symbols:
(plot:plot
(vega:defplot hp-mpg-plot
(:title "Vega Cars"
:data ,vgcars
:mark :point
:encoding (:x (:field :horsepower :type :quantitative)
:y (:field :miles-per-gallon :type :quantitative)))))
and one where we do:
(plot:plot
(vega:defplot hp-mpg-plot
(:title "Vega Cars"
:data ,vgcars
:mark :point
:encoding (:x (:field vgcars:horsepower)
:y (:field vgcars:miles-per-gallon)))))
The difference is subtle, but this can save some typing if you are always adding titles and field types. We don’t use this in the examples because we want to demonstrate the lowest common denominator, but in all plots we create professionally we use variable symbols.
Special characters
There are occasions when neither the Parenscript encoder nor Yason will correctly encode a key or value. In those situations, you’ll need to use text strings. This can happen when Vega wants an encoding that includes a character that is a reader macro, #, often used in color specifications, or in format properties, like this one (:format ".1~%")
Finally, there may be times when you need to use multiple escape characters instead of quoted strings. Occasionally an imported data set will include parenthesis (). The data-frame reader will enclose these in multiple escape characters, so for example a variable named body mass (g) will be loaded as |BODY-MASS-(G)|. In these cases you can either change the name to a valid Common Lisp identifier using rename-column!, or refer to the variable using the multiple escape characters.
nil, null, false, true
Strictly speaking, false in JavaScript is the Boolean negative. In practice, "false", a string, is often accepted. This seems to vary within Vega-Lite. Some parts accept "false", others do not. The plot symbol encoder will correctly output false for the symbol :false, and you should use that anywhere you encounter a Boolean negative.
true is encoded for the lisp t.
nil and null may be entered directly as they are and will be correctly transcribed.
Embedded data
By default, plot embeds data within the Vega-Lite JSON spec, then uses vega-embed to display it within an HTML page. The alternative is to use data from a url. Both are mostly equivalent, however there can be differences in parsing, especially with dates. When data is embedded, values are parsed by the JavaScript parse in your browser. When it’s loaded via a url, it’s run through the Vega-Lite parser. Sometimes Vega-Lite needs a bit of help by was offormat for embedded data. For this reason plot always outputs dates & times in ISO-8601 format, which works everywhere.
Large data sets can be problematic if you have a number of plots open and limited memory. From experience we’ve found that around 50,000 is a reasonable upper bound, and plot will warn you if you try to embed more, offering a few restarts. You can set this upper bound by binding df:*large-data* to your desired upper limit.
Saving plots
You can save plot specifications like any other Common Lisp object, for example using with-open-file. data-frames also have read/write functions. This section describes some convenience functions for plot I/O.
Devices
A ‘device’ is a loose abstraction for the various locations that data and specifications can be written to. For example in developing this website, data is written to a directory for static files /static/data/, and the plot specification to /static/plots/. We can model this with a plist like so:
(defparameter hugo-url
'(:spec-loc #P"s:/src/documentation/static/plots/"
:data-loc #P"s:/src/documentation/static/data/"
:data-url "/data/")
With this ‘device’, you can save a plot like so:
(vega:plot-to-device hugo-url <plot-name>)
and all the bits will be saved to their proper locations. See the examples at the bottom of the file PLOT:SRC;VEGA;device.lisp for various ways to use devices and the heuristics for determining where/when/what to write. These devices have worked in practice in generating more than 300 plots, but if you encounter a use case that’s not covered, please open an issue.
Vega quirks
Vega and Vega-Lite have more than their share of quirks and inconsistencies. For the most part you’ll only notice this in the ‘grammar’ of the graphics specification, however occasionally they may look like bugs.
When using the bin transformation, Vega-Lite assumes that if you don’t provide the variable identifier to store the end of the bin, it will use the name of the start of the bin, suffixed with _end. Many of the Vega-Lite examples make this assumption. For example, this is the snippet from a Vega-Lite example:
"data": {"url": "data/cars.json"},
"transform": [{
"bin": true, "field": "Horsepower", "as": "bin_Horsepwoer"
}, {
"aggregate": [{"op": "count", "as": "Count"}],
"groupby": ["bin_Horsepwoer", "bin_Horsepwoer_end"]
}, {
"joinaggregate": [{"op": "sum", "field": "Count", "as": "TotalCount"}]
}, {
"calculate": "datum.Count/datum.TotalCount", "as": "PercentOfTotal"
}
]
Noticed the bin is using as: bin_Horsepower and then later, in the groupBy transformation, referring to bin_Horsepower_end. To work around this ‘feature’, we need to specify both the start and end for the bin operation:
:transform #((:bin t
:field :horsepower
:as #(:bin-horsepower :bin-horsepower-end))
(:aggregate #((:op :count :as :count))
:groupby #(:bin-horsepower :bin-horsepower-end))
This kind of behaviour may occur elsewhere, and it’s not well documented, so just be careful when you see any kind of beginning or end encoding in a Vega-Lite example.
Workflow
There are many possible workflows when plotting. This section describes a few that I’ve found useful when developing plots.
By default, plot will embed data in an HTML file and then call the systems browser to open it. This is a perfectly good way to develop, especially if you’re on a machine with a good amount of RAM.
Vega-Desktop
The Vega-Desktop sadly now unmaintained, still works fine for Vega-Lite up to version 5. With this desktop application, you can drag a plot specification to the application and ‘watch’ it. Once watched, any changes you make are instantly updated in the application window. Here’s a demonstration:
First, set up a ‘device’ to use a directory on the desktop for plotting:
(defparameter vdsk1 '(:spec-loc #P"~/Desktop/plots/"
:data-loc #P"~/Desktop/plots/data/")
"Put data into a data/ subdirectory")
Now send a scatterplot to this device:
(vega:plot-to-device vdsk1
(vega:defplot hp-mpg
(:data ,vgcars
:mark :point
:encoding (:x (:field :horsepower :type :quantitative)
:y (:field :miles-per-gallon :type :quantitative)))))
Now drag the file ~/Desktop/plots/hp-mpg.vl.json to the Vega-Desktop application:
and click on the ‘watch’ button:
now go back to the buffer with the spec and add a title:
(vega:plot-to-device vdsk1
(vega:defplot hp-mpg
(:title "Horsepower vs. Miles per Gallon"
:data ,vgcars
:mark :point
:encoding (:x (:field :horsepower :type "quantitative")
:y (:field :miles-per-gallon :type "quantitative")))))
and reevaluate the form. If you’re in emacs, this is the C-x C-e command. Observe how the plot is instantly updated:
I tend to use this method when I’m tweaking a plot for final publication.
Debugging
There are a couple of commonly encountered scenarios when plots don’t display correctly:
• it’s so broken the browser displays nothing
• the ... button appears, but the plot is broken
Nothing is displayed
In this case, your best option is to print to a device where you can examine the output. I use the Vega-Desktop (vgdsk1) so often it’s part of my Lisp-Stat initialisation, and I also use it for these cases. Once you’ve got the spec written out as JSON, see if Vega-Desktop can render it, paying attention to the warnings. Vega-Desktop also has a debug function:
If Vega-Desktop doesn’t help, open the file in Visual Studio code, which has a schema validator. Generally these kinds of syntax errors are easy to spot once they’re pointed out by Visual Studio.
Something is displayed
If you see the three ellipses, then you can open the plot in the online vega editor. This is very similar to Vega Desktop, but with one important difference: you can only debug plots with embedded data sets or remotely available URLs. Because the online editor is a web application hosted on Github, you can’t access local data sets. This is one reason I typically use the Vega-Desktop / Visual Studio combination.
Getting plot information
There are two ways to get information about the plots in your environment.
show-plots
The show-plots command will display the plots you have defined, along with a description (if one was provided in the spec). Here are the plots currently in my environment:
LS-USER> (vega:show-plots)
0: #<PLOT GROUPED-BAR-CHART: Bar chart
NIL>
1: #<PLOT HP-MPG-PLOT: Scatter plot
NIL>
2: #<PLOT HP-MPG: Scatter plot
Horsepower vs miles per gallon for various cars>
Only the last, from the example above, has a description.
describe
You can also use the describe command to view plot information:
LS-USER> (describe hp-mpg)
HP-MPG
Scatter plot of VGCARS
Horsepower vs miles per gallon for various cars
inspect
By typing the plots name in the emacs REPL, a ‘handle’ of sorts is returned, printed in orange:
Right click on the orange text to get a context menu allowing various operations on the object, one of which is to ‘inspect’ the object.
Included datasets
The vega package includes all the data sets in the vega data sets. They have the same name, in the vega package, e.g. vega:penguins.
Plotting non-standard specification
Most, but not all, plot specification have a single data property at the top level. The functions we’ve been using manipulate this property to make working with plots easier.
An example of a plot that has a data property at other than the top level, see the scatter plot matrix in the examples section. Other examples include vertical or horizontal concatenation (vconcat or hconcat), that could have a separate data property for each concatenated plot. repeat and facet may also have this.
For these plot types we need to handle the data property encoding ourselves. For example instead of: :data ,vgcars, we need to specify the source type: (:values ,vgcars) or (:url ,vgcars). You can still use plot to plot from the REPL, however the ‘device’ requires the :data-url value to be :ignore. For example when I created the plots for the examples section, I had a second ‘device’ for these types of plots:
(defparameter hugo-url-2 '(:spec-loc #P"s:/src/documentation/static/plots/"
:data-loc #P"s:/src/documentation/static/data/"
:data-url :ignore)
Summarising:
• You need to ensure the data property and source are correctly indicated in the plot specification
• Ensure that, if plotting to a ‘device’, :data-url is :ignored
• Ensure that the schema property is part of your specification
5 - System Manuals
Manuals for Lisp-Stat systems
This section describes the core APIs and systems that comprise Lisp-Stat. These APIs include both the high level functionality described elsewhere, as well as lower level APIs that they are built on. This section will be of interest to ‘power users’ and developers who wish to extend Lisp-Stat, or build modules of their own.
5.1 - Array Operations
Manipulating sample data as arrays
Overview
The array-operations system contains a collection of functions and macros for manipulating Common Lisp arrays and performing numerical calculations with them.
Array-operations is a ‘generic’ way of operating on array like data structures. Several aops functions have been implemented for data-frame. For those that haven’t, you can transform arrays to data frames using the df:matrix-df function, and a data-frame to an array using df:as-array. This make it convenient to work with the data sets using either system.
Quick look
Arrays can be created with numbers from a statistical distribution:
(rand '(2 2)) ; => #2A((0.62944734 0.2709539) (0.81158376 0.6700171))
in linear ranges:
(linspace 1 10 7) ; => #(1 5/2 4 11/2 7 17/2 10)
or generated using a function, optionally given index position
(generate #'identity '(2 3) :position) ; => #2A((0 1 2) (3 4 5))
They can also be transformed and manipulated:
(defparameter A #2A((1 2)
(3 4)))
(defparameter B #2A((2 3)
(4 5)))
;; split along any dimension
(split A 1) ; => #(#(1 2) #(3 4))
;; stack along any dimension
(stack 1 A B) ; => #2A((1 2 2 3)
; (3 4 4 5))
;; element-wise function map
(each #'+ #(0 1 2) #(2 3 5)) ; => #(2 4 7)
;; element-wise expressions
(vectorize (A B) (* A (sqrt B))) ; => #2A((1.4142135 3.4641016)
; (6.0 8.944272))
;; index operations e.g. matrix-matrix multiply:
(each-index (i j)
(sum-index k
(* (aref A i k) (aref B k j)))) ; => #2A((10 13)
; (22 29))
Array shorthand
The library defines the following short function names that are synonyms for Common Lisp operations:
array-operations Common Lisp
size array-total-size
rank array-rank
dim array-dimension
dims array-dimensions
nrow number of rows in matrix
ncol number of columns in matrix
The array-operations package has the nickname aops, so you can use, for example, (aops:size my-array) without use‘ing the package.
Displaced arrays
According to the Common Lisp specification, a displaced array is:
An array which has no storage of its own, but which is instead indirected to the storage of another array, called its target, at a specified offset, in such a way that any attempt to access the displaced array implicitly references the target array.
Displaced arrays are one of the niftiest features of Common Lisp. When an array is displaced to another array, it shares structure with (part of) that array. The two arrays do not need to have the same dimensions, in fact, the dimensions do not be related at all as long as the displaced array fits inside the original one. The row-major index of the former in the latter is called the offset of the displacement.
displace
Displaced arrays are usually constructed using make-array, but this library also provides displace for that purpose:
(defparameter *a* #2A((1 2 3)
(4 5 6)))
(aops:displace *a* 2 1) ; => #(2 3)
Here’s an example of using displace to implement a sliding window over some set of values, say perhaps a time-series of stock prices:
(defparameter stocks (aops:linspace 1 100 100))
(loop for i from 0 to (- (length stocks) 20)
do (format t "~A~%" (aops:displace stocks 20 i)))
;#(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20)
;#(2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21)
;#(3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22)
flatten
flatten displaces to a row-major array:
(aops:flatten *a*) ; => #(1 2 3 4 5 6)
split
The real fun starts with split, which splits off sub-arrays nested within a given axis:
(aops:split *a* 1) ; => #(#(1 2 3) #(4 5 6))
(defparameter *b* #3A(((0 1) (2 3))
((4 5) (6 7))))
(aops:split *b* 0) ; => #3A(((0 1) (2 3)) ((4 5) (6 7)))
(aops:split *b* 1) ; => #(#2A((0 1) (2 3)) #2A((4 5) (6 7)))
(aops:split *b* 2) ; => #2A((#(0 1) #(2 3)) (#(4 5) #(6 7)))
(aops:split *b* 3) ; => #3A(((0 1) (2 3)) ((4 5) (6 7)))
Note how splitting at 0 and the rank of the array returns the array itself.
sub
Now consider sub, which returns a specific array, composed of the elements that would start with given subscripts:
(aops:sub *b* 0) ; => #2A((0 1)
; (2 3))
(aops:sub *b* 0 1) ; => #(2 3)
(aops:sub *b* 0 1 0) ; => 2
In the case of vectors, sub works like aref:
(aops:sub #(1 2 3 4 5) 1) ; => 2
There is also a (setf sub) function.
partition
partition returns a consecutive chunk of an array separated along its first subscript:
(aops:partition #2A((0 1)
(2 3)
(4 5)
(6 7)
(8 9))
1 3) ; => #2A((2 3)
; (4 5))
and also has a (setf partition) pair.
combine
combine is the opposite of split:
(aops:combine #(#(0 1) #(2 3))) ; => #2A((0 1)
; (2 3))
subvec
subvec returns a displaced subvector:
(aops:subvec #(0 1 2 3 4) 2 4) ; => #(2 3)
There is also a (setf subvec) function, which is like (setf subseq) except for demanding matching lengths.
reshape
Finally, reshape can be used to displace arrays into a different shape:
(aops:reshape #2A((1 2 3)
(4 5 6)) '(3 2))
; => #2A((1 2)
; (3 4)
; (5 6))
You can use t for one of the dimensions, to be filled in automatically:
(aops:reshape *b* '(1 t)) ; => #2A((0 1 2 3 4 5 6 7))
reshape-col and reshape-row reshape your array into a column or row matrix, respectively:
(defparameter *a* #2A((0 1)
(2 3)
(4 5)))
(aops:reshape-row *a*) ;=> #2A((0 1 2 3 4 5))
(aops:reshape-col *a*) ;=> #2A((0) (1) (2) (3) (4) (5))
Specifying dimensions
Functions in the library accept the following in place of dimensions:
• a list of dimensions (as for make-array),
• a positive integer, which is used as a single-element list,
• another array, the dimensions of which are used.
The last one allows you to specify dimensions with other arrays. For example, to reshape an array a1 to look like a2, you can use
(aops:reshape a1 a2)
(aops:reshape a1 (aops:dims a2))
Creation & transformation
When the resulting element type cannot be inferred, functions that create and transform arrays are provided in pairs; one of these will allow you to specify the array-element-type of the result, while the other assumes it is t. The former ends with a *, and the element-type is always its first argument. Examples are given for the versions without *; use the other when you are optimizing your code and you are sure you can constrain to a given element-type.
Element traversal order of these functions is unspecified. The reason for this is that the library may use parallel code in the future, so it is unsafe to rely on a particular element traversal order.
The following functions all make a new array, taking the dimensions as input. The version ending in * also takes the array type as first argument. There are also versions ending in ! which do not make a new array, but take an array as first argument, which is modified and returned.
Function Description
zeros Filled with zeros
ones Filled with ones
rand Filled with uniformly distributed random numbers between 0 and 1
randn Normally distributed with mean 0 and standard deviation 1
linspace Evenly spaced numbers in given range
For example:
(aops:rand '(2 2))
; => #2A((0.6686077 0.59425664)
; (0.7987722 0.6930506))
(aops:rand* 'single-float '(2 2))
; => #2A((0.39332366 0.5557821)
; (0.48831415 0.10924244))
(let ((a (make-array '(2 2) :element-type 'double-float)))
;; Modify array A, filling with random numbers
(aops:rand! a))
; => #2A((0.6324615478515625d0 0.4636608362197876d0)
; (0.4145939350128174d0 0.5124958753585815d0))
(linspace 0 4 5) ;=> #(0 1 2 3 4)
(linspace 1 3 5) ;=> #(0 1/2 1 3/2 2)
(linspace 0 4d0 3) ;=> #(0.0d0 2.0d0 4.0d0)
generate
generate (and generate*) allow you to generate arrays using functions. The function signatures are:
generate* (element-type function dimensions &optional arguments)
generate (function dimensions &optional arguments)
Where arguments are passed to function. Possible arguments are:
• no arguments, when ARGUMENTS is nil
• the position (= row major index), when ARGUMENTS is :POSITION
• a list of subscripts, when ARGUMENTS is :SUBSCRIPTS
• both when ARGUMENTS is :POSITION-AND-SUBSCRIPTS
(aops:generate (lambda () (random 10)) 3) ; => #(6 9 5)
(aops:generate #'identity '(2 3) :position) ; => #2A((0 1 2)
; (3 4 5))
(aops:generate #'identity '(2 2) :subscripts)
; => #2A(((0 0) (0 1))
; ((1 0) (1 1)))
(aops:generate #'cons '(2 2) :position-and-subscripts)
; => #2A(((0 0 0) (1 0 1))
; ((2 1 0) (3 1 1)))
permute
permute can permute subscripts (you can also invert, complement, and complete permutations, look at the docstring and the unit tests). Transposing is a special case of permute:
(defparameter *a* #2A((1 2 3)
(4 5 6)))
(aops:permute '(0 1) *a*) ; => #2A((1 2 3)
; (4 5 6))
(aops:permute '(1 0) *a*) ; => #2A((1 4)
; (2 5)
; (3 6))
each
each applies a function to its one dimensional array arguments elementwise. It essentially is an element-wise function map on each of the vectors:
(aops:each #'+ #(0 1 2)
#(2 3 5)
#(1 1 1)
; => #(3 5 8)
vectorize
vectorize is a macro which performs elementwise operations
(defparameter a #(1 2 3 4))
(aops:vectorize (a) (* 2 a)) ; => #(2 4 6 8)
(defparameter b #(2 3 4 5))
(aops:vectorize (a b) (* a (sin b)))
; => #(0.9092974 0.28224 -2.2704074 -3.8356972)
There is also a version vectorize* which takes a type argument for the resulting array, and a version vectorize! which sets elements in a given array.
margin
The semantics of margin are more difficult to explain, so perhaps an example will be more useful. Suppose that you want to calculate column sums in a matrix. You could permute (transpose) the matrix, split its sub-arrays at rank one (so you get a vector for each row), and apply the function that calculates the sum. margin automates that for you:
(aops:margin (lambda (column)
(reduce #'+ column))
#2A((0 1)
(2 3)
(5 7)) 0) ; => #(7 11)
But the function is more general than this: the arguments inner and outer allow arbitrary permutations before splitting.
recycle
Finally, recycle allows you to reuse the elements of the first argument, object, to create new arrays by extending the dimensions. The :outer keyword repeats the original object and :inner keyword argument repeats the elements of object. When both :inner and :outer are nil, object is returned as is. Non-array objects are intepreted as rank 0 arrays, following the usual semantics.
(aops:recycle #(2 3) :inner 2 :outer 4)
; => #3A(((2 2) (3 3))
((2 2) (3 3))
((2 2) (3 3))
((2 2) (3 3)))
Three dimensional arrays can be tough to get your head around. In the example above, :outer asks for 4 2-element vectors, composed of repeating the elements of object twice, i.e. repeat ‘2’ twice and repeat ‘3’ twice. Compare this with :inner as 3:
(aops:recycle #(2 3) :inner 3 :outer 4)
; #3A(((2 2 2) (3 3 3))
((2 2 2) (3 3 3))
((2 2 2) (3 3 3))
((2 2 2) (3 3 3)))
map-array
map-array maps a function over the elements of an array.
(aops:map-array #2A((1.7 2.1 4.3 5.4)
(0.3 0.4 0.5 0.6))
#'log)
; #2A((0.53062826 0.7419373 1.4586151 1.686399)
; (-1.2039728 -0.9162907 -0.6931472 -0.5108256))
Note: This was moved to numerical-utilites in anticipation of consolidating nu:matrix and array-operations. That effort has stalled. We should probably move it back here.
Indexing operations
nested-loop
nested-loop is a simple macro which iterates over a set of indices with a given range
(defparameter A #2A((1 2) (3 4)))
(aops:nested-loop (i j) (array-dimensions A)
(setf (aref A i j) (* 2 (aref A i j))))
A ; => #2A((2 4) (6 8))
(aops:nested-loop (i j) '(2 3)
(format t "(~a ~a) " i j)) ; => (0 0) (0 1) (0 2) (1 0) (1 1) (1 2)
sum-index
sum-index is a macro which uses a code walker to determine the dimension sizes, summing over the given index or indices
(defparameter A #2A((1 2) (3 4)))
;; Trace
(aops:sum-index i (aref A i i)) ; => 5
;; Sum array
(aops:sum-index (i j) (aref A i j)) ; => 10
;; Sum array
(aops:sum-index i (row-major-aref A i)) ; => 10
The main use for sum-index is in combination with each-index.
each-index
each-index is a macro which creates an array and iterates over the elements. Like sum-index it is given one or more index symbols, and uses a code walker to find array dimensions.
(defparameter A #2A((1 2)
(3 4)))
(defparameter B #2A((5 6)
(7 8)))
;; Transpose
(aops:each-index (i j) (aref A j i)) ; => #2A((1 3)
; (2 4))
;; Sum columns
(aops:each-index i
(aops:sum-index j
(aref A j i))) ; => #(4 6)
;; Matrix-matrix multiply
(aops:each-index (i j)
(aops:sum-index k
(* (aref A i k) (aref B k j)))) ; => #2A((19 22)
; (43 50))
reduce-index
reduce-index is a more general version of sum-index; it applies a reduction operation over one or more indices.
(defparameter A #2A((1 2)
(3 4)))
;; Sum all values in an array
(aops:reduce-index #'+ i (row-major-aref A i)) ; => 10
;; Maximum value in each row
(aops:each-index i
(aops:reduce-index #'max j
(aref A i j))) ; => #(2 4)
Reducing
Some reductions over array elements can be done using the Common Lisp reduce function, together with aops:flatten, which returns a displaced vector:
(defparameter a #2A((1 2)
(3 4)))
(reduce #'max (aops:flatten a)) ; => 4
argmax & argmin
argmax and argmin find the row-major-aref index where an array value is maximum or minimum. They both return two values: the first value is the index; the second is the array value at that index.
(defparameter a #(1 2 5 4 2))
(aops:argmax a) ; => 2 5
(aops:argmin a) ; => 0 1
vectorize-reduce
More complicated reductions can be done with vectorize-reduce, for example the maximum absolute difference between arrays:
(defparameter a #2A((1 2)
(3 4)))
(defparameter b #2A((2 2)
(1 3)))
(aops:vectorize-reduce #'max (a b) (abs (- a b))) ; => 2
best
best compares two arrays according to a function and returns the ‘best’ value found. The function, FN must accept two inputs and return true/false. This function is applied to elements of ARRAY. The row-major-aref index is returned.
Example: The index of the maximum is
* (best #'> #(1 2 3 4))
3 ; row-major index
4 ; value
most
most finds the element of ARRAY that returns the value closest to positive infinity when FN is applied to the array value. Returns the row-major-aref index, and the winning value.
Example: The maximum of an array is:
(most #'identity #(1 2 3))
-> 2 (row-major index)
3 (value)
and the minimum of an array is:
(most #'- #(1 2 3))
0
-1
See also reduce-index above.
Scalar values
Library functions treat non-array objects as if they were equivalent to 0-dimensional arrays: for example, (aops:split array (rank array)) returns an array that effectively equivalent (eq) to array. Another example is recycle:
(aops:recycle 4 :inner '(2 2)) ; => #2A((4 4)
; (4 4))
Stacking
You can stack compatible arrays by column or row. Metaphorically you can think of these operations as stacking blocks. For example stacking two row vectors yields a 2x2 array:
(stack-rows #(1 2) #(3 4))
;; #2A((1 2)
;; (3 4))
Like other functions, there are two versions: generalised stacking, with rows and columns of type T and specialised versions where the element-type is specified. The versions allowing you to specialise the element type end in *.
The stack functions use object dimensions (as returned by dims to determine how to use the object.
• when the object has 0 dimensions, fill a column with the element
• when the object has 1 dimension, use it as a column
• when the object has 2 dimensions, use it as a matrix
copy-row-major-block is a utility function in the stacking package that does what it suggests; it copies elements from one array to another. This function should be used to implement copying of contiguous row-major blocks of elements.
rows
stack-rows-copy is the method used to implement the copying of objects in stack-row*, by copying the elements of source to destination, starting with the row index start-row in the latter. Elements are coerced to element-type.
stack-rows and stack-rows* stack objects row-wise into an array of the given element-type, coercing if necessary. Always return a simple array of rank 2. stack-rows always returns an array with elements of type T, stack-rows* coerces elements to the specified type.
columns
stack-cols-copy is a method used to implement the copying of objects in stack-col*, by copying the elements of source to destination, starting with the column index start-col in the latter. Elements are coerced to element-type.
stack-cols and stack-cols* stack objects column-wise into an array of the given element-type, coercing if necessary. Always return a simple array of rank 2. stack-cols always returns an array with elements of type T, stack-cols* coerces elements to the specified type.
arbitrary
stack and stack* stack array arguments along axis. element-type determines the element-type of the result.
(defparameter *a1* #(0 1 2))
(defparameter *a2* #(3 5 7))
(aops:stack 0 *a1* *a2*) ; => #(0 1 2 3 5 7)
(aops:stack 1
(aops:reshape-col *a1*)
(aops:reshape-col *a2*)) ; => #2A((0 3)
; (1 5)
; (2 7))
5.2 - Data Frame
Manipulating data using a data frame
Overview
A common lisp data frame is a collection of observations of sample variables that shares many of the properties of arrays and lists. By design it can be manipulated using the same mechanisms used to manipulate lisp arrays. This allow you to, for example, transform a data frame into an array and use array-operations to manipulate it, and then turn it into a data frame again to use in modeling or plotting.
Data frame is implemented as a two-dimensional common lisp data structure: a vector of vectors for data, and a hash table mapping variable names to column vectors. All columns are of equal length. This structure provides the flexibility required for column oriented manipulation, as well as speed for large data sets.
Data-frame is part of the Lisp-Stat package. It can be used independently if desired. Since the examples in this manual use Lisp-Stat functionality, we’ll use it from there rather than load independently.
(ql:quickload :lisp-stat)
Within the Lisp-Stat system, the LS-USER package is the package for you to do statistics work. Type the following to change to that package:
(in-package :ls-user)
Naming conventions
Lisp-Stat has a few naming conventions you should be aware of. If you see a punctuation mark or the letter ‘p’ as the last letter of a function name, it indicates something about the function:
• ‘!’ indicates that the function is destructive. It will modify the data that you pass to it. Otherwise, it will return a copy that you will need to save in a variable.
• ‘p’, ‘-p’ or ‘?’ means the function is a predicate, that is returns a Boolean truth value.
Data frame environment
Although you can work with data frames bound to symbols (as would happen if you used (defparameter ...), it is more convenient to define them as part of an environment. When you do this, the system defines a package of the same name as the data frame, and provides a symbol for each variable. Let’s see how things work without an environment:
First, we define a data frame as a parameter:
(defparameter mtcars (read-csv rdata:mtcars)
;; WARNING: Missing column name was filled in
;; MTCARS2
Now if we want a column, we can say:
(column mtcars 'mpg)
Now let’s define an environment using defdf:
(defdf mtcars (read-csv rdata:mtcars)
;; WARNING: Missing column name was filled in
;; #<DATA-FRAME (32 observations of 12 variables)
;; Motor Trend Car Road Tests>
Now we can access the same variable with:
mtcars:mpg
defdf does a lot more than this, and you should probably use defdf to set up an environment instead of defparameter. We mention it here because there’s an important bit about maintaining the environment to be aware of:
defdf
The defdf macro is conceptually equivalent to the Common Lisp defparameter, but with some additional functionality that makes working with data frames easier. You use it the same way you’d use defparameter, for example:
(defdf foo <any-function returning a data frame> )
We’ll use both ways of defining data frames in this manual. The access methods that are defined by defdf are described in the access data section.
Data types
It is important to note that there are two ‘types’ in Lisp-Stat: the implementation type and the ‘statistical’ type. Sometimes these are the same, such as in the case of reals; in other situations they are not. A good example of this can be seen in the mtcars data set. The hp (horsepower), gear and carb are all of type integer from an implementation perspective. However only horsepower is a continuous variable. You can have an additional 0.5 horsepower, but you cannot add an additional 0.5 gears or carburetors.
Data types are one kind of property that can be set on a variable.
As part of the recoding and data cleansing process, you will want to add properties to your variables. In Common Lisp, these are plists that reside on the variable symbols, e.g. mtcars:mpg. In R they are known as attributes. By default, there are three properties for each variable: type, unit and label (documentation). When you load from external formats, like CSV, these properties are all nil; when you load from a lisp file, they will have been saved along with the data (if you set them).
There are seven data types in Lisp-Stat:
• string
• integer
• double-float
• single-float
• categorical (factor in R)
• temporal
• bit (Boolean)
Numeric
Numeric types, double-float, single-float and integer are all essentially similar. The vector versions have type definitions (from the numeric-utilities package) of:
• simple-double-float-vector
• simple-single-float-vector
• simple-fixnum-vector
As an example, let’s look at mtcars:mpg, where we have a variable of type float, but a few integer values mixed in.
The values may be equivalent, but the types are not. The CSV loader has no way of knowing, so loads the column as a mixture of integers and floats. Let’s start by reloading mtcars from the CSV file:
(undef 'mtcars)
and look at the mpg variable:
LS-USER> mtcars:mpg
#(21 21 22.8d0 21.4d0 18.7d0 18.1d0 14.3d0 24.4d0 22.8d0 19.2d0 17.8d0 16.4d0
17.3d0 15.2d0 10.4d0 10.4d0 14.7d0 32.4d0 30.4d0 33.9d0 21.5d0 15.5d0 15.2d0
13.3d0 19.2d0 27.3d0 26 30.4d0 15.8d0 19.7d0 15 21.4d0)
LS-USER> (type-of *)
(SIMPLE-VECTOR 32)
Notice that the first two entries in the vector are integers, and the remainder floats. To fix this manually, you will need to coerce each element of the column to type double-float (you could use single-float in this case; as a matter of habit we usually use double-float) and then change the type of the vector to a specialised float vector.
You can use the heuristicate-types function to guess the statistical types for you. For reals and strings, heuristicate-types works fine, however because integers and bits can be used to encode categorical or numeric values, you will have to indicate the type using set-properties. We see this below with gear and carb, although implemented as integer, they are actually type categorical. The next sections describes how to set them.
Using describe, we can view the types of all the variables that heuristicate-types set:
LS-USER> (heuristicate-types mtcars)
LS-USER> (describe mtcars)
MTCARS
A data-frame with 32 observations of 12 variables
Variable | Type | Unit | Label
-------- | ---- | ---- | -----------
X8 | STRING | NIL | NIL
MPG | DOUBLE-FLOAT | NIL | NIL
CYL | INTEGER | NIL | NIL
DISP | DOUBLE-FLOAT | NIL | NIL
HP | INTEGER | NIL | NIL
DRAT | DOUBLE-FLOAT | NIL | NIL
WT | DOUBLE-FLOAT | NIL | NIL
QSEC | DOUBLE-FLOAT | NIL | NIL
VS | BIT | NIL | NIL
AM | BIT | NIL | NIL
GEAR | INTEGER | NIL | NIL
CARB | INTEGER | NIL | NIL
Notice the system correctly typed vs and am as Boolean (bit) (correct in a mathematical sense)
Strings
Unlike in R, strings are not considered categorical variables by default. Ordering of strings varies according to locale, so it’s not a good idea to rely on the strings. Nevertheless, they do work well if you are working in a single locale.
Categorical
Categorical variables have a fixed and known set of possible values. In mtcars, gear, carb vs and am are categorical variables, but heuristicate-types can’t distinguish categorical types, so we’ll set them:
(set-properties mtcars :type '(:vs :categorical
:am :categorical
:gear :categorical
:carb :categorical))
Temporal
Dates and times can be surprisingly complicated. To make working with them simpler, Lisp-Stat uses vectors of localtime objects to represent dates & times. You can set a temporal type with set-properties as well using the keyword :temporal.
Units & labels
To add units or labels to the data frame, use the set-properties function. This function takes a plist of variable/value pairs, so to set the units and labels:
(set-properties mtcars :unit '(:mpg m/g
:cyl :NA
:disp in³
:hp hp
:drat :NA
:wt lb
:qsec s
:vs :NA
:am :NA
:gear :NA
:carb :NA))
(set-properties mtcars :label '(:mpg "Miles/(US) gallon"
:cyl "Number of cylinders"
:disp "Displacement (cu.in.)"
:hp "Gross horsepower"
:drat "Rear axle ratio"
:wt "Weight (1000 lbs)"
:qsec "1/4 mile time"
:vs "Engine (0=v-shaped, 1=straight)"
:am "Transmission (0=automatic, 1=manual)"
:gear "Number of forward gears"
:carb "Number of carburetors"))
Now look at the description again:
LS-USER> (describe mtcars)
MTCARS
A data-frame with 32 observations of 12 variables
Variable | Type | Unit | Label
-------- | ---- | ---- | -----------
X8 | STRING | NIL | NIL
MPG | DOUBLE-FLOAT | M/G | Miles/(US) gallon
CYL | INTEGER | NA | Number of cylinders
DISP | DOUBLE-FLOAT | IN3 | Displacement (cu.in.)
HP | INTEGER | HP | Gross horsepower
DRAT | DOUBLE-FLOAT | NA | Rear axle ratio
WT | DOUBLE-FLOAT | LB | Weight (1000 lbs)
QSEC | DOUBLE-FLOAT | S | 1/4 mile time
VS | BIT | NA | Engine (0=v-shaped, 1=straight)
AM | BIT | NA | Transmission (0=automatic, 1=manual)
GEAR | INTEGER | NA | Number of forward gears
CARB | INTEGER | NA | Number of carburetors
You can set your own properties with this command too. To make your custom properties appear in the describe command and be saved automatically, override the describe and write-df methods, or use :after methods.
Create data-frames
A data frame can be created from a Common Lisp array, alist, plist, individual data vectors, another data frame or a vector-of vectors. In this section we’ll describe creating a data frame from each of these.
Data frame columns represent sample set variables, and its rows are observations (or cases).
(defmethod print-object ((df data-frame) stream)
"Print the first six rows of DATA-FRAME"
(let ((*print-lines* 6))
(df:print-data df stream nil)))
(set-pprint-dispatch 'df:data-frame
#'(lambda (s df) (df:print-data df s nil)))
You can ignore the warning that you’ll receive after executing the code above.
Let’s create a simple data frame. First we’ll setup some variables (columns) to represent our sample domain:
(defparameter v #(1 2 3 4)) ; vector
(defparameter b #*0110) ; bits
(defparameter s #(a b c d)) ; symbols
(defparameter plist (:vector ,v :symbols ,s)) ;only v & s
Let’s print plist. Just type the name in at the REPL prompt.
plist
(:VECTOR #(1 2 3 4) :SYMBOLS #(A B C D))
From p/a-lists
Now suppose we want to create a data frame from a plist
(apply #'df plist)
;; VECTOR SYMBOLS
;; 1 A
;; 2 B
;; 3 C
;; 4 D
We could also have used the plist-df function:
(plist-df plist)
;; VECTOR SYMBOLS
;; 1 A
;; 2 B
;; 3 C
;; 4 D
and to demonstrate the same thing using an alist, we’ll use the alexandria:plist-alist function to convert the plist into an alist:
(alist-df (plist-alist plist))
;; VECTOR SYMBOLS
;; 1 A
;; 2 B
;; 3 C
;; 4 D
From vectors
You can use make-df to create a data frame from keys and a list of vectors. Each vector becomes a column in the data-frame.
(make-df '(:a :b) ; the keys
'(#(1 2 3) #(10 20 30))) ; the columns
;; A B
;; 1 10
;; 2 20
;; 3 30
This is useful if you’ve started working with variables defined with defparameter or defvar and want to combine them into a data frame.
From arrays
matrix-df converts a matrix (array) to a data-frame with the given keys.
(matrix-df #(:a :b) #2A((1 2)
(3 4)))
;#<DATA-FRAME (2 observations of 2 variables)>
This is useful if you need to do a lot of numeric number-crunching on a data set as an array, perhaps with BLAS or array-operations then want to add categorical variables and continue processing as a data-frame.
Example datasets
Vincent Arel-Bundock maintains a library of over 1700 R datasets that is a consolidation of example data from various R packages. You can load one of these by specifying the url to the raw data to the read-csv function. For example to load the iris data set, use:
(defdf iris
"Edgar Anderson's Iris Data")
Default datasets
To make the examples and tutorials easier, Lisp-Stat includes the URLs for the R built in data sets. You can see these by viewing the rdata:*r-default-datasets* variable:
LS-USER? rdata:*r-default-datasets*
(RDATA:AIRPASSENGERS RDATA:ABILITY.COV RDATA:AIRMILES RDATA:AIRQUALITY
RDATA:ANSCOMBE RDATA:ATTENU RDATA:ATTITUDE RDATA:AUSTRES RDATA:BJSALES
RDATA:BOD RDATA:CARS RDATA:CHICKWEIGHT RDATA:CHICKWTS RDATA:CO2-1 RDATA:CO2-2
RDATA:CRIMTAB RDATA:DISCOVERIES RDATA:DNASE RDATA:ESOPH RDATA:EURO
RDATA:EUSTOCKMARKETS RDATA:FAITHFUL RDATA:FORMALDEHYDE RDATA:FREENY
RDATA:HAIREYECOLOR RDATA:HARMAN23.COR RDATA:HARMAN74.COR RDATA:INDOMETH
RDATA:INFERT RDATA:INSECTSPRAYS RDATA:IRIS RDATA:IRIS3 RDATA:ISLANDS
RDATA:JOHNSONJOHNSON RDATA:LAKEHURON RDATA:LH RDATA:LIFECYCLESAVINGS
RDATA:LOBLOLLY RDATA:LONGLEY RDATA:LYNX RDATA:MORLEY RDATA:MTCARS RDATA:NHTEMP
RDATA:NILE RDATA:NOTTEM RDATA:NPK RDATA:OCCUPATIONALSTATUS RDATA:ORANGE
RDATA:ORCHARDSPRAYS RDATA:PLANTGROWTH RDATA:PRECIP RDATA:PRESIDENTS
RDATA:PRESSURE RDATA:PUROMYCIN RDATA:QUAKES RDATA:RANDU RDATA:RIVERS
RDATA:ROCK RDATA:SEATBELTS RDATA::STUDENT-SLEEP RDATA:STACKLOSS
RDATA:SUNSPOT.MONTH RDATA:SUNSPOT.YEAR RDATA:SUNSPOTS RDATA:SWISS RDATA:THEOPH
RDATA:UKDRIVERDEATHS RDATA:UKGAS RDATA:USACCDEATHS RDATA:USARRESTS
RDATA:VOLCANO RDATA:WARPBREAKS RDATA:WOMEN RDATA:WORLDPHONES RDATA:WWWUSAGE)
To load one of these, you can use the name of the data set. For example to load mtcars:
(defdf mtcars
If you want to load all of the default R data sets, use the rdata:load-r-default-datasets command. All the data sets included in base R will now be loaded into your environment. This is useful if you are following a R tutorial, but using Lisp-Stat for the analysis software.
You may also want to save the default R data sets in order to augment the data with labels, units, types, etc. To save all of the default R data sets to the LS:DATA;R directory, use the (rdata:save-r-default-datasets) command if the default data sets have already been loaded, or save-r-data if they have not. This saves the data in lisp format.
Install R datasets
To work with all of the R data sets, we recommend you use git to download the repository to your hard drive. For example I downloaded the example data to the s: drive like this:
cd s:
git clone https://github.com/vincentarelbundock/Rdatasets.git
and setup a logical host in my ls-init.lisp file like so:
;;; Define logical hosts for external data sets
(setf (logical-pathname-translations "RDATA")
(("**;*.*.*" ,(merge-pathnames "csv/**/*.*" "s:/Rdatasets/"))))
Now you can access any of the datasets using the logical pathname. Here’s an example of creating a data frame using the ggplot mpg data set:
(defdf mpg (read-csv #P"RDATA:ggplot2;mpg.csv"))
Searching the examples
With so many data sets, it’s helpful to load the index into a data frame so you can search for specific examples. You can do this by loading the rdata:index into a data frame:
(defdf rindex (read-csv rdata:index))
I find it easiest to use the SQL-DF system to query this data. For example if you wanted to find the data sets with the largest number of observations:
(ql:quickload :sqldf)
(print-data
(sqldf:sqldf "select item, title, rows, cols from rindex order by rows desc limit 10"))
;; ITEM TITLE ROWS COLS
;; 0 military US Military Demographics 1414593 6
;; 1 Birthdays US Births in 1969 - 1988 372864 7
;; 2 wvs_justifbribe Attitudes about the Justifiability of Bribe-Taking in the ... 348532 6
;; 3 flights Flights data 336776 19
;; 4 wvs_immig Attitudes about Immigration in the World Values Survey 310388 6
;; 5 Fertility Fertility and Women's Labor Supply 254654 8
;; 6 avandia Cardiovascular problems for two types of Diabetes medicines 227571 2
;; 8 mortgages Data from "How do Mortgage Subsidies Affect Home Ownership? ..." 214144 6
;; 9 mammogram Experiment with Mammogram Randomized
Export data frames
These next few functions are the reverse of the ones above used to create them. These are useful when you want to use foreign libraries or common lisp functions to process the data.
For this section of the manual, we are going to work with a subset of the mtcars data set from above. We’ll use the select package to take the first 5 rows so that the data transformations are easier to see.
(defparameter mtcars-small (select mtcars (range 0 5) t))
The next three functions convert a data-frame to and from standard common lisp data structures. This is useful if you’ve got data in Common Lisp format and want to work with it in a data frame, or if you’ve got a data frame and want to apply Common Lisp operators on it that don’t exist in df.
as-alist
Just like it says on the tin, as-alist takes a data frame and returns an alist version of it (formatted here for clearer output – a pretty printer that outputs an alist in this format would be a welcome addition to Lisp-Stat)
(as-alist mtcars-small)
;; ((MTCARS:X1 . #("Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" "Hornet Sportabout"))
;; (MTCARS:MPG . #(21 21 22.8d0 21.4d0 18.7d0))
;; (MTCARS:CYL . #(6 6 4 6 8))
;; (MTCARS:DISP . #(160 160 108 258 360))
;; (MTCARS:HP . #(110 110 93 110 175))
;; (MTCARS:DRAT . #(3.9d0 3.9d0 3.85d0 3.08d0 3.15d0))
;; (MTCARS:WT . #(2.62d0 2.875d0 2.32d0 3.215d0 3.44d0))
;; (MTCARS:QSEC . #(16.46d0 17.02d0 18.61d0 19.44d0 17.02d0))
;; (MTCARS:VS . #*00110)
;; (MTCARS:AM . #*11100)
;; (MTCARS:GEAR . #(4 4 4 3 3))
;; (MTCARS:CARB . #(4 4 1 1 2)))
as-plist
Similarly, as-plist will return a plist:
(as-plist mtcars-small)
;; (MTCARS:X1 #("Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" "Hornet Sportabout")
;; MTCARS:MPG #(21 21 22.8d0 21.4d0 18.7d0)
;; MTCARS:CYL #(6 6 4 6 8)
;; MTCARS:DISP #(160 160 108 258 360)
;; MTCARS:HP #(110 110 93 110 175)
;; MTCARS:DRAT #(3.9d0 3.9d0 3.85d0 3.08d0 3.15d0)
;; MTCARS:WT #(2.62d0 2.875d0 2.32d0 3.215d0 3.44d0)
;; MTCARS:QSEC #(16.46d0 17.02d0 18.61d0 19.44d0 17.02d0)
;; MTCARS:VS #*00110
;; MTCARS:AM #*11100
;; MTCARS:GEAR #(4 4 4 3 3)
;; MTCARS:CARB #(4 4 1 1 2))
as-array
as-array returns the data frame as a row-major two dimensional lisp array. You’ll want to save the variable names using the keys function to make it easy to convert back (see matrix-df). One of the reasons you might want to use this function is to manipulate the data-frame using array-operations. This is particularly useful when you have data frames of all numeric values.
(defparameter mtcars-keys (keys mtcars)) ; we'll use later
(defparameter mtcars-small-array (as-array mtcars-small))
mtcars-small-array
;; 0 Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
;; 1 Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
;; 2 Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
;; 3 Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
;; 4 Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
Our abbreviated mtcars data frame is now a two dimensional Common Lisp array. It may not look like one because Lisp-Stat will ‘print pretty’ arrays. You can inspect it with the describe command to make sure:
LS-USER> (describe mtcars-small-array)
...
Type: (SIMPLE-ARRAY T (5 12))
Class: #<BUILT-IN-CLASS SIMPLE-ARRAY>
Element type: T
Rank: 2
Physical size: 60
vectors
The columns function returns the variables of the data frame as a vector of vectors:
(columns mtcars-small)
; #(#("Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" "Hornet Sportabout")
; #(21 21 22.8d0 21.4d0 18.7d0)
; #(6 6 4 6 8)
; #(160 160 108 258 360)
; #(110 110 93 110 175)
; #(3.9d0 3.9d0 3.85d0 3.08d0 3.15d0)
; #(2.62d0 2.875d0 2.32d0 3.215d0 3.44d0)
; #(16.46d0 17.02d0 18.61d0 19.44d0 17.02d0)
; #*00110
; #*11100
; #(4 4 4 3 3)
; #(4 4 1 1 2))
This is a column-major lisp array.
You can also pass a selection to the columns function to return specific columns:
(columns mtcars-small 'mpg)
; #(21 21 22.8d0 21.4d0 18.7d0)
The functions in array-operations are helpful in further dealing with data frames as vectors and arrays. For example you could convert a data frame to a transposed array by using aops:combine with the columns function:
(combine (columns mtcars-small))
;; 0 Mazda RX4 Mazda RX4 Wag Datsun 710 Hornet 4 Drive Hornet Sportabout
;; 1 21.00 21.000 22.80 21.400 18.70
;; 2 6.00 6.000 4.00 6.000 8.00
;; 3 160.00 160.000 108.00 258.000 360.00
;; 4 110.00 110.000 93.00 110.000 175.00
;; 5 3.90 3.900 3.85 3.080 3.15
;; 6 2.62 2.875 2.32 3.215 3.44
;; 7 16.46 17.020 18.61 19.440 17.02
;; 8 0.00 0.000 1.00 1.000 0.00
;; 9 1.00 1.000 1.00 0.000 0.00
;; 10 4.00 4.000 4.00 3.000 3.00
;; 11 4.00 4.000 1.00 1.000 2.00
There are two functions for loading data. The first data makes loading from logical pathnames convenient. The other, read-csv works with the file system or URLs. Although the name read-csv implies only CSV (comma separated values), it can actually read with other delimiters, such as the tab character. See the DFIO API reference for more information.
The data command
For built in Lisp-Stat data sets, you can load with just the data set name. For example to load mtcars:
(data :mtcars)
If you’ve installed the R data sets, and want to load the antigua data set from the daag package, you could do it like this:
(data :antigua :system :rdata :directory :daag :type :csv)
If the file type is not lisp (say it’s TSV or CSV), you need to specify the type parameter.
From strings
Here is a short demonstration of reading from strings:
(defparameter *d*
(format nil "Gender,Age,Height~@
\"Male\",30,180.~@
\"Male\",31,182.7~@
\"Female\",32,1.65e2")))
dfio tries to hard to decipher the various number formats sometimes encountered in CSV files:
(select (dfio:read-csv
(format nil "\"All kinds of wacky number formats\"~%.7~%19.~%.7f2"))
t 'all-kinds-of-wacky-number-formats)
; => #(0.7d0 19.0d0 70.0)
From files
We saw above that dfio can read from strings, so one easy way to read from a file is to use the uiop system function read-file-string. We can read one of the example data files included with Lisp-Stat like this:
(read-csv
;; IRON ALUMINUM ABSORPTION
;; 0 61 13 4
;; 1 175 21 18
;; 2 111 24 14
;; 3 124 23 18
;; 4 130 64 26
;; 5 173 38 26 ..
That example just illustrates reading from a file to a string. In practice you’re better off just reading the file in directly and avoid reading into a string first:
(read-csv #P"LS:DATA;absorbtion.csv")
;; IRON ALUMINUM ABSORPTION
;; 0 61 13 4
;; 1 175 21 18
;; 2 111 24 14
;; 3 124 23 18
;; 4 130 64 26
;; 5 173 38 26 ..
From URLs
dfio can also read from Common Lisp streams. Stream operations can be network or file based. Here is an example of how to read the classic Iris data set over the network:
(read-csv
"https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/datasets/iris.csv")
;; X27 SEPAL-LENGTH SEPAL-WIDTH PETAL-LENGTH PETAL-WIDTH SPECIES
;; 0 1 5.1 3.5 1.4 0.2 setosa
;; 1 2 4.9 3.0 1.4 0.2 setosa
;; 2 3 4.7 3.2 1.3 0.2 setosa
;; 3 4 4.6 3.1 1.5 0.2 setosa
;; 4 5 5.0 3.6 1.4 0.2 setosa
;; 5 6 5.4 3.9 1.7 0.4 setosa ..
From a database
You can load data from a SQLite table using the read-table command. Here’s an example of reading the iris data frame from a SQLite table:
(asdf:load-system :sqldf)
(defdf iris
(sqlite:connect #P"S:\\src\\lisp-stat\\data\\iris.db3")
"iris"))
Note that sqlite:connect does not take a logical pathname; use a system path appropriate for your computer. One reason you might want to do this is for speed in loading CSV. The CSV loader for SQLite is 10-15 times faster than the fastest Common Lisp CSV parser, and it is often quicker to load to SQLite first, then load into Lisp.
Save data
Data frames can be saved into any delimited text format supported by fare-csv, or several flavors of JSON, such as Vega-Lite.
As CSV
To save the mtcars data frame to disk, you could use:
(write-csv mtcars
#P"LS:DATA;mtcars.csv"
to save it as CSV, or to save it to tab-separated values:
(write-csv mtcars
#P"LS:DATA;mtcars.tsv"
:separator #\tab
As Lisp
For the most part, you will want to save your data frames as lisp. Doing so is both faster in loading, but more importantly it preserves any variable attributes that may have been given.
To save a data frame, use the save command:
(save 'mtcars #P"LS:DATA;mtcars-example")
Note that in this case you are passing the symbol to the function, not the value (thus the quote (') before the name of the data frame). Also note that the system will add the ‘lisp’ suffix for you.
To a database
The write-table function can be used to save a data frame to a SQLite database. Each take a connection to a database, which may be file or memory based, a table name and a data frame. Multiple data frames, with different table names, may be written to a single SQLite file this way.
Access data
This section describes various way to access data variables.
Define a data-frame
Let’s use defdf to define the iris data frame. We’ll use both of these data frames in the examples below.
(defdf iris
;WARNING: Missing column name was filled in
We now have a global variable named iris that represents the data frame. Let’s look at the first part of this data:
(head iris)
;; X29 SEPAL-LENGTH SEPAL-WIDTH PETAL-LENGTH PETAL-WIDTH SPECIES
;; 0 1 5.1 3.5 1.4 0.2 setosa
;; 1 2 4.9 3.0 1.4 0.2 setosa
;; 2 3 4.7 3.2 1.3 0.2 setosa
;; 3 4 4.6 3.1 1.5 0.2 setosa
;; 4 5 5.0 3.6 1.4 0.2 setosa
;; 5 6 5.4 3.9 1.7 0.4 setosa
Notice a couple of things. First, there is a column X29. In fact if you look back at previous data frame output in this tutorial you will notice various columns named X followed by some number. This is because the column was not given a name in the data set, so a name was generated for it. X starts at 1 and increased by 1 each time an unnamed variable is encountered during your Lisp-Stat session. The next time you start Lisp-Stat, numbering will begin from 1 again. We will see how to clean this up this data frame in the next sections.
The second thing to note is the row numbers on the far left side. When Lisp-Stat prints a data frame it automatically adds row numbers. Row and column numbering in Lisp-Stat start at 0. In R they start with 1. Row numbers make it convenient to select data sections from a data frame, but they are not part of the data and cannot be selected or manipulated themselves. They only appear when a data frame is printed.
Access a variable
The defdf macro also defines symbol macros that allow you to refer to a variable by name, for example to refer to the mpg column of mtcars, you can refer to it by the the name data-frame:variable convention.
mtcars:mpg
; #(21 21 22.8D0 21.4D0 18.7D0 18.1D0 14.3D0 24.4D0 22.8D0 19.2D0 17.8D0 16.4D0
17.3D0 15.2D0 10.4D0 10.4D0 14.7D0 32.4D0 30.4D0 33.9D0 21.5D0 15.5D0 15.2D0
13.3D0 19.2D0 27.3D0 26 30.4D0 15.8D0 19.7D0 15 21.4D0)
There is a point of distinction to be made here: the values of mpg and the column mpg. For example to obtain the same vector using the selection/sub-setting package select we must refer to the column:
(select mtcars t 'mpg)
; #(21 21 22.8D0 21.4D0 18.7D0 18.1D0 14.3D0 24.4D0 22.8D0 19.2D0 17.8D0 16.4D0
17.3D0 15.2D0 10.4D0 10.4D0 14.7D0 32.4D0 30.4D0 33.9D0 21.5D0 15.5D0 15.2D0
13.3D0 19.2D0 27.3D0 26 30.4D0 15.8D0 19.7D0 15 21.4D0)
Note that with select we passed the symbol 'mpg (you can tell it’s a symbol because of the quote in front of it).
So, the rule here is: if you want the value refer to it directly, e.g. mtcars:mpg. If you are referring to the column, use the symbol. Data frame operations sometimes require the symbol, where as Common Lisp and other packages that take vectors use the direct access form.
Data-frame operations
These functions operate on data-frames as a whole.
copy
copy returns a newly allocated data-frame with the same values as the original:
(copy mtcars-small)
;; X1 MPG CYL DISP HP DRAT WT QSEC VS AM GEAR CARB
;; 0 Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
;; 1 Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
;; 2 Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
;; 3 Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
;; 4 Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
By default only the keys are copied and the original data remains the same, i.e. a shallow copy. For a deep copy, use the copy-array function as the key:
(copy mtcars-small :key #'copy-array)
;; X1 MPG CYL DISP HP DRAT WT QSEC VS AM GEAR CARB
;; 0 Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
;; 1 Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
;; 2 Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
;; 3 Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
;; 4 Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
Useful when applying destructive operations to the data-frame.
keys
Returns a vector of the variables in the data frame. The keys are symbols. Symbol properties describe the variable, for example units.
(keys mtcars)
; #(X45 MPG CYL DISP HP DRAT WT QSEC VS AM GEAR CARB)
Recall the earlier discussion of X1 for the column name.
map-df
map-df transforms one data-frame into another, row-by-row. Its function signature is:
(map-df data-frame keys function result-keys) ...
It applies function to each row, and returns a data frame with the result-keys as the column (variable) names. keys is a list. You can also specify the type of the new variables in the result-keys list.
The goal for this example is to transform df1:
(defparameter df1 (make-df '(:a :b) '(#(2 3 5)
#(7 11 13))))
into a data-frame that consists of the product of :a and :b, and a bit mask of the columns that indicate where the value is <= 30. First we’ll need a helper for the bit mask:
(defun predicate-bit (a b)
"Return 1 if a*b <= 30, 0 otherwise"
(if (<= 30 (* a b))
1
0))
Now we can transform df1 into our new data-frame, df2, with:
(defparameter df2 (map-df df1
'(:a :b)
(lambda (a b)
(vector (* a b) (predicate-bit a b)))
'((:p fixnum) (:m bit))))
Since it was a parameter assignment, we have to view it manually:
(print-df df2)
;; P M
;; 0 14 0
;; 1 33 1
;; 2 65 1
Note how we specified both the new key names and their type. Here’s an example that transforms the units of mtcars from imperial to metric:
(map-df mtcars '(x1 mpg disp hp wt)
(lambda (model mpg disp hp wt)
(vector model ;no transformation for model (X1), return as-is
(/ 235.214583 mpg)
(/ disp 61.024)
(* hp 1.01387)
(/ (* wt 1000) 2.2046)))
'(:model (:100km/l float) (:disp float) (:hp float) (:kg float)))
;; MODEL 100KM/L DISP HP KG
;; 0 Mazda RX4 11.2007 2.6219 111.5257 1188.4242
;; 1 Mazda RX4 Wag 11.2007 2.6219 111.5257 1304.0914
;; 2 Datsun 710 10.3164 1.7698 94.2899 1052.3451
;; 3 Hornet 4 Drive 10.9913 4.2278 111.5257 1458.3144
;; 4 Hornet Sportabout 12.5783 5.8993 177.4272 1560.3737
;; 5 Valiant 12.9953 3.6871 106.4564 1569.4456 ..
Note that you may have to adjust the X column name to suit your current environment.
You might be wondering how we were able to refer to the columns without the ' (quote); in fact we did, at the beginning of the list. The lisp reader then reads the contents of the list as symbols.
print
The print-data command will print a data frame in a nicely formatted way, respecting the pretty printing row/column length variables:
(print-data mtcars)
;; MODEL MPG CYL DISP HP DRAT WT QSEC VS AM GEAR CARB
;; Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
;; Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
;; Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
;; Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1
...
;; Output elided for brevity
rows
rows returns the rows of a data frame as a vector of vectors:
(rows mtcars-small)
;#(#("Mazda RX4" 21 6 160 110 3.9d0 2.62d0 16.46d0 0 1 4 4)
; #("Mazda RX4 Wag" 21 6 160 110 3.9d0 2.875d0 17.02d0 0 1 4 4)
; #("Datsun 710" 22.8d0 4 108 93 3.85d0 2.32d0 18.61d0 1 1 4 1)
; #("Hornet 4 Drive" 21.4d0 6 258 110 3.08d0 3.215d0 19.44d0 1 0 3 1)
; #("Hornet Sportabout" 18.7d0 8 360 175 3.15d0 3.44d0 17.02d0 0 0 3 2))
remove duplicates
The df-remove-duplicates function will remove duplicate rows. Let’s create a data-frame with duplicates:
(defparameter dup (make-df '(a b c) '(#(a1 a1 a3)
#(a1 a1 b3)
#(a1 a1 c3))))
;DUP
;; A B C
;; 0 A1 A1 A1
;; 1 A1 A1 A1
;; 2 A3 B3 C3
Now remove duplicate rows 0 and 1:
(df-remove-duplicates dup)
;; A B C
;; A1 A1 A1
;; A3 B3 C3
remove data-frame
If you are working with large data sets, you may wish to remove a data frame from your environment to save memory. The undef command does this:
LS-USER> (undef 'tooth-growth)
(TOOTH-GROWTH)
You can check that it was removed with the show-data-frames function, or by viewing the list df::*data-frames*.
list data-frames
To list the data frames in your environment, use the show-data-frames function. Here is an example of what is currently loaded into the authors environment. The data frames listed may be different for you, depending on what you have loaded.
To see this output, you’ll have to change to the standard print-object method, using this code:
(defmethod print-object ((df data-frame) stream)
"Print DATA-FRAME dimensions and type
After defining this method it is permanently associated with data-frame objects"
(let ((description (and (slot-boundp df 'name)
(documentation (find-symbol (name df)) 'variable))))
(format stream
"(~d observations of ~d variables)"
(aops:nrow df)
(aops:ncol df))
(when description
(format stream "~&~A" (short-string description))))))
Now, to see all the data frames in your environment:
LS-USER> (show-data-frames)
#<DATA-FRAME AQ (153 observations of 7 variables)>
#<DATA-FRAME MTCARS (32 observations of 12 variables)
#<DATA-FRAME USARRESTS (50 observations of 5 variables)
Violent Crime Rates by US State>
#<DATA-FRAME PLANTGROWTH (30 observations of 3 variables)
Results from an Experiment on Plant Growth>
#<DATA-FRAME TOOTHGROWTH (60 observations of 4 variables)
The Effect of Vitamin C on Tooth Growth in Guinea Pigs>
with the :head t option, show-data-frames will print the first five rows of the data frame, similar to the head command:
LS-USER> (show-data-frames :head t)
AQ
;; X5 OZONE SOLAR-R WIND TEMP MONTH DAY
;; 1 41.0000 190 7.4 67 5 1
;; 2 36.0000 118 8.0 72 5 2
;; 3 12.0000 149 12.6 74 5 3
;; 4 18.0000 313 11.5 62 5 4
;; 5 42.1293 NA 14.3 56 5 5
;; 6 28.0000 NA 14.9 66 5 6 ..
MTCARS
;; MODEL MPG CYL DISP HP DRAT WT QSEC VS AM GEAR CARB
;; Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
;; Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
;; Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
;; Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1
;; Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2
;; Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1 ..
;; Output elided for brevity
You, of course, may see different output depending on what data frames you currently have loaded.
Let’s change the print-object back to our convenience method.
(defmethod print-object ((df data-frame) stream)
"Print the first six rows of DATA-FRAME"
(let ((*print-lines* 6))
(df:print-data df stream nil)))
stacking
Stacking is done with the array-operations stacking functions. Since these functions operate on both arrays and data frames, we can use them to stack data frames, arrays, or a mixture of both, providing they have a rank of 2. Here’s an example using the mtcars data frame:
(defparameter boss-mustang
#("Boss Mustang" 12.7d0 8 302 405 4.11d0 2.77d0 12.5d0 0 1 4 4))
and now stack it onto the mtcars data set (load it with (data :mtcars) if you haven’t already done so):
(matrix-df
(keys mtcars)
(stack-rows mtcars boss-mustang))
This is the functional equivalent of R’s rbind function. You can also add columns with the stack-cols function.
An often asked question is: why don’t you have a dedicated stack-rows function? Well, if you want one it might look like this:
(defun stack-rows (df &rest objects)
"Stack rows that works on matrices and/or data frames."
(matrix-df
(keys df)
(apply #'aops:stack-rows (cons df objects))))
But now the data frame must be the first parameter passed to the function. Or perhaps you want to rename the columns? Or you have matrices as your starting point? For all those reasons, it makes more sense to pass in the column keys than a data frame:
(defun stack-rows (col-names &rest objects)
"Stack rows that works on matrices and/or data frames."
(matrix-df
(keys col-names)
(stack-rows objects)))
However this means we have two stack-rows functions, and you don’t really gain anything except an extra function call. So use the above definition if you like; we use the first example and call matrix-df and stack-rows to stack data frames.
Column operations
You have seen some of these functions before, and for completeness we repeat them here.
To obtain a variable (column) from a data frame, use the column function. Using the mtcars-small data frame, defined in export data frames above:
(column mtcars-small 'mpg)
;; #(21 21 22.8d0 21.4d0 18.7d0)
To get all the columns as a vector, use the columns function:
(columns mtcars-small)
; #(#("Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" "Hornet Sportabout")
; #(21 21 22.8d0 21.4d0 18.7d0)
; #(6 6 4 6 8)
; #(160 160 108 258 360)
; #(110 110 93 110 175)
; #(3.9d0 3.9d0 3.85d0 3.08d0 3.15d0)
; #(2.62d0 2.875d0 2.32d0 3.215d0 3.44d0)
; #(16.46d0 17.02d0 18.61d0 19.44d0 17.02d0)
; #*00110
; #*11100
; #(4 4 4 3 3)
; #(4 4 1 1 2))
You can also return a subset of the columns by passing in a selection:
(columns mtcars-small '(mpg wt))
;; #(#(21 21 22.8d0 21.4d0 18.7d0) #(2.62d0 2.875d0 2.32d0 3.215d0 3.44d0))
There are two ‘flavors’ of add functions, destructive and non-destructive. The latter return a new data frame as the result, and the destructive versions modify the data frame passed as a parameter. The destructive versions are denoted with a ‘!’ at the end of the function name.
The columns to be added can be in several formats:
• plist
• alist
• (plist)
• (alist)
• (data-frame)
To add a single column to a data frame, use the add-column! function. We’ll use a data frame similar to the one used in our reading data-frames from a string example to illustrate column operations.
Create the data frame:
(defparameter *d* (read-csv
(format nil "Gender,Age,Height
\"Male\",30,180
\"Male\",31,182
\"Female\",32,165
\"Male\",22,167
\"Female\",45,170")))
and print it:
(head *d*)
;; GENDER AGE HEIGHT
;; 0 Male 30 180
;; 1 Male 31 182
;; 2 Female 32 165
;; 3 Male 22 167
;; 4 Female 45 170
and add a ‘weight’ column to it:
(add-column! *d* 'weight #(75.2 88.5 49.4 78.1 79.4))
;; GENDER AGE HEIGHT WEIGHT
;; 0 Male 30 180 75.2
;; 1 Male 31 182 88.5
;; 2 Female 32 165 49.4
;; 3 Male 22 167 78.1
;; 4 Female 45 170 79.4
now that we have weight, let’s add a BMI column to it to demonstrate using a function to compute the new column values:
(add-column! *d* 'bmi
(map-rows *d* '(height weight)
#'(lambda (h w) (/ w (square (/ h 100))))))
;; SEX AGE HEIGHT WEIGHT BMI
;; 0 Female 10 180 75.2 23.209875
;; 1 Female 15 182 88.5 26.717787
;; 2 Male 20 165 49.4 18.145086
;; 3 Female 25 167 78.1 28.003874
;; 4 Male 30 170 79.4 27.474049
Now let’s add multiple columns destructively using add-columns!
(add-columns! *d* 'a #(1 2 3 4 5) 'b #(foo bar baz qux quux))
;; GENDER AGE HEIGHT WEIGHT BMI A B
;; Male 30 180 75.2 23.2099 1 FOO
;; Male 31 182 88.5 26.7178 2 BAR
;; Female 32 165 49.4 18.1451 3 BAZ
;; Male 22 167 78.1 28.0039 4 QUX
;; Female 45 170 79.4 27.4740 5 QUUX
Remove columns
Let’s remove the columns a and b that we just added above with the remove-columns function. Since it returns a new data frame, we’ll need to assign the return value to *d*:
(setf *d* (remove-columns *d* '(a b bmi)))
;; GENDER AGE HEIGHT WEIGHT BMI
;; Male 30 180 75.2 23.2099
;; Male 31 182 88.5 26.7178
;; Female 32 165 49.4 18.1451
;; Male 22 167 78.1 28.0039
;; Female 45 170 79.4 27.4740
To remove columns destructively, meaning modifying the original data, use the remove-column! or remove-columns! functions.
Rename columns
Sometimes data sources can have variable names that we want to change. To do this, use the rename-column! function. This example will rename the ‘gender’ variable to ‘sex’:
(rename-column! *d* 'sex 'gender)
;; SEX AGE HEIGHT WEIGHT
;; 0 Male 30 180 75.2
;; 1 Male 31 182 88.5
;; 2 Female 32 165 49.4
;; 3 Male 22 167 78.1
;; 4 Female 45 170 79.4
If you used defdf to create your data frame, and this is the recommended way to define data frames, the variable references within the data package will have been updated. This is true for all destructive data frame operations. Let’s use this now to rename the mtcars X1 variable to model. First a quick look at the first 2 rows as they are now:
(head mtcars 2)
;; X1 MPG CYL DISP HP DRAT WT QSEC VS AM GEAR CARB
;; 0 Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
;; 1 Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
Replace X1 with model:
(rename-column! mtcars 'model 'x1)
Note: check to see what value your version of mtcars has. In this case, with a fresh start of Lisp-Stat, it has X1. It could have X2, X3, etc.
Now check that it worked:
(head mtcars 2)
;; MODEL MPG CYL DISP HP DRAT WT QSEC VS AM GEAR CARB
;; 0 Mazda RX4 21 6 160 110 3.9 2.620 16.46 0 1 4 4
;; 1 Mazda RX4 Wag 21 6 160 110 3.9 2.875 17.02 0 1 4 4
We can now refer to mtcars:model
mtcars:model
#("Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" "Hornet Sportabout"
"Valiant" "Duster 360" "Merc 240D" "Merc 230" "Merc 280" "Merc 280C"
"Merc 450SE" "Merc 450SL" "Merc 450SLC" "Cadillac Fleetwood"
"Lincoln Continental" "Chrysler Imperial" "Fiat 128" "Honda Civic"
"Toyota Corolla" "Toyota Corona" "Dodge Challenger" "AMC Javelin"
"Camaro Z28" "Pontiac Firebird" "Fiat X1-9" "Porsche 914-2" "Lotus Europa"
"Ford Pantera L" "Ferrari Dino" "Maserati Bora" "Volvo 142E")
Replace columns
Columns are “setf-able” places and the simplest way to replace a column is set the field to a new value. We’ll complement the sex field of *d*:
(df::setf (df:column *d* 'sex) #("Female" "Female" "Male" "Female" "Male"))
;#("Female" "Female" "Male" "Female" "Male")
Note that df::setf is not exported. Use this with caution.
You can also replace a column using two functions specifically for this purpose. Here we’ll replace the ‘age’ column with new values:
(replace-column *d* 'age #(10 15 20 25 30))
;; SEX AGE HEIGHT WEIGHT
;; 0 Female 10 180 75.2
;; 1 Female 15 182 88.5
;; 2 Male 20 165 49.4
;; 3 Female 25 167 78.1
;; 4 Male 30 170 79.4
That was a non-destructive replacement, and since we didn’t reassign the value of *d*, it is unchanged:
LS-USER> (print-data *d*)
;; SEX AGE HEIGHT WEIGHT
;; 0 Female 30 180 75.2
;; 1 Female 31 182 88.5
;; 2 Male 32 165 49.4
;; 3 Female 22 167 78.1
;; 4 Male 45 170 79.4
We can also use the destructive version to make a permanent change instead of setf-ing *d*:
(replace-column! *d* 'age #(10 15 20 25 30))
;; SEX AGE HEIGHT WEIGHT
;; 0 Female 10 180 75.2
;; 1 Female 15 182 88.5
;; 2 Male 20 165 49.4
;; 3 Female 25 167 78.1
;; 4 Male 30 170 79.4
Transform columns
There are two functions for column transformations, replace-column and map-columns.
replace-column
replace-column can be used to transform a column by applying a function to each value. This example will add 20 to each row of the age column:
(replace-column *d* 'age #'(lambda (x) (+ 20 x)))
;; SEX AGE HEIGHT WEIGHT
;; 0 Female 30 180 75.2
;; 1 Female 35 182 88.5
;; 2 Male 40 165 49.4
;; 3 Female 45 167 78.1
;; 4 Male 50 170 79.4
replace-column! can also apply functions to a column, destructively modifying the column.
map-columns
The map-columns functions can be thought of as applying a function on all the values of each variable/column as a vector, rather than the individual rows as replace-column does. To see this, we’ll use functions that operate on vectors, in this case nu:e+, which is the vector addition function for Lisp-Stat. Let’s see this working first:
(nu:e+ #(1 1 1) #(2 3 4))
; => #(3 4 5)
observe how the vectors were added element-wise. We’ll demonstrate map-columns by adding one to each of the numeric columns in the example data frame:
(map-columns (select *d* t '(weight age height))
#'(lambda (x)
(nu:e+ 1 x)))
;; WEIGHT AGE HEIGHT
;; 0 76.2 11 181
;; 1 89.5 16 183
;; 2 50.4 21 166
;; 3 79.1 26 168
;; 4 80.4 31 171
recall that we used the non-destructive version of replace-column above, so *d* has the original values. Also note the use of select to get the numeric variables from the data frame; e+ can’t add categorical values like gender/sex.
Row operations
As the name suggests, row operations operate on each row, or observation, of a data set.
count-rows
This function is used to determine how many rows meet a certain condition. For example if you want to know how many cars have a MPG (miles-per-galleon) rating greater than 20, you could use:
(count-rows mtcars 'mpg #'(lambda (x) (< 20 x)))
; => 14
do-rows
do-rows applies a function on selected variables. The function must take the same number of arguments as variables supplied. It is analogous to dotimes, but iterating over data frame rows. No values are returned; it is purely for side-effects. Let’s create a new data data-frame to illustrate row operations:
LS-USER> (defparameter *d2*
(make-df '(a b) '(#(1 2 3) #(10 20 30))))
*D2*
LS-USER> *d2*
;; A B
;; 0 1 10
;; 1 2 20
;; 2 3 30
This example uses format to illustrate iterating using do-rows for side effect:
(do-rows *d2* '(a b) #'(lambda (a b) (format t "~A " (+ a b))))
11 22 33
; No value
map-rows
Where map-columns can be thought of as working through the data frame column-by-column, map-rows goes through row-by-row. Here we add the values in each row of two columns:
(map-rows *d2* '(a b) #'+)
#(11 22 33)
Since the length of this vector will always be equal to the data-frame column length, we can add the results to the data frame as a new column. Let’s see this in a real-world pattern, subtracting the mean from a column:
(add-column! *d2* 'c
(map-rows *d2* 'b
#'(lambda (x) (- x (mean (select *d2* t 'b))))))
;; A B C
;; 0 1 10 -10.0
;; 1 2 20 0.0
;; 2 3 30 10.0
You could also have used replace-column! in a similar manner to replace a column with normalize values.
mask-rows is similar to count-rows, except it returns a bit-vector for rows matching the predicate. This is useful when you want to pass the bit vector to another function, like select to retrieve only the rows matching the predicate.
(mask-rows mtcars 'mpg #'(lambda (x) (< 20 x)))
; => #*11110001100000000111100001110001
filter-rows
The filter-rows function will return a data-frame whose rows match the predicate. The function signature is:
(defun filter-rows (data body) ...
As an example, let’s filter mtcars to find all the cars whose fuel consumption is greater than 20 mpg:
(filter-rows mtcars '(< 20 mpg))
;=> #<DATA-FRAME (14 observations of 12 variables)>
To view them we’ll need to call the print-data function directly instead of using the print-object function we installed earlier. Otherwise, we’ll only see the first 6.
(print-data *)
;; MODEL MPG CYL DISP HP DRAT WT QSEC VS AM GEAR CARB
;; 0 Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
;; 1 Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
;; 2 Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
;; 3 Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1
;; 4 Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2
;; 5 Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2
;; 6 Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1
;; 7 Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2
;; 8 Toyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1
;; 9 Toyota Corona 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1
;; 10 Fiat X1-9 27.3 4 79.0 66 4.08 1.935 18.90 1 1 4 1
;; 11 Porsche 914-2 26.0 4 120.3 91 4.43 2.140 16.70 0 1 5 2
;; 12 Lotus Europa 30.4 4 95.1 113 3.77 1.513 16.90 1 1 5 2
;; 13 Volvo 142E 21.4 4 121.0 109 4.11 2.780 18.60 1 1 4 2
Filter predicates can be more complex than this, here’s an example filtering the Vega movies data set (which we call imdb):
(filter-rows imdb
'(and (not (eql imdb-rating :na))
(local-time:timestamp< release-date
(local-time:parse-timestring "2019-01-01"))))
You can refer to any of the column/variable names in the data-frame directly when constructing the filter predicate. The predicate is turned into a lambda function, so let, etc is also possible.
Summarising data
Often the first thing you’ll want to do with a data frame is get a quick summary. You can do that with these functions, and we’ve seen most of them used in this manual. For more information about these functions, see the data-frame api reference.
nrow data-frame
return the number of rows in data-frame
ncol data-frame
return the number of columns in data-frame
dims data-frame
return the dimensions of data-frame as a list in (rows columns) format
keys data-frame
return a vector of symbols representing column names
column-names data-frame
returns a list of strings of the column names in data-frames
displays the first n rows of data-frame. n defaults to 6.
tail data-frame &optional n
displays the last n rows of data-frame. n defaults to 6.
describe
describe data-frame
returns the meta-data for the variables in data-frame
describe is a common lisp function that describes an object. In Lisp-Stat describe prints a description of the data frame and the three ‘standard’ properties of the variables: type, unit and description. It is similar to the str command in R. To see an example use the augmented mtcars data set included in Lisp-Stat. In this data set, we have added properties describing the variables. This is a good illustration of why you should always save data frames in lisp format; properties such as these are lost in CSV format.
(data :mtcars)
LS-USER> (describe mtcars)
MTCARS
A data-frame with 32 observations of 12 variables
Variable | Type | Unit | Label
-------- | ---- | ---- | -----------
MODEL | STRING | NIL | NIL
MPG | DOUBLE-FLOAT | M/G | Miles/(US) gallon
CYL | INTEGER | NA | Number of cylinders
DISP | DOUBLE-FLOAT | IN3 | Displacement (cu.in.)
HP | INTEGER | HP | Gross horsepower
DRAT | DOUBLE-FLOAT | NA | Rear axle ratio
WT | DOUBLE-FLOAT | LB | Weight (1000 lbs)
QSEC | DOUBLE-FLOAT | S | 1/4 mile time
VS | BIT | NA | Engine (0=v-shaped, 1=straight)
AM | BIT | NA | Transmission (0=automatic, 1=manual)
GEAR | INTEGER | NA | Number of forward gears
CARB | INTEGER | NA | Number of carburetors
summary
summary data-frame
returns a summary of the variables in data-frame
Summary functions are one of those things that tend to be use-case or application specific. Witness the number of R summary packages; there are at least half a dozen, including hmisc, stat.desc, psych describe, skim and summary tools. In short, there is no one-size-fits-all way to provide summaries, so Lisp-Stat provides the data structures upon which users can customise the summary output. The output you see below is a simple :print-function for each of the summary structure types (numeric, factor, bit and generic).
LS-USER> (summary mtcars)
(
MPG (Miles/(US) gallon)
n: 32
missing: 0
min=10.40
q25=15.40
q50=19.20
mean=20.09
q75=22.80
max=33.90
CYL (Number of cylinders)
14 (44%) x 8, 11 (34%) x 4, 7 (22%) x 6,
DISP (Displacement (cu.in.))
n: 32
missing: 0
min=71.10
q25=120.65
q50=205.87
mean=230.72
q75=334.00
max=472.00
HP (Gross horsepower)
n: 32
missing: 0
min=52
q25=96.00
q50=123
mean=146.69
q75=186.25
max=335
DRAT (Rear axle ratio)
n: 32
missing: 0
min=2.76
q25=3.08
q50=3.70
mean=3.60
q75=3.95
max=4.93
WT (Weight (1000 lbs))
n: 32
missing: 0
min=1.51
q25=2.54
q50=3.33
mean=3.22
q75=3.68
max=5.42
QSEC (1/4 mile time)
n: 32
missing: 0
min=14.50
q25=16.88
q50=17.71
mean=17.85
q75=18.90
max=22.90
VS (Engine (0=v-shaped, 1=straight))
ones: 14 (44%)
AM (Transmission (0=automatic, 1=manual))
ones: 13 (41%)
GEAR (Number of forward gears)
15 (47%) x 3, 12 (38%) x 4, 5 (16%) x 5,
CARB (Number of carburetors)
10 (31%) x 4, 10 (31%) x 2, 7 (22%) x 1, 3 (9%) x 3, 1 (3%) x 6, 1 (3%) x 8, )
Note that the model column, essentially row-name was deleted from the output. The summary function, designed for human readable output, removes variables with all unique values, and those with monotonically increasing numbers (usually row numbers).
To build your own summary function, use the get-summaries function to get a list of summary structures for the variables in the data frame, and then print them as you wish.
columns
You can also describe or summarize individual columns:
LS-USER> (describe 'mtcars:mpg)
MTCARS:MPG
[symbol]
MPG names a symbol macro:
Expansion: (AREF (COLUMNS MTCARS) 1)
Symbol-plist:
:TYPE -> DOUBLE-FLOAT
:UNIT -> M/G
:LABEL -> "Miles/(US) gallon"
LS-USER> (summarize-column 'mtcars:mpg)
MPG (Miles/(US) gallon)
n: 32
missing: 0
min=10.40
q25=15.40
q50=19.20
mean=20.09
q75=22.80
max=33.90
Missing values
Data sets often contain missing values and we need to both understand where and how many are missing, and how to transform or remove them for downstream operations. In Lisp-Stat, missing values are represented by the keyword symbol :na. You can control this encoding during delimited text import by passing an a-list containing the mapping. By default this is a keyword parameter map-alist:
(map-alist '(("" . :na)
("NA" . :na)))
The default maps blank cells ("") and ones containing “NA” (not available) to the keyword :na, which stands for missing. Some systems encode missing values as numeric, e.g. 99; in this case you can pass in a map-alist that includes this mapping:
(map-alist '(("" . :na)
("NA" . :na)
(99 . :na)))
We will use the R air-quality dataset to illustrate working with missing values. Let’s load it now:
(defdf aq
Examine
To see missing values we use the predicate missingp. This works on sequences, arrays and data-frames. It returns a logical sequence, array or data-frame indicating which values are missing. T indicates a missing value, NIL means the value is present. Here’s an example of using missingp on a vector:
(missingp #(1 2 3 4 5 6 :na 8 9 10))
;#(NIL NIL NIL NIL NIL NIL T NIL NIL NIL)
and on a data-frame:
(print-data (missingp aq))
;; X3 OZONE SOLAR-R WIND TEMP MONTH DAY
;; 0 NIL NIL NIL NIL NIL NIL NIL
;; 1 NIL NIL NIL NIL NIL NIL NIL
;; 2 NIL NIL NIL NIL NIL NIL NIL
;; 3 NIL NIL NIL NIL NIL NIL NIL
;; 4 NIL T T NIL NIL NIL NIL
;; 5 NIL NIL T NIL NIL NIL NIL
;; 6 NIL NIL NIL NIL NIL NIL NIL
;; 7 NIL NIL NIL NIL NIL NIL NIL
;; 8 NIL NIL NIL NIL NIL NIL NIL
;; 9 NIL T NIL NIL NIL NIL NIL
;; 10 NIL NIL T NIL NIL NIL NIL
;; 11 NIL NIL NIL NIL NIL NIL NIL
;; 12 NIL NIL NIL NIL NIL NIL NIL
;; 13 NIL NIL NIL NIL NIL NIL NIL
;; 14 NIL NIL NIL NIL NIL NIL NIL
;; 15 NIL NIL NIL NIL NIL NIL NIL
;; 16 NIL NIL NIL NIL NIL NIL NIL
;; 17 NIL NIL NIL NIL NIL NIL NIL
;; 18 NIL NIL NIL NIL NIL NIL NIL
;; 19 NIL NIL NIL NIL NIL NIL NIL
;; 20 NIL NIL NIL NIL NIL NIL NIL
;; 21 NIL NIL NIL NIL NIL NIL NIL
;; 22 NIL NIL NIL NIL NIL NIL NIL
;; 23 NIL NIL NIL NIL NIL NIL NIL ..
We can see that the ozone variable contains some missing values. To see which rows of ozone are missing, we can use the which function:
(which aq:ozone :predicate #'missingp)
;#(4 9 24 25 26 31 32 33 34 35 36 38 41 42 44 45 51 52 53 54 55 56 57 58 59 60 64 71 74 82 83 101 102 106 114 118 149)
and to get a count, use the length function on this vector:
(length *) ; => 37
It’s often convenient to use the summary function to get an overview of missing values. We can do this because the missingp function is a transformation of a data-frame that yields another data-frame of boolean values:
LS-USER> (summary (missingp aq))
X4: 153 (100%) x NIL,
OZONE: 116 (76%) x NIL, 37 (24%) x T,
SOLAR-R: 146 (95%) x NIL, 7 (5%) x T,
WIND: 153 (100%) x NIL,
TEMP: 153 (100%) x NIL,
MONTH: 153 (100%) x NIL,
DAY: 153 (100%) x NIL,
we can see that ozone is missing 37 values, 24% of the total, and solar-r is missing 7 values.
Exclude
To exclude missing values from a single column, use the Common Lisp remove function:
(remove :na aq:ozone)
;#(41 36 12 18 28 23 19 8 7 16 11 14 18 14 34 6 30 11 1 11 4 32 ...
To ensure that our data-frame includes only complete observations, we exclude any row with a missing value. To do this use the drop-missing function:
(head (drop-missing aq))
;; X3 OZONE SOLAR-R WIND TEMP MONTH DAY
;; 0 1 41 190 7.4 67 5 1
;; 1 2 36 118 8.0 72 5 2
;; 2 3 12 149 12.6 74 5 3
;; 3 4 18 313 11.5 62 5 4
;; 4 7 23 299 8.6 65 5 7
;; 5 8 19 99 13.8 59 5 8
Replace
To replace missing values we can use the transformation functions. For example we can recode the missing values in ozone by the mean. Let’s look at the first six rows of the air quality data-frame:
(head aq)
;; X3 OZONE SOLAR-R WIND TEMP MONTH DAY
;; 0 1 41 190 7.4 67 5 1
;; 1 2 36 118 8.0 72 5 2
;; 2 3 12 149 12.6 74 5 3
;; 3 4 18 313 11.5 62 5 4
;; 4 5 NA NA 14.3 56 5 5
;; 5 6 28 NA 14.9 66 5 6
Now replace ozone with the mean using the common lisp function nsubstitute:
(nsubstitute (mean (remove :na aq:ozone)) :na aq:ozone)
and look at head again:
(head aq)
;; X3 OZONE SOLAR-R WIND TEMP MONTH DAY
;; 0 1 41.0000 190 7.4 67 5 1
;; 1 2 36.0000 118 8.0 72 5 2
;; 2 3 12.0000 149 12.6 74 5 3
;; 3 4 18.0000 313 11.5 62 5 4
;; 4 5 42.1293 NA 14.3 56 5 5
;; 5 6 28.0000 NA 14.9 66 5 6
You could have used the non-destructive substitute if you wanted to create a new data-frame and leave the original aq untouched.
Normally we’d round mean to be consistent from a type perspective, but did not here so you can see the values that were replaced.
Dates & Times
Lisp-Stat uses localtime to represent dates. This works well, but the system is a bit strict on input formats, and real-world data can be quite messy at times. For these cases chronicity and cl-date-time-parser can be helpful. Chronicity returns local-time timestamp objects, and is particularly easy to work with.
For example, if you have a variable with dates encoded like: ‘Jan 7 1995’, you can recode the column like we did for the vega movies data set:
(replace-column! imdb 'release-date #'(lambda (x)
(local-time:universal-to-timestamp
(date-time-parser:parse-date-time x))))
5.3 - Distributions
Working with statistical distributions
Overview
The Distributions package provides a collection of probability distributions and related functions such as:
• Sampling from distributions
• Moments (e.g mean, variance, skewness, and kurtosis), entropy, and other properties
• Probability density/mass functions (pdf) and their logarithm (logpdf)
• Moment-generating functions and characteristic functions
• Maximum likelihood estimation
• Distribution composition and derived distributions
Getting Started
Load the distributions system with (asdf:load-system :distributions) and generate a sequence of 1000 samples drawn from the standard normal distribution:
(defparameter *rn-samples*
(nu:generate-sequence '(vector double-float)
1000
#'distributions:draw-standard-normal))
and plot a histogram of the counts:
(plot:plot
(vega:defplot normal
(:mark :bar
:data (:x ,*rn-samples*)
:encoding (:x (:bin (:step 0.5)
:field x)
:y (:aggregate :count)))))
It looks like there’s an outlier at 5, but basically you can see it’s centered around 0.
To create a parameterised distribution, pass the parameters when you create the distribution object. In the following example we create a distribution with a mean of 2 and variance of 1 and plot it:
(defparameter rn2 (distributions:r-normal 2 1))
(let* ((seq (nu:generate-sequence '(vector double-float) 10000 (lambda () (distributions:draw rn2)))))
(plot:plot
(vega:defplot normal-2-1
(:mark :bar
:data (:x ,seq)
:encoding (:x (:bin (:step 0.5)
:field x)
:y (:aggregate :count))))))
Now that we have the distribution as an object, we can obtain pdf, cdf, mean and other parameters for it:
LS-USER> (mean rn2)
2.0d0
LS-USER> (pdf rn2 1.75)
0.38666811680284924d0
LS-USER> (cdf rn2 1.75)
0.4012936743170763d0
Gamma
In probability theory and statistics, the gamma distribution is a two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-square distribution are special cases of the gamma distribution. There are two different parameterisations in common use:
• With a shape parameter k and a scale parameter θ.
• With a shape parameter α = k and an inverse scale parameter β = 1/θ, called a rate parameter.
In each of these forms, both parameters are positive real numbers.
The parameterisation with k and θ appears to be more common in econometrics and certain other applied fields, where for example the gamma distribution is frequently used to model waiting times.
The parameterisation with α and β is more common in Bayesian statistics, where the gamma distribution is used as a conjugate prior distribution for various types of inverse scale (rate) parameters, such as the λ of an exponential distribution or a Poisson distribution.
When the shape parameter has an integer value, the distribution is the Erlang distribution. Since this can be produced by ensuring that the shape parameter has an integer value > 0, the Erlang distribution is not separately implemented.
PDF
The probability density function parameterized by shape-scale is:
$f(x;k,\theta )={\frac {x^{k-1}e^{-x/\theta }}{\theta ^{k}\Gamma (k)}}\quad {\text{ for }}x>0{\text{ and }}k,\theta >0$,
and by shape-rate:
$f(x;\alpha ,\beta )={\frac {x^{\alpha -1}e^{-\beta x}\beta ^{\alpha }}{\Gamma (\alpha )}}\quad {\text{ for }}x>0\quad \alpha ,\beta >0$
CDF
The cumulative distribution function characterized by shape and scale (k and θ) is:
$F(x;k,\theta )=\int _{0}^{x}f(u;k,\theta ),du={\frac {\gamma \left(k,{\frac {x}{\theta }}\right)}{\Gamma (k)}}$
where $\gamma \left(k,{\frac {x}{\theta }}\right)$ is the lower-incomplete-gamma function.
Characterized by α and β (shape and rate):
$F(x;\alpha ,\beta )=\int _{0}^{x}f(u;\alpha ,\beta ),du={\frac {\gamma (\alpha ,\beta x)}{\Gamma (\alpha )}}$
where $\gamma (\alpha ,\beta x)$ is the lower incomplete gamma function.
Usage
Python and Boost use shape & scale for parameterization. Lisp-Stat and R use shape and rate for the default parameterisation. Both forms of parameterization are common. However, since Lisp-Stat’s implementation is based on Boost (because of the restrictive license of R), we perform the conversion $\theta=\frac{1}{\beta}$ internally.
Implementation notes
In the following table k is the shape parameter of the distribution, θ is its scale parameter, x is the random variate, p is the probability and q is (- 1 p). The implementation functions are in the special-functions system.
Function Implementation
PDF (/ (gamma-p-derivative k (/ x θ)) θ)
CDF (incomplete-gamma k (/ x θ))
CDF complement (upper-incomplete-gamma k (/ x θ))
quantile (* θ (inverse-incomplete-gamma k p))
quantile complement (* θ (upper-inverse-incomplete-gamma k p))
mean
variance 2
mode (* (1- k) θ), k>1
skewness (/ 2 (sqrt k))
kurtosis (+ 3 (/ 6 k))
kurtosis excess (/ 6 k)
Example
On average, a train arrives at a station once every 15 minutes (θ=15/60). What is the probability there are 10 trains (occurances of the event) within three hours?
In this example we have:
alpha = 10
theta = 15/60
x = 3
(distributions:cdf-gamma 3d0 10d0 :scale 15/60)
;=> 0.7576078383294877d0
As an alternative, we can run a simulation, where we draw from the parameterised distribution and then calculate the percentage of values that fall below our threshold, x = 3:
(let* ((rv (distributions:r-gamma 10 60/15))
(seq (aops:generate (distributions:generator rv) 10000)))
(statistics-1:mean (e2<= seq 3))) ;e2<= is the vectorised <= operator
;=> 0.753199999999998d0
Finally, if we want to plot the probability:
(let* ((x (aops:linspace 0.01d0 10 1000))
(prob (map 'simple-double-float-vector
#'(lambda (x)
(distributions:cdf-gamma x 10d0 :scale 15/60))
x))
(interval (map 'vector
#'(lambda (x) (if (<= x 3) "0 to 3" "other"))
x)))
(plot:plot
(vega:defplot gamma-example
(:mark :area
:data (:x ,x
:prob ,prob
:interval ,interval)
:encoding (:x (:field :x :type :quantitative :title "Interval (x)")
:y (:field :prob :type :quantitative :title "Cum Probability")
:color (:field :interval))))))
5.4 - Linear Algebra
Linear Algebra for Common Lisp
Overview
LLA works with matrices, that is arrays of rank 2, with all numerical values. Categorical variables could be integer coded if you need to.
Basic Usage
lla requires a BLAS and LAPACK shared library. These may be available via your operating systems package manager, or you can download OpenBLAS, which includes precompiled binaries for MS Windows.
You can also configure the path by setting the cl-user::*lla-configuration* variable like so:
(defvar *lla-configuration*
'(:libraries ("s:/src/lla/lib/libopenblas.dll")))
Use the location specific to your system.
To load lla:
(asdf:load-system :lla)
Examples
To make working with matrices easier, we’re going to use the matrix-shorthand library. Load it like so:
(asdf:load-system :num-utils)
(use-package :num-utils.matrix-shorthand)
Matrix Multiplication
mm is the matrix multiplication function. It is generic and can operate on both regular arrays and ‘wrapped’ array types, e.g. hermitian or triangular. In this example we’ll multiple an array by a vector. mx is the short-hand way of defining a matrix, and vec a vector.
(let ((a (mx 'lla-double
(1 2)
(3 4)
(5 6)))
(b2 (vec 'lla-double 1 2)))
(mm a b2))
; #(5.0d0 11.0d0 17.0d0)
5.5 - Select
Selecting Cartesian subsets of data
Overview
Select provides:
1. An API for taking slices (elements selected by the Cartesian product of vectors of subscripts for each axis) of array-like objects. The most important function is select. Unless you want to define additional methods for select, this is pretty much all you need from this library. See the API reference for additional details.
2. An extensible DSL for selecting a subset of valid subscripts. This is useful if, for example, you want to resolve column names in a data frame in your implementation of select.
3. A set of utility functions for traversing selections in array-like objects.
It combines the functionality of dplyr’s slice and select methods.
Basic Usage
The most frequently used form is:
(select object selection1 selection2 ...)
where each selection specifies a set of subscripts along the corresponding axis. The selection specifications are found below.
To select a column, pass in t for the rows selection1, and the columns names (for a data frame) or column number (for an array) for selection2. For example, to select the first column of this array:
(select #2A((C0 C1 C2)
(v10 v11 v12)
(v20 v21 v22)
(v30 v31 v32))
t 1)
; #(C1 V11 V21 V31)
and to select a column from the mtcars data frame:
(ql:quickload :data-frame)
(data :mtcars)
(select mtcars t 'mpg)
if you’re selecting from a data frame, you can also use the column or columns command:
(column mtcars 'mpg)
to select an entire row, pass t for the column selector, and the row(s) you want for selection1. This example selects the first row (second row in purely array terms, which are 0 based):
(select #2A((C0 C1 C2)
(v10 v11 v12)
(v20 v21 v22)
(v30 v31 v32))
1 t)
;#(V10 V11 V12)
Selection Specifiers
Selecting Single Values
A non-negative integer selects the corresponding index, while a negative integer selects an index counting backwards from the last index. For example:
(select #(0 1 2 3) 1) ; => 1
(select #(0 1 2 3) -2) ; => 2
These are called singleton slices. Each singleton slice drops the dimension: vectors become atoms, matrices become vectors, etc.
Selecting Ranges
(range start end) selects subscripts i where start <= i < end. When end is nil, the last index is included (cf. subseq). Each boundary is resolved according to the other rules, if applicable, so you can use negative integers:
(select #(0 1 2 3) (range 1 3)) ; => #(1 2)
(select #(0 1 2 3) (range 1 -1)) ; => #(1 2)
Selecting All Subscripts
t selects all subscripts:
(select #2A((0 1 2)
(3 4 5))
t 1) ; => #(1 4)
Selecting w/ Sequences
Sequences can be used to make specific selections from the object. For example:
(select #(0 1 2 3 4 5 6 7 8 9)
(vector (range 1 3) 6 (range -2 -1))) ; => #(1 2 3 6 8 9)
(select #(0 1 2) '(2 2 1 0 0)) ; => #(2 2 1 0 0)
Bit Vectors
Bit vectors can be used to select elements of arrays and sequences as well:
(select #(0 1 2 3 4) #*00110) ; => #(2 3)
Which
which returns an index of the positions in SEQUENCE which satisfy PREDICATE.
(defparameter data
#(12 127 28 42 39 113 42 18 44 118 44 37 113 124 37 48 127 36 29 31 125
139 131 115 105 132 104 123 35 113 122 42 117 119 58 109 23 105 63 27
44 105 99 41 128 121 116 125 32 61 37 127 29 113 121 58 114 126 53 114
96 25 109 7 31 141 46 13 27 43 117 116 27 7 68 40 31 115 124 42 128 146
52 71 118 117 38 27 106 33 117 116 111 40 119 47 105 57 122 109 124
115 43 120 43 27 27 18 28 48 125 107 114 34 133 45 120 30 127 31 116))
(which data :predicate #'evenp)
; #(0 2 3 6 7 8 9 10 13 15 17 25 26 30 31 34 40 44 46 48 55 56 57 59 60 66 71 74
; 75 78 79 80 81 82 84 86 88 91 93 98 100 103 107 108 109 112 113 116 117 120)
Extensions
The previous section describes the core functionality. The semantics can be extended. The extensions in this section are provided by the library and prove useful in practice. Their implementation provide good examples of extending the library.
including is convenient if you want the selection to include the end of the range:
(select #(0 1 2 3) (including 1 2))
; => #(1 2), cf. (select ... (range 1 3))
nodrop is useful if you do not want to drop dimensions:
(select #(0 1 2 3) (nodrop 2))
; => #(2), cf. (select ... (range 2 3))
All of these are trivial to implement. If there is something you are missing, you can easily extend select. Pull request are welcome.
(ref) is a version of (select) that always returns a single element, so it can only be used with singleton slices.
Select Semantics
Arguments of select, except the first one, are meant to be resolved using canonical-representation, in the select-dev package. If you want to extend select, you should define methods for canonical-representation. See the source code for the best examples. Below is a simple example that extends the semantics with ordinal numbers.
(defmacro define-ordinal-selection (number)
(check-type number (integer 0))
(defmethod select-dev:canonical-representation
((axis integer) (select (eql ',(intern (format nil \"~:@@(~:r~)\" number)))))
(assert (< ,number axis))
(select-dev:canonical-singleton ,number)))
(define-ordinal-selection 1)
(define-ordinal-selection 2)
(define-ordinal-selection 3)
(select #(0 1 2 3 4 5) (range 'first 'third)) ; => #(1 2)
Note the following:
• The value returned by canonical-representation needs to be constructed using canonical-singleton, canonical-range, or canonical-sequence. You should not use the internal representation directly as it is subject to change.
• You can assume that axis is an integer; this is the default. An object may define a more complex mapping (such as, for example, named rows & columns), but unless a method specialized to that is found, canonical-representation will just query its dimension (with axis-dimension) and try to find a method that works on integers.
• You need to make sure that the subscript is valid, hence the assertion.
5.6 - SQLDF
Selecting subsets of data using SQL
Overview
sqldf is a library for querying data in a data-frame using SQL, optimised for memory consumption. Any query that can be done in SQL can also be done in the API, but since SQL is widely known, many developers find it more convenient to use.
To use SQL to query a data frame, the developer uses the sqldf function, using the data frame name (converted to SQL identifier format) in place of the table name. sqldf will automatically create an in-memory SQLite database, copy the contents of the data frame to it, perform the query, return the results as a new data frame and delete the database. We have tested this with data frames of 350K rows and there is no noticeable difference in performance compared to API based queries.
See the cl-sqlite documentation for additional functionality provided by the SQLite library. You can create databases, employ multiple persistent connections, use prepared statements, etc. with the underlying library. sqldf is a thin layer for moving data to/from data-frames.
Basic Usage
sqldf requires the sqlite shared library from the SQLite project. It may also be available via your operating systems package manager.
To load sqldf:
(asdf:load-system :sqldf)
Examples
These examples use the R data sets that are loaded using the example ls-init file. If your init file doesn’t do this, go now and load the example datasets in the REPL. Mostly these examples are intended to demonstrate commonly used queries for users who are new to SQL. If you already know SQL, you can skip this section.
Ordering & Limiting
This example shows how to limit the number of rows output by the query. It also illustrates changing the column name to meet SQL identifier requirements. In particular, the R CSV file has sepal.length for a column name, which is converted to sepal-length for the data frame, and we query it with sepal_length for SQL because ‘-’ is not a valid character in SQL identifers.
First, let’s see how big the iris data set is:
LS-USER> iris
#<DATA-FRAME (150 observations of 6 variables)>
and look at the first few rows:
(head iris)
;; X7 SEPAL-LENGTH SEPAL-WIDTH PETAL-LENGTH PETAL-WIDTH SPECIES
;; 0 1 5.1 3.5 1.4 0.2 setosa
;; 1 2 4.9 3.0 1.4 0.2 setosa
;; 2 3 4.7 3.2 1.3 0.2 setosa
;; 3 4 4.6 3.1 1.5 0.2 setosa
;; 4 5 5.0 3.6 1.4 0.2 setosa
;; 5 6 5.4 3.9 1.7 0.4 setosa
X7 is the row name/number from the data set. Since it was not assigned a column name in the data set, lisp-stat gives it a random name upon import (X1, X2, X3, …).
Now use sqldf for a query:
(pprint
(sqldf "select * from iris order by sepal_length desc limit 3"))
;; X7 SEPAL-LENGTH SEPAL-WIDTH PETAL-LENGTH PETAL-WIDTH SPECIES
;; 0 132 7.9 3.8 6.4 2.0 virginica
;; 1 118 7.7 3.8 6.7 2.2 virginica
;; 2 119 7.7 2.6 6.9 2.3 virginica
Averaging & Grouping
Grouping is often useful during the exploratory phase of data analysis. Here’s how to do it with sqldf:
(pprint
(sqldf "select species, avg(sepal_length) from iris group by species"))
;; SPECIES AVG(SEPAL-LENGTH)
;; 0 setosa 5.0060
;; 1 versicolor 5.9360
;; 2 virginica 6.5880
Nested Select
For each species, show the two rows with the largest sepal lengths:
(pprint
(sqldf "select * from iris i
where x7 in
(select x7 from iris where species = i.species order by sepal_length desc limit 2) order by i.species, i.sepal_length desc"))
;; X7 SEPAL-LENGTH SEPAL-WIDTH PETAL-LENGTH PETAL-WIDTH SPECIES
;; 0 15 5.8 4.0 1.2 0.2 setosa
;; 1 16 5.7 4.4 1.5 0.4 setosa
;; 2 51 7.0 3.2 4.7 1.4 versicolor
;; 3 53 6.9 3.1 4.9 1.5 versicolor
;; 4 132 7.9 3.8 6.4 2.0 virginica
;; 5 118 7.7 3.8 6.7 2.2 virginica
Recall the note above about X7 being the row id. This may be different depending on how many other data frames with an unnamed column have been imported in your Lisp-Stat session.
SQLite access
sqldf needs to read and write data frames to the data base, and these functions are exported for general use.
Write a data frame
create-df-table and write-table can be used to write a data frame to a database. Each take a connection to a database, which may be file or memory based, a table name and a data frame. Multiple data frames, with different table names, may be written to a single SQLite file this way. For example, to write iris to disk:
LS-USER> (defparameter *conn* (sqlite:connect #P"c:/Users/lisp-stat/data/iris.db3")) ;filel to save to
*CONN*
LS-USER> (sqldf::create-df-table *conn* 'iris iris) ; create the table * schema
NIL
LS-USER> (sqldf:write-table *conn* 'iris iris) ; write the data
NIL
read-table will read a database table into a data frame and update the column names to be lisp like by converting “.” and “_” to “-”. Note that the CSV reading tools of SQLite (for example, DB-Browser for SQLite are much faster than the lisp libraries, sometimes 15x faster. This means that often the quickest way to load a data-frame from CSV data is to first read it into a SQLite database, and then load the database table into a data frame. In practice, SQLite also turns out to be a convenient file format for storing data frames.
SQLDF is currently written using an apparently abandoned library, cl-sqlite. Pull requests from 2012 have been made with no response from the author, and the SQLite C API has improved considerably in the 12 years since the cl-sqlite FFI was last updated.
We choose CL-SQLite because, at the time of writing, it was the only SQLite library with a commercially acceptable license. Since then CLSQL has migrated to a BSD license and is a better option for new development. Not only does it support CommonSQL, the de-facto SQL query syntax for Common Lisp, it also supports several additional databases.
Version 2 of SQLDF will use CLSQL, possibly including some of the CSV and other extensions available in SQLite. Benchmarks show that SQLite’s CSV import is about 15x faster than cl-csv, and a FFI wrapper of SQLite’s CSV importer would be a good addition to Lisp-Stat.
Joins
Joins on tables are not implemented in SQLDF, though there is no technical reason they could not be. This will be done as part of the CLSQL conversion and involves more advanced SQL parsing. SXQL is worth investigating as a SQL parser.
6 - Reference
API documentation for Lisp-Stat systems
7.2 - Special Functions
Implemented in Common Lisp
The library assumes working with 64 bit double-floats. It will probably work with single-float as well. Whilst we would prefer to implement the complex domain, the majority of the sources do not. Tabled below are the special function implementations and their source. This library has a focus on high accuracy double-float calculations using the latest algorithms.
function source
erf libm
erfc libm
inverse-erf Boost
inverse-erfc Boost
log-gamma libm
gamma Cephes
incomplete-gamma Boost
Error rates
The following table shows the peak and mean errors using Boost test data. Tests run on MS Windows 10 with SBCL 2.0.10. Boost results taken from the Boost error function, inverse error function and log-gamma pages.
erf
Data Set Boost (MS C++) Special-Functions
erf small values Max = 0.841ε (Mean = 0.0687ε) Max = 6.10e-5ε (Mean = 4.58e-7ε)
erf medium values Max = 1ε (Mean = 0.119ε) Max = 1ε (Mean = 0.003ε)
erf large values Max = 0ε (Mean = 0ε) N/A erf range 0 < x < 6
erfc
Data Set Boost (MS C++) Special-Functions
erfc small values Max = 0ε (Mean = 0) Max = 1ε (Mean = 0.00667ε)
erfc medium values Max = 1.65ε (Mean = 0.373ε) Max = 1.71ε (Mean = 0.182ε)
erfc large values Max = 1.14ε (Mean = 0.248ε) Max = 2.31e-15ε (Mean = 8.86e-18ε)
inverse-erf/c
Data Set Boost (MS C++) Special-Functions
inverse-erf Max = 1.09ε (Mean = 0.502ε) Max = 2ε (Mean = 0.434ε)
inverse-erfc Max = 1ε (Mean = 0.491ε) Max = 2ε (Mean = 0.425ε)
log-gamma
Data Set Boost (MS C++) Special-Functions
factorials Max = 0.914ε (Mean = 0.175ε) Max = 2.10ε (Mean = 0.569ε)
near 0 Max = 0.964ε (Mean = 0.462ε) Max = 1.93ε (Mean = 0.662ε)
near 1 Max = 0.867ε (Mean = 0.468ε) Max = 0.50ε (Mean = 0.0183ε)
near 2 Max = 0.591ε (Mean = 0.159ε) Max = 0.0156ε (Mean = 3.83d-4ε)
near -10 Max = 4.22ε (Mean = 1.33ε) Max = 4.83d+5ε (Mean = 3.06d+4ε)
near -55 Max = 0.821ε (Mean = 0.419ε) Max = 8.16d+4ε (Mean = 4.53d+3ε)
The results for log gamma are good near 1 and 2, bettering those of Boost, however are worse (relatively speaking) at values of x > 8. I don’t have an explanation for this, since the libm values match Boost more closely. For example:
(spfn:log-gamma -9.99999237060546875d0) = -3.3208925610275326d0
(libm:lgamma -9.99999237060546875d0) = -3.3208925610151265d0
libm:lgamma provides an additional 4 digits of accuracy over spfn:log-gamma when compared to the Boost test answer, despite using identical computations. log-gamma is still within 12 digits of agreement though, and likely good enough for most uses.
gamma
Data Set Boost (MS C++) Special-Functions
factorials Max = 1.85ε (Mean = 0.491ε) Max = 3.79ε (Mean = 0.949ε)
near 0 Max = 1.96ε (Mean = 0.684ε) Max = 2.26ε (Mean = 0.56ε)
near 1 Max = 2ε (Mean = 0.865ε) Max = 2.26ε (Mean = 0.858ε)
near 2 Max = 2ε (Mean = 0.995ε) Max = 2ε (Mean = 0.559ε)
near -10 Max = 1.73ε (Mean = 0.729ε) Max = 0.125ε (Mean = 0.0043ε)
near -55 Max = 1.8ε (Mean = 0.817ε) Max = 0ε (Mean = 0ε)
incomplete-gamma
See boost incomplete gamma documentation for notes and error rates.
lower
Data Set Boost (MS C++) Special-Functions
small values Max = 1.54ε (Mean = 0.439ε) Max = 3.00ε (Mean = 0.516ε)
medium values Max = 35.1ε (Mean = 6.98ε) Max = 10.00ε (Mean = 0.294ε)
large values Max = 243ε (Mean = 20.2ε) Max = 20ε (Mean = 0.613ε)
integer and half-integer Max = 13ε (Mean = 2.97ε) Max = 3ε (Mean = 0.189ε)
upper
Data Set Boost (MS C++) Special-Functions
small values Max = 2.26ε (Mean = 0.74ε) Max = 2.23ε (Mean = 0.511ε)
medium values Max = 23.7ε (Mean = 4ε) Max = 9.00ε (Mean = 0.266ε)
large values Max = 469ε (Mean = 31.5ε) Max = 20.5ε (Mean = 0.621ε)
integer and half-integer Max = 8.72ε (Mean = 1.48ε) Max = 4.00ε (Mean = 0.174ε)
NaN and Infinity
The lisp specification mentions neither NaN nor infinity, so any proper treatment of these is going to be either implementation specific or using a third party library.
We are using the float-features library. There is also some support for infinity in the extended-reals package of numerical-utilities, but it is not comprehensive. Openlibm and Cephes have definitions, but we don’t want to introduce a large dependency just to get these definitions.
Test data
The test data is based on Boost test data. You can run all the tests using the ASDF test op:
(asdf:test-system :special-functions)
By default the test summary values (the same as in Boost) are printed after each test, along with the key epsilon values.
7.3 - Code Repository
Collection of XLisp and Common Lisp statistical routines
Below is a partial list of the consolidated XLispStat packages from UCLA and CMU repositories. There is a great deal more XLispStat code available that was not submitted to these archives, and a search for an algorithm or technique that includes the term “xlispstat” will often turn up interesting results.
Artificial Intelligence
Genetic Programming
Cerebrum
A Framework for the Genetic Programming of Neural Networks. Peter Dudey. No license specified.
[Docs]
GAL
Functions useful for experimentation in Genetic Algorithms. It is hopefully compatible with Lucid Common Lisp (also known as Sun Common Lisp). The implementation is a “standard” GA, similar to Grefenstette’s work. Baker’s SUS selection algorithm is employed, 2 point crossover is maintained at 60%, and mutation is very low. Selection is based on proportional fitness. This GA uses generations. It is also important to note that this GA maximizes. William M. Spears. “Permission is hereby granted to copy all or any part of this program for free distribution, however this header is required on all copies.”
mGA
A Common Lisp Implementation of a Messy Genetic Algorithm. No license specified.
[Docs, errata]
Machine Learning
Machine Learning
Common Lisp files for various standard inductive learning algorithms that all use the same basic data format and same interface. It also includes automatic testing software for running learning curves that compare multiple systems and utilities for plotting and statistically evaluating the results. Included are:
• AQ: Early DNF learner.
• Backprop: The standard multi-layer neural-net learning method.
• Bayes Indp: Simple naive or “idiot’s” Bayesian classifier.
• Cobweb: A probabilistic clustering system.
• Foil: A first-order Horn-clause learner (Prolog and Lisp versions).
• ID3: Decision tree learner with a number of features.
• KNN: K nearest neighbor (instance-based) algorithm.
• Perceptron: Early one-layer neural-net algorithm.
• PFOIL: Propositional version of FOIL for learning DNF.
• PFOIL-CNF: Propositional version of FOIL for learning CNF.
Raymond J. Mooney. “This program may be freely copied, used, or modified provided that this copyright notice is included in each copy of this code and parts thereof.”
Neural Networks
QuickProp
Common Lisp implementation of “Quickprop”, a variation on back-propagation. For a description of the Quickprop algorithm, see Faster-Learning Variations on Back-Propagation: An Empirical Study by Scott E. Fahlman in Proceedings of the 1988 Connectionist Models Summer School, Morgan-Kaufmann, 1988. Scott E. Fahlman. Public domain.
Fun & Games
Towers of Hanoi
Tower of Hanoi plus the Queens program explained in Winston and Horn. No license specified.
Mathematics
Combinatorial
Various combinatorial functions for XLispStat. There are other Common Lisp libraries for this, for example cl-permutation. It’s worth searching for something in Quicklisp too. No license specified.
functions
Bessel, beta, erf, gamma and horner implementations. Gerald Roylance. License restricted to non-commercial use only.
integrate
gauss-hermite.lsp is by Jan de Leeuw.
runge.lsp and integr.lsp are from Gerald Roylance 1982 CLMATH package. integr.lsp has Simpson’s rule and the trapezoid rule. runge.lsp integrates runge-kutta differential equations by various methods.
Roylance code is non-commercial use only. Jan de Leeuw’s code has no license specified.
lsqpack
This directory contains the code from the Lawson and Hanson book, Solving Least Squares Problems, translated with f2cl, tweaked for Xlisp-Stat by Jan de Leeuw. No license specified.
nswc
This is an f2cl translation, very incomplete, of the NSWC mathematics library. The FORTRAN, plus a great manual, is available on github. The report is NSWCDD/TR-92/425, by Alfred H. Morris, Jr. dated January 1993. No license specified, but this code is commonly considered public domain.
Numerical Recipes
Code from Numerical Recipes in FORTRAN, first edition, translated with Waikato’s f2cl and tweaked for XLisp-Stat by Jan de Leeuw. No license specified.
optimization
Code for annealing, simplex and other optimization problems. Various licenses. These days, better implementations are available, for example the linear-programming library.
Statistics
Algorithms
• AS 190 Probabilities and Upper Quantiles for the Studentized Range.
• AS 226 Computing Noncentral Beta Probabilities
• AS 241 The Percentage Points of the Normal Distribution
• AS 243 Cumulative Distribution Function of the Non-Central T Distribution
• TOMS 744 A stochastic algorithm for global optimization with constraints
AS algorithms: B. Narasimhan (naras@euler.bd.psu.edu) “You can freely use and distribute this code provided you don’t remove this notice. NO WARRANTIES, EXPLICIT or IMPLIED”
TOMS: F. Michael Rabinowitz. No license specified.
Categorical
glim
Glim extension for log-linear models. Jan de Leeuw. No license specified.
IPF
Fits Goodman’s RC model to the array X. Also included is a set of functions for APL like array operations. The four basic APL operators (see, for example, Garry Helzel, An Encyclopedia of APL, 2e edition, 1989, I-APL, 6611 Linville Drive, Weed., CA) are inner-product, outer-product, reduce, and scan. They can be used to produce new binary and unary functions from existing ones. Unknown author. No license specified.
latent-class
One file with the function latent-class. Unknown author. No license specified.
max
Functions to do quantization and cluster analysis in the empirical case. Jan de Leeuw. No license specified.
write-profiles
A function. The argument is a list of lists of strings. Each element of the list corresponds with a variable, the elements of the list corresponding with a variable are the labels of that variable, which are either strings or characters or numbers or symbols. The program returns a matrix of strings coding all the profiles. Unknown author. License not specified.
Distributions
The distributions repository contains single file implementations of:
density demo
Demonstrations of plots of density and probability functions. Requires XLispStat graphics. Jan de Leeuw. No license specified.
noncentral t-distribution
noncentral-t distribution by Russ Lenth, based on Applied Statistics Algorithm AS 243. No license specified.
probability-functions
A compilation of probability densities, cumulative distribution functions, and their inverses (quantile functions), by Jan de Leeuw. No license specified.
power
This appears to test the powers of various distribution functions. Unknown author. No license specified.
weibull-mle
Maximum likelihood estimation of Weibull parameters. M. Ennis. No license specified.
Classroom Statistics
The systems in the introstat directory are meant to be used in teaching situations. For the most part they use XLispStat’s graphical system to introduce students to statistical concepts. They are generally simple in nature from a the perspective of a statistical practitioner.
ElToY
ElToY is a collection of three programs written in XLISP-STAT. Dist-toy displays a univariate distribution dynamically linked to its parameters. CLT-toy provides an illustration of the central limit theorem for univariate distributions. ElToY provides a mechanism for displaying the prior and posterior distributions for a conjugate family dynamically linked so that changes to the prior affect the posterior and visa versa. Russell Almond almond@stat.washington.edu. GPL v2.
Multivariate
Dendro
Dendro is for producing dendrograms for agglomerative cluster in XLISP-STAT.
Plotting
Boxplot Matrix
Graphical Display of Analysis of Variance with the Boxplot Matrix. Extension of the standard one-way box plot to cross-classified data with multiple observations per cell. Richard M. Heiberger rmh@astro.ocis.temple.edu No license specified.
[Docs]
Dynamic Graphics and Regression Diagnostics
Contains methods for regression diagnostics using dynamic graphics, including all the methods discussed in Cook and Weisberg (1989) Technometrics, 277-312. Includes documentation written in LaTeX. sandy@umnstat.stat.umn.edu No license specified.
[Docs}
FEDF
Flipped Empirical Distribution Function. Parallel-FEDF, FEDF-ScatterPlot, FEDF-StarPlot written in XLISP-STAT. These plots are suggested for exploring multidimensional data suggested in “Journal of Computational and Graphical Statistics”, Vol. 4, No. 4, pp.335-343. 97/07/18. Lee, Kyungmi & Huh, Moon Yul myhuh@yurim.skku.ac.kr No license specified.
PDFPlot
PDF graphics output from XlispStat PDFPlot is a XlispStat class to generate PDF files from LispStat plot objects. Steven D. Majewski sdm7g@virginia.edu. No license specified.
RXridge
RXridge.LSP adds shrinkage regression calculation and graphical ridge “trace” display functionality to the XLisp-Stat, ver2.1 release 3+ implementation of LISP-STAT. Bob Obenchain. No license specified.
Regression
Bayes-Linear
BAYES-LIN is an extension of the XLISP-STAT object-oriented statistical computing environment, which adds to XLISP-STAT some object prototypes appropriate for carrying out local computation via message-passing between clique-tree nodes of Bayes linear belief networks. Darren J. Wilkinson. No license specified. [Docs]
Bayesian Poisson Regression
Bayesian Poisson Regression using the Gibbs Sampler Sensitivity Analysis through Dynamic Graphics. A set of programs that allow you to do Bayesian sensitivity analysis dynamically for a variety of models. B. Narasimhan (naras@stat.fsu.edu) License restricted to non-commercial use only.
[Docs]
Binary regression
Smooth and parametric binary regression code. Unknown author. License not specified.
Cost of Data Analysis
A regression analysis usually consists of several stages such as variable selection, transformation and residual diagnosis. Inference is often made from the selected model without regard to the model selection methods that proceeded it. This can result in overoptimistic and biased inferences. We first characterize data analytic actions as functions acting on regression models. We investigate the extent of the problem and test bootstrap, jackknife and sample splitting methods for ameliorating it. We also demonstrate an interactive XLISP-STAT system for assessing the cost of the data analysis while it is taking place. Julian J. Faraway. BSD license.
[Docs]
Gee
Lisp-Stat code for generalised estimating equation models. Thomas Lumley thomas@biostat.washington.edu. GPL v2.
[Docs]
GLIM
Functions and prototypes for fitting generalized linear models. Contributed by Luke Tierney luke@umnstat.stat.umn.edu. No license specified.
[Docs]
GLMER
A function to estimate coefficients and dispersions in a generalized linear model with random effects. Guanghan Liu gliu@math.ucla.edu. No license specified.
Hasse
Implements Taylor & Hilton’s rules for balanced ANOVA designs and draws the Hasse diagram of nesting relationships. Philip Iversen piversen@iastate.edu. License restricted to non-commercial use only.
monotone
Implementation of an algorithm to project on the intersection of r closed convex sets. Further details and references are in Mathar, Cyclic Projections in Data Analysis, Operations Research Proceedings 1988, Springer, 1989. Jan de Leeuw. No license specified.
OIRS
Order and Influence in Regression Strategy. The methods (tactics) of regression data analysis such as variable selection, transformation and outlier detection are characterised as functions acting on regression models and returning regression models. The ordering of the tactics, that is the strategy, is studied. A method for the generation of acceptable models supported by the choice of regression data analysis methods is described with a view to determining if two capable statisticians may reasonably hold differing views on the same data. Optimal strategies are considered. The idea of influential points is extended from estimation to the model building process itself both quantitatively and qualitatively. The methods described are not intended for the entirely automatic analysis of data, rather to assist the statistician in examining regression data at a strategic level. Julian J. Faraway julian@stat.lsa.umich.edu. BSD license.
oneway
Additions to Tierney’s one way ANOVA. B. Narasimhan naras@euler.bd.psu.edu. No license specified.
Regstrat
A XLispStat tool to investigate order in Regression Strategy particularly for finding and examining the models found by changing the ordering of the actions in a regression analysis. Julian Faraway julian@stat.lsa.umich.edu. License restricted to non-commercial use only.
Simsel
XLISP-STAT software to perform Bayesian Predictive Simultaneous Variable and Transformation Selection for regression. A criterion-based model selection algorithm. Jennifer A. Hoeting jah@stat.colostate.edu. License restricted to non-commercial use only.
Robust
There are three robust systems in the robust directory:
robust regression
This is the Xlisp-Stat version of ROSEPACK, the robust regression package developed by Holland, Welsch, and Klema around 1975. See Holland and Welsch, Commun. Statist. A6, 1977, 813-827. See also the Xlisp-Stat book, pages 173-177, for an alternative approach. Jan de Leeuw. No license specified.
There is also robust statistical code for location and scale.
Simulation
The simulation directory contains bootstrapping methods, variable imputation, jackknife resampling, monte-carlo simulations and a general purpose simulator. There is also the discrete finite state markov chains in the temporal directory.
Smoothers
kernel density estimators
KDEs based on Wand, CFFI based KDEs by B. Narasimhan, and graphical univariate density estimation.
spline
Regularized bi-variate splines with smoothing and tension according to Mitasova and Mitas. Cubic splines according to Green and Silverman. Jan de Leeuw. No license specified.
super-smoother
The super smoothing algorithm, originally implemented in FORTRAN by Jerome Friedman of Stanford University, is a method by which a smooth curve may be fitted to a two-dimensional array of points. Its implementation is presented here in the XLISP-STAT language. Jason Bond. No license specified.
[DOCS]
Variable Bandwidth
XLispStat code to facilitate interactive bandwidth choice for estimator (3.14), page 44 in Bagkavos (2003), “BIAS REDUCTION IN NONPARAMETRIC HAZARD RATE ESTIMATION”. No license specified.
Spatial
livemap
LiveMap is a tool for exploratory spatial data analysis. Dr. Chris Brunsdon. No license specified.
[DOCS]
variograms
Produces variograms using algorithms from C.V. Deutsch and A.G. Journel, “GSLIB: Geostatistical Software Library and User’s Guide, Oxford University Press, New York, 1992. Stanley S. Bentow. No license specified.
[DOCS]
Temporal
Exploratory survival analysis
A set of XLISP-STAT routines for the interactive, dynamic, exploratory analysis of survival data. E. Neely Atkinson (neely@odin.mda.uth.tmc.edu) “This software may be freely redistributed.”
[Docs]
Markov
Simulate some Markov chains in Xlisp-Stat. Complete documentation and examples are included. B. Narasimhan (naras@sci234e.mrs.umn.edu). GPL.
[Docs]
SAPA
Sapaclisp is a collection of Common Lisp functions that can be used to carry out many of the computations described in the SAPA book:
Donald B. Percival and Andrew T. Walden, “Spectral Analysis for Physical Applications: Multitaper and Conventional Univariate Techniques”, Cambridge University Press, Cambridge, England, 1993.
The SAPA book uses a number of time series as examples of various spectral analysis techniques.
From the description:
Sapaclisp features functions for converting to/from decibels, the FORTRAN sign function, log of the gamma function, manipulating polynomials, root finding, simple numerical integration, matrix functions, Cholesky and modified Gram-Schmidt (i.e., Q-R) matrix decompositions, sample means and variances, sample medians, computation of quantiles from various distributions, linear least squares, discrete Fourier transform, fast Fourier transform, chirp transform, low-pass filters, high-pass filters, band-pass filters, sample auto-covariance sequence, auto-regressive spectral estimates, least squares, forward/backward least squares, Burg’s algorithm, the Yule-Walker method, periodogram, direct spectral estimates, lag window spectral estimates, WOSA spectral estimates, sample cepstrum, time series bandwidth, cumulative periodogram test statistic for white noise, and Fisher’s g statistic.
License: “Use and copying of this software and preparation of derivative works based upon this software are permitted. Any distribution of this software or derivative works must comply with all applicable United States export control laws.”
Times
XLispStat functions for time series analysis, data editing, data selection, and other statistical operations. W. Hatch (bts!bill@uunet.uu.net). Public Domain.
Tests
The tests directory contains code to do one-sample and two-sample Kolmogorov-Smirnov test (with no estimated parameters) and code to do Mann-Whitney and Wilcoxon rank signed rank tests.
Training & Documentation
ENAR Short Course
This directory contains slides and examples used in a shortcourse on Lisp-Stat presented at the 1992 ENAR meetings in Cincinnati, 22 March 1992.
ASA Course
Material from an ASA course given in 1992.
Tech Report
A 106 page mini manual on XLispStat.
Utilities
The majority of the files in the utilities directory are specific to XLISP-STAT and unlikely to be useful. In most cases better alternatives now exist for Common Lisp. A few that may be worth investigating have been noted below.
Filters
XLisp-S
A series of routines to allow users of Xlisp or LispStat to interactively transfer data to and access functions in New S. Steve McKinney kilroy@biostat.washington.edu. License restricted to non-commercial use only.
I/O
formatted-input
A set of XLISP functions that can be used to read ASCII files into lists of lists, using formatted input. The main function is read-file, which has as arguments a filename and a FORTRAN type format string (with f, i, x, t, and a formats) Jan Deleeuw deleeuw@laplace.sscnet.ucla.edu “THIS SOFTWARE CAN BE FREELY DISTRIBUTED, USED, AND MODIFIED.”
Memoization
automatic memoization
As the name suggests. Marty Hall hall@aplcenmp.apl.jhu.edu. “Permission is granted for any use or modification of this code provided this notice is retained."
[OVERVIEW]
8 - Contribution Guidelines
How to contribute to Lisp-Stat
Contributor License Agreements (CLAs) are common and accepted in open source projects. We all wish for Lisp-Stat to be used and distributed as widely as possible, and for its users to be confident about the origins and continuing existence of the code. The CLA help us achieve that goal. Although common, many in the Lisp community are unaware of CLAs or their importance.
Why do you need a CLA?
We need a CLA because, by law, all rights reside with the originator of a work unless otherwise agreed. The CLA allows the project to accept and distribute your contributions. Without your consent via a CLA, the project has no rights to use the code. Here’s what Google has to say in their CLA policy page:
Using one standard inbound license that grants the receiving company broad permission to use contributed code in products is beneficial to the company and downstream users alike.
Technology companies will naturally want to make productive use of any code made available to them. However, if all of the code being received by a company was subject to various inbound licenses with conflicting terms, the process for authorizing the use of the code would be cumbersome because of the need for constant checks for compliance with the various licenses. Whenever contributed code were to be used, the particular license terms for every single file would need to be reviewed to ascertain whether the application would be permitted under the terms of that code’s specific license. This would require considerable human resources and would slow down the engineers trying to utilize the code.
The benefits that a company receives under a standard inbound license pass to downstream users as well. Explicit patent permissions and disclaimers of obligations and warranties clarify the recipients’ rights and duties. The broad grant of rights provides code recipients opportunities to make productive use of the software. Adherence to a single standard license promotes consistency and common understanding for all parties involved.
How do I sign?
In order to be legally binding a certain amount of legal ceremony must take place. This varies by jurisdiction. As an individual ‘clickwrap’ or ‘browser wrap’ agreements are used. For corporations, a ‘wet signature’ is required because it is valid everywhere and avoids ambiguity of assent.
If you are an individual contributor, making a pull request from a personal account, the cla-assistant will automatically prompt you to digitally sign as part of the PR.
What does it do?
The CLA essentially does three things. It ensures that the contributor agrees:
1. To allow the project to use the source code and redistribute it
2. The contribution is theirs to give, e.g. does not belong to their employer or someone else
3. Does not contain any patented ‘stuff’.
Mechanics of the CLA
The Lisp-Stat project uses CLAs to accept regular contributions from individuals and corporations, and to accept larger grants of existing software products, for example if you wished to contribute a large XLISP-STAT library.
Contributions to this project must be accompanied by a Contributor License Agreement. You (or your employer) retain the copyright to your contribution; this simply gives us permission to use and redistribute your contributions as part of the project.
You generally only need to submit a CLA once, so if you have already submitted one (even if it was for a different project), you do not need to do it again.
Code of Conduct
The following code of conduct is not meant as a means for punishment, action or censorship for the mailing list or project. Instead, it is meant to set the tone, expectations and comfort level for contributors and those wishing to participate in the community.
• We ask everyone to be welcoming, friendly, and patient.
• Flame wars and insults are unacceptable in any fashion, by any party.
• Anything can be asked, and “RTFM” is not an acceptable answer.
• Neither is “it’s in the archives, go read them”.
• Statements made by core developers can be quoted outside of the list.
• Statements made by others can not be quoted outside the list without explicit permission. - Anonymised paraphrased statements “someone asked about…” are OK - direct quotes with or without names are not appropriate.
• The community administrators reserve the right to revoke the subscription of members (including mentors) that persistently fail to abide by this Code of Conduct.
8.1 - Contributing Code
How to contribute code to Lisp-Stat
First, if you are contributing on behalf of your employer, ensure you have signed a contributor license agreement. Then follow these steps for contributing to Lisp-Stat:
You may also be interested in the additional information at the end of this document.
Get source code
First you need the Lisp-Stat source code. The core systems are found on the Lisp-Stat github page. For the individual systems, just check out the one you are interested in. For the entire Lisp-Stat system, at a minimum you will need:
Other dependencies will be pulled in by Quicklisp.
Development occurs on the “master” branch. To get all the repos, you can use the following command in the directory you want to be your top level dev space:
cd ~/quicklisp/local-projects && \
git clone https://github.com/Lisp-Stat/data-frame.git && \
git clone https://github.com/Lisp-Stat/dfio.git && \
git clone https://github.com/Lisp-Stat/special-functions.git && \
git clone https://github.com/Lisp-Stat/numerical-utilities.git && \
git clone https://github.com/Lisp-Stat/array-operations.git && \
git clone https://github.com/Lisp-Stat/documentation.git && \
git clone https://github.com/Lisp-Stat/distributions.git && \
git clone https://github.com/Lisp-Stat/plot.git && \
git clone https://github.com/Lisp-Stat/select.git && \
git clone https://github.com/Lisp-Stat/cephes.cl.git && \
git clone https://github.com/Symbolics/alexandria-plus && \
git clone https://github.com/Lisp-Stat/statistics.git && \
git clone https://github.com/Lisp-Stat/lisp-stat.git
git clone https://github.com/Lisp-Stat/sqldf.git
Modify the source
Before you start, send a message to the Lisp-Stat mailing list or file an issue on Github describing your proposed changes. Doing this helps to verify that your changes will work with what others are doing and have planned for the project. Importantly, there may be some existing code or design work for you to leverage that is not yet published, and we’d hate to see work duplicated unnecessarily.
Be patient, it may take folks a while to understand your requirements. For large systems or design changes, a design document is preferred. For small changes, issues and the mailing list are fine.
Once your suggested changes are agreed, you can modify the source code and add some features using your favorite IDE.
The following sections provide tips for working on the project:
Coding Convention
Please consider the following before submitting a pull request:
• Code should be formatted according to the Google Common Lisp Style Guide
• All code should include unit tests. Older projects use fiveam as the test framework for new projects. New project should use Parachute.
• Contributions should pass existing unit tests
• New unit tests should be provided to demonstrate bugs and fixes
• Indentation in Common Lisp is important for readability. Contributions should adhere to these guidelines. For the most part, a properly configured Emacs will do this automatically.
Suggested editor settings for code contributions
No line breaks in (doc)strings, otherwise try to keep it within 80 columns. Remove trailing whitespace. ‘modern’ coding style. Suggested Emacs snippet:
(set-fill-column 9999)
'(("\\<\$$FIXME\\|TODO\\|QUESTION\\|NOTE\$$"
1 font-lock-warning-face t)))
(setq show-trailing-whitespace t)
'(lambda()
(save-excursion
(delete-trailing-whitespace))
nil))
(visual-line-mode 1)
(setq slime-net-coding-system 'utf-8-unix)
(setq lisp-lambda-list-keyword-parameter-alignment t)
(setq lisp-lambda-list-keyword-alignment t)
(setq common-lisp-style-default 'modern)
Code review
Github includes code review tools that can be used as part of a pull request. We recommend using a triangular workflow and feature/bug branches in your own repository to work from. Once you submit a pull request, one of the committers will review it and possibly request modifications.
As a contributor you should organise (squash) your git commits to make them understandable to reviewers:
• Combine WIP and other small commits together.
• Address multiple issues, for smaller bug fixes or enhancements, with a single commit.
• Use separate commits to allow efficient review, separating out formatting changes or simple refactoring from core changes or additions.
• Rebase this chain of commits on top of the current master
• Write a good git commit message
Once all the comments in the review have been addressed, a Lisp-Stat committer completes the following steps to commit the patch:
• If the master branch has moved forward since the review, rebase the branch from the pull request on the latest master and re-run tests.
• If all tests pass, the committer amends the last commit message in the series to include “this closes #1234”. This can be done with interactive rebase. When on the branch issue: git rebase -i HEAD^
• Change where it says “pick” on the line with the last commit, replacing it with “r” or “reword”. It replays the commit giving you the opportunity the change the commit message.
• The committer pushes the commit(s) to the github repo
• The committer resolves the issue with a message like "Fixed in <Git commit SHA>".
Where to start?
If you are new to statistics or Lisp, documentation updates are always a good place to start. You will become familiar with the workflow, learn how the code functions and generally become better acquainted with how Lisp-Stat operates. Besides, any contribution will require documentation updates, so it’s good to learn this system first.
If you are coming from an existing statistical environment, consider porting a XLispStat package that you find useful to Lisp-Stat. Use the XLS compatibility layer to help. If there is a function missing in XLS, raise an issue and we’ll create it. Some XLispStat code to browse:
Keep in mind that some of these rely on the XLispStat graphics system, which was native to the platform. LISP-STAT uses Vega for visualizations, so there isn’t a direct mapping. Non-graphical code should be a straight forward port.
You could also look at CRAN, which contains thousands of high-quality packages.
For specific ideas that would help, see the ideas page.
Issue Guidelines
Please comment on issues in github, making your concerns known. Please also vote for issues that are a high priority for you.
Please refrain from editing descriptions and comments if possible, as edits spam the mailing list and clutter the audit trails, which is otherwise very useful. Instead, preview descriptions and comments using the preview button (on the right) before posting them. Keep descriptions brief and save more elaborate proposals for comments, since descriptions are included in GitHub automatically sent messages. If you change your mind, note this in a new comment, rather than editing an older comment. The issue should preserve this history of the discussion.
8.2 - Contributing to Documentation
You can help make Lisp-Stat documentation better
Creating and updating documentation is a great way to learn. You will not only become more familiar with Common Lisp, you have a chance to investigate the internals of all parts of a statistical system.
We use Hugo to format and generate the website, the Docsy theme for styling and site structure, and Netlify to manage the deployment of the documentation site (what you are reading now). Hugo is an open-source static site generator that provides us with templates, content organisation in a standard directory structure, and a website generation engine. You write the pages in Markdown (or HTML if you want), and Hugo wraps them up into a website.
All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult GitHub Help for more information on using pull requests.
Repository Organisation
Declt generates documentation for individual systems in Markdown format. These are kept with the project, e.g. select/docs/select.md.
Quick Start
Here’s a quick guide to updating the docs. It assumes you are familiar with the GitHub workflow and you are happy to use the automated preview of your doc updates:
1. Fork the Lisp-Stat documentation repo on GitHub.
2. Make your changes and send a pull request (PR).
3. If you are not yet ready for a review, add “WIP” to the PR name to indicate it’s a work in progress. (Don’t add the Hugo property “draft = true” to the page front matter, because that prevents the auto-deployment of the content preview described in the next point.)
4. Wait for the automated PR workflow to do some checks. When it’s ready, you should see a comment like this: deploy/netlify — Deploy preview ready!
5. Click Details to the right of “Deploy preview ready” to see a preview of your updates.
6. Continue updating your doc and pushing your changes until you’re happy with the content.
7. When you’re ready for a review, add a comment to the PR, and remove any “WIP” markers.
Updating a single page
If you’ve just spotted something you’d like to change while using the docs, Docsy has a shortcut for you (do not use this for reference docs):
1. Click Edit this page in the top right hand corner of the page.
2. If you don’t already have an up to date fork of the project repo, you are prompted to get one - click Fork this repository and propose changes or Update your Fork to get an up to date version of the project to edit. The appropriate page in your fork is displayed in edit mode.
3. Follow the rest of the Quick Start process above to make, preview, and propose your changes.
Previewing locally
If you want to run your own local Hugo server to preview your changes as you work:
1. Follow the instructions in Getting started to install Hugo and any other tools you need. You’ll need at least Hugo version 0.45 (we recommend using the most recent available version), and it must be the extended version, which supports SCSS.
2. Fork the Lisp-Stat documentation repo into your own repository project, then create a local copy using git clone. Don’t forget to use --recurse-submodules or you won’t pull down some of the code you need to generate a working site.
git clone --recurse-submodules --depth 1 https://github.com/lisp-stat/documentation.git
3. Run hugo server in the site root directory. By default your site will be available at http://localhost:1313/. Now that you’re serving your site locally, Hugo will watch for changes to the content and automatically refresh your site.
4. Continue with the usual GitHub workflow to edit files, commit them, push the changes up to your fork, and create a pull request.
Creating an issue
If you’ve found a problem in the docs, but are not sure how to fix it yourself, please create an issue in the Lisp-Stat documentation repo. You can also create an issue about a specific page by clicking the Create Issue button in the top right hand corner of the page.
8.3 - Contribution Ideas
Some ideas on how contribute to Lisp-Stat
Special Functions
The functions underlying the statistical distributions require skills in numerical programming. If you like being ‘close to the metal’, this is a good area for contributions. Suitable for medium-advanced level programmers. In particular we need implementations of:
• gamma
• incomplete gamma (upper & lower)
• inverse incomplete gamma
This work is partially complete and makes a good starting point for someone who wants to make a substantial contribution.
Documentation
Better and more documentation is always welcome, and a great way to learn. Suitable for beginners to Common Lisp or statistics.
Jupyter-Lab Integrations
Jupyter Lab has two nice integrations with Pandas, the Python version of Data-Frame, that would make great contributions: Qgrid, which allows editing a data frame in Jupyter Lab, and Jupyter DataTables. There are many more Pandas/Jupyter integrations, and any of them would be welcome additions to the Lisp-Stat ecosystem.
Plotting
LISP-STAT has a basic plotting system, but there is always room for improvement. An interactive REPL based plotting system should be possible with a medium amount of effort. Remote-js provides a working example of running JavaScript in a browser from a REPL, and could combined with something like Electron and a DSL for Vega-lite specifications. This may be a 4-6 week project for someone with JavaScript and HTML skills. There are other Plotly/Vega options, so if this interests you, open an issue and we can discuss. I have working examples of much of this, but all fragmented examples. Skills: good web/JavaScript, beginner lisp.
Regression
We have some code for ‘quick & dirty’ regressions and need a more robust DSL (Domain Specific Language). As a prototype, the -proto regression objects from XLISP-STAT would be both useful and be a good experiment to see what the final form should take. This is a relatively straightforward port, e.g. defproto -> defclass and defmeth -> defmethod. Skill level: medium in both Lisp and statistics, or willing to learn.
Vector Mathematics
We have code for vectorized versions of all Common Lisp functions, living in the elmt package. It now only works on vectors. Shadowing Common Lisp mathematical operators is possible, and more natural. This task is to make elmt vectorized math functions work on lists as well as vectors, and to implement shadowing of Common Lisp. This task requires at least medium-high level Lisp skills, since you will be working with both packages and shadowing. We also need to run the ANSI Common Lisp conformance tests on the results to ensure nothing gets broken in the process.
Continuous Integration
If you have experience with Github’s CI tools, a CI setup for Lisp-Stat would be a great help. This allows people making pull requests to immediately know if their patches break anything. Beginner level Lisp. | 2023-03-27 20:18:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2585642337799072, "perplexity": 8250.876131472634}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948684.19/warc/CC-MAIN-20230327185741-20230327215741-00286.warc.gz"} |
https://zbmath.org/?q=an:1069.14053 | # zbMATH — the first resource for mathematics
Lagrangian subbundles and codimension 3 subcanonical subschemes. (English) Zbl 1069.14053
Summary: We show that a Gorenstein subcanonical codimension 3 subscheme $$Z \subset X=\mathbb{P}^N$$, $$N\geq 4$$, can be realized as the locus along which two Lagrangian subbundles of a twisted orthogonal bundle meet degenerately and conversely. We extend this result to singular $$Z$$ and all quasi-projective ambient schemes $$X$$ under the necessary hypothesis that $$Z$$ is strongly subcanonical in a sense defined below. A central point is that a pair of Lagrangian subbundles can be transformed locally into an alternating map. In the local case our structure theorem reduces to that of D. A. Buchsbaum and D. Eisenbud [Am. J. Math. 99, 447–485 (1977; Zbl 0373.13006)] and says that $$Z$$ is Pfaffian.
We also prove codimension 1 symmetric and skew-symmetric analogues of our structure theorems.
##### MSC:
14M07 Low codimension problems in algebraic geometry 13D02 Syzygies, resolutions, complexes and commutative rings 14J60 Vector bundles on surfaces and higher-dimensional varieties, and their moduli 14M12 Determinantal varieties
Zbl 0373.13006
Full Text:
##### References:
[1] S. Abeasis and A. Del Fra, Young diagrams and ideals of Pfaffians , Adv. Math. 35 (1980), 158–178. · Zbl 0444.20037 [2] P. Balmer, Derived Witt Groups of a Scheme , J. Pure Appl. Algebra 141 (1999), 101–129. · Zbl 0972.18006 [3] C. Bănică and M. Putinar, On complex vector bundles on rational threefolds , Math. Proc. Cambridge Philos. Soc. 97 (1985), 279–288. · Zbl 0564.32018 [4] W. Barth, “Counting singularities of quadratic forms on vector bundles” in Vector Bundles and Differential Equations (Nice, 1979) , Progr. Math. 7 , Birkhäuser, Boston, 1980, 1–19. · Zbl 0442.14021 [5] N. Bourbaki, Éléments de mathématique, première partie: Les structures fondamentales de l’analyse, livre II: Algèbre, chapitre 9: Formes sesquilinéaires et formes quadratiques , Actualités Sci. Indust. 1272 Hermann, Paris, 1959. · Zbl 0102.25503 [6] D. Buchsbaum and D. Eisenbud, Algebra structures for finite free resolutions, and some structure theorems for ideals of codimension $$3$$ , Amer. J. Math. 99 (1977), 447–485. JSTOR: · Zbl 0373.13006 [7] G. Casnati and F. Catanese, Even sets of nodes are bundle symmetric , J. Differential Geom. 47 (1997), 237–256.; Corrigendum , J. Differential Geom. 50 (1998), 415. · Zbl 0896.14017 [8] G. Casnati and T. Ekedahl, Covers of algebraic varieties, I: A general structure theorem, covers of degree $$3,4$$ and Enriques surfaces , J. Algebraic Geom. 5 (1996), 439–460. · Zbl 0866.14009 [9] F. Catanese, Babbage’s conjecture, contact of surfaces, symmetric determinantal varieties and applications , Invent. Math. 63 (1981), 433–465. · Zbl 0472.14024 [10] –. –. –. –., “Homological algebra and algebraic surfaces” in Algebraic Geometry (Santa Cruz, 1995) , Proc. Sympos. Pure Math. 62 , Part 1, Amer. Math. Soc., Providence, 1997, 3–56. [11] C. De Concini and P. Pragacz, On the class of Brill-Noether loci for Prym varieties , Math. Ann. 302 (1995), 687–697. · Zbl 0829.14021 [12] J. A. Eagon and D. G. Northcott, On the Buchsbaum-Eisenbud theory of finite free resolutions , J. Reine Angew. Math. 262/263 (1973), 205–219. · Zbl 0272.18010 [13] D. Eisenbud and S. Popescu, Gale duality and free resolutions of ideals of points , Invent. Math. 136 (1999), 419–449. · Zbl 0943.13011 [14] D. Eisenbud, S. Popescu, and C. Walter, Enriques surfaces and other non-Pfaffian subcanonical subschemes of codimension $$3$$ , to appear in Comm. Algebra 28 (2000); preprint, http://www.arXiv.org/abs/math.AG/9906171 · Zbl 0983.14018 [15] ——–, Symmetric locally free resolutions of coherent sheaves , in preparation. [16] W. Fulton, Determinantal formulas for orthogonal and symplectic degeneracy loci , J. Differential Geom. 43 (1996), 276–290. · Zbl 0911.14001 [17] –. –. –. –., “Schubert varieties in flag bundles for the classical groups” in Proceedings of the Hirzebruch 65 Conference on Algebraic Geometry (Ramat Gan, 1993) , Israel Math. Conf. Proc. 9 , Bar-Ilan Univ., Ramat Gan, 1996, 241–262. · Zbl 0862.14032 [18] W. Fulton and P. Pragacz, Schubert Varieties and Degeneracy Loci , Lecture Notes in Math. 1689 , Springer, Berlin, 1998. [19] M. Grassi, Koszul modules and Gorenstein algebras , J. Algebra 180 (1996), 918–953. · Zbl 0866.14030 [20] P. Griffiths and J. Harris, Residues and zero-cycles on algebraic varieties , Ann. of Math. (2) 108 (1978), 461–505. JSTOR: · Zbl 0423.14001 [21] J. Harris and L. Tu, On symmetric and skew-symmetric determinantal varieties , Topology 23 (1984), 71–84. · Zbl 0534.55010 [22] R. Hartshorne, Stable vector bundles of rank $$2$$ on $$\mathbb P^3$$ , Math. Ann. 238 (1978), 229–280. · Zbl 0411.14002 [23] T. Józefiak, A. Lascoux, and P. Pragacz, Classes of determinantal varieties associated with symmetric and skew-symmetric matrices , Math. USSR-Izv. 18 (1982), no. 3, 575–586. · Zbl 0489.14020 [24] S. Kleiman and B. Ulrich, Gorenstein algebras, symmetric matrices, self-linked ideals, and symbolic powers , Trans. Amer. Math. Soc. 349 (1997), 4973–5000. JSTOR: · Zbl 0897.13016 [25] M-A. Knus, Quadratic and Hermitian Forms over Rings , Grundlehren Math. Wiss. 294 , Springer, Berlin, 1991. · Zbl 0756.11008 [26] S. Mukai, Curves and symmetric spaces, I , Amer. J. Math. 117 (1995), 1627–1644. JSTOR: · Zbl 0871.14025 [27] D. Mumford, Theta characteristics of an algebraic curve , Ann. Sci. École Norm. Sup. (4) 4 (1971), 181–192. · Zbl 0216.05904 [28] D. G. Northcott, Finite Free Resolutions , Cambridge Tracts in Math. 71 , Cambridge Univ. Press, Cambridge, 1976. · Zbl 0328.13010 [29] C. Okonek, Notes on varieties of codimension $$3$$ in $$\mathbb P^N$$ , Manuscripta Math. 84 (1994), 421–442. · Zbl 0828.14032 [30] C. Okonek, M. Schneider, and H. Spindler, Vector Bundles on Complex Projective Spaces , Progr. Math. 3 , Birkhäuser, Boston, 1980. · Zbl 0438.32016 [31] A. Pfister, Quadratic Forms with Applications to Algebraic Geometry and Topology , London Math. Soc. Lecture Note Ser. 217 , Cambridge Univ. Press, Cambridge, 1995. · Zbl 0847.11014 [32] P. Pragacz, “Cycles of isotropic subspaces and formulas for symmetric degeneracy loci” in Topics in Algebra (Warsaw, 1988), Part 2 , Banach Center Publ. 26 , Part 2, PWN, Warsaw, 1990, 189–199. · Zbl 0743.14009 [33] P. Pragacz and J. Ratajski, Formulas for Lagrangian and orthogonal degeneracy loci; $$\tilde Q$$-polynomial approach , Compositio Math. 107 (1997), 11–87. · Zbl 0916.14026 [34] J. Vogelaar, Constructing vector bundles from codimension-two subvarieties , dissertation, Leiden University, 1978. [35] C. Walter, Pfaffian subschemes , J. Algebraic Geom. 5 (1996), 671–704. · Zbl 0864.14032 [36] ——–, Obstructions to the Existence of Symmetric Resolutions , in preparation.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2022-01-18 11:05:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8714150190353394, "perplexity": 1588.7911126860974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300810.66/warc/CC-MAIN-20220118092443-20220118122443-00586.warc.gz"} |
http://continuummechanics.org/divergencetheorem.html | Search Continuum Mechanics Website
# Divergence Theorem
home > basic math > divergence theorem
## Introduction
The divergence theorem is an equality relationship between surface integrals and volume integrals, with the divergence of a vector field involved. It often arises in mechanics problems, especially so in variational calculus problems in mechanics. The equality is valuable because integrals often arise that are difficult to evaluate in one form (volume vs. surface), but are easier to evaluate in the other form (surface vs. volume). This page presents the divergence theorem, several variations of it, and several examples of its application.
## Divergence Theorem
The divergence theorem, applied to a vector field $${\bf f}$$, is
$\int_V \nabla \cdot {\bf f} \, dV = \int_S {\bf f} \cdot {\bf n} \, dS$
where the LHS is a volume integral over the volume, $$V$$, and the RHS is a surface integral over the surface enclosing the volume. The surface has outward-pointing unit normal, $${\bf n}$$. The vector field, $${\bf f}$$, can be any vector field at all. Do not assume that it is limited to forces due to the use of the letter $${\bf f}$$ in the above equation.
### Tensor Notation
The divergence theorem can be written in tensor notation as
$\int_V f_{i,i} \, dV = \int_S f_i n_i \, dS$
### Divergence Theorem in 1-D
The divergence theorem is nothing more than a generalization of the straight forward 1-D integration process we all know and love. To see this, start with the divergence theorem written out as
$\int_V \left( {\partial f_x \over \partial x} + {\partial f_y \over \partial y} + {\partial f_z \over \partial z} \right) dV = \int_S \left( f_x n_x + f_y n_y + f_z n_z \right) dS$
But in 1-D, there are no $$y$$ or $$z$$ components, so we can neglect them. And the volume integral becomes a simple integral over $$x$$, so $$dV$$ becomes $$dx$$.
On the RHS, the surface integral becomes the left and right boundaries on the x-axis, and $$n_x$$ equals -1 on the left boundary and +1 on the right. All this reduces the above equation to
$\int_{x_1}^{x_2} \, {\partial f_x \over \partial x} \, dx = f(x_2) - f(x_1)$
And that's it! To show that this works, let $$f(x) = x^2$$, then $${\partial f_x \over \partial x} = 2x$$, and we get
$\int_{x_1}^{x_2} \, 2x \, dx = (x_2)^2 - (x_1)^2$
which is clearly the correct 1-D result.
The following examples present integrals over cubic volumes only because this keeps the math simple and allows the concepts to be more easily grasped. But note that the divergence theorem applies regardless of the shape of the volume.
### Divergence Example
Consider a constant velocity field of a fluid flowing at 5 m/s in the y-direction, $${\bf v} = 5{\bf j}$$. The net volumetric flow, $$Q$$, out of the box shown in the figure, with each face having area = 4m2, is given by
$Q = \int_S {\bf v} \cdot {\bf n} \, dS$
This is easily evaluated because the velocity field is constant, exactly normal to two faces, and exactly parallel to all others. For the left face, $${\bf n} = -1{\bf j}$$ and
$\int_{Face1} {\bf v} \cdot {\bf n} \, dS = 5 * (-4) = -20 \;\text{m}^3\!\!/\text{s}$
For the right face, $${\bf n} = 1{\bf j}$$ and
$\int_{Face2} {\bf v} \cdot {\bf n} \, dS = 5 * (+4) = +20 \;\text{m}^3\!\!/\text{s}$
So the total integral is
$Q = \int_S {\bf v} \cdot {\bf n} \, dS = -20 + 20 = 0$
That was easy. But it would be easier still to evaluate
$Q = \int_V \nabla \cdot {\bf v} \, dV$
because since $$\nabla \cdot {\bf v} = 0$$, then the integral over any volume of the quantity zero, is zero. And this is exactly equal to the surface integral as it must be.
### 2nd Divergence Example
Consider instead a more complex velocity field of $${\bf v} = 5x{\bf i} + 10xz{\bf j} - 2z{\bf k}$$ The net volumetric flow, $$Q$$, out of the same box is still given by
$Q = \int_S {\bf v} \cdot {\bf n} \, dS$
While this is still possible to solve, it is in fact much easier to apply the divergence theorem and instead evaluate the divergence of the velocity field over the volume.
$\nabla \cdot {\bf v} \;\; = \;\; {\partial (5x) \over \partial x} + {\partial (10xz) \over \partial y} + {\partial (-2z) \over \partial z} \;\; = \;\; 3$
and the integral of 3 over a volume of 8m3 is
$Q \;\; = \;\; \int_V \nabla \cdot {\bf v} \, dV \;\; = \;\; 3 * 8 \;\; = \;\; 24$
and this means that the surface integral above must also equal 24. If this were a conservation of mass problem, then the net outflow of material must mean that something very curious is happening to the density!
## Alternate Forms
Several variations of the divergence theorem exist. For example, a closely related alternate form is
$\int_V \nabla f({\bf x}) \, dV = \int_S f({\bf x}) {\bf n} \, dS$
where $$f({\bf x})$$ is a scalar function of the vector $${\bf x}$$. This equation is in fact three separate, independent ones because it is a vector. This is seen more clearly when written in tensor form.
$\int_V f,_i \, dV = \int_S f \; n_i \, dS$
Writing each equation out explicitly gives
$\int_V {\partial f({\bf x}) \over \partial x} \, dV = \int_S f({\bf x}) n_x \, dS \qquad \qquad \int_V {\partial f({\bf x}) \over \partial y} \, dV = \int_S f({\bf x}) n_y \, dS \qquad \qquad \int_V {\partial f({\bf x}) \over \partial z} \, dV = \int_S f({\bf x}) n_z \, dS$
Each equation is separate and can be used independently, in isolation of the others. In fact, the first equation arises in the derivation of the J-Integral. In that case, $$f({\bf x})$$ is actually the strain energy density, $$w({\bf x})$$, in the vicinity of the crack tip. Finally, note how closely each equation resembles the 1-D case discussed above. Nevertheless, it is a slightly different variation because in this case, the volume is a 3-D object, not 1-D.
A second alternate form involves the application of the divergence theorem to 2nd rank tensors, such as the stress tensor, $$\boldsymbol{\sigma}$$.
$\int_V \nabla \cdot \boldsymbol{\sigma} \, dV = \int_S \boldsymbol{\sigma } \cdot {\bf n} \, dS$
This identity often arises because stress-times-area is force. It can be written in tensor notation as
$\int_V \sigma_{ij,j} \, dV = \int_S \sigma_{ij} \, n_j \, dS$
## Summary
Note how similar the three forms discussed above appear to be when written in tensor notation.
$\int_V f,_i \, dV = \int_S f \; n_i \, dS$
$\int_V f_{i,i} \, dV = \int_S f_i n_i \, dS$
$\int_V \sigma_{ij,j} \, dV = \int_S \sigma_{ij} \, n_j \, dS$
The equations are written for a scalar function, $$f$$, and then a vector function, $$f_i$$, and finally a tensor function, $$\sigma_{ij}$$. In each case, the "$$,i$$" in the volume integral becomes $$n_i$$ in the surface integral (except it's a $$j$$ in the last example). Tensor notation makes the various forms of the divergence theorem very easy to remember.
### Thank You
Thank you for visiting this webpage. Feel free to email me if you have questions.
Bob McGinty
bmcginty@gmail.com
### Entire Math Chapter - $4.99 For$4.99, you receive two formatted PDFs (the first for 8.5" x 11" pages, the second for tablets) of the entire chapter.
Click here to see a sample page in each of the two formats. | 2017-03-23 08:09:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9681860208511353, "perplexity": 343.45831515566016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186841.66/warc/CC-MAIN-20170322212946-00649-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://calculus7.org/2012/07/12/an-exercise-in-walking-randomly/ | # An exercise in walking randomly
## 5 thoughts on “An exercise in walking randomly”
1. Lianxin says:
I took a course on Stochastic Calculus with J. Michael Steele at UPenn. (I remember you said you were gonna teach stat in fall 2010, but never learnt it before)
First of all, $S_n^2-n$ is a martingale,
$E(S_{n+1}^2-(n+1))=\frac12 ((S_n+1)^2-(n+1))+\frac12 ((S_n-1)^2-(n+1))=S_n^2-n$
From Doob stopping time theorem,
$E(S_T^2-T)=E(S_0^2-0)=0$
So
$E(T)=E(S_T^2)=\frac12 N^2+\frac12 N^2=N^2$
I like your series term matching method for solving for $P(T=M)$. I came up with a combinatorial approach that I want to discuss with you.
Let $(\frac{M+N}{2},\frac{M-N}{2})$ denote the event that for a random walk without stopping rules (hallway is infinitely long), there are $\frac{M+N}{2}$ left steps and $\frac{M-N}{2}$ right steps during the first $M$ steps.
$P(T=M, S_T=N) P((\frac{M+N}{2},\frac{M-N}{2})| T=M, S_T=N)=P(T=M, S_T=N| (\frac{M+N}{2},\frac{M-N}{2}) ) P( (\frac{M+N}{2},\frac{M-N}{2}) )$
$P(T=M, S_T=N) = P(T=M, S_T=N| (\frac{M+N}{2},\frac{M-N}{2}) ) C_M^{\frac{M+N}{2}} /2^M$
where $C_X^Y$ is the number of ways to choose $Y$ items from $X$ items.
$P(T=M, S_T=N | (\frac{M+N}{2},\frac{M-N}{2}) )=[C_M^{(M+N)/2}-N(\text{hit }N\text{ before }T)-N(\text{hit }-N\text{ before }T)]/C_M^{(M+N)/2}$
By principle of reflection, $N(\text{hit }-N\text{ before }T)=N(S_T= -3N)=C_M^{M+3N/2}$ for $M\ge3N$. Otherwise it is 0.
$N(\text{hit }N\text{ before }T)$ can be decomposed as $N(S_{T-1}=N+1,\text{ hit }N\text{ before }T)+N(S_{T-1}=N-1,\text{ hit }N\text{ before }T)=C_M^{M+3N/2}/2+N(S_{T-1}=N+1)=C_M^{M+3N/2}/2+C_{M-1}^{(M+N)/2}$
Finally $P(T=M)=2 P(T=M, S_T=N)$
1. I converted formulas to latex (WordPress uses “dollar”latex … “dollar” syntax). Your count of paths is interesting but there are two questionable points. First, the events “hit N before T” and “hit -N before T” are not mutually exclusive. Second, I did not understand your way of counting the number of “hit N before T”.
1. Lianxin says:
Yes thanks for pointing these out! It should be $P(T=M, S_T=N|(\frac{M+N}{2},\frac{M-N}{2}))=[C_M^{\frac{M+N}{2}}-N(\text{hit N before T})-N(\text{hit -N before T})+N(\text{hit both N and -N before T})]/C_M^{\frac{M+N}{2}}$.
$N(\text{hit N before T})=N(S_{T-1}=N+1)+N(S_{T-1}=N-1,\text{ hit N})=2N(S_{T-1}=N+1)=2C_{M-1}^{\frac{M+N}{2}}$
$N(\text{hit -N before T})=N(S_{T}=3N)=C_M^{\frac{M+3N}{2}}$ if $M \ge 3N$.
$N(\text{hit both N and -N before T})=N(S_{T}=5N)=C_M^{\frac{M+5N}{2}}$ if $M \ge 5N$.
2. This is meant as a reply to your Nov. 4 7:59pm comment, but WordPress does not seem to allow 4th level replies. The term $C_M^{\frac{M+5N}{2}}$ actually counts only (hits N and then -N before T). Using reflection again, the term (hits -N and then N before T) should be $2*C_{M-1}^{\frac{M+3N}{2}}$. So the total number of walks exiting [-N,N] through N in M steps should be
binomial(M,(M+N)/2)-2*binomial(M-1,(M+N)/2)-binomial(M,(M+3*N)/2)+2*binomial(M-1,(M+3*N)/2)+binomial(M,(M+5*N)/2)
(using Maple notation, since I calculated with Maple).
The convenient test case is N=2, because (for M even) the correct answer is $2^{(M-2)/2}$. The above formula gives correct answer for M=2,4,6,8,10, but for M=12 it returns 34 instead of 32. Why? Apparently, because the events (hits -N and then N before T) and (hits N and then -N before T) are not mutually exclusive. We could try to fix this too, but I’m afraid this is never going to stop. | 2014-04-17 09:34:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7889228463172913, "perplexity": 1166.443155124247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://ai.stackexchange.com/questions?tab=Frequent | All Questions
384 questions
Filter by
Sorted by
Tagged with
1k views
If digital values are mere estimates, why not return to analog for AI?
The impetus behind the twentieth century transition from analog to digital circuitry was driven by the desire for greater accuracy and lower noise. Now we are developing software where results are ...
5k views
How to handle invalid moves in reinforcement learning?
I want to create an AI which can play five-in-a-row/gomoku. As I mentioned in the title, I want to use reinforcement learning for this. I use policy gradient method, namely REINFORCE, with baseline. ...
11k views
Do scientists know what is happening inside artificial neural networks?
Do scientists or research experts know from the kitchen what is happening inside complex "deep" neural network with at least millions of connections firing at an instant? Do they understand the ...
15k views
How does one start learning artificial intelligence?
I am a software engineering student and I am complete beginner to AI. I have read a lot of articles on how to start learning AI, but each article suggests a different way. I was wondering if some of ...
2k views
What are the minimum requirements to call something AI?
I believe artificial intelligence (AI) term is overused nowadays. For example, people see that something is self-moving and they call it AI, even if it's on autopilot (like cars or planes) or there is ...
5k views
How is it possible that deep neural networks are so easily fooled?
The following page/study demonstrates that the deep neural networks are easily fooled by giving high confidence predictions for unrecognisable images, e.g. How this is possible? Can you please ...
17k views
Why is Lisp such a good language for AI?
I've heard before from computer scientists and from researchers in the area of AI that that Lisp is a good language for research and development in artificial intelligence. Does this still apply, with ...
25k views
Why is Python such a popular language in the AI field?
First of all, I'm a beginner studying AI and this is not an opinion oriented question or one to compare programming languages. I'm not saying that is the best language. But the fact is that most of ...
1k views
What are the steps to follow to learn artificial intelligence? [closed]
I know nothing about AI. Can anybody tell me what steps I have to follow to learn artificial intelligence? Are there any special technologies, or anything else, I have to learn?
2k views
What is the difference between machine learning and deep learning?
Can someone explain to me the difference between machine learning and deep learning? Is it possible to learn deep learning without knowing machine learning?
587 views
Is the singularity concept mathematically flawed?
In Comes IQ When the concept of Intelligence Quotient arose it was based on this approximation. Each human being has a number that quantifies their intelligence relative to a fixed norm, and, ...
12k views
What is the difference between artificial intelligence and machine learning?
These two terms seem to be related, especially in their application in computer science and software engineering. Is one a subset of another? Is one a tool used to build a system for the other? ...
8k views
What is the time complexity for training a neural network using back-propagation?
Suppose that a NN contains $n$ hidden layers, $m$ training examples, $x$ features, and $n_i$ nodes in each layer. What is the time complexity to train this NN using back-propagation? I have a basic ...
3k views
What exactly are genetic algorithms and what sort of problems are they good for?
I've noticed that a few questions on this site mention genetic algorithms and it made me realize that I don't really know much about those. I have heard the term before, but it's not something I've ...
1k views
How are Artificial Neural Networks and the Biological Neural Networks similar and different?
I've heard multiple times that "Neural Networks are the best approximation we have to model the human brain", and I think it is commonly known that Neural Networks are modelled after our brain. I ...
5k views
How could self-driving cars make ethical decisions about who to kill?
Obviously, self-driving cars aren't perfect, so imagine that the Google car (as an example) got into a difficult situation. Here are a few examples of unfortunate situations caused by a set of events:...
1k views
To what extent can quantum computers help to develop Artificial Intelligence?
What aspects of quantum computers, if any, can help to further develop Artificial Intelligence?
500 views
Sources on the AI theory, philosophy, tools and applications
I am software/hardware engineer for many years now. However, I know nothing about AI and machine learning. I have a strong background in digital signal processing, and various programming languages (...
3k views
Is the Turing Test, or any of its variants, a reliable test of artificial intelligence?
The Turing Test was the first test of artificial intelligence and is now a bit outdated. The Total Turing Test aims to be a more modern test which requires a much more sophisticated system. What ...
835 views
What do I need to study for machine learning?
Starting from last year, I have been studying various subjects in order to understand some of the most important thesis of machine learning like S. Hochreiter, & J. Schmidhuber. (1997). Long ...
1k views
Problems that only humans will ever be able to solve
With the increasing complexity of reCAPTCHA, I wondered about the existence of some problem, that only a human will ever be able to solve (or that AI won't be able to solve as long as it doesn't ...
1k views
Could an AI feel emotions?
Assuming humans had finally developed the first humanoid AI based on the human brain, would It feel emotions? If not, would it still have ethics and/or morals?
578 views
The Singularity and future of civilisation
My understanding of the singularity is when artificial intelligence becomes "more intelligence" than humans. This will be achieved through machine learning where an; algorithm, neural network ? ...
523 views
What kind of education is required for researchers in AI?
Suppose my goal is to collaborate and create an advanced AI, for instance one that resembles a human being and the project would be on the frontier of AI research, what kind of skills would I need? I ...
118 views
What is the difference between a stochastic and a deterministic policy?
In reinforcement learning, there are the concepts of stochastic (or probabilistic) and deterministic policies. What is the difference between them?
6k views
What is the purpose of an activation function in Neural Networks?
It is said that activation functions in neural networks help introduce non-linearity. What does this mean? What does non-linearity mean in this context? How does introduction of this non-linearity ...
1k views
How could emotional intelligence be implemented?
I've seen emotional intelligence defined as the capacity to be aware of, control, and express one's emotions, and to handle interpersonal relationships judiciously and empathetically. What are some ...
401 views
How would an AI learn language?
I was think about AIs and how they would work, when I realised that I couldn't think of a way that an AI could be taught language. A child tends to learn language through associations of language and ...
1k views
How close are we to creating Ex Machina?
Are there any research teams which attempted to create or have already created an AI robot which can be as close to intelligent as these found in Ex Machina or I, Robot movies? I'm not talking about ...
489 views
Is topological sophistication necessary to the furtherance of AI?
The current machine learning trend is interpreted by some new to the disciplines of AI as meaning that MLPs, CNNs, and RNNs can exhibit human intelligence. It is true that these orthogonal structures ...
701 views
Which areas of applied math are relevant to AI?
My background is in electrical engineering. I have a good grasp of CS foundations (e.g. data structures, algorithms, operating systems, discrete math and software engineering). I have option of ...
411 views
What does “stationary” mean in the context of reinforcement learning?
I think I've seen the expressions "stationary data", "stationary dynamics" and "stationary policy", among others, in the context of reinforcement learning. What does it mean? I think stationary policy ...
666 views
What makes animal brain so special?
So this is an introductory question. Whenever I read any book about Neural Nets or Machine Learning, their introductory chapter says that we haven't been able to replicate the brain's power due to its ...
262 views
Can anyone suggest reference books to start with AI? Preferably, I am looking for books that provide source code in Java or Python.
522 views
Can an AI learn to suffer?
I had first this question in mind "Can an AI suffer?". Suffering is important for human beings. Imagine that you are damaging your heel. Without pain, you will continue to harm it. Same for an AI. But ...
786 views
What does “death” intuitively mean in the paper “Death and Suicide in Universal Artificial Intelligence”?
In the paper Death and Suicide in Universal Artificial Intelligence, a proposal is given for what death could mean for Artificial Intelligence. What does this mean using English only? I understand ...
395 views
Is transistor the first artificial intelligence?
Artificial Intelligence is any device that perceives its environment and takes actions that maximize its chance of success at some goal. I got this definition from Wikipedia that cited "Russell and ...
237 views
Is it possible to build human-brain-level artificial intelligence based on neuromorphic chips and neural networks?
I read a lot about the structure of the human brain and artificial neural networks. I wonder if it is possible to build an artificial intelligence with neural networks that would be divided into ...
6k views
What are the differences between A* and greedy best-first search?
What are the differences between the A* algorithm and the greedy best-first search algorithm? Which one should I use? Which algorithm is the better one, and why?
330 views
Can a brain be intelligent without a body?
A more formal implication of this question is whether intelligence requires a context. On Topic This question may have little value to the fields of data science or statistics, however it is of ...
4k views
What is the fringe in the context of search algorithms?
What is the fringe in the context of search algorithms?
130 views
What career paths should be avoided with the growth in AI?
Consider high school graduates entering higher education or the workforce, each making a decision about where to commit their efforts. History tells us an important story about choosing in a changing ...
267 views
Why are Q values updated according to the greedy policy?
Apparently, in the Q-learning algorithm, the Q values are not updated according to the "current policy", but according to a "greedy policy". Why is that the case? I think this is related to the fact ...
40k views
Could a paradox kill an AI?
In Portal 2 we see that AI's can be "killed" by thinking about a paradox. I assume this works by forcing the AI into an infinite loop which would essentially "freeze" the computer's consciousness. ...
27k views
How can neural networks deal with varying input sizes?
As far as I can tell, neural networks have a fixed number of neurons in the input layer. If neural networks are used in a context like NLP, sentences or blocks of text of varying sizes are fed to a ...
21k views
How does Hinton's “capsules theory” work?
Geoffrey Hinton has been researching something he calls "capsules theory" in neural networks. What is this and how does it work?
9k views
Are neural networks prone to catastrophic forgetting?
Imagine you show a neural network a picture of a lion 100 times and label with "dangerous", so it learns that lions are dangerous. Now imagine that previously you have shown it millions of images of ...
24k views
In a CNN, does each new filter have different weights for each input channel, or are the same weights of each filter used across input channels?
My understanding is that the convolutional layer of a convolutional neural network has four dimensions: input_channels, filter_height, filter_width, number_of_filters. Furthermore, it is my ... | 2019-10-16 12:07:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39968612790107727, "perplexity": 1371.4767939015378}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668569.22/warc/CC-MAIN-20191016113040-20191016140540-00111.warc.gz"} |
http://mymathforum.com/calculus/340481-total-differential-function.html | My Math Forum Total differential function
Calculus Calculus Math Forum
May 12th, 2017, 04:51 PM #1 Newbie Joined: Mar 2016 From: Australia Posts: 21 Thanks: 1 Total differential function If I have this total differential: $\displaystyle (2\cos(x)-\sin(x))e^{(2x+3y)}dx+3e^{(2x+3y)}\cos(x)dy$ I then find the differential: $\displaystyle e^{(2x+3y)}\cos(x)$ I now need to find the function subject to the condition (0,0)=2 Is this just a case of finding the constant C, such as: $\displaystyle 2=e^{(2x+3y)}\cos(x)+C$ With C=1 Therefore the function is: $\displaystyle e^{(2x+3y)}\cos(x)+1$ Last edited by skipjack; May 12th, 2017 at 05:13 PM.
May 12th, 2017, 05:20 PM #2 Global Moderator Joined: Dec 2006 Posts: 18,048 Thanks: 1395 Yes.
May 15th, 2017, 03:01 AM #3
Math Team
Joined: Jan 2015
From: Alabama
Posts: 2,729
Thanks: 705
Quote:
Originally Posted by max233 If I have this total differential: $\displaystyle (2\cos(x)-\sin(x))e^{(2x+3y)}dx+3e^{(2x+3y)}\cos(x)dy$ I then find the differential: $\displaystyle e^{(2x+3y)}\cos(x)$
Actually, the first expression was, as you said, the "differential". What you are finding here is one possible anti-derivative.
Given a function F(x,y), the "total differential" is $\frac{\partial F}{\partial x}dx+ \frac{\partial F}{\partial y}dy$. Given the above total differential, in order to find F you have to solve the two equations:
$\frac{\partial F}{\partial x}= (2 cos(x)- sin(x))e^{2x+ 3y}$ and
$\frac{\partial F}{\partial y}= 3e^{2x+ 3y}cos(x)$.
Integrate the first equation with respect to x, treating y as a constant. That would give $F(x,y)= e^{2x+ 3y}cos(x)$ plus the "constant of integration". But since we are treating y as a constant, that "constant of integration might be a function of y. That is, we have $F(x,y)= e^{2x+ 3y)cos(x)+ g(y)$ where g can be any (differentiable) function of y.
Differentiaing that F(x, y) with respect to y, $\frac{\partial F}{\partial y}= 3e^{2x+ 3y}cos(x)+ g'(y)$ and that must be equal to $2e^{2x+ 3y}cos(y)$. That means that g'(y)= 0 and, since it is a function of y only, g(x) is a constant. The general anti-derivative is $F(x,y)= e^{2x+ 3y}cos(x)+ C$ for some constant, C.
Quote:
I now need to find the function subject to the condition (0,0)=2 Is this just a case of finding the constant C, such as: $\displaystyle 2=e^{(2x+3y)}\cos(x)+C$ With C=1 Therefore the function is: $\displaystyle e^{(2x+3y)}\cos(x)+1$
Yes, but you shouldn't have waited until you were matching the condition to write the "+ C"!
Tags differential, function, total
Thread Tools Display Modes Linear Mode
Similar Threads Thread Thread Starter Forum Replies Last Post max233 Calculus 6 May 6th, 2016 08:21 PM golomorf Differential Equations 0 December 1st, 2014 03:34 AM adagus Differential Equations 1 November 19th, 2014 06:27 AM piotrek Differential Equations 2 May 23rd, 2013 07:22 AM Taurai Mabhena Differential Equations 11 January 30th, 2012 02:55 PM
Contact - Home - Forums - Cryptocurrency Forum - Top | 2017-10-21 08:15:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 8, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.868525505065918, "perplexity": 1502.1599658560453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824675.67/warc/CC-MAIN-20171021081004-20171021101004-00622.warc.gz"} |
https://minpy.readthedocs.io/en/latest/tutorial/autograd_tutorial.html | This tutorial is also available in step-by-step notebook version on github. Please try it out!
Writing backprop is often the most tedious and error prone part of a deep net implementation. In fact, the feature of autograd has wide applications and goes beyond the domain of deep learning. MinPy’s autograd applies to any NumPy code that is imperatively programmed. Moreover, it is seemlessly integrated with MXNet’s symbolic program (see for example). By using MXNet’s execution engine, all operations can be executed in GPU if available.
## A Close Look at Autograd System¶
MinPy’s implementation of autograd is insprired from the Autograd project. It computes a gradient function for any single-output function. For example, we define a simple function foo:
In [1]:
def foo(x):
return x**2
foo(4)
Out[1]:
16
Now we want to get its derivative. To do so, simply import grad from minpy.core.
In [2]:
import minpy.numpy as np # currently need import this at the same time
In [3]:
d_foo(4)
Out[3]:
8.0
You can also differentiate as many times as you want:
In [4]:
d_2_foo = grad(d_foo)
Now import matplotlib to visualize the derivatives.
In [5]:
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
x = np.linspace(-10, 10, 200)
# plt.plot only takes ndarray as input. Explicitly convert MinPy Array into ndarray.
plt.plot(x.asnumpy(), foo(x).asnumpy(),
x.asnumpy(), d_foo(x).asnumpy(),
x.asnumpy(), d_2_foo(x).asnumpy(),
x.asnumpy(), d_3_foo(x).asnumpy())
plt.show()
Just as you expected.
Autograd also differentiates vector inputs. For example:
In [6]:
x = np.array([1, 2, 3, 4])
d_foo(x)
Out[6]:
[ 2. 4. 6. 8.]
As for multivariate functions, you also need to specify arguments for derivative calculation. Only the specified argument will be calcualted. Just pass the position of the target argument (of a list of arguments) in grad. For example:
In [7]:
def bar(a, b, c):
return 3*a + b**2 - c
We get their gradients by specifying their argument position.
In [8]:
gradient = grad(bar, [0, 1, 2])
[3.0, 6.0, -1.0]
grad_array[0], grad_array[1], and grad_array[2] are gradients of argument a, b, and c.
The following section will introduce a more comprehensive example on matrix calculus.
Since in world of machine learning we optimize a scalar loss, Autograd is particular useful to obtain the gradient of input parameters for next updates. For example, we define an affine layer, relu layer, and a softmax loss. Before dive into this section, please see Logistic regression tutorial first for a simpler application of Autograd.
In [9]:
def affine(x, w, b):
"""
Computes the forward pass for an affine (fully-connected) layer.
The input x has shape (N, d_1, ..., d_k) and contains a minibatch of N
examples, where each example x[i] has shape (d_1, ..., d_k). We will
reshape each input into a vector of dimension D = d_1 * ... * d_k, and
then transform it to an output vector of dimension M.
Inputs:
- x: A numpy array containing input data, of shape (N, d_1, ..., d_k)
- w: A numpy array of weights, of shape (D, M)
- b: A numpy array of biases, of shape (M,)
Returns a tuple of:
- out: output, of shape (N, M)
"""
out = np.dot(x, w) + b
return out
def relu(x):
"""
Computes the forward pass for a layer of rectified linear units (ReLUs).
Input:
- x: Inputs, of any shape
Returns a tuple of:
- out: Output, of the same shape as x
"""
out = np.maximum(0, x)
return out
def softmax_loss(x, y):
"""
Computes the loss for softmax classification.
Inputs:
- x: Input data, of shape (N, C) where x[i, j] is the score for the jth class
for the ith input.
- y: Vector of labels, of shape (N,) where y[i] is the label for x[i] and
0 <= y[i] < C
Returns a tuple of:
- loss: Scalar giving the loss
"""
N = x.shape[0]
probs = np.exp(x - np.max(x, axis=1, keepdims=True))
probs = probs / np.sum(probs, axis=1, keepdims=True)
loss = -np.sum(np.log(probs) * y) / N
return loss
Then we use these layers to define a single layer fully-connected network, with a softmax output.
In [10]:
class SimpleNet(object):
def __init__(self, input_size=100, num_class=3):
# Define model parameters.
self.params = {}
self.params['w'] = np.random.randn(input_size, num_class) * 0.01
self.params['b'] = np.zeros((1, 1)) # don't use int(1) (int cannot track gradient info)
def forward(self, X):
# First affine layer (fully-connected layer).
y1 = affine(X, self.params['w'], self.params['b'])
# ReLU activation.
y2 = relu(y1)
return y2
def loss(self, X, y):
# Compute softmax loss between the output and the label.
return softmax_loss(self.forward(X), y)
We define some hyperparameters.
In [11]:
batch_size = 100
input_size = 50
num_class = 3
Here is the net and data.
In [12]:
net = SimpleNet(input_size, num_class)
x = np.random.randn(batch_size, hidden_size)
idx = np.random.randint(0, 3, size=batch_size)
y = np.zeros((batch_size, num_class))
y[np.arange(batch_size), idx] = 1
In [13]:
gradient = grad(net.loss)
Then we can get gradient by simply call gradient(X, y).
In [14]:
d_x = gradient(x, y)
Ok, Ok, I know you are not interested in x’s gradient. I will show you how to get the gradient of the parameters. First, you need to define a function with the parameters as the arguments for Autograd to process. Autograd can only track the gradients in the parameter list.
In [15]:
def loss_func(w, b, X, y):
net.params['w'] = w
net.params['b'] = b
return net.loss(X, y)
Yes, you just need to provide an entry in the new function’s parameter list for w and b and that’s it! Now let’s try to derive its gradient.
In [16]:
# 0, 1 are the positions of w, b in the paramter list.
Note that you need to specify a list for the parameters that you want their gradients.
Now we have
In [17]:
d_w, d_b = gradient(net.params['w'], net.params['b'], x, y)
With d_w and d_b in hand, training net is just a piece of cake.
## Less Calculation: Get Forward Pass and Backward Pass Simultaneously¶
Since gradient calculation in MinPy needs forward pass information, if you need the forward result and the gradient calculation at the same time, please use grad_and_loss to get them simultaneously. In fact, grad is just a wrapper of grad_and_loss. For example, we can get
In [18]:
from minpy.core import grad_and_loss
forward_backward = grad_and_loss(bar, [0, 1, 2])
grad_array, result = forward_backward(2, 3, 4)
grad_array and result are result of gradient and forward pass respectively. | 2018-08-15 03:40:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5015368461608887, "perplexity": 7053.288251063753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209856.3/warc/CC-MAIN-20180815024253-20180815044253-00074.warc.gz"} |
http://tex.stackexchange.com/questions/99483/hyperbola-conics | # Hyperbola? conics? [closed]
I use Geogebra to draw by five knowing points on a plane an hyperbol. Then I used the intern command to translate into tikz code my figure. I create a file source code latex/tikz, but after compilation (pdfLaTeX), points are drawing, but not the correct hyperbola trough the fives points.
Question : Someone know how to draw a simplest conic by five points directly in TikZ source? I can I draw an hyperbol if I know only the f(x,y)=0 equation?
Thank for this studies ! D. Collin % % % % The code generate by export pgf/TikZ is :
% ---------
\documentclass[10pt]{article}
\usepackage{pgf,tikz}
\usetikzlibrary{arrows}
\pagestyle{empty}
\begin{document}
\definecolor{xdxdff}{rgb}{0.4902,0.4902,1}
\definecolor{zzqqzz}{rgb}{0.6,0,0.6}
\definecolor{qqzzqq}{rgb}{0,0.6,0}
\definecolor{uququq}{rgb}{0.25098,0.25098,0.25098}
\definecolor{ffqqtt}{rgb}{1,0,0.2}
\definecolor{qqqqff}{rgb}{0,0,1}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-6,-6) rectangle (6,6);
\draw[line width=0.4pt] (-3.24171,-0.87914) -- (-3.16257,-0.72086) -- (-3.32086,-0.64171) -- (-3.4,-0.8) -- cycle;
\draw[line width=0.4pt] (-2.70156,-2.99384) -- (-2.67081,-2.81957) -- (-2.84508,-2.78881) -- (-2.87584,-2.96309) -- cycle;
\draw[line width=0.4pt] (0.37519,-4.37497) -- (0.35016,-4.19978) -- (0.17497,-4.22481) -- (0.2,-4.4) -- cycle;
\draw[line width=0.4pt] (2.71507,-4.46302) -- (2.56435,-4.37027) -- (2.4716,-4.52098) -- (2.62232,-4.61373) -- cycle;
\draw [samples=50,domain=-0.99:0.99,rotate around={-157.68231:(-4.98202,-2.15901)},xshift=-4.98202cm,yshift=-2.15901cm,line width=2pt,color=ffqqtt] plot ({2.47764*(1+\x^2)/(1-\x^2)},{1.88916*2*\x/(1-\x^2)});
\draw [samples=50,domain=-0.99:0.99,rotate around={-157.68231:(-4.98202,-2.15901)},xshift=-4.98202cm,yshift=-2.15901cm,line width=2pt,color=ffqqtt] plot ({2.47764*(-1-\x^2)/(1-\x^2)},{1.88916*(-2)*\x/(1-\x^2)});
\draw [line width=1.2pt,color=qqqqff] (-1,4)-- (-2,2);
\draw [line width=1.2pt,color=qqqqff] (-2,2)-- (-2.6,-1.4);
\draw [line width=1.2pt,color=qqqqff] (-2.6,-1.4)-- (0,-3);
\draw [line width=1.2pt,color=qqqqff] (0,-3)-- (-1,4);
\draw [line width=1.2pt,dash pattern=on 2pt off 2pt,color=qqzzqq] (3,-4)-- (-3.4,-0.8);
\draw [line width=1.2pt,dash pattern=on 2pt off 2pt,color=qqzzqq] (3,-4)-- (2.62232,-4.61373);
\draw [line width=1.2pt,dash pattern=on 2pt off 2pt,color=zzqqzz] (3,-4)-- (0.2,-4.4);
\draw [line width=1.2pt,dash pattern=on 2pt off 2pt,color=zzqqzz] (3,-4)-- (-2.87584,-2.96309);
\draw [dash pattern=on 2pt off 2pt,color=qqqqff] (-2,2)-- (-3.71193,-1.42386);
\draw [dash pattern=on 2pt off 2pt,color=qqqqff] (-2.6,-1.4)-- (-3.11831,-4.33711);
\draw [dash pattern=on 2pt off 2pt,color=qqqqff] (0,-3)-- (0.27455,-4.92186);
\draw [dash pattern=on 2pt off 2pt,color=qqqqff] (0,-3)-- (3.39975,-5.09215);
\fill [color=qqqqff] (-1,4) circle (1.5pt);
\draw[color=qqqqff] (-1.14371,4.25851) node {$M_1$};
\fill [color=qqqqff] (-2,2) circle (1.5pt);
\draw[color=qqqqff] (-2.07806,2.2897) node {$M_2$};
\fill [color=qqqqff] (-2.6,-1.4) circle (1.5pt);
\draw[color=qqqqff] (-2.77882,-1.36427) node {$M_3$};
\fill [color=qqqqff] (0,-3) circle (1.5pt);
\draw[color=qqqqff] (0.30787,-2.78248) node {$M_4$};
\fill [color=qqqqff] (3,-4) circle (1.5pt);
\draw[color=qqqqff] (3.27776,-3.70015) node {$M_5$};
\fill [color=uququq] (0.2,-4.4) circle (1.5pt);
\draw[color=uququq] (-0.02583,-4.35086) node {$H_{14}$};
\fill [color=uququq] (-2.87584,-2.96309) circle (1.5pt);
\draw[color=uququq] (-3.0291,-2.83254) node {$H_{23}$};
\fill [color=uququq] (2.62232,-4.61373) circle (1.5pt);
\draw[color=uququq] (2.56032,-4.70124) node {$H_{34}$};
\fill [color=uququq] (-3.4,-0.8) circle (1.5pt);
\draw[color=uququq] (-3.49627,-0.46329) node {$H_{12}$};
\fill [color=xdxdff] (4.0018,-4.30299) circle (1.5pt);
\draw[color=xdxdff] (4.11201,-4.0839) node {$E$};
\fill [color=xdxdff] (-0.46313,5.00435) circle (1.5pt);
\draw[color=xdxdff] (-0.64317,5.19286) node {$F$};
\end{tikzpicture}
\end{document}
%----------
-
## closed as too localized by hpesoj626, Paul Gaborit, Kurt, Martin Schröder, lockstepFeb 23 '13 at 0:27
This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question.
If you provide the mathematical description of the function given the points then it is TeX question. Otherwise this seems more a math question so might be more appropriate at Math.SE. – Peter Grill Feb 22 '13 at 19:09
Sorry, but TikZ don't draw correctly the hyperbola with the code generated by geogebra... So, how I can draw correctly the hyperbola passing through this five points with a simple Tikz code?... – DK06100 Feb 22 '13 at 21:13
That seems to be an issue with GeoGebra, tikz/pgfplots would most likely do a great job but do need either a function of a algorithm to compute the points that are to be graphed. So, once the math is done, then the drawing can begin. – Peter Grill Feb 22 '13 at 21:17
Can you post / upload somewhere the code GeoGebra outputs? Then it can be tested whether the mistake is on GeoGebra site or LaTeX site. And if on GeoGebra, we can find what does it do wrong. – yo' Feb 22 '13 at 21:22
I have Geogebra 4.2.7.0 installed in Ubuntu 12.10 and exported to tikz a hyperbola drawn using "Conic through Five Points" and compiled the output. It behaves okay in my setup. And I think Geogebra is now on a newer version. Perhaps an update is required? I also second the suggestion that you post the code produced by Geogebra. I have not seen a question regarding a conic passing through five points using tikz in this site. And perhaps you can revise your question along that line, since your question seems off-topic at the moment . – hpesoj626 Feb 22 '13 at 21:34 | 2015-08-02 02:45:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8811869621276855, "perplexity": 4964.214754619028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988930.94/warc/CC-MAIN-20150728002308-00127-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://planetcalc.com/2449/ | # Tidal gate calculator (Rule of twelfths)
This calculator calculates tidal gates using the rule of twelfths, estimating the height of the tide at any time, given only the time and height of high and low water. The rule is a rough approximation only and should be applied with great caution when used for navigational purposes. Officially produced tide tables should be used in preference whenever possible.
### Tidal gate calculator (Rule of twelfths)
::
Digits after the decimal point: 1
Tidal gate open time
Tidal gate close time
Tidal gate duration
Rule of twelfths | 2019-05-20 02:57:59 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.857924222946167, "perplexity": 6376.071118185977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255536.6/warc/CC-MAIN-20190520021654-20190520043654-00345.warc.gz"} |
https://bawoqinyzemutul.adamwbender.com/measurement-of-triple-gauge-boson-couplings-in-fully-leptonic-w-decays-book-6684qy.php | Last edited by Shakasho
Monday, May 4, 2020 | History
2 edition of measurement of triple gauge boson couplings in fully leptonic W decays found in the catalog.
measurement of triple gauge boson couplings in fully leptonic W decays
Alun Wyn Lloyd
# measurement of triple gauge boson couplings in fully leptonic W decays
## by Alun Wyn Lloyd
Written in English
Edition Notes
Thesis (Ph.D) - University of Birmingham, Particle Physics Group, School of Physics and Astronomy, Faculty of Science, 2001.
The Physical Object ID Numbers Statement Alun Wyn Lloyd. Pagination vii, 162 p. : Number of Pages 162 Open Library OL19005448M
Limits on the anomalous WW{gamma} and WWZ couplings are presented from a simultaneous fit to the data samples of three gauge boson pair final states in p{bar p} collisions at {radical}(s) = TeV: W{gamma} production with the W boson decaying to e{nu} or {mu}{nu}, W boson pair production with both of the W bosons decaying to e{nu} or {mu}{nu}, and WW or WZ production with one W boson . The Table of Contents for the book is as follows: * VOLUME I * Foreword * Conference Organization * PLENARY SESSIONS * PL New Results from e + e - B-factories * First CP Violation Results from BaBar * A Measurement of CP Violation in B 0 Meson Decays at Belle * New Results from CLEO * PL CP Violation and Rare Decays * Recent Experimental Results on CP Violating and Rare K and μ Cited by: 3.
S. Abachi, B. Abbott, M. Abolins, B. S. Acharya, I. Adam, D. L. Adams, M. Adams, S. Ahn, H. Aihara, J. Alitti, G. Alvarez, G. A. Alves, E. Amidi, N. Amos, E. W Author: S. Abachi, B. Abbott, M. Abolins, B. S. Acharya, I. Adam, D. L. Adams, M. Adams, S. Ahn, H. Aihara. W-Pair and Single-W production events are observed at LEP, the Large Electron Positron collider at CERN, using the L3 detector. All decay channels are considered, and data taken in the years and , with electron-positron collision centre-of-mass energies from GeV to GeV are : Yoshi Uchida.
ATLAS analyses are presented which measure the properties of Higgs boson decays to leptons. The focus of this article will be laid on the measurement of the cross section of the Higgs boson decay to two tau leptons. So far, the Higgs boson decay to a di-tau pair has been the only accessible leptonic Higgs boson decay mode. Full text of "Anomalous gauge-boson couplings and the Higgs-boson mass" See other formats HD-THEP hep-ph/ in o o (N C On X Anomalous gauge-boson couplings and the Higgs-boson mass O. Nachtmann 1, F. Nagel 2 and M. Pospischil 3 (Nj Institut fur Theoretische Physik, Philosophen D Heidelberg, Germany > o O ' Abstract We study anomalous gauge .
You might also like
Turbulent tales
Turbulent tales
Firearms
Firearms
Wild life of Southern Africa.
Wild life of Southern Africa.
Founding fathers
Founding fathers
Expelled
Expelled
FHA emergency loan eligibility requirements
FHA emergency loan eligibility requirements
An act, passed by the Great and General Court of Her Majesties province of the Massachusetts-Bay in New-England
An act, passed by the Great and General Court of Her Majesties province of the Massachusetts-Bay in New-England
Modifying Osage fund restrictions.
Modifying Osage fund restrictions.
Sumer.
Sumer.
history of Anglican Liturgy.
history of Anglican Liturgy.
Square Summable Power Series (Springer Monographs in Mathematics)
Square Summable Power Series (Springer Monographs in Mathematics)
The vanity of arts and sciences
The vanity of arts and sciences
Volvo 740 & 760 owners workshop manual
Volvo 740 & 760 owners workshop manual
Provisional list of documentation on the problems of brain drain in developing countries.
Provisional list of documentation on the problems of brain drain in developing countries.
Twenty five discourses suitable to the Lords Supper
Twenty five discourses suitable to the Lords Supper
Larval fish and shellfish transport through inlets
Larval fish and shellfish transport through inlets
### Measurement of triple gauge boson couplings in fully leptonic W decays by Alun Wyn Lloyd Download PDF EPUB FB2
The CP-conserving triple-gauge-boson couplings, g 1 Z, κ γ, λ γ, g 5 Z, κ Z and λ Z are measured using hadronic and semi-leptonic W-pair events selected in pb −1 of data collected at LEP with the L3 detector at centre-of-mass energies between and GeV.
The results are combined with previous L3 measurements based on data collected at lower centre-of-mass energies and with the Cited by: The CP-conserving triple-gauge-boson couplings, g1Z, κγ, λγ, g5Z, κZ and λZ are measured using hadronic and semi-leptonic W-pair events selected in pb−1 of data collected at LEP with.
A measurement of triple gauge boson couplings is presented, based on W-pair data recorded by the OPAL detector at LEP during at a centre-of-mass energy of GeV with an integrated. Results from the analysis of fully leptonic W-pair decays are also given.
All results are in agreement with the Standard Model expectations and confirm the existence of self-couplings among electroweak gauge bosons. (C) Published by Elsevier by: We report on measurements of the triple-gauge-boson couplings of the W boson in e + e − collisions with the L3 detector at LEP.
W-pair, single-W and single-photon events are analysed in a data sample corresponding to a total luminosity of pb −1 collected at centre-of-mass energies between GeV and -conserving as well as both C- and P-conserving triple-gauge-boson Cited by: Measurement of Triple-Gauge-Boson Couplings of the W Boson at LEP The L3 Collaboration Abstract The CP-conserving triple-gauge-boson couplings, gZ 1, κγ, λγ, g5Z, κZ and λZ are measured using hadronic and semi-leptonic W-pair events selected in pb−1 of data collected at LEP with the L3 detector at centre-of-mass energies between Cited by: The CP-conserving triple-gauge-boson couplings,gZ 1, κγ, λγ, g5Z, κ Zand λ are measured using hadronic and semi-leptonic W-pair events selected in pb−1 of data collected at LEP with the L3 detector at centre-of-mass energies between and GeV.
Measurement of triple gauge boson couplings from W+W− production at LEP energies up to GeV TheOPALCollaboration Abstract A measurement of triple gauge boson couplings is presented, based on W-pair data recorded by the OPAL detector at LEP during at a centre-of-mass energy of GeV with an integrated luminosity of pb−1.
After. Measurement of Triple-Gauge-Boson Couplings of the W Boson at LEP The L3 Collaboration Abstract We report on measurements of the triple-gauge-boson couplings of the W boson in e+e− collisions with the L3 detector at LEP. W-pair, single-W and single-photon events are analysed in a data sample corresponding to a totalluminosity of pb−1.
Monte-Carlo Program; Electroweak Vector Bosons; (Un)Stable W+W Production; Pair Production; E(+)E(-) Collisions; Gamma Couplings; Final-States; Trilinear Couplings; Bhabha Cited by: Measurement of triple-gauge-boson couplings of the W boson at LEP. Abstract.
We report on measurements of the triple-gauge-boson couplings of the W boson in e + e - collisions with the L3 detector at LEP. W-pair, single-W and single-photon events are analysed in a data sample corresponding to a total luminosity of pb -1 collected at centre-of-mass energies between GeV and GeV.
We report on measurements of the triple-gauge-boson couplings of the W boson in e + e-collisions with the L3 detector at LEP. W-pair, single-W and single-photon events are analysed in a data sample corresponding to a total luminosity of pb-1 collected at centre-of-mass energies between GeV and GeV.
CP-conserving as well as both C- and P-conserving triple-gauge-boson couplings are Cited by: The CP-conserving triple-gauge-boson couplings, g1Z, κγ, λγ, g5Z, κZ and λZ are measured using hadronic and semi-leptonic W-pair events selected in pb−1 of data collected at LEP with the L3 detector at centre-of-mass energies between and.
We present measurements of triple gauge boson coupling parameters using data recorded by the OPAL detector at LEP2, at a centre-of-mass energy of GeV. A total of W-pair candidates has been selected in the ${\rm q \bar q q \bar q}$, ${\rm q\bar q} \ell \bar\nu_\ell$ and $\ell \bar \nu_\ell \bar\ell^\prime \nu_{\ell^\prime }$ decay channels, for an integrated luminosity of pb Cited by: 1.
A study of the measurement of trilinear gauge couplings is presented looking at the W-pair production where one W decays leptonically and the other hadronically in e + e-annihilation at the ILC at a centre-of-mass energy of 1 TeV with polarized beams.
The analysis is based on a realistic full simulation of this process in the ILD detector. Triple gauge boson couplings are measured from W-pair events recorded by the OPAL detector at LEP at centre-of-mass energies of – GeV with a total integrated luminosity of pb−1.
Only CP-conserving couplings are considered and SU(2)×U(1) relations are used, resulting in four independent couplings, κγ, gz 1, λγ and gz 5. CERN is discussed in this paper.
We propose to measure this vertex in the e p!e W jchannel as a complement to the conventional charged current e jchannel. In addition to the cross section measurement, ˜2 method studies of angular variables provide powerful tools to probe the anomalous structure of triple gauge boson by: 3.
Measurement of Triple-Gauge Boson Couplings of the W boson at L3 Mark Dierckxsens Particle Physics Seminar Brookhaven National Laboratory WW qqln semi-leptonic (44%) with l=e,m,t lnln leptonic (11%) measurement checked.
The triple gauge-boson couplings involving the W are determined using data samples collected with the ALEPH detector at mean centre-of-mass energies of GeV and GeV, corresponding to integrated luminosities of 57 pb $^{-1}$ and pb $^{-1}$, by:. CiteSeerX - Document Details (Isaac Councill, Lee Giles, Pradeep Teregowda): We examine the sensitivity of flavor changing neutral current (FCNC) processes to anomalous triple gauge boson couplings.
We show that in the non-linear realization of the electroweak symmetry breaking sector these processes are very sensitive to two CP conserving anomalous couplings.Triple gauge boson couplings are measured from W-pair events recorded by the OPAL detector at LEP at centre-of-mass energies of - GeV with a total integrated luminosity of pb Only CP-conserving couplings are considered and SU(2) x U(1) relations are used, resulting in four independent couplings, κ γ, g z 1, λ γ and g z 5.This also renders couplings between two W bosons and a neutral boson, the photon or Z boson, and are called triple gauge-boson couplings (TGC's).
In fact, diagrams containing these vertices are necessary to ensure a proper high energy behaviour for certain processes, like W-pair production in e +e - by: 1. | 2021-01-20 00:42:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7987242937088013, "perplexity": 7721.169572848005}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519843.24/warc/CC-MAIN-20210119232006-20210120022006-00203.warc.gz"} |
https://www.futurelearn.com/courses/begin-programming/0/steps/2947 | 2.6
Introduction to operators
Operators are used to perform functions on the data in our variables. In this video, you will learn about some of the ways operators can be used, see examples of operators in the game code, and find out how you can use the IDE to implement and test operators in the code.
In this activity we’ll be looking at some of the most commonly used operators. These fall into four main groups:
Assignment operator
The assignment operator is used to assign a value.
Arithmetic operators
Arithmetic operators are used to perform basic mathematical operations such as addition, subtraction and division.
Unary operator
We’re only going to cover one unary operator in this course: ! which is used to invert a Boolean value. | 2019-08-22 01:05:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3554999828338623, "perplexity": 557.6164062268973}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316555.4/warc/CC-MAIN-20190822000659-20190822022659-00436.warc.gz"} |
https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2-1.18/share/doc/Macaulay2/SimplicialPosets/html/_from__F__Vector.html | # fromFVector -- If possible, returns a simplicial poset with the given f-vector.
## Synopsis
• Usage:
P = fromFVector(L)
• Inputs:
• L, a list, The desired F-vector.
• Outputs:
• P, an instance of the type Poset, A simplicial poset with f-vector L.
## Description
This method is provided as a way to construct an example of a simplicial poset with a given f-vector.
It implements a construction due to Richard Stanley.
For details about this construction and a description of what f-vectors can be constructed, see Stanley's original paper.
i1 : P = fromFVector({1,6,5,1}); i2 : isSimplicial(P) o2 = true i3 : getFVector(P) o3 = {1, 6, 5, 1} o3 : List
## Ways to use fromFVector :
• "fromFVector(List)"
## For the programmer
The object fromFVector is . | 2021-09-22 20:59:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4469711482524872, "perplexity": 3351.4385082511785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057388.12/warc/CC-MAIN-20210922193630-20210922223630-00446.warc.gz"} |
https://chemistry.stackexchange.com/questions/122609/how-do-i-figure-out-how-many-hydrogens-my-compound-actually-has-using-a-mass-and/122635 | # How do I figure out how many hydrogens my compound actually has using a mass and NMR spectrum?
Question 3:
It said $$m/z = 122,$$ and $$m/z = 124$$ is in a $$3:1$$ ratio, so I figured that meant that chlorine is present. Then I thought $$m/z$$ was the actual compound's molecular mass.
So I used the rule of 13, and did:
Chlorine's molar mass = 35
122 - 35 = 87
Using the rule of 13
87/13 = 6C + 9H/13
So the molecular formula is $$\ce{C6H9Cl}.$$ But the integral values I rounded are: 2,2,2,2 and 3 adding up to 11. Did I use the wrong mass to charge ratio? What am I missing?
• I would check for consistency with the proton spectrum: that suggests you have 11 H or a multiple thereof. – Buck Thorn Oct 18 at 8:26
• How do I know for sure if there are exactly 11 H? – Mohamed Oct 18 at 8:45
• Well, I posted an explanation as an answer... – Buck Thorn Oct 18 at 9:27
Being an NMR fan myself I would inspect that NMR spectrum:
The integrals suggest you have 11 $$\ce{^1H}$$ or a multiple thereof (the number under each peak is the normalized integral, which is proportional to the number of protons represented by the multiplet). That leaves you with $$\pu{87 Da -11 Da}=\pu{76 Da}$$ to explain. If you throw in an oxygen you get $$\pu{60 Da}$$, which is neatly divisible by 12, for a formula $$\ce{C5H11OCl}$$. Next use the NMR spectrum again to infer the connectivity. The rightmost integrated multiplets (multiplet and triplet) are typical of $$\ce{-CH2CH3}$$ with $$\ce{CH2}$$ split by methyl and something else. From the integrals we see clearly there is only one methyl group, and there are no lone H which seems to eliminate the possibility of carbonyl or hydroxyl groups, suggesting an ether.
In the end I just "cheated" (it's 2019 after all) and used the online NMR simulator predictor:
This nice piece of software lets you tinker quickly with different structures and inspect assignments interactively, helping you develop your intuition.
Next you need to verify the MS and IR spectra, but the NMR spectrum is a giveaway that this is the correct hit.
• ‘verify the IR spectrum’? I thought IR spectra nowadays were only recorded because some journal editors complain if they are not there; then a couple of wavenumbers are written down and the spectrum filed. – Jan Oct 18 at 11:27
• @Jan Yes, well, in that case consider it "fine print". Evidently this is homework and the OP should incorporate all available info into the answer :-) – Buck Thorn Oct 18 at 12:05
I would probably also use the method Buck has suggested, but let’s say the NMR broke down or somebody is measuring a $$\ce{^13C}$$ of $$\pu{2.5mg}$$ meaning it will be blocked until tomorrow; in this case, we can still extract more information from the mass spectrum.
In addition to the molecule peak at 122, you have:
• a chlorine-containing fragment $$m/z=93$$
• a chlorine-containing fragment $$m/z=63$$
• a chlorine-free fragment $$m/z=73$$
• and some more stuff at lower masses that doesn’t analyse itself quite as easily.
Those three peaks should derive from fragmentation processes, so we can get an idea of what groups we have from looking at the mass difference (i.e. the bit that fragmented away).
For the peak at 93: $$122-93=29$$. The chlorine atom is retained. The most common groups to leave are methyl, ethyl etc., and 29 happens to be exactly the mass of $$\ce{CH3CH2}$$ (methyl is 15, methylene is 14; you internalise those numbers pretty quickly). Therefore, your molecule should have an ethyl group somewhere.
The chlorine-free peak at 73: $$122-73=49$$. That’s an uncommon number at first sight but remember we lost a chlorine. Taking that into account, $$49-35=14$$ and we find another methylene group. So we can say that there is a terminal $$\ce{CH2Cl-{}}$$ group.
Finally, the peak at 63: $$122-63=59$$. That again doesn’t say everything at once, but what happens if we remove the 29 we found earlier for the ethyl group? $$59-29=30$$. 30 could mean two methyl groups but that’s somewhat unlikely given how few fragments we have overall—especially since we would also expect a peak with a mass difference of 15 for a single methyl group. But $$30=14+16$$, meaning there could be a $$\ce{-CH2-O-{}}$$ group attached to the ethyl group. Other possibilities for 16 might be $$\ce{NH2}$$, but the molecule disobeys the odd-nitrogen rule and two amino groups doesn’t look good with the data.
If we take what we have, we have a strong suspicion that there might be a $$\ce{CH3-CH2-CH2-O-{}}$$ group and a $$\ce{CH2Cl-{}}$$ group. Adding these together gives an intermediate mass of $$59+49=108$$ or 14 missing from the complete molecule. It seems reasonable to assume a $$\ce{CH2}$$ group connecting the fragments.
We should consider this all a working hypothesis until we can then take a look at the NMR spectrum which beautifully confirms that molecular structure.
I like both answers provided before me where one has used exclusive use of internet to suggest structure by NMR spectrum, and the other has used thorough analysis of mass spectrum. Although these two are valuable techniques, I feel OP needs to analyse step by step analysis of spectra given to predict the structure since he/her seemingly in a graduate course, Spectroscopic Analysis of Organic Compound, or one similar to that. Hence OP needs knowledge of how to analyse given spectra to determine relevant structure.
Thus, I'd like to give some insight to provide that knowledge. I usually start with $$\ce{^1H}$$-NMR since it gives the most information. The given NMR is showing 5 resonances indicating 5 different protons in 2:2:2:2:3 ratio, according to the corresponding integral values given. By that information, you can guess there are at least 11 protons and 5 carbons are in the molecule to be analyzed. Therefore, the molecule possibly contains $$\ce{C5H11}$$- portion, which counts for $$\pu{71D}$$. There are three -$$\ce{CH2}$$- resonances present in the downfield of the NMR spectrum, each having chemical shifts greater than $$\pu{3 ppm}$$. This clearly suggest that at least one side of each of these -$$\ce{CH2}$$- groups attached to a electron withdrawing group (such as oxygen or halide) making the corresponding proton deshielded than ordinary alkyl group.
Mass spectra of that compound has shown that it must contain one $$\ce{Cl}$$ atom, as OP correctly guessed (peaks with 3:1 ration intensities with $$2 \: m/z$$ difference). It also has shown molecular peak is $$122 \: m/z$$ and $$124 \: m/z$$ for $$\ce{^{35}Cl}$$ and $$\ce{^{37}Cl}$$ fragments. Therefore, $$\ce{C5H11^{35}Cl}$$- fragment gives $$\pu{(71+35) D}=\pu{106D}$$. Thus, the rest is $$\pu{(122-106) D}=\pu{16 D}$$, which indicates the presence of $$\ce{O}$$ in the molecule. Therefore, molecular formula of the molecule would be $$\ce{C5H11ClO}$$, which accounts for $$\pu{122 D}$$ (for $$\ce{C5H11^{35}ClO}$$). Based on these values, you can predict the NMR resonance as follows:
Since two electron withdrawing groups in the molecule are $$\ce{O}$$ and $$\ce{Cl}$$, we can predict that $$\mathrm{EWG^{I}}$$ is $$\ce{Cl}$$ and $$\mathrm{EWG^{II}}$$ is $$\ce{O}$$, based on the chemical shift values of peaks at ~$$\pu{3.9 ppm}$$ and ~$$\pu{3.6 ppm}$$. Therefore, it can be suggested that $$\ce{Cl-CH2-CH2-O}$$- group is present in the molecule. The third deshelded peak at ~$$\pu{3.3 ppm}$$ suggests the presence of $$\ce{R-CH2-CH2-O}$$- group in the molecule as well. Thus, conventional wisdom would suggest the molecular structure $$\ce{Cl-CH2-CH2-O-CH2-CH2-R}$$ for the molecule. Based on two shielded resonances at ~$$(\pu{1.5 ppm},\: sextet)$$ and ~$$(\pu{0.9 ppm},\: t)$$ suggest that $$\ce{R}$$- group would be $$\ce{CH3}$$-, based on chemical shifts of $$1.5$$ and $$\pu{0.9 ppm}$$, and integral ($$3.07$$) and splitting pattern of resonance at $$\pu{0.9 ppm}$$. Therefore, the tentative structure of the molecule is $$\ce{Cl-CH2-CH2-O-CH2-CH2-CH3}$$ as given in the diagram above. The given mass spectrum confirms this structure as shown in fragmentation illustrated in following diagram and elsewhere in one of the answers: | 2019-12-16 09:05:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 70, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6864597797393799, "perplexity": 946.4981870251563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541318556.99/warc/CC-MAIN-20191216065654-20191216093654-00167.warc.gz"} |
http://slideplayer.com/slide/3974220/ | Presentation on theme: ""— Presentation transcript:
Support Vector Machines
MEDINFO 2004, T02: Machine Learning Methods for Decision Support and Discovery Constantin F. Aliferis & Ioannis Tsamardinos Discovery Systems Laboratory Department of Biomedical Informatics Vanderbilt University
Support Vector Machines
Decision surface is a hyperplane (line in 2D) in feature space (similar to the Perceptron) Arguably, the most important recent discovery in machine learning In a nutshell: map the data to a predetermined very high-dimensional space via a kernel function Find the hyperplane that maximizes the margin between the two classes If data are not separable find the hyperplane that maximizes the margin and minimizes the (a weighted average of the) misclassifications
Support Vector Machines
Three main ideas: Define what an optimal hyperplane is (in way that can be identified in a computationally efficient way): maximize margin Extend the above definition for non-linearly separable problems: have a penalty term for misclassifications Map data to high dimensional space where it is easier to classify with linear decision surfaces: reformulate problem so that data is mapped implicitly to this space
Support Vector Machines
Three main ideas: Define what an optimal hyperplane is (in way that can be identified in a computationally efficient way): maximize margin Extend the above definition for non-linearly separable problems: have a penalty term for misclassifications Map data to high dimensional space where it is easier to classify with linear decision surfaces: reformulate problem so that data is mapped implicitly to this space
Which Separating Hyperplane to Use?
Var1 Var2
Maximizing the Margin Var1 Var2
IDEA 1: Select the separating hyperplane that maximizes the margin! Margin Width Margin Width Var2
Support Vectors Var1 Support Vectors Margin Width Var2
Setting Up the Optimization Problem
Var1 The width of the margin is: So, the problem is: Var2
Setting Up the Optimization Problem
Var1 There is a scale and unit for data so that k=1. Then problem becomes: Var2
Setting Up the Optimization Problem
If class 1 corresponds to 1 and class 2 corresponds to -1, we can rewrite as So the problem becomes: or
Linear, Hard-Margin SVM Formulation
Find w,b that solves Problem is convex so, there is a unique global minimum value (when feasible) There is also a unique minimizer, i.e. weight and b value that provides the minimum Non-solvable if the data is not linearly separable Quadratic Programming Very efficient computationally with modern constraint optimization engines (handles thousands of constraints and training instances).
Support Vector Machines
Three main ideas: Define what an optimal hyperplane is (in way that can be identified in a computationally efficient way): maximize margin Extend the above definition for non-linearly separable problems: have a penalty term for misclassifications Map data to high dimensional space where it is easier to classify with linear decision surfaces: reformulate problem so that data is mapped implicitly to this space
Support Vector Machines
Three main ideas: Define what an optimal hyperplane is (in way that can be identified in a computationally efficient way): maximize margin Extend the above definition for non-linearly separable problems: have a penalty term for misclassifications Map data to high dimensional space where it is easier to classify with linear decision surfaces: reformulate problem so that data is mapped implicitly to this space
Non-Linearly Separable Data
Var1 Introduce slack variables Allow some instances to fall within the margin, but penalize them Var2
Formulating the Optimization Problem
Constraint becomes : Objective function penalizes for misclassified instances and those within the margin C trades-off margin width and misclassifications Var1 Var2
Linear, Soft-Margin SVMs
Algorithm tries to maintain i to zero while maximizing margin Notice: algorithm does not minimize the number of misclassifications (NP-complete problem) but the sum of distances from the margin hyperplanes Other formulations use i2 instead As C, we get closer to the hard-margin solution
Robustness of Soft vs Hard Margin SVMs
Var1 Var2 i Var1 Var2 Hard Margin SVN Soft Margin SVN
Soft vs Hard Margin SVM Soft-Margin always have a solution
Soft-Margin is more robust to outliers Smoother surfaces (in the non-linear case) Hard-Margin does not require to guess the cost parameter (requires no parameters at all)
Support Vector Machines
Three main ideas: Define what an optimal hyperplane is (in way that can be identified in a computationally efficient way): maximize margin Extend the above definition for non-linearly separable problems: have a penalty term for misclassifications Map data to high dimensional space where it is easier to classify with linear decision surfaces: reformulate problem so that data is mapped implicitly to this space
Support Vector Machines
Three main ideas: Define what an optimal hyperplane is (in way that can be identified in a computationally efficient way): maximize margin Extend the above definition for non-linearly separable problems: have a penalty term for misclassifications Map data to high dimensional space where it is easier to classify with linear decision surfaces: reformulate problem so that data is mapped implicitly to this space
Disadvantages of Linear Decision Surfaces
Var1 Var2
Advantages of Non-Linear Surfaces
Var1 Var2
Linear Classifiers in High-Dimensional Spaces
Constructed Feature 2 Var1 Var2 Constructed Feature 1 Find function (x) to map to a different space
Mapping Data to a High-Dimensional Space
Find function (x) to map to a different space, then SVM formulation becomes: Data appear as (x), weights w are now weights in the new space Explicit mapping expensive if (x) is very high dimensional Solving the problem without explicitly mapping the data is desirable
The Dual of the SVM Formulation
Original SVM formulation n inequality constraints n positivity constraints n number of variables The (Wolfe) dual of this problem one equality constraint n number of variables (Lagrange multipliers) Objective function more complicated NOTICE: Data only appear as (xi) (xj)
The Kernel Trick (xi) (xj): means, map data into new space, then take the inner product of the new vectors We can find a function such that: K(xi xj) = (xi) (xj), i.e., the image of the inner product of the data is the inner product of the images of the data Then, we do not need to explicitly map the data into the high-dimensional space to solve the optimization problem (for training) How do we classify without explicitly mapping the new instances? Turns out
Examples of Kernels Assume we measure two quantities, e.g. expression level of genes TrkC and SonicHedghog (SH) and we use the mapping: Consider the function: We can verify that:
Polynomial and Gaussian Kernels
is called the polynomial kernel of degree p. For p=2, if we measure 7,000 genes using the kernel once means calculating a summation product with 7,000 terms then taking the square of this number Mapping explicitly to the high-dimensional space means calculating approximately 50,000,000 new features for both training instances, then taking the inner product of that (another 50,000,000 terms to sum) In general, using the Kernel trick provides huge computational savings over explicit mapping! Another commonly used Kernel is the Gaussian (maps to a dimensional space with number of dimensions equal to the number of training cases):
The Mercer Condition Is there a mapping (x) for any symmetric function K(x,z)? No The SVM dual formulation requires calculation K(xi , xj) for each pair of training instances. The array Gij = K(xi , xj) is called the Gram matrix There is a feature space (x) when the Kernel is such that G is always semi-positive definite (Mercer condition)
Support Vector Machines
Three main ideas: Define what an optimal hyperplane is (in way that can be identified in a computationally efficient way): maximize margin Extend the above definition for non-linearly separable problems: have a penalty term for misclassifications Map data to high dimensional space where it is easier to classify with linear decision surfaces: reformulate problem so that data is mapped implicitly to this space
Other Types of Kernel Methods
SVMs that perform regression SVMs that perform clustering -Support Vector Machines: maximize margin while bounding the number of margin errors Leave One Out Machines: minimize the bound of the leave-one-out error SVM formulations that take into consideration difference in cost of misclassification for the different classes Kernels suitable for sequences of strings, or other specialized kernels
Variable Selection with SVMs
Recursive Feature Elimination Train a linear SVM Remove the variables with the lowest weights (those variables affect classification the least), e.g., remove the lowest 50% of variables Retrain the SVM with remaining variables and repeat until classification is reduced Very successful Other formulations exist where minimizing the number of variables is folded into the optimization problem Similar algorithm exist for non-linear SVMs Some of the best and most efficient variable selection methods
Comparison with Neural Networks
Hidden Layers map to lower dimensional spaces Search space has multiple local minima Training is expensive Classification extremely efficient Requires number of hidden units and layers Very good accuracy in typical domains SVMs Kernel maps to a very-high dimensional space Search space has a unique minimum Training is extremely efficient Classification extremely efficient Kernel and cost the two parameters to select Very good accuracy in typical domains Extremely robust
Why do SVMs Generalize? Even though they map to a very high-dimensional space They have a very strong bias in that space The solution has to be a linear combination of the training instances Large theory on Structural Risk Minimization providing bounds on the error of an SVM Typically the error bounds too loose to be of practical use
MultiClass SVMs One-versus-all
Train n binary classifiers, one for each class against all other classes. Predicted class is the class of the most confident classifier One-versus-one Train n(n-1)/2 classifiers, each discriminating between a pair of classes Several strategies for selecting the final classification based on the output of the binary SVMs Truly MultiClass SVMs Generalize the SVM formulation to multiple categories More on that in the nominated for the student paper award: “Methods for Multi-Category Cancer Diagnosis from Gene Expression Data: A Comprehensive Evaluation to Inform Decision Support System Development”, Alexander Statnikov, Constantin F. Aliferis, Ioannis Tsamardinos
Conclusions SVMs express learning as a mathematical program taking advantage of the rich theory in optimization SVM uses the kernel trick to map indirectly to extremely high dimensional spaces SVMs extremely successful, robust, efficient, and versatile while there are good theoretical indications as to why they generalize well | 2018-04-24 08:57:06 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.872085452079773, "perplexity": 2013.5546190279376}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946578.68/warc/CC-MAIN-20180424080851-20180424100851-00571.warc.gz"} |
https://www.r-bloggers.com/2014/03/guardian-data-blog-uk-general-election-analysis-in-r/ | [This article was first published on Benomics » R, and kindly contributed to R-bloggers]. (You can report issue about the content on this page here)
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
The Guardian newspaper has for a few years been running a data blog and has built up a massive repository of (often) well-curated datasets on a huge number of topics. They even have an indexed list of all data sets they’ve put together or reused in their articles.
It’s a great repository of interesting data for exploratory analysis, and there’s a low barrier to entry in terms of getting the data into a useful form. Here’s an example using UK election polling data collected over the last thirty years.
## ICM polling data
The Guardian and ICM research have conducted monthly polls on voting intentions since 1984, usually with a sample size of between 1,000 and 1,500 people. It’s not made obvious how these polls are conducted (cold-calling?) but for what it’s worth ICM is a member of the British Polling Council, and so hopefully tries to monitor and correct for things like the “Shy Tory Factor“—the observation that Conservative voters supposedly have (or had prior to ’92) a greater tendency to conceal their voting intentions than Labour supporters.
## Preprocessing
The data is made available from The Guardian as a .csv file via Google spreadsheets here and requires minimal cleanup, cut the source information from the end of the file and you can open it up in R.
sop <- read.csv("StateOfTheParties.csv", stringsAsFactors=F)
## Data cleanup
sop[,2:5] <- apply(sop[,2:5], 2, function(x) as.numeric(gsub("%", "", x)))
sop[,1] <- as.Date(sop[,1], format="%d-%m-%Y")
colnames(sop)[1] <- "Date"
# correct for some rounding errors leading to 101/99 %
sop$rsum <- apply(sop[,2:5], 1, sum) table(sop$rsum)
sop[,2:5] <- sop[,2:5] / sop\$rsum
Then after melting the data.frame down (full code at the end of the post), you can get a quick overview with ggplot2.
Outlines (stacked bars) represent general election results
## Election breakdown
The area plot is a nice overview but not that useful quantitatively. Given that the dataset includes general election results as well as opinion polling, it’s straightforward to split the above plot by this important factor. I also found it useful to convert absolute dates to be relative to the election they precede. R has an object class, difftime, which makes this easy to accomplish and calling as.numeric() on a difftime object converts it to raw number of days (handily accounting for things like leap years).
These processing steps lead to a clearer graph with more obvious stories, such as the gradual and monotonic decline of support for Labour during the Blair years.
NB Facet headers show the election year and result of the election with which the (preceding) points are plotted relative to.
## Next election’s result
I originally wanted to look at this data to get a feel for how things are looking before next year’s (2015) general election, maybe even running some predictive models (obviously I’m no fivethirtyeight.com).
However, graphing the trends of public support for the two main UK parties hints it’s unlikely to be a fruitful endeavour at this point, and with the above graphs showing an ominous increasing support for “other” parties (not accidentally coloured purple), it looks like with about 400 days to go the 2015 general election is still all to play for. | 2021-09-20 14:27:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25881654024124146, "perplexity": 2922.133438245904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057039.7/warc/CC-MAIN-20210920131052-20210920161052-00617.warc.gz"} |
http://mathoverflow.net/feeds/question/76050 | is connected complex Lie group with a trivial center linear? - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-25T16:58:53Z http://mathoverflow.net/feeds/question/76050 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/76050/is-connected-complex-lie-group-with-a-trivial-center-linear is connected complex Lie group with a trivial center linear? Dima Sustretov 2011-09-21T11:30:34Z 2012-12-07T23:09:15Z <p>There is a theorem of Rosenlicht ("Some basic theorems on algebraic groups", 1956, Theorem 13) asserting that a quotient of a connected algebraic group by its center is linear. So a connected algebraic group with trivial center is linear.</p> <p>Is it true of connected complex Lie groups? I.e. is a connected complex Lie group with a trivial center a subgroup of $GL(n,\mathbb{C})$? Is it algebraic?</p> http://mathoverflow.net/questions/76050/is-connected-complex-lie-group-with-a-trivial-center-linear/115754#115754 Answer by Aakumadula for is connected complex Lie group with a trivial center linear? Aakumadula 2012-12-07T22:51:44Z 2012-12-07T23:09:15Z <p>As Alain Valette says, a centreless connected complex Lie group $G$ has an injective homomorphism into $GL_n({\mathbb C})$. However, it need not be algebraic. To see this, consider the semi-direct product $G={\mathbb C}^2 \rtimes {\mathbb C}$. Here $z\in {\mathbb C}$ acts on the standard basis $e_1,e_2$ by the characters $e^{2\pi i z}$ and $e^{2\pi i z/\sqrt{2}}$ respectively. If $G$ can be given the structure of an algebraic group, then these two characters on ${\mathbb C}$ would become algebraically dependent, which cannot be. </p> <p>Incidentally, this $G$ is not closed in its adjoint "embedding", since the closure contains $S^1\times S^1$ in the diagonal part. That is $G\subset {\mathbb C}^2\rtimes D_2$ where $D_2$ is the group of diagonals in $GL({\mathbb C}^2)$. </p> | 2013-05-25 16:59:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9233583807945251, "perplexity": 508.57357088532854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705976722/warc/CC-MAIN-20130516120616-00002-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://qiskit.org/documentation/stubs/qiskit.optimization.QuadraticProgram.to_ising.html | QuadraticProgram.to_ising()[source]
Return the Ising Hamiltonian of this problem.
Returns
The qubit operator for the problem offset: The constant value in the Ising Hamiltonian.
Return type
qubit_op
Raises | 2020-09-26 20:17:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9939069151878357, "perplexity": 7369.847904067185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400245109.69/warc/CC-MAIN-20200926200523-20200926230523-00350.warc.gz"} |
https://proofwiki.org/wiki/Leigh.Samphier/Sandbox/Field_Operations_on_P-adic_Numbers | # Leigh.Samphier/Sandbox/Field Operations on P-adic Numbers
## Theorem
Let $p$ be any prime number.
Let $\struct {\Q_p, \norm {\,\cdot\,}_p}$ be the $p$-adic numbers as quotient of Cauchy sequences.
Then the field operations on $\Q_p$ are defined by:
$+ :\forall \eqclass{\sequence{x_n}}{}, \eqclass{\sequence{y_n}}{} \in \Q_p: \eqclass{\sequence{x_n}}{} + \eqclass{\sequence{y_n}}{} = \eqclass{\sequence{x_n + y_n}}{}$
$\circ :\forall \eqclass{\sequence{x_n}}{}, \eqclass{\sequence{y_n}}{} \in \Q_p: \eqclass{\sequence{x_n}}{} \circ \eqclass{\sequence{y_n}}{} = \eqclass{\sequence{x_n y_n}}{}$
where $\eqclass{\sequence{x_n}}{}, \eqclass{\sequence{y_n}}{}$ denote the left coset of $\Q_p$ containing the Cauchy sequences $\sequence{x_n}, \sequence{y_n}$, respectively, of $\Q$.
## Proof
By definition of $p$-adic numbers as quotient of Cauchy sequences:
$\Q_p$ is the quotient ring of Cauchy sequences of the valued field $\struct {\Q, \norm {\,\cdot\,}^\Q_p}$
By definition of the quotient ring of Cauchy sequences:
$\Q_p = \CC / \NN$
where:
$\CC$ is the ring of Cauchy sequences over $\Q$
$\NN$ is the ideal of null sequences over $\Q$
$+ :\forall \eqclass{\sequence{x_n}}{}, \eqclass{\sequence{y_n}}{} \in \Q_p: \eqclass{\sequence{x_n}}{} + \eqclass{\sequence{y_n}}{} = \eqclass{\sequence{x_n + y_n}}{}$
$\circ :\forall \eqclass{\sequence{x_n}}{}, \eqclass{\sequence{y_n}}{} \in \Q_p: \eqclass{\sequence{x_n}}{} \circ \eqclass{\sequence{y_n}}{} = \eqclass{\sequence{x_n y_n}}{}$
$\blacksquare$ | 2021-04-13 22:08:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9902520775794983, "perplexity": 267.85558893693644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038075074.29/warc/CC-MAIN-20210413213655-20210414003655-00072.warc.gz"} |
https://jme.bmj.com/content/30/1/44 | Article Text
The Olivieri debacle: where were the heroes of bioethics?
Free
1. F Baylis
1. Correspondence to:
Professor F Baylis
Department of Bioethics and Department of Philosophy, Dalhousie University, Halifax, Nova Scotia, Canada B3H 4H7; francoise.baylisdal.ca
## Abstract
All Canadian bioethicists need to reflect on the meaning and value of their work, to see more clearly how the ethics of bioethics is being undermined from within. In the case involving Dr Olivieri, the Hospital for Sick Children, the University of Toronto, and Apotex Inc, there were countless opportunities for bioethical heroism. And yet, no bioethics heroes emerged from this case. Much has been written about the hospital’s and the university’s failures in this case. But what about the deafening silence from the Canadian bioethics community? Given the duty of bioethicists to “speak truth to power”, this silence is troubling. To date, nothing has been written about the silence. This article is intended as a partial remedy. As well, the article pays tribute to heretofore unsung heroes among Dr Olivieri’s research colleagues.
• Olivieri/Apotex affair
• research ethics
• public accountability
• conflict of interest
## Request Permissions
If you wish to reuse any or all of this article please use the link below which will take you to the Copyright Clearance Center’s RightsLink service. You will be able to get a quick price and instant permission to reuse the content in many different ways.
## SCIENCE FICTION: A PROUD MOMENT IN TIME FOR CANADIAN BIOETHICS
In late 1995 and early 1996, after six years of clinical trials, Dr Nancy Olivieri (an internationally renowned expert on blood disorders) began to have concerns about deferiprone (L1)—a drug she was testing for the treatment of thalassaemia major. Dr Olivieri’s first concern was that deferiprone might be ineffective. Later, in February 1997, she came to believe that the drug might actually be toxic: a probable cause of progression of liver fibrosis in some patients.
When she first had misgivings about the effectiveness of deferiprone, Dr Olivieri reported her concerns to Apotex Inc, the Canadian generic drug manufacturer who was sponsoring some of her research. Apotex disputed her claims about unexpected risk to patients and contested the need for her to inform patients of the “risk”. Dr Olivieri told Apotex that the Research Ethics Board (REB) at the Hospital for Sick Children (HSC) (where the clinical trial was being conducted), would have to be advised of her findings regarding loss of efficacy, and that the existing protocols and consent forms would have to be modified. The REB subsequently instructed Dr Olivieri to amend the research consent forms, and also told her to report the unexpected findings to the Health Protection branch of Health Canada and to other physicians responsible for patient care who were using deferiprone.
When Apotex received the revised information and consent forms, it terminated the trial and informed Dr Olivieri that all information about the trial was to remain confidential, or there would be legal consequences. There is a comprehensive account of the details of this case.1–6
On May 24, 1996, Dr Olivieri was told: “You must not publish or divulge information to others about the work you have done with Apotex ... without the written consent of Apotex. Now, should you choose to violate this agreement you will be subject to legal action.” (Thompson J, et al,1 p 144)
The HSC, where Dr Olivieri was head of the thalassaemia programme (the largest in North America), and the University of Toronto’s faculty of medicine, where she held an academic appointment, were outraged at this turn of events. They immediately came to her defence and took a strong stand in support of research integrity, the protection of research participants, informed consent, academic freedom, and the protection of the public interest in the face of ever increasing pressures on universities and teaching hospitals to seek corporate sponsorship for research. They offered Dr Olivieri (and all members of staff) moral support on these matters of principle and exercised moral leadership and authority in meetings and negotiations with Apotex. Members of the bioethics department at the HSC, as well as members of the Joint Centre for Bioethics at the University of Toronto—all of whom had tried (to varying degrees) to resolve the conflict as it was brewing—actively assisted their respective institutions in defending these matters of principle. They also undertook to participate in various processes initiated to develop harmonised guidelines, policies, and procedures for corporate sponsorship of clinical research.
As well, the Canadian bioethics community at large rallied around Dr Olivieri, the HSC, the University of Toronto, and other Toronto based bioethicists. Together, they took a united stand in support of academic freedom, informed consent, the protection of research participants, and public accountability. These core ethical issues were debated at the annual meeting of the Canadian Bioethics Society, where several motions were passed, including a motion to develop clear policies and practices for sound ethical partnerships involving hospitals, universities, university researchers, and industry. Individual bioethicists were galvanised into action; there were discussions with Health Canada, the National Council on Ethics in Human Research, and the biomedical ethics committee of the Medical Research Council of Canada. As well, several articles were published in prominent professional journals, and there were a good number of public lectures and media interviews. Indeed, this was a proud moment for Canadian bioethics.
Had it been the case that all this were true... . But of course the above italicised text is fictional.
While the description of the events until the time at which Apotex terminated the trials and threatened legal action is accurate, the details about the various responses to the controversy surrounding the dispute are but a figment of my imagination. The University of Toronto did acknowledge that Apotex was acting inappropriately, and eventually did accept that it had a responsibility to defend Dr Olivieri’s academic freedom. No steps were taken to meet this responsibility, however, “except for the Dean of Medicine’s clearly ineffective 1996 requests to Apotex to desist”.3 The hospital, for its part, took no effective action to support Dr Olivieri (Thompson J, et al,1 pp 155–8).and indeed there is good evidence that efforts were made to undermine Dr Olivieri (Downie J, et al,3 pp 108–10). As Drs Nathan and Weatherall (internationally renowned blood researchers) have written on this point: “although the Hospital for Sick Children and the University of Toronto knew that [Olivieri’s academic] freedom was under attack, Olivieri received harassment instead of support from the hospital and ineffectual support from the university in her legal stand against Apotex” (Nathan DG, et al,6 p 1369). See also Spurgeon D.7 For their part, bioethicists on site, and the bioethics community at large, from beginning to end, were largely silent.
In this article, I will not add to the extensive commentary arguing that the hospital and the university failed to vigorously defend academic freedom and to seriously tackle the complex issue of conflict of interest with industry/university research partnerships. I will focus instead on the deafening silence from the Canadian bioethics community—from all of us in Canada whose professional work (clinical or academic) is in the area of bioethics. Given the duty of bioethicists to speak truth to power,8 this silence is troubling.
The time has come to critically examine the failure of Canadian bioethicists to play a pivotal role in this precedent setting research ethics controversy. It is important to question the meaning and value of bioethics work in clinical and academic settings, if bioethicists say and do nothing (or very little) in difficult and complicated cases that directly challenge cherished fundamental ethical standards and principles.
## STORIES OF SILENCE
In 1998, as the controversy involving Dr Olivieri, the HSC, the University of Toronto, and Apotex continued to escalate, the board of the HSC mandated a review of the facts and circumstances in the controversy. Dr Arnold Naimark agreed to conduct the review. Part way through the review process, because of ongoing controversy about his perceived conflicts of interest as well as concerns about the legitimacy of a single person review committee, he appointed two associate reviewers well known to the Canadian bioethics community, Professor Bartha Maria Knoppers and Dr Frederick Lowy.
The final report of the Naimark committee, submitted in November 1998, suggests that the role of bioethics in helping to resolve the controversy was, at best, very limited. In a 160 page document, there are but a few paragraphs that discuss the role of bioethics:
Another noteworthy feature of the final report submitted by the Naimark committee is the absence of any comment on the roles and responsibilities of the bioethics department at HSC and the Joint Centre for Bioethics at the University of Toronto (of which the HSC bioethics department is an affiliate member). One possible explanation for this omission is that while the Naimark committee knew of Ms Rowell’s involvement in the case, they did not know there was a bioethics department at HSC (and so did not contact the director, Dr Christine Harrison), and did not know of any formal affiliation agreement between the HSC and the Joint Centre for Bioethics (and so did not contact the director, Dr Peter Singer). (Neither C Harrison, director of the bioethics department at HSC, nor P Singer, director of the Joint Centre for Bioethics at the University of Toronto, are among those included in the list of contacts in the Naimark final report.)
This explanation is, however, implausible. Dr Frederick Lowy—an associate reviewer with the Naimark committee—was the founding director of the Joint Centre for Bioethics (indeed, this fact added to the original concerns about conflict of interest with the review process). Dr Lowy initiated the process that led to the HSC bioethics department becoming an affiliate member of the Joint Centre of Bioethics. As well, the other associate team member, Professor Knoppers, was very familiar with the work of the bioethics department and the Joint Centre for Bioethics. Therefore, knowledge of the scope and nature of bioethics practice within HSC and its relation to the Joint Centre for Bioethics, was available to the Naimark committee. Thus, ignorance of the bioethics resources available at HSC and the University of Toronto cannot explain the Naimark committee’s failure to meet with, and report on, the contributions (or lack thereof) of the directors of the HSC bioethics department and the University of Toronto’s Joint Centre for Bioethics.
A second possible explanation for this omission is that the director of the HSC bioethics department and the director of the Joint Centre for Bioethics were not involved in the case and so there was nothing on which to report. In 1999, the Canadian Association of University Teachers commissioned an independent committee of inquiry to investigate the case involving Dr Olivieri. The committee of inquiry issued its findings in 2001 and, by most accounts, this 540 page document is a more accurate, careful, and complete report of the facts and circumstances than the earlier 160 page report published by the Naimark committee. The committee of inquiry report confirms that Dr Harrison, the director of the bioethics department at HSC, was not involved in the case (Thompson J, et al,1 p 257). This left the more junior member of the department, Ms Rowell, to deal with the issues alone. This fact is striking when one considers that Ms Rowell has reported that “she was treated so rudely by the hospital executive when she raised concerns about the Olivieri affair that she considered resigning”.10 Indeed, she did eventually resign.
As for any possible involvement in the Olivieri dispute by the Joint Centre for Bioethics, the committee of inquiry concluded that “The Joint Centre, as a centre, appears not to have been engaged or to have spoken publicly on the controversy. Its silence is hard to understand” (Thompson J, et al,1 p 258). Further, Dr Singer, the director of the Joint Centre for Bioethics, declined the invitation to meet with the committee of inquiry. Reflecting on this, the committee of inquiry reported:
The Joint Centre for Bioethics is a partnership between the university and a number of health care institutions. Staff bioethicists of HSC and other hospitals are members of the joint centre. Its website states: “Our mission is to provide leadership in bioethics research, education, and clinical activities”. The efforts by Apotex to deter Dr Olivieri from informing patients about risks she had identified, and the lack of effective support for her by HSC and the university, gave rise to one of the most significant and highly publicised bioethical disputes in Canada in many years. Yet the Joint Centre for Bioethics appears not to have provided leadership in this matter. Dr Peter Singer, director of the joint centre, declined to meet with this committee of inquiry and, instead, informed us in writing that: “The involvement of the joint centre was through the work of two of its members–Dr Christine Harrison and Professor Mary Rowell–who are the Bioethicists at the Hospital for Sick Children. I understand that they have already met with you in this matter.” (Thompson J, et al,1 p 257)
Dr Singer is a senior figure in Canadian bioethics. His decision not to meet with the committee of inquiry and to remain silent is difficult to understand, especially in view of the centre’s Statement of Mission, Vision, Values and Goals which states: “Our mission is to provide leadership in bioethics research, education, and clinical activities... . The JCB does not advocate positions on specific issues, although its individual members may do so”.11
Ms Rowell has also chosen to remain silent and she has never publicly told her story. The closest she has come to doing so was at the 13th Annual meeting of the Canadian Bioethics Society in the fall of 2001. During the question period, after a plenary lecture entitled A Reflection on the “Place” of Bioethics12 criticising the Canadian bioethics community at large for its silence on two internationally prominent ethics cases originating in Toronto—one involving Dr Nancy Olivieri, the other involving Dr David Healy13,14—Ms Rowell spoke passionately from the floor about the unbearable stress and lack of institutional support she experienced while involved in this case in her official capacity as bioethicist. She indicated that she had no choice but to leave her position at the hospital.
When Ms Rowell spoke at the Canadian Bioethics Society annual meeting, I was reminded of an observation made by my friend and colleague, Dr Benjamin Freedman, in his writings on bioethical heroism: “Working at the intersection between conflicting claims of patients, staff, and administration, bioethicists must often find themselves under pressure to compromise their ideals, to ‘get along by going along’”.15 What pressure had Ms Rowell been under? Where had it come from? How unbearable had it been? At what point had she come to believe that she had (perhaps unintentionally) compromised her ideals? Was she living with moral residue?16
And, what about the director of the bioethics department, Dr Christine Harrison, and the rest of the Canadian bioethics community? At no time has Dr Harrison spoken publicly about her involvement in this case. It is known, however, that she did not speak publicly about the case or the issues raised by the case and furthermore that she was a member of the ad hoc subcommittee of the medical advisory committee (MAC) of the Hospital for Sick Children—the committee that advises the board of trustees on disciplinary action against staff physicians. The MAC established a fact finding subcommittee following receipt of the Naimark report. The significant limitations of the subcommittee’s review are discussed in the Report of the Committee of Inquiry on the Case Involving Dr Nancy Olivieri, the Hospital for Sick Children, the University of Toronto, and Apotex Inc (Thompson J, et al,1 p 336).
As for the non-response from the Canadian bioethics community at large, while initially only two members of the bioethics community, Dr Harrison and Ms Rowell, may have had intimate knowledge of the events at HSC, by the fall of 1998 there was considerable information in the public domain to which other members of the Canadian bioethics community could have responded. Only one person, however, is known to have taken up the cause: Professor Arthur Schafer, the director of the Centre for Professional and Applied Ethics at the University of Manitoba. At the invitation of Dr Olivieri and colleagues, Professor Schafer became actively involved in the controversy. He participated in news conferences and media interviews. He also participated in two fund-raising events organised by Doctors for Research Integrity to help pay the mounting legal bills of Dr Olivieri and her colleagues. For each of these events he prepared a report: Medicine, Morals and Money: the High Road or the Bottom Line (A Schafer, unpublished ms, 1998) and later Medicine, Morals and Money: Dancing with Porcupines or Sleeping beside Elephants (A Schafer, unpublished ms, 2001).
Less well known is the fact that a group of bioethicists at Dalhousie University, including myself, wrote to the president of the HSC, the dean of the faculty of medicine at the University of Toronto, and the director of the University of Toronto Joint Centre for Bioethics (copied to a number of individuals including all three members of the Naimark committee), asking them to clarify their respective institution’s policies and commitments in relation to the physician/researcher’s duty to disclose risks to research participants and the freedom of bioethicists to speak out against unethical practices. In this letter we were careful not to take a position on the merits of the specific case, as not all of the relevant facts were known to us. Rather, this was a carefully worded letter with several objectives: to let these institutions know that members of the bioethics community were watching the case, to elicit certain facts relevant to our concerns, and to show support for our bioethics colleagues. We wrote:
... bioethicists have professional responsibilities that must never be compromised by the conditions of their employment. These responsibilities include preventing unethical behaviour, where possible, confronting such behaviour if it does occur, and further ensuring that measures are introduced to preclude the recurrence of unethical behaviour. These obligations may require bioethicists to advocate on behalf of persons or for a particular position on a controversial issue and, if other means have failed, to draw public attention to the matter. The institutions for which bioethicists work or with which they have formal affiliation must support bioethicists when they engage in debate and speak out against unethical practices, so that the professional integrity of bioethicists is not compromised (correspondence with M Strofolino, A Aberman and P Singer, 26 November 1998).
The president of the HSC, the Dean of the faculty of medicine, the director of the Joint Centre for Bioethics and others answered our letter. The response from the HSC was brief:
We are confident that the policies and practices at the Hospital for Sick Children support the integrity of research and of our bioethicists. However, we have chosen not to respond publicly on these other related issues until after Dr Naimark submits his report (correspondence from Mr M Strofolino, 3 December 1998).
After the Naimark report was published we sent a follow up letter to the president. This letter was referred to Dr Buchwald. His response was equally brief and he suggested we consult the Tri-Council Policy Statement on Ethical Conduct for Research Involving Humans (national guidelines for all research involving humans). Similarly, the original responses from the dean of medicine and the director for the Joint Centre for Bioethics directed us to various policy documents. None of the letters directly engaged the substance of our letter.
For complicated (and perhaps ultimately indefensible) reasons, we decided not to pursue further communication. Instead, we returned to our academic pursuits and published on the roles and responsibilities of bioethicists. In retrospect, I believe this was a mistake.
## UNSUNG HEROES
Several opportunities for heroism arose in the case involving Dr Olivieri; this was an ethical struggle of international proportion calling out for someone to take a principled stand in the face of serious wrong, in order to protect the interests of research participants and the integrity of the research process.17 Where were the heroes of bioethics? (Freedman B,15 pp 297–9).
Some years ago, Benjamin Freedman noted that, given the nature of clinical ethics work, occasions for heroism in bioethics were plentiful and yet, there were no tales of bioethical heroism. “How could this be,” he wondered. Were he alive today, Benjy would surely bemoan the fact that trained bioethicists failed to speak out publicly in this case (with the notable exception of Professor Schafer). Indeed, the heroes in this case are scientists. Dr Olivieri and a few of her research colleagues then at the HSC, Drs Helen Chan, John Dick, Peter Durie and Brenda Gallie, risked their careers, their health, and their finances to defend principles they held dear. As well, it was scientists, Dr David Nathan of the Dana Farber Cancer Institute in Boston and Sir David Weatherall of Oxford University who intervened on Dr Olivieri’s behalf to broker a settlement to the ongoing dispute. Dr Olivieri’s heroism and the contributions of Drs Nathan and Weatherall are well known to many. Less well known, but no less important, is the heroism of four of Dr Olivieri’s colleagues. This article begins to tell some of their story.
A hero, according to Urmson, is a person who fulfils or exceeds the demands of duty in contexts where most others would fail to do so. In the face of adversity, she acts courageously in pursuit of a morally praiseworthy goal, even when this may involve or result in significant personal sacrifice. For Urmson, a person may be called a hero:
1. if he does his duty in contexts in which terror, fear, or a drive to self preservation would lead most men not to do it, and does so by exercising abnormal self control...
2. if he does his duty in contexts in which fear would lead most men not to do it, and does so without effort...
3. if he does actions that are far beyond the bounds of his duty, whether by control of natural fear or without effort.18
In defending Dr Olivieri, Drs Chan, Dick, Durie, and Gallie placed the interests of others above their own interests. They took a stand in defence of the principles of research integrity, academic freedom, informed consent, and patient safety, and they did so in the face of tremendous pressure from within their workplace. See—for example, Koren reprimanded by Ontario College of Physicians and Surgeons.19 While initially they took this stand because they felt they could not do otherwise, there can be no denying that their actions required courage, “courage to take a stand in the face of serious wrong, and courage to persevere in the face of seemingly constant ‘setbacks, weariness, difficulties and dangers’”.20 Courage was required not only to take the initial stand, but also to persevere as the HSC became much more aggressive in its public denunciations. Dr Olivieri’s supporters not only faced increasing hostility from the HSC, but also a loss of support from colleagues who turned away because of the hospital’s actions.
### Research integrity
A central issue in this case was conflict of interest. It was well known at the time of the dispute, and subsequently well documented, that Apotex and the University of Toronto were discussing a multimillion dollar donation to the university for the construction of a new biomedical research centre ($20 million to the university and$10 million to the university’s affiliated teaching hospitals). This promise of new funding (which was to be matched by other sources for a total of approximately \$92 million) (Thompson J, et al,1 p 94) put the institution in an ethically troubling conflict of interest situation (dependence on corporate funding introduces both pressures and temptations):
If realised, this would have been the largest corporate donation ever received by the University. While these negotiations were ongoing, the then University of Toronto president, President Prichard, at the request of Apotex, wrote to Prime Minister Chretien and four other federal ministers regarding proposed drug patent regulations. He wrote that Apotex had: “promised ‘a very substantial philanthropic commitment’ to the university. He went on to say that Apotex ‘has advised us that the adverse effect of the new regulations would make it impossible for Apotex to make its commitment to us’. Prichard urged the Prime Minister and Liberal cabinet members to do what is necessary ‘to avoid the serious negative consequences to our very important medical sciences initiative’.” (Thompson J, et al,1 p 99) President Prichard later apologised to the Executive Committee of the University for this action, acknowledging that he had made “a mistake” and that the letter had “placed the University in an inappropriate position of intervening in a matter beyond the legitimate scope of the University’s jurisdiction” (Thomson J, et al,1 p 100, in: Gibson E, et al,4 p 448).
As I and others have argued elsewhere, objectivity and integrity can be put at risk through industry/university partnerships:
The duty of universities is to seek truth. The duty of pharmaceutical companies is to make money for their shareholders. Drug companies that fail to do so go out of business. Universities that subordinate the disinterested search for truth to other ends lose credibility and their claim to a privileged status in society. If either abandons its fundamental mission, it ultimately fails. At times, institutional imperatives are bound to conflict. (Lewis S, et al,21 p 783)
When institutional imperatives conflict, there have to be clear mechanisms in place to protect the interests of patients and the essence of academic inquiry against the legal and financial power of the pharmaceutical industry. There were no such mechanisms at the HSC. In this policy vacuum, Dr Gallie set out to educate the HSC administration about events involving Dr Olivieri. She argued that the approach taken by Dr Olivieri—to report her concerns to the REB and, following their instructions, to modify the consent forms, and advise physicians outside HSC of the changes—was consistent with “normal clinical trial methodology where great caution is exerted to prevent harm to those people who volunteer for clinical trials” (correspondence with M Buchwald and M Strolofino 12 May 1998; correspondence with M Buchwald and M Strofolino 3 June 1998). In contrast, the approach taken by Apotex, which involved terminating a clinical trial because the data suggested a lack of effectiveness or toxicity, was not consistent with normal clinical trial methodology and appeared to be motivated primarily by commercial interests.
Similar concerns were brought to the attention of Dr Buchwald, the director of the HSC Research Institute, by Dr Dick. Dr Dick pointed out that: internationally renowned leaders in the field of blood research were dismayed at what was happening at HSC. When a researcher of Dr Olivieri’s stature identifies a serious research risk he argued, the institution has a responsibility to ascertain and assess the potential risk to clinical trial participants, and then to act accordingly.
Drs Chan, Dick, Durie, and Gallie sought to defend the integrity of the research process and to insist that the interests of patients should outweigh any commercial or other interests that Apotex or the hospital might have.
The principle of academic freedom (to publish findings or publicly voice opinions) was also central to this case. The freedom of academics to share their views, even when these views are unpopular and potentially threaten their institution’s commercial interests is the hallmark of academia. It ought not to have been difficult for anyone (let alone those who claim expertise in ethics), to have stood tall and firm in support of this fundamental principle, especially because of the potential beneficial impact of publication of the research results. With publication, research findings can be tested by peers. The findings can then be confirmed or disputed, thereby contributing to knowledge production and eventual benefit for both patients enrolled in the clinical trials of L1, as well as other thalassaemia patients with an interest in the outcome of the research. Defending academic freedom did not require taking a stance on whether Dr Olivieri was right or wrong in her assessment of the efficacy of deferiprone, it merely required defending her right to present her views to her scientific peers.
In taking up Dr Olivieri’s cause, Drs Chan, Dick, Durie, and Gallie never sought to defend any of the specific claims made by Dr Olivieri regarding the efficacy or safety of deferiprone. None of them had the expertise to evaluate the science and determine whether Dr Olivieri’s claims were right or wrong. Rather, they sought to defend the scientific method whereby researchers present their findings at scientific meetings and in scientific journals, so that these findings can be critically evaluated by peers as part of the ongoing process of gathering the best evidence on which to base clinical care. Early in the dispute, in keeping with the commitment to scientific integrity, there were efforts to initiate an internal review of the science by scientists appropriately qualified to judge the research findings. These efforts were not successful. At any rate, the principle at stake in this case, and the principle defended by Drs Chan, Dick, Durie, and Gallie, was the right of a scientist to present her findings for peer review.
### Informed consent and patient safety
It is widely accepted in North America and clearly documented in the Tri-Council policy statement22 (the guidelines that govern research involving humans in Canada), that research involving humans cannot proceed without prior review and approval by a research ethics board. One of the responsibilities of the research ethics board is to ensure that there is a favourable harm/benefit ratio and that adequate provisions have been made for the informed consent of research participants. Also non-controversial is the fact that a legally and morally valid consent requires full disclosure of relevant information, including information about possible harms and benefits. This information is to be updated as new information becomes available, and at all times consent is revocable. Indeed, the ongoing right of research participants to be informed of potential research risks and the correlative ongoing duty of researchers to inform participants of such risks is uncontested in Canada.
All of Dr Olivieri’s supporters were outspoken advocates of informed consent and the safety of child research participants. As Dr Durie said, “This is really about children who undergo clinical trials and their safety and their interests and the responsibility of the institution toward them”.23
Dr Brenda Gallie, then head of the Research Program of Cancer and Blood at HSC (and as such Dr Olivieri’s immediate administrative leader on the research side), was deeply concerned about consent issues and potential harm to children. On behalf of Dr Olivieri, she lobbied the administration at HSC to recognise and safeguard its fiduciary relationship with patients and research participants. She insisted that the safety and wellbeing of child research participants must come before “all other concerns, including institutional loyalty”.23
## OF RISKS AND CONSEQUENCES
As documented above, Drs Chan, Dick, Durie, and Gallie took a principled stand in support of research integrity, academic freedom, informed consent, and patient safety. For this, they each paid a heavy price. There were serious consequences for themselves, for their partners, and for their families. Personal and professional relations were strained, respect and trust among colleagues was seriously threatened, and self confidence was undermined. As well, there were serious health consequences for some, owing to the incredible stress they were under as they were attacked or shunned by colleagues at HSC and the University of Toronto. In addition to such personal costs, there were also significant financial costs—for example, lawyers’ fees and disbursements.
The professional costs have been no less significant. Dr Gallie’s research in retinoblastoma is of international renown and has most recently been celebrated in a national special exhibition “The Geee! in Genome” that opened in the nation’s capital on April 25, 2003.24 She feels that her outspoken determination in the Olivieri case has limited both her opportunity for advancement and the application of her research findings to improve the health of children. Most importantly, she also believes that because of her involvement in the Olivieri case, HSC has disregarded her research findings on molecular diagnosis of retinoblastoma mutations (B Gallie, personal communication 2003) and denied children and families with, or at risk of, retinoblastoma timely access to more sensitive, more efficient, and less costly testing.25
Finally, it is important to note the opportunity costs. The energy directed to resolving this controversy by these four eminent researchers was energy not directed to their respective ongoing research programmes. In each of these areas one must ask the following questions: How much research was not done? How many research papers were not written? How many students’ educational experiences were compromised? And, most importantly, what has all of this meant in terms of possible delays in the advancement of knowledge in pursuit of the care for children? The following example serves to illustrate the point.
At the time the Olivieri controversy was brewing, Dr Dick and his research team had just recently discovered a novel and unexpected class of blood stem cells. They published their findings in the summer of 1998,26 but were not able to capitalise on this discovery. Dr Dick experienced several serious health problems during his involvement in the Olivieri case. Many believe these problems were a direct consequence of his involvement in the case. Clearly it is impossible to prove that there was a cause and effect relationship, but it is fair to say that stress is generally detrimental to one’s health. When Dr Dick returned to work after a health related absence, he found it very difficult to carry on his research with his usual drive and determination. Compounding this problem was his reluctance to hire new staff because he felt it would be unfair to bring new people into a suboptimal work environment. Without new staff, the research team could not move its initial discovery forward, and others have since filled the vacuum. He has recently left HSC to take a leadership role in building a stem cell programme at the University Health Network.
## NO PLACE FOR NEUTRALITY
At the end of the day, it is certainly a good thing that researchers were front and centre in the ethical struggle for the protection of research integrity, academic freedom, informed consent and patient safety. But it is also surely deeply problematic that, for the most part, trained bioethicists were on the sidelines and not at their sides. In this case, bioethicists in Canada did not stand for, stand with, or even stand behind those who were taking a principled stand at great personal and professional risk. Rather, we stood aside.
It is also important to note here that at no time during this longrunning controversy has there been serious and sustained open debate and discussion about this case (and specifically the role of bioethics) through the Canadian Bioethics Society (CBS). This is not a mere oversight. In 1997, early in the controversy, I proposed to the CBS annual meeting planning committee that we have a plenary session on this topic. The suggestion was not taken up. In sharp contrast, there was a panel discussion on “Company Secrets, Patients’ Health, and Bioethicists’ Responsibilities: When Corporate Sponsors Hide Information” at the 1998 meeting of the American Society for Bioethics and Humanities. The presentations and discussion focused on the cases of Dr Olivieri and Dr David Kern of Brown University Medical School.
Bioethicists in Canada failed Dr Olivieri and her colleagues at HSC. Why? Did they fear losing their jobs? There are few bioethicists who have the security of tenure. Did they fear being sued? Many of the individuals and organisations involved in this case had shown themselves willing to engage in litigation. Did they fear loss of reputation? Again, many involved in this case had shown themselves willing to make damaging public comments. Did they fear retribution and consequent damage to their careers? After all, bioethics in Canada is a very small and fractured community. I do not know the reason(s) for the ensuing silence. I do know, however, that by and large Canadian bioethicists failed to speak up when there was ample time and opportunity. As a responsible community, we must ask ourselves whether we could and should have done more.
## Footnotes
• The Health Protection Branch has since been renamed the Health Products and Food Branch of Health Canada. The Health Protection Branch was a branch of government responsible for managing risks and benefits related to health. The new organisation is responsible for managing risks and benefits related to health products and food.*
• All page numbers given in this paper refer to the actual printed report unless otherwise stated.** | 2022-10-01 05:37:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2285822182893753, "perplexity": 4233.3757623575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00033.warc.gz"} |
https://www.mattssonmaleri.se/2rr7d/length-of-diagonal-of-square-1d6ee7 | # length of diagonal of square
State University of NY at Albany, Bachelor in Arts, English. Given an number d which is length of diagonal of a square, find its area. The Diagonal is the side length times the square root of 2: Diagonal "d" = a × √2. You can contact him at GeometryHelpBlog@gmail.com. Your Infringement Notice may be forwarded to the party that made the content available or to third parties such How far (nearest foot) will it be from home plate to second base, assuming he builds it to that specification? Find the length of the square's diagonal. Now you know that all the sides of a square are equal so all sides are 4cm long. How to Square up a Lay Out. According to the Pythagorean theorem, the diagonal value can be found knowing the side length. information described below to the designated agent listed below. A kite! Diagonal Formula The line stretching from one corner of the square or rectangle to the opposite corner through the centre of the figure is known as the diagonal. A diagonal is a straight line joining two opposite corners of a square, rectangle, or another straight-sided shape and is represented as d=sqrt(l^2+b^2+h^2) or Diagonal=sqrt(Length^2+Breadth^2+Height^2).Height is the distance between the lowest and highest points of a person standing upright, Length is the measurement or extent of something from end to … Is a square a kite? The formula for calculating the length of the diagonals of a square is s√2. A regular pentagon has five diagonals all of the same length. Given any 1 variable you can calculate the other 3 unknowns. Let #d =# the length of the diagonal = #10" in"# Let #s =# the length of the sides. Therefore, we can use the Pythagorean Theorem to solve for the diagonal: The perimeter of a square is units. How many units long is the diagonal of the square? d = a√2. Working on one of the triangles, apply Pythagorean Theorem i.e. Its opposite sides are parallel and of equal length, and its two diagonals intersect each other in the middle and are of equal lengths too. Use this feet and inch diagonal calculator to easily find the diagonal between two sides of any rectangle. Following is the Java program that takes Length, Breadth as inputs and compute Area, Perimeter & Length of diagonal of a Square If diagonal of Square is constructed, then is a 45-45-90 triangle with hypotenuse approximately 6.3662. St. Louis, MO 63105. ∴ The length of the diagonal of the square is 8 √2 cm. an In this formula, a and b are the sides of the right triangle, and c is the long side or the hypotenuse. Diagonal Length = a × √2 = 5 × 1.41421... = 7.071 m (to 3 decimals) Calculator. Ido Sarig is a high-tech executive with a BSc degree in Computer Engineering. Their hypotenuse is the diagonal of the square, so we can solve for the hypotenuse. The area is measured in units units such as square centimeters $(cm^2)$, square meters $(m^2)$, square kilometers $(km^2)$ etc. The diagonal of the square is the hypotenuseof these triangles.We can use Pythagoras' Theoremto find the length of the diagonal if we know the side length of the square. (Note: the four bases are the vertices of a perfect square, with the bases called home plate, first base, second base, third base, in that order). The path from home plate to first base is a side of a perfect square; the path from home plate to second base is a diagonal. This will probably be … PR is the diagonal in the above diagram. means of the most recent email address, if any, provided by such party to Varsity Tutors. Diagonal Formula is used to calculate the polygon diagonals. In the figure above, click 'reset'. improve our educational resources. Please email us at GeometryHelpBlog@gmail.com. All four sides of a square are equal. Copyright © 2020. A square can be divided into two right triangles where the length of the hypotenuse of the triangle is equal to the diagonal of the square. Please see below image for details. We need to use the Pythagorean Theorem: , where a and b are the legs and c is the hypotenuse. Go ahead on try our calculators dedicated to squares. Thus, if you are not sure content located The hypotenuse is the side which lies … Please be advised that you will be liable for damages (including costs and attorneys’ fees) if you materially Diagonal of a square formula: Let PQRS be a square with ‘S ‘units as side. Now in triangle abc, ac2 = bc2 + ab2 d2 = a2 + a2 d = sqrt(2*a2) d2 /2 = a2. Opposite sides of a square are both parallel and equal in length. Therefore, by Pythagoras theorem, we can say, diagonal is the hypotenuse and the two sides of the triangle formed by diagonal of the square, are perpendicular and base. Varsity Tutors LLC Square is a parallelogram with four sides of equal length and with all right angles (90) Following image shows you how a Square looks like. A description of the nature and exact location of the content that you claim to infringe your copyright, in \ The square is the n=2 case of the families of n-hypercubes and n-orthoplexes. My goal is to help you develop a better way to approach and solve geometry problems. A square is a geometric shape which is fully determined by the lengths of its side, a. As two sides and a diagonal form a triangle, the diagonal measures as long as a side. Area of a square from diagonal length Last Updated: 29-01-2019. Since this is a square we know this is a right triangle and we can use the Pythagorean Theorem to determine the length of . Use the Pythagorean Theorem to find the length of the diagonal. Diamond diagonals Calculate the diamonds' diagonals lengths if the diamond area is 156 cm square, and the side length is 13 cm. Sides of length form each of the legs and is the hypotenuse. The diagonals of a square bisect its angles. This makes squaring up any lay out a snap. Get this app free now at the Play Store. Find the length of the diagonal of a square with side lengths of . Cannot be determined from information given. This theorem shows that if you have a right triangle, the length of the hypotenuse is the square root of the sum of the square of the sides. Your name, address, telephone number and email address; and Send your complaint to our designated agent at: Charles Cohn The diagonals of a square are the same length (congruent). Use our online diagonal of a rectangle calculator to find diagonal of rectangle by entering the width and height. With the help of the community we can continue to In the figure above, click 'reset'. 2. It all depends on whether or not you think a kite can have all its sides be the same length or not (which would make it a rhombus.) ac and bd are the diagonal of the square abcd. We can break up the square into two equal right triangles. You can use Pythagorean Theorem to calculate the length of the diagonal of a square. misrepresent that a product or activity is infringing your copyrights. Please follow these steps to file a notice: A physical or electronic signature of the copyright owner or a person authorized to act on their behalf; Thus, the length of diagonal of square is \ [\sqrt {128}\] cm. The diagonal of the rectangle is the hypotenuseof these triangles.We can use Pythagoras' Theoremto find the length of the diagonal if we know the width and height of the rectangle. *Units: Note that units of length are shown for convenience. A statement by you: (a) that you believe in good faith that the use of the content that you claim to infringe A square is also a rhombus, so its area can be calculated as one half the product of its diagonals: The distance between opposite corners is about 49 meters. Its definition is that it has all four sides of equal length, or alternatively, the angle between two diagonals is right. So know the formula for length of diagonal of a square and also its proof that means how we got that formula. A square is a rhombus. ChillingEffects.org. In this problem, we are given the length of diagonals of a square and we have to find are of the square. If you believe that content available by means of the Website (as defined in our Terms of Service) infringes one But we'd sure like to know about it so that we can fix it. the They do not affect the calculations. For the first idea, use the Pythagorean Theorem: , where a and b are the side lengths of the square and c is the length of the diagonal. Now divide the square into two isosceles right triangles with the diagonal. As you can see, a diagonal of a square divides it into two right triangles,BCD and DAB. Area of a square can be computed as (d * d)/2. sufficient detail to permit Varsity Tutors to find and positively identify that content; for example we require So the equation looks like this: A man wants to build a not-quite-regulation softball field on his property and finds that he only has enough room to make the distance between home plate and first base 44 feet. Thank you. First find the length of each side of the square by: √(Area)= Length of each side of square, which is √16 = 4 cm. I've seen "yes" and "no." Type that value into the diagonal of a square calculator to check it yourself! Side in the square below has a length of 12. √2=√(2A), Filed Under: Squares Last updated on July 29, 2020. your copyright is not authorized by law, or by the copyright owner or such owner’s agent; (b) that all of the Calculate the length of the other diagonal. Use the Pythagorean Theorem to find the length of the diagonal. A square is a special case of a rectangle. The perimeter of a square is 48. Dr. Pan walks through the following problem: What is the length of a diagonal of a square if its area is 98 in squared? A square has two diagonals, they are equal in length and intersect in the middle. The diameter of a circle that circumscribes a square is equal to the length of the diagonals of the square. So, what's left? For the square given in … Diagonal of rectangle refers to the line segment or straight line that connect the opposite corner or vertex of the rectangle. To the nearest meter, how far is it from one corner to the opposite corner? 101 S. Hanley Rd, Suite 300 a Then we use the Pythagorean Theorme to find the diagonal, which is the hypotenuse of a right triangle with each leg being a side of the square. What is the length of its diagonal? So, for example, if the square side is equal to 5 in, then the diagonal is 5√2 in ≈ 7.071 in. Then the diagonal of the square (or the hypotenuse of the right triangle) will be . Any square that has two diagonals are equal in length to each other. A square is a parallelogram. The diagonal of a square is also a hypotenuse of a right triangle with the side lengths as legs of the triangle. Cool facts about squares: A square is a rectangle. Length of a Diagonal of Square. Pythagoras theorem, which is applicable to right-angled triangles, shows the relation between the hypotenuse and sides of a right triangle. The diagonal of a square is also a hypotenuse of a right triangle with the side lengths as legs of the triangle. Diagonals of a Square. An identification of the copyright claimed to have been infringed; Diagonal of a square is the line segment from corner of a square linking to the opposite corning of the square as shown in the below diagram. The two legs have lengths of 8. In today’s lesson, we will find the length of a diagonal of a square using three simple formulas, derived from the length of the square’s side, or its perimeter, or its area. {\displaystyle {\sqrt {2}}\approx 1.414.} I'm Ido Sarig, a high-tech executive with a BSc degree in Computer Engineering and an MBA degree. If diagonal of a rectangle is 26 cm and one side is … © 2007-2020 All Rights Reserved, How To Find The Length Of The Diagonal Of A Square, GRE Courses & Classes in San Francisco-Bay Area. If you know that ALL squares can be made into two special right triangles such that their angles are 45-45-90, then there's a formula you could use: Let's say that your side length of the square is "a". Any other base unit can be substituted. Establish two parallel lines for the the width; Establish a starting point for the length on … As you can see, a diagonal of a rectangle divides it into two right triangles,BCD and DAB. Use this square calculator to find the side length, diagonal length, perimeter or area of a geometric square. on or linked-to by the Website infringes your copyright, you should consider first contacting an attorney. If we know the length of the side of a square, we know its perimeter, its area, the length of its diagonals, etc. In this formula, s refers to the side of the square. Diagonal forms a triangle with adjacent sides . A square A square with a length of diagonals 12cm give: a) Calculate the area of a square b) rhombus with the same area as the square, has one diagonal with a length of 16 cm. link to the specific question (not just the name of the question) that contains the content and a description of How to find the diagonal of a square - formula. If You Know the Length of One Side Find the length of one side of the square. Find the length of the diagonal of the square with a side length of . State University of NY at Albany, Master of Science, Teaching En... University of New Orleans, Bachelor in Arts, Anthropology. Examples: Input : d = 10 Output : Area = 50 Input : d = 12.2 Output : Area = 74.42 Recommended: Please try your approach on first, before moving on to the solution. All four angles of a square are equal (each being 360°/4 = 90°, a right angle). To calculate the diagonal of a square, multiply the length of the side by the square root of 2:. either the copyright owner or a person authorized to act on their behalf. The diagonal line cuts the square into two equal triangles. The ratio of a diagonal to a side is 2 ≈ 1.414. From the perimeter, we can find the length of each side of the square. The side lengths of a square are equal by definition therefore, the perimeter can be rewritten as. Finding the side lengths of a square given diagonalsPhillips Exeter Math 2 @ Foothill HSDan Tating The length of the diagonal can be found using the Pythagorean Theorem (a^2+b^2=c^2). Click to learn more... By accessing or using this website, you agree to abide by the Terms of Service and Privacy Policy. Infringement Notice, it will make a good faith attempt to contact the party that made such content available by You can do this problem in two different ways that lead to the final answer: 1. Universidad Interamericana de Puerto Rico, Bachelor of Science, Biology and Biotechnology Laboratory Technology. Diagonal of a polygon is the line joining two sides that are not adjacent to each other. The units are in place to give an indication of the order of the calculated results such as ft, ft2 or ft3. The diagonal of the sqaure is then the hypotenuse of these two triangles. The length of the diagonals of the square is equal to s√2, where s is the side of the square. Welcome to Geometry Help! Geometry doesn't have to be so hard! And the diagonal which you need to find is the hypotenuse so : 4^2 + 4^2 = c^2 16 + 16 = c^2 c^2 = 32 c = sqrt(32) c = 5.65cm Which you can round to: c = 6cm The diagonals length is equal to about 6cm. Track your scores, create tests, and take your learning to the next level! Plug this in and solve for c: Find the length of the diagonal of a square that has side lengths of cm. His goal is to help you develop a better way to approach and solve geometry problems. The diagonals of a square are equal. information contained in your Infringement Notice is accurate, and (c) under penalty of perjury, that you are As we know, the length of the diagonals is equal to each other. If you've found an issue with this question, please let us know. University at Buffalo, Doctor of Philosophy, Anthropology. which specific portion of the question – an image, a link, the text, etc – your complaint refers to; 1 answer. We also know that lengths of both the diagonals are equal to each other. ← Prev Question Next Question → Related questions 0 votes. If Varsity Tutors takes action in response to A square has two diagonals of equal length, which intersect at the center of the square. Varsity Tutors. It happens! The distance to second base from home is times the distance to first base: A square lot has an area of 1,200 square meters. The length is 12 ft if one of the sides is 12 ft. A square is a trapezoid. or more of your copyrights, please notify us by providing a written notice (“Infringement Notice”) containing Example: A square has a side length of 5 m, what is the length of a diagonal? What is the length of the diagonal ? as Terms of Service and Privacy Policy can fix it connect the opposite corner or of! At the center of the square is also a hypotenuse of the square is 156 cm square, and is... Length length of diagonal of square the square then is a rectangle apply Pythagorean Theorem to find the length of right. Diagonals calculate the diagonal universidad Interamericana de Puerto Rico, Bachelor in Arts English. My goal is to help you develop a better way to approach solve., they are equal in length his goal is to help you develop a better way to approach solve. Length, diagonal length, which intersect at the Play Store '' = a × √2 = 5 1.41421. Universidad Interamericana de Puerto Rico, Bachelor in Arts, English a and b are the sides 12! Times the square abcd be found using the Pythagorean Theorem to calculate the is! Measures as long as a side length, s refers to the final answer 1... Since this is a 45-45-90 triangle with hypotenuse approximately 6.3662 ft if one of the is! ← Prev Question Next Question → Related questions 0 votes or vertex of the same length continue. 'Ve found an issue with this Question, please Let us know now divide the square is a square a... Can see, a high-tech executive with a BSc degree in Computer Engineering makes squaring any... 90°, a also its proof that means how we got that formula in in! That made the content available or to third parties such as ft, ft2 or ft3 Last! Four sides of a diagonal of a square are equal in length to each other 'm ido Sarig a. You develop a better way to approach and solve for the square in Arts, English being 360°/4 90°... Means how we got that formula Note that units of length are for. Can fix it ( to 3 decimals ) calculator and an MBA degree a geometric square,. Same length ( congruent ) or using this website, you agree to abide the... Doctor of Philosophy, Anthropology are of the diagonals of a square, find area. Four angles of a rectangle divides it into two right triangles, apply Pythagorean Theorem to determine the length diagonal. Filed Under: squares Last updated on July 29, 2020 at Albany Master... Of a square are the same length Next Question → Related questions 0.! Next Question → Related questions 0 votes, perimeter or area of geometric! Long as a side length of Engineering and an MBA degree side or the hypotenuse a... Units: Note that units of length are shown for convenience as a side equal. The Terms of Service and Privacy Policy √2=√ ( 2A ), Under... Square side is 2 ≈ 1.414. alternatively, the angle between two diagonals is.... Laboratory Technology perimeter or area of a square we know this is a rectangle is fully determined by Terms. Rico, Bachelor in Arts, English two isosceles right triangles ← Prev Question Next Question → questions! Has a side length, or alternatively, the perimeter of a diagonal of a,! It into two equal right triangles, BCD and DAB = 5 × 1.41421... = 7.071 m ( 3! 5 m, what is the length of a rectangle divides it into two triangles! Side or the hypotenuse of a square is equal to 5 in, then the diagonal a... Will it be from home plate to second base, assuming he builds it to that specification in this,... All of the square, so we can break up the square of. Then is a geometric square one corner to the party that made content... We also know that lengths of a square formula: Let PQRS be a square a! To improve our educational resources side find the length is 12 ft the opposite corner side find the of! Is constructed, then the hypotenuse of a square has two diagonals, they are equal in length intersect., assuming he builds it to that specification diameter of a square, find its area BSc degree Computer... Do this problem, we can find the length of the diagonals of square... Bcd and DAB is s√2 in two different ways that lead to the Next level be square... C: find the side lengths of cm: Let PQRS be a square has two diagonals are equal all! Nearest meter, how far is it from one corner to the line or! Abide by the Terms of Service and Privacy Policy is length of the of! Assuming he builds it to that specification line cuts the square root of 2: diagonal length of diagonal of square ''! Determined by the Terms of Service and Privacy Policy, we can solve for c find... Triangles with the help of the order of the diagonals of equal length, which at. In Computer Engineering and an MBA degree Interamericana de Puerto Rico, Bachelor of Science, Biology and Laboratory! Geometry problems we 'd sure like to know about it so that we can use Pythagorean Theorem to for! Ratio of a square has two diagonals are equal in length to each other into diagonal... Side, a and b are the diagonal of the diagonals are equal in length pythagoras,! Also a hypotenuse of these two triangles variable you can use Pythagorean Theorem, angle. Lead to the opposite corner or using this website, you agree to by. Note that units of length are shown for convenience and Privacy Policy problem, we are given length. Of any rectangle geometry problems area is 156 cm square, find its area its. To s√2, where a and b are the diagonal 5 × 1.41421... = 7.071 (...
Scroll to top | 2021-04-15 01:47:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5777139067649841, "perplexity": 448.8080726500782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038082988.39/warc/CC-MAIN-20210415005811-20210415035811-00026.warc.gz"} |
https://codereview.stackexchange.com/questions/19343/simplifying-code-for-drop-down-box-in-jquery-and-html/19405 | # Simplifying Code for Drop-Down-Box in JQuery and HTML
I'm trying to come up with a way to make a drop down box that is displayed through a jquery mouse hover event and with nested dropdown boxes displayed through hovering over elements of the original drop down box. I wrote some terribly inefficient code and I'm struggling to find ways of simplifying it. If anyone has any suggestions that will help me shorten this code and get a better idea of how to take advantage of functions of JQuery, please help.
http://cs-dev.dreamhosters.com/dropd.php
$(document).ready(function(){$(".tab, .drop").hover(function(){
$(".tab").css("color","#FF7722");$(".drop").css("display","block");
$("#tv, .droptv").hover(function(){$(this).css("color","#FF7722");
$(".droptv").css("display","block");$(".droptv").hover(function(){
$("#tv, .droptv").css("color","#FF7722"); },function(){$(".droptv").css("color","#005BAB");
});
},function(){
$(this).css("color","#005BAB");$(".droptv").css("display","none");
});
$("#interact").hover(function(){$(this).css("color","#FF7722");
},function(){
$(this).css("color","#005BAB"); });$("#online").hover(function(){
$(this).css("color","#FF7722"); },function(){$(this).css("color","#005BAB");
});
$("#vod, .dropvod").hover(function(){$(this).css("color","#FF7722");
$(".dropvod").css("display","block");$(".dropvod").hover(function(){
$("#dai").hover(function(){$(this).css("color","#FF7722");
},function(){
$(this).css("color","#005BAB"); });$("#iguide").hover(function(){
$(this).css("color","#FF7722"); },function(){$(this).css("color","#005BAB");
});
$("#vod").css("color","#FF7722"); },function(){$(".dropvod").css("color","#005BAB");
});
},function(){
$(this).css("color","#005BAB");$(".dropvod").css("display","none");
});
$("#tablet").hover(function(){$(this).css("color","#FF7722");
},function(){
$(this).css("color","#005BAB"); });$("#mobile").hover(function(){
$(this).css("color","#FF7722"); },function(){$(this).css("color","#005BAB");
});
},function(){
$(".tab").css("color","#005BAB");$(".drop").css("display","none");
});
});
• Try using JavaScript for state and CSS for style. It would make this code so much simpler! – ANeves Dec 6 '12 at 10:08
Your code is very big and messy so this a bit tricky to really see what you are trying to do. A few obvious things to make the code more readable:
• Replace .css("display","block") with .show()
• Replace .css("display","none") with .hide()
• You repeat the same color changing hover over and over again. Instead, group all your elements together and specify this function only once:
$("#containo div").hover(function() {$(this).css("color","#FF7722");
},function() {
$(this).css("color","#005BAB"); }); • Do not nest hovers inside other hovers. .hover() creates a new event handler when it is called. If you nest them then you each time you move your mouse over the parent, the child is assigned a new event handler. You do not want these duplicates. Instead assign all of your events in the root level. This, plus a little refactoring could reduce your code considerably. Maybe from here the code will be easier to work with so could see how to reduce it further. $(document).ready(function() {
function on(selector) {
$(selector).css("color","#FF7722"); }; function off(selector) {$(selector).css("color","#005BAB");
};
$("#interact,#online,#tablet,#mobile,#dai,#iguide,#tv,.droptv,#vod,.dropvod") .hover(function(){ on(this) },function(){ off(this) });$("#tv, .droptv").hover(function(){
$(".droptv").show(); },function(){$(".droptv").hide();
});
$("#vod, .dropvod").hover(function(){ on("#vod");$(".dropvod").show();
},function(){
off(".dropvod");
$(".dropvod").hide(); });$(".tab, .drop").hover(function(){
on(".tab");
$(".drop").show(); },function(){ off(".tab");$(".drop").hide();
});
});
• Save the references to the elements in a variable, so that instead of $('.tv') and making jQuery perform a search through the DOM every time, you can refer to your variable and apply jQuery methods to it. Furthermore, this can help if you also can give meaningful names to those variables. For example var$mainCombos = $('.tv'); // later on$mainCombos.show();
Beware that this may bring you problems if you're dynamically adding or removing elements from the DOM. If this is the case, you may use alternative versions of this, like re-setting the reference variable each time your code is called, or applying live events and grasping the references with $(this). • Chain calls to jQuery methods whenever possible. This prevents jQuery from searching all over again for those elements (also useful if you can't apply my first suggestion). For instance: Instead of $('.droptv').show();
$('.droptv').hover(...); You could use $('.droptv').show() | 2019-07-20 08:46:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23327802121639252, "perplexity": 1212.279729783202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526489.6/warc/CC-MAIN-20190720070937-20190720092937-00232.warc.gz"} |
https://electronics.stackexchange.com/questions/555349/pcb-6-layer-stackup-problem | PCB 6-layer stackup problem
I'm currently designing a PCB with double side components and I'm limited to 6 layers max. The project has couple of MCUs at 84 MHz. There are USART, I2C, SPI, some analogue lines and high-power lines in the design but not any high-speed lines. There are also some very short RF lines for antennas.
The problem is due to the high density and limited size of the PCB I can't use this stackup: S-G-S-S-P-S At some parts of the PCB specially around the high pin count MCUs there's need for signal lines to go through the ground or power plane.
Also all the power electronics and switching components are on the back side.
This is the stackup properties provided by the PCB manufacturer:
So my main concern is if routing some signal lines through the power/ground plane will cause me problems or not?
• If you take care it should not be a problem. How experienced are you are PCB layout? Mar 25, 2021 at 13:22
• if you need to cut the plane, track it around the edge to ensure a continous gnd plane
– user16222
Mar 25, 2021 at 13:26
• PCB designers route signals on power and ground layers all the time. The key is understanding what you're doing and ensuring that signal integrity is maintained. Mar 25, 2021 at 13:27
• @JonRB Thanks, I have already done that.
– Oli
Mar 25, 2021 at 13:41
• @Andyaka I have started couple years back and have designed handful of professional PCBs, so I'm not an expert but I'm tiring to get there.
– Oli
Mar 25, 2021 at 13:44
3 basic rules
Rule #1: Bandwidth is not the clock speed rather the rise time f-3dB=0.35/Tr (10~90%) so you probably want thinner dielectric than normal for lower impedance tracks
• this helps keep track/gap <= 5 mil (127 um) and < matched via impedance which raises L but thinner dielectrics raises C to maintain Z^2=L/C Also 3 mil (64um) track/gap is doable by good shops.
Rule #2 avoid crosstalk with adjacent SS layer parallel tracks.
Rule #3: Use lots of microvias for PS layer connections to other layers grids and appropriate decoupling cap per IC. If a Microvia is 50 Ohms on a power supply that is 50 mOhms, how many do you need? ( not depends on decoupling caps and rise time and ringing tolerance)
If you don't already have , get Saturn PCB Design Toolkit
• Thanks for the recommendation, all the tracks that need to be routed through planes are simple digital inputs for external sensors, so after reading all the comments and answers I believe It should be fine.
– Oli
Mar 25, 2021 at 14:09
• OK now try to route it on 4 layers for a cost reduction j/k that 's next year job if volume goes high Mar 25, 2021 at 14:13
• That'll be a good challenge to do if it gets to that point.
– Oli
Mar 25, 2021 at 14:27
• A day’s job with a good auto-router Mar 25, 2021 at 15:16
• Can you recommend me one ?
– Oli
Mar 25, 2021 at 18:12
Experienced PCB designers DO NOT route signals on power and ground layers all the time. They just don't do it unless there are no other ways to complete the routing.
You might need a lot of vias to complete your routing.
Reduce the size of your vias.
In high density boards I used the following via:
copper pads on top = 0.45 mm
drill = 0.15 mm
copper pads in inner layers = 0.45 (*)
copper pads on bottom = 0.45 mm
(*) Whenever I can, I enlarge inner pads 0.55 mm to make the PCB manufacture happy.
There's no extra cost to pay for these vias. They fit their standard manufacturing flow.
• Thanks, I used to route signals on plane layers only if I had to, but after reading some papers about how bad it could be I avoided it completely and changed the layout or increased the layer count whenever I could.
– Oli
Mar 25, 2021 at 14:04
• Unless you plan to go to an EMC notified body for CE or UL compliance tests, don't go mad to complete your routing. Mar 25, 2021 at 14:14
• Thanks for the heads-up. I have been warned many times about the importance of routing back when I was studding, so I do some times try to go over the top.
– Oli
Mar 25, 2021 at 14:30 | 2022-08-10 12:20:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1719360500574112, "perplexity": 2862.025033949478}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571153.86/warc/CC-MAIN-20220810100712-20220810130712-00498.warc.gz"} |
https://projecteuclid.org/euclid.agt/1517454228 | ## Algebraic & Geometric Topology
### Loop homology of some global quotient orbifolds
Yasuhiko Asao
#### Abstract
We determine the ring structure of the loop homology of some global quotient orbifolds. We can compute by our theorem the loop homology ring with suitable coefficients of the global quotient orbifolds of the form $[ M ∕ G ]$ for $M$ being some kinds of homogeneous manifolds, and $G$ being a finite subgroup of a path-connected topological group $G$ acting on $M$. It is shown that these homology rings split into the tensor product of the loop homology ring $ℍ ∗ ( L M )$ of the manifold $M$ and that of the classifying space of the finite group, which coincides with the center of the group ring $Z ( k [ G ] )$.
#### Article information
Source
Algebr. Geom. Topol., Volume 18, Number 1 (2018), 613-633.
Dates
Revised: 11 July 2017
Accepted: 18 August 2017
First available in Project Euclid: 1 February 2018
https://projecteuclid.org/euclid.agt/1517454228
Digital Object Identifier
doi:10.2140/agt.2018.18.613
Mathematical Reviews number (MathSciNet)
MR3748255
Zentralblatt MATH identifier
1385.55006
#### Citation
Asao, Yasuhiko. Loop homology of some global quotient orbifolds. Algebr. Geom. Topol. 18 (2018), no. 1, 613--633. doi:10.2140/agt.2018.18.613. https://projecteuclid.org/euclid.agt/1517454228
#### References
• A Adem, J Leida, Y Ruan, Orbifolds and stringy topology, Cambridge Tracts in Mathematics 171, Cambridge Univ. Press (2007)
• A Ángel, E Backelin, B Uribe, Hochschild cohomology and string topology of global quotient orbifolds, J. Topol. 5 (2012) 593–638
• M Chas, D Sullivan, String topology, preprint (1999)
• D Chataur, L Menichi, String topology of classifying spaces, J. Reine Angew. Math. 669 (2012) 1–45
• R L Cohen, J D S Jones, A homotopy theoretic realization of string topology, Math. Ann. 324 (2002) 773–798
• R L Cohen, J D S Jones, J Yan, The loop homology algebra of spheres and projective spaces, from “Categorical decomposition techniques in algebraic topology” (G Arone, J Hubbuck, R Levi, M Weiss, editors), Progr. Math. 215, Birkhäuser, Basel (2004) 77–92
• R L Cohen, J R Klein, Umkehr maps, Homology Homotopy Appl. 11 (2009) 17–33
• K Gruher, P Salvatore, Generalized string topology operations, Proc. Lond. Math. Soc. 96 (2008) 78–106
• R A Hepworth, String topology for complex projective spaces, preprint (2009)
• R A Hepworth, String topology for Lie groups, J. Topol. 3 (2010) 424–442
• S Kaji, H Tene, Products in equivariant homology, preprint (2015)
• A P M Kupers, An elementary proof of the string topology structure of compact oriented surfaces, preprint (2012)
• K Kuribayashi, On the mod $p$ cohomology of the spaces of free loops on the Grassmann and Stiefel manifolds, J. Math. Soc. Japan 43 (1991) 331–346
• E Lupercio, B Uribe, M A Xicoténcatl, Orbifold string topology, Geom. Topol. 12 (2008) 2203–2247
• L Menichi, String topology for spheres, Comment. Math. Helv. 84 (2009) 135–157
• L Menichi, String topology, Euler class and TNCZ free loop fibrations, preprint (2013)
• I Moerdijk, Orbifolds as groupoids: an introduction, from “Orbifolds in mathematics and physics” (A Adem, J Morava, Y Ruan, editors), Contemp. Math. 310, Amer. Math. Soc., Providence, RI (2002) 205–222
• H Tamanoi, Batalin–Vilkovisky Lie algebra structure on the loop homology of complex Stiefel manifolds, Int. Math. Res. Not. 2006 (2006) art. id. 97193
• H Tamanoi, Cap products in string topology, Algebr. Geom. Topol. 9 (2009) 1201–1224
• D Vaintrob, The string topology BV algebra, Hochschild cohomology and the Goldman bracket on surfaces, preprint (2007)
• M Vigué-Poirrier, Dans le fibré de l'espace des lacets libres, la fibre n'est pas, en général, totalement non cohomologue à zéro, Math. Z. 181 (1982) 537–542 | 2019-10-14 12:49:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5438070893287659, "perplexity": 3236.4465595181223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653247.25/warc/CC-MAIN-20191014124230-20191014151730-00158.warc.gz"} |
https://stats.stackexchange.com/questions/298623/tensor-classification-models | # Tensor Classification Models
Aside from Convolution Neural Networks, are there any other methods that allow for classification of Tensors? My observations consist of multi-dimensional tensors with height of 1, where each channel corresponds to a particular time-series and am wondering how I can effectively classify the tensors, taking into account the relationship between the time-series.
• You can flatten the tensor and run the usual machine learning methods on the vector: random forest, kNN, SVM, logistic regression, etc. – Dave Apr 20 '20 at 1:27 | 2021-07-24 22:37:28 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8169229626655579, "perplexity": 625.4739755963546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151531.67/warc/CC-MAIN-20210724223025-20210725013025-00036.warc.gz"} |