| url (string, 14 to 2.42k chars) | text (string, 100 to 1.02M chars) | date (string, 19 chars) | metadata (string, 1.06k to 1.1k chars) |
|---|---|---|---|
https://tex.stackexchange.com/questions/290088/converting-a-tex-document-to-se-markdown
|
# Converting a TeX document to SE markdown? [closed]
Portions of my master's thesis work, which was never published officially, could be pretty easily adapted into a very comprehensive answer to a question on Programmers about Particle Swarm Optimization.
However, I ~~foolishly~~ wisely1 chose to write it in LaTeX, which means I cannot just copy/paste it into the "Answer" text box there, at least (I could copy... the entire document, I guess, and tell people to use TeX to read it?).
The main concerns I have relate to the mathematical notation and references. Text/headings could be pretty straightforward to modify into markdown syntax.
Something like:
\begin{align*} \labeltarget{eq:opproblem}
&\operatorname{Minimize}& & F_p(\bar{x}) \quad & &p=1,\dots,l & &\text{\nameref{sec:objectivefunction}}\\
&\operatorname{Subject\;To} & &g_j(\bar{x}) \leq 0, \quad & &j= 1,\dots,m & &\text{\nameref{sec:inequalityconstraints}} \\
&&&h_k(\bar{x}) = 0, \quad & &k= 1, \dots,p & &\text{\nameref{sec:equalityconstraints}} \\
&&&x_{low,i} \le x_{i} \le x_{up,i}, \quad & &i = 1, \dots,n & &\text{\nameref{sec:sideconstraints}}
\end{align*}
Would be great to auto convert via [insert magical process I don't know]. I could take screenshots of them, I guess...
There are also a lot of pieces like:
\noindent with $\bar{x}$ representing \hyperref[sec:designvariables]{design variables} for the optimization problem \cite{vanderplaats}.
Which would be... tedious at best to go through and change.
Is there any straightforward way to minimize the tediousness of converting a TeX document to SE markdown syntax?
1: My advisor also required a Word document during the process (???) and the number of times I've tried to convert from TeX is making me sick. Fortunately it's more fun than going backwards!
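For what it's worth, Pandoc (suggested in the comments below) can apparently do a rough first pass. A hypothetical, untested sketch driving it from Python, with `thesis.tex` as a placeholder filename; custom macros and cross-references such as \nameref would still need manual cleanup:

```python
# Hypothetical sketch (untested): shell out to Pandoc, if it is installed, for a
# first-pass LaTeX-to-Markdown conversion. "thesis.tex" is a placeholder name.
import subprocess

subprocess.run(
    ["pandoc", "-f", "latex", "-t", "markdown", "thesis.tex", "-o", "thesis.md"],
    check=True,
)
```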
## closed as off-topic by egreg, Joseph Wright♦ Jun 4 '16 at 21:39
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question does not fall within the scope of TeX, LaTeX or related typesetting systems as defined in the help center." – egreg, Joseph Wright
If this question can be reworded to fit the rules in the help center, please edit the question.
• Have you considered Pandoc? – WillAdams Jan 29 '16 at 16:58
• Really not a question at 'our' end – Joseph Wright Jun 4 '16 at 21:39
|
2019-10-19 04:33:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9757872819900513, "perplexity": 3174.668519316502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986688826.38/warc/CC-MAIN-20191019040458-20191019063958-00008.warc.gz"}
|
http://www.mathworks.com/help/signal/ref/sigwin.chebwin-class.html?nocookie=true
|
# sigwin.chebwin class
Package: sigwin
Construct Dolph-Chebyshev window object
## Description
sigwin.chebwin creates a handle to a Dolph–Chebyshev window object for use in spectral analysis and FIR filtering by the window method. Object methods enable workspace import and ASCII file export of the window values.
The Dolph-Chebyshev window is constructed in the frequency domain by taking samples of the window's Fourier transform:
$\hat{W}(k) = (-1)^{k}\,\frac{\cos\left[N\cos^{-1}\left[\beta\cos\left(\pi k/N\right)\right]\right]}{\cosh\left[N\cosh^{-1}\left(\beta\right)\right]}, \quad 0 \le k \le N-1$
where
$\beta = \cosh\left[\frac{1}{N}\cosh^{-1}\left(10^{\alpha}\right)\right]$
$\alpha$ determines the level of the sidelobe attenuation. The level of the sidelobe attenuation is equal to $-20\alpha$ dB. For example, 100 dB of attenuation results from setting $\alpha = 5$.
The discrete-time Dolph-Chebyshev window is obtained by taking the inverse DFT of $\hat{W}(k)$ and scaling the result to have a peak value of 1.
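For readers working outside MATLAB, a rough SciPy sketch of the same construction (illustrative only, not MathWorks code):

```python
# Illustrative Python/SciPy analogue (not MathWorks code): an N-point
# Dolph-Chebyshev window with the given relative sidelobe attenuation in dB,
# scaled to a peak value of 1, as described above.
from scipy.signal.windows import chebwin

w = chebwin(64, at=100)   # length 64, 100 dB relative sidelobe attenuation
print(w.max())            # 1.0 -- peak-normalized
print(w[:3])              # first few window coefficients
```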
## Construction
H = sigwin.chebwin returns a Dolph-Chebyshev window object H of length 64 with relative sidelobe attenuation of 100 dB.
H = sigwin.chebwin(Length) returns a Dolph-Chebyshev window object H of length Length with relative sidelobe attenuation of 100 dB. Length must be a positive integer. Entering a positive noninteger value for Length rounds the length to the nearest integer. A window length of 1 results in a window with a single value equal to 1.
H = sigwin.chebwin(Length,SidelobeAtten) returns a Dolph-Chebyshev window object with relative sidelobe attenuation of SidelobeAtten dB.
## Properties
Length: Dolph-Chebyshev window length.
SidelobeAtten: The attenuation parameter in dB. The attenuation parameter is a positive real number that determines the relative sidelobe attenuation of the window.
## Methods
generate: Generates the Dolph-Chebyshev window.
info: Display information about the Dolph-Chebyshev window object.
winwrite: Save Dolph-Chebyshev window object values in an ASCII file.
## Copy Semantics
Handle. To learn how copy semantics affect your use of the class, see Copying Objects in the MATLAB® Programming Fundamentals documentation.
## Examples
Default length N=64 Dolph–Chebyshev window with 100 dB relative sidelobe attenuation:
```H=sigwin.chebwin;
wvtool(H); ```
Generate length N=128 Chebyshev window with 120 dB attenuation, return values, and write ASCII file:
```H=sigwin.chebwin(128,120);
% Return window with generate
win=generate(H);
% Write ASCII file in current directory
% with window values
winwrite(H,'chebwin_128_100')```
## References
Harris, F. J. "On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform." Proceedings of the IEEE. Vol. 66, 1978, pp. 51–83.
|
2014-12-26 07:58:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20998704433441162, "perplexity": 4821.931241978501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447548655.55/warc/CC-MAIN-20141224185908-00039-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://www.lrde.epita.fr/wiki/Publications/remaud.17.seminar
|
# Integration of TChecker in Spot
## Abstract
A timed automaton is an $\omega$-automaton that describes a model with continuous-time conditions. Without further treatment, such automata can have an infinite state space. However, they can be translated into finite automata, called zone graphs, in order to analyze their properties. In this report, we show how timed automata have been integrated into Spot. We use TChecker, a program created at the LaBRI. We then explain how we integrated the representation of TChecker's zone graphs in Spot, and show how Spot's algorithms can read this translation to work on timed automata and zone graphs. Finally, we explain how TChecker's drawbacks were worked around to make it simpler for Spot to use.
|
2020-07-13 01:12:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2633582353591919, "perplexity": 2924.535442397277}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657140746.69/warc/CC-MAIN-20200713002400-20200713032400-00496.warc.gz"}
|
https://articleff.com/convolutional-neural-network-for-automatic-maxillary-sinus-segmentation-on-cone-beam-computed-tomographic-images/
|
# Convolutional neural network for automatic maxillary sinus segmentation on cone-beam computed tomographic images
This study was conducted in accordance with the standards of the Helsinki Declaration on medical research. Institutional ethical committee approval was obtained from the Ethical Review Board of the University Hospitals Leuven (reference number: S57587). Informed consent was not required as patient-specific information was anonymized. The study plan and report followed the recommendations of Schwendicke et al.23 for reporting on artificial intelligence in dental research.
### Data set
A sample of 132 CBCT scans (264 sinuses, 75 females and 57 males, mean age 40 years) from 2013 to 2021 with different scanning parameters was collected (Table 1). Inclusion criteria were patients with permanent dentition and a maxillary sinus with/without mucosal thickening (shallow > 2 mm, moderate > 4 mm) and/or with a semi-spherical membrane in one of the walls.24 Scans with dental restorations, orthodontic brackets and implants were also included. The exclusion criteria were patients with a history of trauma, sinus surgery and presence of pathologies affecting its contour.
The Digital Imaging and Communication in Medicine (DICOM) files of the CBCT images were exported anonymously. Dataset was further randomly divided into three subsets: (1) training set (n = 83 scans) for training of the CNN model based on the ground truth; (2) validation set (n = 19 scans) for evaluation and selection of the best model; (3) testing set (n = 30 scans) for testing the model performance by comparison with ground truth.
### Ground truth labeling
The ground truth datasets for training and testing of the CNN model were labeled by semi-automatic segmentation of the sinus using Mimics Innovation Suite (version 23.0, Materialize NV, Leuven, Belgium). Initially, a custom threshold leveling was adjusted between [− 1024 to − 200 Hounsfield units (HU)] to create a mask of the air (Fig. 1a). Subsequently, the region of interest (ROI) was isolated from the rest of the surrounding structures. A manual delineation of the bony contours was performed using eclipse and livewire function, and all contours were checked in coronal, axial, and sagittal orthogonal planes (Fig. 1b). To avoid any inconsistencies in the ROI of different images, the segmentation region was limited to the early start of the sinus ostium from the sinus side before continuation into the infundibulum (Fig. 1b). Finally, the edited mask of each sinus was exported separately as a standard tessellation language (STL) file. The segmentation was performed by a dentomaxillofacial radiologist (NM) with seven years of experience and subsequently re-assessed by two other radiologists (KFV&RJ) with 15 and 25 years of experience respectively.
### CNN model architecture and training
Two 3D U-Net architectures were used,25 both of which consisted of 4 encoder and 3 decoder blocks, with 2 convolutions with a kernel size of 3 × 3 × 3, each followed by a rectified linear unit (ReLU) activation and group normalization, with 8 feature maps.26 Thereafter, max pooling with kernel size 2 × 2 × 2 and strides of two was applied after each encoder, allowing reduction of the resolution by a factor of 2 in all dimensions. Both networks were trained as a binary classifier (0 or 1) with a weighted Binary Cross Entropy Loss:
$$L_{BCE} = y_{n} \log\left(p_{n}\right) + \left(1 - y_{n}\right) \log\left(1 - p_{n}\right)$$
for each voxel $n$, with ground-truth value $y_{n} = 0$ or $1$ and predicted probability of the network $p_{n}$.
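As an illustration of the loss term above (my own minimal NumPy sketch; the conventional negative sign and a mean over voxels are assumptions, and the authors' class weighting and 3D U-Net code are not reproduced):

```python
# Minimal NumPy sketch of a per-voxel binary cross-entropy loss.
# Assumptions: conventional negative sign and a mean over voxels; the paper's
# class weighting and network code are not reproduced here.
import numpy as np

def bce_loss(y, p, eps=1e-7):
    """y: ground-truth labels in {0, 1}; p: predicted probabilities in (0, 1)."""
    p = np.clip(p, eps, 1 - eps)          # guard against log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1.0, 0.0, 1.0, 1.0])        # toy ground-truth voxels
p = np.array([0.9, 0.2, 0.7, 0.6])        # toy predicted probabilities
print(bce_loss(y, p))
```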
A two-step pre-processing of the training dataset was applied. First, all scans were resampled at the same voxel size. Thereafter, to overcome the graphics processing unit (GPU) memory limitations, the full-size scan was down sampled to a fixed size.
The first 3D U-Net was used to provide roughly low-resolution segmentation for proposing 3D patches and cropped only those which belonged to the sinus. Later, those relevant patches were transferred to the second 3D U-Net where they were individually segmented and combined to create the full resolution segmentation map. Finally, binarization was applied and only the largest connected part was kept, followed by application of a marching cubes algorithm on the binary image. The resulting mesh was smoothed to generate a 3D model (Fig. 2).
The model parameters were optimized with ADAM27 (an optimization algorithm for training deep learning models) having an initial learning rate of 1.25e−4. During training, random spatial augmentations (rotation, scaling, and elastic deformation) were applied. The validation dataset was used to define the early stopping which indicates a saturation point of the model where no further improvement can be noticed by the training set and more cases will lead to data overfitting. The CNN model was deployed to an online cloud-based platform called virtual patient creator (creator.relu.eu, Relu BV, Version October 2021) where users could upload DICOM dataset and obtain an automatic segmentation of the desired structure.
### Testing of AI pipeline
The testing of the CNN model was performed by uploading DICOM files from the test set to the virtual patient creator platform. The resulting automatic segmentation (Fig. 3) could be later downloaded in DICOM or STL file format. For clinical evaluation of the automatic segmentation, the authors developed the following classification criteria: A—perfect segmentation (no refinement was needed), B—very good segmentation (refinements without clinical relevance, slight over or under segmentation in regions other than the maxillary sinus floor), C—good segmentation (refinements that have some clinical relevance, slight over or under segmentation in the maxillary sinus floor region), D—deficient segmentation (considerable over or under segmentation, independent of the sinus region, with necessary repetition) and E—negative (the CNN model could not predict anything). Two observers (NM and KFV) evaluated all the cases, followed by an expert consensus (RJ). In cases where refinements were required, the STL file was imported into Mimics software and edited using the 3D tools tab. The resulting segmentation was denoted as refined segmentation.
### Evaluation metrics
The evaluation metrics28,29 are outlined in Table 2. The comparison of outcomes amongst the ground truth, automatic, and refined segmentations was performed by the main observer on the whole testing set. A pilot of 10 scans was tested at first, which showed a Dice similarity coefficient (DSC) of 0.985 ± 0.004, Intersection over Union (IoU) of 0.969 ± 0.007, and 95% Hausdorff Distance (HD) of 0.204 ± 0.018 mm. Based on these findings, the sample size of the testing set was increased to 30 scans according to the central limit theorem (CLT)30.
#### Time efficiency
The time required for the semi-automatic segmentation was calculated starting from opening the DICOM files in Mimics software till export of the STL file. For automatic segmentation, the algorithm automatically calculated the time required to have a full resolution segmentation. The time for the refined segmentation was calculated similarly to that of semi-automatic segmentation and later added to the initial automatic segmentation time. The average time for each method was calculated based on the testing set sample.
#### Accuracy
A voxel-wise comparison amongst the ground truth, automatic, and refined segmentations of the testing set was performed by applying a confusion matrix with four variables: true positive (TP), true negative (TN), false positive (FP) and false negative (FN) voxels. Based on the aforementioned variables, the accuracy of the CNN model was assessed according to the metrics mentioned in Table 2.
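For illustration only (not the study's code), such voxel counts yield the overlap metrics roughly as follows:

```python
# Illustrative sketch (not the study's code): Dice similarity coefficient and
# Intersection over Union for two binary voxel masks, via TP/FP/FN counts.
import numpy as np

def dice_iou(gt, pred):
    gt, pred = gt.astype(bool), pred.astype(bool)
    tp = np.logical_and(gt, pred).sum()   # true positive voxels
    fp = np.logical_and(~gt, pred).sum()  # false positive voxels
    fn = np.logical_and(gt, ~pred).sum()  # false negative voxels
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return dice, iou

gt = np.zeros((4, 4, 4), dtype=bool); gt[1:3, 1:3, 1:3] = True   # toy mask
pred = np.zeros_like(gt); pred[1:3, 1:3, 1:4] = True             # toy prediction
print(dice_iou(gt, pred))   # Dice 0.8, IoU ~0.667
```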
#### Consistency
Once the CNN model is trained it is deterministic; hence it was not evaluated for consistency. For illustration, one scan was uploaded twice on the platform and the resulting STLs were compared. Intra- and inter-observer consistency were calculated for the semi-automatic and refined segmentation. The intra-observer reliability of the main observer was calculated by re-segmenting 10 scans from the testing set with different protocols. For the inter-observer reliability, two observers (NM and KFV) performed the needed refinements, then the STL files were compared with each other.
### Statistical analysis
Data were analyzed with RStudio: Integrated Development Environment for R, version 1.3.1093 (RStudio, PBC, Boston, MA). Mean and standard deviation was calculated for all evaluation metrics. A paired-sample t-test was performed with a significance level (p < 0.05) to compare timing required for semi-automatic and automatic segmentation of the testing set.
|
2022-07-05 09:23:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4496631920337677, "perplexity": 3359.316827512734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104542759.82/warc/CC-MAIN-20220705083545-20220705113545-00320.warc.gz"}
|
https://publications.hse.ru/en/chapters/132114796
|
## Sublinear Space Algorithms for the Longest Common Substring Problem
P. 605-617.
Starikovskaya T., Vildhoj H. W., Kociumaka T.
Given $m$ documents of total length $n$, we consider the problem of finding a longest string common to at least $d \geq 2$ of the documents. This problem is known as the *longest common substring (LCS) problem* and has a classic $O(n)$ space and $O(n)$ time solution (Weiner [FOCS'73], Hui [CPM'92]). However, the use of linear space is impractical in many applications. In this paper we show that for any trade-off parameter $1 \leq \tau \leq n$, the LCS problem can be solved in $O(\tau)$ space and $O(n^2/\tau)$ time, thus providing the first smooth deterministic time-space trade-off from constant to linear space. The result uses a new and very simple algorithm, which computes a $\tau$-additive approximation to the LCS in $O(n^2/\tau)$ time and $O(1)$ space. We also show a time-space trade-off lower bound for deterministic branching programs, which implies that any deterministic RAM algorithm solving the LCS problem on documents from a sufficiently large alphabet in $O(\tau)$ space must use $\Omega(n\sqrt{\log(n/(\tau\log n))/\log\log(n/(\tau\log n))})$ time.
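For context, the classic quadratic-time dynamic program for the two-document case ($d = 2$) is sketched below; it is a textbook baseline, not the space-efficient trade-off algorithm of the paper:

```python
# Classic O(n1*n2)-time, O(min(n1, n2))-space dynamic program for the longest
# common substring of two strings (the d = 2 case). A baseline only, not the
# trade-off algorithm described in the paper.
def lcs_two(a: str, b: str) -> str:
    if len(b) > len(a):
        a, b = b, a                       # keep b as the shorter string
    prev = [0] * (len(b) + 1)
    best_len, best_end = 0, 0
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1  # extend the common suffix
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return a[best_end - best_len:best_end]

print(lcs_two("bananas", "cabana"))  # -> "bana"
```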
|
2021-10-21 09:16:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7672582864761353, "perplexity": 1135.1605799947417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585382.32/warc/CC-MAIN-20211021071407-20211021101407-00348.warc.gz"}
|
http://www.mathworks.com/help/physmod/simscape/ref/localrestrictiontl.html?requestedDomain=true&nocookie=true
|
Local Restriction (TL)
Time-invariant reduction in flow area
Library
Thermal Liquid/Elements
Description
The Local Restriction (TL) block models the pressure drop due to a time-invariant reduction in flow area such as an orifice. Ports A and B represent the restriction inlets. The restriction area, specified in the block dialog box, remains constant during simulation.
The restriction consists of a contraction followed by a sudden expansion in flow area. The contraction causes the fluid to accelerate and its pressure to drop. The pressure drop is assumed to persist in the expansion zone—an approximation suitable for narrow restrictions.
Local Restriction Schematic
Mass Balance
The mass balance in the restriction is
`$0=\dot{m}_{\text{A}}+\dot{m}_{\text{B}},$`
where:
• $\dot{m}_{\text{A}}$ is the mass flow rate into the restriction through port A.
• $\dot{m}_{\text{B}}$ is the mass flow rate into the restriction through port B.
Momentum Balance
The pressure difference between ports A and B follows from the momentum balance in the restriction:
`${p}_{\text{A}}-{p}_{\text{B}}=\frac{\dot{m}_{\text{A}}\left(\dot{m}_{\text{A}}^{2}+\dot{m}_{\text{Ac}}^{2}\right)^{1/2}}{2\,{C}_{\text{d}}^{2}{S}_{\text{R}}^{2}{\rho}_{\text{u}}},$`
where:
• pA is the pressure at port A.
• pB is the pressure at port B.
• Cd is the discharge coefficient of the restriction aperture.
• SR is the cross-sectional area of the restriction aperture.
• ρu is the liquid density upstream of the restriction aperture.
• $\dot{m}_{\text{Ac}}$ is the critical mass flow rate at port A.
The critical mass flow rate at port A is
`$\dot{m}_{\text{Ac}}={\mathrm{Re}}_{\text{c}}\sqrt{\pi {S}_{\text{R}}}\,\frac{{\mu}_{\text{u}}}{2},$`
where:
• Rec is the critical Reynolds number,
`${\mathrm{Re}}_{\text{c}}=\frac{|\dot{m}_{\text{Ac}}|D}{{S}_{\text{R}}{\mu}_{\text{u}}},$`
D is the hydraulic diameter of the restriction aperture.
• μu is the liquid dynamic viscosity upstream of the restriction aperture.
The discharge coefficient is the ratio of the actual mass flow rate through the local restriction to the ideal mass flow rate,
`${C}_{\text{d}}=\frac{\dot{m}}{\dot{m}_{\text{ideal}}},$`
where:
• $\dot{m}$ is the actual mass flow rate through the local restriction.
• $\dot{m}_{\text{ideal}}$ is the ideal mass flow rate through the local restriction:
`$\dot{m}_{\text{ideal}}={S}_{\text{R}}\sqrt{\frac{2{\rho}_{\text{u}}\left({p}_{\text{A}}-{p}_{\text{B}}\right)}{1-\left({S}_{\text{R}}/S\right)^{2}}}.$`
where S is the inlet cross-sectional area.
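For illustration only (not part of the MathWorks documentation), a small numerical sketch of the momentum balance above; the flow rate and fluid properties are assumed values, while the area, discharge coefficient, and critical Reynolds number use the block defaults quoted in the Parameters section:

```python
# Illustrative numerical sketch of the momentum balance above (not MathWorks
# code). Flow rate, density, and viscosity are assumed; the area, discharge
# coefficient, and critical Reynolds number are the block defaults listed in
# the Parameters section. Pressure recovery is ignored.
import math

mdot_A = 0.1      # mass flow rate into port A, kg/s          (assumed)
Cd     = 0.7      # discharge coefficient (sharp-edged orifice default)
S_R    = 1e-5     # restriction area, m^2 (block default)
rho_u  = 998.0    # upstream liquid density, kg/m^3           (assumed, water)
mu_u   = 1.0e-3   # upstream dynamic viscosity, Pa*s          (assumed, water)
Re_c   = 12       # critical Reynolds number (block default)

# Critical mass flow rate at port A
mdot_Ac = Re_c * math.sqrt(math.pi * S_R) * mu_u / 2

# Pressure difference p_A - p_B from the momentum balance
dp = mdot_A * math.sqrt(mdot_A**2 + mdot_Ac**2) / (2 * Cd**2 * S_R**2 * rho_u)
print(f"p_A - p_B = {dp:.3g} Pa")   # on the order of 1e5 Pa for these values
```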
Energy Balance
The energy balance in the restriction is
`${\varphi}_{\text{A}}+{\varphi}_{\text{B}}=0,$`
where:
• ϕA is the energy flow rate into the restriction through port A.
• ϕB is the energy flow rate into the restriction through port B.
Variables
Use the Variables tab in the block dialog box (or the Variables section in the block Property Inspector) to set the priority and initial target values for the block variables prior to simulation. For more information, see Set Priority and Initial Target for Block Variables.
Assumptions and Limitations
• The restriction is adiabatic. It does not exchange heat with its surroundings.
• The dynamic compressibility and thermal capacity of the liquid in the restriction are negligible.
Parameters
Restriction Area
Enter the flow cross-sectional area of the local restriction. The default value is `1e-5` m^2.
Cross-sectional area at ports A and B
Enter the flow cross-sectional area of the local restriction ports. This area is assumed the same for the two ports. The default value is `1e-2` m^2 .
Characteristic longitudinal length
Enter the approximate longitudinal length of the local restriction. This length provides a measure of the longitudinal scale of the restriction. The default value is `1e-1` m.
Discharge coefficient
Enter the discharge coefficient of the local restriction. The discharge coefficient is a semi-empirical parameter commonly used to characterize the flow capacity of an orifice. This parameter is defined as the ratio of the actual mass flow rate through the orifice to the ideal mass flow rate:
`${C}_{\text{d}}=\frac{\dot{m}}{\dot{m}_{\text{ideal}}},$`
where Cd is the discharge coefficient, $\dot{m}$ is the actual mass flow rate through the orifice, and $\dot{m}_{\text{ideal}}$ is the ideal mass flow rate:
`$\dot{m}_{\text{ideal}}={S}_{\text{r}}\sqrt{\frac{2\rho\left({p}_{\text{A}}-{p}_{\text{B}}\right)}{1-\left({S}_{\text{r}}/S\right)^{2}}}.$`
The default value is `0.7`, corresponding to a sharp-edged orifice.
Pressure recovery
Specify whether to account for pressure recovery at the local restriction outlet. Options include `On` and `Off`. The default setting is `On`.
Critical Reynolds number
Enter the Reynolds number for the transition between laminar and turbulent flow regimes. The default value is `12`, corresponding to a sharp-edged orifice.
Ports
The block has two thermal liquid conserving ports, A and B. These ports represent the inlets of the local restriction.
|
2018-02-25 02:15:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 16, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.83819180727005, "perplexity": 1363.8058177523017}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816083.98/warc/CC-MAIN-20180225011315-20180225031315-00785.warc.gz"}
|
https://homework.cpm.org/category/ACC/textbook/gb8i/chapter/9%20Unit%2010/lesson/INT1:%209.3.3/problem/9-97
|
9-97.
A sequence starts $-3, 1, 5, 9, \dots$.
1. If you wanted to find the $50^{th}$ term of the sequence, would an explicit equation or a recursive equation be more useful?
Recall the definitions of explicit and recursive equations.
Explicit.
2. Write the equation in standard form as you did in problem 8-98.
Refer to problem 8-98 to recall what the standard form means.
$t(n) = −3 + 4(n − 1)$
3. What is the 50th term of the sequence?
Insert $50$ for $n$ in the equation you found in part (b).
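As a quick check (my own arithmetic, not part of the CPM hints): $t(50) = -3 + 4(50 - 1) = -3 + 196 = 193$.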
|
2022-05-18 20:54:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9054528474807739, "perplexity": 716.5456374958959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522309.14/warc/CC-MAIN-20220518183254-20220518213254-00689.warc.gz"}
|
http://mathoverflow.net/revisions/79405/list
|
3 added 512 characters in body
Regarding the secondary question: If we start the process from a large $N$, it will always reach 1 sooner or later. The probability that it passes through the number 2 is $1/2$, since as long as the numbers are larger than 2, going in the next step to 2 is as likely as going to 1, and when the process reaches 1 or 2 for the first time, where it went decides whether it ever passes through 2. Similarly the probability that the process ever visits a number $n$ is $1/n$, since this happens precisely when the first visit to any of $1,\dots,n$ is to $n$.
Now it's easy in principle to see what the inverse is: For each pair of numbers $m < n$, the probability that a process starting at a larger $N$ will include a step from $n$ to $m$ is $$\frac1{n(n-1)},$$ since it will reach $n$ with probability $1/n$, and the next number distinct from $n$ is uniform on $1,\dots,n-1$. In particular, for "infinite $N$", the last number visited before reaching 1 is $n$ with this probability.
For $m>1$, the probability that $m$ was reached from $n$ given that the process reached $m$ is $m/(n(n-1))$, since we get a factor $m$ from conditioning on the process ever reaching $m$. It might be easier to sort out the details if we assume that the process never repeats the same number.
ADDED: The inverse process constructed this way has the property that at any $m$, the expectation of the next (previous in the original process) step is infinite. But it does have the nice property that the probability of going from $m$ to a number $\leq 2m$ is $1/2$, so the median growth factor over one step is 2.
NEW UPDATE: If we discard repetitions, then one way to understand the inverse is that from a number $m$, the next step is $$m\mapsto \left\lceil\frac{m}{U}\right\rceil,$$ where $U$ is uniform on the interval $[0,1]$. When $m$ is large, the growth factor is therefore asymptotically the reciprocal of a uniform $[0,1]$, which is the same thing as an "exponential of an exponential" ($e$ to an exponential variable). The median growth factor over one step is 2, but over a large number of steps approaches $e$.
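A quick Monte Carlo check of those last claims (my own sketch, not part of the answer):

```python
# Monte Carlo check: one step of m -> ceil(m/U) has median growth factor ~2,
# and the typical (geometric-mean) per-step growth factor is ~e.
import math
import random

def step(m: int) -> int:
    u = 1.0 - random.random()      # uniform on (0, 1], avoids division by zero
    return math.ceil(m / u)

m0 = 10**6
factors = sorted(step(m0) / m0 for _ in range(100_000))
print("median growth factor:", factors[len(factors) // 2])                      # ~2
print("geometric mean:", math.exp(sum(map(math.log, factors)) / len(factors)))  # ~e
```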
2 added 321 characters in body
Regarding the secondary question: If we start the process from a large $N$, it will always reach 1 sooner or later. The probability that it passes through the number 2 is $1/2$, since as long as the numbers are larger than 2, going in the next step to 2 is as likely as going to 1, and when the process reaches 1 or 2 for the first time, where it went decides whether it ever passes through 2. Similarly the probability that the process ever visits a number $n$ is $1/n$, since this happens precisely when the first visit to any of $1,\dots,n$ is to $n$.
Now it's easy in principle to see what the inverse is: For each pair of numbers $m < n$, the probability that a process starting at a larger $N$ will include a step from $n$ to $m$ is $$\frac1{n(n-1)},$$ since it will reach $n$ with probability $1/n$, and the next number distinct from $n$ is uniform on $1,\dots,n-1$. In particular, for "infinite $N$", the last number visited before reaching 1 is $n$ with this probability.
For $m>1$, the probability that $m$ was reached from $n$ given that the process reached $m$ is $m/(n(n-1))$, since we get a factor $m$ from conditioning on the process ever reaching $m$. It might be easier to sort out the details if we assume that the process never repeats the same number.
ADDED: The inverse process constructed this way has the property that at any $m$, the expectation of the next (previous in the original process) step is infinite. But it does have the nice property that the probability of going from $m$ to a number $\leq 2m$ is $1/2$, so the median growth factor over one step is 2.
1
Regarding the secondary question: If we start the process from a large $N$, it will always reach 1 sooner or later. The probability that it passes through the number 2 is $1/2$, since as long as the numbers are larger than 2, going in the next step to 2 is as likely as going to 1, and when the process reaches 1 or 2 for the first time, where it went decides whether it ever passes through 2. Similarly the probability that the process ever visits a number $n$ is $1/n$, since this happens precisely when the first visit to any of $1,\dots,n$ is to $n$.
Now it's easy in principle to see what the inverse is: For each pair of numbers $m < n$, the probability that a process starting at a larger $N$ will include a step from $n$ to $m$ is $$\frac1{n(n-1)},$$ since it will reach $n$ with probability $1/n$, and the next number distinct from $n$ is uniform on $1,\dots,n-1$. In particular, for "infinite $N$", the last number visited before reaching 1 is $n$ with this probability.
For $m>1$, the probability that $m$ was reached from $n$ given that the process reached $m$ is $m/(n(n-1))$, since we get a factor $m$ from conditioning on the process ever reaching $m$. It might be easier to sort out the details if we assume that the process never repeats the same number.
|
2013-06-18 07:38:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.861481249332428, "perplexity": 116.55766262308029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707186142/warc/CC-MAIN-20130516122626-00026-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://petanimalscare.com/do-black-capped-conures-talk/
|
# Do black capped conures talk?
## Do black capped conures talk?
Speech and Vocalizations: This bird is one of the quieter of the conure species. It is not known to be one of the best talkers, but with patient training, the black-capped conure can learn a small repertoire of words and phrases.
## Is a black-capped conure rare?
Black-capped Conures are rare in aviculture and special care should be taken in ensuring that the breeding efforts are successful.
## How long does a black-capped conure live?
In the wild, black-capped parakeets are canopy feeders. The black-capped parakeet lives up to 30 years in captivity.
## What is the friendliest conure?
White-Eyed Conure The White-Eyed Conure only lives about 20 years, but they make some of the best pets of any type of Conure. This is because of their docile nature that makes them more well-behaved than other parrots.
## Are black capped conures quiet?
While conures and other parrot breeds are infamous for being noisy creatures, the black-capped conure is actually a relatively quiet bird. They do make calls in the morning and the evening, known as contact calls, but otherwise they aren't known for being very talkative.
## Can conures say words?
It's quite typical for a conure to say at least a few words, and some individuals have much larger vocabularies. However, when you ask conure owners about their birds, they are likely to first describe the conure's energetic, fun, and affectionate nature long before they get around to talking ability.
## Which conure is the most talkative?
Although conures tend to not talk as much as other parrot species, the blue-crowned conure has a reputation for being one of the more talkative conure species.
## What conure can talk?
Blue-Crowned Conure In general, conures aren’t the best talkers, preferring to mimic other sounds, such as the beeping of an alarm clock. But the blue-crowned conure is capable of learning several words and phrases with frequent training sessions.
## What is the rarest type of conure?
If you’re looking for the rarest and most unique parrot you could find, then the Queen of Bavaria Conure certainly fits the bill. Also called the Golden Conure, this bird is a sight to behold.
## How much is a black-capped conure?
$400 to $600
## Can a black-capped conure breed with a green cheek conure?
In aviculture, these birds are known as black-capped conures. As the quietest of the conure birds, they are popular as household pets. They can reproduce in captivity and can also mate with the green-cheeked parakeet to produce hybrid offspring.
## Which conure is most popular?
There are many conure species commonly kept as pets, with one of the most popular being the beautiful sun conure. These intelligent, playful birds generally love spending time with their caretakers. But don’t let their medium size fool you in terms of their noise level.
## Are black capped conures rare?
Speech and Vocalizations: This bird is one of the quieter of the conure species. It is not known to be one of the best talkers, but with patient training, the black-capped conure can learn a small repertoire of words and phrases.
## How big do black capped conures get?
Black-capped Conures are rare in aviculture and special care should be taken in ensuring that the breeding efforts are successful.
## How long do conures live as pets?
Black Capped Conure flaunts a host of unique and distinguishing features, making them easy to recognize. The adults reach an average length of 10 inches (25 centimeters) and can weigh around 2.5 ounces (70 grams). They have moderately sized tail feathers and can be generally considered as medium-sized parrots.
## What is the most affectionate conure?
Aratinga and Patagonians are the best conure choices for families with children because they tend to be the most affectionate and gentle.
## What is the sweetest conure?
Blue Crowned Conure (Thectocercus acuticaudata) Characteristics: The Blue Crowned Conure is an intelligent, sweet-natured, playful bird that will readily learn tricks and also makes a very good talker.
## Which conure is best for me?
Select a green-cheeked or dusky conure if you want a quieter bird. Green-cheeked and dusky conures are some of the quieter types of conure. While they likely won’t learn how to talk, they are very energetic. They can provide you with the endless fun of a conure without as much noise.
## Which conure is loudest?
#1: Nanday Conure You'll notice that many of the loudest birds have chirps that suit their native habitat of tropical areas. The Nanday Conure has a chirp that reaches 155 decibels, comparable to the level of firecrackers. It is an exotic-looking pet bird that can also learn tricks and how to talk.
## What conures are the quietest?
Conures. While the sun and nanday conures are among the loudest parrots, the half-moon, green-cheeked, and peach-fronted are among the quietest.
## What is the noisiest conure?
#1: Nanday Conure You'll notice that many of the loudest birds have chirps that suit their native habitat of tropical areas. The Nanday Conure has a chirp that reaches 155 decibels, comparable to the level of firecrackers. It is an exotic-looking pet bird that can also learn tricks and how to talk.
## What is the friendliest type of conure?
White-Eyed Conure The White-Eyed Conure only lives about 20 years, but they make some of the best pets of any type of Conure. This is because of their docile nature that makes them more well-behaved than other parrots.
## Can conures speak?
Conures are capable of talking and, although their vocabularies are not as extensive as those of other parrot species, they can learn to speak a few words and phrases.
## Do conures talk or sing?
Talking. Conures are not known for their speaking abilities, and in fact, many bird owners choose Conures due to their relatively quiet nature compared to other parrots. That said, they can learn to mimic a dozen or so words with a bit of time and training.
## What kind of conures talk?
Green cheek conures can talk. Green cheek conures can mimic human voices. If the owner can make some effort, put in that extra time, these birds are going to learn to talk rather quickly.
|
2023-03-23 16:55:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32535839080810547, "perplexity": 6353.445038556518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945182.12/warc/CC-MAIN-20230323163125-20230323193125-00448.warc.gz"}
|
https://solvedlib.com/n/score-on-last-try-0-of-2-pts-see-details-for-more-you-can,19517524
|
# Use the normal distribution to approximate the desired probability
###### Question:
Use the normal distribution to approximate the desired probability. Find the probability that in 271 tosses of a single 9-sided die, we will get at most 21 threes. Round your answer to 4 places after the decimal point.
#### Similar Solved Questions
##### Fioend %D thel alrio Lle Total Females (b) Fina thc probabilty 3 1 3.3 AMTAic Dlnai 1 23 282 V 1 alin U maxn = 0488 822] li257 Anmnaunstu alnDUR 4D0 Cnec" Anatur6823 [ 10 w] : 0AntComolto pan} Ji
Fioend %D thel alrio Lle Total Females (b) Fina thc probabilty 3 1 3.3 AMTAic Dlnai 1 23 282 V 1 alin U maxn = 0488 822] li 257 Anmnaunstu alnDUR 4D0 Cnec" Anatur 6 823 [ 1 0 w] : 0 Ant Comolto pan} Ji...
##### In a study, the response variable y follows the simple linear regression model y = Bo + Bx + &, where x is a random variable with mean Ux and variance 02 and is the random error with mean zero and variance 02 which does not depend on the value of x Moreover, x and are statistically independent: The variables y and x are jointly bivariate normal (ref: slide 8 of Module 2 Lecture 2D). Suppose that in the study, independent paired data (Y1,X1), (yn Xn) follow this model. Before the paired data
In a study, the response variable y follows the simple linear regression model y = Bo + Bx + &, where x is a random variable with mean Ux and variance 02 and is the random error with mean zero and variance 02 which does not depend on the value of x Moreover, x and are statistically independent: ...
##### Haed Haip/ 2 8 2Bla Ict F] 48 tonnne (inut< Lut] M MILuntiaeu 1
Haed Haip/ 2 8 2 Bla Ict F] 48 tonnne (inut< Lut] M MILuntiaeu 1...
##### Btora elalyat plola the pice pet share o( centn commeaulote urchatod Find Ule Jvorage Blice al the lcck Ovur Iha Eret 80707 | Que comnatatu Arid wuppOrIN Nutt aowWaU ued uurWNetAnrbthnAeetuloncu eardlod Fecelmouine mn eanadavarane plca ollt slcda! (Randnudett canaa nttdad
btora elalyat plola the pice pet share o( centn commeaulote urchatod Find Ule Jvorage Blice al the lcck Ovur Iha Eret 80707 | Que comnatatu Arid wuppOrIN Nutt aowWaU ued uurWNet Anrbthn Aeetuloncu eardlod Fec elmouine mn eanad avarane plca ollt slcda! (Rand nudett canaa nttdad...
##### On January 1, 2021, The Barrett Company purchased merchandise from a supplier. Payment was a noninterest-bearing...
On January 1, 2021, The Barrett Company purchased merchandise from a supplier. Payment was a noninterest-bearing note requiring five annual payments of $40,000 on each December 31 beginning on December 31, 2021, and a lump-sum payment of $300,000 on December 31, 2025. A 9% interest rate properly ref...
##### A table of values for f, g, f', and g' is given. (a) If h(x) = f(g(x)), find h'(3). (b) If H(x) = g(f(x)), find H'(2).
A table of values for f, g, f', and g' is given. (a) If h(x) = f(g(x)), find h'(3). (b) If H(x) = g(f(x)), find H'(2)...
##### Identify the therapeutic class, drug form, use, action, adverse reaction, contraindications for use, and nursing care and teaching
Identify the therapeutic class, drug form, use, action, adverse reaction, contraindications for use, and nursing care and teaching for the following drugs: Naloxone (Narcan)...
##### Approximate the change in the volume of a sphere when its radius changes from $r=5 \mathrm{ft}$ to $r=5.1 \mathrm{ft}$ $\left(V(r)=\frac{4}{3} \pi r^{3}\right)$
Approximate the change in the volume of a sphere when its radius changes from $r=5 \mathrm{ft}$ to $r=5.1 \mathrm{ft}$ $\left(V(r)=\frac{4}{3} \pi r^{3}\right)$...
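A quick worked check (my own, not part of the original problem set), using the linear approximation: $\Delta V \approx V'(r)\,\Delta r = 4\pi r^{2}\,\Delta r = 4\pi (5)^{2}(0.1) \approx 31.4\ \mathrm{ft}^{3}$.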
##### For Excrciscs 19 . 22 ust Lhe rcgion under the gruph = ol f (*) =4+2r and belween the vertical lines xabuve [he >-axis;Use lelt sum wilhapproximalt the arta ol the region.Use the limit delinition of the area lind the cxact value of the arca.
For Excrciscs 19 . 22 ust Lhe rcgion under the gruph = ol f (*) =4+2r and belween the vertical lines x abuve [he >-axis; Use lelt sum wilh approximalt the arta ol the region. Use the limit delinition of the area lind the cxact value of the arca....
m2/3and p5/4...
##### The maximum torque experienced by a coil in a 0.75-T magnetic field is $8.4 \times 10^{-4} \mathrm{N} \cdot \mathrm{m} .$ The coil is circular and consists of only one turn. The current in the coil is 3.7 A. What is the length of the wire from which the coil is made?
The maximum torque experienced by a coil in a 0.75-T magnetic field is $8.4 \times 10^{-4} \mathrm{N} \cdot \mathrm{m} .$ The coil is circular and consists of only one turn. The current in the coil is 3.7 A. What is the length of the wire from which the coil is made?...
##### 1. Consider a moving average MA(2) model: y(t) = e(t) + b1 e(t-1) + b2 e(t-2). Assume that the...
1. Consider a moving average MA(2) model: y(t) = e(t) + b1 e(t-1) + b2 e(t-2). Assume that the noise e(t) is i.i.d. with variance = 1. (a) Compute the autocorrelation process r(k) for y(t). (b) Compute the PSD of y(t). (Hint: 12.4 +e=24 = 2 cos(24)) (c) Plot the spectral density from part (b) for at ...
##### Heat of Fusion
Assume 12,500 J of energy is added to 2.0 moles (36 grams) of H2O as an ice sample at 0°C. The molar heat of fusion is 6.02 kJ/mol. The specific heat of liquid water is 4.18 J/g°C. The molar heat of vaporization is 40.6 kJ/mol. The resulting sample contains which of the following? A. only ice B...
##### (a)Calculate the following limits or show that they do not exist:lim xv 1 - 2x-2 /5x- 5 |(ii) lim x-0cos (x) - 1X sin (x)(b)Find the 6 -th order Taylor polynomial P 6 ( x for the function sin (x ) about x = 0(ii) Use the Taylor polynomial that you have found in part (i) to approximate the integralJ 0 1 sin (x2)dx_(You do not need to calculate the error in this approximation )(c)Using an appropriate definite integral, obtain lower and upper bounds for the sumEk=120k3
(a) Calculate the following limits or show that they do not exist: lim xv 1 - 2x-2 /5x- 5 | (ii) lim x-0cos (x) - 1X sin (x) (b) Find the 6 -th order Taylor polynomial P 6 ( x for the function sin (x ) about x = 0 (ii) Use the Taylor polynomial that you have found in part (i) to approximate the int...
##### What are halides used for?
What are halides used for?...
##### 3. Given a line reflection 0 and a rotation P p ,1809 determine all conditions on / and P such that the two isometries will commute_
3. Given a line reflection 0 and a rotation P p ,1809 determine all conditions on / and P such that the two isometries will commute_...
##### What is the meaning of this sentence? “the world revealed by our senses is not the...
what is the meaning of this sentence? “the world revealed by our senses is not the real world but only a poor copy of it, and that the real world can only be apprehended intellectually”...
##### Q12 D-Ghyccraldchrde thc prrquntor aa € maat Dthc prnunof ofrtotct (AJ Nctajet Adotet (B} Nduter Arlatet (CI Aldoten dhydrotyacctone (D} e4ran tAbotyk -4;QJ} In Kben _ ccntral cutbon stom futruunded by Jillcrent runanaturnl, u ial br Fntiiha poLarized Dxht pASng through Rope opucally Jctvt (J optkally ixtnvt (C) eptically xtvc (D)S epecilly Active Q34 Ilthc rolalton ot Uhe plane polarized Jieht bry the ahot e molecular €ornpavund (Q!3] ts inthe darku aic dicttiun, An4m the €untnictcto
Q12 D-Ghyccraldchrde thc prrquntor aa € maat Dthc prnunof ofrtotct (AJ Nctajet Adotet (B} Nduter Arlatet (CI Aldoten dhydrotyacctone (D} e4ran tAbotyk -4; QJ} In Kben _ ccntral cutbon stom futruunded by Jillcrent runanaturnl, u ial br Fntiiha poLarized Dxht pASng through Rope opucally Jctv...
##### What is the eclipse period of a viral growth curve
What is the eclipse period of a viral growth curve...
##### A packet of sweetener contains 40.0 mg of saccharine (C7H5NO3S). How many carbon atoms are in the sample? 9.20 x 10^23 atoms; 1.31 x 10^20 atoms; 1.31 x 10^23 atoms; 9.20 x 10^20 atoms
A packet of sweetener contains 40.0 mg of saccharine (C7H5NO3S). How many carbon atoms are in the sample? 9.20 x 10^23 atoms; 1.31 x 10^20 atoms; 1.31 x 10^23 atoms; 9.20 x 10^20 atoms...
##### HW problem help needed please fast
Suppose that R and S are relations on a set A. Prove or give a counterexample to each of the following statements: If RS is reflexive, then either R is reflexive or S is reflexive. If RS is reflexive, then both R and S are reflexive....
##### Use the given function f(z) +322 + 452 20 to answer in each part: Note this same function is also used in question number 2. Part !. Which choice is the correct sign chart for f'?0+++0---0 + Choice A is0 Choice B is0 +++ 0 -0 Choice C is0 +++ 00+++0---0 + Choice D isAnswer for Part is ChoicePart II. Which intervalls) is the function, f, increasing on? Answer for Part Il is3 < X < 5Part III: Which intervalls) is the function; f, decreasing on? Answer for Part IIl is-3 < X < 5Part
Use the given function f(z) +322 + 452 20 to answer in each part: Note this same function is also used in question number 2. Part !. Which choice is the correct sign chart for f'? 0+++0---0 + Choice A is 0 Choice B is 0 +++ 0 - 0 Choice C is 0 +++ 0 0+++0---0 + Choice D is Answer for Part is Ch...
##### 14. Among men in the United States, prostate cancer has the highest death rate but not...
14. Among men in the United States, prostate cancer has the highest death rate but not the highest incidence. True False...
##### Question 121 ptsBased on the abstract below Biophys Biochim Acta 2017) answer the following MC questionAbstractPolybia-MP1 (IDWKKLLDAAKQIL-NH2) is a lytic peptide from the Brazilian wasp venom with known anti-cancer properties. Previous evidence indicates that phosphatidylserine (PS) lipids are relevant for the lytic activity of MP1. In agreement with this requirement; phosphatidylserine lipids are translocated to the outer leaflet of cells, and are available for MP1 binding, depending on the pr
Question 12 1 pts Based on the abstract below Biophys Biochim Acta 2017) answer the following MC question Abstract Polybia-MP1 (IDWKKLLDAAKQIL-NH2) is a lytic peptide from the Brazilian wasp venom with known anti-cancer properties. Previous evidence indicates that phosphatidylserine (PS) lipids are ...
##### Derive the expression for the most probable speed. You have to find the maximum of the...
Derive the expression for the most probable speed. You have to find the maximum of the Maxwell-Boltzmann distribution by taking its derivative and equating it to zero....
##### 5. Why would the use of data reduction be useful to highlight related party transactions (e.g.,...
5. Why would the use of data reduction be useful to highlight related party transactions (e.g., CEO has her own separate company that the main company does business with)?...
##### A horizontal thin glass slide with flat surfaces has thickness t and refractive index n = 1.40. The slide is surrounded by air. The slide is illuminated at normal incidence by light that has wavelength λ in air. The longest wavelength (exclude infinity) of light for which there is constructive interference in the light reflected from the top and bottom surfaces of the film is 504 nm; what is the thickness t of the slide? 45 nm; 90 nm; 126 nm; 180 nm
A horizontal thin glass slide with flat surfaces has thickness t and refractive index n = 1.40. The slide is surrounded by air. The slide is illuminated at normal incidence by light that has wavelength λ in air. The longest wavelength (exclude infinity) of light for which there is constructive in...
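A worked outline (my own, assuming the standard condition for constructive reflection from a film bounded on both sides by lower-index air, $2nt = (m + \tfrac{1}{2})\lambda$): the longest such wavelength corresponds to $m = 0$, so $t = \lambda/(4n) = 504\ \mathrm{nm}/(4 \times 1.40) = 90\ \mathrm{nm}$.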
##### Determine the design compressive strength, ϕPn, based upon flexural buckling for the built-up shape sho...
Determine the design compressive strength, ϕPn, based upon flexural buckling for the built-up shape shown. The column length, L, is 28 ft. Assume that the components are connected in such a way that the gross section is fully effective and the ends of the column are fully fixed. ...
##### ENTERING NUMERICAL ANSWERS: Often, your answers will include negative numbers and decimal values. If your answer is not an exact value, you'll want to enter at least 3 decimal places unless the problem specifies otherwise. What is 18?
ENTERING NUMERICAL ANSWERS: Often, your answers will include negative numbers and decimal values. If your answer is not an exact value, you'll want to enter at least 3 decimal places unless the problem specifies otherwise. What is 18?...
##### 13. Find the critical values of the function, f(x) = 2x - 3x - 36x +...
13. Find the critical values of the function, f(x) = 2x - 3x - 36x + 12. Use the critical values to find the absolute min and max on the interval (-5,5). delle cos de...
##### 3 Consicker he X = and 9 = V UtV+f U+v+l transfocmatioa There i5 one Cons Foct C foc any cegion fovad Such +ha+ the condmtion 2 (6y) dudv 2 €C JL satisfles 0 (u,v) Show i+9 ) Integral JI YsJA ovec b = fwg):*zysv7 domain calcu la te
3 Consicker he X = and 9 = V UtV+f U+v+l transfocmatioa There i5 one Cons Foct C foc any cegion fovad Such +ha+ the condmtion 2 (6y) dudv 2 €C JL satisfles 0 (u,v) Show i+ 9 ) Integral JI YsJA ovec b = fwg):*zysv7 domain calcu la te...
##### ...sion of a symmetrical top in a gravitational field, by imposing the requirement that the motion be a uniform precession without nutation. 41. Show that the magnitude of the angular momentum for a heavy symmetrical top can be expressed as a function of θ and the constants of the motion only. Prove that as a result the angular momentum vector precesses uniformly only when there is uniform precession of the symmetry axis
...sion of a symmetrical top in a gravitational field, by imposing the requirement that the motion be a uniform precession without nutation. 41. Show that the magnitude of the angular momentum for a heavy symmetrical top can be expressed as a function of θ and the constants of the motion only. Prove th...
##### PROBLEM 3: 255. Match the items below by entering the appropriate code letter in the space...
PROBLEM 3: 255. Match the items below by entering the appropriate code letter in the space provided. A Prenumbered documents B. Custody of an asset should be kept separate from the record-keeping for that asset Television monitors, garment sensors and burglar alarms are examples D. Bonding employees...
|
2023-03-23 18:03:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6516068577766418, "perplexity": 6947.968005899067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945182.12/warc/CC-MAIN-20230323163125-20230323193125-00526.warc.gz"}
|
http://www.reynwar.net/ben/docs/statistics/
|
Statistics Summary
Author: Ben Reynwar. License: whatever Wikipedia uses, since there are probably bits cut and pasted. Dates: 2011 April 1 to 2011 May 28.
These are notes I made while working through the book "Statistics Explained". They're rather terse but include R examples so I thought they might possibly be useful.
Type I error - finding an effect that isn't real.
Type II error - not finding a real effect.
Some Useful Distributions
Chi-square Distribution
The chi-square distribution with k degrees of freedom is the distribution of the sum of squares of k independent standard normal variables.
It is a special case of the gamma distribution.
Useful for estimating the variance of a population. Suppose we have n observations from a normal population $$N(\mu, \sigma^2)$$ and $$S$$ is their standard deviation; then $$\frac{(n-1)S^2}{\sigma^2}$$ has a chi-squared distribution with n-1 degrees of freedom. Since $$\sigma$$ is the only unknown, one can get confidence intervals for the population variance.
> # Number of observations in sample.
> n <- 10
> # Set alpha for 95% confidence interval (two-sided)
> alpha <- 0.05
> # Get a sample. Pretend we don't know mean or standard deviation.
> sample = rnorm(n, 25, 8)
> top = var(sample)*(n-1)
> # Get bounds for confidence interval for the standard deviation.
> lower = sqrt(top/qchisq(1-alpha/2, n-1))
> upper = sqrt(top/qchisq(alpha/2, n-1))
> c(lower, upper)
[1] 3.531557 9.373243
F-Distribution
The ratio of two independent chi-squared random variables, each divided by its degrees of freedom, has an F-distribution.
> # We take two samples from a normal population and take the ratio of variances.
> n1 <- 3; n2 <- 4
> # A 95% confidence interval for our result is:
> alpha <- 0.05
> c(qf(alpha/2, n1, n2), qf(1-alpha/2, n1, n2))
[1] 0.06622087 9.97919853
> mu <- 10; sigma <- 5
> sample1 <- rnorm(n1, mu, sigma)
> sample2 <- rnorm(n2, mu, sigma)
> ratio <- var(sample1)/var(sample2)
> # Calculated ratio is:
> ratio
[1] 0.2974744
Student's t-distribution
• An estimated distribution of sample means. Different from a normal distribution since it takes into account the uncertainty in the standard deviation.
• Depends on number of degrees of freedom of the standard deviation used.
> # Calculate the value of x for which the cumulative distribution of the
> # t-distribution is 0.05, for degrees of freedom = 1000.
> qt(0.05, 1000)
[1] -1.646379
> qt(0.05, 3)
[1] -2.353363
> # Get the density of a t-distribution.
> dt(0.05, 1000)
[1] 0.3983438
> # Get the cumulative distribution
> pt(1, 1000)
[1] 0.8412238
Student's t-test
• Assesses the statistical significance of the difference between two sample means.
• Can have paired or unpaired samples (related or independent).
• Assumes the two samples have equal variances. Often used as long as one variance is not more than three times as big as the other.
• Welch's t-test is an extension that does not assume equal variances (a sketch follows the worked example below).
Independent Example
> # Generate a random vector containing 10 values from a normal distribution
> # with mean 10 and standard deviation 2.
> x = rnorm(10, 10, 2)
> x
[1] 7.288907 10.679290 7.021853 12.356291 11.973576 13.452286 10.765847
[8] 12.815315 10.776181 11.478132
> # Generate another similar random vector.
> y = rnorm(10, 10, 2)
> y
[1] 10.228506 9.307230 9.007056 11.699393 12.524014 15.061729 9.449236
[8] 14.378682 10.251995 9.862443
> # Perform the t-test on them.
> # The p-value is the chance that the difference between the means would be
> # at least this large if the null hypothesis were true.
> t.test(x, y, var.equal=TRUE)
Two Sample t-test
data: x and y
t = -0.3273, df = 18, p-value = 0.7472
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-2.346419 1.713897
sample estimates:
mean of x mean of y
10.86077 11.17703
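Welch's t-test, mentioned above, drops the equal-variance assumption; in R it is simply t.test's default behaviour. A minimal sketch reusing x and y (output omitted because the data are random):
> # Welch's t-test: omit var.equal=TRUE (the default is var.equal=FALSE).
> t.test(x, y)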
Analysis of Variance
Variance ratio (F) = (between-conditions variance) / (error variance)
Assume samples come from normally distributed populations with equal variances.
F statistic depends on the dof for the two kinds of variances. These should always be given with the F value:
$$F(df_{bt.conds}, df_{error})$$ = calculated value
The p-value is then found from a table or calculation.
One factor independent measures ANOVA
• also called completely randomised design ANOVA
• We have $$K$$ conditions, with $$n_i$$ samples in the ith condition, and $$N$$ samples overall.
• $$\bar{Y}_{i}$$ denotes the sample mean in the ith condition.
• $$\bar{Y}$$ denotes the overall mean of the data.
• $$Y_{ij}$$ is the jth observation in the ith condition.
Between conditions variance = $$\sum_i n_i(\bar{Y}_{i} - \bar{Y})^2/(K-1)$$
Error variance = $$\sum_{ij} (Y_{ij}-\bar{Y}_{i})^2/(N-K)$$
For two conditions it is mathematically identical to the t-test.
> # Number of points in each data set.
> ni <- 4
> # Generate four sets of data with different means.
> a <- rnorm(ni, 10, 2)
> b <- rnorm(ni, 12, 2)
> c <- rnorm(ni, 14, 2)
> d <- rnorm(ni, 16, 2)
> # Merge them all together into a data frame.
> values <- c(a, b, c, d)
> letters <- c(rep('a', ni), rep('b', ni), rep('c', ni), rep('d', ni))
> df <- data.frame(letter=letters, value=values)
> # Perform an anova analysis.
> fit <- aov(value ~ letter, data=df)
> summary(fit)
Df Sum Sq Mean Sq F value Pr(>F)
letter 3 91.902 30.6342 24.829 1.964e-05 ***
Residuals 12 14.805 1.2338
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Post-hoc Tests
If we find an effect with the F-test then we need to work out where it is coming from. We do this with post-hoc tests. The risk is the increased chance of a type I error.
Least Significant Difference Test
• takes no account of the number of comparisons being made.
• the increased risk of a Type I error is simply accepted (see the R sketch below).
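A minimal sketch in R, reusing the df data frame built in the ANOVA example above: pairwise t-tests with a pooled standard deviation and no p-value adjustment correspond to this least-significant-difference style of comparison (output omitted because the data are random).
> # Pairwise comparisons with unadjusted p-values (LSD-style).
> pairwise.t.test(df$value, df$letter, p.adjust.method = "none")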
Newman-Keuls Test
• a stepwise (sequential) range test on the ordered means; less conservative than the Tukey test because closer means are compared against smaller critical ranges.
Duncan Test
• Duncan's multiple range test; similar in form to Newman-Keuls but with more liberal critical values, giving more power at the cost of a higher Type I error risk.
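As a sketch only: both tests are commonly run through the agricolae package, which is not part of base R and is assumed here to be installed; fit is the aov object from the ANOVA example above.
> library(agricolae)
> # Student-Newman-Keuls and Duncan's multiple range tests on the 'letter' factor.
> SNK.test(fit, "letter")
> duncan.test(fit, "letter")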
Tukey Test
• We have K conditions, with n samples in each. N=nK.
• Studentized Range is the range of samples divided by an estimate of their standard deviation.
• We find the value of the studentized range for which there is some defined probability that the range of the condition means will fall below it.
• If any two conditions differ by more than this amount we can say the difference is significant.
• Depends on number of conditions and dof in standard deviation calculation.
> th <- TukeyHSD(fit)
> th
Tukey multiple comparisons of means
95% family-wise confidence level
Fit: aov(formula = value ~ letter, data = df)
$letter
         diff       lwr      upr     p adj
b-a 0.7224116 -1.609443 3.054266 0.7949923
c-a 4.8536226  2.521768 7.185477 0.0002394
d-a 5.3724864  3.040632 7.704341 0.0000919
c-b 4.1312110  1.799356 6.463066 0.0009963
d-b 4.6500748  2.318220 6.981929 0.0003539
d-c 0.5188638 -1.812991 2.850718 0.9097766
>
> # Now try to do the same thing but more manually for the (d-a) comparison.
> # Variance of residual errors.
> vare <- mean(c(var(a), var(b), var(c), var(d)))
> # Normalize the difference between the means by the estimate of the standard deviation
> # of the means.
> st_range <- (mean(d)-mean(a))/(sqrt(vare/ni))
> # ptukey: ptukey(q, nmeans, df, nranges = 1, lower.tail = TRUE, log.p = FALSE)
> #   q      - a given studentized range
> #   nmeans - the number of samples (i.e. number of means in this case)
> #   df     - degrees of freedom in the calculation of the stdev.
> # It returns the probability that the studentized range of the sample of means is
> # less than the given value of q.
> # For our example nmeans is clearly 4 (a, b, c and d),
> # and df is 4*(ni-1) because we calculated the variance from 4 sets of samples each of
> # which had (ni-1) degrees of freedom.
> manual <- 1 - ptukey(st_range, 4, 4*(ni-1))
> manual
[1] 9.19243e-05
> th$letter["d-a", "p adj"] - manual < 10e-6
[1] TRUE
Scheffé Test
• Very similar to the Tukey method except that we are not limited to pairwise comparisons.
• Let μ1, ..., μr be the means of some variable in r disjoint populations.
• Begin cut and paste from wikipedia: An arbitrary contrast is defined by
$$C = \sum_{i=1}^r c_i\mu_i$$
where
$$\sum_{i=1}^r c_i = 0.$$
If μ1, ..., μr are all equal to each other, then all contrasts among them are 0. Otherwise, some contrasts differ from 0.
Technically there are infinitely many contrasts. The simultaneous confidence coefficient is exactly 1 − α, whether the factor level sample sizes are equal or unequal. (Usually only a finite number of comparisons are of interest. In this case, Scheffé's method is typically quite conservative, and the experimental error rate will generally be much smaller than α.)
We estimate C by
$$\hat{C} = \sum_{i=1}^r c_i\bar{Y}_i$$
for which the estimated variance is
$$s_{\hat{C}}^2 = \hat{\sigma}_e^2\sum_{i=1}^r \frac{c_i^2}{n_i}.$$
It can be shown that the probability is 1 − α that all confidence limits of the type
$$\hat{C}\pm s_\hat{C}\sqrt{\left(r-1\right)F_{\alpha ; r-1 ; N-r}}$$
are simultaneously correct.
> # Perform a Scheffe test to see if the average of samples a and b is significantly
> # different from the average of c and d.
> cs <- c(-0.5, -0.5, 0.5, 0.5)
> means <- tapply(df$value, df$letter, mean)
> c.hat <- sum(cs*means)
> s2 <- vare/ni * sum(cs^2)
> fval = c.hat^2/s2/(4*(ni-1))
> fval
[1] 6.100458
> # The chance of any linear combination deviating by this much if all samples
> # were from the same normal population.
> 1 - pf(fval, 3, 4*(ni-1))
[1] 0.009188318
Analyzing Frequency Data
A population can be divided in c categories. The chance of an observation being of category i is $$p_i$$. We make N observations and find $$n_i$$ in category i.
$$\sum_i{\frac{(n_i-N p_i)^2}{N p_i}}$$ can be approximated by a chi-squared distribution with c-1 degrees of freedom as long as $$N p_i$$ is greater than 5 for all categories.
Possible Uses:
• A goodness of fit test. To see if a sample seems to match a normal distribution, bin the observations and compare the observed frequencies to those expected.
• A test of independence. If we have two types of categories and each observation is a member of one category of each type, then we can check if the categories are independent. We just see how the frequencies deviate from what would be expected if they were independent. (Both uses are sketched in R below.)
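Both uses map onto R's chisq.test. A minimal sketch with made-up counts (the numbers are purely illustrative):
> # Goodness of fit: observed counts against hypothesised category probabilities.
> observed <- c(18, 35, 27, 20)
> chisq.test(observed, p = c(0.2, 0.3, 0.3, 0.2))
> # Test of independence on a 2x2 table of counts
> # (Yates' continuity correction is applied by default for 2x2 tables).
> counts <- matrix(c(30, 10, 20, 40), nrow = 2)
> chisq.test(counts)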
Linear Correlation
We have a sample of (x, y) pairs. The Pearson correlation coefficient is:
$$r = \frac{\sum_i (x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_i (x_i-\bar{x})^2 \sum_i (y_i-\bar{y})^2}}$$
r=1 is perfect positive correlation, r=0 is uncorrelated, r=-1 is perfect negative correlation. It is the slope of the line of best fit through the standardized variables.
Under the null hypothesis of no correlation, the variable $$t = r\sqrt{\frac{n-2}{1 - r^2}}$$ has a Student's t-distribution with n-2 degrees of freedom.
> # Create some correlated data.
> n <- 10
> a <- rnorm(n, 12, 1)
> b <- 3*a + 6 + rnorm(n, 0, 3)
> r <- cor(a, b)
> r
[1] 0.5985606
> t <- r * sqrt((n-2)/(1-r^2))
> t
[1] 2.113384
> # The probability of a correlation being this far from 0 by chance is:
> 2 * (1 - pt(t, n-2))
[1] 0.06751678
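The built-in cor.test carries out the same calculation; its t statistic and two-sided p-value should agree with the manual values above (output omitted because the data are random).
> # Built-in equivalent of the manual t and p-value computed above.
> cor.test(a, b)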
|
2018-12-11 11:39:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.741765022277832, "perplexity": 2174.6224740151474}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823618.14/warc/CC-MAIN-20181211104429-20181211125929-00388.warc.gz"}
|
http://mathhelpforum.com/calculus/210501-help-limit-log-print.html
|
# Help on limit with log
• Dec 30th 2012, 06:53 AM
bizan
Help on limit with log
Hello, I have spent several hours stuck on this limit:
$\lim_{x\to 0} \frac{\ln(1+x)-x}{x^2}$
I managed to solve it by L'hopital, its limit is -0.5 but I was wondering if anyone can come up with a different method, as I am not supposed to use L'hopital at this stage.
I tried
$\lim\hspace{0.1cm} \ln(1+x)^{\frac{1}{x^2}} - \lim \frac{1}{x}$ = $\ln \hspace{0.1cm} \lim(1+x)^{\frac{1}{x}\cdot\frac{1}{x}} - \lim \frac{1}{x}$ = $\ln e^{1/x} - \lim \frac{1}{x}$
But it did not work.
Any idea? Thanks!
• Dec 30th 2012, 07:36 AM
abender
Re: Help on limit with log
$\ln(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\frac{x^5}{5} - \cdots$
$\ln(1+x)-x=-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\frac{x^5}{5} - \cdots$
$\frac{\ln(1+x)-x}{x^2}=-\frac{1}{2}+\frac{x}{3}-\frac{x^2}{4}+\frac{x^3}{5} - \cdots$
Easier to see now?
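A quick numerical check points the same way; evaluating the ratio close to zero (sketched here in R, with approximate values in the comments):
f <- function(x) (log(1 + x) - x) / x^2
f(1e-3)   # roughly -0.4997
f(1e-5)   # roughly -0.50000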
• Dec 30th 2012, 07:40 AM
bizan
Re: Help on limit with log
Wow!, now it makes sense. Thank you very much :).
• Dec 30th 2012, 05:31 PM
hollywood
Re: Help on limit with log
That would work if you were allowed to use series. But that is equivalent to L'Hopital's rule:
$\lim_{x\to 0} \frac{f(x)}{g(x)} = \lim_{x\to 0} \frac{f(0)+f'(0)x+c_1x^2+\dots}{g(0)+g'(0)x+c_2x^2 +\dots}$
and assuming $f(0)=g(0)=0$,
$\lim_{x\to 0} \frac{f(x)}{g(x)} = \lim_{x\to 0} \frac{f'(0)+c_1x+\dots}{g'(0)+c_2x+\dots} = \frac{f'(0)}{g'(0)}$
- Hollywood
• Dec 30th 2012, 05:50 PM
bizan
Re: Help on limit with log
I am allowed to use L'hopital's rule. But I am also asked to do it with a different method.
Given that in another exercise I had to expand ln(1+x) using a taylor series (which gives as a result the series abender posted) I think that is the extra way they want me to do it.
• Dec 30th 2012, 06:58 PM
hollywood
Re: Help on limit with log
I see. You're right - abender's calculation is undoubtedly what they are looking for.
- Hollywood
|
2016-09-30 17:39:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9174183011054993, "perplexity": 1479.476510892628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662321.82/warc/CC-MAIN-20160924173742-00299-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://mcpt.ca/problem/lcc20c5j1
|
## LCC/Moose '20 Contest 5 J1 - Tracy the Chat Filter
View as PDF
Points: 3 (partial)
Time limit: 2.0s
Memory limit: 64M
Author:
Problem type
Tracy has been assigned the role of being a manual chat filter. She needs to detect strings with too many capital letter words. If strictly more than half of the words in the string are composed completely of capital letters, the string needs to be removed. Given a string composed of upper and lower case alphabetical characters and spaces separating the words, can you output whether the string needs to be removed or not?
#### Input Specification
The first line will contain a single integer, the length of Tracy's string.
The next line will contain a single string composed of upper and lowercase alphabetical characters and spaces. Each word in the string is guaranteed to contain at least one character.
#### Output Specification
Output yes if Tracy should remove the string, or no otherwise.
#### Sample Input 1
11
hi hi HI HI
#### Sample Output 1
no
#### Sample Input 2
39
FJHGDBN G G FD D S e e e e eeeeeeEEEEEE
#### Sample Output 2
yes
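A minimal R sketch of one possible solution (not an official reference solution; it assumes the input arrives on standard input exactly as specified above):
input <- readLines(file("stdin"))
words <- strsplit(input[2], " +")[[1]]
words <- words[words != ""]                  # guard against stray spaces
all_caps <- grepl("^[A-Z]+$", words)         # words made up entirely of capitals
writeLines(if (2 * sum(all_caps) > length(words)) "yes" else "no")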
|
2021-07-26 22:47:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17530593276023865, "perplexity": 2745.5907275918075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152156.49/warc/CC-MAIN-20210726215020-20210727005020-00358.warc.gz"}
|
http://mathematica.stackexchange.com/tags/calculus-and-analysis/hot
|
# Tag Info
## Hot answers tagged calculus-and-analysis
10
There is also a newer package, HolonomicFunctions, that has an implementation of Chyzak's generalization of Zeilberger's algorithm. To perform the desired task, use the following commands: smnd = Simplify[ G /. HoldPattern[HypergeometricPFQ[pl_List, ql_List, x_]] :> (Times @@ (Pochhammer[#, k] & /@ pl)) / (Times @@ (Pochhammer[#, k] & /@ ql)...
9
Let's rename things slightly to make it more consistent g = Fit[newdata, {1, x, x^2, x^3, x^4}, x]; To find inflection points, you can just put (blue) points where the second derivative is zero. Plot[g, {x, 20, 60}, Epilog -> {Red, PointSize[0.02], Point[newdata], Blue, Point[{x, g} /. Solve[D[g, {x, 2}] == 0]]}, PlotRange -> {{-5, 70}, {-5, ...
8
You can use the Euler-Maclaurin formula to get the limit (the sum can be approximated by an integral, which becomes exact in the infinite limit): f[i_] = i/(n^2 - i + 1); Integrate[f[k], {k, 0, n}, Assumptions -> n > 0] Limit[%, n -> Infinity] 1/2
7
Reverse the order of the summation. i.e., k -> (n - k + 1) s = Sum[(n - k + 1)/(n^2 - (n - k + 1) + 1), {k, 1, n}] // Simplify (* -n + (1 + n^2) PolyGamma[0, 1 + n^2] - (1 + n^2) PolyGamma[0, 1 - n + n^2] *) Limit[s, n -> Infinity] (* 1/2 *) For an alternative representation s2 = FullSimplify[s] (* -n - (1 + n^2) HarmonicNumber[(-1 + n) n] ...
6
Assume analyticity : Limit[1 + (r El'[r])/El[r], r -> 0, Analytic -> True] (* 1 *) Analytic->True assumes that generic functions (e.g., El[r] and El'[r] in this case) are analytic.
6
Using the conditions in the book ClearAll[a, x]; expr = Log[1 + 2 a*Cos[2 x] + a^2]*Sin[x]^2; r = Integrate[expr, {x, 0, Pi/2}, Assumptions -> a^2 > 1] Book result You used $a>1$ but the book says to use $a^2>1$. These are not the same. Update: I asked about this on another forum. Experts opinions says that result should be valid for $... 5 In addition to what was suggested by @BobHanlon, one might just replace the quotient of El'[r])/El[r] by the assumed value of A+O[r] to denote that it is the given constant A to first order. In[646]:= Limit[1 + r (A + O[r]), r -> 0] (* Out[646]= 1 *) 5 The problem appears to be that Mathematica assumes certain values for a and b so that it can use a particular expression to obtain the result. This is not (necessarily) consistent with the assumptions that you supply for Simplify. The solution is to supply the assumptions around the integral, so that they can be accounted for there. I believe that the ... 4 TL;DR Use HeavisideTheta's properties before integration. This is my strategy. First the HeavisideTheta gives you the following integration limits: $$0\leq y \leq 1-x \qquad \& \qquad 0\leq x \leq 1$$ $$0\leq x \leq 1-y \qquad \& \qquad 0\leq y \leq 1$$ In both cases I used Integrate first then NIntegrate. In the first case I could not ... 4 [...] I am only interested in very fast numerical methods, no analytical results are needed. [...] I have no idea how I can do it in Mathematica The package AdaptiveNumericalLebesgueIntegration.m has Lebesgue integration strategy and rules implementations and it is discussed in detail in the blog post "Adaptive numerical Lebesgue integration by set ... 4 The problem is that the two Hypergeometric2F1 terms each take the value of ComplexInfinity when n=1. The resulting difference is necessarily undefined. If Mathematica substitutes n->1 before evaluating the integral, it is able to use a more specific integration technique. This sort of behaviour occurs frequently: Mathematica results that are ... 4 Consider the following additional definition: Clear[deriv] deriv[α_][a_ x_^k_][x_] := If[k >= α, Gamma[k + 1]/Gamma[k + 1 - α] a x^(k - α), 0] deriv[α_][polynomial_Plus][x_] := Plus @@ (deriv[α][#][x] & /@ MonomialList[polynomial]) Now: deriv[1.0][5 x^1.2 + 3 x^0.8][x] (* Out: deriv[1.][0][x] *) You should still add definitions for such special ... 3 Maybe you want something like this?: eq = 4/2 x (1 - x) (x - 1/2) - 1/2 m x^2 sols = x /. Solve[expr == 0, x] (* Out[2] := {0,1/8 (6-m-Sqrt[4-12 m+m^2]),1/8 (6-m+Sqrt[4-12 m+m^2])} *) realandimaginarysolutions = Through[{Re, Im}[#]] & /@ sols (* Out[3] := {{0,0} ,{1/8 (6+Re[-m-Sqrt[4-12 m+m^2]]),1/8 Im[-m-Sqrt[4-12 m+m^2]]} ,{... 3 Another form I have found useful is: f = ListInterpolation[ Table[Sin[x y], {x, 0, 1, .25}, {y, 0, 2, .25}], {{0, 1}, {0, 2}}] Then one can make derivative functions that can be treated as normal function via dfdx[x_, y_] := Evaluate[D[f[x, y], {x, 1}]] dfdy[x_, y_] := Evaluate[D[f[x, y], {y, 1}]] and the second derivatives d2fdx[x_, y_] := Evaluate[... 3 Using an example from the documentation of ListInterpolation: f = ListInterpolation[ Table[Sin[x y], {x, 0, 1, .25}, {y, 0, 2, .25}], {{0, 1}, {0, 2}}] dfdx[u_, v_] := D[f[x, y], x] /. {x -> u, y -> v} dfdy[u_, v_] := D[f[x, y], y] /. {x -> u, y -> v} Manipulate[ Show[{Plot3D[f[x, y], {x, 0, 1}, {y, 0, 2}], Graphics3D[{Red, PointSize[0.03]... 
3 f[x_] = Piecewise[{{Sqrt[x], x >= 0}, {Sqrt[-x], x < 0}}]; f'[x] gives $$\begin{cases} -\frac{1}{2 \sqrt{-x}} & x<0 \\ \frac{1}{2 \sqrt{x}} & x>0 \\ \text{Indeterminate} & \text{True} \end{cases}$$ f[x_] = Sqrt[Abs[x]]; f'[x] gives $$\frac{\text{Abs}'(x)}{2 \sqrt{\text{Abs}(x)}}$$ So the second form doesn't evaluate ... 3 In cases where you can't get a symbolic result, it's also possible to use a completely numerical approach: Needs["NumericalCalculus"] sum[n_?NumberQ] := NSum[k/(n^2 - k + 1), {k, 1, n}] NLimit[sum[n], n -> Infinity] (* ==> 0.499999 *) 3 The following definition takes as arguments two pure functions, a and b, their argument x and the parameter n. HirotaD[a_, b_, x_, n_] := Module[{}, sol = D[a[x + y]*b[x - y], {y, n}] /. y -> 0 // TraditionalForm; Print[("\!$$\*SubscriptBox[\(D$$, $$x$$]\)")^n, "=", sol]]; It works on general functions, not yet defined. HirotaD[a, b, x, 1] (*... 3 Use of NIntegrate and ?NumericQ were helpful in reducing errors. Take a look at the referenced questions at the bottom for additional information. ClearAll["Global*"] data = {{0.0049351, 887.55}, {0.014628, 2076.6}, {0.024377, 2684.6}, {0.034198, 3044.85}, {0.043943, 3281.3}, {0.053758, 3454.15}, {0.06349, 3585.85}, {0.073305, 3692.4}, {0.... 2 Please make your question clear. But I think you're simply using f'[x,y] and hope that you can get a result? Try the following code: f = ListInterpolation[ Table[Sin[x y], {x, 0, 1, .25}, {y, 0, 2, .25}], {{0, 1}, {0, 2}}] D[f[x, y], x] Plot3D[Evaluate@D[f[x, y], x], {x, 0, 1}, {y, 0, 2}] for higher order: D[f[x,y],{x,2}] Will this code help? ... 2 I've edited my answer in the linked thread so that it can now be used without modification. Previously, you had to define the functions uv that you wish to differentiate in a more general way, replacing the explicit 0 in their argument with y. The reason is that my earlier answer assumed differentiations are performed on the given function, not on the new ... 2 This has been confirmed as a bug by Wolfram support. 2 You are correct. It is the logical OR function. It evaluates its arguments in order, giving True immediately if any of them are True, and False if they are all False. https://reference.wolfram.com/language/ref/Or.html 2 To get a simpler form f[a_, b_] =Assuming[a > 0 && b > 0, Integrate[ 1/((x^2 - a^2)^2 + b^4), {x, 0, Infinity}] // ComplexExpand[#, TargetFunctions -> {Re, Im}] & // Simplify] Plot3D[f[a, b], {a, -5, 5}, {b, -6, 6}, ClippingStyle -> None] 2 I am definitely not an expert in this field, but I believe that some part of Integrate uses Risch's algorithm. And that, to my understanding, is also the reason why it's not easy to show the "steps" Integrate takes to solve an integral. The intermediate "steps" are not meaningful to most humans. Some more references are in the page "Some Notes On Internal ... 2 Plugging in F[x_] := Piecewise[{ {0, x == 0}, {2*x*Cos[1/x], x != 0}}]; Integrate[F[x], x] leads to the output Piecewise[{{2 (1/2 x^2 Cos[1/x] + 1/2 CosIntegral[1/x] - 1/2 x Sin[1/x]), x <= 0}}, I \[Pi] + 2 (1/2 x^2 Cos[1/x] + 1/2 CosIntegral[1/x] - 1/2 x Sin[1/x])]] 2 To answer this ill-posed question, we need to know the following, Values of m and θ which to do not produce a singularity in the integrand in the domain of interest, {r, 1, 8}. I choose m = 20 and θ = 4 π, more or less arbitrarily. A syntactically correct integrand. I am guessing you want 1/(1 - 4 m/(r Sqrt[π]) GammaRegularized[3/2, r^2/(4 θ)]) With ... 
2 The integrand cannot be solved when$|z|\rightarrow1\$ Therefore, let's integrate the function on a possible domain, numerically. zdat=Table[NIntegrate[f, {Phi, 0, 7 Pi/18}, {z, 0, i}], {i, -0.95, 0.95, 0.1}]; This gives us a list of values, from which we can approximate a function for this part of the domain for z. Plotting this list gives us: lp = ...
2
Clear your variables before you run and "cosine" isn't recognized by Mathematica. You need to use Cos[]. ClearAll["Global`*"] Integrate[1/(1 + 3 Cos[x] Cos[x]), {x, -Pi, Pi}] Pi In regard to your follow-up question in the comments about the following equation: $$1/(1 - Cos[x] - I (1/3) Sin[x])$$ The integral doesn't converge with the region {x,-Pi,...
2
To expand on my comment: in Gradshteyn and Ryzhik (the seventh edition, at least), they list formula 2.174, which I think is more practical for computational purposes than the direct output of Mathematica. Translated into Mathematica syntax for the OP's specific case, if we have int[n_] := Integrate[t^n/(t^2 + b t + 1), t] then there is the useful (...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2016-07-23 09:20:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.852871835231781, "perplexity": 2666.086489817014}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257821671.5/warc/CC-MAIN-20160723071021-00266-ip-10-185-27-174.ec2.internal.warc.gz"}
|
http://mymathforum.com/algebra/347266-prove-aba-singular.html
|
My Math Forum Prove ABA' is singular
Algebra Pre-Algebra and Basic Algebra Math Forum
October 19th, 2019, 06:02 PM #1 Newbie Joined: Oct 2019 From: Taiwan Posts: 2 Thanks: 0 Prove ABA' is singular A is an MxN matrix. B is an NxN matrix. If N < M, prove that ABA' is singular.
October 19th, 2019, 11:16 PM #2 Senior Member Joined: Oct 2018 From: USA Posts: 102 Thanks: 77 Math Focus: Algebraic Geometry We know that $A$ is $m \times n$ and $B$ is $n \times n$, so $AB$ is $m \times n$ and $ABA^{t}$ is $m \times m$. Since $n < m$, we have $\text{rank}(ABA^{t}) \le \text{rank}(A) \le n < m$, so $ABA^{t}$ cannot have full rank and is therefore singular.
October 20th, 2019, 12:50 AM #3 Newbie Joined: Oct 2019 From: Taiwan Posts: 2 Thanks: 0 Thank you!! Very clear.
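A quick numerical illustration of the rank argument in R (the matrices are random and purely illustrative):
A <- matrix(rnorm(5 * 3), nrow = 5)   # m = 5, n = 3, so n < m
B <- matrix(rnorm(3 * 3), nrow = 3)
M <- A %*% B %*% t(A)                 # an m x m matrix
qr(M)$rank                            # at most n = 3, so less than m
det(M)                                # numerically indistinguishable from 0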
Tags aba, prove, singular
|
2019-11-13 21:51:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46166545152664185, "perplexity": 10569.771103906884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667442.36/warc/CC-MAIN-20191113215021-20191114003021-00170.warc.gz"}
|
https://cs.stackexchange.com/questions/102466/what-important-crucial-real-world-applications-use-blockchain/102468
|
# What important/crucial real-world applications use blockchain?
As part of some blockchain-related research I am currently undertaking, the notion of using blockchains for a variety of real-world applications is thrown about loosely.
Therefore, I propose the following questions:
1. What important/crucial real-world applications use blockchain?
2. To add on to the first question, more specifically, what real-world applications actually need blockchain - who may or may not currently use it?
From a comment, I further note that this disregards the notion of cryptocurrencies. However, smart contracts can have other potential applications aside from the benefits they bring to cryptocurrencies.
• Some think that voting could be done using blockchains. I don't think this is a good idea, but you might be interested in research in that area. – Bakuriu Jan 6 '19 at 23:05
• We don't have a strict policy for list questions, but there is a general dislike. Please note also this and this discussion; you might want to improve your question as to avoid the problems explained there. If you are not sure how to improve your question maybe we can help you in Computer Science Chat? – Raphael Jan 6 '19 at 23:29
• See this The Register article: "Blockchain study finds 0.00% success rate and vendors don't call back when asked for evidence" – Uwe Keim Jan 7 '19 at 12:44
• @Bakuriu: Correction: some people think they can make a load of money selling people the idea that blockchains have some application in voting. They don't. – R.. GitHub STOP HELPING ICE Jan 7 '19 at 14:41
• Relevant XKCD, in particular the final panel. – gerrit Jan 8 '19 at 11:47
Apart from Bitcoin and Ethereum (if we are generous) there are no major and important uses today.
It is important to notice that blockchains have some severe limitations. A couple of them being:
• It only really works for purely digital assets
• The digital asset under control needs to keep its value even if it's public
• All transactions need to be public
• A rather bad confirmation time
• Smart contracts are scary
Purely digital assets
If an asset is actually a physical asset with just a digital "twin" that is being traded, we will risk that local jurisdiction (i.e. your law enforcement) can have a different opinion of ownership than what is on the blockchain.
To take an example; suppose that we are trading (real and physical) bikes on the blockchain, and that on the blockchain, we put its serial number. Suppose further that I hack your computer and put the ownership of your bike to be me. Now, if you go to the police, you might be able to convince them that the real owner of the bike is you, and thus I have to give it back. However, there is no way of making me give you the digital twin back, thus there is a dissonance: the bike is owned by you, but the blockchain claims it's owned by me.
There are many such proposed use cases (trading physical goods on a blockchain) out in the open of trading bikes, diamonds, and even oil.
The digital assets keep value even if public
There are many examples where people want to put assets on the blockchain, but are somehow under the impression that this gives some kind of control. For instance, musician Imogen Heap is creating a product in which all musicians should put their music on the blockchain and automatically be paid when a radio station plays their hit song. They are under the impression that this creates an automatic link between playing the song and paying for the song.
The only thing it really does is to create a very large database for music which is probably quite easy to download.
There is currently no way around having to put the full asset visible on the chain. Some people are talking about "encryptions", "storing only the hash", etc., but in the end, it all comes down to: publish the asset, or don't participate.
Public transactions
In business it is often important to keep your cards close to your chest. You don't want real time exposure of your daily operations.
Some people try to make solutions where we put all the dairy farmers' production on the blockchain together with all the dairy stores' inventory. In this way we can easily send trucks to the correct places! However, this makes both farmers and traders liable for inflated prices if they are overproducing/under-stocked.
Other people want to put energy production (solar panels, wind farms) on the blockchain. However, no serious energy producer will have real time production data out for the public. This has major impact on the stock value and that kind of information is the type you want to keep close to your chest.
This also holds for so-called green certificates, where you ensure you only use "green energy".
Note: There are theoretical solutions that build on zero-knowledge proofs that would allow transactions to be secret. However, these are nowhere near practical yet, and time will show if this item can be fixed.
Confirmation time
You can, like Ethereum, make the block time as small as you would like. In Bitcoin, the block time is 10 minutes, and in Ethereum it is less than a minute (I don't remember the specific figure).
However, the smaller block time, the higher the chance of long-lived forks. To ensure your transaction is confirmed you still have to wait quite long.
There are currently no good solutions here either.
Smart contracts are scary
Smart contracts are difficult to write. They are computer programs that move assets from one account to another (or do something more complicated). However, we want traders and "normal" people to be able to write these contracts, and not rely on computer science programming experts. You can't undo a transaction. This is a tough nut to crack!
If you are doing high-value trading and end up writing a zero too much in the transaction (say $10M instead of $1M), you call your bank immediately! That fixes it. If not, let's hope you have insurance. In a blockchain setting, you have neither a bank nor insurance. Those $9M are gone, and it was due to a typo in a smart contract or in a transaction.
Smart contracts are really playing with fire. It's too easy to empty all your assets in a single click. And it has happened, several times. People have lost hundreds of millions of dollars due to smart contract errors.
Source: I am working for an energy company doing wind and solar energy production as well as trading oil and gas. Have been working on blockchain solution projects.
• "People have lost hundreds of millions of dollars due to smart contract errors." - Wow, this is really, really scary. – Pedro A Jan 6 '19 at 12:47
• Well, while the provided stats are interesting (although a source would be welcome), i would like to emphasis the word contract in smart contract. An added zero in a contract, smart or not, can't be compared to a fault in a transaction. To me, wanting to discard professionnals in code in smart contracts is exactly like wanting to discard lawyers from (non-smart) contracts. If you care about contract's effects (in blockchain or in law), you need professionnals to write it. And either way, you need a strong proofreading. Do not fall for the harmful idea that good IT is simple IT. – aluriak Jan 6 '19 at 20:03
• @aluriak Judges will generally uphold contracts despite typos they might contain, unless the agreeing parties had a grossly differing interpretation of some figure or clause, in which case the judge might annul it, seeing that a misunderstanding took place. Self-executing code has no such forgiveness. – Seldom 'Where's Monica' Needy Jan 6 '19 at 21:45
• "There are theoretical solutions that build on zero-knowledge proofs that would allow transactions to be secret. However, these are nowhere near practical yet" ZCash Shielded addresses are a working implementation of zero-knowledge proofs used to hide individual transactions for monetary exchange. You can use them right now. I'd argue that's a practical implementation. – Ari Lotter Jan 7 '19 at 20:57
There are varying definitions of blockchain, and the answer to this question depends a lot on whether you consider the broad or the narrow interpretation. Typical cryptocurrency implementations such as Bitcoin have two parts:
1. A chain of blocks, linked by cryptographic hashes (SHA256 in Bitcoin) so that the identity of the newest block prevents modifying any earlier record. The most common structure is the Merkle tree, which was first patented in 1979. (A minimal sketch of such a hash chain is given after this list.)
2. A peer-to-peer network of computers that decides what is the newest block (also called the "consensus protocol"). In Bitcoin this is done by a proof-of-work mechanism (so-called mining), which distributes the trust and authority in the network.
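A minimal sketch of the first part, a hash chain, in R; it assumes the third-party digest package is installed, and the payload strings are placeholders:
library(digest)   # assumed to be installed; not part of base R
prev <- ""
chain <- list()
for (payload in c("block 1 transactions", "block 2 transactions", "block 3 transactions")) {
  h <- digest(paste(prev, payload), algo = "sha256")
  chain[[length(chain) + 1]] <- list(prev = prev, payload = payload, hash = h)
  prev <- h
}
# Altering any earlier payload changes every later hash, so the newest
# hash commits to the entire history.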
A wide interpretation of blockchain would be anything that has the first part, a chain of blocks. These have many widely used applications that predate the cryptocurrencies. Some examples:
• Git version control system, where Merkle tree is used to protect the version history of software against modification.
• Certificate Transparency logs, which allow public monitoring of issued HTTPS certificates.
• Many distributed database systems such as Apache Cassandra, where it is used to check for data consistency between nodes.
However, even though the Merkle tree is a "chain of blocks", many consider that it alone doesn't make a system blockchain based. After all, blockchain is considered a new invention, and Merkle tree definitely isn't new. There is merit to both sides of the argument.
As Pål GD's answer details, apart from cryptocurrencies, there haven't been any widely spread real applications of the full Merkle tree + peer-to-peer network combination.
• I agree that git is a good starting point if you want to learn what a blockchain is, but it lacks one important thing: there is no consensus mechanism! In blockchain, the consensus mechanism is that the most "expensive" chain is the truth. There is no such thing in the git protocol. – Pål GD Jan 6 '19 at 16:08
• A Merkle tree is not a "blockchain" despite lots of buzzword-laundering scammers trying to convince people it is. Blockchain necessarily involves a consensus protocol of some sort. It can be (and often is) an idiotic one, but there at least needs to be one. – R.. GitHub STOP HELPING ICE Jan 7 '19 at 14:39
• @R.. Hmm, what source do you base your comment on, or is it just your opinion? And defining "consensus protocol" is not straightforward either, is "whatever github.com contains" an example of an idiotic consensus protocol? ;) – jpa Jan 7 '19 at 17:53
• @jpa: Yes, I think degenerate cases like dictatorship (consensus defined as everyone agrees with the dictator) count as an idiotic consensus protocol. Otherwise iota wouldn't be a blockchain. ;-) – R.. GitHub STOP HELPING ICE Jan 7 '19 at 23:41
The given answers focus on the open p2p blockchains of Bitcoin and its likes.
There are, however, also such initiatives as Hyperledger, R3 Corda, and the Enterprise Ethereum Alliance (even cloud providers, e.g. AWS, have offerings). These kinds of platforms tend to avoid the time-consuming proof-of-work part and reach consensus between selected parties, so they are not necessarily open to anyone with an internet connection. They also do not always expose the information in the blocks to the entire world, and instead tend to have protections governing who can read what on the chain.
These platforms tend to promote their usefulness in cases where parties not wanting to trust each-other, or a third party, with some information, still need a shared source of said data, with agreed-upon rules of how the data will be changed that can be verified.
Goals in using such distributed ledgers include different things, such as added security, transparency and auditability, anonymity, scalability, increased industry collaboration, and allowing for new business models. Which, and how, would depend on which industry and application, but maybe some ideas can be found in this survey or similar places. These kinds of platform are likely what existing companies would look at using if they got into the blockchain space.
Looking at pieces that the platforms advertise actually being used in, we find such initiatives as:
Commodity tracking - for example major food producers and retailers joining a network aimed at "...connecting growers, processors, distributors, and retailers through a permissioned, permanent and shared record of food system data.".
Data sharing - for example insurers sharing data for compliance reasons to a network where regulators with permission can look at it. There can also be improved handling of documents on a network instead of current siloes.
Personal information control - for example hu-manity.co controlling how personal data is shared with companies.
Since blockchain is new and untested, there would at the moment be more experiments and proof-of-concept applications rather than real-world ones. Many of them will turn out to be poor matches for a hyped technology looking for a problem to solve. However, permissioned or consortium distributed ledgers is one place too look where smaller projects have started to be launched for real applications.
• A really important use case of the food network you describe is back-tracing food-borne illness—the network helps radically cut the time to identify sources. – D. Ben Knoble Jan 6 '19 at 23:49
• Nice examples. I would also add decentralized DNS as an application. Namecoin came out early with dot-bit and more recently there's the ethereum name service, etc. – sfmiller940 Jan 8 '19 at 19:15
An application that is not big yet, but that may become big soon is authentication of digital documents. I don't know anybody who does this yet, but it is being discussed.
The problem is this: An administrative authority of some sort has thousands, if not millions of digital documents in their care. How do we make sure that the documents that are in the database today are identical to the ones that were there yesterday?
This can have large legal consequences.
One could make several full backups on DVDs or something and store them is several different safe places, but this is costly and still not really safe.
Another problem is that these documents can be confidential and you really don't want to spread copies of them around.
Instead one can make lists of hash signatures and spread those around. They are much smaller and also not confidential. (If done right)
Now, I'm not sure we really need the chain aspect of blockchains; two or three levels of Merkle trees are probably sufficient. However, as long as we are hashing things anyway, it costs very little to add the signature list as a document for the next batch. Maybe not necessary, but it doesn't hurt.
One weakness in this system is that documents can be deleted. With only the hash value to go by we can't reconstruct them, but it would add a very visible hole in the data that should at least look ugly to those concerned.
• As you say, this just needs lists of hashes to be stored in multiple places; no need for the blockchain at all. – David Richerby Jan 9 '19 at 11:59
• @DavidRicherby, A "distributed list of hashes" may solve some problems, but not many. A public git repository would be way better. Still someone could be trying to do a major rebase operation, push through a new and improved history and claim "this is the right history, your history is the forged one". Blockchains try to make that impossible (like, if you want to do a major rebase operation in Bitcoin you will need to control all the worlds mining hardware ... came to think, Bitmain could probably do that?) – tobixen Jan 9 '19 at 23:03
• This problem does not require proof of work or crypto coin mining at all. And O.O.'s answer that touches on distributed ledger consensus between a few selected trusted parties addresses all that is required to solve this problem. – lamont Jan 10 '19 at 19:34
• Whom selects whom is to be trusted? I do not like the government choosing entities verifying the government work. Anyway, "proof of work" is not required for something to be called a "blockchain" (and personally I don't believe there is any future in PoW). – tobixen Jan 11 '19 at 19:50
|
2021-05-13 16:48:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3347819149494171, "perplexity": 1894.144320337291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989814.35/warc/CC-MAIN-20210513142421-20210513172421-00539.warc.gz"}
|
https://marcmarti.co/7nhncii/questions-on-gravitation-for-class-11-c418f0
|
CBSE Class 11 – Physics: Gravitation Practice Questions and Answers

Get top-class preparation for CBSE right from your home: fully solved questions with step-by-step explanations. This page collects practice material on Gravitation for CBSE Class 11 Physics (Chapter 8) and the related CBSE Class 9 Science chapter (Chapter 10): chapter-wise multiple-choice and objective questions, practice papers with solutions as free PDF downloads, NCERT textbook solutions, HC Verma solutions (Part 1, Chapter 11), Rajasthan Board RBSE Class 11 Chapter 6 textbook exercises, topper notes, sample papers, and question banks for NEET and JEE. Questions based on gravitation in IIT JEE, AIEEE and other engineering examinations tend to be direct, without many twists and turns, so you are expected to work through all of them to remain competitive; for the physics exam itself, questions can be framed from any corner of the book or even outside it, so advanced practice beyond the textbook is needed. Guess papers 2020 for the Class 9th, 10th, 11th and 12th Science and Commerce groups are available on the Adamjee Coaching Centre website.

Gravitation is much more than an everyday phenomenon: it encompasses theories and principles that are a central part of the Class 11 Physics syllabus. Key ideas covered in the chapter:

- Universal law of gravitation: the forces of mutual attraction acting between two point particles are directly proportional to the product of the masses of these particles and inversely proportional to the square of the distance between them. Every object in the universe attracts every other object with this force. According to Newton's law of gravitation, the apple and the Earth experience equal and opposite forces, yet it is the apple that falls towards the Earth and not vice versa.
- Gravity versus gravitation: "gravitation" names the theory that explains the attraction, while "gravity" names the force that pulls objects towards each other.
- Gravitation is one of the four classes of interactions found in nature, alongside the electromagnetic force and the strong and weak nuclear interactions.
- Kepler's laws of planetary motion, formulated by Johannes Kepler: (i) the law of orbits: each planet revolves around the Sun in an elliptical orbit with the Sun at one of the foci of the ellipse; (ii) the law of areas: a planet moves round the Sun in such a way that its areal velocity is constant; (iii) the law of time periods.
- Geostationary satellites: "geo" means Earth and "stationary" means at rest, so a geostationary satellite is one that appears at rest relative to the Earth.

Sample practice questions:

1. The masses of two bodies are doubled and the distance between them is halved. How does the gravitational force between them change?
2. If two masses are equal and made of the same material, how does the force of attraction vary with their separation?
3. In planetary motion: (a) the total angular momentum remains constant; (b) the linear speed remains constant; (c) neither the angular momentum nor the angular speed remains constant; (d) the angular speed …
4. The gravitational potential difference between the surface of a planet and a point 10 m above it is 5 J/kg. If the gravitational field is assumed to be uniform, what is the work done in moving a 2 kg mass from the surface of the planet to a height of 8 m?
5. Given g = 9.8 m/s² at the surface of the Earth, what is the value g′ at a height h = R/2 above the surface?
6. An object weighs 66 kg on the surface of the Earth; what will the same object weigh on the surface of the Moon? (a) 6 kg; (b) 11 kg; (c) 33 kg; (d) 66 kg. (Answer: 11 kg.)

Do you want to know why everything falls back to the Earth? To answer that, one only needs to understand the premises of the universal law of gravitation and how it can be applied.
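As a brief worked illustration of sample questions 1 and 4 above (a note added here, not part of the original page), Newton's law $F = G m_1 m_2 / r^2$ gives the answers directly:

$$F' = \frac{G\,(2m_1)(2m_2)}{(r/2)^2} = 16\,\frac{G m_1 m_2}{r^2} = 16F,$$

so doubling both masses while halving the separation makes the gravitational force 16 times as large. For question 4, a uniform field of strength $5\ \text{J/kg} \div 10\ \text{m} = 0.5\ \text{N/kg}$ gives

$$W = mgh = 2\ \text{kg} \times 0.5\ \text{N/kg} \times 8\ \text{m} = 8\ \text{J}.$$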
|
2021-06-16 11:53:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3447997272014618, "perplexity": 1968.1458496192693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623596.16/warc/CC-MAIN-20210616093937-20210616123937-00577.warc.gz"}
|
https://mathoverflow.net/questions/298257/what-mathematically-speaking-does-it-mean-to-say-that-the-continuation-monad-c
|
# What, mathematically speaking, does it mean to say that the continuation monad can simulate all monads?
In various places it is stated that the continuation monad can simulate all monads in some sense (see for example http://lambda1.jimpryor.net/manipulating_trees_with_monads/). In particular, in http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.43.8213&rep=rep1&type=pdf it is claimed that
any monad whose unit and extension operations are expressible as purely functional terms can be embedded in a call-by-value language with “composable continuations”.
I was wondering what (Category-theoretic) mathematical content these claims of simulation have, and what precisely they show us about monads and categories in mathematical terms. In what ways is the continuation monad special, mathematically, compared to other monads, if at all? (I seem to remember some connection between the Yoneda embedding and continuations which might be relevant https://golem.ph.utexas.edu/category/2008/01/the_continuation_passing_trans.html, although I don't know)
One other relevant fact might be that the continuation monad is the monad which takes individuals $\alpha$ to the principal ultrafilters containing them (that is, it provides the map $\alpha \mapsto (\alpha \rightarrow \omega) \rightarrow \omega$).
Edit: I have been asked to explain what I mean by the continuation monad. Suppose we have a monad mapping types of the simply typed lambda calculus to types of the simply typed lambda calculus. (The relevant types of this calculus are of two kinds: (1) the basic types, i.e. the type $e$ of individuals, belonging to domain $D_e$, and the type of truth values $\{ \top, \bot\}$, belonging to domain $D_t$; and (2) for all types $\alpha, \beta$, the type of functions from objects of type $\alpha$ to objects of type $\beta$, belonging to domain $D_{\beta}^{D_{\alpha}}$.) Let $\alpha, \beta$ denote types and $\rightarrow$ a mapping between types. Let $a : \alpha \hspace{0.2cm}$ (or $b : \beta$) indicate that $a\hspace{0.2cm}$ (or $b$) is an expression of type $\alpha \hspace{0.2cm}$ (or $\beta$). Let $\lambda x. t$ denote a function from objects of the type of the variable $x$ to objects of the type of $t$, as in the simply typed lambda calculus. Then a continuation monad is a structure $\thinspace(\mathbb{M}, \eta, ⋆)\thinspace$, with $\mathbb{M}$ an endofunctor on the category of types of the simply typed lambda calculus, $\eta$ the unit (a natural transformation), and $⋆$ the binary (bind) operation, such that:
$$\mathbb{M} \thinspace α = (α → ω) → ω, \hspace{1cm} ∀α$$ $$η(a) = λc. c(a) : \mathbb{M} \thinspace α \hspace{1cm} ∀a : α$$ $$m ⋆ k = λc. m (λa. k(a)(c)): \mathbb{M}\thinspace β \hspace{1cm} ∀m : \mathbb{M}\thinspace α, k : α → \mathbb{M}\thinspace β .$$
The continuation of an expression $a : \alpha$ is $\eta(a) = \lambda c.\, c(a)$, which is of type $(\alpha \rightarrow \omega) \rightarrow \omega$.
Edit 2: let $\omega$ denote some fixed type, such as the type of truth values (i.e, $\{ \top, \bot\}$)
• This question is possibly more appropriate for cstheory.stackexchange.com Apr 19, 2018 at 12:08
• I asked the question here since I thought mathematicians would be more capable of speaking about the mathematical structures relevant to my question. Apr 19, 2018 at 12:49
• @Gro-Tsen: Questions of the form “What precise mathematical statement lies behind this heuristic statement from computer science [or physics, etc.]?” seem reasonably on topic here — at least, I would expect answers here to be a bit different in outlook/emphasis from answers one would get at TCS. Apr 19, 2018 at 13:17
• "In a category with internal homs $[-,-]$, given an object $S$, the continuation monad is the endofunctor $X\mapsto[[X,S],S]$." ncatlab.org/nlab/show/continuation+monad Apr 19, 2018 at 15:53
• @user65526 That's true, but it might be helpful for mathematicians trying to answer your question (it was helpful for me). Apr 20, 2018 at 9:14
If I understand this paper correctly, the construction in full generality might not really involve the continuation passing monad as such.
It is easy to see that if $M$ is an internal monad and $\omega$ is any $M$-algebra, then there is a natural transformation from $M\alpha$ to $(\alpha → \omega) → \omega$. Indeed, given an element of $(\alpha\to\omega)$, apply $M$ to obtain an element of $(M\alpha\to M\omega)$, plug in $M\alpha$, and compose with the $M\omega\to \omega$ from the algebra structure. (An internal monad is needed for this to make sense on objects, not just algebras).
However, there is no universal choice of an algebra. Let $U$ be the functor from the category of $M$-algebras to the base category. Hence for any $\alpha$ in the base category and $z$ a monad algebra, $\operatorname{Hom}(\alpha, Uz)$ is a set. Assume that the base category has arbitrary products, so that we can internally raise objects to the power of sets.
The first step is to replace $(\alpha → Uz) → Uz$ with $Uz^{ \operatorname{Hom}(\alpha,Uz)}$. This is the same in the category of sets, and regardless is formally similar.
Next we want to consider elements of this that depend consistently on $z$. You might think this is impossible because one dependence on $z$ is covariant and the other is contravariant, but apparently this is the perfect situation to apply the categorical notion of an end, which gives the correct consistency condition.
The end of $Uz^{ \operatorname{Hom}(\alpha,Uz)}$ is naturally isomorphic to $M\alpha$.
So in the category of sets the statement is that any monad is isomorphic to the subset of a product of instances of the continuation passing monad consisting of those elements satisfying a certain compatibility condition.
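For what it is worth, the programming side of the quoted claim can be made concrete with a short sketch (my own illustration, not taken from the question or the linked papers; the names `Rep`, `toRep` and `fromRep` are hypothetical). Any monad `m` whose return and bind are expressible as functional terms can be re-packaged in continuation style by letting the answer type range over the free `m`-algebras `m r`, which is the programming shadow of the end over $Uz^{\operatorname{Hom}(\alpha,Uz)}$ described above (essentially the Codensity construction):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Continuation-style repackaging of an arbitrary monad m: the answer type r
-- is universally quantified, but only answers of the form (m r), i.e. free
-- m-algebras, are allowed.
newtype Rep m a = Rep { runRep :: forall r. (a -> m r) -> m r }

toRep :: Monad m => m a -> Rep m a
toRep x = Rep (\k -> x >>= k)      -- uses the original bind once, under the hood

fromRep :: Monad m => Rep m a -> m a
fromRep (Rep f) = f return         -- fromRep . toRep = id by the monad laws

-- Rep m is a monad using only continuation-style plumbing, matching
-- eta(a) = \c -> c a  and  m * k = \c -> m (\a -> k a c)  from the question.
instance Functor (Rep m) where
  fmap g (Rep f) = Rep (\k -> f (k . g))

instance Applicative (Rep m) where
  pure a          = Rep (\k -> k a)
  Rep f <*> Rep x = Rep (\k -> f (\g -> x (k . g)))

instance Monad (Rep m) where
  Rep m >>= h = Rep (\c -> m (\a -> runRep (h a) c))
```

Results like the one quoted in the question then implement the right-hand side once and for all with composable continuations, so that a single continuation-like monad suffices to express the others; the categorical content, as the answer above indicates, appears to be this end/Codensity formula rather than any special property of a fixed answer type $\omega$.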
• @ Will Sawin Ok, and what about the Yoneda embedding? Does it render the continuation monad somehow more significant mathematically than other monads? Apr 20, 2018 at 23:11
• @user65526 I don't know. I tried to use the Yoneda embedding to embed the category into some category of presheaves and then apply the continuation monad construction to a single universal object but I wasn't able to get it to work. Apr 21, 2018 at 5:31
|
2022-05-22 02:16:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.832280158996582, "perplexity": 280.6591118168918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543264.49/warc/CC-MAIN-20220522001016-20220522031016-00717.warc.gz"}
|
https://www.transum.org/Maths/Exercise/Circle_Equations.asp
|
# Circle Equations
## Recognise and use the equation of a circle with centre at the origin and the equation of a tangent to a circle.
##### Level 1 | Level 2 | Exam-Style | Description | Help | More Graphs
This is level 1: equations of circles.
1) Which of the following is the equation of the circle above? a) $$x^2 + y^2 = 16$$ b) $$x^2 + y^2 = 4$$ c) $$x^2 + y^2 = 8$$
2) The equation of a circle is $$x^2 + y^2 = 64$$. What is the radius of the circle?
3) The equation of a circle is $$x^2 + y^2 = 30.25$$. What is the radius of the circle?
4) Which of the following is the equation of a circle with centre at the origin and a radius of 15 units? a) $$x^2 + y^2 = 225$$ b) $$x^2 + y^2 = 15$$ c) $$x^2 + y^2 = 30$$
5) The equation of a circle is $$4x^2 + 4y^2 = 100$$. What is the radius of the circle?
6) The equation of a circle is $$7x^2 + 7y^2 = 112$$. What is the radius of the circle?
7) Which of the following is the equation of a circle with centre at the origin and a radius of 8 units? a) $$2x^2 + 2y^2 = 64$$ b) $$x^2 + y^2 = 8$$ c) $$2x^2 + 2y^2 = 128$$
8) Which of the following is the equation of a circle with centre at the origin which passes through the point (3,4)? a) $$14x^2 + 14y^2 = 25$$ b) $$7x^2 + 7y^2 = 175$$ c) $$21x^2 + 21y^2 = 25$$
9) Which of the following is the equation of a circle with centre at the origin and a radius of $$2 \sqrt{2}$$ units? a) $$2x^2 + 2y^2 = 4$$ b) $$2x^2 + 2y^2 = 8$$ c) $$x^2 + y^2 = 8$$
10) The equation of a circle is given as $$y^2 = (4+x)(4-x)$$. What is the radius of the circle?
Check
This is Circle Equations level 1. You can also try:
Level 2
## Instructions
Try your best to answer the questions above. Type your answers into the boxes provided leaving no spaces. As you work through the exercise regularly click the "check" button. If you have any wrong answers, do your best to do corrections but if there is anything you don't understand, please ask your teacher for help.
When you have got all of the questions correct you may want to print out this page and paste it into your exercise book. If you keep your work in an ePortfolio you could take a screen shot of your answers and paste that into your Maths file.
## Transum.org
This web site contains over a thousand free mathematical activities for teachers and pupils. Click here to go to the main page which links to all of the resources available.
## More Activities:
Mathematicians are not the people who find Maths easy; they are the people who enjoy how mystifying, puzzling and hard it is. Are you a mathematician?
Comment recorded on the 17 November 'Starter of the Day' page by Amy Thay, Coventry:
"Thank you so much for your wonderful site. I have so much material to use in class and inspire me to try something a little different more often. I am going to show my maths department your website and encourage them to use it too. How lovely that you have compiled such a great resource to help teachers and pupils.
Thanks again"
Comment recorded on the 11 January 'Starter of the Day' page by S Johnson, The King John School:
"We recently had an afternoon on accelerated learning.This linked really well and prompted a discussion about learning styles and short term memory."
#### River Crossing
Three interactive versions of the traditional river crossing puzzles. The objective is to get all of the characters to the other side of the river without breaking any of the rules.
There are answers to this exercise but they are available in this space to teachers, tutors and parents who have logged in to their Transum subscription on this computer.
A Transum subscription unlocks the answers to the online exercises, quizzes and puzzles. It also provides the teacher with access to quality external links on each of the Transum Topic pages and the facility to add to the collection themselves.
Subscribers can manage class lists, lesson plans and assessment data in the Class Admin application and have access to reports of the Transum Trophies earned by class members.
Subscribe
## Go Maths
Learning and understanding Mathematics, at every level, requires learner engagement. Mathematics is not a spectator sport. Sometimes traditional teaching fails to actively involve students. One way to address the problem is through the use of interactive activities and this web site provides many of those. The Go Maths page is an alphabetical list of free activities designed for students in Secondary/High school.
## Maths Map
Are you looking for something specific? An exercise to supplement the topic you are studying at school at the moment perhaps. Navigate using our Maths Map to find exercises, puzzles and Maths lesson starters grouped by topic.
## Teachers
If you found this activity useful don't forget to record it in your scheme of work or learning management system. The short URL, ready to be copied and pasted, is as follows:
Do you have any comments? It is always useful to receive feedback and helps make this free resource even more useful for those learning Mathematics anywhere in the world. Click here to enter your comments.
## Description of Levels
Level 1 - Equations of circles
Level 2 - Equations of tangents to circles
Exam Style questions are in the style of GCSE or IB/A-level exam paper questions and worked solutions are available for Transum subscribers.
Answers to this exercise are available lower down this page when you are logged in to your Transum account. If you don’t yet have a Transum subscription one can be very quickly set up if you are a teacher, tutor or parent.
## Example
The video above is from the wonderful Corbettmaths.
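A worked example in the same spirit as the questions above (an illustrative equation added here; it is deliberately not one of the ten exercise questions, whose answers remain with Transum): a circle centred at the origin with radius $$r$$ has equation $$x^2 + y^2 = r^2$$, and an equation with a common factor is first reduced to that form, e.g.

$$2x^2 + 2y^2 = 50 \iff x^2 + y^2 = 25,$$

so the radius is $$\sqrt{25} = 5$$ units.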
Don't wait until you have finished the exercise before you click on the 'Check' button. Click it often as you work through the questions to see if you are answering them correctly.
|
2018-11-12 20:12:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32351967692375183, "perplexity": 729.8390243639061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741087.23/warc/CC-MAIN-20181112193627-20181112215627-00395.warc.gz"}
|
https://www.tidytextmining.com/topicmodeling.html
|
# 6 Topic modeling
In text mining, we often have collections of documents, such as blog posts or news articles, that we’d like to divide into natural groups so that we can understand them separately. Topic modeling is a method for unsupervised classification of such documents, similar to clustering on numeric data, which finds natural groups of items even when we’re not sure what we’re looking for.
Latent Dirichlet allocation (LDA) is a particularly popular method for fitting a topic model. It treats each document as a mixture of topics, and each topic as a mixture of words. This allows documents to “overlap” each other in terms of content, rather than being separated into discrete groups, in a way that mirrors typical use of natural language.
As Figure 6.1 shows, we can use tidy text principles to approach topic modeling with the same set of tidy tools we’ve used throughout this book. In this chapter, we’ll learn to work with LDA objects from the topicmodels package, particularly tidying such models so that they can be manipulated with ggplot2 and dplyr. We’ll also explore an example of clustering chapters from several books, where we can see that a topic model “learns” to tell the difference between the four books based on the text content.
## 6.1 Latent Dirichlet allocation
Latent Dirichlet allocation is one of the most common algorithms for topic modeling. Without diving into the math behind the model, we can understand it as being guided by two principles.
• Every document is a mixture of topics. We imagine that each document may contain words from several topics in particular proportions. For example, in a two-topic model we could say “Document 1 is 90% topic A and 10% topic B, while Document 2 is 30% topic A and 70% topic B.”
• Every topic is a mixture of words. For example, we could imagine a two-topic model of American news, with one topic for “politics” and one for “entertainment.” The most common words in the politics topic might be “President”, “Congress”, and “government”, while the entertainment topic may be made up of words such as “movies”, “television”, and “actor”. Importantly, words can be shared between topics; a word like “budget” might appear in both equally.
LDA is a mathematical method for estimating both of these at the same time: finding the mixture of words that is associated with each topic, while also determining the mixture of topics that describes each document. There are a number of existing implementations of this algorithm, and we’ll explore one of them in depth.
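The two principles can be written together compactly (a note added here, using the $$\beta$$ and $$\gamma$$ notation introduced later in this chapter): the probability of word $$w$$ occurring in document $$d$$ is a sum over topics of the per-topic word probability weighted by the per-document topic proportion,

$$p(w \mid d) = \sum_{k} \beta_{k,w}\,\gamma_{d,k},$$

where $$\beta_{k,w}$$ is the probability of word $$w$$ within topic $$k$$ and $$\gamma_{d,k}$$ is the proportion of document $$d$$ attributed to topic $$k$$. LDA estimates both sets of quantities at the same time.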
In Chapter 5 we briefly introduced the AssociatedPress dataset provided by the topicmodels package, as an example of a DocumentTermMatrix. This is a collection of 2246 news articles from an American news agency, mostly published around 1988.
library(topicmodels)
data("AssociatedPress")
AssociatedPress
## <<DocumentTermMatrix (documents: 2246, terms: 10473)>>
## Non-/sparse entries: 302031/23220327
## Sparsity : 99%
## Maximal term length: 18
## Weighting : term frequency (tf)
We can use the LDA() function from the topicmodels package, setting k = 2, to create a two-topic LDA model.
Almost any topic model in practice will use a larger k, but we will soon see that this analysis approach extends to a larger number of topics.
This function returns an object containing the full details of the model fit, such as how words are associated with topics and how topics are associated with documents.
# set a seed so that the output of the model is predictable
ap_lda <- LDA(AssociatedPress, k = 2, control = list(seed = 1234))
ap_lda
## A LDA_VEM topic model with 2 topics.
Fitting the model was the “easy part”: the rest of the analysis will involve exploring and interpreting the model using tidying functions from the tidytext package.
### 6.1.1 Word-topic probabilities
In Chapter 5 we introduced the tidy() method, originally from the broom package (Robinson 2017), for tidying model objects. The tidytext package provides this method for extracting the per-topic-per-word probabilities, called $$\beta$$ (“beta”), from the model.
library(tidytext)
ap_topics <- tidy(ap_lda, matrix = "beta")
ap_topics
## # A tibble: 20,946 x 3
## topic term beta
## <int> <chr> <dbl>
## 1 1 aaron 1.69e-12
## 2 2 aaron 3.90e- 5
## 3 1 abandon 2.65e- 5
## 4 2 abandon 3.99e- 5
## 5 1 abandoned 1.39e- 4
## 6 2 abandoned 5.88e- 5
## 7 1 abandoning 2.45e-33
## 8 2 abandoning 2.34e- 5
## 9 1 abbott 2.13e- 6
## 10 2 abbott 2.97e- 5
## # ... with 20,936 more rows
Notice that this has turned the model into a one-topic-per-term-per-row format. For each combination, the model computes the probability of that term being generated from that topic. For example, the term “aaron” has a $$1.686917\times 10^{-12}$$ probability of being generated from topic 1, but a $$3.8959408\times 10^{-5}$$ probability of being generated from topic 2.
We could use dplyr’s top_n() to find the 10 terms that are most common within each topic. As a tidy data frame, this lends itself well to a ggplot2 visualization (Figure 6.2).
library(ggplot2)
library(dplyr)
ap_top_terms <- ap_topics %>%
group_by(topic) %>%
top_n(10, beta) %>%
ungroup() %>%
arrange(topic, -beta)
ap_top_terms %>%
mutate(term = reorder(term, beta)) %>%
ggplot(aes(term, beta, fill = factor(topic))) +
geom_col(show.legend = FALSE) +
facet_wrap(~ topic, scales = "free") +
coord_flip()
This visualization lets us understand the two topics that were extracted from the articles. The most common words in topic 1 include “percent”, “million”, “billion”, and “company”, which suggests it may represent business or financial news. Those most common in topic 2 include “president”, “government”, and “soviet”, suggesting that this topic represents political news. One important observation about the words in each topic is that some words, such as “new” and “people”, are common within both topics. This is an advantage of topic modeling as opposed to “hard clustering” methods: topics used in natural language could have some overlap in terms of words.
As an alternative, we could consider the terms that had the greatest difference in $$\beta$$ between topic 1 and topic 2. This can be estimated based on the log ratio of the two: $$\log_2(\frac{\beta_2}{\beta_1})$$ (a log ratio is useful because it makes the difference symmetrical: $$\beta_2$$ being twice as large leads to a log ratio of 1, while $$\beta_1$$ being twice as large results in -1). To constrain it to a set of especially relevant words, we can filter for relatively common words, such as those that have a $$\beta$$ greater than 1/1000 in at least one topic.
library(tidyr)
beta_spread <- ap_topics %>%
mutate(topic = paste0("topic", topic)) %>%
spread(topic, beta) %>%
filter(topic1 > .001 | topic2 > .001) %>%
mutate(log_ratio = log2(topic2 / topic1))
beta_spread
## # A tibble: 198 x 4
## term topic1 topic2 log_ratio
## <chr> <dbl> <dbl> <dbl>
## 1 administration 0.000431 0.00138 1.68
## 2 ago 0.00107 0.000842 -0.339
## 3 agreement 0.000671 0.00104 0.630
## 4 aid 0.0000476 0.00105 4.46
## 5 air 0.00214 0.000297 -2.85
## 6 american 0.00203 0.00168 -0.270
## 7 analysts 0.00109 0.000000578 -10.9
## 8 area 0.00137 0.000231 -2.57
## 9 army 0.000262 0.00105 2.00
## 10 asked 0.000189 0.00156 3.05
## # ... with 188 more rows
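As a quick check on the log_ratio column (a note added here, using the first row of the output above):

$$\log_2\left(\frac{0.00138}{0.000431}\right) \approx \log_2(3.2) \approx 1.68,$$

which matches the value shown for “administration”: that word is a little over three times as likely under topic 2 as under topic 1.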
The words with the greatest differences between the two topics are visualized in Figure 6.3.
We can see that the words more common in topic 2 include political parties such as “democratic” and “republican”, as well as politician’s names such as “dukakis” and “gorbachev”. Topic 1 was more characterized by currencies like “yen” and “dollar”, as well as financial terms such as “index”, “prices” and “rates”. This helps confirm that the two topics the algorithm identified were political and financial news.
### 6.1.2 Document-topic probabilities
Besides estimating each topic as a mixture of words, LDA also models each document as a mixture of topics. We can examine the per-document-per-topic probabilities, called $$\gamma$$ (“gamma”), with the matrix = "gamma" argument to tidy().
ap_documents <- tidy(ap_lda, matrix = "gamma")
ap_documents
## # A tibble: 4,492 x 3
## document topic gamma
## <int> <int> <dbl>
## 1 1 1 0.248
## 2 2 1 0.362
## 3 3 1 0.527
## 4 4 1 0.357
## 5 5 1 0.181
## 6 6 1 0.000588
## 7 7 1 0.773
## 8 8 1 0.00445
## 9 9 1 0.967
## 10 10 1 0.147
## # ... with 4,482 more rows
Each of these values is an estimated proportion of words from that document that are generated from that topic. For example, the model estimates that only about 24.8% of the words in document 1 were generated from topic 1.
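Because a document's topic proportions sum to one, the remaining share can be read off immediately (a note added here): for a two-topic model,

$$\gamma_{\text{document 1, topic 2}} = 1 - \gamma_{\text{document 1, topic 1}} \approx 1 - 0.248 = 0.752,$$

so roughly 75% of the words in document 1 are estimated to come from topic 2.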
We can see that many of these documents were drawn from a mix of the two topics, but that document 6 was drawn almost entirely from topic 2, having a $$\gamma$$ from topic 1 close to zero. To check this answer, we could tidy() the document-term matrix (see Chapter 5.1) and check what the most common words in that document were.
tidy(AssociatedPress) %>%
filter(document == 6) %>%
arrange(desc(count))
## # A tibble: 287 x 3
## document term count
## <int> <chr> <dbl>
## 1 6 noriega 16
## 2 6 panama 12
## 3 6 jackson 6
## 4 6 powell 6
## 6 6 economic 5
## 7 6 general 5
## 8 6 i 5
## 9 6 panamanian 5
## 10 6 american 4
## # ... with 277 more rows
Based on the most common words, this appears to be an article about the relationship between the American government and Panamanian dictator Manuel Noriega, which means the algorithm was right to place it in topic 2 (as political/national news).
## 6.2 Example: the great library heist
When examining a statistical method, it can be useful to try it on a very simple case where you know the “right answer”. For example, we could collect a set of documents that definitely relate to four separate topics, then perform topic modeling to see whether the algorithm can correctly distinguish the four groups. This lets us double-check that the method is useful, and gain a sense of how and when it can go wrong. We’ll try this with some data from classic literature.
Suppose a vandal has broken into your study and torn apart four of your books:
• Great Expectations by Charles Dickens
• The War of the Worlds by H.G. Wells
• Twenty Thousand Leagues Under the Sea by Jules Verne
• Pride and Prejudice by Jane Austen
This vandal has torn the books into individual chapters, and left them in one large pile. How can we restore these disorganized chapters to their original books? This is a challenging problem since the individual chapters are unlabeled: we don’t know what words might distinguish them into groups. We’ll thus use topic modeling to discover how chapters cluster into distinct topics, each of them (presumably) representing one of the books.
We’ll retrieve the text of these four books using the gutenbergr package introduced in Chapter 3.
titles <- c("Twenty Thousand Leagues under the Sea", "The War of the Worlds",
"Pride and Prejudice", "Great Expectations")
library(gutenbergr)
books <- gutenberg_works(title %in% titles) %>%
gutenberg_download(meta_fields = "title")
As pre-processing, we divide these into chapters, use tidytext’s unnest_tokens() to separate them into words, then remove stop_words. We’re treating every chapter as a separate “document”, each with a name like Great Expectations_1 or Pride and Prejudice_11. (In other applications, each document might be one newspaper article, or one blog post).
library(stringr)
# divide into documents, each representing one chapter
by_chapter <- books %>%
group_by(title) %>%
mutate(chapter = cumsum(str_detect(text, regex("^chapter ", ignore_case = TRUE)))) %>%
ungroup() %>%
filter(chapter > 0) %>%
unite(document, title, chapter)
# split into words
by_chapter_word <- by_chapter %>%
unnest_tokens(word, text)
# find document-word counts
word_counts <- by_chapter_word %>%
anti_join(stop_words) %>%
count(document, word, sort = TRUE) %>%
ungroup()
word_counts
## # A tibble: 104,721 x 3
## document word n
## <chr> <chr> <int>
## 1 Great Expectations_57 joe 88
## 2 Great Expectations_7 joe 70
## 3 Great Expectations_17 biddy 63
## 4 Great Expectations_27 joe 58
## 5 Great Expectations_38 estella 58
## 6 Great Expectations_2 joe 56
## 7 Great Expectations_23 pocket 53
## 8 Great Expectations_15 joe 50
## 9 Great Expectations_18 joe 50
## 10 The War of the Worlds_16 brother 50
## # ... with 104,711 more rows
### 6.2.1 LDA on chapters
Right now our data frame word_counts is in a tidy form, with one-term-per-document-per-row, but the topicmodels package requires a DocumentTermMatrix. As described in Chapter 5.2, we can cast a one-token-per-row table into a DocumentTermMatrix with tidytext’s cast_dtm().
chapters_dtm <- word_counts %>%
cast_dtm(document, word, n)
chapters_dtm
## <<DocumentTermMatrix (documents: 193, terms: 18215)>>
## Non-/sparse entries: 104721/3410774
## Sparsity : 97%
## Maximal term length: 19
## Weighting : term frequency (tf)
We can then use the LDA() function to create a four-topic model. In this case we know we’re looking for four topics because there are four books; in other problems we may need to try a few different values of k.
chapters_lda <- LDA(chapters_dtm, k = 4, control = list(seed = 1234))
chapters_lda
## A LDA_VEM topic model with 4 topics.
Much as we did on the Associated Press data, we can examine per-topic-per-word probabilities.
chapter_topics <- tidy(chapters_lda, matrix = "beta")
chapter_topics
## # A tibble: 72,860 x 3
## topic term beta
## <int> <chr> <dbl>
## 1 1 joe 5.83e-17
## 2 2 joe 3.19e-57
## 3 3 joe 4.16e-24
## 4 4 joe 1.45e- 2
## 5 1 biddy 7.85e-27
## 6 2 biddy 4.67e-69
## 7 3 biddy 2.26e-46
## 8 4 biddy 4.77e- 3
## 9 1 estella 3.83e- 6
## 10 2 estella 5.32e-65
## # ... with 72,850 more rows
Notice that this has turned the model into a one-topic-per-term-per-row format. For each combination, the model computes the probability of that term being generated from that topic. For example, the term “joe” has an almost zero probability of being generated from topics 1, 2, or 3, but it makes up 1.45% of topic 4.
We could use dplyr’s top_n() to find the top 5 terms within each topic.
top_terms <- chapter_topics %>%
group_by(topic) %>%
top_n(5, beta) %>%
ungroup() %>%
arrange(topic, -beta)
top_terms
## # A tibble: 20 x 3
## topic term beta
## <int> <chr> <dbl>
## 1 1 elizabeth 0.0141
## 2 1 darcy 0.00881
## 3 1 miss 0.00871
## 4 1 bennet 0.00695
## 5 1 jane 0.00650
## 6 2 captain 0.0155
## 7 2 nautilus 0.0131
## 8 2 sea 0.00885
## 9 2 nemo 0.00871
## 10 2 ned 0.00803
## 11 3 people 0.00680
## 12 3 martians 0.00651
## 13 3 time 0.00535
## 14 3 black 0.00528
## 15 3 night 0.00448
## 16 4 joe 0.0145
## 17 4 time 0.00685
## 18 4 pip 0.00682
## 19 4 looked 0.00637
## 20 4 miss 0.00623
This tidy output lends itself well to a ggplot2 visualization (Figure 6.4).
library(ggplot2)
top_terms %>%
mutate(term = reorder(term, beta)) %>%
ggplot(aes(term, beta, fill = factor(topic))) +
geom_col(show.legend = FALSE) +
facet_wrap(~ topic, scales = "free") +
coord_flip()
These topics are pretty clearly associated with the four books! There’s no question that the topic of “captain”, “nautilus”, “sea”, and “nemo” belongs to Twenty Thousand Leagues Under the Sea, and that “jane”, “darcy”, and “elizabeth” belongs to Pride and Prejudice. We see “pip” and “joe” from Great Expectations and “martians”, “black”, and “night” from The War of the Worlds. We also notice that, in line with LDA being a “fuzzy clustering” method, there can be words in common between multiple topics, such as “miss” in topics 1 and 4, and “time” in topics 3 and 4.
### 6.2.2 Per-document classification
Each document in this analysis represented a single chapter. Thus, we may want to know which topics are associated with each document. Can we put the chapters back together in the correct books? We can find this by examining the per-document-per-topic probabilities, $$\gamma$$ (“gamma”).
chapters_gamma <- tidy(chapters_lda, matrix = "gamma")
chapters_gamma
## # A tibble: 772 x 3
## document topic gamma
## <chr> <int> <dbl>
## 1 Great Expectations_57 1 0.0000135
## 2 Great Expectations_7 1 0.0000147
## 3 Great Expectations_17 1 0.0000212
## 4 Great Expectations_27 1 0.0000192
## 5 Great Expectations_38 1 0.354
## 6 Great Expectations_2 1 0.0000172
## 7 Great Expectations_23 1 0.551
## 8 Great Expectations_15 1 0.0168
## 9 Great Expectations_18 1 0.0000127
## 10 The War of the Worlds_16 1 0.0000108
## # ... with 762 more rows
Each of these values is an estimated proportion of words from that document that are generated from that topic. For example, the model estimates that each word in the Great Expectations_57 document has only a 0.00135% probability of coming from topic 1 (Pride and Prejudice).
Now that we have these topic probabilities, we can see how well our unsupervised learning did at distinguishing the four books. We’d expect that chapters within a book would be found to be mostly (or entirely), generated from the corresponding topic.
First we re-separate the document name into title and chapter, after which we can visualize the per-document-per-topic probability for each (Figure 6.5).
chapters_gamma <- chapters_gamma %>%
separate(document, c("title", "chapter"), sep = "_", convert = TRUE)
chapters_gamma
## # A tibble: 772 x 4
## title chapter topic gamma
## <chr> <int> <int> <dbl>
## 1 Great Expectations 57 1 0.0000135
## 2 Great Expectations 7 1 0.0000147
## 3 Great Expectations 17 1 0.0000212
## 4 Great Expectations 27 1 0.0000192
## 5 Great Expectations 38 1 0.354
## 6 Great Expectations 2 1 0.0000172
## 7 Great Expectations 23 1 0.551
## 8 Great Expectations 15 1 0.0168
## 9 Great Expectations 18 1 0.0000127
## 10 The War of the Worlds 16 1 0.0000108
## # ... with 762 more rows
# reorder titles in order of topic 1, topic 2, etc before plotting
chapters_gamma %>%
mutate(title = reorder(title, gamma * topic)) %>%
ggplot(aes(factor(topic), gamma)) +
geom_boxplot() +
facet_wrap(~ title)
We notice that almost all of the chapters from Pride and Prejudice, War of the Worlds, and Twenty Thousand Leagues Under the Sea were uniquely identified as a single topic each.
It does look like some chapters from Great Expectations (which should be topic 4) were somewhat associated with other topics. Are there any cases where the topic most associated with a chapter belonged to another book? First we’d find the topic that was most associated with each chapter using top_n(), which is effectively the “classification” of that chapter.
chapter_classifications <- chapters_gamma %>%
group_by(title, chapter) %>%
top_n(1, gamma) %>%
ungroup()
chapter_classifications
## # A tibble: 193 x 4
## title chapter topic gamma
## <chr> <int> <int> <dbl>
## 1 Great Expectations 23 1 0.551
## 2 Pride and Prejudice 43 1 1.000
## 3 Pride and Prejudice 18 1 1.000
## 4 Pride and Prejudice 45 1 1.000
## 5 Pride and Prejudice 16 1 1.000
## 6 Pride and Prejudice 29 1 1.000
## 7 Pride and Prejudice 10 1 1.000
## 8 Pride and Prejudice 8 1 1.000
## 9 Pride and Prejudice 56 1 1.000
## 10 Pride and Prejudice 47 1 1.000
## # ... with 183 more rows
We can then compare each to the “consensus” topic for each book (the most common topic among its chapters), and see which were most often misidentified.
book_topics <- chapter_classifications %>%
count(title, topic) %>%
group_by(title) %>%
top_n(1, n) %>%
ungroup() %>%
transmute(consensus = title, topic)
chapter_classifications %>%
inner_join(book_topics, by = "topic") %>%
filter(title != consensus)
## # A tibble: 2 x 5
## title chapter topic gamma consensus
## <chr> <int> <int> <dbl> <chr>
## 1 Great Expectations 23 1 0.551 Pride and Prejudice
## 2 Great Expectations 54 3 0.480 The War of the Worlds
We see that only two chapters from Great Expectations were misclassified, as LDA described one as coming from the “Pride and Prejudice” topic (topic 1) and one from The War of the Worlds (topic 3). That’s not bad for unsupervised clustering!
### 6.2.3 By word assignments: augment
One step of the LDA algorithm is assigning each word in each document to a topic. The more words in a document are assigned to that topic, generally, the more weight (gamma) will go on that document-topic classification.
We may want to take the original document-word pairs and find which words in each document were assigned to which topic. This is the job of the augment() function, which also originated in the broom package as a way of tidying model output. While tidy() retrieves the statistical components of the model, augment() uses a model to add information to each observation in the original data.
assignments <- augment(chapters_lda, data = chapters_dtm)
assignments
## # A tibble: 104,721 x 4
## document term count .topic
## <chr> <chr> <dbl> <dbl>
## 1 Great Expectations_57 joe 88 4
## 2 Great Expectations_7 joe 70 4
## 3 Great Expectations_17 joe 5 4
## 4 Great Expectations_27 joe 58 4
## 5 Great Expectations_2 joe 56 4
## 6 Great Expectations_23 joe 1 4
## 7 Great Expectations_15 joe 50 4
## 8 Great Expectations_18 joe 50 4
## 9 Great Expectations_9 joe 44 4
## 10 Great Expectations_13 joe 40 4
## # ... with 104,711 more rows
This returns a tidy data frame of book-term counts, but adds an extra column: .topic, with the topic each term was assigned to within each document. (Extra columns added by augment always start with ., to prevent overwriting existing columns). We can combine this assignments table with the consensus book titles to find which words were incorrectly classified.
assignments <- assignments %>%
separate(document, c("title", "chapter"), sep = "_", convert = TRUE) %>%
inner_join(book_topics, by = c(".topic" = "topic"))
assignments
## # A tibble: 104,721 x 6
## title chapter term count .topic consensus
## <chr> <int> <chr> <dbl> <dbl> <chr>
## 1 Great Expectations 57 joe 88 4 Great Expectations
## 2 Great Expectations 7 joe 70 4 Great Expectations
## 3 Great Expectations 17 joe 5 4 Great Expectations
## 4 Great Expectations 27 joe 58 4 Great Expectations
## 5 Great Expectations 2 joe 56 4 Great Expectations
## 6 Great Expectations 23 joe 1 4 Great Expectations
## 7 Great Expectations 15 joe 50 4 Great Expectations
## 8 Great Expectations 18 joe 50 4 Great Expectations
## 9 Great Expectations 9 joe 44 4 Great Expectations
## 10 Great Expectations 13 joe 40 4 Great Expectations
## # ... with 104,711 more rows
This combination of the true book (title) and the book assigned to it (consensus) is useful for further exploration. We can, for example, visualize a confusion matrix, showing how often words from one book were assigned to another, using dplyr’s count() and ggplot2’s geom_tile() (Figure 6.6).
assignments %>%
count(title, consensus, wt = count) %>%
group_by(title) %>%
mutate(percent = n / sum(n)) %>%
ggplot(aes(consensus, title, fill = percent)) +
geom_tile() +
scale_fill_gradient2(high = "red", label = scales::percent_format()) +
theme_minimal() +
theme(axis.text.x = element_text(angle = 90, hjust = 1),
panel.grid = element_blank()) +
labs(x = "Book words were assigned to",
y = "Book words came from",
fill = "% of assignments")
We notice that almost all the words for Pride and Prejudice, Twenty Thousand Leagues Under the Sea, and War of the Worlds were correctly assigned, while Great Expectations had a fair number of misassigned words (which, as we saw above, led to two chapters getting misclassified).
What were the most commonly mistaken words?
wrong_words <- assignments %>%
filter(title != consensus)
wrong_words
## # A tibble: 4,535 x 6
## title chapter term count .topic
## <chr> <int> <chr> <dbl> <dbl>
## 1 Great Expectations 38 brother 2 1
## 2 Great Expectations 22 brother 4 1
## 3 Great Expectations 23 miss 2 1
## 4 Great Expectations 22 miss 23 1
## 5 Twenty Thousand Leagues under the Sea 8 miss 1 1
## 6 Great Expectations 31 miss 1 1
## 7 Great Expectations 5 sergeant 37 1
## 8 Great Expectations 46 captain 1 2
## 9 Great Expectations 32 captain 1 2
## 10 The War of the Worlds 17 captain 5 2
## consensus
## <chr>
## 1 Pride and Prejudice
## 2 Pride and Prejudice
## 3 Pride and Prejudice
## 4 Pride and Prejudice
## 5 Pride and Prejudice
## 6 Pride and Prejudice
## 7 Pride and Prejudice
## 8 Twenty Thousand Leagues under the Sea
## 9 Twenty Thousand Leagues under the Sea
## 10 Twenty Thousand Leagues under the Sea
## # ... with 4,525 more rows
wrong_words %>%
count(title, consensus, term, wt = count) %>%
ungroup() %>%
arrange(desc(n))
## # A tibble: 3,500 x 4
## title consensus term n
## <chr> <chr> <chr> <dbl>
## 1 Great Expectations Pride and Prejudice love 44
## 2 Great Expectations Pride and Prejudice sergeant 37
## 3 Great Expectations Pride and Prejudice lady 32
## 4 Great Expectations Pride and Prejudice miss 26
## 5 Great Expectations The War of the Worlds boat 25
## 6 Great Expectations Pride and Prejudice father 19
## 7 Great Expectations The War of the Worlds water 19
## 8 Great Expectations Pride and Prejudice baby 18
## 9 Great Expectations Pride and Prejudice flopson 18
## 10 Great Expectations Pride and Prejudice family 16
## # ... with 3,490 more rows
We can see that a number of words were often assigned to the Pride and Prejudice or War of the Worlds cluster even when they appeared in Great Expectations. For some of these words, such as “love” and “lady”, that’s because they’re more common in Pride and Prejudice (we could confirm that by examining the counts).
On the other hand, there are a few wrongly classified words that never appeared in the novel they were misassigned to. For example, we can confirm “flopson” appears only in Great Expectations, even though it’s assigned to the “Pride and Prejudice” cluster.
word_counts %>%
filter(word == "flopson")
## # A tibble: 3 x 3
## document word n
## <chr> <chr> <int>
## 1 Great Expectations_22 flopson 10
## 2 Great Expectations_23 flopson 7
## 3 Great Expectations_33 flopson 1
The LDA algorithm is stochastic, and it can accidentally land on a topic that spans multiple books.
## 6.3 Alternative LDA implementations
The LDA() function in the topicmodels package is only one implementation of the latent Dirichlet allocation algorithm. For example, the mallet package (Mimno 2013) implements a wrapper around the MALLET Java package for text classification tools, and the tidytext package provides tidiers for this model output as well.
The mallet package takes a somewhat different approach to the input format. For instance, it takes non-tokenized documents and performs the tokenization itself, and requires a separate file of stopwords. This means we have to collapse the text into one string for each document before performing LDA.
library(mallet)
# create a vector with one string per chapter
collapsed <- by_chapter_word %>%
anti_join(stop_words, by = "word") %>%
mutate(word = str_replace(word, "'", "")) %>%
group_by(document) %>%
summarize(text = paste(word, collapse = " "))
# create an empty file of "stopwords"
file.create(empty_file <- tempfile())
docs <- mallet.import(collapsed$document, collapsed$text, empty_file)
mallet_model <- MalletLDA(num.topics = 4)
mallet_model$loadDocuments(docs)
mallet_model$train(100)
Once the model is created, however, we can use the tidy() and augment() functions described in the rest of the chapter in an almost identical way. This includes extracting the probabilities of words within each topic or topics within each document.
# word-topic pairs
tidy(mallet_model)
# document-topic pairs
tidy(mallet_model, matrix = "gamma")
# column needs to be named "term" for "augment"
term_counts <- rename(word_counts, term = word)
augment(mallet_model, term_counts)
We could use ggplot2 to explore and visualize the model in the same way we did the LDA output.
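For instance, a minimal sketch (assuming the mallet tidier returns the same topic, term and beta columns as the LDA tidier; if the column names differ, adjust accordingly):
library(dplyr)
library(ggplot2)
# plot the top ten terms in each topic, as in the earlier LDA exploration
tidy(mallet_model) %>%
  group_by(topic) %>%
  slice_max(beta, n = 10) %>%
  ungroup() %>%
  mutate(term = reorder(term, beta)) %>%
  ggplot(aes(beta, term, fill = factor(topic))) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~ topic, scales = "free_y")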
## 6.4 Summary
This chapter introduces topic modeling for finding clusters of words that characterize a set of documents, and shows how the tidy() verb lets us explore and understand these models using dplyr and ggplot2. This is one of the advantages of the tidy approach to model exploration: the challenges of different output formats are handled by the tidying functions, and we can explore model results using a standard set of tools. In particular, we saw that topic modeling is able to separate and distinguish chapters from four separate books, and explored the limitations of the model by finding words and chapters that it assigned incorrectly.
### References
Robinson, David. 2017. broom: Convert Statistical Analysis Objects into Tidy Data Frames. https://CRAN.R-project.org/package=broom.
Mimno, David. 2013. mallet: A Wrapper Around the Java Machine Learning Tool Mallet. https://cran.r-project.org/package=mallet.
|
2018-10-22 03:52:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17171715199947357, "perplexity": 7950.471662171367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514497.14/warc/CC-MAIN-20181022025852-20181022051352-00111.warc.gz"}
|
https://math.stackexchange.com/questions/2922783/covariant-derivative-acting-on-vector-field
|
# Covariant derivative acting on vector field
I'm following the lecture here, and this question is with respect to the content on the whiteboard at the timestamp provided.
Consider Newtonian spacetime $(\mathcal{M}, \mathcal{O}, \mathcal{A}, \nabla, t)$, where $(\mathcal{M}, \mathcal{O}, \mathcal{A})$ is a smooth 4-dimensional manifold whose charts $(\mathcal{U}, x)$ are of the form $$x^0 : \mathcal{U} \to \mathbb{R}\\ x^1 : \mathcal{U} \to \mathbb{R}\\ \vdots\\ x^3 : \mathcal{U} \to \mathbb{R}$$ where $x^0 = t|_\mathcal{U}$ is the restriction of the absolute time function $t$ to the chart domain $\mathcal{U}$, and $\nabla$ is a prescribed covariant derivative operator with given connection coefficient functions.
I'm trying to re-derive the claim on the right, namely $$0 = \nabla dt.$$ As I understand it, one does so by picking any direction $\frac{\partial}{\partial x^a}$ from a chart-induced basis for $a=0,1, \ldots, 3$, then acting with the prescribed covariant derivative operator along this vector field on the $(0,1)$-tensor field $dt$, which is the differential of the 0th coordinate function: $d(x^0)$. The action of the covariant derivative along a vector field on a $(p, q)$-tensor field yields again a $(p, q)$-tensor field, hence we can look at the resulting $(0,1)$-tensor field, i.e. a covector field, component-wise for components $b=0, \ldots, 3$. Formally, we apply the rules for covariant derivatives $$\left(\nabla_{\frac{\partial}{\partial x^a}} d(x^0)\right)_b = \frac{\partial}{\partial x^a}\left(d(x^0)_b\right) - \Gamma_{b m}^n d(x^0)_n \left(\frac{\partial}{\partial x^a}\right)^m$$ According to the notes on the blackboard, the first term should vanish entirely, while the second one should reduce to $\Gamma_{ba}^0$. I'm having trouble seeing this.
Let's start with the first term: $d(x^0)$ is a covector field, so $d(x^0)_b$ is its $b$-th component function. $\frac{\partial}{\partial x^a}$ is a vector field, which can be applied to a function to yield another function. Alright. Still, I can't see how to proceed from there.
Secondly, I'm stuck with the rightmost term. Somehow, $\left(\frac{\partial}{\partial x^a}\right)^m$ must reduce to $\delta_a^m$ and $d(x^0)_n$ must reduce to $\delta_n^0$, however I don't see how.
Basically, as far as I understand, the whole expression needs to be the zero function in the end.
Can anyone explain where I seem to go wrong? Or help me understand the step missing?
## Edit 1
After the comment by @willie-wong, I think I can answer the first part. First I get the component functions of $dx^0$ by means of $$(df)_j := (df) \left(\frac{\partial}{\partial x^j}\right) = \left(\frac{\partial}{\partial x^j}\right)(f).$$ In the case I presented this results in $$(dx^0)_b = \frac{\partial x^0}{\partial x^b} =\delta_b^0,$$ which gives the collection of components $(1, 0, 0, 0)$, and $$\frac{\partial}{\partial x^a}\delta_b^0 = \partial_a(\delta_b^0 \circ x^{-1}) = 0,$$ since $\delta_b^0$ is constantly $0$ or $1$, depending on $b$, so its derivative with respect to any index $a$ vanishes. Makes sense to me.
## Edit 2
As in edit 1, the term $d(x^0)_n$ becomes $\delta_n^0$ following the same calculation. In regards to the second term, I may interpret $$\left( \frac{\partial}{\partial x^a}\right)^m = \left( \frac{\partial x^m}{\partial x^a}\right),$$ i.e. as the $a$-th basis vector applied to the $m$-th component function. Hence the result doubtlessly is the $\delta_a^m$ function, and hence the second term is nonvanishing only for $n=0$ and $m=a$, resulting in the "time flows uniformly"-equation presented by Prof. Schuller.
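Putting Edits 1 and 2 together (with the sign convention of the formula above), the whole component expression reads $$\left(\nabla_{\frac{\partial}{\partial x^a}} d(x^0)\right)_b = \underbrace{\frac{\partial}{\partial x^a}\,\delta_b^0}_{=0} - \Gamma_{bm}^n\,\delta_n^0\,\delta_a^m = -\Gamma_{ba}^0,$$ so demanding $0 = \nabla dt$ is exactly the condition $\Gamma_{ba}^0 = 0$ on the connection coefficient functions.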
• relative to the coordinates $x^0, \ldots x^3$, the components of the one form $d(x^0)$ are $(1, 0, 0, 0)$. Can you see why? [This answers your first question.] – Willie Wong Sep 19 '18 at 13:38
• for your second question: the vectors $\partial / \partial x^a$ form a basis of the tangent space. Relative to this basis, what is $\partial / \partial x^i$, for any given $i$? – Willie Wong Sep 19 '18 at 13:39
• @willie-wong, I inserted two edits in regards to your comments. Is this what you wanted to point me at? – marc Sep 19 '18 at 15:22
• The coefficients $v^m$ of a vector field $v$ is defined to be such that $$v = \sum v^m (\partial / \partial x^m)$$ So yes, if you take $v(x^k)$ you will get $\sum_{m} v^m (\partial x^k / \partial x^m) = v^k$. And yes, you correctly applied this to compute the $m$th coefficient of $\partial/\partial x^a$. – Willie Wong Sep 19 '18 at 15:50
|
2019-08-18 06:39:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9350860118865967, "perplexity": 211.66716984808514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313715.51/warc/CC-MAIN-20190818062817-20190818084817-00006.warc.gz"}
|
http://alvessurfboards.com/woodseats-road-cbghic/operations-with-complex-numbers-practice-worksheet-c6f64c
|
Write the result in the form $a + bi$.
Complex Number – any number that can be written in the form $a + bi$, where $a$ and $b$ are real numbers. (Note: $a$ and $b$ can both be 0.) The union of the set of all imaginary numbers and the set of all real numbers is the set of complex numbers. Operations of addition and multiplication of complex numbers are commutative, associative and distributive. Addition / Subtraction – combine like terms, i.e. the real parts with the real parts and the imaginary parts with the imaginary parts.
Example 3: Calculate $(2 + 3i) \cdot (4 + 2i)$.
The free printable worksheets (pdf, with answer keys) on this page cover operations with complex numbers: adding, subtracting, multiplying and dividing complex numbers; writing complex numbers in standard form; identifying real and imaginary parts; finding the conjugate; graphing complex numbers; rationalizing imaginary denominators; and finding the absolute value (modulus) and argument. Also included are a 45-question end-of-unit review on adding, subtracting, multiplying, dividing and simplifying complex and imaginary numbers, quadratic equations with complex solutions, and a set of Order of Operations worksheets (including exponents and complex fractions). A really great activity for allowing students to understand the concept of multiplying and dividing complex numbers.
The material is aligned with standards A2.1.1 (define complex numbers and perform basic operations with them), A2.1.2 (demonstrate knowledge of how real and complex numbers are related both arithmetically and graphically) and A2.1.4 (determine rational and complex zeros for quadratic equations).
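The worked solution to Example 3 did not survive extraction; a standard computation (expanding and using $i^2 = -1$) is: $(2 + 3i) \cdot (4 + 2i) = 8 + 4i + 12i + 6i^2 = 8 + 16i - 6 = 2 + 16i$.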
|
2021-05-06 09:28:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35047945380210876, "perplexity": 2312.4170994073856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988753.91/warc/CC-MAIN-20210506083716-20210506113716-00600.warc.gz"}
|
https://www.gurobi.com/documentation/9.0/refman/updatemode.html
|
UpdateMode
Changes the behavior of lazy updates
Type: int
Default value: 1
Minimum value: 0
Maximum value: 1
Determines how newly added variables and linear constraints are handled. The default setting (1) allows you to use new variables and constraints immediately for building or modifying the model. A setting of 0 requires you to call update before these can be used.
Since the vast majority of programs never query Gurobi for details about the optimization models they build, the default setting typically removes the need to call update, or even be aware of the details of our lazy update approach for handling model modifications. However, these details will show through when you try to query modified model information.
In the Gurobi interface, model modifications (bound changes, right-hand side changes, objective changes, etc.) are placed in a queue. These queued modifications are applied to the model at three times: when you call update, when you call optimize, or when you call write to write the model to disk. When you query information about the model, the result will depend on both whether that information was modified and when it was modified. In particular, if the modification is sitting in the queue, you'll get the result from before the modification. Note that this lazy update behavior is independent of the value of the UpdateMode parameter.
The only potential benefit to changing the parameter to 0 is that in unusual cases this setting may allow simplex to make more aggressive use of warm-start information after a model modification.
If you want to change this parameter, you need to set it as soon as you create your Gurobi environment.
Note that you still need to call update to modify an attribute on an SOS constraint, quadratic constraint, or general constraint.
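A short gurobipy sketch of the behavior described above (the model and variable names are illustrative, not taken from this page):
import gurobipy as gp
from gurobipy import GRB

# UpdateMode must be set on the environment before the model is created.
env = gp.Env(empty=True)
env.setParam("UpdateMode", 1)    # the default: new variables/constraints are usable immediately
env.start()

m = gp.Model("lazy_update_demo", env=env)
x = m.addVar(ub=1.0, name="x")   # under UpdateMode=1, x can be used right away
m.setObjective(x, GRB.MAXIMIZE)
m.update()                       # flush the queued additions

x.ub = 2.0                       # this bound change is queued, not yet applied
print(x.UB)                      # prints 1.0: queries see the pre-modification value
m.update()                       # apply the queued modification
print(x.UB)                      # prints 2.0
m.optimize()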
For examples of how to query or modify parameter values from our different APIs, refer to our Parameter Examples.
|
2022-05-20 09:53:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22432510554790497, "perplexity": 1117.8370144910655}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531779.10/warc/CC-MAIN-20220520093441-20220520123441-00273.warc.gz"}
|
https://math.stackexchange.com/questions/3327652/line-graph-doubt
|
Line Graph Doubt
The line graph $$L(G)$$ of a simple graph $$G$$ is defined as follows:
There is exactly one vertex $$v(e)$$ in $$L(G)$$ for each edge $$e$$ in $$G$$.
For any two edges $$e$$ and $$e'$$ in $$G$$, $$L(G)$$ has an edge between $$v(e)$$ and $$v(e')$$, if and only if $$e$$ and $$e'$$ are incident with the same vertex in $$G$$.
Which of the following statements is/are TRUE?
1. The line graph of a cycle is a cycle.
2. The line graph of a clique is a clique.
3. The line graph of a planar graph is planar.
4. The line graph of a tree is a tree.
I have already done the following:
1. The line graph of a cycle is a cycle.
2. [See below.]
3. The line graph of a planar graph is planar. Proof by counter-example: Let $$G$$ have $$5$$ vertices and $$9$$ edges which is a planar graph but $$L(G)$$ isn't a planar graph because then it will have $$25$$ edges; therefore, $$|E|\leq 3\cdot|V|-6$$ is violated.
4. The line graph of a tree is a tree. By counter-example: Try drawing a simple tree which has a root node. The root node has one child $$A$$ and node $$A$$ has two children $$B$$ and $$C$$. Draw its line graph according to given rules in question and you will get a cycle graph of $$3$$ vertices.
My doubt is that I can't figure out 2. The line graph of a clique is a clique. Please help me out here.
• In your example for (R) are you sure $L(G)$ has $25$ edges? I only count $24$ (which is enough). Anyway isn't the star graph $K_{1,5}$ a simpler example? The line graph is $K_5$ which is not planar. As for (Q) what is the line graph of $K_4$? Can't you find two edges in $K_4$ which have no common vertex? – bof Aug 19 '19 at 6:47
• Yes it will have 25 edges using this formula math.stackexchange.com/questions/301490/… – John Lucas Aug 19 '19 at 6:49
• The line graph of a clique will be a clique iff every two edges of the clique are adjacent. Are they? – Matthew Daly Aug 19 '19 at 7:02
• @JohnLucas According to that formula the number of edges in $L(G)$ is $$\frac{4\cdot3}2+\frac{4\cdot3}2+\frac{4\cdot3}2+\frac{3\cdot2}2+\frac{3\cdot2}2=6+6+6+3+3=24.$$ – bof Aug 19 '19 at 7:33
• @JohnLucas There is no such graph. Did you try to draw it? If there are $5$ vertices, each vertex of degree $4$ must be joined to all the other vertices, including your supposed vertex of degree $2$. No, there is (up to isomorphism) only one (simple) graph with $5$ vertices and $9$ edges; it's the graph you get by removing one edge from the clique $K_5$; its degree sequence is $4,4,4,3,3$ (the endpoints of the missing edge lose one degree); and its line graph has $24$ edges. – bof Aug 20 '19 at 9:26
HINT: Let $$G$$ be a clique on the $$n$$ vertices $$\{x_1,x_2,\ldots, x_n\}$$ for $$n \ge 4$$. Then edges $$e_1 = x_1x_2$$ and $$e_2=x_3x_4$$ do not share a vertex. So are $$v(e_1)$$ and $$v(e_2)$$ adjacent to each other in $$L(G)$$?
If you want to look at this another way, $$L(K_n)$$ has $$\frac{n(n-1)}{2}$$ vertices, so in a clique every vertex would need degree $$\frac{n(n-1)}{2} - 1$$. But for each $$e \in K_n$$, the vertex $$v(e)$$ has degree only $$2(n-2)$$ [make sure you see why]. Note that $$\frac{n(n-1)}{2} - 1 > 2(n-2)$$ for all $$n \ge 4$$.
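A small check that is not spelled out in the original answer: the gap between the degree a clique would require and the actual degree is $$\left(\frac{n(n-1)}{2} - 1\right) - 2(n-2) = \frac{n^2 - 5n + 6}{2} = \frac{(n-2)(n-3)}{2},$$ which is positive for every $$n \ge 4$$.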
|
2020-01-29 05:14:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 44, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5247080326080322, "perplexity": 162.84961066745788}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251788528.85/warc/CC-MAIN-20200129041149-20200129071149-00174.warc.gz"}
|
https://stats.stackexchange.com/questions/318088/how-to-avoid-distortion-through-normalization
|
# How to avoid distortion through normalization
I'm training a neural network to predict some numbers that can be positive or negative and the range can vary.
The network will eventually be used to predict if the number will be positive or negative.
I noticed that when I normalize the numbers with scikit-learn's MinMaxScaler, scaler = MinMaxScaler(feature_range=(-1, 1)),
the end result of the training is much worse than if I don't normalize anything.
How can I avoid distortion in predicting positive and negative numbers?
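A hedged illustration (not necessarily the fix for this particular network, just a way to see the distortion): a scaler that re-centers the data moves the sign boundary, while a scale-only transform such as scikit-learn's MaxAbsScaler preserves it.
import numpy as np
from sklearn.preprocessing import MinMaxScaler, MaxAbsScaler

X = np.array([[-10.0], [-1.0], [0.0], [2.0], [50.0]])

# MinMaxScaler shifts as well as scales, so 0 no longer maps to 0 and the
# positive/negative boundary of the feature moves.
print(MinMaxScaler(feature_range=(-1, 1)).fit_transform(X).ravel())

# MaxAbsScaler only divides by the largest absolute value, so every sample
# keeps its sign and 0 stays at 0.
print(MaxAbsScaler().fit_transform(X).ravel())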
|
2020-01-21 17:36:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8001612424850464, "perplexity": 759.5825679261266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250604849.31/warc/CC-MAIN-20200121162615-20200121191615-00059.warc.gz"}
|
https://crazyproject.wordpress.com/2011/06/20/a-fact-about-norms-and-traces-of-algebraic-integers/
|
## A fact about norms and traces of algebraic integers
Let $K$ be an algebraic number field with ring of integers $\mathcal{O}$. Let $\alpha,\beta \in \mathcal{O}$ be nonzero. Prove that $N(\alpha)\mathsf{tr}(\beta/\alpha) \in \mathbb{Z}$.
Recall from Lemma 7.1 that $N(\alpha)$ is a rational integer, so that $N(\alpha)\mathsf{tr}(\beta/\alpha) = \mathsf{tr}(N(\alpha)\beta/\alpha)$. Now $N(\alpha)/\alpha$ is an algebraic integer, so that $N(\alpha)\beta/\alpha$ is an algebraic integer. Thus $\mathsf{tr}(N(\alpha)\beta/\alpha)$ is a rational integer.
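For the step that $N(\alpha)/\alpha$ is an algebraic integer, one standard justification (not spelled out in the original post): writing $\sigma_1 = \mathrm{id}, \sigma_2, \ldots, \sigma_n$ for the embeddings of $K$ into $\mathbb{C}$, we have $N(\alpha) = \prod_{i=1}^n \sigma_i(\alpha)$, so $N(\alpha)/\alpha = \prod_{i=2}^n \sigma_i(\alpha)$ is a product of algebraic integers (each $\sigma_i(\alpha)$ satisfies the same monic integer polynomial as $\alpha$) and is therefore an algebraic integer.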
|
2017-01-22 05:54:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9769685864448547, "perplexity": 48.75960498205671}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00155-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://mathoverflow.net/questions/352001/examples-of-incorrect-arguments-being-fertilizer-for-good-mathematics
|
# Examples of incorrect arguments being fertilizer for good mathematics? [duplicate]
Sometimes (perhaps often?) vague or even outright incorrect arguments can be fruitful and eventually lead to important new ideas and correct arguments.
I'm looking for explicit examples of this phenomenon in mathematics.
Of course, most proof ideas start out vague and eventually crystallize. So I think the more incorrect/vague the original argument or idea, and the more important the final fruit, the better, as long as there is still a pretty direct connection from the vague idea to the final fruit.
Note: Many "paradoxes" sort-of are like this, but I think aren't what I'm looking for. (William Byers's book "How Mathematicians Think" has several examples and lots of discussion of the important role of paradox in mathematical research.) For example, the relationship between Russell's paradox, Godel's Incompleteness Theorem, and the undecidability of the halting problem (Church; Turing). But I think, unless the paradox has some other aspects of the vague-idea-as-fertilizer phenomenon, that I'm not looking for examples of paradoxes, though I am willing to be convinced otherwise.
Edit: It's been suggested that this a duplicate of this other question, but I really think it is not. I am more interested in examples of outright incorrect (or nearly so) original statements that nonetheless lead to fruitful mathematics, whereas the other question seems to essentially be asking about ideas that start out intuitive, non-rigorous, or ill-defined and then are turned into rigorous arguments but along the same intuitive lines. (And, as I said above, I think I agree with one of the answers there that that is simply much of mathematics.) By comparing the answers to the other question to the three great answers already on this question (knot theory rising because Kelvin thought atoms were knotted strings; Lame's erroneous proof of FLT leading Kummer to develop algebraic integers; Lebesgue's incorrect proof that projections of Borel sets are Borel leading to Suslin's development of analytic sets), one can get a sense of the difference.
• I would say that physics (mathematical and theoretical) is full of such examples. Just for starters, Heisenberg, Dirac and Schrödinger on wave mechanics fertilising von Neumann on the spectral theory of unbounded self-adjoint operators on Hilbert space (and much more). And of course, the prehistory of distributions (e.g., Heaviside and Dirac, again) paving the way for Sobolev, Schwartz and many others. Feb 5 '20 at 19:41
• @LSpice: What if I just remove "BS" from the title & question body? Feb 5 '20 at 19:55
• Since my initials are BS, perhaps I have some personal examples.... (if you remove the parenthetical interpretation). Feb 5 '20 at 20:27
• I think many such examples are things which didn't initially have a proper formalization but gave right answers that eventually found a formalization. Like Leibniz didn't seem to have a proper formalization of infinitesimals but reasoned with them anyway and produced valid math. Then Abraham Robinson gave a formalization of them that led to good mathematics. On the other hand, Weierstrass's approach via epsilon-delta also captured what infinitesimal did non rigorously in a rigorous way. Feb 5 '20 at 20:35
• My understanding is the the Italian school of algebraic geometry made effective use of generic points without precisely making sense of them, but sense of course was made later on in the more modern approaches of Weil, Zariski and eventually Grothendieck. Feb 5 '20 at 20:38
In 1905 Lebesgue "proved" the incorrect statement that the projection of a planar Borel set onto a line is Borel. Years later, Suslin found the mistake in Lebesgue's paper and constructed a Borel set whose projection is not Borel. This led to the important theory of Suslin sets, aka analytic sets, which are projections of Borel sets. Such sets are not necessarily Borel, but they are Lebesgue measurable.
• I wrote my undergraduate thesis on this. :-) Feb 6 '20 at 4:15
Kummer developed the theory of algebraic integers in an attempt to save a flawed proof of Fermat's last theorem by Lamé, as explained here:
• Is this true? I thought that Kummer was independently studying factorizations of algebraic integers and so when he saw Lame's argument, he spotted the issue right away. Feb 6 '20 at 15:50
Knot theory became a much more actively researched (and legitimate?) area of mathematics because physicists (notably Lord Kelvin) thought that atoms were knots in the aether. Of course that idea has since been thoroughly disproven. From AMS.org (http://www.ams.org/publicoutreach/feature-column/fcarc-knots-dna):
The study of knots began in earnest in the 1860's when William Thompson (Lord Kelvin) proposed his vortex model of the atom. Simply said, this theory postulated that atoms were formed by knots in the ether and that different chemical elements were formed by different knots.
Obviously König's theorem should appear on this page. König suggested a proof that the real numbers cannot be well-ordered. Unfortunately, he misunderstood some of the work he relied on, and thence we have this wonderful theorem known as König's theorem or the Zermelo–König theorem:
If $$I$$ is any set, and for each $$i\in I$$, $$|A_i|<|B_i|$$, then $$\left|\bigcup_{i\in I}A_i\right|<\left|\prod_{i\in I}B_i\right|$$.
Another example: S. Smale wrote a paper with a conjecture that ruled out the phenomenon of chaos in dynamical systems (i.e., claiming that chaos does not exist in dynamical systems at all). But a counterexample from a colleague led him to discover the 'horseshoe', an important geometrical object which is now understood to be the hallmark of chaos, and which has led to much greater understanding of chaotic phenomena.
The whole story is here, by Smale himself: 'Finding a horseshoe on the beaches of Rio': http://www.cityu.edu.hk/ma/doc/people/smales/pap107.pdf
• That sounds more like an incorrect conjecture, of which there are surely too many examples to enumerate. Feb 6 '20 at 14:48
|
2021-10-17 06:08:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7697583436965942, "perplexity": 791.8936346952057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585121.30/warc/CC-MAIN-20211017052025-20211017082025-00048.warc.gz"}
|
https://www.bbc.co.uk/bitesize/topics/zs7mn39/articles/zhd447h
|
# What are the parts of a circle?
All circles have a circumference, diameter and radius. They can be measured using a ruler or tape measure.
• The circumference is the distance all the way around a circle.
• The diameter is the distance right across the middle of the circle.
• The radius is the distance halfway across the circle. The radius is always half the length of the diameter.
|
2022-07-01 17:11:54
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9338328242301941, "perplexity": 385.4871647782025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103943339.53/warc/CC-MAIN-20220701155803-20220701185803-00285.warc.gz"}
|
https://brilliant.org/problems/do-it-for-your-people/
|
# Do it for your people
Discrete Mathematics Level 3
Mr Pneumonia and Mr Malaria are best friends. They want to find out the probability that the two friends have their birthdays on different days. Your answer is of the form $$\dfrac ab$$, where $$a$$ and $$b$$ are positive coprime integers. Enter your answer as $$b-a$$.
NOTE: They do not have any knowledge of leap years.
|
2016-10-27 09:07:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48884686827659607, "perplexity": 451.02716478343393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721174.97/warc/CC-MAIN-20161020183841-00432-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://civil.gateoverflow.in/608/gate2015-1-10
|
Two triangular wedges are glued together as shown in the following figure. The stress acting normal to the interface, $\sigma_n$ is _______ MPa.
|
2020-09-27 23:38:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.788975179195404, "perplexity": 282.20174767846765}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401582033.88/warc/CC-MAIN-20200927215009-20200928005009-00086.warc.gz"}
|
https://forum.qt.io/topic/96378/how-to-consider-only-half-of-the-image-element
|
# How to consider only half of the image element?
• Hi, I have the following QML code with pole, wheel and root elements. I want to add half of the pole to the right hand side of the wheel. How can I break the pole element into half and keep it in the animation? It would be nice if someone could guide me; I am a beginner.
``````
import QtQuick 2.6
import QtQuick.Window 2.2

Window {
    visible: true
    width: root.width
    height: root.height
    property real halfPoleHeight: pole.height/2

    MouseArea {
        anchors.fill: parent
        onClicked: wheel.rotation += 360
    }

    Image {
        id: root
        source: "images/background.png"

        Image {
            id: pole
            //anchors.centerIn: parent
            anchors.horizontalCenter: parent.horizontalCenter
            anchors.bottom: parent.bottom
            source: "images/pole.png"
        }

        Image {
            id: wheel
            Behavior on rotation {
                NumberAnimation {
                    duration: 250
                }
            }
            anchors.centerIn: parent
            source: "images/pinwheel.png"
        }
    }
}
``````
• @JennyAug13
You could set `anchors.horizontalCenterOffset` on the `pole` image to slightly offset its position (a sketch of this is included below, after the replies).
P.S. No need to embed the `pole` and `wheel` images inside the `root` image. Just declare them alongside each other.
• @Diracsbracket Yeah, I tried it; I can have two poles now by using anchors.horizontalCenterOffset on the pole image to slightly offset its position. But my question is: if I want to break the pole into two poles, or reduce its length, can I use the radius: property for my pole?
• @JennyAug13
Why don't you draw the pole yourself rather than use an off-centered .png image with non-symmetric left and right transparent margins?
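A minimal sketch of the `horizontalCenterOffset` suggestion above (the offset of half the pole's width is an assumption about what "half of the pole" should look like, not something from the original thread):
``````
Image {
    id: pole
    anchors.horizontalCenter: parent.horizontalCenter
    // Hypothetical offset: shift the pole right by half its own width so that
    // only one half of it sticks out next to the wheel. Adjust as needed.
    anchors.horizontalCenterOffset: pole.width / 2
    anchors.bottom: parent.bottom
    source: "images/pole.png"
}
``````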
|
2021-01-16 22:14:52
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8834048509597778, "perplexity": 5112.888179166823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703507045.10/warc/CC-MAIN-20210116195918-20210116225918-00771.warc.gz"}
|
https://math.stackexchange.com/questions/2693475/proving-l-choose-nnl-dn-choose-d0-for-n-geq-6-d-geq-3-ldn-1
|
# Proving ${l\choose n}+nl-{d+n\choose d}<0$ for $n\geq 6$, $d\geq 3$, $l<d+n-1$
Let $n\geq 6$ and $d\geq 3$. Prove that if $l<d+n-1$, then $${l\choose n}+nl-{d+n\choose d}<0.$$ This is a condition I need in my paper. Numerical experiments suggest that it is true. I tried expanding, but could not find a way to show that the expression is $<0$. Thanks for any ideas.
Because$$\binom{l}{n} < \binom{d + n - 1}{n}, \quad nl < n(d + n - 1),$$ it suffices to prove that$$\binom{d + n}{d} \geqslant \binom{d + n - 1}{n} + n(d + n - 1).$$
Since\begin{align*} &\mathrel{\phantom{=}}{} \binom{d + n}{d} - \binom{d + n - 1}{n} = \binom{d + n}{d} - \binom{d + n - 1}{d - 1}\\ &= \binom{d + n - 1}{d} = \frac{1}{d!} (d + n - 1) \cdots (n + 1)n\\ &= \frac{1}{d(d - 1)}·\binom{d + n - 2}{d - 2}·n(d + n - 1), \end{align*} then it suffices to prove that$$\binom{d + n - 2}{d - 2} \geqslant d(d - 1),$$ which is true in that\begin{align*} \binom{d + n - 2}{d - 2} &\geqslant \binom{d + 5 - 2}{d - 2} = \binom{d + 5 - 2}{5}\\ &= \frac{1}{5!} (d + 3)(d + 2)(d + 1)·d(d - 1)\\ &\geqslant \frac{1}{5!} (3 + 3)(3 + 2)(3 + 1)·d(d - 1) = d(d - 1). \end{align*}
• It turns out that $n\geqslant5$ suffices. – Saad Mar 16 '18 at 9:05
|
2019-11-14 13:33:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9970300793647766, "perplexity": 623.8656133213553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668525.62/warc/CC-MAIN-20191114131434-20191114155434-00249.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition/chapter-r-review-of-basic-concepts-chapter-r-test-prep-review-exercises-page-82/24
|
Precalculus (6th Edition)
$\color{blue}{-12, -6, -0.9, -\sqrt{4}, 0, \frac{1}{8}, \text{ and } 6}$
The set of integers is $\left\{..., -3, -2, -1, 0, 1, 2, 3, ...\right\}$. The set of rational numbers contains all the numbers that can be expressed as $\dfrac{p}{q}$, where $p$ and $q$ are integers and $q \ne 0$. Note that: $-12=\dfrac{-24}{2}$, $-6 = \dfrac{12}{-2}$, $-0.9 = -\dfrac{9}{10}$, $-\sqrt{4} = -2 = -\dfrac{6}{3}$, $0=\dfrac{0}{1}$, $\dfrac{1}{8}=\dfrac{1}{8}$, $6=\dfrac{18}{3}$. The numbers above are quotients of two integers with non-zero denominators, so they are all rational numbers. Thus, the rational numbers among the elements of $K$ are: $\color{blue}{-12, -6, -0.9, -\sqrt{4}, 0, \frac{1}{8}, \text{ and } 6}$
|
2018-11-14 14:24:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8793628811836243, "perplexity": 70.23090071877964}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742020.26/warc/CC-MAIN-20181114125234-20181114151234-00221.warc.gz"}
|
http://gateoverflow.in/29226/statistics
|
+1 vote
250 views
Q) Let $X$ be a normal random variable with mean $8$ and standard deviation $4$; then $P(X\leq5)$ is
A). Greater than zero but less than $0.5$
B). Greater than $0.75$
C). Greater than $0.5$ but less than $1$
D). Equal to $0.5$
A is correct.
I think the right option is A.
This is the normal distribution curve for this mean and standard deviation μ=8 and σ=4
Since the area under the curve from μ-3σ (-4 here) to μ (8) is 0.5, and 5 lies within that range, P(X≤5) will be smaller than 0.5 but will not be zero. Hence option A is correct.
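A quick numerical check of this (my own addition), using the standard normal CDF built from Python's math.erf:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # CDF of N(mu, sigma^2) via the error function
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

p = normal_cdf(5, 8, 4)   # z = (5 - 8) / 4 = -0.75
print(round(p, 4))        # ~0.2266: greater than 0 but less than 0.5, so option A
```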
P(x<=5) = (5-8)/4 = -3/4 = -0.75
A probability can't be negative, so B is the answer.
|
2017-05-27 15:40:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9584352374076843, "perplexity": 945.1060466788891}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608956.34/warc/CC-MAIN-20170527152350-20170527172350-00477.warc.gz"}
|
https://puzzling.stackexchange.com/questions/18345/the-twenty-doors-room-7/18355
|
# The Twenty Doors (ROOM 7)
This is part of The Twenty Doors series.
The previous one is The Twenty Doors! (ROOM 6).
The next one is The Twenty Doors (ROOM 8)
You (after many days rest in Room 6) finally go into Room 7. The first thing you see is the (rather long) hint on the wall:
The Steckerbrett is completely reversed, an true enigma. M3, of course. AAA AAA. 4. NFMN JRLS KKYV FAIU MRGG KPQV CNGJ ABRQ FVOG VKPO PYCY BYOI NGVA NONW DTKO TTBJ EJCI NIGU YEPB UUPM IEZH PMEN JZVG LYLW
You then pick up the slip of paper.
Well done for getting this far.
Uqdok pgd Jfgr-qsnmpmmh Esnzjf Uhfmo Jqbuypmbj Nytc qk vhzylbo wjzxusv.
And last, but not least, the keypad:
[«][±][¶] [ENTER]
Which symbol should you press in order to get into the next room?
The next door will be added when this door is solved!
• The first symbol is a guillemet and the third is a new paragraph symbol (I think). These are both (symbols of writing? grammar symbols?), whilst the second ($\pm$) is a math operator (and has also appeared before). – Conor O'Brien Jul 24 '15 at 20:14
• I feel like the obvious answer is "press [ENTER] to enter the next room"... – Ian MacDonald Jul 24 '15 at 20:20
• Isn't the person doing these 20 doors going to get hungry or thirsty after a few days with no food or water? Even the person doing the Temple of Quetzalcoatl is only in there for about 10-12 hours (in-universe time) before he gets to a place with food and drink! – Joe Z. Jul 24 '15 at 21:10
• Is it possible to attempt this question without having done any of the previous ones? – Gummy bears Jul 25 '15 at 5:10
• @Kslkgh don't worry, on Puzzlearth time passes differently than on regular Earth - else my treasure hunter would have lost his race long ago... – Bailey M Jul 30 '15 at 19:35
## 1 Answer
To get into the next room, you should press the:
Left-pointing Double Angle Quotation Mark
The first cipher decodes to:
WELL DONE FOR GETTING THIS FAR X THE MAIN CIPHER IS PLAYING AT A FAIR X GOODBYE FOR NOW X THE KEY TO THE FAIR IS SOCKS A TO Z
This Enigma cipher can be decoded here by leaving all of the default settings and entering a reversed alphabet into the Steckerbrett (plugboard). Then select "Block of text" input method and enter the cipher text.
The second cipher is a:
Playfair cipher based on the instructions above. The key is SOCKS and the letter A shall be translated to Z. Using Rumkin and selecting Encrypt instead of Decrypt, results in this "almost correct" plaintext:
"Press the Left-pointing Double Zngle Quotztion Mzrk to zdvznce onwzrds."
Then, as noted by f', replace all the Z's with A's because this decoding does not distinguish between the two characters.
• Just replace all the Z's with A's and it's fine. That space represents both A and Z. – f'' Jul 25 '15 at 7:10
• Marked as correct answer because of the comment by f" and the fact that the A to Z was accidental. – user9377 Jul 25 '15 at 10:36
|
2019-07-23 16:35:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3695508539676666, "perplexity": 5014.581278108872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529480.89/warc/CC-MAIN-20190723151547-20190723173547-00546.warc.gz"}
|
http://forums.eviews.com/viewtopic.php?f=4&t=18817&sid=0183a6e534366cd3ef24f6f3b1ebfcbb
|
## MA Backcasting in Eviews vs. Box/Jenkins
For technical questions regarding estimation of single equations, systems, VARs, Factor analysis and State Space Models in EViews. General econometric questions and advice should go in the Econometric Discussions forum.
Moderators: EViews Gareth, EViews Moderator
jpfeifer
Posts: 3
Joined: Wed Apr 08, 2015 11:47 am
### MA Backcasting in Eviews vs. Box/Jenkins
According to the Eviews help, MA terms are backcasted by running the forward model backwards in time:
Code: Select all
\tilde \epsilon_t=u_t-\theta_1*\tilde \epsilon_{t+1}-...-\theta_q*\tilde \epsilon_{t+q}
with \tilde \epsilon_{T+i}=0 for i>0. This allows computing
Code: Select all
{\tilde \epsilon_{0},...,\tilde \epsilon_{-(q-1)}}
. The help seems to suggest that the \tilde \epsilon are then used in the backward model
Code: Select all
\hat \epsilon_t=u_t-\theta_1*\hat \epsilon_{t-1}-...-\theta_q*\hat \epsilon_{t-q}
i.e. the \tilde \epsilon_{i}, i<1 are used as the \hat \epsilon_{i}. But the reference is the Box/Jenkins (1970) book. In its 2016 edition, Chapter 7.1.4, the procedure seems to be quite different, as the forward and backward models deliver two different, distinct sets of innovations. They therefore do not allow using the innovations from the forward model in the backward model. To solve this problem, they backcast the u_t for t={0,..,q-1} using the backcasts for \tilde \epsilon_t and use the fact that \tilde \epsilon_{i}=0, i<1, because these are independent of u_t. These u_t, t<1 are then used in the backward model to run a forward recursion with \hat \epsilon_{i}=0 for i<q-1 due to independence. This then allows computing {\hat \epsilon_{0},...,\hat \epsilon_{-(q-1)}}.
So what exactly does Eviews do here?
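In case it helps the discussion, here is a small NumPy sketch of how I read the two recursions quoted above. It is my own literal interpretation, not EViews code; in particular, treating u_t as 0 outside the sample during the backward pass is my guess.

```python
import numpy as np

def backcast_ma(u, theta):
    # u: disturbances u_1..u_T; theta: [theta_1, ..., theta_q].
    # Returns (presample eps~_{1-q}..eps~_0, forward-filtered eps^_1..eps^_T).
    theta = np.asarray(theta, dtype=float)
    q, T = len(theta), len(u)
    # Store time t at array index t + q - 1, for t = 1-q, ..., T+q.
    u_pad = np.zeros(T + 2 * q)
    u_pad[q:q + T] = u                        # u_t taken as 0 outside the sample (assumption)
    eps_b = np.zeros(T + 2 * q)               # eps~_{T+i} = 0 for i > 0 (start-up values)
    for t in range(T, -q, -1):                # t = T, T-1, ..., 1-q
        i = t + q - 1
        eps_b[i] = u_pad[i] - theta @ eps_b[i + 1:i + 1 + q]
    presample = eps_b[:q]                     # eps~ at times 1-q, ..., 0
    # Forward recursion eps^_t = u_t - theta_1*eps^_{t-1} - ... - theta_q*eps^_{t-q},
    # seeded with the backcast presample values.
    eps_f = np.concatenate([presample, np.zeros(T)])
    for t in range(1, T + 1):
        i = t + q - 1
        eps_f[i] = u_pad[i] - theta @ eps_f[i - q:i][::-1]
    return presample, eps_f[q:]
```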
|
2018-01-17 22:04:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6072616577148438, "perplexity": 8501.885027137178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886979.9/warc/CC-MAIN-20180117212700-20180117232700-00493.warc.gz"}
|
http://chemical-quantum-images.blogspot.com/2008/01/coulomb-and-exchange.html
|
## Saturday, 5 January 2008
### Coulomb and exchange
Let's keep going with what we did last time. It will get even more exciting than it already was.
Well I think it's nice to see what's behind some of those interactions and if I write it down here I'll find it again. But I admit that I usually read blogs with pictures rather than lots of weird signs.
The first expression was
$n \sum_{\pi \in S_n} \sum_{\sigma \in S_n} sgn(\pi) sgn(\sigma) \langle u_{\pi(1)}|\hat{h}|v_{\sigma(1)}\rangle \prod_{i=2}^n \langle u_{\pi(i)}|v_{\sigma(i)}\rangle$
Let us assume that 2 or more orbitals are different between the Slater determinants (p$\leq$n-2). Then one of the terms in every product has to be 0 and the expression vanishes.
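As a quick sanity check of that claim, here is a small toy calculation (my own sketch): take orthonormal orbitals as column vectors, replace the last two orbitals of the second determinant so that p = n-2, and evaluate the permutation sum over the overlap product from i = 2 to n. The factor at position 1 is left out, because the claim is that the remaining product already vanishes for every pair (π, σ).

```python
import itertools
import numpy as np

def sgn(perm):
    # sign of a permutation given as a tuple of 0-based indices
    p, s = list(perm), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

n = 4
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(n + 2, n + 2)))
u = Q[:, :n]                    # orthonormal orbitals u_1..u_n as columns
v = u.copy()
v[:, -2:] = Q[:, n:n + 2]       # replace the last two orbitals: p = n - 2

total = 0.0
for pi in itertools.permutations(range(n)):
    for sigma in itertools.permutations(range(n)):
        prod = 1.0
        for i in range(1, n):   # i = 2..n in the 1-based notation used above
            prod *= u[:, pi[i]] @ v[:, sigma[i]]
        total += sgn(pi) * sgn(sigma) * prod
print(total)                    # 0 up to rounding: every product contains a zero overlap
```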
If p=n-1, then we have to claim the following:
$\pi(1)=\sigma(1)=n, \qquad \forall i\in \{2,...,n\}:\pi(i)=\sigma(i)$
This of course means that π=σ. There are now (n-1)! possible permutations that work with this. And the expression reduces to: (the n! cancels against the n! we started out with)
$n!\, \langle u_n|\hat{h}|v_n\rangle$
If p=n, we also need π=σ to have all the factors unequal to 0 (and equal to 1). The expression reduces to
$n \sum_{\pi \in S_n} sgn(\pi)^2\, \langle u_{\pi(1)}|\hat{h}|u_{\pi(1)}\rangle$
The squared sign is of course 1. For every i there are (n-1)! permutations π with π(1)=i and ui=vi. Then we have
$n! \sum_{i=1}^n \langle u_i|\hat{h}|u_i\rangle$
The second expression was
$\frac{n(n-1)}{2} \sum_{\pi \in S_n} sgn(\pi) \sum_{\sigma \in S_n} sgn(\sigma) \langle u_{\pi(1)} u_{\pi(2)}|g_{12}|v_{\sigma(1)} v_{\sigma(2)}\rangle \prod_{i=3}^n \langle u_{\pi(i)}|v_{\sigma(i)}\rangle$
This time it turns out that one of the overlap integrals (and hence the whole product) equals zero if p$\leq$n-3.
I'll only talk about the case p=n-2. If you want to have no overlap integral equal to 0, you first need
$\forall i\in \{3,...,n\}:\pi(i)=\sigma(i)$
Aside from that you have to make sure that the mutually different orbitals are in the last integral. 1 and 2 have to be mapped onto n-1 and n.
$\pi(\{1,2\})=\{n-1,n\}, \qquad \sigma(\{1,2\})=\{n-1,n\}$
As a next step you can acknowledge that π and σ have to either be the same or different by one transposition.
$\sigma=\pi \quad \text{or} \quad \sigma=\pi(12)=:\pi'$
In the second case their signs are opposite. We get the following expression.
$\frac{n(n-1)}{2} \sum_{\pi \in S_n} sgn(\pi) \left( sgn(\pi)\, \langle u_{\pi(1)} u_{\pi(2)}|g_{12}|v_{\pi(1)} v_{\pi(2)}\rangle + sgn(\pi')\, \langle u_{\pi(1)} u_{\pi(2)}|g_{12}|v_{\pi(2)} v_{\pi(1)}\rangle \right)$
There are accordingly (n-2)!·2 permutations π that fulfill the requirements above, and since the signs are opposite, this reduces to
$n! \left( \langle u_{n-1} u_n|g_{12}|v_{n-1} v_n\rangle - \langle u_{n-1} u_n|g_{12}|v_n v_{n-1}\rangle \right)$
The derivations seem to be similar for p=n-1 and p=n. As Levine tells me (who was in turn told by Parr) we get for p=n-1
$n! \sum_{j=1}^n \left( \langle u_j u_n|g_{12}|u_j v_n\rangle - \langle u_j u_n|g_{12}|v_n u_j\rangle \right)$
and for p=n
$n! \sum_{i=1}^{n-1} \sum_{j=i+1}^n \left( \langle u_i u_j|g_{12}|u_i u_j\rangle - \langle u_i u_j|g_{12}|u_j u_i\rangle \right)$
(the n! is only there because I started with an extra n!)
The first integral is the Coulomb integral (with g12=1/r12). It corresponds to the Coulomb repulsion of two blurred electron clouds corresponding to the MOs. The second integral, called exchange integral, comes in because of the Slater determinant and is a consequence of the Pauli principle.
The next step is to use the spin functions from two posts ago and see whether some of the integrals vanish, and to compare this to what you expected.
|
2023-03-25 19:53:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 15, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.915711522102356, "perplexity": 643.0840368191233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945372.38/warc/CC-MAIN-20230325191930-20230325221930-00433.warc.gz"}
|
http://opratel.com/m0k6450j/8a2047-2-3-18-83-258-sequence
|
2, 3, 18, 83, 258, ___, 1298. Find the next number in the sequence (using a difference table).

A Sequence is a set of things (usually numbers) that are in order. In mathematics, a sequence is an ordered list of objects. Each number in the sequence is called a term (or sometimes "element" or "member"); read Sequences and Series for a more in-depth discussion. Each of the individual elements in a sequence is often referred to as a term, and the number of terms in a sequence is called its length, which can be infinite. Accordingly, a number sequence is an ordered list of numbers that follow a particular pattern; the order is important, and depending on the sequence it is possible for the same terms to appear multiple times. When referring to sequences like this in mathematics, we often represent every term by a special variable: x1, x2, x3, x4, x5, x6, x7, … For example, if you have the sequence 3, 5, 7, 9, the first term is 3; it is the 1st term in the sequence. Indexing involves writing a general formula that allows the determination of the nth term of a sequence as a function of n. Sequences have many applications in various mathematical disciplines due to their properties of convergence; a series is convergent if the sequence converges to some limit, while a sequence that does not converge is divergent.

There are many different types of number sequences, three of the most common of which are arithmetic sequences, geometric sequences, and Fibonacci sequences. There are multiple ways to denote sequences, one of which involves simply listing the sequence in cases where the pattern is easily discernible. In truth there are too many types of sequences to mention here, but if there is a special one you would like me to add, just let me know.

An arithmetic sequence is a number sequence in which the difference between each successive term remains constant. An Arithmetic Sequence is made by adding the same value each time; the value added each time is called the "common difference". What is the common difference in this example? The common difference could also be negative: the difference can be positive or negative, and depending on its sign the terms of the arithmetic sequence will tend towards positive or negative infinity. The general form of an arithmetic sequence can be written as a_n = a_1 + f·(n-1), where f is the common difference; in the example above it is clear that the common difference f is 2.

The general form of a geometric sequence can be written as a_n = a·r^(n-1). In the sequence 1, 2 (×2), 4 (×2), 8 (×2), 16 (×2), … the pattern is "multiply the previous number by 2 to get the next one", so the common ratio r is 2 and the scale factor a is 1. The dots (…) at the end simply mean that the sequence can go on forever. The sum of such a sequence follows the geometric-series formula; e.g. 1 + 2 + 4 = 7, and 1 × (1 - 2³)/(1 - 2) = -7/-1 = 7. Identify the sequence 2, 6, 18, 54, …: this is a geometric sequence since there is a common ratio between each term; in this case, multiplying the previous term in the sequence by 3 gives the next term. Using the equation above to calculate the 5th term, and looking back at the listed sequence, the 5th term found from the equation matches the listed sequence as expected; likewise, using the equation to calculate the 8th term and comparing the value to the geometric sequence confirms that they match. Exercise: find the common ratio if the fourth term in a geometric series is $\frac{4}{3}$ and the eighth term is $\frac{64}{243}$. Example 3: the first term of a geometric progression is 1 and the common ratio is 5; determine how many terms must be added together to give a sum of 3906.

The Fibonacci Sequence is found by adding the two numbers before a term together; mathematically, the Fibonacci sequence is written as F(n) = F(n-1) + F(n-2). The first two numbers in a Fibonacci sequence are defined as either 1 and 1, or 0 and 1, depending on the chosen starting point; after 3 and 5, all the rest are the sum of the two numbers before. The 2 is found by adding the two numbers before it (1+1), the 21 is found by adding the two numbers before it (8+13), and the next number in the sequence would be 55 (21+34). Can you figure out the next few numbers? Fibonacci numbers occur often, and unexpectedly, within mathematics and are the subject of many studies. They have applications within computer algorithms (such as Euclid's algorithm to compute the greatest common factor), economics, and biological settings including the branching in trees and the flowering of an artichoke, as well as many others.

To find a missing number in a Sequence, first we must have a Rule: find the difference between numbers that are next to each other, and determine whether the order of numbers is ascending (getting larger in value) or descending (becoming smaller in value). Example: find the missing number in 30, 23, ?, 9. One of the troubles with finding "the next number" in a sequence is that mathematics is so powerful we can find more than one Rule that works; it may be a list of the winners' numbers, so the next number could be anything! In the sequence {1, 2, 4, 7, 11, 16, 22, ...} we need to find the differences, and then find the differences of those (called second differences); the second differences in this case are 1, and with second differences we multiply by n²/2. Here are three solutions (there can be more!): Solution 1: add 1, then add 2, 3, 4, … (that rule looks a bit complicated, but it works); Solution 2: after 1 and 2, add the two previous numbers, plus 1; Solution 3: after 1, 2 and 4, add the three previous numbers. So we have three perfectly reasonable solutions, and they create totally different sequences. So by "trial-and-error" we discovered a rule that works: sequence 1, 2, 4, 7, 11, 16, 22, 29, 37, … We can use a Rule to find any term; did you see how we wrote that rule using "x" and "n"? It means "the previous term", as term number n-1 is 1 less than term number n; so term 6 equals term 5 plus term 4, and we already know term 5 is 21 and term 4 is 13. Sometimes we can just look at the numbers and see a pattern. Answer is: D, they are squares (1²=1, 2²=4, 3²=9, 4²=16, ...). What is the next number in the sequence 2, 3, 6, 18, 108? Each number is the product of the previous two numbers. Another series runs 50 + 1² = 51, 51 - 2² = 47, 47 + 3² = 56, 56 - 4² = 40, 40 + 5² = 65, 65 - 6² = 29; hence, there should be 40 in place of 42.

To use this sequence calculator, follow the steps below. The first input box is divided into two categories. You need to set the "Use regular expression to parse number" checkbox and enter the regular expression and match group, which will … For each of these lists of integers, provide a simple formula or rule that generates the terms of an integer sequence that begins with the given list; assuming that your formula or rule is correct, determine the next three terms of the sequence.

The 4.0 L (3956 cc) straight-6 was an evolution of the 258 and 150 and appeared in 1987. It had the same 3.88 in (98.4 mm) bore as the 150 with a longer 3.41 in (86.7 mm) stroke. The 4.0 was discontinued at the end of the 2006 model year, as the Jeep Wrangler will instead get Chrysler's 3.8 L OHV V6.
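One rule that fits every given term of the sequence 2, 3, 18, 83, 258, ___, 1298 is t(n) = n^4 + 2, counting from n = 0, which would put 627 in the blank. A quick check (my own addition; of course other rules could fit too):

```python
terms = [2, 3, 18, 83, 258, None, 1298]        # None marks the blank
rule = [n**4 + 2 for n in range(len(terms))]   # candidate rule t(n) = n^4 + 2
assert all(t is None or t == r for t, r in zip(terms, rule))
print(rule[5])                                 # 627
```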
|
2021-01-18 10:36:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7222408056259155, "perplexity": 331.0185169701972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514495.52/warc/CC-MAIN-20210118092350-20210118122350-00089.warc.gz"}
|
https://courses.lumenlearning.com/astronomy/chapter/scientific-notation/
|
Scientific Notation
In astronomy (and other sciences), it is often necessary to deal with very large or very small numbers. In fact, when numbers become truly large in everyday life, such as the national debt in the United States, we call them astronomical. Among the ideas astronomers must routinely deal with is that the Earth is 150,000,000,000 meters from the Sun, and the mass of the hydrogen atom is 0.00000000000000000000000000167 kilograms. No one in his or her right mind would want to continue writing so many zeros!
Instead, scientists have agreed on a kind of shorthand notation, which is not only easier to write, but (as we shall see) makes multiplication and division of large and small numbers much less difficult. If you have never used this powers-of-ten notation or scientific notation, it may take a bit of time to get used to it, but you will soon find it much easier than keeping track of all those zeros.
Writing Large Numbers
In scientific notation, we generally agree to have only one number to the left of the decimal point. If a number is not in this format, it must be changed. The number 6 is already in the right format, because for integers, we understand there to be a decimal point to the right of them. So 6 is really 6., and there is indeed only one number to the left of the decimal point. But the number 965 (which is 965.) has three numbers to the left of the decimal point, and is thus ripe for conversion.
To change 965 to proper form, we must make it 9.65 and then keep track of the change we have made. (Think of the number as a weekly salary and suddenly it makes a lot of difference whether we have $965 or $9.65.) We keep track of the number of places we moved the decimal point by expressing it as a power of ten. So 965 becomes 9.65 × 10² or 9.65 multiplied by ten to the second power. The small raised 2 is called an exponent, and it tells us how many times we moved the decimal point to the left.
Note that 10² also designates 10 squared, or 10 × 10, which equals 100. And 9.65 × 100 is just 965, the number we started with. Another way to look at scientific notation is that we separate out the messy numbers out front, and leave the smooth units of ten for the exponent to denote. So a number like 1,372,568 becomes 1.372568 times a million (10⁶) or 1.372568 times 10 multiplied by itself 6 times. We had to move the decimal point six places to the left (from its place after the 8) to get the number into the form where there is only one digit to the left of the decimal point.
The reason we call this powers-of-ten notation is that our counting system is based on increases of ten; each place in our numbering system is ten times greater than the place to the right of it. As you have probably learned, this got started because human beings have ten fingers and we started counting with them. (It is interesting to speculate that if we ever meet intelligent life-forms with only eight fingers, their counting system would probably be a powers-of-eight notation!)
So, in the example we started with, the number of meters from Earth to the Sun is 1.5 × 10¹¹. Elsewhere in the book, we mention that a string 1 light-year long would fit around Earth’s equator 236 million or 236,000,000 times. In scientific notation, this would become 2.36 × 10⁸. Now if you like expressing things in millions, as the annual reports of successful companies do, you might like to write this number as 236 × 10⁶. However, the usual convention is to have only one number to the left of the decimal point.
Writing Small Numbers
Now take a number like 0.00347, which is also not in the standard (agreed-to) form for scientific notation. To put it into that format, we must make the first part of it 3.47 by moving the decimal point three places to the right. Note that this motion to the right is the opposite of the motion to the left that we discussed above. To keep track, we call this change negative and put a minus sign in the exponent. Thus 0.00347 becomes 3.47 × 10⁻³.
In the example we gave at the beginning, the mass of the hydrogen atom would then be written as 1.67 × 10⁻²⁷ kg. In this system, one is written as 10⁰, a tenth as 10⁻¹, a hundredth as 10⁻², and so on. Note that any number, no matter how large or how small, can be expressed in scientific notation.
Multiplication and Division
Scientific notation is not only compact and convenient, it also simplifies arithmetic. To multiply two numbers expressed as powers of ten, you need only multiply the numbers out front and then add the exponents. If there are no numbers out front, as in 100 × 100,000, then you just add the exponents (in our notation, 10² × 10⁵ = 10⁷). When there are numbers out front, you have to multiply them, but they are much easier to deal with than numbers with many zeros in them.
Here’s an example:
$\left(3\times {10}^{5}\right)\times \left(2\times {10}^{9}\right)=6\times {10}^{14}$
And here’s another example:
$\begin{array}{cc}\hfill 0.04\times 6,000,000& =\left(4\times {10}^{-2}\right)\times \left(6\times {10}^{6}\right)\hfill \\ & =24\times {10}^{4}\hfill \\ & =2.4\times {10}^{5}\hfill \end{array}$
Note in the second example that when we added the exponents, we treated negative exponents as we do in regular arithmetic (−2 plus 6 equals 4). Also, notice that our first result had a 24 in it, which was not in the acceptable form, having two places to the left of the decimal point, and we therefore changed it to 2.4 and changed the exponent accordingly.
To divide, you divide the numbers out front and subtract the exponents. Here are several examples:
$\begin{array}{ccc}\hfill \frac{1,000,000}{1000}& =\hfill & \frac{{10}^{6}}{{10}^{3}}={10}^{\left(6 - 3\right)}={10}^{3}\hfill \\ \hfill \frac{9\times {10}^{12}}{2\times {10}^{3}}& =\hfill & 4.5\times {10}^{9}\hfill \\ \hfill \frac{2.8\times {10}^{2}}{6.2\times {10}^{5}}& =\hfill & 0.452\times {10}^{\text{-}3}=4.52\times {10}^{-4}\hfill \end{array}$
In the last example, our first result was not in the standard form, so we had to change 0.452 into 4.52, and change the exponent accordingly.
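For readers who like to check such arithmetic by machine, Python's floating-point literals and its 'e' format specifier use the same powers-of-ten idea (a small illustrative snippet, not part of the original chapter):

```python
# The worked examples above, redone with floats and scientific-notation formatting.
print(f"{3e5 * 2e9:.1e}")         # 6.0e+14
print(f"{0.04 * 6_000_000:.1e}")  # 2.4e+05
print(f"{9e12 / 2e3:.1e}")        # 4.5e+09
print(f"{2.8e2 / 6.2e5:.2e}")     # 4.52e-04
```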
If this is the first time that you have met scientific notation, we urge you to practice many examples using it. You might start by solving the exercises below. Like any new language, the notation looks complicated at first but gets easier as you practice it.
Exercises
1. At the end of September, 2015, the New Horizons spacecraft (which encountered Pluto for the first time in July 2015) was 4.898 billion km from Earth. Convert this number to scientific notation. How many astronomical units is this? (An astronomical unit is the distance from Earth to the Sun, or about 150 million km.)
2. During the first six years of its operation, the Hubble Space Telescope circled Earth 37,000 times, for a total of 1,280,000,000 km. Use scientific notation to find the number of km in one orbit.
3. In a large university cafeteria, a soybean-vegetable burger is offered as an alternative to regular hamburgers. If 889,875 burgers were eaten during the course of a school year, and 997 of them were veggie-burgers, what fraction and what percent of the burgers does this represent?
4. In a 2012 Kelton Research poll, 36 percent of adult Americans thought that alien beings have actually landed on Earth. The number of adults in the United States in 2012 was about 222,000,000. Use scientific notation to determine how many adults believe aliens have visited Earth.
5. In the school year 2009–2010, American colleges and universities awarded 2,354,678 degrees. Among these were 48,069 PhD degrees. What fraction of the degrees were PhDs? Express this number as a percent. (Now go and find a job for all those PhDs!)
6. A star 60 light-years away has been found to have a large planet orbiting it. Your uncle wants to know the distance to this planet in old-fashioned miles. Assume light travels 186,000 miles per second, and there are 60 seconds in a minute, 60 minutes in an hour, 24 hours in a day, and 365 days in a year. How many miles away is that star?
|
2019-10-14 16:04:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7270410656929016, "perplexity": 394.64437833384585}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653876.31/warc/CC-MAIN-20191014150930-20191014174430-00517.warc.gz"}
|
https://gmatclub.com/forum/m28-184542.html
|
# M28-44
Math Expert
Joined: 02 Sep 2009
Posts: 53734
16 Sep 2014, 01:31
Difficulty: 95% (hard). Question Stats: 24% correct, 76% wrong, based on 245 sessions.
Was the average (arithmetic mean) temperature in degrees Celsius in city A in March less than the average (arithmetic mean) temperature in degrees Celsius in city B in March?
(1) The median temperature in degrees Celsius in City A in March was less than the median temperature in degrees Celsius in city B.
(2) The ratio of the average temperatures in degrees Celsius in A and B in March was 3 to 4, respectively.
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 53734
16 Sep 2014, 01:31
Official Solution:
Was the average (arithmetic mean) temperature in degrees Celsius in city A in March less than the average (arithmetic mean) temperature in degrees Celsius in city B in March?
(1) The median temperature in degrees Celsius in City A in March was less than the median temperature in degrees Celsius in city B. Clearly insufficient.
(2) The ratio of the average temperatures in degrees Celsius in A and B in March was 3 to 4, respectively. Temperatures can be negative, thus this statement is also not sufficient. Consider $$T(A)=3$$ and $$T(B)=4$$ AND $$T(A)=-3$$ and $$T(B)=-4$$.
(1)+(2) We have no additional useful info. Not sufficient.
_________________
Manager
Joined: 11 Feb 2015
Posts: 98
GMAT 1: 710 Q48 V38
03 Aug 2015, 22:50
Can't believe I got this wrong. It's a very basic ratio property, and yet it's a 700+ level question because everyone makes the same mistake and picks option B.
Thanks for this question.
Senior Manager
Joined: 12 Aug 2015
Posts: 283
Concentration: General Management, Operations
GMAT 1: 640 Q40 V37
GMAT 2: 650 Q43 V36
GMAT 3: 600 Q47 V27
GPA: 3.3
WE: Management Consulting (Consulting)
13 Oct 2015, 11:27
I think this is a poor-quality question and I don't agree with the explanation. You are unlikely to meet this kind of question on the real GMAT.
_________________
KUDO me plenty
Math Expert
Joined: 02 Sep 2009
Posts: 53734
13 Oct 2015, 13:16
I think this is a poor-quality question and I don't agree with the explanation. You are unlikely to meet this kind of question on the real GMAT.
Can you please elaborate a bit? We'll try to modify the question if necessary. Thank you.
_________________
Intern
Joined: 27 Dec 2011
Posts: 40
Location: Brazil
Concentration: Entrepreneurship, Strategy
GMAT 1: 620 Q48 V27
GMAT 2: 680 Q46 V38
GMAT 3: 750 Q50 V41
GPA: 3.5
16 Oct 2015, 07:19
Trickiest question ever! It is disguised as an easy one. Very important to improve focus while doing the test.
Verbal Forum Moderator
Joined: 15 Apr 2013
Posts: 181
Location: India
Concentration: General Management, Marketing
GMAT Date: 11-23-2015
GPA: 3.6
WE: Science (Other)
15 Nov 2015, 06:18
This is more of a logic-based than a quant-based question.
Question stats: 13% users answered correctly (>2200 sessions) Wow.
Manager
Joined: 05 Jul 2015
Posts: 100
GMAT 1: 600 Q33 V40
GPA: 3.3
20 Feb 2016, 22:29
I think this question could be improved if it was mentioned that the temperatures were in DECEMBER instead of MARCH since where I live, we never see negative temperatures in March! Or if it was mentioned that the cities were in Alaska or something. I feel like throwing March in the question stem implies positive numbers and gives an unfair advantage to test takers in Antarctica.
Intern
Joined: 12 May 2016
Posts: 2
Schools: HBS '18 (A)
23 May 2016, 00:54
Bunuel wrote:
Official Solution:
(1) The median temperature in City A in March was less than the median temperature in city B. Clearly insufficient.
(2) The ratio of the average temperatures in A and B in March was 3 to 4, respectively. Temperatures can be negative, thus this statement is also not sufficient. Consider $$T(A)=3$$ and $$T(B)=4$$ AND $$T(A)=-3$$ and $$T(B)=-4$$.
(1)+(2) We have no additional useful info. Not sufficient.
I'm just salty I got tricked on this question too, but technically temperatures can/cannot be negative depending on which scale you use? (i.e. never specified it wasn't temperature in kelvins)
Manager
Joined: 08 Jul 2015
Posts: 53
GPA: 3.8
WE: Project Management (Energy and Utilities)
31 May 2016, 07:14
davidwu wrote:
Bunuel wrote:
Official Solution:
(1) The median temperature in City A in March was less than the median temperature in city B. Clearly insufficient.
(2) The ratio of the average temperatures in A and B in March was 3 to 4, respectively. Temperatures can be negative, thus this statement is also not sufficient. Consider $$T(A)=3$$ and $$T(B)=4$$ AND $$T(A)=-3$$ and $$T(B)=-4$$.
(1)+(2) We have no additional useful info. Not sufficient.
I'm just salty I got tricked on this question too, but technically temperatures can/cannot be negative depending on which scale you use? (i.e. never specified it wasn't temperature in kelvins)
I got tricked as well! Technically you're right too, davidwu - however, most everyday temperatures are given on either the Celsius or Fahrenheit scale - Kelvin is usually used in physics, labs or other special environments - so I think the question is still valid!
_________________
[4.33] In the end, what would you gain from everlasting remembrance? Absolutely nothing. So what is left worth living for?
This alone: justice in thought, goodness in action, speech that cannot deceive, and a disposition glad of whatever comes, welcoming it as necessary, as familiar, as flowing from the same source and fountain as yourself. (Marcus Aurelius)
Senior Manager
Joined: 08 Jun 2015
Posts: 426
Location: India
GMAT 1: 640 Q48 V29
GMAT 2: 700 Q48 V38
GPA: 3.33
25 Jun 2016, 11:20
Tricky one indeed ! Thanks for this question !
_________________
" The few , the fearless "
Current Student
Joined: 28 Nov 2014
Posts: 842
Concentration: Strategy
Schools: Fisher '19 (M\$)
GPA: 3.71
06 Oct 2016, 14:23
How do I make a decision between C and E? It took me some time to see all possible cases.
Bunuel Can you throw some light!
Intern
Joined: 28 Sep 2014
Posts: 2
Location: India
WE: Marketing (Energy and Utilities)
03 Jan 2017, 20:47
I think if you mentioned the unit of measurement, it would be better. Because if the temperatures are in Kelvin, there would be no negative temperatures.
Math Expert
Joined: 02 Sep 2009
Posts: 53734
04 Jan 2017, 01:26
cheeerio wrote:
I think if you mentioned the unit of measurement, it would be better. Because if the temperatures are in Kelvin, there would be no negative temperatures.
Edited as suggested. Thank you.
_________________
Intern
Joined: 04 Aug 2014
Posts: 28
GMAT 1: 620 Q44 V31
GMAT 2: 620 Q47 V28
GPA: 3.2
06 Mar 2017, 01:19
Hi Bunuel,
Can you clarify how S1 & S2 together are insufficient, with an example?
Current Student
Joined: 23 Nov 2016
Posts: 74
Location: United States (MN)
GMAT 1: 760 Q50 V42
GPA: 3.51
Updated on: 05 Apr 2017, 14:21
sidagar wrote:
Hi Bunuel,
Can you clarify how S1 & S2 together are insufficient, with an example?
See screenshot. Edited the table so the logic can be seen more clearly. Basically, overall values can be relatively stable but extreme values (outliers) can throw the averages off while keeping the median the same.
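Since the attachment may not be visible, here is a quick numerical sketch of the same idea with my own made-up temperatures (not the ones from the screenshot): both statements can hold while the comparison of the means goes either way, which is why the answer is E.

```python
from statistics import mean, median

# Toy "March temperatures" for cities A and B (degrees Celsius).
cases = {
    "answer yes": ([3, 3, 3], [4, 4, 4]),        # means 3 and 4
    "answer no":  ([-10, -2, 3], [-11, -1, 0]),  # means -3 and -4
}
for label, (A, B) in cases.items():
    s1 = median(A) < median(B)           # statement (1) holds
    s2 = mean(A) / mean(B) == 3 / 4      # statement (2) holds (ratio 3 to 4)
    print(label, "| S1:", s1, "| S2:", s2, "| mean(A) < mean(B):", mean(A) < mean(B))
```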
Intern
Joined: 26 Oct 2016
Posts: 25
05 Apr 2017, 08:06
Bunuel wrote:
Official Solution:
Was the average (arithmetic mean) temperature in degrees Celsius in city A in March less than the average (arithmetic mean) temperature in degrees Celsius in city B in March?
(1) The median temperature in degrees Celsius in City A in March was less than the median temperature in degrees Celsius in city B. Clearly insufficient.
(2) The ratio of the average temperatures in degrees Celsius in A and B in March was 3 to 4, respectively. Temperatures can be negative, thus this statement is also not sufficient. Consider $$T(A)=3$$ and $$T(B)=4$$ AND $$T(A)=-3$$ and $$T(B)=-4$$.
(1)+(2) We have no additional useful info. Not sufficient.
I tried substituting the values for A and somehow was convinced that A is sufficient. Can you explain on what grounds this is insufficient?
Current Student
Joined: 23 Nov 2016
Posts: 74
Location: United States (MN)
GMAT 1: 760 Q50 V42
GPA: 3.51
05 Apr 2017, 14:21
ezzo wrote:
Bunuel wrote:
Official Solution:
Was the average (arithmetic mean) temperature in degrees Celsius in city A in March less than the average (arithmetic mean) temperature in degrees Celsius in city B in March?
(1) The median temperature in degrees Celsius in City A in March was less than the median temperature in degrees Celsius in city B. Clearly insufficient.
(2) The ratio of the average temperatures in degrees Celsius in A and B in March was 3 to 4, respectively. Temperatures can be negative, thus this statement is also not sufficient. Consider $$T(A)=3$$ and $$T(B)=4$$ AND $$T(A)=-3$$ and $$T(B)=-4$$.
(1)+(2) We have no additional useful info. Not sufficient.
I tried substituting the values for A and somehow was convinced that A is sufficient. Can you explain on what grounds this is insufficient?
See example above.
Intern
Status: Striving to get that elusive 740
Joined: 04 Jun 2017
Posts: 43
GMAT 1: 690 Q49 V35
GPA: 3.7
WE: Analyst (Consulting)
29 Jul 2017, 10:14
I think this is a poor-quality question, although I agree with the explanation. I don't think this question should be included in the quant section: it does not test anything related to quant. We are told not to bring in any extra subject-matter expertise, yet this question requires it. Someone who does not know whether a temperature in Celsius can be positive or negative (theoretically or practically) cannot solve this question with "100% confidence" (this is the key).
Intern
Joined: 07 May 2017
Posts: 3
GMAT 1: 660 Q49 V33
GMAT 2: 710 Q49 V38
GPA: 3.6
WE: Information Technology (Consulting)
16 Jan 2018, 21:46
I think this is a poor-quality question and I agree with explanation.
|
2019-03-20 15:32:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7582805752754211, "perplexity": 4397.178309866482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202433.77/warc/CC-MAIN-20190320150106-20190320172106-00302.warc.gz"}
|
https://ltwork.net/which-universal-force-acts-only-on-protons-and-neutrons--9492960
|
# Which universal force acts only on protons and neutrons in a nucleus
###### Question:
Which universal force acts only on protons and neutrons in a nucleus
|
2022-09-29 23:10:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49182915687561035, "perplexity": 1785.5678706723675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00515.warc.gz"}
|
https://owenduffy.net/blog/?cat=12
|
## Do I ‘need’ a masthead preamp to work satellites on 2m? – G/T vs G/Ta
A reader of Do I ‘need’ a masthead preamp to work satellites on 2m? – space noise scenario has written to say he does not like my comments on the hammy adaptation of G/T.
Above is an archived extract of a spreadsheet that was very popular in the ham community, both with antenna designers and sellers and end users (buyers / constructors). It shows a column entitled G/T which is actually the hammy calculation. The meaning possibly derives from (Bertelsmeier 1987), he used G/Ta.
Ta is commonly interpreted by hams to be Temperature – antenna. It is true that antennas have an intrinsic equivalent noise temperature, it relates to their loss and physical temperature and is typically a very small number. But as Bertelsmeier uses it, it is Temperature – ambient (or external), and that is how it is used in this article.
Let’s calculate the G/Ta statistic for the three scenarios in Do I ‘need’ a masthead preamp to work satellites on 2m? – space noise scenario.
## Base scenario
Above is a calculation of the base scenario, G/T=-29.74dB/K.
Also shown in this screenshot is G/Ta=-23.98dB/K.
## Do I ‘need’ a masthead preamp to work satellites on 2m? – terrestrial noise scenario
Do I ‘need’ a masthead preamp to work satellites on 2m? – space noise explored a scenario for a high gain antenna pointed skywards. This article explores the case of an omni antenna which basically captures ‘terrestrial’ noise.
Base scenario is a low end satellite ground station:
• 144MHz;
• terrestrial noise (satellite with omni antenna);
• IC-9700, assume NF=4.8dB;
• omni antenna;
• 10m of LMR-400.
## Do I ‘need’ a masthead preamp to work satellites on 2m? – space noise scenario
A question asked online recently provides an interesting and common case to explore.
Base scenario is a low end satellite ground station:
• 144MHz;
• satellite;
• IC-9700, assume NF=4.8dB;
• high gain (narrow beamwidth antenna);
• 10m of LMR-400.
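For readers who want to play with the arithmetic, below is a rough sketch of the kind of cascade calculation behind such G/T figures. The antenna gain, external noise temperature and the ~0.5dB of line loss for 10m of LMR-400 at 144MHz are round numbers assumed for illustration rather than the figures used in the linked articles, so the result will not match those screenshots.

```python
import math

def db(x): return 10 * math.log10(x)
def undb(x): return 10 ** (x / 10)

gain_dbi = 13.0       # assumed antenna gain
t_ant = 200.0         # assumed external (sky) noise temperature, K
line_loss_db = 0.5    # assumed loss of 10 m of LMR-400 at 144 MHz
nf_rx_db = 4.8        # IC-9700 noise figure from the scenario
t0 = 290.0

L = undb(line_loss_db)              # line loss as a power ratio >= 1
t_line = (L - 1) * t0               # line noise contribution referred to its input
t_rx = (undb(nf_rx_db) - 1) * t0    # receiver equivalent noise temperature
t_sys = t_ant + t_line + L * t_rx   # total, referred to the antenna terminals
print(f"Tsys = {t_sys:.0f} K, G/T = {gain_dbi - db(t_sys):.2f} dB/K")
```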
## Comparing sensitivity figures of an AM receiver and SSB receiver
Receiver sensitivity is commonly given as some signal level, say in µV, for a given Signal to Noise ratio (S/N), say 10dB. For AM, the depth of sinusoidal modulation is also given, and it is usually 30%. In fact these are power ratios in the context of some nominal reference receiver input impedance.
In fact what is commonly measured is Signal + Noise to Noise ratio, and of course this ratio is one of powers. For this reason, specifications often give (S+N)/N.
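As an aside, converting a measured (S+N)/N figure to S/N is a one-liner once both are treated as power ratios (a small helper of my own for illustration):

```python
import math

def snn_to_sn_db(snn_db):
    # Convert a measured (S+N)/N in dB to S/N in dB, both as power ratios.
    return 10 * math.log10(10 ** (snn_db / 10) - 1)

print(round(snn_to_sn_db(10.0), 2))  # a 10 dB (S+N)/N reading is about 9.54 dB S/N
```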
This article discusses those metrics in the context of ‘conventional’ receivers and introduces the key role of assumed bandwidth through the concept of Equivalent Noise Bandwidth.
Let’s consider the raw S/N ratio of an ideal AM detector and ideal SSB detector.
## Raw Signal/Noise
### AM
Above is a diagram of the various vector components of an AM signal with random noise, shown at the ‘instant’ of a modulation ‘valley’. The black vector represents the carrier (1V), the two blue vectors are counter rotating vectors of each of the sideband components, in this case with modulation depth 30%, and the red vector is 0.095V of random noise rotating on the end of the carrier + sideband components. Continue reading Comparing sensitivity figures of an AM receiver and SSB receiver
## Quantifying performance of a simple broadcast receive system on MF
I see online discussions struggling to try to work out if a receiving system is sufficiently good for a certain application.
Let’s work an example using Simsmith to do some of the calculations.
Scenario:
• 20m ground mounted vertical base fed against a 2.4m driven earth electrode @ 0.5MHz;
• 10m RG58A/U coax; and
• Receiver with 500+j0Ω ohms input impedance and Noise Figure 20dB.
An NEC-4.2 model of the antenna gives a feed point impedance of 146-j4714Ω and radiation efficiency of 0.043%, so radiation resistance $$Rr=146 \cdot 0.00043=0.063 \;\Omega$$.
Above, the NEC antenna model summary.
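For readers following the arithmetic, here is a tiny Python sketch of the radiation resistance calculation, assuming the feed point resistance is simply the sum of radiation resistance and loss (ground plus conductor) resistance, so that efficiency = Rr/Rfeed:

```python
# Feed point resistance and radiation efficiency from the NEC-4.2 model above
r_feed = 146.0          # ohms (real part of 146 - j4714)
efficiency = 0.00043    # 0.043% radiation efficiency

r_rad = r_feed * efficiency   # radiation resistance
r_loss = r_feed - r_rad       # everything else: ground and conductor loss

print(f"Rr    = {r_rad:.3f} ohms")   # ~0.063 ohms
print(f"Rloss = {r_loss:.1f} ohms")  # ~145.9 ohms
```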
## Active monopole + RTL SDR + RPi Spyserver experiment
A brief experiment was conducted of a remote HF receiver using:
• 1m active monopole;
• RTL-2832U v3 SDR dongle;
• RPi 3B+ running Spyserver; and
• SdrSharp client.
Above is the active whip antenna. It is not an optimal mounting but, as you can see from the clamps, a temporary one; importantly, it does not confuse results with feed line common mode contribution.
## SDR# (v1.0.0.1732) – channel filter exploration
With plans to use an RTL-SDR dongle and SDR# (v1.0.0.1732) for an upcoming project, the Equivalent Noise Bandwidth (ENB) of several channel filter configurations were explored.
A first observation of listening to a SSB telephony signal is an excessive low frequency rumble from the speaker indicative of a baseband response to quite low frequencies, much lower than needed or desirable for SSB telephony.
### 500Hz CW filter
The most common application of such a filter is reception of A1 Morse code.
Above is a screenshot of the filter settings.
## Noise figure of active loop amplifiers – the Ikin dynamic impedance method
Noise figure of active loop amplifiers – some thoughts discussed measurement of internal noise with particular application of active broadband loop antennas.
(Ikin 2016) proposes a different method of measuring noise figure NF.
Therefore, the LNA noise figure can be derived by measuring the noise with the LNA input terminated with a resistor equal to its input impedance. Then with the measurement repeated with the resistor removed, so that the LNA input is terminated by its own Dynamic Impedance. The difference in the noise ref. the above measurements will give a figure in dB which is equal to the noise reduction of the LNA verses thermal noise at 290K. Converting the dB difference into an attenuation power ratio then multiplying this by 290K gives the LNA Noise Temperature. Then using the Noise Temperature to dB conversion table yields the LNA Noise Figure. See Table 1.
The explanation is not very clear to me, and there is no mathematical proof of the technique offered… so a bit unsatisfying… but it is oft cited in ham online discussions.
I have taken the liberty to extend Ikin’s Table 1 to include some more values of column 1 for comparison with a more conventional Y factor test of a receiver’s noise figure.
Above is the extended table. The formulas in all cells of a column are the same; the highlighted row is for later reference.
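A minimal Python sketch of the noise temperature / noise figure conversions used in tables like the one above (290K reference temperature assumed):

```python
import math

T0 = 290.0  # reference temperature, K

def nf_to_te(nf_db):
    """Noise figure (dB) to equivalent noise temperature (K)."""
    return T0 * (10 ** (nf_db / 10) - 1)

def te_to_nf(te):
    """Equivalent noise temperature (K) to noise figure (dB)."""
    return 10 * math.log10(1 + te / T0)

print(nf_to_te(4.8))   # the assumed IC-9700 NF corresponds to about 586 K
print(te_to_nf(586))   # and back again to about 4.8 dB
```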
## Review of noise
Let’s review the concepts of noise figure, equivalent noise temperature and measurement.
Firstly let’s consider the nature of noise. The noise we are discussing is dominated by thermal noise, the noise due to random thermal agitation of charge carriers in conductors. Johnson noise (as it is known) has a uniform spectral power density, ie a uniform power/bandwidth. The maximum thermal noise power density available from a resistor at temperature T is given by $$NPD=k_B T$$ where Boltzmann’s constant kB=1.38064852e-23 (and of course the load must be matched to obtain that maximum noise power density). Temperature is absolute temperature, it is measured in Kelvins and 0°C≅273K.
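A short Python sketch of the above: the available thermal noise power is the density kB·T multiplied by bandwidth. The bandwidth figures below are arbitrary examples:

```python
import math

k_B = 1.38064852e-23  # Boltzmann's constant, J/K

def noise_power_dbm(T, bandwidth_hz):
    """Available thermal noise power from a matched resistor at temperature T (K), in dBm."""
    p_w = k_B * T * bandwidth_hz          # watts
    return 10 * math.log10(p_w / 1e-3)    # convert to dBm

print(noise_power_dbm(290, 1))      # about -174 dBm in 1 Hz
print(noise_power_dbm(290, 2000))   # about -141 dBm in a 2 kHz equivalent noise bandwidth
```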
## Noise Figure
Noise Figure NF by definition is the reduction in S/N ratio (in dB) across a system component. So, we can write $$NF=10 log \frac{S_{in}}{N_{in}}- 10 log \frac{S_{out}}{N_{out}}$$.
### Equivalent noise temperature
One of the many methods of characterising the internal noise contribution of an amplifier is to treat it as noiseless and derive an equivalent temperature of a matched input resistor that delivers equivalent noise, this temperature is known as the equivalent noise temperature Te of the amplifier.
So for example, if we were to place a 50Ω resistor on the input of a nominally 50Ω input amplifier, and raised its temperature from 0K to the point T where the noise output power of the amplifier doubled, we could infer that the internal noise of the amplifier could be represented by an input resistor at temperature T. Fine in concept, but not very practical.
### Y factor method
Applying a little maths, we do have a practical measurement method which is known as the Y factor method. It involves measuring the ratio of noise power output (Y) for two different source resistor temperatures, Tc and Th. We can say that $$NF=10 log \frac{(\frac{T_h}{290}-1)-Y(\frac{T_c}{290}-1)}{Y-1}$$.
AN 57-1 contains a detailed mathematical explanation / proof of the Y factor method.
We can buy a noise source off the shelf; they come in a range of hot and cold temperatures. For example, one with specified Excess Noise Ratio (a common method of specifying them) has Th=9461K and Tc=290K. If we measured a DUT and observed that Y=3 (4.77dB) we could calculate that NF=12dB.
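A minimal Python sketch of that Y factor calculation; the function simply implements the formula above, and the example reproduces the NF≈12dB result:

```python
import math

def y_factor_nf(y, t_hot, t_cold=290.0):
    """Noise figure (dB) from a Y factor measurement; y is a linear power ratio."""
    num = (t_hot / 290 - 1) - y * (t_cold / 290 - 1)
    return 10 * math.log10(num / (y - 1))

# Example from the text: Th = 9461 K, Tc = 290 K, Y = 3 (4.77 dB)
print(y_factor_nf(3, 9461))   # about 12 dB
```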
## SimSmith – looking both ways – an LNA design task
This article shows the use of SimSmith in design and analysis of the input circuit of an MGF1302 LNA.
The MGF1302 is a low noise GaAs FET designed for S band to X band amplifiers, and was very popular in ham equipment until the arrival of pHEMT devices.
An important characteristic of the MGF1302 is that matching the input circuit for maximum gain (maximum power transfer) does not achieve the best Noise Figure… and since low noise is the objective, then we must design for that.
The datasheet contains a set of Γopt for the source impedance seen by the device gate, and interpolating for 1296MHz Γopt=0.73∠-10.5°.
Let’s convert Γopt to some other useful values.
The equivalent source Z, Y and rectangular form of Γopt will be convenient during the circuit design phase.
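A small Python sketch of one such conversion, from Γopt to the equivalent source impedance and admittance, assuming a 50Ω reference. The printed values are simply what the standard Z=Z0(1+Γ)/(1−Γ) transformation gives for the interpolated Γopt, not figures quoted from the datasheet:

```python
import cmath, math

def gamma_to_z(mag, angle_deg, z0=50.0):
    """Convert a reflection coefficient (magnitude, angle in degrees) to an impedance."""
    gamma = cmath.rect(mag, math.radians(angle_deg))
    return z0 * (1 + gamma) / (1 - gamma)

z = gamma_to_z(0.73, -10.5)   # interpolated Γopt at 1296 MHz
y = 1 / z
print(f"Z = {z.real:.1f} {z.imag:+.1f}j ohms")          # about 240 - 137j ohms
print(f"Y = {y.real*1000:.2f} {y.imag*1000:+.2f}j mS")  # the equivalent admittance
```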
https://mafiadoc.com/phenomenology-from-the-lattice_5c93efdf097c47ab0c8b4662.html
|
## Phenomenology from the lattice
UW/PT 94-15
arXiv:hep-ph/9412243v1 6 Dec 1994
Phenomenology from the lattice
Stephen R. Sharpe Physics Department, University of Washington Seattle, WA 98195, USA
ABSTRACT This is the written version of four lectures given at the 1994 TASI. My aim is to explain the essentials of lattice calculations, give an update on (though not a review of) the present status of calculations of phenomenologically interesting quantities, and to provide an understanding of the various sources of uncertainty in the results. I illustrate the important issues using the examples of the kaon B-parameter (BK ) and various quantities related to B-meson physics.
Lectures at TASI, 1994, to appear in “CP Violation and the Limits of the Standard Model”, Ed. J. Donoghue, to be published by World Scientific.
December 1994
Contents

1 Why do we need lattice QCD?
2 Basics of Euclidean Lattice Field Theory
  2.1 Extracting information from Euclidean Correlators
3 Discretizing QCD
  3.1 Continuum QCD, a brief overview
  3.2 Discretizing Fermions
4 Simulations
  4.1 Quenched QCD (QQCD)
5 Numerical Results from quenched QCD
  5.1 Confinement
  5.2 Charmonium and Bottomonium Spectra, and extracting αS
  5.3 Light hadron spectrum
  5.4 Quenching errors in the light hadron spectrum
6 Anatomy of a calculation: BK
  6.1 Numerical method
  6.2 Matching continuum and lattice operators
  6.3 Staggered fermions and chiral behavior
  6.4 Chiral perturbation theory
  6.5 Errors due to quenching
  6.6 Symanzik's improvement program
  6.7 Status of results for BK
  6.8 Other matrix elements from QQCD
7 Heavy mesons on the lattice
8 A final flourish
9 Acknowledgements
10 References

1 Why do we need lattice QCD?
The theme of this year's TASI is CP violation. CP violation offers a possible window on physics beyond the Standard Model: Will the single phase in the CKM matrix be sufficient to explain all the CP violating amplitudes that we hope to measure once B-factories, φ-factories and the next generation of ε′/ε experiments are up and running? To answer this we need to relate the phases in the CKM matrix, which appear in the underlying quark amplitudes, to the measurable phases in hadronic decay amplitudes. For some B-meson decays, e.g. B → ψKs, we can avoid uncertainties due to hadronic structure by taking appropriate ratios. For most quantities, however, the relation between CKM and measurable phases is obscured by non-perturbative physics. To find the relation, we need to know certain hadronic matrix elements of fermion bilinears and quadrilinears. One of the most practical uses of lattice QCD is the calculation of such matrix elements.

Figure 1 shows a cartoon of one of the best studied examples, CP-violation in the K0–K̄0 mixing amplitude. Most of the attendees of TASI were not born when this amplitude, parameterized by ε, was first measured. We understand the central "box" in the diagram: the large mass of the top quark allows us to treat it as an effective four-fermion operator, with a coefficient proportional to Im[V_ts^2 V_td^2], multiplied by a calculable QCD anomalous dimension factor. What we do not know how to calculate analytically is the effect of low momentum gluons and quarks (|p| < 2 GeV, say). Such gluons confine the q q̄ pairs into kaons. They interact with a large coupling constant, αs(p), and thus the interactions cannot be described using perturbation theory. Non-perturbative calculations are needed. The only method presently available which starts from first principles and makes no further assumptions is to simulate lattice QCD (LQCD) numerically. In the example of ε, what the lattice must provide is the matrix element

$$\langle \bar K | \bar s \gamma_\mu (1+\gamma_5) d \; \bar s \gamma_\mu (1+\gamma_5) d | K \rangle \equiv \frac{16}{3}\, m_K^2 f_K^2 B_K . \qquad (1)$$
Here the decay constant is normalized so that fπ = 93 MeV.
Figure 1: The K0–K̄0 mixing amplitude (box diagram with W bosons and top quarks connecting the d and s quarks).
I will discuss the lattice calculation of BK , which is just a parameterization of the matrix element, at some length below. What I want to point out here is that one needs to know BK in order to use the experimental result for ǫ to extract a value for the Im(Vtd2 ). A similar matrix element is needed to extract information from the measured B − B mixing amplitude. Other matrix elements are needed to extract information from semileptonic B-meson decays. Thus to probe the electroweak theory one must be able to do non-perturbative calculations of matrix elements. These lectures concern only lattice calculations of these matrix elements. It is important to realize, however, that there are other methods available: the large Nc (number of colors) approach, QCD sum rules, chiral quark model, etc. These all have the status of (more or less sophisticated) models: they make approximations which are hard to improve upon and whose effect is not always easy to gauge. They often have the advantage, however, of being applicable to a wider range of quantities than can be studied on the lattice.∗ What has happened in the last few years is that lattice calculations have come of age, in the sense that for some quantities all errors are understood and are small. Future progress will, I think, see the lattice give reliable answers for increasingly many quantities. This will allow us to test and improve the approximate methods, which can then be applied with more confidence to quantities for which lattice calculations are difficult. Lattice calculations involve, at their core, numerical simulations, but in order to make the various extrapolations that are needed, considerable guidance from analytic results is required. Thus the lattice phenomenologist needs, in her or his tool kit, not only a PSC (personal super-computer), but also expertise in lattice perturbation theory (to match lattice onto continuum operators), chiral perturbation theory (to extrapolate to physical quark masses), non-relativistic QCD or its close variants (to study heavy quarks on the lattice), and the technology of “improved actions” (how to reduce errors due to finite lattice spacing). In the following I hope to give a flavor of how these tools are used, and to indicate the present status of their application.
∗ I will discuss the limitations of lattice calculations in the following.

2 Basics of Euclidean Lattice Field Theory

I will begin with a review of the basics. My discussion will be both sketchy and patchy. Those wishing more detail or rigor will find both in two recent texts on lattice field theory [1, 2]. Less detailed but still useful is Creutz's monograph [3]. Many results I use are standard, and if I do not give a reference, it can be found in one or more of these books. The steps that are taken to get to a numerical simulation are these:
1. Use the Euclidean space functional integral formulation of field theory
$$Z = \int [dA][dq][d\bar q]\; e^{-S_E[A,q,\bar q]} , \qquad (2)$$
where A, q and q̄ are respectively the gauge, quark and antiquark fields.
2. Discretize the theory on a hypercubic lattice, with spacing a. In present simulations this lies in the range 0.05 − 0.2 fm.
3. Work in finite volume: L points in the three spatial directions, T points in the time direction. The largest lattices in use today are roughly 32³ × 64 in size, which, for a = 0.1 fm, corresponds to (3.2 fm)³ × 6.4 fm.
4. Do the functional integral (now a multidimensional path integral) using numerical Monte Carlo methods.
5. Attempt to extrapolate L → ∞ and a → 0.

Actually this is something of an idealization. Additional steps are required in most simulations.

3.5 Make the "quenched" approximation, in which internal quark loops are left out, keeping only valence quarks.
5.5 Extrapolate from quark masses mq ∼ ms/2 down to the physical up and down quark masses (mu + md)/2 ∼ ms/25.

Many of these steps will be discussed in more detail below. It is important to realize that, with the exception of "quenching", the approximations that are made can be systematically improved. This improvement can occur not only because of increases in computer power (which allows one to study larger lattices, for example), but also due to analytical advances (e.g. "improving the action" so that one can work with larger lattice spacings without increasing the errors due to discretization).

An important limitation of LQCD is that the calculations are carried out in Euclidean space. Why is this? Because the integrand in a Minkowski path integral, exp(iS_M), is complex. In contrast, the Euclidean integrand is, in most cases, real and positive. Thus in the Minkowski integral there are large cancellations between different regions of configuration space, and these make it hard to simulate all but very small systems. This is an algorithmic, not a fundamental, issue. It is possible that it will be resolved in the future, though there are no signs of this at present. It turns out that even in Euclidean space, QCD at finite chemical potential (i.e. finite baryon number density) has a complex action, and thus is very difficult to simulate. Similarly, chiral theories have complex Euclidean actions. In fact, the "fermion doubling problem" (to be discussed below) makes it difficult to even formulate such theories on the lattice.
2.1 Extracting information from Euclidean Correlators
In this subsection I want to explain why LQCD is well suited to the calculation of matrix elements involving the vacuum or single particle states, while those involving multiple particles are much more difficult. What can be most successfully calculated are

• The energies of states, e.g. ⟨π(p̄)|H|π(p̄)⟩, which, in the continuum limit should become E = √(p̄² + m_π²). In particular, for p̄ = 0, one directly calculates particle masses.
• Decay constants, e.g. ⟨0|A_µ|π⟩ = i√2 p_µ f_π, where A_µ is the axial current.
• Single particle matrix elements ⟨N(p̄)|O|M(q̄)⟩, where N and M are single particle states, and O is a fermion bilinear, or quadrilinear, or perhaps a gluonic operator.

I will explain why these quantities are easily calculable by showing how they are calculated. Consider the Euclidean two-point function

$$C(\tau) = Z^{-1} \int [d\mu]\, e^{-S_E}\, O^*(\tau)\, O(0) \equiv \langle O^*(\tau) O(0) \rangle , \qquad (3),(4)$$

where dµ is the measure for the quark and gluon fields, Z = ∫[dµ] exp(−S_E) is the partition function, and O(τ) is a function of the fields residing at Euclidean time τ. To study a pion at zero three-momentum, for example, we might choose O = Σ_x̄ ū(τ,x̄)γ5 d(τ,x̄). Recall that the fermion fields in the functional integral are Grassman variables. The functional integral is constructed to give a time-ordered expectation value

$$C(\tau) = \langle 0| T[\hat O^\dagger(\tau) \hat O(0)] |0\rangle = \langle 0| e^{\hat H \tau} \hat O^\dagger e^{-\hat H \tau} \hat O |0\rangle = \langle 0| \hat O^\dagger e^{-\hat H \tau} \hat O |0\rangle , \qquad (5)-(7)$$

where Ĥ is the Hamiltonian operator, normalized so that its ground state has zero energy, Ĥ|0⟩ = 0, and Ô(τ) is the Heisenberg operator corresponding to O(τ) (in simple cases obtained by substituting for each field in O the corresponding operator). In the second line I assume τ > 0, so that τ-ordering, T, has no effect, and I have used the Euclidean version of time translation to shift the operator back to τ = 0. By convention O(τ = 0) = O. Before proceeding, it is worthwhile understanding the conditions under which expectation values in Euclidean functional integrals (e.g. C(τ)) can be written as time-ordered products, as in Eq. 5. Textbooks typically begin with the Minkowski space time-ordered product, which is then analytically continued to Euclidean space
(in the form of Eq. 7), and then written as a functional integral. What we want to do is run the argument the other way around. This question was studied long ago by Osterwalder and Schrader, who found the following [4]. If the action S_E is Euclidean invariant, and expectation values satisfy a property called "reflection positivity", plus some other more technical conditions, then there exists a Hilbert space with positive norm, a Hamiltonian Ĥ acting on this space which is hermitian and whose spectrum is bounded from below with the lowest state having zero energy, and field operators, such that Eqs. 5-7 hold.† There is then no obstruction to analytically continuing these correlators to complex time by an inverse Wick-rotation, τ → τ′ = e^{iφ}τ. For φ = π/2, what results are Minkowski time-ordered products

$$C(\tau = it) = \langle 0| T[\hat O^\dagger(t) \hat O(0)] |0\rangle , \qquad (8)$$
where t is Minkowski time. A consequence of the initial Euclidean invariance is that there are unitary operators acting on the Hilbert space which implement Poincaré transformations. Once we have these time-ordered products we can, in principle, use the LSZ reduction formalism to extract the S-matrix from the residues of their poles. In other words, Euclidean functional integrals with suitable properties provide a definition of the field theory, as long as we can carry out the analytic continuations. It is important to realize that not all Euclidean functional integrals satisfy the necessary conditions. In particular, QCD in the quenched approximation can be written as a particular functional integral, displayed below, but does not satisfy reflection positivity, and does not correspond to a well-behaved Minkowski field theory. With this background, let us return to the correlator C(τ). Inserting a complete set of energy eigenstates (single and multiparticle states), we find

$$C(\tau) = \sum_n |\langle n|\hat O|0\rangle|^2\, \frac{e^{-E_n \tau}}{2 E_n V} . \qquad (9)$$
Here I am assuming a finite volume V, and using relativistically normalized states,

$$\langle \vec p | \vec q \rangle = 2 E V\, \delta_{p_x q_x} \delta_{p_y q_y} \delta_{p_z q_z} \;\xrightarrow{V\to\infty}\; 2 E (2\pi)^3 \delta^3(\vec p - \vec q) , \qquad (10)$$
where E² = |p̄|² + m². We could now follow the LSZ procedure: Fourier transform to Euclidean energy, rotate to Minkowski space, and look for poles. We would find a series of poles corresponding to the stable states which couple to the operator Ô, and various cuts for multiparticle intermediate states. If the lightest state is a stable particle, however, then there is no need to rotate to Minkowski space. For example, if the operator creates a π+ at rest, as in the example above, then we can read off m_π simply by looking at the exponential fall-off at large Euclidean times

$$C(\tau) \;\xrightarrow{\tau\to\infty}\; |\langle \pi^+(\vec p = 0)|\hat O|0\rangle|^2\, \exp(-m_\pi \tau) . \qquad (11)$$

† For a nice discussion, including the definition of reflection positivity, see Ref. [2].
Figure 2: "Kaon" correlator with staggered fermions.

Furthermore, the coefficient of the exponential gives us a vacuum to pion creation amplitude, which, if Ô = ū γ0 γ5 d, is proportional to the decay constant f_π. For this procedure to work it is important that there be a gap between the lightest and next-to-lightest states. In the present example, the latter consists of three pions at rest, with E = 3m_π, so there is a gap for finite pion mass.

Figure 2 shows an example of a two point function computed numerically (from a calculation done in collaboration with Greg Kilcup and Rajan Gupta). It is for a particle with the quantum numbers of the kaon, and with a mass similar to that of the kaon, but in which m_s^lat = m_d^lat ≈ ½ m_s. The lattice spacing is about a = 1/12 fm, and the lattice size is 32³ × 48. The graph shows that by τ ∼ 20, the data are represented almost perfectly by a single exponential. The solid line shows a fit to such a form in the range τ = 20 − 37. Outside this range, the dashed line shows the extension of the fit function. Thus you can see the deviation of the data from a pure exponential at short times. The curvature for τ > 40 is due to the boundary conditions. The observant reader will notice an inconsistency between the data and the expected form, Eq. 9. C(τ) is a sum of exponentials with positive coefficients, and must approach its asymptotic form from above. This condition does not apply to the results of Fig. 2, however, because we use different operators at times 0 and t. This allows the coefficients to have either sign, and the approach to asymptotia need not be monotonic. One of the technical aspects of lattice calculations which I will not go into is the use of improved operators (sometimes called "sources"), designed so as to couple strongly to the lightest state but much more weakly to higher states. With such operators the correlators quickly become dominated by a single exponential. This allows one to use lattices which are shorter in the Euclidean time direction, and usually reduces the signal to noise ratio.
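As an illustration of this kind of analysis, here is a minimal Python sketch using synthetic correlator data (the masses and amplitudes are invented for the example). It uses the simple effective-mass estimator ln[C(τ)/C(τ+1)] discussed just below, which approaches the ground-state mass once the heavier state has died away:

```python
import numpy as np

def effective_mass(corr):
    """Effective mass a*m_eff(t) = ln[C(t)/C(t+1)] for a correlator decaying as exp(-m t)."""
    corr = np.asarray(corr, dtype=float)
    return np.log(corr[:-1] / corr[1:])

# Synthetic, noise-free correlator: ground state mass 0.25 plus an excited state at 0.60
t = np.arange(0, 30)
C = 1.0 * np.exp(-0.25 * t) + 0.5 * np.exp(-0.60 * t)

m_eff = effective_mass(C)
print(m_eff[:3])    # contaminated by the excited state at small t
print(m_eff[-3:])   # approaches the ground-state mass 0.25 at large t
```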
Figure 3: Example of an effective mass plot. The fitting range is shown by dotted lines, the fit value by the solid line. It is common for lattice practitioners to present their results in terms of the logarithmic derivative −d ln[C(τ )]/dτ , which, for τ → ∞, becomes the energy of the lightest state. The lattice version of this is ameff = ln[C(τ )/C(τ + 1)], where a is the lattice spacing. An example of such a plot is shown in Fig. 3, for a calculation at similar parameters to that in Fig. 2 (more precisely, a 30 × 322 × 40 lattice at β = 6.17, κ = 0.1532), taken from Ref. [5]. (The behavior for τ > 20 is due to a negative parity j = 23 particle propagating backwards in time.) Note that, for the operators used to obtain these results, the coefficient of the non-leading exponential turns out to be positive, as shown by the fact that meff approaches its asymptote from above. To first approximation the mass can just be read off from the graph. It is not possible, to use the plot to give an idea of how well the data is represented by a single exponential. This is because all the points are correlated, fluctuating up and down nearly in unison. One needs the full correlation matrix, and not just the “diagonal” errors which are displayed, in order to test the goodness of fit. It is true, however, that the fluctuations increase with τ , so one is always balancing the need to go to longer times, so as to be sure one has a pure exponential, with the desire to work in the region where the errors are smaller. This is a general feature of the analysis of lattice “data”. Single particle matrix elements can also be calculated directly in Euclidean space, e.g. C(τ2 , τ1 ) = hΠ(~p, τ2 )OKπ (~q, τ1 )K(0)i . (12) 8
Here, the operators K ∼ dγ5 s and Π ∼ uγ5 d create a K 0 and destroy a π − , respectively. (I am using the loose description in which the I associate the functions appearing as arguments in the functional integral with the corresponding operators which appear in the operator representation.) The operator in the “middle” (assuming τ2 > τ1 > 0), might be, for example, the current sγµ (1 + γ5 )u. Using the generalization of Eq. 5, one finds −Hτ1 c b p)e−H(τ2 −τ1 ) O d (~ C(τ2 , τ1 ) = h0|Π(~ K|0i Kπ q )e −En (τ2 −τ1 ) X e−En′ τ1 ′ c b p)|ni e h0|Π(~ = hn |K|0i . hn|Od q )|n′ i Kπ (~ 2En V 2En′ V n,n′
(13) (14)
If τ2 − τ1 and τ1 are both large enough, then we need keep only the lightest state in the two sums. Thus hn| = hπ 0 (~p)|, |n′ i = |K 0 (~k), where ~k = ~p − ~q. The correlator becomes 0 ~ −EK τ1 b p)|π 0 (~ c C(τ2 , τ1 ) ∝ h0|Π(~ p)ie−Eπ (τ2 −τ1 ) hπ 0 (~p)|Od hK 0 (~k)|K|0i . Kπ |K (k)ie
(15)
The energies and the creation and annihilation amplitudes can all be obtained from two point correlators, and divided out. Thus, from the three-point function one 0 can directly extract the transition matrix element hπ 0 (~p)|Od p)i. If O ∼ sγµ u, Kπ |K (~ then this is the vector form factor which governs the semileptonic decay spectrum in K 0 → π − e+ ν. By changing the quantum numbers of the operators one can calculate other form factors of interest, e.g. B → K ∗ γ. Using a four-fermion operator for OKπ , one can calculate K − K and B − B mixing amplitudes. Most of the phenomenologically interesting results from LQCD are for matrix elements of this type. This exhausts the types of quantity that are simple to calculate in Euclidean space. It is much more difficult, unfortunately, to calculate amplitudes involving two or more particles in either initial or final state. Some of the quantities we would like to calculate are • A(ππ → ππ), • A(K → ππ), and • A(B → ψKs ). The difficulty arise from the fact that, in Minkowski space, these amplitudes are complex. This follows from unitarity, as there are on-shell real intermediate states. (The only exception is the pion scattering amplitude at threshold, for which there is no phase space for intermediate states.) But on-shell intermediate states are only possible in Minkowski space; there are none if the external states have Euclidean momenta. Starting from Euclidean momenta, the imaginary parts are generated by the analytic continuation to Minkowski space. A simple example is the fuction ln(4m2π − (p1 + p2 )2 ), real in Euclidean space, but imaginary upon continuation to physical momenta satisfying (p1 + p2 )2 < −4m2π . 9
One way of seeing why there is no simple way of doing the calculation directly in Euclidean space is the following. Consider the K → ππ amplitude. This is obtained by creating the kaon, acting with the weak Hamiltonian to turn it into a state with the quantum numbers of two pions, and then destroying the two pions. The physical amplitude involves the two pions having a non-zero relative momentum. In a Euclidean correlator, however, one does not obtain this contribution by making a large Euclidean time separation between the weak Hamiltonian and the pion operators. Instead, what dominates is the transition from a kaon to two pions at rest, the latter being the lowest energy state. Even if one uses two pion operators having relative momentum, they will have, due to interactions, a coupling to the lowest energy state. Thus what dominates the Euclidean correlator is an off-shell transition amplitude hK|HW |ππ(~p = 0)i, where p~ is the relative momentum, and HW the weak Hamiltonian. This is not the quantity of interest. Another way of stating the problem is that one does not create the in and out states directly in Euclidean space. See Ref. [6] for a clear explanation of this point. Thus to get the correct amplitude, both in magnitude and in phase, one has to analytically continue. In most cases this will only be possible if one has a model of the momentum dependence of the amplitude. For example, for K → ππ decays, one use chiral perturbation theory, which, at leading order, relates the decay amplitudes to calculable single particle matrix elements. Using such a method, however, one gives up on the possibility of a first-principles calculation, the errors in which can be systematically reduced. There will be an irreducible error due to the uncertainties in the model used. For the example of K → ππ decays, the uncertainties are due to higher order terms in the chiral expansion. Another approach is possible in the case of scattering. L¨ uscher has shown, in a beautiful series of papers [7], how to extract scattering amplitudes indirectly, using the finite volume dependence of the energies of two particle states. Here too, however, one must make approximations to use the results in practice. An infinite tower of partial waves contribute to two particle energy shifts, and one must assume that only a finite number are important.
3 Discretizing QCD
To calculate amplitudes numerically, we must discretize QCD. This must be done in such a way that gauge invariance is maintained, since this invariance is required to guarantee the unitarity of the S-matrix. How to do this was worked out long ago by Wilson.
3.1 Continuum QCD, a brief overview
The continuum action is given by

$$S_E = -\sum_{q=u,d,s,c,b,...} \int_x \bar q\, (\slashed{D} + m_q)\, q + \frac{1}{2} \int_x \mathrm{Tr}(F_{\mu\nu} F_{\mu\nu}) , \qquad (16)$$
where the integrals run over Euclidean space. The covariant derivative is

$$D_\mu = \partial_\mu - i g A_\mu , \qquad (17)$$
in which the gauge fields are collected into a matrix A_µ = A_µ^a T^a, with T^a the generators of the SU(3) Lie Algebra

$$[T^a, T^b] = i f^{abc} T^c , \qquad \mathrm{tr}(T^a T^b) = \tfrac{1}{2}\, \delta_{ab} . \qquad (18)$$
The quark fields are color triplets, with an implicit color index. Finally, the gauge field strength is

$$F_{\mu\nu} = F^a_{\mu\nu} T^a = \tfrac{i}{g}[D_\mu, D_\nu] = \partial_\mu A_\nu - \partial_\nu A_\mu - i g [A_\mu, A_\nu] . \qquad (19)$$
A local SU(3) gauge transformation is described by a space-time dependent element V(x) ∈ SU(3) (V⁻¹ = V†, det(V) = 1):

$$q(x) \to V(x)\, q(x) , \qquad \bar q(x) \to \bar q(x)\, V^{-1}(x) , \qquad (20)$$
$$A_\mu(x) \to V(x)\, A_\mu(x)\, V^{-1}(x) + \tfrac{i}{g}\, V(x)\, \partial_\mu V^{-1}(x) , \qquad (21)$$
$$F_{\mu\nu}(x) \to V(x)\, F_{\mu\nu}(x)\, V^{-1}(x) , \qquad (22)$$
$$[D_\mu q](x) \to V(x)\, [D_\mu q](x) . \qquad (23)$$
Given the last two lines, it is simple to see that S_E is invariant. It is useful to introduce the path-ordered integrals

$$L(x,y) = P \exp\Big\{ i g \int_y^x dz_\mu\, A_\mu(z) \Big\} , \qquad (24)$$
which are to be thought of as going from y to x. The ordering is such that, for example, A_µ(x) is always to the left of A_µ(y). Gauge transformation properties depend only on the end points of L, and not on the path of integration:

$$L(x,y) \to V(x)\, L(x,y)\, V^{-1}(y) . \qquad (25)$$
Figure 4: Notation for lattice quantities. n is a vector of integers.

They thus transport the gauge rotation from one point to another, such that the quantity q̄(x) L(x,y) q(y) is gauge invariant. Another gauge invariant quantity is the trace of the path-ordered integral around any closed path:

$$\mathrm{Tr}[L(x,x)] \to \mathrm{Tr}[V(x)\, L(x,x)\, V^{-1}(x)] = \mathrm{Tr}[L(x,x)] . \qquad (26)$$
These objects are called Wilson loops. With these quantities in hand, we can now construct a gauge invariant lattice version of QCD. One cannot simply place the quark and gauge fields on the sites of the lattice and discretize the derivatives appearing in SE . Instead, the gauge fields, which transmit information about gauge transformations from one position to another, will live on the “links” or “bonds” connecting the sites. I will choose the lattice to be hypercubical, since this is the form most easily studied numerically, and nearly all work has been done with it. The notation for the sites and links on the lattice is shown in Fig. 4. Discretizing fermions presents its own set of problems, not directly related with gauge invariance. Thus I consider first the gauge part of the action. This will be constructed from elements of SU(3), Un,µ , associated with the link from site n to site n + µ, and corresponding to the continuum line integral along the link: Un,µ ∼ L(an, an + aˆ µ)
$$= P \exp\Big\{ i g \int_n^{n+a\hat\mu} dz_\mu\, A_\mu(z) \Big\} = 1 - i g a A_\mu(n + \tfrac{1}{2}\hat\mu) + O(a^2) . \qquad (27)$$
The “∼” in the first line means “corresponds to”. The vagueness here is deliberate— once we put the theory on the lattice there are no gauge fields Aµ : they are replaced by the U’s. The expansion in the last line is useful, however, for thinking about what the U’s mean, and also for taking the classical continuum limit. 12
The gauge transformation properties of the U’s are taken to be the same as those of the corresponding L’s † , Un,µ → Vn Un,µ Vn+µ
(28)
where the Vn ∈ SU(3) are gauge transformation matrices which live on sites. I have adopted the abbreviated notation in which n + µ means n + aˆ µ. In correspondence † with the continuum result L(x, y) = L(y, x)† , we associate Un,µ to the link from n + µ to n. Note that we can multiply the U’s along any closed loop and take the trace, and obtain an object which is invariant under gauge transformations, since Vn Vn† = 1. These are the lattice versions of Wilson loops. We can construct a lattice version of the pure gauge action using the smallest Wilson loop, that around an elementary square or “plaquette” † † † Pµν = Un,µ Un+µ,ν Un+ν,µ Un,ν .
(29)
The geometry is illustrated here. † Un+ν,µ > † ∧ Un,ν
∨U
n+µ,ν
< Un,µ It is reasonable that such a loop is related to Fµν , because the field strength is the curvature associated with the connection Aµ . In any case, using the correspondence given above for the U’s, and after some algebra, one finds that the classical continuum limit of the plaquette is † Pµν = 1 − iga2 Fµν −
g2 4 2 a Fµν 2
+ ia3 Gµν + ia4 Hµν + 0(a5 ),
(30)
where Hµν and Gµν and are hermitian‡ . Thus one can use the µν plaquette as a discretized version of the corresponding component of the field strength, Fµν . If we take the trace, so as to get a gauge invariant quantity, we find Re TrPµν = Nc −
g2 4 2 a Tr(Fµν ) 2
+ 0(a6 ) ,
(31)
where Nc = 3 is the number of colors. We then have Z
d4 x
X µν
1 TrFµν Fµν 2
X 2
2 (Nc g2
− ReTr2) .
(32)
The factor of 2 arises because of the mismatch between the number of plaquettes P per site, 6, and the number of terms in the sum µν , 12. ‡
Exercise: derive this result. Everyone should do it once!
13
Now, it is standard (though unfortunate) to replace the coupling constant by c β = 2N , so that g2 X ReTr2 + irrelevant constant . (33) Sg = −β Nc 2
This is called the “Wilson (gauge) action”. It is important to realize that there is nothing special about using the smallest loop to define the action. Any loop, e.g. a 1 × 2 rectangle, contains a term in its expansion proportional to a linear combination of components of (Fµν )2 . By taking an appropriate combination of loops we can obtain the continuum action as a → 0. The advantage of a small loop is that corrections proportional to powers of the lattice spacing are typically smaller than with a larger loop. At this juncture it is appropriate to make a brief comment on “improving” the action, i.e. reducing the errors due to discretization. In Eq. 31 there are corrections of O(a2 ) compared to the F 2 term that we want. Symanzik has shown how to systematically reduce these corrections from O(a2) to O(g 2a2 ) (by one loop calculations), and then to O(g 4a2 ) (by two loop calculations, etc. [8]. More ambitious is the program (based on using the renormalization group) to construct an almost perfect action, i.e. one in which all terms of O(a2n ) are almost absent [9]. The subject has lain dormant for almost a decade, but is now receiving considerable attention, in part because progress has been made at reducing the errors in the fermion action (which are usually of O(a), and thus larger than those in the gauge action). To date, however, numerical simulations leading to phenomenological results have only been carried out using the Wilson gauge action. To complete the definition of the theory I need to specify the measure. Each link variable is integrated with the Haar measure over the group manifold. This measure satisfies (V and W are arbitrary group elements) Z
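As an illustration of the ingredients of the Wilson gauge action, here is a minimal Python/numpy sketch that builds one plaquette from four SU(3) link matrices and evaluates its contribution to S_g (up to the irrelevant constant). The links here are random SU(3) matrices rather than members of a real gauge ensemble, and β = 6 is just an example value:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su3():
    """A random SU(3) matrix: unitarize a random complex matrix, then fix det = 1."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))   # make the decomposition unique
    return q / np.linalg.det(q) ** (1 / 3)             # remove the overall phase

# One plaquette built from the four links around an elementary square:
# P = U_mu(n) U_nu(n+mu) U_mu(n+nu)^dagger U_nu(n)^dagger
U1, U2, U3, U4 = (random_su3() for _ in range(4))
P = U1 @ U2 @ U3.conj().T @ U4.conj().T

Nc = 3
beta = 6.0
# Contribution of this plaquette to the Wilson action, up to the irrelevant constant.
action_term = beta * (1 - P.trace().real / Nc)
print(action_term)
```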
dUF (U) =
Z
dUF (UV ) =
Z
dUF (W U) .
(34)
Given this, it is simple to see that the functional integral Zgauge =
Z Y
dUn,µ exp β
X 2
1 Re Nc
!
Tr2
(35)
is gauge invariant. What has been accomplished here is a non-perturbative, gauge invariant regularization of pure gauge theories. What has been sacrificed is full Euclidean invariance: rotations and translations. The hope is that, as one approaches the continuum limit, these symmetries are restored. Pure SU(3) gauge theory is not QCD. But it is still of interest as a non-trivial field theory, sharing some properties in common with QCD. In particular, its spectrum should consist of massive glueballs, in which the gluons are confined by their self-interactions. Considerable progress has been made in numerically simulating this theory. I will give more details below of the methods used. For now, let me note that one calculates the glueball spectrum by looking at two point functions 14
Figure 5: Spectrum of pure gauge SU(3) theory. in which the operators are Wilson loops of various shapes and sizes. By forming appropriate linear combinations one can project onto the various representations of the lattice rotation group, which can then be associated with certain spin parities in the continuum limit. The present status of the spectrum is shown in Fig. 5 (Ref. [10]). The masses are given in units of the square-root of the string tension, a quantity I discuss below. For the moment, just consider the units as arbitrary, so that all we can extract is the ratio of the masses of different glueballs. Clearly, we have reasonable control over the spectrum of this non-trivial theory, with the scalar glueball being the lightest, followed by the tensor and pseudoscalar. Furthermore, there is evidence that Euclidean symmetry is being restored. For example, the five spin components of the tensor glueball lie in two different representations of the lattice cubic rotation group, yet all have the same mass within errors. I have not explained yet how one takes the continuum limit. When one does a simulation, one picks the value of β = 6/g 2 , not the lattice spacing. The output of the calculation are a set of masses in lattice units: amglue . One obtains a value for a by comparing these to a physically measured scale. In this case, there is no such scale, since the real world is not pure gauge theory. If it were, we would use the physical value of mglue , for a particular glueball, to fix a. All other masses are then predictions. If we choose a different value for β (i.e. for g 2 ), then we would obtain
15
a different value for a. To approach the continuum limit we adjust β so that a → 0. As we do this, the lattice mass vanishes, amglue → 0. Thus correlators fall off more and more slowly, C(t) ∝ exp(−amglue t). It is conventional in statistical mechanics to describe this in terms of the divergence of the correlation length in lattice units: C(t) ∝ exp(−t/ξ), ξ = 1/(amglue ) → ∞. A place at which ξ diverges is called a critical point, and it is at such points that one can take a continuum limit. In the case of pure gauge theories, or QCD with sufficiently few flavors that it is asymptotically free, one knows that the critical point is at g 2 = 0, β = ∞. This is because the functional dependence of g 2 on a is calculable for small g 2 using perturbation theory, yielding the familiar result that g 2 ∝ 1/ ln(1/a). Inverting this, we find that a = (1/Λ) exp(−1/2β0 g 2 ) (β0 the leading order coefficient in the beta-function), so that a → 0 for g 2 → 0. What we do not know a priori is the proportionality constant 1/Λ. This we must determine numerically. Given Λ, we can then predict the variation of a with g 2 , for small enough g 2 . This works well for β ≥ 6, as long as one chooses an appropriate scheme for the coupling constant [11].
3.2 Discretizing Fermions
Discretizing the fermionic action,

$$S_F = -\bar\psi\, (\slashed{D}_E + m)\, \psi , \qquad (36)$$
qn → Vn qn ,
(37)
To obtain a discrete derivative we must separate the q and q fields, and we make this gauge invariant using the link variables. For example, choosing a symmetric derivative on the lattice, qDµ q(x) −→
1 2
h
i
† qn−µ . q n Un,µ qn+µ − Un−µ,µ
(38)
Here and in the following I have set the lattice spacing a to unity. It is left as a simple exercise to show that when one expands out the U’s in terms of A’s one indeed finds the covariant derivative. With this choice of Dµ the action is − SN (q, q, U) =
X nµ
1 q γ 2 n µ
h
i
† Un,µ qn+µ − Un−µ,µ qn−µ +
X
mq n qn .
(39)
n
For reasons about to be described, this is called the “naive” fermion action. The full partition function of QCD is then ZQCD =
Z Y
dUn,µ
Y
[dqdq] exp −Sg (U) −
q=u,d,s,..
16
X
q=u,d,s,...
SN (q, q, U) .
(40)
The quark measure is gauge invariant because a special unitary rotation has unit jacobian. Thus ZQCD is gauge invariant. There are many choices available when discretizing derivatives aside from that of Eq. 38: the forward derivative (∂µ q → qn+µ − qn ), the backwards derivative (∂µ q → qn − qn−µ ), etc. The symmetric derivative is, however, the most local choice which preserves the anti-hermitian nature of the continuum operator D /. The harmless looking action of Eq. 39 gives rise to an infamous problem: in d dimensions, it represents 2d degenerate Dirac fermions, rather than one. This sixteenfold replication in 4 dimensions is referred to as the “doubling (!) problem”. Doubling has nothing to do with gauge fields, and so to discuss it I consider a free lattice fermion. As discussed above, the spectrum of the states can be determined by looking at the fall-off of the Euclidean two point function. More useful for analytic studies is to transform to momentum space, and look for poles. These occur at complex Euclidean momenta: k 2 = −m2 , where m is the mass of the state. To evaluate the two point function, i.e. the propagator, recall the integration formulae for Euclidean fermions (which are Grassman variables): Z
Z =
[dq][dq] eq(D/+m)q = det(D / + m) ,
G(x, y) = −Z −1
Z
(41)
1 [dq][dq]eq(D/ +m)q q(x)q(y) = [ D/+m ]xy .
(42)
To diagonalize D / we go to momentum space. Due to the periodicity of the lattice one is restricted to the first Brillouin zone qn =
Z
π
−π
d4 k ikn e q(k) (2π)4
,
qn =
Z
π
−π
d4 k −ikn e q(k) (2π)4
,
(43)
In terms of these fields, and using sµ = sin kµ , − SN =
Z
k
q(k)(i
X
sµ γµ + m)q(k) .
(44)
µ
Thus, the momentum space propagator is given by G(k) =
1 −is / +m = 2 . is / +m s + m2
(45)
It is useful to reinstate factors of a, so that m = amphys and k = akphys . In the continuum limit, for fixed physical quark mass, m → 0. There is thus a pole near k = 0, and we can expand sµ = akµ,phys (1 + O(a2 )), yielding aC(k) =
−iγµ kµ,phys + m . 2 kphys + m2phys
(46)
2 This has a pole at kphys = −m2phys , representing the fermion that we expected to find.
17
Now we come to doubling. The lattice momentum function sµ vanishes for kµ = π as well as kµ = 0. In the neighborhood of the momentum (π, 0, 0, 0), if we define new variables by k1′ = π − k1 , ki′ = ki , i = 2 − 4, then ′
G(k ) ≈
−i
P
′ ′ µ kµ γ µ + k ′2 + m2
m
.
(47)
To bring the propagator into the standard continuum form, I have introduced new gamma-matrices, γ1′ = −γ1 , γi′ = γi , i = 2 − 4, unitarily equivalent to the standard set. Equation 47 shows that there is a second pole, at k ′2 = −m2 , which also represents a continuum fermion. This is our first “doubler”. The saga continues in an obvious way. s2 vanishes if each of the four components of kµ equals 0 or π. There is a pole near each of these 16 possible positions. Our single lattice fermion turns out to represent 16 degenerate states. It is not, in fact, the replication of fermions which is the hard part of the problem, but rather the way in which the chiralities of the states work out. If m = 0, then we can introduce a chiral projection into the action γµ → γµ (1 + γ5 )/2, which in the continuum restricts one to left-handed (LH) fields. On the lattice, the pole near k = 0 is LH. The second pole I uncovered, however, actually represents a RH field. This is plausible, because γ5′ = −γ5 and so (1 + γ5 ) = (1 − γ5′ ). It can be demonstrated by considering the coupling to external currents. For each of the components of k that is near to π the chirality flips, so that one ends up with eight LH and eight RH fermions. It is important to note that, when one introduces gauge fields, all the fermions are necessarily coupled in the same way. Thus one always obtains a “vector” representation of fermions, i.e. one in which LH and RH fields lie in the same representation of the gauge group. How general is this result? Karsten and Smit showed that, in infinite volume, any local, antihermitian discretization which maintains translation invariance will give rise to LH and RH fermions in pairs [12]. There need be only one such pair: Wilczek has given an example, using an action which breaks Euclidean rotation invariance [13]. What are the consequences of Karsten and Smit’s result? • Lattice regularization automatically takes care of the fact that theories with anomalous representations of fermions (e.g. SU(Nc ) with a single left-handed fermion) cannot be defined. • It does too good a job, however. One cannot discretize a chiral theory, i.e. one having an anomaly free, yet chiral, fermion representation. In particular, one cannot discretize the electroweak theory. • Indeed, one cannot even discretize QCD with (nf ) massless quarks, in the following sense. Such a theory should have an SU(nf )L × SU(nf )R chiral symmetry, under which the LH and RH quarks rotate with independent phases. But the lattice fermions are all begotten of the same lattice field, and so cannot be rotated independently. 18
In summary, then, the lattice theory lacks chiral symmetries. Can this be fixed up? Much effort has been devoted to this question. There are various escapes, each with their own problems. Most notable are these. • One can explicitly break chiral symmetry right from the start, and aim to recover it only in the continuum limit. This is, after all, what one does with the rotations and translations. For fermions in vector representations, this is the approach originally taken by Wilson, which I discuss in more detail below. For chiral theories, this is the approach advocated by the Rome group, and involves breaking the gauge symmetry at finite lattice spacing [14]. • Keep the extra doublers, and divide their effects out by hand. It turns out to be simple to reduce from 16 to 4 Dirac fermions, the result being “staggered” fermions [15]. I explain this when I discuss BK below. • Avoid the Karsten-Smit result by using a random lattice [16]. This is very difficult to analyze, because even free fermions must be studied numerically. Little progress has been made—see Ref. [17] for a recent study. • Use “domain-wall fermions” or their descendents. I have no time even to introduce the ideas; see Ref. [18] for a review and references to the literature. It is controversial at present whether the scheme can be implemented in a practical way. Most present simulations use Wilson fermions, so I will briefly explain what these are. A mechanistic way of understanding doubling is to note that the “forwardbackward” derivative of naive fermions is small both for a smooth function and for one that is smooth except that it alternates in sign. On the other hand, a discrete second derivative will be small for the smooth function, yet large for the alternating function. Thus Wilson suggested adding to the action a second derivative term SW =
$$\sum_{n\mu} \frac{r}{2}\, \bar q_n \Big( U_{n,\mu}\, q_{n+\mu} - 2 q_n + U^\dagger_{n-\mu,\mu}\, q_{n-\mu} \Big) , \qquad (48)$$
where r is a parameter. The resulting free propagator is (I leave this as an exercise) G(k) =
−is / + (m − 2r kˆ2 ) , s2 + (m + r kˆ2 )2
(49)
2
where kˆ = 2 sin( k2µ ). If kµ = π, then sµ = 0 but kˆµ = 2, so that the additional pole picks up an effective mass meff = m + 2r. Thus if one keeps r finite in the continuum limit, the effective mass stays finite, even if m = mphys a → 0. Thus the effective physical mass becomes infinite. The same applies to all the other doublers. Thus we expect them to decouple from the theory in the continuum limit, leaving a single Dirac fermion. This slightly sloppy analysis can be confirmed by studying the position of the poles in G(k). 19
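As a small numeric check of this statement, here is a minimal Python sketch. It assumes the standard Wilson mass term M(k) = m + (r/2)·Σ k̂µ², which reproduces the m + 2r quoted above for a single momentum component at π:

```python
import numpy as np

def wilson_mass(k, m, r):
    """Momentum-dependent mass term of the free Wilson action, M(k) = m + (r/2) * sum khat^2."""
    khat2 = np.sum((2 * np.sin(np.asarray(k) / 2)) ** 2)
    return m + 0.5 * r * khat2

m, r = 0.01, 1.0
print(wilson_mass([0, 0, 0, 0], m, r))       # physical mode: stays light (0.01)
print(wilson_mass([np.pi, 0, 0, 0], m, r))   # first doubler: m + 2r = 2.01
print(wilson_mass([np.pi] * 4, m, r))        # corner doubler: m + 8r = 8.01
```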
In the presence of interactions, the doublers still decouple, but they do cause a renormalization of the parameters of the original Lagrangian. This is the standard result of the Applequist-Carrazone decoupling theorem. In particular, the quark mass m gets additively renormalized. This is not a surprise, since SW explicitly breaks chiral symmetry, so there is nothing special about the value m = 0. Chiral symmetry should, however, be restored in the continuum limit, because the Wilson term is a discretization of aq∂ 2 q, which vanishes when a → 0. In practice one uses the value r = 1 in simulations. One reason for this is that the lattice theory then satisfies reflection positivity, guaranteeing that one can construct an hermitian positive Hamiltonian [19].
4 Simulations
To illustrate what is involved in simulations, consider the pion two-point function huγ5 d(n)dγ5 u(p)i = Z −1 =
R
Z
[dU]
− [dU]
Y
[dq][dq]e−Sg −SN −SW uγ5 d(n)dγ5 u(p)
q
Q
q
h
(50)
det(D / + mq )e−Sg Tr (D / + md )−1 / + mu )−1 np γ5 (D pn γ5 R
[dU]
Q
q
det(D / + mq )e−Sg
i
.
Here Sg is the lattice gauge action, n and p are lattice sites, and D / +m is the complete lattice Dirac operator appearing in the sum of naive and Wilson terms SN + SW . Simulations are done using the form on the second line, i.e. after integrating out the Grassman fields. One is left with the functional integral over gauge fields, with measure Y dµ = [dU] det(D / + mq )e−Sg . (51) q
To reduce the problem to a finite number of degrees of freedom one uses a lattice of finite extent in Euclidean time and space. The finiteness in time actually corresponds to putting the system in the canonical ensemble at temperature T = 1/(Nt a). For the studies I will consider, Nt a is large enough that T ≪ ΛQCD , and the system is effectively at zero temperature. One must choose boundary conditions on the fields. To obtain the canonical ensemble these must be periodic (antiperiodic) in the time direction for gauge fields (fermions). In the spatial directions any choice can be made, though typically periodic boundary conditions are used for all fields. One then generates a set of “gauge configurations” (i.e. a U for every link) according to the measure dµ, and calculates propagators, (D / + m)−1 , on each of the gauge fields. These are joined together into traces such as that in Eq. 50. Finally, the resulting correlator is averaged over the “ensemble” of configurations, giving a statistical estimate of the desired quantity. What is the magnitude of this task? I am involved in calculations on V = 323 ×64 lattices. Since each U integral is over the eight dimensional group manifold of SU(3), we are attempting to evaluate approximately an 8 × 4(links/site × V ≈ 20
7 × 107 dimensional functional integral. The Dirac operators are complex matrices of dimension [3(color) × 4(spin) × V ]2 = [2.5 × 107 ]2 .
They are, however, sparse involving connections only between nearest neighbors. Furthermore, if one fixes the site n in Eq. 50, one need only calculate a single column of the inverse (and a single row, but this can be obtained without extra work). In this way one obtains the two point function from n to all other points p. When lattice practitioners talk about calculating propagators, they almost always are referring to such a truncated calculation. The equation to be solved for the propagator G is X (D / + m)np Gp = Sn , (52) p
where Sn is the “source”. This might be δn,n0 , or an extended function. Numerical simulations, then, break up into two parts: generating configurations, and calculating propagators. The latter is done using a variety of standard algorithms[2], conjugate gradient and minimal residue typically being preferred for staggered and Wilson fermions respectively. Convergence can be improved by preconditioning. Nevertheless, present algorithms for calculating propagators take a number of iterations which is, for small m, roughly proportional to 1/m. This slowing down is a big problem in practice, as it forces us to work with quark masses much heavier than the physical up and down quark masses. Important steps towards alleviating this problem have been taken using “multi-grid” algorithms [20]. There are various methods for generating configurations—Metropolis, Langevin, Molecular dynamics, . . . —none of which I have time to discuss in detail. All use importance sampling, i.e. they move through the space of possible configurations in such a way that the resulting distribution is weighted directly according to the measure dµ. If one defines an effective action by exp(−Seff (U)) =
Y
det(D / + mq )e−Sg ,
(53)
q
then, as in any problem with a large number of degrees of freedom, Seff is very highly peaked in U space. It would be utterly hopeless to generate configurations uniformly in U space, and then include exp(−Seff ) in the integrand. All the methods work by moving towards the minimum of Seff but including some noise to keep the appropriate distribution around the minimum. Most of the methods will only work if exp(−Seff ) is positive, in which case it can be interpreted as a probability distribution on U space. The gauge action is automatically real. In the continuum limit, for a vector representation of fermions (which, as we have seen, is what we are forced to use on the lattice, and, in any case, is appropriate for QCD), the determinant of the Dirac operator is real and positive definite (for m > 0). This is because the eigenvalues come in complex conjugate pairs, with eigenvectors related by multiplication by γ5 . On the lattice, with Wilson fermions, the determinant can be shown to be real, but is not necessarily positive. 21
Thus to obtain a positive measure one must simulate degenerate pairs of quarks, in which case the measure contains the square of the single quark determinant. Almost all algorithms are based on making a change δU in the gauge fields, and then calculating the change in S_eff,

    δS_eff = −∑_q δ Tr ln(D̸ + m_q) + δS_g    (54)
           = −∑_q Tr[δU (D̸ + m_q)^{-1}] − β ∑_□ δ(Re (1/N_c) Tr U_□) .    (55)
To calculate the variation of the gauge action involves only a local calculation, i.e. one involving links close to that being changed. In contrast, the variation of the fermionic part of the effective action involves a propagator and is non-local. Thus it is doable (at least one doesn't have to calculate the determinant itself!), but it is slow, as it takes a number of operations that grows proportional to V. The present algorithm of choice for simulating QCD is the "hybrid Monte Carlo" algorithm [21]. One can estimate that the time it takes to generate an independent configuration is roughly CPU ∝ V^{5/4} m^{−11/4}. As the quark mass decreases, so does the pion mass, m_π^2 ∝ m. Naively, it is necessary for the lattice length to exceed the pion Compton wavelength by a factor of a few, so L ∝ 1/m_π. Using this, one finds CPU ∝ m^{−5.25}. This is an asymptotic estimate, at best only roughly applicable for today's lattice sizes and quark masses. But it shows why it becomes rapidly more difficult to simulate lighter quarks. This is why you rarely hear of simulations with quarks lighter than ms/3, and why progress is slow, even with an exponential growth in computer power. There are various ways in which one can overcome this scaling law. First, if one uses an improved action, one can get away with a larger lattice spacing (for a given size of systematic error), and thus a smaller number of lattice points for a fixed physical volume. Second, a lot of the difficulty comes from the slowing down of propagator inversions as m → 0, and this should be avoidable. This is the aim of the multigrid inversion algorithms. Third, once the quark masses become small enough that the pions are light, they interact weakly, and it ought to be possible to use chiral perturbation theory to account for errors introduced, say, by not increasing the volume as L ∝ 1/m_π. With the presently available 10 Gigaflop machines, however, and with the notable exception of studies of QCD at finite temperature, systematic calculations of phenomenologically interesting quantities are not possible in QCD.
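To make the slowing down of the propagator inversions concrete, here is a toy sketch (my own illustration, not lattice code): a conjugate-gradient solve of a sparse, Dirac-like linear system of the form of Eq. 52, in which the iteration count grows as the mass parameter is reduced. The operator and all numbers are invented for illustration only.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg

def cg_iterations(n=2000, m=0.1):
    # A: 1-d nearest-neighbour "kinetic" operator, standing in loosely for a
    # positive operator like D^dagger D; adding m makes the system positive definite.
    A = diags([2.0*np.ones(n), -np.ones(n-1), -np.ones(n-1)], [0, 1, -1], format="csr")
    M = A + m*identity(n, format="csr")
    b = np.zeros(n); b[0] = 1.0          # point source, cf. Eq. 52
    iters = 0
    def count(_xk):
        nonlocal iters
        iters += 1
    x, info = cg(M, b, callback=count)   # conjugate gradient to default tolerance
    assert info == 0
    return iters

for m in (0.1, 0.01, 0.001):
    print("m =", m, " CG iterations:", cg_iterations(m=m))
```

The growth of the iteration count as m → 0 is the toy analogue of the slowing down described above; real lattice solvers face the same condition-number problem on vastly larger matrices.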
4.1 Quenched QCD (QQCD)
To make progress one must make an approximation, and what is used is the so-called "quenched" approximation

    dµ_QCD = [dU] ∏_q det(D̸ + m_q) e^{−S_g}  −→  dµ_QQCD = [dU] exp(−S_g) .    (56)
In other words, we set the fermion determinant to a constant. This amounts to throwing away internal quark loops, while keeping the valence quarks, which now propagate through a modified distribution of gauge configurations. For this reason, it is sometimes called the "valence" approximation. Using QQCD reduces CPU requirements by a factor of ∼ 10^2−10^3 with present values of a and m. Furthermore, the time to generate new configurations only grows as CPU ∝ V^{5/4} ∝ m^{−2.5}, so that it is easier to go to smaller quark masses. Throwing away quark loops is a drastic approximation, the effect of which I discuss in more detail below. Nevertheless, it is an approximation certainly worth studying, because it retains the essential non-perturbative features of QCD, confinement and chiral symmetry breaking. A quark propagating through the ensemble of quenched gauge fields picks up an effective mass, just as it does in QCD, and binds to form hadrons. The effective mass and the details of the binding will differ from those in QCD, but, given the success of the quark model in describing the hadron spectrum, one would expect the quenched spectrum to be qualitatively similar. How well do we expect QQCD to reproduce the spectrum of QCD? Since we are stuck with QQCD for a while to come, it is important to try and estimate such quenching errors. Let me begin with a rough estimate. One of the unphysical effects of quenching is that resonances in QCD, e.g. the ρ meson, become stable states in QQCD. This is because internal quark loops are necessary to obtain the on-shell intermediate states (e.g. ππ in the case of the ρ) which give rise to the imaginary parts of the propagators, and thus to the width of resonances. Discarding these intermediate states, however, affects not only the imaginary part, but also the real part of the propagator. In other words, not only is the width of the state changed (to zero) but also the mass is shifted. The most naive estimate is that δm ∼ δΓ = Γ. This mass shift will not be uniform in sign or magnitude, since it depends on the available thresholds, possible cancellations, etc. It may, in fact, be small for the ρ [22]. But it will, I expect, distort the spectrum at the 10% level (100 MeV/1 GeV) in general. Can one make a better estimate of the effects of quenching? In particular cases, I think one can. One example is quarkonia (cc, bb), which I touch on below. Here I describe a somewhat more systematic method applicable to the properties of the pseudo-Goldstone bosons (PGBs), the π's, K's and η. The starting point is the formulation of QQCD introduced by Morel [23]

    Z_QQCD = ∫ [dU][dq][dq̄][dq̃][dq̃†] e^{−S_g} e^{−q̄(D̸+m)q} e^{−q̃†(D̸+m)q̃} .    (57)
Here q̃ is a ghost field: a commuting spin-1/2 variable. I have written the partition function for only one flavor—in general there is a ghost degenerate with each quark. Equation 57 works because the ghost integration yields an inverse determinant which cancels that from the quark integration. In other words, internal ghost loops cancel internal quark loops exactly. This formulation shows why QQCD is a sick theory—ghosts have the wrong connection between spin and statistics, and thus, in Minkowski space, there will be violations of causality. One cannot avoid the problems by considering correlation functions of external states composed of quarks alone, because particles containing ghosts will appear in intermediate states. Note, however, that, even though the Minkowski theory may be sick, the Euclidean version is well defined. This only requires that the eigenvalues of D̸ + m have positive real parts, so that the bosonic functional integral converges. This is an example of a Euclidean theory which is not reflection positive.

Figure 6: Contributions to the η′ propagator. Lines are quark propagators.

Morel's formulation is the starting point for the development of "quenched chiral perturbation theory" (QChPT) [24, 25]. Because of the ghost fields, QQCD has a larger chiral symmetry than QCD, namely

    SU(3|3)_L × SU(3|3)_R .    (58)
Here SU(3|3) is the graded group of special unitary transformations of three commuting and three anticommuting objects. Numerical evidence suggests that this group is spontaneously broken down to its vector subgroup, yielding not only the usual pseudo-Goldstone bosons (PGBs), but also partners in which one or both of the quark and antiquark fields are replaced by ghosts. One can develop an effective Lagrangian for this theory, analogous to the usual chiral Lagrangian. Using this one can calculate the effect of loops of PGBs, which give rise to non-analytic terms generically referred to as "chiral logs". These logs are, in most cases, different in QCD and QQCD, and one can use the difference to estimate the effect of quenching. In particular, it should be possible to give a rough ordering of quantities according to the size of quenching errors. I give some examples below. There is an important qualitative difference between QChPT and ChPT (chiral perturbation theory for QCD). In QCD, the η′ is not a PGB, since the would-be U(1)A symmetry is anomalous. The η′ gets additional mass from diagrams in which the quark-antiquark pair annihilate into gluonic intermediate states. This "hairpin vertex" gets iterated to give rise to the mass term, as shown schematically in Fig. 6. But in QQCD, all terms except the first two are absent, and so the η′ remains light, effectively a PGB. The would-be mass term becomes a two-point vertex. The η′ must be included in the quenched chiral Lagrangian, along with this additional vertex. The quenched chiral Lagrangian of Ref. [24] provides an explicit realization of these rather handwaving diagrammatic arguments. As explained below, the existence of a light η′ leads to a number of unphysical effects. In particular, there is an η′ cloud surrounding all hadrons containing at least one light quark.
5 Numerical Results from quenched QCD
In this section I present the evidence which shows that QQCD does give a reasonable approximation to the real world. This is essential if we are to have any hope of using QQCD to calculate phenomenologically interesting matrix elements. I will also discuss in more detail the size of quenching errors.
5.1 Confinement
There is a simple criterion for confinement in QQCD: evaluate the potential energy V(R) of an infinitely heavy quark (Q) and antiquark (Q̄) as a function of the distance R between them. The standard picture of confinement has a tube of color electric flux joining the quark and antiquark, a tube which simply elongates as the pair are separated. This suggests that V(R) ∝ R for distances significantly larger than the width of the tube. A better way of stating this is that the force, F = −dV/dR, is expected to asymptote to a constant at large R. The magnitude of this constant is called the string tension, κ. Using the force avoids the problem that V(R) contains the self energies of the quark and antiquark, which are R independent, but divergent in the continuum limit. This criterion for confinement is not useful in QCD, because the flux tube can be broken by the creation of a qq̄ pair, leaving a Qq̄ and Q̄q meson having an energy independent of R. This process does not occur in QQCD because it requires internal quark loops. To evaluate V(R) one proceeds as follows. Create the QQ̄ pair at time τ = 0 using the gauge invariant operator

    Q̄(R⃗) L(R⃗, 0) Q(0) ,    (59)

The ordered integral in L (Eq. 27) can follow any path, or one can average over a number of paths so as to maximize the overlap of the operator with the QQ̄ state. Next, destroy the pair at a later time τ = T using the conjugate operator

    Q̄(0) L(0, R⃗) Q(R⃗) = Q̄(0) L(R⃗, 0)† Q(R⃗) .    (60)
In this way one has constructed a correlator which, for large T, should fall as exp(−V(R)T), where V(R) is the energy of the lightest state of the Q + Q̄ + glue system. To evaluate the correlator we need the heavy quark propagator. In Minkowski space, an infinitely heavy quark just maintains its velocity, so if it starts at rest, as here, then it remains so. That is why the final operator in Eq. 60 is chosen with the Q field at the same site as the original Q. The current of a static quark does have a time component, which couples to A_0, resulting in a phase for the propagator P exp(−ig ∫dt A_0(t)). There is also the kinematical phase exp(−iM_Q t), but this does not depend on R and thus does not contribute to the force, and can be dropped. Rotating to Euclidean space, the line integral remains a phase: P exp(−ig ∫dτ A_4(τ)).
Figure 7: The heavy quark potential, in lattice units. The short distance points have been corrected for lattice artifacts using the lattice Coulomb propagator.

For the antiquark, the line integral is in the opposite direction. Combining these propagators with the line integrals from Eqs. 59 and 60, we obtain a Wilson loop, W. It is roughly rectangular, having straight segments in the time direction, while the spatial paths (determined by the choice of L(0, R⃗)) can wiggle, though they must remain in a single time-slice. Putting this all together, we expect

    ⟨W⟩  −→(T→∞)  C e^{−V(R)T}  −→(R→∞)  C e^{−κRT} ,    (61)
where C is a constant. The last result is the well-known "area-law": the expectation value of a rectangular R × T loop falls exponentially with the area of the loop if there is confinement. It was established long ago, using the strong coupling expansion, that the area law holds for large g 2. This leaves the question of whether the result extends to the continuum limit, g 2 → 0, i.e. whether there is a phase transition at finite g 2 which breaks the analytic connection between weak and strong coupling. Numerical evidence to date suggests that this does not happen. For example, I show in Fig. 7 the results for the potential obtained by Bali and Schilling [26] at β = 6.4, which corresponds to a ≈ 0.06 fm. The horizontal scale is in units of a, and thus extends well beyond 1 fm. The linear behavior of V is clear, starting from ∼ 0.5 fm. A pleasing feature of this result is that one can see, in the same calculation, both the long distance, non-perturbative physics of confinement, and the short distance perturbative Coulomb potential. This is actually necessary if one is to calculate weak matrix elements, for one must match onto the continuum using perturbation theory at short distances, while simultaneously including the long distance contributions.
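To make the extraction of V(R) and κ concrete, here is a toy sketch in which ⟨W(R, T)⟩ is modelled by the asymptotic form of Eq. 61 supplemented by a Coulomb term; all numbers are invented, and a real analysis would of course use measured loop expectation values.

```python
# Toy numbers, not simulation data: extract V(R) from ratios of Wilson loops,
# V(R) = ln[ <W(R,T)> / <W(R,T+1)> ] at large T, then the force F = dV/dR.
import numpy as np

kappa, alpha, C = 0.05, 0.3, 1.0             # fake string tension, Coulomb coefficient
def W(R, T):                                 # model <W> ~ C exp(-V(R) T)
    return C * np.exp(-(kappa*R - alpha/R) * T)

R = np.arange(2, 10)
T = 8
V = np.log(W(R, T) / W(R, T + 1))            # effective potential at each R
F = np.diff(V) / np.diff(R)                  # finite-difference force
print(V)
print(F)                                      # approaches kappa = 0.05 at large R
```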
5.2 Charmonium and Bottomonium Spectra, and extracting αS
I now turn to the spectrum. After integrating out the fermions, and making the quenched approximation, a general meson two-point correlator becomes
[diagram: an extended source at Euclidean time 0 and a sink at time τ, joined by two quark propagator lines]
where the "blob" at τ = 0 represents an initial extended source, made gauge invariant in some way, and the blob at τ represents a similar "sink". The lines joining the blobs are quark propagators in the background quenched gauge field. If the meson is a flavor singlet (e.g. cc), then there is a second diagram in which the quark and antiquark annihilate into intermediate gluons. For heavy quark systems these are suppressed by powers of αs(mq), and can be ignored. The same is not true for light quark systems; in particular, the η′ mass comes from such diagrams. I first discuss the cc system. This has been studied on the lattice with great care by the Fermilab group [27]. Compared to the light quark spectrum, the cc system has several advantages:
• cc states are smaller than light quark hadrons, so one can use lattices with smaller size in physical units.
• The CPU time needed to calculate c-quark propagators is less than for light quarks. Since, in QQCD, calculating propagators consumes most of the computer time, this allows one to use more lattices, and thus reduce statistical errors. These errors are further decreased by the fact that the intrinsic fluctuations of c-quark correlators from configuration to configuration are typically smaller than for light quarks.
• The cc system is reasonably well described by potential models, so one has a way in which to estimate systematic errors, in particular that due to quenching.
A potential disadvantage is that the charm mass in lattice units is not small, e.g. mc a ∼ 0.75 at a = 0.1 fm (β ≈ 6). This might lead to significant discretization errors, proportional to powers of mc a, and thus require the use of a smaller lattice spacing. This turns out not to be true, as long as one uses an improved action [28].
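As a quick check of the quoted figure mc a ∼ 0.75 (using ħc ≈ 0.197 GeV fm and a charm-quark mass of about 1.5 GeV, which are my assumptions rather than numbers taken from the text):

```python
hbar_c = 0.197327          # GeV * fm (conversion constant)
a = 0.1                    # fm
m_c = 1.5                  # GeV, a typical charm-quark mass (assumed)
print(m_c * a / hbar_c)    # ~ 0.76, i.e. m_c a ~ 0.75
```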
Charmonium is thus a system where all of the lattice errors can be studied, and to a large extent controlled. In particular, finite volume and lattice spacing errors are small. It turns out, as I discuss below, that similar control is possible for the bb system, using non-relativistic QCD (NRQCD) to describe the b quark. The onia are thus a good choice for accurately measuring the lattice spacing. As I will explain, this can be turned into a prediction for αs. The lattice spacing is obtained by comparing a quantity measured in lattice units to its physical value. An excellent choice for the physical quantity is the splitting between the spin-averaged 1P and 1S levels in onia. This is because the splitting is almost the same in cc and bb systems (457 and 452 MeV, respectively), so that one need not worry if the lattice quark masses differ slightly from the experimental values. To extract a one uses

    a = (am_1P − am_1S)_lat / (m_1P − m_1S)_expt ,    (62)
where (am_1S,P)_lat are the dimensionless masses one obtains from the lattice simulation. The absence of corrections of O(a^2) in this relation is a convention. If we used other physical quantities we would obtain values of a differing by such corrections. This method gives accurate values of a for various values of g^2, or equivalently g^2 as a function of a. From the point of view of perturbation theory, the lattice is just an ultra-violet regulator, albeit a messy, rotationally non-invariant one. Roughly speaking, it restricts momenta to satisfy |p| < π/a. Thus it can be related to couplings defined in other schemes, for small enough a. For example, it is related to the MS coupling

    g^2(a) = g^2_MS(µ = π/a) [1 − 0.31 g^2 + O(g^4)] .    (63)
(Neither the scheme nor the scale of the g^2 in the correction is determined at this order.) The idea is to use this result to obtain g^2_MS(π/a), and then use the beta-function to run this coupling to a standard scale, for example mZ. An important test of the calculation is that one should obtain the same result for g^2_MS(mZ) starting at different values of g^2. The extent to which this is not true is an indication of the size of errors due to truncating perturbation theory in Eq. 63, from truncating the beta-function when running to mZ, and from discretization errors. It turns out that the combined effect of these errors is smaller than the statistical errors [28]. The calculation cannot be carried out exactly as just explained. The g^2 term in Eq. 63 is ∼ 30%, since g^2 ∼ 1 in lattice calculations. Thus there are likely to be large corrections to Eq. 63 coming from unknown higher order terms. How, then, do we convert the well calibrated lattice into a result for αMS? We need a nonperturbative way of obtaining αMS. Such a method has been suggested by Lepage and Mackenzie [11]. There are several parts to their method. First, we recall from our experience with perturbative QCD that αMS(µ) is a reasonable expansion parameter, as long
as one chooses a scale µ appropriate to the process under consideration. Equation 63 then implies that αlat(a) = g^2(a)/4π will be a poor expansion parameter. This expectation is borne out by numerical results. For example, ratios of small Wilson loops are not well represented by either first or second order lattice perturbation theory—the O(g^2) term is roughly half the size of the needed correction at g^2 = 1. If, however, one expands these quantities in terms of αMS, and chooses the scale according to a prescription explained in Ref. [11], then the leading order term does much better (because αMS > αlat), and the second order expression works very well. See Ref. [11] for other examples. The moral is that perturbation theory for short distance lattice quantities is in good shape, as long as one chooses the correct expansion parameter. It is also noteworthy that Lepage and Mackenzie have understood the source of the large correction in Eq. 63: it comes from extra "tadpole" diagrams that occur in lattice perturbation theory. I have skipped over an important detail in the previous paragraph. Lattice perturbation theory is rapidly convergent for almost all quantities when expressed in terms of αMS at an appropriate scale. But how do we determine αMS, given the need for higher order terms in the relation Eq. 63? Lepage and Mackenzie suggest a non-perturbative definition in terms of the numerical result for the average plaquette

    α_P ≡ −(3/4π) ln⟨(1/3) Tr U_□⟩ = (g^2/4π) (1 + O(g^2))    (64)
    1/αMS(3.41π/a) = 1/α_P − 0.37 + O(α) .    (65)
In other words, define an auxiliary coupling constant by the first equation, and relate it to αMS using the perturbative result in the second line. Note that the correction term in the second relation is small, since 1/α ∼ 5 − 10. The scale 3.41π/a takes into account a subset of the two-loop corrections. It is using αMS defined in this way that Lepage and Mackenzie find lattice perturbation theory to be well behaved. In effect this method uses the numerical data itself to sum the leading tadpole diagrams to all orders. In summary, using the result for the lattice spacing from Eq. 62, together with the result for the average plaquette, Eqs. 64 and 65 give us αMS at the known scale q = 3.41π/a. We can then run the result up to mZ. By far the largest error in the result is that due to the use of the quenched approximation. The Fermilab group has developed a method to estimate this error, based on the fact that the cc system is well described by a potential model. The potential in QCD differs from that in QQCD, but we have a reasonable idea of the form of this difference, so we can subtract its effects. It is useful to define a coupling constant in terms of the potential in momentum space,

    V(q) ≡ −C_F α_V^(nf)(q) / q^2 .    (66)
The superscript gives the number of flavors in internal quark loops. At short distances this behaves like any other coupling constant, and in fact is close to that in the MS scheme,

    1/α_V^(nf)(q) = 1/α_MS^(nf)(q) − 0.822 + O(α) .    (67)

At long distances it must lead to a confining potential. A useful form for α which interpolates between these two limits has been given by Richardson. Now, since one adjusts the scale so that the 1P − 1S splitting matches experiment, it must be that QCD (nf = 3) and QQCD (nf = 0) potentials are similar at the scale of typical momenta in low lying cc states, q∗ = 0.35 − 0.7 GeV, i.e. α_V^(3)(q∗) ≈ α_V^(0)(q∗). But the two couplings run differently with q. In particular, for large enough q, the QQCD coupling decreases more rapidly because there is no fermionic screening. This means that α_V^(3)(π/a) − α_V^(0)(π/a) > 0 (assuming π/a > q∗). The difference is calculable given an assumed form for the potential. Using Eq. 67 we can convert this to a difference between α_MS^(0)(π/a) and α_MS^(3)(π/a). The former we have already determined, so we obtain the latter, which we then run up to mZ. The resulting correction is substantial—a 26% increase in αMS(5 GeV) for charmonium. This correction itself is uncertain, due to uncertainties in the matching scale q∗ and in the chosen form of the potential. The resulting uncertainty in αS is estimated to be ∼ 8%, slightly larger than that coming from the neglect of higher order terms in the perturbative relation between αP and αMS. A clear discussion of all these issues is given in Ref. [28]. The net result is [28]

    α_MS^(5)(mZ) = 0.110 ± 0.008 .    (68)
A similar analysis using NRQCD to study the bb system yields a consistent result [29], 0.112 ± 0.004. The fact that the final results agree is a test of the method of estimating quenching errors, which are different in the two onia. It is appropriate that the result be included in the latest Review of Particle Properties [30]. Recently, the first results including quark loops have been obtained, using two moderately light flavors of quarks in the loops [29, 31]. For a review see Ref. [33]. Results for three light flavors can now be obtained either using the methods just described, or simply by extrapolating from nf = 0 and nf = 2. The two methods yield consistent results, with the latter giving the smaller errors [29]:

    α_MS^(5)(mZ) = 0.115 ± 0.002 .    (69)
This is a very impressive result, which is consistent with the latest world average experimental value [32], α_MS^(5)(mZ) = 0.117 ± 0.005. My only concern is whether the error fully accounts for the uncertainties in the extrapolation of mu and md to their physical values.
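The running "up to mZ" referred to above is standard renormalization-group evolution. The following sketch integrates the two-loop β-function numerically; it ignores flavour thresholds and higher loops, and the starting value is illustrative rather than taken from the lattice analyses quoted here.

```python
import math

def run_alpha_s(alpha0, mu0, mu1, nf=5, steps=20000):
    """Two-loop running: d(alpha)/d(ln mu^2) = -alpha^2 (b0 + b1*alpha)."""
    b0 = (33 - 2*nf) / (12*math.pi)
    b1 = (153 - 19*nf) / (24*math.pi**2)
    beta = lambda a: -a*a*(b0 + b1*a)
    t = math.log(mu0**2)
    h = (math.log(mu1**2) - t) / steps
    a = alpha0
    for _ in range(steps):                      # simple RK4 integration
        k1 = beta(a)
        k2 = beta(a + 0.5*h*k1)
        k3 = beta(a + 0.5*h*k2)
        k4 = beta(a + h*k3)
        a += (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return a

# an illustrative alpha_s of about 0.19 at 5 GeV runs down to roughly 0.11 at m_Z
print(run_alpha_s(0.19, 5.0, 91.2, nf=5))
```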
5.3 The light hadron spectrum
I want to begin my discussion of the spectrum of hadrons composed of u, d and s quarks by clarifying how, were we able to simulate QCD, we would determine the
correct lattice quark masses and extract a. This must be done carefully because the measure of the functional integral depends, through the Dirac determinant, on the quark masses. For simplicity of presentation, I assume that mu = md = ml. I will also assume that we have picked a value of β = 6/g^2 large enough that O(a) errors can be ignored. One would remove such errors in practice by simulating at a number of values of β and extrapolating to a = 0. Having chosen β, we then calculate numerically three lattice masses, e.g.

    am_proton(am_l, am_s) ,   am_π(am_l, am_s) ,   and   am_K(am_l, am_s)    (70)

as a function of the lattice masses m_l a and m_s a. We adjust these lattice masses until the two mass ratios agree with experiment:

    am_π(am_l, am_s) / am_proton(am_l, am_s) = 135/938 ,    am_K(am_l, am_s) / am_proton(am_l, am_s) = 498/938 .    (71)

Hopefully these equations have a solution! With the lattice masses fixed we can now extract the lattice spacing, using

    am_proton(am_l, am_s) = a × 938 MeV .    (72)
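To make this tuning step concrete, here is a toy sketch. The model functions for the lattice masses and every number in it are invented for illustration (they are not lattice data), but the logic—solve the two ratio conditions of Eq. 71 for (am_l, am_s), then read off 1/a from Eq. 72—is the one just described.

```python
import numpy as np
from scipy.optimize import fsolve

# Made-up "lattice" model: PGB masses squared linear in the quark masses,
# baryon mass linear in the quark masses.  B, C0, C1 are invented numbers.
B, C0, C1 = 6.0, 0.45, 1.5

def am_proton(aml, ams):
    return C0 + C1*(2*aml + ams)

def conditions(x):
    aml, ams = x
    d2 = am_proton(aml, ams)**2
    return [B*2*aml      /d2 - (135.0/938.0)**2,   # (am_pi / am_proton)^2, cf. Eq. 71
            B*(aml + ams)/d2 - (498.0/938.0)**2]   # (am_K  / am_proton)^2

aml, ams = fsolve(conditions, x0=[0.001, 0.02])
a_inv = 938.0 / am_proton(aml, ams)                # 1/a in MeV, from Eq. 72
print(aml, ams, a_inv/1000.0, "GeV")
```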
Any other quantity (mass, decay constant, . . . ) that we calculate can now be predicted in physical units. Clearly, any three dimensionful quantities can be used to carry out this program. Knowing a also allows us to extract the quark masses in physical units. These are quark masses renormalized in the lattice scheme at the scale a. They are perturbatively related to more familiar masses, such as those in the MS scheme. The same procedure applies to QQCD, but is much simpler to implement. This is because the measure is independent of the quark masses, so one need only generate a single ensemble for each β. The flip-side of this simplicity is, of course, that one is throwing away important aspects of the physics. Unfortunately, it will be some time before this procedure can be followed for QCD. For the moment a systematic study of the spectrum is restricted to QQCD. The most thorough analysis is that of the IBM group, using the GF-11 computer built by Weingarten and collaborators [5]. It sustains ∼ 7 GFlops for these calculations. I will present an outline of their results, with my major focus being the issue of the reliability of the quenched approximation. The main features of the study are:
• Masses are calculated for light hadrons (those with the quantum numbers of the π, ρ, N and ∆), using degenerate quarks. A range of quark masses is used, extending down to about ms a/3, where ms is the physical strange quark mass. The results are then extrapolated to the physical up and down quark masses. An example is shown in Fig. 8. I return to the reliability of these extrapolations below.
Figure 8: Mass extrapolations at β = 6.17 (using sinks of size 4 on a 30 × 32^2 × 40 lattice). All hadron masses are scaled so that mρ(mq = 0) = 1. Quark masses are given in units of ms.

The use of degenerate quarks means that a direct calculation of the strange meson and baryon masses (except for the Ω−) is not possible. There is no fundamental obstacle to doing such a calculation; after all, quark propagators with different masses have been calculated. The practical problem is just that the number of non-degenerate combinations becomes large. The IBM group make predictions for the strange hadrons based upon the assumption (well supported by the experimental data itself) that, in QCD, the masses of mesons and baryons (squared masses for PGBs) depend linearly on the masses of the valence quarks of which they are composed. This leads, for example, to the result

    mΣ + mΞ − mN ≈ mN(ms) ,    (73)
where the quantity on the right hand side is the mass the nucleon would have if mu = md = ms for the valence quarks. This latter quantity is easily calculated in QQCD.
• Finite volume effects are studied by comparing results on lattices of different sizes. Previous studies have shown that for La < 1−1.5 fm there are significant finite volume corrections [34]. All the lattices in the IBM study are larger than this, so the finite volume corrections turn out to be small.
• The extrapolation to a = 0 is done using three lattices, having roughly the same physical volume (L ∼ 2.4 fm), but different lattice spacings (β = 5.7, 5.93 and 6.17, corresponding to a ≈ 0.15, 0.1 and 0.07 fm). Figure 9 shows an example for mN/mρ and "(mΣ + mΞ − mN)/mρ" (obtained using Eq. 73). Since Wilson fermions are used, discretization errors are O(a), and linear extrapolation is assumed. This plot does not include a small shift due to finite volume corrections.

Figure 9: Extrapolation to a = 0 (for sinks 0, 1, and 2 combined). All masses have been previously extrapolated to the continuum limit. Dots at a = 0 are observed values.

• Statistical errors are reduced using large ensembles of lattices, typically 200. In addition the signal is improved by creating the hadrons using extended sources. Figure 3 shows about the worst signal. What is important is that the effective mass reaches a plateau before it disappears into noise. The fitting range is chosen by an automatic procedure, and is shown by the dotted vertical lines. The data sample is large enough that fits using the full correlation matrix are stable, and errors are estimated using the bootstrap procedure. The resulting mass value is shown by the solid horizontal line, and has statistical errors of about 2%.
The final results are shown in Fig. 10. The predictions all agree with the experimental numbers to within 6%, or less than 1.6 standard deviations. QQCD seems to work extremely well! Indeed Ref. [5] interprets this success as indicating both that QCD is the theory of the strong interactions, and that the quenched approximation works well, at least for masses.
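Since the plateau-based fits mentioned in the bullet on statistical errors are central to all of these mass determinations, here is a minimal sketch of the "effective mass" one inspects to choose a fitting range; the correlator below is a fabricated two-exponential toy, not simulation data.

```python
# Sketch with invented numbers: the effective mass m_eff(t) = ln[ C(t) / C(t+1) ]
# for a toy correlator containing a ground state and one excited state.
import numpy as np

t = np.arange(0, 20)
C = 1.0*np.exp(-0.55*t) + 0.4*np.exp(-1.1*t)      # ground state + excited state
m_eff = np.log(C[:-1] / C[1:])
for ti, m in zip(t[:-1], m_eff):
    print(ti, round(float(m), 4))                  # approaches 0.55 at large t
```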
Figure 10: Final results for the light hadron spectrum. Masses are measured in units of mρ . Experimental numbers are denoted by crosses.
5.4 Quenching errors in the light hadron spectrum
The IBM study is an impressive piece of work, one which sets the standard for future simulations. Only a few other quantities (αs, mb and BK) have been studied so thoroughly. I am not, however, fully convinced of the conclusions, i.e. that the spectra of QQCD and QCD differ by less than 6%. As I will explain, there are reasons to expect typical deviations to be 10-20%. There are then three possibilities: (i) the reasons I will present are not valid; (ii) the reasons are valid, but the deviations turn out fortuitously to be smaller for the light hadron spectrum; and (iii) the assumptions made in order to do the various extrapolations in the IBM study are not valid, and the actual answers differ more substantially from the physical spectrum. I am biased, so I expect the answer to be a combination of (ii) and (iii), but only further simulations will resolve the issue. The two extrapolations which might need further refinement are those in lattice spacing and in quark mass. As for the former, one expects corrections of the form 1 + aΛ_1 + (aΛ_2)^2 + . . ., where Λ_1 and Λ_2 are non-perturbative scales. Linear extrapolation has been assumed (Λ_2 = 0). But if Λ_2 ∼ mρ, then the quadratic term would be a 25% correction at the largest values of a used in the extrapolations, and could not be ignored. For example, for the nucleon data in Fig. 9, such a quadratic term could lead to an extrapolated value mN/mρ ≈ 1.35, in contrast to the result, 1.28, from a linear fit. Such large values for Λ are not unreasonable—they have been seen in the calculation of BK with staggered fermions [35]. The chiral extrapolations, such as those of Fig. 8, have been done linearly using the lightest three mass points. Looking at the nucleon data, there is some evidence of negative curvature, as has been also seen in other simulations. As explained below, we expect there to be an m_π^3 term with a negative coefficient. Including such a term would lead to a slightly lower extrapolated value.
Figure 11: Quark diagrams leading to pion and η′ clouds around mesons.

My point here is not that the extrapolations are necessarily wrong, but that there are reasons to examine them more carefully, in particular by accumulating data at more values of a, and at smaller quark masses. Let me now explain the reasons why I expect the spectra of QQCD and QCD to differ by 10-20%. The essential point is that hadrons in QQCD have very different "pion" (π, K, η and η′) clouds than in QCD. To understand this, consider Fig. 11, which shows "quark-flow" diagrams for processes leading to pion loops. The justification for the use of such diagrams is discussed in Refs. [24, 25]. The first diagram is present in QCD, but absent in QQCD. The second is present in both theories, but in QCD it involves the flavor singlet η′, which is heavy and thus does not contribute significantly to the long-distance cloud. In QQCD, by contrast, the η′ remains light, as discussed above, and this diagram leads to the appearance of an η′ cloud around hadrons. So what happens is that the pion cloud surrounding mesons in QCD is replaced by an η′ cloud in QQCD. These two clouds are not related—the η′ cloud is simply a quenched artifact. To the extent that particle properties in QCD depend upon the composition of the cloud, they will be altered in QQCD. The extent of the alteration can be investigated using (quenched) chiral perturbation theory. One of the most striking results is that the chiral limit is more singular in the quenched theory. For example, the QChPT result for the pion mass is [24, 25]

    m_π^2 = 2µ m_q [1 − 2δ ln(m_π/Λ) + O(m_π^2 ln(m_π/Λ′))] .    (74)

Here δ is a constant proportional to the η′ two point vertex in the right-hand diagram of Fig. 11. The term proportional to δ diverges as m_π → 0. This is in contrast to the corrections in QCD, which are proportional to m_π^2 ln(m_π), and vanish in the chiral limit. Thus the η′ loops of QQCD introduce a sickness that is entirely an artifact. Indeed, the chiral expansion ceases to make sense once m_π gets small enough that the "δ term" is comparable to 1.§ In fact, a general feature of QChPT is that there are corrections which diverge as one or more quark masses are sent to zero. This suggests that one cannot hope to use QQCD below a certain quark mass. This "critical" mass will likely depend on the quantity being calculated.
§ For a pion made of degenerate quarks, one can resum the leading terms [25, 36], i.e. those proportional to (δ ln m_π)^n. It is not known how to do this in general.
Figure 12: Searching for the η′ cloud. All masses are in lattice units. The two fits lead to the values of δ shown.

To estimate the critical mass one needs to know δ. In QCD, the two-point vertex leads to the major part of mη′, the remainder being due to quark masses. Assuming the vertex is the same in QQCD as in QCD, one finds δ ≈ 0.2. In principle, one need not appeal to QCD, but rather can calculate the η′ two-point function in QQCD and directly extract δ. This is a difficult quantity to measure, requiring propagators from many sources. A calculation has been recently completed [37], however, albeit at a rather large lattice spacing (a ∼ 0.15 fm). The result, δ ≈ 0.15, is close to the QCD-based estimate. The smallness of δ justifies the use of perturbation theory, and implies that critical masses turn out to be small. There is now some numerical support for QChPT. Figure 12 shows a test of Eq. 74. I have taken this from the recent review of Gupta [38], which provides more details of the fits. QChPT predicts that the ratio m_π^2/m_q should diverge at small quark masses. This can be tested using staggered fermions, for which the quark mass is multiplicatively renormalized, and one knows where m_q = 0. The data at small quark masses are from Ref. [39], and come from three lattice volumes. There is a pronounced finite volume effect between 16^3 and 24^3 lattices, but no such effect from 24^3 to 32^3. Thus the 32^3 data is close to the infinite volume limit. A technical point, which is crucial numerically, is that the mass appearing in the logarithm in
Eq. 74 is that of a pion which is not an exact PGB on the lattice. The best fit (solid line) gives a value δ ≈ 0.16—other reasonable fits give different values, but all are definitely non-zero. A less striking, but equally important, test comes from decay constants (fπ, fK, etc.). These are also expected to diverge for mesons composed of non-degenerate quarks when one of the quark masses vanishes. The world's data for such decay constants is consistent with the expected QChPT form, if δ = 0.10(3) [38], consistent with the two other determinations. To be complete, I should note that there is as yet no evidence for a divergence in m_π^2/m_q for Wilson fermions [40]. It is, however, much more difficult to do the fit, since the quark mass is additively renormalized, and one does not know a priori where m_q = 0. Thus I have some confidence that QChPT makes sense, and is applicable to present simulations. What does it imply for baryon masses? In QCD the chiral expansion of baryon masses is mN = m_0 + c m_π^2 + c′ m_π^3 + O(m_π^4), where m_π represents any of the PGB masses, and c, c′ are constants. The non-analytic m_π^3 ∝ m_q^{3/2} term is due to loops of PGBs, so-called "chiral loops". Its coefficient is known in terms of the PGB couplings to the nucleon, e.g. g_πNN. The analytic corrections of O(m_π^4), by contrast, involve additional, unknown, parameters. But in the chiral limit, the non-analytic term is enhanced relative to the analytic term by one power of m_π, so one has some predictive power. This is advantageous compared to meson masses, for which the chiral logs of size m_π^4 ln m_π are enhanced only by a logarithm over the analytic terms of O(m_π^4). In QQCD it turns out that for baryons, unlike for mesons, some, though not all, of the chiral loops involving PGBs remain [41]. Thus there is still an m_π^3 term, though with a different coefficient. There are also new terms, due to η′ loops, which are proportional to δ × m_π, and thus enhanced in the chiral limit. These are the analogs of the divergent terms in m_π^2, and are pure artifacts. Jim Labrenz and I have calculated the chiral expansion for octet and decuplet baryon masses in QChPT [41]. To give an example of the results, I use the QCD values for g_πNN and other constants, and include only intermediate octets. We then find (all masses in GeV)

    mN = m_0 − 0.35 (δ/0.15) m_π + 3.4 m_π^2 − 1.5 m_π^3 + O(m_π^4 ln m_π) .    (75)

The numerical values are only to be used as rough guides, since the coupling constants in QQCD will not be the same as in QCD. The most important points are qualitative: when plotting mN vs. m_π^2 there should be curvature at larger masses due to the m_π^3 term, and there should be a peculiar behavior at small masses due to the m_π term. These results are illustrated in Fig. 13. This shows the nucleon mass from Ref. [5] at β = 5.93. I have done three types of fits: a fit of m_0 + b m_π^2 to the lightest three points; and fits of Eq. 75 to all four points with δ fixed to be 0 and 0.15, but with all three other coefficients free. The results of the fits are given on the plot. All fits are reasonable, but the best are the two with curvature. This is qualitative support of an m_π^3 term, but is by no means definitive.
Figure 13: Fits to the IBM data at β = 5.93. All masses in GeV.

The observation of curvature depends on the heaviest mass point, and this is at sufficiently large m_π^2 that higher order terms in the chiral expansion could also be significant and give curvature [40]. The data provides no evidence for or against an m_π term—as the Figure shows, the "hook" which appears for δ = 0.15 is too small to be important unless one goes to very small quark masses. In this instance, the variation in m_0 (the intercept) is quite small: the linear fit to m_π^2 gives m_0 = 0.92 GeV, while the fits with δ = (0, 0.15) give (0.89, 0.95) GeV. Roughly, then, adding the m_π^3 term reduces mN by 3%, while adding the m_π term increases mN by 7%. These are small effects, but they are important given that they are larger than the typical statistical errors. It is not clear what to do about the m_π term. Since it is an artifact of quenching, it may be best to extrapolate ignoring it. But one should certainly include an m_π^3 term. I now return to the reason for this digression into QChPT. We want to make an estimate of the effects of quenching on baryon masses. We do this assuming that QCD and QQCD differ only because the contributions of chiral loops are different. In particular, we assume that all coupling constants in QChPT are the same as in ChPT. We apply this method to ratios of baryon masses, and to mN/fπ. Considering ratios removes changes in overall scale between QQCD and QCD. We find that there are 10-30% differences between ratios in the two theories [41]. This is certainly handwaving—the coupling constants could conspire to make the two theories agree more
closely on the various ratios. But it indicates the magnitude of the typical effect due to the difference in the physical composition of hadrons in QQCD and QCD. It is because of this general argument that I expect the final quenched spectrum to differ more substantially than 6%. We are in the process of extending this analysis to other data sets and to the decuplet baryons. For more extensive reviews of QChPT, see Refs. [38, 42].
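The kind of chiral extrapolation compared above—a linear fit in m_π^2 versus one including the m_π^3 term—can be sketched in a few lines; the "data" below are synthetic points with invented coefficients, not the IBM results.

```python
import numpy as np
from scipy.optimize import curve_fit

mpi = np.array([0.40, 0.55, 0.70, 0.85])                  # GeV, synthetic points
mN  = 0.90 + 3.4*mpi**2 - 1.5*mpi**3                      # fake "data", cf. Eq. 75 without the m_pi term
mN += np.random.default_rng(1).normal(0, 0.01, mpi.size)  # ~1% "statistical" noise

def linear(m, m0, c2):     return m0 + c2*m**2
def cubic(m, m0, c2, c3):  return m0 + c2*m**2 + c3*m**3

p_lin, _ = curve_fit(linear, mpi, mN)
p_cub, _ = curve_fit(cubic, mpi, mN)
print("intercept m0 from linear fit:", p_lin[0])
print("intercept m0 from cubic fit :", p_cub[0])
```

The shift of the intercept between the two fits is the toy analogue of the few-percent effects on the extrapolated nucleon mass discussed in the text.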
6 Anatomy of a calculation: BK
I now turn to applications of QQCD where we do not know the experimental results in advance. We have already seen one example—αS; from now on I will focus on matrix elements of the electroweak effective Hamiltonian. These so-called "weak matrix elements" govern weak decays and transition amplitudes. I begin with a detailed discussion of the calculation of BK, not only because it is dear to my heart, but also because it clearly illustrates all aspects of such calculations. BK arises when calculating CP-violation in K–K̄ mixing, which is parameterized experimentally by ε. This mixing is caused by box diagrams such as that in Fig. 1. Using the renormalization group (RG), we integrate out the top quark, Z and W bosons, and then the bottom quark, as well as gluons with momenta exceeding the renormalization scale µ. We lower µ down to a scale at which we can match onto the lattice calculation, µ ≈ π/a ∼ 5 − 10 GeV. At this stage the ∆S = 2, CP-violating part of the effective Hamiltonian is, to good approximation

    H_eff(µ) ∝ G_F^2 Im[V_ts^2 V_td^2] c(µ) [s̄γ_µ(1+γ_5)d  s̄γ_µ(1+γ_5)d] .    (76)
Here c(µ) is a perturbatively calculable coefficient function, known at present to two-loop order. The non-perturbative part of the problem is the evaluation of ⟨K̄|H_eff(µ)|K⟩, which is parameterized by BK(µ) (Eq. 1). To evaluate this, we switch to the lattice renormalization scheme, matching the continuum four-fermion operator with a corresponding lattice operator. We then evaluate the matrix element of the lattice operator using the numerical methods of lattice QCD, and finally convert the result back into one for BK(µ). Combined with c(µ) and the other constants one obtains an expression for ε, from which one can extract the value of Im[V_ts^2 V_td^2]. I would like to stress a point which occasionally gets overlooked. What the lattice calculation gives us, once we match back onto the continuum, is BK(µ) in a continuum scheme of our choice (e.g. naive dimensional regularization) at a scale µ which should be near to π/a. We can choose µ to be a standard scale, e.g. 2 GeV. This result contains all sorts of lattice related errors, discussed in detail below. For phenomenological applications, we must combine it with c(µ), the result of a perturbative calculation. This quantity has errors due to truncation of the RG equations, and due to the uncertainty in the value of αs (or equivalently
the value of ΛQCD). These errors have nothing to do with the lattice calculation.¶ Since c(µ) ∝ α_s^{−6/25}(µ) at leading order, it has become standard to quote results for B̂_K = α_s^{−6/25}(µ) BK(µ). I do not like this practice, for various reasons. The most important is that it mixes up errors from different sources. In addition, B̂_K is only µ independent at leading order, so it is not what one actually uses in a two-loop phenomenological analysis. And, finally, using B̂_K amounts to running BK down to a very low scale, that at which α = 1, where I, at least, have little intuition for the physics. I propose, instead, that we quote BK(µ) for a standard scheme and scale, just as we do for αS. Other methods of calculation (large Nc, QCD sum rules, . . . ) give BK at different scales, and will have to be run to the standard scale. If the change in scale is small, however, the uncertainties introduced by the running will also be small. In this way we can make comparisons between models without the overall common errors in c(µ). To do phenomenology, we can take c(µ) from one of the standard RG analyses. With that off my chest, let me return to the issue of how we calculate BK, and in particular, how we estimate and reduce the errors. The sources of errors are these.
• Numerical method. Statistical errors are now at the 1-2% level.
• Matching continuum and lattice operators. With staggered fermions, the errors from neglecting two- and higher loop terms in the matching are small, 1-2%.
• Making sure the result has the correct behavior in the limit mK → 0. This is much simpler using staggered than Wilson fermions.
• Extrapolating to the physical kaon (containing a highly non-degenerate s and d) from the lattice kaon (with degenerate, or nearly degenerate, quarks of mass ≈ ms/2).
• Finite volume effects. It turns out that these can both be estimated theoretically to be very small (< 0.5%), and are numerically observed to be smaller than the statistical errors on lattices of size 1.6 − 2.4 fm across.
• Errors due to the use of the quenched approximation.
• Extrapolating to a = 0—possibly using improved actions.
I discuss the most important of these in turn.
¶ There is a small correlation in the errors, since the lattice-to-continuum matching coefficients depend on αs, but this is a minor effect.
6.1 Numerical method
Staggered fermions give the most accurate numerical results for BK, and I will describe how we (Rajan Gupta, Greg Kilcup and I) do the calculation. For more details, see Refs. [43, 44]. We begin by expressing BK as a ratio

    BK = ⟨K̄(p⃗=0)| s̄γ_µ(1+γ_5)d s̄γ_µ(1+γ_5)d |K(p⃗=0)⟩ / [ (8/3) ⟨K̄(p⃗=0)| s̄γ_4γ_5 d |0⟩ ⟨0| s̄γ_4γ_5 d |K(p⃗=0)⟩ ] ,    (77)
Each of the operators in this expression is a shorthand for the lattice operator which results from matching with the continuum. Figure 14 shows schematically how we did the calculation in Ref. [46]. Since then we have changed the method slightly, but not essentially [47]. The vertical and horizontal directions represent, respectively, space (3-dimensional in practice) and Euclidean time. The expectation values indicate the functional integral over configurations weighted, in QQCD, by the gauge action. In practice this means an average over some number of configurations generated with the correct measure. The lines with arrows are quark propagators, and the boxes represent the lattice bilinear and quadrilinear operators. Finally, the wavy lines at the edge are “wall sources”. These create the quark (or antiquark) with equal amplitude across the entire timeslice, and thus ensure that the mesons which are created have ~p = 0. Gauge invariance is maintained by first fixing the source timeslices to Coulomb gauge. The boundary conditions are periodic in space, but Dirichlet in time, so that the propagating quarks can bounce off the ends of the box, but not propagate through them.
Figure 14: Schematic depiction of the method used to calculate BK.

We use wall sources because they give us the freedom to insert any operator we wish for the "boxes" in the diagram. For example, the operators can involve quarks and antiquarks at slightly different lattice sites. This freedom turns out to be crucial for staggered fermions, because (as explained below) the operators we want to use involve q and q̄ at different positions. More generally, it allows us to test that different discretizations of the continuum operator yield the same results.
Figure 15: Data for BK (with tree-level matching). The quark mass is such that the lattice kaon is slightly heavier than the physical kaon. Sample of 23 configurations.

The disadvantage of wall sources is that (in the way we implement them [44]) they create not only kaons, but also K∗'s and other excited strange mesons. These give contributions which fall off like exp[−(mK∗ − mK)τ], where τ is the distance from the source. Thus if one is far enough away from both sources only the kaon contributes to the matrix element. I give an example of our data in Fig. 15. This shows the ratio of Eq. 77 as a function of the timeslice on which the operator resides. The operator has been summed over all space, which significantly improves the signal, and is another advantage of the wall sources. The desired signal is independent of τ, since for all τ a particle of mass mK (either a K^0 or a K̄^0) propagates the length of the lattice. Edge effects due to excited states and particles bouncing off the boundary are apparent, but there is a "plateau" covering a considerable number of timeslices from which to extract the signal. We improve the statistics further by averaging over this plateau, although, since the results at different times are correlated, the improvement is not as significant as one would naively expect. To give you a feel for how the analysis proceeds, I show in Fig. 16 the results at four lattice spacings, and for a variety of values of mK. These are for operators matched to the continuum only at tree level. The kaon mass has been converted to physical units using the hadron spectrum to set the scale [48]. The dashed vertical line shows the value of the physical kaon mass. The statistical errors in the results are small, in part due to the use of a ratio to calculate BK. The small errors allow us to clearly observe the following features:
• BK has a smooth dependence on m_K^2, and, apparently, a finite chiral limit.
Figure 16: Results for BK using tree-level matched operators. The lattice spacings are roughly 0.2, 0.105, 0.08 and 0.06 fm as one descends the plot.

• There is no need to extrapolate to get to the physical kaon mass. Quenched particles with the kaon mass can be simulated directly on present lattices. Of course, this is a cheat, since the results in the figure are mostly for degenerate quarks, and one must extrapolate to the non-degenerate case.
• There is a clear and significant dependence on lattice spacing.
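To illustrate the plateau average and error estimate referred to above, here is a small bootstrap sketch; the "data" are random numbers standing in for per-configuration values of the ratio in Eq. 77, so only the procedure, not the numbers, is meaningful.

```python
# Hedged sketch with fabricated data: bootstrap error on a plateau-averaged ratio.
import numpy as np

rng = np.random.default_rng(0)
n_cfg, n_t = 200, 16
data = 0.70 + 0.05*rng.standard_normal((n_cfg, n_t))   # fake per-configuration plateau values

def plateau_average(sample):
    # average over configurations first, then over the plateau timeslices
    return sample.mean(axis=0).mean()

central = plateau_average(data)
boots = np.array([plateau_average(data[rng.integers(0, n_cfg, n_cfg)])
                  for _ in range(1000)])                # resample configurations
print(f"plateau ratio = {central:.4f} +/- {boots.std():.4f}")
```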
6.2 Matching continuum and lattice operators
Lattice operators are chosen so that, at tree level, they have the same matrix elements as the continuum operators, for momenta much lower than the cut-off (p ≪ π/a). With Wilson fermions it is easy to find such operators. A continuum bilinear or four-fermion operator is discretized into a lattice operator having exactly the same form, with all fermion fields on the same lattice site. The tree-level matrix elements are the same as those of the continuum operator for all the momenta that are available on the lattice. The matrix elements of lattice and continuum operators do not, however, agree when one includes loops. Examples of one-loop diagrams are shown in Fig. 17. Lattice propagators and vertices differ from their continuum counterparts when the momentum in the loop is of O(1/a). We have already seen this difference for fermion propagators—compare Eq. 49 with the continuum propagator. It is true also for gluon propagators (D^{-1} = ∑_µ 4 sin^2(k_µ/2) versus ∑_µ k_µ^2), and the quark-gluon vertex. To do the one-loop matching, one must add parts to the lattice operator, proportional to g^2, so as to make the matrix elements agree. These additional terms are finite, because they come from short distances and the lattice integrals have an ultraviolet cut-off. Since the momenta involved are ∼ π/a, the corrections should be reliably calculable using perturbation theory, as long as a is small enough. Let me mention a subtle point that one tends to forget when doing the matching calculations. It is not sufficient that perturbation theory be reliable at the scale π/a. One actually needs the stronger condition that it be reliable down to (0.5 − 1)/a. This is because, to do the matching, one must, in principle, compare matrix elements with external Euclidean momenta satisfying |p| ≫ ΛQCD. This is necessary to avoid the infra-red region where perturbation theory breaks down. But one also must have pa small enough that lattice artifacts in the matrix elements are small. Typically this remains true for pa < 0.5 − 1. In practice, at one-loop one uses a gluon mass to regulate the infra-red divergences, and sets the external momenta to zero. This is adequate, because in the matching calculation, the infra-red contributions are identical and cancel. This point is emphasized in Ref. [45]. One-loop continuum calculations are straightforward. The corresponding lattice integrations are, however, a mess, and are evaluated numerically. I can personally attest that, soon after beginning such a calculation, one asks oneself the questions: Is this matching really necessary? Can't we just use the lattice regularization throughout? Unfortunately, the answers are yes and no, at least for the moment. The point is that the electroweak theory has a chiral representation of fermions, so its discretization is problematic. The chirality is not an obstacle to discretizing a left-handed operator such as that in BK (Eq. 77), because only the even-parity part of the operator contributes, which is the average of left-handed and right-handed operators. Incidentally, it is possible to directly match from the electroweak theory including weak bosons to the lattice effective Hamiltonian, and then run down to a low scale on the lattice.
Figure 17: Diagrams contributing to one-loop matrix elements needed for matching.

The result of one-loop matching takes the general form

    O_i^cont(NDR, µ) = O_i^lat + (g^2/16π^2) ∑_j [γ_ij^(0) ln(π/µa) + c_ij] O_j^lat + O(g^4) + O(a) .    (78)
Here i is a set of continuum operators which mix with each other under the continuum RG. To define these operators, which in general contain γ_5, one must pick a renormalization scheme as well as a renormalization scale. One of the standard schemes is naive dimensional regularization (NDR). The set of lattice operators which are required for matching is, in general, larger than the set of continuum operators.∥ Thus the finite matrix c_ij is rectangular. The anomalous dimension matrix γ_ij^(0) is, however, square. It governs the dependence of the continuum operators on µ. Let me give a concrete example, relevant to the present subject. In order to simplify the allowed Wick contractions consider four-fermion operators composed of four distinct flavors

    O_LL = (1/2) [ψ̄_1 γ_µ(1+γ_5)ψ_2 ψ̄_3 γ_µ(1+γ_5)ψ_4 + (2 ↔ 4)]    (79)
    S    = (1/2) [ψ̄_1 ψ_2 ψ̄_3 ψ_4 + (2 ↔ 4)]    (80)
    P    = (1/2) [ψ̄_1 γ_5 ψ_2 ψ̄_3 γ_5 ψ_4 + (2 ↔ 4)] , etc.    (81)

The result of matching for Wilson fermions is [49]

    O_LL^cont = [1 + (g^2/16π^2)(−4 ln(µa/π) − 54.753)] O_LL^lat    (82)
              + (g^2/16π^2) [c_s S + c_p P + c_t T + c_v V + c_a A] + O(g^4) .    (83)
∥ The appearance of extra operators is not peculiar to the lattice. Even in the continuum, when one uses dimensional regularization, extra "evanescent" operators are needed at intermediate stages of the calculation, associated with the additional −2ε dimensions.
The most important feature of this result is the appearance of lattice operators having all possible tensor structures. In the continuum, the matrix elements of O_LL are constrained by chiral symmetry to vanish as m_K^2 in the chiral limit. This is not true for the matrix elements of O_S, O_P, etc., which couple to both LH and RH quarks.∗∗ This problem is due to the breaking of chiral symmetry by the Wilson fermion action—even though the breaking is O(a) at tree-level, divergent loops give factors of 1/a leading to finite contributions at one-loop. The consequence is that if matching is not done exactly, to all orders in g^2, matrix elements will not have the correct continuum chiral behavior. In addition, there will be lattice artifacts, proportional to a, with the wrong chiral behavior. Here we have the doubling problem coming back to haunt us. This proves to be a significant obstacle in practice, and, because of this, it is preferable to calculate BK using staggered fermions. Probably the only hope for similar accuracy with Wilson fermions is to use non-perturbative matching [45]. I want to make two general comments about matching. As we have seen, lattice calculations must be combined with continuum coefficient functions obtained from RG equations. To be consistent, if one uses 1-loop matching, one must use 2-loop RG equations. To see this, recall the solution to the 2-loop RG equation for the case of an operator which does not mix. Running from a heavy scale mH down to µ, one finds

    c(µ) = c(mH) [g^2(mH)/g^2(µ)]^{γ^(0)/2β_0} [1 + ((g^2(mH) − g^2(µ))/16π^2) (γ^(1)/2β_0 − γ^(0)β_1/2β_0^2) + O(g^4)] ,    (84)

where γ^(n) and β_n are the (n+1)-loop contributions to the anomalous dimension and β-function, respectively. The term in the rightmost parenthesis proportional to g^2(µ) is of the same form as a one-loop matching correction. To be consistent, one must include this term in c(µ). Since it contains γ^(1) and β_1, it comes from two-loop running. My second comment is that the perturbative matching can be done equally well in QCD and QQCD. Indeed, at one-loop the two theories give the same results for fermionic operators, since fermion loops do not enter. The issue does arise, however, when picking the scale at which to evaluate c(µ). The product of c(µ) calculated in QCD, with BK(µ) evaluated in QQCD, is not independent of µ because β_0, β_1 and γ^(1) receive contributions from quark loops and thus differ in the two theories. One must simply guess a value of µ for which it seems reasonable that QQCD will do the best job of imitating QCD.
The only exception is V + A, which has the same positive parity part as OLL , and does have vanishing matrix elements in the chiral limit.
46
6.3 Staggered fermions and chiral behavior
I now give a lightning review of the essentials of staggered fermions. For more details see the two texts, or my articles explaining in detail how and why one uses staggered fermions to calculate matrix elements involving PGBs [43]. We begin with naive fermions

$$-S_N = \sum_{n,\mu} \tfrac{1}{2}\,\bar\psi_n \gamma_\mu \left[ U_{n,\mu}\psi_{n+\mu} - U^\dagger_{n-\mu,\mu}\psi_{n-\mu} \right] + \sum_n m\,\bar\psi_n \psi_n , \qquad (85)$$

and perform a change of variables, known as “spin-diagonalization” [50],

$$\psi(n) = \gamma_n \chi(n) , \qquad \bar\psi(n) = \bar\chi(n)\,\gamma_n^\dagger , \qquad \gamma_n = \gamma_1^{n_1}\gamma_2^{n_2}\gamma_3^{n_3}\gamma_4^{n_4} . \qquad (86)$$

Note that $\gamma_n$ depends only on ${\rm mod}_2(n_\mu)$. The result is

$$-S_N = \sum_n \left[ \bar\chi_n \sum_\mu \tfrac{1}{2}\,\eta_\mu(n)\left( U_{n,\mu}\chi_{n+\mu} - U^\dagger_{n-\mu,\mu}\chi_{n-\mu} \right) + m\,\bar\chi_n\chi_n \right] . \qquad (87)$$

The gamma matrices have been replaced by the phases (thus the name of the transformation)

$$\eta_\mu(n) = (-1)^{\sum_{\nu<\mu} n_\nu} . \qquad (88)$$

Figure 21: Static and Wilson fermion results for $\phi_P$.

… and are likely to be afflicted by substantial lattice artifacts. Indeed, an approximate way of removing these artifacts has been used to correct all except the static point, and the corrections are substantial (as large as ∼ 50%) for the points at small $1/M_P$. I explain the source of this correction below. The bulk of it is reasonable, but there remains a systematic uncertainty in these large-mass points. Fortunately, this uncertainty does not have much impact on $f_B$, because the curve is pinned down from both sides of the B mass. Even if we were to discard the two points with the smallest values of $1/M_P$, the result for $f_B$ would be similar. The present status has been summarized by Soni [71], who quotes $f_B = 173 \pm 40$ MeV (in the normalization where $f_\pi = 135$ MeV). The alternative approach is to use yet another effective theory to describe the b-quark, namely non-relativistic QCD (NRQCD) [72]. If $m_Q \gg \Lambda_{QCD}$, then in its couplings to low-momentum gluons the quark is non-relativistic. The couplings to high-momentum gluons must be treated relativistically, but these can be accounted for using perturbation theory. The NRQCD Lagrangian is

$$L_{NRQCD} = \psi^\dagger D_\tau \psi + \frac{1}{2m_Q}\,\psi^\dagger \vec D^2 \psi + c_1(g^2)\,\frac{g}{2m_Q}\,\psi^\dagger \vec\sigma\cdot\vec B\,\psi + c_2(g^2)\,\frac{g}{8m_Q^2}\,\psi^\dagger \vec\sigma\cdot(\vec D\times\vec E - \vec E\times\vec D)\,\psi + c_3(g^2)\,\frac{1}{8m_Q^3}\,\psi^\dagger (\vec D^2)^2 \psi - c_4(g^2)\,\frac{ig}{8m_Q^2}\,\psi^\dagger(\vec D\cdot\vec E - \vec E\cdot\vec D)\,\psi + \dots , \qquad (129)$$
where ψ is a two-component field. The coefficients ci are obtained by perturbative matching with QCD. At leading order they are all unity. The crucial point is that, when one discretizes NRQCD, the errors are no longer determined by mQ a, but rather by ~pa, where p ∼ ΛQCD is a typical momentum. 61
Thus, in heavy-light systems, discretization errors for heavy and light quarks are comparable. In the the bb system the typical momentum is somewhat higher, p ∼ Mv ≈ 0.1M ≈ 0.5GeV, but still satisfies pa < 1 for present lattices. NRQCD teaches us something about heavy Wilson quarks. The point is that, as long as mQ >> ΛQCD , the quarks are non-relativisitic, and must be describable by a non-relativistic effective Lagrangian. This will have the same form as Eq. 129 (since this contains all terms), but the effect of discretization errors will be to change the coefficients ci substantially from those in NRQCD[73]. The coefficients can be determined using perturbation theory. By suitably changing the normalization of the fields one can get the leading term in LNRQCD with the correct normalization, while by changing the definition of mQ one can obtain the second term up to perturbative corrections. After these corrections, results for heavy Wilson quarks should only have errors of O(ΛQCD /mQ ) (from the incorrectly normalized third term in LNRQCD ), and not of O(mQ a). These are the corrections that have been applied to points in Fig. 21. Nevertheless, the action is not that of NRQCD. Kronfeld and Mackenzie have suggested studying heavy quarks using a modified Wilson action [74]. Their scheme interpolates between the standard Wilson fermion action for light quarks and NRQCD for heavy quarks. For my purposes here, however, there is no important distinction between this and the NRQCD approach, and I will focus on the latter, for which more results are available. NRQCD is a non-renormalizable theory, because LNRQCD contains operators of dim-5 and higher. It must be formulated with a UV cut-off, which is here provided by the lattice, Λ ≈ 1/a. As mentioned above in the discussion of BK , loop diagrams mix the higher dimension operators with those of lower dimension. Thus, for example, there are contributions to wave-function renormalization proportional to F (g 2)Λ/mQ = F (g 2)/(mQ a). These are calculable order by order in perturbation theory. Since in practice one can only work to finite order, it is clear that one cannot take a too small, for then the uncalculated corrections will become large. This is not a practical problem at present lattice spacings. This does bring up an important issue that, in my view, remains to be fully resolved. Even if one could calculate F (g 2 ) to all orders in perturbation theory, there might be non-perturbative terms proportional to exp[−n/(2β0 g 2)] ∝ (ΛQCD a)n (β0 is the first coefficient in the QCD β-function). If there are such terms with n = 1, they give non-perturbative contributions to wave-function renormalization ∝ ΛQCD /M, which are not calculable. This in turn means that there is an ambiguity in the A/MP contribution to φP (Eq. 127) of size ∝ ΛQCD /MP . But A ∼ ΛQCD , so the ambiguity is of the same size as the quantity we wish to extract. This argument was first given in Ref. [75]. There is no dispute over whether such terms can exist (examples are infra-red renormalon ambiguities), but what is controversial is their size [76]. These are nonperturbative effects at short-distances, and the lore is that these are small. This is true of the contributions of short distance instantons, for example. How can this dispute be resolved? One can think of these terms as non-perturbative contributions 62
to the matching between QCD and NRQCD. Thus one should compare the results of perturbative matching to those of non-perturbative matching—the latter requiring either physical matrix elements, or quark and gluon states in a particular gauge. Such a program has been initiated in Ref. [45]. An alternative comparison is obtained by calculating physical quantities in different ways. If the answers agree, the non-perturbative ambiguities are likely small. The example of mb is discussed below. I think it is important to study this issue further. The aim of the NRQCD program is to directly simulate b-quarks. The first step in this program is to make sure that one obtains a good description of the bb system. The ordering of the terms in Eq. 129 is according to decreasing importance in the bb system [72]. Terms in the first line are of O(mQ v 2 ), where v is the velocity of the heavy quark, and give the spin-averaged splittings (e.g. the 1P-1S splitting discussed a long way above). The second and third lines contain terms of O(mQ v 4 ), the former being the leading contribution to the hyperfine splittings, the latter correcting the spin-averaged splittings. Since v 2 ∼ 0.1, the first line gives fine structure to 10%, the next line hyperfine structure to 10%, and the third improves the accuracy of fine structure to 1%. All these terms have now been included in the simulations [29, 77]. Propagators can be calculated in a single pass, because the time derivative can be discretized as a forward difference. Thus the calculations are much faster than for light quark propagators. The status of the calculation of the spin-averaged spectrum is shown in Fig. 22. NRQCD refers to Ref. [77], UK-NRQCD to Ref. [78]. Notice that simulations with two (moderately light) flavors of dynamical fermions have now been done. The spectrum is little changed with nf = 2 from that in QQCD; both are in good agreement with the experimental data. The lattice spacing has been adjusted to fit the 1S −1P and 2S −2P splittings (from which αs can be determined as discussed above). Reasonable agreement for spin splittings is also obtained. What about mb ? Spin-averaged splittings are quite insensitive to mb ; a reasonable value has been used in the simulations. An accurate determination of mb can be made in two ways. First, one can use mΥ a = 2mb a + ENR a − 2E0 a, i.e. adjust mb until twice its mass plus the binding energy (determined from the simulation) equals the physical Υ mass. The measured binding energy must be corrected for the self energy of the quark (E0 a). This correction can be calculated in perturbation theory, E0 a = c1 g 2 + O(g 4). It is an example of a quantity which might contain the previously discussed uncalculable non-perturbative terms ∝ ΛQCD a. These would lead to an uncertainty in the extracted value of mb of size ΛQCD . The second method is to measure the kinetic energy of the bb states, e.g. aEΥ (~p) = aEΥ,NR +
$\dfrac{(a\vec p\,)^2}{2aM_{\Upsilon,\rm kin}} + \dots \qquad (130)$
One adjusts the bare lattice mass m0b a until aMΥ,kin agrees with the physical mass in lattice units, aMΥ . One then uses perturbation theory to match the lattice mass to the pole mass, mb = (1 + c′ g 2 + O(g 4))m0b . In this method an unknown non-perturbative term is truly a small correction. 63
Figure 22: Spin-averaged Υ spectrum from (a) NRQCD (nf = 0): filled circles; (b) NRQCD (nf = 2): open circles; and (c) UK-NRQCD (nf = 0): boxes. Experimental results are the dashed horizontal lines. (The figure shows the $^1S_0$, $^3S_1$, $^1P_1$ and $^1D_2$ levels on a scale from 9.5 to 10.5 GeV.)

Both methods lead to consistent results, with errors of ∼ 200 MeV. Thus a difference between the two methods of approximately $\Lambda_{QCD}$, which is what would be expected from non-perturbative terms, is not ruled out. More accurate data is needed to resolve the above-mentioned dispute. Combining the results from the two methods, the final quoted values for the pole mass are [77]

$$m_b(n_f = 0) = 4.94 \pm 0.15\ {\rm GeV} \qquad (131)$$
$$m_b(n_f = 2) = 5.0 \pm 0.2\ {\rm GeV} . \qquad (132)$$
The next stage is to apply these methods directly to B-mesons. This is just beginning. It is my hope that the issue of non-perturbative ambiguities can be resolved, and the lattice can, in due course, give results for a number of transition amplitudes including 1/MB corrections.
8 A final flourish
I hope to have convinced you that lattice calculations are now reliable enough to make significant contributions to phenomenology. As time progresses, the importance of these calculations will increase. To give an idea of what might happen I have played the following game. The lattice results for $B_K$ and $f_B\sqrt{B_B}$ constrain, respectively, ${\rm Im}(V_{td}^2)$ and $|V_{td}|^2$ (ignoring small charm quark contributions). Recalling that $V_{td} = A\lambda^3(1-\rho-i\eta)$, we can convert these into constraints on ρ and η. I want to imagine how these constraints might look, say, five years hence. It is my guess that by that time we will have made enough progress with simulating QCD that the error in $B_K$ will be roughly the same as the present error in the quenched result, Eq. 125. For purposes of illustration, I will also take the same central value. I think that the errors in $f_B$ and $B_B$ will be reduced substantially, and I assume $f_B\sqrt{B_B} = 200 \pm 10$ MeV, instead of the present $200 \pm 40$ MeV. Let me further assume that the experimental errors in $V_{cb}$ and $V_{ub}$ will drop significantly. I take $|V_{cb}| = 0.038$, $|V_{ub}/V_{cb}| = 0.080 \pm 0.004$, and $x_d = 0.67 \pm 0.02$ (this is the measure of $B$-$\bar B$ mixing). Finally, I assume $m_t = 172$ GeV. The resulting constraints on ρ and η are shown in Fig. 23. The parameters are supposed to lie between the three pairs of curves (the hyperbolae are from $K$-$\bar K$ mixing, the large circles from $B$-$\bar B$ mixing, and the small circles from $|V_{ub}/V_{cb}|$). The reduction in errors has moved the members of each pair much closer than at present, and there is no solution for ρ and η! It will be most interesting to see how things develop in reality.

Figure 23: Possible future constraints on ρ and η.
9 Acknowledgements
Many thanks to Rajan Gupta for comments and for providing plots, and to Maarten Golterman for helpful discussions.
10 References
1. H.J. Rothe, Lattice Gauge Theories (World Scientific, Singapore, 1992).
2. I. Montvay and G. Münster, Quantum Fields on a Lattice (Cambridge University Press, Cambridge, 1994).
3. M. Creutz, Quarks, Gluons and Lattices (Cambridge University Press, Cambridge, 1983).
4. K. Osterwalder and R. Schrader, Comm. Math. Phys. 42 (1975) 281.
5. F. Butler et al., Phys. Rev. Lett. 70 (1993) 2849, and “Hadron masses from the valence approximation to lattice QCD”, hep-lat/9405003.
6. L. Maiani and M. Testa, Phys. Lett. B245 (1990) 245.
7. M. Lüscher, Commun. Math. Phys. 104 (1986) 177; 105 (1986) 153; Nucl. Phys. B354 (1991) 531.
8. K. Symanzik, Nucl. Phys. B226 (1983) 187 and 205.
9. P. Hasenfratz and F. Niedermayer, Nucl. Phys. B414 (1994) 785.
10. G. S. Bali et al. (UKQCD), Phys. Lett. 309B (1993) 378.
11. G.P. Lepage and P.B. Mackenzie, Phys. Rev. D48 (1993) 2250.
12. L. H. Karsten and J. Smit, Nucl. Phys. B183 (1981) 103. The corresponding result for Hamiltonian theories was demonstrated in H. B. Nielsen and N. Ninomiya, Nucl. Phys. B185 (1981) 20, ibid., B193 (1981) 173.
13. F. Wilczek, Phys. Rev. Lett. 57 (1986) 2617.
14. A. Borelli et al., Nucl. Phys. B333 (1990) 335.
15. L. Susskind, Phys. Rev. D16 (1977) 3031.
16. N.H. Christ, R. Friedberg and T.D. Lee, Nucl. Phys. B202 (1982) 89.
17. B. Alles et al., “Continuum limit of field theories regularized on a random lattice”, hep-lat/9411014.
18. K. Jansen, “Domain Wall Fermions and Chiral Gauge Theories”, hep-lat/9410018.
19. M. Lüscher, Commun. Math. Phys. 54 (1977) 283; M. Creutz, Phys. Rev. D35 (1987) 1460.
20. T. Kalkreuter, “Multigrid methods for propagators in lattice gauge theories”, hep-lat/9409008.
21. S. Duane, A.D. Kennedy, B.J. Pendleton and D. Roweth, Phys. Lett. 195B (1987) 216.
22. D. Leinweber and T. Cohen, Phys. Rev. D49 (1994) 3512.
23. A. Morel, J. Phys. (Paris) 48 (1987) 1111.
24. C. Bernard and M. Golterman, Phys. Rev. D46 (1992) 853.
25. S. Sharpe, Phys. Rev. D46 (1992) 3146.
26. G.S. Bali and K. Schilling, Phys. Rev. D47 (1993) 661.
27. A.X. El-Khadra, G. Hockney, A.S. Kronfeld and P.B. Mackenzie, Phys. Rev. Lett. 69 (1992) 729.
28. A.X. El-Khadra, Proc. “LATTICE 93”, Dallas, USA, Oct. 1993, Nucl. Phys. B (Proc. Suppl.) 34 (1994) 141.
29. C.T.H. Davies et al., “A precise determination of αs from lattice QCD”, hep-ph/9408328.
30. Review of Particle Properties, Phys. Rev. D50 (1994) 1174.
31. S. Aoki et al., “Manifestation of sea quark effects in the strong coupling constant in lattice QCD”, hep-lat/9407015; and “Charmonium spectroscopy with heavy Kogut-Susskind quarks”, hep-lat/9411058.
32. B.R. Webber, “QCD and jet physics”, 27th International Conference on High Energy Physics, Glasgow, Scotland, 20-27 July 1994, hep-ph/9410268.
33. J. Shigemitsu, “Lattice gauge theory: status report 1994”, 27th International Conference on High Energy Physics, Glasgow, Scotland, 20-27 July 1994, hep-ph/9410212.
34. M. Fukugita, H. Mino, M. Okawa, G. Parisi and A. Ukawa, Phys. Lett. 294B (1992) 380.
35. S. Sharpe, Proc. “LATTICE 93”, Dallas, USA, Oct. 1993, Nucl. Phys. B (Proc. Suppl.) 34 (1994) 403.
36. S. Sharpe, Proc. “LATTICE 92”, Amsterdam, Netherlands, Sep. 1992, Nucl. Phys. B (Proc. Suppl.) 30 (1993) 213.
37. Y. Kuramashi, M. Fukugita, H. Mino, M. Okawa and A. Ukawa, Phys. Rev. Lett. 72 (1994) 3448.
38. R. Gupta, talk at “LATTICE 94”, Int. Symp. on Lattice Field Theory, Bielefeld, Germany, 9/94.
39. S. Kim and D.K. Sinclair, unpublished; and Proc. “LATTICE 92”, Amsterdam, Netherlands, Sep. 1992, Nucl. Phys. B (Proc. Suppl.) 30 (1993) 381.
40. D. Weingarten, Proc. “LATTICE 93”, Dallas, USA, Oct. 1993, Nucl. Phys. B (Proc. Suppl.) 34 (1994) 29.
41. J. Labrenz and S. Sharpe, Proc. “LATTICE 93”, Dallas, USA, Oct. 1993, Nucl. Phys. B (Proc. Suppl.) 34 (1994) 335.
42. M. Golterman, “Chiral perturbation theory and the quenched approximation of QCD”, hep-lat/9411005.
43. S. Sharpe, in Standard Model, Hadron Phenomenology and Weak Decays on the Lattice, ed. G. Martinelli (World Scientific, Singapore, in press).
44. R. Gupta, G. Guralnik, G. Kilcup and S. Sharpe, Phys. Rev. D43 (1991) 2003.
45. G. Martinelli, C. Pittori, C.T. Sachrajda, M. Testa and A. Vladikas, “A general method for nonperturbative renormalization of lattice operators”, hep-lat/9411010.
46. G. Kilcup, S. Sharpe, R. Gupta and A. Patel, Phys. Rev. Lett. 64 (1990) 25.
47. S. Sharpe, Proc. “LATTICE 90”, Tallahassee, USA, Oct. 1990, Nucl. Phys. B (Proc. Suppl.) 20 (1991) 429.
48. S. Sharpe, Proc. “LATTICE 91”, Tsukuba, Japan, Nov. 1991, Nucl. Phys. B (Proc. Suppl.) 26 (1992) 197.
49. G. Martinelli, Phys. Lett. 141B (1984) 395; C. Bernard, T. Draper and A. Soni, Phys. Rev. D36 (1987) 3224.
50. N. Kawamoto and J. Smit, Nucl. Phys. B192 (1981) 100.
51. D. Daniel and S. Sheard, Nucl. Phys. B302 (1988) 471.
52. H. Kluberg-Stern, A. Morel, O. Napoly and B. Petersson, Nucl. Phys. B220 (1983) 447.
53. G. Kilcup and S. Sharpe, Nucl. Phys. B283 (1987) 493.
54. S. Sharpe, A. Patel, R. Gupta, G. Guralnik and G. Kilcup, Nucl. Phys. B286 (1987) 253.
55. S. Sharpe and A. Patel, Nucl. Phys. B417 (1994) 307.
56. W. Lee and M. Klomfass, “One spin trace formalism of BK”, CU-TP-642, July 1994.
57. N. Ishizuka and Y. Shizawa, Phys. Rev. D49 (1994) 3519.
58. A. Patel and S. Sharpe, Nucl. Phys. B395 (1993) 701.
59. S.J. Brodsky, G.P. Lepage and P.B. Mackenzie, Phys. Rev. D28 (1983) 228.
60. C. Bernard, T. Draper, A. Soni, D. Politzer and M. Wise, Phys. Rev. D32 (1985) 2343.
61. J. Kambor and D. Wyler, private communication.
62. Y. Zhang and S. Sharpe, in preparation.
63. N. Ishizuka et al., Phys. Rev. Lett. 71 (1993) 24.
64. G.W. Kilcup, Phys. Rev. Lett. 71 (1993) 1677.
65. G. Heatlie, G. Martinelli, C. Pittori, G.C. Rossi and C.T. Sachrajda, Nucl. Phys. B352 (1991) 266.
66. S. Sharpe, in preparation.
67. D. Verstegen, Nucl. Phys. B249 (1985) 685.
68. M.J. Booth, “Quenched chiral perturbation theory for heavy-light mesons”, hep-ph/9411433.
69. J. Mandula and M. Ogilvie, “A lattice calculation of the heavy quark universal form-factor”, hep-lat/9408006; U. Aglietti, Nucl. Phys. B421 (1994) 191.
70. C.W. Bernard, J.N. Labrenz and A. Soni, Phys. Rev. D49 (1994) 2536.
71. A. Soni, “Lattice results for heavy light matrix elements”, 27th International Conference on High Energy Physics, Glasgow, Scotland, 20-27 July 1994, hep-lat/9410007.
72. G.P. Lepage, L. Magnea, C. Nakhleh, U. Magnea and K. Hornbostel, Phys. Rev. D46 (1992) 4052.
73. A.S. Kronfeld, Proc. “LATTICE 92”, Amsterdam, Netherlands, Sep. 1992, Nucl. Phys. B (Proc. Suppl.) 30 (1993) 445.
74. A.S. Kronfeld and B.P. Mertens, Proc. “LATTICE 93”, Dallas, USA, Oct. 1993, Nucl. Phys. B (Proc. Suppl.) 34 (1994) 34.
75. L. Maiani, G. Martinelli and C.T. Sachrajda, Nucl. Phys. B368 (1992) 281.
76. G. Martinelli, Proc. “LATTICE 91”, Tsukuba, Japan, Nov. 1991, Nucl. Phys. B (Proc. Suppl.) 26 (1992) 31; G.P. Lepage, ibid. 45.
77. C.T.H. Davies et al., Phys. Rev. Lett. 73 (1994) 2654.
78. S.M. Catterall, R.R. Devlin, I.T. Drummond and R.R. Horgan, Phys. Lett. 321B (1994) 246.
|
2019-12-11 14:56:05
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8541135787963867, "perplexity": 1195.4000395108349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540531917.10/warc/CC-MAIN-20191211131640-20191211155640-00369.warc.gz"}
|
https://thecuriousastronomer.wordpress.com/tag/solar-eclipse/
|
## The February 2017 annular solar eclipse
Some of you may be aware that there is an annular eclipse of the Sun on Sunday 26 February, which is why I am posting this blog a few days before it. Annular eclipses occur when the Moon is a little too far away to block the Sun out entirely, so instead we see a ring of light around the Moon, as this picture below shows. This particular picture was taken during the May 20 2012 annular eclipse
An annular eclipse happens when the Moon is slightly too far away to block out the Sun entirely. This is a picture of the May 20 2012 annular eclipse.
## The Moon’s elliptical orbit about the Earth
The diagram below shows an exaggerated cartoon of the Moon’s orbit about the Earth. The Moon’s orbit is an ellipse, it has an eccentricity of 0.0549 (a perfect circle has an eccentricity of 0). The average distance of the Moon from the Earth (actually, the distance between their centres) is 384,400 kilometres. The point at which it is furthest from the Earth is called the apogee, and is at a distance of 405,400 km. The point at which it is closest is called the perigee, and it is at a distance of 362,600 km.
The Moon orbits the Earth in an ellipse, not a circle. The furthest it is from the Earth in its orbit (the apogee) is at a distance of 405,400 km, the nearest (the perigee) is at a distance of 362,600 km.
## The angular size of the Moon
It is pure coincidence that the Moon is the correct angular size to block out the Sun. The Moon is slightly oblate, but has a mean radius of 1,737 km. With its average distance of 384,400, this means that from the Earth’s surface (the Earth’s mean radius is 6,371 km) the Moon has an angular size on the sky of
$2 \times \tan^{-1} \left( \frac{ (1.737 \times 10^{6}) }{ (3.84 \times 10^{8} - 6.371 \times 10^{6}) } \right) = 2 \times \tan^{-1} (4.59975 \times 10^{-3} )$
$= 2 \times 0.2635 = \boxed{ 0.527 ^{\circ} \text{ or } 31.62 \text{ arc minutes} }$
So, just over half a degree on the sky. But, this of course will vary depending on its distance. When it is at apogee (furthest away), its angular size will be
$\boxed{ \text{ at apogee } 29.93 \text{ arc minutes } }$
and when it is at perigee (closest) it will be
$\boxed{ \text{ at perigee } 33.53 \text{ arc minutes } }$
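For anyone who wants to reproduce these numbers, here is a short R sketch of the same calculation, using the rounded radii and distances quoted above (nothing beyond the arithmetic already shown):

# Angular size (in arc minutes) of a body of radius r_km seen from distance d_km,
# measured from the Earth's surface (so the Earth's 6371 km radius is subtracted).
angular_size_arcmin <- function(r_km, d_km) {
  2 * atan(r_km / (d_km - 6371)) * (180 / pi) * 60
}
angular_size_arcmin(1737, 384400)  # Moon at its average distance: ~31.6 arc minutes
angular_size_arcmin(1737, 405400)  # Moon at apogee: ~29.9 arc minutes
angular_size_arcmin(1737, 362600)  # Moon at perigee: ~33.5 arc minutes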
## The angular size of the Sun
The Sun has an equatorial radius of 695,700 km, and its average distance from us is 149.6 million km (the Astronomical Unit – AU). So, at this average distance the Sun has an angular size of
$2 \times \tan^{-1} \left( \frac{ (6.957 \times 10^{8}) }{ (1.496 \times 10^{11} - 6.371 \times 10^{6} ) } \right) = 2 \times 0.266 = \boxed {0.533^{\circ} }$
Converting this to arc minutes, we get that the angular size of the Sun at its average distance is
$\boxed{ 31.97 \text{ arc minutes} }$
Compare this to the angular size of the Moon at its average distance, which we found to be $31.62 \text{ arc minutes}$.
The angular size of the Sun varies much less than the variation in the angular size of the Moon, at aphelion (when we are furthest) from the Sun, we are at a distance of 152.1 million km, so this gives an angular size of
$\boxed{ \text{ at aphelion } 31.44 \text{ arc minutes } }$
and, at perihelion, when the distance to the Sun is 147.095 million km, the angular size of the Sun is
$\boxed{ \text{ at perihelion } 32.52 \text{ arc minutes } }$
## Annular Eclipses
So, from the calculations above one can see that, if the Moon is at or near perigee, its angular size of $33.53 \text{ arc minutes }$ is more than enough to block out the Sun. When the Moon is at its average distance, its angular size is $31.62 \text { arc minutes }$, which is enough to block out the Sun unless we are near perihelion. But, when the Moon is near apogee, its angular size drops to $29.93 \text{ arc minutes }$, and this is not enough to block out the Sun, even if we are at aphelion.
The Earth is at perihelion in early January (this year it was on January 4), so the Sun is slightly larger in the sky than it will be in August for the next solar eclipse. This, combined with the Moon being near its apogee, which occurred on February 18 (for a table of the dates of the Moon’s apogees and perigees in 2017 follow this link), means that the solar eclipse on Sunday February 26 is annular, and not total.
## The February 26 2017 Annular Eclipse
Here is a map of the path of the eclipse, it is taken from the wonderful NASA Eclipse website. If you follow this link, you can find interactive maps of all the eclipses from -1999 BC to 3000 AD! If you have about 6 years to waste, this is an ideal place to do it!
The February 26 2017 annular eclipse will start in the southern Pacific ocean, sweep across Chile and Argentina, then across the Atlantic Ocean, before reaching Angola, Zambia and the Democratic Republic of Congo (Congo-Kinshasa)
The eclipse finishes in Africa and, as luck would have it, I am going to be in Namibia on the day of the eclipse. In fact, if you are reading this anytime in the week before the eclipse, I am already there. I am in Namibia for a week as part of Cardiff University’s Phoenix Project, and I will be giving a public lecture at the University of Namibia about the eclipse on Wednesday 22 February. I also hope to give a public lecture to the Namibian Scientific Society on the Friday, and on the Sunday I will be helping University of Namibia astronomers with a public observing session in Windhoek.
The February 26 2017 annular eclipse will finish in Africa, passing through Angola, Zambia and the Democratic Republic of Congo (Congo-Kinshasa)
The interactive map to this eclipse, which you can find by following this link, allows you to click on any place and find out the eclipse details for that location. So, for Windhoek, the eclipse begins at 15:09 UT (which will be 17:09 local time), with the maximum of the partial eclipse being at 16:16 UT (18:16 local time), and the eclipse ending at 17:16 UT (19:16 local time). Because Windhoek is to the south of the path which will experience an annular eclipse, it will be a partial eclipse, with a coverage of 69%.
As seen from Windhoek, where I will be for the annular eclipse, the obscuration will be 69%.
So, if you are anywhere in Chile, Argentina, western South Africa, Namibia, Angola, or the western parts of Congo-Kinshasa and Congo-Brazzaville, look out for this wonderful astronomical event this coming Sunday. And remember to follow the safety advice when viewing an eclipse; never look directly at the Sun, and only look through a viewing device that has correct filtration. Failure to follow these precautions can permanently damage your eyesight.
## Upcoming solar eclipses
I have had a few people ask me about the Solar eclipse on the 20th of March, and whether it is worth seeing; or are people better off waiting for another one in the next few years? So, here is my attempt to answer those questions.
This upcoming eclipse on the 20th of March will be the last total eclipse visible from anywhere in Europe until 2026, but as you can see from the figure below, the path of totality is way north where no one lives! From mainland Europe and the British Isles it will be partial, and depending on how far south you are that will determine how partial it appears.
If you look closely at the diagram below you will see that everyone in the British Isles will see the eclipse as more than 80%, which is not too bad. Although the figure does not have the curve, I suspect in Scotland it is more than 90%. Ditto main-land Europe, if you can get up as far north as Scandinavia you will see a more than 80% eclipse. But, if you are in France or Germany or central Europe, it is going to be between 60% and 80%. Places like London or Cardiff (where I live) look like they will see an 84-85% eclipse, which I am pretty pleased about as I thought it was going to be less.
The eclipse on the 20th of March is total if you are far enough north, but to most of us in Europe it will be a partial eclipse. From the Disunited Kingdom it will be better if you are in Scotland than if you are in southern England or south Wales.
The next total eclipse after this one is on March the 9th next year (2016). But, for those of us in Europe or North America, it involves a bit of a trek to Asia. The eclipse starts near Indonesia, and sweeps out across the Pacific ocean. It doesn’t really cross any largely populated land-masses, apart from Borneo I guess.
This eclipse, on the 9th of March 2016, passes just to the north of Indonesia and sweeps out across the Pacific ocean.
After the 2016 total eclipse, the next eclipse is the big one. On the 21st of August 2017 there will be a total eclipse which will sweep across the continental United States! I am guessing that this will probably be the most observed solar eclipse in history so far; the only one to possibly rival it would be the eclipse which swept across mainland Europe in 1999, which is to date the only total eclipse I have seen.
Details of the total eclipse on the 21st of August 2017. As you can see, this one passes right across the continental United States, and will probably be the most observed total eclipse in history.
So, in answer to the question “which is the best solar eclipse to try and see over the next few years?”, I would have to say it is the 21st of August 2017 one. I also suspect that there will be tens of millions of people, if not hundreds of millions, all trying to view this eclipse, so the path of totality may get quite crowded! But the eclipse next month is well worth seeing, even if most of us in Europe will only see it as a partial eclipse. I well remember seeing a partial eclipse as a teenager, and I also saw a partial one in 1994. Whilst not as spectacular as being in the path of totality, it is still a memorable sight to see the Moon move across the Sun.
## November 2013 Solar eclipse in Kenya
Later this year, on the 3rd of November, there will be a Solar eclipse which will pass across the Atlantic ocean, across Africa and end in Somalia. The eclipse is actually termed a hybrid eclipse, because it starts as annular (where the Moon is a little further from the Earth and so does not block out all of the Sun), but changes to a total eclipse during the event, as the distance between the Moon and the Earth gets less. The figure below shows the path of the eclipse.
I am part of a planned expedition to go and see the eclipse in Kenya. The expedition is run by the International Space Schools Education Trust (ISSET – a registered UK charity), and is part of their Astronaut Leadership Experience. I will go as the trip’s astronomy expert, and in addition to Chris Barber of ISSET, who will organise and lead the expedition, there will be two people from NASA, Ken Ham who has commanded Space Shuttle missions, and his wife Michelle who is an astronaut trainer.
The path of the eclipse in Kenya is shown in the figure below, it passes right across Lake Turkana which is in the northern part of Kenya near its borders with Uganda and Ethiopia and South Sudan.
This area is very beautiful and very interesting. It is part of the Great Rift Valley, and its shores have yielded some of the oldest hominid fossils ever discovered. Lake Turkana also boasts a volcanic island, which is shown in one of the slides below. The plan is to fly to Kenya on the 26th of October, and go up to the Lake Turkana region towards the end of the week for the eclipse, which is on the 3rd. Before going to the north, we will go to the Masai Mara National Park, Mount Kenya (which straddles the equator), and explore more areas of the Great Rift Valley.
The trip is open to anyone; to find out the costs and day-by-day itinerary you can visit the website here. The costs include return (round trip) flights to Kenya, and all accommodation, food and transport whilst in Kenya. If it is anything like the week we spent in the Gobi Desert in June for the Transit of Venus, it will be a truly memorable week full of activity, learning, exploration and fun.
|
2019-07-22 22:19:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5010648369789124, "perplexity": 924.350371397787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528290.72/warc/CC-MAIN-20190722221756-20190723003756-00397.warc.gz"}
|
https://electronics.stackexchange.com/questions/193125/coil-design-for-induction-cooking-system
|
# Coil design for induction cooking system
I am designing an induction heating circuit using a half bridge, series resonant circuit. Wildly simplified schematic follows (will use IGBTs instead of MOSFETs for the final design, but they were not available in the schematic editor):
(Schematic created using CircuitLab.)
L1 will be the induction heating coil, using a spiral geometry similar to this:
The basic theory is explained in lots of materials around the web such as this old ST application note.
Given a value for L1, and a desired switching frequency $f$ (say 20-30 kHz), one can calculate a value for C using $$f = \frac{1}{2 \pi \sqrt{L_1 C}}$$
From there we can take $C_2 = C_3 = C/2$. The actual switching frequency should be safely above resonance so as to ensure the circuit works in the inductive area.
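To make the arithmetic concrete, here is a small R sketch of that capacitor selection. The 50 µH coil value and 25 kHz target are placeholders only, not measured values for this design:

# Resonant capacitance for a coil inductance L and target frequency f,
# from f = 1 / (2 * pi * sqrt(L * C)). The numbers below are placeholders.
resonant_C <- function(L_henry, f_hz) {
  1 / ((2 * pi * f_hz)^2 * L_henry)
}
C  <- resonant_C(L_henry = 50e-6, f_hz = 25e3)  # e.g. a 50 uH coil at 25 kHz
C2 <- C / 2                                     # split across the two half-bridge capacitors
C3 <- C / 2
c(C = C, C2 = C2, C3 = C3)                      # values in farads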
Of course, all of this assumes that a value of $L_1$ is given, and this is the part where I am stuck. I've been searching the internet as well as academic papers, but so far I haven't found a design procedure detailing how to select $L_1$ so as to achieve the desired heating power.
In principle I could just build an inductor of the desired physical size (say 20 cm of diameter), measure it with an LCR meter, and then select $C_2$ and $C_3$ according to the procedure above. However, say I build this circuit and it doesn't achieve the desired heating power; then what should I do next? Increase the physical size of the inductor? Increase/decrease inductance (with a corresponding adjustment in the capacitor to maintain the switching frequency constant)?
In summary: how should I go about actually designing/engineering the induction heating coil, rather than just applying blind trial and error?
• L depends on the coil as well as on the pot. I guess a cast iron pan results in a higher L than an aluminum pan with some stainless steel in the bottom. – sweber Oct 2 '15 at 8:14
You should use one additional inductor Lr (the resonant inductor) in series; both capacitors C2 and C3 could then be marked as Cr. In your formula, L1C then becomes LrCr.
L1 should have only a minimal effect on the resonant frequency, since L1 << Lr.
However, if you don't want to use an additional Lr, then the resonant frequency depends on the load applied. The inductor acts like an N:1 transformer, where N is the equivalent number of primary turns (the coil) and 1 is the single secondary turn (the pot). The load (the resistance of the pot) is connected to the secondary. Since the resonant frequency changes with the load applied (a different pot), you need a switching device capable of measuring the phase difference between voltage and current and adapting the frequency. If you want to stay above the resonant frequency, you can achieve this by setting a setpoint on the phase angle; this is done with a PLL. Some induction heating devices use a PLL, and you can also check the Tesla coil builders' forums, as they also use these self-resonant (quasi-resonant) techniques.
• Your answer makes sense and would definitely make the design process easier. However, I have found no mention of induction cooking resonant circuits designed this way in my research; see e.g. the ST application note I linked to, or this schematic of a commercial induction cooker, which uses a different topology, but still no mention of a separate inductor, except as a filter for the bridge rectifier, and not for resonance. – swineone Oct 2 '15 at 10:32
|
2019-08-23 07:57:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.639579176902771, "perplexity": 986.8282195238867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318011.89/warc/CC-MAIN-20190823062005-20190823084005-00527.warc.gz"}
|
https://electronics.stackexchange.com/questions/239937/what-happens-when-discharging-coin-cell-battery-too-much
|
What happens when discharging coin cell battery too much?
BACKGROUND:
I'm working on a project where we use the nRF51 Bluetooth low energy chip, and our first prototype boards just arrived.
So I've made some power measurements trying to calculate the battery life:
When idle: 3,1uA
When bluetooth connected: 4,5mA
The bluetooth won't be connected that often, so I can actually live with the resulting battery life for the prototype (it gives me around 17-18 weeks of battery life).
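For reference, a rough duty-cycle estimate along these lines, written as an R calculation. The 0.12% connected time is just an assumed figure chosen to reproduce the quoted estimate; the real connection pattern and usable capacity will differ:

# Rough battery-life estimate for a duty-cycled load (illustrative numbers only).
battery_life_weeks <- function(capacity_mAh, idle_mA, active_mA, active_fraction) {
  avg_mA <- idle_mA * (1 - active_fraction) + active_mA * active_fraction
  (capacity_mAh / avg_mA) / (24 * 7)   # hours -> weeks
}
# 25 mAh cell, 3.1 uA idle, 4.5 mA connected, connected ~0.12% of the time
battery_life_weeks(25, 0.0031, 4.5, 0.0012)   # ~17.5 weeks, in line with the estimate above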
We're using a Panasonic CR1216 battery: https://industrial.panasonic.com/cdbs/www-data/pdf2/AAA4000/AAA4000C277.pdf
And just now I realised that the battery voltage takes a good drop when the bluetooth is connected for a longer time. The max connected time is 10sec and there I got:
Start: 2,99V drops exponentially to 2,7V over 10sec.
When the bluetooth disconnects it slowly climbs towards 2,99V again. I've waited 10min now and it is around 2,96V.
QUESTION:
In the battery datasheet I can't find anything about max discharge current? But clearly the 4,5mA is too much? What will happen to the battery's 25mAh? (I know that it is hard to give a specific number, I'm just interested in knowing if my battery will die within 2 days instead of my 17weeks).
EDIT: I've found a similar battery whose datasheet gives a max continuous discharge current of 1mA: http://www.rossmannweb.de/files/File/Kataloge/CR1216_Renata_Rossmann.pdf
• This question is very similar: Pulse-powering heavy loads with a coin cell. The app notes linked in that topic also provide useful information. – SamGibson Jun 8 '16 at 18:12
• Have you considered a larger battery? If you go from 12mm to 20mm diameter, you can go to a CR2032 and have 10x the capacity. – DoxyLover Jun 8 '16 at 21:16
• The last chart "Capacity vs load resistance" is probably the most relevant. But you've gone off the edge of the chart for the current draw you're asking. So it's hard to say. – Simon B Jun 8 '16 at 22:17
|
2019-08-24 22:42:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20081555843353271, "perplexity": 2408.3026101088667}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321786.95/warc/CC-MAIN-20190824214845-20190825000845-00049.warc.gz"}
|
https://math.answers.com/Q/How_do_you_Find_the_circumference_when_you_only_have_the_are
|
# How do you find the circumference when you only have the area?
Wiki User
2010-04-08 23:34:46
area = pi * r^2, so divide the area by pi and take the square root to find the radius.
Then circumference = 2 * pi * r.
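The same recipe as a small R function (purely illustrative):

# Circumference from area: r = sqrt(area / pi), then circumference = 2 * pi * r.
circumference_from_area <- function(area) {
  r <- sqrt(area / pi)
  2 * pi * r
}
circumference_from_area(25)   # a circle of area 25 has circumference ~17.72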
|
2022-11-29 05:27:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8213578462600708, "perplexity": 6238.457736322594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710685.0/warc/CC-MAIN-20221129031912-20221129061912-00791.warc.gz"}
|
https://bookdown.org/jl5522/MRP-case-studies/introduction-to-mrp.html?utm_source=hs_email&utm_medium=email&_hsenc=p2ANqtz-_4iNtPY8XpI-AbUBhzCWAgJpfbNERN_b6sjSO1AlxGJOiHBeC9F85EMMno117DI3yBMC9L
|
# Chapter 1 Introduction to MRP
Multilevel regression and poststratification (MRP, also called MrP or Mister P) has become widely used in two closely related applications:
1. Small-area estimation: Subnational surveys are not always available, and even then finding comparable surveys across subnational units is rare. However, public views at the subnational level are often central, as many policies are decided by local governments or subnational area representatives at national assemblies. MRP allows us to use national surveys to generate reliable estimates of subnational opinion (Park, Gelman, and Bafumi (2004), Lax and Phillips (2009a), Lax and Phillips (2009b), Kiewiet de Jonge, Langer, and Sinozich (2018)).
2. Using nonrepresentative surveys: Many surveys face serious difficulties in recruiting representative samples of participants (e.g. because of non-response bias). However, with proper statistical adjustment, nonrepresentative surveys can be used to generate accurate opinion estimates (Wang et al. (2015), Downes et al. (2018)).
This initial chapter introduces MRP in the context of public opinion research. Following a brief introduction to the data, we will describe the two essential stages of MRP: building an individual-response model and using poststratification. First, we take individual responses to a national survey and use multilevel modeling in order to predict opinion estimates based on demographic-geographic subgroups (e.g. middle-aged white female with postgraduate education in California). Secondly, these opinion estimates by subgroups are weighted by the frequency of these subgroups at the (national or subnational) unit of interest. With these two steps, MRP emerged (Gelman and Little (1997)) as an approach that brought together the advantages of regularized estimation and poststratification, two techniques that had shown promising results in the field of survey research (see Fay and Herriot (1979) and Little (1993)). After presenting how MRP can be used for obtaining subregion or subgroup estimates and for adjusting for nonresponse bias, we will conclude with some practical considerations.
## 1.1 Data
### Survey data
The first step is to gather and recode raw survey data. These surveys should include some respondent demographic information and some type of geographic indicator (e.g. state, congressional district). In this case, we will use data from the 2018 Cooperative Congressional Election Study (Schaffner, Ansolabehere, and Luks (2018)), a US nationwide survey designed by a consortium of 60 research teams and administered by YouGov. The outcome of interest in this introduction is a dichotomous question:
Allow employers to decline coverage of abortions in insurance plans (Support / Oppose)
Apart from the outcome measure, we will consider a set of geographic-demographic factors that will be used as predictors in the first stage and that define the geographic-demographic subgroups for the second stage. Even though some of these variables may be continous (e.g. age, income), we must split them into intervals to create a factor with different levels. As we will see in a moment, these factors and their corresponding levels need to match the ones in the postratification table. In this case, we will use the following factors with the indicated levels:
• State: 50 US states ($$S = 50$$).
• Age: 18-29, 30-39, 40-49, 50-59, 60-69, 70+ ($$A = 6$$).
• Gender: Female, Male ($$G = 2$$).
• Ethnicity: (Non-hispanic) White, Black, Hispanic, Other (which also includes Mixed) ($$R = 4$$).
• Education: No HS, HS, Some college, 4-year college, Post-grad ($$E = 5$$).
cces_all_df <- read_csv("data_public/chapter1/data/cces18_common_vv.csv.gz")
# Preprocessing
cces_all_df <- clean_cces(cces_all_df, remove_nas = TRUE)
Details about how we preprocess the CCES data using the clean_cces() function can be found in the appendix.
The full 2018 CCES consist of almost 60,000 respondents. However, most studies work with a smaller national survey. To show how MRP works in these cases, we take a random sample of 5,000 participants and work with the sample instead of the full CCES. Obviously, in a more realistic setting we would always use all the available data.
# We set the seed to an arbitrary number for reproducibility.
set.seed(1010)
# For clarity, we will call the full survey with 60,000 respondents cces_all_df,
# while the 5,000 person sample will be called cces_df. 'df' stands for data frame,
# the most frequently used two dimensional data structure in R.
cces_df <- cces_all_df %>% sample_n(5000)
| abortion | state | eth | male | age | educ |
|---|---|---|---|---|---|
| 1 | WI | White | -0.5 | 60-69 | 4-Year College |
| 1 | NJ | White | -0.5 | 60-69 | HS |
| 0 | FL | White | -0.5 | 40-49 | HS |
| 1 | FL | White | 0.5 | 70+ | Some college |
| 0 | IL | White | -0.5 | 50-59 | Some college |
| 0 | OK | Other | -0.5 | 18-29 | Some college |
### Poststratification table
The poststratification table reflects the number of people in the population of interest that, according to a large survey, corresponds to each combination of the demographic-geographic factors. In the US context it is typical to use Decennial Census data or the American Community Survey, although we can of course use any other large-scale surveys that reflects the frequency of the different demographic types within any geographic area of interest. The poststratification table will be used in the second stage to poststratify the estimates obtained for each subgroup. For this, it is central that the factors (and their levels) used in the survey match the factors obtained in the census. Therefore, MRP is in principle limited to use individual-level variables that are present both the survey and the census. For instance, the CCES includes information on respondent’s religion, but as this information is not available in the census we are not able to use this variable. Chapter 13 will cover different approaches to incorporate noncensus variables into the analysis. Similarly, the levels of the factors in the survey of interest are required to match the ones in the large survey used to build the poststratification table. For instance, the CCES included ‘Middle Eastern’ as an option for ethnicity, while the census data we used did not include it. Therefore, people who identified as ‘Middle Eastern’ in the CCES had to be included in the ‘Other’ category.
In this case, we will base our poststratification table on the 2014-2018 American Community Survey (ACS), a set of yearly surveys conducted by the US Census that provides estimates of the number of US residents according to a series of variables that include our poststratification variables. As we defined the levels for these variables, the poststratification table must have $$50 \times 6 \times 2 \times 4 \times 5 = 12,000$$ rows. This means we actually have more rows in the poststratification table than observed units, which necessarily implies that there are some combinations in the poststratification table that we don’t observe in the CCES sample.
# Load data frame created in the appendix. The data frame that contains the poststratification
# table is called poststrat_df
poststrat_df <- read_csv("data_public/chapter1/data/poststrat_df.csv")
| state | eth | male | age | educ | n |
|---|---|---|---|---|---|
| AL | White | -0.5 | 18-29 | No HS | 23948 |
| AL | White | -0.5 | 18-29 | HS | 59378 |
| AL | White | -0.5 | 18-29 | Some college | 104855 |
| AL | White | -0.5 | 18-29 | 4-Year College | 37066 |
| AL | White | -0.5 | 18-29 | Post-grad | 9378 |
| AL | White | -0.5 | 30-39 | No HS | 14303 |
For instance, the first row in the poststratification table indicates that there are 23,948 Alabamians that are white, male, between 18 and 29 years old, and without a high school degree.
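As a quick sanity check, the table should contain exactly one row per demographic-geographic cell. A short sketch, assuming the tidyverse functions used in the earlier chunks are loaded:

# The poststratification table should have 50 * 6 * 2 * 4 * 5 = 12,000 rows,
# one per combination of the five factors, with no duplicated cells.
nrow(poststrat_df)                         # should be 12000
poststrat_df %>%
  distinct(state, eth, male, age, educ) %>%
  nrow()                                   # also 12000 if every cell appears exactly once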
Every MRP study requires some degree of data wrangling in order to make the factors in the survey of interest match the factors available in the census. The code shown in the appendix can be used as a template to download the ACS data and make it match with a given survey of interest.
### Group-level predictors
The individual-response model used in the first stage can include group-level predictors, which are particularly useful to reduce unexplained group-level variation by accounting for structured differences among the states. For instance, most national-level surveys in the US tend to include many participants from a state such as New York, but few from a small state like Vermont. This can result in noisy estimates for the effect of being from Vermont. The intuition is that by including state-level predictors, such as the Republican voteshare in a previous election or the percentage of Evangelicals in each state, the model is able to account for how similar Vermont is to New York and other more populous states, and therefore to produce more precise estimates. These group-level predictors do not need to be available in the census, nor do they have to be converted to factors, and in many cases they are readily available. A more detailed discussion on the importance of building a reasonable model for predicting opinion, and how state-level predictors can be a key element in this regard, can be found in Lax and Phillips (2009b) and Buttice and Highton (2013).
In our example, we will include two state-level predictors: the geographical region (Northeast, North Central, South, and West) and the Republican vote share in the 2016 presidential election.
# Read statelevel_predictors.csv in a dataframe called statelevel_predictors
statelevel_predictors_df <- read_csv('data_public/chapter1/data/statelevel_predictors.csv')
| state | repvote | region |
|---|---|---|
| AL | 0.64 | South |
| AK | 0.58 | West |
| AZ | 0.52 | West |
| AR | 0.64 | South |
| CA | 0.34 | West |
| CO | 0.47 | West |
### Exploratory data analysis
In the previous steps we have obtained a 5,000-person sample from the CCES survey and also generated a poststratification table using census data. As a first exploratory step, we will check if the frequencies for the different levels of the factors considered in the CCES data are similar to the frequencies reported in the census. If this was not the case, we will start suspecting some degree of nonresponse bias in the CCES survey.
For clarity, the levels in the plots follow their natural order in the case of age and education, ordering the others by the approximate proportion of Republican support.
We see that our 5,000-participant CCES sample does not differ too much from the target population according to the American Community Survey. This should not be surprising, as the CCES intends to use a representative sample.
In general, we recommend checking the differences between the sample and the target population. In this case, the comparison has been based on the factors that are going to be used in MRP. However, even if some non-response bias existed for any of these factors MRP would be able to adjust for it, as we will see more in detail in subsection 4. Therefore, it may be especially important to compare the sample and target population with respect to the variables that are not going to be used in MRP – and, consequently, where we will not be able to correct any outcome measure bias due to differential non-response in these non-MRP variables.
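One way to carry out such a comparison for a single factor, shown here for education, is sketched below. It assumes the tidyverse is loaded as in the earlier chunks, and weights the census-based shares by the cell counts n:

# Share of each education level in the CCES sample versus the ACS-based
# poststratification table (population shares weighted by the cell counts n).
educ_sample <- cces_df %>%
  count(educ) %>%
  mutate(prop_sample = n / sum(n)) %>%
  select(-n)
educ_popn <- poststrat_df %>%
  group_by(educ) %>%
  summarise(prop_popn = sum(n) / sum(poststrat_df$n))
left_join(educ_sample, educ_popn, by = "educ")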
## 1.2 First stage: Estimating the Individual-Response Model
The first stage is to use a multilevel logistic regression model to predict the outcome measure based on a set of factors. Having a plausible model to predict opinion is central for MRP to work well.
The model we use in this example is described below. It includes varying intercepts for age, ethnicity, education, and state, where the variation for the state intercepts is in turn influenced by the region effects (coded as indicator variables) and the Republican vote share in the 2016 election. As there are only two levels for gender, it is preferable to model it as a predictor for computational efficiency. Additionally, we include varying intercepts for the interaction between gender and ethnicity, education and age, and education and ethnicity (see Ghitza and Gelman (2013) for an in-depth discussion on interactions in the context of MRP).
$Pr(y_i = 1) = logit^{-1}( \alpha_{\rm s[i]}^{\rm state} + \alpha_{\rm a[i]}^{\rm age} + \alpha_{\rm r[i]}^{\rm eth} + \alpha_{\rm e[i]}^{\rm educ} + \beta^{\rm male} \cdot {\rm Male}_{\rm i} + \alpha_{\rm g[i], r[i]}^{\rm male.eth} + \alpha_{\rm e[i], a[i]}^{\rm educ.age} + \alpha_{\rm e[i], r[i]}^{\rm educ.eth} )$ where:
\begin{aligned} \alpha_{\rm s}^{\rm state} &\sim {\rm normal}(\gamma^0 + \gamma^{\rm south} \cdot {\rm South}_{\rm s} + \gamma^{\rm northcentral} \cdot {\rm NorthCentral}_{\rm s} + \gamma^{\rm west} \cdot {\rm West}_{\rm s} \\ & \quad + \gamma^{\rm repvote} \cdot {\rm RepVote}_{\rm s}, \sigma^{\rm state}) \textrm{ for s = 1,...,50}\\ \alpha_{\rm a}^{\rm age} & \sim {\rm normal}(0,\sigma^{\rm age}) \textrm{ for a = 1,...,6}\\ \alpha_{\rm r}^{\rm eth} & \sim {\rm normal}(0,\sigma^{\rm eth}) \textrm{ for r = 1,...,4}\\ \alpha_{\rm e}^{\rm educ} & \sim {\rm normal}(0,\sigma^{\rm educ}) \textrm{ for e = 1,...,5}\\ \alpha_{\rm g,r}^{\rm male.eth} & \sim {\rm normal}(0,\sigma^{\rm male.eth}) \textrm{ for g = 1,2 and r = 1,...,4}\\ \alpha_{\rm e,a}^{\rm educ.age} & \sim {\rm normal}(0,\sigma^{\rm educ.age}) \textrm{ for e = 1,...,5 and a = 1,...,6}\\ \alpha_{\rm e,r}^{\rm educ.eth} & \sim {\rm normal}(0,\sigma^{\rm educ.eth}) \textrm{ for e = 1,...,5 and r = 1,...,4}\\ \end{aligned}
Where:
• $$\alpha_{\rm a}^{\rm age}$$: The effect of subject $$i$$’s age on the probability of supporting the statement.
• $$\alpha_{\rm r}^{\rm eth}$$: The effect of subject $$i$$’s ethnicity on the probability of supporting the statement.
• $$\alpha_{\rm e}^{\rm educ}$$: The effect of subject $$i$$’s education on the probability of supporting the statement.
• $$\alpha_{\rm s}^{\rm state}$$: The effect of subject $$i$$’s state on the probability of supporting the statement. As we have a state-level predictor (the Republican vote share in the 2016 election), we need to build another model in which $$\alpha_{\rm s}^{\rm state}$$ is the outcome of a linear regression with an expected value determined by an intercept $$\gamma^0$$, the effect of the region coded as indicator variables (with Northeast as the baseline level), and the effect of the Republican vote share $$\gamma^{\rm repvote}$$.
• $$\beta^{\rm male}$$: The average effect of being male on the probability of supporting the statement. We could have used a similar formulation as in the previous cases (i.e. $$\alpha_{\rm g}^{\rm gender} \sim N(0, \sigma^{\rm gender})$$), but having only two levels (i.e. male and female) can create some estimation problems.
• $$\alpha_{\rm g,r}^{\rm male.eth}$$ and $$\alpha_{\rm e,a}^{\rm educ.age}$$: In the survey literature it is common practice to include these two interactions.
• $$\alpha_{\rm e,r}^{\rm educ.eth}$$: In the next section we will explore public opinion on required abortion coverage at the different levels of education and ethnicity. It is, therefore, a good idea to also include this interaction.
Readers without a background in multilevel modeling may be surprised to see this formulation. Why are we using terms such as $$\alpha_{\rm r}^{\rm eth}$$ instead of the much more common method of creating an indicator variable for each level (e.g. $$\beta^{\rm white} \cdot {\rm White}_{i} + \beta^{\rm black} \cdot {\rm Black}_{i} + ...$$)? The answer is that this approach allows us to share information between the levels of each variable (e.g. the different ethnicities), preventing levels with less data from being too sensitive to the few observed values. For instance, it could happen that we only surveyed ten Hispanics, and that none of them turned out to agree that employers should be able to decline abortion coverage in insurance plans. Under the typical approach, the model would take this data too seriously and conclude that Hispanics necessarily oppose this statement (i.e. $$\beta^{\rm hispanic} = - \infty$$). We know, however, that this is not the case. It may be that Hispanics are less likely to support the statement, but from such a small sample size it is impossible to know. What the multilevel model does is partially pool the varying intercept for Hispanics towards the average across all ethnicities (i.e. in our model, the average across all ethnicities is fixed at zero), making it negative but far from the unrealistic negative infinity. This pooling is data-dependent: the smaller the sample size in a level, the more strongly its varying intercept is pulled towards the average. In fact, if the sample size for a certain level were zero, the estimated varying intercept would simply be the average of the coefficients for all the other levels. We recommend Gelman and Hill (2006) for an introduction to multilevel modeling; a small numerical sketch of this pooling follows.
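To give a rough numerical feel for this data-dependent pooling (an illustration added here, not part of the original analysis), consider the simpler one-way normal model, where the approximate partially pooled estimate of a group intercept is a precision-weighted average of the raw group mean and the grand mean (Gelman and Hill (2006)):
# Illustration only: partial pooling in a one-way normal model. The group
# estimate is pulled from the raw group mean ybar_j toward the grand mean mu,
# and the pull is stronger when the group sample size n_j is small.
partial_pool <- function(ybar_j, n_j, mu, sigma_y, sigma_alpha) {
  w <- (n_j / sigma_y^2) / (n_j / sigma_y^2 + 1 / sigma_alpha^2)
  w * ybar_j + (1 - w) * mu
}
partial_pool(ybar_j = 0, n_j = 10,   mu = 0.45, sigma_y = 0.5, sigma_alpha = 0.1)  # heavily pooled
partial_pool(ybar_j = 0, n_j = 1000, mu = 0.45, sigma_y = 0.5, sigma_alpha = 0.1)  # close to ybar_j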
The rstanarm package allows the user to conduct complex regression analyses in Stan with the simplicity of standard formula notation in R. stan_glmer(), the function that fits generalized linear multilevel models, uses the same notation as the lme4 package (see the lme4 documentation). That is, we specify the varying intercepts as (1 | group) and the interactions as (1 | group1:group2), where the : operator creates a new grouping factor that consists of the combined levels of the two groups (i.e. this is the same as pasting together the levels of both factors). However, this syntax only accepts predictors at the individual level, and thus the two state-level predictors must be expanded to the individual level (see Gelman and Hill (2006), p. 265-266). Notice that:
\begin{aligned} \alpha_{\rm s}^{\rm state} &\sim {\rm normal}(\gamma^0 + \gamma^{\rm south} \cdot {\rm South}_{\rm s} + \gamma^{\rm northcentral} \cdot {\rm NorthCentral}_{\rm s} + \gamma^{\rm west} \cdot {\rm West}_{\rm s} + \gamma^{\rm repvote} \cdot {\rm RepVote}_{\rm s}, \sigma^{\rm state}) \\ &= \underbrace{\gamma^0}_\text{Intercept} + \underbrace{{\rm normal}(0, \sigma^{\rm state})}_\text{State varying intercept} + \underbrace{\gamma^{\rm south} \cdot {\rm South}_{\rm s} + \gamma^{\rm northcentral} \cdot {\rm NorthCentral}_{\rm s} + \gamma^{\rm west} \cdot {\rm West}_{\rm s} + \gamma^{\rm repvote} \cdot {\rm RepVote}_{\rm s}}_\text{State-level predictors expanded to the individual level} \end{aligned}
Consequently, we can then reexpress the model as:
\begin{aligned} Pr(y_i = 1) =& logit^{-1}( \gamma^0 + \alpha_{\rm s[i]}^{\rm state} + \alpha_{\rm a[i]}^{\rm age} + \alpha_{\rm r[i]}^{\rm eth} + \alpha_{\rm e[i]}^{\rm educ} + \beta^{\rm male} \cdot {\rm Male}_{\rm i} + \alpha_{\rm g[i], r[i]}^{\rm male.eth} + \alpha_{\rm e[i], a[i]}^{\rm educ.age} + \alpha_{\rm e[i], r[i]}^{\rm educ.eth} + \gamma^{\rm south} \cdot {\rm South}_{\rm s} \\ &+ \gamma^{\rm northcentral} \cdot {\rm NorthCentral}_{\rm s} + \gamma^{\rm west} \cdot {\rm West}_{\rm s} + \gamma^{\rm repvote} \cdot {\rm RepVote}_{\rm s}) \end{aligned}
In the previous version of the model, $$\alpha_{\rm s[i]}^{\rm state}$$ was informed by several state-level predictors. This reparametrization expands the state-level predictors to the individual level, and thus $$\alpha_{\rm s[i]}^{\rm state}$$ now represents the variance introduced by the state after adjusting for region and 2016 Republican vote share. Similarly, $$\gamma^0$$, which previously represented the state-level intercept, now becomes the individual-level intercept. The two parameterizations of the multilevel model are mathematically equivalent, and using one or the other is simply a matter of preference. The former highlights the role that state-level predictors have in accounting for structured differences among the states, while the latter is closer to the rstanarm syntax.
# Expand state-level predictors to the individual level
cces_df <- left_join(cces_df, statelevel_predictors_df, by = "state")
# Fit in stan_glmer
fit <- stan_glmer(abortion ~ (1 | state) + (1 | eth) + (1 | educ) + male +
(1 | male:eth) + (1 | educ:age) + (1 | educ:eth) +
repvote + factor(region),
data = cces_df,
prior = normal(0, 1, autoscale = TRUE),
prior_covariance = decov(scale = 0.50),
refresh = 0,
seed = 1010)
As a first pass to check whether the model is performing well, we must check that there are no warnings about divergent transitions, failure to converge, or maximum tree depth. Fitting the model with the default settings produced a few divergent transitions, and thus we decided to try increasing adapt_delta to 0.99 and introducing stronger priors than the rstanarm defaults. After doing this, the divergences disappeared. In the Computational Issues subsection we provide more details about divergent transitions and potential solutions.
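One way to inspect these diagnostics explicitly (a hedged sketch added here; it assumes the rstan package, on which rstanarm depends, is available) is:
# Check divergences, tree depth and E-BFMI on the underlying stanfit object
rstan::check_hmc_diagnostics(fit$stanfit)
# Rhat and effective sample sizes are included in the fit summary
summary(fit)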
print(fit)
## stan_glmer
## family: binomial [logit]
## formula: abortion ~ (1 | state) + (1 | eth) + (1 | educ) + male + (1 |
## male:eth) + (1 | educ:age) + (1 | educ:eth) + repvote + factor(region)
## observations: 5000
## ------
## Median MAD_SD
## (Intercept) -1.2 0.4
## male 0.3 0.1
## repvote 1.6 0.5
## factor(region)Northeast -0.1 0.1
## factor(region)South 0.2 0.1
## factor(region)West -0.1 0.1
##
## Error terms:
## Groups Name Std.Dev.
## state (Intercept) 0.203
## educ:age (Intercept) 0.201
## educ:eth (Intercept) 0.084
## male:eth (Intercept) 0.222
## educ (Intercept) 0.209
## eth (Intercept) 0.374
## Num. levels: state 50, educ:age 30, educ:eth 20, male:eth 8, educ 5, eth 4
##
## ------
## * For help interpreting the printed output see ?print.stanreg
## * For info on the priors used see ?prior_summary.stanreg
We can interpret the resulting model as follows:
• Intercept ($$\gamma^0$$): The global intercept corresponds to the expected outcome on the logit scale when all the predictors are equal to zero. In this case, this does not have a clear interpretation, as it is also influenced by the varying intercepts for state, age, ethnicity, education, and their interactions. Furthermore, it corresponds to the impractical scenario of someone in a state with zero Republican vote share.
• male ($$\beta^{\rm male}$$): The median estimate for this coefficient is 0.3, with a standard error (measured using the scaled median absolute deviation, MAD_SD) of 0.1. Using the divide-by-four rule (Gelman, Hill, and Vehtari (2020), Chapter 13), we see that, adjusting for the other covariates, males present up to a 7.5% $$\pm$$ 2.5% higher probability of supporting the right of employers to decline coverage of abortions relative to females (a short numerical check of this approximation follows this list).
• repvote ($$\gamma^{\rm repvote}$$): As repvote is scaled between 0 and 1, this coefficient corresponds to the difference in probability of supporting the statement between someone in a state in which no one voted Republican and someone whose state voted entirely Republican. This comparison is not realistic, so we start by dividing the median coefficient by 10, which corresponds to a 10-percentage-point difference in Republican vote share. This means that we expect someone from a state with a 55% Republican vote share to have approximately a $$\frac{1.6}{10}/4 = 4\%$$ ($$\pm 1.25\%$$) higher probability of supporting the statement relative to another individual with similar characteristics from a state in which Republicans received 45% of the vote.
• regionSouth ($$\gamma^{\rm south}$$): According to the model, we expect that someone from a state in the south has, adjusting for the other covariates, up to a 0.2/4 = 5% ($$\pm$$ 2%) higher probability of supporting the statement relative to someone from the Northeast, which was the baseline category. The interpretation for regionNorthCentral and regionWest is similar.
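As a quick numerical check of the divide-by-four approximation used for the male coefficient (an added illustration, not part of the original analysis), we can compare it with the exact difference in probabilities implied by the logistic function at different baselines:
beta_male <- 0.3
beta_male / 4                        # divide-by-four upper bound: 0.075
plogis(0 + beta_male) - plogis(0)    # exact difference at a 50% baseline: ~0.074
plogis(-1 + beta_male) - plogis(-1)  # smaller difference away from 50%: ~0.063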
### Subnational units not represented in the survey
It is fairly common for small-sample surveys not to include anyone from a particular subnational unit. For instance, a small national survey in the US may not include any participant from Wyoming. An important advantage of MRP is that we can still produce estimates for this state using the information from the participants in other states. Going back to the first parametrization of the multilevel model that we presented, $$\alpha^{\rm state}_{\rm s = Wyoming}$$ will be calculated based on Wyoming's region and its Republican vote share in the 2016 election – even in the absence of information about the effect of residing in Wyoming specifically. As we have already explained, including subnational-level predictors is always recommended, particularly considering that data at the subnational level is easy to obtain in many cases. However, when dealing with subnational units that are not represented in our survey these predictors become even more central, as they capture structured differences among the states and therefore allow for more precise estimation in the missing subnational areas.
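A hedged sketch of what this looks like in practice (this code is an added illustration: it assumes the poststrat_df and statelevel_predictors_df objects described in this case study, and simply treats Wyoming as if it were missing from the sample):
# Poststratification cells for Wyoming, with its state-level predictors attached
wy_cells <- poststrat_df %>%
  filter(state == "WY") %>%
  left_join(statelevel_predictors_df, by = "state")
# Posterior expected probability for every cell (draws x cells matrix)
epred_mat <- posterior_epred(fit, newdata = wy_cells)
# Weight the cells by their population counts to get MRP draws for the state
mrp_draws <- epred_mat %*% wy_cells$n / sum(wy_cells$n)
c(mean = mean(mrp_draws), sd = sd(mrp_draws))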
### Computational issues
Stan uses Hamiltonian Monte Carlo to explore the posterior distribution. In some cases, the geometry of the posterior distribution is too complex, making the Hamiltonian Monte Carlo “diverge”. This produces a warning indicating the presence of divergent transitions after warmup, which implies the model could present biased estimates (see Betancourt (2017) for more details). Usually, a few divergent transitions do not indicate a serious problem. There are, in any case, three potential solutions to this problem that do not involve reformulating the model: (i) a non-centered parametrization; (ii) increasing the adapt_delta parameter; and (iii) including stronger priors. Fortunately we don’t have to worry about (i), as rstanarm already uses a non-centered parametrization for the model. Therefore, we can focus on the other two.
1. Exploring the posterior distribution is somewhat like mapping a mountainous terrain, and a divergent transition is similar to falling down a very steep slope, with the consequence of not being able to correctly map that area. In this analogy, what the cartographer can do is take smaller steps when crossing the steep part to avoid falling. In Stan, the step size is set up automatically, but we can change a parameter called adapt_delta that controls it. By default adapt_delta = 0.95, but we can increase that number to make Stan take smaller steps, which should reduce the number of divergent transitions. The maximum value we can set for adapt_delta is close to (but necessarily less than) 1, with the downside that an increase implies a somewhat slower exploration of the posterior distribution. Usually, adapt_delta = 0.99 works well if we only have a few divergent transitions (a one-line way of rerunning the model with a higher adapt_delta is shown after the code block below).
2. However, there are cases in which increasing adapt_delta is not sufficient, and divergent transitions still occur. In this case, introducing more informative priors can be extremely helpful. Although rstanarm provides weakly informative priors by default, in most applications these tend to be too weak. By using more reasonable priors, we make the posterior distribution easier to explore.
• The priors for the scaled coefficients are $${\rm normal}(0, 2.5)$$. When the coefficients are not scaled, rstanarm will automatically adjust the scaling of the priors as detailed in the prior vignette. In most cases, and particularly when we find computational issues, it is reasonable to give stronger priors on the scaled coefficients such as $${\rm normal}(0, 1)$$.
• Multilevel models with multiple group-level standard deviation parameters (e.g. $$\sigma^{\rm age}$$, $$\sigma^{\rm eth}$$, $$\sigma^{\rm educ.eth}$$, etc.) tend to be hard to estimate and sometimes present serious computational issues. The default prior for the covariance matrix is decov(regularization = 1, concentration = 1, shape = 1, scale = 1). However, in a varying-intercept model such as this one (i.e. with structure (1 | a) + (1 | b) + ... + (1 | n)) the group-level standard deviations are independent of each other, and therefore the prior is simply a gamma distribution with some shape and scale. Consequently, decov(shape = 1, scale = 1) implies a weakly informative prior $${\rm Gamma(shape = 1, scale = 1)} = {\rm Exponential(scale = 1)}$$ on each group-level standard deviation. This is too weak in most situations, and using something like $${\rm Exponential(scale = 0.5)}$$ can be crucial for stabilizing computation.
Therefore, a call like the following has a much lower chance of running into computational issues than simply leaving the defaults:
fit <- stan_glmer(abortion ~ (1 | state) + (1 | eth) + (1 | educ) + (1 | age) + male +
(1 | male:eth) + (1 | educ:age) + (1 | educ:eth) +
repvote + factor(region),
data = cces_df,
prior = normal(0, 1, autoscale = TRUE),
prior_covariance = decov(scale = 0.50),
refresh = 0,
seed = 1010)
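If a few divergent transitions remain even with these priors, the sampler can be rerun with a larger adapt_delta. A minimal sketch (assuming the standard update() method for rstanarm models, which refits the model with the modified arguments passed through to stan_glmer()):
# Refit the model above with a smaller step size
fit <- update(fit, adapt_delta = 0.99)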
More details about divergent transitions can be found in the Brief Guide to Stan’s Warnings and in the Stan Reference Manual. More information and references about priors can be found in the Prior Choice Recommendations Wiki.
### 1.6.1 CCES
The 2018 CCES raw survey data can be downloaded from the CCES Dataverse via this link; by default the downloaded filename is called cces18_common_vv.csv.
Every MRP study requires some degree of data wrangling in order to make the factors in the survey of interest match the factors available in the census data or other population-level surveys. Here we use the R Tidyverse to process the survey data so that it aligns with the poststratification table. Because initial recoding errors are fatal, it is important to check that each step of the recoding process produces the expected results, either by viewing or summarizing the data. Because the data is tabular, we use the utility function head to inspect the first few lines of a dataframe before and after each operation.
First, we examine the contents of the data as downloaded, looking only at those columns which provide the demographic-geographic information of interest. In this case, these are labeled as inputstate, gender, birthyr, race, and educ.
cces_all <- read_csv("data_public/chapter1/data/cces18_common_vv.csv.gz")
## Rows: 60000 Columns: 526
## -- Column specification ------------------------------------------------------------------------------------------------
## Delimiter: ","
## chr (165): race_other, CC18_354a_t, CC17_3534_t, CC18_351_t, CC18_351a_t, CC18_351b_t, CC18_351c_t, CC18_352_t, CC18...
## dbl (359): caseid, commonweight, commonpostweight, vvweight, vvweight_post, tookpost, CCEStake, birthyr, gender, edu...
## lgl (2): multrace_97, multrace_99
##
## i Use spec() to retrieve the full column specification for this data.
## i Specify the column types or set show_col_types = FALSE to quiet this message.
inputstate gender birthyr race educ
24 2 1964 5 4
47 2 1971 1 2
39 2 1958 1 3
6 2 1946 4 6
21 2 1972 1 2
4 2 1995 1 1
As we have seen, it is crucial that the geography and demographics in the survey match the geography and demographics in the poststratification table. If there is not a direct one-to-one relationship between the survey and the population data, the survey data must be recoded until a clean mapping exists. We write R functions to encapsulate the recoding steps.
We start considering the geographic information. Both the CCES survey and the US Census data use numeric FIPS codes to record state information. We can use R factors to map FIPS codes to the standard two-letter state name abbreviations. Because both surveys use this encoding, we make this into a reusable function recode_fips.
# Note that the FIPS codes include the district of Columbia and US territories which
# are not considered in this study, creating some gaps in the numbering system.
state_ab <- datasets::state.abb
state_fips <- c(1,2,4,5,6,8,9,10,12,13,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,
31,32,33,34,35,36,37,38,39,40,41,42,44,45,46,47,48,49,50,51,53,54,55,56)
recode_fips <- function(column) {
factor(column, levels = state_fips, labels = state_ab)
}
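A quick sanity check of the mapping (an added example with a few arbitrary FIPS codes):
recode_fips(c(1, 6, 36))  # Alabama, California, New York -> AL, CA, NY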
Secondly, we recode the demographics so that they are compatible with the American Community Survey data. In some cases this requires changing the levels of a factor (e.g. ethnicity) and in others we may need to split a continuous variable into different intervals (e.g. age). clean_cces uses the recode_fips function defined above to clean up the states. By default, the clean_cces function drops rows with non-response in any of the considered factors or in the outcome variable; if this information is not missing at random, this introduces (more) bias into our survey.
# Recode CCES
clean_cces <- function(df, remove_nas = TRUE){
## Abortion -- dichotomous (0 - Oppose / 1 - Support)
df$abortion <- abs(df$CC18_321d-2)
## State -- factor
df$state <- recode_fips(df$inputstate)
## Gender -- dichotomous (coded as -0.5 Female, +0.5 Male)
df$male <- abs(df$gender-2)-0.5
## ethnicity -- factor
df$eth <- factor(df$race,
levels = 1:8,
labels = c("White", "Black", "Hispanic", "Asian", "Native American",
"Mixed", "Other", "Middle Eastern"))
df$eth <- fct_collapse(df$eth, "Other" = c("Asian", "Other", "Middle Eastern",
"Mixed", "Native American"))
## Age -- cut into factor
df$age <- 2018 - df$birthyr
df$age <- cut(as.integer(df$age), breaks = c(0, 29, 39, 49, 59, 69, 120),
labels = c("18-29","30-39","40-49","50-59","60-69","70+"),
ordered_result = TRUE)
## Education -- factor
df$educ <- factor(as.integer(df$educ),
levels = 1:6,
labels = c("No HS", "HS", "Some college", "Associates",
"4-Year College", "Post-grad"), ordered = TRUE)
df$educ <- fct_collapse(df$educ, "Some college" = c("Some college", "Associates"))
# Filter out unnecessary columns and remove NAs
df <- df %>% select(abortion, state, eth, male, age, educ)
if (remove_nas){
df <- df %>% drop_na()
}
return(df)
}
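A hedged usage sketch (the 5,000-respondent sample size matches the fit shown earlier, but the exact sampling step used in the case study may differ):
cces_df <- clean_cces(cces_all, remove_nas = TRUE) %>% sample_n(5000)
head(cces_df)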
### 1.6.2 American Community Survey
We used the US American Community Survey to create a poststratification table. We will show two different (but equivalent) ways to do this.
#### Alternative 1: IPUMS
The Integrated Public Use Microdata Series (IPUMS) (Ruggles et al. (2020)) is a service run by the University of Minnesota that allows easy access to census and survey data. We focus on the IPUMS USA section, which preserves and harmonizes US census microdata, including the American Community Survey. Other researchers may be interested in IPUMS international, which contains census microdata for over 100 countries.
In order to create the poststratification table we took the following steps:
1. Register at IPUMS using the following link
2. On ipums.org, select IPUMS USA and then click on “Get Data”. This tool allows you to easily select certain variables from census microdata using an intuitive point-and-click interface.
3. We first need to select a sample (i.e. the survey we want to use for the poststratification table) with a menu that is opened by clicking on the SELECT SAMPLES button shown above. In our case, we will select the 2018 5-year ACS survey and then click on SUBMIT SAMPLE.
4. After selecting the sample we need to select the variables that will be included in our poststratification table. The variables are conveniently categorized by HOUSEHOLD (household-level variables), PERSON (individual-level variables), and A-Z (alphabetically). For instance, clicking on PERSON > DEMOGRAPHIC displays the demographic variables, as shown below. Note that the rightmost column shows whether that variable is available in the 2018 5-year ACS. If you click on a certain variable, IPUMS will provide a description and show the codes and frequencies. Based on the data available in your survey of interest, this is a useful tool to decide which variables to include in the poststratification table. In our case, we select:
• On PERSON > DEMOGRAPHIC select SEX and AGE
• On PERSON > RACE, ETHNICITY, AND NATIVITY select RACE, HISPAN, and CITIZEN
• On PERSON > EDUCATION select EDUC
• On HOUSEHOLD > GEOGRAPHIC select STATEFIP
5. We can review the variables we have selected by clicking on VIEW CART. This view also includes the variables that are automatically selected by IPUMS.
6. After reviewing these variables we should select CREATE DATA EXTRACT. By default the data format is a .dat with fixed-width text, but if we prefer we can change this to csv. After clicking SUBMIT EXTRACT the data will be generated. This can take a while, but you will receive an email when the file is ready for download.
7. Lastly, we download and preprocess the data. There are two main considerations:
• Focus on the population of interest: We must take into account that the population of interest for the CCES survey, which only considers US citizens above 18 years of age, is different from the population reflected in the ACS. Therefore, we had to remove individuals under 18 years of age and non-citizens from the census data.
• Match the levels of the two datasets: The levels of the variables in the poststratification table must match the levels of the variables in the CCES dataset. This required preprocessing the variables of the CCES and ACS in a way that the levels were compatible.
## Read data downloaded from IPUMS. This step can be slow, as the dataset is almost 1.5Gb
## (note: due to its size, this file is not included in the book repo)
## Remove non-citizens
temp_df <- temp_df %>% filter(CITIZEN<3)
## State
temp_df$state <- temp_df$STATEFIP
## Gender
temp_df$male <- abs(temp_df$SEX-2)-0.5
## Ethnicity
temp_df$RACE <- factor(temp_df$RACE,
levels = 1:9,
labels = c("White", "Black", "Native American", "Chinese",
"Japanese", "Other Asian or Pacific Islander",
"Other race, nec", "Two major races",
"Three or more major races"))
temp_df$eth <- fct_collapse(temp_df$RACE,
"Other" = c("Native American", "Chinese",
"Japanese", "Other Asian or Pacific Islander",
"Other race, nec", "Two major races",
"Three or more major races"))
levels(temp_df$eth) <- c(levels(temp_df$eth), "Hispanic")
## add hispanic as ethnicity. This is done only for individuals that indicate being white
# in RACE and of hispanic origin in HISPAN
temp_df$eth[(temp_df$HISPAN!=0) & temp_df$eth=="White"] <- "Hispanic"
## Age
temp_df$age <- cut(as.integer(temp_df$AGE), breaks = c(0, 17, 29, 39, 49, 59, 69, 120),
                   labels = c("0-17", "18-29","30-39","40-49","50-59","60-69","70+"),
                   ordered_result = TRUE)
# filter out underages
temp_df <- filter(temp_df, age!="0-17")
temp_df$age <- droplevels(temp_df$age)
## Education
# we need to use EDUCD (i.e. education detailed) instead of EDUC (i.e. general codes), as the
# latter does not contain enough information about whether high school was completed or not.
temp_df$educ <- cut(as.integer(temp_df$EDUCD), c(0, 61, 64, 100, 101, Inf),
                    ordered_result = TRUE,
                    labels = c("No HS", "HS", "Some college", "4-Year College", "Post-grad"))
# Clean temp_df by dropping NAs and cleaning states with recode_fips
temp_df <- temp_df %>% drop_na(state, eth, male, age, educ, PERWT) %>%
  select(state, eth, male, age, educ, PERWT) %>%
  filter(state %in% state_fips) %>%
  mutate(state = recode_fips(state))
# Generate cell frequencies using group_by
poststrat_df <- temp_df %>%
  group_by(state, eth, male, age, educ, .drop = FALSE) %>%
  summarise(n = sum(as.numeric(PERWT)))
# Write as csv
write.csv(poststrat_df, "poststrat_df.csv", row.names = FALSE)
If you use IPUMS in your project, don’t forget to cite it.
#### Alternative 2: ACS PUMS
Some researchers may prefer to access the 2018 5-year ACS data directly without using IPUMS, which makes the process less intuitive but also more reproducible. Additionally, this does not require creating an account, as the Public Use Microdata Sample (PUMS) from the ACS can be downloaded directly from the data repository. The repository contains two .zip files for each state: one for the individual-level variables and the other for the household-level variables. All the variables considered in our analysis are available in the individual-level files, but we will also download and process the household-level variable income to show how this could be done.
# We start downloading all the state-level zip files using wget. If you are using Windows you can
# download a pre-built wget from http://gnuwin32.sourceforge.net/packages/wget.htm
dir.create("poststrat_data/")
system('wget -O poststrat_data -e robots=off -nd -A "csv_*.zip" -R "index.html","csv_hus.zip", "csv_pus.zip" https://www2.census.gov/programs-surveys/acs/data/pums/2018/5-Year/')
If this does not work, you can also access the data repository and download the files directly from your browser. Once the data is downloaded, we process the .zip files for each state and then merge them together. IPUMS integrates census data across different surveys, which results in different naming conventions and levels in some of the variables with respect to the PUMS data directly downloaded from the ACS repository. Therefore, the preprocessing is slightly different from the code shown above, but as the underlying data is the same we obtain an identical poststratification table.
list_states_abb <- datasets::state.abb
list_states_num <- rep(NA, length(list_states_abb))
list_of_poststrat_df <- list()
for(i in 1:length(list_states_num)){
# Unzip and read household and person files for state i
p_name <- paste0("poststrat_data/csv_p", tolower(list_states_abb[i]), ".zip")
h_name <- paste0("poststrat_data/csv_h", tolower(list_states_abb[i]), ".zip")
p_csv_name <- grep('\\.csv$', unzip(p_name, list=TRUE)$Name, ignore.case=TRUE, value=TRUE)
temp_df_p_state <- fread(unzip(p_name, files = p_csv_name), header=TRUE,
                         select=c("SERIALNO","ST","CIT","PWGTP","RAC1P","HISP","SEX",
                                  "AGEP","SCHL"))
h_csv_name <- grep('\\.csv$', unzip(h_name, list=TRUE)$Name, ignore.case=TRUE, value=TRUE)
temp_df_h_state <- fread(unzip(h_name, files = h_csv_name), header=TRUE,
                         select=c("SERIALNO","FINCP"))
# Merge the individual and household level variables according to the serial number
temp_df <- merge(temp_df_h_state, temp_df_p_state, by = "SERIALNO")
# Update list of state numbers that will be used later
list_states_num[i] <- temp_df$ST[1]
## Filter by citizenship
temp_df <- temp_df %>% filter(CIT!=5)
## State
temp_df$state <- temp_df$ST
## Gender
temp_df$male <- abs(temp_df$SEX-2)-0.5
## Ethnicity
temp_df$RAC1P <- factor(temp_df$RAC1P,
levels = 1:9,
labels = c("White", "Black", "Native Indian", "Native Alaskan",
"Native Indian or Alaskan", "Asian", "Pacific Islander",
"Other", "Mixed"))
temp_df$eth <- fct_collapse(temp_df$RAC1P, "Native American" = c("Native Indian", "Native Alaskan",
                                                                 "Native Indian or Alaskan"))
temp_df$eth <- fct_collapse(temp_df$eth, "Other" = c("Asian", "Pacific Islander", "Other",
"Native American", "Mixed"))
levels(temp_df$eth) <- c(levels(temp_df$eth), "Hispanic")
temp_df$eth[(temp_df$HISP!=1) & temp_df$eth=="White"] <- "Hispanic"
## Age
temp_df$age <- cut(as.integer(temp_df$AGEP), breaks = c(0, 17, 29, 39, 49, 59, 69, 120),
                   labels = c("0-17", "18-29","30-39","40-49","50-59","60-69","70+"),
                   ordered_result = TRUE)
# filter out underages
temp_df <- filter(temp_df, age!="0-17")
temp_df$age <- droplevels(temp_df$age)
## Income (not currently used)
temp_df$income <- cut(as.integer(temp_df$FINCP),
                      breaks = c(-Inf, 9999, 19999, 29999, 39999, 49999, 59999, 69999, 79999,
                                 99999, 119999, 149999, 199999, 249999, 349999, 499999, Inf),
                      ordered_result = TRUE,
                      labels = c("<$10,000", "$10,000 - $19,999", "$20,000 - $29,999",
                                 "$30,000 - $39,999", "$40,000 - $49,999",
                                 "$50,000 - $59,999", "$60,000 - $69,999",
                                 "$70,000 - $79,999", "$80,000 - $99,999",
                                 "$100,000 - $119,999", "$120,000 - $149,999",
                                 "$150,000 - $199,999", "$200,000 - $249,999",
                                 "$250,000 - $349,999", "$350,000 - $499,999",
                                 ">$500,000"))
temp_df$income <- fct_explicit_na(temp_df$income, "Prefer Not to Say")
## Education
temp_df$educ <- cut(as.integer(temp_df$SCHL), breaks = c(0, 15, 17, 19, 20, 21, 24),
                    ordered_result = TRUE,
                    labels = c("No HS", "HS", "Some college", "Associates",
                               "4-Year College", "Post-grad"))
temp_df$educ <- fct_collapse(temp_df$educ, "Some college" = c("Some college", "Associates"))
# Calculate the poststratification table
temp_df <- temp_df %>% drop_na(state, eth, male, age, educ, PWGTP) %>%
  select(state, eth, male, age, educ, PWGTP)
## We sum by the individual-level weight PWGTP
list_of_poststrat_df[[i]] <- temp_df %>%
  group_by(state, eth, male, age, educ, .drop = FALSE) %>%
  summarise(n = sum(as.numeric(PWGTP)))
print(paste0("Data from ", list_states_abb[i], " completed"))
}
# Join list of state-level poststratification files
poststrat_df <- rbindlist(list_of_poststrat_df)
# Clean up state names
poststrat_df$state <- recode_fips(poststrat_df$state)
# Write as csv
write.csv(poststrat_df, "poststrat_df.csv", row.names = FALSE)
Alternatively, instead of downloading one pair of .zip files per state, we can use the two combined files, csv_hus.zip and csv_pus.zip, which contain the household-level and individual-level variables for all states. We will start by creating a folder called poststrat_data and downloading these two files:
# If you are using Windows you can download a pre-built wget from
# http://gnuwin32.sourceforge.net/packages/wget.htm
dir.create("poststrat_data/")
system('wget -O poststrat_data2 -e robots=off -nd -A "csv_hus.zip","csv_pus.zip"
https://www2.census.gov/programs-surveys/acs/data/pums/2018/5-Year/')
If this does not work, you can also download the files directly from the data repository.
Once the data is downloaded, we unzip the files, read the variables we need, and merge the person- and household-level records together. As before, all the variables considered in our analysis are available in the individual-level file, but we also process the household-level variable income to show how this could be done. The preprocessing is essentially the same as in the state-by-state version above, and we obtain an identical poststratification table.
list_states_abb <- datasets::state.abb
list_states_num <- rep(NA, length(list_states_abb))
# Unzip and read household and person files
p_name <- paste0("poststrat_data/csv_pus.zip")
h_name <- paste0("poststrat_data/csv_hus.zip")
p_csv_name <- grep('\\.csv$', unzip(p_name, list=TRUE)$Name, ignore.case=TRUE, value=TRUE)
unzip(p_name, files = p_csv_name, exdir = "poststrat_data")
p_csv_name = paste0("poststrat_data/", p_csv_name)
select=c("SERIALNO","ST","CIT","PWGTP","RAC1P","HISP","SEX",
"AGEP","SCHL")))
h_csv_name <- grep('\\.csv$', unzip(h_name, list=TRUE)$Name, ignore.case=TRUE, value=TRUE)
unzip(h_name, files = h_csv_name, exdir = "poststrat_data")
h_csv_name = paste0("poststrat_data/", h_csv_name)
# Merge the individual and household level variables according to the serial number
temp_df <- merge(temp_df_h, temp_df_p, by = "SERIALNO")
# Exclude associated areas that are not states based on FIPS codes
temp_df <- temp_df %>% filter(ST %in% state_fips)
temp_df$ST <- factor(temp_df$ST, levels = state_fips, labels = state_ab)
## Filter by citizenship
temp_df <- temp_df %>% filter(CIT!=5)
## State
temp_df$state <- temp_df$ST
## Gender
temp_df$male <- abs(temp_df$SEX-2)-0.5
## Ethnicity
temp_df$RAC1P <- factor(temp_df$RAC1P,
levels = 1:9,
labels = c("White", "Black", "Native Indian", "Native Alaskan",
"Native Indian or Alaskan", "Asian", "Pacific Islander",
"Other", "Mixed"))
temp_df$eth <- fct_collapse(temp_df$RAC1P, "Native American" = c("Native Indian", "Native Alaskan",
                                                                 "Native Indian or Alaskan"))
temp_df$eth <- fct_collapse(temp_df$eth, "Other" = c("Asian", "Pacific Islander", "Other",
"Native American", "Mixed"))
levels(temp_df$eth) <- c(levels(temp_df$eth), "Hispanic")
temp_df$eth[(temp_df$HISP!=1) & temp_df$eth=="White"] <- "Hispanic"
## Age
temp_df$age <- cut(as.integer(temp_df$AGEP), breaks = c(0, 17, 29, 39, 49, 59, 69, 120),
                   labels = c("0-17", "18-29","30-39","40-49","50-59","60-69","70+"),
                   ordered_result = TRUE)
# filter out underages
temp_df <- filter(temp_df, age!="0-17")
temp_df$age <- droplevels(temp_df$age)
## Income (not currently used)
temp_df$income <- cut(as.integer(temp_df$FINCP),
                      breaks = c(-Inf, 9999, 19999, 29999, 39999, 49999, 59999, 69999, 79999,
                                 99999, 119999, 149999, 199999, 249999, 349999, 499999, Inf),
                      ordered_result = TRUE,
                      labels = c("<$10,000", "$10,000 - $19,999", "$20,000 - $29,999",
                                 "$30,000 - $39,999", "$40,000 - $49,999",
                                 "$50,000 - $59,999", "$60,000 - $69,999",
                                 "$70,000 - $79,999", "$80,000 - $99,999",
                                 "$100,000 - $119,999", "$120,000 - $149,999",
                                 "$150,000 - $199,999", "$200,000 - $249,999",
                                 "$250,000 - $349,999", "$350,000 - $499,999",
                                 ">$500,000"))
temp_df$income <- fct_explicit_na(temp_df$income, "Prefer Not to Say")
## Education
temp_df$educ <- cut(as.integer(temp_df$SCHL), breaks = c(0, 15, 17, 19, 20, 21, 24),
                    ordered_result = TRUE,
                    labels = c("No HS", "HS", "Some college", "Associates",
                               "4-Year College", "Post-grad"))
temp_df$educ <- fct_collapse(temp_df$educ, "Some college" = c("Some college", "Associates"))
# Calculate the poststratification table
temp_df <- temp_df %>% drop_na(state, eth, male, age, educ, PWGTP) %>%
select(state, eth, male, age, educ, PWGTP)
## We sum by the individual-level weight PWGTP
poststrat_df <- temp_df %>%
group_by(state, eth, male, age, educ, .drop = FALSE) %>%
summarise(n = sum(as.numeric(PWGTP)))
# Write as csv
write.csv(poststrat_df, "poststrat_df.csv", row.names = FALSE)
### References
Betancourt, Michael. 2017. “A Conceptual Introduction to Hamiltonian Monte Carlo.” arXiv Preprint arXiv:1701.02434.
Bisbee, James. 2019. “BARP: Improving Mister P Using Bayesian Additive Regression Trees.” American Political Science Review 113 (4): 1060–65.
Buttice, Matthew K, and Benjamin Highton. 2013. “How Does Multilevel Regression and Poststratification Perform with Conventional National Surveys?” Political Analysis 21 (4).
Downes, Marnie, Lyle C Gurrin, Dallas R English, Jane Pirkis, Dianne Currier, Matthew J Spittal, and John B Carlin. 2018. “Multilevel Regression and Poststratification: A Modeling Approach to Estimating Population Quantities from Highly Selected Survey Samples.” American Journal of Epidemiology 187 (8): 1780–90.
Fay, Robert E, and Roger A Herriot. 1979. “Estimates of Income for Small Places: An Application of James-Stein Procedures to Census Data.” Journal of the American Statistical Association 74 (366a): 269–77.
Gao, Yuxiang, Lauren Kennedy, Daniel Simpson, Andrew Gelman, et al. 2020. “Improving Multilevel Regression and Poststratification with Structured Priors.” Bayesian Analysis.
Gelman, Andrew, and Jennifer Hill. 2006. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge university press.
Gelman, Andrew, Jennifer Hill, and Aki Vehtari. 2020. Regression and Other Stories. Cambridge University Press.
Gelman, Andrew, and Thomas C Little. 1997. “Poststratification into Many Categories Using Hierarchical Logistic Regression.” Survey Methodology 23 (2): 127–35.
Ghitza, Yair, and Andrew Gelman. 2013. “Deep Interactions with MRP: Election Turnout and Voting Patterns Among Small Electoral Subgroups.” American Journal of Political Science 57 (3): 762–76.
Kiewiet de Jonge, Chad P, Gary Langer, and Sofi Sinozich. 2018. “Predicting State Presidential Election Results Using National Tracking Polls and Multilevel Regression with Poststratification (MRP).” Public Opinion Quarterly 82 (3): 419–46.
Lax, Jeffrey R, and Justin H Phillips. 2009a. “Gay Rights in the States: Public Opinion and Policy Responsiveness.” American Political Science Review 103 (3): 367–86.
———. 2009b. “How Should We Estimate Public Opinion in the States?” American Journal of Political Science 53 (1): 107–21.
Little, Roderick JA. 1993. “Post-Stratification: A Modeler’s Perspective.” Journal of the American Statistical Association 88 (423): 1001–12.
Ornstein, Joseph T. 2020. “Stacked Regression and Poststratification.” Political Analysis 28 (2): 293–301.
Park, David K, Andrew Gelman, and Joseph Bafumi. 2004. “Bayesian Multilevel Estimation with Poststratification: State-Level Estimates from National Polls.” Political Analysis 12 (4): 375–85.
Ruggles, Steven, Sarah Flood, Ronald Goeken, Josiah Grover, Erin Meyer, Jose Pacas, and Matthew Sobek. 2020. IPUMS USA: Version 10.0 [Dataset]. Minneapolis, MN: IPUMS, 2020.
Schaffner, Brian, Stephen Ansolabehere, and Sam Luks. 2018. “CCES 2018.” Harvard Dataverse. https://doi.org/10.7910/DVN/ZSBZ7K.
Wang, Wei, David Rothschild, Sharad Goel, and Andrew Gelman. 2015. “Forecasting Elections with Non-Representative Polls.” International Journal of Forecasting 31 (3): 980–91.
|
2023-02-09 09:33:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6922929286956787, "perplexity": 4312.733647602799}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501555.34/warc/CC-MAIN-20230209081052-20230209111052-00730.warc.gz"}
|
https://math.stackexchange.com/questions/2741292/the-plucker-relations-are-sufficient
|
# The Plucker relations are sufficient
Consider the Grassmannian of codimension-$d$ subspaces of a given vector space $E$ (over an arbitrary field), which I will define as $$\operatorname{Gr}^d(E) = \{\text{linear surjections } \sigma: E \to F \mid \text{F is any d-dimensional space} \}/\sim$$ where I identify two surjections $(\sigma_1: E \to F_1) \sim (\sigma_2: E \to F_2)$ if there is an isomorphism $m: F_1 \to F_2$ such that $m \sigma_1 = \sigma_2$. Then an isomorphism class of $\sigma$ is determined by the $\ker \sigma$. We also have projective space $\mathbb{P}(E) = \operatorname{Gr}^1(E)$, so that $\operatorname{Sym}^\bullet(E)$ gives homogeneous coordinates on $\mathbb{P}(E)$.
The Plucker embedding is the map $\operatorname{Gr}^d(E) \to \mathbb{P}(\bigwedge^d E)$, taking a map $(\sigma: E \to F)$ to its $d$th exterior power $\bigwedge^d \sigma: \bigwedge^d E \to \bigwedge^d F$. Then $\bigwedge^d \sigma$ is a surjection from $\bigwedge^d E$ to a one-dimensional space, and so lives in $\mathbb{P}(\bigwedge^d E)$. So I am thinking of $\bigwedge^d \sigma$ as a point in the embedding of the Grassmannian, and elements of $\bigwedge^d(E)$ give me the linear coordinate functions. For example, if I write out the matrix of $\sigma: E \to F$ in some basis $(e_1, \ldots, e_m)$ for $E$ and any basis for $F$, then $(\bigwedge^d \sigma)(e_1 \wedge \cdots \wedge e_d)$ is proportional to the determinant of minor where we take the first $d$ columns of the matrix for $\sigma$.
Of course, the question arises: given some $(s: \bigwedge^d E \to L) \in \mathbb{P}(\bigwedge^d E)$, is $s$ in the image of the Plucker embedding? (i.e., is $s$ a $d$th wedge power?). The answer to this is given by the Plucker relations, which I will state as follows. Define a linear map $\omega^d_E$ on pure tensors by \begin{aligned}\omega^d_E: \bigwedge^{d+1} E \otimes \bigwedge^{d-1} E &\to \bigwedge^d E \otimes \bigwedge^d E\\ v_1 \wedge \cdots \wedge v_{d+1} \otimes u_1 \wedge \cdots \wedge u_{d-1} &\mapsto \sum_{i = 1}^{d+1} (-1)^i \, v_1 \wedge \cdots \wedge \hat{v_i} \wedge \cdots \wedge v_{d+1} \otimes v_i \wedge u_1 \wedge \cdots \wedge u_{d-1} \end{aligned} Then the point $(s: \bigwedge^d E \to L)$ "satisfies the Plucker relations" if the linear map $(s \otimes s) \circ \omega^d_E$ is zero. From this standpoint, it is lovely to see the necessary condition: if $(\sigma: E \to F) \in \operatorname{Gr}^d(E)$, then $\wedge^d \sigma$ must satisfy the Plucker relations, since the wedge power essentially "pulls through" the $\omega$: $$(\wedge^d \sigma \otimes \wedge^d \sigma) \circ \omega^d_E = \omega_F^d \circ (\wedge^{d+1} \sigma \otimes \wedge^{d-1} \sigma)$$ and of course $\wedge^{d+1} \sigma = 0$ because $F$ is $d$-dimensional. However, I am having trouble with the opposite direction.
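For concreteness (an illustrative special case added here, not part of the original question): take $d = 2$, $\dim E = 4$, and write $p_{ij} = s(e_i \wedge e_j)$ for a basis $(e_1, \ldots, e_4)$ of $E$. Applying $(s \otimes s) \circ \omega^2_E$ to $(e_1 \wedge e_2 \wedge e_3) \otimes e_4$ gives, up to an overall sign, the single classical Plücker relation for $\operatorname{Gr}^2(k^4)$: $$p_{12}\,p_{34} - p_{13}\,p_{24} + p_{14}\,p_{23} = 0.$$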
How can I see that if $(s: \wedge^d E \to L) \in \mathbb{P}(\bigwedge^d E)$ satisfies the Plucker relations, then it is (proportional to) the $d$th power of some map $\sigma: E \to F$? I would really like a constructive proof: produce some $\sigma: E \to F$ from $s$, and show that as long as $s$ satisfies the Plucker relations, that $\wedge^d \sigma \sim s$.
I can prove similar results for decomposable tensors in $\wedge^d E$, but for some reason with how things are set up, "dualising" that proof is eluding me. There is a proof similar to the style I am thinking of on page 175 of Martin Brandenburg's Thesis, unfortunately I cannot follow the proof starting from the "two short exact sequences". Which leads me to another question:
Are there any good references for the Plucker embedding or Plucker relations thought of in this way?
• What exactly don't you understand in Brandenburg's proof? In the two exact sequences, the left-most map of the first row takes $a\otimes b$ to $a\otimes s(b)-b\otimes s(a)$ and the left-most map of the bottom is just the natural map $\ker t\otimes \bigwedge^{d-1}\to \bigwedge E$, twisted by $L$. (Note that $\ker t = \bigwedge^{d+1}E\otimes L^\vee$ by construction.) That the two diagonal compositions vanish imply that the right-most square can be completed by an isomorphism. Twisting by $L^\vee$ again gives exactly what you want. – Ben May 4 '18 at 17:03
• @Ben Literally everything you just said was not clear to me. How do you know that the upper left map takes $a \otimes b$ to $a \otimes s(b) - b \otimes s(a)$ rather than $a \otimes s(b)$ for example? Also where can I find this result about diagonals vanishing in two short exact sequences? I guess the other thing is that this proof seems entirely pulled-out-of-a-hat, and I'm trying to get some intuition for exactly what is going on. I understand why defining $t$ as the cokernel of a certain map should be right, but I cannot make any sense of the proof from there. – Joppy May 5 '18 at 0:46
• I see. As for intuition, I don’t know, but I will try to explain the details there a bit further later. – Ben May 5 '18 at 6:37
• @Ben Thanks, that would be much appreciated. – Joppy May 5 '18 at 6:41
As promised in the comment, here are some more details to Martin Brandenburg's proof. To get an understanding of what is going on, we first put ourselves in the situation that $s = \wedge^d\sigma$ for some surjective $\sigma\colon E\to F$. Why would we want to consider the map $t\colon \bigwedge\nolimits^{d+1}E\to E\otimes\bigwedge\nolimits^{d}F$ defined as \begin{align*}t(v_0\wedge\dots\wedge v_d) &= \sum_{k=0}^d(-1)^kv_k\otimes s(v_0\wedge\dots\wedge\widehat{v_k}\wedge\dots\wedge v_d) \\&=\sum_{k=0}^d(-1)^kv_k\otimes \sigma(v_0)\wedge\dots\wedge\widehat{\sigma(v_k)}\wedge\dots\wedge \sigma(v_d)\;\;? \end{align*} Well, I claim that its image is the kernel of the map $E\otimes\bigwedge^dF\xrightarrow{\sigma\otimes \mathrm{id}_{\wedge^d F}} F\otimes\bigwedge^dF$; since we can reconstruct $\sigma\colon E\to F$ up to isomorphism from its kernel, this map is very much relevant for what we are trying to do.
Let's prove the claim: It is easy to verify that $(\sigma\otimes \mathrm{id}_{\wedge^dF})\circ t$ factorises as $$\bigwedge\nolimits^{d+1}E\xrightarrow{\wedge^{d+1}\sigma}\bigwedge\nolimits^{d+1}F\to F\otimes\bigwedge\nolimits^{d}F,$$ where the latter map sends $v_0\wedge\dots\wedge v_d$ to $\sum_{k=0}^d(-1)^kv_k\otimes v_0\wedge\dots\wedge\widehat{v_k}\wedge\dots\wedge v_d$. But $\bigwedge\nolimits^{d+1}F = 0$; thus, $(\sigma\otimes\mathrm{id}_{\wedge^dF})\circ t = 0$, so that $\mathrm{im}(t)\subset \ker(\sigma\otimes \mathrm{id}_{\wedge^dF})$. Conversely, if $\sigma(w) = 0$, then $t(w\wedge v_1\wedge\dots\wedge v_d)=w\otimes s(v_1\wedge\dots\wedge v_d)$ and so $t$ maps surjectively onto $\ker(\sigma)\otimes\bigwedge^dF = \ker(\sigma\otimes \mathrm{id}_{\wedge^dF})$, as claimed.
What this means is that we have found a reasonable candidate for an inverse of the map $$\left\{E\xrightarrow{\sigma} F\to 0\right\}\to\left\{\bigwedge\nolimits^{d}E\xrightarrow{s}L\to 0\,\middle|\,\text{sat. Plücker}\right\},\sigma\mapsto \wedge^d\sigma,$$ by mapping $s$ to the cokernel of $T_s\colon \bigwedge\nolimits^{d+1}E\otimes L^\vee\xrightarrow{t\otimes \mathrm{id}_{L}}E\otimes L\otimes L^\vee\to E$, where the last map is just the natural isomorphism. The above shows that if we start with a $\sigma$, pass to $\wedge^d\sigma$, and then take the cokernel of $T_{\wedge^d\sigma}$, we get back $\sigma$ up to isomorphism. It remains to show that starting with some $s$ satisfying the Plücker relations, the candidate-inverse is well-defined (i.e., that the cokernel has rank $d$,) and that if we pass to the cokernel $\sigma\colon E\to F:=\mathrm{coker}(T_s)$ and then apply $\wedge^d$, we get back $s$ up to isomorphism. The latter is what those exact sequences are for, but we can phrase it without them:
We have two quotients of $\bigwedge^dE\otimes L$, namely, $\wedge^d\sigma\otimes \mathrm{id}_{L}\colon \bigwedge^dE\otimes L\to \bigwedge^dF\otimes L$ and $s\otimes \mathrm{id}_{L}\colon\bigwedge^dE\otimes L\to L\otimes L$ and we aim to show that they are isomorphic as quotients, i.e., that $\ker(\wedge^d\sigma\otimes\mathrm{id}_{L}) = \ker(s\otimes \mathrm{id}_{L})$. For this, we give nice presentations of those kernels.
For one, since $\ker(\sigma\otimes\mathrm{id}_{L})$ is the image of $t$, the kernel of $\wedge^d\sigma\otimes\mathrm{id}_{L}$ is the image of the map $\alpha\colon \bigwedge^{d-1}E\otimes\bigwedge^{d+1}E\to \bigwedge^{d}E\otimes L$, mapping $v\otimes w$ to $v\wedge t(w)$.
For the other map, note that $\ker(s\otimes \mathrm{id}_{L})=\ker(s)\otimes L$ is generated by elements of the form $v\otimes s(w)-w\otimes s(v)$, since, for $s(v)=0$ and $f = s(w)\in L$ arbitrary, $v\otimes s(w) - w\otimes s(v) = v\otimes f$. In particular, with $\beta\colon \bigwedge^dE\otimes\bigwedge^dE\to \bigwedge^dE\otimes L$ mapping $v\otimes w$ to $v\otimes s(w)- w\otimes s(v)$, we get $\ker(s\otimes \mathrm{id}_{L}) = \mathrm{im}{(\beta)}$.
Thus, if we manage to show that $(s\otimes \mathrm{id}_{L})\circ\alpha = 0$ and $(\wedge^d\sigma\otimes\mathrm{id}_{L})\circ\beta = 0$, then we conclude $$\ker(\wedge^d\sigma\otimes\mathrm{id}_{L})=\mathrm{im}{(\alpha)}\subset\ker(s\otimes \mathrm{id}_{L}) = \mathrm{im}{(\beta)}\subset \ker(\wedge^d\sigma\otimes\mathrm{id}_{L}),$$ which implies equality everywhere. In particular, $L\cong\bigwedge^dF$ as quotients of $\bigwedge^d E$ and so $\bigwedge^dF$ is invertible, hence $F$ has rank $d$; this is all we wanted to show.
Finally, we show the two identities $(s\otimes \mathrm{id}_{L})\circ\alpha = 0$ and $(\wedge^d\sigma\otimes\mathrm{id}_{L})\circ\beta = 0$. Tracing through the definitions shows that the former is the Plücker relation, and that the second is equivalent to $\wedge^d\sigma v\otimes s(w) = \wedge^d\sigma w\otimes s(v)$ for all $v,w\in\bigwedge^dE$. That is, we want $\wedge^d\sigma\otimes s$ to be symmetric. By construction of $\sigma$, we always have $$0 = \sum_{k=0}^d(-1)^{k}\sigma w_k\otimes s(w_0\wedge\dots\wedge \widehat{w_{k}}\wedge \dots\wedge w_d),$$ and so the symmetry of $\wedge^d\sigma\otimes s$ follows from what M. Brandenburg calls the Symmetry Lemma (4.4.15); I have nothing to add to his proof of this lemma.
• Thanks for taking the time to explain! I still don't understand how you go from the fact that the kernel of $\sigma \otimes \mathrm{id}_L$ is the image of $t$, and arrive at the conclusion that the kernel of $\wedge^d \sigma \otimes \mathrm{id}_L$ is precisely the image of $\alpha$. – Joppy May 8 '18 at 1:27
• This holds quite generally: if $\varphi\colon V\to W$ is surjective, then so is $\wedge^d \varphi$ and the kernel is generated by the elements of the form $v_1\wedge\dots\wedge v_d$ for all $v_1\in\ker\varphi$ and $v_2,\dots,v_d\in V$. I will try to find a reference or a quick argument later in the day. – Ben May 8 '18 at 5:31
• Ah, I see - I definitely believe that. I think I'm beginning to see how to connect this to the usual decomposability of vectors in $\wedge^d E$ results. – Joppy May 8 '18 at 6:29
• Here is a simple argument: If $\wedge^d\varphi(v_1,\dots,v_d) = 0$, then the $\varphi(v_i)$ are linearly dependent. Thus, there exist scalars $\lambda_i$ such that $\varphi(\sum_i\lambda_i v_i) = \sum_i\lambda_i\varphi(v_i) = 0$, hence, $v_0 := \sum_i\lambda_i v_i\in\ker(\varphi)$. Say $\lambda_1 = 1$, possibly after renumbering and rescaling. Then $v_1\wedge\dots\wedge v_d = v_0\wedge v_2\wedge\dots\wedge v_d - (v_0-v_1)\wedge v_2\wedge\dots\wedge v_d$ where the first term has a factor in $\ker(\varphi)$ and the second vanishes. It remains to show that it suffices to consider those elements.. – Ben May 8 '18 at 7:05
I cannot put this as a comment and I'm sorry that it is not a complete answer, but a good introduction to Grassmannians is written by Gathmann: http://www.mathematik.uni-kl.de/~gathmann/class/alggeom-2014/alggeom-2014-c8.pdf
In particular the answer to your second question could be Corollary 8.13 (look at the proof).
• Thanks for the reference, but this isn't what I'm looking for. In those notes, the relations given are the vanishing of certain $(n-1+k) \times (n-1+k)$ minors: this is a relation of degree $n-1+k$. The Plucker relations I gave above are different, and in particular are always degree 2. – Joppy Apr 29 '18 at 1:36
|
2019-05-21 02:44:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9719701409339905, "perplexity": 130.74119177775822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256215.47/warc/CC-MAIN-20190521022141-20190521044141-00279.warc.gz"}
|
https://encyclopediaofmath.org/wiki/Covariant_vector
|
# Covariant vector
An element of the vector space $E^{*}$ dual to an $n$-dimensional vector space $E$, that is, a linear functional (linear form) on $E$. In the ordered pair $(E, E^{*})$, an element of $E$ is called a contravariant vector. Within the general scheme for the construction of tensors, a covariant vector is identified with a covariant tensor of valency 1.
The coordinate notation for a covariant vector is particularly simple if one chooses in $E$ and $E^{*}$ so-called dual bases $e_{1} \dots e_{n}$ in $E$ and $e^{1} \dots e^{n}$ in $E^{*}$, that is, bases such that $(e^{i} e_{j}) = \delta_{j}^{i}$ (where $\delta_{j}^{i}$ is the Kronecker symbol); an arbitrary covariant vector $\omega \in E^{*}$ is then expressible in the form $\omega = f_{i} e^{i}$ (summation over $i$ from 1 to $n$), where $f_{i}$ is the value of the linear form $\omega$ at the vector $e_{i}$. On passing from dual bases $(e_{i})$ and $(e^{j})$ to dual bases $(\overline{e}_{i^\prime})$ and $(\overline{e}{}^{j^\prime})$ according to the formulas
$$\overline{e}\; _ {i ^ \prime } = \ p _ {i ^ \prime } ^ {i} e _ {i} ,\ \ \overline{e}\; {} ^ {j ^ \prime } = \ q _ {i} ^ {j ^ \prime } e ^ {i} ,\ \ p _ {k ^ \prime } ^ {i} q _ {j} ^ {k ^ \prime } = \ \delta _ {j} ^ {i} ,$$
the coordinates $x^{i}$ of the contravariant vector $x = x^{i} e_{i}$ change according to the contravariant law $\overline{x}{}^{i^\prime} = q_{i}^{i^\prime} x^{i}$, while the coordinates $f_{i}$ of the covariant vector $\omega$ change according to the covariant law $\overline{f}_{i^\prime} = p_{i^\prime}^{i} f_{i}$ (i.e. they change in the same way as the basis, whence the terminology "covariant").
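For example (an illustrative computation added here): let $n = 2$ and pass to the new basis $\overline{e}_{1^\prime} = 2 e_{1}$, $\overline{e}_{2^\prime} = e_{2}$, so that $p_{1^\prime}^{1} = 2$, $p_{2^\prime}^{2} = 1$ and the dual basis becomes $\overline{e}{}^{1^\prime} = \tfrac{1}{2} e^{1}$, $\overline{e}{}^{2^\prime} = e^{2}$. Then the coordinates of a covariant vector transform as $\overline{f}_{1^\prime} = 2 f_{1}$, $\overline{f}_{2^\prime} = f_{2}$, i.e. like the basis vectors themselves, while the coordinates of a contravariant vector transform as $\overline{x}{}^{1^\prime} = \tfrac{1}{2} x^{1}$, $\overline{x}{}^{2^\prime} = x^{2}$.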
#### References
[1] P.A. Shirokov, "Tensor calculus. Tensor algebra" , Kazan' (1961) (In Russian) [2] D.V. Beklemishev, "A course of analytical geometry and linear algebra" , Moscow (1971) (In Russian) [3] J.A. Schouten, "Tensor analysis for physicists" , Cambridge Univ. Press (1951)
|
2022-08-19 21:22:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9812278747558594, "perplexity": 412.91429752549334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573760.75/warc/CC-MAIN-20220819191655-20220819221655-00126.warc.gz"}
|
https://math.stackexchange.com/questions/2744773/integral-word-problem-did-i-set-this-up-correctly
|
# Integral word problem. Did I set this up correctly?
Here is the question:
Use the method of cylindrical shells to find the volume generated by rotating the region bounded by the given curves about the specified axis.
Here are the parameters:
$$y = 4x - x^2, \quad y = 3; \quad \text{about } x = 1$$
So here are my choices for radius and height:
radius = $x - 1$
height = $4x - x^2 - 3$
So...is this setup right?
$$\int_1^3 2 \pi (x - 1) ( 4x - x^2 - 3)\, dx$$
• looks good to me! – imranfat Apr 19 '18 at 16:14
The volume of revolution is given by $$\int_1^3 2 \pi (x - 1) ( 4x - x^2 - 3)\, dx$$
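For completeness (a worked evaluation added here as a check; it is not part of the original answers), the substitution $t = x - 1$ gives $$\int_1^3 2\pi (x-1)(4x - x^2 - 3)\, dx = 2\pi \int_0^2 t\,(2t - t^2)\, dt = 2\pi \left[ \frac{2t^3}{3} - \frac{t^4}{4} \right]_0^2 = 2\pi\left(\frac{16}{3} - 4\right) = \frac{8\pi}{3}.$$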
|
2019-06-24 22:01:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9145740270614624, "perplexity": 263.2796923429894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999740.32/warc/CC-MAIN-20190624211359-20190624233359-00157.warc.gz"}
|
http://mathhelpforum.com/algebra/99545-quadratic-algebra.html
|
# Math Help - quadratic algebra
hi all,
the question posed is
Given that $x^2-14x+a = (x+b)^2$ for all values of $x$, find the value of $a$ and the value of $b$.
is this correct?
since
$(x+A)^2=x^2+2Ax+A^2$
$x^2+2Ax=(x+A)^2-A^2$
so
$x^2-14x = (x+b)^2 - b^2 = x^2+2bx$
and solving for b
$b=\frac{-14x}{2x}=-7$
so
$x^2-14x+a=(x-7)^2$
and expanding the RHS
$a=49$
2. Originally Posted by sammy28
hi all,
the question posed is
Given that $x^2-14x+a = (x+b)^2$ for all values of $x$, find the value of $a$ and the value of $b$.
is this correct?
...
solving
$b=-7$
$a=49$
That's the same as what I got.
3. Your solution is correct, however there is a much easier way of getting to it:
Simply look at the equation at $x=0$ and $x=-b$ (you can do this since you are told that it holds for any $x$).
Then, you get: $x=0$:
$0^2 -14*0 + a = (0 + b)^2 \Rightarrow a = b^2$
$x=-b$:
$(-b)^2 +14b + a = (b-b)^2 = 0 \Rightarrow b^2 + 14b + b^2 = 0 \Rightarrow$
$\Rightarrow 2b^2 + 14b = 0 \Rightarrow 2b(b+7) = 0 \Rightarrow b = 0,-7$
This gives us two possible solutions, however $a=b=0$ is obviously wrong! so we are left with $\boxed{b = -7, a = 49}$
4. Since $x^2+2bx+b^2$ and $x^2-14x+a$ are identical, their coefficients will be in proportion.
i.e. $1=-14/2b = (b^2)/a$
so $2b=-14$ and $a=b^2$
So $b=-7 , a=49$
5. thanks for the replies. its good to see it approached from different angles.
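As an editorial aside, the coefficient comparison used in this thread can also be checked symbolically (a sketch assuming sympy is available):

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
eq = sp.expand(x**2 - 14*x + a - (x + b)**2)   # must vanish identically in x
system = [eq.coeff(x, 1), eq.coeff(x, 0)]      # coefficient of x and constant term
print(sp.solve(system, [a, b]))                # a = 49, b = -7
```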
|
2015-05-30 13:40:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 30, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9041321873664856, "perplexity": 447.6160345090291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207931085.38/warc/CC-MAIN-20150521113211-00003-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=0102&L=LATEX-L&D=0&H=A&S=a&P=3168589
|
## LATEX-L@LISTSERV.UNI-HEIDELBERG.DE
On Sat, Sep 18, 2010 at 09:57:04AM +0200, Frank Mittelbach wrote:
> Joseph Wright writes:
> > On 17/09/2010 20:36, Frank Mittelbach wrote:
> > > since you already looked at the different implementations, any suggestion on
> > > how to best improve the LaTeX2e behaviour?
> >
> > I take it that this does possibly count as a bug then, rather than as a
> > 'feature'?
>
> it doesn't count as a bug I would say, for the simple reason that per
> specification the optional argument of \usepackage doesn't support a key value
> list but just a list of option names (that themselves have no spaces)
> separated by comma. Only later packages appeared that internally added
> something like keyval and manipulated the received option list from
> \usepackage as a key/val list, but that happens after \usepackage has already
> removed spaces and unfortunately in this case also the braces.
>
> pragmatic approach:
>
> you state that one needs to say ={,} to make things work as this is a fairly
> seldom issue
>
> more elaborate approach
>
> change \@pass@ptions so that it only removes spaces up front, around a comma
> and at the end
This isn't enough, there are many other places to fix. For example,
LaTeX removes used options from the global unused option list.
Because of the braces \@removeelement will break, because the option
is used inside the parameter text, where catcode 1 and 2 tokens are
forbidden.
Yours sincerely
|
2023-03-23 11:11:45
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9379057288169861, "perplexity": 14884.228458535403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945144.17/warc/CC-MAIN-20230323100829-20230323130829-00013.warc.gz"}
|
https://mathoverflow.net/questions/109306/genus-2-curves-vs-abelian-surfaces
|
# Genus 2 curves vs Abelian surfaces
In the Satake compactification of abelian surfaces we have the following degeneration of a family of abelian surfaces in $\mathbf{H}_2$
$\lim_{t \to \infty}\begin{pmatrix} it & b \\ b & \tau\end{pmatrix} = \tau.$
Since $M_2$ is an open subset of $A_2$, it is natural to look for a family of genus 2 curves depending on $t$ which gives the previous family of period matrices.
Can you describe explicitely such a family of genus 2 curves?
The analytic solution is easy enough to describe: compute the gradients of the theta function at the six odd 2-torsion points, and projectivize these gradients. You now have six points on the projective line. These are the 6 Weierstrass points of the curve; alternatively, there is the "Rosenhain normal form", which expresses the Weierstrass points in terms of (quotients and products of) values of the theta function at even 2-torsion points.
In light of these two constructions, and since theta functions involve non-algebraic functions, I doubt the existence of an algebraic expression for the family of curves you want.
|
2015-12-01 11:27:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7094408273696899, "perplexity": 325.54146842147435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398466260.18/warc/CC-MAIN-20151124205426-00177-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://en.m.wikipedia.org/wiki/Random_variable
|
Random variable
A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events.[1] It is a mapping or a function from possible outcomes (e.g., the possible upper sides of a flipped coin such as heads ${\displaystyle H}$ and tails ${\displaystyle T}$) in a sample space (e.g., the set ${\displaystyle \{H,T\}}$) to a measurable space, often the real numbers (e.g., ${\displaystyle \{-1,1\}}$, with 1 corresponding to ${\displaystyle H}$ and -1 corresponding to ${\displaystyle T}$).
This graph shows how random variable is a function from all possible outcomes to real values. It also shows how random variable is used for defining probability mass functions.
Informally, randomness typically represents some fundamental element of chance, such as in the roll of a dice; it may also represent uncertainty, such as measurement error.[1] However, the interpretation of probability is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous axiomatic setup.
In the formal mathematical language of measure theory, a random variable is defined as a measurable function from a probability measure space (called the sample space) to a measurable space. This allows consideration of the pushforward measure, which is called the distribution of the random variable; the distribution is thus a probability measure on the set of all possible values of the random variable. It is possible for two random variables to have identical distributions but to differ in significant ways; for instance, they may be independent.
It is common to consider the special cases of discrete random variables and absolutely continuous random variables, corresponding to whether a random variable is valued in a discrete set (such as a finite set) or in an interval of real numbers. There are other important possibilities, especially in the theory of stochastic processes, wherein it is natural to consider random sequences or random functions. Sometimes a random variable is taken to be automatically valued in the real numbers, with more general random quantities instead being called random elements.
According to George Mackey, Pafnuty Chebyshev was the first person "to think systematically in terms of random variables".[2]
Definition
A random variable ${\displaystyle X}$ is a measurable function ${\displaystyle X\colon \Omega \to E}$ from a sample space ${\displaystyle \Omega }$ as a set of possible outcomes to a measurable space ${\displaystyle E}$ . The technical axiomatic definition requires the sample space ${\displaystyle \Omega }$ to be a sample space of a probability triple ${\displaystyle (\Omega ,{\mathcal {F}},\operatorname {P} )}$ (see the measure-theoretic definition). A random variable is often denoted by capital roman letters such as ${\displaystyle X}$ , ${\displaystyle Y}$ , ${\displaystyle Z}$ , ${\displaystyle T}$ .[3]
The probability that ${\displaystyle X}$ takes on a value in a measurable set ${\displaystyle S\subseteq E}$ is written as
${\displaystyle \operatorname {P} (X\in S)=\operatorname {P} (\{\omega \in \Omega \mid X(\omega )\in S\})}$
Standard case
In many cases, ${\displaystyle X}$ is real-valued, i.e. ${\displaystyle E=\mathbb {R} }$ . In some contexts, the term random element (see extensions) is used to denote a random variable not of this form.
When the image (or range) of ${\displaystyle X}$ is countable, the random variable is called a discrete random variable[4]: 399 and its distribution is a discrete probability distribution, i.e. can be described by a probability mass function that assigns a probability to each value in the image of ${\displaystyle X}$ . If the image is uncountably infinite (usually an interval) then ${\displaystyle X}$ is called a continuous random variable.[5][6] In the special case that it is absolutely continuous, its distribution can be described by a probability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous,[7] a mixture distribution is one such counterexample; such random variables cannot be described by a probability density or a probability mass function.
Any random variable can be described by its cumulative distribution function, which describes the probability that the random variable will be less than or equal to a certain value.
Extensions
The term "random variable" in statistics is traditionally limited to the real-valued case (${\displaystyle E=\mathbb {R} }$ ). In this case, the structure of the real numbers makes it possible to define quantities such as the expected value and variance of a random variable, its cumulative distribution function, and the moments of its distribution.
However, the definition above is valid for any measurable space ${\displaystyle E}$ of values. Thus one can consider random elements of other sets ${\displaystyle E}$ , such as random boolean values, categorical values, complex numbers, vectors, matrices, sequences, trees, sets, shapes, manifolds, and functions. One may then specifically refer to a random variable of type ${\displaystyle E}$ , or an ${\displaystyle E}$ -valued random variable.
This more general concept of a random element is particularly useful in disciplines such as graph theory, machine learning, natural language processing, and other fields in discrete mathematics and computer science, where one is often interested in modeling the random variation of non-numerical data structures. In some cases, it is nonetheless convenient to represent each element of ${\displaystyle E}$ , using one or more real numbers. In this case, a random element may optionally be represented as a vector of real-valued random variables (all defined on the same underlying probability space ${\displaystyle \Omega }$ , which allows the different random variables to covary). For example:
• A random word may be represented as a random integer that serves as an index into the vocabulary of possible words. Alternatively, it can be represented as a random indicator vector, whose length equals the size of the vocabulary, where the only values of positive probability are ${\displaystyle (1\ 0\ 0\ 0\ \cdots )}$ , ${\displaystyle (0\ 1\ 0\ 0\ \cdots )}$ , ${\displaystyle (0\ 0\ 1\ 0\ \cdots )}$ and the position of the 1 indicates the word.
• A random sentence of given length ${\displaystyle N}$ may be represented as a vector of ${\displaystyle N}$ random words.
• A random graph on ${\displaystyle N}$ given vertices may be represented as a ${\displaystyle N\times N}$ matrix of random variables, whose values specify the adjacency matrix of the random graph.
• A random function ${\displaystyle F}$ may be represented as a collection of random variables ${\displaystyle F(x)}$ , giving the function's values at the various points ${\displaystyle x}$ in the function's domain. The ${\displaystyle F(x)}$ are ordinary real-valued random variables provided that the function is real-valued. For example, a stochastic process is a random function of time, a random vector is a random function of some index set such as ${\displaystyle 1,2,\ldots ,n}$ , and random field is a random function on any set (typically time, space, or a discrete set).
Distribution functions
If a random variable ${\displaystyle X\colon \Omega \to \mathbb {R} }$ defined on the probability space ${\displaystyle (\Omega ,{\mathcal {F}},\operatorname {P} )}$ is given, we can ask questions like "How likely is it that the value of ${\displaystyle X}$ is equal to 2?". This is the same as the probability of the event ${\displaystyle \{\omega :X(\omega )=2\}\,\!}$ which is often written as ${\displaystyle P(X=2)\,\!}$ or ${\displaystyle p_{X}(2)}$ for short.
Recording all these probabilities of outputs of a random variable ${\displaystyle X}$ yields the probability distribution of ${\displaystyle X}$ . The probability distribution "forgets" about the particular probability space used to define ${\displaystyle X}$ and only records the probabilities of various output values of ${\displaystyle X}$ . Such a probability distribution, if ${\displaystyle X}$ is real-valued, can always be captured by its cumulative distribution function
${\displaystyle F_{X}(x)=\operatorname {P} (X\leq x)}$
and sometimes also using a probability density function, ${\displaystyle f_{X}}$ . In measure-theoretic terms, we use the random variable ${\displaystyle X}$ to "push-forward" the measure ${\displaystyle P}$ on ${\displaystyle \Omega }$ to a measure ${\displaystyle p_{X}}$ on ${\displaystyle \mathbb {R} }$ . The measure ${\displaystyle p_{X}}$ is called the "(probability) distribution of ${\displaystyle X}$ " or the "law of ${\displaystyle X}$ ". [8] The density ${\displaystyle f_{X}=dp_{X}/d\mu }$ , the Radon–Nikodym derivative of ${\displaystyle p_{X}}$ with respect to some reference measure ${\displaystyle \mu }$ on ${\displaystyle \mathbb {R} }$ (often, this reference measure is the Lebesgue measure in the case of continuous random variables, or the counting measure in the case of discrete random variables). The underlying probability space ${\displaystyle \Omega }$ is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such as correlation and dependence or independence based on a joint distribution of two or more random variables on the same probability space. In practice, one often disposes of the space ${\displaystyle \Omega }$ altogether and just puts a measure on ${\displaystyle \mathbb {R} }$ that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables. See the article on quantile functions for fuller development.
Examples
Discrete random variable
In an experiment a person may be chosen at random, and one random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to the person's height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190 cm, or the probability that the height is either less than 150 or more than 200 cm.
Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sum ${\displaystyle \operatorname {PMF} (0)+\operatorname {PMF} (2)+\operatorname {PMF} (4)+\cdots }$ .
In examples such as these, the sample space is often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, for example so that questions of whether such random variables are correlated or not can be posed.
If ${\textstyle \{a_{n}\},\{b_{n}\}}$ are countable sets of real numbers, ${\textstyle b_{n}>0}$ and ${\textstyle \sum _{n}b_{n}=1}$ , then ${\textstyle F=\sum _{n}b_{n}\delta _{a_{n}}(x)}$ is a discrete distribution function. Here ${\displaystyle \delta _{t}(x)=0}$ for ${\displaystyle x<t}$ , ${\displaystyle \delta _{t}(x)=1}$ for ${\displaystyle x\geq t}$ . Taking for instance an enumeration of all rational numbers as ${\displaystyle \{a_{n}\}}$ , one gets a discrete function that is not necessarily a step function (piecewise constant).
Coin toss
The possible outcomes for one coin toss can be described by the sample space ${\displaystyle \Omega =\{{\text{heads}},{\text{tails}}\}}$ . We can introduce a real-valued random variable ${\displaystyle Y}$ that models a \$1 payoff for a successful bet on heads as follows:
${\displaystyle Y(\omega )={\begin{cases}1,&{\text{if }}\omega ={\text{heads}},\\[6pt]0,&{\text{if }}\omega ={\text{tails}}.\end{cases}}}$
If the coin is a fair coin, Y has a probability mass function ${\displaystyle f_{Y}}$ given by:
${\displaystyle f_{Y}(y)={\begin{cases}{\tfrac {1}{2}},&{\text{if }}y=1,\\[6pt]{\tfrac {1}{2}},&{\text{if }}y=0,\end{cases}}}$
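To make this concrete in code (an illustrative addition, not part of the article), one can simulate the payoff variable $Y$ and estimate its mass function; for a fair coin both relative frequencies should come out near 1/2.

```python
import numpy as np

rng = np.random.default_rng(42)
omega = rng.choice(["heads", "tails"], size=100_000)   # outcomes drawn from the sample space
y = np.where(omega == "heads", 1, 0)                   # the random variable Y(omega)

# empirical estimates of f_Y(1) and f_Y(0); both should be close to 0.5
print((y == 1).mean(), (y == 0).mean())
```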
Dice roll
If the sample space is the set of possible numbers rolled on two dice, and the random variable of interest is the sum S of the numbers on the two dice, then S is a discrete random variable whose distribution is described by the probability mass function plotted as the height of picture columns here.
A random variable can also be used to describe the process of rolling dice and the possible outcomes. The most obvious representation for the two-dice case is to take the set of pairs of numbers n1 and n2 from {1, 2, 3, 4, 5, 6} (representing the numbers on the two dice) as the sample space. The total number rolled (the sum of the numbers in each pair) is then a random variable X given by the function that maps the pair to the sum:
${\displaystyle X((n_{1},n_{2}))=n_{1}+n_{2}}$
and (if the dice are fair) has a probability mass function fX given by:
${\displaystyle f_{X}(S)={\frac {\min(S-1,13-S)}{36}},{\text{ for }}S\in \{2,3,4,5,6,7,8,9,10,11,12\}}$
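The stated mass function can be verified by brute-force enumeration of the 36 equally likely outcomes (an illustrative addition, not part of the article):

```python
from collections import Counter
from fractions import Fraction

# count how many of the 36 outcomes give each sum
counts = Counter(n1 + n2 for n1 in range(1, 7) for n2 in range(1, 7))

for s in range(2, 13):
    assert Fraction(counts[s], 36) == Fraction(min(s - 1, 13 - s), 36)
    print(s, Fraction(counts[s], 36))
```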
Continuous random variable
Formally, a continuous random variable is a random variable whose cumulative distribution function is continuous everywhere.[9] There are no "gaps", which would correspond to numbers which have a finite probability of occurring. Instead, continuous random variables almost never take an exact prescribed value c (formally, ${\textstyle \forall c\in \mathbb {R} :\;\Pr(X=c)=0}$ ) but there is a positive probability that its value will lie in particular intervals which can be arbitrarily small. Continuous random variables usually admit probability density functions (PDF), which characterize their CDF and probability measures; such distributions are also called absolutely continuous; but some continuous distributions are singular, or mixes of an absolutely continuous part and a singular part.
An example of a continuous random variable would be one based on a spinner that can choose a horizontal direction. Then the values taken by the random variable are directions. We could represent these directions by North, West, East, South, Southeast, etc. However, it is commonly more convenient to map the sample space to a random variable which takes values which are real numbers. This can be done, for example, by mapping a direction to a bearing in degrees clockwise from North. The random variable then takes values which are real numbers from the interval [0, 360), with all parts of the range being "equally likely". In this case, X = the angle spun. Any real number has probability zero of being selected, but a positive probability can be assigned to any range of values. For example, the probability of choosing a number in [0, 180] is 1/2. Instead of speaking of a probability mass function, we say that the probability density of X is 1/360. The probability of a subset of [0, 360) can be calculated by multiplying the measure of the set by 1/360. In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set.
More formally, given any interval ${\textstyle I=[a,b]=\{x\in \mathbb {R} :a\leq x\leq b\}}$ , a random variable ${\displaystyle X_{I}\sim \operatorname {U} (I)=\operatorname {U} [a,b]}$ is called a "continuous uniform random variable" (CURV) if the probability that it takes a value in a subinterval depends only on the length of the subinterval. This implies that the probability of ${\displaystyle X_{I}}$ falling in any subinterval ${\displaystyle [c,d]\subseteq [a,b]}$ is proportional to the length of the subinterval, that is, if ${\displaystyle a\leq c\leq d\leq b}$ , one has
${\displaystyle \Pr \left(X_{I}\in [c,d]\right)={\frac {d-c}{b-a}}}$
where the last equality results from the unitarity axiom of probability. The probability density function of a CURV ${\displaystyle X\sim \operatorname {U} [a,b]}$ is given by the indicator function of its interval of support normalized by the interval's length:
${\displaystyle f_{X}(x)={\begin{cases}\displaystyle {1 \over b-a},&a\leq x\leq b\\0,&{\text{otherwise}}.\end{cases}}}$
Of particular interest is the uniform distribution on the unit interval ${\displaystyle [0,1]}$ . Samples of any desired probability distribution ${\displaystyle \operatorname {D} }$ can be generated by calculating the quantile function of ${\displaystyle \operatorname {D} }$ on a randomly-generated number distributed uniformly on the unit interval. This exploits properties of cumulative distribution functions, which are a unifying framework for all random variables.
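As a sketch of this last remark (an editorial addition; the exponential distribution is used purely as an example), applying the quantile function of $\operatorname{Exp}(\lambda)$, namely $-\ln(1-u)/\lambda$, to uniform samples on the unit interval produces exponentially distributed samples:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
u = rng.uniform(0.0, 1.0, size=100_000)   # U ~ Uniform[0, 1]
x = -np.log(1.0 - u) / lam                # quantile function of Exp(lam) applied to U

# the empirical mean should be close to the theoretical mean 1/lam = 0.5
print(x.mean())
```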
Mixed type
A mixed random variable is a random variable whose cumulative distribution function is neither discrete nor everywhere-continuous.[9] It can be realized as a mixture of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables.[9]
An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = −1; otherwise X = the value of the spinner as in the preceding example. There is a probability of 1/2 that this random variable will have the value −1. Other ranges of values would have half the probabilities of the last example.
Most generally, every probability distribution on the real line is a mixture of discrete part, singular part, and an absolutely continuous part; see Lebesgue's decomposition theorem § Refinement. The discrete part is concentrated on a countable set, but this set may be dense (like the set of all rational numbers).
Measure-theoretic definition
The most formal, axiomatic definition of a random variable involves measure theory. Continuous random variables are defined in terms of sets of numbers, along with functions that map such sets to probabilities. Because of various difficulties (e.g. the Banach–Tarski paradox) that arise if such sets are insufficiently constrained, it is necessary to introduce what is termed a sigma-algebra to constrain the possible sets over which probabilities can be defined. Normally, a particular such sigma-algebra is used, the Borel σ-algebra, which allows for probabilities to be defined over any sets that can be derived either directly from continuous intervals of numbers or by a finite or countably infinite number of unions and/or intersections of such intervals.[10]
The measure-theoretic definition is as follows.
Let ${\displaystyle (\Omega ,{\mathcal {F}},P)}$ be a probability space and ${\displaystyle (E,{\mathcal {E}})}$ a measurable space. Then an ${\displaystyle (E,{\mathcal {E}})}$ -valued random variable is a measurable function ${\displaystyle X\colon \Omega \to E}$ , which means that, for every subset ${\displaystyle B\in {\mathcal {E}}}$ , its preimage is ${\displaystyle {\mathcal {F}}}$ -measurable; ${\displaystyle X^{-1}(B)\in {\mathcal {F}}}$ , where ${\displaystyle X^{-1}(B)=\{\omega :X(\omega )\in B\}}$ .[11] This definition enables us to measure any subset ${\displaystyle B\in {\mathcal {E}}}$ in the target space by looking at its preimage, which by assumption is measurable.
In more intuitive terms, a member of ${\displaystyle \Omega }$ is a possible outcome, a member of ${\displaystyle {\mathcal {F}}}$ is a measurable subset of possible outcomes, the function ${\displaystyle P}$ gives the probability of each such measurable subset, ${\displaystyle E}$ represents the set of values that the random variable can take (such as the set of real numbers), and a member of ${\displaystyle {\mathcal {E}}}$ is a "well-behaved" (measurable) subset of ${\displaystyle E}$ (those for which the probability may be determined). The random variable is then a function from any outcome to a quantity, such that the outcomes leading to any useful subset of quantities for the random variable have a well-defined probability.
When ${\displaystyle E}$ is a topological space, then the most common choice for the σ-algebra ${\displaystyle {\mathcal {E}}}$ is the Borel σ-algebra ${\displaystyle {\mathcal {B}}(E)}$ , which is the σ-algebra generated by the collection of all open sets in ${\displaystyle E}$ . In such case the ${\displaystyle (E,{\mathcal {E}})}$ -valued random variable is called an ${\displaystyle E}$ -valued random variable. Moreover, when the space ${\displaystyle E}$ is the real line ${\displaystyle \mathbb {R} }$ , then such a real-valued random variable is called simply a random variable.
Real-valued random variables
In this case the observation space is the set of real numbers. Recall, ${\displaystyle (\Omega ,{\mathcal {F}},P)}$ is the probability space. For a real observation space, the function ${\displaystyle X\colon \Omega \rightarrow \mathbb {R} }$ is a real-valued random variable if
${\displaystyle \{\omega :X(\omega )\leq r\}\in {\mathcal {F}}\qquad \forall r\in \mathbb {R} .}$
This definition is a special case of the above because the set ${\displaystyle \{(-\infty ,r]:r\in \mathbb {R} \}}$ generates the Borel σ-algebra on the set of real numbers, and it suffices to check measurability on any generating set. Here we can prove measurability on this generating set by using the fact that ${\displaystyle \{\omega :X(\omega )\leq r\}=X^{-1}((-\infty ,r])}$ .
Moments
The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept of expected value of a random variable, denoted ${\displaystyle \operatorname {E} [X]}$ , and also called the first moment. In general, ${\displaystyle \operatorname {E} [f(X)]}$ is not equal to ${\displaystyle f(\operatorname {E} [X])}$ . Once the "average value" is known, one could then ask how far from this average value the values of ${\displaystyle X}$ typically are, a question that is answered by the variance and standard deviation of a random variable. ${\displaystyle \operatorname {E} [X]}$ can be viewed intuitively as an average obtained from an infinite population, the members of which are particular evaluations of ${\displaystyle X}$ .
Mathematically, this is known as the (generalised) problem of moments: for a given class of random variables ${\displaystyle X}$ , find a collection ${\displaystyle \{f_{i}\}}$ of functions such that the expectation values ${\displaystyle \operatorname {E} [f_{i}(X)]}$ fully characterise the distribution of the random variable ${\displaystyle X}$ .
Moments can only be defined for real-valued functions of random variables (or complex-valued, etc.). If the random variable is itself real-valued, then moments of the variable itself can be taken, which are equivalent to moments of the identity function ${\displaystyle f(X)=X}$ of the random variable. However, even for non-real-valued random variables, moments can be taken of real-valued functions of those variables. For example, for a categorical random variable X that can take on the nominal values "red", "blue" or "green", the real-valued function ${\displaystyle [X={\text{green}}]}$ can be constructed; this uses the Iverson bracket, and has the value 1 if ${\displaystyle X}$ has the value "green", 0 otherwise. Then, the expected value and other moments of this function can be determined.
Functions of random variables
A new random variable Y can be defined by applying a real Borel measurable function ${\displaystyle g\colon \mathbb {R} \rightarrow \mathbb {R} }$ to the outcomes of a real-valued random variable ${\displaystyle X}$ . That is, ${\displaystyle Y=g(X)}$ . The cumulative distribution function of ${\displaystyle Y}$ is then
${\displaystyle F_{Y}(y)=\operatorname {P} (g(X)\leq y).}$
If function ${\displaystyle g}$ is invertible (i.e., ${\displaystyle h=g^{-1}}$ exists, where ${\displaystyle h}$ is ${\displaystyle g}$ 's inverse function) and is either increasing or decreasing, then the previous relation can be extended to obtain
${\displaystyle F_{Y}(y)=\operatorname {P} (g(X)\leq y)={\begin{cases}\operatorname {P} (X\leq h(y))=F_{X}(h(y)),&{\text{if }}h=g^{-1}{\text{ increasing}},\\\\\operatorname {P} (X\geq h(y))=1-F_{X}(h(y)),&{\text{if }}h=g^{-1}{\text{ decreasing}}.\end{cases}}}$
With the same hypotheses of invertibility of ${\displaystyle g}$ , assuming also differentiability, the relation between the probability density functions can be found by differentiating both sides of the above expression with respect to ${\displaystyle y}$ , in order to obtain[9]
${\displaystyle f_{Y}(y)=f_{X}{\bigl (}h(y){\bigr )}\left|{\frac {dh(y)}{dy}}\right|.}$
If there is no invertibility of ${\displaystyle g}$ but each ${\displaystyle y}$ admits at most a countable number of roots (i.e., a finite, or countably infinite, number of ${\displaystyle x_{i}}$ such that ${\displaystyle y=g(x_{i})}$ ) then the previous relation between the probability density functions can be generalized with
${\displaystyle f_{Y}(y)=\sum _{i}f_{X}(g_{i}^{-1}(y))\left|{\frac {dg_{i}^{-1}(y)}{dy}}\right|}$
where ${\displaystyle x_{i}=g_{i}^{-1}(y)}$ , according to the inverse function theorem. The formulas for densities do not demand ${\displaystyle g}$ to be increasing.
In the measure-theoretic, axiomatic approach to probability, if ${\displaystyle X}$ is a random variable on ${\displaystyle \Omega }$ and ${\displaystyle g\colon \mathbb {R} \rightarrow \mathbb {R} }$ is a Borel measurable function, then ${\displaystyle Y=g(X)}$ is also a random variable on ${\displaystyle \Omega }$ , since the composition of measurable functions is also measurable. (However, this is not necessarily true if ${\displaystyle g}$ is Lebesgue measurable.[citation needed]) The same procedure that allowed one to go from a probability space ${\displaystyle (\Omega ,P)}$ to ${\displaystyle (\mathbb {R} ,dF_{X})}$ can be used to obtain the distribution of ${\displaystyle Y}$ .
Example 1
Let ${\displaystyle X}$ be a real-valued, continuous random variable and let ${\displaystyle Y=X^{2}}$ .
${\displaystyle F_{Y}(y)=\operatorname {P} (X^{2}\leq y).}$
If ${\displaystyle y<0}$ , then ${\displaystyle P(X^{2}\leq y)=0}$ , so
${\displaystyle F_{Y}(y)=0\qquad {\hbox{if}}\quad y<0.}$
If ${\displaystyle y\geq 0}$ , then
${\displaystyle \operatorname {P} (X^{2}\leq y)=\operatorname {P} (|X|\leq {\sqrt {y}})=\operatorname {P} (-{\sqrt {y}}\leq X\leq {\sqrt {y}}),}$
so
${\displaystyle F_{Y}(y)=F_{X}({\sqrt {y}})-F_{X}(-{\sqrt {y}})\qquad {\hbox{if}}\quad y\geq 0.}$
Example 2
Suppose ${\displaystyle X}$ is a random variable with a cumulative distribution
${\displaystyle F_{X}(x)=P(X\leq x)={\frac {1}{(1+e^{-x})^{\theta }}}}$
where ${\displaystyle \theta >0}$ is a fixed parameter. Consider the random variable ${\displaystyle Y=\mathrm {log} (1+e^{-X}).}$ Then,
${\displaystyle F_{Y}(y)=P(Y\leq y)=P(\mathrm {log} (1+e^{-X})\leq y)=P(X\geq -\mathrm {log} (e^{y}-1)).\,}$
The last expression can be calculated in terms of the cumulative distribution of ${\displaystyle X,}$ so
{\displaystyle {\begin{aligned}F_{Y}(y)&=1-F_{X}(-\log(e^{y}-1))\\[5pt]&=1-{\frac {1}{(1+e^{\log(e^{y}-1)})^{\theta }}}\\[5pt]&=1-{\frac {1}{(1+e^{y}-1)^{\theta }}}\\[5pt]&=1-e^{-y\theta }.\end{aligned}}}
which is the cumulative distribution function (CDF) of an exponential distribution.
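A numerical cross-check of this calculation (an editorial addition, not part of the article): sampling $X$ by inverting its CDF and then transforming, the resulting $Y$ should behave like an exponential variable with rate $\theta$; here only the mean, which should be near $1/\theta$, is compared.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 2.5
u = rng.uniform(size=200_000)
x = -np.log(u**(-1.0/theta) - 1.0)   # inverse of F_X(x) = 1/(1+e^(-x))^theta
y = np.log(1.0 + np.exp(-x))         # Y = log(1 + e^(-X))

# an exponential distribution with rate theta has mean 1/theta
print(y.mean(), 1.0/theta)
```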
Example 3
Suppose ${\displaystyle X}$ is a random variable with a standard normal distribution, whose density is
${\displaystyle f_{X}(x)={\frac {1}{\sqrt {2\pi }}}e^{-x^{2}/2}.}$
Consider the random variable ${\displaystyle Y=X^{2}.}$ We can find the density using the above formula for a change of variables:
${\displaystyle f_{Y}(y)=\sum _{i}f_{X}(g_{i}^{-1}(y))\left|{\frac {dg_{i}^{-1}(y)}{dy}}\right|.}$
In this case the change is not monotonic, because every value of ${\displaystyle Y}$ has two corresponding values of ${\displaystyle X}$ (one positive and negative). However, because of symmetry, both halves will transform identically, i.e.,
${\displaystyle f_{Y}(y)=2f_{X}(g^{-1}(y))\left|{\frac {dg^{-1}(y)}{dy}}\right|.}$
The inverse transformation is
${\displaystyle x=g^{-1}(y)={\sqrt {y}}}$
and its derivative is
${\displaystyle {\frac {dg^{-1}(y)}{dy}}={\frac {1}{2{\sqrt {y}}}}.}$
Then,
${\displaystyle f_{Y}(y)=2{\frac {1}{\sqrt {2\pi }}}e^{-y/2}{\frac {1}{2{\sqrt {y}}}}={\frac {1}{\sqrt {2\pi y}}}e^{-y/2}.}$
This is a chi-squared distribution with one degree of freedom.
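A quick Monte Carlo check of this result (an editorial addition): squaring standard normal samples should give a variable with mean 1 and variance 2, the first two moments of a chi-squared distribution with one degree of freedom.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)
y = x**2                      # Y = X^2

# chi-squared with one degree of freedom has mean 1 and variance 2
print(y.mean(), y.var())
```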
Example 4
Suppose ${\displaystyle X}$ is a random variable with a normal distribution, whose density is
${\displaystyle f_{X}(x)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-(x-\mu )^{2}/(2\sigma ^{2})}.}$
Consider the random variable ${\displaystyle Y=X^{2}.}$ We can find the density using the above formula for a change of variables:
${\displaystyle f_{Y}(y)=\sum _{i}f_{X}(g_{i}^{-1}(y))\left|{\frac {dg_{i}^{-1}(y)}{dy}}\right|.}$
In this case the change is not monotonic, because every value of ${\displaystyle Y}$ has two corresponding values of ${\displaystyle X}$ (one positive and negative). Differently from the previous example, in this case however, there is no symmetry and we have to compute the two distinct terms:
${\displaystyle f_{Y}(y)=f_{X}(g_{1}^{-1}(y))\left|{\frac {dg_{1}^{-1}(y)}{dy}}\right|+f_{X}(g_{2}^{-1}(y))\left|{\frac {dg_{2}^{-1}(y)}{dy}}\right|.}$
The inverse transformation is
${\displaystyle x=g_{1,2}^{-1}(y)=\pm {\sqrt {y}}}$
and its derivative is
${\displaystyle {\frac {dg_{1,2}^{-1}(y)}{dy}}=\pm {\frac {1}{2{\sqrt {y}}}}.}$
Then,
${\displaystyle f_{Y}(y)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}{\frac {1}{2{\sqrt {y}}}}(e^{-({\sqrt {y}}-\mu )^{2}/(2\sigma ^{2})}+e^{-(-{\sqrt {y}}-\mu )^{2}/(2\sigma ^{2})}).}$
This is a noncentral chi-squared distribution with one degree of freedom.
Some properties
• The probability distribution of the sum of two independent random variables is the convolution of each of their distributions.
• Probability distributions are not a vector space—they are not closed under linear combinations, as these do not preserve non-negativity or total integral 1—but they are closed under convex combination, thus forming a convex subset of the space of functions (or measures).
Equivalence of random variables
There are several different senses in which random variables can be considered to be equivalent. Two random variables can be equal, equal almost surely, or equal in distribution.
In increasing order of strength, the precise definition of these notions of equivalence is given below.
Equality in distribution
If the sample space is a subset of the real line, random variables X and Y are equal in distribution (denoted ${\displaystyle X{\stackrel {d}{=}}Y}$ ) if they have the same distribution functions:
${\displaystyle \operatorname {P} (X\leq x)=\operatorname {P} (Y\leq x)\quad {\text{for all }}x.}$
To be equal in distribution, random variables need not be defined on the same probability space. Two random variables having equal moment generating functions have the same distribution. This provides, for example, a useful method of checking equality of certain functions of independent, identically distributed (IID) random variables. However, the moment generating function exists only for distributions that have a defined Laplace transform.
Almost sure equality
Two random variables X and Y are equal almost surely (denoted ${\displaystyle X\;{\stackrel {\text{a.s.}}{=}}\;Y}$ ) if, and only if, the probability that they are different is zero:
${\displaystyle \operatorname {P} (X\neq Y)=0.}$
For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated to the following distance:
${\displaystyle d_{\infty }(X,Y)=\operatorname {ess} \sup _{\omega }|X(\omega )-Y(\omega )|,}$
where "ess sup" represents the essential supremum in the sense of measure theory.
Equality
Finally, the two random variables X and Y are equal if they are equal as functions on their measurable space:
${\displaystyle X(\omega )=Y(\omega )\qquad {\hbox{for all }}\omega .}$
This notion is typically the least useful in probability theory because in practice and in theory, the underlying measure space of the experiment is rarely explicitly characterized or even characterizable.
Convergence
A significant theme in mathematical statistics consists of obtaining convergence results for certain sequences of random variables; for instance the law of large numbers and the central limit theorem.
There are various senses in which a sequence ${\displaystyle X_{n}}$ of random variables can converge to a random variable ${\displaystyle X}$ . These are explained in the article on convergence of random variables.
References
Inline citations
1. ^ a b Blitzstein, Joe; Hwang, Jessica (2014). Introduction to Probability. CRC Press. ISBN 9781466575592.
2. ^ George Mackey (July 1980). "Harmonic analysis as the exploitation of symmetry - a historical survey". Bulletin of the American Mathematical Society. New Series. 3 (1).
3. ^ "Random Variables". www.mathsisfun.com. Retrieved 2020-08-21.
4. ^ Yates, Daniel S.; Moore, David S; Starnes, Daren S. (2003). The Practice of Statistics (2nd ed.). New York: Freeman. ISBN 978-0-7167-4773-4. Archived from the original on 2005-02-09.
5. ^ "Random Variables". www.stat.yale.edu. Retrieved 2020-08-21.
6. ^ Dekking, Frederik Michel; Kraaikamp, Cornelis; Lopuhaä, Hendrik Paul; Meester, Ludolf Erwin (2005). "A Modern Introduction to Probability and Statistics". Springer Texts in Statistics. doi:10.1007/1-84628-168-7. ISBN 978-1-85233-896-1. ISSN 1431-875X.
7. ^ L. Castañeda; V. Arunachalam & S. Dharmaraja (2012). Introduction to Probability and Stochastic Processes with Applications. Wiley. p. 67. ISBN 9781118344941.
8. ^ Billingsley, Patrick (1995). Probability and Measure (3rd ed.). Wiley. p. 187. ISBN 9781466575592.
9. ^ a b c d Bertsekas, Dimitri P. (2002). Introduction to Probability. Tsitsiklis, John N., Τσιτσικλής, Γιάννης Ν. Belmont, Mass.: Athena Scientific. ISBN 188652940X. OCLC 51441829.
10. ^ Steigerwald, Douglas G. "Economics 245A – Introduction to Measure Theory" (PDF). University of California, Santa Barbara. Retrieved April 26, 2013.
11. ^ Fristedt & Gray (1996, page 11)
|
2022-11-30 08:44:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 213, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9176126718521118, "perplexity": 327.33355882914526}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710733.87/warc/CC-MAIN-20221130060525-20221130090525-00017.warc.gz"}
|
https://tex.stackexchange.com/questions/344648/overlay-synchronization-with-setcounterbeamerpauses-and-the-pause-comman
|
# Overlay synchronization with \setcounter{beamerpauses} and the \pause command
I'm very new to the wonderful beamer class.
I am attempting to create synchronized overlays by resetting the beamerpauses counter.
I do not understand why the \pause command does not play smoothly with this practice.
I came across the following workaround, \uncover<+>{} % or \only<+>{} or ... ?
but I would rather understand what is going on with pauses !
## My example
(Uncomment line 15, \pause, to break it)
\documentclass[11pt]{beamer}
\usepackage{tikz}
\newcommand\Number{3}
\begin{document}
\begin{frame}
\frametitle{Why does this break when commenting the pause ?}
\begin{columns}[t]
\column{.48\textwidth}
\foreach \k in {1,...,\Number}%
{%
\only<+>{Left overlay \k}%
}
%\pause %%%%%%%%%%%%% Why does this pause break everything ?
%\uncover<+>{} %%%%%%%%%%%%% but this works ...!
\column{.48\textwidth}
\setcounter{beamerpauses}{1}
\foreach \k in {1,...,\Number}%
{%
\only<+>{Right overlay \k}%
}
\end{columns}
\end{frame}
\end{document}
Cheers,
In general \pause is a very crude command; for anything where you need a bit more fine-grained control, commands such as \uncover<>{}, \only<>{}, \visible<>{} etc. work better [as you already noticed yourself].
As far as I understand your code, the \pause does not work because it adds another overlay before the second column is read -- so on the first three overlays beamer does not "see" the second column, it just knows how much space it should reserve. Afterwards it tries to add the second column, but as all the text in it is only displayed on the first three overlays, you get an empty page. You can maybe see this in the following example:
\documentclass[11pt]{beamer}
\usepackage{tikz}
\newcommand\Number{3}
\begin{document}
\begin{frame}
\frametitle{Why does this break when commenting the pause ?}
\begin{columns}[t]
\column{.48\textwidth}
\foreach \k in {1,...,\Number}%
{%
\only<+>{Left overlay \k}%
}
\pause %%%%%%%%%%%%% Why does this pause break everything ?
%\uncover<+>{} %%%%%%%%%%%%% but this works ...!
\column{.48\textwidth}
\setcounter{beamerpauses}{1}
\foreach \k in {1,...,\Number}%
{%
\only<+>{Right overlay \k}%
}
\only<5>{overlay 5}
\end{columns}
\end{frame}
\end{document}
|
2022-08-18 14:56:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7183839082717896, "perplexity": 4877.872417424952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573197.34/warc/CC-MAIN-20220818124424-20220818154424-00317.warc.gz"}
|
https://brilliant.org/discussions/thread/can-you-solve-it-2/
|
# Can you solve it?
Hi guys,
can you solve this puzzle??
The numbers to fit in are:
$$\boldsymbol \not{2},3,5,8,9,10,18,19,24,29,33,38$$
Try to solve it and compare with The solution
Note by Kaito Einstein
2 years, 4 months ago
2, 29, 38, 19, 10, 5, 18, 9, 8, 3, 33 and lastly 24 are placed left to right ( a la reading-style) and found in the same order easily.
- 2 years, 4 months ago
Hi, I'm not sure if this problem can be solved. Here are my steps, and please correct me if you spot an error (since my Math is quite noob) Filling in the most "obvious" blanks, the square has to be 9 since 9 is the only perfect square in the list. Based on this, the triangle next to the square has to be equal to either 3 or 5, since they are both single-digit numbers (I'm assuming that the "letters" mean "digits"). Checking again, the difference between 5 and 2 is a prime number (3) which fits the requirements. However, the only multiple of 5 in the list (excluding 5 as it has been used) is 10. Hence 10 has to go in the circle on the left of the triangle. There is a contradiction as 10 is a 2-digit number while 2 is a one-digit number? Have I made a mistake somewhere? >.<
- 2 years, 4 months ago
There is no contradiction because letters mean letters: the number of letters of $$10$$ ("ten") is $$3$$, and the same for $$2$$ ("two")
- 2 years, 4 months ago
Oh I see. So "letters" here means the number of letters in the spelling of the number...
- 2 years, 4 months ago
Yes that's it
- 2 years, 4 months ago
Great! I've gotten the same answer as in the solution.
- 2 years, 4 months ago
|
2017-12-16 20:54:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9911065101623535, "perplexity": 1531.568266497861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948589177.70/warc/CC-MAIN-20171216201436-20171216223436-00375.warc.gz"}
|
https://encyclopediaofmath.org/index.php?title=Curvature_lines,_net_of&oldid=43550
|
# Curvature lines, net of
An orthogonal net on a smooth hypersurface $V_{n-1}$ in an $n$-dimensional Euclidean space $E_n$ ($n\geq3$), formed by the curvature lines (cf. Curvature line). A net of curvature lines on $V_{n-1}$ is a conjugate net. E.g., if $V_2\subset E_3$ is a surface of revolution, the meridians and the parallels of latitude form a net of curvature lines. If $V_p\subset E_n$ ($2\leq p<n$) is a smooth $p$-dimensional surface with a field of one-dimensional normals such that the normal $[x,\mathbf n]$ of the field lies in the second-order differential neighbourhood of the point $x\in V_p$, then the normals of the field define curvature lines and a net of curvature lines on $V_p$, exactly as on $V_{n-1}$. However, a net of curvature lines on $V_p$ ($p<n-1$) need not be conjugate.
#### References
[1] L.P. Eisenhart, "Riemannian geometry", Princeton Univ. Press (1949)
[2] V.I. Shulikovskii, "Classical differential geometry in a tensor setting", Moscow (1963) (In Russian)
|
2020-09-19 19:00:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8667242527008057, "perplexity": 845.0174039525995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192783.34/warc/CC-MAIN-20200919173334-20200919203334-00553.warc.gz"}
|
https://stats.stackexchange.com/questions/31166/logistic-regression-residual-analysis
|
# Logistic regression residual analysis
This question is sort of general and long-winded, but please bear with me.
In my application, I have many datasets, each consisting of ~20,000 datapoints with ~50 features and a single dependent binary variable. I am attempting to model the datasets using regularized logistic regression (R package glmnet)
As part of my analysis, I've created residual plots as follows. For each feature, I sort the datapoints according to the value of that feature, divide the datapoints into 100 buckets, and then compute the average output value and the average prediction value within each bucket. I plot these differences.
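A minimal sketch of this bucketing procedure (my own reconstruction in numpy; the function and variable names are illustrative, not from the question):

```python
import numpy as np

def binned_residuals(feature, y_true, p_pred, n_buckets=100):
    """Sort by feature value, split into buckets, and return
    (mean feature value, mean observed y minus mean predicted p) per bucket.
    Expects 1-D numpy arrays of equal length."""
    order = np.argsort(feature)
    buckets = np.array_split(order, n_buckets)
    centers = np.array([feature[idx].mean() for idx in buckets])
    residuals = np.array([y_true[idx].mean() - p_pred[idx].mean() for idx in buckets])
    return centers, residuals
```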
Here is an example residual plot:
In the above plot, the feature has a range of [0,1] (with a heavy concentration at 1). As you can see, when the feature value is low, the model appears to be biased towards overestimating the likelihood of a 1-output. For example, in the leftmost bucket, the model overestimates the probability by about 9%.
Armed with this information, I'd like to alter the feature definition in a straightforward manner to roughly correct for this bias. Alterations like replacing
$x \rightarrow \sqrt{x}$
or
$x \rightarrow f_a(x) = \begin{cases} a & \text{if } x < a \\ x & \text{otherwise} \end{cases}$
How can I do this? I'm looking for a general methodology so that a human could quickly scroll through all ~50 plots and make alterations, and do this for all datasets and repeat often to keep models up-to-date as the data evolves over time.
As a general question, is this even the right approach? Google searches for "logistic regression residual analysis" don't return many results with good practical advice. They seem to be fixated on answering the question, "Is this model a good fit?" and offer various tests like Hosmer-Lemeshow to answer. But I don't care about whether my model is good, I want to know how to make it better!
You can't really assess the bias that way in logistic regression. Logistic regression is only expected to be unbiased on log odds or logit scores, log(p/(1-p)). The proportions will be skewed and therefore look biased. You need to plot the residuals in terms of log odds.
• How do I combine the log-odd residuals within a bucket? Arithmetic average? This is a little unsettling to me. Intuitively, if a residual analysis shows no bias, then I expect that when the model predicts Pr[y=1]<0.2, then y should equal 1 with probability less than 0.2. But your answer seems to suggest this is not the case. Am I understanding correctly? – dshin Jun 26 '12 at 22:30
• this is probably better posted as a comment. – probabilityislogic Jun 26 '12 at 22:47
• No David, it doesn't imply anything other than the 0.2 probability, maybe my edits make it more clear. – John Jun 27 '12 at 3:48
• Sorry, I'm still a little confused. My intuitive understanding of an unbiased model is that if the model predicts p=0.2 on every one of a large number of datapoints, then 20% of those datapoints should have y=1. Is this understanding correct? If so, then it seems my plotting methodology should correctly display bias. If not...then I'm not very happy with this concept of "bias"! If an unbiased model reading of 0.2 doesn't tell me anything about the probability that y=1, what good is unbiasedness? – dshin Jun 27 '12 at 4:57
• Yes, 20% should have y=1. But it's not going to be dead on, it's going to be off by some amount. In the probability space which direction do you think it will be off by and by how much? If it's unbiased it will fall equally in somewhere in the .2:1 or the 0:.2. However, as you can see by the size of those spaces they will tend to be farther away in the bigger area just because they can. In the logit space the distance away should be equal + or -. – John Jun 27 '12 at 15:13
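One possible reading of the suggestion to work on the log-odds scale (a hedged sketch, my own interpretation rather than the answerer's code): compare the logit of the observed rate in each bucket with the mean predicted logit.

```python
import numpy as np

def logit(p, eps=1e-9):
    p = np.clip(p, eps, 1 - eps)   # guard against log(0)
    return np.log(p / (1 - p))

def logit_binned_residuals(feature, y_true, p_pred, n_buckets=100):
    # residual per feature-sorted bucket, measured on the log-odds scale
    order = np.argsort(feature)
    buckets = np.array_split(order, n_buckets)
    return np.array([logit(y_true[idx].mean()) - logit(p_pred[idx]).mean()
                     for idx in buckets])
```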
There is unlikely to be any general software for doing this, most likely because there is no general theory for fixing issues in regression. Hence this is more of a "what I would do" type of answer rather than a theoretically grounded procedure.
The plot you produce is basically a visual HL test with 100 bins, but using a single predictor instead of the predicted probability to do the binning. This means your procedure is likely to inherit some of the properties of the HL test.
Your procedure sounds reasonable, although you should be aware of "overfitting" your criterion. Your criterion is also less useful as a diagnostic because it has become part of the estimation process. Also, whenever you do something by intuition, you should write down your decision-making process in as much detail as is practical, because you may discover the seeds of a general process or theory, which when developed leads to a better procedure (more automatic and optimal with respect to some theory).
I think one way to go is to first reduce the number of plots you need to investigate. One way to do this is to fit each variable as a cubic spline, and then investigate only the plots which have non-zero non-linear estimates. Given the number of data points, this is also an easy automatic fix for non-linearities. This will expand your model from 50 to 200+50k terms, where k is the number of knots. You could think of this as applying a "statistical Taylor series expansion" of the "true" transformation.
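For what it is worth, a minimal sketch of that spline idea (in Python/scikit-learn rather than glmnet; the knot count and penalty strength are arbitrary choices, not recommendations):

```python
# Expand every feature with a cubic spline basis, then fit an L1-penalised
# logistic regression. Features whose non-linear spline terms keep non-zero
# coefficients are the ones worth inspecting by eye.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer, StandardScaler
from sklearn.linear_model import LogisticRegression

model = make_pipeline(
    SplineTransformer(degree=3, n_knots=5, include_bias=False),
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000),
)
# model.fit(X, y)
# coefs = model.named_steps["logisticregression"].coef_.ravel()
```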
|
2019-08-17 17:38:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6287794709205627, "perplexity": 494.47563257726426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313436.2/warc/CC-MAIN-20190817164742-20190817190742-00439.warc.gz"}
|
https://www.semanticscholar.org/paper/Nanoscale-heat-engine-beyond-the-Carnot-limit.-Ro%C3%9Fnagel-Abah/bbd6ec1564c667f59d0f44e89894cc388c5e94c6
|
# Nanoscale heat engine beyond the Carnot limit.
@article{Ronagel2014NanoscaleHE,
title={Nanoscale heat engine beyond the Carnot limit.},
author={J. Ro{\ss}nagel and Obinna Abah and Ferdinand Schmidt-Kaler and Kilian Singer and Eric Lutz},
journal={Physical review letters},
year={2014},
volume={112 3},
pages={030602}
}
• Published 27 August 2013
• Physics
• Physical review letters
We consider a quantum Otto cycle for a time-dependent harmonic oscillator coupled to a squeezed thermal reservoir. We show that the efficiency at maximum power increases with the degree of squeezing, surpassing the standard Carnot limit and approaching unity exponentially for large squeezing parameters. We further propose an experimental scheme to implement such a model system by using a single trapped ion in a linear Paul trap with special geometry. Our analytical investigations are supported…
Two-level quantum Otto heat engine operating with unit efficiency far from the quasi-static regime under a squeezed reservoir
• Physics, Engineering
Journal of Physics B: Atomic, Molecular and Optical Physics
• 2021
Recent theoretical and experimental studies in quantum heat engines show that, in the quasi-static regime, it is possible to have higher efficiency than the limit imposed by Carnot, provided that
Quantum efficiency bound for continuous heat engines coupled to noncanonical reservoirs
• Physics
• 2017
We derive an efficiency bound for continuous quantum heat engines absorbing heat from squeezed thermal reservoirs. Our approach relies on a full-counting statistics description of nonequilibrium
Efficiency at maximum power of a laser quantum heat engine enhanced by noise-induced coherence.
• Physics
Physical review. E
• 2018
The lower and upper bounds for the performance of quantum heat engines determined by the efficiency at maximum power are reported and introducing a fourth level to the maser model can enhance the maximal power and its efficiency, thus demonstrating the importance of quantum coherence in the thermodynamics and operation of the heat engines beyond the classical limit.
A many-body heat engine at criticality
• Physics
Quantum Science and Technology
• 2020
We show that a quantum Otto cycle in which the medium, an interacting ultracold gas, is driven between a superfluid and an insulating phase can outperform similar single particle cycles. The presence
Unified trade-off optimization of quantum Otto heat engines with squeezed thermal reservoirs
• Physics
Quantum Inf. Process.
• 2020
The optimal efficiency at the unified trade-off optimization criterion representing a compromise between energy benefits and losses for a quantum Otto heat engine is systematically investigated.
Single Atom Heat Engine in a Tapered Ion Trap
• Physics
• 2018
Trapped ions in linear Paul traps have been established as a powerful platform for performing experiments in quantum optics, quantum computing, quantum simulation, quantum control, and more recently
Finite-time quantum measurement cooling beyond the Carnot limit
• Physics
• 2021
We proposed the finite-time cycle model of a measurement-based quantum cooler, where invasive measurement provides the power to drive the cooling cycle. Such a cooler may be regarded as an
A quantum heat machine from fast optomechanics
• Physics, Engineering
New Journal of Physics
• 2020
We consider a thermodynamic machine in which the working fluid is a quantized harmonic oscillator that is controlled on timescales that are much faster than the oscillator period. We find that
Attaining Carnot efficiency with quantum and nanoscale heat engines
• Physics
• 2021
A heat engine operating in the one-shot finite-size regime, where systems composed of a small number of quantum particles interact with hot and cold baths and are restricted to one-shot measurements,
## References
SHOWING 1-10 OF 140 REFERENCES
A quantum mechanical open system as a model of a heat engine
A quantum model of a heat engine is analyzed. This engine is constructed from two coupled oscillators in interaction with a warm and cold reservoir. Power is extracted by an external periodic driving
Extracting Work from a Single Heat Bath via Vanishing Quantum Coherence
• Physics
Science
• 2003
We present here a quantum Carnot engine in which the atoms in the heat bath are given a small bit of quantum coherence. The induced quantum coherence becomes vanishingly small in the high-temperature
Squeezed states with thermal noise. II. Damping and photon counting.
• Marian
• Physics
Physical review. A, Atomic, molecular, and optical physics
• 1993
The weak interaction of such a field with a heat bath of arbitrary temperature is shown to preserve the Gaussian form of the characteristic function, and simple analytic formulas for the counting distribution and its factorial moments are derived.
Irreversible performance of a quantum harmonic heat engine
• Physics
• 2006
The unavoidable irreversible loss of power in a heat engine is found to be of quantum origin. Following thermodynamic tradition, a model quantum heat engine operating in an Otto cycle is analysed,
Controlling fast transport of cold trapped ions.
• Physics
Physical review letters
• 2012
It is demonstrated that quantum information stored in a spin-motion entangled state is preserved throughout the transport in a segmented microstructured Paul trap, as a proof-of-principle for the shuttling-based architecture to scalable ion trap quantum computing.
Performance analysis of an irreversible quantum heat engine working with harmonic oscillators.
• Physics
Physical review. E, Statistical, nonlinear, and soft matter physics
• 2003
The cycle model of a regenerative quantum heat engine working with many noninteracting harmonic oscillators is established and the optimal performance of the cycle in high-temperature limit is discussed in detail.
Quantum Reservoir Engineering with Laser Cooled Trapped Ions.
• Physics
Physical review letters
• 1996
Different couplings between a single ion trapped in a harmonic potential and an environment are designed and the variation of the laser frequencies and intensities allows one to design the coupling and select the master equation describing the motion of the ion.
Cavity-assisted quantum bath engineering.
• Physics
Physical review letters
• 2012
By tailoring the spectrum of microwave photon shot noise in the cavity, this work creates a dissipative environment that autonomously relaxes the atom to an arbitrarily specified coherent superposition of the ground and excited states in the presence of background thermal excitations.
Heating of trapped ions from the quantum ground state
• Physics
• 2000
We have investigated motional heating of laser-cooled ${}^{9}{\mathrm{Be}}^{+}$ ions held in radio-frequency (Paul) traps. We have measured heating rates in a variety of traps with different
Laser cooling with electromagnetically induced transparency: application to trapped samples of ions or neutral atoms
• Physics
• 2001
Abstract.A novel method of ground-state laser cooling of trapped atoms utilizes the absorption profile of a three- (or multi-) level system that is tailored by a quantum interference. With cooling
|
2022-08-16 04:15:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5358222126960754, "perplexity": 2652.2904882842627}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572220.19/warc/CC-MAIN-20220816030218-20220816060218-00622.warc.gz"}
|
https://physics.stackexchange.com/questions/238377/compute-langle-x2-rangle-for-gas-of-non-interacting-particles-in-a-quadratic
|
# Compute $\langle x^2\rangle$ for gas of non-interacting particles in a quadratic potential
Consider a 3d gas of $N$ non-interacting identical particles in a quadratic potential, where the Hamiltonian $H$ is:
$$H = \sum_{i=1}^N \bigg( \frac{p_i^2}{2m} + \frac{k}{2}x_i^2 \bigg).$$
After being asked to calculate the partition function (following on to the Helmholtz free energy $F$, the average energy $\langle E \rangle$ and the entropy $S$, which I think are fine), I now need to calculate $\langle x^2 \rangle$, which I am less sure of.
My instinct is to do something like:
$$\langle x^2 \rangle = \int x^2 p_r \,\mathrm{d}x$$
where $p_r$ are the probabilities for each state of the system to occur, and in this case (canonical ensemble), $p_r = \frac{\mathrm{e}^{-\beta H}}{Z}$. But I feel I'm losing track of the physics and what $\langle x^2 \rangle$ actually means for a gas - is it per particle then somehow summed over every particle? My suggestion:
$$\langle x^2 \rangle = \frac{1}{Z} \sum_{i=0}^{N} \bigg( \int x_i^2 \mathrm{e}^{-\frac{\beta k}{2} x_i^2} \mathrm{d}x_i \bigg)$$
which would end up being $N$ equal sums, so
$$\langle x^2 \rangle = \frac{N}{Z} \int x^2 \mathrm{e}^{-\frac{\beta k}{2} x^2} \mathrm{d}x$$
...I think. Does that make sense? So are we working out $\langle x^2 \rangle$ for each particle and just multiplying by the number of particles? So each gas particle is moving around (the kinetic energy part of the Hamiltonian) and also feels some extra force from the potential energy term which results in a displacement from its original path... and if we sum up all these displacements for every particle, we get the total $\langle x^2 \rangle$ for the system?
Edit: I think much of my confusion with the maths stemmed from the difference between $x$ and $x_i$, and as suggested by By Symmetry, I believe it's very likely the question was meant to be to work out $\langle x_i^2\rangle$ (i.e., measure of fluctuation of the $i$th particle), in which case the maths is straightforward.
• Generally, $\langle \ \rangle$ corresponds to an ensemble average over some specified ensemble (e.g., energy, spatial coordinate, etc.). In your example, $\langle x \rangle$ could be the average position of the particles (depending on the ensemble over which you average) so that $\langle x^{2} \rangle$ could correspond to a variance or spread of particles relative to the average position. – honeste_vivere Feb 19 '16 at 15:23
• I would guess that it probably wants you to calculate $\langle x^2_i \rangle$ for a single particle, although from what you've said it seems ambiguous. The reason $\langle x^2_i \rangle$ is an interesting quantity is that $\langle x_i \rangle$ will simply average to $0$, and $x^2$ tends to be simpler to deal with that $|x|$ to get a measure of the mean displacement of particles from the origin. Since $\langle x_i \rangle = 0$, you will also have $\langle x^2_i \rangle = \sigma_{x}^2$, so it gives you a measure of the fluctuations in $x$ – By Symmetry Feb 19 '16 at 15:24
Actually, it is a bit more complicated than it first appears (and it took me a while after reading your post to figure/remember it).
For instance, you can't just go from the canonical distribution straight to a probability distribution over positions or speeds.
Indeed, in order to compute the mean value of a quantity you need its probability (or probability density), and $p_r$ is not the probability density of $x_i$ given the Hamiltonian.
It gives you the probability that your system has an energy $H=E$ between $E$ and $E+dE$.
For the sake of clarity, I'll use $\mathbf{r}_i$ for the position of a particle in 3D space and $x_i$ for the position in 1D (same for $\mathbf{p}_i$ and $p_{x_i}$).
Now what is $\frac{1}{Z}$? It is just a normalization factor. So the probability of finding the particles at the positions $\mathbf{r}_i$ with momenta $\mathbf{p}_i$, in a fluid of $N$ particles at thermal equilibrium with a thermostat (canonical ensemble), is given by:
$$dP(\{\mathbf{r_i},\mathbf{p_i}\}) = A \exp\left(- \beta \sum_{i=1}^N \left( \frac{\mathbf{p}_i^2}{2m} + \frac{k}{2}\mathbf{r}_i^2 \right)\right) d^3\mathbf{r}_1 \ldots d^3\mathbf{r}_N ~d^3\mathbf{p}_1 \ldots d^3\mathbf{p}_N$$
where $\{\mathbf{r}_i,\mathbf{p}_i\} = \{\mathbf{r}_1, \ldots \mathbf{r}_N,\mathbf{p}_1, \ldots, \mathbf{p}_N\}$, $d^3\mathbf{r}_1 = dx_1dy_1dz_1$ and $d^3\mathbf{p}_1 = dp_{x_1} dp_{y_1} dp_{z_1}$, and A is just a normalization factor given by :
$$\frac{1}{A} = \int \exp\left(- \beta \sum_{i=1}^N \left( \frac{\mathbf{p}_i^2}{2m} + \frac{k}{2}\mathbf{r}_i^2 \right)\right) ~ d^3\mathbf{r}_1 \ldots d^3\mathbf{r}_N ~d^3\mathbf{p}_1 \ldots d^3\mathbf{p}_N$$ which factorizes nicely into a product over $N$:
$$\begin{eqnarray*} \frac{1}{A} &=& \frac{1}{Z} \prod_{i=1}^N \int \exp\left( - \beta \left( \frac{p^2}{2m} + \frac{k}{2}x^2 \right) \right) ~d^3\mathbf{r}_i~d^3\mathbf{p}_i\\ &=& \frac{1}{Z} \left( \int e^{-\beta \frac{p^2}{2m}}dp \int e^{ -\beta \frac{k}{2}x^2} ~dx \right)^{3N} \end{eqnarray*}$$
The result is not hard to compute (Gaussians), but it is not important for us. All we used is that the exponential of a sum is the product of exponentials, and that the integrals are independent of one another ($N$ particles, each with 3 coordinates in 3D space).
Then, if you want to consider only the positions (the probability that the set of particles occupies the positions $\{\mathbf{r_i}\}$), you integrate over all the momentum contributions:
$$dP(\{\mathbf{r_i}\})= A \exp\left(- \beta \sum_{i=1}^N \frac{k}{2}\mathbf{r}_i^2\right) ~ d^3\mathbf{r}_1 \ldots d^3\mathbf{r}_N ~ \int \exp\left(- \beta \sum_{i=1}^N \frac{\mathbf{p}_i^2}{2m}\right) d^3\mathbf{p}_1 \ldots d^3\mathbf{p}_N$$
Since we are working with a conservative system ($H$ does not depend explicitly on time, so the total energy is conserved), the second integral is well defined and gives just a number we don't really care about. Multiplying it by the constant $A$ and calling the product $B$, this reads:
$$dP(\{\mathbf{r_i}\})= B \exp\left(- \beta \sum_{i=1}^N \frac{k}{2}\mathbf{r}_i^2\right) ~ d^3\mathbf{r}_1 \ldots d^3\mathbf{r}_N$$
Now, say you want the probability for only one particle, the $i$-th one; you have to integrate over the other positions:
$$\begin{eqnarray*} dP(\mathbf{r_i}) &=& B \exp\left(- \beta \frac{k}{2}\mathbf{r}_i^2\right) ~ d^3\mathbf{r}_i \int \exp\left(- \beta \sum_{j\neq i}^N \frac{k}{2}\mathbf{r}_j^2\right) ~ d^3\mathbf{r}_1 \ldots d^3\mathbf{r}_{i-1} d^3\mathbf{r}_{i+1} \ldots d^3\mathbf{r}_N \\ &=& C ~\exp\left(- \beta \frac{k}{2}\mathbf{r}_i^2\right) ~ d^3\mathbf{r}_i \end{eqnarray*}$$
where we have absorbed the integral and the $B$ factor together into the new prefactor $C$ --- they are just normalization factors. Does that look familiar? It should! The steps are the same as the ones you would take to derive the Maxwellian distribution, except that there you work with speeds instead of positions.
So the normalisation $\int_{Volume} dP(\mathbf{r_i}) = 1$ gives you the value of $C$, and it is interesting to compute it. If the volume is very large compared to the size of a particle, you can take the boundaries to infinity (then the computation is easy); it is in fact customary to carry the integration over all possible states, i.e. from $-\infty$ to $+\infty$.
$$C = \left( \frac{k \beta}{2 \pi} \right)^{3/2}$$
The next question is: "How many particles of the system are at the position $\mathbf{r}$ (within $d^3\mathbf{r}$)?". The answer is simply:
$$dN(\mathbf{r}) = N dP(\mathbf{r})$$
and thanks to the normalization of $dP(\mathbf{r})$, integrating over all positions gives back $N$, the total number of particles in our system.
Then our probability density $\rho(\mathbf{r})$ is defined as:
$$\rho(\mathbf{r}) = \frac{1}{N} \frac{dN(\mathbf{r})}{d^3\mathbf{r}}$$
Then $\left<x^2\right>$ is given by :
$$\begin{eqnarray*} \left<x^2 \right> &=& \int x^2 \rho(\mathbf{r}) {d^3\mathbf{r}} \\ &=& \frac{1}{N}\int x^2 \,dN(\mathbf{r}) \\ &=& \left( \frac{k \beta}{2 \pi} \right)^{3/2} \int x^2 \exp\left(- \beta \frac{k}{2}\mathbf{r}^2\right) ~ d^3\mathbf{r} \end{eqnarray*}$$
The final result is then obtained by integration by parts and the integration over a gaussian.
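For reference, a sketch of that last step: with $a = \beta k /2$, the standard Gaussian results $\int_{-\infty}^{+\infty} e^{-a x^2}\,dx = \sqrt{\pi/a}$ and $\int_{-\infty}^{+\infty} x^2 e^{-a x^2}\,dx = \frac{1}{2a}\sqrt{\pi/a}$ give
$$\left< x^2 \right> = \left(\frac{a}{\pi}\right)^{3/2} \cdot \frac{1}{2a}\left(\frac{\pi}{a}\right)^{3/2} = \frac{1}{2a} = \frac{1}{\beta k} = \frac{k_B T}{k},$$
which is just the equipartition result $\frac{k}{2}\left< x^2 \right> = \frac{1}{2} k_B T$ for a quadratic degree of freedom.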
• I'm still not sure about how you can reach the final two equations. You have a product from $i=0$ to $N$ in the second to last equation but no $x_i$. I don't understand how the $x_i$ "become" $x$ - or, perhaps a better way of phrasing it, what the relation is between $x_i$ and $x$? – nancy Feb 19 '16 at 19:54
• My apologies, I had to change everything I wrote since I misjudged the problem you asked. Now, it should be clearer. – A.G. Feb 19 '16 at 22:40
|
2020-01-21 22:31:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8965060710906982, "perplexity": 154.55624776315378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606226.29/warc/CC-MAIN-20200121222429-20200122011429-00499.warc.gz"}
|
https://www.thejournal.club/c/paper/49770/
|
#### New tabulation and sparse dynamic programming based techniques for sequence similarity problems
##### Szymon Grabowski
Calculating the length of a longest common subsequence (LCS) of two strings $A$ and $B$ of length $n$ and $m$ is a classic research topic, with many worst-case oriented results known. We present two algorithms for LCS length calculation with respectively $O(mn \log\log n / \log^2 n)$ and $O(mn / \log^2 n + r)$ time complexity, the latter working for $r = o(mn / (\log n \log\log n))$, where $r$ is the number of matches in the dynamic programming matrix. We also describe conditions for a given problem sufficient to apply our techniques, with several concrete examples presented, namely the edit distance, LCTS and MerLCS problems.
|
2021-06-14 15:46:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7528108358383179, "perplexity": 529.8754920463765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612537.23/warc/CC-MAIN-20210614135913-20210614165913-00299.warc.gz"}
|
http://bitmath.blogspot.nl/2012_10_01_archive.html
|
## Monday, 1 October 2012
### Calculating the lower and upper bounds of the bitwise XOR of two bounded variables
Again, the same sort of deal as before. This time with the eXclusive OR (XOR). So the goal is to calculate $$\min _{x \in [a, b], y \in [c, d]} x \oplus y$$ And $$\max _{x \in [a, b], y \in [c, d]} x \oplus y$$ See how I didn't abuse the notation so badly for once?
Anyway, let's get to it. I'll be honest here, I couldn't solve this one. So no clever bithacks this time. Disappointing, I know. But I still have something relevant to write, so first I'll reproduce Warren's solution from Hacker's Delight and then there will be something relevant.
unsigned minXOR(unsigned a, unsigned b,
                unsigned c, unsigned d) {
    unsigned m, temp;
    m = 0x80000000;
    while (m != 0) {
        if (~a & c & m) {
            temp = (a | m) & -m;
            if (temp <= b) a = temp;
        }
        else if (a & ~c & m) {
            temp = (c | m) & -m;
            if (temp <= d) c = temp;
        }
        m = m >> 1;
    }
    return a ^ c;
}
unsigned maxXOR(unsigned a, unsigned b,
                unsigned c, unsigned d) {
    unsigned m, temp;
    m = 0x80000000;
    while (m != 0) {
        if (b & d & m) {
            temp = (b - m) | (m - 1);
            if (temp >= a) b = temp;
            else {
                temp = (d - m) | (m - 1);
                if (temp >= c) d = temp;
            }
        }
        m = m >> 1;
    }
    return b ^ d;
}
This time, there is no break. That's because after handling one bit, there may still be more opportunities to decrease (or increase) the value even further. Worse, these opportunities are not necessarily the same as they were before the upper bits were handled. Handling a bit can create more opportunities and at the same time changes the bound such that further changes may not be allowed at all. That's why there is no clever bithack - you have to iterate in some way. Of course you can still skip to a relevant bit in one go, but it's still a loop.
So the "something relevant" I promised is not about how to do it better. Instead, it's about how to extend it to signed values. Warren notes that this can't be done using algebraic equivalences, but that doesn't mean it can't be done.
What has to change to make a signed minXOR? Firstly, the comparison of the potential new lower bound to the corresponding upper bound must be signed, otherwise it would even declare the old bound to be invalid if it crosses zero. And secondly, the signbit needs special treatment, treating it the same as other bits would make the bound go the wrong way (eg setting it as in minXOR doesn't increase a bound, it decreases it, and obviously you can't decrease the lower bound). The same argument about the signbit holds for the other operators, but the functions that preprocess their inputs so they become signed (see Hacker's Delight, Table 4-1) already make sure that that situation never occurs.
int32_t minXORs_internal(int32_t a, int32_t b, int32_t c, int32_t d)
{
    int32_t temp;
    int32_t m = 0x40000000;
    while (m != 0)
    {
        if (~a & c & m)
        {
            temp = (a | m) & -m;
            if (temp <= b) a = temp;
        }
        else if (a & ~c & m)
        {
            temp = (c | m) & -m;
            if (temp <= d) c = temp;
        }
        m = m >> 1;
    }
    return a ^ c;
}
int32_t minXORs(int32_t a, int32_t b, int32_t c, int32_t d)
{
    int32_t r1;
    if (a < 0 && b < 0 && c < 0 && d >= 0) r1 = minXORs_internal(a, b, 0, d);
    else if (a < 0 && b >= 0 && c < 0 && d < 0) r1 = minXORs_internal(0, b, c, d);
    else if (a < 0 && b >= 0 && c < 0 && d >= 0) r1 =
        min(minXORs_internal(0, b, c, d),
            minXORs_internal(a, b, 0, d));
    else r1 = minXORs_internal(a, b, c, d);
    return r1;
}
What's going on there is that if both lower bounds have their sign set, it tries to avoid cancelling the sign out with the other bound. Having the sign set is obviously always lower than having the sign not set. In the third case, there are two ways to avoid cancelling the sign, which must both be checked to see which way gives the lowest result. For all other cases, the sign is either forced to cancel, or just not set in the first place, so it's all handled by minXORs_internal.
The code for maxXORs is very similar, except of course it doesn't try to avoid cancelling signs, it tries to make them cancel.
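For anyone porting these, a quick check against brute force on small ranges catches most mistakes. Here is a sketch in Python (a direct port of the unsigned pair above, masked to 32 bits; the C versions remain the reference):

```python
# Port of the unsigned minXOR/maxXOR above, checked against brute force.
import random

MASK = 0xFFFFFFFF

def min_xor(a, b, c, d):
    m = 0x80000000
    while m:
        if ~a & c & m:
            t = (a | m) & (-m & MASK)
            if t <= b: a = t
        elif a & ~c & m:
            t = (c | m) & (-m & MASK)
            if t <= d: c = t
        m >>= 1
    return a ^ c

def max_xor(a, b, c, d):
    m = 0x80000000
    while m:
        if b & d & m:
            t = (b - m) | (m - 1)
            if t >= a:
                b = t
            else:
                t = (d - m) | (m - 1)
                if t >= c:
                    d = t
        m >>= 1
    return b ^ d

for _ in range(2000):
    a = random.randrange(64); b = random.randrange(a, 64)
    c = random.randrange(64); d = random.randrange(c, 64)
    xs = [x ^ y for x in range(a, b + 1) for y in range(c, d + 1)]
    assert min_xor(a, b, c, d) == min(xs)
    assert max_xor(a, b, c, d) == max(xs)
```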
Next post, something simpler and more general, so probably more useful.
|
2017-05-24 07:49:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.4044044613838196, "perplexity": 1740.2860035260371}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607806.46/warc/CC-MAIN-20170524074252-20170524094252-00636.warc.gz"}
|
https://askdev.io/questions/36610/version-control-for-images
|
# Version control for images
When working with images, I often end up saving files as image_001.png and image_002.png for different revisions of the same picture. Being a developer, I know that isn't really a good way to do version control. I know there are tools I could use for this, such as git, but there is no straightforward way to view the history of a file with such a tool.
Is there version control software for images that lets you view an image in its current and previous states?
2019-05-13 03:03:03
Aperture (Mac only) and Photoshop Lightroom (Mac/Windows) both support version control for images.
2019-06-01 04:31:34
If it is single-user, I can highly recommend FileHamster. Every time you save the file it creates a revision, and you can roll back as far as you've set your history to go.
Of course, with large images this can take an awful lot of space (but you can purge it once you are finished with the file).
2019-06-01 04:29:09
Beyond Compare has a plugin to diff images. It is really handy, and you should be able to use it with whatever source control system you are using.
2019-05-31 08:23:23
You don't say whether you are working on your own or on a team... assuming the former (and this also pretty much works for the latter, just not as well): get a DropBox account. It keeps back versions of all the files you store, your account is only charged for the space taken by the "current" version, and it gives you an off-site backup.
Also, as a lone developer who is constantly hopping between my laptop and my desktop, it has been a godsend for keeping everything in sync.
I should also mention that it is a heck of a lot easier to set up and use than SVN et al.
The free account gets you 2GB of space, and you can pay for more space if needed.*
*As noted in the comment, DropBox has changed their policies a little since I first posted my answer. I can't seem to find a related notice on the blog, but this is the text of the e-mail I received:
The Dropbox team has been hard at work these past few months and we'd like to tell you about some upcoming changes and improvements to the Dropbox service.
We're Changing Undo History. Did you know that Dropbox automatically:
* Safeguards any files you delete in case you need to undelete them
* Saves old file versions in case you need to go back to them later
It's like having "undo" for all your files and folders.
Today, Dropbox keeps these deleted files and old file versions ("undo history") forever. For many people this creates clutter, and it also wastes space.
Because of this, starting August 1st, our new policy will be to keep 30 days of undo history. However, if you prefer, you can choose to have unlimited undo history at no charge.
I want unlimited undo history
or
30 days of undo history is all I need
iPhone App Almost Here! Along with this change to undo history, in the future we'll be releasing our free iPhone application that will let you access your Dropbox on the go, view your files, save them to your phone, and even take photos that sync instantly to your Dropbox!
Performance Improvements and LAN Sync. In addition to the iPhone app, we are also finishing up a new version of the Dropbox desktop software that includes numerous performance improvements and our new "LAN sync" feature. LAN sync detects when Dropboxes are on the same network and automatically exchanges files directly between computers instead of downloading them from our servers - this makes sharing large files in an office setting much faster than was previously possible.
We've gotten great feedback from many of our users and are working on a lot of the stuff you've been asking for. Stay tuned and happy Dropboxing!
Thanks for using Dropbox! - The Dropbox Team
There is no indication from the e-mail that this option is only for paying members, but there is no indication that it isn't either. FWIW, I chose the paid option and have never looked back. Uses for DropBox are endless - including version control for my personal projects - and you'll be surprised how quickly you go through 2GB ... I'll stop the ad now.
2019-05-22 19:34:43
Numbering the files and keeping them all has always worked best for me - at least for layouts and reference graphics. It is easier to browse the history and get at previous versions straight from Photoshop (no need to use a VCS client).
For reference, this is what TortoiseIDiff looks like:
As for a media-oriented versioning and asset management system, check out Alienbrain.
2019-05-19 08:36:30
Honestly, I would use GIMP with layers, and just export to .jpg if I needed a flat image. This gives you the history as well as more control while editing, and isn't that much more work to maintain.
2019-05-17 14:20:15
The way I would do it would be to get a program that can compare two images, and then just use a regular source control tool like Subversion.
Popular clients for Subversion, like TortoiseSVN, can be configured to use different programs to compare two versions of specific file types, so you can easily set it up to use that image-compare program for .png files.
But then I'm a developer, not an artist or designer.
2019-05-17 14:19:19
Straight-up version control for binary files is provided by Subversion.
As for an image diff, though, I have no idea what that would even look like.
2019-05-17 13:34:21
You can always use Visual SVN Server to do this. I version-control documents and images with it just fine. And with Visual SVN Server + TortoiseSVN, Subversion is really simple to set up and use.
2019-05-17 13:23:43
|
2021-01-18 03:42:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23798269033432007, "perplexity": 2605.46187201811}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514121.8/warc/CC-MAIN-20210118030549-20210118060549-00775.warc.gz"}
|
https://imathworks.com/tex/tex-latex-draw-molecule-charge-in-latex-using-chemfig/
|
# [Tex/LaTex] Draw molecule charge in latex using chemfig
chemfig, chemistry
I've got a lewis dot structure diagram as seen in the following image:
However, I would like to have brackets on the outside to indicate the positive charge, since this is supposed to represent H3O 1+. Any suggestions would be appreciated.
Here's my current code:
% Preamble
\documentclass[10pt]{article}
\usepackage[usenames]{color} %used for font color
\usepackage{amssymb} %maths
\usepackage{amsmath} %maths
\usepackage[utf8]{inputenc} %useful to type directly diacritic characters
\usepackage{chemfig}
\newcommand{\pol}[1]{\rlap{${}^{^{\color{red} \delta #1}}$}}
\newcommand{\ind}[0]{\text{ }}
% ========================= end preamble ============================
\chemfig{\lewis{,H}\pol{+}-\lewis{2:,O}\pol{-}(-[6]\lewis{,H}\pol{+})-\lewis{,H}\pol{+}}
\chemleft[\chemfig{\lewis{,H}\pol{+}-\lewis{2:,O}\pol{-}(-[6]\lewis{,H}\pol{+})-\lewis{,H}\pol{+}}\ind\ind\chemright]^{+}
|
2022-08-11 16:59:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9265565872192383, "perplexity": 9190.02185604963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571483.70/warc/CC-MAIN-20220811164257-20220811194257-00173.warc.gz"}
|
https://doctorpenguin.com/covid19
|
## Multilevel Deep-Aggregated Boosted Network to Recognize COVID-19 Infection from Large-Scale Heterogeneous Radiographic Data.
## Diagnostic accuracy estimates for COVID-19 RT-PCR and Lateral flow immunoassay tests with Bayesian latent class models.
#### In American journal of epidemiology ; h5-index 65.0
The objective was to estimate the diagnostic accuracy of real time polymerase chain reaction (RT-PCR) and lateral flow immunoassay (LFIA) tests for COVID-19, depending on the time post symptom onset. Based on the cross-classified results of RT-PCR and LFIA, we used Bayesian latent class models (BLCMs), which do not require a gold standard for the evaluation of diagnostics. Data were extracted from studies that evaluated LFIA (IgG and/or IgM) assays using RT-PCR as the reference method. ${Se}_{RT\text{-}PCR}$ was 0.68 (95% probability intervals: 0.63; 0.73). ${Se}_{IgG/M}$ was 0.32 (0.23; 0.41) for the first week and increased steadily. It was 0.75 (0.67; 0.83) and 0.93 (0.88; 0.97) for the second and third week post symptom onset, respectively. Both tests had a high to absolute Sp, with higher point median estimates for ${Sp}_{RT\text{-}PCR}$ and narrower probability intervals: ${Sp}_{RT\text{-}PCR}$ was 0.99 (0.98; 1.00) and ${Sp}_{IgG/M}$ was 0.97 (0.92; 1.00), 0.98 (0.95; 1.00) and 0.98 (0.94; 1.00) for the first, second and third week post symptom onset. The diagnostic accuracy of LFIA varies with time post symptom onset. BLCMs provide a valid and efficient alternative for evaluating the rapidly evolving diagnostics for COVID-19, under various clinical settings and different risk profiles.
Kostoulas Polychronis, Eusebi Paolo, Hartnack Sonja. 2021-Mar-31. Keywords: Bayesian latent class models, COVID-19, LFIA, RT-PCR, Sensitivity, Specificity
## The value of AI based CT severity scoring system in triage of patients with Covid-19 pneumonia as regards oxygen requirement and place of admission.
#### In The Indian journal of radiology & imaging Context : CT scan is a quick and effective method to triage patients in the Covid-19 pandemic to prevent the heathcare facilities from getting overwhelmed.Aims : To find whether an initial HRCT chest can help triage patient by determining their oxygen requirement, place of treatment, laboratory parameters and risk of mortality and to compare 3 CT scoring systems (0-20, 0-25 and percentage of involved lung models) to find if one is a better predictor of prognosis than the other.Settings and Design : This was a prospective observational study conducted at a Tertiary care hospital in Mumbai, Patients undergoing CT scan were included by complete enumeration method.Methods and Material : Data collected included demographics, days from swab positivity to CT scan, comorbidities, place of treatment, laboratory parameters, oxygen requirement and mortality. We divided the patients into mild, moderate and severe based on 3 criteria - 20 point CT score (OS1), 25 point CT score (OS2) and opacity percentage (OP). CT scans were analysed using CT pneumonia analysis prototype software (Siemens Healthcare version 2.5.2, Erlangen, Germany).Statistical Analysis : ROC curve and Youden's index were used to determine cut off points. Multinomial logistic regression used to study the relations with oxygen requirement and place of admission. Hosmer-Lemeshow test was done to test the goodness of fit of our models.Results : A total of 740 patients were included in our study. All the 3 scoring systems showed a significant positive correlation with oxygen requirement, place of admission and death. Based on ROC analysis a score of 4 for OS1, 9 for OS2 and 12.7% for OP was determined as the cut off for oxygen requirement.Conclusions : CT severity scoring using an automated deep learning software programme is a boon for determining oxygen requirement and triage. As the score increases, the chances of requirement of higher oxygen and intubation increase. All the three scoring systems are predictive of oxygen requirement.Kohli Anirudh, Jha Tanya, Pazhayattil Amal Babu2021-JanCovid-19, HRCT chest, oxygen requirement
## Emotions of COVID-19: A Study of Self-Reported Information and Emotions during the COVID-19 Pandemic using Artificial Intelligence.
#### In Journal of medical Internet research ; h5-index 88.0 BACKGROUND : The COVID-19 pandemic has disrupted human societies across the world. Starting with a public health emergency, followed by a significant loss of human life, and the ensuing social restrictions leading to loss of employment, lack of interactions and burgeoning psychological distress. As physical distancing regulations were introduced to manage outbreaks, individuals, groups and communities engaged extensively on social media to express their thoughts and emotions. This internet-mediated communication of self-reported information encapsulates the emotional health and mental wellbeing of all individuals impacted by the pandemic.OBJECTIVE : This research aims to investigate the human emotions of the COVID-19 pandemic expressed on social media over time, using an Artificial Intelligence framework.METHODS : Our study explores emotion classifications, intensities, transitions, profiles and alignment to key themes and topics, across the four stages of the pandemic; declaration of a global health crisis, first lockdown, easing of restrictions, and the second lockdown. This study employs an artificial intelligence framework comprising of natural language processing, word embeddings, Markov models and Growing Self-Organizing Maps that are collectively used to investigate the social media conversations. The investigation was carried out using 73,000 public Twitter conversations from users in Australia from January to September 2020.RESULTS : The outcomes of this study enabled us to analyse and visualise different emotions and related concerns expressed, reflected on social media during the COVID-19 pandemic, that can be used to gain insights on citizens' mental health. First, the topic analysis showed the diverse as well as common concerns people have expressed during the four stages of the pandemic. It was noted that starting from personal level concerns, the concerns expressed over social media has escalated to broader concerns over time. Second, the emotion intensity and emotion state transitions showed that 'fear' and 'sad' emotions were more prominently expressed at first, however, they transition into 'anger' and 'disgust' over time. Negative emotions except 'sad' were significantly higher (P < .05) in the second lockdown showing increased frustration. The temporal emotion analysis was conducted by modelling the emotion state changes across the four stages which demonstrated how different emotions emerge and shift over time. Third, the concerns expressed by social media users were categorized into profiles, where differences could be seen between the first and second lockdown profiles.CONCLUSIONS : This study showed diverse emotions and concerns expressed and recorded on social media during the COVID-19 pandemic reflected the mental health of the general public. While this study establishes the use of social media to discover informed insights during a time where physical communication is impossible, the outcomes also contribute towards post-pandemic recovery, understanding psychological impact via emotion changes and potentially informing healthcare decision-making. The study exploits AI and social media to enhance our understanding of human behaviours in global emergencies, leading to improved planning and policymaking for future crises.CLINICALTRIAL : Adikari Achini, Nawaratne Rashmika, De Silva Daswin, Ranasinghe Sajani, Alahakoon Oshadi, Alahakoon Damminda2021-Apr-01
## Predicting Lyme Disease From Patients' Peripheral Blood Mononuclear Cells Profiled With RNA-Sequencing.
#### In Frontiers in immunology ; h5-index 100.0 Although widely prevalent, Lyme disease is still under-diagnosed and misunderstood. Here we followed 73 acute Lyme disease patients and uninfected controls over a period of a year. At each visit, RNA-sequencing was applied to profile patients' peripheral blood mononuclear cells in addition to extensive clinical phenotyping. Based on the projection of the RNA-seq data into lower dimensions, we observe that the cases are separated from controls, and almost all cases never return to cluster with the controls over time. Enrichment analysis of the differentially expressed genes between clusters identifies up-regulation of immune response genes. This observation is also supported by deconvolution analysis to identify the changes in cell type composition due to Lyme disease infection. Importantly, we developed several machine learning classifiers that attempt to perform various Lyme disease classifications. We show that Lyme patients can be distinguished from the controls as well as from COVID-19 patients, but classification was not successful in distinguishing those patients with early Lyme disease cases that would advance to develop post-treatment persistent symptoms.Clarke Daniel J B, Rebman Alison W, Bailey Allison, Wojciechowicz Megan L, Jenkins Sherry L, Evangelista John E, Danieletto Matteo, Fan Jinshui, Eshoo Mark W, Mosel Michael R, Robinson William, Ramadoss Nitya, Bobe Jason, Soloski Mark J, Aucott John N, Ma’ayan Avi2021Lyme disease, PBMCs, PTLDS, RNA-seq, data mining, machine learning
## Transfer learning-based ensemble support vector machine model for automated COVID-19 detection using lung computerized tomography scan data.
#### In Medical & biological engineering & computing ; h5-index 32.0
The novel discovered disease coronavirus popularly known as COVID-19 is caused due to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and declared a pandemic by the World Health Organization (WHO). An early-stage detection of COVID-19 is crucial for the containment of the pandemic it has caused. In this study, a transfer learning-based COVID-19 screening technique is proposed. The motivation of this study is to design an automated system that can assist medical staff especially in areas where trained staff are outnumbered. The study investigates the potential of transfer learning-based models for automatically diagnosing diseases like COVID-19 to assist the medical force, especially in times of an outbreak. In the proposed work, a deep learning model, i.e., truncated VGG16 (Visual Geometry Group from Oxford) is implemented to screen COVID-19 CT scans. The VGG16 architecture is fine-tuned and used to extract features from CT scan images. Further principal component analysis (PCA) is used for feature selection. For the final classification, four different classifiers, namely deep convolutional neural network (DCNN), extreme learning machine (ELM), online sequential ELM, and bagging ensemble with support vector machine (SVM) are compared. The best performing classifier bagging ensemble with SVM within 385 ms achieved an accuracy of 95.7%, the precision of 95.8%, area under curve (AUC) of 0.958, and an F1 score of 95.3% on 208 test images. The results obtained on diverse datasets prove the superiority and robustness of the proposed work. A pre-processing technique has also been proposed for radiological data. The study further compares pre-trained CNN architectures and classification models against the proposed technique.
Singh Mukul, Bansal Shrey, Ahuja Sakshi, Dubey Rahul Kumar, Panigrahi Bijaya Ketan, Dey Nilanjan. 2021-Mar-18. Keywords: COVID-19, CT scan data, Ensemble SVM, Transfer learning, VGG16
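As a rough illustration of the pipeline that abstract describes (a truncated, pretrained VGG16 as feature extractor, PCA, then a bagged SVM), here is a minimal sketch; the pooling layer, component count, and other settings are assumptions for illustration, not the paper's actual configuration:

```python
# Sketch: VGG16 features -> PCA -> bagging ensemble of SVMs for CT screening.
# Hyperparameters and preprocessing are illustrative assumptions only.
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# Truncated VGG16: convolutional base only, global-average-pooled features.
base = VGG16(weights="imagenet", include_top=False, pooling="avg",
             input_shape=(224, 224, 3))

def extract_features(images):
    # images: float array (n, 224, 224, 3); grayscale CT slices would be
    # replicated to three channels before this step.
    return base.predict(preprocess_input(images.copy()), verbose=0)

clf = make_pipeline(
    PCA(n_components=64),                            # assumed component count
    BaggingClassifier(SVC(kernel="rbf"), n_estimators=10),
)
# X = extract_features(ct_slices); clf.fit(X, labels); clf.predict(X_new)
```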
## Prediction of Patient Management in COVID-19 Using Deep Learning-Based Fully Automated Extraction of Cardiothoracic CT Metrics and Laboratory Findings.
#### In Korean journal of radiology OBJECTIVE : To extract pulmonary and cardiovascular metrics from chest CTs of patients with coronavirus disease 2019 (COVID-19) using a fully automated deep learning-based approach and assess their potential to predict patient management.MATERIALS AND METHODS : All initial chest CTs of patients who tested positive for severe acute respiratory syndrome coronavirus 2 at our emergency department between March 25 and April 25, 2020, were identified (n = 120). Three patient management groups were defined: group 1 (outpatient), group 2 (general ward), and group 3 (intensive care unit [ICU]). Multiple pulmonary and cardiovascular metrics were extracted from the chest CT images using deep learning. Additionally, six laboratory findings indicating inflammation and cellular damage were considered. Differences in CT metrics, laboratory findings, and demographics between the patient management groups were assessed. The potential of these parameters to predict patients' needs for intensive care (yes/no) was analyzed using logistic regression and receiver operating characteristic curves. Internal and external validity were assessed using 109 independent chest CT scans.RESULTS : While demographic parameters alone (sex and age) were not sufficient to predict ICU management status, both CT metrics alone (including both pulmonary and cardiovascular metrics; area under the curve [AUC] = 0.88; 95% confidence interval [CI] = 0.79-0.97) and laboratory findings alone (C-reactive protein, lactate dehydrogenase, white blood cell count, and albumin; AUC = 0.86; 95% CI = 0.77-0.94) were good classifiers. Excellent performance was achieved by a combination of demographic parameters, CT metrics, and laboratory findings (AUC = 0.91; 95% CI = 0.85-0.98). Application of a model that combined both pulmonary CT metrics and demographic parameters on a dataset from another hospital indicated its external validity (AUC = 0.77; 95% CI = 0.66-0.88).CONCLUSION : Chest CT of patients with COVID-19 contains valuable information that can be accessed using automated image analysis. These metrics are useful for the prediction of patient management.Weikert Thomas, Rapaka Saikiran, Grbic Sasa, Re Thomas, Chaganti Shikha, Winkel David J, Anastasopoulos Constantin, Niemann Tilo, Wiggli Benedikt J, Bremerich Jens, Twerenbold Raphael, Sommer Gregor, Comaniciu Dorin, Sauter Alexander W2021-Feb-24Artificial intelligence, COVID-19, Computed tomography, Deep learning, Patient management
## Face mask detection using YOLOv3 and faster R-CNN models: COVID-19 environment.
## Video-Based Analyses of Parkinson's Disease Severity: A Brief Review.
#### In Journal of Parkinson's disease
Remote and objective assessment of the motor symptoms of Parkinson's disease is an area of great interest particularly since the COVID-19 crisis emerged. In this paper, we focus on a) the challenges of assessing motor severity via videos and b) the use of emerging video-based Artificial Intelligence (AI)/Machine Learning techniques to quantitate human movement and its potential utility in assessing motor severity in patients with Parkinson's disease. While we conclude that video-based assessment may be an accessible and useful way of monitoring motor severity of Parkinson's disease, the potential of video-based AI to diagnose and quantify disease severity in the clinical context is dependent on research with large, diverse samples, and further validation using carefully considered performance standards.
Sibley Krista G, Girges Christine, Hoque Ehsan, Foltynie Thomas. 2021-Mar-01. Keywords: Parkinson's disease, artificial intelligence, machine learning, video
|
2021-04-21 01:08:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3051805794239044, "perplexity": 8000.712393530975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039503725.80/warc/CC-MAIN-20210421004512-20210421034512-00622.warc.gz"}
|
https://www.physicsforums.com/threads/size-of-everything.209705/
|
# Size of everything
1. Jan 19, 2008
### bvic4
There was an article in this month's Scientific American that mentioned that without the Higgs particle, atoms could be several inches across. This got me thinking about how we know the absolute size of something. Would beings in a different world know if they were made up of larger atoms? While those atoms might be large compared to our scales, wouldn't a creature made up of larger atoms just have a different idea of what is small?
Is there anything in the equations for quantum mechanics that would change if we thought differently about the idea of size?
Just curious, thanks
Brian
2. Jan 19, 2008
### DaveC426913
One of the interesting questions about our universe is why it is so exquisitely tuned to support the reactions it does. If any one of a half dozen fundamental factors (such as the strength of attraction between a proton and an electron) were the slightest bit different, there would be no stars, let alone life.
If atoms were larger, chemistry would be very different. You might be able to get only very few elements to combine. The universe might have never evolved beyond hydrogen and helium.
3. Jan 19, 2008
### StatusX
You're right, all that really matters is the relative size of things. So if everything "got bigger" by the same amount, there would be no way to detect it. That being said, there is a fundamental length scale, the planck length, given by:
$$l_P = \sqrt{\frac{\hbar G}{c^3}} \approx 10^{-35} m$$
We know that, in our universe, atoms are about 10^25 planck lengths in size. What the article probably means is that, without the Higgs mechanism, atoms would be more like 10^34 planck lengths across. Rather than saying the atoms are larger in such a universe, you might instead interpret this by saying the values of h,G, or c are different. This would have consequences for things like the stability of atoms, as DaveC mentioned, but you're right to say it doesn't mean there would be atoms the size of footballs.
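For concreteness, here is a quick illustrative script that checks these numbers from the formula, using rounded CODATA values for the constants:

import math

hbar = 1.054571817e-34   # J*s
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s

l_P = math.sqrt(hbar * G / c**3)
print(l_P)               # about 1.6e-35 m

# an atomic diameter of ~1e-10 m is then roughly 6e24, i.e. about 10^25 Planck lengths
print(1e-10 / l_P)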
4. Jan 19, 2008
### nonequilibrium
But wouldn't the Planck length itself also change?
5. Jan 19, 2008
### StatusX
There's no natural way to compare lengths in different hypothetical universes. One choice would be to say the planck length is the same in all universes, which is what I implied the article the OP mentioned was doing. This has the consequence that the constants of nature, G, c, and h, are the same in every universe, all that can be different are things like the masses of fundamental particles (again, in planck units).
Another way would be to pick something else as unchanging, ie, define the meter so that an atom in any universe (ie, one that has atoms) would be something like 10^-10 meters in diameter. This would mean the planck length, and so also the constants of nature, would take different values in this universe (when expressed in "meters").
The point is that the only things that can be meaningfully compared in a physical theory are real numbers, like the ratio of the masses of two particles, or the ratio of a certain physical length to the planck length.
6. Jan 20, 2008
### bvic4
Thanks for the replies. This is really interesting for me to think about. The article was referring to the size of atoms based on the Planck length. The idea just made me think of relative size of atoms. I think it basically comes down to:
"The point is that the only things that can be meaningfully compared in a physical theory are real numbers, like the ratio of the masses of two particles, or the ratio of a certain physical length to the planck length."
But it makes it strange to think of size as something that is relative, and what we think of as being large could be relatively small in a different world. A multi-verse theory is popular among many people. How would information propagate between 2 universes that have decidedly different Planck lengths?
|
2017-06-26 16:00:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6571163535118103, "perplexity": 508.12857599700703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320823.40/warc/CC-MAIN-20170626152050-20170626172050-00141.warc.gz"}
|
https://homotopytypetheory.org/2012/03/30/a-direct-proof-of-hedbergs-theorem/?replytocom=1311
|
## A direct proof of Hedberg’s theorem
In his article published 1998, Michael Hedberg has shown that a type with decidable equality also features the uniqueness of identity proofs property. Reading through Nils Anders Danielsson’s Agda formalization, I noticed that the statement “A has decidable equality”, i.e. $\forall x, y : A . (x \leadsto y) + (x \leadsto y \rightarrow \bot)$, looks pretty similar to “A is contractible”, i.e. $\exists x : A . \forall y : A. x \leadsto y$. Therefore, it is not at all surprising that a slight modification of the proof that the h-level is upwards closed works as a more direct proof for Hedberg’s theorem than the original one, but I want to share my Coq-Code anyway.
There is also a simple slight improvement of the theorem, stating that “local decidability of equality” implies “local UIP”. I use the definition of the identity type and some basic properties from Andrej Bauer’s github.
A file containing the complete Coq code (inclusive the lines that are necessary to make the code below work) can be found here.
I first prove a (local) lemma, namely that any identity proof is equal to one that can be extracted from the decidability term dec. Of course, this is already nearly the complete proof. I do that as follows: given $x$ and $y \in A$ and a proof $p$ that they are equal, I check what $\mathtt{dec}$ “thinks” about $x$ and $y$ (as well as $x$ and $x$). If $\mathtt{dec}$ tells me they are not equal, I get an obvious contradiction. Otherwise, $\mathtt{dec}$ precisely says that they are in the same “contractible component”, so I can just go on as in the proof that the h-level is upwards closed. With this lemma at hand, the rest is immediate.
Theorem hedberg A : dec_equ A -> uip A.
Proof.
intros A dec x y.
assert (
lemma :
forall proof : x ~~> y,
match dec x x, dec x y with
| inl r, inl s => proof ~~> !r @ s
| _, _ => False
end
).
Proof.
induction proof.
destruct (dec x x) as [pr | f].
apply opposite_left_inverse.
exact (f (idpath x)).
intros p q.
assert (p_given_by_dec := lemma p).
assert (q_given_by_dec := lemma q).
destruct (dec x x).
destruct (dec x y).
apply (p_given_by_dec @ !q_given_by_dec).
Qed.
Christian Sattler has pointed out to me that the above proof can actually be used to show the following slightly stronger version of Hedberg’s theorem (again, see here), stating that “local decidability implies local UIP”:
Theorem hedberg_strong A (x : A) :
(forall y : A, decidable (x ~~> y)) ->
(forall y : A, proof_irrelevant (x ~~> y)).
At the University of Nottingham
### 5 Responses to A direct proof of Hedberg’s theorem
1. Mike Shulman says:
Interesting! Is the proof in the HoTT repository equivalent to either Hedberg’s proof or yours?
• Mike Shulman says:
(That proof is written out in prose on the nLab page h-set.)
• Thanks for the links! I think the nlab proof uses indeed a similar approach. Sorry, I was not aware of that. But to be honest, I don’t understand it completely. It says
Let $r$ be the image of $(1_x,p) \in \textit{Paths}_{A \times A}((x,x),(x,x))$ under the section $d$.
Does this implicitly use that $f : A \rightarrow B$ gives rise to $\textit{Paths}_A \rightarrow \textit{Paths}_B$ ? Is $(1_x, p)$ viewed as a “subset” of $(A \times A) \times (A \times A)$ ?
I’ll need to have a closer look at the Coq code in the HoTT repository to understand how exactly it works.
• Mike Shulman says:
It means what the HoTT repository calls map_dep, which is a dependent version of $f\colon A\to B$ giving rise to $map(f)\colon Paths_A \to Paths_B$.
|
2019-12-09 18:14:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 21, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8939788341522217, "perplexity": 1320.0666910207929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540521378.25/warc/CC-MAIN-20191209173528-20191209201528-00146.warc.gz"}
|
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-molecular-approach-4th-edition/chapter-16-self-assessment-quiz-page-768/q11
|
## Chemistry: Molecular Approach (4th Edition)
(d) $S{O_3}^{2-}$
(a) and (b): $Br^-$ and $N{O_3}^-$ are both anions of strong acids; therefore, they are pH-neutral. (c) $HS{O_4}^-$ acts as an acid when dissolved in water. (d) $S{O_3}^{2-}$ reacts with water to produce $HS{O_3}^-$ and $OH^-$, forming a basic solution.
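Written out, the hydrolysis behind (d) is $S{O_3}^{2-} + H_2O \rightleftharpoons HS{O_3}^- + OH^-$; the $OH^-$ produced is what makes the solution basic.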
|
2020-01-27 10:03:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7413938641548157, "perplexity": 2563.7862227554174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251696046.73/warc/CC-MAIN-20200127081933-20200127111933-00185.warc.gz"}
|
https://www.queryxchange.com/q/21_307077/taylor-series-for-logarithm-converges-towards-logarithm/
|
# Taylor series for logarithm converges towards logarithm
by Ulrik Last Updated June 12, 2019 08:20 AM
Is there a way to show that the Taylor series around 0 of $f(x) = \ln(1-x)$ converges towards $f$ on the interval $(-1,1)$, just by considering the remainder from the Taylor polynomial? I'm having a little trouble with this.
The series is
$T_n(x) = - \sum_{k=1}^n \frac{x^k}{k}$
The convergence on $(-1,0]$ is not a problem, but on $[0,1)$ things start to get a little complicated. Let $0<x<1$. Then $|f^{(n+1)}(t)| = \frac{n!}{(1-t)^{n+1}} \leq \frac{n!}{(1-x)^{n+1}}$ for all $t \in [0,x]$. The Lagrange form of the remainder then tells us that
$|R_n(x)| \leq \frac{n!}{(1-x)^{n+1} (n+1)!}x^{n+1} = \frac{1}{n+1} \left( \frac{x}{1-x} \right)^{n+1}$
But if $x > 1/2$, this goes to infinity. I have tried to use the integral form of the remainder as well, but with no luck.
Using the machinery of power series, it's easy to prove the convergence. But is there a way using only theory about Taylor polynomials?
#### Answers 1
The only proof of this that I know of uses the Cauchy form of the remainder. If $$f^{(n+1)}(x)$$ exists for all $$x$$ between $$0$$ and $$h$$ and is continuous on this region$$^\dagger$$, then this form of the remainder can be shown to hold by using the integral form of the remainder: $$R_{n+1}(x)= {1\over n!} \int_0^x f^{(n+1)}(t)(x-t)^n\,dt$$ and applying the Second Mean Value Theorem for Integrals to this expression (with $$g(t)=1$$).
So, let's use the Cauchy form of the remainder to show that the Taylor (Maclaurin) series of $$\ln(1+x)$$ converges to $$\ln(1+x)$$ for $$-1 < x < 1$$:
Let $$f(x)=\ln(1+x)$$. Using the Cauchy form of the remainder, one has $$\ln(1+x) =x-{x^2\over2}+\cdots+{(-1)^{n-1} x^n\over n} + R_{n+1}(x),$$ where $$R_{n+1}(x)={f^{(n+1)}(c)\over n!}(x-c)^n x$$ for some $$c$$ between $$0$$ and $$x$$. Note we may write $$R_{n+1}(x)= {f^{(n+1)}(\theta x)\over n!}(1-\theta)^n x^{n+1}$$ for some $$0\le\theta\le1$$.
Evaluating the required derivative, we have $$R_{n+1}(x)=(-1)^n{x^{n+1}(1-\theta)^n\over (1+\theta x)^{n+1}}.$$
Now, for $$-1 < x < 1$$, note that $$1+\theta x\ge 1+x$$ and $$0\le{1-\theta\over 1+\theta x}\le 1;$$ whence $$| R_{n+1}(x)| =\biggl|{x^{n+1}(1-\theta)^n\over (1+\theta x)^{n+1}} \biggr| =\biggl|{1-\theta\over 1+\theta x} \biggr|^n \cdot{|x|^{n+1}\over |1+\theta x|} \le{|x|^{n+1}\over 1+x}\ \buildrel{n\rightarrow\infty}\over\longrightarrow\ 0.$$
$$^\dagger$$ The continuity of $$f^{(n+1)}$$ is not required here; but in this case, a different proof is required.
David Mitra
February 18, 2013 14:26 PM
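A quick numerical illustration of the final bound, taking $x = -0.9$ (the side of the interval where the Lagrange estimate in the question blows up); the script simply compares the partial-sum error with $|x|^{n+1}/(1+x)$:

import math

x = -0.9
target = math.log(1 + x)

partial = 0.0
for n in range(1, 81):
    partial += (-1) ** (n - 1) * x ** n / n
    if n % 20 == 0:
        bound = abs(x) ** (n + 1) / (1 + x)   # the Cauchy-form bound above
        print(n, abs(partial - target), bound)

Both columns shrink toward zero as $n$ grows, as the argument predicts.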
|
2019-06-26 13:52:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 25, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9710725545883179, "perplexity": 154.60580551451167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000353.82/warc/CC-MAIN-20190626134339-20190626160339-00088.warc.gz"}
|
http://www.mzan.com/article/48777906-vectorising-multiple-calls-of-matlab-find.shtml
|
vectorising multiple calls of Matlab 'find'
I make a large number of calls to the 'find' function of Matlab. For example, the following should give the essence:

x = rand(1, 10^8);
indx = zeros(1, 10^8);
for i = 1:10^8
    indx(i) = find([0.2, 0.52, 0.76, 1] < x(i), 1, 'last');
end

Is there a way to vectorize this code to speed it up? Just including x as a vector creates an error. If vectorization is not possible, then any other suggestions for speed would be appreciated. The actual problem I wish to solve has a considerably longer vector in the place of [0.2, 0.52, 0.76, 1], so any solution shouldn't depend on the specific vector I provided. thanks.
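For what it's worth, the loop performs a sorted-threshold lookup for every x, which is exactly the kind of operation that vectorizes as a binary search. The sketch below shows the idea in NumPy purely for illustration (in MATLAB, histc or discretize against the same edge vector plays an analogous role):

import numpy as np

edges = np.array([0.2, 0.52, 0.76, 1.0])
x = np.random.rand(10**6)

# side='left' counts how many edges lie strictly below each x, which equals the
# 1-based index of the last edge < x (and 0 where no edge is below x, the case
# in which the original find(...) returns empty).
idx = np.searchsorted(edges, x, side='left')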
|
2018-06-18 07:19:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44141823053359985, "perplexity": 1435.9273469456166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860089.13/warc/CC-MAIN-20180618070542-20180618090542-00034.warc.gz"}
|
http://santotirso.pt/manhas/email-security-fpex/esq5j.php?1b5b77=social-work-faculty-jobs
|
First know this: banks are in the business of making money. They are not charities, and they care more about themselves than their customers. Banks do not tend to help each other in bad times — when banks don't want to lend to each other, it means they perceive the risk of lending as too great.

Banks are regulated by the Federal Reserve System and state regulatory agencies, and they are required to maintain reserves against their deposits: a fraction of deposit money (say, 10%) held as vault cash or as deposits at the Fed, which is the banks' bank. Reserves must be maintained continuously, so a bank must cover a deficit on an overnight basis. If a bank experiences big withdrawals and its reserves fall below the required level, it must borrow the money to make up the deficit, and it has two choices: borrow from another bank that has a reserve surplus, or borrow from the Federal Reserve's discount window (since 1980 any bank, including foreign ones, can borrow there). The market for these interbank loans is called the federal funds market, and the rate banks charge each other is the federal funds rate, which trickles through into other borrowing rates. The discount rate on loans from the Fed is usually set higher than the funds rate because the Fed, as lender of last resort, encourages banks to borrow from each other first. So banks borrow overnight from each other to settle accounts, meet the reserve requirement when they close each night, and save themselves some money; the overnight rate is the interest charged by those with excess reserves to those in deficit. During the 2008 financial collapse the U.S. Federal Reserve required all the major banks to borrow from the discount window whether they needed to or not, because it did not want to identify any given bank as potentially insolvent, and in 2019, when the repo rate rose to 10%, the federal funds rate climbed above the Fed's target. In Australia, banks hold Exchange Settlement (ES) balances at the Reserve Bank as a store of value and to make payments between each other, and the Reserve Bank estimates the demand for ES balances each day.

Why would some banks not have enough reserves while others have excess reserves? Deposit and loan flows are heterogeneous: some banks, such as classical savings banks, tend by nature to gather more deposits than they lend, while others, such as investment banks, are more engaged in lending and are by nature borrowers on the interbank market. People default, business loans are repaid earlier than expected, and depositors withdraw money, so a bank can end up depleted — or with more money than it would like — and may need to borrow or lend to another bank to meet reserve requirements or to put capital to work. Heterogeneity of investment opportunities adds to this: a bank facing an attractive investment may need to borrow reserves from a bank that does not. A bank short of reserves also has options besides the interbank market: selling assets, borrowing from the Federal Reserve, raising capital, or cutting back its lending.

Reserve requirements, which sit on the asset side, are not the only constraint. Under the Basel agreements banks also face capital requirements on the liabilities side — roughly, enough equity, say at least 8% of their risk-weighted investments — and these are a separate regulation that consumes more central-bank money than reserve requirements do. (The term "capital requirements" was historically applied to the asset side, a usage that is no longer common.) Banks and other finance companies can also borrow directly from the capital markets by selling what's called commercial paper, and they raise stable funds on the open market either from insurance companies, in the form of revolving lines of credit, or by issuing bonds.

The interbank lending market is a market in which banks lend funds to one another for a specified term; most interbank loans are for maturities of one week or less, the majority being overnight, and such loans are made at the interbank rate (also called the overnight rate when the term is overnight). LIBOR, the London Interbank Offered Rate, is the interest rate at which banks borrow from each other for terms such as one, three, six or twelve months — the rate the lending bank expects to receive — while LIBID, the London Interbank Bid Rate, is the rate a bank wishing to borrow is prepared to pay. There is usually a spread between the federal funds rate and LIBOR, and LIBOR changes daily while the funds rate moves in steps set by the Fed's open-market operations, so LIBOR behaves more like an exchange-listed security. Benchmark rates such as LIBOR, Treasury yields and discount-window rates are the closest thing the banking system has to a one-rate-fits-all solution, and they let investors, hedgers and speculators customize the terms of interest-rate swaps, FRAs and CDS; the fact that LIBOR differs from the funds rate suggests it is used for reasons beyond meeting reserve requirements. According to a 2012 report by the International Monetary Fund, "major banks are highly interconnected, as they are among each other's largest counterparties," but that connection is far from direct: part of it has to do with banks borrowing from each other rather than owning large parts of each other.

Governments borrow differently. They generally do not borrow on a private basis but operate a public debt market by issuing what's called "bonds," contacting an investment bank or setting up their own entity to carry out this function — in Ireland, the NTMA (National Treasury Management Agency). A country can also borrow money from its own governmental institutions and subsidiaries: the US, for instance, owes around $5.6 trillion to a number of its own federal agencies, which accounts for nearly 30% of the total federal debt. Central banks, for their part, are supposed to be autonomous, concerned with monetary policy while governments handle fiscal policy, although the two interact and can supplement each other; the Bank of England, for example, returns around £500 million a year to the public through HM Treasury and has legal responsibilities for setting interest rates, for financial stability, and for the regulation of banks and insurance companies. For more depth, a standard reference is Mishkin's "Economics of Money, Banking, and Financial Markets."
|
2022-08-12 12:03:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2633676826953888, "perplexity": 3234.2866338734525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571692.3/warc/CC-MAIN-20220812105810-20220812135810-00196.warc.gz"}
|
https://www.gradesaver.com/textbooks/science/physics/college-physics-4th-edition/chapter-9-problems-page-362/28
|
## College Physics (4th Edition)
The average blood pressure in a person's foot is $210~mm~Hg$
We can find the increase in blood pressure in the foot (compared with the aorta) due to the height difference with the aorta: $\Delta P = \rho~g~h$ $\Delta P = (1050~kg/m^3)(9.80~m/s^2)(1.37~m)$ $\Delta P = 14097~N/m^2$ We can find the height $h$ of mercury that would have this difference in pressure: $\rho~g~h = 14097~N/m^2$ $h = \frac{14097~N/m^2}{\rho~g}$ $h = \frac{14097~N/m^2}{(13,600~kg/m^3)(9.80~m/s^2)}$ $h = 0.106~m = 106~mm$ The average blood pressure in a person's foot is the pressure increase due to the height difference, added to the pressure at the aorta. We can find the average blood pressure in a person's foot: $P_{ave} = P_{aorta}+ \Delta P$ $P_{ave} = (104~mm~Hg)+ (106~mm~Hg)$ $P_{ave} = 210~mm~Hg$ The average blood pressure in a person's foot is $210~mm~Hg$.
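A short script reproducing the arithmetic, with the same constants as above:

rho_blood = 1050   # kg/m^3
rho_hg = 13600     # kg/m^3
g = 9.80           # m/s^2
h = 1.37           # m, height difference between the aorta and the foot

delta_p = rho_blood * g * h                     # ~14097 N/m^2
delta_p_mmHg = delta_p / (rho_hg * g) * 1000    # ~106 mm Hg
print(round(delta_p), round(delta_p_mmHg), round(104 + delta_p_mmHg))   # 14097 106 210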
|
2020-02-19 10:10:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8679356575012207, "perplexity": 383.0232863984835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144111.17/warc/CC-MAIN-20200219092153-20200219122153-00125.warc.gz"}
|
https://math.stackexchange.com/questions/2729874/how-many-solutions-do-these-systems-of-equations-have
|
# How many solutions do these systems of equations have?
I know what makes a system of equations have no solutions, but what leaves me confused are the matrices below. Do these matrices have an infinite number of solutions, since not every column has a pivot 1? Or is there a unique solution to both of these? $$\left[ \begin{array}{ccc|c} 1 & 0 & 0 & a\\ 0 & 1 & 0 & b\\ 0 & 0 & 0 & 0 \end{array} \right]$$ or $$\left[ \begin{array}{ccc|c} 0 & 0 & 0 & 0\\ 0 & 1 & 0 & b\\ 0 & 0 & 1 & c \end{array} \right]$$
In the first case we can read off $x=a$ and $y=b$, but there is no condition on $z$, which is free; thus we have infinitely many solutions, namely the line $(a,b,0)+t(0,0,1)$.
And similarly for the second one, for which the solution set is the line $(0,b,c)+t(1,0,0)$.
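The same reading-off can be checked mechanically; here is a small SymPy sketch for the first system (symbol names are just for illustration):

from sympy import Matrix, linsolve, symbols

a, b, x, y, z = symbols('a b x y z')

A = Matrix([[1, 0, 0],
            [0, 1, 0],
            [0, 0, 0]])
rhs = Matrix([a, b, 0])

# z never occurs in a pivot row, so it stays free: infinitely many solutions
print(linsolve((A, rhs), x, y, z))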
|
2019-12-15 05:35:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8835461139678955, "perplexity": 79.52366909220102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541301598.62/warc/CC-MAIN-20191215042926-20191215070926-00340.warc.gz"}
|
https://www.chemeurope.com/en/encyclopedia/Generalized_Newtonian_fluid.html
|
Generalized Newtonian fluid
A generalized Newtonian fluid is an idealized fluid for which the shear stress, τ, is a function of shear rate at the particular time, but not dependent upon the history of deformation.
$\tau = F\left( \frac {\partial u} {\partial y} \right)$
where: ∂u/∂y is the shear rate or the velocity gradient perpendicular to the plane of shear (SI unit s−1).
The quantity
$\mu_{\operatorname{eff}} = { {F \left( \frac {\partial u} {\partial y} \right)}\bigg/{\frac {\partial u} {\partial y}}}$
represents an apparent or effective viscosity as a function of the shear rate (SI unit Pa⋅s).
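As an illustration, here is a short script for one common choice of F, the power-law (Ostwald–de Waele) model τ = K (∂u/∂y)^n; the values of K and n below are purely illustrative:

import numpy as np

K = 0.5   # consistency index, Pa*s^n (illustrative)
n = 0.6   # flow behaviour index; n < 1 gives a shear-thinning fluid

shear_rate = np.logspace(-1, 3, 5)       # s^-1
mu_eff = K * shear_rate ** (n - 1.0)     # effective viscosity, Pa*s
for gdot, mu in zip(shear_rate, mu_eff):
    print(f"shear rate {gdot:10.2f} 1/s -> mu_eff {mu:8.4f} Pa*s")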
The most commonly used types of generalized Newtonian fluids are:
|
2021-09-23 03:55:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9237748980522156, "perplexity": 1283.064691069505}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057416.67/warc/CC-MAIN-20210923013955-20210923043955-00385.warc.gz"}
|
https://artofproblemsolving.com/wiki/index.php?title=2014_AIME_I_Problems/Problem_2&oldid=60870
|
# 2014 AIME I Problems/Problem 2
An urn contains $4$ green balls and $6$ blue balls. A second urn contains $16$ green balls and $N$ blue balls. A single ball is drawn at random from each urn. The probability that both balls are of the same color is $0.58$. Find $N$.
Solution 1: The probability of drawing balls of the same color is the sum of the probabilities of drawing two green balls and of drawing two blue balls. This gives us $\frac{29}{50} = \frac{2}{5} \cdot \frac{16}{16+N} + \frac{3}{5} \cdot \frac{N}{16 + N}$, and solving this equation for $N$ yields $N = 144$.
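A quick symbolic check of that equation (not part of the original solution) confirms the value of N:

from sympy import symbols, Eq, Rational, solve

N = symbols('N', positive=True)
eq = Eq(Rational(29, 50), Rational(2, 5) * 16 / (16 + N) + Rational(3, 5) * N / (16 + N))
print(solve(eq, N))   # [144]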
|
2021-12-05 01:27:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7846483588218689, "perplexity": 67.53531139610564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363134.25/warc/CC-MAIN-20211205005314-20211205035314-00566.warc.gz"}
|
https://nforum.ncatlab.org/discussion/6053/localization-is-adjoint-functor-factorization/?Focus=47997
|
• CommentRowNumber1.
• CommentAuthorMike Shulman
• CommentTimeJun 23rd 2014
I just noticed that the notion of Bousfield localization of spectra, and also the similar sort of localization of spaces with respect to homology, is an $(\infty,1)$-categorical version of the “adjoint-functor factorization” of Applegate-Tierney-Day (e.g. Applegate-Tierney “Iterated Cotriples” or Day “On adjoint-functor factorization”). We have an adjunction $S \rightleftarrows E Mod$ for some ring spectrum $E$ (with $S$ either spaces or spectra), and we want to factor it through the inclusion of a reflective subcategory of $S$ consisting of the objects that are orthogonal to the morphisms inverted by the left adjoint $S\to E Mod$ (the $E$-homology equivalences).
Is this analogy known? Can the specific constructions be related as well? E.g. Applegate-Tierney construct the factorization by a transfinite tower of adjunctions, while Day does it by factoring the adjunction units and then applying an adjoint functor theorem; do either of those strategies generalize to the $(\infty,1)$-context? Conversely, the Bousfield-Kan construction of localization (as the totalization of a cosimplicial object) seems to be the composite of the comparison functor for the induced comonad on $E Mod$ and the functor that would be its inverse if the adjunction were comonadic; are there general conditions under which this yields the desired sort of reflection?
• CommentRowNumber2.
• CommentAuthorUrs
• CommentTimeJun 24th 2014
Is there available an electronic copy with details on the Applegate-Tierney-Day story?
(I am not at the institute these days…)
• CommentRowNumber3.
• CommentAuthorThomas Holder
• CommentTimeJun 24th 2014
The papers of Applegate-Tierney and Day have appeared in LNM volumes. In a recent paper 1406.2361 on the arXiv Lucyshyn-Wright derives some of their results.
In the 90s Casacuberta and Frei did some work on localizations and idempotent approximations. It might be worthwhile to have a look at ’extending localizing functors’ and ’localizations as idempotent approximations to completions’ on Casacuberta’s homepage.
• CommentRowNumber4.
• CommentAuthorMike Shulman
• CommentTimeJun 25th 2014
@Thomas, thanks! Lots of good stuff to read there.
• CommentRowNumber5.
• CommentAuthorUrs
• CommentTimeJul 11th 2014
• (edited Jul 11th 2014)
• CommentRowNumber6.
• CommentAuthorUrs
• CommentTimeJul 11th 2014
I have added statement of the theorem that the idempotent completion/core of a monad is that induced from a reflection which factors any adjunction that gives the original monad through a conservative left adjoint. here
• CommentRowNumber7.
• CommentAuthorThomas Holder
• CommentTimeJul 11th 2014
As I have lately been trying to figure out how the nucleus of an adjunction is related to what Lawvere (TAC 2008) calls the core (variety), and this in turn to Lucyshyn-Wright’s above idempotent core (following a terminological suggestion by Lawvere), I came across this squib by Brian Day. Unless I am completely mistaken, this construction by Day actually answers the following MO question, because, as I understand him, using the above adjoint-functor factorization for what Todd calls L yields the desired bicompletion for small $C$.
|
2022-05-25 17:39:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 10, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8194787502288818, "perplexity": 2819.459241339538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662588661.65/warc/CC-MAIN-20220525151311-20220525181311-00200.warc.gz"}
|
http://subtractionrecords.com/books/logic-and-set-theory-with-applications
|
# Logic and Set Theory with Applications
Format: Paperback
Language:
Format: PDF / Kindle / ePub
Size: 5.94 MB
The combination of all of these factors results in an increase in musical activity that reaches a fever pitch just before the presentation of set A. A more restrained, but still unorthodox, view is of inconsistency as a non-revisionary extension of classical theory. The wall is approximately 4200 miles long and some sections are in ruins and some parts have totally disappeared. A and A' together cover every possible eventuality. These numbers are identified with specific sets.
Pages: 440
Publisher: MAI, Inc; 5th edition edition (2009)
ISBN: 0916060004
Python: The Ultimate Beginner's Guide!
Combinatorics: Set Systems, Hypergraphs, Families of Vectors and Combinatorial Probability
Elementa Set Theory : Proof Technques
The Mathematics of Infinity: A Guide to Great Ideas
An Introduction to Matrices, Sets and Groups for Science Students (Dover Books on Mathematics)
Diophantus of Alexandria: A study in the history of Greek algebra
Roughly, non-algebraic theories are theories which appear at first sight to be about a unique model: the intended model of the theory. We have seen examples of such theories: arithmetic, mathematical analysis… Algebraic theories, in contrast, do not carry a prima facie claim to be about a unique model.
It also provides five additional self-contained chapters, consolidates the material on real numbers into a single updated chapter affording flexibility in course design, supplies end-of-section problems, with hints, of varying degrees of difficulty, includes new material on normal forms and Goodstein sequences, and adds important recent ideas including filters, ultrafilters, closed unbounded and stationary sets, and partitions.
Because we can objectify things as things individually and communally we have a common world of things, which is not only the abstract domain of mechanics but becomes, as extended, the subject matter of arithmetic. Arithmetic, therefore, "applies to everything, to tastes and to sounds, to apples and to angels, to the ideas of the mind and to the bones of the body. The nature of the things is perfectly indifferent, of all things it is true that two and two make four" (IM 2).
The major program must include at least nine courses: four basic courses (II.), four elective courses (III.), and one cognate course (IV.) as described below.
Motion of charged particles in crossed electric & magnetic fields, velocity selector & magnetic focusing, Gauss law, continuity equation, inconsistency in Ampere's Law, Maxwell's equations (differential and integral forms), Poynting vector, Poynting theorem (statement only), propagation of plane electromagnetic waves in conducting and non-conducting media. [No. of Hrs. 8] Quantum Mechanics & Statistical Physics: de Broglie hypothesis, Davisson–Germer experiment, wave function and its properties, expectation value, wave packet, uncertainty principle.
Foundations of Translation Planes (Chapman & Hall/CRC Pure and Applied Mathematics)
Analytic Quotients: Theory of Liftings for Quotients over Analytic Ideals on the Integers (Memoirs of the American Mathematical Society)
Exploring Sets and Logic:
Solvable Cases Of The Decision Problem (Studies in Logic and the Foundations of Mathematics)
Model Theory of Fields: Lecture Notes in Logic 5, Second Edition
In Search of Infinity
Doing Mathematics: An Introduction to Proofs and Problem-Solving
Axiomatic Set Theory (Dover Books on Mathematics)
Axiomatic Fuzzy Set Theory and Its Applications (Studies in Fuzziness and Soft Computing)
Logic Colloquium '90: Asl Summer Meeting in Helsinki (Lecture Notes in Logic)
Set Theory An Introduction To Independence Proofs (Studies in Logic and the Foundations of Mathematics)
A Concise Introduction to Pure Mathematics
Manual of Axiomatic Set Theory
E-Recursion, Forcing and C*-Algebras (Lecture Notes Series, Institute for Mathematical Sciences, National University of Singapore)
Papers on Formal Logic: Volume 1
Sets and Extensions in the Twentieth Century, Volume 6 (Handbook of the History of Logic)
Fuzzy Geometric Programming (Applied Optimization)
Vicious Circles: On the Mathematics of Non-Wellfounded Phenomena (CSLI Lecture Notes)
Some Applications of Model Theory in Set Theory
Real Analysis (4th Edition)
A SNAP math fair is a non-competitive event that gives teachers an opportunity to have their students do problem solving with a particular goal in mind. The math fair can be adapted to almost any curriculum and set of standards, and it will motivate and inspire all of the students.
In particular, undergraduate mathematics students often experience difficulties in understanding and constructing proofs.
Such a formalization of the underlying logic was employed from the beginning by Frege and by Russell, but has come into use in connection with the other -- postulational or axiomatic -- view only comparatively recently (with, perhaps, a partial exception in the case of Peano).
For this reason I add to the usual prerequisite that the reader have a fair amount of mathematical sophistication, the further prerequisite that he have no other kind. Starting with the most basic notions, Universal Algebra: Fundamentals and Selected Topics introduces all the key elements needed to read and understand current research in this field.
Originally, the legends of the Gods concerning cosmogonical or cosmological questions. Later, a fiction presented as historically true but lacking factual basis; a popular and traditional falsehood.
This suggests a good pragmatic criterion: one should start from authors who have significantly influenced the conceptions of Cantor, Dedekind, and Zermelo. For the most part, this is the criterion adopted here. Nevertheless, as every rule calls for an exception, the case of Bolzano is important and instructive, even though Bolzano did not significantly influence later writers.
One series you are sure to hear about is the great series by Feynman.
Two sets are said to be equal sets if every element of one set is in the other set and vice versa. So, two sets are equal if $x \in A \Rightarrow x \in B$ and $x \in B \Rightarrow x \in A$. If sets $A$ and $B$ are not equal, then we write $A \neq B$. Let $A = \{x : x \in \mathbb{N},\ 2 \leq x \leq 6\}$ and $B = \{2, 3, 4, 5, 6\}$; then $A = B$. Let $A = \{x : x \in \mathbb{N},\ 10 < x < 11\}$ and $B = \{10.5\}$; then $A \neq B$ since $10.5 \notin A$. The intersection of two or more sets is a set containing only the common elements among all the sets under consideration.
Also, with Gödel’s work around 1940 (and also with forcing in the 1960s) it became clear why the research of the 1920s and 30s had stagnated: the fundamental new independence results showed that the theorems established by Suslin (perfect set property for analytic sets), Sierpinski ($\Sigma^{1}_{2}$ sets as unions of $\aleph_{1}$ Borel sets) and a few others were the best possible results on the basis of axiom system ZFC.
Set theory is also defined in terms of logic; they are inextricably entwined. For instance, $A \cap B = \{x : x \in A \wedge x \in B\}$.
Caicedo, for "Is the Riemann Hypothesis equivalent to a $\Pi_1$ sentence?": This is a consequence of the Davis-Matiyasevich-Putnam-Robinson work on Hilbert's 10th problem, and some standard number theory. A number of papers have details of the $\Pi^0_1$ sentence. To begin with, take a look at the relevant paper in Mathematical developments arising from Hilbert's problems (Proc. Pure Math., Northern Illinois Un […]
Two elementary inequalities for real-valued polynomials, March 4, 2016: I am looking for references discussing two inequalities that come up in the study of the dynamics of Newton's method on real-valued polynomials (in one variable).
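The set equality and intersection facts above are easy to check directly in Python; the sets used are the same ones as in the examples.

A = {x for x in range(1, 20) if 2 <= x <= 6}     # natural numbers with 2 <= x <= 6
B = {2, 3, 4, 5, 6}
print(A == B)          # True: every element of each set lies in the other

C = {x for x in range(1, 20) if 10 < x < 11}     # no natural number strictly between 10 and 11
D = {10.5}
print(C == D)          # False, since 10.5 is not in C

print(A & B, A & D)    # intersection keeps only the common elements: {2,3,4,5,6} and set()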
|
2017-08-22 01:39:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4903571605682373, "perplexity": 1619.2240906379157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109803.8/warc/CC-MAIN-20170822011838-20170822031838-00596.warc.gz"}
|
https://mattermodeling.stackexchange.com/tags/terminology/hot?filter=all
|
# Tag Info
21
I will start with acronyms for coupled-cluster, and someone else might answer with acronyms for basis sets or for functionals or for many body perturbation theory or for composite approaches or for different types of SCF: Coupled Cluster acronyms CCSD, CCSDT, CCSDTQ, ... (coupled cluster with singles, doubles, triples, quadruples, etc.) CCSD(T), CCSDT(Q), ...
20
I'm just as curious of a user here as you, but I was able to find that a Materials Modeling Wikipedia page does in fact exist. :) Computational materials science and engineering uses modeling, simulation, theory, and informatics to understand materials. Main goals include discovering new materials, determining material behavior and mechanisms, ...
19
"Matter modeling" is a term that was coined by Stack Exchangers! That's why the words "matter" and "modeling" next to each other, don't seem to exist in any of the results in a Google search of "Matter Modeling", apart from this Stack Exchange site, Adam Iaizzi's blog post about this SE site, and maybe our community's ...
18
You can see it with VESTA software. For example, we can see the different lattice planes of NaCl crystal. [001] plane of NaCl: [101] plane of NaCl: [111] plane of NaCl:
17
CDFT: Current DFT Current DFT is defined via the generalized Hohenberg-Kohn theorem (HKT), which extends the traditional HKT to account for the effect of magnetic fields. The generalized HKT says that the scalar potential $\mathbf{V}$, the (nondegenerate) ground state wavefunction $\Psi$, and the vector potential $\mathbf{A}$ are uniquely determined by the ...
16
"Mio" refers to "Million" and "CPUh" is the abbreviation of CPU hours, also refered to as Core hours. This is a good read if you need to know more about HPC. This is an interesting reddit post on the same topic.
15
Answering your last questions: yes, yes and yes. One thing to consider is which approach do you want/need to use. You can use an atomistic approach where you simulate/model the properties and the material starting from the atomic structure of it (an input file with the list of atoms and its coordinates). The other approach is to use your material as a ...
15
OF-DFT: Orbital-free density functional theory Hohenberg and Kohn established that the ground state energy, $E$, of interacting electrons in a potential, $v(\mathbf{r})$, is a functional of the electron density, $n(\mathbf{r})$: $$\tag{1} E[n] = F[n] + \int \mathrm{d}\mathbf{r} \, v(\mathbf{r}) n(\mathbf{r}) .$$ While this statement is formally true, we do ...
15
What is a force field? The Wikipedia entry on this is a good resource, but I'll give my own description below. In the context of molecular dynamics (MD), a force field is one way of describing the interactions between atoms. In classical MD, the motion of atoms is determined by the instantaneous forces acting on the atoms (i.e., we need forces in order to ...
14
First Principles mean starting directly at the level of established science and not making assumptions such as any empirical models or parameter fitting. With respect to DFT, EMF (Electromagnetic force) is a very strong force governing nucleus and electrons (referring to a single atom). With that, atoms, molecules, macro molecules and materials are built up....
13
Assuming a generic chemistry background I wouldn't assume that knowledge of crystal structure would be too in depth at an undergraduate level. It is definitely encountered, but depending on the type of chemistry you want to go into, you probably never deal with solid state chemistry. I would first explain briefly how crystals are described by periodic ...
11
CPMD: Car-Parrinello Molecular Dynamics An approximation of BOMD (Born-Oppenheimer MD) where fictitious dynamics is used on the electrons to keep them close to their ground state, so that we do not have to keep solving for their ground state at every single step. We start with Newton's 2nd law (as does classical MD), but instead of the force being calculated ...
11
I don't think it's ideal, but there is the term "base metal". There are various definitions for what is considered a base metal, but the main noble (or precious) metals are always excluded. Some definitions also exclude ferrous metals. Unfortunately, this variation in meaning means it might be better to just say "non-ferrous, non-noble", ...
10
The anions of the form $\ce{MnO_x^y-}$ are referred to as manganates (see Wikipedia). I'm not sure if there might be a "special" name for $\ce{MnO2^-}$ specifically (that's the species you have here) because I never encountered this anion in my lab times, but given that $\ce{MnO4^2-}$ are the "normal" manganates and $\ce{MnO4^-}$ the permanganates, I think ...
10
Density Functionals LDA (LSDA): S: Slater (Dirac) exchange functional for a uniform electron gas. VWN: Vosko, Wilk, and Nusair 1980 correlation functional fitting the random phase approximation solution to the uniform electron gas. PL: Perdew and Wang 1992 local correlation functional (also known as PW or PW92). PZ81: Perdew-Zunger parameterization of the ...
10
First of all, we assume the matter is constituted by nuclei and electrons, as illustrated by the following figure: Mathematically, the matter is mapped into the following Hamiltonian: $$H=T_e + T_n + V_{ee} + V_{nn} + V_{en} \tag{1}$$ which is called the standard model of condensed matter physics. In principle, all information (ground state/excited state/...
9
Basis Sets Throughout, I use square brackets to denote additional options that can be included, but are not required. Slater: STO-nG: Slater Type Orbitals represented by a function of n contracted Gaussian functions Pople: X-YZG: Split valence double zeta basis functions, where each core orbital is represented by a contracted function on X primitive ...
9
The $\Sigma$ values represent the volume of the Coincident Site Lattice (CSL) of the grain boundary in terms of the volume of the unit cell of the crystal. In general, grain boundaries with higher symmetry have lower $\Sigma$ values. Note that CSL boundaries are special grain boundaries. So, they do not represent all grain boundaries comprehensively. But, ...
9
By making a supercell and modifying it in some way, you are creating an entirely new structure which you hope can give you some insight by being compared to the original structure. Any of the conclusions you draw from your calculations will come from the modified supercell, rather than a version where you have converted it back into the primitive cell. Some ...
9
I am curious to know if the word has been used in any way other than the above three ways. Mark. J. Winter at University of Sheffield has used the word differently — not as a term meaning something specific, but as a name for the so-called Sheffield ChemPuter: Welcome to the University of Sheffield's ChemPuter, a set of simple interactive calculators for ...
8
The term chemputer also refers to a universal chemical synthesis robot - this is because the robot uses a high level abstraction of chemical synthesis. From this approach we developed a programming language for chemistry that can be run on the chemputer robot. Since in principle ANY chemical can be made in our robot, and the language is universal, we coined ...
7
Density functional perturbation theory (DFPT) This method refers to the calculation of the linear response of the system under some external perturbation. Consider some set of parameters $\{\lambda_i\}$. The first and second derivatives of the total energy with respect to these parameters in DFT read: $$\frac{\partial E}{\partial\lambda_i}=\int\frac{\...$$ ...
7
2nd Generation CPMD Car-Parrinello MD avoids repeatedly solving the electronic problem by propagating the orbitals as if they were particles governed by Newton's equations. This is much more efficient than having to solve at each time step as is done in Born-Oppenheimer MD, though at the cost of decreasing the maximum timestep for the dynamics (too large a ...
7
The term's origin goes back to vector field which is a function that returns a vector for any given point in space (as in the image below on the left). We can almost "see" that such fields exist, by putting iron filings and a magnet on a sheet of paper (the magnetic field causes the iron filings to move accordingly): In classical molecular ...
6
HPCs work by allowing you to run jobs on many computers with many CPUs in parallel. CPUh refers to how many CPUs are being used for how long (the h in CPUh stands for hours). For instance, you may have a job that needs 64 CPUs and will run for a whole day. That would be 1536 CPU hours. The largest jobs I see on our (relatively small) HPC are 128 CPUs for ...
6
When your lattice is primitive you have only the (0,0,0)+ set; when your lattice has some kind of centering (body- or face-centering) other sets are present, such as (1/2, 1/2, 1/2)+ or (1/2, 1/2, 0)+ It's not clear to me what you write. In the first page of the International Tables you find all the symmetry operations that are listed with Roman numerals (1),(...
6
ab initio Ehrenfest Dynamics From Li et al., 2005, JCP "The Born Oppenheimer (BO) and extended Lagrangian (EL) trajectories are founded on the assumption that a single electronic potential surface governs the dynamics. .. A major limitation of adiabatic trajectories is that they are not applicable to reactions involving nonadiabatic electronic processes, ...
6
Quantum Monte Carlo What are the types of Quantum Monte Carlo? Methods SSE QMC: Stochastic Series Expansion QMC SAC: Stochastic Analytic Continuation (an add-on technique for real-time dynamics) VMC: Variational Monte Carlo DMC: Diffusion Monte Carlo t-VMC: Time-dependent variational Monte Carlo CT-QMC: Continuous time QMC DDQMC (or DDMC): [Diagrammatic ...
6
KS-DFT: Kohn-Sham DFT The KS-DFT is proposed to deal with the problems of orbital-free DFT (OFDFT), which has been explained by @wcw. OFDFT attempts to compute the energy of interacting electrons, as the functional of the density. While this brute force approach is in principle correct, in practice it is not very accurate. This is due to the lack of accurate ...
6
Real-time TDDFT (RT-TDDFT) This is the straightforward non-perturbative solution of the TDDFT equations by means of direct propagation in time. Pioneered by Theilhaber and Yabana & Bertsch it has since found its way into several molecular or solid-state codes. The TDDFT equations in the Kohn–Sham (KS) framework are $$ i \frac{\partial}{\partial t} \phi_i ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2021-12-02 22:32:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6204474568367004, "perplexity": 950.5578588903825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00587.warc.gz"}
|
https://www.elevri.com/courses/calculus-several-variables/vector-calculus
|
# Vector calculus
Vector calculus is the extension of differentiation and integration to vector fields. The techniques are primarily employed for vector fields in two and three dimensions, but the theory applies to any number of dimensions.
## Intro
Maxwell's equations is a set of four equations that describe and unite the concepts of electricity and magnetism. In terms of both theoretical and practical value, these equations are priceless.
Giants like Albert Einstein and Richard Feynman have put Maxwell on a pedestal for laying the foundation of electrodynamics, and so would most people aware of their consequences.
Without these pieces of vector calculus, you would not be reading this right now, since your phone would not exist. Even if it did, there would be no telecommunication nor internet. Literally all electronic advancements, and many more, are built upon the framework they provide.
## Concept
Functions whose output consist of more than one component give rise to vector fields, describing the function at each point. Vector calculus is the study of these fields.
One of its most important concepts is that of conservative v.s. non-conservative vector fields. Consider this mind-bending piece of art by M.C. Escher:
Walking clockwise, one would move upwards, requiring a lot of energy to counteract gravity. Walking in the opposite direction, down the stairs, is much easier. Still, after a full lap, we would end up at the same position?
This makes no sense because gravity is in fact a conservative field, and is therefore path independent. However we move from one point to another, it should require the exact same amount of energy. It turns out that conservative fields have no circulation in them.
In the picture above, the left vector field is conservative, while the right is not.
## Math
Vector calculus is all about differentiating and integrating vector fields. In this course, our vector fields will be in $\mathbb{R}^2$ or $\mathbb{R}^3$.
There are a few new concepts that deal with differentiation and vector fields. They are
1. Gradient: $\nabla f$. Measures the rate of change and its direction for scalar fields.
2. Divergence: $\nabla \cdot \mathbf{F}$. Measures how much a vector field "spreads out" at a point.
3. Curl: $\nabla \times \mathbf{F}$. Measures how much a vector field rotates or "swirls" at a point.
Let $\mathbf{F} = (F_1, F_2, F_3)$. Then, the divergence is $$\nabla \cdot \mathbf{F} = \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z}$$
and the curl is $$\nabla \times \mathbf{F} = \left( \frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z},\ \frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x},\ \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \right)$$
## Divergence
To paint the full picture of the rate of change of a scalar function $f$ of several variables, we turned to the gradient, forming a vector of $f$'s partial derivatives.
Now with the same goal, but for a vector valued function $\mathbf{F}$, the approach we take is a bit different.
A function $\mathbf{F} : \mathbb{R}^3 \to \mathbb{R}^3$ takes the form $$\mathbf{F}(x, y, z) = \left( F_1(x, y, z),\ F_2(x, y, z),\ F_3(x, y, z) \right)$$
and therefore has $3 \times 3 = 9$ first partial derivatives.
These are handled by handpicking and grouping those of more interest into two different concepts: divergence and curl.
This lecture note is devoted to divergence, only dealing with these three partial derivatives: $$\frac{\partial F_1}{\partial x}, \quad \frac{\partial F_2}{\partial y}, \quad \frac{\partial F_3}{\partial z}$$
They give a measure of how strongly a field is intensified or dampened at each point, that is, how the inflow and the outflow differ at the point.
Divergence is a measure of a point's tendency to act as a source in the vector field
A source gives back more field strength than it takes in, and the divergence there is positive.
In contrast, a sink swallows more of the field strength than it spits out, which is signified by a negative divergence.
If the divergence is zero, an equal amount flows in and out at the point.
So, how do we go about finding the divergence?
To extract the three partial derivatives
we utilize two concepts we have seen before: the del operator and the dot product: $$\operatorname{div} \mathbf{F} = \nabla \cdot \mathbf{F}$$
In more detail: $$\nabla \cdot \mathbf{F} = \left( \frac{\partial}{\partial x},\ \frac{\partial}{\partial y},\ \frac{\partial}{\partial z} \right) \cdot \left( F_1, F_2, F_3 \right) = \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z}$$
Note that the result is a scalar function, giving the divergence of $\mathbf{F}$ at each point in the field.
### Example
Find the divergence at the origin in the following vector field:
Using the formula for divergence we get
and evaluating at the origin yields
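For a concrete illustration, here is a minimal SymPy sketch computing the divergence of an assumed field $\mathbf{F} = (x^2, xy, z)$; the field is purely illustrative.

from sympy.vector import CoordSys3D, divergence

N = CoordSys3D('N')
# Assumed illustrative field F = (x**2, x*y, z)
F = N.x**2 * N.i + N.x * N.y * N.j + N.z * N.k

div_F = divergence(F)                          # 3*N.x + 1
print(div_F)
print(div_F.subs({N.x: 0, N.y: 0, N.z: 0}))    # value at the origin: 1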
## Curl
As you're having your evening bath, you usually light scented candles and listen to classical music. You start moving your hand in a circular motion, as if you were conducting the orchestra. And then - look! - there's an eddy on the surface. If you'd draw a vector field, you'd get something like this:
The water seems to be swirling around a point. Since it's swirling clockwise, we say the vector field has negative curl at that point.
In contrast, if there'd be a current in the bath tub, your vector field might look as follows:
Here, the curl is zero. There's no swirling going on. Just a normal current.
The notion of curl extends to 3D too; it's just that we use a vector. The vector points in the direction of your thumb as you curl the fingers of your right hand and stick the thumb out. The magnitude indicates how much the water swirls.
The curl of a vector field $\mathbf{F}$ is written $\nabla \times \mathbf{F}$ and is computed as follows: $$\nabla \times \mathbf{F} = \left( \frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z},\ \frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x},\ \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \right)$$
If the vector field is in $\mathbb{R}^2$, $\mathbf{F} = (F_1, F_2)$, we can still compute its curl using the same formula. Just let $F_3 = 0$. This gives the significantly more concise curl: $$\nabla \times \mathbf{F} = \left( 0,\ 0,\ \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \right)$$
### Example
Find the curl at the point in the following vector field
using the formula for curl we get
and evaluating at yields
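In the same spirit, a SymPy sketch of the curl for an assumed rotational field $\mathbf{F} = (-y, x, 0)$, a pure rotation about the $z$-axis:

from sympy.vector import CoordSys3D, curl

N = CoordSys3D('N')
# Assumed rotational field F = (-y, x, 0)
F = -N.y * N.i + N.x * N.j

print(curl(F))   # 2*N.k: a constant swirl of magnitude 2 pointing along +z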
## Conservative vector fields
Since the gradient of a scalar field is a vector field, do we have that all vector fields are gradients of some scalar field?
The answer is no! Only conservative vector fields are, which results in them having some special characteristics.
First of all, gradients have no curl. So a conservative vector field $\mathbf{F} = \nabla f$ is irrotational: $$\nabla \times \mathbf{F} = \nabla \times (\nabla f) = \mathbf{0}$$
This signifies that in 2D: $$\frac{\partial F_2}{\partial x} = \frac{\partial F_1}{\partial y}$$
In 3D, two more conditions apply: $$\frac{\partial F_3}{\partial y} = \frac{\partial F_2}{\partial z}, \qquad \frac{\partial F_1}{\partial z} = \frac{\partial F_3}{\partial x}$$
As a consequence, line integrals over vector fields that are conservative are path independent.
In other words, what ever curve we integrate over, given some specific start and end point, the integral will always evaluate to the same value.
The circulation of a conservative vector field around a closed loop is 0
This means that the integral is fully determined by the start and end points alone. It further means that if we integrate conservative vector fields over closed loops, it's as though we didn't integrate at all: $$\oint_C \mathbf{F} \cdot d\mathbf{r} = 0$$
One example of a conservative vector field is the gravity of Earth which, as we will see in the following example, is therefore path independent.
### Example
Near the surface of the Earth, the gravitational acceleration is approximately constant at about $9.8\ \mathrm{m/s^2}$. It points toward the Earth's center, but in a small enough region we can imagine it pointing straight down. Let therefore:
be the field of gravitational acceleration. Now how much energy is needed to transport someone weighing kg up to the top of a slide at the point , from the starting position at ?
Force is obtained by multiplying mass and acceleration, and so the force field asserted by the Earth's gravitation on the person will be:
A line integral over the vector field then gives the change in energy as we move along some path.
To highlight the fact that any corresponding line integral is independent of the path, we will consider two cases.
First, the person can walk up some stairs, in which case the path resembles the line given by .
Second, there is an elevator going straight up to the point , where after the person will have to walk horizontally down a platform to reach the point.
Case 1
We parametrize the path by letting
Giving us that
And so the integral becomes
Case 2
This path is piece-wise parametrized as
Meaning that we have
The line integral then evaluates to
The fact that the integrals are negative here, means that the integral goes against the vector field. Thus, the energy of joules has to be provided from some outside force. Here, it's provided by the person themselves and the elevator.
The concepts of changes in energy and how it relates to the sign of the line integral will hopefully become more clear in the next lecture note, where we discuss the scalar potential.
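In general terms, keeping the mass $m$ and height $h$ symbolic rather than using the specific numbers of the example, and writing the force field as $\mathbf{F} = (0, 0, -mg)$ with the $z$-axis pointing up, the same computation reads:

$$\int_C \mathbf{F} \cdot d\mathbf{r} = \int_{t_0}^{t_1} (0,\ 0,\ -mg) \cdot \mathbf{r}'(t)\, dt = -mg \int_{t_0}^{t_1} z'(t)\, dt = -mg\,\bigl(z(t_1) - z(t_0)\bigr) = -mgh$$

so any path from the ground up to height $h$ costs the same $mgh$ joules, which is exactly the path independence used in the two cases above.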
## Scalar potential
We introduced conservative vector fields by pointing out that they are all gradients of a scalar field. The scalar field in question is referred to as the scalar potential of the conservative vector field.
To see what this means in practice, let's return to the person making their way up to the top of the slide.
The scalar field , of which the force field is the gradient, gives the potential energy at each point compared to some reference point.
The scalar potential in a point is the line integral over the corresponding vector field, to some reference point of zero potential
Let's for simplicity say this reference point is at . The scalar field then gives a measure of how much energy you get, as you slide back down to from the point .
This is exactly what the line integral over the vector field calculates. Like we said before, this line integral is path independent.
### Finding the scalar potential
Evaluating a line integral over a vector field is a piece of cake if we know the scalar potential function .
But if this is not given to us, how do we go about finding it?
We know per definition that the sought potential $\varphi$ satisfies $\nabla \varphi = \mathbf{F}$.
Now what we can do is to first integrate $F_1$ with respect to $x$. The constant of integration can then depend on $y$ and $z$, but not $x$.
Next, we differentiate what we get with respect to $y$, which must be equal to $F_2$.
From this we have
which implies that
We now have
If we then take $\frac{\partial \varphi}{\partial z}$, which must be equal to $F_3$,
and rearrange the expression, whereafter we integrate with respect to $z$, we get
Hence, the scalar potential in its full and final form will be
It involves a bit of work, but we are able to determine the scalar potential function exactly, apart from some constant $C$.
This is fine, however, because when we use the scalar potential to evaluate line integrals of its gradient vector field, $C$ will cancel out.
After all, scalar potential is all about differences between points in the field, and not so much about the actual values there.
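The procedure just described can be mirrored step by step in SymPy; the conservative field $\mathbf{F} = (2xy,\ x^2 + z,\ y)$ is an assumed example, not the one from the text.

from sympy import symbols, integrate, diff, simplify

x, y, z = symbols('x y z')

# Assumed conservative field F = (F1, F2, F3)
F1, F2, F3 = 2*x*y, x**2 + z, y

# Step 1: integrate F1 with respect to x; the "constant" may still depend on y and z
phi = integrate(F1, x)                # x**2*y

# Step 2: the y-derivative of phi must equal F2; integrate the leftover part w.r.t. y
g_y = simplify(F2 - diff(phi, y))     # z
phi += integrate(g_y, y)              # x**2*y + y*z

# Step 3: the z-derivative must equal F3; whatever remains is a true constant C
h_z = simplify(F3 - diff(phi, z))     # 0
phi += integrate(h_z, z)

print(phi)   # x**2*y + y*z, a scalar potential of F (up to a constant C)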
## Vector potential
So let's say you've got yourself some vector field, $\mathbf{B}$.
And now you want to find whatever vector $\mathbf{A}$ that, when the del operator is crossed with it, gives $\mathbf{B}$, that is $\nabla \times \mathbf{A} = \mathbf{B}$. That vector is the vector potential.
Oh, and by the way, the vector potential is arbitrary. If you're fond of some particular number, you might as well add it to every term in the vector potential. And here's a fun fact: if you'd take the divergence of $\mathbf{B} = \nabla \times \mathbf{A}$, it'd be $0$.
But why care about vector potentials? They're widely used in electromagnetism and fluid dynamics, and they can vastly simplify our calculations (which is always good). So yeah, it's worth learning about vector potentials.
Just to hammer home the point, let's do a bit of physics. So assume you've got a current and a magnetic field . Then those white arrows represent the vector potential.
### Example
We can find many different vector potentials for a vector field which has a vector potential.
Below, we show that the two vector fields and give rise to the same vector field and therefore both are vector potentials to .
If we compute the curl of the two vector fields, we find that
## Vector calculus rules
Without proving them, we provide here a useful set of equalities to use when solving vector calculus problems.
In the following equations, and are scalar fields while and are vector fields, none of them related to each other in any particular way, and each having continuous partial derivatives.
1.
2. $\nabla \cdot (\nabla \times \mathbf{F}) = 0$
Curl has no divergence
3.
4.
5.
6.
7.
8.
9.
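As a sanity check, rule 2 ("curl has no divergence") can be verified symbolically with SymPy for a generic field whose components are arbitrary smooth functions of $x$, $y$, $z$; this is an illustration, not part of the original list.

from sympy import Function, simplify
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# Generic field with arbitrary smooth component functions
F1, F2, F3 = Function('F1')(x, y, z), Function('F2')(x, y, z), Function('F3')(x, y, z)
F = F1 * N.i + F2 * N.j + F3 * N.k

print(simplify(divergence(curl(F))))   # 0: the mixed second partials cancel for any smooth F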
|
2023-03-26 21:57:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8878046274185181, "perplexity": 434.56724866836595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946535.82/warc/CC-MAIN-20230326204136-20230326234136-00037.warc.gz"}
|
https://www.gamedev.net/forums/topic/680576-matrix-calculation-efficiency/
|
• ### Similar Content
• By lxjk
Hi guys,
There are many ways to do light culling in tile-based shading. I've been playing with this idea for a while, and just want to throw it out there.
Because tile frustums are generally small compared to the light radius, I tried using a cone test to reduce false positives introduced by the commonly used sphere-frustum test.
On top of that, I use distance to camera rather than depth for near/far test (aka. sliced by spheres).
This method can be naturally extended to clustered light culling as well.
The following image shows the general ideas
Performance-wise I get around 15% improvement over sphere-frustum test. You can also see how a single light performs as the following: from left to right (1) standard rendering of a point light; then tiles passed the test of (2) sphere-frustum test; (3) cone test; (4) spherical-sliced cone test
I put the details in my blog post (https://lxjk.github.io/2018/03/25/Improve-Tile-based-Light-Culling-with-Spherical-sliced-Cone.html), GLSL source code included!
Eric
• Hello,
I am trying to make a GeometryUtil class that has methods to draw point,line ,polygon etc. I am trying to make a method to draw circle.
There are many ways to draw a circle. I have found two ways,
The one way:
public static void drawBresenhamCircle(PolygonSpriteBatch batch, int centerX, int centerY, int radius, ColorRGBA color) {
    int x = 0, y = radius;
    int d = 3 - 2 * radius;
    while (y >= x) {
        drawCirclePoints(batch, centerX, centerY, x, y, color);
        if (d <= 0) {
            d = d + 4 * x + 6;
        } else {
            y--;
            d = d + 4 * (x - y) + 10;
        }
        x++;
        //drawCirclePoints(batch,centerX,centerY,x,y,color);
    }
}

private static void drawCirclePoints(PolygonSpriteBatch batch, int centerX, int centerY, int x, int y, ColorRGBA color) {
    drawPoint(batch, centerX + x, centerY + y, color);
    drawPoint(batch, centerX - x, centerY + y, color);
    drawPoint(batch, centerX + x, centerY - y, color);
    drawPoint(batch, centerX - x, centerY - y, color);
    drawPoint(batch, centerX + y, centerY + x, color);
    drawPoint(batch, centerX - y, centerY + x, color);
    drawPoint(batch, centerX + y, centerY - x, color);
    drawPoint(batch, centerX - y, centerY - x, color);
}

The other way:
public static void drawCircle(PolygonSpriteBatch target, Vector2 center, float radius, int lineWidth, int segments, int tintColorR, int tintColorG, int tintColorB, int tintColorA) {
    Vector2[] vertices = new Vector2[segments];
    double increment = Math.PI * 2.0 / segments;
    double theta = 0.0;
    for (int i = 0; i < segments; i++) {
        vertices[i] = new Vector2((float) Math.cos(theta) * radius + center.x, (float) Math.sin(theta) * radius + center.y);
        theta += increment;
    }
    drawPolygon(target, vertices, lineWidth, segments, tintColorR, tintColorG, tintColorB, tintColorA);
}

In the render loop:
polygonSpriteBatch.begin();
Bitmap.drawBresenhamCircle(polygonSpriteBatch, 500, 300, 200, ColorRGBA.Blue);
Bitmap.drawCircle(polygonSpriteBatch, new Vector2(500, 300), 200, 5, 50, 255, 0, 0, 255);
polygonSpriteBatch.end();

I am trying to choose one of them. So I thought that I should go with the one that does not involve heavy calculations and is efficient and faster. It is said that the use of floating point numbers, trigonometric operations etc. slows down things a bit. What do you think would be the best method to use? When I compared the code by observing the time taken by the flow from the start of the method to the end, it shows that the second one is faster. (I think I am doing something wrong here.)
Thank you.
• Hi Forum,
in terms of rendering a tiled game level, lets say the level is 3840x2208 pixels using 16x16 tiles. which method is recommended;
method 1- draw the whole level, store it in a texture-object, and only render whats in view, each frame.
method 2- on each frame, loop through all tiles, and only draw and render them to the window if they are in view.
Are both of these methods valid? Are there other ways? I know method 1 is memory intensive but method 2 is processing heavy.
• By wobes
Hi there. I am really sorry to post this, but I would like to clarify the delta compression method. I've read Quake 3 Networking Model: http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/Quake3Networking, but still have some question. First of all, I am using LiteNetLib as networking library, it works pretty well with Google.Protobuf serialization. But then I've faced with an issue when the server pushes a lot of data, let's say 10 players, and server pushes 250kb/s of data with 30hz tickrate, so I realized that I have to compress it, let's say with delta compression. As I understood, the client and server both use unreliable channel. LiteNetLib meta file says that unreliable packet can be dropped, or duplicated; while sequenced channel says that packet can be dropped but never duplicated, so I think I have to use the sequenced channel for Delta compression? And do I have to use reliable channel for acknowledgment, or I can just go with sequenced, and send the StateId with a snapshot and not separately?
Thank you.
• By dp304
Hello!
As far as I understand, the traditional approach to the architecture of a game with different states or "screens" (such as a menu screen, a screen where you fly your ship in space, another screen where you walk around on the surface of a planet etc.) is to make some sort of FSM with virtual update/render methods in the state classes, which in turn are called in the game loop; something similar to this:
struct State {
    virtual void update()=0;
    virtual void render()=0;
    virtual ~State() {}
};
struct MenuState:State {
    void update() override { /*...*/ }
    void render() override { /*...*/ }
};
struct FreeSpaceState:State {
    void update() override { /*...*/ }
    void render() override { /*...*/ }
};
struct PlanetSurfaceState:State {
    void update() override { /*...*/ }
    void render() override { /*...*/ }
};
MenuState menu;
FreeSpaceState freespace;
PlanetSurfaceState planet;
State * states[] = {&menu, &freespace, &planet};
int currentState = 0;
void loop() {
    while (!exiting) {
        /* Handle input, time etc. here */
        states[currentState]->update();
        states[currentState]->render();
    }
}
int main() {
    loop();
}

My problem here is that if the state changes only rarely, like every couple of minutes, then the very same update/render method will be called several times for that time period, about 100 times per second in case of a 100FPS game. This seems to make dynamic dispatch, which has some performance penalty, a bit pointless. Of course, one may argue that a couple hundred virtual function calls per second is nothing for even a not so modern computer, and especially nothing compared to the complexity of the render/update function in a real life scenario. But I am not quite sure. Anyway, I might have become a bit too paranoid about virtual functions, so I wanted to somehow "move out" the virtual function calls from the game loop, so that the only time a virtual function is called is when the game enters a new state. This is what I had in mind:
template<class TState> void loop(TState * state) {
    while (!exiting && !stateChanged) {
        /* Handle input, time etc. here */
        state->update();
        state->render();
    }
}
struct State {
    /* No update or render function declared here! */
    virtual void run()=0;
    virtual ~State() {}
};
struct MenuState:State {
    void update() { /*...*/ }
    void render() { /*...*/ }
    void run() override { loop<MenuState>(this); }
};
struct FreeSpaceState:State {
    void update() { /*...*/ }
    void render() { /*...*/ }
    void run() override { loop<FreeSpaceState>(this); }
};
struct PlanetSurfaceState:State {
    void update() { /*...*/ }
    void render() { /*...*/ }
    void run() override { loop<PlanetSurfaceState>(this); }
};
MenuState menu;
FreeSpaceState freespace;
PlanetSurfaceState planet;
State * states[] = {&menu, &freespace, &planet};
void run() {
    while (!exiting) {
        stateChanged = false;
        states[currentState]->run(); /* Runs until next state change */
    }
}
int main() {
    run();
}

The game loop is basically the same as the one before, except that it now exits in case of a state change as well, and the containing loop() function has become a function template.
Instead of loop() being called directly by main(), it is now called by the run() method of the concrete state subclasses, each instantiating the function template with the appropriate type. The loop runs until the state changes, in which case the run() method shall be called again for the new state. This is the task of the global run() function, called by main().
There are two negative consequences. First, it has become slightly more complicated and harder to maintain than the one above; but only SLIGHTLY, as far as I can tell based on this simple example. Second, code for the game loop will be duplicated for each concrete state; but it should not be a big problem as a game loop in a real game should not be much more complicated than in this example.
My question: Is this a good idea at all? Does anybody else do anything like this, either in a scenario like this, or for completely different purposes? Any feedback is appreciated!
# Matrix Calculation Efficiency
This topic is 640 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Hi Guys,
At present, I send the W, V, & P matrices to the shader where they are multiplied within the shader to position vertices.
Would it be more efficient to pre-multiply these on the CPU and then pass the result to the shader?
##### Share on other sites
Do not prematurely optimize things, you might end up having to switch to the other method later. Profile and test things, that is what will make the best determination. There are very, very few steadfast rules about this stuff, it is highly dependent upon what you're doing code wise, and the data you're pumping through the CPU/GPU, etc.
##### Share on other sites
It's my premature optimisation that is allowing me to be able to render so much in the first place.
I was just wondering what the normal practice was.
##### Share on other sites
Simple answer: yes - doing multiplication once ahead of time, in order to avoid doing it hundreds of thousands of times (once per vertex) is obviously a good idea.
However, there may be cases where uploading a single WVP matrix introduces its own problems too!
For example, lets say we have a scene with 1000 static objects in it and a moving camera.
Each frame, we have to calculate VP = V*P, and then perform 1000 WVP = W * VP calculations, and upload the 1000 resulting WVP matrices to the GPU.
If instead, we sent W and VP to the GPU separately, then we could pre-upload 1000 W matrices one time in advance, and then upload a single VP matrix per frame.... which means that the CPU will be doing 1000x less matrix/upload work in the second situation... but the GPU will be doing Nx more matrix multiplications, where N is the number of vertices drawn.
The right choice there would depend on the exact size of the CPU/GPU costs incurred/saved, and how close to your GPU/CPU processing budgets you are.
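To make the trade-off concrete outside of any particular graphics API, here is a small NumPy mock-up (the matrices and vertex count are made up, it is not engine code); it only illustrates that hoisting the multiply out of the per-vertex work gives the same result with far fewer matrix products per vertex:

import numpy as np

# Made-up 4x4 transforms standing in for World, View and Projection
W = np.diag([1.0, 1.0, 1.0, 1.0])
V = np.diag([2.0, 2.0, 2.0, 1.0])
P = np.diag([0.5, 0.5, 0.1, 1.0])

vertices = np.random.rand(100_000, 4)      # homogeneous positions

# Option A: pre-multiply once on the "CPU", then apply a single matrix per vertex
WVP = P @ V @ W
out_a = vertices @ WVP.T

# Option B: apply the three matrices separately per vertex (what the shader would do
# if W, V and P were uploaded individually) - three times the per-vertex matrix work
out_b = ((vertices @ W.T) @ V.T) @ P.T

assert np.allclose(out_a, out_b)           # same result either way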
##### Share on other sites
Yes. Multiply once outside is the way to go. If it's doing something static like rendering landscape then yes. A bit more tricky if it's your game entities. In that case you need to weigh up instancing for translation and orientation of objects vs updating the matrix on the fly each draw call.
For static, yes. For dynamic in low numbers, yes. It gets murkier when you start dealing with a lot of objects.
##### Share on other sites
Thanks guys!
In my case just about all of the geometry will be pre-transformed in my 3D package. So, there won't be any additional rotations, scaling, etc to do either.
##### Share on other sites
Yes.
And no, no, no, no, no: this is not premature optimization, it's engineering for efficiency, they're not the same thing and don't listen to anyone who tells you different.
##### Share on other sites
I got a similar question about fine performance measurement:
Imagine I have in Geometry Shader two loops with known compile-time consts:
for (x = 0; x < 4; ++x) {
for (y = 0; y < 3; ++y) {
... DoStuff();
}
}
This code in release mode gives me "Approximately 22 instruction slots used" (VS compiler will output this info)
If I would place [unroll] before each loop, I would have "Approximately 89 instruction slots used".
Right now I can measure time in NSight's "Events" window with nanosecond precision and can't see any performance gain between the shaders.
Is there a way to measure the difference in a finer way?
The question is similar, because measuring the performance difference of such optimizations (2 matrices vs 1, unroll/no unroll) requires some tool to measure the difference.
Edited by Happy SDE
##### Share on other sites
If you can't see any perf difference it might just be because you're bottlenecked elsewhere; e.g. you might be CPU-bound.
##### Share on other sites
If you can't see any perf difference it might just be because you're bottlenecked elsewhere; e.g. you might be CPU-bound.
No, I am not CPU bound at all.
This code calculates 4 Shadow Maps in one pass, which is faster than 4 separate calls (I can see the difference in NSight, because it is significant, like a 50-200% win depending on quality settings).
This is a macro-optimization.
But passing unroll or 1/2 matrices is a micro optimization, which might give me something.
And with current tools I am aware of I can't detect it =(
One option - is to calculate instruction count.
But as I understand:
1. Each instruction has its own cost and just summing them up is not a good idea.
2. NSight's measurement on the same scene, with the same shader, gives me an error of about 0.2% between passes.
So I keep searching for a tool that will give me the ability to measure micro-optimization performance.
The main reason for that: find (and measure) a good practice once, and after that apply it elsewhere without unnecessary code bloating because of some unmeasured speculations.
Edited by Happy SDE
|
2018-04-23 06:03:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20705051720142365, "perplexity": 3344.371065998009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945793.18/warc/CC-MAIN-20180423050940-20180423070940-00452.warc.gz"}
|
https://www.rieti.go.jp/en/papers/contribution/oguro/12.html
|
# Introducing "Career Advancement Type" Scholarship
OGURO Kazumasa
Consulting Fellow, RIETI
## How to Look at the Rate of Return on Higher Education
As the Fourth Industrial Revolution, encompassing Artificial Intelligence (AI), Big Data and the Internet of Things (IoT) has progressed, with new knowledge and ideas becoming great sources of economic growth, education has become an investment in the next generation who will be responsible for the future. Education is also a "long-term national plan," the key to breaking the cycle of poverty in overcoming the different conditions that children are placed in. In other words, education, which plays a role in the formulation of "human capital," is not only a source of growth but also possesses the function of correcting disparities.
Under such circumstances, Prime Minister Shinzo Abe introduced the promotion of a "Revolution in Human Resources Development" at a press conference on June 19, 2017. Free higher education and university reforms are its pillars, and the prime minister appointed the former chairman of the Policy Research Council of the Liberal Democratic Party, Toshimitsu Motegi, to concurrently serve as minister for this "Revolution in Human Resources Development" and as minister of state for economic and fiscal policy. It goes without saying that education is a "long-term national plan" which must not be used as a policy just to "attract popularity." To begin with, what is the impact of higher education in looking at its relationship with human capital investment and growth? Empirical analysis of the impact of education is not easy, but from a cost-benefit analysis perspective the rate of return on education serves as an important indicator of the impact of the educational budget.
The rate of return on education implies a "rate of return where education is considered to be an investment in human capital," and there are two concepts that exist: "private rate of return" and "social rate of return." Of the two, private rate of return refers to "internal rate of return calculated from private expenses which includes private benefits such as additional lifetime earnings obtained through university education, and acquired earnings lost (opportunity costs) with university admission and admission costs." Social rate of return refers to "internal rate of return calculated from the sum of benefits which includes social benefits such as increased tax incomes through education and lower employment rate, and the sum of expenses which includes social expenses such as financial subsidies and scholarships, etc."
For example, according to the "Useful Labour Statistics 2015—Series of Processed Indices of Labour Statistics" published by the Japan Institute for Labour Policy and Training, while the average lifetime wages that high school graduate workers (men) receive are around 240 million yen, the lifetime wages for those who finished university and graduate schools are around 311 million yen, and the difference in lifetime wages between high school graduate workers and university graduate workers stands at around 70 million yen.
On the other hand, "Education at a Glance 2016," an OECD reference material on education, shows that private rate of return (annual) on higher education in Japan was at 8% for men and 3% for women, lower than the OECD average (14% for men and 12% for women). This reflects the situation in Japan where the wage disparity between university graduates and high school graduates is not as large compared to other OECD countries. But given the low standard of public payment on higher education in Japan, the social rate of return (annual) is much higher than the OECD average (10% for men and 8% for women) at 21% for men and 28% for women.
## How to Acquire the Financial Resources
What about the financial resources? To begin with, there is fundamentally no "free lunch" with policies and there is a need for some type of financial resource. If free higher education, one of the pillars of the reform, was to be implemented, what sort of financial resources will be required? Hints to this can be found in the reference materials for the Eighth Proposal of the Headquarters for the Revitalization of Education (May 18, 2017).
Those materials include estimation results which show that if tuition for higher education, including universities and professional colleges, were to be made free, financial resources of roughly 3.7 trillion yen (equivalent to a consumption tax rate hike of 1.4%) would be required, and even if an income restriction (households with income below 9 million yen) were implemented, financial resources of roughly 2.7 trillion yen (a consumption tax rate hike of 1%) would still be required.
In addition, if the entire amount of tuition fees were to be exempted for households with less than 3 million yen or if they were half exempted for households with income between 3 million and 5 million yen, financial resources of around 0.7 trillion yen would be required. But under the current grave financial situation, it is not easy to secure such sums every year.
Under such circumstances, an "education bond" concept to secure new financial resources is looming in the political arena, and could possibly ignite a fire. Educational bonds are bonds issued with the purpose of extending financial resources for higher education including universities. It is, in essence, an educational loan where the entire next generation (children) pays off the loan with their own tax payments once they grow up to be adults, and if its social rate of return exceeds the market interest rate, then the "education loan" may be theoretically justified.
However, under the current financial situation, tax income cannot cover current expenses including the educational budget, and the fiscal deficit is chronic. In other words, issuing deficit bonds has become chronic, and it can be said that a portion of that has become an "education bond." Thus, issuing more bonds in the noble cause of "education" should not be tolerated.
During the House of Representatives Election (October 2017), Abe announced his intention to use roughly 2 trillion yen, a portion of the increased income from the consumption tax rate hike planned for October 2019, in financial resources for child care support and free education. But since free daycare for children and kindergarteners was also announced, there were no prospects of securing financial resources for free education.
As a result, the government and the LDP went on to consider targeting students from households with income of less than 2.6 million yen for free higher education, but this would result in excluding students from households with income of more than 2.6 million yen, and the initial enthusiasm for the reform began to fade.
This cannot be helped since there is an aspect of limitation to financial resources, but it still leaves the fundamental question of why students from households with an income of less than 2.6 million yen are able to apply for free education while students from households with an income of 2.7 million yen cannot. Can the LDP-Komeito coalition continue consultations within the government and seek ways to somehow resolve the challenge of financial resources?
## Similar Mechanism to the Australian Mechanism Can Be Replicated with Fiscal Investment & Loan Program
Hence, some experts are focusing on the newly introduced schemes, "Higher Education Contribution Scheme" (HECS) in Australia, and its successor "Higher Education Loan Programme" (HECS-HELP), and the Japanese government has also announced its consideration of a similar system under the "Revolution in Human Resources Development." (For more details on HECS and HELP, see Risa Itoh, "Recent trends in cost bearing system for higher education in Australia," National Diet Library Reference (658) pp. 113-121, 2005.) In fact, the LDP's Headquarters for the Revitalization of Education has also announced that it aims to come up with details of a system by the first half of 2018, where tuition while at university will be exempted and the loan paid back after graduation once graduates start working, referencing the Australian system and others. Laying out the conclusion first, I believe that with adequate political leadership from the government, a system resembling the Australian system may be achieved by using the Fiscal Investment and Loan Program.
The process will be explained in steps. First, HECS-HELP can be considered a "career advancement" system, and tuition is free while attending university. Upon graduation, the tuition is paid back under the taxation system according to income, and roughly more than 80% of the students receive payments such as HECS-HELP etc. (the numerical data for "roughly more than 80%" is from Hiroshi Suzuki, "Scholarship system in Western nations and the situation in Japan," 2005 (http://www.suzukan.net/03report/syougakukin_ronbun.html). While HECS-HELP is a no-interest framework for selected students, there are other interest-bearing frameworks such as FEE-HELP that other students can apply for.)
## What Is Required to Solve This Problem?
First is to set, at an adequate level, the minimum income threshold below which repayments on the income-contingent loan (ICL) are deferred. Naturally, if this threshold is raised, the losses associated with unpaid loans will become larger. But the real question is whether the threshold is adequate or not. For example, the ICL in Australia has the threshold set at around 5 million yen, and since average annual earnings in Australia are 7 million yen, the threshold is about 70% of annual earnings. In Japan, the threshold for the ICL is 3 million yen, and since average annual earnings in Japan are around 4.5 million yen, the threshold in Japan is also at around 70%. Thus, the threshold in Japan (3 million yen) is at the same level as Australia's, and further reduction is therefore thought to be unnecessary. On the contrary, to contain the losses associated with unpaid loans, the mechanism that basically allows a grace period for repayment for households with annual earnings of less than 3 million yen should be modified. For example, in addition to the minimum monthly repayment of 2,000 yen, the repayment rate could be revised to [9 - 0.03 × (300 - Z)]% for annual earnings of Z, where Z is measured in units of 10,000 yen and Z ≤ 300 (that is, earnings up to 3 million yen).
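To make the proposed schedule concrete, the sketch below evaluates it for a few income levels (a minimal illustrative sketch, not part of the proposal; it assumes Z is measured in units of 10,000 yen and ignores the separate 2,000-yen minimum monthly payment):

```python
def repayment_rate(z):
    """Proposed ICL repayment rate in percent, for annual earnings z
    measured in units of 10,000 yen (so z = 300 means 3 million yen)."""
    if z >= 300:                       # at or above the 3-million-yen threshold
        return 9.0                     # standard 9% repayment rate
    return max(9.0 - 0.03 * (300 - z), 0.0)

for z in (0, 100, 200, 300, 450):
    print(f"earnings {z * 10_000:>9,} yen -> repayment rate {repayment_rate(z):.1f}%")
```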
Second, since losses on the ICL are written off in the mid to long term, additional payments (for example, an additional payment around 1% on top of the 9% repayment rate) can be introduced. Once this is set, students who are likely to become mid-to high-income workers can be included as much as possible in the ICL. In other words, losses that will arise with unpaid loans of low-income workers will be written off with additional payments levied on the mid-to high-income workers. But in reality, it is impossible to predict which students will become high-income workers and which ones will become low-income workers. Thus, one of the policy options may have to be having a mechanism for one-time repayment or prepayment, and that all students join the ICL at least once.
However, tuition at private universities is set higher than tuition at national universities, and tuition also varies depending on what the students major in, such as medicine, and thus when introducing additional payments it may give rise to unfairness depending on which university the student chooses and what the student majors in. Two choices can be considered to resolve this.
The first is to set the maximum scholarship amount to be issued by the ICL to correspond to the tuition at a national university. For example, if annual tuition at a national university is 600,000 yen and 1 million yen at a private university, the cap on the scholarship can be set at 600,000 yen and the remainder of the tuition, 400,000 yen, can be paid up front. If such a policy were to be implemented, payment of a certain percentage of the tuition will remain, but tuition for students enrolled at national universities will be zero.
The other is to set additional payment at a "set amount" rather than a "percentage." For example, setting a monthly payment of 3,000 yen. This means that for households with less than 3 million yen in annual income, the minimum monthly payment will be 2,000 yen and for households with annual income of more than 3 million yen, in addition to repayment at 9%, an additional payment of 3,000 yen will be set. In this case, even if tuition differs between universities and departments, the inequality in additional payments that arises from which university or what major one chooses will be eased.
A third choice is to utilize the My Number system and make sure incomes are properly captured and repayments are made appropriately. The ICL utilizing the My Number system has already been implemented for students starting the school year in April 2017, and for taxable household income the Japan Student Service Organization will use the My Number submitted by the students to obtain information on taxable income. The Japanese ICL currently uses direct bank debit, but if the ICL were to be extended, collection must be undertaken strictly, and to strengthen the scholarship collection the Australian (HECS-HELP) method of tax withholding will need to be considered.
Either way, as Nelson Mandela, who strove to abolish apartheid and became the first Black president of South Africa, said, "Education is the strongest weapon. Education can change the world." The Japanese version of HECS and the expansion and extension of the ICL scholarship mean that Japan needs to fundamentally transform the way the cost of higher education is covered, and change the system so that what used to be mainly paid by parents will instead be jointly paid by the student and society which benefits from higher education. This shift is also related to the question of where the "center of gravity" of the education burden should be placed. For a country with poor natural resources, human resources are Japan's greatest asset and thus, needless to say, a Revolution in Human Resources Development is important. But it is also highly hoped that a Japanese version of HECS will come to life through adequate political leadership, with consideration given to ICL loss write-offs and financial limitations.
This article first appeared on the May/June 2018 issue of Japan SPOTLIGHT published by Japan Economic Foundation. Reproduced with permission.
May/June 2018 Japan SPOTLIGHT
May 17, 2018
|
2019-06-18 10:45:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1983513981103897, "perplexity": 2462.772729619313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998716.67/warc/CC-MAIN-20190618103358-20190618125358-00491.warc.gz"}
|
http://tjxb.cnjournals.cn/html/2018/05/17293.htm
|
Journal of Tongji University (Natural Science), 2018, Vol. 46, Issue (5): 588-592. DOI: 10.11908/j.issn.0253-374x.2018.05.004
### Cite this article
LI Jingpei, YAO Mingbo. Analytical Solution for Sulfate Diffusion Reaction in Circular Concrete Piles[J]. Journal of Tongji University (Natural Science), 2018, 46(5): 588-592. DOI: 10.11908/j.issn.0253-374x.2018.05.004
Analytical Solution for Sulfate Diffusion Reaction in Circular Concrete Piles
LI Jingpei 1,2, YAO Mingbo 1,2
1. College of Civil Engineering, Tongji University, Shanghai 200092, China;
2. Key Laboratory of Geotechnical and Underground Engineering of the Ministry of Education, Tongji University, Shanghai 200092, China
Abstract: Based on Fick's second law, a diffusion-reaction equation for sulfate in concrete piles was established. Using the method of separation of variables and the Danckwerts method, together with the initial and boundary conditions, an analytical solution of the diffusion-reaction equation was derived. Furthermore, considering the effects of pore filling and cracking on the diffusion coefficient, an effective diffusion coefficient model was proposed to describe the variation of the diffusion coefficient during the degradation of concrete piles. Sulfate concentration profiles were obtained from the solution and the effective diffusion coefficient model; they agree well with the experimental data, verifying the validity of the proposed analytical solution. A case study shows that the inhibiting effect of pore filling on sulfate diffusion is remarkable, while the development of cracks promotes the diffusion and reaction of sulfate. As the water-cement ratio decreases, the penetration depth of sulfate decreases, as does the sulfate concentration in the concrete piles.
Key words: concrete pile; sulfate; diffusion reaction; crack; pore filling
1 Analytical solution of the radial diffusion-reaction equation
$\frac{{\partial \rho }}{{\partial t}} = {D_{{\rm{eff}}}}\left( {\frac{{{\partial ^2}\rho }}{{\partial {r^2}}} + {r^{ - 1}}\frac{{\partial \rho }}{{\partial r}}} \right) - v\rho$ (1)
$\left\{ \begin{array}{l} \rho \left( {r,0} \right) = {\rho _0},0 \le r < {r_0}\\ \rho \left( {{r_0},t} \right) = {\rho _{\rm{s}}},t > 0 \end{array} \right.$ (2)
$\frac{{\partial \rho }}{{\partial t}} = {D_{{\rm{eff}}}}\left( {\frac{{{\partial ^2}\rho }}{{\partial {r^2}}} + {r^{ - 1}}\frac{{\partial \rho }}{{\partial r}}} \right)$ (3)
${\rho _1}\left( {r,t} \right) = {\rho _2}\left( {r,t} \right) + {\rho _{\rm{s}}}$ (4)
$\frac{{\partial {\rho _2}}}{{\partial t}} = {D_{{\rm{eff}}}}\left( {\frac{{{\partial ^2}{\rho _2}}}{{\partial {r^2}}} + {r^{ - 1}}\frac{{\partial {\rho _2}}}{{\partial r}}} \right)$ (5)
$\left\{ \begin{array}{l} {\rho _2}\left( {r,0} \right) = {\rho _0} - {\rho _{\rm{s}}},0 \le r < {r_0}\\ {\rho _2}\left( {{r_0},t} \right) = 0,t > 0 \end{array} \right.$ (6)
Let ${\rho _2} = T(t)R(r)$, where $T(t)$ is a function of $t$ and $R(r)$ is a function of $r$; substituting into Eq. (5) gives
$\frac{{T'}}{{{D_{{\rm{eff}}}}T}} = \frac{{\Delta R}}{R}$ (7)
$\left\{ \begin{array}{l} \frac{{{\rm{d}}T\left( t \right)}}{{{\rm{d}}t}} + {D_{{\rm{eff}}}}{\beta ^2}T\left( t \right) = 0\\ \frac{{{{\rm{d}}^2}R\left( r \right)}}{{{\rm{d}}{r^2}}} + {r^{ - 1}}\frac{{{\rm{d}}R\left( r \right)}}{{{\rm{d}}r}} + {\beta ^2}R\left( r \right) = 0 \end{array} \right.$ (8)
$T\left( t \right) = {c_1}{{\rm{e}}^{ - {\beta ^2}{D_{{\rm{eff}}}}t}}$ (9)
$R\left( r \right) = {A_0}{{\rm{J}}_0}\left( {\beta r} \right) + {B_0}{{\rm{Y}}_0}\left( {\beta r} \right)$ (10)
${\rho _2}\left( {r,t} \right) = T\left( t \right)R\left( r \right) = \left[ {A{{\rm{J}}_0}\left( {\beta r} \right) + B{{\rm{Y}}_0}\left( {\beta r} \right)} \right]{{\rm{e}}^{ - {\beta ^2}{D_{{\rm{eff}}}}t}}$
$\left\{ \begin{array}{l} A{{\rm{J}}_0}\left( {\beta r} \right) + B{{\rm{Y}}_0}\left( {\beta r} \right) = {\rho _0} - {\rho _{\rm{s}}}\\ A{{\rm{J}}_0}\left( {\beta {r_0}} \right) + B{{\rm{Y}}_0}\left( {\beta {r_0}} \right) = 0 \end{array} \right.$ (11)
${\rho _2}\left( {r,t} \right) = \sum\limits_{n = 1}^\infty {{{\left( {{\rho _2}} \right)}_n}} = \sum\limits_{n = 1}^\infty {{A_n}{{\rm{e}}^{ - {D_{{\rm{eff}}}}\beta _n^2t}}{{\rm{J}}_0}\left( {{\mu _n}r/{r_0}} \right)}$ (12)
$\begin{array}{l} \int_0^{{r_0}} {r\left( {{\rho _0} - {\rho _{\rm{s}}}} \right){{\rm{J}}_0}\left( {{\mu _m}r/{r_0}} \right){\rm{d}}r} = \\ \;\;\;\;\;\;\;\sum\limits_{n = 1}^\infty {{A_n}\int_0^{{r_0}} {r{{\rm{J}}_0}\left( {{\mu _n}r/{r_0}} \right){{\rm{J}}_0}\left( {{\mu _m}r/{r_0}} \right){\rm{d}}r} } = \\ \;\;\;\;\;\;\;{A_m}r_0^2{\rm{J}}_1^2\left( {{\mu _m}} \right)/2 \end{array}$ (13)
${A_m} = \frac{{2\left( {{\rho _0} - {\rho _{\rm{s}}}} \right)}}{{{\beta _m}{r_0}{{\rm{J}}_1}\left( {{\beta _m}{r_0}} \right)}},m = 1,2, \cdots$ (14)
$\begin{array}{l} {\rho _2}\left( {r,t} \right) = \sum\limits_{n = 1}^\infty {{{\left( {{\rho _2}} \right)}_n}} = \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;2\left( {{\rho _0} - {\rho _{\rm{s}}}} \right)\sum\limits_{n = 1}^\infty {\frac{{{{\rm{J}}_0}\left( {{\mu _n}r/{r_0}} \right)}}{{{\beta _n}{r_0}{{\rm{J}}_1}\left( {{\beta _n}{r_0}} \right)}}{{\rm{e}}^{ - {D_{{\rm{eff}}}}\beta _n^2t}}} \end{array}$ (15)
$\begin{array}{l} {\rho _1} = {\rho _2}\left( {r,t} \right) + {\rho _{\rm{s}}} = \\ \;\;\;\;\;\;\;{\rho _{\rm{s}}} + 2\left( {{\rho _0} - {\rho _{\rm{s}}}} \right)\sum\limits_{n = 1}^\infty {\frac{{{{\rm{J}}_0}\left( {{\mu _n}r/{r_0}} \right)}}{{{\beta _n}{r_0}{{\rm{J}}_1}\left( {{\beta _n}{r_0}} \right)}}{{\rm{e}}^{ - {D_{{\rm{eff}}}}\beta _n^2t}}} \end{array}$ (16)
$\rho = {\rho _{\rm{s}}} + 2\left( {{\rho _0} - {\rho _{\rm{s}}}} \right)\sum\limits_{n = 1}^\infty {\frac{{{{\rm{J}}_0}\left( {{\mu _n}r/{r_0}} \right)}}{{{\beta _n}{r_0}{{\rm{J}}_1}\left( {{\beta _n}{r_0}} \right)}}F\left( t \right)}$ (17)
$F\left( t \right) = \frac{v}{{{D_{{\rm{eff}}}}\beta _n^2 + v}} + \frac{{{D_{{\rm{eff}}}}\beta _n^2}}{{{D_{{\rm{eff}}}}\beta _n^2 + v}}{{\rm{e}}^{ - \left( {{D_{{\rm{eff}}}}\beta _n^2 + v} \right)t}}$ (18)
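For readers who want to reproduce the concentration profiles numerically, the series in Eqs. (17)-(18) can be evaluated directly once the positive zeros $\mu_n$ of ${\rm J}_0$ are known. The following is only an illustrative sketch (Python with SciPy; the parameter values are placeholders, not the values used in this paper):

```python
import numpy as np
from scipy.special import jn_zeros, j0, j1

def sulfate_profile(r, t, r0, rho0, rho_s, D_eff, v, n_terms=50):
    """Evaluate Eq. (17): sulfate concentration rho(r, t) in a circular pile
    of radius r0, with initial concentration rho0, boundary concentration
    rho_s, effective diffusion coefficient D_eff and reaction rate v."""
    mu = jn_zeros(0, n_terms)            # positive zeros of J0, so beta_n = mu_n / r0
    lam = D_eff * (mu / r0) ** 2
    F = v / (lam + v) + lam / (lam + v) * np.exp(-(lam + v) * t)   # Eq. (18)
    terms = j0(np.outer(np.atleast_1d(r), mu) / r0) / (mu * j1(mu)) * F
    return rho_s + 2.0 * (rho0 - rho_s) * terms.sum(axis=1)

# placeholder parameters, for illustration only
r0 = 0.25                                 # pile radius, m
profile = sulfate_profile(r=np.linspace(0.0, r0, 6), t=3.15e7,   # about one year, in s
                          r0=r0, rho0=0.0, rho_s=5.0,
                          D_eff=2e-12, v=1e-9)
print(np.round(profile, 3))
```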
2 Effective diffusion coefficient of sulfate
${D_{{\rm{eff}}}} = \frac{{{S_{\rm{c}}}{D_{\rm{c}}} + {S_{\rm{0}}}D}}{{{S_{\rm{c}}} + {S_{\rm{0}}}}}$ (19)
$\varepsilon = \max \left[ {{\varphi _{\rm{c}}}\left( {\frac{{{m_{\rm{w}}}/{m_{\rm{c}}} - 0.36\alpha }}{{{m_{\rm{w}}}/{m_{\rm{c}}} + 0.32}}} \right),0} \right]$ (20)
$\alpha = 1 - 0.5\left[ {{{\left( {1 + 1.67\tau } \right)}^{ - 0.6}} + {{\left( {1 + 0.29\tau } \right)}^{ - 0.48}}} \right]$ (21)
${D_{\rm{c}}} = \kappa b_{\rm{c}}^2,{b_{\rm{c}}} < {b_{{\rm{crit}}}}$ (22)
${D_{\rm{c}}} = \kappa {b_{{\rm{crit}}}}{b_{\rm{c}}},{b_{\rm{c}}} \ge {b_{{\rm{crit}}}}$ (23)
${D_{\rm{c}}} = \left\{ \begin{array}{l} \kappa b_{\rm{c}}^2,\;0 < {b_{\rm{c}}} < {b_{{\rm{crit}}}}\\ \kappa {b_{{\rm{crit}}}}{b_{\rm{c}}},\;{b_{{\rm{crit}}}} \le {b_{\rm{c}}} < 400\;{\rm{\mu m}}\\ {D_{{\rm{free}}}} = {10^{ - 9}}\;{{\rm{m}}^2} \cdot {{\rm{s}}^{ - 1}},\;{b_{\rm{c}}} \ge 400\;{\rm{\mu m}} \end{array} \right.$ (24)
${S_0} = {\rm{\pi }}{r_0} - {b_{\rm{c}}}$ (25)
${D_{{\rm{eff}}}} = D + \frac{{{b_{\rm{c}}}\left( {{D_{\rm{c}}} - D} \right)}}{{{\rm{\pi }}{r_0}}}$ (26)
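Equations (24)-(26) translate directly into code. The sketch below returns $D_{\rm eff}$ as a function of the crack width $b_{\rm c}$; the values of $\kappa$, $b_{\rm crit}$ and the matrix diffusion coefficient $D$ are placeholders, not taken from this paper:

```python
import math

D_FREE = 1e-9   # free diffusion coefficient in solution, m^2/s (Eq. 24, b_c >= 400 um)

def crack_diffusion(b_c, kappa, b_crit):
    """Crack diffusion coefficient D_c as a function of crack width b_c (m), Eq. (24)."""
    if b_c < b_crit:
        return kappa * b_c ** 2
    if b_c < 400e-6:
        return kappa * b_crit * b_c
    return D_FREE

def effective_diffusion(D, b_c, r0, kappa, b_crit):
    """Eq. (26): effective diffusion coefficient of the cracked pile cross-section."""
    return D + b_c * (crack_diffusion(b_c, kappa, b_crit) - D) / (math.pi * r0)

# illustrative numbers only
for b_c in (0.0, 50e-6, 200e-6, 500e-6):
    print(b_c, effective_diffusion(D=2e-12, b_c=b_c, r0=0.25, kappa=2e-3, b_crit=80e-6))
```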
3 Model validation and analysis
3.1 Validation of the analytical solution for the sulfate diffusion reaction
Fig.1 Comparison of sulfate mass fraction distribution between analytical results and experimental data
3.2 Analysis of influencing factors
3.2.1 Pores and crack width
Fig.2 Effect of crack width on effective diffusion coefficient
Fig.3 Sulfate mass fraction distribution considering the effect of pore filling and crack
3.2.2 Water-cement ratio
Fig.4 Effect of water-cement ratio on sulfate mass fraction distribution in concrete piles
4 Conclusions
(1) Based on the microscopic mechanism of sulfate attack on concrete piles, a diffusion-reaction equation for sulfate in circular concrete piles was established from Fick's second law. Using the initial and boundary conditions, an analytical solution of the diffusion-reaction equation was derived by the method of separation of variables and the Danckwerts method.
(2) An effective diffusion coefficient model covering the whole degradation process of concrete is proposed, which accounts for two effects of the attack products on the effective diffusion coefficient: pore filling and expansion-induced cracking. Pore filling by attack products hinders sulfate diffusion, while cracking caused by the expansive products accelerates sulfate ingress. Considering both pore filling and damage cracking allows the sulfate attack process in concrete to be described more accurately.
(3) The water-cement ratio has a significant influence on the sulfate resistance of concrete. The smaller the water-cement ratio, the lower the sulfate mass fraction in the concrete and the shallower the sulfate penetration depth, which effectively delays sulfate attack on the concrete.
|
2019-02-17 07:19:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5270475149154663, "perplexity": 13643.203892516707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481766.50/warc/CC-MAIN-20190217071448-20190217093448-00335.warc.gz"}
|
https://mersenneforum.org/showthread.php?s=8078fd0f612661a839942348d760f7ae&t=16320
|
mersenneforum.org Old Computer
2011-12-10, 07:29 #1
Primeinator
"Kyle"
Feb 2005
Somewhere near M52..
38816 Posts
Old Computer
Hello, I have started running Prime95 on my old laptop. It is running Windows7 Home Premium and has an Intel Core 2 T7200 2.0 GHz processor. I am currently running 2 LL tests in the 56M range; however, the per iteration times are very large... 190 to 300 ms. I was wondering if there was anything I can do to speed these times up. I do realize that the per iteration time is going to be larger on a slower processor and larger as the exponent increases in size. What I do not know is if something can be done to make it a little faster... Important to answering this question is I really only want to use this computer for Prime95. Thus, I am prepared to lose functionality if a significant speed boost can be attained. Thanks! Kyle
2011-12-10, 10:49 #2
ET_
Banned
"Luigi"
Aug 2002
Team Italia
113008 Posts
Quote:
Originally Posted by Primeinator Hello, I have started running Prime95 on my old laptop. It is running Windows7 Home Premium and has an Intel Core 2 T7200 2.0 GHz processor. I am currently running 2 LL tests in the 56M range; however, the per iteration times are very large... 190 to 300 ms. I was wondering if there was anything I can do to speed these times up. I do realize that the per iteration time is going to be larger on a slower processor and larger as the exponent increases in size. What I do not know is if something can be done to make it a little faster...Important to answering this question is I really only want to use this computer for Prime95. Thus, I am prepared to lose functionality if a significant speed boost can be attained. Thanks! Kyle
Stop every unnecessary service and antivirus, sound card, even network card while running. Set the desktop to 16 colors to speed up screen refreshes, and make the mouse less responsive. Apart from this, I have no idea. It could be beneficial if you did double checks or P-1 work if the RAM aboard suffices...
Luigi
Last fiddled with by ET_ on 2011-12-10 at 10:49
2011-12-10, 11:23 #3
axn
Jun 2003
23·607 Posts
If you're running 56M exponents, and the iteration time is varying between 190-300, then that suggests that the laptop is throttling due to overheating. In fact, from the benchmarks page, your CPU should be doing 96.65 ms (or thereabouts) for a 56M test (slightly more, say 105ms or so, when running two). So, you need to get the temp under control. Clean out the dust. Switch on the AC. Buy one of these (http://www.amazon.com/Laptop-Noteboo...3515683&sr=8-1). Whatever! If you're able to reduce the temperature and get the nominal iteration time, you're mostly there. Second thing you can do is switch over to 64-bit Prime95 (if you aren't already running it). For that you'd need a 64-bit OS. You can spend money on Windoze, or install linux 64-bit dual-boot and run mprime. Also, make sure that you're running the latest version (26.6). I'll assume that overclocking is out of the question. If not, you can try that too.
2011-12-10, 20:37 #4
Primeinator
"Kyle"
Feb 2005
Somewhere near M52..
23·113 Posts
Quote:
Originally Posted by axn If you're running 56M exponents, and the iteration time is varying between 190-300, then that suggests that the laptop is throttling due to overheating. In fact, from the benchmarks page, your CPU should be doing 96.65 ms (or thereabouts) for a 56M test (slightly more, say 105ms or so, when running two). So, you need to get the temp under control. Clean out the dust. Switch on the AC. Buy one of these (http://www.amazon.com/Laptop-Noteboo...3515683&sr=8-1). Whatever! If you're able to reduce the temperature and get the nominal iteration time, you're mostly there. Second thing you can do is switch over to 64-bit Prime95 (if you aren't already running it). For that you'd need a 64-bit OS. You can spend money on Windoze, or install linux 64-bit dual-boot and run mprime. Also, make sure that you're running the latest version (26.6). I'll assume that overclocking is out of the question. If not, you can try that too.
Is there a specific way I should go about opening up my computer and cleaning out the dust? I don't want to damage my computer and I have never done that sort of thing.
2011-12-11, 06:00 #5
Xyzzy
"Mike"
Aug 2002
174538 Posts
We open the case, take it outside and use a leaf blower to blow it out. It is most certainly the wrong thing to do, but the 90MPH "poof" of dust is worth it. We do not recommend the leaf blower method but if you do try it make sure you do not aim the blower directly at any fans as they will spin too fast and possibly destroy the fan's bushing/bearing assembly. Also note that gasoline-powered leaf blowers exhaust their engine through the air nozzle, spraying two-cycle oil and fuel droplets all over the place. (Ask us how we know!) We use a cheap $30 electric leaf blower now.
2011-12-11, 07:28 #6
LaurV
Romulan Interpreter
Jun 2011
Thailand
100100001100002 Posts
Quote:
Originally Posted by Primeinator the per iteration times are very large... 190 to 300 ms
What do you mean 190 to 300 ms? Does it depends on what you do with the computer, or just when you stay and watch the P95's window? If you do nothing with the computer, just watching the P95 window, the times should be very stable. No matter if 190, or 300 (this highly depends of what hardware you have, like also memory, etc, not only the CPU), but it should be STABLE. If it varies so much, I think you have some "software scheduled" tasks that periodically gets on the way (like the task scheduler, antivirus, disk compression, search indexer) OR you have some "hardware scheduled" things, like heat problems, wrong ACPI settings. It is common on laptops to reduce the CPU frequency (cut the clock in two, three, or even more) when the box gets hot, it is called throttling, or when there is no user activity (like pressing keys, moving mouse) for a period of time (see your advanced control power settings). Generally, if the laptop is so old and it was never cleaned, I would put my money on heat problems.
edit: I swear I did not read axn's post :D @xyzzy: "ask us how we know" wwaaaaahahahaahaaa! wonderful! 10 points!
Last fiddled with by LaurV on 2011-12-11 at 07:31
2011-12-11, 19:15 #7
Primeinator
"Kyle"
Feb 2005
Somewhere near M52..
38816 Posts
Quote:
Originally Posted by Xyzzy We open the case, take it outside and use a leaf blower to blow it out. It is most certainly the wrong thing to do, but the 90MPH "poof" of dust is worth it. We do not recommend the leaf blower method but if you do try it make sure you do not aim the blower directly at any fans as they will spin too fast and possibly destroy the fan's bushing/bearing assembly. Also note that gasoline-powered leaf blowers exhaust their engine through the air nozzle, spraying two-cycle oil and fuel droplets all over the place. (Ask us how we know!) We use a cheap $30 electric leaf blower now.
That was a very entertaining read. I needed that while taking a finals study break!
Quote:
Originally Posted by LaurV What do you mean 190 to 300 ms? Does it depends on what you do with the computer, or just when you stay and watch the P95's window? If you do nothing with the computer, just watching the P95 window, the times should be very stable. No matter if 190, or 300 (this highly depends of what hardware you have, like also memory, etc, not only the CPU), but it should be STABLE. If it varies so much, I think you have some "software scheduled" tasks that periodically gets on the way (like the task scheduler, antivirus, disk compression, search indexer) OR you have some "hardware scheduled" things, like heat problems, wrong ACPI settings. It is common on laptops to reduce the CPU frequency (cut the clock in two, three, or even more) when the box gets hot, it is called throttling, or when there is no user activity (like pressing keys, moving mouse) for a period of time (see your advanced control power settings). Generally, if the laptop is so old and it was never cleaned, I would put my money on heat problems. edit: I swear I did not read axn's post :D @xyzzy: "ask us how we know" wwaaaaahahahaahaaa! wonderful! 10 points!
While doing nothing. I think it is a heat problem as well. Times are usually 190 to 220 ms during the day when the lid is open and up to 300ms at night when the lid is closed. I will look up some ways to clean the fan and all that good stuff. Hopefully that will help.
2011-12-11, 19:51 #8
xilman
Bamboozled!
"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across
3·3,529 Posts
Quote:
Originally Posted by Primeinator While doing nothing. I think it is a heat problem as well. Times are usually 190 to 220 ms during the day when the lid is open and up to 300ms at night when the lid is closed. I will look up some ways to clean the fan and all that good stuff. Hopefully that will help.
2011-12-11, 21:29 #9
Primeinator
"Kyle"
Feb 2005
Somewhere near M52..
23×113 Posts
Unfortunately this is not an option due to my current living arrangements. The only place I can have my computer is in my room and I'd rather not have the lid open during the night. Even dimming the screen puts off too much light.
2011-12-11, 23:27 #10
Chuck
May 2011
Orange Park, FL
32·97 Posts
Quote:
Originally Posted by Primeinator Unfortunately this is not an option due to my current living arrangements. The only place I can have my computer is in my room and I'd rather not have the lid open during the night. Even dimming the screen puts off too much light.
Can you use the screen saver option "blank screen"?
2011-12-11, 23:31 #11
Primeinator
"Kyle"
Feb 2005
Somewhere near M52..
11100010002 Posts
Quote:
Originally Posted by Chuck Can you use the screen saver option "blank screen"?
I am amazed I did not think of that... Wow... To make myself feel better I am going to blame it on the stress of finals week...
|
2021-02-25 08:07:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29715225100517273, "perplexity": 2069.0091878316957}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350846.9/warc/CC-MAIN-20210225065836-20210225095836-00506.warc.gz"}
|
http://snemanden.com/blog/beating-save-otto-griffis
|
# Beating "Save Otto Griffis"
23. October 2014
I recently entered and submitted a game, Save Otto Griffis, to the BaconGameJam08. Here I discuss my game and show my very own let's play, where you can see me beat the game.
## Let's play of Save Otto Griffis
From seeing others play my game I really got a wake up call. I found that others played and approached my game quite differently than I had in mind. Some things were not as obvious as I imagined when designing the game. (Keep in mind, it was created within 48 hours, so not that much thought has been given to this aspect.)
My strategy is to lure the zombies down to the white stage, where I like to keep Otto safe and sound. Then I sprint up on the stage and kill the zombies from there. They can still kill you if you get too close to the border, but you, the player, actually have a slightly larger attack range than the zombies, so it is possible to stay in a spot where you can hit them, but they can't hit you.
I recorded this video with gtk-recordMyDesktop, and used ffmpeg to add an .mp4 source (along with original .ogg file). I used the Firefox webbrowser to play the game in the video. Chromium was used when developing the game.
## Thoughts
So, creating this game and watching others play it has taught me a few things, and from the feedback in the comments, for example, I have a better idea of what works in this game and what doesn't.
### Negative / improve
• Confusing that Otto often dies suddenly, if you do not sprint down and kill the zombies that might have spawned near him.
• Although it plays smoothly in most places I have tested, it is resource-heavy, and in one let's play the game was nearly unplayable when screen recording was enabled.
• Written text (font) does not fit visual style. (When I implemented it, I didn't have time to draw my own bitmap font.)
• Still missing some sounds/music (e.g. at game over/win and in the menu).
### Positive
• The music-change when hitting the button seemed cool.
• I am satisfied with the graphics; given my art-skills and that I am still learning the ways of pixel art, I am satisfied with the sprites and visual feel of the game.
## What I would like to learn / do better next time
I have some thoughts on what I'd like to experiment with next.
• Optimization: e.g. only drawing/handling objects within viewport.
• Be cleverer about level design and creation.
• Write a common framework from the most recent games/experiments that uses pixi.js. Especially the Game object and an Entity super-class with commonly used methods and attributes used by all entities.
• Try creating a mobile-friendly game (touch controls)!
|
2018-06-24 22:43:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1830349564552307, "perplexity": 2736.852160265708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867095.70/warc/CC-MAIN-20180624215228-20180624235228-00525.warc.gz"}
|
https://electronics.stackexchange.com/questions/418728/find-the-transfer-function-of-the-circuit-if-current-i-is-the-measured-output
|
# Find the transfer function of the circuit if current I is the measured output
So there's this problem I am trying to solve:
Consider a series RL circuit connected to a DC voltage source and an on-off switch. Assume R = 2 ohms, L = 1 H, and applied voltage V = 10 volts. Find the transfer function of the circuit if the current I is the measured output. Suppose the switch is closed at t = 0; find the current in the circuit as a function of time.
Based on the problem, I believe the circuit should look like this:
Obviously, when the switch is closed we will have to deactivate the inductor, so the current would be 5A. Now, when the switch is open, the current would normally be zero, so after doing the Laplace Transform of the inductor, I cannot find a way to find the transfer function, and then the current as a function of time.
I would appreciate your help here guys. Thank you.
• Your problem statement says you want the behavior after the switch closes, but your diagram shows the switch being opened at t=0. Could you edit to clarify? Jan 24 '19 at 19:24
The transfer function tells us how the output changes when the input changes.
In this case, the problem hasn't been stated clearly, but we can probably assume the input is the voltage provided by the voltage source. In a more general problem, this might not be a DC voltage, but might include a time-varying component.
It doesn't make much sense to find the transfer function with I as the output when the switch is open, because it would just be $I=0$. So we can probably also assume they want you to find the transfer function when the switch is closed.
I'd rather the problem was written more clearly rather than have to make these assumptions, but sometimes you gotta roll with it.
Now, if you know how to write the impedance of the resistor and the inductor in the Laplace domain, you can find the transfer function very easily.
Obviously, when the switch is closed we will have to deactivate the inductor,
This doesn't make any sense to me. When the switch is closed (in the time shortly after it closes, or if the source had a time-varying component) is exactly when we need to consider the effect of the inductor. You certainly can't just ignore it because the switch is closed.
I cannot find a way to find the transfer function, and then the current as a function of time.
You want to find the behavior after the switch closes, so the transfer function with the switch closed is what would be useful here.
note
If you were interested in how the circuit behaves if the switch is opened at t=0, then the circuit model is incomplete. You'd want to include the inter-winding capacitance of the inductor and probably the arcing behavior of the switch to get a realistic result. (The fact the model only works for switching the switch in one direction is likely to confuse learners, and therefore another reason the original problem statement is poorly written)
• By "deactivating" the inductor I mean replacing it with a current source connected in parallel with the inductor. The current source would be 5/s A, and we can write the impedance of the inductor as s ohms, based on the Laplace transform. Also, besides transfer function I would have to find i(t) Jan 24 '19 at 19:33
• @snitchben, consider what's the effect of closing the switch in terms of the voltage applied across the RL circuit. From there you should be able to solve it in the usual way in the Laplace domain. Jan 24 '19 at 19:51
I think you are mixing up the ideas of the transient behavior of the circuit, and the steady state behavior in the frequency domain (by using Laplace methods).
Transient response would have to do with closing the switch, and seeing the current/ voltage in the inductor as time goes on from t=0.
The steady state behavior in the frequency domain would have to do with replacing the inductor with its frequency domain impedance of L*s. In that case, you would solve for the current through the network, and divide by your input voltage to get the transfer function from input voltage to output current as a function of s.
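For reference, a hedged sketch of the closed-switch case the answers above describe (treating the source voltage as the input and the loop current as the output): with the switch closed, $V(s) = (Ls + R)\,I(s)$, so $H(s) = I(s)/V(s) = 1/(Ls+R) = 1/(s+2)$, and a 10 V step gives $I(s) = 10/\big(s(s+2)\big)$, i.e. $i(t) = 5(1 - e^{-2t})$ A for $t \ge 0$, which settles to the 5 A the asker expected. The same result can be checked symbolically, for example with SymPy:

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)
R, L, V = 2, 1, 10

H = 1 / (L * s + R)          # transfer function I(s)/V(s) with the switch closed
I_s = (V / s) * H            # 10 V step applied at t = 0
i_t = sp.inverse_laplace_transform(I_s, s, t)
print(sp.simplify(i_t))      # expected: 5 - 5*exp(-2*t)
```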
|
2022-01-25 02:36:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7076225280761719, "perplexity": 207.9939194344845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304749.63/warc/CC-MAIN-20220125005757-20220125035757-00427.warc.gz"}
|
https://www.vedantu.com/question-answer/list-five-rational-numbers-between-dfrac-class-9-maths-cbse-5ed6aa497d5e1462b8c0a56e
|
# List five rational numbers between $\dfrac{{ - 4}}{5}$ and $\dfrac{{ - 2}}{3}$.
Hint: Try to make the denominators the same for both the given numbers.
In this question we have to find rational numbers between $\dfrac{{ - 4}}{5}$ and $\dfrac{{ - 2}}{3}$.
So we are given two numbers, i.e. $\dfrac{{ - 4}}{5}$ and $\dfrac{{ - 2}}{3}$.
Now, in order to find rational numbers between $\dfrac{{ - 4}}{5}$ and $\dfrac{{ - 2}}{3}$, we have to make the denominators of both numbers the same, and for that we'll multiply $\dfrac{{ - 4}}{5}$ by $\dfrac{3}{3}$ and $\dfrac{{ - 2}}{3}$ by $\dfrac{5}{5}$.
And hence we have
$\Rightarrow \left( {\dfrac{{ - 4}}{5} \times \dfrac{3}{3}} \right){\text{ and }}\left( {\dfrac{{ - 2}}{3} \times \dfrac{5}{5}} \right)$
$= \dfrac{{ - 12}}{{15}}{\text{ and }}\dfrac{{ - 10}}{{15}}$
So in this case we get only one rational number with denominator 15 between $\dfrac{{ - 12}}{{15}}$ and $\dfrac{{ - 10}}{{15}}$ (namely $\dfrac{{ - 11}}{{15}}$), but we need a list of 5 rational numbers, and hence we'll multiply both numbers
with $\dfrac{3}{3}$, so now we have
$\Rightarrow \left( {\dfrac{{ - 12}}{{15}} \times \dfrac{3}{3}} \right)and\left( {\dfrac{{ - 10}}{{15}} \times \dfrac{3}{3}} \right)$
and hence on doing the multiplication, we have
$\Rightarrow \dfrac{{ - 36}}{{45}}{\text{ and }}\dfrac{{ - 30}}{{45}}$
So 5 rational numbers between $\dfrac{{ - 36}}{{45}}$ and $\dfrac{{ - 30}}{{45}}$ are $\dfrac{{ - 31}}{{45}},\dfrac{{ - 32}}{{45}},\dfrac{{ - 33}}{{45}},\dfrac{{ - 34}}{{45}},\dfrac{{ - 35}}{{45}}$.
Note: In this type of question we have to find rational numbers between two given numbers. For that, we make the denominators of the two numbers the same (and scale them up further if more numbers are needed), and then we can list the rational numbers between them.
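A quick sanity check of the final list, as a small Python sketch using the standard fractions module (not part of the original solution):

```python
from fractions import Fraction

lo, hi = Fraction(-4, 5), Fraction(-2, 3)
candidates = [Fraction(-k, 45) for k in range(31, 36)]     # -31/45, ..., -35/45

print(lo == Fraction(-36, 45), hi == Fraction(-30, 45))    # True True
print(all(lo < c < hi for c in candidates))                # True
```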
|
2023-03-26 16:36:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8291304707527161, "perplexity": 494.8828241684967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945473.69/warc/CC-MAIN-20230326142035-20230326172035-00270.warc.gz"}
|
http://mathhelpforum.com/advanced-statistics/188362-conditional-probability-problem-print.html
|
# Conditional Probability problem
• Sep 19th 2011, 10:52 AM
mike2208
Conditional Probability problem
Let S be a sample space, with A is a subset S and B is a subset S. If P(A) = .6 what can be said about, P(A ∩ B),
when
(a) A and B are mutully exclusive?
(b) A is a subset B
(c) B is a subset A
(d) A′ is a subset B′
(e) A is a subset B′
Not entirely sure how to approach this problem...let me know if the first is on the right track.
a) If A and B are mutually exclusive, P(A ∩ B)= P(A)P(B)
• Sep 19th 2011, 11:25 AM
Plato
Re: Conditional Probability problem
Quote:
Originally Posted by mike2208
Let S be a sample space, with A is a subset S and B is a subset S. If P(A) = .6 what can be said about, P(A ∩ B),
when
(a) A and B are mutully exclusive?
(b) A is a subset B
(c) B is a subset A
(d) A′ is a subset B′
(e) A is a subset B′
Not entirely sure how to approach this problem...let me know if the first is on the right track.
a) If A and B are mutually exclusive, P(A ∩ B)= P(A)P(B)
No, $\mathcal{P}(A\cap B)=0$.
Mutually exclusive means $A\cap B=\emptyset.$
Just as $A\subseteq B$ means $A\cap B= A$
• Sep 19th 2011, 11:29 AM
mike2208
Re: Conditional Probability problem
That's right, if they are mutually exclusive then the probability they intersect is zero.
So would B subset of A mean B ∩ A = B?
• Sep 19th 2011, 11:46 AM
Plato
Re: Conditional Probability problem
Quote:
Originally Posted by mike2208
That's right, if they are mutually exclusive then the probability they intersect is zero.
So would B subset of A mean B ∩ A = B?
Yes.
One cannot do probability if one does not know all basic set operations.
• Sep 19th 2011, 12:08 PM
mike2208
Re: Conditional Probability problem
Ok, I have all but the last.
If A' is subset of B', then B is subset of A, so P(A ∩ B) = P(B).
However, I still have not found any rules on the final question, A is subset of B'.
• Sep 19th 2011, 12:59 PM
Plato
Re: Conditional Probability problem
Quote:
Originally Posted by mike2208
I still have not found any rules on the final question, A is subset of B'.
$A \subseteq B'\; \iff \;A \cap B = \emptyset$
• Sep 19th 2011, 01:09 PM
mike2208
Re: Conditional Probability problem
Oh thanks. I haven't seen that one before. I'm looking for the proof online. It's not listed in my textbook.
I can see that easily from a diagram, but I'm trying to find a list of important set theory laws and identities.
• Sep 19th 2011, 01:13 PM
Plato
Re: Conditional Probability problem
Quote:
Originally Posted by mike2208
Oh thanks. I haven't seen that one before. I'm looking for the proof online. It's not listed in my textbook.
I can see that easily from a diagram, but I'm trying to find a list of important set theory laws and indentities.
$A\subseteq B'$ says "all A's are not B's".
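Collecting the pieces of the thread in one place (this summary is not from the original posters, just the consequences of the set identities discussed above, with $P(A)=0.6$):
(a) $A \cap B = \emptyset$, so $P(A \cap B) = 0$.
(b) $A \subseteq B$ gives $A \cap B = A$, so $P(A \cap B) = P(A) = 0.6$.
(c) $B \subseteq A$ gives $A \cap B = B$, so $P(A \cap B) = P(B)$, which can be anything from $0$ to $0.6$.
(d) $A' \subseteq B'$ is equivalent to $B \subseteq A$, so as in (c), $P(A \cap B) = P(B) \le 0.6$.
(e) $A \subseteq B'$ is equivalent to $A \cap B = \emptyset$, so $P(A \cap B) = 0$.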
|
2016-09-26 05:43:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.860554039478302, "perplexity": 1344.279723572438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660706.30/warc/CC-MAIN-20160924173740-00027-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://socratic.org/questions/59a3b3ff11ef6b6c9de62d07
|
# What is the molar concentration of 68% nitric acid, for which $\rho_{\text{acid}} = 1.41 \cdot \text{g} \cdot \text{mL}^{-1}$?
Aug 28, 2017
$\text{Molarity} \cong 15 \cdot \text{mol} \cdot \text{L}^{-1}$
#### Explanation:
By definition, $\text{Molarity} = \dfrac{\text{moles of solute (mol)}}{\text{volume of solution (L)}}$, and thus it has the units $\text{mol} \cdot \text{L}^{-1}$. So we need to address this quotient from the given data......
We assume a $1 \cdot \text{mL}$ volume of solution, which has a MASS of $1.41 \cdot \text{g}$, of which 68% is nitric acid......And so....
$\text{Molarity} = \dfrac{(1.41 \cdot \text{g} \times 68\%)/(63.01 \cdot \text{g} \cdot \text{mol}^{-1})}{1.00 \times 10^{-3} \cdot \text{L}} = 15.2 \cdot \text{mol} \cdot \text{L}^{-1}$
Are you with me......please note the units of the calculation. We wanted an answer with units of $\text{mol} \cdot \text{L}^{-1}$, and the quotient gave us such units - and this is an excellent check on our calculations.
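The same arithmetic in a few lines of Python (just a sketch of the calculation above; 63.01 g/mol is the molar mass of $HNO_3$):

```python
density_g_per_mL = 1.41     # density of the acid solution
mass_fraction = 0.68        # 68% nitric acid by mass
molar_mass = 63.01          # g/mol for HNO3

moles_per_mL = density_g_per_mL * mass_fraction / molar_mass
molarity = moles_per_mL / 1.00e-3      # moles per litre
print(round(molarity, 1))              # about 15.2
```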
|
2022-01-26 20:45:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9294602274894714, "perplexity": 743.295233923557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304961.89/warc/CC-MAIN-20220126192506-20220126222506-00473.warc.gz"}
|
http://www.ck12.org/geometry/Triangle-Classification/lesson/Triangle-Classification-MSM7/
|
# Triangle Classification
## Categories of triangles based on angle measurements or the number of congruent sides.
Kevin is constructing a model. The model has several pieces that are shaped like triangles. He notices that triangles have a lot of different sizes and variations in appearance. He decides to make a pile for each different classification of triangles. He finds one triangle that has a \begin{align*}110^o\end{align*} angle. How should he classify the triangle?
In this concept, you will learn how to classify triangles.
### Classifying Triangles
The angles in a triangle can vary a lot in size and shape, but they always total \begin{align*}180^\circ\end{align*}. The angles are used to classify triangles. A triangle can either be acute, obtuse or right.
Acute triangles have three angles that measure less than \begin{align*}90^\circ\end{align*}. Below are a few examples of acute triangles.
Notice that each angle in the triangles above is less than \begin{align*}90^\circ\end{align*}, but the total for each triangle is still \begin{align*}180^\circ\end{align*}.
A triangle that has an obtuse angle is classified as an obtuse triangle. This means that one angle in the triangle measures more than \begin{align*}90^\circ\end{align*}. Here are three examples of obtuse triangles.
You can see that obtuse triangles have one wide angle that is greater than \begin{align*}90^\circ\end{align*}. Still, the three angles in obtuse triangles always add up to \begin{align*}180^\circ\end{align*}. Only one angle must be obtuse to make it an obtuse triangle.
The third kind of triangle is a right triangle. Right triangles have one right angle that measures exactly \begin{align*}90^\circ\end{align*}. Often, a small box in the corner tells you when an angle is a right angle. Let’s examine a few right triangles.
Once again, even with a right angle, the three angles still total \begin{align*}180^\circ\end{align*}.
One shortcut is to compare the angles to \begin{align*}90^\circ\end{align*}. If an angle is exactly \begin{align*}90^\circ\end{align*}, the triangle must be a right triangle. If any angle is more than \begin{align*}90^\circ\end{align*}, the triangle must be an obtuse triangle. If there are no right or obtuse angles, the triangle must be an acute triangle.
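This shortcut can be written as a few lines of Python (an illustrative sketch only; the function name is made up):
def classify_by_angles(a, b, c):
    """Classify a triangle from its three angle measures, in degrees."""
    angles = (a, b, c)
    if abs(sum(angles) - 180) > 1e-9:
        raise ValueError("The angle measures of a triangle must total 180 degrees.")
    if any(angle > 90 for angle in angles):
        return "obtuse"
    if any(angle == 90 for angle in angles):
        return "right"
    return "acute"

print(classify_by_angles(30, 60, 90))   # right
print(classify_by_angles(110, 40, 30))  # obtuse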
Let's look at an example.
Label the triangle as acute, obtuse or right.
First, look at the angles and compare the angles to \begin{align*}90^o\end{align*}.
All of the angles are less than \begin{align*}90^o\end{align*}.
Next, list the classifications of triangles.
Triangles can be acute, right or obtuse.
Then, select the classification that fits the criteria.
Acute.
The answer is that the triangle is an acute triangle.
Triangles can also be classified by the lengths of their sides.
A triangle with three equal sides is an equilateral triangle. It doesn’t matter how long the sides are, as long as they are all congruent, or equal. Here are a few examples of equilateral triangles.
An isosceles triangle has two congruent sides. It doesn’t matter which two sides, any two will do. Let’s look at a few examples of isosceles triangles.
The third type of triangle is a scalene triangle. In a scalene triangle, none of the sides are congruent.
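The side-based classification can be sketched the same way (again, purely illustrative):
def classify_by_sides(a, b, c):
    """Classify a triangle from its three side lengths."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

print(classify_by_sides(4.5, 4.5, 4.5))  # equilateral
print(classify_by_sides(7, 7, 5))        # isosceles
print(classify_by_sides(3, 5, 7))        # scalene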
Let's look at an example.
Classify the triangle as equilateral, isosceles, or scalene.
First, examine the lengths of the sides to see if any sides are congruent.
Two sides are 7 meters long, but the third side is shorter.
Then, classify the triangle.
This triangle is an isosceles triangle.
### Examples
#### Example 1
Earlier, you were given a problem about Kevin's model.
He has one triangle that has a \begin{align*}110^o\end{align*} angle. Classify the triangle as acute, obtuse or right.
First, look at the given angle and compare it to \begin{align*}90^o\end{align*}.
The angle is larger than \begin{align*}90^o\end{align*}.
Next, list the classifications of triangles.
Triangles can be acute, right or obtuse.
Then, select the classification that fits the criteria.
Obtuse.
The answer is that the triangle is an obtuse triangle.
#### Example 2
Classify the triangle as equilateral, isosceles or scalene.
First, examine the lengths of the sides to see if any sides are congruent.
All three sides equal 4.5 inches.
Then, determine if the triangle is equilateral, isosceles or scalene.
Equilateral.
The answer is that the triangle is equilateral.
#### Example 3
Use the angles to classify the triangle.
First, look at the angles and compare the angles to \begin{align*}90^o\end{align*}.
One of the angles is equal to \begin{align*}90^o\end{align*}.
Next, list the classifications of triangles.
Triangles can be acute, right or obtuse.
Then, select the classification that fits the criteria.
Right.
The answer is that the triangle is a right triangle.
#### Example 4
Use the angles to classify the triangle.
First, look at the angles and compare the angles to \begin{align*}90^o\end{align*}.
None of the angles are \begin{align*}90^o\end{align*}. One of the angles is larger than \begin{align*}90^o\end{align*}.
Next, list the classifications of triangles.
Triangles can be acute, right or obtuse.
Then, select the classification that fits the criteria.
Obtuse.
The answer is that the triangle is an obtuse triangle.
#### Example 5
Identify the triangle as equilateral, isosceles or scalene.
First, examine the lengths of the sides to see if any sides are congruent.
None of the sides are congruent.
Then, determine if the triangle is equilateral, isosceles or scalene.
Scalene.
The answer is that the triangle is scalene.
### Review
Find the measure of angle \begin{align*}H\end{align*} in each figure below.
Identify each triangle as right, acute, or obtuse.
Identify each triangle as equilateral, isosceles, or scalene.
Use what you have learned to answer each question.
1. True or false. An acute triangle has three sides that are all different lengths.
2. True or false. A scalene triangle can be an acute triangle as well.
3. True or false. An isosceles triangle can also be a right triangle.
4. True or false. An equilateral triangle has three equal sides.
5. True or false. An obtuse triangle can have multiple obtuse angles.
6. True or false. A scalene triangle has three angles less than 90 degrees.
7. True or false. A triangle with a \begin{align*}100^\circ\end{align*} angle must be an obtuse triangle.
8. True or false. The angles of an equilateral triangle are also equal in measure.
### Vocabulary
Acute Triangle
An acute triangle has three angles that each measure less than 90 degrees.
Congruent
Congruent figures are identical in size, shape and measure.
Exterior angles
An exterior angle is the angle formed by one side of a polygon and the extension of the adjacent side.
Interior angles
Interior angles are the angles inside a figure.
Isosceles Triangle
An isosceles triangle is a triangle in which exactly two sides are the same length.
Obtuse Triangle
An obtuse triangle is a triangle with one angle that is greater than 90 degrees.
Right Angle
A right angle is an angle equal to 90 degrees.
Scalene Triangle
A scalene triangle is a triangle in which all three sides are different lengths.
Triangle
A triangle is a polygon with three sides and three angles.
Equilateral
A polygon is equilateral if all of its sides are the same length.
Equiangular
A polygon is equiangular if all angles are the same measure.
|
2016-08-30 09:38:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 25, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3888184726238251, "perplexity": 1306.3079170428102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982974985.86/warc/CC-MAIN-20160823200934-00297-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://kb.osu.edu/handle/1811/7864?show=full
|
Creators: Stoichefe, B. P.; Flubacher, P.; Leadbetter, A. J.; Morrison, J. A.
Date accessioned/available: 2006-06-15
Date issued: 1959
Identifier: 1959-H-7; http://hdl.handle.net/1811/7864
Author institution: Divisions of Pure Physics and Chemistry, National Research Council
Title: RAMAN AND BRILLOUIN SPECTRA OF VITREOUS SILICA
Publisher: Ohio State University
Type: article
Format: 122877 bytes, image/jpeg
Language: English
Abstract: The heat capacity of vitreous silica in the region $T < 20^{\circ} K$ is very much larger than that observed for simple crystals. In order to interpret this unusual behaviour some spectroscopic studies have been made. The Brillouin spectrum excited by $\lambda 2536.5$ of $Hg^{198}$ was photographed in the third order of a 35-ft. grating. Lines due to scattering by longitudinal waves were observed, together with much weaker lines attributed to transverse waves. Their frequency shifts from the exciting line are 1.68 and $1.04\ cm^{-1}$ respectively. The shifts give directly the frequencies of the Debye waves producing the scattering, namely $5.03 \times 10^{10}\ sec^{-1}$ and $3.12 \times 10^{10}\ sec^{-1}$. Their velocities are in excellent agreement with the values determined by acoustic methods at a frequency of $10^{7}\ sec^{-1}$. These results show that dispersion of lattice waves in vitreous silica is not significant up to frequencies of about $5 \times 10^{10}\ sec^{-1}$. The Raman spectrum excited by Hg 2537 was photographed at low dispersion and in the fourth order of a 21-ft. grating. Its most prominent feature is an intense continuum starting below $8\ cm^{-1}$ and extending to about $560\ cm^{-1}$ where it has a sharp cut-off. These results give direct evidence for low frequency optical modes whose presence can account for the observed heat capacity. The origin of the spectrum is still a matter for speculation.
|
2021-04-23 08:55:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4244838058948517, "perplexity": 3844.6725555232947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039568689.89/warc/CC-MAIN-20210423070953-20210423100953-00109.warc.gz"}
|
https://infoscience.epfl.ch/record/177908
|
Infoscience
Journal article
# First Evidence of Direct CP Violation in Charmless Two-Body Decays of $B_s^0$ Mesons
Using a data sample corresponding to an integrated luminosity of $0.35\ \mathrm{fb}^{-1}$ collected by LHCb in 2011, we report the first evidence of CP violation in the decays of $B_s^0$ mesons to $K^{\pm}\pi^{\mp}$ pairs, $A_{CP}(B_s^0 \to K\pi) = 0.27 \pm 0.08\,(\mathrm{stat}) \pm 0.02\,(\mathrm{syst})$, with a significance of $3.3\sigma$. Furthermore, we report the most precise measurement of CP violation in the decays of $B^0$ mesons to $K^{\pm}\pi^{\mp}$ pairs, $A_{CP}(B^0 \to K\pi) = -0.088 \pm 0.011\,(\mathrm{stat}) \pm 0.008\,(\mathrm{syst})$, with a significance exceeding $6\sigma$.
|
2017-09-21 09:16:36
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8739784955978394, "perplexity": 5785.919496291385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687711.44/warc/CC-MAIN-20170921082205-20170921102205-00217.warc.gz"}
|
http://www.nag.com/numeric/CL/nagdoc_cl23/html/S/s14agc.html
|
# NAG Library Function Document: nag_complex_log_gamma (s14agc)
## 1 Purpose
nag_complex_log_gamma (s14agc) returns the value of the logarithm of the gamma function $\mathrm{ln}\Gamma \left(z\right)$ for complex $z$.
## 2 Specification
#include <nag.h>
#include <nags.h>
Complex nag_complex_log_gamma (Complex z, NagError *fail)
## 3 Description
nag_complex_log_gamma (s14agc) evaluates an approximation to the logarithm of the gamma function $\mathrm{ln}\Gamma \left(z\right)$ defined for $\mathrm{Re}\left(z\right)>0$ by
$$\ln\Gamma(z) = \ln\int_0^\infty e^{-t}\,t^{z-1}\,dt$$
where $z=x+iy$ is complex. It is extended to the rest of the complex plane by analytic continuation unless $y=0$, in which case $z$ is real and each of the points $z=0,-1,-2,\dots \text{}$ is a singularity and a branch point.
nag_complex_log_gamma (s14agc) is based on the method proposed by Kölbig (1972) in which the value of $\mathrm{ln}\Gamma \left(z\right)$ is computed in the different regions of the $z$ plane by means of the formulae
$$\ln\Gamma(z) = \begin{cases} \left(z-\tfrac{1}{2}\right)\ln z - z + \tfrac{1}{2}\ln 2\pi + z\displaystyle\sum_{k=1}^{K}\frac{B_{2k}}{2k(2k-1)}\,z^{-2k} + R_K(z), & x \ge x_0 \ge 0, \\[1ex] \ln\Gamma(z+n) - \ln\displaystyle\prod_{\nu=0}^{n-1}(z+\nu), & x_0 > x \ge 0, \\[1ex] \ln\pi - \ln\Gamma(1-z) - \ln\sin\pi z, & x < 0, \end{cases}$$
where $n=\left[{x}_{0}\right]-\left[x\right]$, $\left\{{B}_{2k}\right\}$ are Bernoulli numbers (see Abramowitz and Stegun (1972)) and $\left[x\right]$ is the largest integer $\text{}\le x$. Note that care is taken to ensure that the imaginary part is computed correctly, and not merely modulo $2\pi$.
The function uses the values $K=10$ and ${x}_{0}=7$. The remainder term ${R}_{K}\left(z\right)$ is discussed in Section 7.
To obtain the value of $\mathrm{ln}\Gamma \left(z\right)$ when $z$ is real and positive, nag_log_gamma (s14abc) can be used.
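For readers who just want to see the three-region scheme above in action, here is a rough Python sketch. It is not the NAG implementation: it uses $K=10$ and ${x}_{0}=7$, hard-codes the Bernoulli numbers $B_2,\dots,B_{20}$, and makes no attempt to track the imaginary part across branch cuts as carefully as the library does.
import cmath

# Bernoulli numbers B_2, B_4, ..., B_20
B = [1/6, -1/30, 1/42, -1/30, 5/66, -691/2730, 7/6,
     -3617/510, 43867/798, -174611/330]
K, X0 = 10, 7.0

def log_gamma(z: complex) -> complex:
    x = z.real
    if x < 0:
        # reflection: ln Gamma(z) = ln(pi) - ln Gamma(1 - z) - ln sin(pi z)
        return cmath.log(cmath.pi) - log_gamma(1 - z) - cmath.log(cmath.sin(cmath.pi * z))
    if x < X0:
        # recurrence: shift the argument until its real part is at least x0
        n = int(X0) - int(x)   # x >= 0 here, so int() truncation equals the floor
        return log_gamma(z + n) - sum(cmath.log(z + v) for v in range(n))
    # asymptotic series, used for Re(z) >= x0
    s = sum(B[k - 1] / (2 * k * (2 * k - 1)) * z ** (-2 * k) for k in range(1, K + 1))
    return (z - 0.5) * cmath.log(z) - z + 0.5 * cmath.log(2 * cmath.pi) + z * s

print(log_gamma(complex(-1.5, 2.5)))   # the argument used in the Section 9 example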
## 4 References
Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications
Kölbig K S (1972) Programs for computing the logarithm of the gamma function, and the digamma function, for complex arguments Comp. Phys. Comm. 4 221–226
## 5 Arguments
1: z (Complex) Input
On entry: the argument $z$ of the function.
Constraint: ${\mathbf{z}}\mathbf{.}\mathbf{re}$ must not be ‘too close’ (see Section 6) to a non-positive integer when ${\mathbf{z}}\mathbf{.}\mathbf{im}=0.0$.
2: fail (NagError *) Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
NE_TOO_CLOSE_INTEGER
On entry, ${\mathbf{z}}\mathbf{.}\mathbf{re}$ is ‘too close’ to a non-positive integer when ${\mathbf{z}}\mathbf{.}\mathbf{im}=0.0$: ${\mathbf{z}}\mathbf{.}\mathbf{re}=〈\mathit{\text{value}}〉$, $\mathrm{nint}\left({\mathbf{z}}\mathbf{.}\mathbf{re}\right)=〈\mathit{\text{value}}〉$.
## 7 Accuracy
The remainder term ${R}_{K}\left(z\right)$ satisfies the following error bound:
$$\left|R_K(z)\right| \le \frac{\left|B_{2K}\right|}{2K-1}\,\left|z\right|^{1-2K} \le \frac{\left|B_{2K}\right|}{2K-1}\,x^{1-2K} \quad \text{if } x \ge 0.$$
Thus $\left|{R}_{10}\left(7\right)\right|<2.5×{10}^{-15}$ and hence in theory the function is capable of achieving an accuracy of approximately $15$ significant digits.
## 8 Further Comments
None.
## 9 Example
This example evaluates the logarithm of the gamma function $\mathrm{ln}\Gamma \left(z\right)$ at $z=-1.5+2.5i$, and prints the results.
### 9.1 Program Text
Program Text (s14agce.c)
### 9.2 Program Data
Program Data (s14agce.d)
### 9.3 Program Results
Program Results (s14agce.r)
|
2016-06-29 03:17:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 35, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987741708755493, "perplexity": 1217.969324198486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00086-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/1361012/if-lambda-n-n-1-infty-is-a-bounded-sequence-then-there-is-a-bounded-lin
|
# If $(\lambda_n)_{n=1}^\infty$ is a bounded sequence, then there is a bounded linear operator $A$ on a Hilbert space $H$ such that $Ae_n=\lambda_n e_n$
If $(\lambda_n)_{n=1}^\infty$ is a bounded sequence, then there is a bounded linear operator $A$ on a Hilbert space $H$ such that $Ae_n=\lambda_n e_n$ for all $n\in \mathbb{N}$.
Let $\{e_n\}$ be a complete orthonormal sequence in $H$ and $\lambda_n\in \mathbb{C}$.
I've proved the converse of this, but I'm stuck on this direction now. Any solutions or hints are greatly appreciated.
If $\{e_n\}$ is an orthonormal basis for a Hilbert space $H$, then $$H = \left\{ \sum_{n=1}^\infty \alpha_n e_n : \sum_{n=1}^\infty |\alpha_n|^2 < \infty \right\}.$$
For $f=\sum \alpha_n e_n \in H$ we formally define the operation: $$Af = \sum_{n=1}^\infty \lambda_n \alpha_n e_n.$$
This operator is linear, and it is a well defined map from $H \to H$ provided $$\sum_{n=1}^\infty |\lambda_n|^2 |\alpha_n|^2 <\infty.$$
However, we were already told that the sequence is bounded, let's say by $M$. Thus $$\sum_{n=1}^\infty |\lambda_n|^2 |\alpha_n|^2 \le M^2 \sum_{n=1}^\infty |\alpha_n|^2 = M^2\|f\|^2.$$
We conclude that $A:H \to H$ is a well defined linear map, and $\|A\| \le M$, so it is bounded.
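For a quick finite-dimensional sanity check (a sketch truncating the sequence to 200 terms; not part of the proof), the operator norm of the diagonal operator coincides with the supremum of $|\lambda_n|$:
import numpy as np

rng = np.random.default_rng(0)
lam = rng.uniform(-3, 3, size=200)   # a bounded "sequence" (lambda_n), truncated
A = np.diag(lam)                     # the diagonal operator on the truncated basis

print(np.linalg.norm(A, 2))          # operator (spectral) norm of A
print(np.abs(lam).max())             # sup |lambda_n|; the two numbers coincide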
|
2019-06-26 12:37:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9882474541664124, "perplexity": 25.31727242943251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000306.84/warc/CC-MAIN-20190626114215-20190626140215-00323.warc.gz"}
|
https://sagrista.info/blog/2021/procedural-planetary-surfaces/
|
# Procedural generation of planetary surfaces
Generating realistic planet surfaces and moons
I have recently implemented a procedural generation system for planetary surfaces into Gaia Sky. In this post, I ponder different methods and techniques for procedurally generating planets that look just right and explain the process behind it in some detail. This is a rather technical post, so be warned. As a teaser, the following image shows a planet generated using the processes described in this article.
## All is noise
We start with a noise algorithm. A noise algorithm is essentially a function $$f(\vec{x}) = v$$ that returns a pseudo-random value $$v$$ for each coordinate $$\vec{x}$$. The values are not totally random, as they are influenced by where the function is sampled. Obviously, a pure RNG (random number generator) won’t cut it, as the noise we need to generate mountains and valleys and seas is not totally random. There needs to be some structure to it for it to successfully approximate reality. Single values can’t live in isolation, but must depend on their surroundings. In other words, we need smooth gradients. There are very many noise algorithms that are up to the challenge to choose from, but all essentially fall into one of these two categories:
1. value noise—based on the interpolation of random values assigned to a lattice of points.
2. gradient noise—based on the interpolation of random gradients assigned to a lattice of points.
They are equally valid, but gradient noise is usually more appropriate and visually appealing for procedural generation. One of the most common realizations of gradient noise is Perlin noise, developed by Ken Perlin. Let's have a look at some noise generated with this algorithm.
It looks alright, but it may not be enough for our purposes as it is now. Let's interpret each pixel in that image as the elevation value at the coordinates of that pixel. This gives us an elevation map. Darker pixels have lower elevations, while brighter pixels have higher elevation. We can then map colors to elevation ranges. Knowing that noise values are in $$[0,1]$$, we can apply the following mapping:
\begin{align} [0.0, 0.1] &\mapsto blue \\ [0.1, 0.15] &\mapsto yellow \\ [0.15, 0.75] &\mapsto green \\ [0.75, 0.85] &\mapsto gray \\ [0.85, 1.0] &\mapsto white \\ \end{align}
Blackish areas are assigned blue, for water. Areas immediately next to water are yellow, for beaches. Mid-range areas are green, for land, and the brightest areas are gray and white, for rock and snow. That gives us the following image:
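Written out as a sketch (in Python for brevity; the function name is made up), the mapping above is just a chain of thresholds:
def elevation_to_color(v):
    """Map a noise value in [0, 1] to a colour using the ranges listed above."""
    if v <= 0.10:
        return "blue"     # water
    if v <= 0.15:
        return "yellow"   # beach
    if v <= 0.75:
        return "green"    # land
    if v <= 0.85:
        return "gray"     # rock
    return "white"        # snow

print([elevation_to_color(v) for v in (0.05, 0.12, 0.4, 0.8, 0.95)])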
Again, it is alright, but it is not super good. It looks plain and not very natural. We can try with other noise algorithms.
Simplex noise is an evolution of Perlin noise with fewer artifacts. Its open implementation is multi-dimensional and it is also quite fast. The others are also good if used properly.
We can also generate the normal maps from the elevation data. Doing so involves computing the horizontal and vertical gradients for every coordinate. The normal map encodes the direction of the surface normal vector at each point, and works a little better at visualizing the gradients. Additionally, it will come in handy for the shading.
At this point, we need something else. The noise looks too simple and plain. In nature, we have repeating features at different scales, but here we don't see this. These repeating features are called fractals, and we can also create them with the noise algorithms that we already know. The trick is re-sampling the noise function several times with higher frequencies and lower amplitudes. In the context of noise, the different levels are called octaves. The first octave is the regular noise map we have already seen. The second octave would be computed by multiplying the frequency of the first one by a number (called lacunarity) and multiplying its amplitude by another number (called persistence), typically a fraction of one. The third would apply the same principle to the parameters of the second, and so on.
const int N_OCTAVES = 5;
// Initial values
float frequency = 2.5;
float amplitude = 0.5;
// Parameters
float lacunarity = 2.0;
float persistence = 0.5;
// The noise value
float n = 0;
// x and y are the current coordinates
for (int octave = 0; octave < N_OCTAVES; octave++) {
n += amplitude * noise(frequency * x, frequency * y);
frequency *= lacunarity;
amplitude *= persistence;
}
If we run this code with simplex noise, we get the following.
If you zoom into the left image, you will see that there are additional levels of detail at smaller scales compared to the regular simplex noise shown before. This is very good, as it mimics nature much more closely. We are now ready to start generating surfaces.
## Surface generation
From now on we’ll interpret the generated noise as the terrain elevation and map it to a sphere. For instance, the noise types we saw before look as follows when mapped to a sphere.
They all are reasonable, except white. We’ll proceed with simplex from now on.
### Colors
In the previous sections we have only mapped colors to elevation ranges, but this produces very little variety. We can generate an additional noise map with the same parameters and interpret it as humidity data, that we can combine with the elevation to produce a color. The elevation data is a 2D array containing the elevation value in $$[0,1]$$ at each coordinate. The humidity data is the same but it contains the humidity value. We use the humidity, then, together with the elevation, to determine the color using a look-up table. This allows us to color different regions at the same elevation differently. We map the humidity value to the $$x$$ coordinate and the elevation to $$y$$. Both coordinates are normalized to $$[0,1]$$.
Additionally, since the look-up table is just an image in disk, we can have many of them and use them in different situations, or even randomize which one is picked up. A simple, discrete look-up table would look like this. From left to right it maps less humidity (hence the yellows, to create deserts, and grays at the top, for rocky mountains) to more humidity (as we go right it gets greener, and the mountain tops get white snow).
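A minimal sketch of the look-up itself (the lut array, its orientation and the variable names here are assumptions for illustration, not Gaia Sky code):
def surface_color(lut, humidity, elevation):
    """lut is the look-up table image as an array of shape (height, width, 3);
    humidity and elevation are noise values normalised to [0, 1]."""
    height, width, _ = lut.shape
    x = min(int(humidity * (width - 1)), width - 1)     # humidity -> column
    y = min(int(elevation * (height - 1)), height - 1)  # elevation -> row (orientation may need flipping)
    return lut[y, x]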
If we use this look-up table with the simplex noise ball above, we get the following.
In this image, the noise is mapped to $$[0,1]$$. We can try extending it to negative values to add some water, as water is mapped to negatives. If we use $$[-1,1]$$, we get the following.
That is better. But the noise is too high frequency. We can lower it a lot to get larger land masses. We’ll use the higher octaves to add extra details. For now, let’s lower the frequency a lot.
Now it is time to smooth things out. We said we can use any look-up table, so how about using one with smooth gradients:
And let’s apply it to the last planet with the low frequency.
Finally, we can enable additional octaves to produce detail at smaller scales. This step is crucial and is what really sells it. Have a look at this:
Looks fine, right? In Gaia Sky we can add an atmosphere (computes atmospheric scattering of light in a shader) and add a cloud layer to have this final look.
There are some tricks we can use to add some variety to the process.
For example, we can hue-shift the look-up table by a value (in $$[0^{\circ}, 360^{\circ}]$$) in order to produce additional colors. The shift must happen in the HSL color space, so we convert from RGB to HSL, modify the H (hue) value, and convert it back to RGB. Once the shift is established, we generate the diffuse texture by sampling the look-up table and shifting the hue.
We can also generate a specular texture where there is water. The specular texture is generated by assigning all heights less or equal to zero to a full specular value. All the planets we have seen so far already apply this specular data.
### Seamless (tilable) noise
In this article we have used a little trick that we have not yet talked about. Usually, noise sampled directly is not tileable, but the images in this article do not have seams. If sampled with $$x$$ and $$y$$ directly, the features do not repeat. In the case of one dimension, usually one would sample the noise using a coordinate for the only dimension available, $$x$$.
However, if we go one dimension higher, 2D, and sample the noise along a circumference embedded in this two-dimensional space, we get seamless, tileable noise.
We can apply this same principle with any dimension $$d$$ by sampling in $$d+1$$. Since we need to create spherical 2D maps (our aim is to produce textures to apply to UV spheres), we do not sample the noise algorithm with the $$x$$ and $$y$$ coordinates of the pixel in image space. That would produce higher frequencies at the poles and lower around the equator. Additionally, the noise would contain seams, as it does not tile by default. Instead, we sample the 2D surface of a sphere of radius 1 embedded in a 3D volume, so we sample 3D noise. To do so, we iterate over the spherical coordinates $$\varphi$$ and $$\theta$$, and transform them to cartesian coordinates to sample the noise:
\begin{align} x &= \cos \varphi \sin \theta \nonumber \\ y &= \sin \varphi \sin \theta \nonumber \\ z &= \cos \varphi \nonumber \end{align}
The process is outlined in this code snippet. If the final map resolution is $$N \times M$$, we use N $$\theta$$ steps and M $$\varphi$$ steps.
// Map is NxM
for (float phi = -PI / 2; phi < PI / 2; phi += PI / M) {
    for (float theta = 0; theta < 2 * PI; theta += 2 * PI / N) {
        // phi is the latitude, theta the longitude of the current sample
        n = noise(cos(phi) * cos(theta), // x
                  cos(phi) * sin(theta), // y
                  sin(phi));             // z
        // store n as the elevation value for this (theta, phi) pixel
    }
}
### Noise parametrization
We carry out the generation by sampling configurable noise algorithms (Perlin, Open Simplex, etc.) at different levels of detail, or octaves. In Gaia Sky, we have some important noise parameters to adjust:
• seed—a number which is used as a seed for the noise RNG.
• type—the base noise type. Can be any algorithm, like gradient (Perlin) noise1, simplex2, value3, gradval noise4 or white5. For examples, see here.
• fractal type—the algorithm used to modify the noise in each octave. It determines the persistence (how the amplitude is modified) as well as the gain and the offset. Can be billow, deCarpenterSwiss, fractal brownian motion (FBM), hybrid multi, multi or ridge multi. For examples, see here.
• scale—determines the scale of the sampling volume. The noise is sampled on the 2D surface of a sphere embedded in a 3D volume to make it seamless. The scale stretches each of the dimensions of this sampling volume.
• octaves—the number of levels of detail. Each octave reduces the amplitude and increases the frequency of the noise by using the lacunarity parameter.
• frequency—the initial frequency of the first octave. Determines how much detail the noise has.
• lacunarity—determines how much detail is added or removed at each octave by modifying the frequency.
• range—the output of the noise generation stage is in $$[0,1]$$ and gets mapped to the range specified in this parameter. Water gets mapped to negative values, so adding a range of $$[-1,1]$$ will get roughly half of the surface submerged in water.
• power—power function exponent to apply to the output of the range stage.
The final stage of the procedural noise generation clamps the output to $$[0,1]$$ again, so that all negative values are mapped to 0, and all values greater than 1 are clamped to 1. This means that water is mapped to 0 instead of negative values, but that doesn't change anything.
Finally, we can also generate a normal map from the height map by determining elevation gradients in both $$x$$ and $$y$$. We use the normal map only when tessellation is unavailable or disabled. Otherwise it is not generated at all. The generation of the normal map is out of the scope of this article.
## Cloud layer generation
We can generate the clouds with the same algorithm and the same parameters as the surface elevation. Then, we can use an additional color parameter to color them. For the clouds to look better one can set a larger $$z$$ scale value compared to $$x$$ and $$y$$, so that the clouds are stretched in the directions perpendicular to the rotation axis of the planet.
## Putting it all together
In this article we have shown a bird’s eye view of how to procedurally generate convincing planetary surfaces. As we said, in Gaia Sky we generate spherical maps which are then mapped to UV spheres, but we could as well produce cubemap faces and use cubemaps to do the texturing. Below you can see an example of maps produced for a planet by Gaia Sky.
Additionally, we have added a separate step to generate a cloud layer, and we can also randomize the atmospheric scattering parameters to have a fully procedural planet. We have implemented a function which randomizes all parameters within some bounds. Hitting the Randomize all button produces some neat results:
More information on the topic can be found in the official documentation of Gaia Sky.
1. A hybrid consisting of the sum of gradient and value noise. ↩︎
|
2022-08-18 00:44:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8643954396247864, "perplexity": 857.6999641094923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573145.32/warc/CC-MAIN-20220818003501-20220818033501-00532.warc.gz"}
|
https://projecteuclid.org/euclid.aos/1017939239
|
## The Annals of Statistics
### Large sample Bayesian analysis for ${\rm Geo}/G/1$ discrete-time queueing models
Pier Luigi Conti
#### Abstract
In this paper, a nonparametric Bayesian analysis of queueing models with geometric input and general service time is performed. In particular, statistical inference for the probability generating function (p.g.f.) of the equilibrium waiting time distribution is considered. The consistency of the posterior distribution for such a p.g.f., as well as the weak convergence to a Gaussian process of a suitable rescaling, are proved. As by-products, results on statistical inference for queueing characteristics are also obtained. Finally, the problem of estimating the probability of a long delay is considered.
#### Article information
Source
Ann. Statist., Volume 27, Number 6 (1999), 1785-1807.
Dates
First available in Project Euclid: 4 April 2002
https://projecteuclid.org/euclid.aos/1017939239
Digital Object Identifier
doi:10.1214/aos/1017939239
Mathematical Reviews number (MathSciNet)
MR1765617
Zentralblatt MATH identifier
0963.62092
Subjects
Primary: 62G05: Estimation 62G15: Tolerance and confidence regions
Secondary: 62N99: None of the above, but in this section
#### Citation
Conti, Pier Luigi. Large sample Bayesian analysis for ${\rm Geo}/G/1$ discrete-time queueing models. Ann. Statist. 27 (1999), no. 6, 1785--1807. doi:10.1214/aos/1017939239. https://projecteuclid.org/euclid.aos/1017939239
|
2019-11-21 00:14:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5853544473648071, "perplexity": 7304.9547653995505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670643.58/warc/CC-MAIN-20191121000300-20191121024300-00374.warc.gz"}
|
https://economics.stackexchange.com/questions/12098/obj-function-yielding-independent-goods-demand-functions
|
# Obj function yielding independent goods demand functions
I know that if the objective function (aka utility) is homothetic, demand functions will be linear in income. So for a homothetic demand function to give goods independent of prices other than their own, one has to have a Cobb-Douglas function, as it also has to be homogeneous of degree zero.
My question: can someone supply an example of a class of preferences yielding price independent demand functions $x_i = g(\frac{y}{p_i})$? (no prices nor income in preferences, just quantities)
[Challenge to think, not to give long proofs please. Just sketch proofs with the important steps if feel it's important]
• If I have understood your question correctly you might look at demand systems characterized by price independent generalized linearity (introduced by Muellbauer) such as PIGLOG – DornerA Jun 20 '16 at 20:21
• Using the primal consumer's problem (no prices nor income in preferences, just quantities). Will update question. – user_newbie10 Jun 22 '16 at 11:40
So if we take a simple example with 2 goods and consider the consumer's problem:
$L=u(x,y)+\lambda(I-p_xx-p_yy)$
This yields the ratio of FOC: $u_1/u_2=p_x/p_y$
Now plugging these guys into the budget:
$I=p_xu_1^{-1}(u_2p_x/p_y)+p_yy$
A sufficient condition is that $u_1^{-1}$ be homogeneous of degree $-1$ (hod $-1$) to make $p_x$ drop out. But I don't think it's necessary.
Now I'm not exactly sure what that means (the inverse of the marginal utility of a good is homogeneous of degree -1), but a great thing to think about.
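For a concrete check, take Cobb-Douglas preferences $u(x,y)=x^a y^b$ with $a,b>0$. The FOC ratio gives $\frac{ay}{bx}=\frac{p_x}{p_y}$, i.e. $p_y y=\frac{b}{a}p_x x$. Substituting into the budget, $I=p_x x\left(1+\frac{b}{a}\right)$, so $$x^*=\frac{a}{a+b}\cdot\frac{I}{p_x},\qquad y^*=\frac{b}{a+b}\cdot\frac{I}{p_y},$$ and each demand indeed has the form $x_i=g(I/p_i)$ with $g$ linear, depending only on income and the good's own price.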
|
2020-10-30 11:13:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8161519765853882, "perplexity": 1031.1868853608612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107910204.90/warc/CC-MAIN-20201030093118-20201030123118-00064.warc.gz"}
|
http://clay6.com/qa/28527/which-of-the-following-will-be-a-best-hydride-donor-
|
Which of the following will be the best hydride donor?
Options (a), (b), (c), (d) (the candidate structures were given as images).
OMe is an electron-releasing ($e^\ominus$-donating) group; it increases electron density at the ortho and para positions, so it stabilizes the carbocation that forms when the hydride is donated.
Hence (a) is the correct answer.
|
2017-03-29 20:56:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4353369176387787, "perplexity": 5265.162277537065}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191396.90/warc/CC-MAIN-20170322212951-00504-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://astarmathsandphysics.com/ib-physics-notes/thermal-physics/1467-ideal-gas-processes.html?tmpl=component&print=1&page=
|
## Ideal Gas Processes
An ideal gas can undergo a number of changes, but there are four very important types of change, each obeying the first law of thermodynamics ($Q = \Delta U + W$, where $Q$ is the heat supplied to the gas, $\Delta U$ the change in its internal energy and $W$ the work done by the gas). We can represent each type of change on a pV diagram.
1. Isochoric or isovolumetric. The gas has a constant volume.
2. Isobaric. The gas has a constant pressure.
3. Isothermal. The temperature is kept constant. Since $U$ is a function of temperature only, $\Delta U = 0$ and so $Q = W$.
4. Adiabatic. No heat exchange takes place between the gas and its surroundings. Any work done by the gas therefore means a decrease in its thermal energy. Rapid expansions or compressions are approximately adiabatic, because these allow little time for heat transfer.
If the direction of any of the above processes is reversed, the signs of $Q$ and $W$ will change.
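As a quick numerical sketch of the isothermal case (the values below are chosen only for illustration), the work done by the gas in a reversible isothermal expansion is $W = nRT\ln(V_2/V_1)$, and the heat absorbed equals this work:
import math

n, R, T = 1.0, 8.314, 300.0   # mol, J/(mol K), K
V1, V2 = 1.0e-3, 2.0e-3       # initial and final volumes in m^3

W = n * R * T * math.log(V2 / V1)   # work done by the gas
print(round(W), "J")                # ~1729 J; since the process is isothermal, Q = W as well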
|
2017-10-24 04:17:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8939502239227295, "perplexity": 1068.4074351618722}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828134.97/warc/CC-MAIN-20171024033919-20171024053919-00388.warc.gz"}
|
https://en.wikipedia.org/wiki/User:Virginia-American/Sandbox/Fundamental_theorem_of_arithmetic
|
# User:Virginia-American/Sandbox/Fundamental theorem of arithmetic
In number theory, the fundamental theorem of arithmetic (also called the unique factorization theorem or the unique-prime-factorization theorem) states (existence) that every integer greater than 1 is either prime itself or is the product of prime numbers, and (uniqueness) that, although the order of the primes in the second case is arbitrary, the primes themselves are not.[1][2][3] For example,
${\displaystyle 1200=2^{4}\times 3^{1}\times 5^{2}=3\times 2\times 2\times 2\times 2\times 5\times 5=5\times 2\times 3\times 2\times 5\times 2\times 2=\cdots {\text{ etc.}}\!}$
The content of the theorem is that in any representation of 1200 as a product of primes, there will always be four 2s, one 3, and two 5s.
## Classification of integers
To clarify the role of the integer 1, and to prepare for more general settings than the integers, it is useful to classify the integers using terminology from abstract algebra, specifically from algebraic number theory and ring theory:[4]
zero: 0
positive integers: 1, 2, 3, ...
negative integers: ..., −3, −2, −1
units: −1 and 1
prime numbers: ..., −3, −2, 2, 3, 5, ...
composite numbers: ..., −6, −4, 4, 6, 8, 9, ...
Using it we have:
"A nonzero integer is either positive or negative."
"A negative integer is the unit −1 times a positive integer."
"A positive integer is either the unit 1, a positive prime, or the product of positive primes.
Up to the order of the factors, this product is uniquely determined by the integer."
To avoid constantly repeating the special cases, the definition of "product" can be slightly expanded to include as "products" the two cases in which there is no actual multiplication: the empty product with no factors and the "product" with only one factor. Under this convention the theorem reads:
"Every positive integer is the product of positive primes, and, except for the order of the factors,
in one way only. The integer 1 is the empty product of no primes."
## History
Book VII, propositions 30 and 32 of Euclid's Elements is essentially the statement and proof of the fundamental theorem. Article 16 of Gauss' Disquisitiones Arithmeticae is an early modern statement and proof employing modular arithmetic.
## Applications
### Canonical representation of a positive integer
Every positive integer n can be represented in exactly one way as a product of prime powers:
${\displaystyle n=p_{1}^{\alpha _{1}}p_{2}^{\alpha _{2}}\cdots p_{k}^{\alpha _{k}}=\prod _{i=1}^{k}p_{i}^{\alpha _{i}}}$
where p1 < p2 < ... < pk are primes and the αi are positive integers.
This representation is called the canonical representation[5] of n, or the standard form[6] of n.
E.g., 12 = 22·3, 1296 = 24·34, and 220 = 22·5·11.
Note that factors p0 = 1 may be inserted without changing the value of n. In fact, any integer can be uniquely represented as an infinite product taken over all the prime numbers,
${\displaystyle n=2^{n_{2}}3^{n_{3}}5^{n_{5}}7^{n_{7}}\cdots =\prod p_{i}^{n_{p_{i}}}.}$
where all but a finite number of the ni are zero.
### Arithmetic operations
This representation is convenient for expressions like these for the product, gcd, and lcm:
${\displaystyle a\cdot b=2^{a_{2}+b_{2}}\,3^{a_{3}+b_{3}}\,5^{a_{5}+b_{5}}\,7^{a_{7}+b_{7}}\cdots =\prod p_{i}^{a_{p_{i}}+b_{p_{i}}},}$
${\displaystyle \gcd(a,b)=2^{\min(a_{2},b_{2})}\,3^{\min(a_{3},b_{3})}\,5^{\min(a_{5},b_{5})}\,7^{\min(a_{7},b_{7})}\cdots =\prod p_{i}^{\min(a_{p_{i}},b_{p_{i}})},}$
${\displaystyle \operatorname {lcm} (a,b)=2^{\max(a_{2},b_{2})}\,3^{\max(a_{3},b_{3})}\,5^{\max(a_{5},b_{5})}\,7^{\max(a_{7},b_{7})}\cdots =\prod p_{i}^{\max(a_{p_{i}},b_{p_{i}})}.}$
While expressions like these are of great theoretical importance, their practical use is limited by our ability to factor numbers.
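As a small illustration (a sketch using trial division, fine for modest numbers but not for the large integers the remark above is about), the canonical representation and the exponent-wise gcd/lcm formulas can be computed directly:
from collections import Counter

def prime_exponents(n):
    """Return the canonical representation of n as a map prime -> exponent."""
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcd_lcm(a, b):
    """Compute gcd and lcm from the min/max of the prime exponents, as above."""
    fa, fb = prime_exponents(a), prime_exponents(b)
    gcd = lcm = 1
    for p in set(fa) | set(fb):
        gcd *= p ** min(fa[p], fb[p])
        lcm *= p ** max(fa[p], fb[p])
    return gcd, lcm

print(prime_exponents(1200))   # Counter({2: 4, 5: 2, 3: 1}), i.e. 1200 = 2^4 * 3 * 5^2
print(gcd_lcm(12, 220))        # (4, 660)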
### Arithmetical functions
Many arithmetical functions are defined using the canonical representation. In particular, the values of additive and multiplicative functions are determined by their values on the powers of prime numbers.
## Proof
The proof uses Euclid's lemma (Elements VII, 30): if a prime p divides the product of two natural numbers a and b, then p divides a or p divides b (or perhaps both). The article has proofs of the lemma.
### Existence
By inspection, the small natural numbers 1, 2, 3, 4, ... are the product of primes. This is the basis for a proof by induction. Assume it is true for all numbers less than n. If n is prime, there is nothing more to prove. Otherwise, there are integers a and b, where n = ab and 1 < ab < n. By the induction hypothesis, a = p1p2...pj and b = q1q2...qk are products of primes. But then n = ab = p1p2...pjq1q2...qk is also.
### Uniqueness
Assume that s > 1 is the product of prime numbers in two different ways:
{\displaystyle {\begin{aligned}s&=p_{1}p_{2}\cdots p_{m}\\&=q_{1}q_{2}\cdots q_{n}.\end{aligned}}}
We must show m = n and that the qj are a rearrangement of the pi.
By Euclid's lemma p1 must divide one of the qj; relabeling the qj if necessary, say that p1 divides q1. But q1 is prime, so its only divisors are itself and 1. Therefore, p1 = q1, so that
{\displaystyle {\begin{aligned}{\frac {s}{p_{1}}}&=p_{2}\cdots p_{m}\\&=q_{2}\cdots q_{n}.\end{aligned}}}
Reasoning the same way, p2 must equal one of the remaining qj. Relabeling again if necessary, say p2 = q2. Then
{\displaystyle {\begin{aligned}{\frac {s}{p_{1}p_{2}}}&=p_{3}\cdots p_{m}\\&=q_{3}\cdots q_{n}.\end{aligned}}}
This can be done for all m of the pi, showing that m ≤ n. If there were any qj left over we would have
{\displaystyle {\begin{aligned}{\frac {s}{p_{1}p_{2}\cdots p_{m}}}&=1\\&=q_{k}\cdots q_{n},\end{aligned}}}
a contradiction, since the product of numbers greater than 1 cannot equal 1. Therefore m = n and every qj is a pi.
## Generalizations
The first generalization of the theorem is found in Gauss's second monograph on biquadratic reciprocity. This paper introduced what is now denoted ${\displaystyle \mathbb {Z} [i]}$, where ${\displaystyle i^{2}=-1.}$ This is the ring of Gaussian integers, and is the set of all complex numbers a + bi where a and b are integers. Gauss showed that this ring has the four units ±1 and ±i, that the non-zero, non-unit numbers fall into two classes, primes and composites, and that (except for order), the composites have unique factorization as a product of primes.[7]
Similarly, Eisenstein introduced the ring ${\displaystyle \mathbb {Z} [\omega ]}$, where ${\displaystyle \omega ={\frac {-1+{\sqrt {-3}}}{2}},}$ ${\displaystyle \omega ^{3}=1.}$ This is the ring of Eisenstein integers, and Eisenstein proved its units are the six numbers ${\displaystyle \pm 1,\pm \omega ,\pm \omega ^{2}}$ and that it has unique factorization.
However, it was also discovered that unique factorization does not always hold. An example is given by ${\displaystyle \mathbb {Z} [{\sqrt {-5}}]}$. In this ring one has
${\displaystyle 6=2\times 3=(1+{\sqrt {-5}})\times (1-{\sqrt {-5}}),}$
and all of these factors can be proven prime in that ring.[8]
In algebraic number theory, and more generally in ring theory, a unique factorization domain is defined as an algebraic structure in which the fundamental theorem of arithmetic holds. For example, any Euclidean domain or principal ideal domain can be proven to be a unique factorization domain.
In 1843 Kummer introduced the concept of ideal number, which was developed further by Dedekind (1876) into the modern theory of ideals, special subsets of rings. Multiplication is defined for them, and they have unique factorization.
## Notes
1. ^ Long (1972, p. 44)
2. ^ Pettofrezzo & Byrkit (1970, p. 53)
3. ^ Hardy & Wright, Thm 2
4. ^ Riesel, p. 1
5. ^ Long (1972, p. 45)
6. ^ Pettofrezzo & Byrkit (1970, p. 55)
7. ^ Gauss, BQ, §§ 31–34
8. ^ Hardy & Wright, § 14.6
## References
The Disquisitiones Arithmeticae has been translated from Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
• Gauss, Carl Friedrich; Clarke, Arthur A. (translator into English) (1986), Disquisitiones Arithemeticae (Second, corrected edition), New York: Springer, ISBN 0387962549
• Gauss, Carl Friedrich; Maser, H. (translator into German) (1965), Untersuchungen uber hohere Arithmetik (Disquisitiones Arithemeticae & other papers on number theory) (Second edition), New York: Chelsea, ISBN 0-8284-0191-8
The two monographs Gauss published on biquadratic reciprocity have consecutively-numbered sections: the first contains §§ 1–23 and the second §§ 24–76. Footnotes referencing these are of the form "Gauss, BQ, § n". Footnotes referencing the Disquisitiones Arithmeticae are of the form "Gauss, DA, Art. n".
• Gauss, Carl Friedrich (1828), Theoria residuorum biquadraticorum, Commentatio prima, Göttingen: Comment. Soc. regiae sci, Göttingen 6
• Gauss, Carl Friedrich (1832), Theoria residuorum biquadraticorum, Commentatio secunda, Göttingen: Comment. Soc. regiae sci, Göttingen 7
These are in Gauss's Werke, Vol II, pp. 65–92 and 93–148; German translations are pp. 511–533 and 534–586 of the German edition of the Disquisitiones.
• Baker, Alan (1984), A Concise Introduction to the Theory of Numbers, Cambridge, UK: Cambridge University Press, ISBN 978-0-521-28654-1
• Hardy, G. H.; Wright, E. M. (1979), An Introduction to the Theory of Numbers (fifth ed.), USA: Oxford University Press, ISBN 978-0-19-853171-5
• A. Kornilowicz and P. Rudnicki. Fundamental theorem of arithmetic. Formalized Mathematics, 12(2):179–185, 2004.
• Long, Calvin T. (1972), Elementary Introduction to Number Theory (2nd ed.), Lexington: D. C. Heath and Company.
• Pettofrezzo, Anthony J.; Byrkit, Donald R. (1970), Elements of Number Theory, Englewood Cliffs: Prentice Hall.
• Riesel, Hans (1994), Prime Numbers and Computer Methods for Factorization (second edition), Boston: Birkhäuser, ISBN 0-8176-3743-5
|
2017-10-17 06:21:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 18, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8310518860816956, "perplexity": 1233.3606125214576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820927.48/warc/CC-MAIN-20171017052945-20171017072945-00335.warc.gz"}
|