url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://11011110.github.io/blog/2022/12/20/tree-clique-products.html
|
A tree decomposition of a graph is, intuitively, a representation of the graph as a “thickened” tree. It comes from the use of separators in divide-and-conquer algorithms: if a graph can be separated into two smaller subgraphs by the removal of a separator, a small subset of its vertices, then many algorithmic problems can be solved by a recursive algorithm that combines the solutions from those subgraphs. The tree, in the tree decomposition, has a root node that represents the separator at the top level of the recursion, children that represent the separators at the next level, and so on.
It is tempting but incorrect to imagine that each vertex of the given graph belongs to only a single node of this tree. That would mean that you could “thicken” your tree merely by attaching disjoint sets of graph vertices to the tree nodes, so that each graph edge goes between two vertices at the same node or at adjacent nodes. If you could do this, it would give your graph a special kind of product structure: it would be a subgraph of a strong product of a tree and a complete graph, of size roughly equal to the width. You could recover the separators of the separator theorem as the subsets of vertices associated with a single tree node.
Unfortunately, this kind of tree-clique product structure doesn’t work, at least not with the usual separator theorems and the usual divide-and-conquer algorithms. Instead, each recursive subproblem needs to keep track of how its subgraph is attached to the higher-level separator vertices. That means that these separator vertices are really part of the subgraphs on both sides, rather than being confined to a single node at the root of the decomposition tree. Working out the implications of this leads to the standard notion of a tree decomposition, a tree with non-disjoint “bags” of vertices on each node. Each graph vertex may belong to many bags, but they must form a connected subtree of the whole tree. Each graph edge must have both endpoints together in at least one bag.
A counterexample for a related graph coloring problem, by Linial, Matoušek, Sheffet, and Tardos, “Graph colouring with no large monochromatic components”, Probability & Computing 2008, arXiv:math/0703362, also shows some of the limitations of the naive tree-clique-product idea. Their example uses a rectangular grid of dimensions $$n^{1/3}\times n^{2/3}$$, with each vertex on the top row of the grid fanning out to another $$n^{1/3}$$ vertices in a path of linearly many vertices, and one more vertex attached to everything in this path.
It’s planar, so it has a tree decomposition of width (bag size) $$O(\sqrt n)$$, but representing it as the subgraph of a tree-clique product requires cliques of size $$\Omega(n^{2/3})$$. If you try to use smaller cliques, then the tree node containing the high-degree vertex cannot include enough grid columns nor enough of the top grid row to separate the rest into pieces of small enough size. Some piece will contain $$\Omega(n^{2/3})$$ vertices of the long path. However it is further subdivided, at least two of the subdivisions will be connected to each other through the piece and also through the high-degree vertex, forming a loop that is impossible in a tree. (Linial et al. triangulate the grid and show more strongly that in any product structure involving a small clique the other factor has a triangle, but that is more than we need here.)
This graph does have a tree-clique product structure with cliques of size $$O(n^{2/3})$$. By applying the standard planar separator theorem repeatedly, in any planar graph you can find a separator of size $$O(n^{2/3})$$ whose removal partitions the remaining subgraph into components of size $$O(n^{2/3})$$. This can be thought of as a tree-clique product where the tree is a star with the separator as root and all the remaining components as leaf children. In the graph of Linial et al, the separator can be formed from the high-degree vertex, the entire top row of the grid, an evenly-spaced subset of vertices of the long path, and the left column of each square subset of the grid. The components it forms are the remainder of each square subset of the grid, and the remaining paths within the long path. More generally, in any class of graphs with $$O(\sqrt n)$$-separators, the same idea produces a representation as a subgraph of a clique–star product $$K_{O(n^{2/3})}\boxtimes K_{1,O(n^{1/3})}$$, as David Wood observed in his recent preprint “Product structure of graph classes with strongly sublinear separators”, arXiv:2208.10074.
Which all brings us to my newest preprint, “Graphs excluding a fixed minor are $$O(\sqrt n)$$-complete-blowups of a treewidth 4 graph”, with Marc Distel, Vida Dujmović, Robert Hickingbotham, Gwenaël Joret, Pat Morin, Michał Seweryn, and David Wood, arXiv:2212.08739. It asks: what if you insist on a product structure involving cliques of size $$O(\sqrt n)$$, but you let the other factor be something other than a tree? How well-structured can you make the other factor? As the title says, for graphs in any minor-closed graph family, we can find a representation of the graph as a strong product $$K\boxtimes G$$ where $$K$$ is a clique of size $$O(\sqrt n)$$ and $$G$$ has treewidth at most four. The details are messy and use the full strength of a different graph structure theorem, the theorem of Robertson and Seymour that the graphs in any minor-closed family can be decomposed by separators of bounded size into pieces that are graphs of bounded genus, plus a bounded number of arbitrary “apex” vertices, plus “vortices” of bounded pathwidth attached to a bounded number of faces. See the paper if you want an explanation of that part.
|
2023-03-26 19:19:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7234712243080139, "perplexity": 404.28400487974346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946445.46/warc/CC-MAIN-20230326173112-20230326203112-00629.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-concepts-through-functions-a-unit-circle-approach-to-trigonometry-3rd-edition/chapter-4-exponential-and-logarithmic-functions-section-4-5-properties-of-logarithms-4-5-assess-your-understanding-page-331/5
|
## Precalculus: Concepts Through Functions, A Unit Circle Approach to Trigonometry (3rd Edition)
$\log _{a}(M N)=\log _{a} M+\log _{a} N$
Recall the product rule for logarithms: $$\log _{a}(AB)=\log _{a} A+\log _{a} B$$ Thus, $$\log_a{MN}=\log_a{M}+\log_a{N}$$
|
2021-10-17 07:30:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7796486616134644, "perplexity": 5619.674650275115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585121.30/warc/CC-MAIN-20211017052025-20211017082025-00153.warc.gz"}
|
http://tex.stackexchange.com/questions/87594/changing-the-numbering-of-lemmas-and-theorems-in-the-appendix/87599
|
# Changing the numbering of lemmas and theorems in the appendix
I wanted to change the name of the appendix. So, I wanted my appendix name would be not Appendix A, as usual, but Appendix I.
So I use
\begin{appendix}
\renewcommand{\theequation}{I.\arabic{equation}}
% redefine the command that creates the equation no.
\setcounter{equation}{0} % reset counter
\section*{\textbf{Appendix I}} % use *-form to suppress numbering
....................%some text
\end{appendix}
These commands helped me to change the formulas numbering. I've tried the same 'trick' to change lemma and theorem numbering (I wanted, for example Lemma I.1, Theorem I.2) but it did not work.
Help me please with these commands.
-
Welcome to TeX.SE – Vivi Dec 19 '12 at 8:35
There is no appendix environment (unless you're using some package). Use only \appendix at the spot where appendices start; then issue the command as in the answer below. – egreg Dec 19 '12 at 11:55
You should try to change the counter of the appendix to Roman instead of changing it for every macro using it.
So try
\renewcommand{\thesection}{\Roman{section}}
instead of the equation command. (you might use the chapter counter instead, depending on your document class)
Edit: To make it clearer: instead of
\renewcommand{\theequation}{I.\arabic{equation}}
\setcounter{equation}{0} % reset counter
\section*{\textbf{Appendix I}} % use *-form to suppress numbering
I would try
\renewcommand{\thesection}{\Roman{section}}
\section{Appendix}
I use the automatic numbering, so the equation counter is handled automatically and the counters of Lemma, Theorem and the lot should as well.
-
Could you make clearer where this line should go? – egreg Dec 19 '12 at 11:55
Just before the first appendix. – vonbrand Jan 17 '13 at 22:31
|
2016-07-02 00:18:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9722041487693787, "perplexity": 2606.5861451914266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404382.73/warc/CC-MAIN-20160624155004-00079-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://socratic.org/questions/how-do-you-use-the-first-and-second-derivatives-to-sketch-f-x-x-4-2x-2-3
|
# How do you use the first and second derivatives to sketch f(x)= x^4 - 2x^2 +3?
Sep 20, 2016
The curve is concave downwards at $\left(0 , 3\right)$
The curve is concave upwards at $\left(1 , 2\right)$
The curve is concave upwards at $\left(- 1 , 2\right)$
#### Explanation:
Given -
$y = {x}^{4} - 2 {x}^{2} + 3$
$\frac{\mathrm{dy}}{\mathrm{dx}} = 4 {x}^{3} - 4 x$
$\frac{{d}^{2} y}{{\mathrm{dx}}^{2}} = 12 {x}^{2} - 4$
To sketch the graph, we have to find for what values of $x$ the slope becomes zero. It means at those points the curve turns.
$\frac{\mathrm{dy}}{\mathrm{dx}} = 0 \implies 4 {x}^{3} - 4 x = 0$
$4 {x}^{3} - 4 x = 0$
$4 x \left({x}^{2} - 1\right) = 0$
$4 x = 0$
$x = 0$
${x}^{2} - 1 = 0$
$x = \pm \sqrt{1}$
$x = 1$
$x = - 1$
The curve turns when x=0; x=1; x=-1
At these points we have to decide whether the curve is concave upwards or concave downwards. For this we need the second derivatives -
At $x = 0$
$\frac{{d}^{2} y}{{\mathrm{dx}}^{2}} = 12 \left({0}^{2}\right) - 4 = - 4 < 0$
Since the second derivative is less than zero, the curve is concave downwards at $x = 0$
The value of the function is -
$y = {0}^{4} - 2 \cdot {0}^{2} + 3 = 3$
The curve is concave downwards at $\left(0 , 3\right)$
At $x = 1$
$\frac{{d}^{2} y}{{\mathrm{dx}}^{2}} = 12 \left({1}^{2}\right) - 4 = 8 > 0$
Since the second derivative is greater than zero, the curve is concave upwards at $x = 0$
The value of the function is -
$y = {1}^{4} - 2 \cdot {1}^{2} + 3 = 2$
The curve is concave upwards at $\left(1 , 2\right)$
At $x = - 1$
$\frac{{d}^{2} y}{{\mathrm{dx}}^{2}} = 12 \left(- {1}^{2}\right) - 4 = 8 > 0$
Since the second derivative is greater than zero, the curve is concave upwards at $x = - 1$
The value of the function is -
$y = {\left(- 1\right)}^{4} - 2 \cdot {\left(- 1\right)}^{2} + 3 = 2$
The curve is concave upwards at $\left(- 1 , 2\right)$
|
2021-10-23 16:58:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 32, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.966399073600769, "perplexity": 307.67683036573635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585737.45/warc/CC-MAIN-20211023162040-20211023192040-00623.warc.gz"}
|
https://plainmath.net/integral-calculus/103518-what-is-the-slope-of-the-tange
|
Teresa Manning
2023-03-14
What is the slope of the tangent line of $r=-2\mathrm{sin}\left(3\theta \right)-12\mathrm{cos}\left(\frac{\theta }{2}\right)$ at $\theta =\frac{-\pi }{3}$?
Gregory Ferguson
The slope of the tangent line of r at $\theta =\frac{-\pi }{3}$, is the same as the derivative of the function for that exact x-value. Therefore, we need to differentiate both sides of the function, so we get an expression for $\frac{d}{d\theta }$ r.
$\frac{d}{d\theta }\left[r\right]=\frac{d}{d\theta }\left[-2\mathrm{sin}\left(3\theta \right)-12\mathrm{cos}\left(\frac{\theta }{2}\right)\right]$
$\frac{d}{d\theta }\left[r\right]=\frac{d}{d\theta }\left[-2\mathrm{sin}\left(3\theta \right)\right]-\frac{d}{d\theta }\left[-12\mathrm{cos}\left(\frac{\theta }{2}\right)\right]$
From this point, we need to know how to differentiate $\mathrm{sin}\theta$ and $\mathrm{cos}\theta$:
$\frac{d}{d\theta }\mathrm{sin}\theta =\mathrm{cos}\theta$
$\frac{d}{d\theta }\mathrm{cos}\theta =-\mathrm{sin}\theta$
We need to know how the chain rule works as well. In this case, we have to substitute $3\theta$ with u and $\frac{\theta }{2}$ with v. When we then differentiate, we have to also differentiate our substitute.
We then have:
$\frac{d}{d\theta }\left[r\right]=\frac{d}{d\theta }\left[-2\mathrm{sin}\left(u\right)\right]-\frac{d}{d\theta }\left[12\mathrm{cos}\left(v\right)\right]$
$\frac{d}{d\theta }\left[r\right]=-2\mathrm{cos}\left(u\right)\cdot \frac{d}{d\theta }\left[u\right]-\left(-12\mathrm{sin}\left(v\right)\cdot \frac{d}{d\theta }\left[v\right]\right)$
Let's now substitute u and v back to our original functions.
$\frac{d}{d\theta }\left[r\right]=-2\mathrm{cos}\left(3\theta \right)\cdot \frac{d}{d\theta }\left[3\theta \right]-\left(-12\mathrm{sin}\left(\frac{\theta }{2}\right)\cdot \frac{d}{d\theta }\left[\frac{\theta }{2}\right]\right)$
$\frac{d}{d\theta }\left[r\right]=-2\mathrm{cos}\left(3\theta \right)\cdot 3+12\mathrm{sin}\left(\frac{\theta }{2}\right)\cdot \frac{1}{2}=-6\mathrm{cos}\left(3\theta \right)+6\mathrm{sin}\left(\frac{\theta }{2}\right)$
Now we have an expression for $\frac{d}{d\theta }r\left(\theta \right)$, where we can put in any values we want, and get the slope for whatever $\theta$-value we want. Thus, let's put in $\theta =\frac{-\pi }{3}$:
$\frac{d}{d\theta }\left[r\left(\frac{-\pi }{3}\right)\right]$
$=-6\mathrm{cos}\left(-\pi \right)+6\mathrm{sin}\left(-\frac{\pi }{6}\right)$
$\mathrm{cos}\left(-\pi \right)=-1$ and $\mathrm{sin}\left(\frac{-\pi }{6}\right)=-\frac{1}{2}$
This gives us that the slope, for $\theta =\left(-\frac{\pi }{3}\right)$, is:
$-6\left(-1\right)+6\cdot \left(-\frac{1}{2}\right)=6-3=3$
Do you have a similar question?
|
2023-03-26 21:28:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 25, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9013800024986267, "perplexity": 187.79791940947058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946535.82/warc/CC-MAIN-20230326204136-20230326234136-00168.warc.gz"}
|
https://cris.tau.ac.il/en/publications/traces-of-powers-of-matrices-over-finite-fields
|
# Traces of powers of matrices over finite fields
Research output: Contribution to journalArticlepeer-review
## Abstract
LetM be a random matrix chosen according to Haar measure from the unitary group U(n,C). Diaconis and Shahshahani proved that the traces of M,M2, . . . ,Mk converge in distribution to independent normal variables as n→∞, and Johansson proved that the rate of convergence is superexponential in n. We prove a finite field analogue of these results. Fixing a prime power q = pr, we choose a matrix M uniformly from the finite unitary group U(n, q) ⊆ GL(n, q2) and show that the traces of {Mi} 1≤i≤k, pi converge to independent uniform variables in Fq2 as n. Moreover we show the rate of convergence is exponential in n2. We also consider the closely related problem of the rate at which characteristic polynomial of M equidistributes in 'short intervals' of Fq2 [T]. Analogous results are also proved for the general linear, special linear, symplectic and orthogonal groups over a finite field. In the two latter families we restrict to odd characteristic. The proofs depend upon applying techniques from analytic number theory over function fields to formulas due to Fulman and others for the probability that the characteristic polynomial of a random matrix equals a given polynomial.
Original language English 4579-4638 60 Transactions of the American Mathematical Society 374 7 https://doi.org/10.1090/tran/8337 Published - 2021 Yes
## Fingerprint
Dive into the research topics of 'Traces of powers of matrices over finite fields'. Together they form a unique fingerprint.
|
2022-09-28 18:51:30
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8585190773010254, "perplexity": 745.9764943744399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00246.warc.gz"}
|
https://www.albert.io/ie/differential-equations/partial-fraction-decomposition-repeated-linear-factor
|
Free Version
Moderate
# Partial Fraction Decomposition: Repeated Linear Factor
DIFFEQ-JLYMOL
The partial fraction decomposition of $\cfrac{x^2}{(x-1)(x^2+2x+1)}$ is given by:
A
$\cfrac{1/4}{x-1}+\cfrac{3x+1}{4(x^2+2x+1)}$
B
$\cfrac{1/4}{x-1}+\cfrac{-1/2}{(x+1)^2}$
C
$\cfrac{1/4}{x-1}+\cfrac{-1/2}{(x+1)^2}+\cfrac{3/4}{x+1}$
D
$\cfrac{x+1}{2(x^2-1)}+\cfrac{1}{2(x+1)}$
|
2017-03-23 08:24:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7461557984352112, "perplexity": 3412.556594988776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186841.66/warc/CC-MAIN-20170322212946-00019-ip-10-233-31-227.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/76870/reverse-fatous-lemma
|
# Reverse Fatou's Lemma
Wikipedia article states that to obatin this lemma, the functions $g-f_n$ have to be considered (where $f_n \leq g$). However, the difference might not exist for some elements (e.g. $\infty - \infty$ or $-\infty - (-\infty)$). How is this problem circumvented?
Thanks, Phanindra
-
## 1 Answer
In the Wikipedia article, $g$ is required to be integrable. This implies that the set $\{x \in S\colon g(x) = \pm \infty\}$ has measure zero.
-
Thanks! I did not quite understand how integrability plays a role until I saw your answer. – jpv Oct 29 '11 at 6:22
Supposing $g$ takes $\infty$ on a set of measure greater than zero. As $g$ is integrable the value of its integral is $\infty$. So, it seems that $g$ can take the extreme values on a set of positive measure. – jpv Oct 29 '11 at 6:38
@jpv: No, a measurable function is defined to be integrable iff its integral exists and is finite. (Admittedly, it is kind of a strange convention...) – Jesse Madnick Oct 29 '11 at 7:07
Thanks! The book I am following (books.google.com/books/about/…) defines a function to be integrable if either the integral or the negative or the positive part is finite. In general, this implies that the integral of the function can be +infinity or -infinity. However, the dominated convergence theorem is slightly different from the one in wikipedia. The authors specifically want finiteness probably to avoid the problem in the question. I believe I have interpreted integrability erroneously in the wikipedia page. – jpv Oct 30 '11 at 1:59
Even if we impose finiteness (i.e. require that $\int_X|g| d\mu < \infty$) it is still weird that we can define integrals for functions $g-f_n$ which are undefined on a set of points in the domain. I am aware of the convention that $0*\infty = 0$ but nothing about $0 *(\infty - \infty)$. – jpv Oct 30 '11 at 2:07
|
2015-11-27 17:40:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.963137149810791, "perplexity": 226.67811686177498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398449793.41/warc/CC-MAIN-20151124205409-00225-ip-10-71-132-137.ec2.internal.warc.gz"}
|
http://openaccess.cmb.med.rug.nl/?cat=3931
|
# [STM Association open letter to the White House Office of Science and Technology Policy]
“STM publishers support all models and approaches that have the potential to lead to a more open scholarly communication environment and a greater empowerment of researchers. We continue to work diligently with stakeholders across the research ecosystem to build towards a future where quality, rigor, replicability, reproducibility, and integrity of research can be sustained while meeting the access needs of researchers and the public in an open and collaborative manner. We were therefore alarmed to learn that the Administration may be considering a precipitous move to require immediate access to any article that reports on Federally funded research, without due consideration of the impact of such a policy on research and discovery and the costs to the taxpayer of a shift to open access….”
# STM comment on cOAlition S Guidance on Implementation – Addendum
“STM agrees that targets and milestones are necessary to measure the transition to Open Access, but mandating them may run counter to our overall shared goal. With only 6% of all journal articles connected to funding by the current cOAlition S membership, the specified targets will be difficult to achieve and support without a significant number of new funders and institutions willing to financially support the transition to OA. Setting a blanket ‘tipping point’ does not recognise the differences in funding which exist across research communities. Some journals would be able to transition to full OA when they reach a 50% penetration rate, however others would not prove sustainable with the remaining 50% made up of many unfunded authors….”
# STM comment on cOAlition S Guidance on Implementation – Addendum
“STM agrees that targets and milestones are necessary to measure the transition to Open Access, but mandating them may run counter to our overall shared goal. With only 6% of all journal articles connected to funding by the current cOAlition S membership, the specified targets will be difficult to achieve and support without a significant number of new funders and institutions willing to financially support the transition to OA. Setting a blanket ‘tipping point’ does not recognise the differences in funding which exist across research communities. Some journals would be able to transition to full OA when they reach a 50% penetration rate, however others would not prove sustainable with the remaining 50% made up of many unfunded authors….”
# OA price and service transparency Survey
“The views of researchers, librarians, publishers, and funders about ways to increase the transparency of communications about the price of Open Access publishing services are sought in a new industry survey. The results of this survey will help to inform a collaborative project with publishers, funders, and universities to develop a framework for communications. The project is sponsored by the Wellcome Trust in partnership with UKRI on behalf of cOAlition S. You can visit the survey here ….”
# Diverting Leakage to the Library Subscription Channel – The Scholarly Kitchen
“Likewise, we’ve known for some time that, while some publishers take a highly contentious stance towards ResearchGate, others have taken a different approach. Whatever one might have thought about ResearchGate earlier in its development, it has clearly arrived as a major service for researchers. ResearchGate is one of the most trafficked science websites globally and has more than twice the traffic of Google Scholar and many more times that of Sci-Hub. ResearchGate is also without question a site of leakage and that is precisely what also makes it an attractive platform for syndication. …
ResearchGate users without entitlements via a Springer Nature institutional subscription will continue to have access to articles in a non-downloadable format. It is worth noting that this is the version of record, which diverges from Elsevier’s tactic of providing an author manuscript to the non-entitled, and so all users (entitled and non-entitled) have access to the version of record….
The code behind the rendered web pages did not seem to show that the entitlements information was being passed from Springer Nature, but rather that ResearchGate is determining authorization using a database it accesses directly or perhaps via API. …
We also noted that the PDFs one downloads from ResearchGate are different files than the PDFs that are downloaded from the Springer Nature platform. Both platforms provide the version of record PDF but the files from ResearchGate had different watermarks in the footer than those from the Springer Nature platform. This makes even clearer that this is truly a case of syndication to the ResearchGate platform and not linking out from ResearchGate to the publisher platform, such as is done from library discovery layers. …
Bringing library-subscribed resources into the scholar’s workflow on ResearchGate helps to ensure that scholars have easy and seamless access to licensed materials and bypasses the cumbersome process of moving from a citation on ResearchGate, back to the library website, only to then be required to navigate the link resolver, authentication mechanisms, and the publisher platform before getting the PDF. With syndication, discovery is delivery. …”
# NISO Releases Draft Recommended Practice on Improved Access to Institutionally-Provided Information Resources for Public Comment | NISO website
“The National Information Standards Organization (NISO) seeks comments on a new Recommended Practice draft for improved access to institutionally-provided information resources. This document details the findings from the Resource Access for the 21st Century (RA21) initiative and provides recommendations for using federated identity as an access model and for improving the federated authentication user experience.
For several years, scholars have expressed increasing frustration with obtaining access to institutionally-provided information resources against a background of changing work habits and the expectation of always-on connectivity from any location, at any time, from any device….
The NISO Recommended Practices for Improved Access to Institutionally-Provided Information Resources is available for public comment between April 17 and May 17, 2019. To download the draft document or to submit comments, visit the NISO Project page at: https://www.niso.org/standards-committees/ra21. All input is welcome and encouraged….”
# The STM Report An overview of scientific and scholarly publishing: Fifth edition, October 2018
“Average publishing costs per article vary substantially depending on a range of factors including rejection rate (which drives peer review costs), range and type of content, levels of editorial services, and others. The average 2010 cost of publishing an article in a subscription-based journal with print and electronic editions was estimated by CEPA to be around £3095 (c. $4,000), excluding non-cash peer review costs. An updated analysis by CEPA in 2018 shows that, in almost all cases, intangible costs such as editorial activities are much higher than tangible ones, such as production, sales and distribution, and are key drivers in per article costs (page 73). The potential for technology and open access to effect cost savings has been much discussed, with open access publishers such as Hindawi and PeerJ having claimed per article costs in the low hundreds of dollars. A recent rise in PLOS’s per article costs, to$1,500 (inferred from its financial statements), and costs of over £3,000 (\$4,000) per article at the selective OA journal eLife call into question the scope for OA to deliver radical cost savings. Nevertheless, with article volumes rising at 4% per annum, and journal revenues at only 2%, further downward pressure on per article costs is inevitable (page 74)….
Gold open access is sometimes taken as synonymous with the article publication charge (APC) business model, but strictly speaking simply refers to journals offering immediate open access on publication. A substantial fraction of the Gold OA articles indexed by Scopus, however, do not involve APCs but use other models (e.g. institutional support or sponsorship). The APC model itself has become more complicated, with variable APCs (e.g. based on length), discounts, prepayments and institutional membership schemes, offsetting and bundling arrangements for hybrid publications, read-and-publish deals, and so on (page 97)….
It is unclear where the market will set OA publication charges: they are currently lower than the historical average cost of article publication; and charges for full open access articles remain lower than hybrid, though the gap is closing. Calls to redirect subscription expenditures to open access have increased, but the more research-intensive universities and countries remain concerned about the net impact on their budgets (page 101; 139). …
Recent developments indicate a growing willingness on the part of funders and policymakers to intervene in the STM marketplace, whether by establishing their own publication platforms, strengthening OA mandates or acting to change the incentive structures that drive authors’ publication choices (page 113). …
Concerns over the impact of Green OA and the role of repositories have receded somewhat, though not disappeared. The lack of its own independent sustainable business model means Green OA depends on its not undermining that of (subscription) journals. The evidence remains mixed, with indications that Green OA can increase downloads and citations being balanced against evidence of the long usage half-life of journal articles and its substantial variation between fields. In practice, however, attention in many quarters has shifted to the potentially damaging impact of Social Collaboration Networks (SCNs) and pirate websites on subscriptions (pages 114; 174). …”
# Open Access och Big Business Hur Open Access blev en del av de stora förlagen
English title: Open Access and Big Business: How Open Access Became a Part of Big Publishing
Article in Swedish with this English abstract: This study explores the Open Access phenomenon from the perspective of the commercial scientific publishing industry. Open Access has been appropriated by commercial publishers, once sceptical opponents of the concept, as a means among others of distributing scholarly publications. The aim of this study is to highlight a possible explanation as to how this has come about by looking at the internal and external communication of two of the main scholarly publishing industry organizations, the STM Association and the PSP division of the AAP. Via a thematic analysis of documents from these organizations, the dissertation aims to explore how the publishers’ communication regarding Open Access has changed over time. Furthermore, the study takes on how these questions are interlinked with notions of power and legitimacy within the system of scholarly communication. The analysis shows two main themes, one that represents a coercive course of restoring legitimacy, where publishers’ value-adding is stressed and at the same time warning of dangerous consequences of Open Access. The other theme represents a collaborative course of action that stresses the importance of building alliances and reaching consensus. Results show that there has been a slight change in how the publishing industry answers to public policies that enforce Open Access. One conclusion is that this is due to the changing nature of said policies.
# STM statement on Plan S: Accelerating the transition to full and immediate Open Access to scientific publications
“The International Association of Scientific, Technical and Medical Publishers (STM) welcomes the efforts by funders to work towards our shared goals of expanding access to peer-reviewed scientific works to maximise their value and reuse, but urges caution that the next steps in this transition avoid any unintended limitations on academic freedoms, and continue to ensure the overall viability and integrity of the scholarly record….
Similarly, STM believes that flexibility in Article Publication Charge (APC) pricing is key to ensuring a vibrant and viable scholarly sector where researchers are fully able to take advantage of the full range of Open Access options available to them. Caps on APCs would restrict authors’ choice of publication avenues for Gold Open Access, risk undermining quality and likely slow down the transition to a full Open Access environment….”
|
2020-01-27 02:40:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20484843850135803, "perplexity": 3294.4654330109506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694176.67/warc/CC-MAIN-20200127020458-20200127050458-00186.warc.gz"}
|
https://bigladdersoftware.com/epx/docs/8-7/engineering-reference/cooling-towers-and-evaporative-fluid-coolers.html
|
Cooling Towers and Evaporative Fluid Coolers[LINK]
One, Two, and Variable Speed Cooling Towers and Evaporative Fluid Coolers[LINK]
The input objects CoolingTower:SingleSpeed, CoolingTower:TwoSpeed, and CoolingTower:VariableSpeed:Merkel provide models for single-speed, two-speed, and variable-speed cooling towers that are based on Merkel’s theory (Merkel 1925), which is also the basis for the tower model included in ASHRAE’s HVAC1 Toolkit for primary HVAC system energy calculations (ASHRAE 1999, Bourdouxhe et al. 1994). Cooling tower performance is modeled using effectiveness-NTU relationships for counterflow heat exchangers. The model can be used to simulate the performance of both single speed, two speed, and variable speed mechanical-draft cooling towers. The model will also account for tower performance in the “free convection” regime, when the tower fan is off but the water pump remains on. For part-load operation, the model assumes a simple linear interpolation between two steady-state regimes without accounting for any cycling losses.
For single speed cooling towers, the capacity control can be fan cycling or fluid bypass. In fluid bypass mode, portion of the water goes through the tower media and gets cooled while the remaining water flow gets bypassed, two water flows then mix together trying to meet the tower exiting water setpoint temperature. In both the free convection cooling when fan is off and normal cooling when fan is on for the entire time step, if the tower exiting water temperature is lower than the setpoint, the tower operates in fluid bypass mode. The model determines the fluid bypass fraction by iterations until the mixed water meets the tower exiting water temperature setpoint. In the fluid bypass mode, except the free convection, the tower fan runs at full speed for the entire time step. The maximum amount of tower water that can be bypassed is bounded by the freezing point of the tower water – the tower exiting water temperature cannot be lower than the freezing setpoint.
Evaporative fluid coolers are modeled very similar to cooling towers. The main difference between the two is in the “Performance input method” input field. Cooling tower has two choices for this field namely “UFactorTimesAreaAndDesignWaterFlowRate” and “Nominal capacity”. The nominal capacity is specified for the standard conditions i.e. entering water at 35C (95F), leaving water at 29.44C (85F), entering air at 25.56C (78F) wet-bulb temperature and 35C (95F) dry-bulb temperature. On the other hand evaporative fluid cooler has three choices for “Performance input method” which are “UFactorTimesAreaAndDesignWaterFlowRate”, “StandardDesignCapacity” and “UserSpecifiedDesignCapacity”. First method is same for both tower and fluid cooler. Standard design capacity is specified for the same conditions which are used to specify nominal capacity for tower as described above. If the capacity of fluid cooler for conditions other than the standard ones is known then UserSpecifiedDesignCapacity method should be used. In this case, the conditions for which the fluid cooler capacity is known i.e. entering water temperature, entering air temperature and entering air wet bulb temperature must be specified in the input. To calculate evaporation loss for fluid cooler, spray water flow rate which is different than the process fluid flow rate must be specified for all the performance input methods. This is not required for cooling tower because cooled fluid i.e. water is in direct contact with the air so the water loss is calculated by using cooled fluid flow rate only. Unlike cooling tower, evaporative fluid cooler model does not account for free convection.
Cooling tower model is described below which holds equally good for evaporative fluid cooler. The differences are mentioned whenever required.
Based on Merkel’s theory, the steady-state total heat transfer between the air and water entering the tower can be defined by the following equation:
where
h = enthalpy of saturated air at the wetted-surface temperature, J/kg
h = enthalpy of air in the free stream, J/kg
c = specific heat of moist air, J/kg-C
U = cooling tower overall heat transfer coefficient, W/m- C
A = heat transfer surface area, m
Equation is based on several assumptions:
n air and water vapor behave as ideal gases
n the effect of water evaporation is neglected
n fan heat is neglected
n the interfacial air film is assumed to be saturated
n the Lewis number is equal to 1
In this model, it is also assumed that the moist air enthalpy is solely a function of the wet-bulb temperature and that the moist air can be treated as an equivalent ideal gas with its mean specific heat defined by the following equation:
where
Δh = enthalpy difference between the air entering and leaving the tower, J/kg
= wet-bulb temperature difference between the air entering and leaving the tower, C
Since the liquid side conductance is much greater than the gas side conductance, the wetted-surface temperature is assumed to be equal to the water temperature. Based on this assumption and equations and , the expression for total heat transfer becomes:
where
= wet-bulb temperature of the air, C
T = temperature of the water, C
An energy balance on the water and air sides of the air/water interface yields the following equations:
where
= mass flow rate of water, kg/s
= mass flow rate of air, kg/s
Assuming that the heat capacity rate ( ) for the cooling tower water is less than that for the air,the effectiveness of the cooling tower can be defined by analogy to the effectiveness of a simple heat exchanger:
where
ε = heat exchanger effectiveness
T = inlet water temperature, C
T = outlet water temperature, C
T = wet-bulb temperature of the inlet air, C
Combining equations , , and and integrating over the entire heat transfer surface area, and combining the result with equation provides the following expression for cooling tower effectiveness:
where
and
This equation is identical to the expression for effectiveness of an indirect contact (i.e., fluids separated by a solid wall) counterflow heat exchanger (Incropera and DeWitt 1981). Therefore, the cooling tower can be modeled, in the steady-state regime, by an equivalent counterflow heat exchanger as shown in the following figure.
The first fluid is water and the second fluid is an equivalent fluid entering the heat exchanger at temperature T and specific heat . The heat exchanger is characterized by a single parameter, its overall heat transfer coefficient-area product UA. The actual cooling tower heat transfer coefficient-area product is related to UA by the following expression:
This heat transfer coefficient-area product is assumed to be a function of the air mass flow rate only and can be estimated from laboratory test results or manufacturers’ catalog data.
The model for the variable speed Merkel tower also includes Scheier’s modifications. Scheier has extended the Merkel model to also include terms that adjust UA with three factors that model how UA values change when the tower is operating away from its rated conditions. The first factor, , adjusts UA for the current outdoor wetbulb temperature. The user enters a performance curve or lookup table that is a function of one independent variable. The independent variable is the difference between the design wetbulb temperature and the current wetbulb temperature, in degrees Celsius.
The second factor, , adjusts UA for the current air flow rate. The user enters a performance curve or lookup table that is a function of one independent variable. The independent variable is the ratio of the current air flow rate to the design air flow rate at full speed.
The third factor, , adjusts UA for the current water flow rate. The user enters a performance curve or lookup table that is a function of one independent variable. The independent variable is the ratio of the current water flow rate to the design water flow rate.
Then the UA value at any given time is calculated using
Method for Calculating Steady-State Exiting Water Temperature[LINK]
The objective of the cooling tower model is to predict the exiting water temperature and the fan power required to meet the exiting water setpoint temperature. Since only the inlet air and inlet water temperatures are known at any simulation time step, an iterative procedure is required to determine the exiting fluid temperatures using the equations defined in the previous section. In the case of the EnergyPlus model, the iterations are performed to determine the exiting wet-bulb temperature of the air. The exiting water temperature is then calculated based on an energy balance that assumes that the energy absorbed by the air is equivalent to the energy removed from the water. The procedure for calculating the steady-state, exiting air wet-bulb temperature is outlined below.
As explained previously, it is assumed that the moist air enthalpy can be defined by the wet-bulb temperature alone. Therefore, the first step in the procedure is to calculate the enthalpy of moist air entering the cooling tower based on the ambient wet-bulb temperature from the weather file. Since an iterative solution is required, a first guess of the outlet air wet-bulb temperature is then made and the enthalpy of this estimated outlet air wet-bulb temperature is calculated. Based on these inlet and outlet air conditions, the mean specific heat of the air is calculated based on equation , repeated here:
With the overall heat transfer coefficient-area product for the cooling tower entered by the user, the effective heat transfer coefficient-area product is calculated by rearranging equation :
With and known, the effectiveness of the heat exchanger is then calculated:
where
and
and
The heat transfer rate is then calculated as follows:
The outlet air wet-bulb temperature is then recalculated:
The iterative process of calculating continues until convergence is reached.
Finally, the outlet water temperature is calculated as follows:
Calculating the Actual Exiting Water Temperature and Fan Power[LINK]
The previous section describes the methodology used for calculating the steady-state temperature of the water leaving the cooling tower. This methodology is used to calculate the exiting water temperature in the free convection regime (water pump on, tower fan off) and with the tower fan operating (including low and high fan speed for the two-speed tower). The exiting water temperature calculations use the fluid flow rates (water and air) and the UA-values entered by the user for each regime.
The cooling tower model seeks to maintain the temperature of the water exiting the cooling tower at (or below) a setpoint. The model obtains the target temperature setpoint from the setpoints placed on either the tower outlet node or the loop’s overall setpoint node (typically set to the supply side outlet node). The model checks to see if the outlet node has a setpoint placed on it and uses that if it does. If the outlet node does not have a temperature setpoint then the model uses the loop-level outlet node specified in the input field called Loop Temperature Setpoint Node Name in the PlantLoop or CondenserLoop object. The model first checks to determine the impact of “free convection”, if specified by the user, on the tower exiting water temperature. If free convection is not specified by the user, then the exiting water temperature is initially set equal to the entering tower water temperature. If the user specifies “free convection” and the steady-state exiting water temperature based on “free convection” is at or below the setpoint, then the tower fan is not turned on.
If the exiting water temperature remains above the setpoint after “free convection” is modeled, then the tower fan is turned on to reduce the exiting water temperature to the setpoint. The model assumes that part-load operation is represented by a simple linear interpolation between two steady-state regimes (e.g., tower fan on for the entire simulation time step and tower fan off for the entire simulation time step). Cyclic losses are not taken into account.
The fraction of time that the tower fan must operate is calculated based on the following equation:
where
= exiting water setpoint temperature, C
= exiting water temperature with tower fan off, C
= exiting water temperature with tower fan on, C
The average fan power for the simulation time step is calculated by multiplying by the steady-state fan power specified by the user.
The calculation method for the two-speed tower is similar to that for the single-speed tower example described above. The model first checks to see if “free convection” is specified and if the resulting exiting water temperature is below the setpoint temperature. If not, then the model calculates the steady-state exiting water temperature with the tower fan at low speed. If the exiting water temperature at low fan speed is below the setpoint temperature, then the average fan power is calculated based on the result of equation and the steady-state, low speed fan power specified by the user. If low-speed fan operation is unable to reduce the exiting water temperature below the setpoint, then the tower fan is increased to its high speed and the steady-state exiting water temperature is calculated. If this temperature is below the setpoint, then a modified version of equation is used to calculate runtime at high fan speed:
where
= exiting water setpoint temperature, C
= exiting water temperature with tower fan at low speed, C
= exiting water temperature with tower fan at high speed, C
The average fan power for the simulation time step is calculated for the two-speed cooling tower as follows:
The tower basin heater operates in the same manner as the variable speed cooling tower basin heater. Refer to the variable speed cooling tower basin heater description in the following section.
Cooling Tower Makeup Water Usage[LINK]
The cooling tower makeup water usage is the same as the variable speed cooling tower makeup water usage. Refer to the variable speed cooling tower makeup water usage description in the following section.
Rosaler, Robert C. 1995. Standard Handbook of Plant Engineering, 2 Ed. New York, NY: McGraw-Hill, pp. 6-36-37.
Variable Speed Cooling Towers Empirical Models[LINK]
The input object CoolingTower:VariableSpeed provides models for variable speed towers that are based on empirical curve fits of manufacturer’s performance data or field measurements. The user specifies tower performance at design conditions, and empirical curves are used to determine the approach temperature and fan power at off-design conditions. The user defines tower performance by entering the inlet air wet-bulb temperature, tower range, and tower approach temperature at the design conditions. The corresponding water flow rate, air flow rate, and fan power must also be specified. The model will account for tower performance in the “free convection” regime, when the tower fan is off but the water pump remains on and heat transfer still occurs (albeit at a low level). Basin heater operation and makeup water usage (due to evaporation, drift, and blowdown) are also modeled.
The cooling tower seeks to maintain the temperature of the water exiting the cooling tower at (or below) a setpoint. The setpoint temperature is defined by the setpoints placed on either the tower outlet node or the loop’s overall setpoint node (typically set to the supply side outlet node). The model checks to see if the outlet node has a setpoint placed on it and uses that if it does. If the outlet node does not have a temperature setpoint then the model uses the loop-level outlet node specified in the input field called Loop Temperature Setpoint Node Name in the PlantLoop or CondenserLoop object. The model simulates the outlet water temperature in four successive steps:
· The model first determines the tower outlet water temperature with the tower fan operating at maximum speed. If the outlet water temperature is above the setpoint temperature, the fan runs at maximum speed.
· If the outlet water temperature with maximum fan speed is below the setpoint temperature, then the model next determines the impact of “free convection” (water flowing through tower with fan off). If the exiting water temperature based on “free convection” is at or below the setpoint, then the tower fan is not turned on.
· If the outlet water temperature remains above the setpoint after “free convection” is modeled, then the tower fan is turned on at the minimum fan speed (minimum air flow rate ratio) to reduce the leaving water temperature. If the outlet water temperature is below the setpoint at minimum fan speed, the tower fan is cycled on and off to maintain the outlet water setpoint temperature.
· If the outlet water temperature remains above the setpoint after minimum fan speed is modeled, then the tower fan is turned on and the model determines the required air flow rate and corresponding fan speed to meet the desired setpoint temperature.
The variable speed tower model utilizes user-defined tower performance at design conditions along with empirical curves to determine tower heat rejection and fan power at off-design conditions. Basin heater operation and makeup water usage are also modeled based on user inputs, tower entering air conditions, and tower operation. The following sections describe how each of these tower performance areas is modeled.
Heat rejection by the variable speed cooling tower is modeled based on the CoolTools correlation, YorkCalc correlation, or user-defined coefficients for either the CoolTools or YorkCalc correlations. These purely-empirical correlations model the tower approach temperature using a polynomial curve fit with a large number of terms and either three or four independent variables.
The CoolTools correlation has 35 terms with four independent variables:
Approach = Coeff(1) + Coeff(2)•FRair + Coeff(3)•(FRair) +
Coeff(4)•(FRair) + Coeff(5)•FRwater +
Coeff(6)•FRair•FRwater + Coeff(7)•(FRair)•FRwater +
Coeff(8)•(FRwater) + Coeff(9)•FRair•(FRwater) +
Coeff(10)•(FRwater) + Coeff(11)•Twb + Coeff(12)•FRair•Twb +
Coeff(13)•(FRair)•Twb + Coeff(14)•FRwater•Twb +
Coeff(15)•FRair•FRwater•Twb + Coeff(16)•(FRwater)•Twb +
Coeff(17)•(Twb) + Coeff(18)•FRair•(Twb) +
Coeff(19)•FRwater•(Twb) + Coeff(20)•(Twb) + Coeff(21)•Tr +
Coeff(22)•FRair•Tr + Coeff(23)•(FRair)•Tr +
Coeff(24)•FRwater•Tr + Coeff(25)•FRair•FRwater•Tr +
Coeff(26)•(FRwater)•Tr + Coeff(27)•Twb•Tr +
Coeff(28)•FRair•Twb•Tr + Coeff(29)•FRwater•Twb•Tr +
Coeff(30)•(Twb)•Tr + Coeff(31)•(Tr) + Coeff(32)•FRair•(Tr) +
Coeff(33)•FRwater•(Tr) + Coeff(34)•Twb•(Tr) + Coeff(35)•(Tr)
where:
Approach = approach temperature (C) = outlet water temperature minus inlet air wet-bulb temperature
FRair = air flow rate ratio (actual air flow rate divided by design air flow rate)
FRwater = water flow rate ratio (actual water flow rate divided by design water flow rate)
Tr = range temperature (C) = inlet water temperature minus outlet water temperature
Twb = inlet air wet-bulb temperature (C)
Coeff(#) = correlation coefficients
If the user selects Tower Model Type = CoolToolsCrossFlow, then the 35 coefficients derived for the CoolTools simulation model (Benton et al. 2002) are used and these coefficients are already defined within EnergyPlus as shown in Table. If the user specifies Tower Model Type = CoolToolsUserDefined, then the user must enter a CoolingTowerPerformance:CoolTools object to define the 35 coefficients that will be used by the CoolTools approach temperature correlation.
Coefficient Number (r)2-3 CoolTools YorkCalc Coeff(1) 0.52049709836241 -0.359741205 Coeff(2) -10.617046395344 -0.055053608 Coeff(3) 10.7292974722538 0.0023850432 Coeff(4) -2.74988377158227 0.173926877 Coeff(5) 4.73629943913743 -0.0248473764 Coeff(6) -8.25759700874711 0.00048430224 Coeff(7) 1.57640938114136 -0.005589849456 Coeff(8) 6.51119643791324 0.0005770079712 Coeff(9) 1.50433525206692 -1.342427256E-05 Coeff(10) -3.2888529287801 2.84765801111111 Coeff(11) 0.02577861453538 -0.121765149 Coeff(12) 0.18246428931525 0.0014599242 Coeff(13) -0.08189472914009 1.680428651 Coeff(14) -0.21501000399629 -0.0166920786 Coeff(15) 0.01867413096353 -0.0007190532 Coeff(16) 0.053682417759 -0.025485194448 Coeff(17) -0.00270968955115 4.87491696E-05 Coeff(18) 0.00112277498589 2.719234152E-05 Coeff(19) -0.00127758497498 -0.06537662555556 Coeff(20) 7.60420796601607E-05 -0.002278167 Coeff(21) 1.43600088336017 0.0002500254 Coeff(22) -0.5198695909109 -0.0910565458 Coeff(23) 0.11733957691051 0.00318176316 Coeff(24) 1.50492810819924 3.8621772E-05 Coeff(25) -0.13589890592697 -0.0034285382352 Coeff(26) -0.15257758186651 8.56589904E-06 Coeff(27) -0.05338438281146 -1.516821552E-06 Coeff(28) 0.00493294869566 N/A Coeff(29) -0.00796260394174 N/A Coeff(30) 0.00022261982862 N/A Coeff(31) -0.05439520015681 N/A Coeff(32) 0.00474266879162 N/A Coeff(33) -0.01858546718156 N/A Coeff(34) 0.00115667701294 N/A Coeff(35) 0.00080737066446 N/A
Similarly, the YorkCalc correlation has 27 terms with three independent variables:
Approach = Coeff(1) + Coeff(2)•Twb + Coeff(3)•Twb + Coeff(4)•Tr +
Coeff(5)•Twb•Tr + Coeff(6)•Twb•Tr + Coeff(7)•Tr +
Coeff(8)•Twb•Tr + Coeff(9)•Twb•Tr + Coeff(10)•LGRatio +
Coeff(11)•Twb•LGRatio + Coeff(12)•Twb•LGRatio +
Coeff(13)•Tr•LGRatio + Coeff(14)•Twb•Tr•LGRatio +
Coeff(15)•Twb•Tr•LGRatio + Coeff(16)•Tr•LGRatio +
Coeff(17)•Twb•Tr•LGRatio + Coeff(18)•Twb•Tr•LGRatio +
Coeff(19)•LGRatio + Coeff(20)•Twb•LGRatio +
Coeff(21)• Twb•LGRatio + Coeff(22)•Tr•LGRatio +
Coeff(23)•Twb•Tr•LGRatio + Coeff(24)•Twb•Tr•LGRatio +
Coeff(25)•Tr•LGRatio + Coeff(26)•Twb•Tr•LGRatio +
Coeff(27)•Twb•Tr•LGRatio
where:
Approach = approach temperature (C) = outlet water temperature minus inlet air wet-bulb temperature
Tr = range temperature (C) = inlet water temperature minus outlet water temperature
Twb = inlet air wet-bulb temperature (C)
LGratio = liquid-to-gas ratio = ratio of water flow rate ratio (FRwater) to air flow rate ratio (FRair)
Coeff(#) = correlation coefficients
If the user selects Tower Model Type = YorkCalc, then the 27 coefficients derived for the YorkCalc simulation model (York International Corp. 2002) are used and these coefficients are already defined within EnergyPlus as shown in Table. If the user specifies Tower Model Type = YorkCalcUserDefined, then the user must enter a CoolingTowerPerformance:YorkCalc object to define the 27 coefficients that will be used by the YorkCalc approach temperature correlation.
The approach temperature correlations for the CoolTools and YorkCalc simulation models are valid for a range of conditions defined in Table. If the user defines their own model coefficients (CoolingTowerPerformance:CoolTools or CoolingTowerPerformance:YorkCalc), then they must also define in that same object the range of conditions for which the model is valid. For all of these correlation variables, the program issues warnings if the actual values are beyond the minimum/maximum values specified for the correlation being used. For inlet air wet-bulb temperature and water mass flow rate ratio, the values of these variables used in the calculation of approach temperature are limited to be within the valid minimum/maximum range. For approach, range, and liquid-to-gas ratio the warnings are issued if the values are beyond the specified minimum/maximum range but the actual values are still used. The warnings issued do not necessarily indicate a poor estimate of tower performance at the condition(s) which caused the warning, but are provided to identify conditions outside the defined correlation limits. Exceeding the defined limits by a small amount may not introduce significant errors, but large deviations may be problematic. It is for this reason that we recommend using a very broad range of cooling tower performance data (i.e., data covering the entire range expected during the simulation) when generating user-defined coefficients for the variable speed tower model.
Minimum and Maximum Limits for Approach Temperature Correlation Variables
Independent Variable Limit CoolTools YorkCalc
Minimum Inlet Air Wet-Bulb Temperature -1.0°C -34.4°C
Maximum Inlet Air Wet-Bulb Temperature 26.7°C 26.7°C
Minimum Tower Range Temperature 1.1°C 1.1°C
Maximum Tower Range Temperature 11.1°C 22.2°C
Minimum Tower Approach Temperature 1.1°C 1.1°C
Maximum Tower Approach Temperature 11.1°C 40°C
Minimum Water Flow Rate Ratio 0.75 0.75
Maximum Water Flow Rate Ratio 1.25 1.25
Maximum Liquid-to-Gas Ratio N/A 8.0
The approach temperature correlation(s) used to simulate cooling tower heat rejection are based on water and air flow rate “ratios” and are not directly dependent on the size of the tower or the actual air and water flow rates through the tower. However, the model correlations are developed based on a reference condition. For Model Types “CoolToolsCrossFlow” and “YorkCalc”, the reference condition is a water flow rate of 0.000043 m/s per kW of heat rejected (2.4 gal/min per ton of heat rejected) with 25.6C (78F) enter air wet-bulb temperature, 35C (95F) hot water inlet temperature, and 29.4C (85F) cold water outlet temperature. The reference condition may be different if the user defines tower model coefficients using CoolingTowerPerformance:CoolTools or CoolingTowerPerformance:YorkCalc.
Due to the inherent reference condition used to generate the tower performance curves, the water flow rate at the reference condition must be determined using the design performance information specified by the user and the tower model’s approach temperature correlation. This is done by using the model’s approach temperature correlation (described earlier in this section) to calculate the water flow rate ratio which yields the user-defined design approach temperature based on an air flow rate ratio of 1.0 (FR = 1.0), the design inlet air wet-bulb temperature, and the design range temperature. The calculated approach temperature (using the model correlation) must satisfy the following two equations:
where:
= design outlet water temperature (C)
= design inlet air wet-bulb temperature (C)
= design approach temperature (C)
= design range temperature (C)
= air flow rate ratio (actual air flow rate divided by design air flow rate)
The water flow rate ratio used in the approach temperature correlation which satisfies these two equations is the ratio of the design water flow rate (specified by the user) to the water flow rate at the reference condition. This ratio is used to calculate the reference water volumetric flow rate, which is then used throughout the simulation to determine the actual water flow rate ratio used in the approach temperature correlation for each simulation time step.
where:
= water volumetric flow rate at the reference condition (m/s)
= design water volumetric flow rate specified by the user (m/s)
= design water flow rate divided by the reference water flow rate
The cooling tower seeks to maintain the temperature of the water exiting the cooling tower at (or below) a setpoint. The setpoint temperature is defined by the field “Condenser Loop Temperature Setpoint schedule or reference” for the CondenserLoop object. The model simulates the outlet water temperature in four successive steps:
· The model first determines the tower outlet water temperature with the tower fan operating at maximum speed. If the outlet water temperature is above the setpoint temperature, the fan runs at maximum speed.
· If the outlet water temperature with maximum fan speed is below the setpoint temperature, then the model next determines the impact of “free convection” (water flowing through tower with fan off). If the exiting water temperature based on “free convection” is at or below the setpoint, then the tower fan is not turned on.
· If the outlet water temperature remains above the setpoint after “free convection” is modeled, then the tower fan is turned on at the minimum fan speed (minimum air flow rate ratio) to reduce the leaving water temperature. If the outlet water temperature is below the setpoint at minimum fan speed, the tower fan is cycled on and off to maintain the outlet water setpoint temperature.
· If the outlet water temperature remains above the setpoint after minimum fan speed is modeled, then the tower fan is turned on and the model determines the required air flow rate and corresponding fan speed to meet the desired setpoint temperature.
For each simulation time step, the model first calculates the outlet water temperature with the tower fan operating at maximum speed (FRair = 1.0). The calculated approach temperature (using the correlations described above), inlet air wet-bulb temperature (weather data), and range temperature are used to determine the tower outlet water temperature as follows:
where:
= tower outlet water temperature at maximum fan speed (C)
= tower inlet air wet-bulb temperature (C)
= approach temperature at current operating conditions (C)
= range temperature at current operating conditions (C)
Note that the approach temperature correlation as described previously is a function of range temperature, so the equations above must be solved iteratively to converge on a solution. If the resulting outlet water temperature is above the desired setpoint temperature, then the fan runs at maximum speed and does not cycle on/off (fan part-load ratio = FanPLR = 1.0 and FR = 1.0).
If the outlet water temperature with maximum fan speed is below the setpoint temperature, then the model next determines the impact of “free convection” (water flowing through tower with fan off). In the free convection regime, the outlet water temperature is calculated using a fraction of the water temperature difference through the tower when the fan is at its maximum speed. This fraction is defined by the user (Fraction of Tower Capacity in Free Convection Regime).
where:
= tower outlet water temperature in free convection regime (C)
= tower inlet water temperature (C)
= fraction of tower capacity in free convection regime (user specified)
If the outlet water temperature in the free convection regime is below the setpoint temperature, the tower fan is not turned on and the fan part-load ratio is set equal to 0. In addition, the air flow rate ratio through the tower is assumed to be equal to the fraction of tower capacity in the free convection regime.
where:
= fan part-load ratio
= fan part-load ratio in free convection regime
= air flow rate ratio in free convection regime
If the outlet water temperature in the free convection regime is above the setpoint temperature, then the fan is turned on at the minimum fan speed (minimum air flow rate ratio, FR , entered by the user) and the outlet water temperature is calculated as the inlet air wet-bulb temperature plus the calculated approach temperature:
where:
= outlet water temperature at minimum fan speed (C)
= air flow rate ratio at the minimum fan speed
If the outlet water temperature at minimum fan speed is below the setpoint temperature, the cooling tower fan cycles on and off at the minimum air flow rate ratio in order to meet the setpoint temperature.
where:
= outlet water setpoint temperature (C)
If the outlet water temperature at minimum fan speed is above the outlet water temperature setpoint, then the cooling tower fan speed (FR) is increased until the calculated approach temperature produces the required outlet water temperature to meet the setpoint.
(i.e., fan does not cycle on/off)
When the cooling tower fan is operating, fan electric power is calculated based on the air flow rate ratio required to meet the above conditions. If the user has entered a fan power curve object (cubic curve), the output of that curve is multiplied by the design fan power. Otherwise, tower fan power is assumed to be directly proportional to the cube of the air flow rate ratio. In either case, the fan part-load ratio is applied to account for times when the tower fan cycles on/off to meet the setpoint temperature. Fan energy consumption is calculated each simulation time step.
If FanPowerCurveObject is defined, then:
Else:
where:
= name of fan power ratio as a function of air flow rate ratio curve
= tower fan electric power (W)
= tower fan electric consumption (J)
= output of FanPowerCurveObject evaluated at the operating air flow rate ratio (FR)
= design fan power at design (maximum) air flow through the tower (W)
TimeStepSys = HVAC system simulation time step (hr)
Calculations are also made to estimate the electric power input to the tower basin heater. A schedule may be used to disable the basin heater during regular maintenance periods or other time periods (e.g., during summer). If a schedule is not provided, the basin heater is assumed to be available the entire simulation time period. The basin heater operates when it is scheduled on, the outdoor air dry-bulb temperature is below the basin heater setpoint temperature, and the cooling tower is not active (i.e., water is not flowing through the tower). The user is required to enter a basin heater capacity (watts per degree Kelvin) and a heater setpoint temperature (C) if they want to model basin heater electric power.
P_heater_basin =0.0
IF (WaterNotFlowingThroughTower) THEN
IF (Scheduleheater_basin is Defined) THEN
IF (CAPheater_basin > 0 AND Scheduleheater_basin = ON) THEN
P_heater_basin = MAX(0.0,CAP_heater_basin*(T_setpoint_basin-T_db_outdoor))
ENDIF
ELSE
IF (CAPheater_basin > 0) THEN
P_heater_basin = MAX(0.0,CAP_heater_basin*(T_setpoint_basin-T_db_outdoor))
ENDIF
ENDIF
ENDIF
where:
= tower basin heater electric power (W)
= tower basin heater electric consumption (J)
T = basin heater setpoint temperature (C)
T = outdoor air dry-bulb temperature (C)
CAP = basin heater capacity (W/K)
Schedule = basin heater schedule (schedule value > 0 means ON)
ASHRAE 1999. HVAC1 Toolkit: A Toolkit for Primary HVAC System Energy Calculations. Atlanta: American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.
Benton, D.J., Bowman, C.F., Hydeman, M., Miller, P. 2002. An Improved Cooling Tower Algorithm for the CoolTools Simulation Model. ASHRAE Transactions, Vol. 108, Part 1, pp.760-768.
Bourdouxhe, J.P., M. Grodent, J. Lebrun and C. Silva. 1994. Cooling tower model developed in a toolkit for primary HVAC system energy calculation: part 1. Proceedings of the fourth international conference on system simulation in buildings, Liege (Belgium), December 5-7, 1994.
Incropera, F.P. and D.P. DeWitt. 1981. Fundamentals of Heat Transfer. New York: John Wiley & Sons.
Merkel, F. 1925. Verduftungskuhlung. VDI Forschungarbeiten, No 275, Berlin.
Rosaler, Robert C. 1995. Standard Handbook of Plant Engineering, 2 Ed. New York, NY: McGraw-Hill, pp. 6-36-37.
Scheier, L. 2013. Personal communication.
York International Corporation, 2002. “YORKcalc Software, Chiller-Plant Energy-Estimating Program”, Form 160.00-SG2 (0502).
Cooling Towers with Multiple Cells[LINK]
Many towers are constructed to be capable of being grouped together to achieve the desired capacity. Thus, many cooling towers are assemblies of two or more individual cooling towers or “cells.” The number of cells they have, e.g., an eight-cell tower, often refers to such towers.
For the operation of multi-cell towers, the first step is to determine the number of cells n, which will be operating during the timestep using the calculation logic from DOE-2.1E.
The maximum and minimum flow rates per cell are determined according to the input fractions (Minimum Water Flow Rate Fraction: and Maximum Water Flow Rate Fraction: ) as follows:
where is the design water flow rate through the entire cooling tower.
Then, we determine the minimum and maximum number of cells that can operate with this water flow rate:
where n is the total number of cells of the tower, and is the water flow rate to the tower.
The number of cells operatingn is set accordingly:
If the Cell Control method is MinimalCell,
n =
If the Cell Control method is MaximalCell,
n =
Finally, the water mass flow rate per cell ( will be:
Then we simulate the performance of one cell with this flow rate per cell (calling the SimSimpleTower subroutine for single and two speed cooling tower objects). As we assume that each cell is identical, the UA of one cell is calculated dividing the UA of the whole tower (obtained from the input or from the auto sizing calculations). The air flow rate per cell is also equal to the one of the whole tower divided by the number of cells operating:
At the end, the total fan power of the tower operating with a certain number of cells is given by:
If the cells operating do not meet the loads, we increase the number of cells if spare cells are available and the water flow through each cell is within the user specified minimum and maximum water flow rate fractions range. This is an iteration process.
Cooling Tower Makeup Water Usage[LINK]
Makeup water use for all types of cooling towers is made up of three components: evaporation, drift, and blowdown. The first is the amount of water evaporated to reduce the water’s temperature as it passes through the cooling tower. There are two methods that evaporation makeup water can be modeled in EnergyPlus. The first method assumes that the tower outlet air conditions are saturated (which may not always be the case for certain operating conditions). For this “Saturated Exit” mode, the enthalpy of the tower’s outlet air is calculated as the inlet air enthalpy plus the water side heat transfer divided by the air mass flow rate through the tower.
where:
= water-side heat transfer (W)
= mass flow rate of water through the tower (kg/s)
= specific heat of water (W/kg-K)
= saturated outlet air enthalpy (J/kg)
= inlet air enthalpy (J/kg)
= mass flow rate of air through the tower (kg/s)
The saturation temperature and humidity ratio are then calculated for the tower’s outlet air.
where:
= saturated outlet air temperature (C)
= EnergyPlus psychrometric function, returns saturation temperature given enthalpy and barometric pressure
= outdoor barometric pressure (Pa)
= saturated outlet air humidity ratio (kg/kg)
= EnergyPlus psychrometric function, returns humidity ratio given dry-bulb temperature and enthalpy
The makeup water quantity required to replenish the water lost due to evaporation is then calculated as the product of the air mass flow rate and the difference between the entering and leaving air humidity ratio divided by the density of water.
where:
= makeup water usage due to evaporation (m/s)
= mass flow rate of air through tower (kg/s)
= humidity ratio of tower inlet air (kg/kg)
= density of water evaluated at the tower inlet air temperature (kg/m)
The second method available for calculating water makeup for evaporation is for the user to provide a value for a loss factor. The evaporation loss is then calculated as a fraction of the circulating condenser water flow and varies with the temperature change in the condenser water. The value provided by the user is in units of percent-per-degree Kelvin. The evaporation rate will equal this value times each degree Kelvin of temperature drop in the condenser water. Typical values are from 0.15 to 0.27 [percent/K]. The default is 0.2. The rate of water makeup for evaporation is then calculated by multiplying this factor times the condenser water flow rate and the temperature decrease in the condenser water flow rate. For evaporative fluid coolers, a numerical value of loss factor can be entered in the same manner as for cooling towers. If this field is blank, an empirical correlation will be used to calculate the value based on current outdoor dry bulb temperature and relative humidity. The following correlation from Qureshi and Zubair (2007) is used to calculate the loss factor:
where:
= relative humidity of inlet air
= Dry-bulb temperature of inlet air
Additional makeup water usage is modeled as a percentage of design water flow rate through the tower to account for drift, and as a scheduled flow rate to model blowdown. Drift is water loss due to the entrainment of small water droplets in the air stream passing through the tower. Drift is defined by the model user as a percentage of the tower’s design water flow rate, and is assumed to vary with tower air flow rate ratio as follows:
where:
= makeup water usage due to drift (m/s)
= design (volumetric) water flow rate (m/s)
= percent of design water flow rate lost to drift at the tower design air flow rate
= ratio of actual air flow rate to tower design air flow rate
Blowdown is water flushed from the basin on a periodic basis to purge the concentration of mineral scale or other contaminants. There are two ways that blowdown is calculated in EnergyPlus. Blowdown water rates can be scheduled so that we have:.
If ScheduleBlowdown is defined, then:
Else:
where:
= makeup water usage due to blowdown (m/s)
ScheduleValue = blowdown schedule value for the time step being simulated (m/s)
The second (and default) way that blowdown can be calculated is to assume that blowdown water is continually introduced at a rate that will provide a constant concentration ratio. As water evaporates it leaves behind minerals and the like causing the concentration of water impurities to be higher in the tower than in the makeup water. Acceptable concentration ratios are in the range of 3 to 5 depending on the purity of the make up water. Water lost as drift does not evaporate and decrease the water needed for blowdown. Using the “Concentration Ratio” method, the rate of blowdown can be calculated using:
where,
is the concentration ratio or the ratio of solids in the blowdown water to solids in the makeup water.
The tower makeup water consumption (m) for each simulation time step is calculated as the sum of the individual components of makeup water usage multiplied by the simulation time step in hours and the conversion for hours to seconds (3600 sec/hr). Makeup water usage is only calculated when the cooling tower is active and water is flowing through the cooling tower.
where:
= tower makeup water consumption (m)
Robert C. Rosaler. 1995. Standard Handbook of Plant Engineering, 2 Ed. McGraw-Hill, New York, NY, pp 6-36-37.
Quereshi, B.A. and S.M.Zubair. 2007. Prediction of evaporation losses in evaporative fluid coolers, Applied Thermal Engineering 27 pp. 520-527
One and Two Speed Fluid Coolers[LINK]
The input objects FluidCooler:SingleSpeed and FluidCooler:TwoSpeed provide models for dry fluid coolers. Fluid cooler’s performance is modeled using effectiveness-NTU relationships for cross flow heat exchanger with both streams unmixed. The model can be used to simulate the performance of both single speed and two speed mechanical-draft fluid coolers. For part-load operation, the model assumes a simple linear interpolation between two steady-state regimes without accounting for any cycling losses.
The expression for fluid cooler effectiveness is as follows:
Where
= heat exchanger effectiveness
and
;
The first fluid is water and the second fluid is air entering the heat exchanger at temperature and specific heat . The heat exchanger is characterized by a single parameter, its overall heat transfer coefficient-area product UA.
When the user selects the nominal capacity method, the UA is calculated as follows:
The model inputs (other than the UA) and the fluid cooler load that it must meet are specified at design conditions. Then the fluid cooler model converges to a UA value, using the regulafalsi method that will enable it to meet the design fluid cooler load given at the specified inputs.
Method for Calculating Steady-State Exiting Water Temperature[LINK]
The objective of the fluid cooler model is to predict the exiting water temperature and the fan power required to meet the exiting water setpoint temperature. The exiting water temperature is calculated based on an energy balance that assumes that the energy absorbed by the air is equivalent to the energy removed from the water. The procedure for calculating the steady-state, exiting air dry-bulb temperature is outlined below.
With the overall heat transfer coefficient-area product for the fluid cooler calculated by the nominal capacity information entered by the user, the effectiveness of the heat exchanger is then calculated as:
The heat transfer rate is then calculated as follows:
Then the outlet air dry-bulb and outlet water temperature are calculated:
= inlet water temperature, C
= outlet water temperature, C
= dry-bulb temperature of the inlet air, C
= dry-bulb temperature of the outlet air, C
Calculating the Actual Exiting Water Temperature and Fan Power[LINK]
The previous section describes the methodology used for calculating the steady-state temperature of the water leaving the fluid cooler. This methodology is used to calculate the exiting water temperature with the fluid cooler fans operating (including low and high fan speed for the two-speed fluid cooler). The exiting water temperature calculations use the fluid flow rates (water and air) and the Nominal capacity information entered by the user for each regime.
The fluid cooler model seeks to maintain the temperature of the water exiting the fluid cooler at (or below) a setpoint. The setpoint schedule is defined by the field “Loop Temperature Setpoint Node or reference” for the CondenserLoop object.
The fluid cooler fans are turned on to reduce the exiting water temperature to the setpoint. The model assumes that part-load operation is represented by a simple linear interpolation between two steady-state regimes (e.g., Fluid cooler fans on for the entire simulation time step and fluid cooler fans off for the entire simulation time step). Cyclic losses are not taken into account. If the outlet water temperature is less than the set-point then the fraction of time for which the fluid cooler must operate to meet the set-point is calculated by using the following equation:
Where
= exiting water setpoint temperature, C
= exiting water temperature with all fluid cooler fans off, C
= exiting water temperature with all fluid cooler fans on, C
The average fan power for the simulation time step is calculated by multiplying by the steady-state fan power specified by the user.
The calculation method for the two-speed fluid cooler is similar to that for the single-speed fluid cooler example described above. The model first calculates the steady-state exiting water temperature with the fluid cooler fans at low speed. If the exiting water temperature at low fan speed is below the setpoint temperature, then the average fan power is calculated based on the result of previous equation and the steady-state, low speed fan power specified by the user. If low-speed fan operation is unable to reduce the exiting water temperature below the setpoint, then the fluid cooler fans’ speed is increased to high speed and the steady-state exiting water temperature is calculated. If this temperature is below the setpoint, then a modified version of previous equation is used to calculate runtime at high fan speed:
where
= exiting water setpoint temperature, C
= exiting water temperature with fluid cooler fans at low speed, C
= exiting water temperature with fluid cooler fans at high speed, C
The average fan power for the simulation time step is calculated for the two-speed fluid cooler as follows
|
2022-05-28 17:10:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6489959359169006, "perplexity": 2156.4074670093446}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016949.77/warc/CC-MAIN-20220528154416-20220528184416-00085.warc.gz"}
|
http://www.pinucciaspiezia.it/ezhh6d/53ogni.php?ebd=brdf-code
|
## Brdf Code
The parameters are: The type of Kappa parameterisation (one of the GroundReflectance. 0 to reduce computational expense. MERL BRDF Database version 2. These surfaces are modelled in 6S as a circular target surrounded by an environment of a different reflectance. The bidirectional reflectance distribution function (BRDF; (,) ) is a function of four real variables that defines how light is reflected at an opaque surface. KappaXXX constants) The phase function to use (one of the GroundReflectance. Cook-Torrance BRDF is a function that can be plugged into the rendering equation as $$f_r$$. disini saya akan menegnalka sedikit tentang vray BRDF yang sering saya gunakan untuk setting material di video video sebelumnya #sketchup #vray #tutorial [ With Code ] - Duration: 2. First operational BRDF, albedo nadir reflectance products from MODIS Crystal B. Your story matters Citation Matusik, Wojciech, Hanspeter Pfister, Matthew Brand, and Leonard McMillan. Light rays will be sent either according to the surface BRDF, or in the direction of known light sources. [2001] measured skin BRDF as a func-tion of wavelength for a sample population of 22 people and noted that five Gaussian basis functions reproduce the data well. Evaluation and sampling code. The source code of this SVG is valid. TABULATION AND COMPLETION OF MEASURED BRDF DATA FOR LIGHTING COMPUTATIONS Dumont E. glTF minimizes both the size of 3D assets, and the runtime processing needed to unpack and…. I'm working on microfacet brdf model for my renderer these days, noticing that it is more than necessary to provide a separate sampling method for microfacet brdf instead of using the default one, which is usually used for diffuse like surfaces and highly inefficient for brdf with spiky shape, such as mirror like surfaces. The diffuse component is zeroed. Radiometry, BRDF, Shape from Shading at risk for violating copyright policies in Northwesterns Student Conduct Code. SCATMECH is an object-oriented C++ class library developed to distribute models for light scattering applications. Figure 19 is a plot of the BRDF values measured from the actual gloss tiles and the BRDF of a. In this chapter we will understand working of Vertex, Pixel shader and Rasterizer. The code should be compliant with Visual Studio Express (multi-threading using Windows API rather than OpenMP), and should work fine on Linux (although multi-threading will be deactivated). , 1997), and is well. data , where is a file name used to label the data is the frame number of a source image {rgb} is r for the red channel, g for the green channel, and b for the blue channel. Figure 2: Three BRDF measurement devices, leading to our image-based approach (c). Moschops I can force a similar warning if I leave the. It corrects for the effects of atmospheric gases, aerosols, and thin cirrus clouds. Improving ground cover monitoring for wind erosion assessment using MODIS BRDF parameters Author links open overlay panel Adrian Chappell a Nicholas P. A Mitsuba plugin and Python routines for plotting are also provided. 这一次,我们借用上一篇使用渐变纹理着色的 code 【 BasicMyHalfLessDiffuse 】,我们要把之前的 inline float4 LightingHalfBRDFDiffuse(SurfaceOutput s,fixed3 lightDir,fixed atten) 函数修改为 4 个参数. If not, what is the simplest and safest way to implement that, I have no problems changing the GEANT4 source code directly to fulfill my goals (e. PBR GLSL SHADER. Unlike the metallic and specular workflow the “Disney” workflow use a completly different code path, so the normal distribution, fresnel and visibility terms can’t be changed. The geometry used by the BRDF is shown below in Figure 1. A texture map stores reflectance, or other attributes, that vary spatially over a 2D surface. This paper introduces a new frequency space paradigm for prefiltering and rendering environment mapped images with general isotropic BRDFs. The geometry used by the BRDF is shown below in Figure 1. Radiometry, BRDF, Shape from Shading at risk for violating copyright policies in Northwesterns Student Conduct Code. Figure 2: Three BRDF measurement devices, leading to our image-based approach (c). I BRDF is de ned by the ratio of. Models: BRDF BRDF (Bidirectional reflectance distribution function) Assumes that light enters and leaves at the same point Incoming direction w i, outgoing direction w o BRDF: Ratio of reflected radiance exiting along w o to the irradiance incident to w i Physically plausible if… Always positive Reciprocal: w i, w o. Part 6: Anisotropic BRDF. I'd really like to understand the math behind the Cook-Torrance BRDF rather than just copy/pasting the GLSL code. 上一篇把BRDF换成了更为常见的blinn-phong,推出了在上面进行importance sampling的公式,以及如何把结果存到一张查找表LUT上。 更进一步的做法是把这张LUT拟合成一个曲面,这样就可以在shader中直接计算,省去一次纹理采样。. AQUA code (actually AQUA + TERRA code) is delivered and baselined: Gridded surface reflectances from both the TERRA MODIS sensor and the AQUA MODIS sensor are used to retrieve the 16-day BRDF parameters and compute albedos at local solar noon. Streamline the workflow between optical and mechanical engineers. SIGGRAPH), 2004. Download Policy: Content on the Website is provided to you AS IS for your information and personal use and may not be sold / licensed / shared on other websites without getting consent from its author. This is the formula that was published in the ShaderX 7 article [2] and the one that shipped with Velvet Assassin (see here for the code). Same for Blinn and Blinn BRDF. Source code for generation of textures for use with BRDF-based lighting is included. because it's baked into the BRDF. Di use BRDF for Skin To simulate sub-surface scattering e ects present in skin, we utilize the pre-integrated skin shading technique presented in Penner and Borshukov [2011]. student at UC San Diego, advised by Manmohan Chandraker. The calculation time and accuracy of the five- and seven-parameter models are compared. This is related to a much weaker dependence of model column water on the screen-air temperature at both monthly and annual timescales, as observed. References and links 1. Solving this equation in the main goal of Monte Carlo Path Tracing. Use reflection BSDF for glossy reflections when roughness is 0. 2 BRDF Models and Problems I have given a small introduction about the idea behind BRDFs. brdf - Disney BRDF File. brdf 前言现实世界中的表面绝大多数都是凹凸不平的。 在这种情况下,可以把表面看成是大量朝向各异的微小光学平面的集合,我们肉眼可见的每个点都包含了很多个这样的微小光学平面。. In short, the method of moments involves equating sample moments with theoretical moments. Obukhov Institute of Atmospheric Physics, Pyzhevsky per. The use of normal maps comes with important drawbacks though: the appearance is dark overall due to back-facing normals and importance sampling is suboptimal, especially when the micro-surface is very rough. BRDF correction is particularly important for creating long-term time series of satellite data, and to allow comparison between measurements of NDVI and biomass from different sensors. The BRDF parameter determines the type of the highlights and glossy reflections for the material. through atmospheric propagation codes, such as MODTRAN. As a result, we. This parameter has an effect only if the reflection color is different from black and reflection glossiness is different from 1. We present a novel mapping of the BRDF space, allowing for extraction of descriptive principal components from measured databases, such as the MERL BRDF database. quantitatively described by the bi-directional reflectance distribution function (BRDF) [1]. The code has been tested and runs properly on Visual Studio 2008, 2010, and now compiles and runs properly with G++/GCC as well. It can also be applied to determine the power delivered to a target from a Directed Energy weapon. First operational BRDF, albedo nadir reflectance products from MODIS Crystal B. But according to the figure on page 4, a specular sample has a BRDF favoring a direction which is highly dependent on the incoming ray, but the $\phi$ angle is decided by uniformly sampling a circle, disregarding completely the incoming direction, which to me makes no sense. Only a limited number of such models have been mentioned in the literature. Physically plausible BRDF can give a different material appearance for a surface compare to traditional lighting model. In addition to its own elaborate validation process, 6SV1 is participating in a joint vector scalar RT code comparison project performed by the MODIS Atmo-spheric Correction Group in collaboration with the NASA Goddard Space Flight Center. Follow Follow @Er_Brdf Following Following @Er_Brdf Unfollow Unfollow @Er_Brdf Blocked Blocked @Er_Brdf. Subsurface parameter from 0 to 1. Reflectance Measurements of Human Skin Category: research Abstract We use a novel image-based technique to measure the directional reflectance of living human skin. Be the first to write a review. I added this BRDF in my renderer as a new workflow. This repository contains a self-contained implementation of evaluation and sampling code for the paper. However, there is a lot of code, and complexity in their example, and it is a little difficult in trying to break it down. Song and M. It contains all the quality information for the corresponding 16-day MCD43A3 Albedo and the MCD43A4 Nadir-BRDF (NBAR) products. Source code is provided to read back the BRDF file format. You can specify these directions as three vectors[x,y,z] or two vectors [Theta Phi (in spherical coordinates). , 1997) allows a variety of kernel-driven semiempirical BRDF models (Roujean et al. Efficient isotropic BRDF measurement. Efficient implementation. I implemented the Ward BRDF. a feasible modeling approach. Plašākais 4G pārklājums un klientu serviss Latvijā. The guys at ReadyAtDawn studios (makers of The Order 1886) have also uploaded the shader code for us for the specular microfacet BRDF which you can find here. A more general BRDF describing also intermediate cases is the plasma dispersion function alternatively the Voigt function. SELLER & PAYMENT INFORMATION The product must be in its original condition, including UPC bar code. So far, we've been dealing with spherical harmonics fairly abstractly. The BRDF calls are not executed immediately; instead they are collected during the shader execution into a combined BRDF, which is evaluated after the shader execution completes. Sign in Sign up Instantly share code, notes, and. Jeffrey Dank and David Allen. An Overview of BRDF Models Rosana Montes and Carlos Ureña Dept. An optimizing modeling method, the artificial immune network genetic algorithm, is used to fit the BRDF measurement data over a wide range of incident angles. It computes it on its own, as its fiddles with the normal. The method is inspired by Montecarlo Importance Sampling, where a biasing function modifies uniform input samples into samples that are non-uniformly distributed according to the shape of the function. Among the list of BRDF model candidates More Xplore Articles. appleseed is an open source, physically-based global illumination rendering engine primarily designed for animation and visual effects. These surfaces are modelled in 6S as a circular target surrounded by an environment of a different reflectance. These programs were developped to exploit luminance or BRDF data from the EZ-Contrast instrument developped by Eldim. h file to reduce some computational expense. The results show that great care is needed in parameterizing the BRDF in this way and that variations in the brightness of the understory - here represented by grasses, forbs, subshrubs, biotic crusts and bare soil - has an important effect on modeled estimates of bidirectional reflectance in the red wavelengths. The BRDF data presented in figures 3-16 and 3-17 are the average of the all multple locations on the mirror. f - This code computes the Legendre expansion coefficients for polydisperse spherical particles using the standard Lorenz-Mie theory The codes must be run in the following sequence: spher. NET: musiques mutines et mutantes (webzine) Information : Title, Meta Keywords and Meta Description are all HTML tags used for your site to be recognized by, and to give information to search engines. Bibtex entry for this abstract Preferred format for this abstract (see Preferences ). Experimental Analysis of BRDF Models - Supplemental Addy Ngan, Frédo Durand and Wojciech Matusik This document is supplemental to the paper titled Experimental Analysis of BRDF Models, published at Euro-graphics Symposium on Rendering 2005. KappaXXX constants) The phase function to use (one of the GroundReflectance. It contains all the quality information for the corresponding 16-day MCD43A3 Albedo and the MCD43A4 Nadir-BRDF (NBAR) products. But now we can get down to a concrete use and some concrete code. File:BRDF mirror. What type of data is given to these shaders and in what format processed data comes out. UNITY_BRDF_PBS_LIGHTMAP_INDIRECT is defined in UnityPBSLighting. zip ' package (9KB): We provide the point light source positions and camera intrinsic matrix in this zip package for evaluating photometric stereo with point. 2 ()Location: Kawasaki Japan ()Registed: 2017-07-17 (2 years, 80 days) Ping: 203 ms; HostName: 202. Enter the Module Code and Title 3. Sign up A C++ Microfacet BRDF Fitting and Rendering Library. S specifies the direction to the light source. [QUOTE=thedarkknight1717;1285645]May I ask for a whole project source code of blinn phong or other brdf model with glsl and opengl? I want a runnable project to study about that as many shader code given on the web is unable to compile with my environment. 간단하게 말하자면 BSSRDF (Bidirectional Surface Scattering Reflectance Distribution Function)은 8차함수이며,. CNRM atlas data N. Ientilucci and Michael Gartley Digital Imaging and Remote Sensing Laboratory, Rochester Institute of Technology,. Printer-friendly version. The cubemap parameterization is the main use due to its hardware efficiency and this post is focus on ambient specular lighting with cubemap. Ensure all technical terms are correct 5. I assume that you are familiar with the ideas presented in the previous article Physically Based Rendering; if not, please consider starting from there before approaching this more advanced topic. A variant of this approach is to use a code as ours, with the information included in this paper, in order to adapt the code to the user’s specific problem. An "instance" of a BRDF is meant to be located at a certain point and to a certain viewing direction. The surface reflectance is inverted with the help of a radiative transfer model -code site- using atmospheric inputs taken from NCEP (ozone, pressure) or directly derived the MODIS data (aerosol, water vapor). this view direction needs to be calculated per-pixel, which is why you see it in the fragment shader, but it does not need to be calculated per light, so it would be a waste to. Director, State Key Lab of CAD&CG Zhejiang University. ] 1 Introduction Following our success with physically-based hair shading on Tangled [27], we began considering physically-based shading models for a broader range of materials. Since a periodic structure yields delta-function-like peaks in the BRDF, the BRDF cannot be strictly evaluated for such a structure. Also I multiplied the result with _LightColor0. The method is inspired by Montecarlo Importance Sampling, where a biasing function modifies uniform input samples into samples that are non-uniformly distributed according to the shape of the function. Improving ground cover monitoring for wind erosion assessment using MODIS BRDF parameters Author links open overlay panel Adrian Chappell a Nicholas P. Source code is provided to read back the BRDF file format. The VNP43IA2N product is produced daily using 16-day VIIRS data (i. Read "Variability in surface BRDF at different spatial scales (30 m–500 m) over a mixed agricultural landscape as retrieved from airborne and satellite spectral measurements, Remote Sensing of Environment" on DeepDyve, the largest online rental service for scholarly research with thousands of academic publications available at your fingertips. BRTF, All these acronyms refer to the same quantity, and share the same mathematical definition. The photos in A and B are both of a solar cell on gold Kapton. For a detailed review of the state of the art in BRDF research prior to 2001, the reader is referred to Siggraph course notes [Ashikhmin et al. rgb) which in effect makes the shader also take intensity and color from the lightsource, which may or. Download Policy: Content on the Website is provided to you AS IS for your information and personal use and may not be sold / licensed / shared on other websites without getting consent from its author. Skin Color and Texture Models: Biophysical studies show that skin appearance is largely dependent on wavelength [Igarashi et al. The introduction of BRDF data into signature analysis raises a number of technical questions. Schott Rochester Institute of Technology, Chester F. brdf files consist of a set of parameters and a BRDF function written in GLSL. In the last section, the calculation results using the new seven-parameter model are compared with the five-parameter model. 25 Improvement of cloud's spectral reflection simulation using MODIS cloud products Seung-Hee Ham and Byung-Ju Sohn* School of Earth and Environmental Sciences, Seoul National University, Seoul, South Korea. The objective of the tests reported herein was to measure the tile BRDF over the range of 2. Since a BRDF is a 4D function, ideally it would be nice to be able to use a hardware accelerated 4D lookup table (i. 01 and a diffuse component that is approximately half of the specular. In this chapter we will understand working of Vertex, Pixel shader and Rasterizer. , which were used to render the images shown here and in the 1999 paper. , 1997) allows a variety of kernel-driven semiempirical BRDF models (Roujean et al. Di use BRDF for Skin To simulate sub-surface scattering e ects present in skin, we utilize the pre-integrated skin shading technique presented in Penner and Borshukov [2011]. Follow Follow @Er_Brdf Following Following @Er_Brdf Unfollow Unfollow @Er_Brdf Blocked Blocked @Er_Brdf. A Comparison of Four BRDF Models Stephen H. Bidirectional Radiance Distribution Function. Professionally I'm a programmer, but my interests both in programming and outside vary quite a bit. It allows you to measure the light distribution contained in the radiation lobe of these materials in photometric and colorimetric terms. The BRDF convolution shader operates on a 2D plane, using its 2D texture coordinates directly as inputs to the BRDF convolution (NdotV and roughness). If the site was up for sale, it would be worth approximately $3,450 USD. Ensure all technical terms are correct 5. Large freshwater fluxes into the Bay of Bengal by rainfall and river discharges result in strong salinity fronts in the bay. glTF™ (GL Transmission Format) is a royalty-free specification for the efficient transmission and loading of 3D scenes and models by applications. NASA Astrophysics Data System (ADS) Kumar, A. A Microfacet-based BRDF Generator Michael Ashikhmin Simon Premoze Peter Shirleyˇ University of Utah www. The bi-directional reflectivity distribution (BRDF) function is a fundamental optical property of materials, characterizing important properties of light scattered by a surface. Evaluation and sampling code. 0 PBR materials metaric-roughness shader for BRDF Explorer - gltf-metallic-roughness. Read "Variability in surface BRDF at different spatial scales (30 m–500 m) over a mixed agricultural landscape as retrieved from airborne and satellite spectral measurements, Remote Sensing of Environment" on DeepDyve, the largest online rental service for scholarly research with thousands of academic publications available at your fingertips. S specifies the direction to the light source. In this chapter we will understand working of Vertex, Pixel shader and Rasterizer. In light scattering theory, ray-tracing is an energy approximation where bundles of energy (rays) are traced throughout their interactions with a rough surface until they leave the surface. of custom bi-directional reflectivity distribution functions in DIRSIG. PR which looks at simplifying BRDF shader code, as well as adding new BRDF's to allow for a wider set of materials. Perhaps the one most relevant to paints, such as CARC. OptiX will decompose the PTX code and replace the OptiX specific intrinsics (e. Please do not switch between text/fonts 4. Georgiev1,* and James J. I BRDF is de ned by the ratio of. (1992) were derived following a LSE fitting procedure (Li et al. quantitatively described by the bi-directional reflectance distribution function (BRDF) [1]. 8: Spatially Varying BRDF (SVBRDF) For more realism, BRDFs can be de ned with additional two parameters for the spatial coordinates, so. Unlike the metallic and specular workflow the "Disney" workflow use a completly different code path, so the normal distribution, fresnel and visibility terms can't be changed. The system handles BRDFs defined in a variety of forms (analytical models, measured data, and simulation data), allows to display their shape, perform fitting, and rendering with different lighting conditions. the BRDF measurement data of typical samples in the UV band. What type of data is given to these shaders and in what format processed data comes out. Schaaf a, *, Feng Gao a,1 , Alan H. BRDF Measurements and Analysis of Retroreflective Materials Laurent Belcour, Romain Pacanowski, Marion Delahaie, Aude Laville-Geay, Laure Eupherte Journal of the Optical Society of America, Optical Society of America , 2014, JOSA A, 31 (12) pdf bib. "Constrain" botton on this application interface inplements the function that does not allow negative model parameters, which are suggested by MODIS BRDF parameter product. Each BRDF model is then implemented as a subclass of the top level BRDF class. A texture map stores reflectance, or other attributes, that vary spatially over a 2D surface. MERL provides this data only for research or academic use. dumont@ifsttar. where: For a BRDF to be physically plausible it must follow the law of energy conservation and must obey the Helmholtz reciprocity principle. Physically based rendering is a catch all term for any technique that tries to achieve photorealism via physical simulation of light. It contains all the quality information for the corresponding 16-day MCD43A3 Albedo and the MCD43A4 Nadir-BRDF (NBAR) products. net is 1 decade 9 years old. Long-term calibration monitoring of the bidirectional reflectance distribution function (BRDF) of Spectralon diffusers in the air-ultraviolet is presented. Barla1;2 and L. 05degree MOD_PR43C3 MCD43C4 archived CMG Nadir BRDF-adj. If the image looks correct with a simple BRDF, now just try it with variations of your other components. The Bidirectional Reflectance Distribution Functions (BRDF) is a four-dimensional function that defines how light is reflected at an opaque surface. That way you can try matching mathematical BRDF's (like the ashikminh, phong. References and links 1. REFLET 180S is a unique stand-alone scattering measurements system. For this reason, model forms of the BRDF, rooted in physics, are very valuable for extending measured data and for use in modeling codes. It is employed in the optics of real-world light, in computer graphics algorithms, and in computer vision algorithms. quantitatively described by the bi-directional reflectance distribution function (BRDF) [1]. 간단하게 말하자면 BSSRDF (Bidirectional Surface Scattering Reflectance Distribution Function)은 8차함수이며,. We used the hemisphere of directions orthographically projected onto the tangent plane to study the BRDF lobe statistics. For real diffuse surfaces, the bi-directional reflectance distribution function (BRDF) is non-Lambertian, and may require a more complex model in ray tracing simulations. MERL provides this data only for research or academic use. LSA is estimated for every clear-sky land pixel. The convolution code is largely similar to the pre-filter convolution, except that it now processes the sample vector according to our BRDF's geometry function and Fresnel-Schlick's approximation:. For the imaged-pixel, three BRDF derived quantities are required: • the BRDF value at the solar and viewing angles, for the reflected direct solar flux calculations;. Nvidia’s whitepaper on the theory + implementation of BRDF is fine. A BRDF is a function describing how much light reflects off a surface towards the eye/camera, given a certain lighting environment. 0 License, and code samples are licensed under the Apache 2. BRDFLab is a modular system to design complex bidirectional reflectance distribution functions (BRDF). Spatial subsets of the land products are available for predefined 7 x 7-km areas centered on the selected sites. You don't give the code for lookup_brdf_val() at all, so beyond that one can only speculate. Fundamentally, the algorithm is integrating over all the illuminance arriving to a single point on the surface of an object. This is done. In the Image Slice, Lit Object, and Lit Sphere windows, you're seeing the first (topmost in the parameters window) enabled BRDF. This video is unavailable. If you use a normalized Phong BRDF lighting in your game , i. Bidirectional Reflectance Distribution Function or BRDF means, in its literal meaning, 2 direction composing how light reflection is distributed on the surface of a material. Inside the code, you. Note that brdf_normalization must be set to False in order to use the original Blinn specular model. GitHub Gist: instantly share code, notes, and snippets. foqint_morel (char *file, float wave[], int32_t nwave, float solz, float senzp, float phi, float chl, float brdf[]). Code will be shared when it is more stable, general ideas and pitfalls are described for now. Please refer to the 'readme. Improved the optics of the clearcoat part by using GTR1 instead of GTR2 for the D term. For details, see the Google Developers Site Policies. LSA is estimated for every clear-sky land pixel. Description. ypoissant wrote:Their principled BRDF is based on Walter et al. Albedo simulations were achieved by using satellite orbital models and the SAIL code to produce synthetic bidirectional reflectance data sets for a range of vegetation canopies. ing the best approximation of the input BRDF that falls within the printer's gamut is the problem known as BRDF gamut mapping. SIGGRAPH 2017 Course: Physically Based Shading in Theory and Practice © DreamWorks Animation 2017. Each question must have the marks it’s worth shown 6. These 2 direction are light direction towards the surface, which is the 'IN' hence they call this 'incident light', and eye-to-surface direction or view direction, which. I didn't downvote, but linking the code, and especially an entire project in a stack overflow question generally means you're using a wrong format. This should be GLSL code # that defines a function called BRDF (although you can # add whatever other functions you want too). Included in the library are models for diffuse surface scattering that predict the bidirectional reflectance distribution function (BRDF), codes for calculating scattering by isolated particles, and codes for reflection, transmission, and diffraction from gratings. What is a BRDF?. [QUOTE=thedarkknight1717;1285645]May I ask for a whole project source code of blinn phong or other brdf model with glsl and opengl? I want a runnable project to study about that as many shader code given on the web is unable to compile with my environment. The BRDF (Figures 7 and 8) of this sample shows the nearly lambertian reflectance at 0 degrees incident illumination, but shows a more pronounced variation between the hot spot and specular reflectance at 65 degrees incident illumination, with the backscattering hotspot relatively. I think what most people are referring to when they're talking about physically based shading, is the model underlying the BRDF. Université Paris-Est, Ifsttar, Paris, France eric. 95 and have a daily income of around$ 0. For example, a transformation may swap two sub-expressions in a BRDF or remove one of them altogether. com! The Web's largest and most authoritative acronyms and abbreviations resource. We also show how handy such an approach is for the eventual end user, whose main concern is the ease with which one. For simple sample shapes (spheres and cylinders) the method requires only a digital camera and a stable light source. You can also just compute Rrs with BRDF correction disabled, if you simply don't want the air-sea interface correction. The first column shows the same color code we use for each parametrization axis. That matrix has all the information necessary to perform different analyses. // Unity BRDF code expect already simplified data by. What I did to solve it was to 1) put the brdf texture wrapmode to clamp, because there seems to be some UV values outside of the 0-1 range when using these lights. While it may complicate the code a bit, it gives us // the flexibility later, to specialize the type of the coordinates into anything we want. Improved the optics of the clearcoat part by using GTR1 instead of GTR2 for the D term. Matcap uses a 2-dimensional lookup table, and the lookup is not really based on either the view or the light direction; just the normal. Anisotropic BRDF From Hertzmann & Seitz, CVPR’03. of a BRDF model that matches the data given as input. James Jafolla, President of Surface Optics Corporation, presented today at the 13th International IR Target and Background Modeling & Simulation Workshop (ITBMS) convened in Banyuls-sur-mer, France. 0 to reduce computational expense. "Microfacet Models for Refraction through Rough Surfaces" GGX BRDF. Source code is provided to read back the BRDF file format. The tile BRDF is input into radiation models to help predict the expected power levels observed by the IR cameras. This is the formula that was published in the ShaderX 7 article [2] and the one that shipped with Velvet Assassin (see here for the code). Physically-Based Shading at Disney by Brent Burley, Walt Disney Animation Studios [Revised Aug 31, 2012. An "instance" of a BRDF is meant to be located at a certain point and to a certain viewing direction. The next step is to sort the resulting variants based on their ac-curacy. A full BRDF is a four-dimensional function, whereas this shader only uses two. *click* We do so because in this space, a GGX microfacet model has a shape that is close to being rotationally symmetric and this permits to restrict the number of statistics to study to only three values: the energy, the. developed the post-processing code necessary to calibrate and ingest these lab measurements into our DIRIG based SSA modeling system. $\endgroup$ - ivokabel May 22 '16 at 15:49. MERL provides this data only for research or academic use. We present a new image-based process for measuring the bidirectional reflectance of homogeneous surfaces rapidly, completely, and accurately. This method differs from previous microfacet-based BRDF models in that it uses a simple shadowing term which allows it to handle very general microfacet distributions while maintaining reciprocity and energy conservation. the BRDF correction in PROBA-V 300m and 1km data. Modify the nl = saturate(dot(normal, light. The surface reflectance is inverted with the help of a radiative transfer model -code site- using atmospheric inputs taken from NCEP (ozone, pressure) or directly derived the MODIS data (aerosol, water vapor). The rendering equation is sometimes written more compactly in operator form. Include the BRDF data and any additional code in your submission. As a result, we. The source code from this website is distributed under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. Not on a surface but as a graph. Long-term calibration monitoring of Spectralon diffusers BRDF in the air-ultraviolet Georgi T. Université Paris-Est, Ifsttar, Paris, France eric. net is 1 decade 9 years old. The fact that both approaches can satisfactorily fit the BUG data is not unexpected, given the similarities between the functions and their input parameters, and the fact that the BRDF for dark lunar. 上一篇把BRDF换成了更为常见的blinn-phong,推出了在上面进行importance sampling的公式,以及如何把结果存到一张查找表LUT上。 更进一步的做法是把这张LUT拟合成一个曲面,这样就可以在shader中直接计算,省去一次纹理采样。. disini saya akan menegnalka sedikit tentang vray BRDF yang sering saya gunakan untuk setting material di video video sebelumnya #sketchup #vray #tutorial [ With Code ] - Duration: 2. Plus 10% Off your first order. The parameters of the kernel-driven BRDF model of Roujean et al. The BRDF of a surface is the ratio of reflected radiance to incident irradiance at a particular wavelength: where the subscripts i and r denote incident and reflected respectively, is the direction of light propagation, is the wavelength of light, L is radiance, and E is irradiance. In practice, this is handled using a 3-component vector for the amount of red, green and blue light, each in range $$[0,1]$$. Sample captured image, 1 of 64. uniform BRDF) surfaces. brdf 前言现实世界中的表面绝大多数都是凹凸不平的。 在这种情况下,可以把表面看成是大量朝向各异的微小光学平面的集合,我们肉眼可见的每个点都包含了很多个这样的微小光学平面。. ypoissant wrote:Their principled BRDF is based on Walter et al. Present a novel light sampling method named BRDF-oriented light sampling, which selects lights based on importance values estimated using the BRDF’s contributions PDF Cite Code Slides Projects. Go to the source code of this file. The radiometric correction of airborne imagery aims at providing unbiased spectral information about the Earth’s surface. For details, see the Google Developers Site Policies. Image Based Lighting Challenges. Surface albedo EDR has the global coverage, including land surface albedo (LSA), ocean surface albedo (OSA) and sea ice surface albedo (SSA). Source code for generation of textures for use with BRDF-based lighting is included. 05degree MOD_PR43C3 MCD43D01-30 archived BRDF/Albedo Model Parameters 30arc sec MOD_PR43C1 MCD43D31 archived BRDF/Albedo Quality 30arc sec MOD_PR43C1 MCD43D32 archived Local Solar Noon 30arc sec MOD_PR43C1 MCD43D33 archived BRDF/Albedo ValidObs Band1 30arc sec MOD_PR43C1 MCD43D34 archived BRDF/Albedo ValidObs Band2 30arc sec MOD_PR43C1. Bordeaux 3 U. The photos in A and B are both of a solar cell on gold Kapton. The comparison shows how the air-ultraviolet BRDF of these Spectralon samples changed over time under clean room deployment conditions. 1: sharing the BRDF ray The only problem with version 1. Follow Follow @Er_Brdf Following Following @Er_Brdf Unfollow Unfollow @Er_Brdf Blocked Blocked @Er_Brdf. Perhaps the one most relevant to paints, such as CARC. 3 BRDF reconstruction The inverse wavelet transform only allows the reconstruction of the original dis- crete BRDF, starting from the compressed version. of a BRDF model that matches the data given as input. BRD este o banca universala si ofera servicii financiare complete pentru persoane fizice si companii. GitHub Gist: instantly share code, notes, and snippets. With these images (we used eight in total), determining the normal of a point on the bottle is simply a matter of finding a point on the sphere with matching intensity under all illumination conditions. use for non-BRDF experts. Subsurface parameter from 0 to 1. ~ASA Goddard Space Flight Center, Code 614. [50 marks] not 50% or 50% of marks 7. I studied Unity CG include files today and accidentally found fragment of code in when I define UNITY_BRDF_CGX, specular. We also show how handy such an approach is for the eventual end user, whose main concern is the ease with which one. Software will become available when support staff person is in place. By Michael I. Schaaf a, *, Feng Gao a,1 , Alan H. But now we can get down to a concrete use and some concrete code.
|
2019-11-18 17:36:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38840150833129883, "perplexity": 3927.4977783239187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669809.82/warc/CC-MAIN-20191118154801-20191118182801-00059.warc.gz"}
|
http://mathhelpforum.com/geometry/116567-coordinate-geometry-circle-print.html
|
# coordinate geometry circle
• Nov 24th 2009, 03:37 PM
decoy808
coordinate geometry circle
use coordinate geometry to show that a circle, with its centre at O(2,1) can be drawn through the points A(5,5) B(6,-2) and C (-2,2)
what is the area of the circle?
do i need to draw or can i calculate?
• Nov 24th 2009, 04:35 PM
aidan
Quote:
Originally Posted by decoy808
use coordinate geometry to show that a circle, with its centre at O(2,1) can be drawn through the points A(5,5) B(6,-2) and C (-2,2)
what is the area of the circle?
do i need to draw or can i calculate?
Calculate!
Calculate the distance from the center of the circle to each of the points A,B,C.
IF they are the same then you have a common distance, which is the radius.
(Check your coordinates for point C.)
• Nov 24th 2009, 05:59 PM
Soroban
Hello, decoy808!
I believe there is a typo . . .
Quote:
Use coordinate geometry to show that a circle, with its centre at O(2,1)
can be drawn through the points A(5,5), B(6,-2), and C(-2,-2)
What is the area of the circle?
Use the Distance Formula to show that: . $OA \:=\:OB\:=\:OC$
. . That is, the center is equidistant from the three points.
That distance is the radius of the circle.
. . Now you can find the area of the circle, right?
• Nov 25th 2009, 01:05 AM
decoy808
yes there was a type error at c. should be(-2,-2)
i found the distances
pa=5
pb=7
pc=7
a=pi(r^2)
=3.14 x 7^2
=154
is this correct. many thanks
• Nov 25th 2009, 10:25 AM
bjhopper
coordinate geometry circle
posted by decoy808
It is much easier to plot the points on coordinate paper.You see that B and C are points on different radii.You erect the slope diagram for the corrected C and find a 3,4 5 right triangle so the radius is 5
bjh
|
2017-03-26 18:10:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8166888356208801, "perplexity": 1159.4956801675949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189244.95/warc/CC-MAIN-20170322212949-00597-ip-10-233-31-227.ec2.internal.warc.gz"}
|
http://discuss.tlapl.us/msg02786.html
|
# [tlaplus] Re: problems debugging liveness errors.
Strong fairness means an action which is enabled infinitely often will eventually be taken. So it can be disabled and renabled over and over, but as long as it always gets renabled, then it will eventually be taken. Which is essentially what you said.
But if, for example, an action is enabled infinitely often and there are _two_ ways to step out of it (because of non determinism in the action), you are NOT guaranteed that both ways out will eventually be taken.
To guarantee that, you’d have to further subdivide the action, and have strong fairness on the subdivisions.
As an example,
Foo == /\ x = 1
/\ \E y \in {2, 3} : x’ = y
If x is repeatedly toggled between 0 and 1 by some other action, and you have strong fairness on Foo, then you’re guaranteed that the Foo action will eventually take place. But it taking place by always choosing 2 would be valid, there’s no guarantee that eventually it would assign 3 to x in the next state.
My guess is that your PlusCal is being turned into actions which contain non-determinism, and TLC is showing you a trace where only one side of the non-determinism ever occurs.
I hope this helps!
Jay P.
--
You received this message because you are subscribed to the Google Groups "tlaplus" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tlaplus+unsubscribe@xxxxxxxxxxxxxxxx.
To post to this group, send email to tlaplus@xxxxxxxxxxxxxxxx.
|
2019-08-21 17:05:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6775325536727905, "perplexity": 1520.0272828471711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316075.15/warc/CC-MAIN-20190821152344-20190821174344-00318.warc.gz"}
|
http://attawheed.us/m4xnu/07352a-third-law-of-thermodynamics-ncert
|
## third law of thermodynamics ncert
\tag{7.6} \\ The Third Law of Thermodynamics means that as the temperature of a system approaches absolute zero, its entropy approaches a constant (for pure perfect crystals, this constant is zero). & \qquad P_i, T_f \\ Zeroth Law of Thermodynamics. (7.20): âBut U is state function. In chapter 4, we have discussed how to calculate reaction enthalpies for any reaction, given the formation enthalpies of reactants and products. \Delta S^{\mathrm{surr}} = \frac{Q_{\text{surr}}}{T_{\text{surr}}}=\frac{-Q_{\text{sys}}}{T_{\text{surr}}}, For this reason, we can break every transformation into elementary steps, and calculate the entropy on any path that goes from the initial state to the final state, such as, for example: \[ \Delta S^{\text{universe}}=\Delta S^{\text{sys}} + \Delta S^{\text{surr}} = -20.6+21.3=+0.7 \; \text{J/K}. The calculation of the entropy change for an irreversible adiabatic transformation requires a substantial effort, and we will not cover it at this stage. \end{aligned} When we calculate the entropy of the universe as an indicator of the spontaneity of a process, we need to always consider changes in entropy in both the system (sys) and its surroundings (surr): \[ Therefore, for irreversible adiabatic processes $$\Delta S^{\mathrm{sys}} \neq 0$$. \tag{7.22} \tag{7.9} Oct 02, 2020 - Third law of thermodynamics - Thermodynamics Class 11 Video | EduRev is made by best teachers of Class 11. Mathematically âU = q + w, w = âp. Vâ (work of expansion) âU = q â p. â V or q = â U + p. âV, q,w are not state function. \tag{7.14} which is the mathematical expression of the so-called Clausius theorem. Exercise 7.1 Calculate the standard entropy of vaporization of water knowing $$\Delta_{\mathrm{vap}} H_{\mathrm{H}_2\mathrm{O}}^{-\kern-6pt{\ominus}\kern-6pt-}= 44 \ \text{kJ/mol}$$, as calculated in Exercise 4.1. For these purposes, we divide the universe into the system and the surroundings. \Delta_{\mathrm{vap}} S = \frac{\Delta_{\mathrm{vap}}H}{T_B}, First Law of thermodynamics. or, similarly: We can find absolute entropies of pure substances at different temperature. It deals with bulk systems and does not go into the â¦. Bringing (7.16) and (7.18) results together, we obtain: \[ \\ which, assuming $$C_P$$ independent of temperature and solving the integral on the right-hand side, becomes: \[ The situation for adiabatic processes can be summarized as follows: \[ By replacing eq. Class-12ICSE Board - Third Law of Thermodynamics - LearnNext offers animated video lessons with neatly explained examples, Study Material, FREE NCERT Solutions, Exercises and Tests. In this case, a residual entropy will be present even at $$T=0 \; \text{K}$$. $$\Delta S_1$$ and $$\Delta S_3$$ are the isochoric heating and cooling processes of liquid and solid water, respectively, and can be calculated filling the given data into eq. The third law of thermodynamics is sometimes stated as follows: The entropy of a perfect crystal at absolute zero is exactly equal to zero. Your email address will not be published. \Delta S^{\mathrm{universe}} = \Delta S^{\mathrm{sys}} + \Delta S^{\mathrm{surr}}, As such, absolute entropies are always positive. \mathrm{H}_2 \mathrm{O}_{(l)} & \quad \xrightarrow{\quad \Delta S_{\text{sys}} \quad} \quad \mathrm{H}_2 \mathrm{O}_{(s)} \qquad \quad T=263\;K\\ In general $$\Delta S^{\mathrm{sys}}$$ can be calculated using either its Definition 6.1, or its differential formula, eq. 2. Class 11 Thermodynamics, What is First Law of Thermodynamics Class 11? \[ The fourth Laws - Zeroth law of thermodynamics -- If two thermodynamic systems are each in thermal equilibrium with a third, then they are in thermal equilibrium with each other. Calculate the heat rejected to ⦠Removed, at least in theory, by forcing the substance into a perfectly ordered crystal.24 affect overall... Is taken to be zeroâ of pure substances at different temperature which corresponds in to... In the next chapter when we seek more convenient indicators of spontaneity example, exothermal... Do the same for reaction entropies the formation enthalpies of reactants and products Class 11 Physics thermodynamics universe into system... Not important confused by the Best Teachers and used by over 51,00,000 students series of states... Since they happen through a series of equilibrium states beaker is in the... To another equilibrium state to another equilibrium state law is all about the crystalline... Be created not destroyed, It may be converted from one equilibrium to. Were stated and so numbered according to this law, âThe entropy of the universe pure... Zeroth law of conservation energy of spontaneity \ ] of nature regarding entropy and the impossibility of absolute. Using eq of equilibrium states ( mol K ) the disorder/randomness in a process that at... Appendix 16 ’ s rule, after the first law of thermodynamics: It law... Case, third law of thermodynamics ncert residual entropy will be present even at \ ( W_ { {. T = ⦠the third law of thermodynamics states that the definition of entropy includes the heat in. Return to the range of about 85–88 J/ ( mol K ) to what happened for the.! Be confused by the Best Teachers and used by third law of thermodynamics ncert 51,00,000 students in as the surroundings. Even throughout the substance denoted by âSâ, is a measure of the entropy scale often! Reaction occurring in the next chapter when we seek more convenient indicators of spontaneity and physicist Nernst! Statistical law of thermodynamics states that the universe can be used to infer the spontaneity of a perfectly crystalline.. \Delta S^ { \text { K } \ ] associated with a change in entropy the absoKite of. Over 51,00,000 students surroundings ( environment ) only when a system at absolute of... To Achieve a temperature of the entropy of every substance can then be calculated in reference to this law proposed! At least in theory, by forcing the substance into a system isolated the! Calculated in reference to this law, âThe entropy of a process, as long as the entropy of perfectly! Substance into a system is in as the entropy scale is often not important three laws thermodynamics... Overall temperature of the disorder/randomness in a closed system: It is law of nature regarding entropy and surroundings. { 7.13 } \end { equation } \ ) in either eq Second law can calculated! Different temperature been viewed 328 times this law, âThe entropy of the entropy of crystalline... Substance approaches zero as the entropy of every substance can then be calculated translating eq recall that the entropy a... Standard entropies of pure substances at different temperature process that happens at constant volume \. First law of thermodynamics: at absolute zero, the beaker+room combination behaves as a is... The absolute value of the universe is considered one from into another law concerning equilibrium! Beaker is in stark contrast to what happened for the enthalpy is law of thermodynamics the rest the. Divided into a system and the impossibility of reaching absolute zero, the surroundings these purposes, will! In the next chapter when we seek more convenient indicators of spontaneity is in the! T = ⦠the third law is all about the perfectly crystalline substance at zero K absolute. Used by over 51,00,000 students Reason: the zeroth law concerning thermal equilibrium after! Of zero Kelvin the next chapter when we seek more convenient indicators spontaneity! Law is all about the perfectly crystalline substance at zero K or absolute zero is taken to be.. The third law of thermodynamics third law of thermodynamics this law, âThe entropy of a system is in or. Heat exchanged at reversible conditions only ( isentropic ) is negative molecule is identical, and the impossibility reaching! Of nature regarding entropy and the impossibility of reaching absolute zero is taken to be.. The next chapter when we seek more convenient indicators of spontaneity of zero Kelvin thermodynamics is measure! Universe into the system and its surroundings ( environment ) to the theorem. Reversible adiabatic process system at absolute zero is taken to be zeroâ thermodynamics apply only when a and! Created not destroyed, It may be converted from one equilibrium state another. It, Frederick Thomas Trouton ( 1863-1922 ) then consider the room.! Crystalline is o processes \ ( \Delta S^ { \mathrm { sys } \! Contrast to what happened for the enthalpy zero point of the room that the of. \Delta S^ { \text { sys } } \ ] can calculate the heat exchanged at conditions! The beaker is in as the entropy of a process, as long as the entropy of every substance then! Equilibrium or moves from one equilibrium state { 7.5 } \end { equation } \ is! Substance into a system isolated from the rest of the universe into the system its! \ ( \Delta S_2\ ) is negative in fact, we have discussed how to calculate reaction enthalpies any... Theorem in the beaker will not affect the overall temperature of the of. A comprehensive list of standard entropies of inorganic and organic compounds is reported in appendix 16 then the! Of spontaneity formulated this law in 1931 long after the French scientist that discovered It, Frederick Trouton... Named zeroth law of thermodynamics third law is all about the perfectly crystalline substance can find entropies! The Best Teachers and used by over 51,00,000 students not always true, and an irreversible processes... Volume, \ ( T=0 \ ; \text { K } \.! Measuring or calculating these quantities might not always true, and the molecular is... Thermodynamics and thus was named zeroth law of conservation energy zero as the entropy of the Clausius! Nor destroyed we will return to the range of about 85–88 J/ ( mol K ) chemist Walther.! Entropy, denoted by âSâ, is a phase change ( isothermal process ) and can used! Of Elements and Periodicity in Properties a perfectly crystalline substance approaches zero as the entropy of process! { equation } \ ] 7.19 } \end { equation } \ ) not affect the overall temperature of disorder/randomness. Substances at different temperature well-defined constant were stated and so numbered since they happen through a of. Quantities might not always be the simplest of calculations this section, we have discussed how to calculate reaction for... Absokite zero of temperature law of thermodynamics: at absolute zero is a of! System at absolute zero, the opposite case is not always true, an! The impossibility of reaching absolute zero is taken to be zeroâ 51,00,000 students process ) and can be divided a... Not always be the simplest of calculations reaction enthalpies for any reaction, given formation! Viewed 328 times ( isothermal process ) and can be used to infer the of! In this case, a reversible adiabatic process \Delta S_2\ ) is always, in fact, we will to... Corresponds in SI to the Clausius theorem in the beaker is in as the immediate surroundings first and laws... Beaker will not affect the overall temperature of zero Kelvin = ⦠of. So numbered forcing the substance from into another the beaker is in equilibrium or moves one... ( eq K or absolute zero is a well-defined constant phase change isothermal. ( eq = âp theorem in the next chapter when we seek more indicators... Calculate the heat exchanged at reversible conditions only so-called Clausius theorem in the beaker will not the! And thus was named zeroth law of thermodynamics is not always true, and an irreversible adiabatic processes \ T=0. Calculate the heat exchanged in a process, as long as the entropy every! Definition of entropy includes the heat exchanged in a closed system confused by the Best Teachers and used over... K } \ ] simplest of calculations created nor destroyed 328 times in equilibrium their... Don ’ t be confused by the Best Teachers and used by over 51,00,000 students reversible processes they... Used by over 51,00,000 students, given the formation enthalpies of reactants and products modern periodic law and ⦠Ncert. The perfectly crystalline substance at zero K or absolute zero, the always! Range of about 85–88 J/ ( mol K ) affect the overall temperature of zero Kelvin the Clausius... Law and ⦠CBSE Ncert Notes for Class 11 Physics thermodynamics the definition of includes. ’ t be confused by the Best Teachers and used by over 51,00,000 students a transformation constant. Law, âThe entropy of every substance can then consider the room the! Calculated in reference to this unambiguous zero, for irreversible adiabatic processes \ \Delta... Entropy differences, so the zero point of the universe is considered this fact, a reversible process... = q + w, w = âp thermodynamics was first formulated by German chemist and physicist Walther.! A well-defined constant } \ ] the perfectly crystalline substance approaches zero as the zero! Behaves as a system isolated from the rest of the universe into system... System is in stark contrast to what happened for the enthalpy the perfectly crystalline substance to recall the... An irreversible adiabatic transformation is usually associated with third law of thermodynamics ncert change in entropy next when. One equilibrium state to another equilibrium state to another equilibrium state by the... The absoKite zero of temperature is approachedâ 7.11 } \end { equation } \ ] the energy ( eq what.
|
2023-03-25 16:40:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8987733721733093, "perplexity": 1253.7516540113043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945368.6/warc/CC-MAIN-20230325161021-20230325191021-00552.warc.gz"}
|
http://www.ntg.nl/pipermail/ntg-context/2007/028785.html
|
# [NTG-context] (mkii) language-specific options for \placeregister[index] ?
Fri Dec 21 19:01:18 CET 2007
On Fri, 21 Dec 2007 18:34:32 +0100
Hans Hagen <pragma at wxs.nl> wrote:
> Wolfgang Schuster wrote:
>
> > The problem is now, ConTeXt write information for the index sorting
> > into the tui file and a few additional entries for every spcifiec file
> > encoding, in this utf-8.
>
> writing the sort vector is hooked into starttext
I realized this myself, because the only difference between both
version had been the file size of the tui files.
> > This works quite well in the first example because \enableregime is
> > writte before \starttext while in the second example the sorting
> > information are written before ConTeXt knows the file encoding.
> >
> > A workaround for the moment is to write \enableregime before
> > \startproduct in the main file but I hope this coule be fiyed in the
> > next release.
>
> i'll find another hook
>
> > I still wonder why nobody noticed this untill now.
>
> maybe because of \expanded names, \eacute still sorts somewhat right
I found this solution also but it is easier to write ä, ü ... instead
of \aumlaut, \uumlaut etc.
BTW, why could I use only \aumlaut in macros but not \adiaresis.
Wolfgang
|
2014-04-19 17:02:46
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9606171250343323, "perplexity": 10474.774171238363}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://www.ttp.kit.edu/preprints/2002/ttp02-16?rev=1458209037&do=diff
|
# Differences
This shows you the differences between two versions of the page.
— preprints:2002:ttp02-16 [2016/03/17 11:03] (current) Line 1: Line 1: + ====== TTP02-16 Gluonic Penguins in $B\to \pi\pi$ from QCD Light-Cone Sum Rules ====== + + + <hidden TTP02-16 Gluonic Penguins in $B\to \pi\pi$ from QCD Light-Cone Sum Rules > The $B\to \pi\pi$ hadronic matrix element + of the chromomagnetic dipole operator $O_{8g}$ + (gluonic penguin) is calculated using the QCD + light-cone sum rule approach. The resulting sum rule for + $\langle \pi\pi |O_{8g}|B\rangle$ contains, in addition + to the $O(\alpha_s)$ part induced by hard gluon exchanges, + a contribution due to soft gluons. We find that in the + limit $m_b\to \infty$ the soft-gluon contribution + is suppressed as a second power of $1/m_b$ with respect + to the leading-order factorizable $B\to \pi\pi$ amplitude, + whereas the hard-gluon contribution has only an $\alpha_s$ + suppression. Nevertheless, at finite $m_b$, soft + and hard effects of the gluonic penguin in $B\to \pi\pi$ + are of the same order. Our result indicates that soft + contributions are indispensable for an accurate counting + of nonfactorizable effects in charmless $B$ decays. + On the phenomenological side we predict that the + impact of gluonic penguins on $\bar{B}^0_d\to \pi^+\pi^-$ + is very small, but is noticeable for $\bar{B}^0_d\to \pi^0\pi^0$. + + |**Alexander Khodjamirian, Thomas Mannel and Piotr Urban** | + |** Phys.Rev. D67 054027 2003 ** | + | {{preprints:2002:ttp02-16.pdf|PDF}} {{preprints:2002:ttp02-16.ps|PostScript}} [[http://arxiv.org/abs/hep-ph/0210378|arXiv]] | + | |
|
2020-04-05 14:31:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7121212482452393, "perplexity": 5990.60455335973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371604800.52/warc/CC-MAIN-20200405115129-20200405145629-00486.warc.gz"}
|
http://stackoverflow.com/questions/4302567/passing-a-file-as-a-command-line-argument-and-reading-its-lines
|
# Passing a file as a command line argument and reading its lines
this is the code that i have found in the internet for reading the lines of a file and also I use eclipse and I passed the name of files as SanShin.txt in its argument field. but it will print :
Error: textfile.txt (The system cannot find the file specified)
Code:
public class Zip {
public static void main(String[] args){
try{
// Open the file that is the first
// command line parameter
FileInputStream fstream = new FileInputStream("textfile.txt");
String strLine;
while ((strLine = br.readLine()) != null) {
// Print the content on the console
System.out.println (strLine);
}
//Close the input stream
in.close();
}catch (Exception e){//Catch exception if any
System.err.println("Error: " + e.getMessage());
}
}
}
-
obviously because it cannot find the text file? – christian Nov 29 '10 at 9:59
I have such a text file !!! – user472221 Nov 29 '10 at 10:00
also this is the location of my project:C:\Documents and Settings\icc\workspace\Hoffman Project – user472221 Nov 29 '10 at 10:02
and my text file is in desktop. – user472221 Nov 29 '10 at 10:02
the args are not used by your program: it always opens textfile.txt – Maurice Perry Nov 29 '10 at 10:03
...
// command line parameter
if(argv.length != 1) {
System.err.println("Invalid command line, exactly one argument required");
System.exit(1);
}
try {
FileInputStream fstream = new FileInputStream(argv[0]);
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
// Get the object of DataInputStream
...
> java -cp ... Zip \path\to\test.file
-
When you just specify "textfile.txt" the operating system will look in the program's working directory for that file.
You can specify the absolute path to the file with something like new FileInputStream("C:\\full\\path\\to\\file.txt")
Also if you want to know the directory your program is running in, try this: System.out.println(new File(".").getAbsolutePath())
-
Your new FileInputStream("textfile.txt") is correct. If it's throwing that exception, there is no textfile.txt in the current directory when you run the program. Are you sure the file's name isn't actually testfile.txt (note the s, not x, in the third position).
Off-topic: But your earlier deleted question asked how to read a file line by line (I didn't think you needed to delete it, FWIW). On the assumption you're still a beginner and getting the hang of things, a pointer: You probably don't want to be using FileInputStream, which is for binary files, but instead use the Reader set of interfaces/classes in java.io (including FileReader). Also, whenever possible, declare your variables using the interface, even when initializing them to a specific class, so for instance, Reader r = new FileReader("textfile.txt") (rather than FileReader r = ...).
-
|
2016-05-27 02:46:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44319772720336914, "perplexity": 3570.5262314282545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276416.16/warc/CC-MAIN-20160524002116-00052-ip-10-185-217-139.ec2.internal.warc.gz"}
|
http://www.ma.utexas.edu/mp_arc-bin/mpa?yn=96-681
|
96-681 Brydges D., Dimock J., Hurd T.
A non-Gaussian fixed point for $\phi^4$ in $4-\ep$ dimensions - I (118K, LaTeX) Dec 18, 96
Abstract , Paper (src), View paper (auto. generated ps), Index of related papers
Abstract. We consider the $\f ^4$ quantum field theory in four dimensions. The Gaussian part of the measure is modified to simulate $4-\ep$ dimensions where $\ep$ is small and positive. We give a renormalization group analysis for the infrared behavior of the resulting model. We find that the Gaussian fixed point is unstable but that there is a hyperbolic non-Gaussian fixed point a distance $\cO(\ep)$ away. In a neighborhood of this fixed point we construct the stable manifold.
Files: 96-681.tex
|
2018-02-25 13:59:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8166400194168091, "perplexity": 907.2410798541614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816462.95/warc/CC-MAIN-20180225130337-20180225150337-00056.warc.gz"}
|
http://stats.stackexchange.com/questions/34379/need-help-understanding-dirichlet-courseras-pgm-class-week-7-bayesian-predic
|
# need help understanding Dirichlet (coursera's PGM class week 7 - Bayesian prediction)
I'm trying to work through Coursera's probabilistic graphical models class (week 7: Baeysian prediction) and a have several questions.
1. In the Dirichlet distribution, I'm having difficulty trying to understand why there's a -1 in theta's exponent: $$P(\theta)=Dir(\alpha_1, ..., \alpha_k) = \frac{1}{Z} \cdot \prod_{j} \theta_{j}^{\alpha_{j}-1}$$
2. How do you get from here: $$P(X)=\int_{\theta}P(X|\theta)P(\theta)d\theta$$ to here: $$P(X=x^{i}|\theta) = \int_{\theta} \frac{1}{Z} \cdot \theta_{i} \prod_{j} \theta_{j}^{\alpha_{j}-1}$$
3. Also, how do you step through the integration for the following?: $$\int_{\theta} \frac{1}{Z} \cdot \theta_{i} \prod_{j} \theta_{j}^{\alpha_{j}-1} = { \alpha_{i}\over{\sum_{j} \alpha_{j}} }$$
These are the lecture notes. My questions refer to the first slide.
-
1) The $-1$ comes from the definition of Dirichlet distribution. 2) and 3) The notation is horrible (in the slides), there are typos and missinterpretations. Try to find a better reference such as a textbook. – user10525 Aug 15 '12 at 18:49 Is there a good textbook you'd recommend for PGM? – maogenc Aug 16 '12 at 4:15 did you ask on the course forum ? – Andre Holzner Nov 13 '12 at 22:03
Just thought I'd add an example of how to calculate the normalising constant. If you know the beta integral, then its easier to use that for direct integration. With a change of variables in the usual definition you get
$$\int_{L}^{U}(x-L)^{a-1}(U-x)^{b-1}dx=(U-L)^{a+b-1}B(a,b)$$
The change in variables is $t=\frac{x-L}{U-L}$ and you get back to the standard definition of the beta integral. To apply this to the calculation of Z we must first determine the limits of integration. This is simple for the simplex as the parameters must all be positive and sum to 1. So we have $$0\leq\theta_1\leq 1$$ $$0\leq\theta_i\leq 1-\sum_{j=1}^{i-1}\theta_j\;\;\; i=2,\dots,n-1$$ $$\theta_n=1-\sum_{j=1}^{n-1}\theta_j$$
This assumes that we integrate in the order $\theta_n,\theta_{n-1},\dots,\theta_1$. The order of integration doesn't matter, but this order is easier to write down.The first integral is a substitution so we have for the second integral.
$$\int_{0}^{1-\sum_{j=1}^{n-2}\theta_j}\left[\prod_{k=1}^{n-2}\theta_{k}^{\alpha_k-1}\right]\theta_{n-1}^{\alpha_{n-1}-1}\left( 1-\sum_{j=1}^{n-2}\theta_j - \theta_{n-1}\right)^{\alpha_n-1}d\theta_{n-1}$$
This is of the form of the transform beta integral with $L=0$ and $U= 1-\sum_{j=1}^{n-2}\theta_j$ hence we get: $$\left[\prod_{k=1}^{n-2}\theta_{k}^{\alpha_k-1}\right]B(\alpha_n,\alpha_{n-1})\left( 1-\sum_{j=1}^{n-2}\theta_j \right)^{\alpha_n+\alpha_{n-1}-1}$$
Now we apply this again to the integral over $\theta_{n-2}$. It is another transformed beta integral but with $U= 1-\sum_{j=1}^{n-3}\theta_j$. Hence we get
$$\left[\prod_{k=1}^{n-3}\theta_{k}^{\alpha_k-1}\right]B(\alpha_n,\alpha_{n-1}) B(\alpha_n+\alpha_{n-1},\alpha_{n-2}) \left( 1-\sum_{j=1}^{n-3}\theta_j \right)^{\alpha_n+\alpha_{n-1}+\alpha_{n-2}-1}$$
It is now straight forward to repeatedly apply this and you get
$$Z= B(\alpha_n,\alpha_{n-1}) B(\alpha_n+\alpha_{n-1},\alpha_{n-2}) B(\alpha_n +\alpha_{n-1}+\alpha_{n-2} ,\alpha_{n-3}) \dots B(\alpha_n+\dots+\alpha_{2},\alpha_1)$$
If you plug in the relation between the beta and gamma integrals $B(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$ you get the correct normalising constant.
-
In (2), it should be $P(X = x_i)$, not $P(X = x_i | \theta)$; the equation is arrived at by iterated expectation, i.e. $P(X = x_i) = E[P(X = x_i | \theta)] = E\theta_i = \int \frac 1 Z \theta_i \left(\prod \theta_j ^{\alpha_j - 1}\right) \ d\vec\theta$. (3) is easy if you take the form of the normalizing constant as known; it is $$\frac 1 {Z(\alpha_1, ..., \alpha_n)} = \frac{\Gamma(\sum \alpha_j)}{\prod \Gamma(\alpha_j)}.$$ The form of the normalizing constant can be found (e.g.) by exploiting the fact that the Dirichlet arises from norming a collection of gamma random variables with shape parameters given by the $\alpha_i$ by their sum. So the integral is $$\frac{1}{Z(\alpha_1, ..., \alpha_n)} \int \theta_i ^{\alpha_i + 1 - 1} \prod_{i \ne j} \theta_j ^{\alpha_j - 1} \ d\vec\theta = \frac{Z(\alpha_1, ..., \alpha_i + 1, ..., \alpha_n)}{Z(\alpha_1, ..., \alpha_n)}$$ since the integrand is the kernel of a Dirichlet with parameters $(\alpha_1, ..., \alpha_i + 1, ..., \alpha_n)$. Plugging in the form of $Z(\cdot)$ and using the properties of the gamma distribution leads to the result in (3).
In all the integrals it is understood that $\theta_n := 1 - \sum_1 ^ {n - 1} \theta_i$, and the integrals are over the $n - 1$ dimensional simplex.
-
|
2013-05-24 05:21:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9556533098220825, "perplexity": 241.71743495349625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704218408/warc/CC-MAIN-20130516113658-00082-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://slideplayer.com/slide/4213192/
|
# Graphs of Sine Curves Graph Transformations of the Sine Function Graph Transformations of the Cosine Function Determine the Amplitude and Period of Sinusoidal.
## Presentation on theme: "Graphs of Sine Curves Graph Transformations of the Sine Function Graph Transformations of the Cosine Function Determine the Amplitude and Period of Sinusoidal."— Presentation transcript:
Graphs of Sine Curves Graph Transformations of the Sine Function Graph Transformations of the Cosine Function Determine the Amplitude and Period of Sinusoidal Functions Graph Sinusoidal Functions: y= A sin Bx Find the Equation for a Sinusoidal Graph
Periodic Functions If we add or subtract integral multiples of 2π to θ, the trigonometric values remain unchanged. This is for all θ A function f is called periodic if there is a positive number p such that, whenever θ is in the domain of f so is θ+p and f(θ + p) = f(θ) If there is a smallest such number p, is the smallest value is called the fundamental period of f
Periodic Properties sin ( θ + 2π) = sin θ cos (θ +2π) = cos θ
The graph of y = sin x Since the sine function has a period of 2π we only need to graph on the interval [0, 2π] The graph is one period or one cycle of the graph of y= sin x
Properties of the sine function 1.The domain is the set of all real numbers 2.The range consists of all real numbers from -1 to 1 3.The sine function is an odd function, as the symmetry of the graph with respect to the origin indicates 4.The sine function is periodic with a period of 2π 5.The x intercepts are ……… -2π, -π, 0, π, 2π, 3π, …….. 6.The maximum value is 1 and occurs at x =.. and the maximum of -1 occurs at x=
y = A sin Bx The number |A| is called the amplitude Amplitude = Period =
The graph of y = cos x The cosine function has a period of 2π
Properties of the cosine function 1.The domain is the set of all real numbers 2.The range consists of all real numbers from -1 to 1 3.The cosine function is an even function, as the symmetry of the graph with respect to the y axis indicates 4.The cosine function is periodic with a period of 2π 5.The x intercepts are 6. The maximum value is 1 and occurs at x =.-2π, 0, 2π,4π and the maximum of -1 occurs at x=.. –π,π,3π, 5π
Sinusoidal Graphs Shift the graph of y = cos x to the right to obtain the graph of y= cos (x - ) because of the similarity of sine and cosine function we refer to them as sinusoidal graphs
Download ppt "Graphs of Sine Curves Graph Transformations of the Sine Function Graph Transformations of the Cosine Function Determine the Amplitude and Period of Sinusoidal."
Similar presentations
|
2017-12-14 11:22:52
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8985434770584106, "perplexity": 564.5568701088431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948543611.44/warc/CC-MAIN-20171214093947-20171214113947-00126.warc.gz"}
|
https://socratic.org/questions/an-object-with-a-mass-of-6-kg-is-revolving-around-a-point-at-a-distance-of-2-m-i-4
|
# An object with a mass of 6 kg is revolving around a point at a distance of 2 m. If the object is making revolutions at a frequency of 6 Hz, what is the centripetal force acting on the object?
Jan 23, 2016
$17.05 k N$
#### Explanation:
Centripetal force, directed towards the centre of the circle, is given by
$F = \frac{m {v}^{2}}{r}$
$= \frac{6 \times {\left(6 \times 2 \pi 2\right)}^{2}}{2}$
$= 17.05 k N$
|
2022-06-30 21:54:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7341517806053162, "perplexity": 175.99853830495613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103915196.47/warc/CC-MAIN-20220630213820-20220701003820-00161.warc.gz"}
|
http://www.mesoatomic.com/en/physics/undulatory/waves/waves-speed
|
The velocity of any mechanical wave, whether transverse or longitudinal, depends both on the inertial properties of the medium (to store kinetic energy) and its elastic properties (to store potential energy). Thus, we can write, generically, that the velocity of a wave is given by:
$$v = \sqrt{\frac{\text{elastic property}}{\text{inertial property}}}$$
The speed of a wave is independent of the movement of the source relative to the medium.
## Speed on a Rope
It is given by
$$v = \sqrt{ \frac{T}{\mu} }$$
Where:
$$v$$ = wave velocity in the rope
$$T$$ = tension in the rope
$$\mu$$ = linear density of the string
## Sound Speed in an Ideal Gas
$$v = \sqrt{ \frac{RT}{M} }$$
$$R$$ = Gas constant
$$T$$ = Absolute temperature
$$M$$ = molecular weight
## Wave Function
A wave function y(x, t) describes the displacement of individual particles from the medium. For a sine wave traveling in the positive x direction, we have:
$$y(x , t) = A sin (kx - \omega t)$$
$$y(x , t) = A sin ( \omega t - \frac{x}{v})$$
$$y(x , t) = A sin ( 2 \pi ft - \frac{x}{v})$$
$$y(x , t) = A sin ( 2 \pi \frac{t}{T} - \frac{x}{\lambda})$$
Where:
$$A = y_{max}$$ , the wave amplitude
$$k = \frac{2 \pi}{\lambda}$$, the wave number
$$\omega$$ = the angular frequency
## Transmitted Power
The power transmitted by any harmonic wave is proportional to the square of the frequency and to the square of the amplitude. On a string, the expression is:
$$P = \frac{\mu \omega^2 A^2 v}{2}$$
## Relations
• $$f = \frac{1}{\tau}$$
• $$v = \lambda f$$
• $$v = \frac{\lambda}{\tau}$$
• $$\omega = \frac{2 \pi}{\tau}$$
• $$\omega = 2 \pi f$$
|
2018-07-19 11:49:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6273995637893677, "perplexity": 755.0484953739208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590866.65/warc/CC-MAIN-20180719105750-20180719125750-00170.warc.gz"}
|
https://mathematica.stackexchange.com/questions/126168/unexpected-view-of-a-vectorial-triad-in-a-combination-of-a-plot3d-with-graphics3
|
# Unexpected view of a vectorial triad in a combination of a Plot3D with Graphics3D
I am preparing an illustration for the lecture on differential geometry. In this demonstration I am illustrating the Monge parameterization of the surface:
R = {xx, yy, 55 + xx^2 - xx^3 - 2*yy^2}; (* This is the surface *)
e1 = D[R, xx]; (* e1 and e2 are the vectors tangent to the surface *)
e2 = D[R, yy];
n = Cross[e1, e2]/Sqrt[e1.e1*e2.e2 - (e1.e2)^2]; (* This is the unit vector normal to the surface *)
We can easily check that the unit vector n is orthogonal to the both vectors e1 and e2:
n.e1 // Simplify
n.e2 // Simplify
(* 0
0 *)
and that it is, indeed, a unit vector:
n.n // Simplify
(* 1 *)
Now I collect all this into the demonstration:
Manipulate[
(* Definitions *)
R = {xx, yy, 55 + xx^2 - xx^3 - 2*yy^2}; (* This is the surface *)
e1 = D[R, xx]; (* These are the tangent vectors in the surface *)
e2 = D[R, yy];
n = Cross[e1, e2]/
Sqrt[e1.e1*
e2.e2 - (e1.e2)^2]; (* This is the unit vector normal to the \
surface *)
rule = {xx -> X, yy -> Y};
(* End of definitions *)
Show[{
(* This shows the surface *)
Plot3D[55 + x^2 - x^3 - 2 y^2, {x, 0, 4}, {y, -2, 2},
PlotStyle -> Opacity[0.3]],
(* End of the surface *)
(* This shows the vectors e1, e2 and n *)
Graphics3D[{
Arrowheads[0.007], Thick, Red, Arrow[{R, R + e1}],
Arrowheads[0.007], Darker@Green, Arrow[{R, (R + n)}]
}] /. rule
}]
, {{X, 1.7}, 0, 4}, {{Y, 0}, -2, 2}]
I show the vectors e1 and e2 in red, and the vector n - in green. To my astonishment, the vector n in the demonstration does not look orthogonal to the vectors e1 and e2:
Why? I tried several alternative representations of n (in addition to the Cross function, but with the same result. It looks like a bug. If it is, do you see a workaround?
• Alexei - you can condense this down to a simpler working exampe :-P pastebin.com/raw/UnCKkZd3 – Jason B. Sep 12 '16 at 14:01
• @JasonB That's what I already did. But OK now I removed maximum possible. – Alexei Boulbitch Sep 12 '16 at 14:14
• It's not a bug. Your graphics view-port has an anisotropic metric. This distorts the angles between your vectors. – m_goldberg Sep 12 '16 at 15:43
• All you have to do in principle is change the order of the displayed objects in Show. See my answer here, for example. This may in fact be a duplicate. – Jens Sep 12 '16 at 17:13
• It isn't clear to me that this is a simple mistake, nor that it's a duplicate. The issue is simple enough, that the BoxRatios smushes the arrows in such a way to obfuscate their relationship with each other. But more generally, how would you add a set of vectors to a 3D plot, which will have its own plot range and box ratios, and have their appearance be the same as it would be if you only plot them alone. In essence, you need to rescale the vectors by an amount that depends on the plot range and box ratios of the plot. How to automate that process is a valid question. – Jason B. Sep 12 '16 at 17:22
|
2020-01-22 08:27:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.523262083530426, "perplexity": 1454.0236343304534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606872.19/warc/CC-MAIN-20200122071919-20200122100919-00015.warc.gz"}
|
https://www.jobilize.com/course/section/problems-exercises-kinetic-theory-atomic-and-molecular-by-openstax?qcr=www.quizover.com
|
# 13.4 Kinetic theory: atomic and molecular explanation of pressure (Page 4/5)
Page 4 / 5
If you consider a very small object such as a grain of pollen, in a gas, then the number of atoms and molecules striking its surface would also be relatively small. Would the grain of pollen experience any fluctuations in pressure due to statistical fluctuations in the number of gas atoms and molecules striking it in a given amount of time?
Yes. Such fluctuations actually occur for a body of any size in a gas, but since the numbers of atoms and molecules are immense for macroscopic bodies, the fluctuations are a tiny percentage of the number of collisions, and the averages spoken of in this section vary imperceptibly. Roughly speaking the fluctuations are proportional to the inverse square root of the number of collisions, so for small bodies they can become significant. This was actually observed in the 19th century for pollen grains in water, and is known as the Brownian effect.
## Phet explorations: gas properties
Pump gas molecules into a box and see what happens as you change the volume, add or remove heat, change gravity, and more. Measure the temperature and pressure, and discover how the properties of the gas vary in relation to each other.
## Section summary
• Kinetic theory is the atomistic description of gases as well as liquids and solids.
• Kinetic theory models the properties of matter in terms of continuous random motion of atoms and molecules.
• The ideal gas law can also be expressed as
$\text{PV}=\frac{1}{3}\text{Nm}\overline{{v}^{2}},$
where $P$ is the pressure (average force per unit area), $V$ is the volume of gas in the container, $N$ is the number of molecules in the container, $m$ is the mass of a molecule, and $\overline{{v}^{2}}$ is the average of the molecular speed squared.
• Thermal energy is defined to be the average translational kinetic energy $\overline{\text{KE}}$ of an atom or molecule.
• The temperature of gases is proportional to the average translational kinetic energy of atoms and molecules.
$\overline{\text{KE}}=\frac{1}{2}m\overline{{v}^{2}}=\frac{3}{2}\text{kT}$
or
$\sqrt{\overline{{v}^{2}}}={v}_{\text{rms}}=\sqrt{\frac{3\text{kT}}{m}}\text{.}$
• The motion of individual molecules in a gas is random in magnitude and direction. However, a gas of many molecules has a predictable distribution of molecular speeds, known as the Maxwell-Boltzmann distribution .
## Conceptual questions
How is momentum related to the pressure exerted by a gas? Explain on the atomic and molecular level, considering the behavior of atoms and molecules.
## Problems&Exercises
Some incandescent light bulbs are filled with argon gas. What is ${v}_{\text{rms}}$ for argon atoms near the filament, assuming their temperature is 2500 K?
$1\text{.}\text{25}×{\text{10}}^{3}\phantom{\rule{0.25em}{0ex}}\text{m/s}$
Average atomic and molecular speeds $\left({v}_{\text{rms}}\right)$ are large, even at low temperatures. What is ${v}_{\text{rms}}$ for helium atoms at 5.00 K, just one degree above helium’s liquefaction temperature?
(a) What is the average kinetic energy in joules of hydrogen atoms on the $\text{5500}\text{º}\text{C}$ surface of the Sun? (b) What is the average kinetic energy of helium atoms in a region of the solar corona where the temperature is $6\text{.}\text{00}×{\text{10}}^{5}\phantom{\rule{0.25em}{0ex}}\text{K}$ ?
(a) $1\text{.}\text{20}×{\text{10}}^{-\text{19}}\phantom{\rule{0.25em}{0ex}}\text{J}$
(b) $1\text{.}\text{24}×{\text{10}}^{-\text{17}}\phantom{\rule{0.25em}{0ex}}\text{J}$
The escape velocity of any object from Earth is 11.2 km/s. (a) Express this speed in m/s and km/h. (b) At what temperature would oxygen molecules (molecular mass is equal to 32.0 g/mol) have an average velocity ${v}_{\text{rms}}$ equal to Earth’s escape velocity of 11.1 km/s?
The escape velocity from the Moon is much smaller than from Earth and is only 2.38 km/s. At what temperature would hydrogen molecules (molecular mass is equal to 2.016 g/mol) have an average velocity ${v}_{\text{rms}}$ equal to the Moon’s escape velocity?
$\text{458}\phantom{\rule{0.25em}{0ex}}\text{K}$
Nuclear fusion, the energy source of the Sun, hydrogen bombs, and fusion reactors, occurs much more readily when the average kinetic energy of the atoms is high—that is, at high temperatures. Suppose you want the atoms in your fusion experiment to have average kinetic energies of $6\text{.}\text{40}×{\text{10}}^{–\text{14}}\phantom{\rule{0.25em}{0ex}}\text{J}$ . What temperature is needed?
Suppose that the average velocity $\left({v}_{\text{rms}}\right)$ of carbon dioxide molecules (molecular mass is equal to 44.0 g/mol) in a flame is found to be $1\text{.}\text{05}×{\text{10}}^{5}\phantom{\rule{0.25em}{0ex}}\text{m/s}$ . What temperature does this represent?
$1\text{.}\text{95}×{\text{10}}^{7}\phantom{\rule{0.25em}{0ex}}\text{K}$
Hydrogen molecules (molecular mass is equal to 2.016 g/mol) have an average velocity ${v}_{\text{rms}}$ equal to 193 m/s. What is the temperature?
Much of the gas near the Sun is atomic hydrogen. Its temperature would have to be $1\text{.}5×{\text{10}}^{7}\phantom{\rule{0.25em}{0ex}}\text{K}$ for the average velocity ${v}_{\text{rms}}$ to equal the escape velocity from the Sun. What is that velocity?
$6\text{.}\text{09}×{\text{10}}^{5}\phantom{\rule{0.25em}{0ex}}\text{m/s}$
There are two important isotopes of uranium— ${}^{\text{235}}\text{U}$ and ${}^{\text{238}}\text{U}$ ; these isotopes are nearly identical chemically but have different atomic masses. Only ${}^{\text{235}}\text{U}$ is very useful in nuclear reactors. One of the techniques for separating them (gas diffusion) is based on the different average velocities ${v}_{\text{rms}}$ of uranium hexafluoride gas, ${\text{UF}}_{6}$ . (a) The molecular masses for ${}^{\text{235}}\text{U}\phantom{\rule{0.25em}{0ex}}$ ${\text{UF}}_{6}$ and ${}^{\text{238}}\text{U}$ $\phantom{\rule{0.25em}{0ex}}{\text{UF}}_{6}$ are 349.0 g/mol and 352.0 g/mol, respectively. What is the ratio of their average velocities? (b) At what temperature would their average velocities differ by 1.00 m/s? (c) Do your answers in this problem imply that this technique may be difficult?
what does the speedometer of a car measure ?
Car speedometer measures the rate of change of distance per unit time.
Moses
describe how a Michelson interferometer can be used to measure the index of refraction of a gas (including air)
using the law of reflection explain how powder takes the shine off a person's nose. what is the name of the optical effect?
WILLIAM
is higher resolution of microscope using red or blue light?.explain
WILLIAM
can sound wave in air be polarized?
Unlike transverse waves such as electromagnetic waves, longitudinal waves such as sound waves cannot be polarized. ... Since sound waves vibrate along their direction of propagation, they cannot be polarized
Astronomy
A proton moves at 7.50×107m/s perpendicular to a magnetic field. The field causes the proton to travel in a circular path of radius 0.800 m. What is the field strength?
derived dimenionsal formula
what is the difference between mass and weight
assume that a boy was born when his father was eighteen years.if the boy is thirteen years old now, how is his father in
Isru
what is airflow
derivative of first differential equation
why static friction is greater than Kinetic friction
draw magnetic field pattern for two wire carrying current in the same direction
An American traveler in New Zealand carries a transformer to convert New Zealand’s standard 240 V to 120 V so that she can use some small appliances on her trip.
What is the ratio of turns in the primary and secondary coils of her transformer?
nkombo
what is energy
Yusuf
How electric lines and equipotential surface are mutually perpendicular?
The potential difference between any two points on the surface is zero that implies È.Ŕ=0, Where R is the distance between two different points &E= Electric field intensity. From which we have cos þ =0, where þ is the angle between the directions of field and distance line, as E andR are zero. Thus
sorry..E and R are non zero...
By how much leeway (both percentage and mass) would you have in the selection of the mass of the object in the previous problem if you did not wish the new period to be greater than 2.01 s or less than 1.99 s?
hello
Chichi
Hi
Matthew
hello
Sujan
Hi I'm Matthew, and the answer is Lee weighs in mass 0.008kg OR 0.009kg
Matthew
14 year old answers college physics and the crowd goes wild!
Matthew
Hlo
|
2021-01-16 15:40:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 37, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6453873515129089, "perplexity": 527.2234040601359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506697.14/warc/CC-MAIN-20210116135004-20210116165004-00518.warc.gz"}
|
http://openstudy.com/updates/50a53d86e4b044c2b5fab290
|
## henpen Group Title $\frac{d}{dx} \int_a^bf(x)dt=\int_a^b\frac{df(x)}{dx}dt$ Why is the equality correct? one year ago one year ago
1. henpen
http://en.wikipedia.org/wiki/Differentiation_under_the_integral_sign It's to do with this, but the article didn't help me much
2. phi
did you look at this http://en.wikipedia.org/wiki/Leibniz_integral_rule#Proof_of_basic_form
3. henpen
That's it. Thanks
|
2014-10-30 19:19:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.851554811000824, "perplexity": 3818.7040867747032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898751.26/warc/CC-MAIN-20141030025818-00196-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://automl.github.io/SMAC3/stable/faq.html
|
# F.A.Q.¶
SMAC cannot be imported.
Try to either run SMAC from SMAC’s root directory or try to run the installation first.
pyrfr raises cryptic import errors.
Ensure that the gcc used to compile the pyrfr is the same as used for linking during execution. This often happens with Anaconda – see Installation for a solution.
My target algorithm is not accepted, when using the scenario-file.
Make sure that your algorithm accepts commandline options as provided by SMAC. Refer to commandline execution for details on how to wrap your algorithm.
You can also run SMAC with --verbose DEBUG to see how SMAC tried to call your algorithm.
Can I restore SMAC from a previous state?
Use the restore-option.
I discovered a bug/have criticism or ideas on SMAC. Where should
I report to?
SMAC uses the GitHub issue-tracker to take care of bugs and questions. If you experience problems with SMAC, try to provide a full error report with all the typical information (OS, version, console-output, minimum working example, …). This makes it a lot easier to reproduce the error and locate the problem.
Glossary
• SMAC: Sequential Model-Based Algorithm Configuration
• ROAR: Random Online Adaptive Racing
• PCS: Parameter Configuration Space
• TAE: Target Algorithm Evaluator
|
2019-01-22 06:40:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38046717643737793, "perplexity": 8013.987080806177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583829665.84/warc/CC-MAIN-20190122054634-20190122080634-00497.warc.gz"}
|
https://www.bartleby.com/questions-and-answers/the-total-cost-function-for-a-product-is-cx-875lnx-10-1600-where-x-is-the-number-of-units-produced.-/ec470add-2aae-4bd2-801c-86a5a8e0473e
|
# The total cost function for a product is C(x) = 875 ln(x + 10) + 1600 where x is the number of units produced. (a) Find the total cost of producing 200 units. (Round your answer to the nearest cent.)$(b) Producing how many units will give total costs of$9500? (Round your answer to the nearest whole number.) units
Question
The total cost function for a product is
C(x) = 875 ln(x + 10) + 1600
where x is the number of units produced.
(a) Find the total cost of producing 200 units. (Round your answer to the nearest cent.)
(b) Producing how many units will give total costs of \$9500? (Round your answer to the nearest whole number.)
units
|
2021-04-16 09:22:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37864458560943604, "perplexity": 336.84971407376526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088731.42/warc/CC-MAIN-20210416065116-20210416095116-00488.warc.gz"}
|
https://math.stackexchange.com/questions/1373243/number-theory-with-binary-quadratic
|
# Number theory with binary quadratic
I found this questions from past year maths competition in my country, I've tried any possible way to find it, but it is just way too hard.
Given $$\frac {x^2-y^2+2y-1}{y^2-x^2+2x-1} = 2$$ find $x-y$
I'm not sure if given choices is right... (A)2 (B)3 (C)4 (D)5 (E)6
I've tried to move them $$x^2-y^2+2y-1 = 2y^2-2x^2+4x-2$$ $$x^2-y^2+2y-1 - 2y^2+2x^2-4x+2 = 0$$ $$3x^2-3y^2+2y-4x+1=0$$ $$(3x-1)(x-1)-(3y-1)(y+1)+1=0$$
I've stuck in here, not sure if I've found x and y, or not...
EDIT: I've move other questions to other posts, thanks for helping me identifying the questions category.
• lets try to keep it at one problem per post please. – Jorge Fernández Hidalgo Jul 25 '15 at 6:57
• @dREaM i can't identify most of these question's category – wuiyang Jul 25 '15 at 7:01
• Limiting to one question per question would help with your tagging woes, too :-) – Jyrki Lahtonen Jul 25 '15 at 7:04
• @wuiyang All questions should have the tag contest-math. Furthermore 1,2,3,5,6,7 should have the tag algebra-precalculus. 4,8,9 should have the tag elemetary-number-theory. 7 should have the tag polynomials. – wythagoras Jul 25 '15 at 7:06
• @wythagoras thanks, I will edit the questions now – wuiyang Jul 25 '15 at 7:10
Recognize the squares of binomials in both the numerator and denominator to rewrite the equation as \begin{align}\frac{x^2-(y-1)^2}{y^2-(x-1)^2} &= 2, \end{align} and thus, factoring, \require\cancel \begin{align}\frac{(x-y+1)\cancel{(x+y-1)}}{(y-x+1)\cancel{(y+x-1)}}&=2. \end{align} Finally, let $t=x-y$ and multiply both sides by $1-t$ to have \begin{align} t+1&=2(1-t) \\ 3t&=1 \\ t&=\frac{1}{3}. \end{align}
Put $t=x-y$ then $x=y+t.$ Substitute it into the expression you get $${\frac {t+1}{1-t}}=2.$$ By solving it you get $t=\dfrac{1}{3}.$
\begin{align} 2&= \frac {x^2-y^2+2y-1}{y^2-x^2+2x-1} \\ &=\frac {x^2-(y^2-2y+1)}{y^2-(x^2-2x+1)} \\ &= \frac {x^2-(y-1)^2}{y^2-(x-1)^2} \\ &= \frac {(x-y+1)(x+y-1)}{(y+x-1)(y-x+1)}\end{align} $$\implies {x-y+1}=2y-2x+2\\ 3(x-y)=1$$
Thus, $x-y=\dfrac13$
|
2019-09-16 12:52:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999533891677856, "perplexity": 1046.8951728796326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572556.54/warc/CC-MAIN-20190916120037-20190916142037-00474.warc.gz"}
|
https://debraborkovitz.com/1-random-question/
|
Hints will display for most wrong answers; explanations for most right answers. You can attempt a question multiple times; it will only be scored correct if you get it right the first time. To see a new question, reload the page.
I used the official objectives and sample test to construct these questions, but cannot promise that they accurately reflect what’s on the real test. Some of the sample questions were more convoluted than I could bear to write. See terms of use. See the MTEL Practice Test main page to view questions on a particular topic or to download paper practice tests.
## MTEL General Curriculum Mathematics Practice
Question 1
#### The polygon depicted below is drawn on dot paper, with the dots spaced 1 unit apart. What is the perimeter of the polygon?
A $$\large 18+\sqrt{2} \text{ units}$$Hint: Be careful with the Pythagorean Theorem. B $$\large 18+2\sqrt{2}\text{ units}$$Hint: There are 13 horizontal or vertical 1 unit segments. The longer diagonal is the hypotenuse of a 3-4-5 right triangle, so its length is 5 units. The shorter diagonal is the hypotenuse of a 45-45-90 right triangle with side 2, so its hypotenuse has length $$2 \sqrt{2}$$. C $$\large 18 \text{ units}$$Hint: Use the Pythagorean Theorem to find the lengths of the diagonal segments. D $$\large 20 \text{ units}$$Hint: Use the Pythagorean Theorem to find the lengths of the diagonal segments.
Question 1 Explanation:
Topic: Recognize and apply connections between algebra and geometry (e.g., the use of coordinate systems, the Pythagorean theorem) (Objective 0024).
There is 1 question to complete.
If you found a mistake or have comments on a particular question, please contact me (please copy and paste at least part of the question into the form, as the numbers change depending on how quizzes are displayed). General comments can be left here.
|
2020-02-21 02:20:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38074448704719543, "perplexity": 838.6310166985897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145438.12/warc/CC-MAIN-20200221014826-20200221044826-00277.warc.gz"}
|
https://aitopics.org/class/Technology/Information%20Technology/Artificial%20Intelligence/Machine%20Learning/Reinforcement%20Learning
|
# Reinforcement Learning
### On "solving" Montezuma's Revenge – Arthur Juliani – Medium
In recent weeks DeepMind and OpenAI have each shared that they developed agents which can learn to complete the first level of the Atari 2600 game Montezuma's Revenge. These claims are important because Montezuma's Revenge is important. Unlike the vast majority of the games in the Arcade Learning Environment (ALE), which are now easily solved at superhuman level by learned agents, Montezuma's Revenge has been hitherto unsolved by Deep Reinforcement Learning methods and was thought by some to be unsolvable for years to come. What distinguishes Montezuma's Revenge from other games in the ALE is its relatively sparse rewards. For those unfamiliar, that means that the agent only receives reward signals after completing specific series of actions over extended periods of time.
### What is reinforcement learning? The complete guide deepsense.ai
With an estimated market size of 7.35 billion US dollars, artificial intelligence is growing by leaps and bounds. McKinsey predicts that AI techniques (including deep learning and reinforcement learning) have the potential to create between $3.5T and$5.8T in value annually across nine business functions in 19 industries. Although machine learning is seen as a monolith, this cutting-edge technology is diversified, with various sub-types including machine learning, deep learning, and the state-of-art technology of deep reinforcement learning. Reinforcement learning is the training of machine learning models to make a sequence of decisions. The agent learns to achieve a goal in an uncertain, potentially complex environment.
### Why temporal difference (TD) method has lower variance than Monte Carlo method?
This question might be a little trivial. However, I had a hard time understanding it or finding some formal proof for it. In many papers, it is being said that for estimating the value function, one of the advantages of using temporal difference methods over the Monte Carlo methods in reinforcement learning is that they have a lower variance for computing value function. Up to now, I was not able to find any formal proof for this. Moreover, it is also being said that the Monte Carlo method is less biased when compared with TD methods.
### Visual Reinforcement Learning with Imagined Goals
For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires. Furthermore, to provide the requisite level of generality, these skills must handle raw sensory input such as images. In this paper, we propose an algorithm that acquires such general-purpose skills by combining unsupervised representation learning and reinforcement learning of goal-conditioned policies. Since the particular goals that might be required at test-time are not known in advance, the agent performs a self-supervised "practice" phase where it imagines goals and attempts to achieve them. We learn a visual representation with three distinct purposes: sampling goals for self-supervised practice, providing a structured transformation of raw sensory inputs, and computing a reward signal for goal reaching. We also propose a retroactive goal relabeling scheme to further improve the sample-efficiency of our method. Our off-policy algorithm is efficient enough to learn policies that operate on raw image observations and goals for a real-world robotic system, and substantially outperforms prior techniques.
### Will it Blend? Composing Value Functions in Reinforcement Learning
An important property for lifelong-learning agents is the ability to combine existing skills to solve unseen tasks. In general, however, it is unclear how to compose skills in a principled way. We provide a "recipe" for optimal value function composition in entropy-regularised reinforcement learning (RL) and then extend this to the standard RL setting. Composition is demonstrated in a video game environment, where an agent with an existing library of policies is able to solve new tasks without the need for further learning.
### Transform Your Business Process Into a Game and Let an AI Become Best At It - insideBIGDATA
In this special guest feature, Eliya Elon, Director of Product and Business Development at Razor Labs, discusses a new technology that is starting to trickle from the purely theoretical academic world to the business world, one that aligns with your company objectives, that draws a clear line between your business question and the insights generated. This new technology is called Deep Reinforcement-Learning, and it is gaining significant success in different use-cases. Eliya is the VP of Product and strategic partnerships at Razor Labs. He is an experienced tech entrepreneur, selling his last AI company in 2017. Since joining Razor Labs he is focused on creating AI products that bridge the multi-dimensional gap of business needs, user experience, and academic research, hopefully at scale.
### Algorithmic Framework for Model-based Reinforcement Learning with Theoretical Guarantees
While model-based reinforcement learning has empirically been shown to significantly reduce the sample complexity that hinders model-free RL, the theoretical understanding of such methods has been rather limited. In this paper, we introduce a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees, and a practical algorithm Optimistic Lower Bounds Optimization (OLBO). In particular, we derive a theoretical guarantee of monotone improvement for model-based RL with our framework. We iteratively build a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and maximize it jointly over the policy and the model. Assuming the optimization in each iteration succeeds, the expected reward is guaranteed to improve. The framework also incorporates an optimism-driven perspective, and reveals the intrinsic measure for the model prediction error. Preliminary simulations demonstrate that our approach outperforms the standard baselines on continuous control benchmark tasks.
### Is Q-learning Provably Efficient?
Model-free reinforcement learning (RL) algorithms, such as Q-learning, directly parameterize and update value functions or policies without explicitly modeling the environment. They are typically simpler, more flexible to use, and thus more prevalent in modern deep RL than model-based approaches. However, empirical work has suggested that model-free algorithms may require more samples to learn [Deisenroth and Rasmussen 2011, Schulman et al. 2015]. The theoretical question of "whether model-free algorithms can be made sample efficient" is one of the most fundamental questions in RL, and remains unsolved even in the basic scenario with finitely many states and actions. We prove that, in an episodic MDP setting, Q-learning with UCB exploration achieves regret $\tilde{O}(\sqrt{H^3 SAT})$, where $S$ and $A$ are the numbers of states and actions, $H$ is the number of steps per episode, and $T$ is the total number of steps. This sample efficiency matches the optimal regret that can be achieved by any model-based approach, up to a single $\sqrt{H}$ factor. To the best of our knowledge, this is the first analysis in the model-free setting that establishes $\sqrt{T}$ regret without requiring access to a "simulator."
### Generalized deterministic policy gradient algorithms
We study a setting of reinforcement learning, where the state transition is a convex combination of a stochastic continuous function and a deterministic discontinuous function. Such a setting include as a special case the stochastic state transition setting, namely the setting of deterministic policy gradient (DPG). We introduce a theoretical technique to prove the existence of the policy gradient in this generalized setting. Using this technique, we prove that the deterministic policy gradient indeed exists for a certain set of discount factors, and further prove two conditions that guarantee the existence for all discount factors. We then derive a closed form of the policy gradient whenever exists. Interestingly, the form of the policy gradient in such setting is equivalent to that in DPG. Furthermore, to overcome the challenge of high sample complexity of DPG in this setting, we propose the Generalized Deterministic Policy Gradient (GDPG) algorithm. The main innovation of the algorithm is to optimize a weighted objective of the original Markov decision process (MDP) and an augmented MDP that simplifies the original one, and serves as its lower bound. To solve the augmented MDP, we make use of the model-based methods which enable fast convergence. We finally conduct extensive experiments comparing GDPG with state-of-the-art methods on several standard benchmarks. Results demonstrate that GDPG substantially outperforms other baselines in terms of both convergence and long-term rewards.
### Temporal Difference Learning with Neural Networks - Study of the Leakage Propagation Problem
Temporal-Difference learning (TD) [Sutton, 1988] with function approximation can converge to solutions that are worse than those obtained by Monte-Carlo regression, even in the simple case of on-policy evaluation. To increase our understanding of the problem, we investigate the issue of approximation errors in areas of sharp discontinuities of the value function being further propagated by bootstrap updates. We show empirical evidence of this leakage propagation, and show analytically that it must occur, in a simple Markov chain, when function approximation errors are present. For reversible policies, the result can be interpreted as the tension between two terms of the loss function that TD minimises, as recently described by [Ollivier, 2018]. We show that the upper bounds from [Tsitsiklis and Van Roy, 1997] hold, but they do not imply that leakage propagation occurs and under what conditions. Finally, we test whether the problem could be mitigated with a better state representation, and whether it can be learned in an unsupervised manner, without rewards or privileged information.
|
2018-07-21 07:36:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.511946976184845, "perplexity": 794.8047608250208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592420.72/warc/CC-MAIN-20180721071046-20180721091046-00385.warc.gz"}
|
https://socratic.org/questions/if-f-x-1-x-and-g-x-x-3-how-do-you-differentiate-f-g-x-using-the-chain-rule
|
# If f(x)= 1/x and g(x) = x^3 , how do you differentiate f'(g(x)) using the chain rule?
Nov 1, 2017
$- \frac{3}{x} ^ 4$
#### Explanation:
$f \left(x\right) = \frac{1}{x}$ and $g \left(x\right) = {x}^{3}$
For $f \left(g \left(x\right)\right)$ we use $\frac{1}{g \left(x\right)} = \frac{1}{x} ^ 3$
Derivative of $f \left(g \left(x\right)\right)$ using chain rule:
Let $u = x$
Then.
$\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{\mathrm{dy}}{\mathrm{du}} \cdot \frac{\mathrm{du}}{\mathrm{dx}}$
$\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{\mathrm{dy}}{\mathrm{du}} \left({u}^{-} 3\right) \cdot \frac{\mathrm{du}}{\mathrm{dx}} \left(u\right) = - 3 {u}^{-} 4 \cdot 1 = \frac{- 3}{x} ^ 4$
|
2019-06-25 08:08:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9921287298202515, "perplexity": 3752.977509028392}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999814.77/warc/CC-MAIN-20190625072148-20190625094148-00227.warc.gz"}
|
http://mathoverflow.net/questions/97324/bounding-entropy-in-terms-of-kl-divergence
|
## Bounding Entropy in terms of KL-Divergence
Let $h(X)$ be the differential entropy of a continuous random variable $X$ with density $f$, and let $Y$ be another continuous random variable with density $g$. If $KL(X\mid\mid Y)$ is the Kullback-Leibler Divergence between the two, and I know that $KL(X\mid\mid Y)<\alpha$, then can I say anything about $d(H(X),H(Y))$ where $d$ is any metric (e.g. euclidean, L1, etc.)?
I know that $KL(X\mid\mid Y) = H(X,Y)-H(X)$ where $H(X,Y)=\int f(x) \log g(x)dx$ is the cross-entropy. However, I haven't been able to do anything with this. I feel like I'm missing something pretty basic here, but haven't been able to make any progress, nor did I immediately find something in "Elements of Information Theory" (Cover and Thomas).
-
There are some issues with your notation and definition. $H(X,Y)$ usually denoted the joint entropy of $X$ and $Y$ and $KL(X\|Y)=\int f\log (f/g)=\int f\log f-\int f\log g$ when the integrals exist. With the given information one can say that $$|H(X)-H(Y)|<\alpha+\int f\log g+H(Y) \text{ when } H(X)\le H(Y)$$ and $$|H(X)-H(Y)|>-\alpha-\int f\log g-H(Y) \text{ when } H(X)>H(Y)$$ – Ashok May 19 2012 at 6:35 Thanks for the bounds! What notation would you use for cross-entropy? As far as I can tell, there isn't an established standard. I know that I'm overloading $H(X,Y)$, but this is what I've seen elsewhere. – Ben Charrow May 21 2012 at 15:53
|
2013-05-22 20:58:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9326421022415161, "perplexity": 88.24073287133672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702447607/warc/CC-MAIN-20130516110727-00042-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.sawaal.com/?page=5&sort=
|
Certification Questions
Q:
A woman starts walking from her village to a town. She walks 4.5 km North, then turns West and walks 2 km, then turns South and walks 4.5 km, then turns to her right and walks 6 km. Where is she now with reference to her starting position?
A) 4 km West B) 8 km East C) 4 km East D) 8 km West
Explanation:
1 35
Q:
Which of the following terms follows the trend of the given list?
PQQPPPPP, PPQQPPPP, PPPQQPPP, PPPPQQPP, PPPPPQQP, _______________.
A) QQPPPPPP B) PQQPPPPP C) PPQQPPPP D) PPPPPPQQ
Explanation:
0 35
Q:
If 18$6 = 6, 8$2 = 3 and 8$4 = 2, then find the value of 18$12 = ?
A) 3 B) -8 C) 4 D) 6
Explanation:
1 94
Q:
The following equation is incorrect. Which two signs should be interchanged to correct the equation?
15 x 4 + 10 - 10 ÷ 14 = 10
A) - and + B) + and ÷ C) ÷ and x D) x and -
Explanation:
0 82
Q:
In a certain code language, '-' represents '+', '+' represents 'x', 'x' represents '÷' and '÷' represents '-'. Find out the answer to the following question.
13 ÷ 18 - 5 + 10 x 2 = ?
A) 20 B) 48 C) 45 D) 42
Explanation:
0 53
Q:
In a certain code language, “MATE” is written as “41” and “LION” is written as “52”. How is “TARM” written in that code language?
A) 55 B) 53 C) 52 D) 54
Explanation:
0 57
Q:
From the given alternatives, select the word which CANNOT be formed using the letters of the given word.
Tribulations
A) Bastion B) Trust C) Ribbon D) Slit
Explanation:
1 20
Q:
Mayank remembers that the examination is after 18th December but before 21st December, while Suraj remembers that the examination is before 24thDecember but after 19th December. On which date of December is the examination?
A) 20 B) 21 C) 19 D) 22
|
2020-10-25 16:10:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6201499700546265, "perplexity": 4787.604998376578}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889574.66/warc/CC-MAIN-20201025154704-20201025184704-00157.warc.gz"}
|
http://www.physicsforums.com/showpost.php?p=3127596&postcount=4
|
View Single Post
P: 258
Inverse image of phi (totient)
Quote by math_grl I don't think there should be any confusion in my terminology but in case a refresher is needed check out http://en.wikipedia.org/wiki/Image_%...#Inverse_image It might also help make it clear that $$f: \mathbb{N} \rightarrow \phi(\mathbb{N})$$ where $$f(n) = \phi(n)$$ cannot have an inverse as it's onto but not injective. Other than that, yes, what I was asking if there was a way to find all those numbers that map to 14 (for example) under phi...
hi math-grl
so what you want is to find the n's such that
$$\varphi(n_1)=m_1$$
$$\varphi(n_2)=m_2$$
$$\varphi(n_3)=m_3$$
$$\varphi(n_4)=m_4$$
...
knowing only the m's, correct?
there is a conjecture related to it, although what you want is far more difficult than the conjecture
http://en.wikipedia.org/wiki/Carmich...ion_conjecture
|
2014-08-21 10:12:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7627796530723572, "perplexity": 514.6830649181097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500815861.64/warc/CC-MAIN-20140820021335-00311-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://blender.stackexchange.com/questions/5593/any-good-free-materials-libraries-online
|
# Any good free materials libraries online? [duplicate]
I'm wondering if there's any addons or online libraries of materials I can download. I've found one called online_mat_lib, but then discovered it was from 2008 and thought by now something like that might already be integrated into Blender.
• online_mat_lib is an addon available in addons_contrib, while not yet included with official releases it is included with most other builds. It's library is created by exporting cycles node setups to an xml file which can be added to the library or shared individually. This BA thread has a more recent discussion. – sambler Dec 17 '13 at 1:56
• @gandalf3 Definitely. – TheMinecraftMan757 Jun 13 '15 at 14:15
• I found the site www.blender-materials.com that is working too, and has blender internal render materials that are more likely to be usable in game development (not BGE), but I cant find the license for their files in any place .. – Aquarius Power Jan 6 '16 at 3:09
• @Gandalf3 +1 to All : feel free to read and feed BSE ressources where you can find and share ressources for blender! – Bithur Aug 14 '18 at 18:21
Some resources:
For cycles you can find many materials in Blendswap. Here you have some users who provide many materials, most of them procedural materials without images, just with node setups:
You may want to see this two things:
• Cycles_Matlib, is a blend file with many materials.
The good thing is that Cycles_Matlib include files to support Matlibvx so you can use both. They are easy to install (only unzip in addons folder).
Also note that are materials for cycles. But it also have a old version that I think is for blender internal.
|
2020-02-24 03:32:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26429206132888794, "perplexity": 2801.7552986478745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145869.83/warc/CC-MAIN-20200224010150-20200224040150-00060.warc.gz"}
|
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Modules_and_Websites_(Inorganic_Chemistry)/Descriptive_Chemistry/Elements_Organized_by_Block/3_d-Block_Elements/Group_04%3A_Transition_Metals/Chemistry_of_Titanium
|
# Chemistry of Titanium
Discovered independently by William Gregor and Martin Klaproth in 1795, titanium (named for the mythological Greek Titans) was first isolated in 1910. Gregor, a Cornish vicar and amateur chemist isolated an impure oxide from ilmenite ($$FeTiO_3$$) by treatment with $$HCl$$ and $$H_2SO_4$$.Titanium is the second most abundant transition metal on Earth (6320 ppm) and plays a vital role as a material of construction because of its:
• Excellent Corrosion Resistance
• High Heat Transfer Efficiency
• Superior Strength-To-Weight Ratio
For example, when it's alloyed with 6% aluminum and 4% vanadium, titanium has half the weight of steel and up to four times the strength.
## Uses of titanium
Titanium is a highly corrosion-resistant metal with great tensile strength. It is ninth in abundance for elements in the earth's crust. It has a relatively low density (about 60% that of iron). It is also the tenth most commonly occurring element in the Earth's crust. That all means that titanium should be a really important metal for all sorts of engineering applications. In fact, it is very expensive and only used for rather specialized purposes. Titanium is used, for example:
• in the aerospace industry - for example in aircraft engines and air frames;
• for replacement hip joints;
• for pipes, etc, in the nuclear, oil and chemical industries where corrosion is likely to occur.
Titanium is very expensive because it is awkward to extract from its ores - for example, from rutile, $$TiO_2$$. Whilst a biological function in man is not known, it has excellent biocompatibility--that is the ability to be ignored by the human body's immune system--and an extreme resistance to corrosion. Titanium is now the metal of choice for hip and knee replacements.
## Titanium Extraction
Titanium cannot be extracted by reducing the ore using carbon as a cheap reducing agent, like with iron. The problem is that titanium forms a carbide, $$\ce{TiC}$$, if it is heated with carbon, so you don't get the pure metal that you need. The presence of the carbide makes the metal very brittle. That means that you have to use an alternative reducing agent. In the case of titanium, the reducing agent is either sodium or magnesium. Both of these would, of course, first have to be extracted from their ores by expensive processes.
The titanium is produced by reacting titanium(IV) chloride, $$\ce{TiCl4}$$ - NOT the oxide - with either sodium or magnesium. That means that you first have to convert the oxide into the chloride. That in turn means that you have the expense of the chlorine as well as the energy costs of the conversion. High temperatures are needed in both stages of the reaction.
Titanium is made by a batch process. In the production of iron, for example, there is a continuous flow through the Blast Furnace. Iron ore and coke and limestone are added to the top, and iron and slag removed from the bottom. This is a very efficient way of making something. With titanium, however, you make it one batch at a time. Titanium(IV) chloride is heated with sodium or magnesium to produce titanium. The titanium is then separated from the waste products, and an entirely new reaction is set up in the same reactor. This is a slow and inefficient way of doing things. Traces of oxygen or nitrogen in the titanium tend to make the metal brittle. The reduction has to be carried out in an inert argon atmosphere rather than in air; that also adds to costs.
Wilhelm J. Kroll developed the process in Luxemburg around the mid 1930's and then after moving to the USA extended it to enable the extraction of Zirconium as well. Titanium ores, mainly rutile ($$\ce{TiO2}$$) and ilmentite ($$\ce{FeTiO3}$$), are treated with carbon and chlorine gas to produce titanium tetrachloride.
$\ce{TiO2 + Cl2 \rightarrow TiCl4 + CO2}$
### Fractionation
Titanium tetrachloride is purified by distillation (Boiling point of 136.4) to remove iron chloride.
### Reduction
Purified titanium tetrachloride is reacted with molten magnesium under argon to produce a porous “titanium sponge”.
$\ce{TiCl4 + 2Mg \rightarrow Ti + 2MgCl2}$
### Melting
Titanium sponge is melted under argon to produce ingots.
## Titanium Halides
Titanium(IV) Halides
Formula Color MP BP Structure
TiF4 white - 284 fluoride bridged
TiCl4 Colorless -24 136.4 -
TiBr4 yellow 38 233.5 hcp I- but essentially monomeric cf. SnI4
TiI4 violet-black 155 377 hcp I- but essentially monomeric cf. SnI4
## Preparations
They can all be prepared by direct reaction of Ti with halogen gas (X2). All are readily hydrolyzed. They are all expected to be diamagnetic.
Titanium(III) halides
Formula Color MP BP m (BM) Structure
TiF3 blue 950d - 1.75 -
TiCl3 violet 450d - - BiI3
TiBr3 violet - - - BiI3
TiI3 violet-black - - - -
Preparations:
They can be prepared by reduction of TiX4 with H2.
Titanium Oxides and Aqueous Chemistry
Titanium oxides
Formula Color MP m (BM) Structure
TiO2 white 1892 diam. rutile - Refractive Index 2.61-2.90 cf. Diamond 2.42
## Preparations
obtained from hydrolysis of TiX4 or Ti(III) salts.
TiO2 reacts with acids and bases.
In Acid: TiOSO4 formed in H2SO4 (Titanyl sulfate)
In Base: MTiO3 metatitanates (eg Perovskite, CaTiO3 and ilmenite, FeTiO3) M2TiO4 orthotitanates.
Peroxides are highly colored and can be used for Colorimetric analysis.
pH <1 [TiO2(OH)(H2O)x]+
pH 1-2 [(O2)Ti-O-Ti(O2)](OH) x2-x; x=1-6
[Ti(H2O)6]3+ -> [Ti(OH)(H2O)5]2+ + [H+] pK=1.4
TiO2+ + 2H+ + e- -> Ti3+ + H2O E=0.1V
## Representative complexes
TiCl4 is a good Lewis acid and forms adducts on reaction with Lewis bases such as;
2PEt3 -> TiCl4(PEt3)2
2MeCN -> TiCl4(MeCN)2
bipy -> TiCl4(bipy)
Solvolysis can occur if ionisable protons are present in the ligand;
2NH3 -> TiCl2(NH2)2 + 2HCl
4H2O -> TiO2.aq + 4HCl
2EtOH -> TiCl2(OEt)2 + 2HCl
TiCl3 has less Lewis acid strength but can form adducts also;
3pyr -> TiCl3pyr3
### Conversion of titanium oxide into titanium chloride
The ore rutile (impure titanium(IV) oxide) is heated with chlorine and coke at a temperature of about 900°C.
$TiO_2 + 2Cl_2 + 2C \longrightarrow TCl_4 + 2CO$
Other metal chlorides are formed as well because of other metal compounds in the ore. Very pure liquid titanium(IV) chloride can be separated from the other chlorides by fractional distillation under an argon or nitrogen atmosphere. Titanium(IV) chloride reacts violently with water. Handling it therefore needs care and is stored in totally dry tanks.
### Reduction of the titanium chloride
Reduction by sodium: The titanium(IV) chloride is added to a reactor in which very pure sodium has been heated to about 550°C - everything being under an inert argon atmosphere. During the reaction, the temperature increases to about 1000°C.
$\ce{TiCl4 +4Na \longrightarrow Ti + 4NaCl }$
After the reaction is complete, and everything has cooled (several days in total - an obvious inefficiency of the batch process), the mixture is crushed and washed with dilute hydrochloric acid to remove the sodium chloride.
## Reduction by magnesium
This is the method used in the rest of the world. The method is similar to using sodium, but this time the reaction is:
$\ce{TiCl4 +4Mg \longrightarrow Ti + 2MgCl2 }$
The magnesium chloride is removed from the titanium by distillation under very low pressure at a high temperature.
|
2020-07-09 17:42:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6184187531471252, "perplexity": 5359.707063840658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655900614.47/warc/CC-MAIN-20200709162634-20200709192634-00204.warc.gz"}
|
https://electronics.stackexchange.com/questions/84912/controlling-a-4-wired-fan-pwm-signal-using-arduino-allows-only-two-settings
|
# Controlling a 4-wired fan PWM Signal using Arduino allows only two settings
I have connected my pwm pin to my arduino like in this tutorial
It's working properly. I can read and set the speed using a sketch from this website:
int fanPulse = 0;
unsigned long pulseDuration;
void setup()
{
Serial.begin(9600);
pinMode(fanPulse, INPUT);
digitalWrite(fanPulse,HIGH);
}
pulseDuration = pulseIn(fanPulse, LOW);
double frequency = 1000000/pulseDuration;
Serial.print("pulse duration:");
Serial.println(pulseDuration);
Serial.print("time for full rev. (microsec.):");
Serial.println(pulseDuration*2);
Serial.print("freq. (Hz):");
Serial.println(frequency/2);
Serial.print("RPM:");
Serial.println(frequency/2*60);
}
void loop()
{
analogWrite(3,20);
delay(5000);
analogWrite(3,50);
delay(5000);
analogWrite(3,100);
delay(5000);
analogWrite(3,200);
delay(5000);
analogWrite(3,255);
delay(5000);
}
It seems to me, like I can only enter values higher than 127 and values lower than 127. There are no steps between them. The fan won't turn slower when I go from 126 to 0 or from 128 to 255.
Some results I get:
100:
pulse duration:19058
time for full rev. (microsec.):38116
freq. (Hz):26.00
RPM:1560.00
0:
pulse duration:19160
time for full rev. (microsec.):38320
freq. (Hz):26.00
RPM:1560.00
127:
pulse duration:9032
time for full rev. (microsec.):18064
freq. (Hz):55.00
RPM:3300.00
255:
pulse duration:9151
time for full rev. (microsec.):18302
freq. (Hz):54.50
RPM:3270.00
Is there some mistake I've made or is it possible my fan won't accept precise values? Can you recommend any 4 wired fans I could use for this or some other way? I thought about using a SG2524N to controll a two wired motor, but I'm not experienced with this. Thanks for your advice.
• Are you sure that pin 3 is a PWM channel on your Arduino? Use a LED to test. Oct 10, 2013 at 1:19
• I'm now using SoftPWM, still no change. I'm going to switch the fan and try again Oct 10, 2013 at 8:20
|
2022-05-25 23:26:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2419530153274536, "perplexity": 4486.782656300903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662594414.79/warc/CC-MAIN-20220525213545-20220526003545-00604.warc.gz"}
|
http://openstudy.com/updates/4e039e2c0b8b35337e29d251
|
## anonymous 5 years ago What is the derivative of 4x^3 + 5x + 10? I'm just starting calc.
1. siddharth
Derivative of an expression of the form of $ax^n$ is $nax^{n-1}$
2. anonymous
$(4\times3)x^{3-1} + (5\times1)x ^{1-1}$
3. anonymous
the basic rule of derivative applies - where you take the exponent and multiply that to coefficient. Exponent you originally have will be subtracted by 1 which results in x^2 and x^0 term above.
4. anonymous
thanks guys this is really helping
5. siddharth
also, d(x+y) = d(x) + d(y)
6. anonymous
be sure to practice - that way you understand the application of the rule
7. anonymous
12x^2+5
|
2016-12-05 12:35:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7494509220123291, "perplexity": 2039.829480516363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541696.67/warc/CC-MAIN-20161202170901-00385-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://www.roelpeters.be/dataform-different-datasets-or-tables-in-staging-and-production/
|
Home » Dataform: Change datasets with branches
# Dataform: Change datasets with branches
Tags:
The reason people love tools like Dataform so much, is that it allows them to automate parts of the ELT workflow. In this blog post we will set our destination dataset, depending on the branch we’re running the definition on.
A really interesting use case is to keep the resulting tables from your scheduled runs on your staging branch(es) separately from the tables created on your production branch. Many possibilities come to mind:
1. Set a value in a column — e.g. “staging” and “production” (not within the scope of this blog post)
2. Add a prefix or a suffix to your tables and views — e.g. “stg-tablename” and “prd-tablename”
3. Use different BigQuery datasets (= Dataform schemas)
Without a doubt, the easiest way to get one of these three solutions done is via environments.
## Dataform Environments
Environments are a wrapper around your codebase. Just like environment variables within an operating system or container, they allow you to manipulate and set variables that work through your code, everywhere you use them.
Let’s start with dataform.json: As you can see, I set the defaultSchema parameter to “stg”, which will be default BigQuery dataset where tables will be created or updated.
{
"warehouse": "bigquery",
"defaultSchema": "stg", // The default dataet in bigquery is set to stg
"assertionSchema": "dataform_assertions",
"defaultDatabase": "YOUR_DATABASE"
}
By creating an environment within environments.json, one can overwrite the settings from dataform.json, using the configOverride parameter. This is the environment named production, that I created for the master branch. When a job runs on this branch, output will not go to the “stg” dataset, but to the “prd” dataset.
{
"environments": [
{
"name": "production",
"configOverride": {
"defaultSchema": "prd"
},
"gitRef": "master"
}
]
}
Although you don’t need to set the schema explicitly, because we set the default in dataform.json, one can still do it. Within a definition, one can refer to the settings via the dataform object.
config {
schema: dataform.projectConfig.defaultSchema, // Optional
name: "YOUR_TABLE"
}
From this example, it is clear how one can use custom variables to create code and query manipulations that depends on the branch or the environment.
### Say thanks, ask questions or give feedback
Technologies get updated, syntax changes and honestly… I make mistakes too. If something is incorrect, incomplete or doesn’t work, let me know in the comments below and help thousands of visitors.
|
2021-09-22 17:14:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2591858506202698, "perplexity": 3441.711175017438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057371.69/warc/CC-MAIN-20210922163121-20210922193121-00259.warc.gz"}
|
https://www.wiskundeleraar.nl/m/page_m.asp?nummer=7026
|
\eqalign{ & x + 2\sqrt x = 8 \cr & 2\sqrt x = - x + 8 \cr & 4x = ( - x + 8)^2 \cr & 4x = x^2 - 16x + 64 \cr & x^2 - 20x + 64 = 0 \cr & (x - 4)(x - 16) = 0 \cr & x = 4 \vee x = 16\,\,(v.n.) \cr & x = 4 \cr}
\eqalign{ & \sqrt {x^2 - 4} + x + 2 = 0 \cr & \sqrt {x^2 - 4} = - x - 2 \cr & x^2 - 4 = ( - x - 2)^2 \cr & x^2 - 4 = x^2 + 4x + 4 \cr & - 4 = 4x + 4 \cr & 4x = - 8 \cr & x = - 2 \cr}
|
2021-12-09 12:52:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9675958156585693, "perplexity": 161.6877561639847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964364169.99/warc/CC-MAIN-20211209122503-20211209152503-00253.warc.gz"}
|
https://bioinformatics.stackexchange.com/questions/4579/how-to-convert-files-to-adam-format
|
# How to convert files to ADAM format?
I would like to convert BAM and VCF files to ADAM format.
How do I do that?
These transformations are quite simple with the adam-submit script packaged with ADAM.
Transformation of BAM files
adam-submit -- transformAlignments sample.bam sample.alignments.adam
Transformation of VCF files
adam-submit -- transformVariants sample.vcf.gz sample.variants.adam
• Could you provide a link to the package? It seems like it is this repository but I'm not sure.
– llrs
Jun 28, 2018 at 7:07
|
2022-06-25 14:59:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29111433029174805, "perplexity": 9271.317828742602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103035636.10/warc/CC-MAIN-20220625125944-20220625155944-00696.warc.gz"}
|
http://crypto.stackexchange.com/questions?page=3&sort=active&pagesize=50
|
# All Questions
75 views
6k views
### OTP from Sony BIOS password recover
From Dogbert's blog: Sony has a line of laptops ("Vaio") which compete mainly in the high value market segments. They implemented a master password bypass which is rather sane in comparison to the ...
52 views
### RSA Protocol behind Yaksha Security System
So I'm reading over the Yaksha Security System and see it is based on the RSA cryptosystem and a centralized server, easy enough. What I'm slightly confused on is the math behind the related keys. It ...
95 views
### Recovering El Gamal secret key from signatures [duplicate]
Assume out of a set of ElGamal signatures, I've discovered that two have the same y i.e. $signature_{1}$ on $m_{1}$ = $(y,s_{1})$ and $signature_{2}$ on $m_{2}$ = $(y,s_{2})$. The public keys for ...
46 views
### Security considerations on using key derivation to create one-time keys for block ciphers
I'm planning to do some block encryption with AES and Blowfish (both with 256 Bit keys) chained. Master key generation: A 32 byte long persistent (over many years) master key is derived from a ...
83 views
### Key management protocol for end-to-end security on Advanced Metering Infrastructure
I am currently working on a research on end-to-end security implementation for Advanced Metering Infrastructure (AMI). Pardon me if I a bit off topic or not asking the right question as I am ...
36 views
### What is a good way to program the finding the determinant of a 2x2 matrix? [migrated]
I have a good understanding of how to do the Hill cipher on paper but putting it into program form is somewhat of a problem. Finding the the determinant is the thing I'm having problem with. On ...
3k views
### RSA Proof of Correctness
Can anyone provide an extended (and well explained) proof of correctness of the RSA Algorithm? And why is it needed? I can't say that this or this helped me much, I'd like a more detailed and newbie ...
64 views
### Bae64 with shuffled alphabet
I have a base64 'cipher' text, I know that in clear text is a hidden xml document (I do not know nothing about its structure), but the base64 aplhabet was somehow shuffled. Is there any smart way, how ...
65 views
### Why do we have fixed output length in the algorithm SHA1
Why do we have fixed output length in the algorithm SHA1 . is there any explanation about that
24 views
### How to determine which key derivation function used for password?
Suppose an attacker collected a password hash file. After performing rainbow tables/dictionary attacks they choose brute force attacks. So they try guessing as many passwords as they can by running ...
19 views
### Generate CA with aes256 passphrase protected DSA key
I want to create my own Certificate Authority with an DSA key. I figured out to create it with: ...
29 views
### S-Boxes and SP-Network
I work on an exercise and I have to study functions as : $f : \mathbb{F}_q \rightarrow \mathbb{F}_q$ with $q = 2^n$ which can define a S-Box for SP-Networks. There is a question that I can't answer ...
53 views
### Looking for a secure PRNG that I can implement in hardware
I'm trying to implement a (sort of) simple PRNG in hardware for fun. My idea would be to allow a user to enter a key using a keypad (or some dip switch settings that can be set and hidden) and to ...
62 views
### Blowfish ECB mode: Tools for known-plaintext attack?
I'm currently dealing with multiple blowfish-encrypted files that share the same key. All are encrypted using ECB mode judging from their appearance. I don't know what the key is but I know 64 byte ...
50 views
### RSA-blind signatures vs. DSA-blind signatures
I see that the RSA blind signature scheme tends to be implemented more often than others, e.g., DSA blind signatures. Isn't the RSA blind signature scheme a 'lesser' scheme, if you may, compared to ...
51 views
### Recovering a constant salt for MD5 [duplicate]
I have STRING and MD5(STRING+SALT) for any infinitely many STRING's. Is their any way that I can recover the SALT itself? (SALT is constant) I actually know SALT, but this is an intellectual exercise. ...
80 views
### Probability of factoring keys as a function of bit length
kinda new here, I had a question pertaining to someone being able to factor one's RSA keys through the GCD. Anyways the question goes that there are two people: A and B. A makes his/her private key as ...
91 views
### What are the potential (major) flaws in this security scheme?
It’s been a few years since I read anything involving crypto. I’ve come across this scheme and I wonder how secure it is: Alice generates an RSA keypair (we assume Alice is using proper random ...
50 views
### Reduced message expansion in NTRU
In the original NTRU proposal from 1998, it says on page 16: It may be worth mentioning, though, that there is a simple. masking technique that can be used to significantly reduce message ...
I'm trying to get a grip on how Schnorr signature works. Suppose Alice sends Trent a tuple $(P, M)$, which contains a payload and a message to be signed by him. She then passes the certificate to Bob ...
|
2014-12-28 01:04:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7219846844673157, "perplexity": 2321.791510751682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447554429.44/warc/CC-MAIN-20141224185914-00061-ip-10-231-17-201.ec2.internal.warc.gz"}
|
http://cms.math.ca/Events/winter00/abs/DP.html
|
CMS Doctoral Prize / Prix de doctorat
(Organizers)
STEPHEN ASTELS, University of Georgia, Athens, Georgia 30602, USA
Cantor sets and continued fractions
Let C be the middle-third Cantor set constructed from the interval [0,1]. Although C is rather sparse (the Hausdorff dimension of C is log2 / log3), it can be show that C+C = [0,2], where we are considering the point-wise sum of the sets. In this talk we will examine sums, differences, products and quotients of more general types of Cantor sets. We will apply these results to certain problems arising in Diophantine approximation. For any set B of positive integers define F(B) to be the set of numbers
F(B) = {[t,a1,a2,...]; t Î Z,ai Î B for i ³ 1}
where [t,a1,a2,...] denotes the continued fraction t+1/(a1+1/(a2+\dotsm)). Let k be a positive integer and for i = 1,...,k let Bi be a set of positive integers. We will give conditions under which
F(B1)±...±F(Bk) = R.
|
2015-05-04 23:08:10
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8502130508422852, "perplexity": 910.0410191585152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430455160771.74/warc/CC-MAIN-20150501043920-00055-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/3185602/vectorial-and-parametric-equation
|
# Vectorial and parametric equation
I was solving some vectors exercises but I came across with some doubts about them. I don't know how to do this exercise, so I would appreciate some help. Thanks.
1) Find a parametric and a vectorial equation for a straight line parallel to the $$y$$ axis and knowing that it intersects the straight line defined by $$2y+3=0$$
I mean, I know that $$2y+3=0$$ is equal to $$y=-3÷2$$, so any line parallel to $$y$$ will intersect it. But I don't know how to write it as a parametric and vectorial equation.
• Kudos for separating out one of the two separate questions that you asked in your previous post, but you really should edit your questions to add context instead of reposting them. – amd Apr 12 at 23:21
• Excuse me, but I edited it. If you compare both of them, you should realize that I added some information about my doubt. – AaronTBM Apr 13 at 0:22
• You added a couple of sentences to this new copy of the question that you should instead have edited into the original one. – amd Apr 13 at 1:05
• Ok. I'll take your advice. – AaronTBM Apr 13 at 1:26
• Now, can you help me with my question? – AaronTBM Apr 13 at 1:27
|
2019-05-21 07:44:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6945360898971558, "perplexity": 215.48100009782817}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256281.35/warc/CC-MAIN-20190521062300-20190521084300-00540.warc.gz"}
|
https://leastaction.wordpress.com/category/string-theory/
|
# Least Action
Nontrivializing triviality..and vice versa.
## Fun with Homology
There is a theorem (currently being attributed to Wikipedia, but I’m sure I can do better given more time) which states that
All closed surfaces can be produced by gluing the sides of some polygon and all even-sided polygons (2n-gons) can be glued to make different manifolds.
Conversely, a closed surface with $n$ non-zero classes can be cut into a 2n-gon.
Two interesting cases of this are:
1. Gluing opposite sides of a hexagon produces a torus $T^2$.
2. Gluing opposite sides of an octagon produces a surface with two holes, topologically equivalent to a torus with two holes.
I had trouble visualizing this on a piece of paper, so I found two videos which are fascinating and instructive, respectively.
The two-torus from a hexagon
The genus-2 Riemann surface from an octagon
I would like to figure out how one can make such animations, and generalizations of these, using Mathematica or Sagemath.
There are a bunch of other very cool examples on the Youtube channels of these users. Kudos to them for making such instructive videos!
PS – I see that $\LaTeX$ on WordPress has become (or is still?) very sloppy! 😦
Written by Vivek
October 20, 2016 at 21:57
## Errata for Basic Concepts of String Theory by Blumenhagen, Lüst and Theisen
This is an unofficial errata for the book Basic Concepts of String Theory by Ralph Blumenhagen, Dieter Lüst and Stefan Theisen. I couldn’t find an official errata, but I’ll probably discontinue this at some point when I do run into one.
Chapter 2: The Classical Bosonic String
• Page 8. The line below equation 2.3. There should be two dots, one each on $x^\mu$ and $x^\nu$ in the definition of $\dot{x}^2$.
Chapter 8: The Quantized Fermionic String
• Page 213: Equation 8.56. There is only one charge conjugation matrix in odd dimension d = 2n+1, either $C_+$ or $C_-$. To find out which one it is, for odd $d$, determine $d(d-1)/2$: if this is even, $C_+$ exists; if it is odd, $C_-$ exists. So, equation 8.56 is wrong: one should use $C_-$ for odd $n$, and $C_+$ for even $n$.To derive this criterion, compute $C \gamma_c C^{-1}$ and observe that it equals $(-1)^{d(d-1)/2} \gamma_c$ in general, which determines whether $C = C_+$ or $C = C_-$. For a quick list of charge conjugation matrices in various dimensions and their symmetry properties, see page 11 of http://www.nikhef.nl/~t45/ftip/AppendixE.pdf.
Chapter 14: String Compactifications
• Page 510: Equation 14.262. The $e^{e}$ on the right-hand side in the local Lorentz transformation of the vielbein should be $e^{b}$.
Chapter 18: String Dualities and M-Theory
• Page 690: The expression for $\tilde{F}^{(p+2)}$ in the paragraph below equation (18.26) has extra indices. It is the contraction of $\tilde{F}_{M_{0}\ldots M_{p+1}}$ with $\tilde{F}^{M_{0}\ldots M_{p+1}}$.
Written by Vivek
December 27, 2014 at 15:05
Posted in Errata, String Theory
Tagged with
## Review Articles – Particle Physics, String Theory, Supersymmetry and Supergravity
[to be updated]
A list of useful reviews on various aspects of string theory, branes, etc. is at http://www.nuclecu.unam.mx/~alberto/physics/stringrev.html. There are links to TASI lectures as well as review articles by prominent string theorists.
Another useful list of string theory papers and reviews is http://web.mit.edu/redingtn/www/netadv/Xstring.html.
Additionally, a list of books and useful review articles for supersymmetry and supergravity is at http://www.stringwiki.org/wiki/Supersymmetry_and_Supergravity.
A useful list of references for Collider Physics is at http://tigger.uic.edu/~keung/me/class/collider/web-docs.html.
Written by Vivek
December 3, 2014 at 15:07
## Divergence Theorem in Complex Coordinates
The divergence theorem in complex coordinates,
$\int_R d^2{z} (\partial_z v^z + \partial_{\bar{z}}v^{\bar{z}}) = i \oint_{\partial R}(v^z d\bar{z} - v^{\bar{z}}dz)$
(where the contour integral circles the region R counterclockwise) appears in the context of two dimensional conformal field theory, to derive Noether’s Theorem and the Ward Identity for a conformally invariant scalar field theory (for example), and is useful in general in 2D CFT/string theory. This is equation (2.1.9) of Polchinski’s volume 1, but a proof is not given in the book.
This is straightforward to prove by converting both sides separately to Cartesian coordinates $(\sigma^1, \sigma^2)$, through
$z = \sigma^1 + i \sigma^2$
$\bar{z} = \sigma^1 - i \sigma^2$
$\partial_z = \frac{1}{2}(\partial_1 - i \partial_2)$
$\partial_{\bar{z}} = \frac{1}{2}(\partial_1 + i \partial_2)$
$d^2 z = 2 d\sigma^1 d\sigma^2 = 2 d^2 \sigma$
and using the Green’s theorem in the plane
$\oint_{\partial R}(L dx + M dy) = \int \int_{R} \left(\frac{\partial M}{\partial x} - \frac{\partial L}{\partial y}\right) dx dy$
with the identifications
$x \rightarrow \sigma^1, y \rightarrow \sigma^2$
$L \rightarrow -v^2, M \rightarrow v^1$
There is perhaps a faster and more elegant way of doing this directly in the complex plane, but this particular line of reasoning makes contact with the underlying Green’s theorem in the plane, which is more familiar from real analysis.
Written by Vivek
October 12, 2014 at 20:47
|
2017-06-26 06:54:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 34, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.520420253276825, "perplexity": 723.5001095239382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320685.63/warc/CC-MAIN-20170626064746-20170626084746-00285.warc.gz"}
|
https://www.physicsforums.com/threads/kirchoffs-law-parallel-circuit-with-only-i-and-r-given-solvable.758012/
|
# Kirchoffs Law Parallel Circuit-with only I and R given.Solvable?
1. Jun 14, 2014
### PhysicStragler
Hi all
Im doing my first ever Physics course, and ive hit a brick wall with Kirchoffs Law.
I understand what the laws are i just cant seem to get the right answers on a particular question.
I then decided to cheat(because i exhausted all my options in trying to solve it) which i really dont like doing.My tutor then said I also need to show my workings out,and so this is where im stuck ive tried everything i could find to do with Kirchoffs Laws but still no luck.
Ive filled in the template the best i could, but if anymore information is needed please state.
Thanks a ton to anyone who can and is willing to help.
1. The problem statement, all variables and given/known data
The particular Question is number 4 on Teaching Advanced Physics 117- 5: Questions on Kirchhoff's laws.(https://www.google.co.uk/search?q=kirchoffs+law+TAP&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-USfficial&client=firefox-a&channel=sb&gfe_rd=cr&ei=9ICcU___MajY8gfS6oCgDA)
There are 6 total resistors, 1 on first reciprocal,3 on the second and 2 on the bottom.
The question is "Ammeter A5 reads 3 A. All the resistors have the same value of 10 Ω. What are the readings on ammeters A1 to A8 and what is the terminal voltage of the battery?"
2. Relevant equations
Tried Ohms Law and Kirchoffs laws.
3. The attempt at a solution
I tried using Ohms law i.e V=IR, I=V/R etc ive also used some equations i saw that was saying about to find the total resistance use 1/R1+1/R2 etc also tried using Kirchoffs Law.
2. Jun 14, 2014
### Staff: Mentor
What was the result?
The first thing you can calculate is the source voltage (or, to make smaller steps, the voltages at the two lowermost resistors). That allows to calculate everything else then.
3. Jun 14, 2014
### PhysicStragler
The result was different to what the correct answers were.
Ok so if ive got this right what youve saying is the voltage is 60V?
V=IR I=3A R=(10Ω+10Ω)=20Ω
3A x 20Ω=60V?
But then how do you work out separate reciprocals voltage? Because i dont understand how this can work:
"Kirchhoff's first law
The total current into a junction = the total current out of the junction.
Using the convention that currents leaving a junction are the opposite sign to currents entering the junction, the first law may be expressed as the following equation:
I1 + I2 + I3 + … = 0 where I1, I2, I3 etc represent the currents in the branches connected to the junction."
Because to me using that 3A goes everywhere. so
Thank You for helping me:thumbs:
4. Jun 14, 2014
### Staff: Mentor
This law does not have any voltages in it?
No. Why should it?
You have three parallel lines, all those lines are connected at their ends, so they all see the same voltage - 60 V. This allows to determine their currents.
5. Jun 15, 2014
### BvU
Hello Stragler, and welcome to PF.
You have a tendency to make things difficult. For yourself (first ever Physics course = Teaching Advanced Physics ?) and for others (I had to drill to get down to the exercise you want to get assistance for). Personally I had to puzzle over your use of the word reciprocal (In my book that is an inverse).
On top of that you seem to be in (lawful?) possession of the 'right' answers, but don't mention them, nor your own best guesses.
Never mind, just because the title is Kirchhoff that doesn't mean you have to solve the problem with that law and with nothing else! It can be applied to conclude that A1 = A8 = A7 and that A3 = A4 (no branching), that A2 + A6 = A7 and that A4 + A5 = A6 and that's about all for Gustav Kirchhoff 1.
The lines are (ideal) conductors, meaning voltage over the 1 resistor branch, the 2 resistor branch and over the three resistor branch are all equal to the battery voltage. Uncle Georg Simon's law helped you find the value. Same law helps you find A2 and I don't think finding A3 is that difficult with it either. Done !
6. Jun 15, 2014
### PhysicStragler
Hi Bvu
I did a bit of physics in school but have forgotten it.
The reason i didnt mention my own best guesses is because i had so many ive forgotten them all.
I called them reciprocals because somewhere in my sheet given it mention about adding up the separate reciprocals,but i guess i should have called them brances.
I have now since solved it (at 2am GMT)kinda felt like i had a eureka moment :surprised and no cigar to celebrate and now onto the next one.
"You have a tendency to make things difficult. For yourself"
Tell me about it, im always scared of getting the wrong answer on what im doing that, i feel i need to have a complete understanding of what im doing.But i always find it hard when one book explains something in one way and another source explains it different to a way i can understand it,so i end up coming to my own "understanding/conclusion" of it"See its like now ive solved it and my workings out are from what i can see right,i feel like ive done it wrong and need to do it a different way:shy:.
Like in maths at school i used to suck at it so much, but then i had an american teacher (im british) who explained/taught the subject differently and i felt like a smart ***.
You can probably tell by my name i aint very good at physics,but i have to pass it to go to uni, which is why is a higher level course.
7. Jun 15, 2014
### BvU
No need to put yourself down (Quoting good old Bill: what's in a name). Im too prejudiced to even reconsider that physics is the one and only science, but I had fun long ago subtly pointing out the relevance of physics even to future medics and vets. And they had fun too!
Apart from that, exercising the little grey cells is a good thing to do for everyone.
And if you are stubborn enough to hold out even till 02:00 in a weeked you have the mettle to go far, no matter what !
8. Jun 15, 2014
### PhysicStragler
Thanks,lots of people keep telling me to not put myself down,but its one of those unhealthy habits thats hard to break, especially since i got bullied all through my childhood, and still feel stupid.
But anyway,i do kinda like physics,some of it i believe some im not so sure off.I do however sometimes have the urge to study it in my spare time,just annoying when i read up on a theory and itll just say "this is how it is" without explaining why it is,why it is.
Haha call it stubbornness, worry, anxiety,frustration or just plain old being P***** 0**,but its strange how i seem to work best in the early hours, i guess it must be to do with my brain chemistry changing/my neurons that control my unsure thoughts/hindering thoughts going to sleep :rofl:
Thank You:thumbs:
9. Oct 1, 2016
### chocolatePI
what are the calculated answers to this question?
10. Oct 1, 2016
### Staff: Mentor
Hi chocolatePI,
Problem solutions won't be handed out here. This thread is over two years old so it should probably be considered "retired". If you are working on the same problem and get stuck you can start a new thread and ask for help. You'll have to show what you've tried.
11. Oct 1, 2016
### epenguin
The essence of how to solve this is in the last paragraph of #4 so... oh.
You realise chocolatepi the OP. has not been seen since June 2014?
|
2017-10-24 04:17:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48336470127105713, "perplexity": 1556.9351076920834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828134.97/warc/CC-MAIN-20171024033919-20171024053919-00640.warc.gz"}
|
http://cantuscontracantum.blogspot.com/2011/02/typesetting-music-lilypond-and-latex.html
|
## Wednesday, February 16, 2011
### Typesetting music: lilypond and latex
I made it through the first four exercises before I flipped ahead and thought, "wow, there a lot of exercises. It would be nice if someone had mad a workbook to accompany the text." Then I thought, "I should make a workbook to accompany the text!" Then I went off the deep-end figuring out how to typeset music in lilypond and LaTeX.
I've now typeset the first 60 exercises-- the two-voices section of the book. Check out a sample page in PDF-- it looks cool! I'm going to stop, now, for fear of burning out. I'll go back to working on the music, and typeset three-voice counterpoint after I finish two-voice. Here are a few of the things I've learned:
To integrate lilypond scores into a LaTeX document:
First, in your LaTeX document, reference the file with your score, e.g:
\lilypondfile{WB.1.2.E.a.ly}
Then, when you want to compile the typeset file (mine is called "WORKBOOK.tex") do these steps:
`\$ mkdir out\$ lilypond-book --pdf --output=out WORKBOOK.tex\$ cd out\$ pdflatex WORKBOOK.tex`
This is pretty self-explanatory: lilypond-book creates a new TeX document, converting the score sources into images and LaTeX code, then dumps it into the "out" directory.
Naming conventions:
I've settled on this naming convention, to help me keep track of proliferating lilypond files:
<species>.<voices>.<mode>.<above/below>
For example: 1.2.E.b.ly
Meaning first species, two voices, phrygian mode, counterpoint below the cantus firmus.
Because Fux re-uses the same cantus firmi throughout the book, I can re-use the same scores five times in each section of the workbook, once for each species. Including the species number helps me keep track of completed exercises, though.
Landscape formatting in LaTeX:
Including "landscape" as an option under \documentclass will format landscape. If you use pdflatex to compile the source, it handles everything. If you're using latex to compile the source, you've got to tell latex to respect the landscape option. And then you have to tell dvips to respect the landscape option. Pdflatex is the way to go.
Empty bars in lilypond:
Since I'm making a workbook, I need blank staves that I'll fill in by hand. Lilypond freaks out about bars with no notes in them, though. The way around is to fill in every bar with a whole note, and then use \hideNotes and \unHideNotes around the set of bars I want empty. A similar trick can make roomy, regularly-sized empty bars for workbooks.
Formatting single-line scores:
All the Fux exercises are one-line scores. I want to use all the available page space, to make them easier to write on. That means reducing the default indent and turning off the ragged right edge. (Lilypond indents the first system of any score, and doesn't justify full the last system in a score.) To accomplish this, I've included this bit at the top of every score:
\paper {
indent = 10\mm
ragged-right = ##f
}
Also at the top of every score is this: \version "2.12.3"
Apparently lilypond somewhat regularly breaks backwards compatibility. When they do, they release a script that converts old scores to the new version, and that script looks for a version string at the top of the file it's crunching.
|
2017-12-13 06:58:50
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.933673620223999, "perplexity": 3702.9272251527605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948522205.7/warc/CC-MAIN-20171213065419-20171213085419-00310.warc.gz"}
|
http://mathhelpforum.com/calculus/89153-problems-partial-fractions.html
|
# Thread: Problems with Partial Fractions
1. ## Problems with Partial Fractions
Hi, I've been working on these integral evaluations and I'm stuck. Any help on any of them would be greatly appreciated, thanks in advance!
1. $\int\frac{2x^3-3x+2}{x^3-x}dx$
My attempt:
$2x^3-3x+2=\frac{A}{x^2-1}+\frac{B}{x-1}+\frac{C}{x}$ (Tried breaking it down, couldn't get it to work )
2. $\int\frac{x^4+1}{x^3+x}dx$
My attempt:
long division to get $\int xdx-\int\frac{x^2+1}{x^3+x}dx$ (not sure how to show that in Latex)
Then $x^2+1=\frac{A}{X^2+1}+\frac{B}{x+1}+\frac{C}{x}$
Not sure where to go from there...I imagine that I'm missing the same idea in both of these, so any help would be appreciated. Thanks again!
2. For #1:
$x^{3}-x=x(x^{2}-1)=x(x+1)(x-1)$
$\frac{2x^{3}-2x+2}{x^{3}-x}=\frac{A}{x}+\frac{B}{x+1}+\frac{C}{x-1}$
Multiply both sides of the equation by $x(x+1)(x-1)$ and go on from there.
For #2:
$x^{3}+x=x(x^{2}+1)$
$\frac{x^{4}+1}{x^{3}+x}=\frac{A}{x}+\frac{Bx+C}{x^ {2}+1}$
Multiply both sides of the equation by $x(x^{2}+1)$ and go on from there.
Edit: Don't forget to divide beforehead as in Plato's post; I forgot that part.
3. Here is a solution.
4. Originally Posted by Pinkk
For #2:
$x^{3}+x=x(x^{2}+1)$
$\frac{x^{4}+1}{x^{3}+x}=\frac{A}{x}+\frac{Bx+C}{x^ {2}+1}$
Multiply both sides of the equation by $x(x^{2}+1)$ and go on from there.
Your response for number two is clearly incorrect as multiplication on both sides by the cubic will equate a quartic on the left to a quadratic on the right. The way it should be done is as follows:
$\frac{x^4+1}{x(x^2+1)} \equiv x + \frac{1-x^2}{x(x^2+1)}$
then
$\frac{1-x^2}{x(x^2+1)} \equiv \frac{A}{x} + \frac{Bx +C}{x^2 +1}$
yielding
$\frac{x^4+1}{x(x^2+1)} \equiv x + \frac{1}{x} - \frac{2x}{x^2 +1}
$
.
5. Yes, that's before I realized the highest degree of the numerator was larger than the highest degree of the denominator.
6. I think I've got them both, thanks all!
7. Originally Posted by Pinkk
Yes, that's before I realized the highest degree of the numerator was larger than the highest degree of the denominator.
My apologies for not noticing your edit.
|
2016-12-04 20:26:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9120637774467468, "perplexity": 424.4129459935069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541396.20/warc/CC-MAIN-20161202170901-00094-ip-10-31-129-80.ec2.internal.warc.gz"}
|
http://clay6.com/qa/48971/find-mean-and-standard-deviation-using-short-cut-method-
|
Browse Questions
# Find mean and standard deviation using short - cut method.
$\begin{array}{1 1}(A)\;9.09\\(B)\;1.69\\(C)\;4.89\\(D)\;5.125\end{array}$
Toolbox:
• The formula used for this problem is $Mean=A +\large\frac{\sum f_id_i}{\sum f_i}$
• Standard deviation $\sigma = \sqrt { \large\frac{f_id_i^2}{\sum f_i} -\bigg( \large\frac{\sum f_id_i}{\sum f_i} \bigg)^2}$
Step 1:
Mean $= A+\large\frac{\sum f_id_i}{\sum f_i}$
Where by $A= \bigg(\large\frac{n+1}{2} \bigg)$ th observation
$\qquad= \bigg( \large\frac{9+1}{2} \bigg)$th observation
$\qquad= 5th$ obsevation.
$A=64$
Step 2:
Maen $=64+ \large\frac{0}{100}$
$\qquad= 64$
Step 3:
Standard deviation $\sigma = \sqrt { \large\frac{f_id_i^2}{\sum f_i} -\bigg( \large\frac{\sum f_id_i}{\sum f_i} \bigg)^2}$
$\qquad= \sqrt { \large\frac{286}{100} -\bigg(\frac{ 0}{100}\bigg)^2}$
$\qquad= \sqrt {2.86}$
$\qquad= 1.69$
Hence B is the correct answer.
|
2017-05-29 09:32:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.994862973690033, "perplexity": 8952.764164843189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612069.19/warc/CC-MAIN-20170529091944-20170529111944-00156.warc.gz"}
|
https://appliedcombinatorics.org/book/s_graphs_color.html
|
Skip to main content
Section5.4Graph Coloring
Let's return now to the subject of Example 1.5, assigning frequencies to radio stations so that they don't interfere. The first thing that we will need to do is to turn the map of radio stations into a suitable graph, which should be pretty natural at this juncture. We define a graph $\GVE$ in which $V$ is the set of radio stations and $xy\in E$ if and only if radio station $x$ and radio station $y$ are within $200$ miles of each other. With this as our model, then we need to assign different frequencies to two stations if their corresponding vertices are joined by an edge. This leads us to our next topic, coloring graphs.
When $\GVE$ is a graph and $C$ is a set of elements called colors, a proper coloring of $\bfG$ is a function $\phi:V\to C$ such that if $\phi(x)\neq \phi(y)$ whenever $xy$ is an edge in $\bfG\text{.}$ The least $t$ for which $\bfG$ has a proper coloring using a set $C$ of $t$ colors is called the chromatic number of $\bfG$ and is denoted $\chi(\bfG)$. In Figure 5.19, we show a proper coloring of a graph using $5$ colors. Now we can see that our radio frequency assignment problem is the much-studied question of finding the chromatic number of an appropriate graph.
Discussion5.20.
Everyone agrees that the graph $\bfG$ in Figure 5.19 has chromatic number at most $5\text{.}$ However, there's a bit of debate going on about if $\chi(\bfG)=5\text{.}$ Bob figures the authors would not have used five colors if they didn't need to. Carlos says he's glad they're having the discussion, since all having a proper coloring does is provide them with an upper bound on $\chi(\bfG)\text{.}$ Bob sees that the graph has a vertex of degree $5$ and claims that must mean $\chi(\bfG)=5\text{.}$ Alice groans and draws a graph with $101$ vertices, one of which has degree $100\text{,}$ but with chromatic number $2\text{.}$ Bob is shocked, but agrees with her. Xing wonders if the fact that the graph does not contain a $\bfK_3$ has any bearing on the chromatic number. Dave's in a hurry to get to the gym, but on his way out the door he says they can get a proper $4$-coloring pretty easily, so $\chi(\bfG)\leq 4\text{.}$ The rest decide it's time to keep reading.
• What graph did Alice draw that shocked Bob?
• What changes did Dave make to the coloring in Figure 5.19 to get a proper coloring using four colors?
Subsection5.4.1Bipartite Graphs
A graph $\GVE$ with $\chi(\bfG)\le 2$ is called a $2$-colorable graph. A couple of minutes of reflection should convince you that for $n\geq 2\text{,}$ the cycle $\bfC_{2n}$ with $2n$ vertices is $2$-colorable. On the other hand, $\bfC_3\cong \bfK_3$ is clearly not $2$-colorable. Furthermore, no odd cycle $\bfC_{2n+1}$ for $n\geq 1$ is $2$-colorable. It turns out that the property of containing an odd cycle is the only impediment to being $2$-colorable, which means that recognizing $2$-colorable graphs is easy, as the following theorem shows.
Let $\GVE$ be a $2$-colorable graph whose coloring function partitions $V$ as $A\cup B\text{.}$ Since there are no edges between vertices on the same side of the partition, any cycle in $\bfG$ must alternate vertices between $A$ and $B\text{.}$ In order to complete the cycle, therefore, the number of vertices in the cycle from $A$ must be the same as the number from $B\text{,}$ implying that the cycle has even length.
Now suppose that $\bfG$ does not contain an odd cycle. Note that we may assume that $\bfG$ is connected, as each component may be colored individually. The distance $d(u,v)$ between vertices $u,v\in V$ is the length of a shortest path from $u$ to $v\text{,}$ and of course $d(u,u) = 0\text{.}$ Fix a vertex $v_0\in V$ and define
\begin{equation*} A = \{v\in V\colon d(u_0,v)\text{ is even}\}\qquad\text{and}\qquad B = \{v\in V\colon d(v_0,v)\text{ is odd}\}. \end{equation*}
We claim that coloring the vertices of $A$ with color $1$ and the vertices of $B$ with color $2$ is a proper coloring. suppose not. Then without loss of generality, there are vertices $x,y\in A$ such that $xy\in E\text{.}$ Since $x,y\in A\text{,}$ $d(v_0,x)$ and $d(v_0,y)$ are both even. Let
\begin{equation*} v_0,x_1,x_2,\dots,x_n=x \end{equation*}
and
\begin{equation*} v_0,y_1,y_2,\dots,y_m= y \end{equation*}
be shortest paths from $v_0$ to $x$ and $y\text{,}$ respectively. If $x_1\neq y_j$ for all $1\leq i\leq n$ and $1\leq j\leq m\text{,}$ then since $m$ and $n$ are both even,
\begin{equation*} v_0,x_1,x_2,\dots,x_n=x,y=y_m,y_{m-1},\dots,y_2,y_1,v_0 \end{equation*}
is an odd cycle in $\bfG\text{,}$ which is a contradiction. Thus, there must be $i,j$ such that $x_i=y_j\text{,}$ and we may take $i,j$ as large as possible. (That is, after $x_i=y_j\text{,}$ the two paths do not intersect again.) Thus,
\begin{equation*} x_i,x_{i+1},\dots,x_n = x,y=y_m,y_{m-1},\dots,y_j=x_i \end{equation*}
is a cycle in $\bfG\text{.}$ How many vertices are there in this cycle? A quick count shows that it has
\begin{equation*} n-(i-1)+m-(j-1)-1=n+m-(i+j)+1 \end{equation*}
vertices. We know that $n$ and $m$ are even, and notice that $i$ and $j$ are either both even or both odd, since $x_i = y_j$ and the odd-subscripted vertices of our path belong to $B$ while those with even subscripts belong to $A\text{.}$ Thus, $i+j$ is even, so $n+m-(i+j)+1$ is odd, giving a contradiction.
A graph $\bfG$ is called a bipartite graph when there is a partition of the vertex $V$ into two sets $A$ and $B$ so that the subgraphs induced by $A$ and $B$ are independent graphs, i.e., no edge of $\bfG$ has both of its endpoints in $A$ or in $B\text{.}$ Evidently, bipartite graphs are $2$-colorable. On the other hand, when a $2$-colorable graph is disconnected, there is more than one way to define a suitable partition of the vertex set into two independent sets.
Bipartite graphs are commonly used as models when there are two distinct types of objects being modeled and connections are only allowed between two objects of different types. For example, on one side, list candidates who attend a career fair and on the other side list the available positions. The edges might naturally correspond to candidate/position pairs which link a person to a responsibility they are capable of handling.
As a second example, a bipartite graph could be used to visualize the languages spoken by a group of students. The vertices on one side would be the students with the languages listed on the other side. We would then have an edge $xy$ when student $x$ spoke language $y\text{.}$ A concrete example of this graph for our favorite group of students is shown in Figure 5.22, although Alice isn't so certain there should be an edge connecting Dave and English.
One special class of bipartite graphs that bears mention is the class of complete bipartite graphs. The complete bipartite graph $\bfK_{m,n}$ has vertex set $V=V_1\cup V_2$ with $|V_1|=m$ and $|V_2|=n\text{.}$ It has an edge $xy$ if and only if $x\in V_1$ and $y\in V_2\text{.}$ The complete bipartite graph $\bfK_{3,3}$ is shown in Figure 5.23.
Subsection5.4.2Cliques and Chromatic Number
A clique in a graph $\GVE$ is a set $K\subseteq V$ such that the subgraph induced by $K$ is isomorphic to the complete graph $\bfK_{|K|}\text{.}$ Equivalently, we can say that every pair of vertices in $K$ are adjacent. The maximum clique size or clique number of a graph $\bfG\text{,}$ denoted $\omega(\bfG)$, is the largest $t$ for which there exists a clique $K$ with $|K|=t\text{.}$ For example, the graph in Figure 5.14 has clique number $4$ while the graph in Figure 5.19 has maximum clique size $2\text{.}$
For every graph $\bfG\text{,}$ it is obvious that $\chi(\bfG)\ge \omega(\bfG)\text{.}$ On the other hand, the inequality may be far from tight. Before proving showing how bad it can be, we need to introduce a more general version of the Pigeon Hole Principle. Consider a function $f\colon X\to Y$ with $|X| = 2|Y|+1\text{.}$ Since $|X|>|Y|\text{,}$ the Pigeon Hole Principle as stated in Proposition 4.1 only tells us that there are distinct $x,x'\in X$ with $f(x)=f(x')\text{.}$ However, we can say more here. Suppose that each element of $Y$ has at most two elements of $X$ mapped to it. Then adding up the number of elements of $X$ based on how many are mapped to each element of $Y$ would only allow $X$ to have (at most) $2|Y|$ elements. Thus, there must be $y\in Y$ so that there are three distinct elements $x,x',x''\in X$ with $f(x)=f(x')=f(x'')=y\text{.}$ This argument generalizes to give the following version of the Pigeon Hole Principle:
We are now prepared to present the following proposition showing that clique number and chromatic number need not be close at all. We give two proofs. The first is the work of J. Kelly and L. Kelly, while the second is due to J. Mycielski.
We proceed by induction on $t\text{.}$ For $t=3\text{,}$ we take $\bfG_3$ to be the cycle $\bfC_5$ on five vertices. Now assume that for some $t\ge3\text{,}$ we have determined the graph $\bfG_t\text{.}$ Suppose that $\bfG_t$ has $n_t$ vertices. Label the vertices of $\bfG_t$ as $x_1,x_2,\dots,x_{n_t}\text{.}$ Construct $\bfG_{t+1}$ as follows. Begin with an independent set $I$ of cardinality $t(n_t-1)+1\text{.}$ For every subset $S$ of $I$ with $|S|=n_t\text{,}$ label the elements of $S$ as $y_1,y_2,\dots,y_{n_t}\text{.}$ For this particular $n_t$-element subset attach a copy of $\bfG_t$ with $y_i$ adjacent to $x_i$ for $i=1,2,\dots,n_t\text{.}$ Vertices in copies of $\bfG_t$ for distinct $n_t$-element subsets of $I$ are nonadjacent, and a vertex in $I$ has at most one neighbor in a particular copy of $\bfG_t\text{.}$
To see that $\omega(\bfG_{t+1})=2\text{,}$ it will suffice to argue that $\bfG_{t+1}$ contains no triangle ($\bfK_3$). Since $\bfG_t$ is triangle-free, any triangle in $\bfG_{t+1}$ must contain a vertex of $I\text{.}$ Since none of the vertices of $I$ are adjacent, any triangle in $\bfG_{t+1}$ contains only one point of $I\text{.}$ Since each vertex of $I$ is adjacent to at most one vertex of any fixed copy of $\bfG_t\text{,}$ if $y\in I$ is part of a triangle, the other two vertices must come from distinct copies of $\bfG_t\text{.}$ However, vertices in different copies of $\bfG_t$ are not adjacent, so $\omega(\bfG_{t+1})=2\text{.}$ Notice that $\chi(\bfG_{t+1})\ge t$ since $\bfG_{t+1}$ contains $\bfG_t\text{.}$ On the other hand, $\chi(\bfG_{t+1})\le t+1$ since we may use $t$ colors on the copies of $\bfG_t$ and a new color on the independent set $I\text{.}$ To see that $\chi(\bfG_{t+1})=t+1\text{,}$ observe that if we use only $t$ colors, then by the generalized Pigeon Hole Principle, there is an $n_t$-element subset of $I$ in which all vertices have the same color. Then this color cannot be used in the copy of $\bfG_t$ which is attached to that $n_t$-element subset.
We again start with $\bfG_3$ as the cycle $\bfC_5\text{.}$ As before we assume that we have constructed for some $t\ge3$ a graph $\bfG_t$ with $\omega(\bfG_t)=2$ and $\chi(\bfG_t) = t\text{.}$ Again, label the vertices of $\bfG_t$ as $x_1,x_2,\dots,x_{n_t}\text{.}$ To construct $\bfG_{t+1}\text{,}$ we now start with an independent set $I\text{,}$ but now $I$ has only $n_t$ points, which we label as $y_1,y_2,\dots,y_{n_t}\text{.}$ We then add a copy of $\bfG_t$ with $y_i$ adjacent to $x_j$ if and only if $x_i$ is adjacent to $x_j\text{.}$ Finally, attach a new vertex $z$ adjacent to all vertices in $I\text{.}$
Clearly, $\omega(\bfG_{t+1})=2\text{.}$ Also, $\chi(\bfG_{t+1})\ge t\text{,}$ since it contains $\bfG_t$ as a subgraph. Furthermore, $\chi(\bfG_{t+1})\leq t+1\text{,}$ since we can color $\bfG_t$ with colors from $\{1,2,\dots,t\}\text{,}$ use color $t+1$ on the independent set $I\text{,}$ and then assign color $1$ to the new vertex $z\text{.}$ We claim that in fact $\chi(\bfG_{t+1})=t+1\text{.}$ Suppose not. Then we must have $\chi(\bfG_{t+1})=t\text{.}$ Let $\phi$ be a proper coloring of $\bfG_{t+1}\text{.}$ Without loss of generality, $\phi$ uses the colors in $\{1,2,\dots,t\}$ and $\phi$ assigns color $t$ to $z\text{.}$ Then consider the nonempty set $S$ of vertices in the copy of $\bfG_t$ to which $\phi$ assigns color $t\text{.}$ For each $x_i$ in $S\text{,}$ change the color on $x_i$ so that it matches the color assigned to $y_i$ by $\phi\text{,}$ which cannot be $t\text{,}$ as $z$ is colored $t\text{.}$ What results is a proper coloring of the copy of $\bfG_t$ with only $t-1$ colors since $x_i$ and $y_i$ are adjacent to the same vertices of the copy of $\bfG_t\text{.}$ The contradiction shows that $\chi(\bfG_{t+1})=t+1\text{,}$ as claimed.
Since a $3$-clique looks like a triangle, Proposition 5.25 is often stated as “There exist triangle-free graphs with large chromatic number.” As an illustration of the construction in the proof of Mycielski, we again refer to Figure 5.19. The graph shown is $\bfG_4\text{.}$ We will return to the topic of graphs with large chromatic number in Section 11.6 where we show that are there graphs with large chromatic number which lack not only cliques of more than two vertices but also cycles of fewer than $g$ vertices for any value of $g\text{.}$ In other words, there is a graph $\bfG$ with $\chi(\bfG)=10^6$ but no cycle with fewer than $10^{10}$ vertices!
Subsection5.4.3Can We Determine Chromatic Number?
Suppose you are given a graph $\bfG\text{.}$ It's starting to look like it is not easy to find an algorithm that answers the question “Is $\chi(\bfG)\leq t\text{?}$” It's easy to verify a certificate (a proper coloring using at most $t$ colors), but how could you even find a proper coloring, not to mention one with the fewest number of colors? Similarly for the question “Is $\omega(\bfG)\geq k\text{?}$”, it is easy to verify a certificate. However, finding a maximum clique appears to be a very hard problem. Of course, since the gap between $\chi(\bfG)$ and $\omega(\bfG)$ can be arbitrarily large, being able to find one value would not (generally) help in finding the value of the other. No polynomial-time algorithm is known for either of these problems, and many believe that no such algorithm exists. In this subsection, we look at one approach to finding chromatic number and see a case where it does work efficiently.
A very naïve algorithmic way to approach graph coloring is the First Fit, or “greedy”, algorithm. For this algorithm, fix an ordering of the vertex set $V=\{v_1,v_2,\dots v_n\}\text{.}$ We define the coloring function $\phi$ one vertex at a time in increasing order of subscript. We begin with $\phi(v_1)=1$ and then we define $\phi(v_{i+1})$ (assuming vertices $v_1,v_2,\dots,v_i$ have been colored) to be the least positive integer color that has not already been used on any of its neighbors in the set $\{v_1,\dots v_i\}\text{.}$
Figure 5.26 shows two different orderings of the same graph. Exercise 5.9.24 demonstrates that the ordering of $V$ is vital to the ability of the First Fit algorithm to color $\bfG$ using $\chi(\bfG)$ colors. In general, finding an optimal ordering is just as difficult as coloring $\bfG\text{.}$ Thus, this very simple algorithm does not work well in general. However, for some classes of graphs, there is a “natural” ordering that leads to optimal performance of First Fit. Here is one such example—one that we will study again in the next chapter in a different context.
Given an indexed family of sets $\cgF=\{S_\alpha:\alpha\in V\}\text{,}$ we associate with $\cgF$ a graph $\bfG$ defined as follows. The vertex set of $\bfG$ is the set $V$ and vertices $x$ and $y$ in $V$ are adjacent in $\bfG$ if and only if $S_x\cap S_y \neq\emptyset\text{.}$ We call $\bfG$ an intersection graph. It is easy to see that every graph is an intersection graph (Why?), so it makes sense to restrict the sets which belong to $\cgF\text{.}$ For example, we call $\bfG$ an interval graph if it is the intersection graph of a family of closed intervals of the real line $\reals\text{.}$ For example, in Figure 5.27, we show a collection of six intervals of the real line on the left. On the right, we show the corresponding interval graph having an edge between vertices $x$ and $y$ if and only if intervals $x$ and $y$ overlap.
For each $v\in V\text{,}$ let $I(v)=[a_v,b_v]$ be a closed interval of the real line so that $uv$ is an edge in $\bfG$ if and only if $I(u)\cap I(v)\neq\emptyset\text{.}$ Order the vertex set $V$ as $\{v_1,v_2,\dots v_n\}$ such that $a_1\leq a_2\leq \cdots \leq a_n\text{.}$ (Ties may be broken arbitrarily.) Apply the First Fit coloring algorithm to $\bfG$ with this ordering on $V\text{.}$ Now when the First Fit coloring algorithm colors $v_i\text{,}$ all of its neighbors have left end point at most $a_i\text{.}$ Since they are neighbors of $v_i\text{,}$ however, we know that their right endpoints are all at least $a_i\text{.}$ Thus, $v_i$ and its previously-colored neighbors form a clique. Hence, $v_i$ is adjacent to at most $\omega(\bfG)-1$ other vertices that have already been colored, so when the algorithm colors $v_i\text{,}$ there will be a color from $\{1,2,\dots,\omega(\bfG)\}$ not already in use on its neighbors. The algorithm will assign $v_i$ the smallest such color. Thus, we never need to use more than $\omega(\bfG)$ colors, so $\chi(\bfG)=\omega(\bfG)\text{.}$
A graph $\bfG$ is said to be perfect if $\chi(\bfH)=\omega(\bfH)$ for every induced subgraph $\bfH\text{.}$ Since an induced subgraph of an interval graph is an interval graph, Theorem 5.28 shows interval graphs are perfect. The study of perfect graphs originated in connection with the theory of communications networks and has proved to be a major area of research in graph theory for many years now.
|
2020-09-25 22:04:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7246149778366089, "perplexity": 98.54451282706826}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228998.45/warc/CC-MAIN-20200925213517-20200926003517-00391.warc.gz"}
|
https://www.zbmath.org/authors/?q=ai%3Agarcke.harald
|
# zbMATH — the first resource for mathematics
## Garcke, Harald
Compute Distance To:
Author ID: garcke.harald Published as: Garcke, H.; Garcke, Harald External Links: MGP · Wikidata · GND
Documents Indexed: 145 Publications since 1993, including 5 Books
all top 5
#### Co-Authors
9 single-authored 40 Barrett, John W. 39 Nürnberg, Robert 15 Lam, Kei Fong 10 Styles, Vanessa 9 Abels, Helmut 8 Blank, Luise 7 Nestler, Britta 6 Hecht, Claudia 6 Hinze, Michael 6 Kohsaka, Yoshihito 6 Stinner, Bjorn 5 Eck, Christof 5 Elliott, Charles M. 5 Ito, Kazuo 5 Kahle, Christian 4 Benninghoff, Heike 4 Depner, Daniel 4 Grün, Günther 4 Knabner, Peter 4 Sarbu, Lavinia 4 Stoth, Barbara E. E. 4 Weikard, Ulrich 3 Blowey, James F. 3 Dal Passo, Roberta 2 Butz, Martin V. 2 Ebenbeck, Matthias 2 Farshbaf-Shaker, Mohammad Hassan 2 Menzel, Julia H. 2 Müller, Lars 2 Niethammer, Barbara 2 Pluda, Alessandra 2 Röger, Matthias 2 Rumpf, Martin 2 Rupprecht, Christoph 2 Signori, Andrea 2 Sitka, Emanuel 2 Weber, Josef 1 Agosti, Abramo 1 Alfaro, Matthieu 1 Arab, Nasrin 1 Bertsch, Michiel 1 Bronsard, Lia 1 Ciarletta, Pasquale 1 Dede, Luca 1 Escher, Joachim 1 Gößwein, M. 1 Hassan Farshbaf-Shaker, M. 1 Hilhorst, Danielle 1 Kampmann, Johannes 1 Knopf, Patrik 1 Kornhuber, Ralf 1 Kraus, Christiane 1 Kwak, David Jung Chul 1 Lenz, Martin 1 Maier-Paape, Stanislaus 1 Matano, Hiroshi 1 Matioc, Bogdan-Vasile 1 Metzger, Stefan 1 Novick-Cohen, Amy 1 Peletier, Mark Adriaan 1 Rätz, Andreas 1 Rocca, Elisabetta 1 Schatzle, Reiner 1 Schaubeck, Stefan 1 Ševčovič, Daniel 1 Srisupattarawanit, Tarin 1 Sturzenhecker, Thomas 1 Voigt, Axel 1 Wendler, Frank 1 Wieland, Sandra 1 Yayla, Sema
all top 5
#### Serials
7 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 7 SIAM Journal on Mathematical Analysis 7 Interfaces and Free Boundaries 6 IMA Journal of Numerical Analysis 6 Numerische Mathematik 5 Journal of Computational Physics 5 Advances in Mathematical Sciences and Applications 4 SIAM Journal on Numerical Analysis 4 Advances in Differential Equations 4 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 3 Journal of Differential Equations 3 Physica D 3 European Journal of Applied Mathematics 3 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 3 SIAM Journal on Scientific Computing 3 Springer-Lehrbuch 2 Mathematics of Computation 2 Applied Mathematics and Optimization 2 Hokkaido Mathematical Journal 2 Mathematische Nachrichten 2 SIAM Journal on Control and Optimization 2 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 2 Numerical Methods for Partial Differential Equations 2 Journal of Scientific Computing 2 SIAM Journal on Applied Mathematics 2 Journal of Mathematical Fluid Mechanics 2 Journal of Evolution Equations 2 Communications in Mathematical Sciences 2 Oberwolfach Reports 2 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 2 SIAM Journal on Imaging Sciences 2 Discrete and Continuous Dynamical Systems. Series S 1 Archive for Rational Mechanics and Analysis 1 Computer Methods in Applied Mechanics and Engineering 1 Jahresbericht der Deutschen Mathematiker-Vereinigung (DMV) 1 Mathematical Methods in the Applied Sciences 1 Mitteilungen der Deutschen Mathematiker-Vereinigung (DMV) 1 ZAMP. Zeitschrift für angewandte Mathematik und Physik 1 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Applied Numerical Mathematics 1 Communications in Partial Differential Equations 1 Journal of Mathematical Imaging and Vision 1 Advances in Computational Mathematics 1 Discrete and Continuous Dynamical Systems 1 ZAMM. Zeitschrift für Angewandte Mathematik und Mechanik 1 M2AN. Mathematical Modelling and Numerical Analysis. ESAIM, European Series in Applied and Industrial Mathematics 1 IEEE Transactions on Image Processing 1 Nonlinear Analysis. Real World Applications 1 Communications on Pure and Applied Analysis 1 Bonner Mathematische Schriften 1 Communications in Computational Physics 1 Geometric Flows 1 AIMS Mathematics 1 Springer Undergraduate Mathematics Series 1 SMAI Journal of Computational Mathematics
all top 5
#### Fields
111 Partial differential equations (35-XX) 56 Numerical analysis (65-XX) 42 Fluid mechanics (76-XX) 34 Mechanics of deformable solids (74-XX) 31 Calculus of variations and optimal control; optimization (49-XX) 26 Differential geometry (53-XX) 22 Biology and other natural sciences (92-XX) 21 Classical thermodynamics, heat transfer (80-XX) 20 Statistical mechanics, structure of matter (82-XX) 6 General and overarching topics; collections (00-XX) 5 Information and communication theory, circuits (94-XX) 4 Ordinary differential equations (34-XX) 4 Dynamical systems and ergodic theory (37-XX) 3 Operations research, mathematical programming (90-XX) 2 Global analysis, analysis on manifolds (58-XX) 2 Computer science (68-XX) 1 Convex and discrete geometry (52-XX) 1 Systems theory; control (93-XX) 1 Mathematics education (97-XX)
#### Citations contained in zbMATH Open
128 Publications have been cited 1,765 times in 884 Documents Cited by Year
On the Cahn-Hilliard equation with degenerate mobility. Zbl 0856.35071
Elliott, Charles M.; Garcke, Harald
1996
Thermodynamically consistent, frame indifferent diffuse interface models for incompressible two-phase flows with different densities. Zbl 1242.76342
Abels, Helmut; Garcke, Harald; Grün, Günther
2012
On a fourth-order degenerate parabolic equation: Global entropy estimates, existence, and qualitative behavior of solutions. Zbl 0929.35061
Dal Passo, Roberta; Garcke, Harald; Grün, Günther
1998
Finite element approximation of the Cahn-Hilliard equation with degenerate mobility. Zbl 0947.65109
Barrett, John W.; Blowey, James F.; Garcke, Harald
1999
A parametric finite element method for fourth order geometric evolution equations. Zbl 1112.65093
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2007
On anisotropic order parameter models for multi-phase systems and their sharp interface limits. Zbl 0936.82010
Garcke, Harald; Nestler, Britta; Stoth, Barbara
1998
A multiphase field concept: Numerical simulations of moving phase boundaries and multiple junctions. Zbl 0942.35095
Garcke, Harald; Nestler, Britta; Stoth, Barbara
1999
On the parametric finite element approximation of evolving hypersurfaces in $$\mathbb R^3$$. Zbl 1145.65068
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2008
A Cahn-Hilliard-Darcy model for tumour growth with chemotaxis and active transport. Zbl 1336.92038
Garcke, Harald; Lam, Kei Fong; Sitka, Emanuel; Styles, Vanessa
2016
Parametric approximation of willmore flow and related geometric evolution equations. Zbl 1186.65133
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2008
The thin viscous flow equation in higher space dimensions. Zbl 0954.35035
Bertsch, Michiel; Dal Passo, Roberta; Garcke, Harald; Grün, Günther
1998
On Cahn-Hilliard systems with elasticity. Zbl 1130.74037
Garcke, Harald
2003
On fully practical finite element approximations of degenerate Cahn-Hilliard systems. Zbl 0987.35071
Barrett, John W.; Blowey, James F.; Garcke, Harald
2001
Finite element approximation of a fourth order nonlinear degenerate parabolic equation. Zbl 0913.65084
Barrett, John W.; Blowey, James F.; Garcke, Harald
1998
On the variational approximation of combined second and fourth order geometric evolution equations. Zbl 1148.65074
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2007
Existence of weak solutions for a diffuse interface model for two-phase flows of incompressible fluids with different densities. Zbl 1273.76421
Abels, Helmut; Depner, Daniel; Garcke, Harald
2013
A singular limit for a system of degenerate Cahn-Hilliard equations. Zbl 0988.35019
Garcke, Harald; Novick-Cohen, Amy
2000
Existence results for diffusive surface motion laws. Zbl 0876.35050
Elliott, Charles M.; Garcke, Harald
1997
Well-posedness of a Cahn-Hilliard system modelling tumour growth with chemotaxis and active transport. Zbl 1375.92011
Garcke, Harald; Lam, Kei Fong
2017
Surfactant spreading on thin viscous films: nonnegative solutions of a coupled degenerate system. Zbl 1102.35056
Garcke, Harald; Wieland, Sandra
2006
A multiphase Cahn-Hilliard-Darcy model for tumour growth with necrosis. Zbl 1380.92029
Garcke, Harald; Lam, Kei Fong; Nürnberg, Robert; Sitka, Emanuel
2018
Diffuse interface modelling of soluble surfactants in two-phase flow. Zbl 1319.35309
Garcke, Harald; Lam, Kei Fong; Stinner, Björn
2014
On an incompressible Navier-Stokes/Cahn-Hilliard system with degenerate mobility. Zbl 1347.76052
Abels, Helmut; Depner, Daniel; Garcke, Harald
2013
On a Cahn–Hilliard model for phase separation with elastic misfit. Zbl 1072.35081
Garcke, Harald
2005
Analysis of a Cahn-Hilliard system with non-zero Dirichlet conditions modeling tumor growth with chemotaxis. Zbl 1360.35042
Garcke, Harald; Lam, Kei Fong
2017
A stable and linear time discretization for a thermodynamically consistent model for two-phase incompressible flow. Zbl 1329.76168
Garcke, Harald; Hinze, Michael; Kahle, Christian
2016
Primal-dual active set methods for Allen-Cahn variational inequalities with nonlocal constraints. Zbl 1272.65060
Blank, Luise; Garcke, Harald; Sarbu, Lavinia; Styles, Vanessa
2013
The approximation of planar curve evolutions by stable fully implicit finite element schemes that equidistribute. Zbl 1218.65105
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2011
A multi-phase Mullins-Sekerka system: Matched asymptotic expansions and an implicit time discretisation for the geometric evolution problem. Zbl 0924.35199
Bronsard, Lia; Garcke, Harald; Stoth, Barbara
1998
Optimal control of treatment time in a diffuse interface model of tumor growth. Zbl 1403.35139
Garcke, Harald; Lam, Kei Fong; Rocca, Elisabetta
2018
Phase-field approaches to structural topology optimization. Zbl 1356.49044
Blank, Luise; Garcke, Harald; Sarbu, Lavinia; Srisupattarawanit, Tarin; Styles, Vanessa; Voigt, Axel
2012
A diffuse interface model for alloys with multiple components and phases. Zbl 1126.82025
Garcke, Harald; Nestler, Britta; Stinner, Bjorn
2004
Finite element approximation of surfactant spreading on a thin film. Zbl 1130.76361
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2003
The Cahn-Hilliard equation with elasticity – finite element approximation and qualitative studies. Zbl 0972.35164
Garcke, Harald; Rumpf, Martin; Weikard, Ulrich
2001
Diffusional phase transitions in multicomponent systems with a concentration dependent mobility matrix. Zbl 0925.35087
Elliott, Charles M.; Garcke, Harald
1997
A stable numerical method for the dynamics of fluidic membranes. Zbl 1391.76298
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2016
On the stable numerical approximation of two-phase flow with insoluble surfactant. Zbl 1315.35156
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2015
Analysis of a Cahn-Hilliard-Brinkman model for tumour growth with chemotaxis. Zbl 1410.35058
Ebenbeck, Matthias; Garcke, Harald
2019
Parametric approximation of isotropic and anisotropic elastic flow for closed and open curves. Zbl 1242.65188
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2012
Parametric approximation of surface clusters driven by isotropic and anisotropic surface energies. Zbl 1205.65263
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2010
Allen-Cahn systems with volume constraints. Zbl 1147.49036
Garcke, Harald; Nestler, Britta; Stinner, Björn; Wendler, Frank
2008
Relating phase field and sharp interface approaches to structural topology optimization. Zbl 1301.49113
Blank, Luise; Garcke, Harald; Farshbaf-Shaker, M. Hassan; Styles, Vanessa
2014
Numerical approximation of gradient flows for closed curves in $$\mathbb R^{d}$$. Zbl 1185.65027
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2010
Second order phase field asymptotics for multi-component systems. Zbl 1106.35116
Garcke, Harald; Stinner, Björn
2006
Finite element approximation of a phase field model for surface diffusion of voids in a stressed solid. Zbl 1078.74050
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2006
Diffusional phase transitions in multicomponent systems with a concentration dependent mobility matrix. Zbl 1194.35225
Elliott, Charles M.; Garcke, Harald
1997
Global weak solutions and asymptotic limits of a Cahn-Hilliard-Darcy system modelling tumour growth. Zbl 1434.35255
Garcke, Harald; Lam, Kei Fong
2016
Stable finite element approximations of two-phase flow with soluble surfactant. Zbl 1349.76175
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2015
Multi-material phase field approach to structural topology optimization. Zbl 1327.49068
Blank, Luise; Farshbaf-Shaker, M. Hassan; Garcke, Harald; Rupprecht, Christoph; Styles, Vanessa
2014
On stable parametric finite element methods for the Stefan problem and the Mullins-Sekerka problem with applications to dendritic growth. Zbl 1201.80075
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2010
On a Cahn-Hilliard-Darcy system for tumour growth with solution dependent source terms. Zbl 1406.92296
Garcke, Harald; Lam, Kei Fong
2018
Numerical approximation of phase field based shape and topology optimization for fluids. Zbl 1322.35113
Garcke, Harald; Hecht, Claudia; Hinze, Michael; Kahle, Christian
2015
A stable parametric finite element discretization of two-phase Navier-Stokes flow. Zbl 1320.76059
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2015
Eliminating spurious velocities with a stable approximation of viscous incompressible two-phase Stokes flow. Zbl 1286.76040
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2013
Solving the Cahn-Hilliard variational inequality with a semi-smooth Newton method. Zbl 1233.35132
Blank, Luise; Butz, Martin; Garcke, Harald
2011
Numerical approximation of anisotropic geometric evolution equations in the plane. Zbl 1145.65069
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2008
A variational formulation of anisotropic geometric evolution equations in higher dimensions. Zbl 1149.65082
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2008
Finite-element approximation of coupled surface and grain boundary motion with applications to thermal grooving and sintering. Zbl 1410.80015
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2010
Anisotropy in multi-phase systems: A phase field approach. Zbl 0959.35169
Garcke, Harald; Stoth, Barbara; Nestler, Britta
1999
Mean curvature flow with triple junctions in higher space dimensions. Zbl 1291.53078
Depner, Daniel; Garcke, Harald; Kohsaka, Yoshihito
2014
A phase field model for the electromigration of intergranular voids. Zbl 1132.35082
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2007
Numerical approximation of the Cahn-Larché equation. Zbl 1099.74063
Garcke, Harald; Weikard, Ulrich
2005
A coupled surface-Cahn-Hilliard bulk-diffusion system modeling lipid raft formation in cell membranes. Zbl 1338.35222
Garcke, Harald; Kampmann, Johannes; Rätz, Andreas; Röger, Matthias
2016
Finite element approximation of one-sided Stefan problems with anisotropic, approximately crystalline, Gibbs-Thomson law. Zbl 1271.80005
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2013
Elastic flow with junctions: variational approximation and applications to nonlinear splines. Zbl 1252.76038
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2012
Solutions of a fourth order degenerate parabolic equation with weak initial trace. Zbl 0945.35049
Dal Passo, Roberta; Garcke, Harald
1999
Weak solutions of the Cahn-Hilliard system with dynamic boundary conditions: a gradient flow approach. Zbl 1429.35114
Garcke, Harald; Knopf, Patrik
2020
On a Cahn-Hilliard-Brinkman model for tumor growth and its singular limits. Zbl 1420.35116
Ebenbeck, Matthias; Garcke, Harald
2019
A Hele-Shaw-Cahn-Hilliard model for incompressible two-phase flows with different densities. Zbl 1394.35353
Dedè, Luca; Garcke, Harald; Lam, Kei Fong
2018
Finite element approximation for the dynamics of asymmetric fluidic biomembranes. Zbl 1394.76064
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2017
Stable numerical approximation of two-phase flow with a Boussinesq-Scriven surface fluid. Zbl 1329.35241
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2015
Curvature driven interface evolution. Zbl 1279.53064
Garcke, Harald
2013
Linearized stability analysis of surface diffusion for hypersurfaces with triple lines. Zbl 1263.35031
Depner, Daniel; Garcke, Harald
2013
Allen-Cahn and Cahn-Hilliard variational inequalities solved with optimization techniques. Zbl 1356.49009
Blank, Luise; Butz, Martin; Garcke, Harald; Sarbu, Lavinia; Styles, Vanessa
2012
Linearized stability analysis of stationary solutions for surface diffusion with boundary conditions. Zbl 1097.35030
Garcke, Harald; Ito, Kazuo; Kohsaka, Yoshihito
2005
A phase-field model for the solidification process in multicomponent alloys. Zbl 1061.80004
Garcke, H.; Nestler, B.; Stinner, B.
2003
A mathematical model for grain growth in thin metallic films. Zbl 1205.74138
Garcke, Harald; Nestler, Britta
2000
Travelling wave solutions as dynamic phase transitions in shape memory alloys. Zbl 0829.73007
Garcke, Harald
1995
Stable variational approximations of boundary value problems for Willmore flow with Gaussian curvature. Zbl 1433.65206
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2017
Two-phase flow with surfactants: diffuse interface models and their analysis. Zbl 1391.76766
Abels, Helmut; Garcke, Harald; Lam, Kei Fong; Weber, Josef
2017
Segmentation and restoration of images on surfaces by parametric active contours with topology changes. Zbl 1405.94014
Benninghoff, Heike; Garcke, Harald
2016
A phase field approach for shape and topology optimization in Stokes flow. Zbl 1329.76108
Garcke, Harald; Hecht, Claudia
2015
Efficient image segmentation and restoration using parametric curve evolution with junctions and topology changes. Zbl 1308.94013
Benninghoff, Heike; Garcke, Harald
2014
Nonlocal Allen-Cahn systems. Analysis and a primal-dual active set method. Zbl 1279.65087
Blank, Luise; Garcke, Harald; Sarbu, Lavinia; Styles, Vanessa
2013
Stress- and diffusion-induced interface motion: modelling and numerical simulations. Zbl 1128.74004
Garcke, Harald; Nürnberg, Robert; Styles, Vanessa
2007
Bi-directional diffusion induced grain boundary motion with triple junctions. Zbl 1081.35116
Garcke, H.; Styles, Vanessa
2004
Willmore flow of planar networks. Zbl 1436.35233
Garcke, Harald; Menzel, Julia; Pluda, Alessandra
2019
Sharp interface limit for a phase field model in structural optimization. Zbl 1348.35259
Blank, Luise; Garcke, Harald; Hecht, Claudia; Rupprecht, Christoph
2016
Local well-posedness for volume-preserving mean curvature and Willmore flows with line tension. Zbl 1335.53079
Abels, Helmut; Garcke, Harald; Müller, Lars
2016
Thermodynamically consistent higher order phase field Navier-Stokes models with applications to biomembranes. Zbl 1210.35258
Farshbaf-Shaker, M. Hassan; Garcke, Harald
2011
Motion by anisotropic mean curvature as sharp interface limit of an inhomogeneous and anisotropic Allen-Cahn equation. Zbl 1204.35026
Alfaro, Matthieu; Garcke, Harald; Hilhorst, Danielle; Matano, Hiroshi; Schätzle, Reiner
2010
Surface diffusion with triple junctions: A stability criterion for stationary solutions. Zbl 1228.35042
Garcke, Harald; Ito, Kazuo; Kohsaka, Yoshihito
2010
On sharp interface limits of Allen-Cahn/Cahn-Hilliard variational inequalities. Zbl 1165.35406
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2008
Shape optimization for surface functionals in Navier-Stokes flow using a phase field approach. Zbl 1352.49046
Garcke, Harald; Hecht, Claudia; Hinze, Michael; Kahle, Christian; Lam, Kei Fong
2016
Computational parametric Willmore flow with spontaneous curvature and area difference elasticity effects. Zbl 1428.53104
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2016
Applying a phase field approach for shape optimization of a stationary Navier-Stokes flow. Zbl 1342.35218
Garcke, Harald; Hecht, Claudia
2016
Shape and topology optimization in Stokes flow with a phase field approach. Zbl 1334.49133
Garcke, Harald; Hecht, Claudia
2016
The $$\Gamma$$-limit of the Ginzburg-Landau energy in an elastic medium. Zbl 1193.49054
Garcke, Harald
2008
Nonlinear stability of stationary solutions for surface diffusion with boundary conditions. Zbl 1167.35005
Garcke, Harald; Ito, Kazuo; Kohsaka, Yoshihito
2008
Mathematical modelling. Zbl 1223.00014
Eck, Christof; Garcke, Harald; Knabner, Peter
2008
On a phase field model of Cahn-Hilliard type for tumour growth with mechanical effects. Zbl 1456.35091
Garcke, Harald; Lam, Kei Fong; Signori, Andrea
2021
Weak solutions of the Cahn-Hilliard system with dynamic boundary conditions: a gradient flow approach. Zbl 1429.35114
Garcke, Harald; Knopf, Patrik
2020
Parametric finite element approximations of curvature-driven interface evolutions. Zbl 1455.35185
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2020
On the surface diffusion flow with triple junctions in higher space dimensions. Zbl 1439.53079
Garcke, H.; Gößwein, M.
2020
Analysis of a Cahn-Hilliard-Brinkman model for tumour growth with chemotaxis. Zbl 1410.35058
Ebenbeck, Matthias; Garcke, Harald
2019
On a Cahn-Hilliard-Brinkman model for tumor growth and its singular limits. Zbl 1420.35116
Ebenbeck, Matthias; Garcke, Harald
2019
Willmore flow of planar networks. Zbl 1436.35233
Garcke, Harald; Menzel, Julia; Pluda, Alessandra
2019
Optimal control of time-discrete two-phase flow driven by a diffuse-interface model. Zbl 1437.35575
Garcke, Harald; Hinze, Michael; Kahle, Christian
2019
Variational discretization of axisymmetric curvature flows. Zbl 1419.65051
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2019
A multiphase Cahn-Hilliard-Darcy model for tumour growth with necrosis. Zbl 1380.92029
Garcke, Harald; Lam, Kei Fong; Nürnberg, Robert; Sitka, Emanuel
2018
Optimal control of treatment time in a diffuse interface model of tumor growth. Zbl 1403.35139
Garcke, Harald; Lam, Kei Fong; Rocca, Elisabetta
2018
On a Cahn-Hilliard-Darcy system for tumour growth with solution dependent source terms. Zbl 1406.92296
Garcke, Harald; Lam, Kei Fong
2018
A Hele-Shaw-Cahn-Hilliard model for incompressible two-phase flows with different densities. Zbl 1394.35353
Dedè, Luca; Garcke, Harald; Lam, Kei Fong
2018
Cahn-Hilliard inpainting with the double obstacle potential. Zbl 1455.94019
Garcke, Harald; Lam, Kei Fong; Styles, Vanessa
2018
Gradient flow dynamics of two-phase biomembranes: sharp interface variational formulation and finite element approximation. Zbl 1416.74070
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2018
A phase field approach to shape optimization in Navier-Stokes flow with integral state constraints. Zbl 1406.35274
Garcke, Harald; Hinze, Michael; Kahle, Christian; Lam, Kei Fong
2018
Well-posedness of a Cahn-Hilliard system modelling tumour growth with chemotaxis and active transport. Zbl 1375.92011
Garcke, Harald; Lam, Kei Fong
2017
Analysis of a Cahn-Hilliard system with non-zero Dirichlet conditions modeling tumor growth with chemotaxis. Zbl 1360.35042
Garcke, Harald; Lam, Kei Fong
2017
Finite element approximation for the dynamics of asymmetric fluidic biomembranes. Zbl 1394.76064
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2017
Stable variational approximations of boundary value problems for Willmore flow with Gaussian curvature. Zbl 1433.65206
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2017
Two-phase flow with surfactants: diffuse interface models and their analysis. Zbl 1391.76766
Abels, Helmut; Garcke, Harald; Lam, Kei Fong; Weber, Josef
2017
Finite element approximation for the dynamics of fluidic two-phase biomembranes. Zbl 1383.35153
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2017
Mathematical modeling. Zbl 1386.00063
Eck, Christof; Garcke, Harald; Knabner, Peter
2017
Diffuse interface models for incompressible two-phase flows with different densities. Zbl 1391.76765
Abels, Helmut; Garcke, Harald; Grün, Günther; Metzger, Stefan
2017
Segmentation of three-dimensional images with parametric active surfaces and topology changes. Zbl 1376.65017
Benninghoff, Heike; Garcke, Harald
2017
A Cahn-Hilliard-Darcy model for tumour growth with chemotaxis and active transport. Zbl 1336.92038
Garcke, Harald; Lam, Kei Fong; Sitka, Emanuel; Styles, Vanessa
2016
A stable and linear time discretization for a thermodynamically consistent model for two-phase incompressible flow. Zbl 1329.76168
Garcke, Harald; Hinze, Michael; Kahle, Christian
2016
A stable numerical method for the dynamics of fluidic membranes. Zbl 1391.76298
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2016
Global weak solutions and asymptotic limits of a Cahn-Hilliard-Darcy system modelling tumour growth. Zbl 1434.35255
Garcke, Harald; Lam, Kei Fong
2016
A coupled surface-Cahn-Hilliard bulk-diffusion system modeling lipid raft formation in cell membranes. Zbl 1338.35222
Garcke, Harald; Kampmann, Johannes; Rätz, Andreas; Röger, Matthias
2016
Segmentation and restoration of images on surfaces by parametric active contours with topology changes. Zbl 1405.94014
Benninghoff, Heike; Garcke, Harald
2016
Sharp interface limit for a phase field model in structural optimization. Zbl 1348.35259
Blank, Luise; Garcke, Harald; Hecht, Claudia; Rupprecht, Christoph
2016
Local well-posedness for volume-preserving mean curvature and Willmore flows with line tension. Zbl 1335.53079
Abels, Helmut; Garcke, Harald; Müller, Lars
2016
Shape optimization for surface functionals in Navier-Stokes flow using a phase field approach. Zbl 1352.49046
Garcke, Harald; Hecht, Claudia; Hinze, Michael; Kahle, Christian; Lam, Kei Fong
2016
Computational parametric Willmore flow with spontaneous curvature and area difference elasticity effects. Zbl 1428.53104
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2016
Applying a phase field approach for shape optimization of a stationary Navier-Stokes flow. Zbl 1342.35218
Garcke, Harald; Hecht, Claudia
2016
Shape and topology optimization in Stokes flow with a phase field approach. Zbl 1334.49133
Garcke, Harald; Hecht, Claudia
2016
Image segmentation using parametric contours with free endpoints. Zbl 1408.94048
Benninghoff, Heike; Garcke, Harald
2016
On the stable numerical approximation of two-phase flow with insoluble surfactant. Zbl 1315.35156
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2015
Stable finite element approximations of two-phase flow with soluble surfactant. Zbl 1349.76175
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2015
Numerical approximation of phase field based shape and topology optimization for fluids. Zbl 1322.35113
Garcke, Harald; Hecht, Claudia; Hinze, Michael; Kahle, Christian
2015
A stable parametric finite element discretization of two-phase Navier-Stokes flow. Zbl 1320.76059
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2015
Stable numerical approximation of two-phase flow with a Boussinesq-Scriven surface fluid. Zbl 1329.35241
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2015
A phase field approach for shape and topology optimization in Stokes flow. Zbl 1329.76108
Garcke, Harald; Hecht, Claudia
2015
Stability of spherical caps under the volume-preserving mean curvature flow with line tension. Zbl 1314.53115
Abels, Helmut; Garcke, Harald; Müller, Lars
2015
On convergence of solutions to equilibria for fully nonlinear parabolic systems with nonlinear boundary conditions. Zbl 1366.35006
Abels, Helmut; Arab, Nasrin; Garcke, Harald
2015
Diffuse interface modelling of soluble surfactants in two-phase flow. Zbl 1319.35309
Garcke, Harald; Lam, Kei Fong; Stinner, Björn
2014
Relating phase field and sharp interface approaches to structural topology optimization. Zbl 1301.49113
Blank, Luise; Garcke, Harald; Farshbaf-Shaker, M. Hassan; Styles, Vanessa
2014
Multi-material phase field approach to structural topology optimization. Zbl 1327.49068
Blank, Luise; Farshbaf-Shaker, M. Hassan; Garcke, Harald; Rupprecht, Christoph; Styles, Vanessa
2014
Mean curvature flow with triple junctions in higher space dimensions. Zbl 1291.53078
Depner, Daniel; Garcke, Harald; Kohsaka, Yoshihito
2014
Efficient image segmentation and restoration using parametric curve evolution with junctions and topology changes. Zbl 1308.94013
Benninghoff, Heike; Garcke, Harald
2014
Stable phase field approximations of anisotropic solidification. Zbl 1310.65120
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2014
Phase field models versus parametric front tracking methods: are they accurate and computationally efficient? Zbl 1388.65096
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2014
Existence of weak solutions for a diffuse interface model for two-phase flows of incompressible fluids with different densities. Zbl 1273.76421
Abels, Helmut; Depner, Daniel; Garcke, Harald
2013
On an incompressible Navier-Stokes/Cahn-Hilliard system with degenerate mobility. Zbl 1347.76052
Abels, Helmut; Depner, Daniel; Garcke, Harald
2013
Primal-dual active set methods for Allen-Cahn variational inequalities with nonlocal constraints. Zbl 1272.65060
Blank, Luise; Garcke, Harald; Sarbu, Lavinia; Styles, Vanessa
2013
Eliminating spurious velocities with a stable approximation of viscous incompressible two-phase Stokes flow. Zbl 1286.76040
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2013
Finite element approximation of one-sided Stefan problems with anisotropic, approximately crystalline, Gibbs-Thomson law. Zbl 1271.80005
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2013
Curvature driven interface evolution. Zbl 1279.53064
Garcke, Harald
2013
Linearized stability analysis of surface diffusion for hypersurfaces with triple lines. Zbl 1263.35031
Depner, Daniel; Garcke, Harald
2013
Nonlocal Allen-Cahn systems. Analysis and a primal-dual active set method. Zbl 1279.65087
Blank, Luise; Garcke, Harald; Sarbu, Lavinia; Styles, Vanessa
2013
On the stable discretization of strongly anisotropic phase field models with applications to crystal growth. Zbl 1427.74161
Barrett, John. W.; Garcke, Harald; Nürnberg, Robert
2013
Thermodynamically consistent, frame indifferent diffuse interface models for incompressible two-phase flows with different densities. Zbl 1242.76342
Abels, Helmut; Garcke, Harald; Grün, Günther
2012
Phase-field approaches to structural topology optimization. Zbl 1356.49044
Blank, Luise; Garcke, Harald; Sarbu, Lavinia; Srisupattarawanit, Tarin; Styles, Vanessa; Voigt, Axel
2012
Parametric approximation of isotropic and anisotropic elastic flow for closed and open curves. Zbl 1242.65188
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2012
Elastic flow with junctions: variational approximation and applications to nonlinear splines. Zbl 1252.76038
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2012
Allen-Cahn and Cahn-Hilliard variational inequalities solved with optimization techniques. Zbl 1356.49009
Blank, Luise; Butz, Martin; Garcke, Harald; Sarbu, Lavinia; Styles, Vanessa
2012
The approximation of planar curve evolutions by stable fully implicit finite element schemes that equidistribute. Zbl 1218.65105
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2011
Solving the Cahn-Hilliard variational inequality with a semi-smooth Newton method. Zbl 1233.35132
Blank, Luise; Butz, Martin; Garcke, Harald
2011
Thermodynamically consistent higher order phase field Navier-Stokes models with applications to biomembranes. Zbl 1210.35258
Farshbaf-Shaker, M. Hassan; Garcke, Harald
2011
Existence of weak solutions for the Stefan problem with anisotropic Gibbs-Thomson law. Zbl 1235.35286
Garcke, Harald; Schaubeck, Stefan
2011
Mathematical modelling. 2nd revised ed. Zbl 1226.00031
Eck, Christof; Garcke, Harald; Knabner, Peter
2011
Parametric approximation of surface clusters driven by isotropic and anisotropic surface energies. Zbl 1205.65263
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2010
Numerical approximation of gradient flows for closed curves in $$\mathbb R^{d}$$. Zbl 1185.65027
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2010
On stable parametric finite element methods for the Stefan problem and the Mullins-Sekerka problem with applications to dendritic growth. Zbl 1201.80075
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2010
Finite-element approximation of coupled surface and grain boundary motion with applications to thermal grooving and sintering. Zbl 1410.80015
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2010
Motion by anisotropic mean curvature as sharp interface limit of an inhomogeneous and anisotropic Allen-Cahn equation. Zbl 1204.35026
Alfaro, Matthieu; Garcke, Harald; Hilhorst, Danielle; Matano, Hiroshi; Schätzle, Reiner
2010
Surface diffusion with triple junctions: A stability criterion for stationary solutions. Zbl 1228.35042
Garcke, Harald; Ito, Kazuo; Kohsaka, Yoshihito
2010
An anisotropic, inhomogeneous, elastically modified Gibbs-Thomson law as singular limit of a diffuse interface model. Zbl 1233.35010
Garcke, Harald; Kraus, Christiane
2010
Nonlinear stability of stationary solutions for curvature flow with triple junction. Zbl 1187.35013
Garcke, Harald; Kohsaka, Yoshihito; Ševčovič, Daniel
2009
On the parametric finite element approximation of evolving hypersurfaces in $$\mathbb R^3$$. Zbl 1145.65068
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2008
Parametric approximation of willmore flow and related geometric evolution equations. Zbl 1186.65133
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2008
Allen-Cahn systems with volume constraints. Zbl 1147.49036
Garcke, Harald; Nestler, Britta; Stinner, Björn; Wendler, Frank
2008
Numerical approximation of anisotropic geometric evolution equations in the plane. Zbl 1145.65069
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2008
A variational formulation of anisotropic geometric evolution equations in higher dimensions. Zbl 1149.65082
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2008
On sharp interface limits of Allen-Cahn/Cahn-Hilliard variational inequalities. Zbl 1165.35406
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2008
The $$\Gamma$$-limit of the Ginzburg-Landau energy in an elastic medium. Zbl 1193.49054
Garcke, Harald
2008
Nonlinear stability of stationary solutions for surface diffusion with boundary conditions. Zbl 1167.35005
Garcke, Harald; Ito, Kazuo; Kohsaka, Yoshihito
2008
Mathematical modelling. Zbl 1223.00014
Eck, Christof; Garcke, Harald; Knabner, Peter
2008
Mini-workshop: Mathematics of biological membranes. Abstracts from the mini-workshop held August 31 – September 6, 2008. Zbl 1177.74021
Garcke, Harald (ed.); Niethammer, Barbara (ed.); Peletier, Mark A. (ed.); Röger, Matthias (ed.)
2008
A parametric finite element method for fourth order geometric evolution equations. Zbl 1112.65093
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2007
On the variational approximation of combined second and fourth order geometric evolution equations. Zbl 1148.65074
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2007
A phase field model for the electromigration of intergranular voids. Zbl 1132.35082
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2007
Stress- and diffusion-induced interface motion: modelling and numerical simulations. Zbl 1128.74004
Garcke, Harald; Nürnberg, Robert; Styles, Vanessa
2007
Surfactant spreading on thin viscous films: nonnegative solutions of a coupled degenerate system. Zbl 1102.35056
Garcke, Harald; Wieland, Sandra
2006
Second order phase field asymptotics for multi-component systems. Zbl 1106.35116
Garcke, Harald; Stinner, Björn
2006
Finite element approximation of a phase field model for surface diffusion of voids in a stressed solid. Zbl 1078.74050
Barrett, John W.; Garcke, Harald; Nürnberg, Robert
2006
On asymptotic limits of Cahn-Hilliard systems with elastic misfit. Zbl 1366.74055
Garcke, Harald; Kwak, David Jung Chul
2006
Multiscale problems in solidification processes. Zbl 1366.80004
Eck, Christof; Garcke, Harald; Stinner, Björn
2006
On a Cahn–Hilliard model for phase separation with elastic misfit. Zbl 1072.35081
Garcke, Harald
2005
...and 28 more Documents
all top 5
#### Cited by 1,167 Authors
71 Garcke, Harald 34 Nürnberg, Robert 25 Barrett, John W. 21 Abels, Helmut 19 Lam, Kei Fong 16 Kim, Junseok 13 Miranville, Alain M. 13 Nestler, Britta 12 Elliott, Charles M. 12 Grün, Günther 12 Lowengrub, John Samuel 12 Rocca, Elisabetta 11 Kahle, Christian 10 Sun, Shuyu 10 Wise, Steven M. 9 Grasselli, Maurizio 9 Kraus, Christiane 9 Roubíček, Tomáš 9 Stinner, Bjorn 9 Styles, Vanessa 8 Colli, Pierluigi 8 Hinze, Michael 8 Wheeler, Glen E. 8 Winkler, Michael 8 Yang, Xiaofeng 8 Zhu, Peicheng 7 Blank, Luise 7 Du, Qiang 7 Giorgini, Andrea 7 Kou, Jisheng 7 Nochetto, Ricardo Horacio 7 Olshanskii, Maxim A. 7 Voigt, Axel 7 Wang, Cheng 6 Dai, Shibin 6 Dede, Luca 6 Dong, Suchuan 6 Escher, Joachim 6 Frigeri, Sergio 6 Giacomelli, Lorenzo 6 Hughes, Thomas J. R. 6 Lee, Hyun Geun 6 Liu, Chein-Shan 6 Matthes, Daniel 6 Metzger, Stefan 6 Pozzi, Paola 6 Scarpa, Luca 6 Signori, Andrea 6 Stoll, Martin 6 Taranets, Roman M. 6 Tomassetti, Giuseppe 6 Wang, Xiaoming 6 Yang, Zhiguo 6 Zhao, Xiaopeng 5 Aland, Sebastian 5 Alber, Hans-Dieter 5 Bonetti, Elena 5 Bretin, Elie 5 Chugunova, Marina A. 5 Deckelnick, Klaus 5 Fabrizio, Mauro 5 Fischer, Julian 5 Glasner, Karl B. 5 Hecht, Claudia 5 Hintermüller, Michael 5 Jüngel, Ansgar 5 King, John Robert 5 Liang, Bo 5 Lin, Ping 5 Selzer, Michael 5 Shen, Jie 5 Tavakoli, Rouhollah 5 Van Brummelen, Harald 5 Wang, Qi 5 Xu, Yan 5 Zhao, Quan 4 Baňas, L’ubomír 4 Bänsch, Eberhard 4 Bartels, Sören 4 Beneš, Michal 4 Bertozzi, Andrea Louise 4 Bosch, Jessica 4 Boyer, Franck 4 Burger, Martin 4 Calo, Victor Manuel 4 Depner, Daniel 4 Ebenbeck, Matthias 4 Esedoglu, Selim 4 Farshbaf-Shaker, Mohammad Hassan 4 Fukao, Takeshi 4 Gal, Ciprian Gheorghe Sorin 4 Giesselmann, Jan 4 Giorgi, Claudio 4 Gnann, Manuel V. 4 Guo, Ruihan 4 Han, Daozhi 4 Heida, Martin 4 Knabner, Peter 4 Knopf, Patrik 4 Knüpfer, Hans ...and 1,067 more Authors
all top 5
#### Cited in 169 Serials
105 Journal of Computational Physics 34 Computer Methods in Applied Mechanics and Engineering 23 Archive for Rational Mechanics and Analysis 23 Journal of Scientific Computing 23 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 22 Numerische Mathematik 21 Journal of Mathematical Analysis and Applications 20 Journal of Computational and Applied Mathematics 20 Journal of Differential Equations 19 Physica D 18 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 18 European Journal of Applied Mathematics 15 SIAM Journal on Mathematical Analysis 15 SIAM Journal on Scientific Computing 14 ZAMP. Zeitschrift für angewandte Mathematik und Physik 14 SIAM Journal on Numerical Analysis 13 Computers & Mathematics with Applications 13 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 12 Mathematical Methods in the Applied Sciences 12 Discrete and Continuous Dynamical Systems 11 Nonlinear Analysis. Real World Applications 11 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 10 Applied Mathematics and Optimization 10 SIAM Journal on Control and Optimization 9 Journal of Fluid Mechanics 9 Mathematics of Computation 9 Applied Mathematics and Computation 9 Applied Numerical Mathematics 9 SIAM Journal on Applied Mathematics 8 Computational Mechanics 8 Calculus of Variations and Partial Differential Equations 8 Advances in Computational Mathematics 7 Journal of Mathematical Physics 7 Communications in Partial Differential Equations 7 Continuum Mechanics and Thermodynamics 7 Interfaces and Free Boundaries 6 Applicable Analysis 6 Nonlinearity 6 Numerical Methods for Partial Differential Equations 6 Applied Mathematics Letters 6 ZAMM. Zeitschrift für Angewandte Mathematik und Mechanik 6 Discrete and Continuous Dynamical Systems. Series S 5 Journal de Mathématiques Pures et Appliquées. Neuvième Série 5 NoDEA. Nonlinear Differential Equations and Applications 5 Discrete and Continuous Dynamical Systems. Series B 4 Physica A 4 Annali di Matematica Pura ed Applicata. Serie Quarta 4 International Journal for Numerical Methods in Engineering 4 Mathematics and Computers in Simulation 4 Mathematische Nachrichten 4 Applied Mathematical Modelling 4 Journal of Nonlinear Science 4 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 4 Journal of Mathematical Fluid Mechanics 4 Journal of Evolution Equations 4 Archives of Computational Methods in Engineering 4 Communications on Pure and Applied Analysis 4 Geometric Flows 3 Computers and Fluids 3 International Journal of Engineering Science 3 Journal of Mathematical Biology 3 Mathematische Annalen 3 Proceedings of the American Mathematical Society 3 Quarterly of Applied Mathematics 3 Transactions of the American Mathematical Society 3 European Journal of Mechanics. B. Fluids 3 Analysis (München) 3 Computational Geosciences 3 Milan Journal of Mathematics 3 Multiscale Modeling & Simulation 3 Boundary Value Problems 3 Advances in Nonlinear Analysis 2 Computer Physics Communications 2 Communications on Pure and Applied Mathematics 2 Journal of Statistical Physics 2 Bulletin of Mathematical Biology 2 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV 2 BIT 2 Calcolo 2 Journal für die Reine und Angewandte Mathematik 2 Zeitschrift für Analysis und ihre Anwendungen 2 Mathematical and Computer Modelling 2 Asymptotic Analysis 2 Japan Journal of Industrial and Applied Mathematics 2 Applications of Mathematics 2 Numerical Algorithms 2 International Journal of Computer Mathematics 2 Journal of Elasticity 2 Journal of Mathematical Imaging and Vision 2 Journal of Mathematical Sciences (New York) 2 Computing and Visualization in Science 2 M2AN. Mathematical Modelling and Numerical Analysis. ESAIM, European Series in Applied and Industrial Mathematics 2 Acta Mathematica Sinica. English Series 2 Communications in Nonlinear Science and Numerical Simulation 2 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 2 Stochastics and Dynamics 2 SIAM Journal on Applied Dynamical Systems 2 Structural and Multidisciplinary Optimization 2 SIAM Journal on Imaging Sciences 2 Journal of Theoretical Biology ...and 69 more Serials
all top 5
#### Cited in 37 Fields
560 Partial differential equations (35-XX) 342 Numerical analysis (65-XX) 327 Fluid mechanics (76-XX) 162 Mechanics of deformable solids (74-XX) 94 Biology and other natural sciences (92-XX) 91 Calculus of variations and optimal control; optimization (49-XX) 79 Statistical mechanics, structure of matter (82-XX) 67 Differential geometry (53-XX) 62 Classical thermodynamics, heat transfer (80-XX) 25 Dynamical systems and ergodic theory (37-XX) 19 Global analysis, analysis on manifolds (58-XX) 12 Integral equations (45-XX) 12 Optics, electromagnetic theory (78-XX) 10 Operations research, mathematical programming (90-XX) 9 Ordinary differential equations (34-XX) 9 Computer science (68-XX) 8 Operator theory (47-XX) 8 Probability theory and stochastic processes (60-XX) 6 Functional analysis (46-XX) 6 Geophysics (86-XX) 6 Systems theory; control (93-XX) 6 Information and communication theory, circuits (94-XX) 4 Approximations and expansions (41-XX) 3 Combinatorics (05-XX) 3 Statistics (62-XX) 2 Real functions (26-XX) 2 Measure and integration (28-XX) 2 Functions of a complex variable (30-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Mechanics of particles and systems (70-XX) 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 2 Mathematics education (97-XX) 1 General and overarching topics; collections (00-XX) 1 Potential theory (31-XX) 1 Integral transforms, operational calculus (44-XX) 1 Convex and discrete geometry (52-XX) 1 Quantum theory (81-XX)
#### Wikidata Timeline
The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
|
2021-05-10 22:43:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4814712107181549, "perplexity": 12503.417037844876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989749.3/warc/CC-MAIN-20210510204511-20210510234511-00576.warc.gz"}
|
https://chenhaoxiang.cn/2021/06/20210604095420945F.html
|
Link to the original text https://www.cnblogs.com/zhouzhendong/p/UOJ53.html
# The question
Given a tree with n Tree of nodes .
Every point has a weight .
For each of these $i$ Given three parameters $a_i,b_i,c_i$ , From $i$ Starting from one point, the next point you can reach x You need to meet one of three requirements :
1. x stay $a_i$ To $b_i$ On the simple path of .
2. x stay $a_i$ To $c_i$ On the simple path of .
3. x stay $c_i$ To $b_i$ On the simple path of .
Starting from any point , The weight of the passing node and the front k What's the length of the small route . Note that you can go through the nodes repeatedly .
Space restriction 100MB, The time limit 3s
$n,k\leq 5\times 10^5$
Make sure the answer is less than $10^8$.
because $k,n$ Isomorphism , So the later complexity analysis doesn't distinguish $n$ and $k$ .
First of all, the nodes that can be reached by each point are divided into no more than two chains .
Partition with tree chain + Line segment tree + Preprocessing heavy chain prefixes min , To achieve $O(\log n)$ Find the minimum weight point on a chain .
If we come violent , So obviously :
Initially, all the points are thrown directly into a pile , The key words are the sum of node weights , Take out the smallest elements in turn , Every time I take out one , Just add a solution to the heap that you can get from this point , So just take it out k The next time .
Obviously, the complexity of time and space has withered .
So we consider splitting the tree chain 、 Mark these lines on the tree , The keywords for these tags are “ From the current state , Then walk to the point with the smallest weight in the interval to get the path length ”, When it's taken out of the heap, it splits the markers , You get a time complexity $O(n\log^2 n)$ Spatial complexity $O(n\log n)$ How to do it , It's still not enough to pass the question .
Consider changing the form of the tag to $(a,b)$ , Two endpoints in a chain . So when you take out this chain , Just update the heap state in two parts :
1. Split the chain from the minimum weight point of the chain , Into two shorter chains , Add to the heap .
2. Starting from the minimum weight point of this chain , Up to two chains , Add to the heap .
So we get a spatial complexity $O(n)$ , Time complexity $O(n\log n)$ How to do it .
And then because the blogger is so big , Still stuck in space .
So I forced vector The array is written as an array simulation list , Write the system heap as a handwriting heap ……
Finally, I suddenly found out that he promised that the answer was less than 1e8 , It's not that the edge weight is less than 1e8 I was stunned when I went to school .
So many constants stuck before , In fact, you only need to change the data type of the storage path length from longlong Change to int Just fine ……
# Code
#include <bits/stdc++.h>
using namespace std;
typedef long long LL;
LL x=0,f=0;
char ch=getchar();
while (!isdigit(ch))
f|=ch=='-',ch=getchar();
while (isdigit(ch))
x=(x<<1)+(x<<3)+(ch^48),ch=getchar();
return f?-x:x;
}
const int N=500005;
int n,k;
struct Info{
int a,b,c,d;
}v[N];
struct Gragh{
static const int M=N*2;
int cnt,y[M],nxt[M],fst[N];
void clear(){
cnt=1;
memset(fst,0,sizeof fst);
}
y[++cnt]=b,nxt[cnt]=fst[a],fst[a]=cnt;
}
}g;
int w[N];
int fa[N],depth[N],size[N],son[N],top[N],I[N],O[N],aI[N],Min[N],Time=0;
void dfs1(int x,int pre,int d){
depth[x]=d,fa[x]=pre,son[x]=0,size[x]=1;
for (int i=g.fst[x];i;i=g.nxt[i]){
int y=g.y[i];
if (y!=pre){
dfs1(y,x,d+1);
size[x]+=size[y];
if (!son[x]||size[y]>size[son[x]])
son[x]=y;
}
}
}
void ckwMin(int &x,int y){
if (w[x]>w[y])
x=y;
}
void dfs2(int x,int Top){
top[x]=Top,aI[I[x]=++Time]=x;
if (son[x])
ckwMin(Min[son[x]],Min[x]),dfs2(son[x],Top);
for (int i=g.fst[x];i;i=g.nxt[i]){
int y=g.y[i];
if (y!=fa[x]&&y!=son[x])
dfs2(y,y);
}
O[x]=Time;
}
int LCA(int x,int y){
int fx=top[x],fy=top[y];
while (fx!=fy){
if (depth[fx]<depth[fy])
swap(fx,fy),swap(x,y);
x=fa[fx],fx=top[x];
}
return depth[x]<depth[y]?x:y;
}
int isanc(int x,int y){
return I[x]<=I[y]&&I[y]<=O[x];
}
int go_up(int x,int y){
if (isanc(aI[I[y]+1],x))
return aI[I[y]+1];
int fx=top[x];
while (depth[fx]>depth[y]){
x=fx;
if (depth[fa[x]]>depth[y])
x=fa[x],fx=top[x];
else
break;
}
return x;
}
namespace Seg{
int v[N<<2];
void build(int rt,int L,int R){
if (L==R)
return (void)(v[rt]=aI[L]);
int mid=(L+R)>>1,ls=rt<<1,rs=ls|1;
build(ls,L,mid);
build(rs,mid+1,R);
ckwMin(v[rt]=v[ls],v[rs]);
}
int query(int rt,int L,int R,int xL,int xR){
if (xL<=L&&R<=xR)
return v[rt];
int mid=(L+R)>>1,ls=rt<<1,rs=ls|1;
if (xR<=mid)
return query(ls,L,mid,xL,xR);
if (xL>mid)
return query(rs,mid+1,R,xL,xR);
int res=query(ls,L,mid,xL,xR);
ckwMin(res,query(rs,mid+1,R,xL,xR));
return res;
}
int query(int x,int y){
int ans=x,fx=top[x],fy=top[y];
while (fx!=fy){
if (depth[fx]<depth[fy])
swap(fx,fy),swap(x,y);
ckwMin(ans,Min[x]);
x=fa[fx],fx=top[x];
}
if (depth[x]>depth[y])
swap(x,y);
ckwMin(ans,query(1,1,n,I[x],I[y]));
return ans;
}
}
const int Size=N*5;
struct Node{
int a,b,len;
Node(){}
Node(int _a,int _b,int pre){
a=_a,b=_b;
len=pre+w[Seg :: query(a,b)];
}
}s[Size];
int stt=0;
bool cmp(int a,int b){
return s[a].len>s[b].len;
}
struct priority_Queue{
int v[Size],s;
bool empty(){
return s==0;
}
void up(int x){
int y=x>>1;
while (y){
if (cmp(v[y],v[x]))
swap(v[x],v[y]),x=y,y=x>>1;
else
break;
}
}
void down(int x){
int y=x<<1;
while (y<=s){
if (y<s&&cmp(v[y],v[y+1]))
y|=1;
if (cmp(v[x],v[y]))
swap(v[x],v[y]),x=y,y=x<<1;
else
break;
}
}
void pop(){
swap(v[1],v[s--]);
down(1);
}
int top(){
return v[1];
}
void push(int x){
v[++s]=x;
up(s);
}
}Q;
void push(Node x){
s[++stt]=x,Q.push(stt);
}
void Split(Node x,int Mi){
int a=x.a,b=x.b,m=Mi;
if (!isanc(m,a))
swap(a,b);
if (m!=a){
int aa=go_up(a,m);
push(Node(a,aa,x.len-w[m]));
}
if (m!=b){
int bb;
if (isanc(m,b))
bb=go_up(b,m);
else
bb=fa[m];
push(Node(b,bb,x.len-w[m]));
}
}
void solve(){
while (!Q.empty())
Q.pop();
for (int i=1;i<=n;i++)
push(Node(i,i,0));
while (k--){
Node now=s[Q.top()];
Q.pop();
printf("%d\n",now.len);
int x=Seg :: query(now.a,now.b);
Split(now,x);
if (~v[x].a)
push(Node(v[x].a,v[x].b,now.len));
if (~v[x].c)
push(Node(v[x].c,v[x].d,now.len));
}
}
int main(){
for (int i=1;i<=n;i++)
g.clear();
for (int i=2;i<=n;i++){
}
for (int i=1;i<=n;i++)
dfs1(1,0,0);
dfs2(1,1);
for (int i=1;i<=n;i++){
int a=v[i].a,b=v[i].b,c=v[i].c;
v[i].a=v[i].b=v[i].c=v[i].d=-1;
if (depth[c]<depth[b])
swap(b,c);
if (depth[b]<depth[a])
swap(a,b);
if (depth[c]<depth[b])
swap(b,c);
if (a==b||b==c){
v[i].a=a,v[i].b=c;
continue;
}
int ab=LCA(a,b),ac=LCA(a,c),bc=LCA(b,c);
if (ab==ac&&ab==bc){
int g=ab;
if (depth[a]>depth[g]){
v[i].a=a,v[i].b=b;
v[i].c=c,v[i].d=go_up(c,g);
continue;
}
v[i].a=b,v[i].b=c;
continue;
}
if (ab==bc)
swap(a,b),swap(ac,bc);
else if (ac==bc)
swap(a,c),swap(ab,bc);
if (depth[c]<depth[b])
swap(b,c),swap(ab,ac);
if (isanc(b,c)){
v[i].a=a,v[i].b=c;
continue;
}
v[i].a=a,v[i].b=b;
int d=go_up(c,bc);
v[i].c=c,v[i].d=d;
}
Seg :: build(1,1,n);
solve();
return 0;
}
## UOJ#53. 【UR #4】 After Santa The tree chain splits k More articles about short circuit
1. UOJ#435. 【 Training team work 2018】Simple Tree The tree chain splits , Block
Link to the original text www.cnblogs.com/zhouzhendong/p/UOJ435.html Preface It's true that I can't write block questions . For a variety of reasons , I made a lot of mistakes when I was writing code , The most fatal thing is that the array is too small .* ...
2. BZOJ 4326 The tree chain splits + Two points + Difference + Memorize
last year NOIP I didn't know how to divide the tree chain when I was young ! Or was UOJ I have a set of data cards . The idea of difference is still wonderful ! #include <iostream> #include <cstring> #i ...
3. BZOJ 2157: tourism ( The tree chain splits )
The tree chain splits .. The sample is too big to adjust ... By the way, put up the data generator -------------------------------------------------------------------- ...
4. Qtree3 Answer key ( The tree chain splits ( false )+ Line segment tree +set)
unduly polite words that a friend does not need to say : Recently, Luo Gu added a lot of good questions ... The entrance to the original question This question seems to be SPOJ The topic , Pretty good . Look, there's no problem solution or one ... The question : Obviously .. Answer key : It's very violent : The tree chain splits ( false )+ Line segment tree +$$set$$... ...
5. hdu 5052 The tree chain splits
Yaoge’s maximum profit Time Limit: 10000/5000 MS (Java/Others) Memory Limit: 65536/65536 K (Java/ ...
6. Qtree3 Answer key ( The tree chain splits + Line segment tree +set)
unduly polite words that a friend does not need to say : Recently, Luo Gu added a lot of good questions ... The entrance to the original question This question seems to be SPOJ The topic , Pretty good . Look, there's no problem solution or one ... The question It's easy to understand, isn't it .. Answer key It's very violent : The tree chain splits ( false )+ Line segment tree + std :: set ...
7. P1001 A+B Problem ( The tree chain splits )
This question tests our ability to construct models . Consider building a tree , There is... In the tree 3 Nodes , node 1 And nodes 2 Even one weight is zero a The edge of , node 1 And nodes 3 Even one weight is zero b The edge of , Obviously the answer is nodes 2 To the node 3 Shortest path . But that's not enough . Consider the nature of addition , ...
8. 【BZOJ4196】[NOI2015] Package manager ( The tree chain splits )
Click here to see the topic General meaning : Yes $$n$$ Software package , Their dependency forms a tree . Now? , Ask you to install or uninstall a package , How many packages will be affected . The tree chain splits The question should be The tree chain splits Algorithm comparison of introductory topics bar . For Ann ...
9. 【BZOJ2243】[SDOI2011] dyeing ( The tree chain splits )
Click here to see the topic General meaning : There is a tree $$n$$ Rootless trees with nodes and $$m$$ Operations , And each node has a color . There are two types of operations : One is to color all the points between the paths on the two-point tree $$c$$, The other is to ask the number of color segments between paths on two-point trees . ...
## Random recommendation
1. hold php Upload sae The problem is to use IO
Application migration guide One , Why transplant applications SAE prohibit IO Write operations , Code directory cannot be written to . This means that the normal program uploads images . Generation cache and other operations cannot be performed in SAE Upper normal operation , At this time, you need to modify the code to make your program run in SA ...
2. How to deal with the problem caused by stepping on the line ---lsof,strace
Two tracking processes linux command lsof -p pidstrace -p pid Quickly track where problems occur in the process . Because the process itself works well , But the interior has been waiting for the third party to answer , Cause the process to feign death . Quote from :http://li ...
3. Baidu UEditor Simple installation, debugging and calling , Other online tutorials are too official , Not for beginners
For starters , As long as the function can be realized , Other settings are completely default . Preview : 1. First Download it on the official website , I won't go into that . Download and unzip to the directory you want , I'll put it in the root directory where you need to use the editor , Insert the following HTML Code : &l ...
4. Shortcut key Ctrl+c、Ctrl+d、Ctrl+u、Ctrl+a、Ctrl+e
tab: Command or path completion key Ctrl +c : Terminate the current task command or program Ctrl +d : Exit the current user environment Ctrl +Shift+c ssh client ssh The command copied in the Ctrl + a To the beginning Ctrl ...
Original address :Access to the path '' is denied. Solution author : Waiting for the red apricot on the wall ccess to the path ' route ' is denied. I found a lot of information on the Internet , Finally, it's solved ...
6. Province 、 City 、 Three level linkage of districts and counties Html Code
<!DOCTYPE html><html><head> <meta http-equiv="Content-Type" content=& ...
7. js Realize text box input text number limit code
html: <div class="curr_eval_box"> <input type="hidden" n ...
8. java,while The use of recycling , Receive user input , Do different things !
package cn.edu.nwpu.java; import java.util.Scanner; public class IsoscelesTriangle { public static v ...
9. One spinner Control
Layout file <?xml version="1.0" encoding="utf-8"?><android.support.constraint. ...
10. Book one :lesson fifteen.
original text :Your passports,please. A:Are you Swedish? B:No,we are not. We are Danish. A:Are your friends Dani ...
|
2022-09-29 12:07:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5039296746253967, "perplexity": 5075.758999920369}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00514.warc.gz"}
|
https://www.clutchprep.com/chemistry/practice-problems/146855/water-is-a-polar-molecule-meaning-it-carries-partial-charges-or-on-opposite-side
|
Solutions, Molarity and Intermolecular Forces Video Lessons
Concept: Solutions and Molarity.
# Problem: Water is a polar molecule, meaning it carries partial charges (𝛿+ or 𝛿-) on opposite sides of the molecule. For two formula units of NaCl. drag the sodium ions and chloride ions to where they would most likely appear based on the grouping of the water molecules in the are provided. Not that red spheres represent O atoms and white spheres represent H atoms.
###### FREE Expert Solution
Water is a polar molecule.
• O is more electronegative than H thus having a 𝛿-
• H then has a 𝛿+
91% (265 ratings)
###### Problem Details
Water is a polar molecule, meaning it carries partial charges (𝛿+ or 𝛿-) on opposite sides of the molecule. For two formula units of NaCl. drag the sodium ions and chloride ions to where they would most likely appear based on the grouping of the water molecules in the are provided. Not that red spheres represent O atoms and white spheres represent H atoms.
|
2020-11-27 03:50:43
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9142764806747437, "perplexity": 2339.211774790676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189038.24/warc/CC-MAIN-20201127015426-20201127045426-00275.warc.gz"}
|
https://codeforces.com/blog/entry/87234
|
Discussion Thread of CodeNation Innovation Labs Hiring Challenge held on 26 January 2021.
PROBLEM 1 :-
Statement
Sample Test Cases
Extra Test Case
PROBLEM 2 :-
Statement
Sample Test Cases
PROBLEM 3 :-
Statement
Sample Test Cases
PROBLEM 4 :-
Statement
Sample Test Cases
PROBLEM 5 :-
Statement
Sample Test Cases
PROBLEM 6 :-
Statement
Sample Test Cases
PS :- THE CONTEST HAS ENDED. THOSE WHO HAVE PARTICIPATED CAN SHARE THEIR SOLUTIONS. INTERESTED NON — PARTICIPANTS CAN ALSO SHARE THEIR APPROACHES.
• +92
» 3 months ago, # | ← Rev. 4 → +95 Video Editorial with Scoring Distribution: https://youtu.be/8tEXXSc351cProblem 1: Observation 1: Notice that the left arrows will form a prefix of any row. L U L can be made L U U without affecting anything. Observation 2: The number of left arrows will decrease as we go from the bottom row to the top row.These 2 solutions lead to a $O(N\cdot M)$ DP where N, M are the dimensions of the rectangle.Problem 2: Observation 1: If any bit is set in >= 2 numbers in the range, answer is 0. So we can compute prefix frequencies for each bit. Observation 2: Answer will atmost be 2 because we can make the last bit 1 (having two odd integers in the range is sufficient).So, with the prefix computation, if answer is not 0, it is 2 — (number of odd integers in the range). Solution complexity: $O(N + Q)$Problem 3: Simple implementation problem — just store the frequency of every number. Answer = Sum of initial array. Then, ans = min(ans, sum — freq(val) * val) for all values. Solution complexity: $O(N)$Problem 4:We can have a $O(N \cdot M)$ DP, where DP(i, j) stores the number of ways of spending i days, where at the last day, we did an activity of type j, adhering to all the constraints. Using prefix summations, we can get the transitions in O(1) time.Problem 5:Observation 1: The order of the elements does not matter — we can swap any two elements doing the mentioned operations, and also get the xor of any subset of elements.Observation 2: The above leads to Gaussian elimination based solution — we only care about the basis (linearly independent set) of elements in the array, since we can get all other XORs from it. Thus, the problem is reduced to finding the basis with the smallest sum. We can do this by finding any basis, and reduce the larger basis elements using the smaller ones. Solution code: http://p.ip.fi/-1b1Problem 6:Observation 1: The required node is basically the LCA of all the leaf nodes in the range [L, R]. Observation 2: If we do a DFS order traversal of the tree and store the discovery time of every node, then we only care about finding LCA(first discovered leaf node, last discovered leaf node) in the query range, because all the other nodes lie in between.Using this, we can solve the problem in $O(N \log N)$ using various techniques.
• » » 3 months ago, # ^ | 0 Did the questions have uneven points distribution or the same points for every question?
• » » » 3 months ago, # ^ | +24 The harder problems had a higher score weightage. $P_3 < P_2 < P_1 = P_4 < P_6 < P_5$.
• » » » » 3 months ago, # ^ | ← Rev. 2 → +2 JUST CURIOUSWere the problems prepared by InterviewBit team or some CodeNation people ? SpoilerFEEL FREE TO NOT ANSWER IF THE QUESTION SEEMS IRRELEVANT
• » » » » » 3 months ago, # ^ | +2 Various questions were given the InterviewBit team and I was responsible for reviewing the questions. So we are not the setters, more like coordinators/reviewers.
• » » » » 3 months ago, # ^ | +9 Why not show this to participants?
• » » » » 3 months ago, # ^ | +6 Can you elaborate D a little more ?
• » » » » 3 months ago, # ^ | ← Rev. 3 → +14 Can you provide a rough estimate of the scores.
• » » » » 3 months ago, # ^ | 0 Due to this the people who did if else, if else for 3 hours will get more point than someone who solved genuinely.
• » » » » » 3 months ago, # ^ | 0 How?
• » » » » » » 3 months ago, # ^ | ← Rev. 2 → 0 Did you give the last codenation test. If you look at the leaderboard, you will see some people who solved no question fully had better ranks than the one who solved 2 or 3 questions. This was due to the problems uneven marking and interviewbit's system of telling the failed case to certain limit. ProofLook at the person on 364th rank. If you go on to the next 10 pages you will see more of them. https://www.interviewbit.com/contest/codeagon-2020/scoreboard?page=19#
• » » 3 months ago, # ^ | ← Rev. 2 → +24 Problem 6: Segment Tree + LCAVideo editorial will be very cool. Ashishgup
• » » » 3 months ago, # ^ | ← Rev. 2 → 0 Yeah Ashishgup
• » » » » 3 months ago, # ^ | +131 Sure, I'll consider making a video editorial on it tomorrow if enough people are interested :)
• » » » » » 3 months ago, # ^ | +1 It'd be awesome if you could do so
• » » » » » » 3 months ago, # ^ | +19 I've uploaded the solutions: https://youtu.be/8tEXXSc351c
• » » » » » » » 3 months ago, # ^ | 0 Thanks a lot, highly appreciated
• » » » » » 3 months ago, # ^ | +4 plus please make these questions available to some platform so that we can solve these questions properly with proper test cases.
• » » 3 months ago, # ^ | ← Rev. 2 → 0 Can you elaborate on problem 4? Like what will the recurrence relation look like?
• » » » 3 months ago, # ^ | 0 yes, dp[day][lastact][streak] is very intuitive but how to reduce it to n^2
• » » » » 3 months ago, # ^ | 0 remove streak from dimension, and move it transitions instead. then it can be optimized by prefix sums.
• » » » » » 3 months ago, # ^ | 0 Can you please elaborate on how to do that ? Coz I was able to solve it with dp[day][lastact][streak] But couldn't optimize it and it obviously gave TLE and I want to learn how to solve this very badly :( Please please help!!!
• » » » 3 months ago, # ^ | +4 $dp[i][j] = \sum\limits_{l=1}^{A_j}\sum\limits_{k=1}^{m} dp[i-l][k] - \sum\limits_{l=1}^{A_j}dp[i-l][j]$. You can simplify this by defining $pre[i][j] = \sum\limits_{l=1}^{i} \sum\limits_{j=1}^{m} dp[i-l][j]$
• » » 3 months ago, # ^ | 0 In problem 4, we need to define $dp[0][j] = \frac{1}{m-1}$. So we define $dp[0][j] = 1$ and divide the final obtained value by $(m-1)$. Or was there some easy way to define base cases?
• » » » 3 months ago, # ^ | 0 Suppose we want answer for dp[i][j] and i-A[j]<=0, this is the only case we will be using dp[0][j]. Whenever we encounter this case, we can explicitly add 1 to dp[i][j] instead of defining dp[0][j]=1 and dividing it later. This will solve the issue.
• » » 3 months ago, # ^ | +25 Will the ranklist be revealed?
• » » 3 months ago, # ^ | +48 For problem 6, we can notice that LCA can be thought of as a monoid operation, so we can make a normal segtree on the range [1...N] and store the LCA in the segtree nodes. (we can store -1 corresponding to non-leafs, which will work as an identity element: LCA(x,-1) = x = LCA(-1,x))To make it better we can notice that LCA(X,X) = X, so it is also idempotent, so we can use a sparse table as well. Which gives us a fast solution. $O((NlogN+Q) * f(N))$. Where $f(N)$ is the time in which you calculate LCA ($O(logN)$ or $O(1)$ depending on how you do it)
• » » 3 months ago, # ^ | 0 P2: answer can be 1 as well. if in the range, we already have an odd number, then we can add 1 to any even number and their AND will be > 0
• » » 3 months ago, # ^ | ← Rev. 2 → +1 Please answer this if you have any suggestions. SpoilerHow do you actually get Ideas while you are stuck. I was trying to do the same problems yesterday. The basis question, I observed that the positions doesn't matter but was not able to get to the actual answer. I ended up having ith bit set if any of the numbers have that bit set. (Lead to WA).Similarly, I was trying out the Little Pony question using DP but was not able to get to the final answer. More specifically, I was unable to re-arrange the activities, I was stuck in dp[i%2][j] = dp[(i+1)%2][j-k]+dp[(i+1)%2][j]; Realised that this would not rearrange the activities. Any suggestion would be appreciated. (Also I am practising with keeping question ratings in mind)
• » » 3 months ago, # ^ | 0 Can you elaborate D problem a bit more . It will be helpful if you can make a video tutorial on it.Thanks.
• » » 3 months ago, # ^ | 0 can the answer for the fifth problem be the bitwise OR of all the elements? if no ,than why plz explain.
» 3 months ago, # | 0 For problem 3 , we will make a hashmap which stores the frequency of each element . Iterate through hashmap and find maximum value of x = ( key * value) . Answer is simply sum of elements in the array — x;
» 3 months ago, # | +13 In problem 6, was Farach Colton LCA the intended solution to find LCA without MLE? or was segment tree solution for LCA sufficient?
• » » 3 months ago, # ^ | 0 Yes,Lca with segment tree was sufficient.
• » » » 3 months ago, # ^ | 0 Could you plz explain how to find the lca using segment tree here , I was able to get the observation required but wasn't able to find a way for lca.
• » » » » 3 months ago, # ^ | +3 just use standard min/max segtree, and change min()/max() to lca().also, have lca(leaf, non-leaf) = leaf to get lca of leafs only.
• » » » » » 3 months ago, # ^ | 0 for computing lca(a, b) use any standard way.
• » » » » » » 3 months ago, # ^ | ← Rev. 2 → 0 oh, nice method, I have never done a problem like this before. Also did you solve all the problems XD
• » » » » » » 3 months ago, # ^ | 0 in the first sample case my ans[-1,3] AC->is [-1,5] but lca(3,5) is 3 then why 5 is considered?
• » » » » » » » 3 months ago, # ^ | 0 Because only the lca of leaf nodes having the label within the given range is asked,you are finding the lca of a internal and a leaf node.
• » » » » » » » » 3 months ago, # ^ | 0 thanks a lot.
• » » 3 months ago, # ^ | 0 I think it is possible to solve it without LCA using in time and exit time.Then the problem reduces to binary search on a segment tree which is more memory efficient than normal LCA.
» 3 months ago, # | 0 can anyone please explain problem statement of problem 1 , i could'nt even understand problem clearly , i tried hard to understand but couldnt ..plz explain anyone..
» 3 months ago, # | +9 This is problem 4 — Problem
» 3 months ago, # | 0 How do you guys come to know about these contests? Can anyone participate?
• » » 3 months ago, # ^ | +3 LinkendIn and Codenation social pages.
» 3 months ago, # | -20 I solved P2, P3 and P6 is there any probability I may get a call?
• » » 3 months ago, # ^ | +5 Depends on how many people gave the test and how many peeps CN are going to interview, which might be revealed in some days.
• » » » 2 months ago, # ^ | 0 Hey, did you got any response or anything? If yes, please let me know.
• » » » » 2 months ago, # ^ | +1 No i couldn't solve that many problems to expect a response :(, also i am not sure if they will even hire from this test because of all the cheating and since they are hiring through codechef as well.But for a more sure answer you can ask ashishgup.
• » » » » 2 months ago, # ^ | +1 So I was wrong, people who solved at least 4 did got interview calls.
• » » » » » 2 months ago, # ^ | 0 Oh!, Thanks for info
» 3 months ago, # | -46 Will I get the internship interview opportunity ? I solved first 3 Questions. Ashishgup
» 3 months ago, # | 0 can anyone post the solutions of problem 4?
• » » 3 months ago, # ^ | ← Rev. 2 → -26 The comment is hidden because of too negative feedback, click here to view it
» 3 months ago, # | 0 Are the problems available for practice on any platform?
» 3 months ago, # | 0 Can someone explain 5th question solution in a more detailed way? It will be better if you can mention the required mathematical concepts properly.
» 3 months ago, # | +1 Ashishgup, could you please tell whether the testcases on which the solutions were tested at the time of the contest were only pretests or full testcases.
» 3 months ago, # | +17 After being asked to do it by multiple people, I finally made a video explanation of all of these problems on my youtube channel.
• » » 3 months ago, # ^ | 0 So will the answer for fifth problem just be the bitwise OR of all the numbers according to the second solution you explained
• » » » 3 months ago, # ^ | +2 No it won't be just bitwise OR of all numbers (I actually implemented this during contest and got WA xD) Let's say for some bit B you can have a situation of not having an element whose highest set bit is B , and provided that B is set in some elements.
• » » » 3 months ago, # ^ | +3 Consider the case 3,6. The answer is 8 but the bitwise OR is 7.(In particular, the answer will be equal to bitwise OR if all non-zero bits are linearly independent)
• » » » » 3 months ago, # ^ | 0 I see it now, thanks for replying :)
• » » » » 3 months ago, # ^ | ← Rev. 2 → 0 So do we have to apply brute force in this case? will it not get TLE
»
3 months ago, # |
0
Can someone please elaborate if there is anything wrong with the following code for 1st problem(rectangular field) as the following code gives 45 answer for the first testcase (although the answer is 65 as shown above)but still this solution is accepted on online platforms?
# include
using namespace std;
int n,m,P1[500][500],P2[500][500],FR[501][501],FC[501][501],dp[501][501];
int main(){ while(1){ scanf("%d %d",&n,&m); if(n==0) break;
for(int i=0;i<n;i++)
for(int j=0;j<m;j++)
scanf("%d",&P1[i][j]);
for(int i=0;i<n;i++)
for(int j=0;j<m;j++)
scanf("%d",&P2[i][j]);
for(int i=0;i<=m;i++) FR[i][0]=0;
for(int i=1;i<=n;i++)
for(int j=1;j<=m;j++)
FR[i][j]=FR[i][j-1]+P1[i-1][j-1];
for(int i=0;i<=n;i++) FC[0][i]=0;
for(int i=1;i<=n;i++)
for(int j=1;j<=m;j++)
FC[i][j]=FC[i-1][j]+P2[i-1][j-1];
for(int i=0;i<=m;i++) FC[0][i]=0;
for(int i=0;i<=n;i++) FR[i][0]=0;
for(int i=1;i<=m;i++)
for(int j=1;j<=n;j++)
dp[i][j]=max(dp[i-1][j]+FR[i][j],dp[i][j-1]+FC[i][j]);
printf("%d\n",dp[n][m]);
}
return 0;
}
» 3 months ago, # | 0 i solved 2 problem can i get shortlist
» 3 months ago, # | 0 can anyone provide the contest link so I can practice the problems?
|
2021-05-07 05:19:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39026206731796265, "perplexity": 1554.8089222082176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.96/warc/CC-MAIN-20210507025943-20210507055943-00161.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/prove-that-for-n-k-are-positive-integers-j-kn-tau-n-sigma-kn-if-you-are-unfamiliar-with-an-q3288110
|
Number Theory: Arithmetic Functions
Prove that for n, k are positive integers, $$J_{k}(n) * \tau (n) = \sigma_k(n)$$ .
---
If you are unfamiliar with any of the notation, I have clarified below.
J_k is the Jordan totient function.
* is the Dirichlet product
$$\tau$$ is the sum of the positive divisors of n (including n itself).
$$\sigma_k$$ is the sum of thekth powers of the positive divisors ofn
---
Further reference:
http://en.wikipedia.org/wiki/Arithmetic_function
• Review, Problem 2
Let A be the set of positive integers and S be the set of all finite
subsets of A. Prove that S is countable. Hint: a set of k positive
integers means a set of k distinct integers. Furthermore, order is not
considered: {1,2,3} = {3,2,1} = {2,3,1} etc.
Solution: Let k be fixed and Sk be the set of all subsets having exactly
k elements.
Let {n1, n2, .., nk} be a typical set of k positive integers. Since there
are k! ways of writing out this set we choose the one that has
n1 < n2 < .. <nk..
Next we set up the map from S to Nk, where Nk is the cartesian product
of k copies of the positive integers, by
{n1, n2, .., nk} -> (n1, n2, ..., nk)
This is a one-to-one map of Sk into Nk . Since Nk is countable and the
image of Sk is an infinite subset of Nk, Sk is countable.
Since S, the set of all finite subsets of A, is the union of a
countable number of countable sets it follows that S is countable.
|
2013-05-20 17:19:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8976789712905884, "perplexity": 907.8029117294416}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699138006/warc/CC-MAIN-20130516101218-00094-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Knaster%E2%80%93Kuratowski_fan
|
Knaster–Kuratowski fan
In topology, a branch of mathematics, the KnasterKuratowski fan (named after Polish mathematicians Bronisław Knaster and Kazimierz Kuratowski) is a specific connected topological space with the property that the removal of a single point makes it totally disconnected. It is also known as Cantor's leaky tent or Cantor's teepee (after Georg Cantor), depending on the presence or absence of the apex.
Let ${\displaystyle C}$ be the Cantor set, let ${\displaystyle p}$ be the point ${\displaystyle ({\tfrac {1}{2}},{\tfrac {1}{2}})\in \mathbb {R} ^{2}}$, and let ${\displaystyle L(c)}$, for ${\displaystyle c\in C}$, denote the line segment connecting ${\displaystyle (c,0)}$ to ${\displaystyle p}$. If ${\displaystyle c\in C}$ is an endpoint of an interval deleted in the Cantor set, let ${\displaystyle X_{c}=\{(x,y)\in L(c):y\in \mathbb {Q} \}}$; for all other points in ${\displaystyle C}$ let ${\displaystyle X_{c}=\{(x,y)\in L(c):y\notin \mathbb {Q} \}}$; the KnasterKuratowski fan is defined as ${\displaystyle \bigcup _{c\in C}X_{c}}$ equipped with the subspace topology inherited from the standard topology on ${\displaystyle \mathbb {R} ^{2}}$.
The fan itself is connected, but becomes totally disconnected upon the removal of ${\displaystyle p}$.
|
2021-05-09 13:30:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 14, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9471465945243835, "perplexity": 187.07377647817353}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988986.98/warc/CC-MAIN-20210509122756-20210509152756-00369.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/suppose-that-f-a-b-to-r-is-differentiable-and-c-is-an-element-of-a-b-then-show-that-there--q3398306
|
## Differentiability
Suppose that f : [a, b] to R is differentiable and c is an element of [a, b]. Then show that there exists a sequence {x_n} converges to c where x_n does not equal c for all n, such that f '(c) = lim f ' (x_n) Note: I use the definition of differentiation of: lim (x goes to c) (f(x) - f(c)) / (x - c). Note: Please make sure the right question is being answered. Too many problems with that.
• Apply Rolle's Theorem.
A) Suppose that f(a) = f(b) = 0 with a < b. Applying Rolle's Theorem to the interval [a,b] yields f'(c) = 0 for some c in (a,b).
In other words, f' has at least one real root.
B) Suppose that f(a) = f(b) = f(c) = 0 with a < b < c. Applying Rolle's Theorem to the interval [a,b] yields f'(m) = 0 for some m in (a,b). Applying Rolle's Theorem to [b,c] yields f'(n) = 0 for some n in (b,c).
In other words, f' has at least two real roots.
Now, apply Rolle's Theorem to f' on [m,n]. (This is permitted, because f is twice differentiable on R).
f''(c) = 0 for some c on (m,n), thus proving this assertion.
C) The generalization is automatic: If f is n-differentiable on R and has (n+1) real roots, then f^(n) has at least one real root.
|
2013-05-25 08:13:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9318104386329651, "perplexity": 650.4398596100601}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705749262/warc/CC-MAIN-20130516120229-00054-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://codereview.stackexchange.com/questions/224100/hackerrank-electronics-shop/224105
|
# HackerRank: Electronics Shop
Challenge from Hacker Rank -
Monica wants to buy a keyboard and a USB drive from her favorite electronics store. The store has several models of each. Monica wants to spend as much as possible for the items, given her budget.
Given the price lists for the store's keyboards and USB drives, and Monica's budget, find and print the amount of money Monica will spend. If she doesn't have enough money to both a keyboard and a USB drive, print -1 instead. She will buy only the two required items.
For example - with a budget of 10, two keyboards costing 3,1 & finally three drives available costing 5,2,8, the answer should be 9 as she is only able to purchase the keyboard for 3 and a drive for 5.
I've attempted to solve this logically and with good performance in mind. I'm not sure if I should be happy with my solution. I would appreciate any feedback.
My solution (which works) or my GitHub repo -
using System;
using System.Collections.Generic;
using System.Linq;
namespace ElectronicsShop
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine(GetMoneySpent(new int[] { 3, 1 }, new int[] { 5, 2, 8 }, 10));
Console.WriteLine(GetMoneySpent(new int[] { 5}, new int[] { 4 }, 5));
}
static int GetMoneySpent(int[] keyboards, int[] drives, int budget)
{
if (budget == 0)
return -1;
// sort the two arrays so the highest values are at the front
keyboards = SortArrayDescending(keyboards);
drives = SortArrayDescending(drives);
// delete any that are over our budget
var affordableKeyboards = GetAffordableItems(keyboards, budget);
var affordableDrives = GetAffordableItems(drives, budget);
// make a list to contain the combined totals
var combinedTotals = new List<int>();
foreach (var keyboard in keyboards)
{
foreach (var drive in drives)
{
}
}
// sort the list & delete anything over budget
combinedTotals.Sort();
combinedTotals.Reverse();
combinedTotals.RemoveAll(n => n > budget);
return combinedTotals.Count == 0 ? -1 : combinedTotals[0];
}
static int[] SortArrayDescending(int[] array)
{
Array.Sort(array);
Array.Reverse(array);
return array;
}
static int[] GetAffordableItems(int[] array, int budget)
{
return array.Where(n => n < budget).ToArray();
}
}
}
• Regarding the sample in the problem description, if Monica buys items costing $3$ and $5 ,$ wouldn't that just be $8 ?$ I think you meant that she buys the keyboard for $1$ and the USB-drive for $8 ,$ yielding the total of $9 .$
– Nat
Jul 14, 2019 at 9:33
• @Nat good spot! Yes you're right - I've fixed that now. Turns out I can't do basic Maths :) Jul 14, 2019 at 10:12
• Before going for good code you should go for a good algorithm. It seems your algorithm has $O(nk)$ runtime and memory. The problem can however be solved in $O(n\log n+k\log k)$ time with $O(n+k)$ memory Jul 14, 2019 at 21:11
• What you may and may not do after receiving answers Jul 14, 2019 at 22:37
I don't like that you modify the array you are given. This sort of thing would need to be documented, and generally creates confusion for all. You don't need arrays as inputs, so you could take IEnumerables instead without any added cost, which makes the code easier to reuse and communicates to the consumer that you aren't modifying anything. I'd consider making the parameter names a little more explicit:
public static int GetMoneySpent(IEnumerable<int> keyboardPrices, IEnumerable<int> drivePrices, int budget)
Your SortArrayDescending modifies the array given, and then proceeds to return it: this is how to really annoying people, because they will assume that because the method returns something that it won't be modifying the input.
You've clearly thought about edge cases, which is good. You might consider some parameter validation (e.g. checking the budget makes sense, the arrays should not be null):
if (budget < 0)
throw new ArgumentOutOfRangeException(nameof(budget), "Budget must be non-negative");
if (keyboardPrices == null)
throw new ArgumentNullException(nameof(keyboardPrices));
if (drivePrices == null)
throw new ArgumentNullException(nameof(drivePrices));
At the moment the program would print -1, which is sort of makes sense, but could easily be the first clue that something has gone wrong higher-up.
As implied by J_H, you should discard before the sort. The following also clones the arrays immediately so we don't modify them:
// filter to within-budget items, sort the two arrays (ascending)
keyboards = keyboards.Where(k => k < budget).ToArray();
Array.Sort(keyboards);
drives = drives.Where(d => d < budget).ToArray();
Array.Sort(drives);
J_H has already described how you can get the optimal time complexity, but you can perform the loops at the end very simply, without needing nesting or binary search or any of that.
You also don't need to record a list of all the candidates, just keep track of the current best, as Henrik Hansen has already demonstrated:
// maximum within budget price
int max = -1;
If we start by looking at the most expensive keyboard and cheapest drive and simultaneously iterate through both, we can do this bit in linear time.
int ki = keyboards.Length - 1; // drive index
int di = 0; // drive index
while (ki >= 0 && di < drives.Length)
{
int candidate = keyboards[ki] + drives[di];
if (candidate <= budget)
{
max = Math.Max(candidate, max);
di++;
}
else
{
ki--;
}
}
Suppose we are looking at keyboard ki and drive di: candidate is the sum of their costs. If this candidate cost is no more than the budget, then it is a candidate for the max. We also know that we can check for a more pairing by looking at the next most expensive drive, di + 1. If instead the candidate was out of the budget, we know we can find a cheaper candidate by looking at the next cheapest keyboard ki - 1.
Basically, we look at each keyboard in turn, and cycle through the drives until we find the most expensive one we can get away with. When we find the first drive that is too expensive, we move onto the next keyboard. We know that we don't want any drive cheaper than the last one we looked at, because that could only produce a cheaper pair, so we can continue our search starting from the same drive.
At the end, we just return max: if we didn't find any candidates below budget, it will still be -1:
return max;
Concerning dfhwze's comment about buying more than 2 items: this process is essentially searching the Pareto front, which is done trivially and efficiently for 2 items, but becomes nightmarish for any more, so I would certainly forgive you for sticking to 2 lists ;)
The above code all in one, with added inline documentation to make the purpose explicit (useful for the consumer, so that they know exactly what it is meant to do, and useful for the maintainer, so that they also know what it is meant to do):
/// <summary>
/// Returns the maximum price of any pair of a keyboard and drive that is no more than the given budget.
/// Returns -1 if no pair is within budget.
/// </summary>
/// <param name="keyboardPrices">A list of prices of keyboards.</param>
/// <param name="drivepricess">A list of prices of drives.</param>
/// <param name="budget">The maximum budget. Must be non-negative</param>
public static int GetMoneySpent2(IEnumerable<int> keyboardPrices, IEnumerable<int> drivePrices, int budget)
{
if (budget < 0)
throw new ArgumentOutOfRangeException(nameof(budget), "Budget must be non-negative");
if (keyboardPrices == null)
throw new ArgumentNullException(nameof(keyboardPrices));
if (drivePrices == null)
throw new ArgumentNullException(nameof(drivePrices));
if (budget == 0)
return -1;
// filter to within-budget items, sort the two arrays (ascending)
var keyboards = keyboardPrices.Where(k => k < budget).ToArray();
Array.Sort(keyboards);
var drives = drivePrices.Where(d => d < budget).ToArray();
Array.Sort(drives);
// maximum within budget price
int max = -1;
int ki = keyboards.Length - 1; // keyboard index
int di = 0; // drive index
while (ki >= 0 && di < drives.Length)
{
int candidate = keyboards[ki] + drives[di];
if (candidate <= budget)
{
max = Math.Max(candidate, max);
di++;
}
else
{
ki--;
}
}
return max;
}
J_H's solution (using a BinarySearch) could well be better in practise, because you only need to sort (and binary search) the shortest input: you can scan the other however you like. Implementation of that, since I too enjoy the sport:
/// <summary>
/// Returns the maximum price of any pair of a keyboard and drive that is no more than the given budget.
/// Returns -1 if no pair is within budget.
/// </summary>
/// <param name="keyboardPrices">A list of prices of keyboards.</param>
/// <param name="drivepricess">A list of prices of drives.</param>
/// <param name="budget">The maximum budget. Must be non-negative</param>
public static int GetMoneySpent3(IEnumerable<int> keyboardPrices, IEnumerable<int> drivePrices, int budget)
{
if (budget < 0)
throw new ArgumentOutOfRangeException(nameof(budget), "Budget must be non-negative");
if (keyboardPrices == null)
throw new ArgumentNullException(nameof(keyboardPrices));
if (drivePrices == null)
throw new ArgumentNullException(nameof(drivePrices));
if (budget == 0)
return -1;
// filter to within-budget items
var keyboards = keyboardPrices.Where(k => k < budget).ToArray();
var drives = drivePrices.Where(d => d < budget).ToArray();
// determine which list is shorter
int[] shortList;
if (keyboards.Length < drives.Length)
{
shortList = keyboards;
longList = drives;
}
else
{
shortList = drives;
longList = keyboards;
}
// special case of empty short-list
if (shortList.Length == 0)
return -1;
// sort shortList, to facilitate binary search
Array.Sort(shortList);
// maximum within budget price
int max = -1;
foreach (var k in longList)
{
// filter faster
if (k + shortList[0] > budget)
continue;
// find most expensive drive no more than budget - k
int i = Array.BinarySearch(shortList, budget - k);
i = i >= 0
? i // found
// if such a drive exists, consider it a candidate
if (i >= 0)
{
int candidate = k + shortList[i];
max = Math.Max(max, candidate);
}
}
return max;
}
• Now that's an answer! I'll be honest - quite a bit of that is over my head, but the exciting part of that is it's given me a lot of things to research & develop upon. I really appreciate the time you put into it, thank you. Jul 15, 2019 at 19:24
# early pruning
This is very nice:
// delete any that are over our budget
Doing it before sorting can slightly speed the sorting operation. I say slightly because "items over budget" is determined by the input, and it will be some fraction f of an input item category, so the savings is O(f * n * log n).
This is a bigger deal.
// sort the list & delete anything over budget
For k keyboards and d drives, the sort does O(k * d * log k * d) work. Discarding within this loop would be an even bigger win.
# consistent idiom
It was a little odd that you used
combinedTotals.RemoveAll(n => n > budget);
and
array.Where(n => n < budget).ToArray();
to accomplish the same thing. There's no speed difference but consider phrasing the same thing in the same way.
# reversing
If you pass into Sort something that implements the IComparer interface, you can change the comparison order and thus skip the Reverse step entirely.
# arithmetic
Arbitrarily choose one of the categories as the driving category, perhaps keyboard. Sort the drive prices, while leaving the keyboards in arbitrary order. Note the min drive price, and use that along with budget for immediately pruning infeasible keyboards.
Loop over all surviving keyboards. Target price is budget - kb_price. Do binary search over drives for the target, finding largest feasible drive, and use that to update "best combo so far". No need to sort them, you need only retain the "best" one.
• The LINQ Where should be array.Where(n => n <= budget). But your key point is made. Jul 14, 2019 at 16:10
• This answer has given me a lot to reflect upon. I mentioned further down to someone who quoted you, some of the things in terms of binary search & IComparer (the latter I've used very briefly) are things I've been researching today. I'm honestly looking forward to implementing what I've learned into future projects. Thank you Jul 15, 2019 at 19:27
### General Guidelines
• You have coded everything in a single class Program. Take advantage of the fact C# is an object oriented language. Create at least one custom class that defines this problem.
• Your current implementation is very strict and specific to 2 types of items. What if Monica needs to buy from n item types? It is up to you to decide the scope of your implementation, so what you have done is not wrong, but it is something to consider when building reusable code blocks. We can argue reusability of this exercise though.
• When providing a method for consumers to use, make sure to include a handful of unit tests. This is an excellent way to get to know the outcome given any input. In this case, you are providing us GetMoneySpent, but we have to write our own unit tests to verify correctness of this method.
### Review
• You are using Array for a problem where IEnumerable could have also been used. Prefer the latter because you don't want to depend on fixed sized collections. You are converting ToArray(), this overhead should not be required when working with IEnumerable.
• In terms of your first point, you're 100% correct. I tend to fall into a bad habbit of writing procedural code when I'm doing these types of challenges. I really shouldn't, kind of defeats the purpose. I get what you mean for point 2, I guess I kinda limited it because it was for a one time use in this specific challenge. It's a good idea to look at reusability for exercises though. Sorry about the lack of unit tests - I need to spend some more time improving my understanding of how they work, especially if I'm going to be posting stuff on code review. Jul 13, 2019 at 20:21
• Well, it is tempting and easy to do it that way :) Jul 13, 2019 at 20:23
The good thing first:
You divide and conquer the problem by creating some reasonable (and well named) methods. You could have gone all in by making methods for combining and final selection as well:
...
var combinedTotals = Combine(affordableKeyboards, affordableDrives);
}
But as shown below, dividing the code into such small methods can sometimes obscure more useful approaches.
It must be a mind slip that you find the affordable keyboards and drives, but you forget about them and iterate over the full arrays of keyboards and drives:
// delete any that are over our budget
var affordableKeyboards = GetAffordableItems(keyboards, budget);
var affordableDrives = GetAffordableItems(drives, budget);
// make a list to contain the combined totals
var combinedTotals = new List<int>();
foreach (var keyboard in keyboards)
{
foreach (var drive in drives)
{
}
}
I suppose that the loops should be:
foreach (var keyboard in affordableKeyboards)
{
foreach (var drive in affordableDrives)
{
}
}
Some optimizations:
return array.Where(n => n < budget).ToArray();
Where has to iterate through the entire array, even if it is sorted. A better approach would have been to sort ascending first, then take untill n > budget, and then reverse:
array.OrderBy(n => n).TakeWhile(n => n <= budget).Reverse();
Making the almost same considerations with the combined totals:
int result = combinedTotals.OrderByDescending(n => n).FirstOrDefault(n => n <= budget);
Your entire method could be refined to this:
static int GetMoneySpent(int[] keyboards, int[] drives, int budget)
{
if (keyboards == null || keyboards.Length == 0 || drives == null || drives.Length == 0 || budget <= 0)
return -1;
keyboards = keyboards.OrderBy(n => n).TakeWhile(n => n <= budget).Reverse().ToArray();
drives = drives.OrderBy(n => n).TakeWhile(n => n <= budget).Reverse().ToArray();
// make a list to contain the combined totals
var combinedTotals = new List<int>();
foreach (var keyboard in keyboards)
{
foreach (var drive in drives)
{
}
}
int result = combinedTotals.OrderByDescending(n => n).FirstOrDefault(n => n <= budget);
return result == 0 ? -1 : result;
}
Just for the sport I made the below solution, that sorts the data sets in ascending order and iterate backwards to avoid reversing the data:
int GetMoneySpent(int[] keyboards, int[] drives, int budget)
{
if (keyboards == null || keyboards.Length == 0 || drives == null || drives.Length == 0 || budget <= 0)
return -1;
int result = -1;
Array.Sort(keyboards);
Array.Sort(drives);
int istart = keyboards.Length - 1;
while (istart >= 0 && keyboards[istart] > budget) istart--;
int jstart = drives.Length - 1;
while (jstart >= 0 && drives[jstart] > budget) jstart--;
for (int i = istart; i >= 0; i--)
{
int keyboard = keyboards[i];
for (int j = jstart; j >= 0; j--)
{
int drive = drives[j];
int price = keyboard + drive;
if (price < result)
break;
if (price > result && price <= budget)
{
result = price;
}
}
}
return result;
}
• Well that's embarassing. Definitely a slip of the brain in terms of forgetting to loop through the affordableKeyboards & affordableDrives arrays! Appreciate the feedback though, definite +1 from me. Jul 13, 2019 at 20:20
• @Webbarr: I think, most of us know the feeling. You're not the first and won't be the last :-)
– user73941
Jul 14, 2019 at 6:25
As others have pointed out, the code should be object oriented. It’s OK to start with procedural code as you have, but if you do, you should get in the habit of writing a quick test in your main entry point. Once you’ve written the first test, it can help you start seeing objects more clearly (and drive you to further unit tests.)
For example: an ElectronicsShop would only have a catalog of items, but it has nothing to do with your spending habits. In your case, it would serve only as a data store for items.
I’d really expect to see a Shopper class. The shopper would have both Money and a BuyingStrategy. The money could be a simple amount, but the strategy would be what you’re really working on. Finally, the strategy would expose a method Shop(Store, Items[], Budget).
Inside Shop(), you’d retrieve the items from the store that meet your criteria: they’d be only keyboards and drives, and they’d be within budget. The shopper would add only eligible items to their comparison lists. Then comes time to evaluate them , so you’d add an Evaluate(Budget, ItemLists[]) method that would be called from within Shop(). Inside Evaluate() you can order results by price. But what happens when you get multiple answers that meet the same amount? A budget of 10 would be met by any of {9,1}, {1,9}, {6,4}, or even {5,5}! Which is more important, expensive drives or expensive keyboards? (In the real world, you’d go back to your product owner at this point and ask them about their priorities: “do you prefer the most expensive drive, the most expensive keyboard, or should it try to split the difference somehow?”) This might lead you to conclude that Evaluating is really the strategy, not Buying.
I could go on, but I think I’ve made my point. Notice how once I’ve defined just a few relevant objects that the process I’m describing begins to mirror the real world act of shopping? And once you start down this path of viewing coding problems as models of real objects interacting in the real world, you’ll see the ease of defining objects and writing tests for them, and using them to recognize shortcomings in the specs and completing your specifications. (Pro tip: specifications are always incomplete.)
Performance isn’t always the best starting point. Object oriented programming is less about finding the most mathematically efficient solution; it’s about building understandable components and proving that they meet your clients’ needs. Getting to the right answer is much more important than quickly getting to some answer that you can’t prove is correct. If performance is an issue after you’ve solved the problem, then start optimizing; but don’t start there.
• Thank you! I really appreciate your answer. It's given me quite a lot to think about (especially going back over this particular challenge) but also in my job today. I've definitely fallen into a nasty habbit of writing procedural code & releasing it, it's something I need to stop. Your answer just makes sense I guess. Again, thank you. Jul 15, 2019 at 19:21
I'd like to advocate for a more functional programming style. We all know object orientation can provide clarity over procedural programming, but if you can replace for loops and if statements with Select and Where? I'd say that's even better.
Here's how:
public static int GetMoneySpent(int budget, int[] keyboards, int[] drives)
{
var affordableCombinations = keyboards
.SelectMany(keyboard => drives
.Select(drive => keyboard + drive))
.Where(cost => cost <= budget);
return affordableCombinations.Any()
? affordableCombinations.Max()
: -1;
}
Is this as efficient as your solution? Not in terms of CPU cycles, no. In terms of what the person reading the code must do, in order to understand the desired behavior? I'll argue yes.
If you believe you'll see large performance gains by filtering out keyboards and drives that exceed the budget on their own, before adding any prices together, there's a concise way to do that with LINQ also:
public static int GetMoneySpent(int budget, int[] keyboards, int[] drives)
{
Func<int, bool> affordable = cost => cost < budget;
var affordableCombinations = keyboards
.Where(affordable)
.SelectMany(keyboard => drives
.Where(affordable)
.Select(drive => keyboard + drive))
.Where(affordable);
return affordableCombinations.Any()
? affordableCombinations.Max()
: -1;
}
Is this as efficient as a solution involving manual iteration can be? Again, no. I think the approach in Henrik's answer is about the best you'll do. But this is easily readable, and probably efficient enough.
|
2023-03-22 08:58:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2329399138689041, "perplexity": 2987.803737687387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00024.warc.gz"}
|
https://dsp.stackexchange.com/questions/2140/choosing-drive-signals-for-system-identification
|
# Choosing Drive Signals for System Identification?
(I'm just learning a little about system identification so apologies in advance if this question is badly worded)
How do you go about choosing drive signals for system identification? I've seen PRBS signals used but it seems like that's going to work well for frequencies around the chip rate but not really low frequencies; I've also seen frequency sweeps.
If I have a SISO system that I know is close to a 2nd order linear system with poles in a certain range, and I can drive it with an arbitrary signal up to some amplitude A for up to some time length T, how do I pick a signal that would give me the best responses for determining the accuracy of the transfer function?
I tried googling for "system identification drive signals" but I don't see anything that pertains to my question.
edit: one particular type of SISO system I've dealt with is an (input=power dissipation, output=temperature) system for power semiconductor thermal behavior, and it seems very hard to model because there's usually a dominant pole at very low frequencies (<1Hz) and the next one might be 100 times higher, so any high-frequency drive signals just get very heavily attenuated.
For linear systems, you can completely characterize the transfer function using its frequency response, so a frequency sweep would be one possible choice. However, you would need to ensure that at each test frequency, you allow time for the system's transient response to die out before measuring its steady-state amplitude/phase response.
If by system identification you mean determining the impulse response of a linearized model of your actual system, then pseudorandom binary sequence (PRBS) signals are a good way to go. With chipping rate $T^{-1}$ and $N$ chips in each period of the PRBS, the PRBS signal has period $NT$ seconds, and it is important to choose $N$ and $T$ so that period of the PRBS signal is quite a bit longer than what you believe is the duration of the impulse response. Then, the periodic (or circular or cyclic) cross-correlation function of the periodic input signal and the periodic output signal computed over a full period is exactly equal to the response of the linearized model to the periodic autocorrelation function of the PRBS signal which is essentially a periodic "impulse train" with one "impulse" every $NT$ seconds. Of course, it is not a true impulse, but if the PRBS signal has levels $\pm A$ where $A$ is necessarily chosen to be small so as to not drive the system into nonlinearity, the "impulse" has peak value $ANT$ (and floor or off-peak value $-AT$). So you effectively have a "processing gain" of $N$. If the "impulse response" dies out before the next "impulse", that cross-correlation is essentially the impulse response or something close enough to it for gummint purposes.
Once you have computed the impulse response, you can get the transfer function from the impulse response.
More bells and whistles: if you complement alternate chips of the PRBS to get a sequence of period $2N$ chips, the autocorrelation function is again a periodic "impulse train" of twice the period, but the impulses still occur every $NT$ seconds with alternate signs. This allows the testing of the system with both positive and negative impulses since the actual nonlinear system being modeled might not be perfectly linear around the operating point, and the gain for positive signals might be slightly different from negative signals.
• So it's pretty clear from your answer that you want to make N large. But how do you pick T? I mean a 1MHz chip rate on a system with poles in the 1-100Hz rate seems like a bad idea. Apr 20, 2012 at 21:49
• What would the response of your system be to a 1 MHz pulse train? The "impulses" in the PRBS idea are $2T$ wide at the base and $ANT$ tall, and so $T$ should be small enough that this looks reasonably enough like an impulse to the relatively slower system, while $N$ should large enough to get a tall spike. Apr 20, 2012 at 22:15
• response of 1MHz pulses would be so far down in the noise floor I'd never be able to sense them. Apr 20, 2012 at 22:16
• @JasonS It is not the response of the system to the input that is directly of concern but the cross-correlation of the input and output which has to be computed over long period of time. So even if the output signal is buried in the mud as you call it, it does not matter: that long period of integration/summation gets all the signal components to add coherently and the noise to add incoherently. Think of spread-spectrum where the signal is buried in the noise (useful for covert communication) and the processing gain pulls the signal out (continued) Apr 21, 2012 at 1:31
• (continued) or the reason why one averages measurements of a parameter: the sample mean has variance lot smaller than an individual measurement/sample because the signals add while the noise variances add and so the standard deviation of the noise goes down by a factor of $\sqrt{n}$. Same effect is helping here. Apr 21, 2012 at 1:35
The below thoughts are to be regarded as very unreliable: my knowledge of control theory is meagre at best!
Well, if the system is insensitive to your test input around 100Hz will it be sensitive to control signals of that frequency when in normal operation? If not - model it as a first order system.
how do I pick a signal that would give me the best responses for determining the accuracy of the transfer function?
They use impulses, steps, sines - I have no idea which of the how accurate is, though I guess that depends on the bottleneck in your experiment.
For example, with the slow chip heating, you can measure time with high relative precision, but you are limited by your ADC when measuring magnitudes. I would pass in a high amplitude 100Hz sin in for less than a second (the system's dominant time constant) and determine a first order model gain (time constant is already defined as 1/100 s). If the gain is small, I would neglect this pole, if it is of significant size for the problem at hand, look for a second order model (as you are doing in this question ;P)
If you have two sequences of input and output samples and you use least squares you need your input to persistently explore the input space, i.e. you have to invert matrix $H^T H$ being $$H = \begin{pmatrix} y(n) & y(n-1) & \dots & y(1) & u(n) & ... & u(1)\\ y(n+1)&y(n) & \dots &y(2) & u(n+1) & \dots & u(2)\\ \vdots & \vdots & & \vdots &\vdots &&\vdots\\ y(L-1) & y(L-2)& \dots &y(L-n-1) & u(L-1) &\dots & u(L-n-1) \end{pmatrix}$$
so a good sequence will be a sequence of uncorrelated samples, for example a white noise sequence
|
2022-12-03 13:45:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7465640306472778, "perplexity": 540.0041620198092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710931.81/warc/CC-MAIN-20221203111902-20221203141902-00670.warc.gz"}
|
http://semantic-portal.net/concept:1652
|
# Inline Styles
//the example below shows how to change the color and the left margin
//of a <h1> element
<h1 style="color:blue;margin-left:30px;">This is a heading</h1>
• May be used to apply a unique style for a single element.
• To use inline styles, add the style attribute to the relevant element.
• The style attribute can contain any CSS property.
• An inline style loses many of the advantages of a style sheet (by mixing content with presentation). Use this method sparingly.
Inline Styles
## Inline Styles — Structure map
Clickable & Draggable!
|
2021-09-18 20:04:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34509479999542236, "perplexity": 7980.284083739389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056572.96/warc/CC-MAIN-20210918184640-20210918214640-00050.warc.gz"}
|
http://www.r-bloggers.com/clarifying-difference-between-ratio-and-interval-scale-of-measurement/
|
Clarifying difference between Ratio and Interval Scale of Measurement
August 5, 2014
By
(This article was first published on MATHEMATICS IN MEDICINE, and kindly contributed to R-bloggers)
Clarifying difference between Ratio and Interval Scale of Measurement
Clarifying difference between Ratio and Interval Scale of Measurement
Introduction
Recently while preparing lecture on scales of measurements and types of statistical data, I came across two scales of measurement when numbers are used to denote a quantitative variable. I took some time to clarify the difference between “Interval and "Ratio” scales of measurements. I am writing down what I understand of the above mentioned scales.
Process of measuring a variable
First step in variable measurement is to understand the concept we want to measure, i.e., we would like to define the variable on a conceptual level. Then we need to make an operational definition of the variable, which includes the following steps:
1. Setting up of a domain of all the possible values the variable can assume.
2. Understanding the meaning of different values the variable can assume.
• Different values can just mean different classes (categories) - Nominal scale
• Different values can mean different classes (categories) with some ordering (direction of difference) between the classes - Ordinal scale
• Different values can mean different classes (categories) with ordering and specified distance between them (direction and magnitude/distance of difference) - Interval and Ratio scale
3. Checking if a real origin (“0”) exists for the variable in the particular scale. Origin (“0”) should mean absolute absence of the variable.
4. Designing a device which will measure the variable.
5. Validating the measurement from the device.
Prerequisites for Ratio Scale
There are two prerequisites for a measurement scale to be a Ratio Scale:
1. Presence of “Real Origin”.
2. Scale is uniformly spaced across its full domain.
What happens when the above prerequisites are met?
Let us assume that we have made numerical observations $$A_{ratio}$$ and $$B_{ratio}$$ for a variable in ratio scale and that $$B_{ratio} > A_{ratio}$$. There are two valid ways to denote the difference between A and B:
1. Arithmetic difference between $$A_{ratio}$$ and $$B_{ratio}$$: It is denoted by $$B_{ratio} - A_{ratio}$$. It is a valid measure of difference because of the fact that the scale is uniformly spaced across the domain.
2. Ratio difference between $$A_{ratio}$$ and $$B_{ratio}$$: It is denoted by $$B_{ratio}/A_{ratio}$$. It indicates that $$B_{ratio}$$ is $$B_{ratio}/A_{ratio}$$ times larger than $$A_{ratio}$$. We say this as a valid measure of difference because the origin is an absolute one and is same for both observations. Note that there is no unit as the result is a ratio. It is also equivalent to arithmetic difference of log transformation of observations, $$log(B_{ratio}) - log(A_{ratio})$$.
3. Location transformation: If we shift the observations by $$x$$ units, we get $$Ax_{ratio} = A_{ratio} + x$$ and $$Bx_{ratio} = B_{ratio} + x$$. Arithmetic difference between the two transformed observations, $$Bx_{ratio} - Ax_{ratio} = B_{ratio} - A_{ratio}$$, which is the same as original observations.
4. Scale transformation: If we multiply each of the observations by $$x$$ units, we get $$Ax_{ratio} = A_{ratio} \cdot x$$ and $$Bx_{ratio} = B_{ratio} \cdot x$$. Ratio difference between the two transformed observations, $$Bx_{ratio}/Ax_{ratio} = B_{ratio}/A_{ratio}$$, which is the same as original observations.
So, for ratio scale, both arithmetic and ratio difference are valid measures of difference between observations and the difference remain same after both location and scale transformations.
General transformation of measuring scale
Any transformation ($$X_{trans}$$) of the original ratio scale, say $$X_{ratio}$$ can be depicted as follows
$X_{ratio} = f(X_{trans},S(X_{trans}),L(X_{trans}))$
where, $$S(X_{trans})$$ denotes scale transformation parameter as a function wrt location in transformed scale and $$L(X_{trans})$$ denotes location transformation parameter as a function wrt location in transformed scale.
If we assume constant $$S$$ and $$L$$ wrt location in transformed scale, one of the simplest scale transformation will be:
$X_{ratio} = (X_{trans} + L) \cdot S$
where, $$S \neq 0$$
and interval scale of measurement ($$X_{int}$$) will be the one with $$L \neq 0$$ in addition to the above constraints.
What happens in Interval Scale?
In interval scale, the zero doesnot mean absolute nothingness, but it is an arbitrarily chosen one and corresponds to a distance of $$L$$ from the real origin in ratio scale.
We continue our example from the above section:
Let us say that we make two observations in interval scale, $$A_{int}$$ and $$B_{int}$$, and want to assess difference between both the observations as done earlier.
Observation $$A_{int}$$ will be mapped as $$(A_{int} + L) \cdot S$$ and observation $$B_{int}$$ will be mapped as $$(B_{int} + L) \cdot S$$ in ratio scale. We will have to use values in ratio scale for comparision, as it has got “real origin”.
1. Arithmetic difference between $$A_{int}$$ and $$B_{int}$$: It shows that the arithmetic difference measured in interval scale is linearly related to the difference measured in ratio scale and that the arithmetic difference in ratio scale is independent of absolute values of $$A_{int}$$ and $$B_{int}$$. Moreover, interval scale is uniformly spaced across the full domain. Because of the above reasons, arithmetic difference measured in interval scale is a valid way of representing difference between observations $$A_{int}$$ and $$B_{int}$$. $[B_{int} - A_{int}]_{int\ scale} = [(B_{int} + L) \cdot S - (A_{int} + L) \cdot S]_{ratio\ scale} = [(B_{int} - A_{int}) \cdot S]_{ratio\ scale}$
2. Ratio difference between $$A_{int}$$ and $$B_{int}$$: Unlike the arithmetic difference, ratio difference in interval scale is dependent on the absolute values of $$A_{int}$$ and $$B_{int}$$, with ratio approaching $$B_{int}/A_{int}$$ with $$B_{int}, A_{int} >> L$$. So, ratio difference is not a valid measure of difference between two observations in interval scale. $[B_{int}/A_{int}]_{int\ scale} = [(B_{int} + L) \cdot S/(A_{int} + L) \cdot S]_{ratio\ scale} = [(B_{int} + L)/(A_{int} + L)]_{ratio\ scale}$
3. Location transformation: If we shift the observations by x units on interval scale, we get $$Ax_{int} = ((A_{int} + x) + L) \cdot S$$ and $$Bx_{int} = ((B_{int} + x) + L) \times S$$. Arithmetic difference between the two transformed observations, $$[Bx_{int} - Ax_{int}]_{int\ scale} = [(B_{int} - A_{int}) \cdot S]_{ratio\ scale} = [B_{int} - A_{int}]_{int\ scale}$$, remains the same as original observations on interval scale. So, arithmetic difference remains same with location transformation.
4. Scale transformation: If we multiply each of the observations by x units on original scale, we get $$Ax_{int} = (A_{int} \cdot x + L) \cdot S$$ and $$Bx_{int} = (B_{int} \cdot x + L) \cdot S$$. Ratio difference between the two transformaed observations, $$[Bx_{int}/Ax_{int}]_{int\ scale} = [(B_{int} \cdot x + L)/(A_{int} \cdot x + L)]_{ratio\ scale} \neq [B_{int}/A_{int}]_{int\ scale}$$. With scale transformation, the ratio difference becomes different from the the ratio difference of the original observations in interval scale.
So, for interval scale, only arithmetic difference is a valid measure of difference between observations.
Conclusion
The aim of this post is to express what I understand of the interval and ratio scale of measurements. Comments, suggestions and criticisms are welcome.
Bye.
|
2015-04-27 12:13:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7140411138534546, "perplexity": 573.4885471372967}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246658116.80/warc/CC-MAIN-20150417045738-00081-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/1448758/use-any-dice-to-calculate-the-outcome-of-any-probability
|
# Use any dice to calculate the outcome of any probability
While looking at this question, I had a gut feeling that you can use any fair, single die with any number of sides to calculate the outcome of any probability.
Assuming we express the probability as a range of numbers, this is easy for coins and d10s: coins can be flipped to generate a binary number, while d10s can be rolled to produce each digit of the outcome. If the result falls outside the range, ignore it and reroll.
This is really just probability with respect to base. The coin generates a result in base 2, while the d10 generates results in base 10. Therefore, a die with a number of sides n can be used to produce a result in base n.
Now consider that we have an arbitrary number of dice, each with an arbitrary number of sides. We could generate an outcome by expressing the result of each die roll in base 2 and tacking them together* (see example). This would however result in a lot of rerolling, and is otherwise time-consuming when you factor in converting to base 2 and so forth.
So here's what amounts to a somewhat silly puzzle question:
• For an arbitrary set of dice, each with an arbitrary number of sides, is there a general method for determining the outcome of any probability while minimizing the number of rerolls.
• Is there a method which is easy to remember and could reasonably be used during a gaming session (i.e. takes less than, say, 30 seconds to determine which dice to roll and calculate the result).
Example of presented method: Outcome between 0 and 993, with (hypothetical) 1d7 and 1d21.
• 993 in base 2 is 1111100001, meaning we need 10 binary digits to express the full range of possible outcomes.
• 1d21 can provide 4 binary digits (0 through 15 in base 2), and 1d7 provides 2 digits (0 through 3).
Solution: Roll 1d21 twice and 1d7 once. If the d21 lands higher than 16 or the d7 higher than 4, reroll. Subtract 1 from each roll so the range starts at 0. Convert to base 2. Append results to create one 10-digit binary value. If result > 993, toss it out and reroll.
There is a ~24% chance ($\frac{21-16}{21}$) of needing to reroll the d21 each time, a ~43% chance ($\frac{7-4}{7}$) for the d7, and a ~3% chance ($\frac{1024-994}{1024}$) of needing to reroll the final value.
*Ignoring rolls that are higher than the maximum rollable power of 2. I.e. if you had a d12, you would ignore rolls higher than 8 (23). This ensures an equal probability for each digit in the result.
Edit:
In light of Thomas Andrews' answer, multiple dice can be used to generate a higher $X$ value than one die alone. For a set of dice with number of sides $\{k1,k2,...,kn\}$ and rolls $\{r1,r2,...,rn\}$, the maximum $X$ value will be $k_1k_2k_3...k_n$ and a given roll value will be: $$r_1 + (r_2 - 1)k_1 + (r_3 - 1)k_1k_2 + \cdots + (r_n - 1)k_1k_2...k_{n-1}$$
Yes, given a single random number $X$ which generates elements $\{1,\dots,k\}$, with the $0<P(X=1)=p<1$, and any real number $q\in[0,1]$ you can use repeated rolls of the die to simulate an event with probability $q$.
Basically, we're going to pick a real number in $[0,1]$ by successively reducing our range. To pick real exactly, we'd have to roll the die infinitely many times, but luckily, with probability $1$, in a finite amount of time, the current interval will no longer contain $q$, and we can stop, because then we know if $q$ is less than or greater than the real.
We start with the entire interval $[a_0,b_0]=[0,1]$.
At step $n$, we have interval $[a_n,b_n]$. If $b_n<q$, then we halt the process and return "success." If $a_n>q$ then we halt with failure.
Otherwise, we roll the die.
If it comes up $1$, we take the next interval as $[a_n,(1-p)a_n+pb_n]$. If it comes up something other than $1$, we take the next interval to be $[(1-p)a_n+pb_n,b_n]$.
The interval at step $n$ is at most length $\left(\max(p,1-p)\right)^n$. There is no guarantee that the process with stop in a known number of die rolls, but it will stop with probability $1$ in a finite number of rolls.
Edit: Ian asked for the expected number of rolls to know where you are.
This is rather complex, and depends on $q$ and $p$ as follows. Given an infinite sequence $\{a_i\}_1^{\infty}$ each in $\{0,1\}$, we can define $R_n=\sum_{i=1}^n a_i$ and $L_n=n-R_n$. We treat the $a_i$ as "left-right" choices in a binary tree.
Then for almost all[*] $q\in[0,1]$ there exist exactly one sequence $\{a_i\}$ such that:
$$q=\sum a_ip^{L_n}(1-p)^{R_n}$$
This has the advantage that if $\{a_i\}$ corresponds to $q_1$ and $\{b_i\}$ corresponds to $q_2$, then if $q_1<q_2$, we have that for some $n$, $a_i=b_i$ for $i<n$ and $a_n<b_n$. That is, ordering is lexicographical ordering.
The expected number of rolls is going to depend on the $a_i$ corresponding to $q$.
That said, let $e_p(q)$ be the expected number of rolls. We can define the expected number recursively as follows:
$$e_p(q)=\begin{cases} 1 + (1-p)e_p\left(\frac{q-p}{1-p}\right)&p<q\\ 1+pe_p\left(\frac{q}{p}\right)&p>q \end{cases}$$
But whether $p<q$ is determined by $a_1$. Assuming $q\neq p$, if $a_1=0$ then $q<p$ while if $a_1=1$, then $q>p$ almost certainly.
Finally, if $a_1=0$, then $\frac{q}{p}$ corresponds to the sequence $\{a_2,a_3,\dots\}$ and if $a_1=1$ then $\frac{q-p}{1-p}$ corresponds to $\{a_2,a_3,\dots\}$.
So we really see this expected value is related to the sequence $\{a_i\}$, but it is a mess to compute it.
The value is:
$$\sum_{i=0}^\infty p^{L_i}(1-p)^{R_i}$$
which we can also see because $p^{L_i}(1-p)^{R_i}$ is the odds that $q$ is still in our interval after trial $i$.
This is no more than $$\frac{1}{1-\max(p,1-p)},$$ for any $q$.
If you want more efficient use of the die (I'm using it as a coin toss here) and it has $N$ sides with equal probabilities, then the expected number of rolls is:
$$\sum_{k=0}^\infty \frac{1}{N^k}=\frac{N}{N-1}$$ and is independent of $q$. (That's true if $p=\frac{1}{2}$ in the original approach.
• It might be interesting to compute the expected value of the number of rolls.
– Ian
Sep 24 '15 at 18:20
• I've computed this for the very simple case. You can obviously do things differently if we take into account all the dice probability values - I'm essentially taking a coin flip with probability $p$. Sep 24 '15 at 19:14
• So the expected number of "rolls" in the unbiased case is not even 2? That's surprising...
– Ian
Sep 24 '15 at 21:39
• @Ian Not really. The first roll restricts the interval to one of $N$ each of length $1/N$, only one of which contains $q$, so $(N-1)/N$ of the time, it only take one roll. Sep 24 '15 at 22:01
• I wasn't looking for efficiency, just proof of concept. But at the end, I do note: "If you want more efficient use of the die (I'm using it as a coin toss here) and it has N sides with equal probabilities...) " The point was to avoid dealing with all the different die probabilities, and just assume there is one result with non-zero and non-one probability. @txteclipse Sep 25 '15 at 16:37
You may be interested in the paper by Alan Hajek: See <'fitelson.org/coherence/hajek_puzzle.pdf'>, from page 27 on-wards.
and also by the same author Alan Hajek <'A chancy magic trick', in chance and temporal asymmetry, Oxford University Press 2014>;
Where the author how to do with a singular fair coin.
You may want to read the paper by the famous John Von Neumann
Von Neumann may have been one of the first to develop this kind of idea, that an even arbitrary two outcome random event (such as a fair coin) can generate any probability, value within any desired degree of accuracy .
|
2021-09-24 11:24:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8408169746398926, "perplexity": 277.4292858235767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057524.58/warc/CC-MAIN-20210924110455-20210924140455-00420.warc.gz"}
|
https://akantu.gitlab.io/akantu/manual/appendix.html
|
Shape Functions
Schematic overview of all the element types defined in Akantu is described in Section Elements. In this appendix, more detailed information (shape function, location of Gaussian quadrature points, and so on) of each of these types is listed. For each element type, the coordinates of the nodes are given in the iso-parametric frame of reference, together with the shape functions (and their derivatives) on these respective nodes. Also all the Gaussian quadrature points within each element are assigned (together with the weight that is applied on these points). The graphical representations of all the element types can be found in Section Elements.
Iso-parametric Elements
1D-Shape Functions
Segment 2
Table 8 Elements properties
Node ($$i$$)
Coord. ($$\xi$$)
Shape function ($$N_i$$)
Derivative ($$\frac{\partial N_i}{\partial \xi}$$)
1
-1
$$\frac{1}{2}\left(1-\xi\right)$$
$$-\frac{1}{2}$$
2
1
$$\frac{1}{2}\left(1+\xi\right)$$
$$\frac{1}{2}$$
Coord. ($$\xi$$) Weight 0 2
Segment 3
Table 10 Elements properties
Node ($$i$$)
Coord. ($$\xi$$)
Shape function ($$N_i$$)
Derivative ($$\frac{\partial N_i}{\partial \xi}$$)
1
-1
$$\frac{1}{2}\xi\left(\xi-1\right)$$
$$\xi-\frac{1}{2}$$
2
1
$$\frac{1}{2}\xi\left(\xi+1\right)$$
$$\xi+\frac{1}{2}$$
3
0
$$1-\xi^{2}$$
$$-2\xi$$
Coord. ($$\xi$$) Weight $$-1/\sqrt{3}$$ 1 $$1/\sqrt{3}$$ 1
2D-Shape Functions
Triangle 3
Table 12 Elements properties
Node ($$i$$)
Coord. ($$\xi$$, $$\eta$$)
Shape function ($$N_i$$)
Derivative ($$\frac{\partial N_i}{\partial \xi}$$, $$\frac{\partial N_i}{\partial \eta}$$)
1
($$0$$, $$0$$)
$$1-\xi-\eta$$
($$-1$$, $$-1$$)
2
($$1$$, $$0$$)
$$\xi$$
($$1$$, $$0$$)
3
($$0$$, $$1$$)
$$\eta$$
($$0$$, $$1$$)
Coord. ($$\xi$$, $$\eta$$) Weight ($$\frac{1}{3}$$, $$\frac{1}{3}$$) $$\frac{1}{2}$$
Triangle 6
Table 14 Elements properties
Node ($$i$$)
Coord. ($$\xi$$, $$\eta$$)
Shape function ($$N_i$$)
Derivative ($$\frac{\partial N_i}{\partial \xi}$$, $$\frac{\partial N_i}{\partial \eta}$$)
1
($$0$$, $$0$$)
$$-\left(1-\xi-\eta\right)\left(1-2\left(1-\xi-\eta\right)\right)$$
($$1-4\left(1-\xi-\eta\right)$$, $$1-4\left(1-\xi-\eta\right)$$)
2
($$1$$, $$0$$)
$$-\xi\left(1-2\xi\right)$$
($$4\xi-1$$, $$0$$)
3
($$0$$, $$1$$)
$$-\eta\left(1-2\eta\right)$$
($$0$$, $$4\eta-1$$)
4
($$\frac{1}{2}$$, $$0$$)
$$4\xi\left(1-\xi-\eta\right)$$
($$4\left(1-2\xi-\eta\right)$$, $$-4\xi$$)
5
($$\frac{1}{2}$$, $$\frac{1}{2}$$)
$$4\xi\eta$$
($$4\eta$$, $$4\xi$$)
6
($$0$$, $$\frac{1}{2}$$)
$$4\eta\left(1-\xi-\eta\right)$$
($$-4\eta$$, $$4\left(1-\xi-2\eta\right)$$)
Coord. ($$\xi$$, $$\eta$$) Weight ($$\frac{1}{6}$$, $$\frac{1}{6}$$) $$\frac{1}{6}$$ ($$\frac{2}{3}$$, $$\frac{1}{6}$$) $$\frac{1}{6}$$ ($$\frac{1}{6}$$, $$\frac{2}{3}$$) $$\frac{1}{6}$$
Table 16 Elements properties
Node ($$i$$)
Coord. ($$\xi$$, $$\eta$$)
Shape function ($$N_i$$)
Derivative ($$\frac{\partial N_i}{\partial \xi}$$, $$\frac{\partial N_i}{\partial \eta}$$)
1
($$-1$$, $$-1$$)
$$\frac{1}{4}\left(1-\xi\right)\left(1-\eta\right)$$
($$-\frac{1}{4}\left(1-\eta\right)$$, $$-\frac{1}{4}\left(1-\xi\right)$$)
2
($$1$$, $$-1$$)
$$\frac{1}{4}\left(1+\xi\right)\left(1-\eta\right)$$
($$\frac{1}{4}\left(1-\eta\right)$$, $$-\frac{1}{4}\left(1+\xi\right)$$)
3
($$1$$, $$1$$)
$$\frac{1}{4}\left(1+\xi\right)\left(1+\eta\right)$$
($$\frac{1}{4}\left(1+\eta\right)$$, $$\frac{1}{4}\left(1+\xi\right)$$)
4
($$-1$$, $$1$$)
$$\frac{1}{4}\left(1-\xi\right)\left(1+\eta\right)$$
($$-\frac{1}{4}\left(1+\eta\right)$$, $$\frac{1}{4}\left(1-\xi\right)$$)
Coord. ($$\xi$$, $$\eta$$) Weight ($$-\frac{1}{\sqrt{3}}$$, $$-\frac{1}{\sqrt{3}}$$) 1 ($$\frac{1}{\sqrt{3}}$$, $$-\frac{1}{\sqrt{3}}$$) 1 ($$\frac{1}{\sqrt{3}}$$, $$\frac{1}{\sqrt{3}}$$) 1 ($$-\frac{1}{\sqrt{3}}$$, $$\frac{1}{\sqrt{3}}$$) 1
Table 18 Elements properties
Node ($$i$$)
Coord. ($$\xi$$, $$\eta$$)
Shape function ($$N_i$$)
Derivative ($$\frac{\partial N_i}{\partial \xi}$$, $$\frac{\partial N_i}{\partial \eta}$$)
1
($$-1$$, $$-1$$)
$$\frac{1}{4}\left(1-\xi\right)\left(1-\eta\right)\left(-1-\xi-\eta\right)$$
($$\frac{1}{4}\left(1-\eta\right)\left(2\xi+\eta\right)$$, $$\frac{1}{4}\left(1-\xi\right)\left(\xi+2\eta\right)$$)
2
($$1$$, $$-1$$)
$$\frac{1}{4}\left(1+\xi\right)\left(1-\eta\right)\left(-1+\xi-\eta\right)$$
($$\frac{1}{4}\left(1-\eta\right)\left(2\xi-\eta\right)$$, $$-\frac{1}{4}\left(1+\xi\right)\left(\xi-2\eta\right)$$)
3
($$1$$, $$1$$)
$$\frac{1}{4}\left(1+\xi\right)\left(1+\eta\right)\left(-1+\xi+\eta\right)$$
($$\frac{1}{4}\left(1+\eta\right)\left(2\xi+\eta\right)$$, $$\frac{1}{4}\left(1+\xi\right)\left(\xi+2\eta\right)$$)
4
($$-1$$, $$1$$)
$$\frac{1}{4}\left(1-\xi\right)\left(1+\eta\right)\left(-1-\xi+\eta\right)$$
($$\frac{1}{4}\left(1+\eta\right)\left(2\xi-\eta\right)$$, $$-\frac{1}{4}\left(1-\xi\right)\left(\xi-2\eta\right)$$)
5
($$0$$, $$-1$$)
$$\frac{1}{2}\left(1-\xi^{2}\right)\left(1-\eta\right)$$
($$-\xi\left(1-\eta\right)$$, $$-\frac{1}{2}\left(1-\xi^{2}\right)$$)
6
($$1$$, $$0$$)
$$\frac{1}{2}\left(1+\xi\right)\left(1-\eta^{2}\right)$$
($$\frac{1}{2}\left(1-\eta^{2}\right)$$, $$-\eta\left(1+\xi\right)$$)
7
($$0$$, $$1$$)
$$\frac{1}{2}\left(1-\xi^{2}\right)\left(1+\eta\right)$$
($$-\xi\left(1+\eta\right)$$, $$\frac{1}{2}\left(1-\xi^{2}\right)$$)
8
($$-1$$, $$0$$)
$$\frac{1}{2}\left(1-\xi\right)\left(1-\eta^{2}\right)$$
($$-\frac{1}{2}\left(1-\eta^{2}\right)$$, $$-\eta\left(1-\xi\right)$$)
Coord. ($$\xi$$, $$\eta$$) Weight ($$0$$, $$0$$) $$\frac{64}{81}$$ ($$\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$) $$\frac{25}{81}$$ ($$-\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$) $$\frac{25}{81}$$ ($$-\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$) $$\frac{25}{81}$$ ($$\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$) $$\frac{25}{81}$$ ($$0$$, $$\sqrt{\tfrac{3}{5}}$$) $$\frac{40}{81}$$ ($$-\sqrt{\tfrac{3}{5}}$$, $$0$$) $$\frac{40}{81}$$ ($$0$$, $$-\sqrt{\tfrac{3}{5}}$$) $$\frac{40}{81}$$ ($$\sqrt{\tfrac{3}{5}}$$, $$0$$) $$\frac{40}{81}$$
3D-Shape Functions
Tetrahedron 4
Table 20 Elements properties
Node ($$i$$)
Coord. ($$\xi$$, $$\eta$$, $$\zeta$$)
Shape function ($$N_i$$)
Derivative ($$\frac{\partial N_i}{\partial \xi}$$, $$\frac{\partial N_i}{\partial \eta}$$, $$\frac{\partial N_i}{\partial \zeta}$$)
1
($$0$$, $$0$$, $$0$$)
$$1-\xi-\eta-\zeta$$
($$-1$$, $$-1$$, $$-1$$)
2
($$1$$, $$0$$, $$0$$)
$$\xi$$
($$1$$, $$0$$, $$0$$)
3
($$0$$, $$1$$, $$0$$)
$$\eta$$
($$0$$, $$1$$, $$0$$)
4
($$0$$, $$0$$, $$1$$)
$$\zeta$$
($$0$$, $$0$$, $$1$$)
Coord. ($$\xi$$, $$\eta$$, $$\zeta$$) Weight ($$\frac{1}{4}$$, $$\frac{1}{4}$$, $$\frac{1}{4}$$) $$\frac{1}{6}$$
Tetrahedron 10
Table 22 Elements properties
Node ($$i$$)
Coord. ($$\xi$$, $$\eta$$, $$\zeta$$)
Shape function ($$N_i$$)
Derivative ($$\frac{\partial N_i}{\partial \xi}$$, $$\frac{\partial N_i}{\partial \eta}$$, $$\frac{\partial N_i}{\partial \zeta}$$)
1
($$0$$, $$0$$, $$0$$)
$$\left(1-\xi-\eta-\zeta\right)\left(1-2\xi-2\eta-2\zeta\right)$$
$$4\xi+4\eta+4\zeta-3$$, $$4\xi+4\eta+4\zeta-3$$, $$4\xi+4\eta+4\zeta-3$$
2
($$1$$, $$0$$, $$0$$)
$$\xi\left(2\xi-1\right)$$
($$4\xi-1$$, $$0$$, $$0$$)
3
($$0$$, $$1$$, $$0$$)
$$\eta\left(2\eta-1\right)$$
($$0$$, $$4\eta-1$$, $$0$$)
4
($$0$$, $$0$$, $$1$$)
$$\zeta\left(2\zeta-1\right)$$
($$0$$, $$0$$, $$4\zeta-1$$)
5
($$\frac{1}{2}$$, $$0$$, $$0$$)
$$4\xi\left(1-\xi-\eta-\zeta\right)$$
($$4-8\xi-4\eta-4\zeta$$, $$-4\xi$$, $$-4\xi$$)
6
($$\frac{1}{2}$$, $$\frac{1}{2}$$, $$0$$)
$$4\xi\eta$$
($$4\eta$$, $$4\xi$$, $$0$$)
7
($$0$$, $$\frac{1}{2}$$, $$0$$)
$$4\eta\left(1-\xi-\eta-\zeta\right)$$
($$-4\eta$$, $$4-4\xi-8\eta-4\zeta$$, $$-4\eta$$)
8
($$0$$, $$0$$, $$\frac{1}{2}$$)
$$4\zeta\left(1-\xi-\eta-\zeta\right)$$
($$-4\zeta$$, $$-4\zeta$$, $$4-4\xi-4\eta-8\zeta$$)
9
($$\frac{1}{2}$$, $$0$$, $$\frac{1}{2}$$)
$$4\xi\zeta$$
($$4\zeta$$, $$0$$, $$4\xi$$)
10
($$0$$, $$\frac{1}{2}$$, $$\frac{1}{2}$$)
$$4\eta\zeta$$
($$0$$, $$4\zeta$$, $$4\eta$$)
Coord. ($$\xi$$, $$\eta$$, $$\zeta$$) Weight ($$\frac{5-\sqrt{5}}{20}$$, $$\frac{5-\sqrt{5}}{20}$$, $$\frac{5-\sqrt{5}}{20}$$) $$\frac{1}{24}$$ ($$\frac{5+3\sqrt{5}}{20}$$, $$\frac{5-\sqrt{5}}{20}$$, $$\frac{5-\sqrt{5}}{20}$$) $$\frac{1}{24}$$ ($$\frac{5-\sqrt{5}}{20}$$, $$\frac{5+3\sqrt{5}}{20}$$, $$\frac{5-\sqrt{5}}{20}$$) $$\frac{1}{24}$$ ($$\frac{5-\sqrt{5}}{20}$$, $$\frac{5-\sqrt{5}}{20}$$, $$\frac{5+3\sqrt{5}}{20}$$) $$\frac{1}{24}$$
Hexahedron 8
Table 24 Elements properties
Node ($$i$$)
Coord. ($$\xi$$, $$\eta$$, $$\zeta$$)
Shape function ($$N_i$$)
Derivative ($$\frac{\partial N_i}{\partial \xi}$$, $$\frac{\partial N_i}{\partial \eta}$$, $$\frac{\partial N_i}{\partial \zeta}$$)
1
($$-1$$, $$-1$$, $$-1$$)
$$\frac{1}{8}\left(1-\xi\right)\left(1-\eta\right)\left(1-\zeta\right)$$
($$-\frac{1}{8}\left(1-\eta\right)\left(1-\zeta\right)$$, $$-\frac{1}{8}\left(1-\xi\right)\left(1-\zeta\right)$$, $$3$$)
2
($$1$$, $$-1$$, $$-1$$)
$$\frac{1}{8}\left(1+\xi\right)\left(1-\eta\right)\left(1-\zeta\right)$$
($$\frac{1}{8}\left(1-\eta\right)\left(1-\zeta\right)$$, $$-\frac{1}{8}\left(1+\xi\right)\left(1-\zeta\right)$$, $$3$$)
3
($$1$$, $$1$$, $$-1$$)
$$\frac{1}{8}\left(1+\xi\right)\left(1+\eta\right)\left(1-\zeta\right)$$
($$\frac{1}{8}\left(1+\eta\right)\left(1-\zeta\right)$$, $$\frac{1}{8}\left(1+\xi\right)\left(1-\zeta\right)$$, $$3$$)
4
($$-1$$, $$1$$, $$-1$$)
$$\frac{1}{8}\left(1-\xi\right)\left(1+\eta\right)\left(1-\zeta\right)$$
($$-\frac{1}{8}\left(1+\eta\right)\left(1-\zeta\right)$$, $$\frac{1}{8}\left(1-\xi\right)\left(1-\zeta\right)$$, $$3$$)
5
($$-1$$, $$-1$$, $$1$$)
$$\frac{1}{8}\left(1-\xi\right)\left(1-\eta\right)\left(1+\zeta\right)$$
($$-\frac{1}{8}\left(1-\eta\right)\left(1+\zeta\right)$$, $$-\frac{1}{8}\left(1-\xi\right)\left(1+\zeta\right)$$, $$3$$)
6
($$1$$, $$-1$$, $$1$$)
$$\frac{1}{8}\left(1+\xi\right)\left(1-\eta\right)\left(1+\zeta\right)$$
($$\frac{1}{8}\left(1-\eta\right)\left(1+\zeta\right)$$, $$-\frac{1}{8}\left(1+\xi\right)\left(1+\zeta\right)$$, $$3$$)
7
($$1$$, $$1$$, $$1$$)
$$\frac{1}{8}\left(1+\xi\right)\left(1+\eta\right)\left(1+\zeta\right)$$
($$\frac{1}{8}\left(1+\eta\right)\left(1+\zeta\right)$$, $$\frac{1}{8}\left(1+\xi\right)\left(1+\zeta\right)$$, $$3$$)
8
($$-1$$, $$1$$, $$1$$)
$$\frac{1}{8}\left(1-\xi\right)\left(1+\eta\right)\left(1+\zeta\right)$$
($$-\frac{1}{8}\left(1+\eta\right)\left(1+\zeta\right)$$, $$\frac{1}{8}\left(1-\xi\right)\left(1+\zeta\right)$$, $$3$$)
Coord. ($$\xi$$, $$\eta$$, $$\zeta$$) Weight ($$-\frac{1}{\sqrt{3}}$$, $$-\frac{1}{\sqrt{3}}$$, $$-\frac{1}{\sqrt{3}}$$) 1 ($$\frac{1}{\sqrt{3}}$$, $$-\frac{1}{\sqrt{3}}$$, $$-\frac{1}{\sqrt{3}}$$) 1 ($$\frac{1}{\sqrt{3}}$$, $$\frac{1}{\sqrt{3}}$$, $$-\frac{1}{\sqrt{3}}$$) 1 ($$-\frac{1}{\sqrt{3}}$$, $$\frac{1}{\sqrt{3}}$$, $$-\frac{1}{\sqrt{3}}$$) 1 ($$-\frac{1}{\sqrt{3}}$$, $$-\frac{1}{\sqrt{3}}$$, $$\frac{1}{\sqrt{3}}$$) 1 ($$\frac{1}{\sqrt{3}}$$, $$-\frac{1}{\sqrt{3}}$$, $$\frac{1}{\sqrt{3}}$$) 1 ($$\frac{1}{\sqrt{3}}$$, $$\frac{1}{\sqrt{3}}$$, $$\frac{1}{\sqrt{3}}$$) 1 ($$-\frac{1}{\sqrt{3}}$$, $$\frac{1}{\sqrt{3}}$$, $$\frac{1}{\sqrt{3}}$$) 1
Pentahedron 6
Table 26 Elements properties
Node ($$i$$)
Coord. ($$\xi$$, $$\eta$$, $$\zeta$$)
Shape function ($$N_i$$)
Derivative ($$\frac{\partial N_i}{\partial \xi}$$, $$\frac{\partial N_i}{\partial \eta}$$, $$\frac{\partial N_i}{\partial \zeta}$$)
1
($$-1$$, $$1$$, $$0$$)
$$\frac{1}{2}\left(1-\xi\right)\eta$$
($$-\frac{1}{2}\eta$$, $$\frac{1}{2}\left(1-\xi\right)$$, $$3$$)
2
($$-1$$, $$0$$, $$1$$)
$$\frac{1}{2}\left(1-\xi\right)\zeta$$
($$-\frac{1}{2}\zeta$$, $$0.0$$, $$3$$)
3
($$-1$$, $$0$$, $$0$$)
$$\frac{1}{2}\left(1-\xi\right)\left(1-\eta-\zeta\right)$$
($$-\frac{1}{2}\left(1-\eta-\zeta\right)$$, $$-\frac{1}{2}\left(1-\xi\right)$$, $$3$$)
4
($$1$$, $$1$$, $$0$$)
$$\frac{1}{2}\left(1+\xi\right)\eta$$
($$\frac{1}{2}\eta$$, $$\frac{1}{2}\left(1+\xi\right)$$, $$3$$)
5
($$1$$, $$0$$, $$1$$)
$$\frac{1}{2}\left(1+\xi\right)\zeta$$
($$\frac{1}{2}\zeta$$, $$0.0$$, $$3$$)
6
($$1$$, $$0$$, $$0$$)
$$\frac{1}{2}\left(1+\xi\right)\left(1-\eta-\zeta\right)$$
($$\frac{1}{2}\left(1-\eta-\zeta\right)$$, $$-\frac{1}{2}\left(1+\xi\right)$$, $$3$$)
Coord. ($$\xi$$, $$\eta$$, $$\zeta$$) Weight ($$-\frac{1}{\sqrt{3}}$$, $$0.5$$, $$0.5$$) $$\frac{1}{6}$$ ($$-\frac{1}{\sqrt{3}}$$, $$0.0$$, $$0.5$$) $$\frac{1}{6}$$ ($$-\frac{1}{\sqrt{3}}$$, $$0.5$$, $$0.0$$) $$\frac{1}{6}$$ ($$\frac{1}{\sqrt{3}}$$, $$0.5$$, $$0.5$$) $$\frac{1}{6}$$ ($$\frac{1}{\sqrt{3}}$$, $$0.0$$, $$0.5$$) $$\frac{1}{6}$$ ($$\frac{1}{\sqrt{3}}$$, $$0.5$$, $$0.0$$) $$\frac{1}{6}$$
Hexahedron 20
Table 28 Elements properties
Node ($$i$$)
Coord. ($$\xi$$, $$\eta$$, $$\zeta$$)
Shape function ($$N_i$$)
Derivative ($$\frac{\partial N_i}{\partial \xi}$$, $$\frac{\partial N_i}{\partial \eta}$$, $$\frac{\partial N_i}{\partial \zeta}$$)
1
($$-1$$, $$-1$$, $$-1$$)
$$\frac{1}{8}\left(1-\xi\right)\left(1-\eta\right)\left(1-\zeta\right)\left(-2-\xi-\eta-\zeta\right)$$
($$\frac{1}{4}\left(\xi+\frac{1}{2}\left(\eta+\zeta+1\right)\right)\left(\eta-1\right)\left(\zeta-1\right)$$, $$\frac{1}{4}\left(\eta+\frac{1}{2}\left(\xi+\zeta+1\right)\right)\left(\xi-1\right)\left(\zeta-1\right)$$, $$3$$)
2
($$1$$, $$-1$$, $$-1$$)
$$\frac{1}{8}\left(1+\xi\right)\left(1-\eta\right)\left(1-\zeta\right)\left(-2+\xi-\eta-\zeta\right)$$
($$\frac{1}{4}\left(\xi-\frac{1}{2}\left(\eta+\zeta+1\right)\right)\left(\eta-1\right)\left(\zeta-1\right)$$, $$-\frac{1}{4}\left(\eta-\frac{1}{2}\left(\xi-\zeta-1\right)\right)\left(\xi+1\right)\left(\zeta-1\right)$$, $$3$$)
3
($$1$$, $$1$$, $$-1$$)
$$\frac{1}{8}\left(1+\xi\right)\left(1+\eta\right)\left(1-\zeta\right)\left(-2+\xi+\eta-\zeta\right)$$
($$-\frac{1}{4}\left(\xi+\frac{1}{2}\left(\eta-\zeta-1\right)\right)\left(\eta+1\right)\left(\zeta-1\right)$$, $$-\frac{1}{4}\left(\eta+\frac{1}{2}\left(\xi-\zeta-1\right)\right)\left(\xi+1\right)\left(\zeta-1\right)$$, $$3$$)
4
($$-1$$, $$1$$, $$-1$$)
$$\frac{1}{8}\left(1-\xi\right)\left(1+\eta\right)\left(1-\zeta\right)\left(-2-\xi+\eta-\zeta\right)$$
($$-\frac{1}{4}\left(\xi-\frac{1}{2}\left(\eta-\zeta-1\right)\right)\left(\eta+1\right)\left(\zeta-1\right)$$, $$\frac{1}{4}\left(\eta-\frac{1}{2}\left(\xi+\zeta+1\right)\right)\left(\xi-1\right)\left(\zeta-1\right)$$, $$3$$)
5
($$-1$$, $$-1$$, $$1$$)
$$\frac{1}{8}\left(1-\xi\right)\left(1-\eta\right)\left(1+\zeta\right)\left(-2-\xi-\eta+\zeta\right)$$
($$-\frac{1}{4}\left(\xi+\frac{1}{2}\left(\eta-\zeta+1\right)\right)\left(\eta-1\right)\left(\zeta+1\right)$$, $$-\frac{1}{4}\left(\eta+\frac{1}{2}\left(\xi-\zeta+1\right)\right)\left(\xi-1\right)\left(\zeta+1\right)$$, $$3$$)
6
($$1$$, $$-1$$, $$1$$)
$$\frac{1}{8}\left(1+\xi\right)\left(1-\eta\right)\left(1+\zeta\right)\left(-2+\xi-\eta+\zeta\right)$$
($$-\frac{1}{4}\left(\xi-\frac{1}{2}\left(\eta-\zeta+1\right)\right)\left(\eta-1\right)\left(\zeta+1\right)$$, $$\frac{1}{4}\left(\eta-\frac{1}{2}\left(\xi+\zeta-1\right)\right)\left(\xi+1\right)\left(\zeta+1\right)$$, $$3$$)
7
($$1$$, $$1$$, $$1$$)
$$\frac{1}{8}\left(1+\xi\right)\left(1+\eta\right)\left(1+\zeta\right)\left(-2+\xi+\eta+\zeta\right)$$
($$\frac{1}{4}\left(\xi+\frac{1}{2}\left(\eta+\zeta-1\right)\right)\left(\eta+1\right)\left(\zeta+1\right)$$, $$\frac{1}{4}\left(\eta+\frac{1}{2}\left(\xi+\zeta-1\right)\right)\left(\xi+1\right)\left(\zeta+1\right)$$, $$3$$)
8
($$-1$$, $$1$$, $$1$$)
$$\frac{1}{8}\left(1-\xi\right)\left(1+\eta\right)\left(1+\zeta\right)\left(-2-\xi+\eta+\zeta\right)$$
($$\frac{1}{4}\left(\xi-\frac{1}{2}\left(\eta+\zeta-1\right)\right)\left(\eta+1\right)\left(\zeta+1\right)$$, $$-\frac{1}{4}\left(\eta-\frac{1}{2}\left(\xi-\zeta+1\right)\right)\left(\xi-1\right)\left(\zeta+1\right)$$, $$3$$)
9
($$0$$, $$-1$$, $$-1$$)
$$\frac{1}{4}\left(1-\xi^{2}\right)\left(1-\eta\right)\left(1-\zeta\right)$$
($$-\frac{1}{2}\xi\left(\eta-1\right)\left(\zeta-1\right)$$, $$-\frac{1}{4}\left(\xi^{2}-1\right)\left(\zeta-1\right)$$, $$3$$)
10
($$1$$, $$0$$, $$-1$$)
$$\frac{1}{4}\left(1+\xi\right)\left(1-\eta^{2}\right)\left(1-\zeta\right)$$
($$\frac{1}{4}\left(\eta^{2}-1\right)\left(\zeta-1\right)$$, $$\frac{1}{2}\eta\left(\xi+1\right)\left(\zeta-1\right)$$, $$3$$)
11
($$0$$, $$1$$, $$-1$$)
$$\frac{1}{4}\left(1-\xi^{2}\right)\left(1+\eta\right)\left(1-\zeta\right)$$
($$\frac{1}{2}\xi\left(\eta+1\right)\left(\zeta-1\right)$$, $$\frac{1}{4}\left(\xi^{2}-1\right)\left(\zeta-1\right)$$, $$3$$)
12
($$-1$$, $$0$$, $$-1$$)
$$\frac{1}{4}\left(1-\xi\right)\left(1-\eta^{2}\right)\left(1-\zeta\right)$$
($$-\frac{1}{4}\left(\eta^{2}-1\right)\left(\zeta-1\right)$$, $$-\frac{1}{2}\eta\left(\xi-1\right)\left(\zeta-1\right)$$, $$3$$)
13
($$-1$$, $$-1$$, $$0$$)
$$\frac{1}{4}\left(1-\xi\right)\left(1-\eta\right)\left(1-\zeta^{2}\right)$$
($$-\frac{1}{4}\left(\eta-1\right)\left(\zeta^{2}-1\right)$$, $$-\frac{1}{4}\left(\xi-1\right)\left(\zeta^{2}-1\right)$$, $$3$$)
14
($$1$$, $$-1$$, $$0$$)
$$\frac{1}{4}\left(1+\xi\right)\left(1-\eta\right)\left(1-\zeta^{2}\right)$$
($$\frac{1}{4}\left(\eta-1\right)\left(\zeta^{2}-1\right)$$, $$\frac{1}{4}\left(\xi+1\right)\left(\zeta^{2}-1\right)$$, $$3$$)
15
($$1$$, $$1$$, $$0$$)
$$\frac{1}{4}\left(1+\xi\right)\left(1+\eta\right)\left(1-\zeta^{2}\right)$$
($$-\frac{1}{4}\left(\eta+1\right)\left(\zeta^{2}-1\right)$$, $$-\frac{1}{4}\left(\xi+1\right)\left(\zeta^{2}-1\right)$$, $$3$$)
16
($$-1$$, $$1$$, $$0$$)
$$\frac{1}{4}\left(1-\xi\right)\left(1+\eta\right)\left(1-\zeta^{2}\right)$$
($$\frac{1}{4}\left(\eta+1\right)\left(\zeta^{2}-1\right)$$, $$\frac{1}{4}\left(\xi-1\right)\left(\zeta^{2}-1\right)$$, $$3$$)
17
($$0$$, $$-1$$, $$1$$)
$$\frac{1}{4}\left(1-\xi^{2}\right)\left(1-\eta\right)\left(1+\zeta\right)$$
($$\frac{1}{2}\xi\left(\eta-1\right)\left(\zeta+1\right)$$, $$\frac{1}{4}\left(\xi^{2}-1\right)\left(\zeta+1\right)$$, $$3$$)
18
($$1$$, $$0$$, $$1$$)
$$\frac{1}{4}\left(1+\xi\right)\left(1-\eta^{2}\right)\left(1+\zeta\right)$$
($$-\frac{1}{4}\left(\eta^{2}-1\right)\left(\zeta+1\right)$$, $$-\frac{1}{2}\eta\left(\xi+1\right)\left(\zeta+1\right)$$, $$3$$)
19
($$0$$, $$1$$, $$1$$)
$$\frac{1}{4}\left(1-\xi^{2}\right)\left(1+\eta\right)\left(1+\zeta\right)$$
($$-\frac{1}{2}\xi\left(\eta+1\right)\left(\zeta+1\right)$$, $$-\frac{1}{4}\left(\xi^{2}-1\right)\left(\zeta+1\right)$$, $$3$$)
20
($$-1$$, $$0$$, $$1$$)
$$\frac{1}{4}\left(1-\xi\right)\left(1-\eta^{2}\right)\left(1+\zeta\right)$$
($$\frac{1}{4}\left(\eta^{2}-1\right)\left(\zeta+1\right)$$, $$\frac{1}{2}\eta\left(\xi-1\right)\left(\zeta+1\right)$$, $$3$$)
Coord. ($$\xi$$, $$\eta$$, $$\zeta$$) Weight ($$-\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$) $$\frac{125}{729}$$ ($$-\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$, $$0$$) $$\frac{200}{729}$$ ($$-\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$) $$\frac{125}{729}$$ ($$-\sqrt{\tfrac{3}{5}}$$, $$0$$, $$-\sqrt{\tfrac{3}{5}}$$) $$\frac{200}{729}$$ ($$-\sqrt{\tfrac{3}{5}}$$, $$0$$, $$0$$) $$\frac{320}{729}$$ ($$-\sqrt{\tfrac{3}{5}}$$, $$0$$, $$\sqrt{\tfrac{3}{5}}$$) $$\frac{200}{729}$$ ($$-\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$) $$\frac{125}{729}$$ ($$-\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$, $$0$$) $$\frac{200}{729}$$ ($$-\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$) $$\frac{125}{729}$$ ($$0$$, $$-\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$) $$\frac{200}{729}$$ ($$0$$, $$-\sqrt{\tfrac{3}{5}}$$, $$0$$) $$\frac{320}{729}$$ ($$0$$, $$-\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$) $$\frac{200}{729}$$ ($$0$$, $$0$$, $$-\sqrt{\tfrac{3}{5}}$$) $$\frac{320}{729}$$ ($$0$$, $$0$$, $$0$$) $$\frac{512}{729}$$ ($$0$$, $$0$$, $$\sqrt{\tfrac{3}{5}}$$) $$\frac{320}{729}$$ ($$0$$, $$\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$) $$\frac{200}{729}$$ ($$0$$, $$\sqrt{\tfrac{3}{5}}$$, $$0$$) $$\frac{320}{729}$$ ($$0$$, $$\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$) $$\frac{200}{729}$$ ($$\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$) $$\frac{125}{729}$$ ($$\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$, $$0$$) $$\frac{200}{729}$$ ($$\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$) $$\frac{125}{729}$$ ($$\sqrt{\tfrac{3}{5}}$$, $$0$$, $$-\sqrt{\tfrac{3}{5}}$$) $$\frac{200}{729}$$ ($$\sqrt{\tfrac{3}{5}}$$, $$0$$, $$0$$) $$\frac{320}{729}$$ ($$\sqrt{\tfrac{3}{5}}$$, $$0$$, $$\sqrt{\tfrac{3}{5}}$$) $$\frac{200}{729}$$ ($$\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$, $$-\sqrt{\tfrac{3}{5}}$$) $$\frac{125}{729}$$ ($$\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$, $$0$$) $$\frac{200}{729}$$ ($$\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$, $$\sqrt{\tfrac{3}{5}}$$) $$\frac{125}{729}$$
Pentahedron 15
Table 30 Elements properties
Node ($$i$$)
Coord. ($$\xi$$, $$\eta$$, $$\zeta$$)
Shape function ($$N_i$$)
Derivative ($$\frac{\partial N_i}{\partial \xi}$$, $$\frac{\partial N_i}{\partial \eta}$$, $$\frac{\partial N_i}{\partial \zeta}$$)
1
($$-1$$, $$1$$, $$0$$)
$$\frac{1}{2}\eta\left(1-\xi\right)\left(2\eta-2-\xi\right)$$
($$\frac{1}{2}\eta\left(2\xi-2\eta+1\right)$$, $$-\frac{1}{2}\left(\xi-1\right)\left(4\eta-\xi-2\right)$$, $$3$$)
2
($$-1$$, $$0$$, $$1$$)
$$\frac{1}{2}\zeta\left(1-\xi\right)\left(2\zeta-2-\xi\right)$$
($$\frac{1}{2}\zeta\left(2\xi-2\zeta+1\right)$$, $$0.0$$, $$3$$)
3
($$-1$$, $$0$$, $$0$$)
$$\frac{1}{2}\left(\xi-1\right)\left(1-\eta-\zeta\right)\left(\xi+2\eta+2\zeta\right)$$
($$-\frac{1}{2}\left(2\xi+2\eta+2\zeta-1\right)\left(\eta+\zeta-1\right)$$, $$-\frac{1}{2}\left(\xi-1\right)\left(4\eta+\xi+2\left(2\zeta-1\right)\right)$$, $$3$$)
4
($$1$$, $$1$$, $$0$$)
$$\frac{1}{2}\eta\left(1+\xi\right)\left(2\eta-2+\xi\right)$$
($$\frac{1}{2}\eta\left(2\xi+2\eta-1\right)$$, $$\frac{1}{2}\left(\xi+1\right)\left(4\eta+\xi-2\right)$$, $$3$$)
5
($$1$$, $$0$$, $$1$$)
$$\frac{1}{2}\zeta\left(1+\xi\right)\left(2\zeta-2+\xi\right)$$
($$\frac{1}{2}\zeta\left(2\xi+2\zeta-1\right)$$, $$0.0$$, $$3$$)
6
($$1$$, $$0$$, $$0$$)
$$\frac{1}{2}\left(-\xi-1\right)\left(1-\eta-\zeta\right)\left(-\xi+2\eta+2\zeta\right)$$
($$-\frac{1}{2}\left(\eta+\zeta-1\right)\left(2\xi-2\eta-2\zeta+1\right)$$, $$\frac{1}{2}\left(\xi+1\right)\left(4\eta-\xi+2\left(2\zeta-1\right)\right)$$, $$3$$)
7
($$-1$$, $$0.5$$, $$0.5$$)
$$2\eta\zeta\left(1-\xi\right)$$
($$-2\eta\zeta$$, $$-2\left(\xi-1\right)\zeta$$, $$3$$)
8
($$-1$$, $$0$$, $$0.5$$)
$$2\zeta\left(1-\eta-\zeta\right)\left(1-\xi\right)$$
($$2\zeta\left(\eta+\zeta-1\right)$$, $$2\zeta-\left(\xi-1\right)$$, $$3$$)
9
($$-1$$, $$0.5$$, $$0$$)
$$2\eta\left(1-\xi\right)\left(1-\eta-\zeta\right)$$
($$2\eta\left(\eta+\zeta-1\right)$$, $$2\left(2\eta+\zeta-1\right)\left(\xi-1\right)$$, $$3$$)
10
($$0$$, $$1$$, $$0$$)
$$\eta\left(1-\xi^{2}\right)$$
($$-2\xi\eta$$, $$-\left(\xi^{2}-1\right)$$, $$3$$)
11
($$0$$, $$0$$, $$1$$)
$$\zeta\left(1-\xi^{2}\right)$$
($$-2\xi\zeta$$, $$0.0$$, $$3$$)
12
($$0$$, $$0$$, $$0$$)
$$\left(1-\xi^{2}\right)\left(1-\eta-\zeta\right)$$
($$2\xi\left(\eta+\zeta-1\right)$$, $$\left(\xi^{2}-1\right)$$, $$3$$)
13
($$1$$, $$0.5$$, $$0.5$$)
$$2\eta\zeta\left(1+\xi\right)$$
($$2\eta\zeta$$, $$2\zeta\left(\xi+1\right)$$, $$3$$)
14
($$1$$, $$0$$, $$0.5$$)
$$2\zeta\left(1+\xi\right)\left(1-\eta-\zeta\right)$$
($$-2\zeta\left(\eta+\zeta-1\right)$$, $$-2\zeta\left(\xi+1\right)$$, $$3$$)
15
($$1$$, $$0.5$$, $$0$$)
$$2\eta\left(1+\xi\right)\left(1-\eta-\zeta\right)$$
($$-2\eta\left(\eta+\zeta-1\right)$$, $$-2\left(2\eta+\zeta-1\right)\left(\xi+1\right)$$, $$3$$)
Coord. ($$\xi$$, $$\eta$$, $$\zeta$$) Weight ($$-{\tfrac{1}{\sqrt{3}}}$$, $$\tfrac{1}{3}$$, $$\tfrac{1}{3}$$) -$$\frac{27}{96}$$ ($$-{\tfrac{1}{\sqrt{3}}}$$, $$0.6$$, $$0.2$$) $$\frac{25}{96}$$ ($$-{\tfrac{1}{\sqrt{3}}}$$, $$0.2$$, $$0.6$$) $$\frac{25}{96}$$ ($$-{\tfrac{1}{\sqrt{3}}}$$, $$0.2$$, $$0.2$$) $$\frac{25}{96}$$ ($${\tfrac{1}{\sqrt{3}}}$$, $$\tfrac{1}{3}$$, $$\tfrac{1}{3}$$) -$$\frac{27}{96}$$ ($${\tfrac{1}{\sqrt{3}}}$$, $$0.6$$, $$0.2$$) $$\frac{25}{96}$$ ($${\tfrac{1}{\sqrt{3}}}$$, $$0.2$$, $$0.6$$) $$\frac{25}{96}$$ ($${\tfrac{1}{\sqrt{3}}}$$, $$0.2$$, $$0.2$$) $$\frac{25}{96}$$
|
2021-11-27 20:25:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999682903289795, "perplexity": 13371.392039119075}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358233.7/warc/CC-MAIN-20211127193525-20211127223525-00162.warc.gz"}
|
https://math.stackexchange.com/questions/3143479/reverse-poisson-distribution-problem
|
# Reverse Poisson Distribution problem
I was recently solving a quiz on Poisson distribution and I encountered this question
A call center receives an average of • 4.5 calls every 5 minutes. Each agent can handle one of these calls over the 5 minute period. If a call is received, but no agent is available to take it, then that caller will be placed on hold. Assuming that the calls follow a Poisson distribution, what is the minimum number of agents needed on duty so that calls are placed on hold at most 10% of the time?
I figured out this was a case of finding K while we are given the expected probability in the question itself.
what is the minimum number of agents needed on duty so that calls are placed on hold at most 10% of the time?
I could not event determine the lambda and X to solve this question. PS : The correct answer was 7 agents.A detailed explanation will be helpful.
Thanks
## 1 Answer
I believe you are being asked to focus on a typical 5-minute period of time. Then the average number $$X$$ of incoming calls within 5 minutes has the distribution $$X \sim \mathsf{Pois}(\lambda = 4.5).$$ A bar plot of this distribution is shown below:
Poisson Model: Computations with the Poisson PDF $$P(X = i) = e^{-\lambda}\frac{\lambda^i}{i!},$$ where $$\lambda = 4.5,$$ show that $$P(X \le 6) = 0.8311 < 0.9$$ and $$P(X \le 7) = 0.9134.$$ Thus seven agents would suffice to serve incoming customers at least 90% of the time.
Trial and error with Poisson CDF: Computations using R statistical software (where the CDF of a Poisson distribution is denoted ppois) are as follows:
ppois(6, 4.5)
## 0.8310506
ppois(7, 4.5)
## 0.9134135
Using a Poisson quantile function: If you have such software available, you can use the inverse CDF or quantile function qpois to get to the answer without exploration.
qpois(.9, 4.5)
## 7
Note: Without knowing the context of this problem in your course, I suppose something like this is the approach you are expected to take. However, there are some unstated assumptions involved. A complete analysis of such a problem would involve finding the required number of servers $$k$$ required in an M/M/k queue (at steady state) that are sufficient to keep the average number of customers in the system below $$k$$ 90% of the time. This would involve knowing the exponential arrival rate $$\lambda = 4.5$$ per five minutes or $$0.9$$ per minute, and knowing the exponential rate (often denoted $$\mu)$$ at which each server can finish serving customers.
For starters: here are a few unstated assumptions: (a) No agents are busy at the beginning of the 5-minute period. (b) Each agent handles only one call within this period. (c) We are not concerned whether agents are still handling calls from this 5-minute period ends and the next begins.
• maybe I was making too many assumptions at first and did not look it in a simplistic manner. I did try it using a cdf with hit and trial like you mentioned, but accepting lambda to be 4.5 instead of 0.9 was not clear to me. Considering the given assumptions, I believe you have provided the simplest solution to this. Thanks a lot – Piyush Dixit Mar 13 at 7:31
|
2019-12-10 00:05:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8461474180221558, "perplexity": 203.85187265470012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540525598.55/warc/CC-MAIN-20191209225803-20191210013803-00036.warc.gz"}
|
http://ewingconsulting.com/find-life-tmik/0139ff-circumference-questions-and-answers
|
# circumference questions and answers
The basketball team needs to play basketball, but first, they must find the circumference of thebasketball court in order to play. The diameter of the court is 5 ft. Will you help them find the circumference of the basketball court so they can play basketball? Further Maths; Practice Papers; Conundrums; Class Quizzes; … 8 In. 6)circumference=63 so radius=? How to … I'm excited to share with you a twist on the traditional teaching of … Use the Pi button and round to the nearest tenth. … Corbettmaths Videos, worksheets, 5-a-day and much more. Write down the size of angle ABC. However, basic calculators should be used once the concept is understood to eliminate the potential for calculation errors. Area of a Circle 2. Even though Pi technically represents an infinite number that starts with 3.14159265358979323846264..., students should remember the base form of Pi which will provide accurate-enough measurements of the circle's area and circumference. This page does not grade your responses. Donʼt spend too long on one question. Info. Our online circumference trivia quizzes can be adapted to suit your requirements for taking some of the top circumference quizzes. See similar topic questions: Geometry. View PDF. 2. However, we think it's important to remember the value of the constant Pi at 3.14. In order for students to complete these worksheets, they will need to understand the following vocabulary: area, formula, circle, perimeter, radius, pi and the symbol for pi, and diameter. Area of a … Use = 3.14 to calculate your answers. 11 Money questions F G to E 11 12 Shading fractions of rectangles F G to E 12 13 Ordering Fractions, Decimals & Percentages F G to E 13 14 Estimating answers F G to E 14 15 Place value when multiplying F G to E 15 16 Addition and subtraction F G to E 16 17 Long multiplication F G to E 17 18 Long division F G to E 18 19 Multiplication & Division with Decimals F G to E 19 20 … Deb Russell is a school principal and teacher with over 25 years of experience teaching mathematics at all levels. 6th through 8th Grades. Part B *Calculate* the diameter and radius of each circle in Part A. Either the diameter or the radius is shown on each shape. Circle problems and questions with answers for grade 8 are presented. 5 cm Find the circumference in… Circumference questions from all 13+ Exam Papers. Which Harry Potter Hogwarts House Do You Belong To Quiz! Exam Style Questions Ensure you have: Pencil, pen, ruler, protractor, pair of compasses and eraser You may use tracing paper if needed Guidance 1. (Your Answers Should Be Close To The Gizmo Answers.) To find the circumference of a circle use the formula $$C = πD$$. 6th through 8th Grades. Find the circumference of the circle, giving your answer in terms of . Created: Dec 24, 2011 | Updated: Jan 20, 2015. Following quiz provides Multiple Choice Questions M C Q s related to Circumference and Area of a Circle. 4.8 81 customer reviews. Here are a few questions I need help with--person with correct and best explained answer will get Best Answer! $\pi{r}^{2}$ … Check your answers seem right. Report a problem. Learn more about our online math practice software. The three circles C1, C2 and C3 have their centers O1, O2 and O3 on the line L … Worksheet to calculate circumference of circle when given diameter or radius. A diameter is a straight line going straight through the centre of the circle and touching the circumference at each end. X. Update: can you put that in a decimal? Area of a Circle 1. With a little thinking we can easily figure out that, C = π x 4.3m = 13.51m (to 2 decimal places). Welcome; Videos and Worksheets; Primary; 5-a-day. 5. The surface of the pond is a semicircle of radius 1.4m. 2)diameter=4.2cm so circumference=? About this resource. This resource is designed for UK teachers. Exam: Skill #77- Calculate The Area And Circumference Of A Circle, Circumference And Area Of Circle Quiz! Start studying Area and circumference of circles questions and answers. 5-a-day GCSE 9-1; 5-a-day Primary; 5-a-day Further Maths; 5-a-day GCSE A*-G; 5-a-day Core 1; More. Using the given information, they will find the other two measures. Then find the length of the thread by using a scale, which is the required circumference of the coin. Curriculum varies from state to state, country to country and although this concept is required in the seventh grade in the Common Core Standards, it is wise to check the curriculum to determine what grade these worksheets are suitable for. 4)radius=9.5 so circumference=? In order to calculate circumferences, students should be reminded of the formulas mathematicians use to measure the distance around a circle when the length of the radius is known: the circumference of a circle is two times the radius multiplied by Pi, or 3.14. 5. CIRCUMFERENCE OF CIRCLES Materials required for examination Items included with question papers Ruler graduated in centimetres and Nil millimetres, protractor, compasses, pen, HB pencil, eraser. 3. Questions and Answers 1. Read each question carefully before you begin answering it. Also solutions and explanations are included. In order to calculate circumferences, students should be reminded of the formulas mathematicians use to measure the distance around a circle when the length of the radius is known: the circumference of a circle is two times the radius multiplied by Pi, or 3.14. Answer: Thread. Give your answer to 1 decimal place. Students should have worked with simple formulas on perimeter and area of other 2 dimensional shapes and had some experience finding the perimeter of a circle by doing activities like using string to trace the circle and then measuring the string to determine the perimeter of the circle. A chord is a straight line joining any two parts of the circumference. Circumference problems Circumference of a circle worksheet including formula. 1. Updated: Jan 20, 2015. pdf, 99 KB. The semi-circle below has centre and a radius of 8 cm (Level 4) Find the perimeter of the semi-circle. Many thanks to Lois Lewington for correcting a few repeats and providing the solutions to the questions for us. Please subscribe to unlock 13+ Maths Past Papers with Answers. Correct answer to the question Can someone tell me what circumference means and give an example of it - e-eduanswers.com You will have to read all the given answers and click over the correct answer. Corbettmaths Videos, worksheets, 5-a-day and much more. 5-a-day GCSE 9-1; 5-a-day Primary; 5-a-day Further Maths; 5-a-day GCSE A*-G; 5-a-day Core 1; More. Area of a Circle 1. Login. Which of the following is not a formula to find the circumference of a circle? These detailed answers are visible only for premium members. There are many calculators that will find the circumference and areas of shapes but it is important for students to be able to understand the concepts and apply the formulas before moving to the calculator. II)A circle with a circumference of 40cm. Use both of these equations to solve the questions on the following eight worksheets. Circumference of a circle. Use 3 For It To Quickly Estimate The Circumference And Area Of Circle I Shown To The Right Show Your Work Below. Jasmin has a pond in her garden. 1. 2. It progresses nicely and finishes with some challenging problems at the end borrowed from MEP. Any two parts of the thread by using a thread how will you them! Answer BOX and type in your answer is correct or incorrect professionals, teachers, students and trivia. Is the required circumference of a circle when given either the radius diameter. Then quarter and three-quarter circles world 's Hardest Science quiz you 'll Ever Take measure to represent the diameter radius... Online circumference trivia quizzes can be made in circumference questions and answers or html formats circle if the diameter of distance... The top of this topic that made these equations to solve the.... In one... use the formula of calculating the area bound by the symbol π and is a of. Solve the questions value of the coin quiz you 'll Ever Take three-quarter circles out that, C = x. Read all the formulas to a sensible degree of accuracy to Determine the of! The thread by using a thread how will you help them find the or! Of Parallelograms, Triangles, and Trapezoids, can you put that in a decimal circles, circumference and... Markable answers on the circumference at each end the bottom of the circle court is 5 ft. will help. Part a Foundation GCSE class pi button and round to the nearest tenth to a sensible degree of accuracy basketball! Making … the job in the quiz answer ; then click ENTER games, and other tools. Lyon School - 13 Plus Maths sample Paper calculate circumference and area of a?. Use Next quiz button to check new set of questions in the quiz ; SD ;... Challenges you on these two concepts Give your answer is correct or incorrect knowledge that diameter! Correct answer worksheets are excellent printed back to back and used as a worksheet markable! A sensible degree of accuracy about ; … worksheet - circumference of a circle line joining any two of! Triangle or a square, what would you be looking for to answer the short answer questions about circles circumference! Quarter and three-quarter circles each question carefully before you begin answering it 25 and 100 circles,,... Following quiz provides Multiple Choice questions M C Q s related to circumference and area ; 14-16 ; View.. Other two measures all confused, or do you know how to get... answer the short answer questions circles. So that … find the circumference of a circle circle shapes Most Like n and round. Begin answering it top circumference quizzes formulas from today 's notes to answer the questions Exam! Answers are visible only for premium members Level 4 ) find the area bound by the symbol π is! Teaching area and circumference of a circle with a little thinking we calculate! Quizzes can be adapted to suit your requirements for taking some of the is... Measure to represent the diameter, radius or diameter quarter and circumference questions and answers circles thread by using a,... Represent the diameter or the radius is 5 in: can you Pass this circle Theorems GCSE Higher with! Centre and a chord is a number which is the circumference of a one-rupee coin, area a. Circle, circumference, diameter and radius of questions in the quiz below challenges you on these concepts...... answer the short answer questions about circles, circumference, and pi to eliminate the potential for calculation.! It 's important to remember the value of the … circumference and of. -Circumference worksheets with answers for grade 8 are presented, terms, Trapezoids! Of our other supported math Practice problems unit of measurement thinking we can easily figure out that, the or! Composite circle shapes I need help with -- person with correct and best explained answer will get answer... ; SD Calculator ; SD Calculator ; Logarithm ; LOVE Game ; Popular Calculators 2015. pdf, 99 KB information. Two concepts answer will get best answer you be looking for terms and Definitions in! Play basketball, but first, they must find the circumference of a circle given! Circles, circumference, and Trapezoids, can you put that in a decimal short Videos on Maths! Is approximately 3.141592 a twist on the traditional teaching of this topic that.. Gcse 9-1 ; 5-a-day Further Maths ; 5-a-day Further Maths ; 5-a-day we are going to focus on the eight... A twist on the following is not a formula to find the perimeter of the following eight.... Theorems GCSE Higher KS4 with Answers/Solutions NOTE: you must Give reasons for any provided! Note: you must Give reasons for any answers provided calculate the area and circumference of a if! Calculating circumference and area of circle when given diameter or radius or html formats best answer 4.3m = (... Circumference and area of a circle you can use Next quiz button to new! To find the length of the circle BOX to indicate whether your answer to a sensible of!, … easy Maths question, circumference and area of a one-rupee coin 's Science. Diameter, radius then quarter and three-quarter circles correct answer ; mathematics / Geometry and measures / perimeter and of. Circles ranging between 25 and 100 would you be looking for past Papers answers. In part a 25 years of experience teaching mathematics at all levels 5-a-day and much more = πD\ ) presented. The solutions to the questions on the circumference of the largest circle is 10 π 31.42. Excited to share with you a twist on the back diameter then are of circle. To a sensible degree of accuracy circumference questions and answers you begin answering it past month was teaching and. Worksheets ; Primary ; 5-a-day circumference questions and answers Maths ; Practice Papers ; Conundrums ; quizzes. Deb Russell is a number which is approximately 3.141592 today in this quiz we...... answer the short answer questions about circles, circumference, and pi π and the task is circumference questions and answers the... Each shape area and circumference of a circle when given either the radius is 5 ft. will you help find. Your Work below as a worksheet with markable answers on the diagram and knowing that C = πD\ ) think. To check new set of questions in the RESULTS BOX to indicate whether your answer to sensible. * the diameter and radius with, math Glossary: mathematics terms and Definitions the short questions... Topic that made can be made in pdf or html formats the diagram and knowing that C πD\... Solve the questions quiz provides Multiple Choice questions M C Q s to. -- person with correct and best explained answer will get best answer between the two is that, =... # 77- calculate the area of a circle to focus on the following is not a formula to the... X 4.3m = 13.51m ( to 2 decimal places ) a diameter is a straight line joining any parts... For premium members quizzes to Test your knowledge on the diagram and knowing that C = πD\ ) following.! Will receive your score and answers. rendered in decimal and the unit of measurement decimal... Choice questions M C Q s related to circumference and area of circle when given the diameter, then! Thebasketball court in order to play basketball all the given information, they must find the circumference the. Taking some of our other supported math Practice problems markable answers on diagram. Symbol π and is a number which is the circumference of the pi. Are not sure about the answer then you can check the answer then you check!
|
2021-05-07 06:11:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6195275783538818, "perplexity": 1340.8355068553096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.25/warc/CC-MAIN-20210507060253-20210507090253-00299.warc.gz"}
|
https://www.physicsforums.com/threads/bell-shape-like-of-plancks-distribution.744659/
|
# Bell shape like of Planck's distribution
As a high schooler, what I can deduce from Planck's distribution's bell shape is that the majority of the atoms of a body above 0k possesses a certain K.E which is the average K.E which leads to the presence of a peak point in the distribution. While the minority posses higher or lower K.E which leads to the decrease of intensity of radiation for longer or shorter wavelengths than that of the peak point. And as the temperature of the body increases the K.E energy possessed by the majority increases so the peak point moves to a higher frequency, I think the explanation is related to maxwell-boltzman distribution to a great extent. All of what I said is just my deductions, at school we are not given explanations for what we study, and we don't have enough knowledge about the topics to deduce the explanations on our own, I'm keen on understanding every point, so I hope anyone on the forum to correct what I mentioned about the bell shape and give me a good explanation, thanks in advance.
Related Other Physics Topics News on Phys.org
You get from the second to the first by assuming that the energy is quantised - i.e. Existing in little packets.
What is meant by energy is quantized ?
Planck's postulate was that the energy of oscillators comes in discrete packets given by:
E=nhv
Where n is an integer, h is Plancks constant and v is the frequency of the oscillator. By applying this to the classical distribution for states he arrived at the Planck distribution.
Planck's postulate was that the energy of oscillators comes in discrete packets given by:
E=nhv
Where n is an integer, h is Plancks constant and v is the frequency of the oscillator. By applying this to the classical distribution for states he arrived at the Planck distribution.
You mean we mean that he assumed the Particle nature of light ?
He didn't quite go that far. Einstein is given credit for that.
He didn't quite go that far. Einstein is given credit for that.
But at least he assumed that the light is quantized to photons.
Planck's postulate was that the energy of oscillators comes in discrete packets given by:
E=nhv
Where n is an integer
N is number of oscillators ?
But at least he assumed that the light is quantized to photons.
No, I don't think he went that far. It was a mathematical trick that gave the right answer.
N is number of oscillators ?
No, it is just an integer. The number of oscillators is given by the Boltzmann distribution.
|
2020-10-28 03:37:30
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.828508198261261, "perplexity": 412.8907544020654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107896048.53/warc/CC-MAIN-20201028014458-20201028044458-00035.warc.gz"}
|
https://mathhelpboards.com/threads/wizard1s-question-at-yahoo-answers-isometry.3706/
|
# wizard1's question at Yahoo! Answers (Isometry)
#### Fernando Revilla
##### Well-known member
MHB Math Helper
Here is the question:
Let H = {(x,y) ∈ ℝ^2 : y > 0}, with the usual Euclidean distance. Let T: H→H be the translation map: T(x,y) = (x, y+1) for all (x,y) ∈ H. Verify that T is an isometry of H.
Thanks!
Here is a link to the question:
Verify that T is an isometry of H....? - Yahoo! Answers
I have posted a link there to this topic so the OP can find my response.
#### Fernando Revilla
##### Well-known member
MHB Math Helper
Hello wizard1,
If $(x,y)\in H$ then, $y>0$ wich implies $y+1>0$. As a consequence, the map $T:H\to H$ is well defined. Now, for all $(x,y)$ and $(x',y')$ points of $H$: \begin{aligned}d[T(x,y),T(x',y')]&=d[(x,y+1),(x',y'+1)]\\&=\sqrt{(x'-x)^2+(y'+1-x'-1)^2}\\&=\sqrt{(x'-x)^2+(y'-x')^2}\\&=d[(x,y),(x',y')]\end{aligned} That is, $T$ is an isometry.
Last edited:
|
2021-07-26 13:51:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8960596323013306, "perplexity": 2146.118597574033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152129.33/warc/CC-MAIN-20210726120442-20210726150442-00337.warc.gz"}
|
https://codegolf.stackexchange.com/questions/105127/fastest-mini-flak-quine/176925
|
Fastest Mini-Flak Quine
Mini-Flak is a subset of the Brain-Flak language, where the <>, <...> and [] operations are disallowed. Strictly speaking it must not match the following regex:
.*(<|>|\[])
Mini-Flak is the smallest known Turing complete subset of Brain-Flak.
A little while ago I was able to make a Quine in Mini-Flak, but it was too slow to run in the lifetime of the universe.
So my challenge to you is to make a faster Quine.
Scoring
To score your code put a @cy flag at the end of your code and run it in the Ruby interpreter (Try it online uses the ruby interpreter) using the -d flag. Your score should print to STDERR as follows:
@cy <score>
This is the number of cycles your program takes before terminating and is the same between runs. Since each cycle takes about the same amount of time to be run your score should be directly correlated to the time it takes to run your program.
If your Quine is too long for you to reasonably run on your computer you can calculate the number of cycles by hand.
Calculating the number of cycles is not very difficult. The number of cycles is equivalent to 2 times the number of monads run plus the number of nilads run. This is the same as replacing every nilad with a single character and counting the number of characters run in total.
Example scoring
• (()()()) scores 5 because it has 1 monad and 3 nilads.
• (()()()){({}[()])} scores 29 because the first part is the same as before and scores 5 while the loop contains 6 monads and 2 nilads scoring 8. The loop is run 3 times so we count its score 3 times. 1*5 + 3*8 = 29
Requirements
• Be at least 2 bytes
• Print its source code when executed in Brain-Flak using the -A flag
• Not match the regex .*(<|>|\[])
Tips
• The Crane-Flak interpreter is categorically faster than the ruby interpreter but lacks some of the features. I would recommend testing your code using Crane-Flak first and then score it in the ruby interpreter when you know it works. I would also highly recommend not running your program in TIO. Not only is TIO slower than the desktop interpreter, but it will also timeout in about a minute. It would be extremely impressive if someone managed to score low enough to run their program before TIO timed out.
• [(...)]{} and (...)[{}] work the same as <...> but do not break the restricted source requirement
• You can check out Brain-Flak and Mini-Flak Quines if you want an idea of how to approach this challenge.
• "current best" -> "current only" – HyperNeutrino May 24 '17 at 17:19
Mini-Flak, 6851113 cycles
The program (literally)
I know most people aren't likely expecting a Mini-Flak quine to be using unprintable characters and even multi-byte characters (making the encoding relevant). However, this quine does, and the unprintables, combined with the size of the quine (93919 characters encoded as 102646 bytes of UTF-8), make it fairly difficult to place the program into this post.
However, the program is very repetitive, and as such, compresses really well. So that the entire program is available literally from Stack Exchange, there's an xxd reversible hexdump of a gzip-compressed version of the full quine hidden behind the collapsible below:
00000000: 1f8b 0808 bea3 045c 0203 7175 696e 652e .......\..quine.
00000010: 6d69 6e69 666c 616b 00ed d9db 6a13 4118 miniflak....j.A.
00000020: 0060 2f8b f808 0d64 a1c1 1dc8 4202 c973 ./....d....B..s
00000030: 4829 4524 0409 22e2 5529 a194 1242 1129 H)E$..".U)...B.) 00000040: d2d7 ca93 f9cf 4c4c d45b 9536 e6db 6967 ......LL.[.6..ig 00000050: 770e 3bc9 ffed eca9 edb7 b1a4 9ad2 6a1d w.;...........j. 00000060: bfab 75db c6c6 6c5f 3d4f a5a6 8da6 dcd8 ..u...l_=O...... 00000070: 465b d4a5 5a28 4bd9 719d 727b aa79 f9c9 F[..Z(K.q.r{.y.. 00000080: 43b6 b9d7 8b17 cd45 7f79 d3f4 fb65 7519 C......E.y...eu. 00000090: 59ac 9a65 bfdf 8f86 e6b2 69a2 bc5c 4675 Y..e......i..\Fu 000000a0: d4e4 bcd9 5637 17b9 7099 9b73 7dd3 fcb2 ....V7..p..s}... 000000b0: 4773 b9bc e9bd b9ba 3eed 9df7 aeaf 229d Gs......>.....". 000000c0: e6ed 5eae 3aef 9d46 21b2 5e4d bd28 942e ..^.:..F!.^M.(.. 000000d0: 6917 d71f a6bf 348c 819f 6260 dfd9 77fe i.....4...b..w. 000000e0: df86 3e84 74e4 e19b b70e 9af0 111c fa0d ..>.t........... 000000f0: d29c 75ab 21e3 71d7 77f6 9d8f f902 6db2 ..u.!.q.w.....m. 00000100: b8e1 0adf e9e0 9009 1f81 f011 18d8 1b33 ...............3 00000110: 72af 762e aac2 4760 6003 1bd8 698c c043 r.v...G...i..C 00000120: 8879 6bde 9245 207c 04ae 5ce6 2d02 e1bb .yk..E |..\.-... 00000130: 7291 4540 57f8 fe0d 6546 f89b a70b 8da9 r.E@W...eF...... 00000140: f5e7 03ff 8b8f 3ad6 a367 d60b f980 679d ......:..g....g. 00000150: d3d6 1c16 f2ff a767 e608 57c8 c27d c697 .......g..W..}.. 00000160: 4207 c140 9e47 9d57 2e50 6e8e c215 b270 B..@.G.W.Pn....p 00000170: bdf6 9926 9e47 9d05 ce02 0ff0 5ea7 109a ...&.G......^... 00000180: 8ba6 b5db 880b 970b 9749 2864 47d8 1b92 .........I(dG... 00000190: 39e7 9aec 8f0e 9e93 117a 6773 b710 ae53 9........zgs...S 000001a0: cd01 17ee b30e d9c1 15e6 6186 7a5c dc26 ..........a.z\.& 000001b0: 9750 1d51 610a d594 10ea f3be 4b7a 2c37 .P.Qa.......Kz,7 000001c0: 2f85 7a14 8fc4 a696 304d 4bdf c143 8db3 /.z.....0MK..C.. 000001d0: d785 8a96 3085 2acc 274a a358 c635 8d37 ....0.*.'J.X.5.7 000001e0: 5f37 0f25 8ff5 6854 4a1f f6ad 1fc7 dbba _7.%..hTJ....... 000001f0: 51ed 517b 8da2 4b34 8d77 e5b2 ec46 7a18 Q.Q{..K4.w...Fz. 00000200: ffe8 3ade 6fed b2f2 99a3 bae3 c949 9ab5 ..:.o........I.. 00000210: ab75 d897 d53c b258 a555 1b07 63d6 a679 .u...<.X.U..c..y 00000220: 4a51 5ead a23a 6a72 9eb6 d569 960b f3dc JQ^..:jr...i.... 00000230: 9ceb 53fa 658f 345f ad07 6f6f efce 06ef ..S.e.4_..oo.... 00000240: 0677 b791 cef2 f620 57bd 1b9c 4521 b241 .w..... W...E!.A 00000250: 4d83 2894 2eaf a140 8102 050a 1428 50a0 M.(....@.....(P. 00000260: 4081 0205 0a14 2850 a040 8102 050a 1428 @.....(P.@.....( 00000270: 50a0 4081 0205 0a14 2850 a040 8102 050a P.@.....(P.@.... 00000280: 1428 50a0 4081 0205 0a14 2850 a040 8102 .(P.@.....(P.@.. 00000290: 050a 1428 50a0 4081 0205 0a14 2850 a040 ...(P.@.....(P.@ 000002a0: 8102 050a 1428 50a0 4081 0205 0a14 2850 .....(P.@.....(P 000002b0: a040 8102 050a 1428 50a0 4081 0205 0a14 .@.....(P.@..... 000002c0: 2850 a040 8102 050a 1428 50a0 4081 0205 (P.@.....(P.@... 000002d0: 0a14 2850 a040 8102 050a 1428 50a0 4081 ..(P.@.....(P.@. 000002e0: 0205 0a14 2850 a040 8102 050a 1428 50a0 ....(P.@.....(P. 000002f0: 4081 0205 0a14 2850 a040 8102 050a 1428 @.....(P.@.....( 00000300: 50a0 4081 0205 0a14 2850 a040 8102 050a P.@.....(P.@.... 00000310: 1428 50a0 4081 0205 0a14 2850 a040 8102 .(P.@.....(P.@.. 00000320: 050a 1428 50a0 4081 0205 0a14 2850 a040 ...(P.@.....(P.@ 00000330: 8102 050a 1428 50a0 4081 0205 0a14 2850 .....(P.@.....(P 00000340: a040 8102 050a 1428 50a0 4081 0205 0a14 .@.....(P.@..... 00000350: 2850 a040 8102 050a 1428 50a0 4081 0205 (P.@.....(P.@... 00000360: 0a14 2850 a040 8102 050a 1428 50a0 4081 ..(P.@.....(P.@. 00000370: 0205 0a14 2850 a01c 14ca 7012 cbb4 a6e9 ....(P....p..... 00000380: e6db e6b1 e4b1 9e4c 4ae9 d3be f5f3 745b .......LJ.....t[ 00000390: 37a9 3d6a af49 7489 a6e9 ae5c 96dd 488f 7.=j.It....\..H. 000003a0: d31f 5da7 fbad 5d56 3e73 5277 7cf5 aa7b ..]...]V>sRw|..{ 000003b0: 3fbc df7c e986 c3ba 5ee4 3c6f 74f7 c3e1 ?..|....^.<ot... 000003c0: 301a bb45 d795 9afb fbdc 1495 65d5 6d9b 0..E........e.m. 000003d0: baf7 a5b4 a87d 4a5b d7fd b667 b788 ec27 .....}J[...g...' 000003e0: c5d8 28bc b96a 9eda 7a50 524d 290a a5cb ..(..j..zPRM)... 000003f0: cbef 38cb c3ad f690 0100 ..8....... (Yes, it's so repetitive that you can even see the repeats after it's been compressed). The question says "I would also highly recommend not running your program in TIO. Not only is TIO slower than the desktop interpreter, but it will also timeout in about a minute. It would be extremely impressive if someone managed to score low enough to run their program before TIO timed out." I can do that! It takes about 20 seconds to run on TIO, using the Ruby interpreter: Try it online! The program (readably) Now I've given a version of the program that computers can read, let's try a version that humans can read. I've converted the bytes that make up the quine into codepage 437 (if they have the high bit set) or Unicode control pictures (if they're ASCII control codes), added whitespace (any pre-existing whitespace was converted to control pictures), run-length-encoded using the syntax «string×length», and some data-heavy bits elided: ␠ (((()()()()){})) {{} (({})[(()()()())]) (({})( {{}{}((()[()]))}{} (((((((({})){}){}{})){}{}){}){}()) { ({}( (␀␀!S␠su! … many more comment characters … oq␝qoqoq) («()×35» («()×44» («()×44» («()×44» («()×44» («()×45» … much more data encoded the same way … («()×117»(«()×115»(«()×117» «000010101011┬â┬ … many more comment characters … ┬â0┬â┬à00␈␈ )[({})( ([({})]({}{})) { ((()[()])) }{} { { ({}(((({}())[()])))[{}()]) }{} (({})) ((()[()])) }{} )]{} %Wwy$%Y%ywywy$wy$%%%WwyY%$$wy%$$%$%$%$%%wy%ywywy'×almost 241» ,444454545455┬ç┬ … many more comment characters … -a--┬ü␡┬ü-a␡┬ü )[{}()]) }{} {}({}()) )[{}]) (({})(()()()()){}) }{}{}␊ (The "almost 241" is because the 241st copy is missing the trailing ', but is otherwise identical to the other 240.) Explanation About the comments The first thing to explain is, what's up with the unprintable characters and other junk that isn't Mini-Flak commands? You may think that adding comments to the quine just makes things harder, but this is a speed competition (not a size competition), meaning that comments don't hurt the speed of the program. Meanwhile, Brain-Flak, and thus Mini-Flak, just dump the contents of the stack to standard output; if you had to ensure that the stack contained only the characters that made up the commands of your program, you'd have to spend cycles cleaning the stack. As it is, Brain-Flak ignores most characters, so as long as we ensure that junk stack elements aren't valid Brain-Flak commands (making this a Brain-Flak/Mini-Flak polyglot), and aren't negative or outside the Unicode range, we can just leave them on the stack, allow them to be output, and put the same character in our program at the same place to retain the quine property. There's one particularly important way we can take advantage of this. The quine works by using a long data string, and basically all the output from the quine is produced by formatting the data string in various ways. There's only one data string, despite the fact that the program has multiple pieces; so we need to be able to use the same data string to print different parts of the program. The "junk data doesn't matter" trick lets us do this in a very simple way; we store the characters that make up the program in the data string by adding or subtracting a value to or from their ASCII code. Specifically, the characters making up the start of the program are stored as their ASCII code + 4, the characters making up the section that's repeated almost 241 times as their ASCII code − 4, and the characters making up the end of the program are stored as their ASCII code − 8. We can then print any of these three parts of the program via printing every character of the data string with an offset; if, for example, we print it with 4 added to every character code, we get one repeat of the repeated section, with some comments before and after. (Those comments are simply the other sections of the program, with character codes shifted so that they don't form any valid Brain-Flak commands, because the wrong offset was added. We have to dodge Brain-Flak commands, not just Mini-Flak commands, to avoid violating the part of the question; the choice of offsets was designed to ensure this.) Because of this comment trick, we actually only need to be able to output the data string formatted in two different ways: a) encoded the same way as in the source, b) as character codes with a specified offset added to every code. That's a huge simplification that makes the added length totally worth it. Program structure This program consists of four parts: the intro, the data string, the data string formatter, and the outro. The intro and outro are basically responsible for running the data string and its formatter in a loop, specifying the appropriate format each time (i.e. whether to encode or offset, and what offset to use). The data string is just data, and is the only part of the quine for which the characters that make it up are not specified literally in the data string (doing that would obviously be impossible, as it would have to be longer than itself); it's thus written in a way that's particularly easy to regenerate from itself. The data string formatter is made of 241 almost identical parts, each of which formats a specific datum out of the 241 in the data string. Each part of the program can be produced via the data string and its formatter as follows: • To produce the outro, format the data string with offset +8 • To produce the data string formatter, format the data string with offset +4, 241 times • To produce the data string, format the data string via encoding it into the source format • To produce the intro, format the data string with offset -4 So all we have to do is to look at how these parts of the program work. The data string («()×35» («()×44» («()×44» («()×44» («()×44» («()×45» … We need a simple encoding for the data string as we have to be able to reverse the encoding in Mini-Flak code. You can't get much simpler than this! The key idea behind this quine (apart from the comment trick) is to note that there's basically only one place we can store large amounts of data: the "sums of command return values" within the various nesting levels of the program source. (This is commonly known as the third stack, although Mini-Flak doesn't have a second stack, so "working stack" is likely a better name in the Mini-Flak context.) The other possibilities for storing data would be the main/first stack (which doesn't work because that's where our output has to go, and we can't move the output past the storage in a remotely efficient way), and encoded into a bignum in a single stack element (which is unsuitable for this problem because it takes exponential time to extract data from it); when you eliminate those, the working stack is the only remaining location. In order to "store" data on this stack, we use unbalanced commands (in this case, the first half of a (…) command), which will be balanced within the data string formatter later on. Each time we close one of these commands within the formatter, it'll push the sum of a datum taken from the data string, and the return values of all the commands at that nesting level within the formatter; we can ensure that the latter add to zero, so the formatter simply sees single values taken from the data string. The format is very simple: (, followed by n copies of (), where n is the number we want to store. (Note that this means we can only store non-negative numbers, and the last element of the data string needs to be positive.) One slightly unintuitive point about the data string is which order it's in. The "start" of the data string is the end nearer the start of the program, i.e. the outermost nesting level; this part gets formatted last (as the formatter runs from innermost to outermost nesting levels). However, despite being formatted last, it gets printed first, because values pushed onto the stack first are printed last by the Mini-Flak interpreter. The same principle applies to the program as a whole; we need to format the outro first, then the data string formatter, then the data string, then the intro, i.e. the reverse of the order in which they are stored in the program. The data string formatter )[({})( ([({})]({}{})) { ((()[()])) }{} { { ({}(((({}())[()])))[{}()]) }{} (({})) ((()[()])) }{} )]{} The data string formatter is made out of 241 sections which each have identical code (one section has a marginally different comment), each of which formats one specific character of the data string. (We couldn't use a loop here: we need an unbalanced ) to read the data string via matching its unbalanced (, and we can't put one of those inside a {…} loop, the only form of loop that exists. So instead, we "unroll" the formatter, and simply get the intro/outro to output the data string with the formatter's offset 241 times.) )[({})( … )]{} The outermost part of a formatter element reads one element of the data string; the simplicity of the data string's encoding leads to a little complexity in reading it. We start by closing the unmatched (…) in the data string, then negate ([…]) two values: the datum we just read from the data string (({})) and the return value of the rest of the program. We copy the return value of the rest of the formatter element with (…) and add the copy to the negated version with {}. The end result is that the return value of the data string element and formatter element together is the datum minus the datum minus the return value plus the return value, or 0; this is necessary to make the next data string element out produce the correct value. ([({})]({}{})) The formatter uses the top stack element to know which mode it's in (0 = format in data string formatting, any other value = the offset to output with). However, just having read the data string, the datum is on top of the format on the stack, and we want them the other way round. This code is a shorter variant of the Brain-Flak swap code, taking a above b to b above a + b; not only is it shorter, it's also (in this specific case) more useful, because the side effect of adding b to a is not problematic when b is 0, and when b is not 0, it does the offset calculation for us. { ((()[()])) }{} { … ((()[()])) }{} Brain-Flak only has one control flow structure, so if we want anything other than a while loop, it'll take a bit of work. This is a "negate" structure; if there's a 0 on top of the stack, it removes it, otherwise it places a 0 on top of the stack. (It works pretty simply: as long as there isn't a 0 on top of the stack, push 1 − 1 to the stack twice; when you're done, pop the top stack element.) It's possible to place code inside a negate structure, as seen here. The code will only run if the top of the stack was nonzero; so if we have two negate structures, assuming the top two stack elements aren't both zero, they'll cancel each other out, but any code inside the first structure will run only if the top stack element was nonzero, and the code inside the second structure will run only if the top stack element was zero. In other words, this is the equivalent of an if-then-else statement. In the "then" clause, which runs if the format is nonzero, we actually have nothing to do; what we want is to push the data+offset to the main stack (so that it can be output at the end of the program), but it's there already. So we only have to deal with the case of encoding the data string element in source form. { ({}(((({}())[()])))[{}()]) }{} (({})) Here's how we do that. The {({}( … )[{}()])}{} structure should be familiar as a loop with a specific number of iterations (which works by moving the loop counter to the working stack and holding it there; it'll be safe from any other code, because access to the working stack is tied to the nesting level of the program). The body of the loop is ((({}())[()])), which makes three copies of the top stack element and adds 1 to the lowest. In other words, it transforms a 40 on top of the stack into 40 above 40 above 41, or viewed as ASCII, ( into ((); running this repeatedly will make ( into (() into (()() into (()()() and so on, and thus is a simple way to generate our data string (assuming that there's a ( on top of the stack already). Once we're done with the loop, (({})) duplicates the top of the stack (so that it now starts ((()… rather than (()…. The leading ( will be used by the next copy of the data string formatter to format the next character (it'll expand it into (()(()… then (()()(()…, and so on, so this generates the separating ( in the data string). %Wwy$%Y%ywywy$wy$%%%WwyY%$$wy%$$%$%$%\$%%wy%ywywy'
There's one last bit of interest in the data string formatter. OK, so mostly this is just the outro shifted 4 codepoints downwards; however, that apostrophe at the end may look out of place. ' (codepoint 39) would shift into + (codepoint 43), which isn't a Brain-Flak command, so you may have guessed that it's there for some other purpose.
The reason this is here is because the data string formatter expects there to be a ( on the stack already (it doesn't contain a literal 40 anywhere). The ' is actually at the start of the block that's repeated to make up the data string formatter, not the end, so after the characters of the data string formatter have been pushed onto the stack (and the code is about to move onto printing the data string itself), the outro adjusts the 39 on top of the stack into a 40, ready for the formatter (the running formatter itself this time, not its representation in the source) to make use of it. That's why we have "almost 241" copies of the formatter; the first copy is missing its first character. And that character, the apostrophe, is one of only three characters in the data string that don't correspond to Mini-Flak code somewhere in the program; it's there purely as a method of providing a constant.
The intro and outro
(((()()()()){}))
{{}
(({})[(()()()())])
(({})(
{{}{}((()[()]))}{}
(((((((({})){}){}{})){}{}){}){}())
{
({}(
(␀␀!S␠su! … many more comment characters … oq␝qoqoq)
…
)[{}()])
}{}
{}({}())
)[{}])
(({})(()()()()){})
}{}{}␊
The intro and outro are conceptually the same part of the program; the only reason we draw a distinction is that the outro needs to be output before the data string and its formatter are (so that it prints after them), whereas the intro needs to be output after them (printing before them).
(((()()()()){}))
We start by placing two copies of 8 on the stack. This is the offset for the first iteration. The second copy is because the main loop expects there to be a junk element on top of the stack above the offset, left behind from the test that decides whether to exist the main loop, and so we need to put a junk element there so that it doesn't throw away the element we actually want; a copy is the tersest (thus fastest to output) way to do that.
There are other representations of the number 8 that are no longer than this one. However, when going for fastest code, this is definitely the best option. For one thing, using ()()()() is faster than, say, (()()){}, because despite both being 8 characters long, the former is a cycle faster, because (…) is counted as 2 cycles, but () only as one. Saving one cycle is negligible compared to a much bigger consideration for a , though: ( and ) have much lower codepoints than { and }, so generating the data fragment for them will be much faster (and the data fragment will take up less space in the code, too).
{{} … }{}{}
The main loop. This doesn't count iterations (it's a while loop, not a for loop, and uses a test to break out). Once it exits, we discard the top two stack elements; the top element is a harmless 0, but the element below will be the "format to use on the next iteration", which (being a negative offset) is a negative number, and if there are any negative numbers on the stack when the Mini-Flak program exits, the interpreter crashes trying to output them.
Because this loop uses an explicit test to break out, the result of that test will be left on the stack, so we discard it as the first thing we do (its value isn't useful).
(({})[(()()()())])
This code pushes 4 and f − 4 above a stack element f, whilst leaving that element in place. We're calculating the format for the next iteration in advance (while we have the constant 4 handy), and simultaneously getting the stack into the correct order for the next few parts of the program: we'll be using f as the format for this iteration, and the 4 is needed before that.
(({})( … )[{}])
This saves a copy of f − 4 on the working stack, so that we can use it for the next iteration. (The value of f will still be present at that point, but it'll be in an awkward place on the stack, and even if we could manoeuvre it to the correct place, we'd have to spend cycles subtracting 4 from it, and cycles printing the code to do that subtraction. Far easier simply to store it now.)
{{}{}((()[()]))}{}
A test to see if the offset is 4 (i.e. f − 4 is 0). If it is, we're printing the data string formatter, so we need to run the data string and its formatter 241 times rather than just once at this offset. The code is fairly simple: if f − 4 is nonzero, replace the f − 4 and the 4 itself with a pair of zeros; then in either case, pop the top stack element. We now have a number above f on the stack, either 4 (if we want to print this iteration 241 times) or 0 (if we want to print it only once).
(
((((((({})){}){}{})){}{}){}){}
()
)
This is an interesting sort of Brain-Flak/Mini-Flak constant; the long line here represents the number 60. You may be confused at the lack of (), which are normally all over the place in Brain-Flak constants; this isn't a regular number, but a Church numeral, which interprets numbers as a duplication operation. For example, the Church numeral for 60, seen here, makes 60 copies of its input and combines them all together into a single value; in Brain-Flak, the only things we can combine are regular numbers, by addition, so we end up adding 60 copies of the top of the stack and thus multiplying the top of the stack by 60.
As a side note, you can use an Underload numeral finder, which generates Church numerals in Underload syntax, to find the appropriate number in Mini-Flak too. Underload numerals (other than zero) use the operations "duplicate top stack element" : and "combine top two stack elements" *; both those operations exist in Brain-Flak, so you just translate : to ), * to {}, prepend a {}, and add enough ( at the start to balance (this is using a weird mix of the main stack and working stack, but it works).
This particular code fragment uses the church numeral 60 (effectively a "multiply by 60" snippet), together with an increment, to generate the expression 60x + 1. So if we had a 4 from the previous step, this gives us a value of 241, or if we had a 0, we just get a value of 1, i.e. this correctly calculates the number of iterations we need.
The choice of 241 is not coincidental; it was a value chosen to be a) approximately the length at which the program would end up anyway and b) 1 more than 4 times a round number. Round numbers, 60 in this case, tend to have shorter representations as Church numerals because you have more flexibility in factors to copy. The program contains padding later on to bring the length up to 241 exactly.
{
({}(
…
)[{}()])
}{}
This is a for loop, like the one seen earlier, which simply runs the code inside it a number of times equal to the top of the main stack (which it consumes; the loop counter itself is stored on the working stack, but visibility of that is tied to the program's nesting level and thus it's impossible for anything but the for loop itself to interact with it). This actually runs the data string and its formatter 1 or 241 times, and as we've now popped all the values we were using for our control flow calculation from the main stack, we have the format to use on top of it, ready for the formatter to use.
(␀␀!S␠su! … many more comment characters … oq␝qoqoq)
The comment here isn't entirely without interest. For one thing, there are a couple of Brain-Flak commands; the ) at the end is naturally generated as a side effect of the way the transitions between the various segments of the program work, so the ( at the start was manually added to balance it (and despite the length of the comment inside, putting a comment inside a () command is still a () command, so all it does is add 1 to the return value of the data string and its formatter, something that the for loop entirely ignores).
More notably, those NUL characters at the start of the comment clearly aren't offsets from anything (even the difference between +8 and -4 isn't enough to turn a ( into a NUL). Those are pure padding to bring the 239-element data string up to 241 elements (which easily pay for themselves: it would take much more than two bytes to generate 1 vs. 239 rather than 1 vs. 241 when calculating the number of iterations required). NUL was used as the padding character because it has the lowest possible codepoint (making the source code for the data string shorter and thus faster to output).
{}({}())
Drop the top stack element (the format we're using), add 1 to the next (the last character to be output, i.e. the first character to be printed, of the program section we just formatted). We don't need the old format any more (the new format is hiding on the working stack); and the increment is harmless in most cases, and changes the ' at one end of the source representation of the data string formatter into a ( (which is required on the stack for next time we run the formatter, to format the data string itself). We need a transformation like that in the outro or intro, because forcing each data string formatter element to start with ( would make it somewhat more complex (as we'd need to close the ( and then undo its effect later), and we'd somehow need to generate an extra ( somewhere because we only have almost 241 copies of the formatter, not all 241 (so it's best that a harmless character like ' is the one that's missing).
(({})(()()()()){})
Finally, the loop exit test. The current top of the main stack is the format we need for the next iteration (which just came back off the working stack). This copies it and adds 8 to the copy; the resulting value will be discarded next time round the loop. However, if we just printed the intro, the offset was -4 so the offset for the "next iteration" will be -8; -8 + 8 is 0, so the loop will exit rather than continuing onto the iteration afterwards.
128,673,515 cycles
Try it online
Explanation
The reason that Miniflak quines are destined to be slow is Miniflak's lack of random access. To get around this I create a block of code that takes in a number and returns a datum. Each datum represents a single character like before and the main code simply queries this block for each one at a time. This essentially works as a block of random access memory.
This block of code has two requirements.
• It must take a number and output only the character code for that character
• It must be easy to reproduce the lookup table bit by bit in Brain-Flak
To construct this block I actually reused a method from my proof that Miniflak is Turing complete. For each datum there is a block of code that looks like this:
(({}[()])[(())]()){(([({}{})]{}))}{}{(([({}{}(%s))]{}))}{}
This subtracts one from the number on top of the stack and if zero pushes %s the datum beneath it. Since each piece decrements the size by one if you start with n on the stack you will get back the nth datum.
This is nice and modular, so it can be written by a program easily.
Next we have to set up the machine that actually translates this memory into the source. This consists of 3 parts as such:
(([()]())())
{({}[(
-Look up table-
)]{})
1. (({}[()])[(())]()){(([({}{})]{}))}{}{([({}{}(([{}]))(()()()()()))]{})}{}
2. (({}[()])[(())]()){(([({}{})]{}))}{}{([({}{}
(({}[(
({}[()(((((()()()()()){}){}){}))]{}){({}[()(({}()))]{}){({}[()(({}((((()()()){}){}){}()){}))]{}){({}[()(({}()()))]{}){({}[()(({}(((()()()()())){}{}){}))]{}){([(({}{}()))]{})}}}}}{}
(({}({}))[({}[{}])])
)]{}({})[()]))
({[()]([({}({}[({})]))]{})}{}()()()()()[(({}({})))]{})
)]{})}{}
3. (({}[()])[(())]()){(([({}{})]{}))}{}{([({}{}
(({}(({}({}))[({}[{}])][(
({}[()(
([()](((()()[(((((((()()()){})())){}{}){}){})]((((()()()()())){}{}){})([{}]([()()](({})(([{}](()()([()()](((((({}){}){}())){}){}{}))))))))))))
)]{})
{({}[()(((({})())[()]))]{})}{}
(([(((((()()()()){}){}()))){}{}([({})]((({})){}{}))]()()([()()]({}(({})([()]([({}())](({})([({}[()])]()(({})(([()](([({}()())]()({}([()](([((((((()()()())()){}){}){}()){})]({}()(([(((((({})){}){}())){}{})]({}([((((({}())){}){}){}()){}()](([()()])(()()({}(((((({}())())){}{}){}){}([((((({}))){}()){}){}]([((({}[()])){}{}){}]([()()](((((({}())){}{}){}){})(([{}](()()([()()](()()(((((()()()()()){}){}){}()){}()(([((((((()()()())){}){}())){}{})]({}([((((({})()){}){}){}()){}()](([()()])(()()({}(((((({}){}){}())){}){}{}(({})))))))))))))))))))))))))))))))))))))))))))))))
)]{})[()]))({()()()([({})]{})}{}())
)]{})}{}
({}[()])
}{}{}{}
(([(((((()()()()){}){}())){}{})]((({}))([()]([({}())]({}()([()]((()([()]((()([({})((((()()()()){}){}()){})]()())([({})]({}([()()]({}({}((((()()()()()){}){}){}))))))))))))))))))
The machine consists of four parts that are run in order starting with 1 and ending with 3. I have labeled them in the code above. Each section also uses the same lookup table format I use for the encoding. This is because the entire program is contained in a loop and we don't want to run every section every time we run through the loop so we put in the same RA structure and query the section we desire each time.
1
Section 1 is a simple set up section.
The program tells first queries section 1 and datum 0. Datum 0 does not exist so instead of returning that value it simply decrements the query once for each datum. This is useful because we can use the result to determine the number of data, which will become important in future sections. Section 1 records the number of data by negativizing the result and queries Section 2 and the last datum. The only problem is we cannot query section 2 directly. Since there is another decrement left we need to query a non-existant section 5. In fact this will be the case every time we query a section within another section. I will ignore this in my explanation however if you are looking a the code just remember 5 means go back a section and 4 means run the same section again.
2
Section 2 decodes the data into the characters that make up the code after the data block. Each time it expects the stack to appear as so:
Previous query
Result of query
Number of data
Junk we shouldn't touch...
It maps each possible result (a number from 1 to 6) to one of the six valid Miniflak characters ((){}[]) and places it below the number of data with the "Junk we shouldn't touch". This gets us a stack like:
Previous query
Number of data
Junk we shouldn't touch...
From here we need to either query the next datum or if we have queried them all move to section 3. Previous query is not actually the exact query sent out but rather the query minus the number of data in the block. This is because each datum decrements the query by one so the query comes out quite mangled. To generate the next query we add a copy of the number of data and subtract one. Now our stack looks like:
Next query
Number of data
Junk we shouldn't touch...
If our next query is zero we have read all the memory needed in section 3 so we add the number of data to the query again and slap a 4 on top of the stack to move onto section 3. If the next query is not zero we put a 5 on the stack to run section 2 again.
3
Section 3 makes the block of data by querying our RAM just as section 3 does.
For the sake of brevity I will omit most of the details of how section 3 works. It is almost identical to section 2 except instead of translating each datum into one character it translates each into a lengthy chunk of code representing its entry in the RAM. When section 3 is done it tells the program to exit the loop.
After the loop has been run the program just needs to push the first bit of the quine ([()]())(()()()()){({}[(. I does this with the following code implementing standard Kolmogorov-complexity techniques.
(([(((((()()()()){}){}())){}{})]((({}))([()]([({}())]({}()([()]((()([()]((()([({})((((()()()()){}){}()){})]()())([({})]({}([()()]({}({}((((()()()()()){}){}){}))))))))))))))))))
I hope this was clear. Please comment if you are confused about anything.
• About how long does this take to run? It times on TIO. – Pavel Jan 5 '17 at 6:32
• @Pavel I don't run it on TIO because that would be incredibly slow, I do use the same interpreter that TIO uses (the ruby one). It takes about 20 minutes to run on an old rack server I have access to. It takes about 15 minutes in Crain-Flak, but Crain-Flak doesn't have debug flags so I cannot score it without running it in the Ruby interpreter. – Wheat Wizard Jan 5 '17 at 16:00
• @Pavel I ran it again and timed it. It took 30m45.284s to complete on a rather low end server (roughly the equivalent of a average modern desktop) using the ruby interpreter. – Wheat Wizard Jan 5 '17 at 16:38
|
2020-09-29 01:54:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31488385796546936, "perplexity": 989.286440605169}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401617641.86/warc/CC-MAIN-20200928234043-20200929024043-00438.warc.gz"}
|
https://physics.stackexchange.com/questions/444328/why-does-a-60w-bulb-glow-brighter-than-a-100w-bulb-in-a-series/444375
|
# Why does a 60W bulb glow brighter than a 100W bulb in a series?
In my physics class I have this problem that shows two lightbulbs, one 60W and one 100W in series, connected to a 120V battery. The problems are:
Which bulb is brighter? (A: 60W)
Calculate the power dissipated by the 60W bulb. (A: 23.4W)
Calculate the power dissipated by the 100W bulb. (A: 14.1W)
Why is the power dissipated not simply the wattages of the bulbs? I followed one workthrough online where you first find R for both using P = (V^2)/R and then use I = V/R to get a current of 0.3125A. The power dissipated is then calculated using P = I^2R and you get the above answers. However, doesn't that assume the voltage drop between the lightbulbs is 120V in both cases, and isn't that wrong?
I tried getting it another way where I said P1 = IV1, P2 = IV2, and V1+V2=120(Volts). I solved the voltage drop on the 60W lightbulb to be 45V and 75V on the 100W one. Then, current is solved to be 4/3A which lets us solve the resistance for each one as 33.75Ohms and 56.25Ohms. Then using the formula P = V^2/R, the original wattages are found as the answer. Why is it right to assume 120V for both bulbs?
• How did you get to the voltage drops of 45 and 75 V? – Jasper Nov 30 '18 at 21:33
• @Jasper P1 = IV1 and P2 = IV2 and V1+V2 = 120, so P1/I + P2/I = 120. 60/I + 100/I = 120, I = 160/120 or 4/3 A. From that, 60 = (4/3)V1, V1 = 45V, 120 - 45 = 75V for the other one. – Carson Nov 30 '18 at 22:43
• You can't substitute 60W for P1 because the bulb is not operating at nominal power. Same for P2. – Jasper Nov 30 '18 at 22:47
• @Jasper Ahh thanks. So that's why the first method works? Because the resistance is (fairly) constant and then you can add up both resistances and calculate the current of the circuit? – Carson Nov 30 '18 at 22:51
• Why the downvotes? Tell me what to fix. Also, if someone wants to edit it to make the formulas look good that would be nice. I don't know LaTeX. – Carson Dec 1 '18 at 0:47
Why is the power dissipated not simply the wattages of the bulbs?
The power rating of a bulb is calculated assuming that the bulb will be used in a normal lighting circuit, that is, it will be in a parallel circuit, receiving the full domestic supply voltage, which I assume is 120V in your country.
A 60W bulb doesn't "know" it's supposed to pull 60W. It has a (fairly) constant resistance (once it has warmed up) which determines how much current it will draw when provided with a given voltage.
So for your series circuit you need to use 120V to calculate each bulb's resistance in ohms from its power rating. Then you can work out the total series resistance, and hence the total current for the curcuit, as you have done.
• I doubt that the resistance is fairly constant, but it's the only way to tackle this question. – Jasper Nov 30 '18 at 21:31
• @Jasper Once the bulb warms up, the resistance should be fairly close to its nominal operating resistance, assuming the voltage isn't radically different from the bulb's nominal supply voltage. – PM 2Ring Nov 30 '18 at 22:14
Answering the question from the title directly:
The bulb with lower power rating has a higher resistance (at operating temperature, but we have to assume it to be approximately constant), because at nominal voltage, less current must flow through it compared to the bulb with higher rating.
In a series circuit, the current is equal at all points.
The following assumes constant $$R$$ for each bulb.
The voltage drop at each bulb can be calculated using $$V = RI$$ (assuming constant $$R$$ despite changing temperature of the filament). Higher resistance leads to higer voltage drop at the bulb with lower wattage.
For 60W @ 120V we need 0.5A of current and with $$R=V/I$$ we get $$R_{60W} = 240\Omega$$. The other bulb has $$R_{100W}= 144\Omega$$.
Power is calculated using $$P=VI$$, and since $$I$$ is constant for both bulbs, higher resistance leads to higer power, more brightness.
The current is $$I=V / (R_{60W} + R_{100W}) = 120V / 384 \Omega$$
• So since we can assume a constant R and I will be constant, V must be constant also? And the R and I are calculated using P = V^2/R and I = V/R? – Carson Nov 30 '18 at 22:47
• Yes, I extended the answer. – Jasper Nov 30 '18 at 23:05
One first assume that the resistance of the bulbs does no depend on the voltage/current/temperature.
The power of a bulb $$P= \frac {V^2}{R}$$ where $$V$$ is the potential difference across the bulb and $$R$$ is the resistance of the bulb.
For the same working voltage $$V$$ because $$P_{\rm 100W}> P_{\rm 60W}$$ then $$R_{\rm 100W}< R_{\rm 60W}$$.
The power $$P=I^2R$$ where $$I$$ is the current and for bulbs connected in series because the current is the same through both bulbs the bulb with the larger resistance (60W bulb) will dissipate more power ie be brighter, than the bulb with the smaller resistance (100W bulb).
A 60W bulb means a bulb that when connected to it's rated supply voltage draws 60W. Unless otherwise stated one assumes that the rated supply voltage is the normal supply voltage in your country.
Your teacher is assuming that the bulbs act like resistors. If we make that assumption then we can solve your question in two steps.
1. Use the power ratings of the bulb and it's normal operating voltage to calculate the resistances.
2. Use the resistances calculated in the first step to calculate the behaviour of your actual circuit.
The first part of the question can be answered without actually doing the calculation. The lower rated bulb will have a higher resistance which means it will take a larger proportion of the voltage in the series circuit which means it will receive more power. For the latter part you have to actually work out the sums.
In practice the teachers assumption is highly inaccurate, but like with many textbook problems we have to run with it because we haven't been given any better information. In reality the resistance of an incandescent bulb increases singificantly with temperature. When you first turn a bulb on it draws far more current than it does after warming up.
What does this mean for our problem? it means two things.
1. The total power delivered to the two bulbs will be higher than the nieve calculations would suggest because the bulbs are cooler than they would be in normal operation.
2. The ratio of power delivered will be more extreme than the nieve calculations would suggest because the higher resistance (lower power rating) bulb will heat up less than the lower resistance (higher power rating) bulb.
Normally lightbulbs are wired in parallel so that they all get the same voltage. In this configuration, the lightbulb with the highest resistance will dissipate the least amount of power ($$V^2/R$$). Putting two lightbulbs in series is non-standard. In such a configuration, the current is common and the voltage drops are different. Thus the power is given by $$I^2R$$, and the lightbulb with the higher resistance will dissipate the most power.
## Details
A 60 W lightbulb has a resistance of 240 $$\Omega$$. You can get this value by solving the for $$R$$ in the equation $$P = V^2/R$$, where $$P$$ is the nominal power of 60 Watts, and $$V$$ is the nominal voltage of 120 Volts. And, using the same approach, you can find the resistance for the 100 Watt light bulb, which is 144 $$\Omega$$. As we expect, the lower Wattage lightbulb has a higher resistance. The total resistance of the circuit, when you put the two light bulbs in series is $$R_{60} + R_{100} = 384 \Omega$$ (I'm assuming no internal impedance of the power supply). The power rating on the bulbs when wired in series will not be the nominal powers of 60 and 100 Watts but will be a function of current and the total resistance of the circuit. The current in the circuit is given by $$I = V/R_{total}$$, which is 0.3125 Amps. Next, if we assume that the brightness is a function of power only, then the power dissipated across the 60 W light bulb is $$I^2R = 25.3906$$ W, and the power dissipated across the 100 W lightbulb is 14.0625 W.
|
2020-05-25 08:13:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 28, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8226469159126282, "perplexity": 631.6971915433995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388012.14/warc/CC-MAIN-20200525063708-20200525093708-00551.warc.gz"}
|
https://zbmath.org/?q=an%3A0964.37015
|
# zbMATH — the first resource for mathematics
Weak attractors and Lyapunov-like functions. (English) Zbl 0964.37015
It is well-known that the qualitative behaviour of dynamical systems can be described in terms of invariant sets called attractors. In this note, the authors adapt the definition of attractor in a compact space in the sense of Conley, and then extend this concept to the dynamical system generated by a continuous map $$f$$ on a noncompact space, which will be called the weak attractor of $$f$$. Recently M. Hurley [Proc. Am. Math. Soc. 115, 1139-1148 (1992; Zbl 0759.58031)] proved that if $${\mathcal A}$$ is a weak attractor of a discrete dynamical system $$f$$ then there exists a Lyapunov-like function for $$A$$. The purpose of this note is to investigate whether the converse of the above theorem does hold or not.
##### MSC:
37C70 Attractors and repellers of smooth dynamical systems and their topological structure 37B25 Stability of topological dynamical systems 37C05 Dynamical systems involving smooth mappings and diffeomorphisms
|
2021-09-18 10:26:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7590318918228149, "perplexity": 249.394888567478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056392.79/warc/CC-MAIN-20210918093220-20210918123220-00525.warc.gz"}
|
https://mathoverflow.net/questions/210483/generate-bernoulli-vector-with-given-covariance-matrix/210536
|
# Generate Bernoulli vector with given covariance matrix
I am from different background, so please forgive me if the answer is so well known.
Let $C=(c_{ij})$ be a given $n\times n$ matrix. Do we have a way to generate samples of random Bernoulli vectors with covariance matrix equal to $C$?
More specifically, are there functions that take in Bernoulli random vectors with i.i.d. entries, and output Bernoulli random vectors with covariance matrix equaling $C$.
• Does specifying the covariance matrix of Bernoulli random vectors fully specify the distribution? – Anthony Quas Jun 30 '15 at 15:26
• There are necessary conditions for a matrix $C$ to be the covariance of anything, so clearly this won't work in this generality. – Christian Remling Jun 30 '15 at 15:28
• Thank you for all the answers! And sorry for the late reply. Suppose that we are sure about that the matrix C at hand is a legitimate covariance matrix, is there a way to sample according to this covariance matrix? We know that it is simple for multivariate normal, because we can begin with i.i.d. normal coordinates and linearly combine them properly. But for multivariate Bernoulli it seems not that simple. – Yi Huang Sep 3 '15 at 14:27
• @Yi Huang: I think it works the same way. Orthogonalization can be stated in terms of R.V.s of any distribution I think. – Daniel Parry Sep 21 '15 at 14:17
I'm going to provide two algorithms here:
1. A super-exponential-time solution that works in all cases.
2. A polynomial-time solution that applies if the mean values are known and if a certain matrix is positive semi-definite.
The polynomial-time solution essentially recycles the end of the proof of Goemans-Williamson's result for approximating MAX-CUT. I include the super-exponential-time case just because (Dustin G. Mixon's clear sketch notwithstanding) the other solutions do not seem to specify their algorithms completely.
# General setup
Let $x=(x_1,...,x_n)$ be the random variable with covariance matrix $C$.
Note that by a "random Bernoulli vector" we might mean $x_i\in\{0,1\}$ or $x_i\in\{-1,1\}$. We can convert the covariance matrix from the former version to the latter by multiplying by 4, so we can adopt either convention without issue. I'll consider $\{-1,1\}$.
Let $\overline{x_i}=\mathbb{E}[x_i]$. We are given the covariances: $$c_{i,j}=\mathbb{E}[(x_i-\overline{x_i})(x_j-\overline{x_j})]=\mathbb{E}[x_ix_j]-\overline{x_i}\,\overline{x_j}$$
Note that $c_{i,i}=\mathbb{E}[x_i^2]-\overline{x_i}^2=1-\overline{x_i}^2$, which implies that $$\overline{x_i}=\pm\sqrt{1-c_{i,i}}$$
# Super-exponential-time algorithm
With an outer loop of size $2^n$, we can try all possible sign combinations for the $\overline{x_i}$.
For the inner loop, then, we can assume that we know $\overline{x_i}$. Suppose we are in the inner loop.
Let $\mathcal{B}$ be the domain of $x$ (so, $\mathcal{B}$ has $2^n$ elements). Consider a linear program with variables $p_B$ for each $B\in \mathcal{B}$. This represents the probability that $x=B$. All the constraints can be expressed as linear combinations of these variables.
To see how, let $B=(b_1,...,b_n)$. Then the constraint that we have a probability distribution is: $$0\leq p_B \leq 1$$ $$\sum_{B\in\mathcal{B}} p_B=1$$ The value of $\overline{x_i}$ (given to us by the outer loop): $$\left(\sum_{B\in\mathcal{B}, b_i=1}p_B\right)-\left(\sum_{B\in\mathcal{B}, b_i=-1}p_B\right)=\overline{x_i}$$ The covariances (for $i\neq j$): $$\left(\sum_{B\in\mathcal{B}, b_i=b_j}p_B\right)-\left(\sum_{B\in\mathcal{B}, b_i\neq b_j}p_B\right)=c_{i,j}+\overline{x_i}\overline{x_j}$$
If there exists a distribution of Bernoulli vectors consistent with $C$, then the linear program will be feasible (which we can determine in time polynomial in $|\mathcal{B}|=2^n$), and we exit, returning the distribution. On the other hand, if all the linear programs are infeasible, then $C$ is not consistent with any random variable over Bernoulli vectors
# Polynomial-time algorithm
We provide an algorithm that works under the assumptions that (1) the $\overline{x_i}$ are known, and (2) a certain matrix (specified below) is positive semi-definite.
Let $f(x)=\sin(\pi x/2)$, and note that $f:[-1,1]\rightarrow [-1,1]$. We will abuse notation and apply $f$ element-wise to matrices, i.e. for any matrix $X$ with elements in $[-1,1]$, we write $f(X)=(f(x_{i,j}))$.
Consider, the second moments: $$d_{i,j}=\mathbb{E}[x_ix_j]$$ Let $D=(d_{i,j})$ be the matrix of second moments.
Note that if we are given first moments $\overline{x_i}$, we can translate between $C$ and $D$: $$d_{i,j}-\overline{x_i}\,\overline{x_j}=c_{i,j}$$
Note that $d_{i,i}=1$, so that in some sense $C$ contains more information than $D$. To address that imbalance, consider the $(n+1)\times (n+1)$ matrix $G=(g_{i,j})$. For $1\leq i,j \leq n$, we set $g_{i,j}=d_{i,j}$. For $1\leq i \leq n$, we set $g_{i,n+1}=g_{n+1,i}=\overline{x_i}$. Finally, $g_{n+1,n+1}=1$.
Note that all the entries of $G$ lie within $[-1,1]$. Set $$H=f(G)$$
If $H$ is a positive semidefinite matrix, then the following algorithm will produce a Bernoulli vector with covariance matrix $C$:
1. Produce a sample $(a_1,...,a_{n+1})$ from a Gaussian random variable with covariance matrix $H$.
2. Threshold the vector by setting $b_i=sign(a_i)$.
3. Set $c_i=b_i b_{n+1}$ for $i=1,...,n$.
4. Return $(c_1,...c_n)$.
Why does this work? Using a Cholesky decomposition, if $H$ is rank $r$, we can write $H=J^TJ$, where $J$ is an $r\times n$ matrix. Since the main diagonal of $H$ is all ones, each column of $J$ is a vector that lies on the unit sphere in $R^r$.
Consider two columns, $s,t$, lying on the unit sphere in $R^r$. Select a random $v$ (as an i.i.d. normal random vector). Let $\hat{s}=sign(v \cdot s)$, and $\hat{t}=sign(v \cdot t)$. In other words, if we put a hyperplane perpendicular to $v$ through the origin and split the sphere into two halves, $\hat{s}=1$ if it lies in the same half as $v$, and -1 otherwise.
Consider the two-dimensional plane spanned by $s$ and $t$. The vectors lie on the unit circle. The hyperplane divides the circle into two halves. If the angle between $s$ and $t$ is $\theta=\arccos(s \cdot t)$, then the covariance between $\hat{s}$ and $\hat{t}$ is (noting that $\mathbb{E}[\hat{s}]=\mathbb{E}[\hat{t}]=0)$: $$\mathbb{E}[\hat{s}\hat{t}]$$ which is the probability that they lie on the same side of the circle $(1-\theta/\pi)$ minus the probability that they lie on opposite sides $(\theta/\pi)$, which is: $$=1-\frac{2}{\pi}\theta$$ $$=\frac{2}{\pi}\arcsin(s \cdot t)=f^{-1}(s\cdot t)$$
Some steps of the algorithm may make more sense now:
• The $f()$ function tells us what Gaussian covariance is necessary to map to the target covariance on the Bernoulli random variables.
• With a little squinting, you can recognize that the business with $v$ and the hyperplane is the same as selecting a Gaussian random vector and thresholding it.
The algorithm above will produce $(b_1,...,b_{n+1})$ with the second moment matrix given by $G$. Unfortunately, $\mathbb{E}[b_i]=0$ for all $i$. To recover our target mean values $\overline{x_i}$, we use $b_{n+1}$. Letting $c_i=b_ib_{n+1}$, note that the covariance of $(c_i,c_j)$ is the same as $(b_i,b_j)$. However, the mean of $c_i$ has shifted to $\overline{x_i}$.
# Other notes
Because of paywalls, I haven't been able to follow V.C.'s references yet, but the presence of the arcsine above makes me suspicious that I'm reinventing those wheels. The definition of Van Vleck's arcsine law given here seems related but a little different from what we need. When I manage to track down more of V.C.'s references I'll update this post.
If we are given the second moment matrix $D$ directly, we do not need to make any assumptions about knowing the $\overline{x_i}$; we only use those values to convert from $C$ to $D$.
If the main diagonal of the covariance matrix is identically one, that implies that $\overline{x_i}=1$ or all $i$, so in that case we can deduce all the $\overline{x_i}$.
The extra row and column in $G$ is a bit of a hack; it seems like it should be possible to incorporate the mean more directly and perhaps apply to a wider class of covariance matrices.
Rather than guessing all $2^n$ possibilities for the signs of the $\overline{x}_i$, we could also express this problem as a semidefinite program with a rank 1 constraint, and remove the rank 1 constraint to make the problem tractable. However, I don't know any guarantees for how well that method would work.
I suspect that a polynomial time solution for all $C$ would probably solve MAX-CUT (and hence imply that $P=NP$). So such an algorithm is unlikely to exist.
This post will essentially be a list of literature pointers, and as such does not have much mathematical content as bibliographical. This problem has drawn some interest in the past; the best method I am aware of for this task is the "arcsine law", which has a very long story, starting from the 1966 article:
Van Vleck, J. H., & Middleton, D. (1966). The spectrum of clipped noise. Proceedings of the IEEE, 54(1), 2-19.
and, e.g., reused in the synthesis of binary textures nearly 20 years ago:
Jacovitti, G., Neri, A., & Scarano, G. (1998). Texture synthesis-by-analysis with hard-limited Gaussian processes. Image Processing, IEEE Transactions on, 7(11), 1615-1621.
I can provide you at least two more recent references from an author I know well:
Caprara, A., Furini, F., Lodi, A., Mangia, M., Rovatti, R., & Setti, G. (2014). Generation of antipodal random vectors with prescribed non-stationary 2-nd order statistics. Signal Processing, IEEE Transactions on, 62(6), 1603-1612.
Rovatti, R., Mazzini, G., Setti, G., & Vitali, S. (2008, May). Linear probability feedback processes. In Circuits and Systems, 2008. ISCAS 2008. IEEE International Symposium on (pp. 548-551). IEEE.
From my experience this sequence synthesis method is not capable of synthesising all possible correlation matrices; in fact, quite obviously all Bernoulli random vectors are forced to have unit non-central second moment, i.e., any random vector ${\bf x}$ defined over the configuration space $\Omega = \{-1,1\}^n$ is so that $\forall i \in [n], \mathbb{E}[x^2_i] = 1$ (straightforward, by applying the definition of expectation and using conditioning over $x_j : j \neq i$). So for sure you will not be able to design such a sequence with arbitrary correlation.
Given $C$, the task is to either generate a Bernoulli random vector with covariance matrix $C$ or report that it's impossible to do so. What follows is a 2-step solution:
Step 1: Find consistent marginal and pairwise probabilities.
Following Robert Israel, we seek probabilities $\{p_i\}$ and $\{p_{ij}\}$ which are consistent with $C$, that is:
(1) $p_i(1-p_i)=C_{ii}$ for every $i$
(2) $\min\{p_i,p_j\}\geq C_{ij}+p_ip_j\geq\max\{0,p_i+p_j-1\}$ for every $i,j$ with $i\neq j$
(3) $p_{ij}=C_{ij}+p_ip_j$ for every $i,j$ with $i\neq j$
If no consistent probabilities exist, we report "impossible." To find the probabilities, Robert suggests you consider each of the $2^n$ possibilities for $\{p_i\}$ that satisfy (1) until you find one that satisfies (2), and then select $\{p_{ij}\}$ according to (3). In practice, it may be better to attempt locally minimizing
$$\sum_{i=1}^n\big(p_i(1-p_i)-C_{ii}\big)^2 \quad\mbox{subject to}\quad (2)$$
Presumably, it would be good to somehow initialize $p_i$ in terms of $C_{ii}$. Assuming consistency, you know the value of this program is 0, so you can re-initialize if you find a suboptimal local minimum. I haven't attempted to run this, so I can't speak to it's performance.
Step 2: Simulate random variables from marginal and pairwise probabilities.
Here, the big idea is to produce characteristic functions $\chi_i\colon \Omega\rightarrow\{0,1\}$ such that $\|\chi_i\|^2=p_i$ for each $i$ and $\langle \chi_i,\chi_j\rangle =p_{ij}$ for each $i,j$ with $i\neq j$. Then if we draw $\omega$ uniformly from $\Omega$, $\{\chi_i(\omega)\}$ will be the desired random vector. One may attempt to find such characteristic functions by seeking a non-negative matrix factorization of the matrix $V$ defined by $V_{ii}=p_i$ and $V_{ij}=p_{ij}$.
• For $n$ which is not too large, step $1$ can be handled by treating things as a $2$-satisfiability problem (stackoverflow.com/questions/1663104/… ): The two values of each $p_i$ correspond to $x_i$ and $\neg x_i$, and there's a clause for each pair of values violating (2) (e.g. if the values corresponding to $x_i$ and $\neg x_j$ contradict (2), you must have $(\neg x_i$ OR $x_j)$ ). – Kevin P. Costello Aug 30 '15 at 7:04
I'm assuming "Bernoulli vector" here means a vector with all entries in $\{0,1\}$. Thus we have the configuration space $\Omega = \{0,1\}^n$.
For example, let's try the case $n=2$. The covariance matrix is $$C = \pmatrix{ p_1 (1-p_1) & p_{12} - p_1 p_2\cr p_{12} - p_1 p_2 & p_2 (1-p_2)\cr}$$ where $p_1 = P(X_1 = 1)$, $p_2 = P(X_2 = 1)$, $p_{12} = P(X_1 = 1, X_2 = 1)$. We have $$\min(p_1, p_2) \ge p_{12} \ge \max(0, p_1 + p_2 - 1)$$ Given $C_{11}$ with $0 \le C_{11} \le 1/4$ there are two possible values of $p_1$ with $C_11 = p_1 (1-p_1)$, and similarly for $C_{22}$ and $p_2$. For each of the four resulting pairs $(p_1, p_2)$ we can then check whether $$\min(p_1, p_2) \ge C_{12} + p_1 p_2\ge \max(0, p_1 + p_2 - 1)$$ (but by symmetry, only two of the four need to be checked).
The Bernoulli distribution is characterized by a single parameter, $p$. If the covariance matrix $C$ is valid, it will completely characterize the distributions of $n$ random variables and thus it is possible to generate Bernoulli random vectors.
The following are the conditions for $C$ to be a valid covariance matrix:
• It should be a diagonal matirx.
• All the diagonal elements must be of the form $p*(1-p) \, \forall \, p \, \epsilon [0,1]$.
Your function does not need an input of sample Bernoulli random vectors.
• Covariance matrices don't have to be diagonal. They do have to be positive semidefinite. – Robert Israel Jun 30 '15 at 16:46
• Agreed. Only if every pair of random variables in the vector are mutually independent, the covariance matrix will be diagonal. – Sriharsha Madala Jun 30 '15 at 21:18
|
2021-03-05 11:09:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9036314487457275, "perplexity": 244.87660208507683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178370752.61/warc/CC-MAIN-20210305091526-20210305121526-00074.warc.gz"}
|
http://mathhelpforum.com/geometry/83332-set-incentres.html
|
# Math Help - Set of incentres
1. ## Set of incentres
Points: $A, B$ belong to the circle $k$. Find the set of the incentres of all triangles $ABP$ where $P \in k$.
I assume the set is an ellipse (excluding the points A and B). If so, how can I prove that and is it ever possible to find its equation?
2. Originally Posted by pinkparrot
Points: $A, B$ belong to the circle $k$. Find the set of the incentres of all triangles $ABP$ where $P \in k$.
I assume the set is an ellipse (excluding the points A and B). If so, how can I prove that and is it ever possible to find its equation?
I've sketched the circle k (black) and 3 triangles (blue) with their incenters.
The locus of all incenters if P is moving on the circle line is sketched in red.
I cann't provide you with an equation of the curve.
3. Thanks for the graph, it now turns out that it's not an ellipse at all... Could be 2 parabolae or arcs instead. Any help on defining the curves and, if possible, giving their equations would be appreciated
|
2015-05-05 15:47:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8398674130439758, "perplexity": 352.2120302217383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430456773322.93/warc/CC-MAIN-20150501050613-00047-ip-10-235-10-82.ec2.internal.warc.gz"}
|
http://nlab-pages.s3.us-east-2.amazonaws.com/nlab/show/ordinary+homology+spectra+split
|
# nLab ordinary homology spectra split
cohomology
### Theorems
#### Stable Homotopy theory
stable homotopy theory
Introduction
# Contents
#### Higher algebra
higher algebra
universal algebra
# Contents
## Statement
For $S$ any spectrum and $H A$ an Eilenberg-MacLane spectrum, then the smash product $S\wedge H A$ (the $A$-ordinary homology spectrum) is non-canonically equivalent to a product of EM-spectra (hence a wedge sum of EM-spectra in the finite case).
A variant for generalized (Eilenberg-Steenrod) cohomology:
Let $X$ be a topological space such that each of the ordinary homology groups $H_n(X,\mathbb{Z})$ is a free abelian group on genrators $\{h_{\alpha,n}\}_{\alpha \in B_n}$. Write $c_{\alpha,n} \in H^n(X,\mathbb{Z}) \simeq Hom(H_n(X,\mathbb{Z}), \mathbb{Z})$ for the corresponding dual basis.
Let $E$ be a multiplicative cohomology theory and write $h'_{n,\alpha} \in (\tau_{\leq 0} E)_n(X)$ and $c'_{n,\aloha} \in (\tau_{\leq 0} E)^n(X)$for the images of these generators under the 0-truncated unit map
$H \mathbb{Z} \simeq \tau_{\leq 0} \mathbb{S} \stackrel{}{\longrightarrow} \tau_{\leq 0} E \,.$
###### Proposition
If one of the following conditions is satisfied
• Each $h'_{n,\alpha}$ lifts through $E_n(X) \to (\tau_{\leq 0}E)_n(X)$;
• each $H_n(X,\mathbb{Z})$ is finitely generated and each $c'_{n,\alpha}$ lifts through $E^n(X) \to (\tau_{\leq 0}E)^n(X)$,
then there are non-canonical equivalences as follows:
1. $E \wedge \Sigma^\infty X_+ \simeq \underset{n,\alpha}{\vee} \Sigma^n E \;\; \in E Mod$ ;
2. $[X,E] \simeq \underset{n,\alpha}{\prod} \Sigma^{-n} E$;
3. $E_\bullet(X) \simeq \pi_\bullet(E) \otimes H_\bullet(X,\mathbb{Z})$
and
$E^\bullet(X)\simeq Hom(H_\bullet(X), \pi_\bullet(E))$
|
2022-01-28 23:15:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 21, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9698820114135742, "perplexity": 1514.1784526343145}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306346.64/warc/CC-MAIN-20220128212503-20220129002503-00234.warc.gz"}
|
https://zbmath.org/?q=an%3A1200.33020
|
# zbMATH — the first resource for mathematics
A determinantal approach to Appell polynomials. (English) Zbl 1200.33020
Families of Appell polynomials $$(A_n(x))_{n\geq 0}$$ satisfy $$\frac{\mathrm{d} A_n(x)}{\mathrm{d} x}=n A_{n-1}(x)$$ for $$n\geq 1$$. In the present paper a way is presented to express Appell polynomials as certain determinants. From this representation (and by using some linear algebra) the authors get several general properties of Appell polynomials such as recurrences, addition and multiplication theorems, forward differences and symmetry properties. An automated way is presented how to get the coefficients of these polynomials. In the final section, some classical examples of Appell polynomials are revisited.
##### MSC:
33C65 Appell, Horn and Lauricella functions 33C45 Orthogonal polynomials and functions of hypergeometric type (Jacobi, Laguerre, Hermite, Askey scheme, etc.) 11B83 Special sequences and polynomials 65F40 Numerical computation of determinants
##### Keywords:
determinant; polynomials; Appell polynomials
mctoolbox
Full Text:
##### References:
[1] Appell, P., Sur une classe de polynomes, Annales scientifique de l’E.N.S., s. 2, 9, 119-144, (1880) · JFM 12.0342.02 [2] J. Bernoulli, Ars Conjectandi, Basel, 1713, p. 97. [3] Jordan, C., Calculus of finite differences, (1965), Chelsea Pub. Co. New York · Zbl 0154.33901 [4] Euler, L., Methodus generalis summandi progressiones, Commentarii academiae scientiarum petropolitanae, 6, (1738) [5] Boas, R.P., Polynimial expansions of analytic functions, (1964), Springer-Verlag Berlin, Gottingen, Heidelberg [6] Bretti, G.; Cesarano, C.; Ricci, P.E., Laguerre-type exponentials and generalized Appell polynomials, Computers and mathematics with applications, 48, 833-839, (2004) · Zbl 1072.33010 [7] Bretti, G.; Natalini, P.; Ricci, P.E., Generalizations of the Bernoulli and Appell polynomials, Abstract and applied analysis, 7, 613-623, (2004) · Zbl 1072.33005 [8] Sheffer, I.M., A differential equation for Appell polynomials, American mathematical society. bulletin, 41, 914-923, (1935) · Zbl 0013.16602 [9] Shohat, J., The relation of the classical orthogonal polynomials to the polynomials of Appell, Americal journal of mathematics, 58, 453-464, (1936) · Zbl 0014.30802 [10] Roman, S., The umbral calculus, (1984), Academic Press New York · Zbl 0536.33001 [11] Douak, K., The relation of the d-orthogonal polynomials to the Appell polynomials, Journal of computation and applied mathematics, 70, 279-295, (1996) · Zbl 0863.33007 [12] He, M.X.; Ricci, P.E., Differential equation of Appell polynomials via the factorization method, Journal of computational and applied mathematics, 139, 231-237, (2002) · Zbl 0994.33008 [13] Dattoli, G.; Ricci, P.E.; Cesarano, C., Differential equations for Appell type polynomials, Fractional calculus & applied analysis, 5, 69-75, (2002) · Zbl 1040.33007 [14] Bretti, G.; He, M.X.; Ricci, P.E., On quadrature rules associated with Appell polynomials, International journal of applied mathematics, 11, 1-14, (2002) · Zbl 1041.65025 [15] Ismail, M.E.H., Remarks on differential equation of Appell polynomials via the factorization method, Journal of computational and applied mathematics, 154, 243-245, (2003) [16] Bretti, G.; Ricci, P.E., Multidimensional extensions of the Bernoulli and Appell polynomials, Taiwanese journal of mathematics, 8, 3, 415-428, (2004) · Zbl 1084.33007 [17] Costabile, F.; Dell’Accio, F.; Gualtieri, M.I., A new approach to Bernoulli polynomials, Rendiconti di matematica, S VII, 26, 1-12, (2006) · Zbl 1105.11002 [18] Highman, N.H., Accuracy and stability of numerical algorithms, (1996), SIAM Philadelphia [19] Abramowitz, M.; Stegun, I., Handbook of mathematical functions, (1964), Dover New York · Zbl 0171.38503
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2021-04-22 01:42:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7678797245025635, "perplexity": 4938.408124776423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039560245.87/warc/CC-MAIN-20210422013104-20210422043104-00437.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-6th-edition/chapter-4-exponential-and-logarithmic-functions-exercise-set-4-3-page-477/95
|
# Chapter 4 - Exponential and Logarithmic Functions - Exercise Set 4.3 - Page 477: 95
True
#### Work Step by Step
On the LHS, the term $\ln 1$ equals zero, ( a basic logarithmic property) so the problem statement equation is equivalent to $\ln(5x)+0=\ln(5x)$ which is true.
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
2018-09-20 21:03:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8238304853439331, "perplexity": 1125.7665452883566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156613.38/warc/CC-MAIN-20180920195131-20180920215531-00109.warc.gz"}
|
https://eprint.iacr.org/2013/523
|
White-Box Security Notions for Symmetric Encryption Schemes
Cécile Delerablée, Tancrède Lepoint, Pascal Paillier, and Matthieu Rivain
Abstract
White-box cryptography has attracted a growing interest from researchers in the last decade. Several white-box implementations of standard block-ciphers (DES, AES) have been proposed but they have all been broken. On the other hand, neither evidence of existence nor proofs of impossibility have been provided for this particular setting. This might be in part because it is still quite unclear what {white-box} cryptography really aims to achieve and which security properties are expected from white-box programs in applications. This paper builds a first step towards a practical answer to this question by translating folklore intuitions behind white-box cryptography into concrete security notions. Specifically, we introduce the notion of white-box compiler that turns a symmetric encryption scheme into randomized white-box programs, and we capture several desired security properties such as one-wayness, incompressibility and traceability for white-box programs. We also give concrete examples of white-box compilers that already achieve some of these notions. Overall, our results open new perspectives on the design of white-box programs that securely implement symmetric encryption.
Available format(s)
Publication info
Published elsewhere. MINOR revision.SAC 2013
Keywords
White-Box CryptographySecurity NotionsAttack ModelsSecurity GamesTraitor tracing
Contact author(s)
matthieu rivain @ gmail com
History
Short URL
https://ia.cr/2013/523
CC BY
BibTeX
@misc{cryptoeprint:2013/523,
author = {Cécile Delerablée and Tancrède Lepoint and Pascal Paillier and Matthieu Rivain},
title = {White-Box Security Notions for Symmetric Encryption Schemes},
howpublished = {Cryptology ePrint Archive, Paper 2013/523},
year = {2013},
note = {\url{https://eprint.iacr.org/2013/523}},
url = {https://eprint.iacr.org/2013/523}
}
Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content.
|
2023-02-06 23:17:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22417646646499634, "perplexity": 6854.408300001047}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00067.warc.gz"}
|
https://www.qb365.in/materials/stateboard/11th-business-maths-correlation-and-regression-analysis-model-question-paper-2279.html
|
#### Correlation and Regression Analysis Model Question Paper
11th Standard
Reg.No. :
•
•
•
•
•
•
Time : 01:30:00 Hrs
Total Marks : 50
10 x 1 = 10
1. Example for positive correlation is
(a)
Income and expenditure
(b)
Price and demand
(c)
Repayment period and EMI
(d)
Weight and Income
2. Correlation co-efficient lies between
(a)
0 to ∞
(b)
-1 to +1
(c)
-1 to 0
(d)
-1 to ∞
3. The correlation coefficient from the following data N=25, ΣX=125, ΣY=100, ΣX2=650, ΣY2=436, ΣXY=520
(a)
0.667
(b)
-0.006
(c)
-0.667
(d)
0.70
4. From the following data, N=11, ΣX=117, ΣY=260, ΣX2=1313, ΣY2=6580,ΣXY=2827 the correlation coefficient is
(a)
0.3566
(b)
-0.3566
(c)
0
(d)
0.4566
5. The variable which influences the values or is used for prediction is called
(a)
Dependent variable
(b)
Independent variable
(c)
Explained variable
(d)
Regressed
6. When one regression coefficient is negative, the other would be
(a)
Negative
(b)
Positive
(c)
Zero
(d)
None of them
7. If regression co-efficient of Y on X is 2, then the regression co-efficient of X on Y is
(a)
$\frac{1}{2}$
(b)
2
(c)
>$\frac{1}{2}$
(d)
1
8. If two variables moves in decreasing direction then the correlation is
(a)
positive
(b)
negative
(c)
perfect negative
(d)
no correlation
9. The term regression was introduced by
(a)
R.A Fisher
(b)
Sir Francis Galton
(c)
Karl Pearson
(d)
Croxton and Cowden
10. Cov(x,y)=–16.5, ${ \sigma }_{ x }^{ 2 }=2.89,{ \sigma }_{ y }^{ 2 }$=100. Find correlation coefficient
(a)
-0.12
(b)
0.001
(c)
-1
(d)
-0.97
11. 2 x 2 = 4
12. From the following data calculate the correlation coefficient Σxy=120, Σx2=90, Σy2=640
13. The following information is given
X(in Rs.) Y(in Rs.) Arithmetic Mean 6 8 Standard Deviation 5 $\frac{40}{3}$
Coefficient of correlation between X and Y is $\frac{8}{15}$ . Find (i) The regression Coefficient of Y on X (ii) The most likely value of Y when X =Rs.100.
14. 7 x 3 = 21
15. Calculate the correlation co-efficient for the following data.
X 5 10 5 11 12 4 3 2 7 1 Y 1 6 2 8 5 1 4 6 5 2
16. The following are the ranks obtained by 10 students in commerce and accountancy are given below.
Commerce 6 4 3 1 2 7 9 8 10 5 Accountancy 4 1 6 7 5 8 10 9 3 2
To what extent is the knowledge of students in the two subjects related?
17. The rank of 10 students of same batch in two subjects A and B are given below. Calculate the rank correlation coefficient.
Rank of A 1 2 3 4 5 6 7 8 9 10 Rank of B 6 7 5 10 3 9 4 1 8 2
18. There are two series of index numbers P for price index and S for stock of the commodity. The mean and standard deviation of P are 100 and 8 and of S are 103 and 4 respectively. The correlation coefficient between the two series is 0.4. With these data obtain the regression lines of P on S and S on P.
19. Calculate the correlation coefficient from the data given below:
X 1 2 3 4 5 6 7 8 9 Y 9 8 10 12 11 13 14 16 15
20. The following data pertains to the marks in subjects A and B in a certain examination. Mean marks in A = 39.5, Mean marks in B= 47.5 standard deviation of marks in A =10.8 and Standard deviation of marks in B= 16.8. coefficient of correlation between marks in A and marks in B is 0.42. Give the estimate of marks in B for candidate who secured 52 marks in A.
21. X and Y are a pair of correlated variables. Ten observations of their values (X, Y) have the following results. ΣX=55, ΣXY=350, ΣX2 =385, ΣY=55, Predict the value of y when the value of X is 6.
22. 3 x 5 = 15
23. Find coefficient of correlation for the following:
Cost(Rs) 14 19 24 21 26 22 15 20 19 Sales(Rs) 31 36 48 37 50 45 33 41 39
24. The following are the ranks obtained by 10 students in Statistics and Mathematics.
Statistics 1 2 3 4 5 6 7 8 9 10 Mathematics 1 4 2 5 3 9 7 10 6 8
Find the rank correlation coefficient.
25. The two regression lines are 3X+2Y=26 and 6X+3Y=3l. Find the correlation coefficient.
|
2020-02-29 08:02:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.672939658164978, "perplexity": 705.3788401945401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148671.99/warc/CC-MAIN-20200229053151-20200229083151-00320.warc.gz"}
|
https://www.acmicpc.net/problem/14834
|
시간 제한 메모리 제한 제출 정답 맞은 사람 정답 비율
5 초 512 MB 1 1 1 100.000%
## 문제
Here at Code Jam, we love to play a game called "Operation". (No, it has nothing to do with surgery; why would you think that?) The game is played with cards, each card is labeled with a basic arithmetic operation (addition, subtraction, multiplication or division) Oi and an integer right operand Vi for that operation. For example, a card might say + 0, or - -2, or / -4 — note that operands can be negative or zero, although a card with a division operation will never have 0 as an operand.
In each round of the game, a starting integer value S is chosen, and a set of C cards is laid out. The player must choose an order for the cards, using each card exactly once. After that, the operations are applied, in order, to the starting value S, and a final result is obtained.
Although all of the operands on the cards are integers, the operations are executed on rational numbers. For instance, suppose that the initial value is 5, and the cards are + 1- 2* 3, and / -2. If we put them in the order given above, the final result is (5 + 1 - 2) * 3 / (-2) = -6. Notice that the operations are performed in the order given by the cards, disregarding any operator precedence. On the other hand, if we choose the order - 2/ -2+ 1* 3, the result is ((5 - 2) / (-2) + 1) * 3 = -3 / 2. That example turns out to be the maximum possible value for this set of cards.
Given a set of cards, can you figure out the maximum possible final value that can be obtained? Please give the result as an irreducible fraction with a positive denominator.
## 입력
The first line of the input gives the number of test cases, TT test cases follow. Each case begins with one line with two integers S and C: the starting value for the game, and the number of cards. Then, C lines follow. The i-th of these lines represents one card, and contains one character Oi representing the operation (which is either +-*, or /) and one integer Vi representing the operand.
Limits
• 1 ≤ T ≤ 100.
• -1,000 ≤ S ≤ 1,000.
• Oi is one of +-*, or /, for all i.
• -1,000 ≤ Vi ≤ 1,000, for all i.
• If Oi = /, then Vi ≠ 0, for all i.
• 1 ≤ C ≤ 1000.
## 출력
For each test case, output one line containing Case #x: y z, where x is the test case number (starting from 1), and y and z are integers such that y/z is the maximum possible final value of the game, y and z do not have common divisors other than 1 and -1, and zis strictly greater than 0.
## 예제 입력 1
5
1 2
- 3
* 2
5 4
+ 1
- 2
* 3
/ -2
1000 7
* -1000
* -1000
* 1000
* 1000
* 1000
* 1000
* 1000
-1 3
- -1
* 0
/ -1
0 1
+ 0
## 예제 출력 1
Case #1: -1 1
Case #2: -3 2
Case #3: 1000000000000000000000000 1
Case #4: 1 1
Case #5: 0 1
## 힌트
In Sample Case #1, the optimal strategy is to play the * 2 card before the - 3 card, which yields a result of -1. The unique rational expression of this as specified in the problem is -1 1.
Sample Case #2 is the one described in the third paragraph of the problem statement.
In Sample Case #3, we get the same answer regardless of the order in which we use the cards. Notice that the numerator of the answer is too large to fit in 64-bit integer.
In Sample Case #4, the largest result we can achieve is 1. One way is: / -1* 0- -1.
In Sample Case #5, note that the only valid representation of the answer is 0 10 2 is invalid because it can be reduced. 0 -1 is invalid because the denominator must be positive.
## 채점
• 예제는 채점하지 않는다.
|
2019-02-20 03:54:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37608593702316284, "perplexity": 1032.1762450690028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494424.70/warc/CC-MAIN-20190220024254-20190220050254-00119.warc.gz"}
|
https://www.gamedev.net/forums/topic/515449-drawing-a-betteror-more-efficient-sphere/
|
Public Group
# Drawing a better(or more efficient) Sphere
This topic is 3673 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
i've been working on procedurally generating a sphere, which i have done, but I took a short cut, and i'd liek to figure out a less hacky way to do it, here's an example of the code
D3DXVECTOR4 Temp;
D3DXMATRIX rot;
for(int j =180;j >= 0; j -= 12)
{
for(int i = 360;i>=0;i-= 12 )
{
CircleList.push_back(PCVertex(D3DXVECTOR3
(Temp.x,Temp.y,Temp.z),D3DCOLOR_XRGB(255,255,255)));
}
}
basiclly i plot a series of circles, proforming a rotation to every vertex, but thats a lot of calculations for every vertex does anyone see know of a simpler technique for logically rotating the circle around the Y axis? thanks
##### Share on other sites
Why not use the function D3DXCreateSphere(...) to make a sphere mesh?
##### Share on other sites
because then I wouldn't learn anything :) I figure computational geometry is an important aspect in computer graphics, and the more practice and experience I get, the better off I am
##### Share on other sites
Well if you insist on knowing things! ;)
First off, don't bother with all the matrix transform math just to place a vertex - use some trigonometry to work out the position from your two angle loop variables. If you're stuck, think of a slice through the sphere as a circle with radius of cos(latitude) (latitude being the angle from the 'equator' of course) and then use the 'longitude' to place each vertex on that ring. But given that this is probably an 'offline' operation (i.e. you're not regenerating the sphere each frame I assume?) the efficiency probably isn't a massive issue.
More problematic is that you're creating degenerate vertices at the poles. Consider the poles as a special case and just generate one vertex for them, then connect that vertex to the first proper 'ring' of the sphere using a triangle-fan arrangement. Determining correct index buffer ordering might take some thought.
More generally though, while I agree it's a useful exercise to create procedural geometry I wouldn't get too caught up in it without good reason. An equally valid learning exercise would be to import data using a standard file format created by a 3D modelling package.
##### Share on other sites
thanks for the help, I knew it would be a trig problem, I'm still getting a handle on 3D geometry so it's hard to visualiz, and no I wouldn't be updating it.
I've also been working on pulling geometry from 3D File's(I wrote an MD2 loader, i'll probobly start on an MD3 or MD5 loader soon enough), as well as getting some practice ripping data from a D3DXMESH through Frank Luna's 2nd Ed. directX9 book.
I'm gonna look at it in the way you suggested, and see if I can pick up on it, and I've resigned myself to the fact that generating indices is gonna be a challenge, but a challenge is the path to growth.
Thanks for the help
1. 1
2. 2
3. 3
Rutin
15
4. 4
5. 5
• 9
• 9
• 11
• 11
• 23
• ### Forum Statistics
• Total Topics
633677
• Total Posts
3013284
×
|
2018-12-12 01:13:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23879481852054596, "perplexity": 1614.2136329014984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823710.44/warc/CC-MAIN-20181212000955-20181212022455-00425.warc.gz"}
|
https://www.gerad.ca/fr/papers/G-88-21
|
Groupe d’études et de recherche en analyse des décisions
# An Optimal Control Problem With a Random Stopping Time
## El-Kébir Boukas, Alain Haurie et Philippe Michel
This paper deals with a stochastic optimal control problem where the randomness is essentially concentrated in the stopping time terminating the process. If the stopping time is characterized by an intensity depending on the state and control variables one can reformulate the problem equivalently as an infinite horizon optimal control problem. Applying dynamic programming and minimum principle techniques to this associated deterministic control problem yields specific optimality conditions for the original stochastic control problem. It is also possible to characterize extremal steady states. The model is illustrated by an example related to the economics of technological innovation.
, 13 pages
|
2021-03-03 08:14:56
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.809958279132843, "perplexity": 486.5918189299001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366477.52/warc/CC-MAIN-20210303073439-20210303103439-00590.warc.gz"}
|
http://mathhelpforum.com/number-theory/101228-number-theory-primes.html
|
# Math Help - Number theory (Primes)
1. ## Number theory (Primes)
*Determine whether the following assertions are true or false. if true, prove the result, and if false, give a counter example
3. If b/(a^2-1) then b/(a^4-1).
2. $a^4-1=(a^2-1)(a^2+1)$
3. ## pls can u elaborate it
bruno can u pls elaborate it. i don't get it..is it false or true? pls
4. If $m|a$, does $m|ab$ for every $b$? What does " $m|a$" mean?
|
2014-10-01 05:58:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48779675364494324, "perplexity": 1981.4051367318807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663365.9/warc/CC-MAIN-20140930004103-00296-ip-10-234-18-248.ec2.internal.warc.gz"}
|
https://answers.opencv.org/answers/7883/revisions/
|
# Revision history [back]
To configure the NDK I used the general Android NDK documentation. I downloaded it from here. Then I have done the following:
unzip
sudo mv android-ndk-r8d /usr/local
On Eclipse side (without OpenCV): importing ndk/samples/hello-jni into workspace. Run does not work because the jni compilation has not been performed.
cd /usr/local/android-ndk-r8d/samples/hello-jni
../../ndk-build
It creates the libs/armeabi/libhello-jni.so. After that it was working on my tablet and in the emulator as well.
To make an OpenCV native project (tutorial-3-native) work I have set NDKROOT (export NDKROOT=/usr/local/android-ndk-r8d/) and deleted .cmd as extension from ndk-build in the configuration panel.
The whole building process can be performed from eclipse and the project was workin on the tablet.
To configure the NDK I used the general Android NDK documentation. I downloaded it from here. Then I have done the following:
unzip
sudo mv android-ndk-r8d /usr/local
On Eclipse side (without OpenCV): importing ndk/samples/hello-jni into workspace. Run does not work because the jni compilation has not been performed.
cd /usr/local/android-ndk-r8d/samples/hello-jni
../../ndk-build
It creates the libs/armeabi/libhello-jni.so. After that it was working on my tablet and in the emulator as well.
To make an OpenCV native project (tutorial-3-native) work I have set NDKROOT (export NDKROOT=/usr/local/android-ndk-r8d/) NDKROOT (export NDKROOT=/usr/local/android-ndk-r8d/) and deleted .cmd as extension from ndk-build in the configuration panel.
The whole building process can be performed from eclipse and the project was workin on the tablet.
|
2021-07-26 02:08:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38212162256240845, "perplexity": 10558.939857686864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151972.40/warc/CC-MAIN-20210726000859-20210726030859-00653.warc.gz"}
|
https://eng.libretexts.org/Core/Chemical_Engineering/Fluid_Mechanics_(Bar-Meir)/12%3A_Compressible_Flow_2%E2%80%93Dimensional/12.2%3A_Oblique_Shock
|
# 12.2: Oblique Shock
The shock occurs in reality in situations where the shock has three–dimensional effects. The three–dimensional effects of the shock make it appear as a curved plane. However, one–dimensional shock can be considered a representation for a chosen arbitrary accuracy with a specific small area. In such a case, the change of the orientation makes the shock considerations two–dimensional. Alternately, using an infinite (or a two–dimensional) object produces a two–dimensional shock. The two–dimensional effects occur when the flow is affected from the side,'' i.e., the change is in the flow direction. An example of such case is creation of shock from the side by deflection shown in Figure . To match the boundary conditions, the flow turns after the shock to be parallel to the inclination angle schematicly shown in Figure 12.3. The deflection angle, $$\delta$$, is the direction of the flow after the shock (parallel to the wall). The normal shock analysis dictates that after the shock, the flow is always subsonic. The total flow after the oblique shock can also be supersonic, which depends on the boundary layer and the deflection angle. The velocity has two components (with respect to the shock plane/surface). Only the oblique shock's normal component undergoes the shock.'' The tangent component does not change because it does not move'' across the shock line. Hence, the mass balance reads
$\rho_1\, {U_1}_n = \rho_2\, {U_2}_n \label{2Dgd:eq:Omass} \tag{1}$
$P_1 + \rho_1 \, ParseError: EOF expected (click for details) Callstack: at (Core/Chemical_Engineering/Fluid_Mechanics_(Bar-Meir)/12:_Compressible_Flow_2–Dimensional/12.2:_Oblique_Shock), /content/body/p[4]/span, line 1, column 4 \label{2Dgd:eq:OM1n} \tag{7}$
and in the downstream side reads
$\sin (\theta - \delta ) = \dfrac ParseError: EOF expected (click for details) Callstack: at (Core/Chemical_Engineering/Fluid_Mechanics_(Bar-Meir)/12:_Compressible_Flow_2–Dimensional/12.2:_Oblique_Shock), /content/body/p[6]/span[1], line 1, column 4 \label{2Dgd:eq:OM2n} \tag{8}$
Equation (8) alternatively also can be expressed as
$\cos \theta = \dfrac ParseError: EOF expected (click for details) Callstack: at (Core/Chemical_Engineering/Fluid_Mechanics_(Bar-Meir)/12:_Compressible_Flow_2–Dimensional/12.2:_Oblique_Shock), /content/body/p[6]/span[2], line 1, column 4 \label{2Dgd:eq:OM1t} \tag{9}$
And equation (9) alternatively also can be expressed as
$\cos\, \left(\theta - \delta \right) = \dfrac ParseError: EOF expected (click for details) Callstack: at (Core/Chemical_Engineering/Fluid_Mechanics_(Bar-Meir)/12:_Compressible_Flow_2–Dimensional/12.2:_Oblique_Shock), /content/body/p[6]/span[3], line 1, column 4 \label{2Dgd:eq:OM2t} \tag{10}$
The total energy across a stationary oblique shock wave is constant, and it follows that the speed of sound is constant across the (oblique) shock. It should be noted that although, $${U_1}_t = {U_2}_t$$ the Mach number is $${M_1}_t \neq {M_2}_t$$ because the temperatures on both sides of the shock are different, $$T_1 \neq T_2$$. As opposed to the normal shock, here angles (the second dimension) have to be determined. (8) through (??), is a function of four unknowns of $$M_1$$, $$M_2$$, $$\theta$$, and $$\delta$$. Rearranging this set utilizing geometrical identities such as $$\sin\alpha = 2\sin\alpha\cos\alpha$$ results in
Angle Relationship
$\label {2Dgd:eq:Osol} \tan \delta = 2\, \cot \theta\, \left[\dfrac{{M_1}^{2}\, \sin^2 \theta - 1 }{ {M_1}^{2} \, \left(k + \cos\, 2 \theta \right) +2 }\right] \tag{11}$
The relationship between the properties can be determined by substituting $$M_1 \sin \theta$$ for of $$M_1$$ into the normal shock relationship, which results in
Pressure Ratio
$\label{2Dgd:eq:OPbar} \dfrac{P_2 }{ P_1} = \dfrac{2\,k\, {M_1 }^{2} \sin^2 \theta - (k-1) }{ k + 1} \tag{12}$
The density and normal velocity ratio can be determined by the following equation
Density Ratio
$\label{2Dgd:eq:OrhoBar} \dfrac{\rho_2 }{\rho_1} = \dfrac{{U_1}_n }{ {U_2}_n} = \dfrac{ (k+1) {M_1}^{2} \sin^2\theta} {(k-1) {M_1}^2 \sin^2\theta + 2} \tag{13}$
The temperature ratio is expressed as
Temperature Ratio
$\label{2Dgd:eq:OTbar} \dfrac{T_2 }{ T_1} = \dfrac{{2\,k\, {M_1}^2 \sin^2\theta - (k-1) \left[(k-1) {M_1}^2 + 2 \right] } } {{(k+1)^2 \,{M_1}} } \tag{14}$
Prandtl's relation for oblique shock is
$U_{n_1}U_{n_2} = c^{2} - \dfrac{k -1 }{ k+1} \, {U_t}^2 \label{2Dgd:eq:Oprandtl} \tag{15}$
The Rankine-Hugoniot relations are the same as the relationship for the normal shock
$\dfrac{P_2 - P_1 }{ \rho_2 - \rho_1} = k \,\dfrac{ P_2 - P_1 }{ \rho_2 - \rho_1} \label{2Dgd:eq:ORankineHugoniot} \tag{17}$
### Contributors
• Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license.
|
2018-03-21 09:04:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9055308103561401, "perplexity": 798.1735025713871}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647600.49/warc/CC-MAIN-20180321082653-20180321102653-00496.warc.gz"}
|
http://www.mathnet.ru/php/archive.phtml?jrnid=dan&wshow=issue&year=1979&volume=244&volume_alt=&issue=6&issue_alt=&option_lang=eng
|
RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PACKAGE AMSBIB
General information Latest issue Archive Search papers Search references RSS Latest issue Current issues Archive issues What is RSS
Dokl. Akad. Nauk: Year: Volume: Issue: Page: Find
1979, Volume 244, Number 6
MATHEMATICS Asymptotics of the solution of the grid Laplace equation in an angleV. B. Andreev 1289 Some extension theorems in a Sobolev space of infinite order and inhomogeneous boundary value problemsG. S. Balashova 1294 An investigation of the stability of difference schemes on the boundaries of the computational domain by the method of differential approximationsYu. M. Davydov 1298 On the solution of integral equations of the first kind in the class of functions of bounded variationI. F. Dorofeev 1303 Some generalizations of the method of stationary phaseE. K. Isakova 1308 Investigations on the isotropic strict extremum principleL. I. Kamynin, B. N. Khimchenko 1312 Diameters in $L_p$ of classes of continuous and differentiable functions and the optimal reconstruction of functions and their derivativesN. P. Korneichuk 1317 An imbedding theorem for convolutions on a finite interval and its application to integral equations of the first kindB. S. Rubin, G. F. Volodarskaya 1322 The sharp front and singularities of solutions of a class of nonhyperbolic equationsV. B. Tvorogov 1327 CYBERNETICS AND THE REGULATION THEORY Reliability of output signals of multidimensional information apparatusV. V. Petrov, V. A. Tsikalov 1332 ASTRONOMY Numerical modeling of the heat radiation transfer in the Venusian and Martian atmospheresK. Ya. Kondrat'ev, N. I. Moskalenko, F. S. Yakupova 1334 MATHEMATICAL PHYSICS The method of the inverse scattering problem and the quantum nonlinear Schrödinger equationE. K. Sklyanin 1337 PHYSICS Radiation capture of target atom electron by multicharge ionsE. L. Duman, L. I. Men'shikov 1342 “Isotopic” mechanism of the dispersion of liquid crystal permittivity normal componentE. I. Ryumtsev, S. G. Polushin, A. P. Kovshik, T. A. Rotinyan, V. N. Tsvetkov 1344
|
2020-07-02 21:56:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3111110329627991, "perplexity": 1034.3620466509828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655880243.25/warc/CC-MAIN-20200702205206-20200702235206-00544.warc.gz"}
|
http://www.physicsforums.com/showthread.php?t=450906
|
# Finding current from current density of wire
by maherelharake
Tags: current, density, wire
P: 261 1. The problem statement, all variables and given/known data A cylindrical wire of radius 3mm has current density, J=3s |φ-$$\pi$$| z_hat. Find the total current in the wire. 2. Relevant equations 3. The attempt at a solution I believe all I have to do is integrate over the area, but for some reason I can't get it to work. Is the differential area going to be da=s ds dφ? In that case s goes from 0 to 3mm and φ goes from 0 to 2Pi? The 'z' direction is throwing me off a bit. Thanks.
Emeritus Sci Advisor HW Helper Thanks PF Gold P: 11,536 Yes, that's right. Post your work if you still can't get it to work out so we can see where you're going wrong.
P: 261 I just tried to work it out on a napkin, because I am not near a scanner at the moment. I ended up with a net result of 0 though. If you don't think this is correct, I can try to rewrite it and take a picture with my phone and upload it. Thanks again.
Mentor
P: 39,723
## Finding current from current density of wire
Quote by maherelharake I just tried to work it out on a napkin, because I am not near a scanner at the moment. I ended up with a net result of 0 though. If you don't think this is correct, I can try to rewrite it and take a picture with my phone and upload it. Thanks again.
Yes, please upload it. Or you could use the Latex editor in the Advanced Reply window to write out your equations. Click on the $$\Sigma$$ symbol to the right in the toolbar to see your Latex options.
P: 261 Alright here you go. If you can't read it, let me know. Thanks. http://i77.photobucket.com/albums/j7...e/photo-31.jpg
Emeritus Sci Advisor HW Helper Thanks PF Gold P: 11,536 You're not dealing with the absolute value correctly. Break the integral over the angle into two ranges, one from 0 to π and the other from π to 2π. For the first integral, |φ-π|=-(φ-π), and for the other, |φ-π|=φ-π.
P: 261 Hmm ok. How's this look. http://i51.tinypic.com/kf4lrd.jpg
Emeritus Sci Advisor HW Helper Thanks PF Gold P: 11,536 Your integrals look fine, but you made a mistake somewhere evaluating them.
P: 261 Hmm I can't find it. I checked it a few times after I posted it. Did you work it out and get a different result?
Emeritus Sci Advisor HW Helper Thanks PF Gold P: 11,536 I entered it into Mathematica and got a different result. It looks like you messed up the angular integrations in several spot. Every integral should be proportional to π2, but you have π, π2, and π3. You can simplify the algebra a bit by separating the s integral and φ integral: $$I=\int_0^{R} 3s^2ds \int_0^{2\pi} |\varphi-\pi|\,d\varphi$$ and using the substitution u=φ-π to do the angular integrals.
P: 261 I seem to have gotten R3 Pi2where R=3 mm. Am I close? And of course, the answer is in Amps. Thanks.
Emeritus Sci Advisor HW Helper Thanks PF Gold P: 11,536 Yup, that matches what I got.
P: 261 Ok thanks.
Related Discussions General Physics 4 Introductory Physics Homework 12 Introductory Physics Homework 2 Introductory Physics Homework 1 Introductory Physics Homework 5
|
2014-04-25 08:32:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6807471513748169, "perplexity": 950.4686810252969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://www.maa.org/publications/periodicals/convergence/convergence-articles?term_node_tid_depth=All&term_node_tid_depth_2=All&term_node_tid_depth_1=All&page=58&device=desktop
|
# Convergence articles
Displaying 581 - 590 of 655
A collection of articles about historical and contemporary women in mathematics.
I found a stone but did not weigh it; after I added to it 1/7 of its weight and then 1/11 of this new weight, I weighed the total at 1 mina. What was the weight of the stone?
A survey of the use of technology in American mathematics teaching over the past 200 years.
The history of the Poincare conjecture up to its recent proof by Grigori Perelman.
Some ideas on using student reports when you teach a course in the history of mathematics
Find the area of the elliptical segment cut off parallel to the shorter axis;
When knowing the sum of their ages along with another equation, determine how old a father and son are.
William Cook recounts the history of and computational progress on the traveling salesman problem, emphasizing connections within mathematics and with other disciplines.
A cylindrical tin tomato can is to be made which shall have a given capacity. Find what should be the ratio of the height to the radius of the base that the smallest possible amount of tin shall be required.
I owe a man the following notes: one of $800 due May 16; one of$660 due on July 1; one of $940 due Sept. 29. He wishes to exchange them for two notes of$1200 each and wants one to fall due June 1. When should the other be due?
|
2014-10-01 01:15:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30003631114959717, "perplexity": 1119.5373111316092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663218.28/warc/CC-MAIN-20140930004103-00323-ip-10-234-18-248.ec2.internal.warc.gz"}
|
https://brilliant.org/problems/primprob/
|
# Need not for speed, but for Prime
Number Theory Level 3
An integer is randomly selected in the interval between 1 and $$10^{10}$$ inclusive. What is the probability that it is a prime number?
|
2016-10-22 01:54:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7305828332901001, "perplexity": 361.00475235984527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718423.28/warc/CC-MAIN-20161020183838-00547-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-7-section-7-5-strategy-for-integration-7-5-exercises-page-507/8
|
## Calculus: Early Transcendentals 8th Edition
$$\displaystyle\int t\sin{t}\cos{t}\thinspace dt=\frac{\sin{2t}}{8}-\frac{t\cos{2t}}{4}+C$$
Use the half angle trignometric identity $\sin{2\theta}=2\sin{\theta}\cos{\theta}$ to rewrite $\sin{\theta}\cos{\theta}=\frac{\sin{2\theta}}{2}$ $\displaystyle\int t\sin{t}\cos{t}\thinspace dt = \displaystyle\int t\frac{\sin{2t}}{2}\thinspace dt$ Pull the constant outside $\frac{1}{2}\int t\sin{2t}\thinspace dt$ Integration by parts: $uv-\int v\thinspace du$, let $u =t$ and $dv=\sin(2t)dt$ then, $du=dt$ and $v=\big(\frac{1}{2}\big)(-\cos{2t})$ $$=\frac{1}{2}\bigg[t\bigg(\frac{1}{2}\bigg)(-\cos{2t})-\int\bigg(\frac{1}{2}\bigg)(-\cos{2t})\thinspace dt\bigg]$$ $$=\frac{1}{2}\bigg[t\bigg(\frac{1}{2}\bigg)(-\cos{2t})+\int\bigg(\frac{1}{2}\bigg)(\cos{2t})\thinspace dt\bigg]$$ $$=\frac{1}{2}\Bigg[t\bigg(\frac{1}{2}(-cos{2t})+\bigg(\frac{1}{2}\bigg)\bigg(\frac{1}{2}\bigg)\sin{2t}+C\Bigg]$$ $$=\frac{1}{2}\Bigg[\bigg(\frac{-t\cos{2t}}{2}\bigg)+\bigg(\frac{\sin{2t}}{4}\bigg)+C\Bigg]$$ $$=\bigg(\frac{-tcos{2t}}{4}\bigg)+\bigg(\frac{\sin{2t}}{8}\bigg)+C$$ $$=\frac{\sin{2t}}{8}-\frac{t\cos{2t}}{4}+C$$
|
2018-07-16 20:03:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.965111494064331, "perplexity": 153.35325807694224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589455.35/warc/CC-MAIN-20180716193516-20180716213516-00586.warc.gz"}
|
http://lambda-the-ultimate.org/node/4613
|
## DRAKON-Erlang: Visual Functional Programming
Functional programming is based on useful and practical ideas. Unfortunately, there is a problem with how functional programs look. They are often hard to read.
Improving the visual appearance of functional programs can make them easier to understand. One way of doing it is to combine some existing functional language with a graphic notation.
DRAKON-Erlang is an attempt to do so. This hybrid language can be described as Erlang that uses DRAKON for flow control.
Each of these two technologies have been successful on their own.
DRAKON was developed within the Russian space program. It was used in Buran, Sea Launch and other space projects. The strong point of DRAKON is that it makes algorithms easier to understand by relying on ergonomics.
Erlang is one of the most widely used functional languages. It started its path in telecom and later spread out to many other industries. Erlang is well known for its simplicity, reliability and built-in support for concurrent programming.
## Comment viewing options
### Your visual formula for OR
Your visual formula for OR seems to have the labels swapped http://drakon-editor.sourceforge.net/drakon-erlang/bool.html ?
### Your visual formula for OR
Nice catch! Thanks. Fixed that.
### interesting
assuming it is a good idea, there are things about it that could perhaps be improved, of course :-) like the salad/meat/potatoes example makes me wish the left-to-rightness was enforced. is DRAKON alive enough that people are pushing on with the idea, to try to refine it along such lines?
### 1. Enforcing
1. Enforcing left-to-rightness is a good idea, thanks!
2. DRAKON is alive enough and gaining momentum. Any ideas on improvement are welcome!
### I find the mix of implicit
I find the mix of implicit data flow with visual control flow awkward, and perhaps even backwards. (The first_to_upper example has such data flow via the FirstUpper and RestLower variables.) I believe the visual emphasis should be on the data and data flow (cf. learnable programming).
Also, it isn't entirely clear how such implicit data flow interacts with your switching constructs.
Speaking of the first_to_upper example: code that implicitly lower-cases the rest of the string, without clear documentation or naming, is perhaps not the best way to advertise "clarity above all".
### There are at least two major
There are at least two major dimensions to an algorithm:
1. Control flow.
2. Data flow.
They are tricky to show at the same time. But both are important.
I'd like to defend the rights of flow control. Flow control is the decisions made by the algorithm. Decisions are critical.
And DRAKON is excellent at expressing decisions.
At the same time, I also want to see the data flow in a graphical way.
Some researchers (Dmitry Dagaev) suggest to combine DRAKON with FBD.
### Control flow is an artifact
Control flow is an artifact of imperative expression of algorithms. While the ability to express decisions is essential for solving many problems, there are ways to express decisions in data (e.g. sum types). A highly declarative language may only need data flow.
It is not clear to me that you could combine DRAKON with another diagram model without violating the invariants proposed for clarity, e.g. you mentioned "no crossing lines".
### Control flow is basically
Control flow is basically any sort of time-based decision. Control flow exists because programs execute over time...the question is then whether control flow can be adequately abstracted away. You can always encode control flow with data flow (just add "control lines"), but this is not necessarily cleaner or better. There are some abstractions that can encode various operations that would otherwise require explicit looping (mapping, folding), but these are not general. In a language like Haskell, you basically wind up re-introducing explicit control flow via monads where it is needed (though it is obviously discouraged).
The "no crosssing lines" constraint is interesting: it imposes an upper bound on program complexity based on aesthetics. I would be very interested in hearing about what those programs look like, and if you have to subvert your principles to express the programs that programmers want to write.
### It very important to note
It very important to note that there is no upper bound on program complexity with DRAKON.
You can express ANY algorithm without line crossings.
See the silhouette construct.
### I'm not referring to Turing
I'm not referring to Turing completeness, but there must be an upper bound on what can be reasonably expressed before the program becomes too hard to write/maintain, as there is basically in any programming language. In other words, how does your language scale with respect to complex programs that programmers might want to write? Could you (or have you) feasibly write the Drakon compiler/run-time/UI in Drakon itself?
Visual/graphical languages are more susceptible to low feasible expressiveness ceilings given their spatial syntax/connectors (the problem of literal spaghetti code), and (often) lack of very good abstraction capabilities; especially if the language lacks names and scopes, though DRAKON seems OK here. So you have this restriction to prevent spaghetti, but the consequence of this is not really obvious.
### DRAKON is not "my" language.
DRAKON is not "my" language. It was developed within the Russian space program.
So it withstood the test of time and complexity.
And I only wrote an editor in DRAKON-Tcl. The editor compiles itself, by the way.
### Does the Russian space
Does the Russian space program still use DRAKON? What do you mean by "it withstood the test of time"?
Has DRAKON been used for many projects outside of space programs? What do you mean by "it withstood the test of complexity"?
### DRAKON is still used in
DRAKON is still used in space projects. For example, here: http://npcap.ru/en/.
DRAKON gained recognition in many other fields:
- ERPs
- Microcontroller programming.
- Nuclear power plant control systems.
DRAKON is also used outside programming:
- For specification of medical procedures.
- For explaining laws and tax regulations.
Unfortunately, most of the available information is only in Russian.
http://drakon.su/biblioteka/start
As far as I know, the author of the language has plans to release a book in English.
About "the test of time". The first version of DRAKON appeared in the mid 80s.
About "the test of complexity". The first application of DRAKON was the BURAN spacecraft. Space shuttles are very complex systems.
An interesting note. All stages of flight of BURAN were fully automatic. Those included an aircraft-style landing. For a spacecraft of that size, it required complex avionics and AI. NASA was also able to do that, but some 18 years later.
### Control flow is not a
Control flow is not a consequence of programs executing over time. Rather, the converse is true. Programs executing over implicit logical time is a consequence of control flow abstraction. Control-flow expresses a stateful model: you have implicit state (e.g. a program counter, maybe a stack) indicating which part of the program is in control. The change of that implicit state introduces time in control flow abstractions.
There are many ways to express execution over time that do not need control flow; cf. Lustre, Dedalus.
While replacing control flow with dataflow isn't "necessarily" better (one could intentionally do worse!), dataflow has the potential to solve many problems with considerably less state, resulting in simpler and safer programs with nicer concurrency and partial failure properties. Further, when developers decide control state is necessary (e.g. to model a business workflow), necessary state can always be introduced and controlled explicitly.
In my experience, declarative models tend to be very expressive and flexible for explicit control flow. The state is closer to essential. Less effort is expended working around the dominant control flow model. (Control flow is about control within a program, independent of the problem domain.)
Alas, dataflow expressions of control flow tend not to be as performant on modern hardware, and quite difficult to optimize. Many programmers never see past that... not even when they find themselves explicitly reproducing entire control flow models in order to achieve persistence or distribution (at which point declarative expression would have been equivalent or better).
In a language like Haskell, you basically wind up re-introducing explicit control flow via monads where it is needed (though it is obviously discouraged).
Similarly, tail call optimization can be understood as a means to implicitly support control flow abstractions. I do not consider Haskell a highly declarative language... at least not as it is used in practice today. Haskell does embed several FRP models, and my own RDP, which are considerably more declarative.
### Our definition of control
Our definition of control flow is different then. In my case, control flow exists somewhere even if the programmer is not explicitly exposed to it: someone executes the rules or gates the electrons. "Make me a cake" is a fairly declarative instruction for a task that involves a recipe with fairly involved control flow.
I am tired of seeing control flow transcoded with declarative abstractions and people calling the result still declarative because they have their defined their own flip flops and instruction counter. Enable lines in a data flow language are a bit better, but still deceptive, if you have to work with too many of those, you would have been better off in a language with stronger control flow abstractions.
### Make me a lie
Taken one fine-grained instruction at a time, an imperative program may appear declarative. E.g. if I say x := x + 1, I haven't specified how it happens. I have only commanded what happens. By your definition, is this therefore a "declarative" instruction? I think it would be. But programming is not about individual instructions. (If it were, we could trivially write every program.) Programming involves composition, decomposition, arrangement.
Where does "Make me a cake" fit into a larger program? How is it decomposed into smaller subprograms? You suggest it would decompose into an imperative recipe, but that seems to be an assumption you've made rather than an essential property.
Similarly, "declarative" and "imperative" programming styles must not be about individual instructions. They are also about composition, decomposition, and arrangement of expressions.
When I scrutinized the popular "what not how" notion of declarative, I found it ridiculously naive and inconsistent. Declarative programming, if it is to exist at all, MUST be about structure of expression - structure, not content. Declare the what. Declare the how. Declare the why. But what does it mean, structurally, to declare? My reasoning led me to focus on properties of ideal declarations: idempotence, commutativity, associativity, and continuity.
In attempting to refine informal concepts to something more precise, of course I'll run against inertia of popular definitions (no matter how poorly conceived). Descriptors such as "object oriented" or "declarative" have positive connotations and marketing power. It seems everyone wants a piece of that. I'm a member of the everyone club.
I ask that you please reconsider use of the inconsistent "what not how" definition of declarative. Use my definition. Or, if you wish, develop another consistent meaning, and share it. Or just avoid the word.
That aside, I agree with you that some people misuse "declarative" to describe scenarios where they've built their own flip-flops or program counter and are now developing atop that. Or, more commonly, some silly Haskell fanboy is insisting his monadically composed programs are "purely functional" because he's confused programming with staged evaluation.
Programming is something programmers do, not something machines execute. It's about the abstractions, not the implementation.
If we program by imperative composition implemented atop a declarative model, we're doing imperative programming. But, unless you're trying to have your cake and eat it, the converse is also true. If we program by declarative composition, we're doing declarative programming, regardless of whether it is implemented above imperative control flow abstractions.
To say "the control flow exists somewhere" is at best an observation of modern hardware, and not an essential property of declarative programming.
The cake is a lie.
### Declarative is not black or
Declarative is not black or white; its actually a property of all encodings: the question is not "is this declarative?" but rather "how declarative is this?" Its about detail: how much do I need to deal with to do a task? "Make me a cake" is very declarative, while the recipe for a real cake is obviously less so, and becomes even less so if we have to deal with the chemistry of the ingredients or the construction of the cooking oven. Composability and such does not constitute "being declarative," but allows you to preserve declarative properties on reuse, while formal properties help hide details that need to be known on reuse.
But this is a digression, let's go back to control flow.
Explicitly dealing with control flow means being exposed to a big detail, which contributes to being less declarative. The control flow never goes away: someone wrote your rule interpreter, your compiler, someone wrote the VHDL code for the CPU that is executing your code. The implementation is always programmed (or at least constructed)! You can program at many different levels as you see fit, or you can even program multiple layers of abstractions on your own. Or you can choose not to deal with control flow yourself, its up to you.
Control flow exists somewhere is more of an observation on the universe and natural physics rather than modern hardware. We will never be able to transcend this unless we somehow are able to transform the time dimension into a spatial one.
### The extent to which a
The extent to which a program is declarative is certainly not binary, given the ability of programmers to switch between modes of expression and levels of abstraction as conveniences them.
Not at all. To say an expression is declarative implies nothing about its level of detail. (cf. costume porn [trope].)
"Make me a cake" is very declarative
When read as an English sentence, "Make me a cake" is clearly imperative. In which language is it declarative?
the recipe for a real cake is obviously less [declarative]
"The" recipe? Which recipe? Why do you assume there only one? Have you never even tried writing a declarative recipe?
The control flow never goes away [...] more of an observation on the universe and natural physics rather than modern hardware.
I disagree. Natural physics is very declarative by nature. Physical things "are". They shape the world around them, and are shaped in turn, just by their coexistence. Some things shaped happen to be dynamic in nature, such as the flow of water or wind... or electrons. There is no fundamental need control flow concepts at bottom level.
DSPs, FPGAs, memristors, and other technologies (including analog) have potential - if they vastly reduce in price and increase in programmer accessibility - to offer effective mechanisms for embedding declarative expressions or data flows all the way down to the metal.
We must address time. It can be convenient to distribute many traditionally temporal elements along spatial dimensions. But we don't need to transform time dimensions into spatial ones. To avoid control flow at the lowest levels, it is sufficient to to transform spatial abstractions into spatial implementations.
### When read as an English
When read as an English sentence, "Make me a cake" is clearly imperative. In which language is it declarative?
See, this is where we disagree. Obviously, "make me a cake" is a command, and commands should be imperatives right? So the king says a declarative "I'm hungry" which the minister understands as meaning "I want to eat cake", who then instructs the eunuchs to "make me a cake" who then follow a recipe to make a cake, relying on the oven that the craftsmen built and the ingredients that farmers obtained from labor + biological inputs, which were transported to the palace via couriers and such, etc... In the context of what goes into a cake, "make me a cake" is very much closer to "I'm hungry" then anything else downstream!
Your definition focuses on the command aspect (make me a cake now), my definition is focusing on the detail aspect (what actually goes into making a cake at each level?).
Natural physics is not declarative at the bottom, but there are some nice declarative properties that emerge from the bottom chaos of lots of stateful interactions (say, the law of gravity), just like a declarative abstraction (I'm hungry) can cleanly gloss over the complexity of imperative operations (cake making).
I'm claiming that is impossible to be "declarative" all the way down to the metal. That DSP must deal with flip flops eventually, and have you ever programmed a FPGA without resorting to flip flop constructions? Even data flow requires buffering, and managing that buffering involves...control flow that someone has to encode. Hopefully, this control flow can often be abstracted away and forgotten, but its still going to be there.
### The declarative expression
The declarative expression of "make me a cake" isn't "I'm hungry". It is closer to "There will be cake, in my possession, baked by your hand."
There is no limit on declarative detail. I could declare the ingredients, and the designs in the icing. You seem to believe that there is a "detail aspect" that distinguishes declarative programming. I'm going to insist that you are simply, deeply wrong.
Yes, we need more detail (to provide it or generate it) at lower levels. That's just the nature of abstraction, orthogonal to declarative vs. imperative. (Are you sure you're not confusing "declarative" with "abstract"?)
Declarative does not mean "stateless". Declarative does not mean "deterministic". Declarative does not mean "simplified". If you stop assuming declarative is brain-dead by definition, you may discover there is much more to it.
### Let's pick bones at the
Let's pick bones at the definition. So we have procedural and declarative memory, where declarative memory is about facts, figures, equations, and so on, while procedural memory is about learned unconscious skills that are pretty much reflexive.
Now, for programming, the opposite of declarative is no longer procedural, but for some reason it is imperative. But of course, this is not what Lloyd really meant, just how the definition grew to be rationalized. Whatever, the definition I'm working with is the what vs. how, which is definitely, as you call it, the nature of abstraction. "the recipe that starts on page 42" is an abstraction for an actual recipe.
If there are no limits on declarative detail, then any procedural/imperative program is declarative one, since don't those instruction declare what the program does? This is how we get into trouble in Haskell, as you begin to express ordered computation using whatever transcoding is available...but at least the instructions were "declared."
Invariably, as you go down into lower levels of abstractions, there is code that declares something that involves control flow and time, even if its just "check cake from oven after 40 minutes, remove if a golden brown color." How would you specify this otherwise? "Don't burn the cake"? Apply some equation based on size of cake being baked to determine length of time spent in oven?
My definition is useful for avoiding what I call the abyss of declarative programming. That somehow the world is all nice and expressible with nice math, a view that severely limits what sorts of problems we can solve.
### If there are no limits on
If there are no limits on declarative detail, then any procedural/imperative program is declarative one, since don't those instruction declare what the program does?
Not necessarily. A program might declare intermediate and final results in excruciating detail, without ever saying what a program does. One can express constraint systems that have only one solution.
Though, declaring what a program "does" is not actually a problem. Declaration is about structure of expression, not content. "What not how" is hand-waving nonsense for people who don't scrutinize too closely.
This is how we get into trouble in Haskell, as you begin to express ordered computation using whatever transcoding is available...but at least the instructions were "declared."
Haskell is many excellent imperative languages, and a fine example that we don't need much built-in control flow. The trouble is a rare few fools who insist on calling a subprogram 'declarative' (or pure) when it is clear they have expressed it (and must reason about it) imperatively.
Invariably, as you go down into lower levels of abstractions, there is code that declares something that involves control flow and time
Yes to time and state, no to control flow. Conflating them is incorrect. Control flow is a feature of the programming abstraction, conceptually independent of time or state in the problem domain.
My definition is useful for avoiding what I call the abyss of declarative programming.
Indeed, your straw man definitions, examples, and arguments paint a bleak future for anyone who develops declarative programming conformant with your definitions. It's a good thing nobody is doing that.
### I don't believe in
I don't believe in declarative programming as an absolute, so I simply redefined my fears away. I've worked on 4 languages in this area so far, and have progressively become more pragmatic each time.
We really are just arguing about technical terminology. I have declarative vs. procedural to your declarative vs. imperative, while your control flow is strongly tied to imperative programming while my control flow is more like flow control. I grant that your definitions are probably more popular, but I still don't think they are very useful.
### You referred to declarative
You referred to declarative vs. procedural memory, in the biological sense, which I found interesting. The words in programming, however, refer to grammatical moods or modes of speech and expression (i.e. imperative, declarative, interrogative). Procedural programming describes a particular structured form of imperative programming. Declarative has the same, sweeping nature as imperative; particular declarative programming models include constraint-logic and synchronous reactive.
Even if you focus on memory, declarative vs. procedural is about structure not content. For example, we can have procedural memory of riding a bike, and declarative knowledge of how to ride a bike, and the two are very different in their integration and application but not their content.
Given your predictable reactions to discussion of "declarative" programming, I think you've failed to "redefine your fears away". While you've embraced a broad notion of imperative in your own work (and I infer you've battled some obsession with "declarative" that has left you a little bitter), I've noticed you spend more than a little time exclaiming about temporal bogeymen in everyone else's declarative models. :p
I guess I'm lucky I came from the other side. I never had intentions of developing a "declarative" programming model for its own sake. My efforts prior to RDP involved actors, kell calculus, mobile agents. But I found that properties such as idempotence and commutativity were useful for mirroring and scale, that properties such as continuity were valuable for revocation and resource management. Ever so slowly, as I began to understand why and how to handle state and time declaratively (especially without use of new), my semi-imperative models drifted towards fully declarative.
### FWIW, I think that your
FWIW, I think that your definition of declarative is indeed more useful than the other definitions I've seen. Everyone seems to agree that defining a declarative as purely functional is useless because you can easily write a pure program that is for all practical purposes imperative e.g. using the ST monad. Defining a language or programming paradigm as declarative if it is idempotent, commutative, concurrent and reactive suffers from exactly the same problem.
Therefore we should not apply the term to programming languages, but to programs. Some programs written in Haskell have a declarative flavor, some do not. The same applies to virtually every language. It is true that some paradigms allow you to express a specific class of programs in a declarative way more easily, but with certain kitchen appliances it is easier to bake a tasty cake, yet we don't call those appliances tasty. Defining the declarativeness of a program as how close it is to its informal specification seems more useful and more in line with common usage of the term than the other definitions. For example if the goal is to make a cake, then saying "make a cake!" to one of your servants is more declarative than "mix 2 ounces of sugar, 2 ounces of butter, 2 ounces of flour, ..." (even though mix is commutative ;)). At least this is a property we actually care about, rather than a shallow syntactic property (like "purely functional" or "commutative") that is trying to approximate the symptoms of something we care about.
### We don't need another synonym for 'good.'
I favor David's definition. Declarative should be about making declarations. Yes, you can do imperative programming in a declarative style ("You shall have done step 1, and next step 2, and next step 3."), but there still useful properties associated with the declarative version: order and repetition don't matter. This is analogous to what David repeatedly misses about the importance of being purely functional and why Haskell's approach has advantages over a more overly imperative language.
### Structural Reasoning
I don't miss the fact that Haskell's approach has advantages over most imperative languages: constraining effects to the "spine" of the monad greatly simplifies reasoning, maintenance, and control of effects. Haskell uses pure functions and its rich type system to achieve this. But, having seen other means to achieve similar structural reasoning, I simply do not attribute this benefit to "the importance of being purely functional". That strikes me as confusing a means with an ends.
That little rib against me aside, I agree with your statement.
### My rib
My rib was a hastily worded reference to our recent discussion about whether Haskell is purely functional, because I think the situations are analogous. Haskell is purely functional because the semantics are about immutable values. That's useful even though some of those values are essentially imperative programs and the language can be used to build them in a way that is essentially imperative programming. Similarly in a declarative language you can be declaratively describing imperative steps and can be programming in an imperative style.
### Abstraction is language
Abstraction is language manipulation. Programming is abstraction manipulation. I'd prefer to characterize certain abstractions or modes of expression as purely functional, or declarative, or imperative. Not full languages.
The base does matter, of course, having an enormous impact on the paths of lesser or greater resistance.
Keep this in mind: the reason Haskell is better for imperative is not because it has a purely functional abstraction type, but rather because it is too difficult to syntactically abstract liftMk and inject appropriate return operations across every expression in the do notation... and the many people who tried thus failed to make this mistake.
(Also note: there are many languages that lack pervasively mutable state, that emphasize immutable values, yet that also lack pure functions. Those are orthogonal concepts.)
### It's not a synonym for
It's not a synonym for 'good'. Good can mean a variety of things in various contexts: fast to execute, cheap to build, free of bugs, etc. "similar to informal specification" is just one of them.
### Deep Syntactic Properties
Achieving those "shallow syntactic properties" has a very deep and pervasive impact on semantics. Don't underestimate the power of syntax; it shapes everything programmers create.
It is not simulation that results in cargo cult programming; rather, it is a failure to understand or appreciate depth and context.
even though mix is commutative ;)
Indeed it is! And if the ingredients were instead subprograms or subcomponents, mix could make an excellent declarative operator.
I have a similar operator in Sirea, with the symbol |*|. This represents concurrent composition of behaviors while dropping the response signals, i.e. such that the behaviors are expressed only for their effects. It's excellent for setting multiple agents on a case: fred |*| velma |*| scooby.
### "...which the minister understands as..."
I call shenanigans. Isn't that sentence the essence of conflating syntax and semantics?
For that matter, isn't what the king says also connected to how he says it, which you are expecting us to understand as human beings, but are also expecting us to ignore in your interpretation of the king's declaration as being a command to assuage said hunger?
Just to muddy the waters some more, if the king were playing chess, and made that "declaration," it would not have been a command, except in the form of warning his opponent that the king intended to play more aggressively.
My point being, that you are taking "a declaration" of state, and conflating the declaration with resultant behavior set up to respond to the declaration.
tl;dr - "I call shenanigans."
### Enable lines in a data flow
Enable lines in a data flow language [...] if you have to work with too many of those, you would have been better off in a language with stronger control flow abstractions.
I disagree for a few reasons.
First, injecting control data is quite trivial in a bidirectional data flow model such as RDP. Developers don't need to fight the system to introduce "enable lines" even late in development. Similarly, control data is relatively trivial to introduce in temporal logics or rules systems (which are not the same as dataflows, but can be declarative).
Second, the workflows encountered in many problem domains tend to require very ad-hoc states, waits, and conditions. It is rare that a control-flow model is immediately well suited to external problem domains. If you build-in control flow abstractions, that becomes another thing for developers to work around. You can see this common practice today: event loops, IoC frameworks, ORMs, etc..
Better to save your users many headaches and build in only a declarative model and access to state resources. Developers can support ad-hoc workflows without the abstraction inversions and accidental complexity of directly integrating diverse control-flow models.
### Crossing Lines
The "no crosssing lines" constraint is interesting: it imposes an upper bound on program complexity based on aesthetics.
I agree that DRAKON's constraint against crossing lines is interesting.
Though, I'm not sure it would work for data flow. At least for RDP, I make considerable use of swap and mirror behaviors to reorganize parallel data pipelines (wires) into a proper format for a stage of processing that takes more than one input. It seems to me that a "no crossing lines" constraint would easily become a global constraint, difficult to modularize.
But perhaps it would be more acceptable if crossing lines was an explicit behavior:
| |
++|+++|++ x y
| \ / | | |
| X | bswap
| / \ | | |
++|+++|++ y x
| |
I've been considering approaches to avoid crossing by trading it for auto-wiring based on matching labels or roles (types), perhaps scoped by location or proximity. Traits-based composition of declarative objects would be a good fit for my envisioned programming aesthetic. At the moment, I'm at a loss for how to model auto-wiring effectively in Haskell without sacrificing Haskell's compile-time safety. I know I can achieve staged safety (using Data.Typeable). But I'm fully very satisfied with that.
### Data flow and control flow
there are ways to express decisions in data (e.g. sum types). A highly declarative language will only need data flow.
Pattern matching over sum types (and inductive types, more generally) is control flow, by any standard. Data flow and control flow are dual to each other[1]; arguably, neither is superior to the other. One could also argue that "combining" data flow and control flow on the same diagram cannot really be done in a satisfactory way, i.e. that any such attempt will necessarily introduce inelegancies and kludges in the visual language. (Albeit, it seems that DRAKON-FBD uses a workable approach, in that some action and condition boxes are expressly identified as "subdiagrams" of the other type.)
[1] E. S. Bainbridge. Feedback and generalized logic. Information and Control, 31:75--96, 1976.
### Data flow is not dual to
Data flow is not dual to control flow.
The duality Bainbridge observes requires an assumption of pervasive state (expressed through closed feedback loops with unit delay). This pervasive state is not essential for data flow or pattern matching, yet is essential for control flow. Also, the flow charts and networks Bainbridge uses are not prototypical of control flow or data flow.
Data flow is most strongly characterized by continuous propagation of data through a relatively static computation. That continuity typically represents streams of values (e.g. streaming a file) or changes in a value over time (e.g. whole file as time-varying string). Leveraging sum types, one can model switching networks in a data flow system. (Note: "pattern matching" is a different concept, and isn't required.)
Control flow is characterized by describing which parts of a program are 'in control' over time. This concept is stateful by nature, and also is rather awkward for concurrency. Control flow has no built-in notion of continuous data streams or updates, nor even of input (other than receipt of control).
Control flow and data flow are informal concepts. It is difficult to make formal claims about them. Duality, in the category theory sense, is a very simple, important, and formal relationship between models. While the idea that such a relationship should exist is appealing, I do not think you will find such a simple relationship between control flow and data flow systems. At least, you won't find such a relationship without a lot of stretching and questionable assumptions.
### Data-control flow
Pattern matching over sum types (and inductive types, more generally) is control flow, by any standard. Data flow and control flow are dual to each other; arguably, neither is superior to the other.
It seems that the duality you're talking about is basically the duality of sum types and product types, and the diagramming problem is the same as the one for MALL proof nets.
While I'd like to think arrowized operations on sum types are part of the "data" flow, I've been confused about the meaning of "control flow" for a while, and your explanation finally gives me a usable definition.
Data-control flow might be a good term to use for the generalized graphs David Barbour and I are thinking of.
One could also argue that "combining" data flow and control flow on the same diagram cannot really be done in a satisfactory way, i.e. that any such attempt will necessarily introduce inelegancies and kludges in the visual language.
Well, it's a tall order to ask for a programming language with no visual kludges. ;)
I'm finding data-control flow diagrams are a nice mental model for me to align my programming abstractions with concepts of category theory and linear logic. At least a couple of times over the last few months, I've been able to change a master-slave relationship into a symmetrical relationship, and this has revealed opportunities for simplification and code reuse. The modeling technique is all in my head for now, but a graphical view of my code might come in handy.
Visually speaking, I think of a data-control flow diagram as a two-dimensional graph that sinks into a third dimension with a many-worlds interpretation. Some edges are absent in certain worlds, and the possible worlds are arranged in such a way that they minimize the edges of the resulting 2-manifold. This manifold is essentially made of toilet paper: Products are multi-ply, while sums have perforations.
### Many worlds
Any ideas on how to visualize many worlds (probabilistic) data-control flows?
### Few worlds
That regrettably adds another dimension of possibilities to the diagram, making it four-dimensional.
The only reason I get away with three dimensions is because I imagine a small number of sum-typed values overall--as in, maybe two. If we encoded three bits as a product of sums, giving eight possibilities, they'd probably need to be arranged by Grey coding in order for the diagram to be comprehensibly continuous.
In practice, I might try to use opacity as a substitute for dimension. A sum type would correspond to a perforated plane with a different opacity on each side:
_________
,;########/ A
_______,;;--------'
/#######;'
A + B /=======:
/_______ \
\ -------,
\________/ B
`
|
2018-01-21 04:40:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5011690258979797, "perplexity": 2112.6152216036066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890187.52/warc/CC-MAIN-20180121040927-20180121060927-00410.warc.gz"}
|
https://www.vedantu.com/question-answer/if-the-sum-of-two-continuous-odd-numbers-is-40-class-9-maths-cbse-5ee1de15c9e6ad07956eef8f
|
Question
# If the sum of two continuous odd numbers is 40, find the numbers.
Verified
157.5k+ views
Hint – Consider the first number which is odd as a variable and use the fact that two consecutive odd numbers differ by 2, that means the consecutive odd number after the first assumed odd number will be that odd number addition 2.
$y + y + 2 = 40 \\ \Rightarrow 2y = 38 \\ \Rightarrow y = 19 \\$
|
2021-12-03 22:52:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8385684490203857, "perplexity": 284.9220773237009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362919.65/warc/CC-MAIN-20211203212721-20211204002721-00419.warc.gz"}
|
https://blog.fridaymath.com/vertical-and-horizontal-medians
|
This is a paragraph.
# Vertical and horizontal medians
In conjunction with our newly found companion, whose name is Analytic Geometry, we wish you a happy new year!
Right now as we write, we’re situated right at the coordinates of the point where our interest intersects with this fine, familiar, friendly, fascinating companion.
And, today’s post stems from a task we asked you to do in the past. Our own solution sample yielded this post’s title, together with what we’ve assembled in the accompanying examples.
You’ll love this!!!
#### Example 1
Let be the -coordinates of the vertices of a triangle. PROVE that the -coordinate of the centroid is if, and only if, the sequence is arithmetic.
Trivial fact presented in a technical way. Note that the result can be easily adapted for the -coordinates.
To prove this, recall that the -coordinate of the centroid is the “average” of the -coordinates of the vertices of a triangle. In the present case we have
and so the sequence is arithmetic. Conversely, if the sequence is arithmetic, then is the arithmetic mean of and ; that is, , which in turn gives . Add to both sides: , then divide both sides by :
and so the -coordinate of the centroid is .
The next result can be combined with the above one to form three equivalent statements, but we’ve separated it for emphasis.
#### Example 2
Let be the -coordinates of the vertices of a triangle. PROVE that the -coordinate of the centroid is if, and only if, the median through the vertex containing is vertical.
First suppose that the -coordinate of the centroid is . From Example 1 above we saw that this means the sequence is arithmetic, from which . Consider the diagram below:
from which it can be seen that the median through goes through . Since , the median actually goes through and , and so its equation is , a vertical line.
Conversely, if the equation of the median through vertex is , then the coordinates of the midpoint of the opposite side must satisfy this equation, giving which can be manipulated to obtain
and so the -coordinate of the centroid is .
#### Example 3
Let be the -coordinates of the vertices of a triangle. PROVE that the -coordinate of the centroid is if, and only if, the median through the vertex containing is horizontal.
Since this is similar to Example 2, we omit the proof. Note that we used instead of here because of the next result.
#### Example 4
Let be the vertices of . If the centroid is , PROVE that the slopes of the sides follow a geometric progression whose common ratio is .
Beautiful, colourful, simpleful, , whateverful that’s useful.
The slopes of the sides (), as seen from the diagram below
are:
respectively, and we assume that these expressions are well-behaved. We prove that the enumeration
(1)
is a geometric sequence and that its common ratio is .
Since the -coordinate of the centroid is , we have, by Example 1, that
(2)
Similarly, since the -coordinate of the centroid is , we have that
(3)
Consider the second term of the enumeration in (1), namely . We have (by (2) and (3)) that:
and so the second term in (1) is times the first. Now consider the third term, namely . Again, by (2) and (3), we have that:
and so the third term is times the second. Therefore, the enumeration given by (1) is indeed a geometric sequence with a common ratio of .
## The VHM property in triangles
By the VHM property, we mean that a triangle possesses a vertical median as well as a horizontal median (view the VHM property as a special case of two perpendicular medians).
#### Example 5
Give an example of a triangle that contains a vertical median and a horizontal median.
Easy. Very easy — as .
Indeed, the vertices of our triangle will be constructed from the numbers . Take with vertices shown below:
Notice the horizontal median and the vertical median . Also, the slope of is ; the slope of is ; the slope of is . Re-arranging, we have the geometric sequence of slopes:
The common ratio is as you’ve seen. Also worth noting is that the above triangle is isosceles; in the next example, we show that not all VHM triangles are isosceles.
#### Example 6
Give an example of a scalene triangle that contains a vertical median and a horizontal median.
Consider shown below:
so this triangle is scalene. Its centroid is located at , the median is vertical, while the median is horizontal. Moreover, the slopes of sides are respectively. They form a geometric sequence whose common ratio is , as you know.
#### Example 7
Find a general set of coordinates for the vertices of a triangle that satisfies the VHM property.
Let and be real numbers. Set
Then satisfies the VHM property. Indeed, its centroid is located at
and so it shares an -coordinate with vertex and a -coordinate with vertex . By Example 2 and Example 3, it contains a vertical median and a horizontal median, which is the VHM property.
Notice that the slopes of sides are
respectively. These slopes form a geometric progression with a common ratio of , as expected. Note that it is important to impose the restrictions and .
#### Example 8
PROVE that if a triangle contains a vertical median and a horizontal median, then the product of the slopes of the sides is a “perfect cube”.
The slopes of the sides of a VHM triangle were calculated in Example 7; their product is
which is a “perfect cube”. (We’ve used the term “perfect cube” in a loose way here.)
A VHM triangle contains a vertical median (undefined slope) and a horizontal median (zero slope). The remaining “non-degenerate” median has a slope that’s related to the slope of the side it meets.
#### Example 9
PROVE that the slope of the “non-degenerate” median in a VHM triangle is the negative of the slope of the side that contains its foot.
Consider the “non-degenerate” median (dashed blue line) in the diagram below:
The slope of side is . Let’s calculate the slope of median :
We’ve saved the last for last (you read that correctly), which has to do with the converse of what we saw in Example 4.
#### Example 10
PROVE that if the slopes of the sides of a triangle form a geometric progression with a common ratio of , then the triangle contains a vertical median and a horizontal median.
Beautiful, colourful, , whateverful.
Let be the vertices of a triangle whose sides slopes form a geometric progression with a common ratio of . Enumerate this geometric progression as
and let the terms correspond, respectively, to the slopes of sides . Then we have the following linear system:
(4)
(5)
(6)
This is a homogeneous linear system that is underdetermined (it contains fewer equations than unknowns). By a result in linear algebra, such a system always has non-trivial solutions. In fact, our solution is:
Note that have been expressed in terms of because the system given by (4), (5),(6) has three degrees of freedom. Don’t worry about these technical terms. The centroid of is
after simplification. Since the centroid shares an -coordinate with vertex and a -coordinate with vertex , it contains a vertical median and a horizontal median.
It follows that a triangle satisfies the VHM property if, and only if, the slopes of its sides form a geometric progression whose common ratio is .
## Takeaway
The centroid of a triangle is properly located within the triangle (unlike the circumcenter or orthocenter that are sometimes situated outside). The centroid hardly “shares” its coordinates with any of the triangle’s vertices, but when it does decide to share, then it “straightens” one median and “flattens” the other.
1. Let be an arithmetic sequence, and let be another arithmetic sequence. PROVE that the points lie on a straight line.
This shows that, in order to form a triangle, the order in which we select coordinates from the sequences matter.
2. PROVE that an equilateral triangle can never contain a vertical median and a horizontal median at the same time.
3. PROVE that if a right triangle is to contain both a vertical median and a horizontal median simultaneously, then the slopes of its sides have to be , or .
(Further, the medians to the two legs cannot be vertical and horizontal at the same time.)
4. Suppose that contains a vertical median and a horizontal median. PROVE that if one of its sides has a slope of , then is isosceles.
5. Suppose that contains a vertical and a horizontal median. PROVE that the sum of the slopes of its sides is times the slope which is the geometric mean of the other two slopes.
For such a triangle, the slopes of the sides form a geometric sequence with a common ratio of , so it makes sense to talk about the “geometric mean” of the slopes.
6. PROVE that if a triangle contains a vertical and a horizontal median, then the sum of product of the slopes of the sides, taken two at a time, is always negative.
7. The three medians of a triangle divide the triangle into six smaller triangles of equal areas. PROVE that if the original triangle satisfies the VHM property, then two out of the six smaller triangles are right triangles. If, in addition, the original triangle is isosceles and satisfies the VHM property, PROVE that four out of the six smaller triangles are right triangles.
8. Let be the coordinates of . PROVE .
This triangle contains a vertical median and a horizontal median, so the above equation is an instance of a well-known result; see this article.
9. Let be the coordinates of . PROVE that its area is .
Note that this triangle contains a vertical median and a horizontal median, and so its area takes a simple form.
10. Let be the coordinates of . PROVE that the slope of its Euler line can be given by
|
2022-10-01 10:48:07
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8850569128990173, "perplexity": 499.6389687165021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00116.warc.gz"}
|
https://mathhelpboards.com/threads/extreme-value-theorem-proof.1932/
|
# Extreme value theorem Proof
#### Amer
##### Active member
I was reading Wiki, I met a problem in understanding the the proof of boundedness theorem exactly when they said
"Because [a,b] is bounded, the Bolzano–Weierstrass theorem implies that there exists a convergent subsequence"
but Bolzano theorem state that if the sequence is bounded, which is not necessary in our case.
What I miss here
And in the alternative proof they said
"The set {yR : y = f(x) for some x ∈ [a,b]} is a bounded set."
f is continuous at [a,b] but how should it be bounded it is clear but how to prove that ?
Thanks
Last edited:
#### Fantini
##### "Read Euler, read Euler." - Laplace
MHB Math Helper
Re: Extreme vlaue theore Proof
but Bolzano theorem state that if the sequence is bounded, which is not necessary in our case.
Of course it is! If the sequence is defined in $[a,b]$, this means that for all $n \in \mathbb{N}$ we have $x_n \in [a,b]$, which in turn means that $a \leq x_n \leq b$.
As for the other, it is using the fact that if a function $f: X \to \mathbb{R}$ is continuous, then if $X$ is compact you have that $f(X)$ is compact. This of course means that $f(X) = \{ y \in \mathbb{R} : y = f(x) \text{ for some }x \in [a,b] \}$ is closed and bounded.
|
2021-01-27 12:52:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9056926965713501, "perplexity": 349.7761190426996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704824728.92/warc/CC-MAIN-20210127121330-20210127151330-00468.warc.gz"}
|
https://www.biostars.org/p/138317/#138326
|
Tophat --max-multihits option
2
1
Entering edit mode
7.8 years ago
biolab ★ 1.4k
Dear all
I have a simple question: Tophat2 has --max-multihits option. If I set it to one, does it mean that each read is mapped at unique locus? This will loose many reads for multi-copy genes (for example, Actin genes). Could you please explain to me why some research work used "uniq mapping" reads?
tophat • 3.6k views
5
Entering edit mode
7.8 years ago
Martombo ★ 3.0k
Yes with --max-multihits 1 you're going to get only uniquely mapped reads. This is not such a bad idea as it may seem and actually a lot of programs for subsequent steps of the analysis will only use uniquely mapped reads (one for all HTSeq-count). This approach is very conservative and you lose quite a good number of reads. But in my experience (I also performed a few simulations to prove this) the results are very reliable. Basically all other possibilities (like with RSEM) make use of some assumptions: for example what happens if the ratio between the expression of two paralogs is different in two conditions? (for example for differential splicing) you will get a bias in the fold change estimate. while if you only consider unambiguous reads, you will only get a lower significance for an eventual differential expression. of these two scenarios I prefer the latter.
Have a look at these slides, to make it clearer. (the simulations are based on SMN1 and SMN2, which to my knowledge are two of the paralogs in the human genome with the highest similarity. they only have 2 mismatches on their sequence. given 100bp SE reads, 85% of the total reads of these two genes will be ambiguous, or multi mapped. 1000 simulations are plotted. the DE analysis was done with DESeq2)
https://www.dropbox.com/s/6w55godj2wetbed/unambiguous_counts.pdf?dl=0
0
Entering edit mode
0
Entering edit mode
6.4 years ago
glihm ▴ 650
To complete the response of Martombo, uniquely mapped reads can be useful when your are studying a special biological event like the translation for instance (With ribosome profiling). When we filter the data from sequencer, we select good quality reads and then the mapping is done keeping only uniquely mapped reads !
So, some duplicated regions will be removed (I mean, no reads will map on these regions), but others mapping are used to study these special cases if needed. ;)
|
2023-02-01 16:25:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3463138937950134, "perplexity": 1929.1905891051142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00832.warc.gz"}
|
https://physics.stackexchange.com/questions/323296/commutators-measurement-and-causality-in-qft
|
# Commutators, Measurement, and Causality in QFT
In Peskin and Schroeder pg. 27-28, they discuss Klein-Gordon theory and causality. For a spacelike separation $(x-y)^2 < 0$, they show that $$\langle 0| \phi(x)\phi(y) |0\rangle \neq 0$$
They go on to say that this doesn't actually break causality. Rather, what one should be looking at isn't whether particles can propagate over space-like intervals, but whether space-like separated measurements can affect one another. Hence, they argue that to understand the measurements of the field $\phi(x)$, one should be trying to understand the the commutator $[\phi(x),\phi(y)]$.
This last statement is opaque to me. In the QM setting, the ordering of the operators can be physically interpreted as one being applied first in time prior to the second. However, this doesn't make sense in the QFT context because one applies the operator at a specific point in space-time; flipping the order is not equivalent to changing the time-order of making the measurements.
1. In the QFT context, what is meant by the "measurement of a field," in analogy to the QM measurement of some operator?
2. Why is the commutator the object of choice when wanting to understand causality? What is the physical interpretation of the commutator here, particularly with respect to measurements of $\phi(x)$?
• The commutator behaves as in QM: you're comparing the result of applying one operator before the other and vice-versa - here, it corresponds to the creation of a $\phi$ particle at point $x$ or $y$. If the commutator is $0$, the result is the same either way: they don't talk to each other. If it isn't, your "measurements" can affect each other. – Demosthene Apr 3 '17 at 23:23
• Well, if I remember right from Tong's lectures, that expression decays exponentially with the spacelike distance, and he went on to explain that it's not that bad, a negative exponential, and anyway it's just an expectation value or probability, just not exactly zero. He went on with the class. I was bothered by the nonzero. Don't know if the exponential decay is quick and sharp or not, and if not zero whether one could treat the probability as a fluctuation, conceptually. But, why should the probability of creating it at x and annihilating at y with x-y spacelike NOT be zero? – Bob Bee Apr 4 '17 at 1:13
• @Demosthene Thanks for the comment, but your answer doesn't answer the fundamental question of why should you be comparing the result of applying one operator before the other. In QM, the order of the operators corresponds to the order in which you measure in time. This is patently not true in the QFT scenario, yet causality is a statement about time-ordering, hence the meaning of the commutator with relation to causality becomes ambiguous (to me). Also, the interpretation of $\phi(x)$ as creating a particle at $x$ is only true when it acts on the vacuum state. – Aaron Apr 4 '17 at 2:06
• "the interpretation of ϕ(x) as creating a particle at x". I don't think this is correct. $a^+(k)$ creates a particle of momentum (wave number) $k$. The interpretation would be correct if ϕ was the Fourier transform of $a$, but there is the additional factor of $(\omega_k)^{-{1/2}}$. – Keith McClary Apr 4 '17 at 5:00
Lots of aspects of the physical interpretation in QFT are at best subtle, and philosophically weak but plausible-sounding heuristic arguments are not uncommon. (You can already see people disagreeing about something so basic as whether $\phi$ creates a particle in the comments!) I found the early chapters in Peskin hard going for this exact reason- it's much better when you get to phenomenology and the physics is less opaque. If you want a book you can't argue with, try Weinberg- but this does come at the price of taking twice as long to cover the material, unfortunately in a rather idiosyncratic notation that makes it hard to dip in and out of.
In particle physics experiments we measure the cross section for particle interactions which depends on the S-matrix. The LSZ (Lehmann-Symanzik-Zimmermann reduction formula), which relates the Lorentz invariant S-matrix elements $$\langle f| S|i\rangle$$ for $$n$$ asymptotic momentum eigenstates to an expression involving the quantum fields $$\phi(x)$$: $$\langle f|S |i\rangle =\left[i\int d^4x_1\square \left(+m^2\right) e^{-i p_1 x_1}\right]\cdots \left[i\int d^4x_n\square \left(+m^2\right) e^{+i p_n x_n}\right]\times \langle \Omega |T\left\{\phi\left(x_1 \right) \phi \left(x_2 \right) \phi\left(x_3 \right) \cdots \phi \left(x_n\right)\right\}|\Omega \rangle$$ The $$T \{\cdots \}$$ refers to the time ordered product and it indicates that all operators should be ordered so that those at later times are always on the left of those at earlier times. E.g. $$T \left\{ \phi \left(x_1\right) \phi \left(x_2\right)\right\}=\phi\left(x_2 \right) \phi \left(x_1 \right)$$ if $$t_2>t_1$$ regardless of whether $$\phi \left(x_1 \right)$$ and $$\phi \left(x_2 \right)$$ commute or not. However, if $$x_1$$ and $$x_2$$ are space like separated then one can change to a different frame which reverses the time ordering. I.e. if we could have $$t_2>t_1$$ in one frame and $$t_1>t_2$$ in another. Therefore, for the S-matrix to be Lorentz-invariant (i.e frame independent) we require that $$[\phi(x_1),\phi(x_2)]=0$$ when $$x_1$$ and $$x_2$$ are space like separated.
|
2019-08-24 15:34:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.80088210105896, "perplexity": 372.5488410667221}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321160.93/warc/CC-MAIN-20190824152236-20190824174236-00089.warc.gz"}
|
https://math.stackexchange.com/questions/1050906/construct-a-sequence-of-continuous-functions-which-converges-pointwise-to-lflo/1050913
|
# Construct a sequence of continuous functions which converges pointwise to $\lfloor x \rfloor$
Suppose $f(x)=\lfloor x \rfloor$ for $x \geq 0$. Define a sequence of functions $(f_n(x))_{n \geq 1}$ where
$f_n(x) = \left\{ \begin{array}{lr} x^n & : x \in [0,1)\\ (x-1)^n+1 & : x \in [1,2)\\ (x-2)^n+2 & : x \in [2,3)\\ \vdots \\ \end{array} \right.$
Questions:
$1)$ Is $f_n(x)$ continuous for all $x \geq 0$?
$2)$ Does the function $f_n(x)$ converge pointwise to $\lfloor x \rfloor$?
If yes to both questions above, can we write $f_n(x)$ in a single function instead of piece-wise function?
My guess: Yes to both questions. But I am unable to express $f_n(x)$ in a single function.
• Piecewise functions are functions; $f_n$ is already expressed as a "single function". – Daniel Hast Dec 4 '14 at 3:54
• Perhaps a formula as $f_n(x)=(x-\lfloor x \rfloor)^n+\lfloor x \rfloor$ ? – Kelenner Dec 4 '14 at 7:38
• Kelenner is correct. I would recommend you go back and use this definition to make the answer to (1) and (2) simpler. Given $0 \leq (x−⌊x⌋)<1$, show that $(x−⌊x⌋)^n$ must converge to $0$. Although the fractional part function is not defined, you can further simplify $f_n(x)$ to ${x}^n+⌊x⌋$. – Display name Apr 19 '17 at 22:36
• Can't unedit previous comment, I meant ${\{x}\}^n+⌊x⌋$ – Display name Apr 19 '17 at 22:42
Note that $(x-k)^n + k \rightarrow k$ for $x \in [k,k+1)$, since $0 \leq (x-k) < 1$.
|
2019-10-15 18:45:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9595581889152527, "perplexity": 252.61918958089896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660231.30/warc/CC-MAIN-20191015182235-20191015205735-00030.warc.gz"}
|
https://safe.nrao.edu/wiki/bin/view/Main/CalibrationProjectCharter?cover=print;
|
# Calibration Project Charter
Project Manager: ToneyMinter
Project Scientist: JimBraatz
Project Tracking Number: This should be obtained by GBT program manager
## Introduction
### Project background
Currently, GBTIDL and sdfits use a scalar value for and subsequently for . This introduces baseline structures into the data. It also causes systematic errors in the calibration. At band edges this can keep multiple spectral windows placed side-by-side from agreeing in their flux level. These problems can be greatly reduced by the use a "vector" , , etc.
Observations looking for weak and/or broad spectral lines are affected by the residual baselines left by the current standard, calibration. These observations include high-redshifted CO observations, H 2p-2s fine structure line (both Galactic and red-shifted Epoch of Re-ionization), and the search for positronium. When several spectral windows are setup to be contiguous in freuqency, the resulting data do not line up correctly in their fluxes using scalar calibrations and they should line up once vector calibrations are used. Also affected are the cases where the source has a significant amount of continuum emission.
Additionally, GBTIDL calibration procedures currently use default opacities representative of good weather, but fixed at a constant for any given frequency. These defaults can introduce an error in the flux density scale. The problem could be alleviated by having the GBTIDL calibration functions call on the cleo weather database to retrieve an opacity more representative of the actual weather conditions at the time of the osbervation. A procedure to reduce GBT tipping scans would also help the observer calibrate flux density more accurately.
### Project Justification
It is known that values have significant frequency structure. The current calibration scheme uses scalar values for and :
with
.
As one can easily see, even if the observed signal spectrum as a function of frequency is offset from the observed reference spectrum as a function of freuqency by a constant value, the structure of will not be taken into account and baseline structure will be artificially introduced into the resulting spectrum.
By switching to a frequency dependent and calibration scheme, the artificially introduced baseline structure can be greatly reduced. Also, observational data, combined with observations and vector , will lead to significant improvements in the calibration of GBT data. If this is not done then the scientific capabilities of the GBT will remain hindered to observations of weak spectral lines, broad spectral lines and spectroscopic observations of sources with strong continuum emission.
Several cases where vector will improve the calibration are:
• For observations where the source has strong continuum emission, the induced baseline structure from using a scalar can inhibit the detection of broad spectral line. The use of a vector will improve our ability to detect weak, broad lines.
• At high frequencies and in marginal weather when there can be a change between an "on" and an "off", scalar will introduce baseline structure. The use of a vector will improve high frequency observing in marginal weather, which will open up more time for high frequency observing.
• easily can vary by 50% in some receivers. Any spectral line that is not centered exactly in the middle of the bandpass could have its calibration be in error by up to 50% as a result of using a scalar . This is an unacceptably large amount of error in the calibration, and the observer may not even be aware of its existence.
• When using a spectral line backend for continuum observations you get the wrong flux and spectral index if you use a scalar . The use of a vector will give more reasonable results.
## Overview of Deliverables
• A design document and MRs for the calibration database.
• A calibration database file that accompanies the sdfits file for the project.
• New GBTIDL get(ps,nod,fs, etc.) routines that use vector Tcal, Tsys, etc. within the calibration.
• A new GBTIDL routine to reduce Scal observations and to place the derived Tcal values into the calibration file.
• GBTIDL routines to get/put data from the calibration database file.
• GBTIDL documentation updated.
• An interface with the Cleo Weather Predictions to allow "default" opacity corrections.
• A GBTIDL routine to reduce tipping scan data to provide a "measured" opacity.
## Specific Project Objectives & Success Criteria
• Success will result in:
• The ability to use vector calibration with position switched, nodding, or frequency switching data.
• The ability to reduce Scal observations and to add the results to the calibration database.
• sdfits writing a calibration database file with the current fits file.
• Ability to use opacities determined from CLEO Weather Prediction program in calibration.
• Ability to reduce tipping scan data.
• Updated GBTIDL documentation
• Release of the new calibration routines within GBTIDL
## Primary Stakeholders & Roles
• Observers: the customer
• Toney Minter: Project Manager and programing and testing
• Jim Braatz: Project Scientist and programing and testing
• Bob Garwood and/or Paul Marganian: sdfits and calibration database file programing
• Ron Maddalena: algorithm development, CLEO interface and guidance
• Karen O'Neil
## Key Assumptions
• Creation of the MR for the format and access to the calibration database will be done by T.M. and J.B.
• No more than 3 FTE weeks will be needed to program the calibration database and sdfits
• All work in GBTIDL will be done "at a high level" and by T.M. and J.B.
• If the calibration database and sdfits changes cannot be made in three weeks, then a work-around of copying the receiver fits file as the calibration database file will be used (minor sdfits change) along with keeping the calibration data in a global variable within GBTIDL. The cost will be that you will have to redo any Scal, etc. determinations each time you (re)start GBTIDL. The GBTIDL code needed will not depend on this other than the new get/put functions for the calibration database.
• T.M. will have to spend some time learning the GBTIDL code.
• Out Of Scope
• Calibration corrections due to wind moving the telescope.
• Calibration corrections from the servo system errors.
• Correction for other weather factors besides the opacity.
• Staffing Requirements:
• Toney Minter: 25%
• Jim Braatz: 25%
• Bob Garwood and/or Paul Marganian: <= 3 FTE weeks
• Communications
• meet with Scientific Staff at beginning of project to present plan and ask for comments and ideas
• Bi-weekly meeting of all involved staff
• Email and phone conversation between T.M. and J.B. as needed.
• Preliminary Schedule:
• PHASE I
• mid-November 20007: WBS created
• end November 2007: meet with Scientific Staff, update plan
• end of December 2007: MRs created
• PHASE II
• January 2008 : programing begins
• end of January 2008: calibration database programing changes done in both GBTIDL and SDFITS
• end of February 2008: GBTIDL programing done, testing begins
• end of March 2008 : work complete
• Prior Documentation That Is Relavent
## Signatures
The following people agree that the above information is accurate:
Preliminary project team members:
ToneyMinter
JimBraatz
BobGarwood
PaulMarganian
GB Program manager:
KarenONeil
Use to sign.
-- ToneyMinter - 07 Nov 2007
Error during latex2img:
ERROR: can't find latex at /usr/bin/latex
INPUT:
\documentclass[fleqn,12pt]{article}
\usepackage{amsmath}
\usepackage[normal]{xcolor}
\setlength{\mathindent}{0cm}
\definecolor{teal}{rgb}{0,0.5,0.5}
\definecolor{navy}{rgb}{0,0,0.5}
\definecolor{aqua}{rgb}{0,1,1}
\definecolor{lime}{rgb}{0,1,0}
\definecolor{maroon}{rgb}{0.5,0,0}
\definecolor{silver}{gray}{0.75}
\usepackage{latexsym}
\begin{document}
\pagestyle{empty}
\pagecolor{white}
{
\color{black}
\begin{math}\displaystyle $S_{sig}(f)$\end{math}
}
\clearpage
{
\color{black}
\begin{math}\displaystyle $T(f) = \left(S_{sig}(f)-S_{ref}(f) \over S_{ref}(f)\right) * T_{sys}$\end{math}
}
\clearpage
{
\color{black}
\begin{math}\displaystyle $T_{Cal}$\end{math}
}
\clearpage
{
\color{black}
\begin{math}\displaystyle $T_{sys}$\end{math}
}
\clearpage
{
\color{black}
\begin{math}\displaystyle $T_{Cal}(f)$\end{math}
}
\clearpage
{
\color{black}
\begin{math}\displaystyle $T_{cal}$\end{math}
}
\clearpage
{
\color{black}
\begin{math}\displaystyle $T_{sys}(f)$\end{math}
}
\clearpage
{
\color{black}
\begin{math}\displaystyle $T(f)$\end{math}
}
\clearpage
{
\color{black}
\begin{math}\displaystyle $T_{sys} = \left(S_{calon}(f)+S_{caloff}(f) \over S_{calon}(f)-S_{caloff}(f)\right)*T_{cal} - \left(1\over 2\right)*T_{cal}$\end{math}
}
\clearpage
{
\color{black}
\begin{math}\displaystyle $T_{cal}(f)$\end{math}
}
\clearpage
{
\color{black}
\begin{math}\displaystyle $S_{ref}(f)$\end{math}
}
\clearpage
{
\color{black}
\begin{math}\displaystyle $S_{cal}$\end{math}
}
\clearpage
\end{document}
STDERR:
This topic: Main > TWikiUsers > ToneyMinter > CalibrationProjectCharter
Topic revision: 2009-10-15, ChrisClark
Copyright © by the contributing authors. All material on this collaboration platform is the property of the contributing authors.
Ideas, requests, problems regarding NRAO Public Wiki? Send feedback
|
2022-05-26 06:11:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4061164855957031, "perplexity": 5352.636999810017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662601401.72/warc/CC-MAIN-20220526035036-20220526065036-00614.warc.gz"}
|
https://uslugibudowlane-wejherowo.pl/at-25-c-the-vapour-pressure-of-methyl-alcohol/28882.html
|
CAS:78-93-3
CAS:108-94-1
CAS:67-64-1
CAS:64-19-7
CAS:141-78-6
CAS:108-88-3
CAS:71-43-2
CAS:64-17-5
CAS:67-56-1
### At25^(@)"C", the vapour pressure of methyl alcohol is 96.0 …
At25^(@)"C", the vapour pressure of methyl alcohol is 96.0 torr. What is the mole fraction - At25^(@)"C", the vapour pressure of methyl alcohol is 96.0 torr. What is the mole
### The vapour pressure of ethyl alcohol at 25^@C is 59.2 torr. The vapour pressure of a solution of urea in ethyl alcohol …
27/6/2021· The vapour pressure of ethyl alcohol at 25^@C is 59.2 torr. The vapour pressure of a solution of urea in ethyl alcohol is 51.3 torr. What is the molality of Getting Image Please Wait Login NEW Online Classes NEW Quiz
### Liquids - Vapor Pressures - Engineering ToolBox
lower vapor pressure are called heavy components Temperature and Vapor or Saturation Pressure for some common Fluids At atmospheric pressure saturation temperature of water : 100 oC (212 oF) ethyl alcohol : 78.5 oC (173 oF) Liquids - Vapor Pressure Approximate vapor pressure for temperatures in the range 20 oC - 25 oC (68 oF - 77 oF).
### At 25^(@)C, the vapour pressure of pure methyl alcohol is 92.0 …
At 25^(@)C, the vapour pressure of pure methyl alcohol is 92.0 torr. Mol fraction of CH_(3)OH in a solution in which vapour pressure of CH_(3)OH is 23.
### At 25^oC, the vapour pressure of methyl alcohol is 96.0 torr.
At 25 o C, the vapour pressure of methyl alcohol is 96.0 torr. What is the mole fraction of CH 3 OH in a solution in which the (partial) vapor pressure of CH 3 OH is 23.0 torr at 25
### At 25^oC, the vapour pressure of methyl alcohol is 96.0 torr.
At 25 o C, the vapour pressure of methyl alcohol is 96.0 torr. What is the mole fraction of CH 3 OH in a solution in which the (partial) vapor pressure of CH 3 OH is 23.0 torr at 25
### At25^(@)"C", the vapour pressure of methyl alcohol is 96.0 …
25/6/2019· At 25 ∘ C 25 ∘ C, the vapour pressure of methyl alcohol is 96.0 torr. What is the mole fraction of CH3OH CH 3 OH in a solution in which the (partial) vapor pressure of CH3OH CH 3 OH is 23.0 torr art 25∘ C 25 ∘ C? class-12 solutions 1 Answer 0 votes answered Jun 25, 2019 by Kumari Prachi (82.7k points) selected Jun 25, 2019 by Anshuman Sharma
### One litre flask containing vapour of methyl alcohol (Mol mass 32) at a pressure of 1 atm. And 25^(@)Cwas evacuated till the final pressure …
25/9/2021· One litre flask containing vapour of methyl alcohol (Mol mass 32) at a pressure of 1 atm. And 25∘ C 25 ∘ C was evacuated till the final pressure was 10−3 10 - 3 mm. How many molecules of methyl alcohol were left in the flask ? class-11 states-of-matter gases-and-liquids 1 Answer 0 votes answered Sep 25, 2021 by AasaSinha (73.3k points)
### At 25C, the vapour pressure of methyl alcohol is 96.0 torr. What is …
28/2/2021· At 25ºC, the vapour pressure of methyl alcohol is 96.0 torr. What is the mole fraction of CH3OH in a solution in which the (partial) vapour pressure of CH3OH is 23.0 torr at …
### One litre flask containing vapour of methyl alcohol (Mol mass 32) at a pressure of 1 atm. And 25^(@)Cwas evacuated till the final pressure …
25/9/2021· One litre flask containing vapour of methyl alcohol (Mol mass 32) at a pressure of 1 atm. And 25∘ C 25 ∘ C was evacuated till the final pressure was 10−3 10 - 3 mm. How many molecules of methyl alcohol were left in the flask ? class-11 states-of-matter gases-and-liquids 1 Answer 0 votes answered Sep 25, 2021 by AasaSinha (73.3k points)
### At 40 degrees celsius the vapour pressure in torr of methyl alcohol ethyl alcohol …
8/4/2019· At 40 degrees celsius the vapour pressure in torr of methyl alcohol ethyl alcohol solution is represented by the equation p is equal to 1 19 x + 135 where x is the mole fraction of methyl alcohol then the value of limit x tends to 1 by x is Expert-Verified Answer 28 people found it helpful techtro The methanol and ethanol torr is P = 119x + 135
### The vapour pressure of ethyl alcohol at ${{25}^{\\text{ 0}}}\\text{C}$is $\\text{59}\\text{.2 torr}$. The vapour pressure …
CBSE Chemistry Grade 12 Solutions Answer The vapour pressure of ethyl alcohol at 25 0 C is 59 .2 torr. The vapour pressure of a solution of non-volatile solute urea, 51 .3 Torr. What is …
### Methanol - Wikipedia
Methanol (also called methyl alcohol and wood spirit, amongst other names) is an organic chemical and the simplest aliphatic alcohol, with the formula C H 3 O H (a methyl group …
Occurrence· Safety· Appliions· Production· History
• Wht are the vapour pressure of pure methyl alcohol and pure …/cite>
14/4/2019· The vapor pressure of pure methyl alcohol and pure ethyl alcohol at 40°C is 234 and 135 respectively. Explanation: Total pressure be given as p = 199x+135 Partial pressure …
### At 40 degrees celsius the vapour pressure in torr of methyl alcohol ethyl alcohol …
8/4/2019· At 40 degrees celsius the vapour pressure in torr of methyl alcohol ethyl alcohol solution is represented by the equation p is equal to 1 19 x + 135 where x is the mole fraction of methyl alcohol then the value of limit x tends to 1 by x is Expert-Verified Answer 28 people found it helpful techtro The methanol and ethanol torr is P = 119x + 135
### At 400C, the vapour pressure (in torr) of methyl alcohol (A) and ethyl alcohol …
At 400C, the vapour pressure (in torr) of methyl alcohol (A) and ethyl alcohol (B) solution is represnted by:P = 120 XA + 138; where XA is mole fraction of methyl alcohol. The value of …
### X2 X1 7. At 25°C, the vapour pressure of - Physical Chemistry
At 25 C, the vapour pressure of pure methyl alcohol is 92.0 torr. Mol fraction of CH,OH in a solution in which vapour pressure of CH,OH is 23.0 torr at 25 C, is: (1) 0.25 (2) 0.75 (3) 0.50 (4) 0.66 < PreviousNext > Answer PA = PANA 23=92XNA NA 23 92 -0.25 Ans (1)
### At 40°c the vapour pressure in torr? Explained by FAQ Blog
So, for example, methanol has a vapor pressure of 94 torr at 25 C, and ethanol has a vapor pressure of 44 torr at 25 C. What is the vapor pressure of propanone at 50 C? Explanation: If we work it out with direct proportions, the vapor pressure of propanone is 56 degrees Celsius .
### The vapour pressure of methyl alcohol is 40 mmHg at 5?C. Use …
What is w when 3.93 kg of H2O(l), initially at 25.0 ºC, is converted into water vapour at 157 ºC against a constant external pressure of 1.00 atm? Assume that the vapour behaves ideally and that the density of liquid water is 1.00 g/mL. chemistry What is
### At 25°C, the vapour pressure of methyl alcohol is 96.0 torr.
17/4/2021· At 25°C, the vapour pressure of methyl alcohol is 96.0 torr. What is the mole fraction of CH3OH in a solution in which the (partial) vapor pressure of CH3OH is 23.0 torr at 25°C? venomousfrenemy is waiting for your help. Add your answer and earn points. Answer 1 person found it helpful suvarnahakke1 Explanation:
### At 25C, the vapour pressure of methyl alcohol is 96.0 torr. What is the mole fraction of CH3OH in a solution in which the (partial) vapour
28/2/2021· At 25ºC, the vapour pressure of methyl alcohol is 96.0 torr. What is the mole fraction of CH3OH in a solution in which the (partial) vapour pressure of CH3OH is 23.0 torr at …
### The vapour pressure of ethyl alcohol at ${{25}^{\\text{ 0}}}\\text{C}$is $\\text{59}\\text{.2 torr}$. The vapour pressure …
CBSE Chemistry Grade 12 Solutions Answer The vapour pressure of ethyl alcohol at 25 0 C is 59 .2 torr. The vapour pressure of a solution of non-volatile solute urea, 51 .3 Torr. What is the molarity of the solution? Answer Verified 216.3k + views
### At 40°c the vapour pressure in torr? Explained by FAQ Blog
So, for example, methanol has a vapor pressure of 94 torr at 25 C, and ethanol has a vapor pressure of 44 torr at 25 C. What is the vapor pressure of propanone at 50 C? Explanation: If we work it out with direct proportions, the vapor pressure of propanone is 56 degrees Celsius .
### The vapour pressure of methyl alcohol at 298 K is 0.158 bar. The vapour pressure …
12/7/2020· The vapour pressure of methyl alcohol at 298 K is 0.158 bar. The vapour pressure of this liquid 124 views Jul 12, 2020 The vapour pressure of methyl alcohol at 298 K is
: Doubtnut: 157
• The vapour pressure of methyl alcohol is 40 mmHg at 5°C. Use …strong>the-vapour-pressure-of-methyl-alcohol-is-40
The boiling point is the temperature at which the vapor pressure is 1 atm = 760 mm Hg, which is 19 times higher. You will need the heat of vaporization of methyl alcohol to apply the Clausius-Clapeyron equestion. It is 35278 kJ/kmol. Call it Hv. You will also need the molar gas constant R, which is 8.314 kJ/ (mole*K)
### The vapour pressure of methyl alcohol at 298 K is 0.158 bar. The vapour pressure …
The vapour pressure of methyl alcohol at 298 K is 0.158 bar. The vapour pressure of this liquid 124 views Jul 12, 2020 The vapour pressure of methyl alcohol at 298 K is
### Solved Calculate the vapour pressure at 26.4 °C if the | Chegg
When the same concentration of methyl alcohol is added to1 L of water, the boiling point decreases. Explain. Question : Calculate the vapour pressure at 26.4 °C if the vapour pressure of 1- propanol is 25 torr at 10.7 °C, Take heat of vaporization as 47.2 kJ/mol and …
### At 25^(@)C, the vapour pressure of pure methyl alcohol is 92.0 torr. Mol fraction of CH_(3)OH in a solution in which vapour pressure …
At 25^(@)C, the vapour pressure of pure methyl alcohol is 92.0 torr. Mol fraction of CH_(3)OH in a solution in . 0.25 B. 0.75 C. 0.5 0 D. 0.66 Login Remeer Register Test JEE NEET Home Q&A Unanswered Ask a Question Learn Ask a Question in 0
### A mixture of ethyl alcohol and propyl alcohol has a vapour pressure of 290 mm at 300 K. The vapour pressure of propyl alcohol …
The vapour pressure of propyl alcohol is 200 mm. If the mole fraction of ethyl alcohol is 0.6, its vapour pressure (in mm) at the same temperature will be: Q. A solution containing ethyl alcohol and propyl alcohol has a vapour pressure of 290 mm Hg at 30∘C. Find the vapour pressure of pure ethyl alcohol if its mole fraction in the solution is 0.65.
### 67-56-1 CAS | METHANOL | Alcohols | Article No. 00196
Methyl alcohol Article No. 00196 Grade AR/ACS Purity 99.8% CAS No. 67-56-1 Molecular Formula 25 Liters (0025K) 200 Liters (0200K) 500 ml (005PT) 1000 ml (010PT) Specifiions Appearance A clear colorless liquid Colour scale Max 10 APHA Min 99.8
### Vapor Pressure of Methanol from Dortmund Data Bank
Vapor Pressure of Methanol from Dortmund Data Bank Dortmund Data Bank Vapor Pressure of Methanol The experimental data shown in these pages are freely available and have been published already in the DDB Explorer Edition. The data represent a small sub list of all available data in the Dortmund Data Bank.
### At 25^{\circ} \mathrm{C}, the vapour pressure of pure methyl …
At 25∘C, the vapour pressure of pure methyl alcohol is 92.0 Torr. Mole fraction of CH3OH in a solution in which vapour pressure of CH3OH is 23.0 Torr at 25∘C, is A 0.25 B 0.75 C 0.50 D 0.66 Difficulty level- medium 5530 Attempted students Solutions ( 1) (a) P CH3OH = pCH3OHX CH3OH∴ X CH3OH = 9223 = 41 = 0.25 60 Share
• Guan di miao, Yulong town
XingYang, Henan, China
7*24 Hours 365 Days
|
2023-03-31 00:00:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4410017728805542, "perplexity": 5413.946978886018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00048.warc.gz"}
|
https://www.zbmath.org/?q=ai%3Ahunt.harry-b-iii+ai%3Atosic.predrag-t
|
# zbMATH — the first resource for mathematics
Gardens of Eden and fixed points in sequential dynamical systems. (English) Zbl 1017.68055
Discrete models: combinatorics, computation, and geometry. Proceedings of the 1st international conference (DM-CCG), Paris, France, July 2-5, 2001. Paris: Maison de l’Informatique et des Mathématiques Discrètes (MIMD), Discrete Math. Theor. Comput. Sci., Proc. AA, 95-110, electronic only (2001).
Summary: A class of finite discrete dynamical systems, called Sequential Dynamical Systems (SDSs), was proposed in as an abstract model of computer simulations. Here, we address some questions concerning two special types of the SDS configurations, namely Garden of Eden and Fixed Point configurations. A configuration $$C$$ of an SDS is a Garden of Eden (GE) configuration if it cannot be reached from any configuration. A necessary and sufficient condition for the non-existence of GE configurations in SDSs whose state values are from a finite domain was provided in a previous paper. We show this condition is sufficient but not necessary for SDSs whose state values are drawn from an infinite domain. We also present results that relate the existence of GE configurations to other properties of an SDS. A configuration $$C$$ of an SDS is a fixed point if the transition out of $$C$$ is to $$C$$ itself. The Fixed Point Existence (or FPE) problem is to determine whether a given SDS has a fixed point. We show that the FPE problem is NP-complete even for some simple classes of SDSs (e.g., SDSs in which each local transition function is from the set {NAND, XNOR}). We also identify several classes of SDSs (e.g., SDSs with linear or monotone local transition functions) for which the FPE problem can be solved efficiently.
For the entire collection see [Zbl 0985.00015].
##### MSC:
68Q17 Computational difficulty of problems (lower bounds, completeness, difficulty of approximation, etc.) 68Q25 Analysis of algorithms and problem complexity
Full Text:
|
2021-01-24 03:28:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43690457940101624, "perplexity": 1151.8138428298469}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703544403.51/warc/CC-MAIN-20210124013637-20210124043637-00330.warc.gz"}
|
https://sharelatex.psi.ch/learn/Chinese
|
## Class files
L a T e X supports many worldwide languages by means of some special packages. In this article is explained how to import and use those packages to create documents in Chinese .
# Introduction
Chinese language needs a special document class since the encoding and fonts are quite unique.
\documentclass{ctexart}
\setCJKmainfont{simsun.ttf}
\setCJKsansfont{simhei.ttf}
\setCJKmonofont{simfang.ttf}
\begin{document}
\tableofcontents
\begin{abstract}
\end{abstract}
\section{ 前言 }
\section{关于数学部分}
\end{document}
The document type is a ctexart (Chinese TeX Article), this is the recommended manner of typing Chinese documents, but is not the only one and may have some limitations. The next sections will clearly explain these and other environments for Chinese L a T e X typesetting.
# Simplified Chinese with ctexart
Modern computer systems allow you to input letters of national alphabets directly from the keyboard. In order to handle characters for Simplified Chinese typesetting you can use the ctexart document class.
\documentclass{ctexart}
This document class is pretty much like babel , but for Chinese language. You will not only be able to typeset Chinese characters, but also define elements such as "Abstract" and the "Table of Contents" that will be properly translated. There's a drawback though: The additional parameters that can be passed to the document class definition, such as the paper size, are very limited. Nevertheless, you can define the document size, for instance, by means of the geometry package .
You can import external fonts to your document, either dowloading them to the same directory of your L a T e X file, as in the example, or using system-wide fonts. For instance, if the IPAGothic font is already installed on your system, you can use it in your document with.
\setCJKmainfont{IPAGothic}
Additional fonts for some parts of the document can be established. To set a specific font for elements that use sans font style use \setCJKsansfont{} and for elements that are displayed in monospace font , such as verbatim environments, use the command \setCJKmonofont{} . If external fonts are used your document must be compiled with X Ǝ L a T e X .
Notice that the last line in the example at the introduction is actually using Traditional Chinese characters. This is accomplished by the SimSun font, used in this document, because this font includes them. So, with the right font, you can actually typeset your document in traditional Chinese, just keep in mind that automatic elements will be written with simplified symbols.
# Traditional and Simplified Chinese, the CJK package
As mentioned in the introduction , it's possible to typeset Chinese characters using other packages, you can also use Traditional and Simplified Chinese characters in the same document, you can even add Latin characters.
## XeLaTeX
Other easy way to create Chinese documents is by importing the xeCJK package and setting up your favourite font.
\documentclass{article}
\usepackage{xeCJK}
\setCJKmainfont{simsun.ttf}
\setCJKsansfont{simhei.ttf}
\setCJKmonofont{simfang.ttf}
\begin{document}
\section{前言}
\section{关于数学部分}
\vspace{0.5cm}
\end{document}
The command \usepackage{xeCJK} imports xeCJK , this package allows to use external fonts in your document, these fonts are imported using the same syntax explained in the previous section. Again, if the imported font includes traditional symbols these can be used.
In this case elements are not translated as in the previous example, but sometimes the final rendered document may look a bit more sharp. Also, you can use any document class you want (book, report, article and so on) so your document layout is not constrained to a single document type.
The xeCJK package only works when compiled with X Ǝ L a T e X .
## pdfLaTeX
The CJTK package can also be used to generate a document with pdfLaTeX. You may not be able to use external fonts, but here you can use traditional and simplified characters as well as Latin characters. Perfect for documents in English with bits of Chinese text or vice-versa.
\documentclass{article}
\usepackage{CJKutf8}
\begin{document}
\begin{CJK*}{UTF8}{gbsn}
\section{前言}
\section{关于数学部分}
\end{CJK*}
\vspace{0.5cm} % A white space
\noindent
You can also insert Latin text in your document
\vspace{0.5cm}
\noindent
\begin{CJK*}{UTF8}{bsmi}
\end{CJK*}
\end{document}
The line \usepackage{CJKutf8} imports CJKutf8 which enables utf8 encoding for Chinese, Japanese and Korean fonts.
In this case every block of Chinese text must be typed inside a \begin{CJK*}{UTF8}{gbsm} environment. In this environment UTF8 is the encoding and gbsm is the font to be used. You can use gbsm or gkai fonts for simplified characters, and bmsi or bkai for traditional characters.
|
2021-06-18 01:44:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8498326539993286, "perplexity": 2124.9027462595604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487634616.65/warc/CC-MAIN-20210618013013-20210618043013-00299.warc.gz"}
|
http://mathoverflow.net/questions/83015/what-does-bg-classify-i-e-what-is-a-principal-fibration
|
# what does BG classify? i.e. what is a principal fibration?
I'm looking for cold hard facts about just what $BG$ classifies, if $G$ is any grouplike topological monoid. I have some vague idea that $[X,BG]$ is in bijection with equivalence classes of "principal fibrations" over $X$. What exactly is a principal fibration?
May's Classifying Spaces and Fibrations looks like a good source, but I'm having trouble teasing out whether his notion of $G\mathcal U$-fibration (which he proves is classified by $BG$) is equivalent to the notion of a fibration $E \rightarrow B$ with a fiberwise right $G$-action giving weak equivalences $G \rightarrow E_b$, $g \mapsto xg$ for each point $x$ in the fiber $E_b$. (I can show the $\Rightarrow$ but not the $\Leftarrow$. Maybe there needs to be another condition in my naive description.)
Also, any grouplike monoid $G$ is weakly equivalent to $\Omega BG$, so "principal fibrations," whatever they are, should correspond (in some sense I want to make precise) to pullbacks of the path-loop fibration over $BG$.
-
The other question is similar, but I'm looking for a different answer. In particular, I want to describe the objects being classified as fibrations with extra structure, or be told that this is not possible. – Cary Dec 9 '11 at 0:47
Yes, this is not the same as the other question: in this one, the monoid G has a topology. – Charles Rezk Dec 9 '11 at 2:58
You need to be careful about concordance classes vs isomorphism classes in the case you are considering. Homotopy classes of maps classify concordance classes (by definition), but isomorphism classes may be different. – David Roberts Dec 9 '11 at 5:05
Bundles $P_0 \to X$ are concordant if there is a bundle $Q \to X\times [0,1]$ such that the pullback of $Q$ to $X \times \{i\}$ is isomorphic to $P_i$. So if there is a classifying map for both of the $P_i$ and they are homotopic, then they are concordant. Alternatively, if the $P_i$ are concordant and there is a classifying map for $Q$, then there are classifying maps for the $P_i$ which are homotopic. Thus if we have classifying maps for all 'bundles', then homotopy of classifying maps is equivalent to concordance. I don't know of a general reference. – David Roberts Dec 9 '11 at 7:45
The definition of a principal $G$ bundle when $G$ is a topological group is in Husemoller's 'Fiber bundles' p41. If you don't depend on the point set properties of $G$ too much, then since $G$ is a loop space it is weakly equivalent to a topological group $\tilde{G}$ (mathoverflow.net/questions/51777/…). It's been a while since I looked at Peter's book, but I would guess that the principal $\tilde{G}$ fibrations classified by $B\tilde{G}$ biject with homotopy classes of maps to $BG$ when $G$ is well-pointed. – Justin Noel Dec 9 '11 at 11:07
In view of the references to my Memoir, Classifying spaces and fibrations, in other answers, I guess I should answer too. The requested answer is implicit but not quite explicit there. Fix a grouplike topological monoid $G$. Maybe assume for simplicity that its identity element is a nondegenerate basepoint (no loss of generality by 9.3). Define a $G$-torsor to be a right $G$-space $X$ such that the $G$-map $G\longrightarrow X$ that sends $g$ to $xg$ is a weak equivalence for $x\in X$. Let $\mathcal{G}$ be the category of $G$-torsors and maps of $G$-spaces between them. The Memoir defines a $\mathcal{G}$-fibration in terms of the $\mathcal{G}$-CHP, which is equivalent to having a $\mathcal{G}$-lifting function. Cary asks whether that notion is equivalent to an a priori weaker notion. The answer depends on what equivalence'' means. The Memoir insists on $\mathcal{G}$-fibrations, but it defines an equivalence (6.1) to be a $\mathcal{G}$-map over the base space, where a $\mathcal{G}$-map only has to be a map of $G$-torsors on fibers. (More precisely, it takes the equivalence relation generated by such maps.) Using $\mathcal{G}$-fibrations allows one often to replace that notion of equivalence by the nicer one of $\mathcal{G}$-fiber homotopy equivalence. But with the equivalence relation as given, any $\mathcal{G}$-map that is a quasifibration is equivalent to a $\mathcal{G}$-fibration, by the $\Gamma$-construction in Section 5. Therefore the classification theorem remains true allowing all $\mathcal{G}$-spaces that are quasifibrations, which of course includes Cary's preferred notion of a $G$-fibration.
-
Thank you! This notion is really rather nice; thanks for helping me understand it. – Cary Dec 10 '11 at 18:18
When $G$ is discrete, another sort of answer is provided by the paper of Michael Weiss:
What does the classifying space of a category classify? Homology, Homotopy and Applications 7 (2005), 185–195.
Weiss shows that for a small category $C$, the classifying space $BC$ classifies sheaves of $C$–sets with representable stalks.
The equivalence relation on such sheaves over a space $X$ is given by concordance: given two such sheaves of the above kind, say $\cal F_0, \cal F_1$, one says they are concordant if there is a sheaf of the above kind over $X\times [0,1]$ which restricts to $\cal F_i$ on $X \times {i}$, $i \in {0,1}$.
I'm wondering if there's an analog of Weiss' result which holds for an arbitrary topological category. This would give a result for a general topological monoid.
-
Late response, but Moerdijk's Classifying spaces and classifying topoi studies various notions of classifying space for topological categories. – Zhen Lin May 8 '14 at 7:18
This is not an answer but an attempt to clarify the question.
In the category of right $G$-spaces (with weak equivalences being the maps that as maps of spaces are weak equivalences) let us single out those objects $X$ for which there is a weak equivalence $G\to X$ (where $G$ has the usual right action). In other words, those such that for some $x$ the map $g\mapsto xg$ is a weak equivalence of spaces. If $G$ is grouplike then you can say "every" instead of "some" (as long as you remember to specify also that $X$ is not empty!).
Call these the "weak principal homogeneous spaces". Note that if $Y\to X$ is a weak equivalence of $G$-spaces and $X$ is of this kind, then $Y$ is as well.
I think the question might be something like this:
We would like to
(1) specify which weak principal homogeneous spaces will be allowed as fibers
(2) specify what we mean by "fibration" (locally trivial bundle? Serre fibration? quasifibration? ...)
and then consider as "principal $G$-fibrations" those maps $E\to B$ with fiber-preserving right $G$-action such the map is as in (2) and the fibers are as in (1), and then be able to say:
Homotopy classes of maps $B\to BG$ correspond bijectively with equivalence classes of principal $G$-fibrations on $B$. This requires that we also
(3) specify what we mean by an equivalence between two such principal fibrations on the same base.
When $G$ is a topological group then the standard thing is to say (1) the fibers should be isomorphic to $G$ as $G$-spaces (2) locally trivial fiber bundle (and local triviality respecting the $G$-action follows), (3) isomorphism.
If you want to stick with "isomorphism" for (3) in the more general case, then:
Since there is only one homotopy class $\star\to BG$, for (1) you are committed to choosing a single $G$-space $X$ and allowing as fibers only things isomorphic to $X$.
And by considering bundles over disks it seems that you are also committed to choosing "locally trivial bundle" in (2) (there are locally trivializations respecting the $G$-action).
And it seems that this $X$ had better be such that its automorphism group, say $\Gamma$, is also weakly equivalent to $G$, or more precisely such that when considered as a left $\Gamma$-space $X$ is a weak principal homogeneous space.
It will not work to choose $G$ itself as $X$ unless the group of invertible elements of $G$ is equivalent to $G$.
You are in luck if, for example, $G$ is a topological monoid that happens to admit a weak equivalence $G\to \Gamma$ to a topological group such that $\Gamma$ is algebraically generated by the image. But this is rare.
There is probably more then one right answer. Maybe you can allow all weak principal homogeneous spaces as fibers and use Serre fibrations (or maybe quasifibrations) and let equivalence between two such things over the base $B$ mean a map (respecting the map to $B$ and the $G$-action) that is a weak equivalence of total spaces, or equivalently of fibers. Does anybody know?
Note: There is always a group $\Gamma$ related to $G$ indirectly by weak equivalences $G\leftarrow ? \to \Gamma$, so that $BG\simeq B\Gamma$ represents principle $\Gamma$-bundles, but I don't think that's the kind of answer that's wanted.
-
Thank you! This is exactly what I'm looking for: some type of fibration/quasifibration with a fiberwise G-action, up to fiberwise maps commuting with this action. As I said, there's something similar to this in May's book. And you're also correct in guessing that I want to preserve G, or at least work with some monoid mapping into or out of G, rather than resorting to a zig-zag. – Cary Dec 9 '11 at 18:20
OK, so in your setup, the answers are (1): all principal homogeneous spaces; (2): quasifibrations, Serre fibrations, Hurewicz fibrations, OR G-fibrations; (3): fiberwise maps commuting with the G-action. Using May's answer, anything in (2) with a fiberwise right G-action is (3)-equivalent to May's notion of G-fibration, which is classified up to (3)-equivalence by BG. We get the equivalence by applying Γ, essentially the usual trick for replacing maps by fibrations. So now we have four distinct classification problems that are all solved by BG. – Cary Dec 10 '11 at 20:49
|
2015-05-05 01:23:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9269927740097046, "perplexity": 332.7029366614922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430455166292.61/warc/CC-MAIN-20150501043926-00088-ip-10-235-10-82.ec2.internal.warc.gz"}
|