https://tex.stackexchange.com/questions/126944/why-dont-these-figures-align-horizontally
# Why don't these figures align horizontally?

I am trying to use the subcaption package for horizontally aligning some figures, and I did as the documentation instructs, yet they end up vertically aligned. It really confuses me.

```latex
\documentclass{article}
\usepackage[font=small]{caption}
\usepackage{subcaption}
\usepackage{graphicx}
\usepackage{xltxtra}
\usepackage{fontspec}
\setmainfont[Ligatures=TeX]{Linux Libertine O}
\begin{document}
\begin{figure}[htp]
\centering
\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width=\linewidth]{fig/bed1.jpg}
\subcaption{Bedroom 1}
\end{minipage}

\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width=\linewidth]{fig/bed2.jpg}
\subcaption{Bedroom 2}
\end{minipage}
\caption{Two bedrooms}
\end{figure}
\end{document}
```

The second minipage is moved onto a new line because you have an extra line between the first \end{minipage} and the second \begin{minipage}. Check the documentation here http://mirror.ctan.org/macros/latex/contrib/caption/subcaption.pdf and you'll see the missing newline in the example. The following MWE will put them on the same line:

```latex
\documentclass{article}
\usepackage[font=small]{caption}
\usepackage{subcaption}
\usepackage{graphicx}
\begin{document}
\begin{figure}[htp]
\centering
\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width=\linewidth]{fig/bed1}
\subcaption{Bedroom 1}
\end{minipage}%
\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width=\linewidth]{fig/bed2}
\subcaption{Bedroom 2}
\end{minipage}
\caption{Two bedrooms}
\end{figure}
\end{document}
```

• I have removed the newline but the result is still the same. – qed Aug 6, 2013 at 13:13
• @CravingSpirit Did you compile this code? When I do (with the demo option to graphicx as I don't have the pictures) both subfigures are placed next to each other as advertised! Aug 6, 2013 at 13:19
• I see the problem now. I changed the width to 0.5; try it yourself and you will see it really doesn't work at that width, while if you remove the \centering line, everything will look as expected. Mysterious thing. – qed Aug 6, 2013 at 15:52
• @CravingSpirit Not really mysterious: with 0.5\linewidth both minipages don't fit on a line anymore, since there's also a space between them. With \centering they're placed below each other then. Without it you'll get the warning Overfull \hbox (2.22221pt too wide). If you add a % after the first \end{minipage} they'll fit again. Aug 6, 2013 at 16:05
• Ok. What is the % for? Thanks. – qed Aug 6, 2013 at 16:40

darthbith's answer is partially right; here is how I solved it:
1. remove the \centering line in the figure environment
2. remove the superfluous newline as darthbith suggested.

• The centering line should have no effect on whether or not there is a new line in the figure... Aug 6, 2013 at 13:37
• Indeed it has. But maybe it also has something to do with the command you use to compile the doc? I use xelatex. – qed Aug 6, 2013 at 14:16
• I use xelatex as well, but as with cgnieder, the \centering has no effect in terms of line breaks when I compile. Perhaps it has something to do with your fonts? I eliminated those in my MWE because they are not installed on my system. Aug 6, 2013 at 14:18
• The \centering has nothing to do with the problem posted in the question, see my other comment Aug 6, 2013 at 16:10
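The % asked about in the last comment suppresses the space token that TeX inserts at every line break; a minimal illustration (not from the thread, box contents are placeholders):

```latex
% TeX converts each end-of-line into an inter-word space, so two adjacent
% 0.5\linewidth minipages plus that space exceed \linewidth and wrap.
% A trailing % comments out the line break, so no space is inserted:
\begin{minipage}[t]{0.5\linewidth}A\end{minipage}% <- eats the newline
\begin{minipage}[t]{0.5\linewidth}B\end{minipage}
```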
2023-04-01 19:38:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7853955030441284, "perplexity": 1654.9447880240111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00471.warc.gz"}
https://spot.lrde.epita.fr/ipynb/parity.html
In [1]:
from IPython.display import display
import spot
spot.setup()

# Definitions and examples

In Spot a parity acceptance is defined by a kind, a style, and a numsets (number of acceptance sets):

• The numsets is the number of acceptance sets used by the parity acceptance.
• The kind can be either max or min. The parity kind is well defined only if the numsets is strictly greater than 1.
  • max odd 4: Inf(3) | (Fin(2) & (Inf(1) | Fin(0)))
  • min odd 4: Fin(0) & (Inf(1) | (Fin(2) & Inf(3)))
• The style can be either odd or even. The parity style is well defined only if the numsets is non-null.
  • max odd 4: Inf(3) | (Fin(2) & (Inf(1) | Fin(0)))
  • max even 4: Fin(3) & (Inf(2) | (Fin(1) & Inf(0)))

Some parity acceptance examples:

**numsets = 1:**
• max odd: Fin(0)
• max even: Inf(0)
• min odd: Fin(0)
• min even: Inf(0)

**numsets = 2:**
• max odd: Inf(1) | Fin(0)
• max even: Fin(1) & Inf(0)
• min odd: Fin(0) & Inf(1)
• min even: Inf(0) | Fin(1)

**numsets = 3:**
• max odd: Fin(2) & (Inf(1) | Fin(0))
• max even: Inf(2) | (Fin(1) & Inf(0))
• min odd: Fin(0) & (Inf(1) | Fin(2))
• min even: Inf(0) | (Fin(1) & Inf(2))

**numsets = 4:**
• max odd: Inf(3) | (Fin(2) & (Inf(1) | Fin(0)))
• max even: Fin(3) & (Inf(2) | (Fin(1) & Inf(0)))
• min odd: Fin(0) & (Inf(1) | (Fin(2) & Inf(3)))
• min even: Inf(0) | (Fin(1) & (Inf(2) | Fin(3)))

From these examples we can remark that:

• Given a parity max: acceptance sets with greater indexes are more significant.
• Given a parity min: acceptance sets with lower indexes are more significant.

# Change parity

## To toggle style

A new acceptance set is introduced and all the existing sets' indexes are increased by 1.

#### Parity max odd 5 -> Parity max even

If the acceptance is a parity max, all the transitions that do not belong to any acceptance set will belong to the new set.
In [2]:
aut_max_odd5 = tuple(spot.automata("randaut -A 'parity max odd 5' -Q4 2|"))[0]
display(aut_max_odd5.show(".a"))

The new indexes of the acceptance sets:
• 4 -> 5
• 3 -> 4
• 2 -> 3
• 1 -> 2
• 0 -> 1
• ∅ -> 0

#### Result of Parity max odd 5 -> Parity max even 6

In [3]:
aut_max_odd5_to_even = spot.change_parity(aut_max_odd5, spot.parity_kind_any, spot.parity_style_even)
display(aut_max_odd5_to_even.show(".a"))

#### Parity min odd 5 -> Parity min even

If the acceptance is a parity min, the new acceptance set will not be used.

In [4]:
aut_min_odd5 = tuple(spot.automata("randaut -A 'parity min odd 5' -Q4 2|"))[0]
display(aut_min_odd5.show(".a"))

The new indexes of the acceptance sets:
• 4 -> 5
• 3 -> 4
• 2 -> 3
• 1 -> 2
• 0 -> 1
• ∅ -> ∅

#### Result of Parity min odd 5 -> Parity min even 6

In [5]:
aut_min_odd5_to_even = spot.change_parity(aut_min_odd5, spot.parity_kind_any, spot.parity_style_even)
display(aut_min_odd5_to_even.show(".a"))

## To toggle kind

#### Parity max odd 5 -> Parity min

In [6]:
aut_max_odd5 = tuple(spot.automata("randaut -A 'parity max odd 5' -Q4 2|"))[0]
display(aut_max_odd5.show(".a"))

The new indexes of the acceptance sets:
• 4 -> 0
• 3 -> 1
• 2 -> 2
• 1 -> 3
• 0 -> 4
• ∅ -> ∅

#### Result of Parity max odd 5 -> Parity min odd 5

In [7]:
aut_max_odd5_to_min = spot.change_parity(aut_max_odd5, spot.parity_kind_min, spot.parity_style_any)
display(aut_max_odd5_to_min.show(".a"))

#### Parity max odd 4 -> Parity min odd

In [8]:
aut_max_odd4 = tuple(spot.automata("randaut -A 'parity max odd 4' -Q4 2|"))[0]
display(aut_max_odd4.show(".a"))

The new indexes of the acceptance sets:
• 3 -> 0
• 2 -> 1
• 1 -> 2
• 0 -> 3
• ∅ -> ∅

#### Result of Parity max odd 4 -> Parity min even 4

If the numsets is even and the kind is toggled, then the style will be toggled too.
In [9]:
aut_max_odd4_to_min = spot.change_parity(aut_max_odd4, spot.parity_kind_min, spot.parity_style_any)
display(aut_max_odd4_to_min.show(".a"))

To keep the same style a new acceptance set is introduced, thus the style is toggled once again. The new indexes of the acceptance sets are:
• 3 -> 0 -> 1
• 2 -> 1 -> 2
• 1 -> 2 -> 3
• 0 -> 3 -> 4
• ∅ -> ∅ -> 0 (as the resulting automaton is a parity min)

#### Result of Parity max odd 4 -> Parity min even 5

In [10]:
aut_max_odd4_to_min_bis = spot.change_parity(aut_max_odd4, spot.parity_kind_min, spot.parity_style_same)
display(aut_max_odd4_to_min_bis.show(".a"))

# Colorize parity

An automaton with a parity acceptance is not necessarily a parity automaton: it must be colored to qualify as one.

## Parity max

Transitions with multiple acceptance sets are purified by keeping only the set with the greatest index. If a transition does not belong to any acceptance set, a new acceptance set is introduced at the least significant place. The least significant place of a parity max acceptance is where the indexes are the lowest, so all the existing acceptance sets' indexes are shifted.

#### Colorize parity max odd 4

In [11]:
aut_max_odd4 = tuple(spot.automata("randaut -A 'parity max odd 4' -Q4 2|"))[0]
display(aut_max_odd4.show(".a"))

The new acceptance sets are:
• ∅ -> 0
• 0 -> 1
• 1 -> 2
• 2 -> 3
• 3 -> 4

#### The result of colorizing the given parity max odd 4 is

In [12]:
aut_max_odd4_colored = spot.colorize_parity(aut_max_odd4, False)
display(aut_max_odd4_colored.show(".a"))

You can notice that the style has been toggled. To prevent colorize_parity from toggling the style, we can let it add one extra acceptance set to the acceptance condition.
The new acceptance sets are now:
• ∅ -> 1
• 0 -> 2
• 1 -> 3
• 2 -> 4
• 3 -> 5

#### The result of colorizing the given parity max odd 4 without changing the style is

In [13]:
aut_max_odd4_colored_bis = spot.colorize_parity(aut_max_odd4, True)
display(aut_max_odd4_colored_bis.show(".a"))

## Parity min

Transitions with multiple acceptance sets are purified by keeping only the set with the lowest index. If a transition does not belong to any acceptance set, a new acceptance set is introduced at the least significant place. The least significant place of a parity min acceptance is where the indexes are the greatest.

#### Colorize parity min odd 4

In [14]:
aut_min_odd4 = tuple(spot.automata("randaut -A 'parity min odd 4' -Q4 2|"))[0]
display(aut_min_odd4.show(".a"))

The new acceptance sets are:
• ∅ -> 4
• 0 -> 0
• 1 -> 1
• 2 -> 2
• 3 -> 3

#### The result of colorizing the given parity min odd 4 is

In [15]:
aut_min_odd4_colored_bis = spot.colorize_parity(aut_min_odd4, True)
display(aut_min_odd4_colored_bis.show(".a"))

Remark: colorizing a parity min won't change the style of the acceptance.
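The acceptance formulas in the examples above (e.g. max odd 4 and min odd 4) follow a mechanical pattern: set i becomes an Inf-term when the parity of i matches the style, and the most significant index (highest for max, lowest for min) ends up outermost. A plain-Python sketch reproducing the strings, independent of Spot:

```python
def parity_formula(kind, style, numsets):
    """Build the parity acceptance formula as a string, e.g.
    parity_formula('max', 'odd', 4) -> 'Inf(3) | (Fin(2) & (Inf(1) | Fin(0)))'."""
    def term(i):
        # A set index is an Inf-term when its parity matches the style.
        op = "Inf" if (i % 2 == 0) == (style == "even") else "Fin"
        return f"{op}({i})"
    # The most significant index is outermost, so fold starting
    # from the least significant one (lowest for max, highest for min).
    order = list(range(numsets)) if kind == "max" else list(reversed(range(numsets)))
    res = term(order[0])
    for i in order[1:]:
        inner = f"({res})" if " " in res else res      # parenthesize compound bodies
        if term(i).startswith("Inf"):
            res = f"{term(i)} | {inner}"               # Inf-terms disjoin
        else:
            res = f"{term(i)} & {inner}"               # Fin-terms conjoin
    return res

print(parity_formula("min", "even", 4))  # Inf(0) | (Fin(1) & (Inf(2) | Fin(3)))
```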
2019-02-23 16:41:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22120626270771027, "perplexity": 14788.100166076521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249508792.98/warc/CC-MAIN-20190223162938-20190223184938-00142.warc.gz"}
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/studia-mathematica/all/164/1/90650/elliptic-functions-area-integrals-and-the-exponential-square-class-on-b-1-0-subseteq-bbb-r-n-n-2
## Elliptic functions, area integrals and the exponential square class on $B_{1}(0) \subseteq {\Bbb R}^{n},n>2$

### Volume 164 / 2004

Studia Mathematica 164 (2004), 1-28
MSC: 35J25, 42B25. DOI: 10.4064/sm164-1-1

#### Abstract

For two strictly elliptic operators $L_{0}$ and $L_{1}$ on the unit ball in ${\mathbb R}^{n}$, whose coefficients have a difference function that satisfies a Carleson-type condition, it is shown that a pointwise comparison concerning Lusin area integrals is valid. This result is used to prove that if $L_1u_1 = 0$ in $B_1(0)$ and $Su_{1}\in L^{\infty }(S^{n-1})$ then $u_{1}|_{S^{n-1}}=f$ lies in the exponential square class whenever $L_{0}$ is an operator such that $L_0u_0 = 0$ and $Su_{0}\in L^{\infty }$ implies $u_{0}|_{S^{n-1}}$ is in the exponential square class; here $S$ is the Lusin area integral. The exponential square theorem, first proved by Thomas Wolff for harmonic functions in the upper half-space, is proved on $B_{1}(0)$ for constant coefficient operator solutions, thus giving a family of operators for $L_{0}$. Methods of proof include martingales and stopping time arguments.

#### Authors

• Caroline Sweezy
Department of Mathematical Sciences
New Mexico State University
Las Cruces, NM 88003, U.S.A.
2023-04-02 11:21:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8058053851127625, "perplexity": 1786.465694455945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950528.96/warc/CC-MAIN-20230402105054-20230402135054-00313.warc.gz"}
https://gamedev.stackexchange.com/questions/90280/how-to-calculate-reflection-vector-from-point-sprite-sphere
# How to calculate reflection vector from point sprite sphere?

So far I achieved building a cube map following this tutorial. Then I drew three points using glDrawArrays(GL_POINTS, 0, 3) and calculated the normals based on a sphere. To compute the incoming light direction and hence the reflection vector I need the surface position. My questions are:

1. Do I need view-space or world-space coordinates for each fragment?
2. How to compute the surface position for each fragment?
3. What's the difference between computing lighting in view-space vs. world-space?

Forgot to mention that this is the way I compute the normal for the spheres:

```glsl
vec3 N;
N.xy = gl_PointCoord * 2.0f - 1.0f;
float mag = dot(N.xy, N.xy);
N.z = sqrt(1.0f - mag);
N = normalize(N);
```

• You might (or not) need to change your (mag > 1.0f) to (mag >= 1.0f) or even slightly smaller (mag >= 0.98f) to prevent crazy reflection/lighting rounding errors when nearing the edges. I'd do it just to be safe with all the different GPUs & drivers out there. – Stephane Hockenhull Dec 26 '14 at 19:27

The surface position is the normal itself multiplied by the size of the sphere (point sprite) plus the sphere origin. Keep in mind a point sprite is a slice of the sphere at its center, not its surface, so there will be some slight error because the perspective projection is not accounted for, but that's par for the course when we cheat using point sprites. (It works perfectly for parallel (ortho) projections.) If you look at it closely you'll see issues with the reflection/lighting, but it falls into the [close enough] category and won't really be visible on tiny "spheres". You can fix this, but it roughly doubles the shader cost. Note that this example is quite extreme, with the player's nose about 10 cm (3 inches) from the sphere; the further away you are, the more "parallel" the view gets and the smaller the error is.
If this is important then you should probably switch to a regular sphere mesh as the camera gets closer, or use a shader with correction but only for the sprites close to the camera. The situation is complicated further with point sprites that are off-center.

That said, point sprites are really cool for bullet-hell shooters and retro feel, as well as far-away approximations of spheres (planets, stars, boulders, bushes, packs of leaves, etc.). Just don't spend too much effort trying to get them to look like perfect spheres in close-up situations; the GPU will spend more time on this than with a bunch of simple triangles. If you want Mario 64 retro feel with giant point sprites you don't need to correct them, and if you need good-looking spheres then use point sprites only for spheres smaller than 32 or even 16 pixels of radius, and actual 3D meshes with different levels of detail as the camera gets closer.

The reason to use point sprites is that when polygons get smaller than 8x8 pixels a lot of GPUs start to waste huge amounts of vertex and pixel shading power due to the way they work: with pixel shader units grouped in NxM blocks (often 8x8 or 8x4), the full NxM pixels must be processed for EACH triangle that might only cover 1 to 4 actual pixels. A few GPUs have tricks to help fix this, but not all, and you're still processing a huge number of vertices for a few pixels. I'll stop here as it's almost off-topic, but it was worth explaining why point sprites / billboards are still useful.

• So the position on the surface is sphereOrigin + N * sphereRadius? The sphereOrigin is the vertex that is the input to the vertex shader? – BRabbit27 Dec 26 '14 at 19:41
• I am just learning about point sprites and OpenGL in general. Even if we can cheat with point-sprite spheres, how should I account for the perspective projection? Do you know a good source that walks you through this?
– BRabbit27 Dec 26 '14 at 19:42 • Yes on [sphereOrigin + N * sphereRadius]. The solution to correct for the perspective is to do some Pythagorean math to figure out the point where the side of the circle (a 2D slice of a sphere) forms a 90-degree triangle between the camera, the circle center, and the surface of the circle. See the "Belt problem" or "Pulley-belt problem" ( en.wikipedia.org/wiki/Belt_problem ) which is the same issue but in 2D. There is a further issue that (x,y)/z projection is not a fisheye projection so there's another error when the point "sphere" isn't centered on screen. (will add another picture) – Stephane Hockenhull Dec 26 '14 at 20:00
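The accepted recipe (build N from gl_PointCoord, then surface = sphereOrigin + N * sphereRadius) can be checked outside a shader. A minimal Python translation of the thread's formulas for a single fragment; the function name is mine, not from the thread:

```python
import math

def sprite_surface_point(point_coord, sphere_origin, radius):
    """Reconstruct a surface position from a point-sprite fragment
    coordinate (gl_PointCoord in [0,1]^2), following the answer:
    surface = sphereOrigin + N * sphereRadius."""
    nx = point_coord[0] * 2.0 - 1.0
    ny = point_coord[1] * 2.0 - 1.0
    mag = nx * nx + ny * ny
    if mag >= 1.0:                 # outside the disc: the fragment is discarded
        return None
    nz = math.sqrt(1.0 - mag)
    # (nx, ny, nz) already has unit length: nx^2 + ny^2 + nz^2 = 1.
    return tuple(o + n * radius for o, n in zip(sphere_origin, (nx, ny, nz)))

# Center fragment of a sphere 5 units down -z (view space): nearest point.
print(sprite_surface_point((0.5, 0.5), (0.0, 0.0, -5.0), 2.0))  # (0.0, 0.0, -3.0)
```

Note this reproduces the uncorrected version: the slice is taken at the sphere's center, so the perspective error the answer describes is still present.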
2021-05-16 07:09:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37836843729019165, "perplexity": 1497.5575877573037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989690.55/warc/CC-MAIN-20210516044552-20210516074552-00175.warc.gz"}
https://socratic.org/questions/how-do-you-solve-the-system-of-linear-equations-x-3y-5-and-2x-y-5
# How do you solve the system of linear equations x + 3y = 5 and 2x - y = 5? May 30, 2018 $\frac{20}{7} = x$, $\frac{5}{7} = y$ #### Explanation: $x + 3 y = 5$ $2 x - y = 5$ If both equations equal $5$, we can set them equal to each other $x + 3 y = 2 x - y$ $4 y = x$ Now we can substitute $4 y$ for $x$ in one of the equations (let's pick the first one) $4 y + 3 y = 5$ $7 y = 5$ $y = \frac{5}{7}$ Now, if $4 y = x$, then $4 \times \frac{5}{7} = x$ or $x = \frac{20}{7}$ Now to check our work. Let's substitute $\frac{5}{7}$ and $\frac{20}{7}$ for $y$ and $x$ in the second equation. If we have the correct answers, our equation should still equal $5$. $2 \left(\frac{20}{7}\right) - \frac{5}{7}$ $\frac{40}{7} - \frac{5}{7}$ $\frac{35}{7}$, which simplifies to $5$! So we were correct.
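The same substitution can be checked mechanically with exact rational arithmetic; a small sketch, not part of the original answer:

```python
from fractions import Fraction

# Solve x + 3y = 5 and 2x - y = 5 by the substitution used above.
# Both right-hand sides equal 5, so x + 3y = 2x - y, giving x = 4y.
y = Fraction(5, 7)      # from 4y + 3y = 5  =>  7y = 5
x = 4 * y               # x = 4y = 20/7

assert x + 3 * y == 5   # first equation holds
assert 2 * x - y == 5   # second equation holds
print(x, y)             # 20/7 5/7
```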
2019-12-10 16:13:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 24, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8474234342575073, "perplexity": 205.9347027266027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540528457.66/warc/CC-MAIN-20191210152154-20191210180154-00510.warc.gz"}
https://www.techwhiff.com/issue/pls-answer-me-this-question-asap-with-working-the-average--174649
# Pls answer me this question asap, with working. The average monthly salary of "m" male employees and "f" female employees of a company is $2000. If the average monthly salary of the male employees is $(b+200), find the average monthly salary of the female employees.

###### Question:

Pls answer me this question asap, with working.

The average monthly salary of "m" male employees and "f" female employees of a company is $2000. If the average monthly salary of the male employees is $(b+200), find the average monthly salary of the female employees.
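No worked answer appears on the page; the standard totals argument gives one, sketched here with the question's unexplained symbol b kept as a free parameter:

```python
def female_average(m, f, b):
    """Average female salary via total-salary bookkeeping:
    overall total = 2000*(m+f), male total = m*(b+200),
    so female total = 2000*(m+f) - m*(b+200), divided by f."""
    return (2000 * (m + f) - m * (b + 200)) / f

# Sanity check: if the male average happens to equal 2000 (b = 1800),
# the female average must also be 2000.
print(female_average(3, 2, 1800))  # 2000.0
```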
2022-11-27 01:46:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3492797017097473, "perplexity": 2789.4277256305963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710155.67/warc/CC-MAIN-20221127005113-20221127035113-00188.warc.gz"}
https://abagoffruit.wordpress.com/2003/10/16/grid-problem-or-nonintersecting-rook-path-problem/
## Grid problem (or nonintersecting rook path problem) Someone in Intermediate Counting posed the following problem as a generalization of a class problem: Given an m by n grid, how many ways are there to get from one corner to the diagonally opposite corner moving horizontally and vertically such that no tile is used more than once? (I know I posted this problem before. Sorry about that.) Thanks to the Encyclopedia of Integer Sequences, we now know more terms for the square grid cases: 1, 2, 12, 184, 8512, 1262816, 575780564, etc. (They have a few more on the Encyclopedia of Integer Sequences.) It’s sequence A007764 if anyone cares. Let us call the terms a_n, and let b_n=a_n/a_{n-1}. I did think that lim_{n\to\infty} b_n/b_{n-1}=3. Now I really don’t know. The first few terms of b_n/b_{n-1} are 3, 2.5556, 3.0170, 3.2070, 3.0733, 3.0068, 3.0186, 3.0362, 3.0419, 3.0423. That’s a weird sequence. I don’t understand it. It would be interesting if the limit turns out to be pi, but I don’t think that will be the case.
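The first few quoted ratios can be reproduced directly from the listed terms of A007764; a quick sketch (the listed terms only give the first five ratios):

```python
# First terms of OEIS A007764 (nonintersecting rook paths on an n x n grid),
# as quoted in the post.
a = [1, 2, 12, 184, 8512, 1262816, 575780564]

b = [a[i] / a[i - 1] for i in range(1, len(a))]   # b_n = a_n / a_{n-1}
r = [b[i] / b[i - 1] for i in range(1, len(b))]   # the ratios b_n / b_{n-1}

print([round(x, 4) for x in r])  # [3.0, 2.5556, 3.017, 3.207, 3.0733]
```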
2018-04-22 10:26:36
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8566540479660034, "perplexity": 296.8313003678869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945584.75/warc/CC-MAIN-20180422100104-20180422120104-00019.warc.gz"}
https://www.physicsforums.com/threads/system-in-equilibrium.296599/
# System in equilibrium

## Homework Statement

A light rod is holding a weight with mass m in equilibrium. The rod is attached to the wall with a hinge and a wire as shown on the figure.

Problem: Draw a force diagram of the rod and determine the force with which the hinge acts on the rod and the tension force in the wire.

## The Attempt at a Solution

I did the force diagram as shown on the figure, with the green arrows as the forces. I want to determine the force with which the hinge acts on the rod, $F_c$, and the tension force in the wire, $T$. I have that $W_w = mg$. I wrote up the conditions for equilibrium,

$$\sum F_x = F_c - T \cos(45) = 0$$
$$\sum F_y = T \sin(45) - W_r - mg = 0$$

I take torques around the attachment point on the wall,

$$\sum \tau = 2amg + aW_r - aF_c = 0$$

But trying to solve for e.g. $$F_c$$ now gives me $$F_c = \cos(45) \frac{F_c - mg}{\sin(45)} = F_c - mg$$, which is kinda bad. What am I doing wrong?

PhanthomJay (Homework Helper, Gold Member): Since it is given that the rod is light, you can ignore W_r. But you are forgetting the vertical reaction at O.

Hi Jay, thanks. I will ignore W_r then. How should the vertical reaction at O look? Should it be another component, or should it be part of F_c?

PhanthomJay
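Following the hints in the replies (drop W_r, add a vertical hinge reaction), the corrected system can be solved numerically. The geometry below is assumed, since the figure is not shown: a horizontal rod hinged at O, the wire at 45° attached a distance a from the hinge, and the weight hanging at 2a. This is a sketch under those assumptions, not the thread's worked answer:

```python
import math

def solve_equilibrium(m, g=9.81, a=1.0):
    """Light rod hinged at O with hinge reaction components (H, V),
    wire tension T at 45 degrees attached at distance a, weight mg at 2a."""
    # Torque about O (so H and V drop out): T*sin(45)*a - m*g*(2a) = 0
    T = 2 * m * g * a / (math.sin(math.radians(45)) * a)
    H = T * math.cos(math.radians(45))            # sum Fx: H - T cos45 = 0
    V = m * g - T * math.sin(math.radians(45))    # sum Fy: V + T sin45 - mg = 0
    return T, H, V

T, H, V = solve_equilibrium(m=2.0)
# Since sin45 = cos45, H = 2mg and V = -mg: the hinge also pulls the rod DOWN,
# which is exactly the vertical reaction missing from the original attempt.
```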
https://www.math.princeton.edu/events/cr-moduli-spaces-contact-3-manifold-2011-03-11t200003
# CR moduli spaces on a contact 3-manifold

We study low-dimensional problems in topology and geometry via a study of contact and Cauchy-Riemann ($CR$) structures. In particular, we consider various $CR$ moduli spaces on a contact 3-manifold. A contact structure is called spherical if it admits a compatible spherical $CR$ structure. We will talk about spherical contact structures and our analytic tool, an evolution equation of $CR$ structures. We argue that solving such an equation for the standard contact 3-sphere is related to the Smale conjecture in 3-topology. Furthermore, we propose a contact analogue of Ray-Singer's analytic torsion. This "contact torsion" is expected to be able to distinguish among "spherical space forms" $\{\Gamma\backslash S^{3}\}$ as contact manifolds. Positivity of the $CR$ Paneitz operator has become an important property in recent research. Time permitting, we will investigate the relation between this property and the embeddability of $CR$ structures.
https://math.stackexchange.com/questions/1496856/how-do-you-solve-this-limit-involving-definite-integration
How do you solve this limit involving definite integration?

$$\lim \limits_{r \to \infty} \frac {r^C \int_0^{\frac{\pi}{2}} x^r \sin(x)\, dx}{\int_0^{\frac{\pi}{2}} x^r \cos(x)\, dx} = L$$

Find the value of $\pi L - C$, given that $C\in\mathbb{R}$ and $L>0$.

My approach: I tried to apply integration by parts to both the numerator and the denominator to get a recurrence relation, hoping to cancel something off, but to no avail. I'm not getting any other method to solve it, so any help will be appreciated.

Comments:

- Won't the limit depend on the value of $C$? – G-man (Oct 25 '15 at 15:20)
- That's the thing: you're supposed to get the value of $C$ so that the limit is a finite quantity (which is equal to $L$, which you also have to find). – Ashish Gupta (Oct 25 '15 at 15:21)
- That's just too much work for a single question. – G-man (Oct 25 '15 at 15:23)
- The integrals come out in terms of hypergeometric functions, so I wouldn't spend much time on that. The answer is 3, but I have no idea how to do it without cheating. – Ian Miller (Oct 25 '15 at 15:23)
- @G-man I know, but I think it's a really well-thought-out question. – Ashish Gupta (Oct 25 '15 at 15:23)

Answer:

$$\lim_{r\to +\infty}\frac{\int_{0}^{\pi/2}x^{r+1}\sin(x)\,dx}{\int_{0}^{\pi/2}x^r\sin(x)\,dx} = \frac{\pi}{2}$$

since the integrand functions in the numerator/denominator get more and more concentrated around the right endpoint as $r$ increases, and their ratio at $x=\frac{\pi}{2}$ is exactly $\frac{\pi}{2}$. By integrating the numerator by parts (the boundary term $\left[-x^{r+1}\cos(x)\right]_{0}^{\pi/2}$ vanishes since $\cos\frac{\pi}{2}=0$), we have:

$$\lim_{r\to +\infty}\frac{(r+1)\int_{0}^{\pi/2}x^{r}\cos(x)\,dx}{\int_{0}^{\pi/2}x^r\sin(x)\,dx} = \frac{\pi}{2}$$

hence the given limit is finite and positive iff $C=-1$, and in such a case $L=\frac{2}{\pi}$, giving $\pi L - C = 2 + 1 = 3$.
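The claimed values can be sanity-checked numerically. The sketch below is not from the original answer: the exponent r = 500, the grid size, and the use of composite Simpson's rule are my own arbitrary choices, and the integrands are rescaled by (π/2)^r so that the huge common factor cancels in the ratio instead of overflowing.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

r = 500
half_pi = math.pi / 2

# Dividing x^r by (pi/2)^r keeps values in floating-point range;
# the common factor cancels in the ratio below.
num = simpson(lambda x: (x / half_pi) ** r * math.sin(x), 0.0, half_pi, 200_000)
den = simpson(lambda x: (x / half_pi) ** r * math.cos(x), 0.0, half_pi, 200_000)

# With C = -1 the expression r^C * (num/den) should approach L = 2/pi.
approx_L = num / (r * den)
print(approx_L, 2 / math.pi)  # close to 2/pi ~ 0.6366 (O(1/r) error remains)
```

For finite r the ratio still carries an O(1/r) correction, so the printed value agrees with 2/π only to about a percent at r = 500; increasing r tightens the agreement.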
http://overanalyst.blogspot.com/2012/11/on-research-and-lectures-third-down.html
## Wednesday, November 28, 2012

### class in session, part 2: third down, incomplete pass .. (also: #1200)

so i punted. today was already lecture 3 of 4 and i planned sufficiently poorly so that the $\LaTeX$/PDF notes for lecture 2 were still only half-complete.  despite this, they remain self-contained, readable, and just-barely-suitable for public consumption [1].

this pains me nontrivially [2]. if these lectures were part of a "real" course .. that is, where one would solve problem sets and get actual credit .. then the students would be lost.

this is not an exaggeration.  i can tell something isn't quite clear, if only because my collaborator was part of the audience, and she had a great many questions about some points i made. so if she, an established researcher, could not catch everything, then what chance would a ph.d. student have to catch something .. especially if this is not their field of interest?

[sighs]

so yes, i gave up. between a final push of research collaboration this week and writing up my own notes [3] for lecture 4, i see little-to-no time available for catching up with the $\LaTeX$ for lecture 2 and proceeding with lecture 3 from this afternoon.  so if it is infeasible to do so, then why bother?

instead i posted onto my webpage some PDF scans of notes from previous talks.  it's not a perfect solution, but it's better than nothing.  more than that, it's important to make something available for those students who may actually want to look at the details [4].

yes, probably none of the students will actually do this .. but if there is a nonzero probability that one might, then it is worth doing.

i also have other reasons for disappointment.  for example, i really wanted to have the lecture notes in wiki format.  
my reasons involve symmetry: i thought about what i wish other researchers would do in expositional formats, how very interesting (but technical) topics could be made more accessible, and how great it would be if a proof were actually "clickable" ..! here i mean more than just having the information publicly available.  yes, the arXiv is a great thing .. but the mathematics opens only in PS or PDF or even DVI.  though there have been many strides in progress and in improvement for the user interface, none of these are really that clickable. take, for example, a theorem in the text where the citation is hyperlinked.  if you click on the link, then you are teleported to the end of the PDF file, where one finds the references. [this is an example of what i mean, regarding hyperlinks] if the PDF opened in a web browser, then pressing [Back] can exit the PDF and lead you back to the previous webpage.  pressing [Forward] then re-opens the PDF back to the damned beginning! argh ..! it's a small matter, but i still get annoyed by it. \-: in contrast, a wiki fits seamlessly into the medium that is the internet.  these days, mathjax, asciimath and other $\LaTeX$-rendering tools for the web are sufficiently robust, so that publishing maths on the web needn't ruin one's sense of aesthetics [5].  moreover, any sufficiently important definition usually has its own wiki, and one can link directly to it [6]. smooth and sweet, no fuss; wouldn't that be great? (-: .. right: as for this talk of symmetry, i have this crazy idea that if i do this and convince others that it's a good idea and not hard to do, then maybe those others will do the same .. \-: on a completely unrelated note: according to the count from blogger this is post #1200.  it still amazes me that i still have mathematical things to write about, even after these seven (7..!) years of blogging.  
the best explanation is that i'm moving along in my academic career; with the changes comes a different perspective and new things to write about. on the other hand, i think i'm starting to get repetitive with the themes in this blog post.  i can't conveniently count the number of times that i've written about my neuroses about giving talks, rants about teaching, and the frustrations of research. i don't know.  maybe i should end this blog once and for all, but there's no sense in making a hasty decision now. i'll make a final decision by post #1300. (-: [1] you can find the notes on my new homepage; just google me and click on "talks and lectures" .. [2] this seems to me a sentence that could only have been written by an academic. [3] to clarify, there are at least two kinds of notes: (A) those intended to be shared publicly, and (B) those intended to be like a script for delivering the lecture.  in that sense, giving a good lecture is like acting a very specific role. [4] one could make the argument that if a student is really interested, then (s)he will go to your paper and read it carefully.  i don't disagree with that .. but that assumes that a student is paying attention to only one thing.  student or not, when was the last time that anyone had only one thing for which to be responsible? [5] part of the appeal of $\LaTeX$, admittedly, is how fluidly and aesthetically the typography appears.  that can really matter, if for example one uses a cumbersome notation, like $\tilde{f}_{i_j}$ to indicate convex combinations of a sub-subsequence $f_{i_j}$ of an initial sequence $f_i$. [6] not to get too "meta" on you readers .. but those of you who clicked on the mathjax link (or were thinking about it) must know exactly what i mean!
https://popflock.com/learn?s=Standard_atomic_weight
# Standard atomic weight

Example: copper in terrestrial sources. Two isotopes are present: copper-63 (62.9) and copper-65 (64.9), in abundances 69% + 31%. The standard atomic weight for copper is the average, weighted by their natural abundance, and then divided by the atomic mass constant mu.[1]

The standard atomic weight (Ar, standard(E)) of a chemical element is the weighted arithmetic mean of the relative isotopic masses of all isotopes of that element, weighted by each isotope's abundance on Earth. For example, isotope 63Cu (Ar = 62.929) constitutes 69% of the copper on Earth, the rest being 65Cu (Ar = 64.927), so

${\displaystyle A_{\text{r, standard}}(_{\text{29}}{\text{Cu}})=0.69\times 62.929+0.31\times 64.927=63.55.}$

Because relative isotopic masses are dimensionless quantities, this weighted mean is also dimensionless. It can be converted into a measure of mass (with dimension M) by multiplying it by the dalton, also known as the atomic mass constant.

Among the various variants of the notion of atomic weight (Ar, also known as relative atomic mass) used by scientists, the standard atomic weight is the most common and practical. The standard atomic weight of each chemical element is determined and published by the Commission on Isotopic Abundances and Atomic Weights (CIAAW) of the International Union of Pure and Applied Chemistry (IUPAC) based on natural, stable, terrestrial sources of the element. The definition specifies the use of samples from many representative sources from the Earth, so that the value can widely be used as 'the' atomic weight for substances as they are encountered in reality--for example, in pharmaceuticals and scientific research. 
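The abundance-weighted mean in the copper example can be reproduced in a few lines (a minimal sketch; the isotope masses and abundances are the rounded figures quoted above):

```python
# Relative isotopic masses and terrestrial abundances of copper's
# two stable isotopes, as quoted in the text above.
isotopes = {
    "63Cu": (62.929, 0.69),
    "65Cu": (64.927, 0.31),
}

# Standard atomic weight = abundance-weighted mean of the
# relative isotopic masses (a dimensionless number).
ar_standard = sum(mass * abundance for mass, abundance in isotopes.values())
print(round(ar_standard, 2))  # 63.55
```

The same weighted-mean recipe applies to any element once its isotopic masses and abundances are known.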
Non-standardized atomic weights of an element are specific to sources and samples, such as the atomic weight of carbon in a particular bone from a particular archeological site. Standard atomic weight averages such values to the range of atomic weights that a chemist might expect to derive from many random samples from Earth. This range is the rationale for the interval notation given for some standard atomic weight values. Of the 118 known chemical elements, 80 have stable isotopes and 84 have this Earth-environment-based value. Typically, such a value is, for example for helium, 4.002602(2). The "(2)" indicates the uncertainty in the last digit shown, to read 4.002602 ± 0.000002. IUPAC also publishes abridged values, rounded to five significant figures. For helium, 4.0026. For thirteen elements the samples diverge on this value, because their sample sources have had a different decay history. For example, thallium (Tl) in sedimentary rocks has a different isotopic composition than in igneous rocks and volcanic gases. For these elements, the standard atomic weight is noted as an interval: Ar, standard(Tl) = [204.38, 204.39]. With such an interval, for less demanding situations, IUPAC also publishes a conventional value. For thallium, 204.38.

## Definition

Excerpt of an IUPAC Periodic Table showing the interval notation of the standard atomic weights of boron, carbon, and nitrogen (Chemistry International, IUPAC). Example: the pie chart for boron shows it to be composed of about 20% 10B and 80% 11B. This isotope mix causes the atomic weight of ordinary Earthly boron samples to be expected to fall within the interval 10.806 to 10.821, and this interval is the standard atomic weight. Boron samples from unusual sources, particularly non-terrestrial sources, might have measured atomic weights that fall outside this range.

Atomic weight and relative atomic mass are synonyms. The standard atomic weight is a special value of the relative atomic mass. 
It is defined as the "recommended values" of relative atomic masses of sources in the local environment of the Earth's crust and atmosphere as determined by the IUPAC Commission on Atomic Weights and Isotopic Abundances (CIAAW).[2] In general, values from different sources are subject to natural variation due to a different radioactive history of sources. Thus, standard atomic weights are an expectation range of atomic weights from a range of samples or sources. By limiting the sources to terrestrial origin only, the CIAAW-determined values have less variance, and are a more precise value for relative atomic masses (atomic weights) actually found and used in worldly materials.

The CIAAW-published values are used and sometimes lawfully required in mass calculations. The values have an uncertainty (noted in brackets), or are an expectation interval (see example in illustration immediately above). This uncertainty reflects natural variability in isotopic distribution for an element, rather than uncertainty in measurement (which is much smaller with quality instruments).[3] Although there is an attempt to cover the range of variability on Earth with standard atomic weight figures, there are known cases of mineral samples which contain elements with atomic weights that are outliers from the standard atomic weight range.[2]

For synthetic elements the isotope formed depends on the means of synthesis, so the concept of natural isotope abundance has no meaning. Therefore, for synthetic elements the total nucleon count of the most stable isotope (i.e., the isotope with the longest half-life) is listed in brackets, in place of the standard atomic weight.

When the term "atomic weight" is used in chemistry, usually it is the more specific standard atomic weight that is implied. It is standard atomic weights that are used in periodic tables and many standard references in ordinary terrestrial chemistry. 
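As an illustration of the bracket notation described above, the concise form can be expanded mechanically. This is a minimal sketch of my own (the helper name and regular expression are not from CIAAW): the parenthesised digits give the uncertainty in the last decimal places shown.

```python
import re

def parse_concise(notation):
    """Expand concise uncertainty notation such as '4.002602(2)' into
    (value, uncertainty), where the parenthesised digits apply to the
    last decimal places shown."""
    m = re.fullmatch(r"(\d+\.(\d+))\((\d+)\)", notation)
    if m is None:
        raise ValueError(f"not in concise notation: {notation!r}")
    value_str, decimals, unc_digits = m.group(1), m.group(2), m.group(3)
    # '(2)' after six decimal places means an uncertainty of 2e-6.
    uncertainty = int(unc_digits) * 10.0 ** (-len(decimals))
    return float(value_str), uncertainty

print(parse_concise("4.002602(2)"))  # (4.002602, 2e-06)
print(parse_concise("40.078(4)"))    # (40.078, 0.004)
```

Interval values such as [10.806, 10.821] carry no single uncertainty and would need to be handled separately.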
Lithium represents a unique case where the natural abundances of the isotopes have in some cases been found to have been perturbed by human isotopic separation activities to the point of affecting the uncertainty in its standard atomic weight, even in samples obtained from natural sources, such as rivers.

### Terrestrial definition

An example of why "conventional terrestrial sources" must be specified in giving standard atomic weight values is the element argon. Between locations in the Solar System, the atomic weight of argon varies as much as 10%, due to extreme variance in isotopic composition. Where the major source of argon is the decay of potassium-40 (40K) in rocks, argon-40 (40Ar) will be the dominant isotope. Such locations include the planets Mercury and Mars, and the moon Titan. On Earth, the ratios of the three isotopes 36Ar : 38Ar : 40Ar are approximately 5 : 1 : 1600, giving terrestrial argon a standard atomic weight of 39.948(1). However, such is not the case in the rest of the universe. Argon produced directly, by stellar nucleosynthesis, is dominated by the alpha-process nuclide 36Ar. Correspondingly, solar argon contains 84.6% 36Ar (according to solar wind measurements),[4] and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1.[5] The atomic weight of argon in the Sun and most of the universe, therefore, would be only approximately 36.3.[6]

### Causes of uncertainty on Earth

Notably, the published atomic weight value comes with an uncertainty. This uncertainty (and, relatedly, its precision) follows from its definition, the source being "terrestrial and stable". Systematic causes for uncertainty are:

1. Measurement limits. As always, a physical measurement is never final. There is always more detail to be found and read. This applies to every single, pure isotope found. For example, today the mass of the main natural fluorine isotope (fluorine-19) can be measured to an accuracy of eleven decimal places. 
But a still more precise measurement system could become available, producing more decimals.

2. Imperfect mixtures of isotopes. In the samples taken and measured, the mix (relative abundance) of those isotopes may vary. For example, copper: while in general its two isotopes make up 69.15% and 30.85%, respectively, of all copper found, a natural sample being measured can have had incomplete 'stirring', and so the percentages differ. The precision is improved by measuring more samples, of course, but this cause of uncertainty remains. (Example: lead samples vary so much that its value cannot be noted more precisely than four figures: 207.2.)

3. Earthly sources with a different history. A source is the greater area being researched, for example 'ocean water' or 'volcanic rock' (as opposed to a 'sample': the single heap of material being investigated). It appears that some elements have a different isotopic mix per source. For example, thallium in igneous rock has more lighter isotopes, while in sedimentary rock it has more heavy isotopes. There is no Earthly mean number. These elements show the interval notation: Ar, standard(Tl) = [204.38, 204.39]. For practical reasons, a simplified 'conventional' number is published too (for Tl: 204.38).

These three uncertainties are accumulative. The published value is a result of all of them.

## Determination of relative atomic mass

Modern relative atomic masses (a term specific to a given element sample) are calculated from measured values of atomic mass (for each nuclide) and isotopic composition of a sample. Highly accurate atomic masses are available[7][8] for virtually all non-radioactive nuclides, but isotopic compositions are both harder to measure to high precision and more subject to variation between samples.[9][10] For this reason, the relative atomic masses of the 22 mononuclidic elements (which are the same as the isotopic masses for each of the single naturally occurring nuclides of these elements) are known to especially high accuracy. 
For example, there is an uncertainty of only one part in 38 million for the relative atomic mass of fluorine, a precision which is greater than the current best value for the Avogadro constant (one part in 20 million).

| Isotope | Atomic mass[8] | Standard abundance[9] | Range |
|---|---|---|---|
| 28Si | 27.976 926 532 46(194) | 92.2297(7)% | 92.21-92.25% |
| 29Si | 28.976 494 700(22) | 4.6832(5)% | 4.67-4.69% |
| 30Si | 29.973 770 171(32) | 3.0872(5)% | 3.08-3.10% |

The calculation is exemplified for silicon, whose relative atomic mass is especially important in metrology. Silicon exists in nature as a mixture of three isotopes: 28Si, 29Si and 30Si. The atomic masses of these nuclides are known to a precision of one part in 14 billion for 28Si and about one part in one billion for the others. However, the range of natural abundance for the isotopes is such that the standard abundance can only be given to about ±0.001% (see table). The calculation is

Ar(Si) = (27.97693 × 0.922297) + (28.97649 × 0.046832) + (29.97377 × 0.030872) = 28.0854

The estimation of the uncertainty is complicated,[11] especially as the sample distribution is not necessarily symmetrical: the IUPAC standard relative atomic masses are quoted with estimated symmetrical uncertainties,[12] and the value for silicon is 28.0855(3). The relative standard uncertainty in this value is 1×10−5 or 10 ppm. To further reflect this natural variability, in 2010 IUPAC made the decision to list the relative atomic masses of 10 elements as an interval rather than a fixed number.[13]

## Naming controversy

The use of the name "atomic weight" has attracted a great deal of controversy among scientists.[14] Objectors to the name usually prefer the term "relative atomic mass" (not to be confused with atomic mass). The basic objection is that atomic weight is not a weight, that is, the force exerted on an object in a gravitational field, measured in units of force such as the newton or poundal. 
In reply, supporters of the term "atomic weight" point out (among other arguments)[14] that

• the name has been in continuous use for the same quantity since it was first conceptualized in 1808;[15]
• for most of that time, atomic weights really were measured by weighing (that is, by gravimetric analysis), and the name of a physical quantity should not change simply because the method of its determination has changed;
• the term "relative atomic mass" should be reserved for the mass of a specific nuclide (or isotope), while "atomic weight" be used for the weighted mean of the atomic masses over all the atoms in the sample;
• it is not uncommon to have misleading names of physical quantities which are retained for historical reasons, such as electromotive force (which is not a force).

It could be added that atomic weight is often not truly "atomic" either, as it does not correspond to the property of any individual atom. The same argument could be made against "relative atomic mass" used in this sense.

## Published values

IUPAC publishes one formal value for each stable element, called the standard atomic weight.[16][17] Any updates are published biennially (in odd years). In 2015, the atomic weight of ytterbium was updated.[16] As of 2017, 14 atomic weights had been changed, including argon changing from a single number to an interval value.[18][19]

The value published can have an uncertainty, as for neon: 20.1797(6), or can be an interval, as for boron: [10.806, 10.821]. Next to these 84 values, IUPAC also publishes abridged values (up to five digits per number only), and for the thirteen interval values, conventional values (single-number values).

The symbol Ar denotes a relative atomic mass, for example from a specific sample. To be specific, the standard atomic weight can be noted as Ar, standard(E), where (E) is the element symbol.

### Abridged atomic weight

The abridged atomic weight, also published by CIAAW, is derived from the standard atomic weight by reducing the numbers to five digits (five significant figures). 
The name does not say 'rounded'. Interval borders are rounded downwards for the first (lowest) border, and upwards for the second (highest) border. This way, the more precise original interval is fully covered.[20] Examples:

• Calcium: 40.078(4) → 40.078
• Helium: 4.002602(2) → 4.0026
• Hydrogen: [1.00784, 1.00811] → [1.0078, 1.0082]

### Conventional atomic weight

Thirteen chemical elements have a standard atomic weight that is defined not as a single number, but as an interval. For example, hydrogen has Ar, standard(H) = [1.00784, 1.00811]. This notation states that the various sources on Earth have substantially different isotopic constitutions, and uncertainties are incorporated in the two numbers. For these elements, there is not an 'Earth average' constitution, and the 'right' value is not its middle (that would be 1.007975 for hydrogen, with an uncertainty of ±0.000135 that would make it just cover the interval). However, for situations where a less precise value is acceptable, CIAAW has published a single-number conventional atomic weight that can be used, for example, in trade. For hydrogen, Ar, conventional(H) = 1.008. The thirteen elements are: hydrogen, lithium, boron, carbon, nitrogen, oxygen, magnesium, silicon, sulfur, chlorine, argon, bromine and thallium.[21]

### A formal short atomic weight

By using the abridged value, and the conventional value for the thirteen interval values, a short IUPAC-defined value (5 digits plus uncertainty) can be given for all stable elements. In many situations, and in periodic tables, this may be sufficiently detailed.[22]

Overview: formal values of the standard atomic weight[1]

| Element (E) | Ar, standard(E) (Table 1[17]) | Value type | Ar, std abridged(E) (Table 2[20]) | Ar, std conventional(E) (Table 3[21]) | Ar, std formal short(E) (Tables 2 & 3[22]) |
|---|---|---|---|---|---|
| hydrogen, 1H | [1.00784, 1.00811] | Interval | [1.0078, 1.0082] | 1.008 | 1.008 |
| nitrogen, 7N | [14.00643, 14.00728] | Interval | [14.006, 14.008] | 14.007 | 14.007 |
| fluorine, 9F | 18.998403163(6) | Value (uncertainty) | 18.998 | | 18.998 |
| calcium, 20Ca | 40.078(4) | Value (uncertainty) | 40.078 | | 40.078 |
| technetium, 43Tc | (none) | Most stable isotope | | | [97] |

## List of atomic weights

The year given is the year in which the element's standard atomic weight was last changed; "-" marks elements without a standard atomic weight.

| Z | Symbol | Name | Year changed |
|---|---|---|---|
| 1 | H | hydrogen | 2009 |
| 2 | He | helium | 1983 |
| 3 | Li | lithium | 2009 |
| 4 | Be | beryllium | 2013 |
| 5 | B | boron | 2009 |
| 6 | C | carbon | 2009 |
| 7 | N | nitrogen | 2009 |
| 8 | O | oxygen | 2009 |
| 9 | F | fluorine | 2013 |
| 10 | Ne | neon | 1985 |
| 11 | Na | sodium | 2005 |
| 12 | Mg | magnesium | 2011 |
| 13 | Al | aluminium | 2017 |
| 14 | Si | silicon | 2009 |
| 15 | P | phosphorus | 2013 |
| 16 | S | sulfur | 2009 |
| 17 | Cl | chlorine | 2009 |
| 18 | Ar | argon | 2017[23] |
| 19 | K | potassium | 1979 |
| 20 | Ca | calcium | 1983 |
| 21 | Sc | scandium | 2013 |
| 22 | Ti | titanium | 1993 |
| 24 | Cr | chromium | 1983 |
| 25 | Mn | manganese | 2017 |
| 26 | Fe | iron | 1993 |
| 27 | Co | cobalt | 2017 |
| 28 | Ni | nickel | 2007 |
| 29 | Cu | copper | 1969 |
| 30 | Zn | zinc | 2007 |
| 31 | Ga | gallium | 1987 |
| 32 | Ge | germanium | 2009 |
| 33 | As | arsenic | 2013 |
| 34 | Se | selenium | 2013 |
| 35 | Br | bromine | 2011 |
| 36 | Kr | krypton | 2001 |
| 37 | Rb | rubidium | 1969 |
| 38 | Sr | strontium | 1969 |
| 39 | Y | yttrium | 2017 |
| 40 | Zr | zirconium | 1983 |
| 41 | Nb | niobium | 2017 |
| 42 | Mo | molybdenum | 2013 |
| 43 | Tc | technetium | - |
| 44 | Ru | ruthenium | 1983 |
| 45 | Rh | rhodium | 2017 |
| 47 | Ag | silver | 1985 |
| 49 | In | indium | 2011 |
| 50 | Sn | tin | 1983 |
| 51 | Sb | antimony | 1993 |
| 52 | Te | tellurium | 1969 |
| 53 | I | iodine | 1985 |
| 54 | Xe | xenon | 1999 |
| 55 | Cs | caesium | 2013 |
| 56 | Ba | barium | 1985 |
| 57 | La | lanthanum | 2005 |
| 58 | Ce | cerium | 1995 |
| 59 | Pr | praseodymium | 2017 |
| 60 | Nd | neodymium | 2005 |
| 61 | Pm | promethium | - |
| 62 | Sm | samarium | 2005 |
| 63 | Eu | europium | 1995 |
| 65 | Tb | terbium | 2017 |
| 66 | Dy | dysprosium | 2001 |
| 67 | Ho | holmium | 2017 |
| 68 | Er | erbium | 1999 |
| 69 | Tm | thulium | 2017 |
| 70 | Yb | ytterbium | 2015 |
| 71 | Lu | lutetium | 2007 |
| 72 | Hf | hafnium | 2019 |
| 73 | Ta | tantalum | 2005 |
| 74 | W | tungsten | 1991 |
| 75 | Re | rhenium | 1973 |
| 76 | Os | osmium | 1991 |
| 77 | Ir | iridium | 2017 |
| 78 | Pt | platinum | 2005 |
| 79 | Au | gold | 2017 |
| 80 | Hg | mercury | 2011 |
| 81 | Tl | thallium | 2009 |
| 82 | Pb | lead | 2021 |
| 83 | Bi | bismuth | 2005 |
| 84 | Po | polonium | - |
| 85 | At | astatine | - |
| 87 | Fr | francium | - |
| 89 | Ac | actinium | - |
| 90 | Th | thorium | 2013 |
| 91 | Pa | protactinium | 2017 |
| 92 | U | uranium | 1999 |
| 93 | Np | neptunium | - |
| 94 | Pu | plutonium | - |
| 95 | Am | americium | - |
| 96 | Cm | curium | - |
| 97 | Bk | berkelium | - |
| 98 | Cf | californium | - |
| 99 | Es | einsteinium | - |
| 100 | Fm | fermium | - |
| 101 | Md | mendelevium | - |
| 102 | No | nobelium | - |
| 103 | Lr | lawrencium | - |
| 104 | Rf | rutherfordium | - |
| 105 | Db | dubnium | - |
| 106 | Sg | seaborgium | - |
| 107 | Bh | bohrium | - |
| 108 | Hs | hassium | - |
| 109 | Mt | meitnerium | - |
| 111 | Rg | roentgenium | - |
| 112 | Cn | copernicium | - |
| 113 | Nh | nihonium | - |
| 114 | Fl | flerovium | - |
| 115 | Mc | moscovium | - |
| 116 | Lv | livermorium | - |
| 117 | Ts | tennessine | - |
| 118 | Og | oganesson | - |

1. ^ CIAAW may publish changes to atomic weights (including their precision and derived values). Since 1947, updates are nominally published in odd years; the actual date of publication may be some time later.
• 2009 (introducing interval notation; Ge): "Atomic weights of the elements 2009 (IUPAC Technical Report)". Pure Appl. Chem. 83 (2): 359-396. 12 December 2010. doi:10.1351/PAC-REP-10-09-14.
• 2011 (interval for Br, Mg): "Atomic weights of the elements 2011 (IUPAC Technical Report)". Pure Appl. Chem. 85 (5): 1047-1078. 29 April 2013. doi:10.1351/PAC-REP-13-03-02.
• 2013 (all elements listed): Meija, Juris; et al. (2016). "Atomic weights of the elements 2013 (IUPAC Technical Report)". Pure and Applied Chemistry. 88 (3): 265-91. doi:10.1515/pac-2015-0305.
• 2015 (ytterbium changed): "Standard Atomic Weight of Ytterbium Revised". Chemistry International. 37 (5-6): 26. October 2015. doi:10.1515/ci-2015-0512. ISSN 0193-6484.
• 2017 (14 values changed): "Standard atomic weights of 14 chemical elements revised". CIAAW. 2018-06-05. 
### Uncertainty handling

On handling the uncertainty in the values, including the [ ] interval values, see the causes of uncertainty described above.

### In the periodic table

[Periodic table of the elements showing each element's standard atomic weight; for elements without stable isotopes, the mass number of the most stable isotope is given in brackets, e.g. Tc [97], Pm [145], Og [294].]

## References

1. ^ a b c Meija, Juris; et al. (2016). "Atomic weights of the elements 2013 (IUPAC Technical Report)". Pure and Applied Chemistry. 88 (3): 265-91. doi:10.1515/pac-2015-0305.
2. ^ a b "IUPAC Goldbook". Compendium of Chemical Terminology. "standard atomic weights: Recommended values of relative atomic masses of the elements revised biennially by the IUPAC Commission on Atomic Weights and Isotopic Abundances and applicable to elements in any normal sample with a high level of confidence. A normal sample is any reasonably possible source of the element or its compounds in commerce for industry and science and has not been subject to significant modification of isotopic composition within a geologically brief period."
3. ^ Wieser, M. E. (2006). "Atomic weights of the elements 2005 (IUPAC Technical Report)" (PDF). Pure and Applied Chemistry. 78 (11): 2051-2066. doi:10.1351/pac200678112051.
4. ^ Lodders, K. (2008). "The solar argon abundance". Astrophysical Journal. 674 (1): 607-611. arXiv:0710.4523. doi:10.1086/524725.
5. ^ Cameron, A. G. W. (1973). "Elemental and isotopic abundances of the volatile elements in the outer planets". Space Science Reviews. 14 (3-4): 392-400. doi:10.1007/BF00214750.
6. ^ This can be determined from the preceding figures per the definition of atomic weight.
7. ^
8. ^ a b Wapstra, A. H.; Audi, G.; Thibault, C. (2003), The AME2003 Atomic Mass Evaluation (Online ed.), National Nuclear Data Center.
9. ^ a b Rosman, K. J. R.; Taylor, P. D. P. (1998), "Isotopic Compositions of the Elements 1997" (PDF), Pure and Applied Chemistry, 70 (1): 217-35, doi:10.1351/pac199870010217.
10. ^ Coplen, T. B.; et al. 
(2002), "Isotopic Abundance Variations of Selected Elements" (PDF), Pure and Applied Chemistry, 74 (10): 1987-2017, doi:10.1351/pac200274101987 11. ^ Meija, Juris; Mester, Zoltán (2008). "Uncertainty propagation of atomic weight measurement results". Metrologia. 45 (1): 53-62. Bibcode:2008Metro..45...53M. doi:10.1088/0026-1394/45/1/008. 12. ^ Holden, Norman E. (2004). "Atomic Weights and the International Committee--A Historical Review". Chemistry International. 26 (1): 4-7. 13. ^ 14. ^ a b de Bièvre, Paul; Peiser, H. Steffen (1992). "'Atomic Weight' -- The Name, Its History, Definition, and Units" (PDF). Pure and Applied Chemistry. 64 (10): 1535-43. doi:10.1351/pac199264101535. 15. ^ Dalton, John (1808). A New System of Chemical Philosophy. Manchester. 16. ^ a b "Standard Atomic Weights 2015". Commission on Isotopic Abundances and Atomic Weights. 12 October 2015. Retrieved 2017. 17. ^ a b Meija 2016, Table 1. 18. ^ "Standard atomic weights of 14 chemical elements revised". CIAAW. 2018-06-05. Retrieved . 19. ^ "Standard Atomic Weights of 14 Chemical Elements Revised". Chemistry International. 40 (4): 23-24. 2018. doi:10.1515/ci-2018-0409. ISSN 0193-6484. 20. ^ a b Meija 2016, Table 2. 21. ^ a b Meija 2016, Table 3. 22. ^ a b Meija 2016, Tables 2 and 3. 23. ^ "IUPAC Periodic Table of the Elements and Isotopes". King's Center for Visualization in Science. IUPAC, King's Center for Visualization in Science. Retrieved 2019. 24. ^ Meija, Juris; et al. (2016). "Atomic weights of the elements 2013 (IUPAC Technical Report)". Pure and Applied Chemistry. 88 (3). Table 2, 3 combined; uncertainty removed. doi:10.1515/pac-2015-0305.
2021-07-29 20:00:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6738821268081665, "perplexity": 7125.574750443665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153892.74/warc/CC-MAIN-20210729172022-20210729202022-00171.warc.gz"}
https://tex.stackexchange.com/questions/8626/how-to-know-the-number-of-elements-in-a-clist-in-latex3
# How to know the number of elements in a clist in LaTeX3? I could not find in the LaTeX3 documentation any function to count the number of elements in a clist. So I coded my own, but it is likely to be very suboptimal. Is there any better alternative? To be precise, \clist_count:NN stores in its first argument (of type int) the length of its second argument (a clist) \cs_new:Nn \clist_count:NN { \int_set:Nn \l_tmpa_int {0} \clist_set_eq:NN \l_tmpa_clist #2 \bool_until_do:nn { \clist_if_empty_p:N \l_tmpa_clist }{ \clist_pop:NN \l_tmpa_clist \l_tmpa_tl \int_add:Nn \l_tmpa_int {1} } \int_set_eq:NN #1 \l_tmpa_int } I am also interested in a function that would pick the n-th element of a clist. ## 1 Answer EDIT (by Bruno): the correct function to use now is \clist_count:N, which essentially derives from Will's implementation below. There is also \clist_count:n which expects an explicit comma separated list as its argument rather than a list stored inside a variable. Both functions expand to an explicit integer. Here's a solution that should be quite a bit faster (although I haven't tested it): \cs_new:Npn \clist_length:N #1 { \int_eval:n { 0 \clist_map_function:NN #1 \tl_elt_count_aux:n } } Here, \clist_length:N expands to the length of the comma-list; it's expandable, so you can use it inside \int_set:Nn if you like. 
And here's a solution for expandably extracting the n-th item of a comma list: \cs_new:Nn \clist_nth:Nn { \int_compare:nTF { \clist_length:N #1 < #2 } { \ERROR } { \exp_after:wN \clist_nth_aux:nn \exp_after:wN {#1} #2 } } \cs_new:Nn \clist_nth_aux:nn { \clist_nth_aux_i:nnnw {1}{#2} #1 , \q_recursion_tail \q_recursion_stop } \cs_new:Npn \clist_nth_aux_i:nnnw #1#2#3, { \quark_if_recursion_tail_stop:n {#3} \int_compare:nTF {#1==#2} { \use_i_delimit_by_q_recursion_stop:nw {#3} } { \clist_nth_aux_i:fnnw { \int_eval:n {#1+1} } {#2} } } \cs_generate_variant:Nn \clist_nth_aux_i:nnnw {f} Thanks for these questions; I've been meaning to add them to l3clist for a little while. P.S. Sorry for the slow reply! • Thank you, that should reduce the size of my randomwalk package a bit: I'm new to writing packages, and to LaTeX3, so your answer is very useful. Since users don't always have the latest version of expl3, does it make sense to provide definitions of these most recent macros inside my package? – Bruno Le Floch Jan 23 '11 at 10:02 • @Bruno — Yes, for the time being; just check for their existence before overwriting them (\cs_if_free:NT). I would be very interested in hearing your thoughts on using expl3 as a relatively new package author; please don't hesitate to write any comments to me directly. – Will Robertson Jan 23 '11 at 11:09 • — I sent an email to the address you use for LaTeX-L. – Bruno Le Floch Jan 25 '11 at 17:32
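As the edit at the top notes, current expl3 ships these operations; a minimal sketch of their use (assuming a reasonably recent LaTeX kernel, which also provides `\clist_item:Nn` for picking the n-th item):

```latex
\documentclass{article}
\ExplSyntaxOn
% \clist_count:N expands to the number of items in a clist variable;
% \clist_item:Nn picks the n-th item (1-based).
\clist_new:N \l_my_clist
\clist_set:Nn \l_my_clist { alpha , beta , gamma }
\NewDocumentCommand \MyCount { } { \clist_count:N \l_my_clist }
\NewDocumentCommand \MyItem { m } { \clist_item:Nn \l_my_clist {#1} }
\ExplSyntaxOff
\begin{document}
Count: \MyCount; second item: \MyItem{2}.
\end{document}
```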
2019-10-16 15:30:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7007036805152893, "perplexity": 3522.8438200010823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668994.39/warc/CC-MAIN-20191016135759-20191016163259-00177.warc.gz"}
https://itprospt.com/num/13523905/table-5-calculated-resultstrialtrial-2trial-3averageheat
5 # Table 5: Calculated resultsTrialTrial 2Trial 3AverageHeat capacity of calorimeter (Jr'C)32.0731,628.,330.66Enthalpy of neutralization (kJImol)Enthalpy of disso... ## Question ###### Table 5: Calculated resultsTrialTrial 2Trial 3AverageHeat capacity of calorimeter (Jr'C)32.0731,628.,330.66Enthalpy of neutralization (kJImol)Enthalpy of dissolutlon of calcium in acid (kJImol)Enthalpy of dlssolution of calclum In water (kJImol) Table 5: Calculated results Trial Trial 2 Trial 3 Average Heat capacity of calorimeter (Jr'C) 32.07 31,6 28.,3 30.66 Enthalpy of neutralization (kJImol) Enthalpy of dissolutlon of calcium in acid (kJImol) Enthalpy of dlssolution of calclum In water (kJImol) #### Similar Solved Questions ##### Amoeba? Hox motIY differ Iromntheim Mce Dou: parmacium? (2 pts) What areclosterium? Why are Iney greon?10. (2 pts) What class of organismdescribe some other protists water? List and 11. (3 pts) What did you observe in the pond Did anything you saw coincide with the that may have been present in your sample and . the handout with pictures? slides provided on the side bench prepared = amoeba? Hox motIY differ Iromn theim Mce Dou: parmacium? (2 pts) What are closterium? Why are Iney greon? 10. (2 pts) What class of organism describe some other protists water? List and 11. (3 pts) What did you observe in the pond Did anything you saw coincide with the that may have been present in ... ##### Chapter 25, Problem 006Your answeris partially corect. Try again:You have two flat metal plates, each of area 1.49 m2, with which to construct parallel-plate capaator (a) If the capacitance of the device to be 1.16 what must be the separation between the plates? (b) Could this capacitor actually be constructed?(a) Number 1.14E-11Units(b)the tolerance +/-2% Click if vou would like to Show Work for this question: Qpen Show WotkSHOWHSAMPPROBLEFLINK To TEXTSAVE FoR LATeR ~sudhIT AnswER usedQuestion Chapter 25, Problem 006 Your answeris partially corect. 
Try again: You have two flat metal plates, each of area 1.49 m2, with which to construct parallel-plate capaator (a) If the capacitance of the device to be 1.16 what must be the separation between the plates? (b) Could this capacitor actually b... ##### Kug#liui J ( PUIL)R =1knV, = 50VR = 12kVz = 20VW R; = 1.8knVDG20 V=20 V333.5 V33.5 V Kug#liui J ( PUIL) R =1kn V, = 50V R = 12k Vz = 20V W R; = 1.8kn VDG 20 V =20 V 333.5 V 33.5 V... ##### Question of 12SubmitCalculate the pH when 64.0 mL of 0.150 MKOH is mixed with 20.0 mL of 0.300 MHBro (Ka = 2.5 x 10-9)2x100Tap here or pull up for additional resources Question of 12 Submit Calculate the pH when 64.0 mL of 0.150 MKOH is mixed with 20.0 mL of 0.300 MHBro (Ka = 2.5 x 10-9) 2 x100 Tap here or pull up for additional resources... ##### Parents and Daughters Radioactive Decay Due nalis 55 minutesOetemlne the type of radloactlve decay that corresponds the glven comblnatlon of Parent and Daughter Nuclel, Use the web your book Parent: Daughter: Parent: sbre. Daughter: Sofe Alpha Decay Parent: 8 le Daughter: BLi Pareni: 22Na, Daughter: 22Ne Parent; 235U, Daughter; 231Th Submnilinseci Tries 0/10find copY the perlodlc table help wlth thls problem, Parents and Daughters Radioactive Decay Due nalis 55 minutes Oetemlne the type of radloactlve decay that corresponds the glven comblnatlon of Parent and Daughter Nuclel, Use the web your book Parent: Daughter: Parent: sbre. Daughter: Sofe Alpha Decay Parent: 8 le Daughter: BLi Pareni: 22Na, Daughter... ##### Polnt) Uga tne glven graph of the function f to find tha following valuas forf71.f-(-4) =2f-'(-3) =af-'(0) =4f-'(2) =6f-'(4) = polnt) Uga tne glven graph of the function f to find tha following valuas forf7 1.f-(-4) = 2f-'(-3) = af-'(0) = 4f-'(2) = 6f-'(4) =... ##### Determinc the speed Litrie particle truveling ulong the purumetric curvc c() = (In ((? + 1),+). Assune unity of meters and seconds. 
Determinc the speed Litrie particle truveling ulong the purumetric curvc c() = (In ((? + 1),+). Assune unity of meters and seconds.... ##### Coonding 2 2 1 6 Ccunty H diliered from {oleayad 8 1 1 covldcior 8 J 1 S8 es-41 Coonding 2 2 1 6 Ccunty H diliered from {oleayad 8 1 1 covldcior 8 J 1 S8 es-4 1... ##### Select USDAICDC identified urban food desert as your city case study from the list below t0 complete the assignment: (You will need to gather information via Internet search: ) Atlanta, GA Camden_ Chicago, IL Memphis- TN New Orleans, LA New York City NYAsignment QuestlonsDefine and explain the characteristics of food deserts_ Identify the causes and consequences of food deserts_ How do you think living in food desert could affect person family food choices? Other than grocery stores superarkets; Select USDAICDC identified urban food desert as your city case study from the list below t0 complete the assignment: (You will need to gather information via Internet search: ) Atlanta, GA Camden_ Chicago, IL Memphis- TN New Orleans, LA New York City NY Asignment Questlons Define and explain the cha... ##### Evaluate the following limit.lim X-019How should the given limit be evaluated? Select the correct choice below and, if necessary; fill in the answer box to complete your choice.0A: Use /Hopital's Rule exactly once to rewrite the limit as lim X-0Multiply the expression by a unit fraction to obtain lim 4lt0Use direct substitution.Use IHopital's Rule more than once to rewrite the limit in its final form as lim X-0Evaluate the limitlim X-0(Type an integer or a fraction )19 Evaluate the following limit. lim X-0 19 How should the given limit be evaluated? Select the correct choice below and, if necessary; fill in the answer box to complete your choice. 0A: Use /Hopital's Rule exactly once to rewrite the limit as lim X-0 Multiply the expression by a unit fraction ... 
##### Insett 0 5- DratTable II: Design Polarity Layout ofMolecules References C_BrCl 9 2 8 SPECIES Mailings Gcomctry Rcpan Shcet (V) N-C-F E 2/5 Review 4EN BONDS 1 POLARITY Help Design Table ! SYMMETRIC? Layout 8 MOLECULE POLARITY? 1CHOCNF Insett 0 5- Drat Table II: Design Polarity Layout ofMolecules References C_BrCl 9 2 8 SPECIES Mailings Gcomctry Rcpan Shcet (V) N-C-F E 2/5 Review 4EN BONDS 1 POLARITY Help Design Table ! SYMMETRIC? Layout 8 MOLECULE POLARITY? 1 CHOCN F... ##### Perform the indicated operations. Leave the result in polar form. $$\left(1 \underline{/ 142^{\circ}}\right)^{10}$$ Perform the indicated operations. Leave the result in polar form. $$\left(1 \underline{/ 142^{\circ}}\right)^{10}$$... ##### The proton separation energy of 22Rn 86' The proton separation energy of 22Rn 86'... ##### There are two diamond, four gold, and eight silver earrings in a drawer. A pair of earrings are chosen at random from the drawer. (i) Find the probability that both earrings are of the same colour. [2 marks] (ii) If it is known that both earrings are of the same colour, find the probability that they are both gold. [2 marks] There are two diamond, four gold, and eight silver earrings in a drawer. A pair of earrings are chosen at random from the drawer. (i) Find the probability that both earrings are of the same colour. [2 marks] (ii) If it is known that both earrings are of the same colour, find the probability that the... ##### Two 200 kg masses are separated by a distance of 0.2 m. Using Newton's law of gravitation, find the force between these two masses. Two 200 kg masses are separated by a distance of 0.2 m. Using Newton's law of gravitation, find the force between these two masses....
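The last exercise above is a direct application of Newton's law of gravitation, F = G·m1·m2/r². As a worked check (not part of the original page; the function name is mine, and G is taken as 6.674e-11 N·m²/kg²):

```python
G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def gravitational_force(m1, m2, r):
    # Newton's law of gravitation: F = G * m1 * m2 / r^2
    return G * m1 * m2 / r ** 2

# Two 200 kg masses separated by 0.2 m:
F = gravitational_force(200, 200, 0.2)
print(F)  # about 6.674e-05 N -- tiny, which is why everyday objects don't noticeably attract
```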
2022-09-28 08:49:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5717645287513733, "perplexity": 13590.032268173962}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335190.45/warc/CC-MAIN-20220928082743-20220928112743-00089.warc.gz"}
https://www.mysciencework.com/publication/show/alternating-iterative-methods-solving-tensor-equations-applications-7eea2eb4?search=1
# Alternating iterative methods for solving tensor equations with applications Authors • 1 Lanzhou University, School of Mathematics and Statistics, Lanzhou, 730000, People’s Republic of China , Lanzhou (China) • 2 Tianshui Normal University, School of Mathematics and Statistics, Tianshui, 741001, People’s Republic of China , Tianshui (China) Type Published Article Journal Numerical Algorithms Publisher Springer US Publication Date Sep 29, 2018 Volume 80 Issue 4 Pages 1437–1465 Identifiers DOI: 10.1007/s11075-018-0601-4 Source Springer Nature Keywords Recently, the alternating direction method of multipliers (ADMM) and its variations have gained great popularity in large-scale optimization problems. This paper is concerned with the solution of the tensor equation $\mathscr{A}\mathbf{x}^{m-1}=\mathbf{b}$, in which $\mathscr{A}$ is an $m$th-order, $n$-dimensional real tensor and $\mathbf{b}$ is an $n$-dimensional real vector. By introducing certain auxiliary variables, we transform this tensor equation equivalently into a consensus-constrained optimization problem, and then propose an ADMM-type method for it. It turns out that each limit point of the sequences generated by this method satisfies the Karush-Kuhn-Tucker conditions. 
Moreover, from the perspective of computational complexity, the proposed method may suffer from the curse of dimensionality if the size of the tensor equation is large, and thus we further present a modified version (as a variant of the former) turning to the tensor-train decomposition of the tensor $\mathscr{A}$, which is free from the curse. As applications, we establish the associated inverse iteration methods for solving tensor eigenvalue problems. The performed numerical examples illustrate that our methods are feasible and efficient.
2021-01-22 09:48:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8031681776046753, "perplexity": 1975.280444720155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529179.46/warc/CC-MAIN-20210122082356-20210122112356-00549.warc.gz"}
https://study.com/academy/answer/what-is-the-difference-between-anti-derivatives-and-integrals.html
# What is the difference between anti-derivatives and integrals? ## Question: What is the difference between anti-derivatives and integrals? ## Antiderivative of a Function: The process of finding an anti-derivative of a function f(x) consists of looking for another function F(x), the primitive function of f(x); this means that when we compute the derivative of the function F(x) it gives us the function f(x), in other words, F'(x)=f(x). The indefinite integral of that function f(x) would give as a result F(x)+C, in which the letter C indicates the value of an arbitrary constant. The anti-derivative (also called primitive) is a function that, when we differentiate it, gives us back the original function. Example: F(x)=6x is an anti-derivative of f(x)=6. As we see, a function has many anti-derivatives. In the previous example we can see that the functions F(x)=6x+1, F(x)=6x-3, F(x)=6x+7, F(x)=6x-12, ... are also anti-derivatives of f(x)=6. As we can see, when calculating the derivative F'(x) we obtain f(x). On the other hand, the indefinite integral represents the set of all the infinitely many primitives that the function f(x) can have. That is, the indefinite integral of a function is equal to its antiderivative plus a constant; following the previous example: F(x)=6x+C is the indefinite integral of f(x)=6, since the constant C includes all arbitrary constants.
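The relationship F'(x)=f(x), with the arbitrary constant C dropping out on differentiation, can be checked numerically. A small sketch (not part of the original answer; the helper names are illustrative):

```python
def f(x):
    return 6.0  # the function whose anti-derivative we want

def F(x, C=0.0):
    return 6.0 * x + C  # one anti-derivative for each choice of C

def derivative(func, x, h=1e-6):
    # Central finite difference approximation of func'(x)
    return (func(x + h) - func(x - h)) / (2 * h)

# Whatever constant C we pick, differentiating F recovers f:
for C in (0.0, 1.0, -3.0, 7.0):
    assert abs(derivative(lambda x: F(x, C), 2.0) - f(2.0)) < 1e-5
```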
2020-05-28 19:24:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8794838786125183, "perplexity": 350.410535543585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399830.24/warc/CC-MAIN-20200528170840-20200528200840-00296.warc.gz"}
https://amitlevinson.com/blog/review-of-an-sql-interview-question/
Reviewing an SQL interview question Introduction During the first half of 2021, as I was finishing up my M.A. thesis, I started searching for a job in Data Analytics. My journey into analytics was through learning R and I realized I had to learn some SQL, or at least familiarize myself with it. Fast forward to interviewing, and most of the SQL interview questions were relevant and interesting, with one question particularly motivating further thoughts; this blog post details that question and several answers to it. The question and data have no connection to my current employer. The data used here is made up and the question came from a different company altogether. I'll be using R to set up a local SQL connection but power through the blog post with actual SQL code. While not necessary, some SQL knowledge is useful for understanding the various answers, and more so the syntax. The Interview question So here it goes: Let's say you have a table of users' payments. The table has the user's name, date of payment and the amount they received. Users have multiple records with different amounts and dates. For each user, return: the user name, the maximum amount they received and the date of that payment. Once you solve that, solve it again using a different approach. For a more practical example, consider the following raw data: Tom 2021-05-11 75 Danny 2021-05-12 62 Alice 2021-05-12 85 Alice 2021-05-29 72 Danny 2021-06-12 87 Alice 2021-06-24 45 Tom 2021-06-28 80 Alice 2021-07-03 60 Danny 2021-07-05 42 Tom 2021-07-12 56 Tom 2021-07-19 95 Danny 2021-08-01 80 Return the following table (rows highlighted in light green): Alice 2021-05-12 85 Danny 2021-06-12 87 Tom 2021-07-19 95 Want to first try solving it yourself? Solve it here and compare with the answers below. So we know what we have to do. Before we do it, let's see how not to do it. ✖ Why not just GROUP BY? 
✖ If you're new to SQL, an immediate question that might come to mind is why not use a GROUP BY for the UserName and date and select the MAX value. In other words, just filter each observation by the max value according to one of the variables. The issue is that when we use GROUP BY we retrieve the information already aggregated. That is, if we group by the user name and the payment date when we select the max, then we'll get the value for each distinct user and date: SELECT UserName, payment_date, MAX(amount) AS amount FROM Payments GROUP BY UserName, payment_date Table 1: Displaying records 1 - 10 Danny 2021-07-05 42 Danny 2021-05-12 62 Danny 2021-08-01 80 Danny 2021-06-12 87 Alice 2021-07-03 60 Alice 2021-05-29 72 Alice 2021-05-12 85 Alice 2021-06-24 45 Tom 2021-06-28 80 Tom 2021-07-12 56 Alternatively, if we GROUP BY the UserName and SELECT the MAX value and the date, the result will depend on the Relational Database Management System (RDBMS) you use. For example, if you're using Microsoft SQL Server you'll get an error, since you have a column which is selected but contained neither in an aggregate function nor in the GROUP BY clause. In other RDBMSs, e.g. MySQL (which I use here), we'll get the information for each User, their max value and some date (here the top date value), but not the correct date! SELECT UserName, payment_date, MAX(amount) FROM payments GROUP BY UserName Table 2: 3 records Danny 2021-07-05 87 Alice 2021-07-03 85 Tom 2021-06-28 95 So how do we solve this? Let's dive in. Solutions 1. Window functions The first solution that might come to mind is using a Window function. If you don't know window functions I suggest you familiarize yourself with their abilities. To borrow from PostgreSQL's description, a window function "performs a calculation across a set of table rows that are somehow related to the current row". 
In contrast to aggregate operations (sum, avg, etc.), using window functions doesn't cause rows to become grouped into single row outputs. We can use the window function DENSE_RANK()/RANK()1 to retrieve the rank of each amount for each user, and extract the relevant row with an outer query: SELECT UserName, Payment_Date as 'Payment Date', amount FROM ( SELECT *, DENSE_RANK() OVER(Partition BY UserName Order by amount DESC) as rnk FROM payments) AS ranked_table WHERE rnk = 1 OK, that was pretty straightforward. But the interview question doesn't end there but asks for another approach. Let's move on. 2. Self Join JOINs are key functions when querying data. Considering the large amount of data a company has, and the normalization procedures it does, you'll be expected to join a lot. In this specific case we can leverage the arithmetic features of a JOIN to retrieve the relevant value: SELECT DISTINCT p.UserName, p.payment_date, p.amount FROM payments p LEFT JOIN payments pp ON p.UserName = pp.UserName AND p.amount < pp.amount WHERE pp.amount IS NULL; While we're all familiar with 'regular' JOINs using an equality sign =, we can check for other operations such as smaller than <. Essentially we do a cartesian join of the table on itself by UserName, and match rows where values (p.amount) are smaller than other values in the table we join on (pp.amount). Our max value won't find any relevant rows to join, considering it's not smaller than anything, which will result in a NULL value we can use to filter. We can also explore the intermediate step of the above code by looking at one of the users' observations: Table 3: 3 records Danny 2021-06-12 87 NA Danny 2021-08-01 80 87 Danny 2021-05-12 62 80 As we can see from the top 3 observations (though 7 are returned per user), values that are not smaller than other values, i.e. our max value, return a null value we can use to filter. If you want to explore it more just copy the above code to the snippet example, remove the WHERE clause and also select pp.amount. 3. 
Correlated subquery We've come to my final approach for this blog post. I've come to appreciate correlated subqueries since learning them, as I find them somewhat similar to vectorized operations in R such as the apply family and the purrr library. A correlated subquery is a row-by-row process, in which the subquery is executed once for each row of the outer query (adapted from GeeksforGeeks). Let's look at the code and explain it more clearly: SELECT UserName, Payment_Date, amount FROM Payments p WHERE amount = (SELECT MAX(amount) FROM Payments pp WHERE pp.UserName = p.UserName) -- Notice the relation to the parent table To easily read the query and understand correlated subqueries, let's start from the inside. From the payments table, where the UserName is equal to the UserName in the outer query, grab the maximum amount. Now the outer query goes row by row for each user and compares whether that row's amount is equal to that user's max amount, which is retrieved from the inner query. And there we have it, three different approaches to the same problem. Benchmarking A question that arose for me is, for this specific case, which method is faster? Let's try and answer it using a bit larger dataset: glimpse(payments_big) ## Rows: 2,000 ## Columns: 3 ## $ username <chr> "Alice", "Alice", "Alice", "Danny", "Tom", "Danny", "Alic~ ## $ payment_date <date> 2018-03-24, 2018-05-31, 2020-08-16, 2019-09-14, 2019-01-~ ## $ amount <int> 2039, 5113, 3381, 6666, 7147, 2213, 8500, 6011, 3530, 465~ A total of 2,000 rows for all the three users. 
Now, let's benchmark using the R package {sqldf}, which passes the SQL statements to a temporarily created database: benchmarking <- microbenchmark::microbenchmark("Window" = sqldf(Window_script), "Join" = sqldf(Join_script), "Correlated subquery" = sqldf(Correlated_subquery_script), unit = "ms") Finally, let's explore the benchmarking scores: approach min 25% mean median 75% max N Window 14.2 15.4 18.3 16.6 18.7 43.4 100 Join 102.4 108.4 132.1 120.0 146.3 325.8 100 Correlated subquery 294.6 310.8 370.4 354.9 416.2 575.6 100 One caveat is that some noise might have occurred when I queried the data: since we used the {sqldf} R package to benchmark, the table is loaded to a temporarily created database and the SQL statement is run on it. With that said, I imagine that if it had caused some issues, it would have done so across all statements. As to our results, we can see that for the current question, the window function is most efficient. I think it's also the most friendly for beginners and commonly used. However, I believe that knowing all approaches can help you write better SQL. That is, sometimes one approach is a better fit to a specific use-case. I definitely wrote a correlated subquery at work as it was the best fit at the time (in terms of readability and as an immediate answer), so though it's least efficient here I'm sure it's worth knowing. Closing remarks This was a pretty short post on some SQL approaches to solving a question. You can probably think of different approaches, or variants of the current ones. When contemplating this question I felt that it required me to utilize different SQL functions to solve the same question, so overall I'm glad I came across it. I didn't get the job but that's OK. Eventually that's how I ended up where I am today :) Hope you find this useful and learned something new. Feel free to reach out and let me know of other solutions you thought of! 1. 
One reason I’m not going for ROW_NUMBER here is that we’re interested in the top value, which could have multiple appearances for a user. ROW_NUMBER will only give us one row, and here I’m interested in the max value, which could appear several times.↩︎

Amit Levinson, Risk Data Analyst. I’m an avid R user interested in data analysis, visualizations and helping individuals understand their data.
http://math.stackexchange.com/questions/183389/a-countable-set-of-real-numbers
A Countable Set of Real Numbers

Let $S$ be a subset of $\mathbb{R}$. Let $C$ be the set of points $x$ in $\mathbb{R}$ with the property that $S \cap (x - r, x + r)$ is uncountable for every $r > 0$. Show that $S - C$ is finite or countable. Thanks for any help.

-

a. any finite set is countable. b. what have you tried? – Belgi Aug 16 '12 at 21:02

@Belgi: Many places mean "countably infinite" when saying "countable". To distinguish, they use "at most countable" for finite or countably infinite. – Asaf Karagila Aug 16 '12 at 21:03

I was trying to use the fact that Q is dense in R and that the set of intervals with rational endpoints is countable. By countable I mean countably infinite. – Ester Aug 16 '12 at 21:05

I also think there is a problem in the question: "Let $C$... s.t. $S$". Should it be $C$ in the intersection? – Belgi Aug 16 '12 at 21:08

@Belgi: no, that would make for a circular definition... for example, $C=\emptyset$ would satisfy it. – tomasz Aug 16 '12 at 21:09

Instead of considering arbitrary neighborhoods $(x - r, x + r)$ for $x \in \mathbb{R}$ and $r > 0$, you can consider just those open intervals where $x \in \mathbb{Q}$ and $r \in \mathbb{Q}$. These form a countable basis for the topology on $\mathbb{R}$. Let $(U_n)_{n \in \mathbb{N}}$ denote a countable enumeration of these open intervals. Then you have that $C$ is the set of all $x \in \mathbb{R}$ such that $S \cap U_n$ is uncountable for all $n$ such that $x \in U_n$. Hence $S - C$ is the union, over all $n \in \mathbb{N}$, of those $S \cap U_n$ that are countable, i.e.

$S - C = \bigcup_{n \text{ s.t. } |S \cap U_n| \leq \aleph_0} S \cap U_n$

A countable union of countable sets is countable. You are done.

By the way, the points in $C$ are usually called condensation points. If $S$ happens to be closed, the result above is a step in proving the Cantor-Bendixson Theorem. This theorem states that every closed set is the union of a perfect set and a countable set.
Perfect sets have cardinality $2^{\aleph_0}$. Hence the Cantor-Bendixson Theorem implies that no closed subset of $\mathbb{R}$ is a counterexample to the continuum hypothesis.

-

HINT: You’re on the right track. Suppose that $x\in S\setminus C$; show that there is an open interval $(p,q)$ with rational endpoints such that $x\in(p,q)$ and $(p,q)\cap C=\varnothing$. There are only countably many such intervals, so ...

-

Yes, I was trying exactly that, but I cannot show the disjoint part. I feel that I am missing something very easy. – Ester Aug 16 '12 at 21:13

How is it possible that $x\in\emptyset$? Did you mean $x\in(p,q)$ and $(p,q)\cap C=\emptyset$, perhaps? – Cameron Buie Aug 16 '12 at 21:14

@Cameron: Yes; somehow I lost the middle characters. – Brian M. Scott Aug 16 '12 at 21:38
https://codegolf.stackexchange.com/questions/146897/latex-truth-tables?noredirect=1
# LaTeX truth tables

Write a program or a function that accepts the list of outputs from a logic function and outputs the LaTeX code for its truth table. The inputs should be labeled as lowercase letters a-z, and the output should be labelled as F. The length of the list of outputs will always be shorter than 2^25, which means that the number of inputs will always be less than 25, so you can use letters from the lowercase alphabet for input names.

## Input

A number n of inputs and a list of length 2^n of binary numbers which represents the outputs of a logical function.

## Output

LaTeX code that produces the truth table for that function. Input and output values should be centered in rows. There must be a line between the table header and its values and between inputs and output, so the code should be similar to that below.

\begin{tabular}{c * <NUMBER OF INPUTS>|c}
<INPUTS>&F\\
\hline
<INPUT VECTOR i>&<OUTPUT>\\
\end{tabular}

## Example

Input:

2 [0, 0, 0, 1]

Output:

\begin{tabular}{cc|c}
a & b & F \\
\hline
0 & 0 & 0 \\
0 & 1 & 0 \\
1 & 0 & 0 \\
1 & 1 & 1 \\
\end{tabular}

Which when displayed in LaTeX shows the following truth table

## General rules

• Does this challenge require exactly the same output, or any output that may produce the same thing in TeX? – tsh Nov 2 '17 at 10:07
• Any output that produces the same thing in TeX – drobilc Nov 2 '17 at 10:21
• Something I find tricky here not knowing TeX that well is that there might be other shorter ways to write the table formatting TeX code, or even some other way (package?) to produce the table. Whatever language I use, TeX golf is part of the challenge. Is there an online interpreter for TeX for convenience, and perhaps to make unambiguous what the exact implementation is? – xnor Nov 2 '17 at 11:04
• Tip: The TeX code seems to work with all spaces and newlines removed. – xnor Nov 2 '17 at 11:11
• Anyone who doesn't know how to do it in LaTeX, follow the example output above. If n=5, simply put ccccc instead of cc, but leave |c alone...
And yes, in this table, all spaces and newlines are optional, but I would avoid blank lines. – Heimdall Nov 2 '17 at 11:20

# Charcoal, 70 bytes

≔tabularζ\ζ{*θc|c}⸿⪫✂β⁰Iθ¹&⁰&F\\⸿\hline⸿Eη⁺⪫⁺⮌EIθI﹪÷κX²λ²⟦ι⟧&¦\\⁰\endζ

Try it online! Link is to verbose version of code. Explanation:

≔tabularζ
Save this string in a variable to avoid duplication.

\ζ{*θc|c}⸿
Print the initial \tabular{*2c|c} line (2 or whatever value the first input q has).

⪫✂β⁰Iθ¹&⁰&F\\⸿\hline⸿
Get the first q letters from the predefined variable b and insert &s between them, then append the &F\\ and also print \hline on the next line.

Eη⁺⪫⁺⮌EIθI﹪÷κX²λ²⟦ι⟧&¦\\
Loop over the characters in the second input. For each one, its index is converted to binary with length q, the character is concatenated, the result is joined with &s and \\ is appended. The resulting strings are implicitly printed on separate lines.

⁰\endζ
Print the \endtabular. (The ⁰ is just a separator, as the deverbosifier forgot to insert a ¦.)

• It's kinda impressive that Charcoal is currently the winner, given that this challenge isn't really what it's designed for. – Erik the Outgolfer Nov 2 '17 at 12:32

# Python 2, 153 bytes

lambda n,l:r'\tabular{*%dc|c}%s&F\\\hline%s\endtabular'%(n,q(map(chr,range(97,97+n))),r'\\'.join(q(bin(2**n+i)[3:]+x)for i,x in enumerate(l)))
q='&'.join

Try it online! Outputs like

\tabular{*2c|c}a&b&F\\\hline0&0&0\\0&1&0\\1&0&0\\1&1&1\endtabular

\tabular and \endtabular are used as shorter \begin{tabular} and \end{tabular}, as per this LaTeX golf tip. The *2c is a shorthand to define 2 columns.

# Haskell

s%f=((:"&")=<<s)++f:"\\\\"
n#r=unlines$("\\tabular{"++('c'<$[1..n])++"|c}"):take n['a'..]%'F':"\\hline":zipWith(%)(mapM id$"01"<$[1..n])r++["\\endtabular"]

Try it online!

unlines -- take a list of strings and join it with NL.
-- the strings are:
"\\tabular{"++('c'<$[1..n])++"|c}"   -- tabular definition with n times 'c'
take n['a'..]%'F'                    -- table header
"\\hline"                            -- hline
zipWith(%)(mapM id$"01"<$[1..n])r    -- table content
["\\endtabular"]                     -- end of tabular definition

Table header and content are built via function '%':

s%f=             -- take a string 's' and a char 'f'
((:"&")=<<s)     -- append a "&" to each char in 's'
++f:"\\\\"       -- and append 'f' and two backslashes

Table header:

take n['a'..] % 'F'   -- s: the first n letters from the alphabet
                      -- f: char 'F'

Table content:

zipWith(%)             -- apply '%' pairwise to
mapM id$"01"<$[1..n]   -- all combinations of '0' and '1' of length n
r                      -- and the string 'r'

# Python 2, 192 168 166 bytes

lambda n,l:r'\begin{tabular}{*%dc|c}%s\end{tabular}'%(n,r'\\'.join(map('&'.join,[map(chr,range(97,97+n))+[r'F\\\hline']]+[bin(2**n+i)[3:]+l[i]for i in range(2**n)])))

Try it online! Pretty printed version:

# Python 2, 234 229 218 209 205 203 bytes

n,l=input()
print'\\begin{tabular}{'+'c'*n+'|c}\n'+' & '.join(chr(i+97)for i in range(n)+[-27]),'\\\\\n\hline'
i=0
for r in l:print' & '.join(bin(i)[2:].rjust(n,'0')+r),r'\\';i+=1
print'\\end{tabular}'

Try it online!

# Proton, 142 bytes

n=>x=>"\\tabular*#{n}c|c#{j(map(chr,97..97+n))}&F\\\\\hline"+'\\\\'.join(j(bin(i)[2to].zfill(n)+x[i])for i:0..len(x))+"\\endtabular"
j="&".join

Try it online! Output is in golfed LaTeX form; thanks to xnor for that trick! This should be able to be golfed to shorter than xnor's Python answer because Proton should in theory never lose to Python lol (in practice I'm bad xD). I may steal some tricks from xnor ;P Managed to now be shorter by making some things into variables, which I just noticed xnor also did :P And there we go, -6 bytes by using some Proton golfing tricks.

# R, 196 187 171 bytes

function(m,n){cat("\\tabular{*",n,"c|c}")
write(c(letters[1:n],"F\\\\\\hline",rbind(t(rev(expand.grid(rep(list(0:1),n)))),paste0(m,"\\\\")),"\\endtabular"),1,n+1,sep="&")}

Try it online!
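For readers who want a plain reference, here is a straightforward, ungolfed sketch of the same transformation (my own, not one of the submitted answers):

```python
def truth_table(n, outputs):
    """Return LaTeX tabular code for a truth table of n inputs."""
    # Header row: input letters a, b, ... then the output column F.
    header = " & ".join(chr(97 + i) for i in range(n)) + " & F \\\\"
    rows = []
    for i, out in enumerate(outputs):
        # Row index in binary, zero-padded to n bits, gives the input vector.
        bits = format(i, "0{}b".format(n))
        rows.append(" & ".join(bits) + " & {} \\\\".format(out))
    return "\n".join(
        ["\\begin{tabular}{" + "c" * n + "|c}", header, "\\hline"]
        + rows
        + ["\\end{tabular}"])

print(truth_table(2, [0, 0, 0, 1]))
```

Running it on the challenge’s example input reproduces the example output verbatim.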
http://people.reed.edu/~mayer/math111.html/header/node14.html
Next: 3.2 Sets Defined by Up: 3. Propositions and Functions Previous: 3. Propositions and Functions   Index

# 3.1 Propositions

3.1   Definition (Proposition.) A proposition is a statement that is either true or false. I will sometimes write a proposition inside of quotes ( ''), when I want to emphasize where the proposition begins and ends.

3.2   Examples. If '', then is a true proposition. If '', then is a false proposition. If '', then is a true proposition. If '', then I will not consider to be a proposition (unless lucky number has been defined.)

3.3   Definition (And, or, not.) Suppose that and are propositions. Then we can form new propositions denoted by and '', or '', and not ''. and '' is true if and only if both of are true. or '' is true if and only if at least one of is true. not '' is true if and only if is false. Observe that in mathematics, or'' is always assumed to be inclusive or: If '' and '' are both true, then or '' is true.

3.4   Examples. and '' is false. or '' is true. or '' is true. not(not )'' is true if and only if is true. For each element of Q let be the proposition ''. Thus = '', so is true, while = '', so is false. Here I consider to be a rule which assigns to each element of Q a proposition

3.5   Definition (Proposition form.) Let be a set. A rule that assigns to each element of a unique proposition is called a proposition form over . Thus the rule defined in the previous paragraph is a proposition form over Q. Note that a proposition form is neither true nor false, i.e. a proposition form is not a proposition.

3.6   Definition ( , Equivalent propositions.) Let be two propositions. We say that is equivalent to '' if either ( are both true) or ( are both false). Thus every proposition is equivalent either to '' or to '' We write '' as an abbreviation for is equivalent to '' If are propositions, then '' is a proposition, and '' is true if and only if (( are both true) or ( are both false)).
Ordinarily one would not make a statement like )'' even though this is a true proposition. One writes '' in an argument only when the person reading the argument can be expected to see the equivalence of the two statements and .

If and are propositions, then

(3.7)

is an abbreviation for

Thus if we know that (3.7) is true, then we can conclude that is true. The statement '' is sometimes read as if and only if ''.

3.8   Example. Find all real numbers such that

(3.9)

Let be an arbitrary real number. Then

Thus the set of all numbers that satisfy equation (3.9) is {2,3}.

3.10   Definition ( , Implication.) If and are propositions then we say implies '' and write '', if the truth of follows from the truth of . We make the convention that if is false then is true for all propositions , and in fact that

(3.11)

Hence for all propositions and

(3.12)

3.13   Example. For every element in Q

(3.14)

In particular, the following statements are all true.

(3.15)
(3.16)
(3.17)

In proposition 3.16, is false, is true, and is true. In proposition 3.17, is false, is false, and is true. The usual way to prove is to assume that is true and show that then must be true. This is sufficient by our convention in (3.11). If and are propositions, then '' is also a proposition, and

(3.18)

(the right side of (3.18) is true if and only if are both true or both false.) An alternate way of writing '' is if then ''.

We will not make much use of the idea of two propositions being equal. Roughly, two propositions are equal if and only if they are word for word the same. Thus '' and '' are not equal propositions, although they are equivalent. The only time I will use an '' sign between propositions is in definitions. For example, I might define a proposition form over N by saying for all '', or for all .
The definition we have given for implies'' is a matter of convention, and there is a school of contemporary mathematicians (called constructivists) who define to be true only if a constructive'' argument can be given that the truth of follows from the truth of . For the constructivists, some of the propositions of the sort we use are neither true nor false, and some of the theorems we prove are not provable (or disprovable). A very readable description of the constructivist point of view can be found in the article Schizophrenia in Contemporary Mathematics [10, pages 1-10].

3.19   Exercise. a) Give examples of propositions such that '' and '' are both true, or else explain why no such examples exist. b) Give examples of propositions such that '' and '' are both false, or explain why no such examples exist. c) Give examples of propositions such that '' is true but '' is false, or explain why no such examples exist.

3.20   Exercise. Let be two propositions. Show that the propositions '' and '' are equivalent. ( '' is called the contrapositive of the statement ''.)

3.21   Exercise. Which of the proposition forms below are true for all real numbers ? If a proposition form is not true for all real numbers , give a number for which it is false. a) . b) . c) . d) . (Here assume .) e) . f) . g) . h) .

3.22   Exercise. Both of the arguments A and B given below are faulty, although one of them leads to a correct conclusion. Criticize both arguments, and correct one of them.

Problem: Let be the set of all real numbers such that . Describe the set of all elements such that

(3.23)

Note that if then is defined.

ARGUMENT A: Let be an arbitrary element of . Then Hence the set of all real numbers that satisfy inequality (3.23) is the set of all real numbers such that .

ARGUMENT B: Let be an arbitrary element of . Then Now Hence the set of all real numbers that satisfy inequality (3.23) is the set of all such that either or .
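As an aside on Exercise 3.20: the equivalence of an implication and its contrapositive can be checked by brute-force enumeration of truth values. A small illustrative sketch (my own, not part of the original text):

```python
from itertools import product

def implies(p, q):
    # "p implies q" is false only in the case where p is true and q is false,
    # matching the convention of Definition 3.10.
    return (not p) or q

# Exercise 3.20: "p implies q" is equivalent to "(not q) implies (not p)"
# under every truth assignment.
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)
print("contrapositive equivalence verified")
```

Since a proposition built from p and q has only four possible truth assignments, exhaustive checking settles the equivalence completely.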
Ray Mayer 2007-09-07
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=im&paperid=1721&option_lang=eng
Izv. Akad. Nauk SSSR Ser. Mat., 1979, Volume 43, Issue 2, Pages 442–478 (Mi izv1721)

On $\varkappa$-metrizable space
E. V. Shchepin

Abstract: In this paper the author studies spaces in which one can define a “distance” from points to canonically closed sets (the $\varkappa$-metric). It is proved that products of metric spaces and locally compact groups are examples of such spaces, and in these cases the $\varkappa$-metric can be constructed so that an analogue of the triangle axiom is satisfied. The topological structure of $\varkappa$-metrizable compact Hausdorff spaces is studied. Bibliography: 26 titles.

Full text: PDF file (3910 kB)
References: PDF file   HTML file

English version: Mathematics of the USSR-Izvestiya, 1980, 14:2, 407–440

UDC: 513.83
MSC: Primary 54D15, 54E35, 54E99; Secondary 28C10, 54A25, 54F05, 54B20, 54B35

Citation: E. V. Shchepin, “On $\varkappa$-metrizable space”, Izv. Akad. Nauk SSSR Ser. Mat., 43:2 (1979), 442–478; Math. USSR-Izv., 14:2 (1980), 407–440

Citation in format AMSBIB:
\Bibitem{Shc79}
\by E.~V.~Shchepin
\paper On~$\varkappa$-metrizable space
\jour Izv. Akad. Nauk SSSR Ser. Mat.
\yr 1979
\vol 43
\issue 2
\pages 442--478
\mathnet{http://mi.mathnet.ru/izv1721}
\mathscinet{http://www.ams.org/mathscinet-getitem?mr=534603}
\zmath{https://zbmath.org/?q=an:0436.54024|0409.54040}
\transl
\jour Math. USSR-Izv.
\yr 1980
\vol 14
\issue 2
\pages 407--440
\crossref{https://doi.org/10.1070/IM1980v014n02ABEH001124}
\isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=A1980KM96800012}

• http://mi.mathnet.ru/eng/izv1721
• http://mi.mathnet.ru/eng/izv/v43/i2/p442

This publication is cited in the following articles:

1. E. V. Shchepin, “Functors and uncountable powers of compacta”, Russian Math. Surveys, 36:3 (1981), 1–71
2. A. Ch. Chigogidze, “On $\varkappa$-metrizable spaces”, Russian Math. Surveys, 37:2 (1982), 209–210
3. V. V. Uspenskii, “Topological groups and Dugundji compacta”, Math. USSR-Sb., 67:2 (1990), 555–580
4. V. V. Fedorchuk, “Probability measures in topology”, Russian Math. Surveys, 46:1 (1991), 45–93
5. Masami Sakai, “Non-(ω, ω1)-regular ultrafilters and perfect κ-normality of product spaces”, Topology and its Applications, 45:3 (1992), 165
6. Haruto Ohta, Masami Sakai, Ken-ichi Tamano, “Perfect ?-Normality of Product Spaces”, Ann N Y Acad Sci, 704:1 (1993), 279
7. S. V. Lyudkovskii, “Topological groups and their $\varkappa$-metrics”, Russian Math. Surveys, 48:1 (1993), 178–179
8. W. W. Comfort, F. Javier Trigos-Arrieta, “Locally pseudocompact topological groups”, Topology and its Applications, 62:3 (1995), 263
9. Masami Sakai, “Embeddings of κ-metrizable spaces into function spaces”, Topology and its Applications, 65:2 (1995), 155
10. V. V. Fedorchuk, “The Urysohn identity and dimension of manifolds”, Russian Math. Surveys, 53:5 (1998), 937–974
11. P. M. Gartside, “Nonstratifiability of topological vector spaces”, Topology and its Applications, 86:2 (1998), 133
12. Lutfi Kalantan, “Results about κ-normality”, Topology and its Applications, 125:1 (2002), 47
13. Lutfi Kalantan, Paul J. Szeptycki, “κ-normality and products of ordinals”, Topology and its Applications, 123:3 (2002), 537
14. A. V. Ivanov, “On the weight of nowhere dense subsets in compact spaces”, Siberian Math. J., 44:6 (2003), 991–996
15. M. G. Tkachenko, V. V. Tkachuk, “Dyadicity index and metrizability of compact continuous images of function spaces”, Topology and its Applications, 149:1-3 (2005), 243
16. S. V. Lyudkovskii, “Topologies and duality of $\kappa$-normed vector spaces”, Russian Math. Surveys, 61:3 (2006), 566–567
17. Masami Sakai, “Two properties of weaker than the Fréchet Urysohn property”, Topology and its Applications, 153:15 (2006), 2795
18. David Milovich, “Noetherian types of homogeneous compacta and dyadic compacta”, Topology and its Applications, 156:2 (2008), 443
19. M. Sanchis, “Moscow spaces and selection theory”, Topology and its Applications, 155:8 (2008), 883
20. K. L. Kozlov, V. A. Chatyrko, “Topological transformation groups and Dugundji compacta”, Sb. Math., 201:1 (2010), 103–128
21. D. Ipate, R. Lupu, “About rings of continuous functions in the expanded field of numbers”, Bul. Acad. Ştiinţe Repub. Mold. Mat., 2010, no. 1, 47–54
22. Paul Gartside, Michael Smith, “Counting the closed subgroups of profinite groups”, Journal of Group Theory, 13:1 (2010), 41
23. V. M. Valov, “Extenders and $\varkappa$-Metrizable Compacta”, Math. Notes, 89:3 (2011), 319–327
24. B. Berckmoes, R. Lowen, J. Van Casteren, “Distances on probability measures and random variables”, Journal of Mathematical Analysis and Applications, 374:2 (2011), 412
25. David Milovich, “The (λ,κ)-Freese-Nation Property for Boolean Algebras and Compacta”, Order, 29:2 (2012), 361
26. Fucai Lin, “Pseudocompact rectifiable spaces”, Topology and its Applications, 2014
27. Radul T., “Absolute Extensors and Binary Monads”, Appl. Categ. Struct., 25:2 (2017), 269–278
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-5-linear-functions-mid-chapter-quiz-page-321/16
## Algebra 1: Common Core (15th Edition)

$y=-\frac{3}{2}x$

To put the equation into slope-intercept form, we need to find the y-intercept and the slope of the line. Recall that slope equals rise over run. The line falls 3 units for every 2 units it runs to the right, so the slope is $-\frac{3}{2}$. The line crosses the y-axis at the y-value of 0, so the y-intercept is 0. Thus, we find the equation: $y=-\frac{3}{2}x$.
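As a quick numeric sanity check (my own sketch, not part of the textbook answer), the rise-over-run behaviour of the equation can be verified directly:

```python
def y(x):
    # y = -(3/2)x: slope -3/2, y-intercept 0
    return -1.5 * x

assert y(0) == 0            # the line crosses the y-axis at y = 0
assert y(2) - y(0) == -3    # a run of 2 produces a fall of 3 (slope = -3/2)
print(y(4))                 # -6.0
```

Every 2-unit step to the right drops the output by 3, confirming the slope read off the graph.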
https://www.coursehero.com/file/p6621h/Since-%CF%84-must-be-closed-we-have-d-%CE%BE-%CF%84-d%CE%BE-%CF%84-1-k-1-%CE%BE-d%CF%84-d%CE%BE-%CF%84-by-part-c-of-the/
Since $\tau_0$ must be closed, we have $d(\xi \wedge \tau_0) = (d\xi) \wedge \tau_0 + (-1)^{k-1} \cdot \xi \wedge (d\tau_0) = (d\xi) \wedge \tau_0$ by part (c) of the problem. Similarly, since $\omega$ is closed, we have $d(\omega \wedge \eta) = (-1)^k \cdot \omega \wedge (d\eta)$. Therefore

$\omega_0 \wedge \tau_0 = \omega \wedge \tau + d(\xi \wedge \tau_0 + (-1)^k \cdot \omega \wedge \eta).$

It remains to observe that $\xi \wedge \tau_0$ and $\omega \wedge \eta$ both have compact support because $\tau_0$ and $\eta$ have compact support by assumption. So we see that $[\omega_0 \wedge \tau_0]_c = [\omega \wedge \tau]_c$. This implies that the formula $[\omega][\tau]_c := [\omega \wedge \tau]_c$ yields a well defined map (6.8). The last step is checking bilinearity. However, this is trivial, because of the way that the quotient vector space operations are defined, and because of the fact that the wedge product at the level of differential forms is already bilinear.

6.3. Since $\partial M = \varnothing$, we have $\int_{\partial M} \tau = 0$. So Stokes’ theorem implies that

$\int_M \omega = \int_M d\tau = \int_{\partial M} \tau = 0.$

Now since $M$ is $n$-dimensional, every differential $(n+1)$-form on $M$ must vanish, and hence every $n$-form on $M$ is closed. Therefore $H^n_c(M)$ equals the quotient of $\Omega^n_c(M)$ by the subspace $W \subseteq \Omega^n_c(M)$ consisting of all forms that can be written as $d\tau$ for some $\tau \in \Omega^{n-1}_c(M)$. We just saw that if $\omega \in W$, then $\int_M \omega = 0$. Finally, integration is a linear map, so by the standard property of quotient vector spaces, we see that there is a (unique and well defined) linear map $\int : H^n_c(M) \to \mathbb{R}$ such that the composition $\Omega^n_c(M) \to H^n_c(M) \to \mathbb{R}$ equals the map $\omega \mapsto \int_M \omega$ (the first map above is the quotient map).

6.4. (a) By assumption, $\{U, V\}$ is an open cover of $M$. Let $\{\rho_U, \rho_V\}$ be a partition of unity corresponding to this open cover (we know that such a partition of unity always exists).
The general definition of a “partition of unity corresponding to a given open cover” was mentioned in class a couple of times, but in our situation, since the cover is finite, the formulation simplifies a little bit. The defining properties are that $\rho_U, \rho_V : M \to [0, 1]$ are smooth functions such that $\rho_U(p) + \rho_V(p) = 1$ for all $p \in M$, and $\operatorname{supp}(\rho_U) \subseteq U$ and $\operatorname{supp}(\rho_V) \subseteq V$. Write $S_U = \operatorname{supp}(\rho_U)$ and $S_V = \operatorname{supp}(\rho_V)$ for simplicity.

Now for every $p \in U$, define $(\omega_1)_p$ as follows. If $p \in U \cap V$, then set $(\omega_1)_p = \rho_V(p) \cdot \omega_p$, and if $p \notin U \cap V$, set $(\omega_1)_p = 0$. The key claim is that $\omega_1$ is smooth everywhere on $U$, i.e., $(\omega_1)_p$ depends smoothly on $p \in U$. To check this, use the standard idea: the restriction $\omega_1|_{U \cap V}$ is smooth on $U \cap V$ because both $\omega$ and $\rho_V|_{U \cap V}$ are smooth. On the other hand, the restriction $\omega_1|_{U \setminus S_V}$ is smooth because it is identically zero (recall that $\rho_V$ is identically zero on the complement of $S_V$). But $U \setminus S_V$ is open because $S_V$ is closed in $M$, and $U = (U \cap V) \cup (U \setminus S_V)$ because $S_V \subseteq V$ by assumption. Since being smooth is a local property, it follows that $\omega_1$ is smooth on $U$.

Likewise, for every $p \in V$, define $(\omega_2)_p$ as follows. If $p \in U \cap V$, set $(\omega_2)_p = -\rho_U(p) \cdot \omega_p$, and if $p \notin U \cap V$, set $(\omega_2)_p = 0$. A similar argument shows that $\omega_2$ is smooth everywhere on $V$. Thus we obtain $\omega_1 \in \Omega^k(U)$ and $\omega_2 \in \Omega^k(V)$.

• Winter '11 • Mitya • Math, Topology, Open set, Closed set, Ωk
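As an illustrative aside (my own one-dimensional example, not part of the original solution): a two-set partition of unity can be built numerically by normalizing bump functions, giving smooth $\rho_U + \rho_V = 1$ with supports inside $U$ and $V$. Here $U = (-2, 1)$ and $V = (-1, 2)$ are invented for illustration:

```python
import math

# Smooth bump supported on (a, b); identically zero outside.
def bump(x, a, b):
    if a < x < b:
        t = (x - a) / (b - a)
        return math.exp(-1.0 / (t * (1 - t)))
    return 0.0

# U = (-2, 1) and V = (-1, 2) cover M = (-2, 2); normalize the bumps
# so the two weights sum to 1 everywhere on M.
def rho_U(x):
    bu, bv = bump(x, -2, 1), bump(x, -1, 2)
    return bu / (bu + bv)

def rho_V(x):
    return 1.0 - rho_U(x)

for x in [-1.5, 0.0, 1.5]:
    assert abs(rho_U(x) + rho_V(x) - 1.0) < 1e-12  # partition of unity
assert rho_U(-1.5) == 1.0  # outside V the V-bump vanishes, so rho_U = 1 there
print("partition of unity verified")
```

The normalization step is where finiteness of the cover helps: the denominator is a finite sum that is positive everywhere on M, so each weight stays smooth.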
https://electronics.stackexchange.com/questions/353081/electric-wiring-color-codes-brown-blue-and-black
# Electric wiring color codes: BROWN, BLUE and BLACK 31.01.2018 16:40 start / Thank you all for the answers. Because this is not a standard wiring plug, I need assistance wiring the wifi adapter into the middle of the cable which connects the Energy Master Expert LCD display to the Energy Master Expert plug. Is this correct? WIFI ADAPTER LIVE IN = Energy Master Expert PLUG BROWN live wire WIFI ADAPTER LIVE OUT = Energy Master Expert LCD DISPLAY BROWN live wire WIFI ADAPTER NEUTRAL IN = Energy Master Expert PLUG BLUE neutral wire WIFI ADAPTER NEUTRAL OUT = Energy Master Expert LCD DISPLAY BLUE neutral wire WIFI ADAPTER EARTH IN = Energy Master Expert PLUG BLACK wire WIFI ADAPTER EARTH OUT = Energy Master Expert LCD DISPLAY BLACK wire \ end 31.01.2018 16:40 31.01.2018 16:36 start / Update from customer service: **The Ground Line is bridged in the connector. The meter does not require a Ground Line in the cable. Blue is the Null Line and black and brown are the Live Line. Yours sincerely, ELV Elektronik AG Technical customer service department** \ end 31.01.2018 16:36 Which colors are Ground Line, Null Line and Live Line? The standard says the Ground line is always green-yellow. None of these wires is green-yellow. Thanks. update If black is Earth, then why is the on/off switch button using the black wire? Because the cable is too short, less than 2 meters, I want to extend it. I also want to join the cables with a wifi adapter: SONOFF® POW 16A 3500W DIY WIFI Wireless Long Distance APP Remote Control Switch Socket Power Monitor Current Tester For Smart Home 80-160MHz AC 90-250V Support 2G/3G/4G Network. That is why I need to know exactly which colors are Live, Null and Earth. • Which standard? This may be country/region specific. – Brian Carlton Jan 30 '18 at 23:44 • Germany, this plug is part of the Energy Master Expert kit 2. I only received instructions in German, in black and white. I tried translating the wiring part, but none identified the colors for L, N and E.
– ZMEU.NET Jan 30 '18 at 23:49 • The manual shows blue as N. The other two wires are both L, black is mains->ELV and brown is ELV->consumer device. The plug has no connection to PE, other than the two prongs for schuko - but those don't go into the electronics box. – Turbo J Jan 31 '18 at 0:58 • The great thing about standards is there are so many of them to choose from. – The Photon Jan 31 '18 at 1:03 • So many wiring colour standards, all of them useless (not obvious, difficult to see in low light, not readable by the colour-blind, not able to be explained logically). The only colours that meet all of those conditions are white for live/hot, grey for neutral, and black for ground. Sigh! – Neil_UK Jan 31 '18 at 6:23 This is a combined plug/socket. The ground prongs are connected internally, no wire needed. You have to connect the blue wire to the (also internally connected) neutral screw. The brown and black wires go to the other plug/socket screws; it's live in/live out. (Schematic created using CircuitLab.) • Ok thanks, I know from the German instructions where each wire goes. Now please answer this question: which colors are Ground Line, Null Line and Live Line? – ZMEU.NET Jan 31 '18 at 0:42 • Please see my schematic. I don't know if the brown wire goes to the plug and the black to the socket or vice versa – nothing bad happens if you swap them; if the reading on the measurement device is negative, swap those wires. – Janka Jan 31 '18 at 9:00 • Your explanation makes sense. Still, can you please explain why there is an earth clamp on the socket? Is it not connected? – Fredled Jan 31 '18 at 11:07 • The earth clamp of the plug and the socket have to be connected. Plus, Schuko sockets without earth clamp aren't allowed. There are screws on the earth clamp should your measurement (or other) device have a metal casing. You would need a four-wire cable going to it then.
– Janka Jan 31 '18 at 13:07 If this cable could be considered as "internal to the equipment", normal electrical code colours may not apply. If it is a power monitor, the monitor circuit has to measure current somehow. I suspect that R4 is the current sense resistor, and power comes in from the plug on the brown wire, and returns to the socket on the black wire. If the instrument box has no exposed metal parts, it would have no need for a Safety Ground connection. • "If the instrument box has no exposed metal parts, it would have no need for a Safety Ground connection." Except to connect the ground through to the connected equipment? – RoyC Jan 31 '18 at 10:11 • @RoyC - but that ground connection would be in the plug/socket, with no need for a wire to the instrument. – Peter Bennett Jan 31 '18 at 16:31 It depends specifically where you live, but usually you find Ground Line as Green-Yellow, Null Line as Blue and Live Line as Red, Gray, Black or Brown. • So: LIVE: BROWN NEUTRAL: BLUE EARTH: BLACK – ZMEU.NET Jan 30 '18 at 23:51 • It's a combined plug/socket. – Janka Jan 30 '18 at 23:54 • Never seen one of these, thank you for clarifying. But I just searched for it, and the order should be the same as well, otherwise it would make no sense – Flávio Alegretti Jan 30 '18 at 23:57 • If Black is Earth/Safety Ground, there should not be a switch in it. – Peter Bennett Jan 31 '18 at 0:00 • If black is Earth, then why is the on/off switch button using the black wire? – ZMEU.NET Jan 31 '18 at 0:05 You will have the German standard. But your application is not a standard plug. Brown will be live. Blue will be neutral. Green/Yellow must be earth. Black can be another phase, or a switched live. Or something else, black is the trivial color. In your measurement plug, only one line is passed through the measurement circuit. Brown -> measurement -> Black. Neutral is passed to the measurement as well to provide voltage measurement and supply.
Earth is not required, and thus not wired, since the enclosure of the measurement device is plastic. • IMO the best would be to take a multimeter and check which wire goes where. – Fredled Jan 31 '18 at 11:08
https://eprint.iacr.org/2022/683
### Quantum Analysis of AES ##### Abstract Quantum computing is considered among the next big leaps in computer science. While a fully functional quantum computer is still in the future, there is an ever-growing need to evaluate the security of secret-key ciphers against a potent quantum adversary. Keeping this in mind, our work explores the key recovery attack using Grover's search on the three variants of AES (-128, -192, -256), with respect to both the quantum implementation and the quantum key search using Grover's algorithm. We develop a pool of implementations, mostly by reducing the circuit depth metrics. We consider various strategies for optimization, and make use of the state-of-the-art advancements in the relevant fields. In a nutshell, we present the least Toffoli depth and full depth implementations of AES, thereby improving on Zou et al.'s Asiacrypt'20 paper by more than 98 percent for all variants of AES. Our qubit count - Toffoli depth product is improved from theirs by more than 75 percent. Furthermore, we analyze Jaques et al.'s Eurocrypt'20 implementations in detail, fix their bugs, and report corrected benchmarks. To the best of our knowledge, our work improves on all the previous works (including the recent ePrint'22 paper by Huang and Sun). Category: Secret-key cryptography. Publication info: Preprint.
Keywords: Quantum Implementation, Grover's Search, AES. Contact author(s): starj1023 @ gmail com, anubhab001 @ e ntu edu sg, thdrudwn98 @ gmail com, khj1594012 @ gmail com, hwajeong84 @ gmail com, anupam @ ntu edu sg. History: 2022-09-19: last of 6 revisions. Short URL: https://ia.cr/2022/683. License: CC BY-NC-SA. BibTeX: @misc{cryptoeprint:2022/683, author = {Kyungbae Jang and Anubhab Baksi and Gyeongju Song and Hyunji Kim and Hwajeong Seo and Anupam Chattopadhyay}, title = {Quantum Analysis of AES}, howpublished = {Cryptology ePrint Archive, Paper 2022/683}, year = {2022}, note = {\url{https://eprint.iacr.org/2022/683}}, url = {https://eprint.iacr.org/2022/683} }
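As background to the abstract (my own sketch, not taken from the paper): Grover's key search needs on the order of (pi/4)·sqrt(N) oracle iterations over a key space of size N = 2^k, which is why the per-iteration circuit depth dominates the total cost. A minimal estimate using the textbook iteration count:

```python
import math

def grover_iterations(key_bits: int) -> int:
    """Approximate optimal Grover iteration count: floor(pi/4 * sqrt(2**key_bits))."""
    return math.floor(math.pi / 4 * math.sqrt(2 ** key_bits))

# A 4-bit toy key space (N = 16) needs about 3 iterations;
# AES-128 needs on the order of 2**64 iterations of the full AES oracle circuit.
```

The quadratic speedup is real but each iteration runs the whole cipher as a reversible circuit, which is exactly what the depth optimizations in the paper target.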
http://karagila.org/tags/cardinals/
Asaf Karagila I don't have much choice... ## Stationary preserving permutations are the identity on a club This is not something particularly interesting, I think. But it's a nice exercise in Fodor's lemma. Theorem. Suppose that $$\kappa$$ is regular and uncountable, and $$\pi\colon\kappa\to\kappa$$ is a bijection mapping stationary sets to stationary sets. Then there is a club $$C\subseteq\kappa$$ such that $$\pi\restriction C=\operatorname{id}$$. ## Vsauce on cardinals and ordinals To the readers of my blog, it should come as no surprise that I have a lot of appreciation for what Michael Stevens is doing in Vsauce. In the past Michael, who is not a mathematician, created an excellent video about the Banach-Tarski paradox, as well as another one on supertasks. And now he tackled infinite cardinals and ordinals. You can find the video here: ## Cofinality and the axiom of choice What is cofinality of a[n infinite] cardinal? If we think about the cardinals as ordinals, as we should in the case the axiom of choice holds, then the cofinality of a cardinal is just the smallest cardinality of an unbounded set. It can be thought of as the least ordinal from which there is an unbounded function into our cardinal. Or it could be thought of as the smallest cardinality of a partition whose parts are all "small". Not assuming the axiom of choice, the definition of cofinality remains the same, if we restrict ourselves to ordinals and $$\aleph$$ numbers. But why should we? There is a rich world out there, new colors that were not on the choice-y rainbow from before. So anything which is inherently based on the ordering properties of the ordinals should not be considered as the definition of an ordinal. So first let's recall the two ways we can order cardinals without choice. ## On the Partition Principle Last Wednesday I gave a talk about the Partition Principle in our students seminar.
This talk covered the historical background of the oldest open problem in set theory, and two proofs that for a long time I avoided learning. I promised to post a summary of the talk here. So here it is. The historical data was taken from the paper by Banaschewski and Moore, "The dual Cantor-Bernstein theorem and the partition principle." (MR1072073) as well as Moore's wonderful book "Zermelo’s Axiom of Choice" (which has a Dover reprint!). ## To Colloops a cardinal This is nothing new, but it's a choice-y way of thinking about it. Which is really what I enjoy doing. Definition. Let $$V$$ be a model of $$\ZFC$$, and $$\PP\in V$$ be a notion of forcing. We say that a cardinal $$\kappa$$ is "colloopsed" by $$\PP$$ (to $$\mu$$) if every $$V$$-generic filter $$G$$ adds a bijection from $$\mu$$ onto $$\kappa$$, but there is an intermediate $$N\subseteq V[G]$$ satisfying $$\ZF$$ in which there is no such bijection, but there is one for each $$\lambda\lt\kappa$$. ## Anti-anti Banach-Tarski arguments Many people, more often than not these are people from analysis or worse (read: physicists, which in general are not bad, but I am bothered when they think they have a say in how theoretical mathematics should be done), pseudo-mathematical, non-mathematical, philosophical communities, and from time to time actual mathematicians, would say ridiculous things like "We need to omit the axiom of choice, and keep only Dependent Choice, since the axiom of choice is a source for constant bookkeeping in the form of non-measurable sets". People often like to cite the paradoxical decomposition of the unit sphere given by Banach-Tarski. "Yes, it doesn't make any sense, therefore the axiom of choice needs to be omitted". ## The cardinal trichotomy: finite, countable, and uncountable. There is a special trichotomy for cardinality of sets. Sets are either finite, or countably infinite, or uncountable.
It's an interesting distinction, and it has a very deep root -- at least in my perspective -- in the role of first-order logic. Finite objects can be characterized in full using first-order logic. The fact that you can write down how many elements a set has is a huge thing. For example, every finite structure of a first-order logic language has a categorical axiomatization. If the language is finite, then the axiomatization is finite as well. ## Provable Equality Of Exponentiation It's an almost trivial theorem of cardinal arithmetic in $$\ZF$$ that given four cardinals, $$\frak p,q,r,s$$ such that $$\frak p<q,\ r<s$$ we have $$\frak p^r\leq q^s$$. In a recent question on math.SE some user asked whether or not we always have a strict inequality. Everyone sufficiently familiar with the basics of independence results would know that it is consistent to have $$2^{\aleph_0}=2^{\aleph_1}=\aleph_2$$, in which case taking $$\mathfrak{p=r}=\aleph_0,\ \mathfrak{q=s}=\aleph_1$$ gives us equality. But it's also trivial to see that we can always pick cardinals whose difference is large enough to keep the inequality true. ## Vector Spaces and Antichains of Cardinals in Models of Set Theory I finally uploaded my M.Sc. thesis titled “Vector Spaces and Antichains of Cardinals in Models of Set Theory”. There are several changes from the printed and submitted version, but those are minor. The Papers page lists them. ## The Philosophy of Cardinality: Pathologies or not? What are numbers? For the layman numbers are those things we use for counting and measuring. The complex numbers are on the edge of being numbers, but that's only because they are taught in high-schools and many people still consider them imaginary (despite them having some reasonably applicative uses). But a mathematician knows that a number is basically a notion which represents a quantity. We have so many numbers that I don't even know where to begin if I wanted to list them.
Luckily most of the readers (I suppose) are mathematicians and so I don't have to.
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-12th-edition/chapter-2-section-2-3-writing-equations-of-lines-2-3-exercises-page-176/102
## Intermediate Algebra (12th Edition) $F=86^{\mathrm{o}}$ We got the following equation in Exercise 100: $F=\displaystyle \frac{9}{5}C+32$ We plug in $C=30$ and solve for $F$: $F=\displaystyle \frac{9}{5}(30)+32$ $=54+32$ $=86^{\mathrm{o}}$
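The same computation as a quick script (a sketch; the function name is mine, not from the textbook):

```python
def celsius_to_fahrenheit(c):
    """Apply F = (9/5)C + 32, the equation derived in Exercise 100."""
    return 9 / 5 * c + 32

# celsius_to_fahrenheit(30) -> 86.0, matching the answer above
```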
http://tex.stackexchange.com/questions/84275/custom-human-shape-for-tikz/84566
# Custom “human” shape for tikz A question with a large mea culpa. I've tried thinking about how I would add this shape and I've come up completely empty. Not strong enough with TeX to even give it a shot. I'm trying to put together a visual aid of people numbers in a particular area, so I imagine using something like \begin{tikzpicture}[every node/.style={human,draw,fill=black}] \node (sci) {scientist}; \node [right=1mm of sci] (sol) {gurus}; \node [right=1mm of sol] (joe) {workers}; \end{tikzpicture} My question is: Is there a simple shape that already exists (it's not in the shapes library) that does something similar, or is this easily coded as a new shape, in which case could some kind soul post it as an example, or is there a simple way to combine two existing shapes into a new one? (A basic version I'd be happy with would be a long-ish downward-pointing isosceles triangle with a circle on top.) Image of what I'm sort of after below. Thanks As a worst case I could use lots of \includegraphics commands in nodes, but that is likely to be the most manual approach and I would prefer something else if a solution exists. Update: Really tough to pick the answer for this one. I picked the one I went with in a hurry but there is a lot of material there for alternative ways to make this work. I'd really like to thank all three responders for their help. Now if only I could get a shape definition too ... :-) - Do you really need tikz? The marvosym package provides the commands \Gentsroom and \Ladiesroom : A good place to look for such common-use symbols is the comprehensive LaTeX symbol list. - Thanks @TVerron unfortunately I need to put several of them side by side and color them according to each category so that it shows numbers of people in groups. I'm not sure how I'd do that outside tikz, ... perhaps a table structure...
will try it and revert –  Tahnoon Pasha Nov 26 '12 at 8:50 @TahnoonPasha These commands are usable in text mode, so you can definitely use them in nodes of a tikz picture, and use the standard commands you would use to control the size or color of the text of a node. –  T. Verron Nov 26 '12 at 9:03 That is extremely useful @TVerron. It may be a new question but is there an easy way to shade out half the image that you're aware of? e.g. If I'm using each figure to represent four people and I want to indicate 2 by using half a figure? –  Tahnoon Pasha Nov 26 '12 at 9:30 @TahnoonPasha : That would indeed make a new question. I must confess I have no idea how to do that, except maybe some low-level hacks if your symbol is tikz-defined and parametrized by its height. –  T. Verron Nov 26 '12 at 10:38 The author created a new question for the partial filling of the shape problem: tex.stackexchange.com/questions/84420/… –  JLDiaz Nov 27 '12 at 8:29 Well since you are using the TikZ environment anyhow, loading marvosym becomes redundant (though nothing wrong with that by the way) \documentclass{standalone} \usepackage{tikz} \usetikzlibrary{positioning,arrows} \begin{document} \begin{tikzpicture} \node[fill,circle,minimum size=5mm] (head) {};% head node ('body' is positioned relative to it) \node[rounded corners=2pt,minimum height=1.3cm,minimum width=0.4cm,fill,below = 1pt of head] (body) {}; \draw[line width=1mm,round cap-round cap] ([shift={(2pt,-1pt)}]body.north east) --++(-90:6mm); \draw[line width=1mm,round cap-round cap] ([shift={(-2pt,-1pt)}]body.north west)--++(-90:6mm); \draw[thick,white,-round cap] (body.south) --++(90:5.5mm); \end{tikzpicture} \end{document} then a little digression \documentclass{standalone} \usepackage{tikz} \usetikzlibrary{shapes.callouts} \begin{document} \begin{tikzpicture}[manstyle/.style={line width=4pt,line cap=round,line join=round}] \node[fill,circle,inner sep=2.5pt,outer sep=1pt] (head) at (-0.2mm,7.1mm) {}; \node[above left,anchor=pointer,scale=0.4,cloud callout, cloud puffs=10, aspect=2, cloud puff arc=120,
fill,text=white,callout relative pointer={(-4mm,-4mm)}] at (2mm,8mm){$\displaystyle\int_\pi l(d,t)\mathrm{d}t$}; \draw[manstyle] (0,0.5) -- ++(0,-1.2cm); \draw[manstyle] (-1.5pt,-1pt) -- ++(0,0.535cm) (1.2pt,1pt) --(0,5mm)--++(-80:5mm) coordinate (g); \draw[-latex] (g) -| (-25:8mm); \draw[|-|,ultra thin] ([shift={(1mm,2mm)}]g) --++ (5.15mm,0) node [midway,above,scale=0.5] {$l$}; \node[fill,minimum height=7mm,rounded corners=2pt,outer xsep=1pt,outer ysep=0] (syphon) at (1.1cm,-0.45cm) {}; \fill[rounded corners=1pt] (syphon.south west) |-++(140:7mm) coordinate (d) arc (180:230:4mm) |- (syphon.south west) --cycle; \draw[|-|,ultra thin] (d)++(-0.1mm,0) --++ (-3.1mm,0) node[midway,above,scale=0.5] {$d$}; \node[font=\scshape, align=center] (motto) at (5mm,-1.5cm) {Gents \\ Do It With \\ Precision}; \end{tikzpicture} \end{document} - The restrooms in my university need one of these :) –  henrique Nov 27 '12 at 23:43 An option if you're willing to includegraphics is to go to openclipart.org (or any other clip art site), download an icon in svg, convert it to pdf and simply include it. That's what I do for globes and such in my figures. Update: To re-use the icon, you can define a new command: \newcommand{\usericon}[1]{\includegraphics[width=#1\textwidth]{usericon}} Then, wherever you want to put it, simply put it in a node like so: \node (user) {\usericon{0.2}}; - Thanks @recluze. Do you know if there is some way to put the \includegraphics into a \tikzset environment and then just call it in the style for each node?
–  Tahnoon Pasha Nov 26 '12 at 9:25 Updated the answer :) –  recluze Nov 26 '12 at 11:31 Here is a basic version as per your description, and added an option to control the smiley: ## Code: \documentclass{article} \usepackage{tikz} \newcommand{\SmileyRadius}{1.0cm}% assumed value \newcommand{\eyeX}{0.3cm}% assumed value \newcommand{\eyeY}{0.35cm}% assumed value % http://tex.stackexchange.com/questions/58901/something-between-frownie-and-smiley \newcommand{\Simley}[3][]{% % #1 = draw options % #2 = smile factor % #3 = location %\begin{tikzpicture}[scale=0.4] \begin{scope}[shift={(#3)}, scale=0.4] \draw [thick, fill=brown!10, #1] (0,0) circle (\SmileyRadius);% outside circle \draw [fill=cyan,draw=none] (\eyeX,\eyeY) circle (0.15cm); \draw [fill=cyan,draw=none] (-\eyeX,\eyeY) circle (0.15cm); \pgfmathsetmacro{\xScale}{2*\eyeX/180} \pgfmathsetmacro{\yScale}{1.0*\eyeY} \draw[color=brown, thick, domain=-\eyeX:\eyeX] plot ({\x},{ -0.1+#2*0.15 % shift the smiley as smile decreases -#2*1.75*\yScale*(sin((\x+\eyeX)/\xScale))-\eyeY}); \end{scope} %\end{tikzpicture}% }% \newcommand*{\Symbol}[3][]{% % #1 = draw options % #2 = smile factor % #3 = location % \begin{scope}[shift={(#3)}] %\draw [thick, fill=brown!25, #1] (0,0) circle (0.30cm);% Use this for no-smiley version \Simley[#1]{#2}{0,0.1}% Comment this out if you don't want smiley \draw [thick, fill=brown!10, #1] (-0.4, -0.40) -- (0.4, -0.40) -- (0,-2.5) -- cycle; \end{scope}% }% \begin{document} \begin{tikzpicture} \Symbol{1}{0,0} \Symbol[draw=black, fill=red!25, ultra thick]{0.25}{1,0} \Symbol[draw=blue, fill=green!20, ultra thick]{-1}{2,0} \end{tikzpicture} \end{document} - thanks @PeterGrill. If I want to put several \Symbol in a row or position them above each other what would the correct coding be? –  Tahnoon Pasha Nov 26 '12 at 8:48 Have updated solution to show how to position them in a row. You can adjust the coordinate where they are placed to get them in a row. –  Peter Grill Nov 27 '12 at 1:17
https://mathematica.stackexchange.com/questions/105797/weird-behaviour-of-listable-fortran-librarylink-function-different-result-each
# weird behaviour of Listable Fortran LibraryLink function: different result each evaluation I am working with Fortran LibraryLink functions currently. In this post, I will show some weird results from a Listable Fortran LibraryLink function generated by the Intel compiler under Windows (I tried gcc on Linux; there the result is normal). The Fortran LibraryLink function is the same as in my earlier post. I upload a fixed version (zip download here), in which I have packed everything. I will first show how to create the Fortran LibraryLink function using the Intel compiler. First, in the command line, we enter the Intel compiler intel64 mode (I am using 64-bit Windows), and compile the Fortran source to an object file: ifort /c /fast iterateG-librarylink.f90 Needs["CCompilerDriver`"]; CCompilerDriver`$CCompiler = {"Compiler" -> CCompilerDriver`IntelCompiler`IntelCompiler, "CompilerInstallation" -> "c:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.1.146\\windows\\", "CompileOptions" -> "/fast"}; CreateLibrary[{FileNameJoin[{NotebookDirectory[], "iterateG-mma-intel.c"}], FileNameJoin[{NotebookDirectory[], "iterateG-librarylink.obj"}]}, "iterateG_C", "TargetDirectory" -> NotebookDirectory[], "CompileOptions" -> "/fast"] funcfortran = LibraryFunctionLoad[FileNameJoin[{NotebookDirectory[], "iterateG_C.dll"}], "iterateG_C", {Real, Real, Real, Real, Real, Real, Real, Real, Real}, Real] Testing with a sample calculation: funcfortran[1, 1, 1, 1, 1, 1, 1, 1, 1] 0.154666 Now we want to make the LibraryLink function Listable like a Listable compiled function, so it can be automatically thread-parallelized. We can do this by wrapping the LibraryLink function in a Listable compiled function and setting "InlineCompiledFunctions" -> True: funcfortranListable = Compile[{a, s, d, f, g, h, j, k, l}, funcfortran[a, s, d, f, g, h, j, k, l], CompilationTarget -> "C", Parallelization -> True, RuntimeAttributes -> {Listable}, CompilationOptions -> {"InlineCompiledFunctions" -> True, "InlineExternalDefinitions" -> True}] halirutan has provided another way and deep insight on this here.
I have confirmed this method works on many LibraryLink functions, including C, C++, and also Fortran. But it fails on this one! Let us see: funcfortranListable[1, 1, 1, 1, 1, 1, 1, 1, ConstantArray[1, 10]] it should give {0.154666, 0.155605, 0.154597, 0.154666, 0.154666} the weird thing is, it can possibly give the right answer on the first run, but if you run it several times, it will give all kinds of answers, for example {0.134586, 0.137318, 0.154822, 0.146941, 0.154666} And you can run it again and again; weirdly, sometimes it returns to the right answer. At first I thought there was a period, but there is not. The library generated on Linux does not seem to suffer from this problem. Also, I found the "bug" lines: if I delete call twobytwolinearsolve(wminush00,tmp) call twobytwolinearsolve(wminush00,tmpdagger) from my Fortran source, the Listable result is consistent. What is wrong here? • The /fast switch for Intel Fortran applies very extreme optimizations that can result in miscompilation of programs. It also does not guarantee better performance, so I would generally consider the use of this switch somewhat experimental, as it may not work properly or be effective in different cases. GCC does not do this in the same way, even with -Ofast. Could you try /O1 instead and see if the problem remains? – Oleksandr R. Feb 6 '16 at 18:27 • @OleksandrR. Hi, Oleksandr R. Thank you for the comment. It seems not to be a problem of optimization: /O1 also gives weird results. – matheorem Feb 6 '16 at 23:45 • You say that you "tried gcc on linux the result is normal". Doesn't that suggest that the problem is likely to be in the interaction of the Intel compiler with your FORTRAN code, rather than within Mathematica? – MarcoB Feb 8 '16 at 6:43 • @MarcoB But the .dll the Intel compiler generates gives the correct answer when it is not Listable. So I think it is due to mma. mma's thread parallelization makes a correct dll give wrong answers. – matheorem Feb 8 '16 at 6:46
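One explanation consistent with these symptoms (my speculation, not a confirmed diagnosis) is that the Fortran library keeps state shared between calls, e.g. module variables or implicitly SAVEd locals used as workspace. That is harmless for sequential calls but races when Parallelization -> True invokes the library from several threads at once. A toy Python sketch of the failure mode, purely illustrative (not the actual library), with the bad interleaving made explicit and deterministic:

```python
# shared module-level "workspace", as a non-reentrant library might use
_scratch = [0.0]

def eval_once(x):
    """Correct result when calls do not overlap."""
    _scratch[0] = 2 * x      # write the shared workspace
    return _scratch[0] + 1   # read it back

def eval_interleaved(x, y):
    """Simulate a call for x whose workspace is clobbered by a concurrent call for y."""
    _scratch[0] = 2 * x      # thread 1 writes its intermediate value
    _scratch[0] = 2 * y      # thread 2 overwrites it before thread 1 reads
    return _scratch[0] + 1   # thread 1's result now depends on y, not x
```

`eval_once(1)` gives 3, but the interleaved version returns `2*y + 1` regardless of `x`: the same inputs no longer give the same outputs, which matches the run-to-run variation shown above.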
http://community.econometrics.com/questions/186/revisions/
# Revision history [back]

### When estimating a System of Equations, how do I copy the coefficients to a matrix?

I am struggling with a little problem. When estimating a system, and storing the coefficients in a vector called beta, I would like to copy these coefficients into a 4x6 matrix. I am not able to do this. Can you help? See my example below:

    sample 1 100
    genr p1=1+nor(1)
    genr p2=1+nor(1)
    genr p3=1+nor(1)
    genr p4=1+nor(1)
    genr x=1+nor(1)
    genr q1=20-10*p1+2*p2+0.5*p3+1.5*p4+4*x+nor(20)
    genr q2=10+0.5*p1-1*p2+1.5*p3+0.5*p4+6*x+nor(30)
    genr q3=40+3*p1+1*p2-1.5*p3+2.5*p4+2*x+nor(20)
    genr q4=30+2*p1+1*p2+2.5*p3-3.5*p4+3*x+nor(30)
    genr one=1
    system 4 / DN noconstant coef=beta
    ols q1 one p1 p2 p3 p4 x
    ols q2 one p1 p2 p3 p4 x
    ols q3 one p1 p2 p3 p4 x
    ols q4 one p1 p2 p3 p4 x
    end
    stop
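The underlying operation the poster wants — turning the stacked coefficient vector (4 equations × 6 regressors each) into a 4×6 matrix — is a reshape. As a language-neutral illustration (Python/NumPy, not the econometrics package used in the question), it looks like this; the equation-major storage order assumed here would need to be checked against how that package actually stores beta:

```python
import numpy as np

# Stand-in for the 24 stacked coefficients: 6 per equation, 4 equations,
# assumed stored equation by equation (equation-major order).
beta = np.arange(24, dtype=float)

# 4 equations x 6 regressors (one, p1, p2, p3, p4, x)
B = beta.reshape(4, 6)

print(B.shape)  # (4, 6)
print(B[0])     # the first equation's 6 coefficients
```

If the package instead stores coefficients regressor-major, `beta.reshape(6, 4).T` gives the same 4×6 layout.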
https://www.maplesoft.com/support/help/Maple/view.aspx?path=Task/DetermineInverseFunction
# Determine the Inverse of a Function - Maple Programming Help

Description: Determine the inverse of a function.

Enter the function to be inverted:

Commands Used
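The worked Maple steps on this page did not survive extraction. As a generic, non-Maple illustration of what determining an inverse means, here is a plain-Python sketch using a hypothetical example function f(x) = 2x + 3 (not taken from the Maple page); solving y = 2x + 3 for x gives the inverse, which is then round-trip checked:

```python
# Hypothetical example: invert f(x) = 2x + 3.
# Solving y = 2x + 3 for x gives x = (y - 3) / 2.

def f(x):
    return 2 * x + 3

def f_inverse(y):
    return (y - 3) / 2

# Round-trip check: the inverse undoes the function and vice versa.
for x in [-2.0, 0.0, 1.5, 10.0]:
    assert f_inverse(f(x)) == x
    assert f(f_inverse(x)) == x
print("inverse verified")
```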
https://scioly.org/wiki/index.php/Template_talk:State_results_box
# Template talk:State results box

## Table look

I feel like the "width:75%" should be kept; it makes the table fill up more of the page and keeps things more even. Compare 2014 Virginia State Tournament to 2015 Virginia State Tournament. Tailsfan101talk 15:44, 28 March 2020 (UTC)

Ah, good point. Brought it back. 17:00, 28 March 2020 (UTC)

## Copy/paste template

Here's an idea that could make editing a bit easier with this template: perhaps we could create a page with the code for this template (in <pre>):

    {{State results box
    |div=
    |1st_name=
    |1st_regional=
    |1st_points=
    |2nd_name=
    |2nd_regional=
    |2nd_points=
    …

…perhaps all the way to 70 (since that's how far the code goes). This could be used as a template for people to copy the code from. As in, say someone wants to make a page that has 24 teams and regionals. They could copy this code down to "24_points=" and fill in the rest of the information from there. We could include a list of things that still need to be added, such as the end of the template, regional champs, the division, etc. We could make two separate "templates", one for states with regionals (such as Virginia) and one for states without regionals (such as Delaware). Pepperonipi, what is your opinion on this? Tailsfan101talk 21:13, 29 March 2020 (UTC)

I wondered if someone some day was going to ask something like this! I wondered if maybe you just didn't care about typing the tags, honestly. But, I'm glad you asked. So, ultimately, I don't want us wiki editors to be wasting huge sums of time, because obviously that just helps no one. I've been thinking about this topic here-and-there for a few weeks now, and I have some ideas. 1. An article template could work, like you suggested. If this were implemented, though, it'd probably be an entire article, not just the template code.
I tried to do a template for this "article template" in my User:Pepperonipi/Sandbox 5 and User:Pepperonipi/Sandbox 6, but it looks like I'm getting into a situation I've been in before... it becomes a confusing mess of substitution, with lots of {{!}}'s and |'s. 2. Pi-Bot could also potentially be capable of doing this in the future (not on its own... serving as a helper tool to us editors). I have ideas about something that'd help out with things like these... but absolutely nothing done with those ideas, so I'm not going to comment on how that would work. I'm fine with you or me creating an article template for the state tournament pages, because they are so similar and alike. If you do, I'd consider naming it Template:Article template/State tournament, so it's clear it's not meant to be a normal template, but rather an article template that code can be pulled from. If you want me to do it, let me know. Considering I'm thinking about this and having ideas about reducing some of the useless time wasted on the wiki with Pi-Bot and templates, I ask you and anyone else who sees this discussion... in your dream, what would be the best, most comfortable possible way of making these pages? For example, using an article template, sending a Google Sheet to Pi-Bot (which returns the wikitext code), using a visual editing tool, etc. Just want to hear your all's thoughts. 00:31, 30 March 2020 (UTC)
https://plato.stanford.edu/entries/social-choice/notes.html
## Notes to Social Choice Theory 1. When $$n$$ is even, the first part of the theorem only holds for group sizes $$n$$ above a certain lower bound (which depends on $$p)$$, due to the possibility of majority ties. When $$n$$ is odd, it holds for any $$n \gt 1$$. 2. If different individuals have different known levels of reliability, weighted majority voting outperforms simple majority voting at maximizing the probability of a correct decision, with each individual’s voting weight proportional to $$\log(p/(1-p))$$, where $$p$$ is the individual’s reliability as defined above (Shapley and Grofman 1984; Grofman, Owen, and Feld 1983; Ben-Yashar and Nitzan 1997). 3. Optionally, one can stipulate that the utility from a tie is 1/2. 4. Completeness requires that, for any $$x, y \in X$$, $$xR_iy$$ or $$yR_ix,$$ and transitivity requires that, for any $$x, y, z \in X$$, if $$xR_iy$$ and $$yR_iz$$, then $$xR_iz.$$ 5. In the classic example, there are three individuals with preference orderings $$xP_1 yP_1 z, yP_2 zP_2 x$$, and $$zP_3 xP_3 y$$ over three alternatives $$x, y$$, and $$z$$. The resulting majority preferences are cyclical: we have $$xPy$$, $$yPz$$, and yet $$zPx$$. 6. Formally, $$\{x \in X : x \in f(R_1,R_2 ,\ldots ,R_n)$$ for some $$\langle R_1,R_2 ,\ldots ,R_n\rangle$$ in the domain of $$f\}$$. 7. For present purposes, one can stipulate that the last clause (for all $$x$$ in the range of $$f$$, $$yR_ix$$ where $$y \in f(R_1, R_2 , \ldots ,R_n))$$ is violated if $$f(R_1, R_2 , \ldots ,R_n)$$ is empty. 8. Formally, $$y'P_iy$$, where $$y' = f(R_1,\ldots, R'_i, \ldots ,R_n)$$ and $$y = f(R_1 , \ldots ,R_i , \ldots ,R_n)$$, assuming that $$\langle R_1,\ldots$$, $$R'_i$$, $$\ldots ,R_n\rangle$$ is in the domain of $$f$$. The definition presupposes that the social choice sets for the profiles $$\langle R_1, \ldots ,R_i , \ldots ,R_n\rangle$$ and $$\langle R_1,\ldots$$, $$R'_i$$, $$\ldots ,R_n\rangle$$ are singleton. 9. 
Sen, like Arrow in his definition of social welfare functions (as opposed to functionals), required $$R$$ to be an ordering by definition. 10. Technically, this requires a domain restriction to positive welfare profiles. 11. Formally, $$X = \{p, \neg p : p \in X^+\}$$, where $$X^+$$ is a set of un-negated propositions. To avoid technicalities, we assume that $$X$$ contains no contradictory or tautological propositions. 12. In principle, consistency can be defined relative to some side constraint such as the legal doctrine in the ‘doctrinal paradox’ example. 13. See also the remark on the relationship between path-connectedness and non-simplicity at the end of this subsection. 14. An earlier mathematically related, though interpretationally distinct contribution is Wilson’s work on abstract aggregation (1975). 15. We call an opinion function $$Pr$$ on $$X$$ probabilistically coherent if it is extendable to a probability function (with standard properties) on the smallest algebra that includes $$X$$. An algebra is a set of propositions that is closed under negation and conjunction. If $$X$$ is itself an algebra, as often assumed, then a probabilistically coherent opinion function on $$X$$ is simply a probability function. In the context of probabilistic opinion pooling, $$X$$ is often assumed to be an algebra generated by some underlying set of possible worlds, e.g., the set of all subsets of it, with negation understood as set-theoretic complementation and conjunction understood as set-theoretic conjunction. If we wish to lift the assumption that $$X$$ is finite, then every reference to an algebra in this section must be replaced with a reference to a $$\sigma$$-algebra. 16. The learnt information $$L$$ could be either simply the truth of some proposition (an event) or, more generally, a likelihood function. 
Much of the earlier literature in statistics on ‘external Bayesianity’ uses the latter modelling option, while some of the recent philosophical literature uses the former. Depending on how $$L$$ is modelled, conditionalization then takes the form either of classical Bayesian conditionalization or of its generalization to the case of likelihood functions.
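The Condorcet cycle in note 5 can be verified mechanically. A small Python sketch (not part of the entry) tallies the pairwise majorities for the three orderings given there:

```python
from itertools import combinations

# The three individual preference orderings from note 5,
# each written best-to-worst.
orders = [
    ["x", "y", "z"],   # individual 1: x > y > z
    ["y", "z", "x"],   # individual 2: y > z > x
    ["z", "x", "y"],   # individual 3: z > x > y
]

def prefers(order, a, b):
    """True if this individual ranks a above b."""
    return order.index(a) < order.index(b)

# Pairwise majority: a beats b if a majority of individuals prefer a to b.
majority = {}
for a, b in combinations("xyz", 2):
    a_votes = sum(prefers(o, a, b) for o in orders)
    majority[(a, b)] = a_votes > len(orders) / 2
    majority[(b, a)] = (len(orders) - a_votes) > len(orders) / 2

# The majority preference is cyclical: x beats y, y beats z, yet z beats x.
assert majority[("x", "y")] and majority[("y", "z")] and majority[("z", "x")]
print("Condorcet cycle confirmed")
```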
https://testbook.com/question-answer/a-single-phase-half-wave-ac-voltage-controller-has--5e884757f60d5d02db1644a6
# A single-phase half wave ac voltage controller has a resistive load R = 10 ohm and input voltage Vs = 180 V, 60 Hz. The delay angle of thyristor is α = π/3. Calculate the rms output voltage.

This question was previously asked in UPRVUNL AE EE 2014 Official Paper

1. 171 V
2. 114 V
3. 125 V
4. 131 V

Option 2 : 114 V

## Detailed Solution

Concept:

The rms value of the output voltage of a single-phase full wave voltage controller is given by

$${{V}_{or}}=\frac{{{V}_{m}}}{\sqrt{2}}{{\left[ \frac{1}{\pi }\left\{ \left( \pi -\alpha \right)+\frac{1}{2}\sin 2\alpha \right\} \right]}^{\frac{1}{2}}}$$

The rms value of the output voltage of a single-phase half wave voltage controller is given by

$${{V}_{or}}=\frac{{{V}_{m}}}{\sqrt{2}}{{\left[ \frac{1}{2\pi }\left\{ \left( \pi -\alpha \right)+\frac{1}{2}\sin 2\alpha \right\} \right]}^{\frac{1}{2}}}$$

Calculation:

Source rms voltage (Vs) = 180 V, so the peak voltage is Vm = √2 × 180 V.

Delay angle (α) = π/3 = 60°

$${{V}_{or}}=\frac{180\times \sqrt{2}}{\sqrt{2}}{{\left[ \frac{1}{2\pi }\left\{ \left( \pi -\frac{\pi }{3} \right)+\frac{1}{2}\sin \frac{2\pi }{3} \right\} \right]}^{\frac{1}{2}}}=114~V$$
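The arithmetic in the detailed solution is easy to check numerically. A short Python sketch implementing the half-wave formula quoted in the solution:

```python
import math

def half_wave_vor(vs_rms, alpha):
    """RMS output voltage of a single-phase half-wave AC voltage
    controller with resistive load, firing angle alpha in radians
    (formula as quoted in the solution above)."""
    vm = math.sqrt(2) * vs_rms  # peak input voltage
    bracket = ((math.pi - alpha) + 0.5 * math.sin(2 * alpha)) / (2 * math.pi)
    return (vm / math.sqrt(2)) * math.sqrt(bracket)

vor = half_wave_vor(180.0, math.pi / 3)
print(round(vor, 1))  # 114.2 -> matches option 2 (114 V)
```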
https://developer-archive.leapmotion.com/documentation/javascript/api/Leap.Pointable.html
# Pointable

class Pointable()

The Pointable class reports the physical characteristics of a detected finger. Note that Pointable objects can be invalid, which means that they do not contain valid tracking data and do not correspond to a physical entity. Invalid Pointable objects can be the result of asking for a Pointable object using an ID from an earlier frame when no Pointable objects with that ID exist in the current frame. A Pointable object created from the Pointable constructor is also invalid. Test for validity with the valid property.

    var pointable = frame.pointables[0];
    var direction = pointable.direction;
    var length = pointable.length;
    var width = pointable.width;
    var stabilizedPosition = pointable.stabilizedTipPosition;
    var position = pointable.tipPosition;
    var speed = pointable.tipVelocity;
    var touchDistance = pointable.touchDistance;
    var zone = pointable.touchZone;

Pointable()

Constructs a Pointable object. An uninitialized pointable is considered invalid. Get valid Pointable objects from a Frame() or a Hand() object.

Pointable.direction

Type: number[] – A 3-element array representing a unit direction vector.

The direction in which this finger is pointing.

    var direction = pointable.direction;

The direction is expressed as a unit vector pointing in the same direction as the tip.

Pointable.id

Type: String

A unique ID assigned to this Pointable object, whose value remains the same across consecutive frames while the tracked finger remains visible. If tracking is lost (for example, when a finger is occluded by another finger or when it is withdrawn from the Leap field of view), the Leap may assign a new ID when it detects the entity in a future frame.

    var id = pointable.id;

Use the ID value with the Frame.pointable() and Hand.pointable() functions to find this Pointable object in future frames.

Pointable.length

Type: number

The estimated length of the finger in millimeters.
    var length = pointable.length;

The reported length is the visible length of the finger from the hand to tip. If the length isn't known, then a value of 0 is returned.

Pointable.stabilizedTipPosition

Type: number[] – a 3-element array representing a position vector.

The tip position in millimeters from the Leap origin. Stabilized based on the velocity of the pointable to make precise positioning easier.

    <p>Stabilized Position: <span id="stabPosition">…</span></p>
    <p>Difference from tip position: <span id="delta">…</span></p>
    <script>
    var stabilizedDisplay = document.getElementById("stabPosition");
    var deltaDisplay = document.getElementById("delta");
    var controller = new Leap.Controller();
    controller.on('frame', function(frame){
      if(frame.pointables.length > 0) {
        var pointable = frame.pointables[0];
        var stabilizedPosition = pointable.stabilizedTipPosition;
        var tipPosition = pointable.tipPosition;
        stabilizedDisplay.innerText = "(" + stabilizedPosition[0] + ", " + stabilizedPosition[1] + ", " + stabilizedPosition[2] + ")";
        deltaDisplay.innerText = "(" + (tipPosition[0] - stabilizedPosition[0]) + ", " + (tipPosition[1] - stabilizedPosition[1]) + ", " + (tipPosition[2] - stabilizedPosition[2]) + ")";
      }
    });
    controller.connect();
    </script>

Pointable.timeVisible

Type: number

The amount of time this pointable has been continuously visible to the Leap Motion controller in seconds.

    var secondsVisible = pointable.timeVisible;

By ignoring Pointable objects with a small timeVisible value, you can filter out a certain amount of noise and spuriously detected hands.

Pointable.tipPosition

Type: number[] – a 3-element array representing a position vector.

The tip position in millimeters from the Leap origin.

    var tipPosition = pointable.tipPosition;

Pointable.tipVelocity

Type: number[] – a 3-element array representing a vector.

The rate of change of the tip position in millimeters/second.

    var speed = pointable.tipVelocity;

Pointable.tool

Type: Boolean

Notice: Tools are deprecated in version 3.0.
Whether or not the Pointable is believed to be a tool. Tools are generally longer, thinner, and straighter than fingers. If tool is false, then this Pointable must be a finger.

Pointable.touchDistance

Type: number

A value proportional to the distance between this Pointable object and the adaptive touch plane. The touch distance is a value in the range [-1, 1]. The value 1.0 indicates the Pointable is at the far edge of the hovering zone. The value 0 indicates the Pointable is just entering the touching zone. A value of -1.0 indicates the Pointable is firmly within the touching zone. Values in between are proportional to the distance from the plane. Thus, a touchDistance of 0.5 indicates that the Pointable is halfway into the hovering zone.

    <p>Distance: <span id="distance">…</span></p>
    <script>
    var distanceDisplay = document.getElementById("distance");
    var controller = new Leap.Controller();
    controller.on('frame', function(frame){
      if(frame.pointables.length > 0) {
        var touchDistance = frame.pointables[0].touchDistance;
        distanceDisplay.innerText = touchDistance;
      }
    });
    controller.connect();
    </script>

You can use the touchDistance value to modulate visual feedback given to the user as their fingers close in on a touch target, such as a button.

Pointable.touchZone

Type: String

The Leap Motion software computes the touch zone based on a floating touch plane that adapts to the user's finger movement and hand posture. The Leap Motion software interprets purposeful movements toward this plane as potential touch points. When a Pointable moves close to the adaptive touch plane, it enters the "hovering" zone. When a Pointable reaches or passes through the plane, it enters the "touching" zone.
    <p>Zone: <span id="zone">…</span></p>
    <script>
    var zoneDisplay = document.getElementById("zone");
    var controller = new Leap.Controller();
    controller.on('frame', function(frame){
      if(frame.pointables.length > 0) {
        var touchZone = frame.pointables[0].touchZone;
        zoneDisplay.innerText = touchZone;
      }
    });
    controller.connect();
    </script>

The following example (plug in your Leap Motion Controller) illustrates both the touchZone and the touchDistance attributes. Touching pointables are colored red, hovering pointables are green, and those outside either zone are a faint blue.

Pointable.valid

Type: Boolean

Indicates whether this is a valid Pointable object.

    if(pointable.valid){
      //...
    }

Pointable.width

Type: number

The estimated width of the finger in millimeters.

    var averageThickness = pointable.width;

The reported width is the average width of the visible portion of the finger from the hand to the tip. If the width isn't known, then a value of 0 is returned.

Pointable.hand()

Gets the hand associated with this finger. In version 2+, tools are not associated with hands. This method always returns an invalid Hand object for tools.

    var handOfPointable = pointable.hand();

If the Hand object is no longer available, then an invalid Hand object is returned.

Returns: Hand() – the hand to which this pointable is attached.

Pointable.toString()

A string containing a brief, human readable description of the Pointable object.

    console.log(pointable.toString());

Returns: String – A description of the Pointable object as a string.

Pointable.Invalid

Type: Pointable

An invalid Pointable object. You can use this Pointable instance in comparisons testing whether a given Pointable instance is valid or invalid. (You can also use the valid property.)
https://tex.stackexchange.com/questions/222243/how-to-obtain-a-bold-upright-integral-sign
# How to Obtain a Bold Upright Integral Sign? Having searched around I couldn't find a package that could produce the following integral sign. How do I produce this bold looking vertical symbol? Xe-/LuaTeX are totally fine with me. \documentclass{article} \begin{document} $\int_a^b$ \end{document} • What size do you want? If you want to center a formula just use $\int_{min}^{max} f(x) dx$ for example. Jan 8 '15 at 10:03 • Times truetype font – yolo Jan 8 '15 at 10:23 • Yes XeTeX and LuaTeX – yolo Jan 8 '15 at 10:35 • Possible duplicate of Load integral sign from eulervm Dec 8 '16 at 19:30 With Lua- or XeLaTeX you can use the package unicode-math and substitute single symbols easily. Just download any font you like and use it like: % arara: lualatex \documentclass{article} \usepackage{unicode-math} \setmathfont{Latin Modern Math} \setmathfont[range="222B]{Linux Libertine O} % or any font you like \begin{document} $\int_{a}^{b}\displaystyle\int_{a}^{b}\int\limits_{a}^{b}$ \end{document} Other fonts for this case. You might want to replace the other integral signs as well. Linux Libertine covers U+222B to U+222E. % arara: lualatex \documentclass{article} \usepackage{unicode-math} \setmathfont{Latin Modern Math} \setmathfont[range={"222B-"2230}]{Linux Libertine O} \begin{document} $\int\iint\iiint\oint$ \end{document} • Have to use this in a math environment. Currently using mathptmx but symbols are being produced using CM – yolo Jan 8 '15 at 10:12 • I could not get it to load using \DeclareSymbolFont{fnt}{T1}{libertine}{m}{n} \DeclareMathSymbol{\intop}{\mathop}{fnt}{"222B} – yolo Jan 8 '15 at 11:49 • Error Limit controls must follow a math operator $\int – yolo Jan 8 '15 at 11:53 • @yolo please see my update. Jan 8 '15 at 12:15 • Thanks, I am able to produce these symbols but for some odd reason other symbols like \infty or \lambda are not appearing now. Any idea why. Just use the code you just gave me and try a symbol outside the range. 
– yolo Jan 8 '15 at 15:37

Employing a variation of my answer at Integral Sign $\int\dots$, I define a new operator \uint and show the comparison to \int in the MWE below. This approach takes a traditional \scriptstyle integral sign, rotates it 8 degrees, and scales it up to the same vertical extent as a normal integral sign when employed in the current mathstyle.

\documentclass{article}
\usepackage{amsmath}
\usepackage{scalerel}
\usepackage{graphicx}
\DeclareMathOperator*{\uint}{\scalerel*{\rotatebox{8}{$\!\scriptstyle\int\!$}}{\int}}
\parskip 1ex
\begin{document}
$f=\int_0^t A d\tau =\uint_0^t A d\tau$
\centering
$$f=\int_0^t A d\tau =\uint_0^t A d\tau$$\par
$$\scriptstyle f=\int A d\tau =\uint A d\tau$$\par
$$\scriptscriptstyle f=\int A d\tau =\uint A d\tau$$
\end{document}

If the integral sign is perceived as just a bit too bold, then we can scale a \textstyle integral sign instead:

\DeclareMathOperator*{\uint}{\scalerel*{\rotatebox{8}{$\!\textstyle\int\!$}}{\int}}
https://www.piping-designer.com/index.php/mathematics/geometry/plane-geometry/2340-square-diamond
# Square Diamond

Written by Jerry Ratzlaff. Posted in Plane Geometry

• A square diamond is a structural shape used in construction.
• Abbreviated as SQ
• Interior angles are 90°.
• Exterior angles are 90°.
• 2 diagonals
• 4 edges
• 4 vertices

## Area of a Square Diamond formula

$$\large{ A = a^2 }$$

### Where:

$$\large{ A }$$ = area
$$\large{ a }$$ = side

## Distance from Centroid of a Square Diamond formulas

$$\large{ C_x = \frac{ a }{ 2 } }$$
$$\large{ C_y = \frac{ a }{ 2 } }$$

### Where:

$$\large{ C }$$ = distance from centroid
$$\large{ a }$$ = side

## Elastic Section Modulus of a Square Diamond formula

$$\large{ S = \frac { a^3 } { 6\; \sqrt {2} } }$$

### Where:

$$\large{ S }$$ = elastic section modulus
$$\large{ a }$$ = side

## Perimeter of a Square Diamond formula

$$\large{ P = 4\;a }$$

### Where:

$$\large{ P }$$ = perimeter
$$\large{ a }$$ = side

## Plastic Section Modulus of a Square Diamond formula

$$\large{ Z = \frac { a^3\; \sqrt {2} } { 6 } }$$

### Where:

$$\large{ Z }$$ = plastic section modulus
$$\large{ a }$$ = side

## Polar Moment of Inertia of a Square Diamond formulas

$$\large{ J_{z} = \frac{a^4}{6} }$$
$$\large{ J_{z1} = \frac{2\;a^4}{3} }$$

### Where:

$$\large{ J }$$ = polar moment of inertia
$$\large{ a }$$ = side

## Radius of Gyration of a Square Diamond formulas

$$\large{ k_{x} = \frac{ a }{ 2 \; \sqrt{3} } }$$
$$\large{ k_{y} = \frac{ a }{ 2 \; \sqrt{3} } }$$
$$\large{ k_{z} = \frac{ a }{ \sqrt{6} } }$$
$$\large{ k_{x1} = \frac{ a }{ \sqrt{3} } }$$
$$\large{ k_{y1} = \frac{ a }{ \sqrt{3} } }$$
$$\large{ k_{z1} = \sqrt{ \frac{2}{3} } \;a }$$

### Where:

$$\large{ k }$$ = radius of gyration
$$\large{ a }$$ = side

## Second Moment of Area of a Square Diamond formulas

$$\large{ I_{x} = \frac{a^4}{12} }$$
$$\large{ I_{y} = \frac{a^4}{12} }$$
$$\large{ I_{x1} = \frac{a^4}{3} }$$
$$\large{ I_{y1} = \frac{a^4}{3} }$$

### Where:

$$\large{ I }$$ = second moment of area
$$\large{ a }$$ = side

## Side of a Square Diamond formula

$$\large{ a = \sqrt{A} }$$

### Where:

$$\large{ a }$$ = side
$$\large{ A }$$ = area

## Torsional Constant of a Square Diamond formula

$$\large{ J = 2.25 \; \left(\frac{a}{2}\right)^4 }$$

### Where:

$$\large{ J }$$ = torsional constant
$$\large{ a }$$ = side
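A quick numeric sanity check of these formulas (a Python sketch, not part of the original page): for any side a, the plastic-to-elastic shape factor Z/S of the diamond orientation works out to exactly 2, and the polar moment of inertia equals Ix + Iy, as the perpendicular-axis relation requires:

```python
import math

a = 2.0  # side length; any positive value works

A = a**2                         # area
S = a**3 / (6 * math.sqrt(2))    # elastic section modulus
Z = a**3 * math.sqrt(2) / 6      # plastic section modulus
Ix = Iy = a**4 / 12              # second moments of area
Jz = a**4 / 6                    # polar moment of inertia

assert math.isclose(Z / S, 2.0)                                  # shape factor of a diamond is 2
assert math.isclose(Jz, Ix + Iy)                                 # perpendicular-axis relation
assert math.isclose(math.sqrt(Ix / A), a / (2 * math.sqrt(3)))   # k_x formula
print("checks pass")
```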
https://2n3904blog.com/1n4148-diode-reverse-biased-junction-capacitance/
# 1N4148 Diode Reverse Biased Junction Capacitance

Most of the datasheets from assorted suppliers of a 1N4148 diode guarantee a maximum diode capacitance of 4 pF at $$V_D$$ = 0 VDC. A sample excerpt from the Vishay datasheet has the $$C_D$$ specification circled in red. Some suppliers provide the typical total diode capacitance versus reverse bias potential, and some suppliers don't. Two sample specifications from Fairchild and Renesas datasheets are shown in the figure below. Both datasheets depict a 1N4148 having a low voltage coefficient on total capacitance. Assorted 1N4148 datasheets all have a different specification for reverse-biased capacitance:

## Measurement Setup

The excitation signal used in many low cost handheld capacitance meters can be up to 2 Vpp. In order not to forward bias the diode junction, the diode must have a reverse bias voltage at least equal to the magnitude of the test signal. Resolution of a low cost capacitance meter is also typically only 0.1 pF on the lowest range (at least for my meter). To allow for smaller excitation signals and higher measurement resolution, a custom impedance test-jig is constructed. A simplified schematic of the test setup is shown in the figure below. A test tone $$V_T$$ is AC coupled via $$C_1$$ to the DUT. Bench supply $$V_{s1}$$ provides the reverse DC bias for the DUT. Bias resistor $$R_b$$ is chosen to be large enough to not overload the impedance of $$C_1$$ while at the same time having minimal $$IR$$ drop from the DC reverse bias current of the DUT. Amplifier $$U_1$$ and feedback resistor $$R_f$$ form a trans-impedance amplifier with a gain of $$-1 \;V / 1\; \mu A$$. As the test frequency increases, the impedance of the DUT decreases (capacitor), yielding a larger test current signal. However, U1 is a generic op-amp (MCP601) with a 2.8 MHz gain-bandwidth product.
To maximize the test current signal, but still maintain a reasonable amount of loop-gain for U1, a test frequency of 10 kHz is chosen. A photo of the impedance test-jig is shown below. Spot testing with a 10 pF capacitor yields good agreement with a handheld capacitance meter. The results from 100 sequential measurements of a 10 pF sample capacitor are shown in the figure below. The spot test sample capacitor yielded an average value of 10.289 pF. Measurement of the test capacitor in a low-cost handheld capacitance meter can be seen in the photo below. The Escort EDC-110R does not have a NIST traceable calibration certificate; hence, it's fair to assume both measurement techniques have equivalent scale errors (perhaps $$\pm 1 \%$$).

At low reverse bias voltages a 1N4148 has significant shunt conductance; it isn't until -1 V that the diode enters a "Constant $$I_s$$" leakage region where the parallel shunt resistance is in the gigaohm region. A Spice IV sweep of a 1N4148 from -20 mV to 20 mV is shown in the figure below. We can see that at 0 VDC a 1N4148 diode has a shunt resistance of 18 MOhms. Contrast this resistance to the capacitive reactance of the 1N4148 at 10 kHz:

$$X = \dfrac{-1}{2\pi \cdot 10 \text{ kHz} \cdot 2 \text{ pF}} = -7.96 \text{ M}\Omega$$

Since the magnitude of the resistance is comparable to the reactance at some test voltages, the rms of the current signal cannot be assumed to be solely due to the capacitance of the diode. Hence, complex phasor analysis must be applied. One must sample the real/imaginary portions of the current signal (or measure the phase and compute the real/imaginary components) and estimate the capacitance based on the imaginary current. When the diode is biased at 0 VDC, during the negative cycle of the excitation signal the DUT is forward-biased. In the figure below the resulting signal distortion from the DUT rectifying the test signal can be observed.
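Since the DUT behaves as a parallel R-C at the test frequency, the phasor analysis described above amounts to splitting the measured admittance into real and imaginary parts. A minimal sketch (the function name and phasor inputs are illustrative, not from the article's actual software):

```python
import math

def estimate_shunt_rc(v_phasor, i_phasor, f):
    """Estimate the parallel R/C model of the DUT from complex voltage and
    current phasors at test frequency f. The admittance is
    Y = I/V = 1/R + j*2*pi*f*C, so the shunt resistance follows from the
    real part of Y and the capacitance from the imaginary (reactive) part."""
    y = i_phasor / v_phasor
    return 1.0 / y.real, y.imag / (2 * math.pi * f)
```

With R = 18 MOhms and C = 2 pF at 10 kHz, the resistive and reactive branch currents are comparable in magnitude, which is exactly why the rms current alone cannot be trusted here.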
However, once the DUT is reverse biased at -100 mVDC, it can no longer rectify the 100 mVpk excitation signal. A scope capture of the excitation and current signal at -100 mVDC reverse bias is shown below. The resulting sensed current signal is sinusoidal plus noise and leads the excitation signal by $$\approx 90 ^o$$. Note that the output current signal has a $$180^o$$ phase shift from the inverting response of the trans-impedance amplifier. Consequently the actual test capacitor current lags the excitation voltage by $$90^o$$.

## Measurement Results

Attempting to measure the 0 VDC capacitance with a low-cost capacitance meter yielded mixed results. With a single DUT only one orientation yielded a capacitance measurement. With 2 DUTs in opposing polarity a plausible measurement result is obtained. All three DUT orientations are shown in the figure below. If we assume all diodes have the same $$C_0$$ capacitance, then the third test arrangement measures the series combination

$$C_{meas} = \dfrac{C_1 C_2}{C_1 + C_2} \simeq \dfrac{C_0}{2}$$

so a single DUT's capacitance is $$C_D \simeq 2\, C_{meas} = 2.0 \; \text{pF}$$. Each of the 4 1N4148 DUTs is loosely lead formed and inserted into the test-jig as shown in the photo below. The total capacitance $$C_D$$ of 4 1N4148 diodes versus reverse bias potential is shown in the figure below. Each of the 4 sample diodes tested was within 100 fF of the other samples for the same reverse bias voltage. For bias voltages less than -100 mV, the test drive tone partially forward biases the DUT. Measurement results near 0 VDC are shown in the figure below. Half-wave rectification adds harmonic power anti-phase to the reactive current of the DUT, resulting in a slight decline in estimated DUT capacitance. Some portion of the total diode capacitance $$C_D$$ is due to the stray capacitance between the leads of the diode. To qualify the approximate value of lead shunt capacitance, two separate $$\approx 15\; mm$$ leads were inserted into the 100 thou pitch test header.
A photo showing a sample DUT and the sample leads is shown in the photo below. The 15 mm test leads yielded an approximate capacitance of 110 fF. Measurement results for the test leads are shown in the figure below. In reality the lead length (far too long) and spacing (far too close) yield a conservative figure for lead shunt capacitance. With the total capacitance ranging from 1 pF to 2 pF, the axial leads only make up 5 to 10 % of the total capacitance. The remaining capacitance for reverse biased operation is the actual depletion capacitance of the diode junction and the shunt capacitance due to the glass body of the diode.

## Raw Data

The measurement results for the 4 DUTs are available as a csv file below. The CSV file is formatted as:

Vbias [V],Ca [F],Cb [F],Cc [F],Cd [F]

## 1 thought on "1N4148 Diode Reverse Biased Junction Capacitance"

1. Gerrydelasel says:

This is one of those rare bits of information you can't find in the datasheets or textbooks, thanks!
http://blog.jpolak.org/?tag=free
# New Open (and Free to Publish) Journal in Algebraic Geometry

Posted by Jason Polak on 15 April 2013 · Categories: opensource

Since the introduction of the Forum of Mathematics journals, there has certainly been some disdain at its probable eventual policy of charging authors for publication. Here is one example of a journal that will be free to publish in and read: Algebraic Geometry, which is being run by Foundation Compositio Mathematica. This organisation already runs the (non-open-source) Compositio Mathematica. The first issue of Algebraic Geometry is supposed to be published in 2014.

# Infinite Integer Product Not Free

Posted by Jason Polak on 1 February 2012 · Categories: commutative-algebra, modules

### Introduction

Assuming the axiom of choice, any vector space possesses the pleasant but prosaic property that it is determined up to isomorphism by the cardinality of its basis. For instance, consider $\prod_\omega \mathbb{Z}/2$ and $\oplus_{2^\omega} \mathbb{Z}/2$. Both are vector spaces over the finite field $\mathbb{Z}/2$, so to show that they are isomorphic, we need to show that their respective bases have the same cardinality. The vector space on the right is written as a direct sum, and so we can see that its basis must have size $\mathfrak{c}$. On the other hand, the vector space $\prod_\omega \mathbb{Z}/2$ has cardinality $\mathfrak{c}$ over a finite field, so its basis must have the same cardinality as the space itself. Aren't vector spaces a walk in the park; a piece of cake; easy as pie (ok, enough metaphors?!)?

### From $\mathbb{Z}/2$ to $\mathbb{Z}$ Modules

But what if we sent the above proof to a publisher who didn't yet have the "2" character or the "/" installed on her printing press? Then all hell would break loose because $\prod_\omega \mathbb{Z}$ and $\oplus_{2^\omega} \mathbb{Z}$ aren't vector spaces any more, and the previous paragraph would be rife with errors.
But they certainly are abelian groups, and they have a bit more spice than those vector spaces. So are they isomorphic? They do have the same cardinality. Fortunately for us, Baer ("Abelian Groups without Elements of Finite Order", Duke Math J. 3 (1937), pp. 88-122) answered this question in the negative (fortunately, because otherwise abelian groups would be less exciting). In fact, this question is particularly interesting to me because I had wondered about it a few months ago, and now I have the answer, thanks to Faith's book for the references.
https://kobra.bibliothek.uni-kassel.de/handle/urn:nbn:de:hebis:34-2017050252462
KOBRA

Please use this identifier to cite or link to this item: http://nbn-resolving.de/urn:nbn:de:hebis:34-2017050252462

Title: Balanced Viscosity solutions to a rate-independent system for damage
Authors: Knees, Dorothee; Rossi, Riccarda; Zanini, Chiara
Subject (DDC): 510 - Mathematik (Mathematics)
Issue Date: 2-May-2017
Series/Report no.: Mathematische Schriften Kassel 17, 02
Abstract: This article is the third one in a series of papers by the authors on vanishing-viscosity solutions to rate-independent damage systems. While in the first two papers [KRZ13, KRZ15] the assumptions on the spatial domain $\Omega$ were kept as general as possible (i.e. nonsmooth domain with mixed boundary conditions), we assume here that $\partial\Omega$ is smooth and that the type of boundary conditions does not change. This smoother setting allows us to derive enhanced spatial regularity properties both for the displacement and damage fields. Thus, we are in a position to work with a stronger solution notion at the level of the viscous approximating system. The vanishing-viscosity analysis then leads us to obtain the existence of a stronger solution concept for the rate-independent limit system. Furthermore, in comparison to [KRZ13, KRZ15], in our vanishing-viscosity analysis we do not switch to an artificial arc-length parameterization of the trajectories but stay with the true physical time. The resulting concept of Balanced Viscosity solution to the rate-independent damage system thus encodes a more explicit characterization of the system behavior at time discontinuities of the solution.
URI: urn:nbn:de:hebis:34-2017050252462
Appears in Collections: Mathematische Schriften Kassel
https://mvtrinh.wordpress.com/2011/08/20/yellow-star/
## Yellow Star

The four circles in the figure below each have a radius of 5 units. The centers of each circle form a square. If you use 3.14 as the approximate ratio of a circle's circumference to its diameter, what is the area of the interior [yellow] star region?

Source: mathcontest.olemiss.edu 9/25/2006

SOLUTION

The centers form a square whose side equals two radii, i.e. 10 units, and each circle contributes a quarter-circle inside the square. Therefore:

Area of interior star region = Area of square $-$ Area of 4 circle quarters
= Area of square $-$ Area of 1 circle
$=10^2-\pi \times 5^2$
$=10^2-3.14\times 25$
$=100-78.5$
$=21.5$ square units

Answer: $21.5$ square units.
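The arithmetic can be checked in a couple of lines (the function name is just for illustration):

```python
def star_area(r, pi=3.14):
    """Interior star region: the square formed by the four circle centers
    (side = 2*r) minus four quarter-circles, i.e. minus one full circle."""
    side = 2 * r
    return side ** 2 - pi * r ** 2
```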
http://www.maa.org/publications/periodicals/convergence/the-unique-effects-of-including-history-in-college-algebra-findings-1?device=mobile
# The Unique Effects of Including History in College Algebra - Findings (1)

Author(s): D. Goodwin (Black Hills State University) and G. W. Hagerty (Black Hills State University) and S. Smith (Black Hills State University)

Over the last three years at BHSU, great strides have been made in improving student outcomes in College Algebra with the inclusion of history in the course. The same results theorized by many (Bruckheimer & Arcavi, 2000; Heiede, 1996; Johnson, 1994; Kleiner, 1996; Man-Keung, 2000; Rickey, 1996; Smith, 1996; Swetz, 2000) about the effects of including the historical development of concepts in mathematics courses can now be seen in the College Algebra course.

Figure 1 shows the changes in the percentage of students enrolled in the fall College Algebra course who continued on to enroll in Trigonometry. The dotted line shows the goal of 8%. The solid line shows the growth in the percentage of the students enrolling in Trigonometry. The number of students enrolled in College Algebra in the fall has consistently remained around 360 students. The number enrolling in Trigonometry has grown from 5 students in 2003 and 2004 to 20 students in 2007.

Figure 1. Percent of students enrolling in Trigonometry from 2003-2007

The increase started in 2005, the year the history modules were introduced. Besides quadrupling the number of students enrolling in Trigonometry, average student achievement in Trigonometry also increased by approximately 6-7%, from about a 70% overall average to an overall average of 76-77% for the course. Furthermore, the Trigonometry instructor reports that this class has doubled in size, attendance has increased, there is an improved success rate, and a greater percentage of students are making their way to Calculus. In the coming semesters, as more students who were exposed to the revised College Algebra course matriculate into Calculus, increased enrollment and achievement are expected in Calculus.
There has been significant improvement on test problems where mathematical vocabulary and notation have created difficulty for students in the past. One example is that when students were asked to find the inverse function of $f(x)=\frac{x+2}{2-x},$ students would pick the reciprocal about 30% of the time prior to the use of history in the College Algebra course. After the introduction of history to the course, students are picking the reciprocal only about 15% of the time.
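The distinction the students were missing can be made concrete with a quick check. Using exact `Fraction` arithmetic, the true inverse composes with $f$ to give the identity, while the reciprocal does not (the helper names below are our own):

```python
from fractions import Fraction

def f(x):
    # The exam problem: f(x) = (x + 2) / (2 - x)
    return (x + 2) / (2 - x)

def f_inverse(x):
    # Solving y = (x + 2)/(2 - x) for x yields x = (2y - 2)/(y + 1)
    return (2 * x - 2) / (x + 1)

def reciprocal_of_f(x):
    # The common wrong answer: 1/f(x)
    return (2 - x) / (x + 2)
```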
http://aika.network/inference.html
October 09, 2020 Work in progress

# How information is propagated through the network

## General Architecture

In the Aika neural network, neurons and their activations are represented separately. Thus each neuron may have an arbitrary number of activations. For example, if the input data set is a text and a neuron represents a specific word, then there might be several occurrences of this word and therefore several activations of this neuron. The human brain probably does something similar through the timing of activation spikes. These activations represent the information that the neural network was able to infer about the input data set. Every activation is directly or indirectly grounded in the input data set. That is, by following the input links of an activation down to the input layer of the network, one can determine the atomic input information this activation is referring to, similar to an annotation. Since we have a one-to-many relation between neurons and activations, we also need a one-to-many relation between the synapses and the links between the activations. Therefore we need a linking process to determine which activations are going to be connected with which others. Roughly speaking, activations can be linked if they are grounded in the same input data. A consequence of separating the neurons and their activations is that we cannot rely on the network topology to give us the chronological sequence in which each activation is processed. Therefore each activation needs a fired timestamp that describes the point in time when this activation becomes visible to other neurons. In contrast to conventional neural networks, with their predefined layered architecture, the Aika network starts out empty. Neurons and synapses are added during training. There is an underlying set of rules which determine when certain types of neurons or synapses are induced.
For the network to be able to scale to a sufficient number of neurons and still be computed efficiently, we need to restrict the number of synapses that are induced or even considered to be induced.

## The weighted sum and the activation function

Like other artificial neural networks, the synapses are weighted. To compute the activation value of a neuron, the weighted sum over its input synapses is computed. Then the bias value $$b$$ is added to this sum and the result is sent through an activation function $$\varphi$$.

$$net_j = {b_j + \sum\limits_{i=0}^N{x_i w_{ij}}}$$

$$y_j = \varphi (net_j)$$

Depending on the type of neuron, different activation functions are used. One commonly used activation function in the Aika network is the rectified hyperbolic tangent function, which is basically the positive half of the $$\tanh()$$ function.

$$\varphi(x) = \begin{cases} 0 & : x \leq 0 \\ \tanh(x) & : x > 0 \end{cases}$$

The activation functions are chosen in a way that they clearly distinguish between active and inactive neurons. Only activated neurons are processed and are thus visible to other neurons. These activations are expressed not only by a real-valued number but also by an activation object.

## Neuron Types

By choosing the weights and the threshold (i.e. the bias) accordingly, neurons can take on the characteristics of boolean logic gates such as an and-gate or an or-gate. Neurons also need to be able to act more like formal logic. Their ability to integrate weak input signals is important; this is something that classical AI has great difficulties with. But there is also the need for strong conjunctive or disjunctive behavior. Surely, these things can be achieved by setting the synapse weights and the bias value appropriately, but still, these neurons are sufficiently different to justify the introduction of different types of neurons for them.
I believe that the human brain does something very similar with its excitatory pyramidal neurons and its inhibitory stellate neurons. This is also where I see a great similarity to capsule networks. Basically, there are three types of neurons: pattern neurons, pattern part neurons, and inhibitory neurons. The pattern neurons and the pattern part neurons are both conjunctive in nature and the inhibitory neuron is disjunctive. The pattern part neurons kind of describe which lower-level patterns are part of the current pattern. Each pattern part neuron receives a positive feedback synapse from the pattern to which it belongs. The pattern neuron, on the other hand, is only activated if the pattern as a whole is detected. For example, if we consider a word consisting of individual letters as lower-level patterns, then we have a corresponding pattern part neuron for each letter whose meaning is that this letter occurred as part of this word. The pattern part neurons also receive negative feedback synapses from the inhibitory neurons, such that competing patterns are able to suppress each other.

## Positive and Negative Feedback Loops

Another crucial insight is the need for positive and negative feedback loops. These are synapses that ignore the causal sequence of fired activations. The negative feedback synapses are especially interesting because they require the introduction of mutually shielded branches for the following activations. They create a kind of independent interpretation of parts of the input data set, and only later is it decided which of these interpretations gets selected. It's very similar to what a parse tree does, except that a parse tree is limited to syntactic information. Another way to relate it to classical logic is to consider it as non-monotonic inference, which classical AI could not solve properly since classical logic is missing the weak influences that neural networks are able to capture.
## Excitatory and Inhibitory Neurons

There are two main types of neurons in Aika: excitatory neurons and inhibitory neurons. The biological role models for those neurons are the spiny pyramidal cell and the aspiny stellate cell in the cerebral cortex. The pyramidal cells usually exhibit an excitatory characteristic and some of them possess long-ranging axons that connect to other parts of the brain. The stellate cells, on the other hand, are usually inhibitory interneurons with short axons which form circuits with nearby neurons. Those two types of neurons also have a different electrical signature. Stellate cells usually react to a constant depolarising current by firing action potentials. This occurs with a relatively constant frequency during the entire stimulus. In contrast, most pyramidal cells are unable to maintain a constant firing rate. Instead, they fire quickly at the beginning of the stimulus and then reduce the frequency even if the stimulus stays strong. This slowdown over time is called adaption. Aika tries to mimic this behaviour by using different activation functions for the different types of neurons. Since Aika is not a spiking neural network like its biological counterpart, we only have the neuron's activation value, which can roughly be interpreted as the firing frequency of a spiking neuron. In a sense, the rectified tanh activation function described earlier quite nicely captures the adaption behaviour of a pyramidal cell. An increase of a weak signal has a strong effect on the neuron's output, while an increase of an already strong signal has almost no effect. Furthermore, if the input of the neuron does not surpass a certain threshold then the neuron will not fire at all. For inhibitory neurons Aika uses the rectified linear unit function (ReLU).
$$y = \max(0, x)$$ Especially for strongly disjunctive neurons like the inhibitory neuron, ReLU has the advantage of propagating its input signal exactly as it is, without distortion or loss of information.
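The two activation functions and the weighted sum above can be sketched in a few lines (a simplified, scalar-only illustration of the formulas, not the actual Aika implementation):

```python
import math

def rectified_tanh(x):
    """Excitatory (pyramidal-like) neurons: the positive half of tanh.
    An increase of a weak input has a strong effect, while an increase of
    an already strong input has almost none, mimicking adaption."""
    return math.tanh(x) if x > 0 else 0.0

def relu(x):
    """Inhibitory (stellate-like) neurons: a positive input is passed
    through undistorted."""
    return max(0.0, x)

def neuron_output(bias, inputs, weights, phi):
    """net_j = b_j + sum_i x_i * w_ij, sent through the activation phi."""
    net = bias + sum(x * w for x, w in zip(inputs, weights))
    return phi(net)
```

Note how both functions return exactly zero below the threshold, so a sub-threshold neuron never fires and stays invisible to the rest of the network.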
https://math.stackexchange.com/questions/1121002/why-we-throw-away-the-units-in-the-definition-of-irreducible-elements
# Why we throw away the units in the definition of irreducible elements? In the book "Abstract Algebra" by Dummit, the definition of irreducible element in an integral domain $R$ goes like this. Suppose $r\in R$ is nonzero and is not a unit. Then $r$ is called irreducible in $R$ if whenever $r=ab$ with $a,b\in R$, at least one of $a$ or $b$ must be a unit in $R$. Otherwise $r$ is said to be reducible... My questions are 1. why we don't consider the units to be irreducible? 2. In the definition above, I get confused by the definition of reducible elements: namely, the units are reducible or not? This is similar to the question of whether $1$ is a prime number. If $1$ were a prime number, then the fundamental theorem of arithmetic would not work because you could add any power of $1$ to a factorization. This generalizes to unique factorization domains, where up to associates an element has a unique factorization into irreducible elements. If units were irreducible, this would not be true. Just as we say that $1$ is also not composite because it is not the product of two non-unit elements, so we say that units are also not reducible. Thus the answer is that units are neither reducible nor irreducible. • Further, I think it's worth noting that "units" are somehow elements that are "almost" the multiplicative identity. We wouldn't consider 6*1 and 2*3 two distinct factorizations of 6, because the factorization out of 1 is somehow trivial. Similarly, if $u$ is a unit, we wouldn't want to include $u \cdot u^{-1} \cdot r$ as a factorization of $r$. Jan 27 '15 at 5:34
http://stats.stackexchange.com/questions/20574/maximum-likelihood-estimation-when-parameters-are-functions-of-another-data-seri
# Maximum likelihood estimation when parameters are functions of another data series We have two time series: $X_t$ and $R_t$, and a model saying that $R_{t+1} = (\mu(X_t) - \frac{1}{2}\sigma^2(X_t))\Delta T + \sigma(X_t) \sqrt{\Delta T} \epsilon_t$, where $\Delta T$ is given constant and $\epsilon_t$-s are independent normally distributed with zero mean and unit variance. Further we assume that the functions $\mu(x)$ and $\sigma(x)$ are linear for simplicity. I would like to use some standard method (MLE comes to my mind) to estimate parameters of functions $\mu(x)$ and $\sigma(x)$, but I am not sure how to do this. I would be grateful for detailed answers, because I am not really experienced with statistics. - Are you familiar with a good math programming language, like R? – jbowman Jan 4 '12 at 19:50 This looks like the discretization of an affine diffusion (SDE) or something close. – cardinal Jan 4 '12 at 20:10 @jbowman, sadly I don't know R :( – Grzenio Jan 5 '12 at 9:47 @cardinal, indeed it is – Grzenio Jan 5 '12 at 9:48 Let $\theta$ be the parameters involved in $\mu(x)$ and $\sigma(x)$. Your likelihood function will be $$\mathcal{L}(\theta\,|\,\epsilon_1,\ldots,\epsilon_n) = f(\epsilon_1,\epsilon_2,\ldots,\epsilon_n\;|\;\theta) = \prod_{t=1}^n f(\epsilon_t|\theta)= \prod_{t=1}^{n} \frac{1}{\sqrt{2\pi}\ } \exp\big(-\epsilon_t^2/2\big) \>.$$ You may need to take $t=1$ to $n-1$ (for a large sample it doesn't matter, assuming you have $n$ observations). Substitute $$\epsilon_t=\dfrac{R_{t+1} - (\mu(X_t) - \frac{1}{2} \sigma^2(X_t))\Delta T}{\sigma(X_t) \sqrt{\Delta T}} \>.$$ This will be in terms of $\theta$, $R_t$, and $X_t$. MLE estimates are the parameters which optimize the likelihood function found above. I think there is $1/\sigma(X_t)$ missing in the likelihood function. 
In the limiting case when the parameters are constant, we simply have a normal distribution, for which Wikipedia gives a different answer: en.wikipedia.org/wiki/… – Grzenio Jan 5 '12 at 16:08

Not in the first expression, since the $\epsilon_t$ are independent normally distributed with zero mean and unit variance. – vinux Jan 5 '12 at 16:53
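To make the recipe concrete, here is a hedged Python sketch. The dataset, the linear parameterizations $\mu(x) = a + bx$ and $\sigma(x) = c + dx$, and all numeric values are made up for illustration. It maximizes the likelihood by minimizing its negative logarithm; the $\sum \log \sigma(X_t)$ term is the Jacobian factor of $1/(\sigma(X_t)\sqrt{\Delta T})$ discussed in the comments (constants that do not depend on $\theta$ are dropped).

```python
import numpy as np
from scipy.optimize import minimize

# Simulate data from the model with mu(x) = a + b*x, sigma(x) = c + d*x.
# All parameter values below are invented for this illustration.
rng = np.random.default_rng(0)
dT, n = 1.0 / 252, 5000
a, b, c, d = 0.05, 0.02, 0.20, 0.02          # "true" parameters
X = rng.normal(size=n)
sig = c + d * X
R = (a + b * X - 0.5 * sig**2) * dT + sig * np.sqrt(dT) * rng.normal(size=n)

def neg_log_lik(theta):
    a, b, c, d = theta
    sig = c + d * X
    if np.any(sig <= 0):                      # sigma(x) must stay positive
        return np.inf
    eps = (R - (a + b * X - 0.5 * sig**2) * dT) / (sig * np.sqrt(dT))
    # -log L up to an additive constant; sum(log sig) is the Jacobian term
    return 0.5 * np.sum(eps**2) + np.sum(np.log(sig))

res = minimize(neg_log_lik, x0=[0.0, 0.0, 0.3, 0.0], method="Nelder-Mead",
               options={"maxiter": 10000, "maxfev": 10000})
a_hat, b_hat, c_hat, d_hat = res.x
```

With this sample size the volatility parameters $c$ and $d$ are recovered accurately; the drift parameters $a$ and $b$ have much larger standard errors because the drift contributes only at order $\Delta T$ while the noise contributes at order $\sqrt{\Delta T}$.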
2013-05-19 01:45:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.902884304523468, "perplexity": 254.258203655079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383081/warc/CC-MAIN-20130516092623-00063-ip-10-60-113-184.ec2.internal.warc.gz"}
https://kerodon.net/tag/00JZ
Kerodon

Remark 2.4.1.9. Let $\operatorname{\mathcal{C}}_{\bullet }$ be a locally Kan simplicial category, and let $f,g: X \rightarrow Y$ be a pair of morphisms in the underlying category $\operatorname{\mathcal{C}}= \operatorname{\mathcal{C}}_0$ having the same source and target. Invoking Proposition 1.1.9.10, we see that the following conditions are equivalent:

$(a)$ There exists a homotopy from $f$ to $g$, in the sense of Definition 2.4.1.6.

$(b)$ The morphisms $f$ and $g$ belong to the same connected component of the Kan complex $\operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Y)_{\bullet }$.

In particular, condition $(a)$ defines an equivalence relation on the set $\operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Y)$.
2021-12-03 15:45:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9939484596252441, "perplexity": 77.37147824217828}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362891.54/warc/CC-MAIN-20211203151849-20211203181849-00137.warc.gz"}
https://hackage-origin.haskell.org/package/easyrender-0.1.1.1
# easyrender: User-friendly creation of EPS, PostScript, and PDF files

[gpl, graphics, library]

This module provides efficient functions for rendering vector graphics to a number of formats, including EPS, PostScript, and PDF. It provides an abstraction for multi-page documents, as well as a set of graphics primitives for page descriptions. The graphics model is similar to that of the PostScript and PDF languages, but we only implement a subset of their functionality. Care has been taken that graphics rendering is done efficiently and as lazily as possible; documents are rendered "on the fly", without the need to store the whole document in memory.

The provided document description model consists of two separate layers of abstraction:

- drawing is concerned with placing marks on a fixed surface, and takes place in the Draw monad;
- document structure is concerned with a sequence of pages, their bounding boxes, and other meta-data. It takes place in the Document monad.

In principle, the functionality provided by EasyRender is similar to a subset of Cairo; however, EasyRender is lightweight and at least an order of magnitude faster.
2022-10-06 10:50:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25484833121299744, "perplexity": 2245.5169421522805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337803.86/warc/CC-MAIN-20221006092601-20221006122601-00603.warc.gz"}
https://wwenjie.org/reda/reference/check_recur
Perform several checks for recurrent event data and update object attributes if some rows of the contained data (in the .Data slot) have been removed, e.g. by na.action.

check_Recur(x, check = c("hard", "soft", "none"))

## Arguments

x: A Recur object.

check: A character value specifying how to perform the checks for recurrent event data. Errors or warnings will be thrown, respectively, if the check is specified to be "hard" (the default) or "soft". If check = "none" is specified, no data checking procedure will be run.

## Value

A Recur object, invisibly.
2021-07-26 13:52:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2224828451871872, "perplexity": 7262.610509896615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152129.33/warc/CC-MAIN-20210726120442-20210726150442-00431.warc.gz"}
https://groupprops.subwiki.org/wiki/Formal_group_law
# Formal group law

## Definition

### One-dimensional formal group law

Let $R$ be a commutative unital ring. A one-dimensional formal group law on $R$ is a formal power series $F$ in two variables, denoted $x$ and $y$, such that:

| Condition no. | Name | Description of condition | Interpretation |
|---|---|---|---|
| 1 | Associativity | $F(x,F(y,z)) = F(F(x,y),z)$ as formal power series | If $F$ is the binary operation denoting multiplication, then $F$ is associative. |
| 2 | Identity element | $F(x,y) = x + y + xyG(x,y)$ for some power series $G$. Thus, $F(x,0) = x$ and $F(0,y) = y$. | The element $0$ is the identity element for multiplication. |
| 3 | Inverses | There exists a power series $m(x)$ such that $m(0) = 0$ and $F(x,m(x)) = 0$. | Every element has an inverse for multiplication. |

Condition (3) is redundant, i.e., it can be deduced from (1) and (2).

A one-dimensional commutative formal group law is a one-dimensional formal group law $F$ such that $F(x,y) = F(y,x)$. Two important examples of commutative formal group laws, which make sense for any ring, are the additive formal group law and the multiplicative formal group law.

### Higher-dimensional formal group law

Let $R$ be a commutative unital ring. An $n$-dimensional formal group law is a collection of $n$ formal power series $F_i$ in $2n$ variables $(x_1,x_2,\dots,x_n,y_1,y_2,\dots,y_n)$ satisfying the conditions below. Before stating the conditions, we introduce some shorthand. Consider $x = (x_1,x_2,\dots,x_n)$ and $y = (y_1,y_2,\dots,y_n)$. Then $F(x,y)$ is the $n$-tuple $(F_1(x_1,x_2,\dots,x_n,y_1,y_2,\dots,y_n),F_2(x_1,x_2,\dots,x_n,y_1,y_2,\dots,y_n),\dots,F_n(x_1,x_2,\dots,x_n,y_1,y_2,\dots,y_n))$.

1. **Associativity.** Shorthand: $F(x,F(y,z)) = F(F(x,y),z)$. Longhand: for each $i$ from $1$ to $n$, $F_i(x_1,x_2,\dots,x_n,F_1(y_1,y_2,\dots,y_n,z_1,z_2,\dots,z_n),F_2(y_1,y_2,\dots,y_n,z_1,z_2,\dots,z_n),\dots,F_n(y_1,y_2,\dots,y_n,z_1,z_2,\dots,z_n))$ equals $F_i(F_1(x_1,x_2,\dots,x_n,y_1,y_2,\dots,y_n),F_2(x_1,x_2,\dots,x_n,y_1,y_2,\dots,y_n),\dots,F_n(x_1,x_2,\dots,x_n,y_1,y_2,\dots,y_n),z_1,z_2,\dots,z_n)$.
2. **Identity element.** Shorthand: $F(x,y) = x + y + {}$ terms of higher degree, so $F(x,0) = F(0,x) = x$. Longhand: for each $i$, $F_i(x_1,x_2,\dots,x_n,y_1,y_2,\dots,y_n) = x_i + y_i + {}$ terms of higher degree (each further term is a product that involves at least one $x_j$ and one $y_k$).
3. **Inverse.** Shorthand: there exists $m$, a collection of $n$ formal power series in $n$ variables, such that $F(x,m(x)) = 0$ formally. Longhand: there exist $m_i$, $1 \le i \le n$, all formal power series in $n$ variables, such that $F(x_1,x_2,\dots,x_n,m_1(x_1,x_2,\dots,x_n),m_2(x_1,x_2,\dots,x_n),\dots,m_n(x_1,x_2,\dots,x_n)) = 0$.

Condition (3) is redundant, i.e., it can be deduced from (1) and (2).

A commutative formal group law is a formal group law $F$ such that $F(x,y) = F(y,x)$. Two important examples of commutative formal group laws, which make sense for any ring, are the additive formal group law and the multiplicative formal group law.

## Interpretation as group

### For power series rings

A one-dimensional formal group law over a commutative unital ring $R$ gives a group structure on the maximal ideal $\langle t \rangle$ in the ring $R[[t]]$ of formal power series in one variable over $R$. A one-dimensional formal group law can also be interpreted to give a group structure on the image of the maximal ideal $\langle t \rangle$ in any quotient ring of $R[[t]]$, i.e., a ring of the form $R[[t]]/(t^n) \cong R[t]/(t^n)$.

An $n$-dimensional formal group law over a commutative unital ring $R$ gives a group structure on the set of $n$-tuples of formal power series in one variable over $R$ (each with zero constant term).
### For arbitrary algebras over $R$

Further information: formal group law functor from commutative algebras to groups

More generally, for any commutative $R$-algebra $S$, if $N$ is the set of nilpotent elements of $S$, then any $n$-dimensional formal group law $F$ over $R$ gives a group structure on the set $N^n$ of $n$-tuples over $N$. The formal group law thus gives a functor from the category of commutative $R$-algebras to the category of groups. A particular case of this is when $R$ is a local ring and $M$ is its unique maximal ideal. In this case, we get what is called an $R$-standard group.

## Examples

### Examples of one-dimensional formal group laws

| Name of law | Expression for law | Crude explanation for associativity | Additional properties |
|---|---|---|---|
| additive formal group law | $x + y$ | addition is associative in the base ring | commutative formal group law |
| multiplicative formal group law | $x + y + xy$ | rewrite as $(x + 1)(y + 1) - 1$; in other words, if we translate by 1, this is just multiplication. Now use associativity of multiplication. | commutative formal group law |
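As a quick sanity check (a sketch, not part of the original article): for the two example laws the defining power series are polynomials, so the identity and associativity conditions can be verified exactly by evaluating at rational test points.

```python
from fractions import Fraction
from itertools import product

def F_add(x, y):
    """Additive formal group law."""
    return x + y

def F_mul(x, y):
    """Multiplicative formal group law: x + y + xy = (1 + x)(1 + y) - 1."""
    return x + y + x * y

pts = [Fraction(k, 7) for k in range(-3, 4)]   # exact rational test points
for F in (F_add, F_mul):
    # identity: F(x, 0) = F(0, x) = x
    assert all(F(x, 0) == x == F(0, x) for x in pts)
    # associativity: F(x, F(y, z)) = F(F(x, y), z)
    assert all(F(x, F(y, z)) == F(F(x, y), z)
               for x, y, z in product(pts, repeat=3))
```

Using `Fraction` keeps the arithmetic exact, so the checks are genuine polynomial identities on these points rather than floating-point approximations.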
2020-05-26 13:32:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 76, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.92912358045578, "perplexity": 155.8233277554335}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390758.21/warc/CC-MAIN-20200526112939-20200526142939-00531.warc.gz"}
http://superuser.com/questions/5083/how-to-speed-up-the-smart-location-bar-awesome-bar
# How to speed up the Smart Location Bar (Awesome Bar)?

The Firefox Awesome Bar is indeed awesome. But lately I see that it has become slow. On entering some characters it even freezes for a few seconds (freezing the entire browser). Why does it slow down? Is there a way to speed it up? (The OS is Windows XP.)

- What version are you using? – random Jul 16 '09 at 8:49
- I can't find a reference to it now, but I remember reading on one of the Mozilla devs' blogs that they were looking into this. – Sam Hasler Jul 16 '09 at 11:27
- Yep, Firefox 3.7 should be a bit faster when it comes out. mashable.com/2009/06/29/firefox-next – TomA Jul 27 '09 at 22:08
- What add-ons do you have installed? I'm having the same issue on my home desktop and it even seems to lose keystrokes consistently. I'm wondering if it could be related to a particular add-on. – Joe Holloway Sep 9 '09 at 13:32

You could VACUUM the SQLite databases that Firefox uses to store its history and other data. Vacuuming optimizes the database tables inside the files. That speeds up Firefox and saves you some disk space.

To vacuum the Firefox database files:

1. Find the Firefox profile data directory on your system. On Windows Vista, it could be somewhere like C:\Users\tom\AppData\Roaming\Mozilla\Firefox\Profiles\default.jqi\. The directory contains files with the .sqlite extension, so you can find it by searching for those.
2. Get the SQLite command line utility here.
3. Close all Firefox windows. Open a command line in the profile directory.
4. On Windows, run:

   ```
   for %i in (*.sqlite) do @echo VACUUM; | sqlite3 %i
   ```

   On Linux or Mac, run:

   ```
   for i in *.sqlite; do echo "VACUUM;" | sqlite3 $i ; done
   ```

Google Chrome actually uses SQLite as well, except it doesn't give the files the .sqlite extension. You can still safely run the same command for all the files in the Chrome profile directory and SQLite will only VACUUM the files it recognizes.
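The same vacuuming can be done portably with Python's standard-library sqlite3 module, which avoids installing the command-line utility. This is a hedged sketch; the example profile path in the comment is illustrative, so substitute your own.

```python
import sqlite3
from pathlib import Path

def vacuum_profile(profile_dir):
    """VACUUM every *.sqlite file in a browser profile directory.

    Close all Firefox windows first: VACUUM needs exclusive access
    to each database file.
    """
    for db in sorted(Path(profile_dir).glob("*.sqlite")):
        con = sqlite3.connect(str(db))
        try:
            con.execute("VACUUM")
        finally:
            con.close()

# Example call (path is illustrative -- use your own profile directory):
# vacuum_profile(r"C:\Users\tom\AppData\Roaming\Mozilla\Firefox\Profiles\default.jqi")
```

Like the shell one-liner, this simply issues `VACUUM` against each database; SQLite will reject files it does not recognize, so stray non-database files raise an error rather than being corrupted.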
- On a Mac: ~/Library/Application\ Support/Firefox/Profiles/*.default/ (see superuser.com/questions/3275/firefox-on-mac-slow-slow-slow/…) – Arjan Aug 18 '09 at 9:17
- Is there any reason you're putting VACUUM in both bold and all-uppercase? – Hello71 Mar 14 '11 at 15:56
- @Hello71: nope. I'm just used to writing SQL keywords in uppercase for better readability. Lowercase should work fine. – TomA Mar 15 '11 at 9:40

It is easier to install the Vacuum Places addon, which allows you to defragment the Places database with the click of a button. You used to be able to run the command:

```
Components.classes["@mozilla.org/browser/nav-history-service;1"].getService(Components.interfaces.nsPIPlacesDatabase).DBConnection.executeSimpleSQL("VACUUM");
```

in the "Error Console" to vacuum the database, but I'm not sure it works in Firefox 3.6.

If you're using 3.0, upgrade to 3.5.

This is because you have a lot of page history. Clearing history every once in a while helps with this.

- Doesn't that somewhat diminish the awesomeness of the Awesome Bar? – Jeremy French Jul 16 '09 at 11:21

Well, the Awesome Bar queries your history, bookmarks and recent search terms (from the same bar), so perhaps you've got a lot of data in there. Try clearing out your history (from a month back onwards if you'd like to keep recent history) and emptying your search history - 3.5 has a useful tool for this (that can clear up to a set date). This should speed up your query times if there's less data.

It does seem to be a pretty intensive thing for the computer to do. I have noticed that it is much slower when my PC is under heavy load. Have you installed anything recently that could be eating up resources? You could just try closing a few programs, freeing up disk space, etc. It could just be a symptom of a PC that is getting overwhelmed.

On Linux you could use tmpfs to mount part of the filesystem in memory.
(Ironically, of course, one of SQLite's best features is its ability to store an entire database in memory in the first place.) Wikipedia suggests an alternative to tmpfs for Windows, but it doesn't go into details and it feels somewhat hacky. YMMV.

This works just fine for me:

```
cd ~/.mozilla/firefox/????????.default
echo "VACUUM;" | sqlite3 places.sqlite
```

The idea is VACUUMing, as suggested, only places.sqlite.

The code behind it needs some extra work. You could work around it temporarily in the numerous ways suggested here, but really, the only way to fix this is for the code to be improved. To me this bar is not awesome at all, because half the time I can't even finish typing a darn website name without the UI completely locking up for seconds. Very disappointing.

The Places Maintenance extension has a UI that allows easy vacuuming (optimization) of Firefox database files, which should help speed up the Awesome Bar and other Firefox database access. It also has other Firefox database maintenance functions:

Allows to run Maintenance tasks on the database that drives Places, the bookmarks and history module behind Firefox.
2015-07-31 11:22:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38673481345176697, "perplexity": 3608.518499876968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988250.59/warc/CC-MAIN-20150728002308-00010-ip-10-236-191-2.ec2.internal.warc.gz"}
https://code.tutsplus.com/articles/the-easiest-way-to-use-any-font-you-wish--net-3927
# The Easiest Way to Use Any Font You Wish

CSS 3 is on the horizon, and we're all getting excited. Thanks to the latest browser updates, developers can begin working with time-saving new properties - such as @font-face. Unfortunately, the availability of these features is limited to a tiny fraction of our overall userbase. At least for the next year or so, we'll need to continue utilizing the Flash and Javascript alternatives when embedding fonts. Luckily, a new contender, Cufón, has made the process unbelievably simple. What makes it different? Rather than Flash, it uses a mixture of canvas and VML to render the fonts. In just a few minutes, I'll demonstrate how to use any font you wish in your web applications. Excited?

### Pros

- Lightning fast!
- 100 times simpler than sIFR.
- Up and running in a few minutes.
- Not dependent upon a server-side language, like FLIR is.

### Cons

- It's Javascript dependent. If disabled, the default fonts will be used.
- The text isn't selectable - never a good thing.
- You can't apply a hover state to converted elements.

Visit Cufón's website and right-click on the "Download" button at the top. Choose "Save As" and place it on your desktop.

### Step 2: Convert a Font

In order to function, we need to use the font converter utility on the website. Alternatively, you may download the source code and convert your fonts locally. For the purposes of demonstration, I've chosen to use an obnoxious font: "Jokerman". Note - Windows users: you may have to copy the font from your "FONT" folder to the desktop for this to work. If desired, also upload the italic and bold files.

#### Step 2b

Next, you'll need to choose which glyphs should be included. Don't be so quick to simply "CHOOSE ALL". Doing so will cause the JS file size to increase dramatically. For example, we probably don't need all of the Latin glyphs, so make sure they are left unchecked. In my case, I've checked the ones you see below.
#### Step 2c

Cufón allows you to designate a specific url for your file, to increase security. It's extremely important that you ensure that you have the proper privileges to use a font. REFER HERE to review the terms. If advantageous, type your site's url into this box. Since we're just getting started, you can leave the final two sections at their default values. Accept the terms, and click "Let's Do This". You'll then be presented with a download box asking you where to save the generated script. Once again, save it to your desktop for easy retrieval.

### Step 3

The next step is to prepare our project. Create a new folder on your desktop, add an index.html file, and drag your two Javascript files in. Open the index file in your favorite code editor, add the basic HTML tags, and then reference your two Javascript files just before the closing body tag (you're free to add them to the head section as well).

#### Calling the Script

Now, we need to decide what text should be replaced. Since our document is still blank, feel free to litter it with random tags and text. Let's try to replace the default font in all the H1 tags with Jokerman. When we call the "replace" method, we may append a string containing the tag name that we wish to replace - in our case, all H1 tags. Save the file, and view it in your browser.

#### Step 3b

As always, IE needs a bit more to play nicely with the others. If you view this page in IE, you'll notice a slight flicker/delay before the font is rendered. To remedy, simply append:

### Step 4

Let's imagine that you want to have more control over your selector. For instance, perhaps you don't want to change ALL the H1 tags, but merely the ones within the header of your document. Cufón doesn't have its own selector engine built in. This feature was omitted to keep the file size as small as possible. Though this might seem like a downfall at first, it's actually a great idea.
Considering the ubiquity of Javascript frameworks lately, there is no need to double up. We'll review two methods to target specific elements.

#### Method 1: Javascript

If you won't be using a JS framework in your project, we'll simply use:

```
Cufon.replace(document.getElementById('header').getElementsByTagName('h1'));
```

The code above states, "Get the element which has an id of 'header'. Then, find all of the H1 tags within this element, and 'replace' them with our new font."

#### Method 2: jQuery

To piggyback off of jQuery's selector engine, we only need to import jQuery before Cufón.

```
Cufon.replace('#header h1');
```

It's as simple as that! Please note that you MUST import jQuery BEFORE your Cufón script in order for this method to work.

### Complete

Believe it or not, you're finished! With just a few lines of simple code, you're free to use any font you wish! Just make sure you have permission and are compliant with type foundries' licensing.

The main concern from the perspective of the type foundry appears to be that the typeface script generated by Cufón could be used to reverse engineer the very typeface itself. - Cameron Moll

What are your thoughts? Have a better method that I'm not familiar with?

- Subscribe to the NETTUTS RSS Feed for more daily web development tuts and articles.
2023-03-21 18:06:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.250230073928833, "perplexity": 2140.986736169422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00489.warc.gz"}
https://www.physicsforums.com/threads/latex-help-for-nuclear-reactions.14912/
# LaTeX help for nuclear reactions?

1. Feb 21, 2004

### Vodka

LaTeX help for nuclear reactions? - got it, thanks

I need a guideline for getting images for nuclear equations; all my attempts thus far were failures. I tried ^4_2 He + ^27_13 Al becomes ^31_15 P becomes ^30_15 P + ^1_0 n, but it didn't work. I don't know why, nor do I have the time to learn it before this paper is due. I don't have any other way to do it (short of making my own in MSPaint, heh), so if somebody could give me a guideline from which I could substitute letters and numbers as needed, it would be great and I can delete this thread. Thanks :D

Edit: thanks a lot :) I'm gonna keep this here for a little bit longer to reference again if I need, but this thread should be gone in a day or so.

$$\frac{216\,\textrm{MeV}}{22168.125\,\textrm{MeV}} = 0.00974 \times 100 = 0.974$$

$$2.4048\times 10^{-25}\,kg + 1.5364\times 10^{-25}\,kg + 8.3812\times 10^{-29}\,kg$$

$$3.941\times 10^{-25}\,kg$$

$$E = (3.941\times 10^{-25}\,kg)(3.00\times 10^{8}\ ms^{-1})^2$$

$$F = \frac{k q_1 q_2}{r^2}$$

$$3(m_n) = 3(1.67\times 10^{-27}) = 5.01\times 10^{-27}\,kg$$

$$\sum {m_{{}^3H}} = m_p + m_n = 4.033271\,\textrm{amu}$$

$$\sum {m_{{}^2H}} = m_p + m_n = 3.024606\,\textrm{amu}$$

$$5.011265\,\textrm{amu}$$

$$E_{{}^3H} = 6.012\times 10^{-10}\,J$$

$$E_{{}^2H} = 4.505\times 10^{-10}\,J$$

$$\frac{E_{\textrm{difference}}}{E_{\textrm{potential}}}$$

$$\frac{E_{\textrm{difference}}}{E_{{}^2H}+E_{{}^3H}}$$

mproton + mneutron = 1.007276 + 1.008665 = 2.015941 amu

Last edited: Feb 23, 2004

2. Feb 21, 2004

$${}^4_2\textrm{He} + {}^{27}_{13}\textrm{Al} \to {}^{31}_{15}\textrm{P} \to {}^{30}_{15}\textrm{P} + {}^1_0\textrm{n}$$

You can make empty characters with {} (two brackets, no space), and you can apply sub- and superscripts to empty characters. Click on the image to see what I typed in.

Last edited: Feb 21, 2004

3. Feb 22, 2004

### GRQC

Here's a nice custom command which I use:

```latex
\newcommand{\nucl}[3]{
  \ensuremath{
    \phantom{\ensuremath{^{#1}_{#2}}}
    \llap{\ensuremath{^{#1}}}
    \llap{\ensuremath{_{\rule{0pt}{.75em}#2}}}
    \mbox{#3}
  }
}
```

It must be used in math mode. So, if you want the chemical symbol for U-235, you would type $\nucl{235}{92}{U}$. Works great.

4. Feb 23, 2004

### chroot

Staff Emeritus

$$\newcommand{\nucl}[3]{ \ensuremath{ \phantom{\ensuremath{^{#1}_{#2}}} \llap{\ensuremath{^{#1}}} \llap{\ensuremath{_{\rule{0pt}{.75em}#2}}} \mbox{#3} } } \nucl{235}{92}{U}$$

Nice! - Warren

5. Feb 23, 2004

### Vodka

Yes, that is a very nice feature :) And another test... sorry...

$$\frac{E_{\textrm{difference}}}{E_{{}^2H}+E_{{}^3H}}$$

$$E_{{}^2H} = 4.505\times 10^{-10}\,J = 281.56$$

$$E_{{}^3H} = 6.012\times 10^{-10}\,J = 375.75$$

$$22168.125\,MeV$$

Last edited: Feb 23, 2004
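For reference, the empty-group trick shown in this thread can be wrapped in a minimal standalone document (a sketch; `\mathrm` is used here for upright element symbols, which is an assumption rather than what the posters typed):

```latex
\documentclass{article}
\begin{document}
% The empty group {} carries both scripts, so the mass number and
% atomic number attach to the left of the element symbol.
\[
{}^{4}_{2}\mathrm{He} + {}^{27}_{13}\mathrm{Al}
  \to {}^{31}_{15}\mathrm{P}
  \to {}^{30}_{15}\mathrm{P} + {}^{1}_{0}\mathrm{n}
\]
\end{document}
```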
2017-01-17 17:44:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7933157086372375, "perplexity": 2264.55499325997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00190-ip-10-171-10-70.ec2.internal.warc.gz"}
http://nucleartalent.github.io/NuclearStructure/doc/pub/hfock/html/._hfock-bs002.html
Why Hartree-Fock? We will show that the Hartree-Fock Hamiltonian $$\hat{h}^{\mathrm{HF}}$$ equals our definition of the operator $$\hat{f}$$ discussed in connection with the new definition of the normal-ordered Hamiltonian (see later lectures); that is, we have, for a specific matrix element, $$\langle p |\hat{h}^{\mathrm{HF}}| q \rangle =\langle p |\hat{f}| q \rangle=\langle p|\hat{t}+\hat{u}_{\mathrm{ext}}|q \rangle +\sum_{i\le F} \langle pi | \hat{V} | qi\rangle_{AS},$$ meaning that $$\langle p|\hat{u}^{\mathrm{HF}}|q\rangle = \sum_{i\le F} \langle pi | \hat{V} | qi\rangle_{AS}.$$ The so-called Hartree-Fock potential $$\hat{u}^{\mathrm{HF}}$$ brings in an explicit medium dependence due to the summation over all single-particle states below the Fermi level $$F$$. It also brings in an explicit dependence on the two-body interaction (in nuclear physics we can also have complicated three- or higher-body forces). The two-body interaction, with its contribution from the other bystanding fermions, creates an effective mean field in which a given fermion moves, in addition to the external potential $$\hat{u}_{\mathrm{ext}}$$ which confines the motion of the fermion. For systems like nuclei there is no external confining potential: nuclei are examples of self-bound systems, where the binding arises from the intrinsic nature of the strong force. Thus, for nuclear systems there is no external one-body potential in the Hartree-Fock Hamiltonian.
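The defining sum $\langle p|\hat{u}^{\mathrm{HF}}|q\rangle = \sum_{i\le F} \langle pi | \hat{V} | qi\rangle_{AS}$ is a simple tensor contraction. As a hedged sketch (the tensor below is random and the state-space sizes are invented; a real interaction would also carry the symmetries of $\hat{V}$), it can be written in one `einsum` call:

```python
import numpy as np

# Toy model: V[p, q, r, s] stands for the matrix element <pq|V|rs>.
n_sp, fermi = 6, 3          # number of single-particle states, Fermi level
rng = np.random.default_rng(42)
V = rng.normal(size=(n_sp, n_sp, n_sp, n_sp))

# Antisymmetrized element <pq|V|rs>_AS = <pq|V|rs> - <pq|V|sr>
V_AS = V - V.transpose(0, 1, 3, 2)

# Hartree-Fock potential: u_HF[p, q] = sum_{i <= F} <pi|V|qi>_AS
u_HF = np.einsum('piqi->pq', V_AS[:, :fermi, :, :fermi])
```

The slicing restricts the summed index to the occupied states below the Fermi level, and the repeated `i` in the subscript string performs the sum over those states.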
2020-09-22 18:15:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7852687835693359, "perplexity": 461.06638883806613}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206329.28/warc/CC-MAIN-20200922161302-20200922191302-00395.warc.gz"}
https://www.researcher-app.com/paper/1951205
# Weyl Consistency Conditions and $\gamma_{5}$

C. Poole, A. E. Thomsen

The treatment of $\gamma_{5}$ in Dimensional Regularization leads to ambiguities in field-theoretic calculations, of which one example is the coefficient of a particular term in the four-loop gauge $\beta$-functions of the Standard Model. Using Weyl Consistency Conditions, we present a scheme-independent relation between the coefficient of this term and a corresponding term in the three-loop Yukawa $\beta$-functions, where a semi-naïve treatment of $\gamma_{5}$ is sufficient, thereby fixing this ambiguity. We briefly outline an argument by which the same method fixes similar ambiguities at higher orders.

Publisher URL: http://arxiv.org/abs/1901.02749

DOI: arXiv:1901.02749v1
http://math.stackexchange.com/questions/235594/cauchys-integral-formula-with-singularities
# Cauchy's integral formula with singularities

I'm trying to evaluate $$\int_{\gamma} \frac{e^z}{z^m(1-z)}dz$$ where $\gamma$ is the boundary of $D(\frac{1}{2},1)$. I can't apply Cauchy's integral formula, since the function has its singularities inside the circle. How can I proceed?

- en.wikipedia.org/wiki/Residue_theorem – user17794 Nov 12 '12 at 10:19
- Make sure you're reading Cauchy's integral formula carefully. Does the function $$f(z)=\frac{e^z}{z-1}$$ have singularities in that disc? You might also find Cauchy's integral formula for derivatives useful. – icurays1 Nov 12 '12 at 10:19
- $z=1$ is a singularity and lies inside the disc centered at $1/2$ with radius $1$ – bateman Nov 12 '12 at 10:59

You can use the Residue theorem: $$\int_\gamma f(z) ~dz= 2\pi i\sum_k \text{Res}(f, a_k),$$ where the $a_k$ are the poles of $f$ inside $\gamma$. The residue of $f$ at a singularity is the coefficient of $z^{-1}$ in the Laurent series expansion of $f(z)$ there. In this case we have two singularities: a simple pole at $z=1$ and a pole of order $m$ at $z=0$. There are in fact closed-form identities for determining residues of $f$ at these sorts of poles. For a simple pole we have $$\text{Res}(f, a) = \lim_{z \to a}(z-a)f(z),$$ and for a pole of order $n$ we have $$\text{Res}(f, a) = \frac{1}{(n-1)!}\lim_{z \to a}\frac{d^{n-1}}{dz^{n-1}}\big((z-a)^nf(z)\big).$$ You should now be able to compute the integral by substituting in the derived values for the residues.

- thank you very much for your answer, but at the moment i don't know residue theory, but i'm required to solve it, so i think i should solve in some different way – bateman Nov 12 '12 at 11:01

You do not need the residue theorem.
You can decompose your integrand into $$\int_{\gamma} e^{z} \left( \frac{1}{z} + \frac{1}{z^2} + \ldots + \frac{1}{z^m} + \frac{1}{1 - z} \right) \, dz.$$ To justify the partial fraction decomposition, notice that by adding and subtracting terms, \begin{align} \frac{1}{z^m (1 - z)} & = \frac{(z^{m-1} + \ldots + z + 1)(1 - z) + z^m}{z^m(1 - z)} = \frac{z^{m-1}(1 - z) + \ldots + (1 - z) + z^m}{z^m(1 - z)} \\ & = \frac{1}{z} + \ldots + \frac{1}{z^m} + \frac{1}{1 - z}. \end{align} Now you can calculate each piece of the integral using the generalized Cauchy integral formula, namely that $$f^{(k)}(z_0) = \frac{k!}{2 \pi i} \int_{\gamma} \frac{f(z)}{(z - z_0)^{k+1}} \, dz$$ for $f(z)$ analytic. I'll leave it to you to take it from here. EDIT: Justified the partial fraction decomposition.

- +1 For the very clever way to decompose the integrand function, yet I'm not sure whether the OP can use Cauchy's Integral Formula if he/she can't use the Residue Theorem... – DonAntonio Nov 12 '12 at 11:25
- thank you very much, the decomposition follows from the geometric series, where terms are divided by $z^m$, so we have denominators for the first $m$ terms, then i can apply Cauchy for derivatives, taking $z_0=0$ for the first $m$ integrals and $z_0=1$ for the last. – bateman Nov 12 '12 at 11:57
- @bateman, careful with the geometric series argument (though heuristically it works), as the geometric series formula for $1/(1-z)$ only holds for $|z| < 1$. – Christopher A. Wong Nov 12 '12 at 12:44
- Don't forget to click the hollow tick mark to accept this answer if you can now solve the problem.... – Simon Hayward Nov 12 '12 at 12:44
- so i can't use geometric series, because the disc is not contained in $|z|<1$. How can i prove the decomposition of the integrand? By induction on $m$, maybe? – bateman Nov 12 '12 at 18:36
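Both routes discussed above (the residue theorem, and the partial-fraction decomposition fed into the generalized Cauchy formula) predict the same value, $2\pi i\left(\sum_{j=0}^{m-1} \frac{1}{j!} - e\right)$. A numerical sanity check of that closed form (the parametrization $\gamma(t)=\tfrac12+e^{it}$ and the step count are arbitrary choices):

```python
import cmath
import math

def contour_integral(m, n_steps=4000):
    """Trapezoidal quadrature of e^z / (z^m (1 - z)) over |z - 1/2| = 1."""
    total = 0j
    for k in range(n_steps):
        t = 2 * math.pi * k / n_steps
        w = cmath.exp(1j * t)
        z = 0.5 + w                            # gamma(t) = 1/2 + e^{it}
        dz = 1j * w * (2 * math.pi / n_steps)  # gamma'(t) dt
        total += cmath.exp(z) / (z ** m * (1 - z)) * dz
    return total

def predicted(m):
    # Res at z=0 (order m): coefficient of z^{m-1} in e^z/(1-z) = sum_{j<m} 1/j!
    # Res at z=1 (simple):  lim_{z->1} (z-1) f(z) = -e
    return 2j * math.pi * (sum(1 / math.factorial(j) for j in range(m)) - math.e)

for m in (1, 2, 5):
    print(m, abs(contour_integral(m) - predicted(m)) < 1e-9)  # True each time
```

The trapezoid rule converges geometrically on a periodic analytic integrand, so a few thousand points already agree with the residue prediction to machine precision.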
http://math.stackexchange.com/questions/266153/contractible-and-simply-connected
# Contractible and simply connected

Every contractible space $X$ is simply connected because $X$ is homotopy equivalent to a point. Is there a direct proof of this fact? There obviously is a (free) homotopy between any loop and the trivial loop at the base point. But how to construct a based homotopy, which is required for a loop to be trivial in the fundamental group?

- Consider loop 1, formed by the starting point of your free homotopy, and loop 2, formed by the ending point of the free homotopy. Then at time $t$, traverse along loop 1 up to time $t$, go through the homotopy at time $t$, and come back along loop 2. This is then a based homotopy to the trivial loop at the base point. – user27126 Dec 27 '12 at 22:31

If $\varphi:S^1\times I\to X$ is the free homotopy between the loop $\varphi_0$ and the constant loop $\varphi_1\equiv x$, and $h$ is the path traced out by the images of $s_0$, i.e. $h(t)=\varphi_t(s_0)$, then define $h_t(s)=h(ts)$. At time $t$ it traverses the path $h$ up to the point $h(t)=\varphi_t(s_0)$, so it can be composed with $\varphi_t$, which itself can be followed by $\overline{h_t}$ to get back to $\varphi_0(s_0)$. So the product $h_t\cdot\varphi_t\cdot\overline{h_t}$ gives a based homotopy between $\varphi_0$ and $h_1\cdot x\cdot\overline{h_1}$, the latter being null-homotopic.
https://mathalino.com/forum/strength-materials/how-calculate-shearing-force-bolt
# How to calculate shearing force of the bolt?

Hello everyone, could you teach me how to calculate the shearing force on the bolt in this figure?

Your pipe joint is subjected to combined stress. Consider how the vertical force N, the horizontal force P, and the bending moment Mt each affect the bolts.

An unusual pipe joint (for engineering purposes). Before applying it, try to improve the connection: perhaps use a flange-shaped connection (to avoid shear stress in the bolts from the axial force N, and to obtain better bending rigidity). If necessary, check the general stability of the pipe (considering all loadings simultaneously). Best regards! congestus
https://mathsgee.com/qna/1174/how-can-i-solve-9-1-2x-3-4
# How can I solve $9^{(1-2x)}=3^4$?

Change 9 to base 3: $3^{2(1-2x)} = 3^4$. Equating exponents,
$$2-4x=4 \quad\Rightarrow\quad -4x=2 \quad\Rightarrow\quad x=-\tfrac{1}{2}.$$

by Diamond (39,701 points)
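The base-3 algebra above can be double-checked by direct evaluation; a quick sketch:

```python
# 9 = 3**2, so 9**(1 - 2x) = 3**(2 - 4x); equating exponents gives 2 - 4x = 4,
# hence x = -1/2. Verify numerically:
x = -0.5
left = 9 ** (1 - 2 * x)
right = 3 ** 4
print(left, right)  # 81.0 81
assert left == right
```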
http://www-old.newton.ac.uk/programmes/TOD/seminars/2012111511301.html
# TOD

## Seminar

### Self-assembly of icosahedral viral capsids: the combinatorial analysis approach

Kerner, R (Université Pierre et Marie Curie - Paris VI)

Thursday 15 November 2012, 11:30-12:30

Seminar Room 1, Newton Institute

#### Abstract

An analysis of all possible icosahedral viral capsids is proposed. It takes into account the diversity of coat proteins and their positioning in elementary pentagonal and hexagonal configurations, leading to definite capsid size. We show that the self-organization of observed capsids during their production implies a definite composition and configuration of elementary building blocks. The exact number of different protein dimers is related to the size of a given capsid, labeled by its $T$-number. Simple rules determining these numbers for each value of $T$ are deduced and certain consequences concerning the probabilities of mutations and evolution of capsid viruses are discussed.
https://proxieslive.com/tag/reducing/
## Does reducing the Purple Worm's HP to 184 really reduce its challenge rating to 13?

On page 113 of Princes of the Apocalypse the text reads in part:

As the tremor ends, a young purple worm with 184 hit points (challenge rating 13) burrows…

No other changes to the Purple Worm's stats are given, and the monster does not appear in the Monsters section at the end of the book. This monster also shows up on DnD Beyond's encounter builder as a CR 13 monster named "Young Purple Worm", in contrast to the CR 15 "Purple Worm". Is taking away 63 hit points, with no other changes to this monster, enough to drop it two challenge ratings? I'd like to challenge, but not kill, my party.

## Will reducing the cost of Holy Water or improving its effectiveness break things

My L3 LMoP group are planning on picking up some holy water to help with zombies, as they've heard tales of undead (Old Owl Well and Thundertree), but I think they're going to be very disappointed to find it's 25 gp but is single use, costs an action, affects a single target, and only does the same damage as a greatsword swing. Essentially, it seems to be only as good as a single decent fighter attack, but uses an action and costs 25 gp. Given that an average L3 PC might expect to do say ~1d6+3 damage with a typical attack, this means they're getting about 3 extra damage, once, for 25 gp, which seems absurd. Plus it only works on certain foes. Am I missing something?! I'd like to make this work for them, so I'm considering some changes to the rules for Holy Water:

1. Reduce the cost – maybe as low as 5 gp, given that they have a paladin who is visiting a temple to make his oath (this allows me to keep the price higher on other occasions if they did find a way to abuse it)
2. Make it more effective – maybe an AoE effect?

Will this break the game, or be something they can heavily abuse later?
## Is there an algorithm for reducing CNFs further

I have a boolean formula in conjunctive normal form (CNF): $$(a\vee b \vee c) \wedge (a \vee b \vee \neg c) \wedge (x \vee y)$$ I know that this can be simplified to $$(a\vee b)\wedge (x \vee y).$$

a) Is there an algorithm to decide if a CNF is already in the reduced form or not?

b) Is there an algorithm that can do this reduction in a manner more efficient than comparing each pair of clauses to see if any pairing can be reduced?

I wish to automate this reduction for any CNF and am looking for any algorithms that I can borrow/implement.

## Does reducing a character's max HP with a spell also reduce the "negative HP" threshold needed to cause instant death?

Here's my situation: In a fight with a group of vampire thralls, the party's wizard got caught in a corner and was being savaged by vampire bites, his max HP dropping from 24 to 11. They fended off the vampires, but the wizard was at 3 HP (he refused to be healed by the cleric due to his character's hatred of religion and gods). He activated a trap collapsing the temple, and ended up getting hit by a falling chunk of stone ceiling, taking 15 damage (the rock rolled better than any of the vampires). Now the wizard is reduced to 0 HP, with 12 damage left over. The cleric's player says that exceeds the wizard's current max HP of 11, causing insta-death. The wizard's player argues that the death threshold for negative HP isn't affected by max-HP-reducing spells, claiming that would make those kinds of spells more powerful than intended. I have stories planned in either case, but I'd rather be certain that I'm following the rules. Is the threshold for instant death based on current max HP or normal max HP?

## Reducing the Dominating Set Problem to SAT

I am trying to solve a problem and I am really struggling; I would appreciate any help. Given a graph $$G$$ and an integer $$k$$, recognize whether $$G$$ contains a dominating set $$X$$ with no more than $$k$$ vertices.
And that is by finding a propositional formula $$\phi_{G,k}$$ that is satisfiable if and only if there exists a dominating set with no more than $$k$$ vertices, so basically reducing it to the SAT problem. What I have so far is this boolean formula: $$\phi=\bigwedge\limits_{i\in V} \Bigl( x_i \vee \bigvee\limits_{j\in V:(i,j)\in E} x_j \Bigr)$$ So basically I define a variable $$x_i$$ that is set to true when vertex $$v_i$$ is in the dominating set $$X$$, and the formula says that for each node in $$G$$ either the vertex itself is in $$X$$ or one of its adjacent vertices is. This is basically a Weighted Satisfiability Problem, so it's satisfiable with at most $$k$$ variables set to true. My issue is that I couldn't come up with a boolean formula $$\phi_{G,k}$$ that not only uses the graph $$G$$ as input but also the integer $$k$$. So my question is: how can I modify this formula so it features $$k$$, or possibly come up with a new one if it cannot be modified?

## How does resistance/vulnerability/immunity interact with carryover damage after reducing a Polymorphed (or Wild Shaped) form to 0 HP?

A caster casts polymorph on another creature. Let's say the polymorphed creature has 10 HP in its new form, but takes 30 piercing damage and its current form is reduced to 0 HP. This causes it to revert back to its original form, with 20 more piercing damage that would carry over. However, its original form is resistant to piercing damage. How much damage would the original form actually take? Would its original form's resistance to the damage type apply to the carryover damage? The same question can be extended to the original form having immunity or vulnerability, as the answer would ostensibly use the same logic.
The druid's Wild Shape ability also works similarly to polymorph in this regard (if you reduce the new form to 0 HP, then any remaining damage carries over to its original form), so I suspect the answer would be similar for a similar question about Wild Shape.

## Reducing 3-coloring problem to trio representatives

A group of students is divided into trios – groups of 3 members. Each student can be assigned to more than one trio. We want to assign their representatives by choosing exactly one member of each trio. Is such an assignment possible? My goal is to use polynomial-time reductions to transform 3-coloring of a graph into this problem. However, I'm stuck on the correct representation.

• If each vertex is a different student and edges represent being in the same trio, how do I separate trios?
• If each node represents a trio, what could be a sensible meaning of the edges?

I suspect that since a 4-clique has no adequate 3-coloring (which could also mean that 4 trios with the same three members have no possible representative assignment), the latter option could be more sensible, but I'm not sure how to proceed with this reduction proof.

## The Partition Problem: Reducing to prove NP-Completeness

I am struggling with the below problem. Curious to hear any guidance. The Partition Problem is the following:

$$\textbf{Instance:}$$ A (multi-)set of numbers $$S = \{a_1, a_2, \ldots , a_n \}$$.

$$\textbf{Question:}$$ Can $$S$$ be partitioned into two (multi-)sets $$A$$ and $$B$$ such that the sum of the numbers in $$A$$ is equal to the sum of the numbers in $$B$$?

Prove that the Partition Problem is NP-complete.

Things I could reduce from that I know of are 3-SAT, Vertex Cover, Subset Sum, Independent Set, and the clique problem. My assumption is that I should be reducing from the Subset Sum problem, but I struggle with that problem as well. Anyone able to help shed some light on this?
Also, any explanation in plain English (maybe with some notation) would be greatly appreciated, as I'm struggling with the concepts here. Just mathematical notation alone might make it more difficult for me to understand at the moment.

## Reducing Vertex Cover to Half Vertex Cover

I need to reduce Vertex Cover to Half Vertex Cover using a Karp reduction:

Vertex Cover: Given a graph $$G = (V,E)$$ and an integer $$k$$, is there a subset of $$V$$ of size $$k$$ which intersects all edges?

Half Vertex Cover: Given a graph $$G = (V,E)$$ and an integer $$k$$, is there a subset of $$V$$ of size $$k$$ which intersects exactly half the edges?

I will be happy if you can tell me how to do that and why the reduction works (both directions of the proof).

## Reducing Kleene's predecessor for Church numerals

I am trying to "reinvent" Kleene's predecessor myself. The following code snippet should be self-explanatory. The idea is to make a 2-tuple and count up from zero, i.e. `lambda f: lambda x: x`, as described in this article:

```python
#!/usr/bin/env python3

NULL = lambda x: x
ZERO = lambda f: lambda x: x

TRUE = lambda T: lambda F: T(NULL)
FALSE = lambda T: lambda F: F(NULL)
IF_ELSE = lambda cond: lambda T: lambda F: cond(T)(F)

IS_ZERO = lambda n: n(lambda _: FALSE)(TRUE)
ADD1 = lambda n: lambda f: lambda x: f(n(f)(x))

MakePair = lambda first: lambda second: lambda cond: IF_ELSE(cond)(lambda x: first)(lambda x: second)
First = lambda pair: pair(TRUE)
Second = lambda pair: pair(FALSE)

# Trans maps the pair (a, b) to (b, b + 1); iterating it n times from (0, 0)
# yields (n - 1, n), so First of the result is the predecessor.
Trans = lambda pair: lambda cond: IF_ELSE(cond)(lambda x: Second(pair))(lambda x: ADD1(Second(pair)))
SUB1 = lambda n: First(n(Trans)(MakePair(ZERO)(ZERO)))

THREE = ADD1(ADD1(ADD1(ZERO)))
FIVE = ADD1(ADD1(ADD1(ADD1(ADD1(ZERO)))))

if __name__ == '__main__':
    print(SUB1(THREE)(lambda x: x + 1)(0))  # 2
    print(SUB1(FIVE)(lambda x: x + 1)(0))   # 4
```

In the end, the linked article notes that

> It is then straightforward but tedious to expand all of the short-hand expressions above, and reduce the resulting expression to normal form.
This results in the standard magical encoding of predecessor.

I assume the normal form of Kleene's predecessor looks like this:

```python
pred = lambda n: lambda f: lambda x: n (lambda g: lambda h: h (g (f))) (lambda y: x) (lambda x: x)
```

However, after applying a series of expansion and $$\beta$$-reduction, I ended up with this:

```python
SUB1 = lambda n: n(lambda pair: lambda cond: cond(lambda x: pair(lambda T: lambda F: F(NULL)))(lambda x: lambda f: lambda x: f((pair(lambda T: lambda F: F(NULL)))(f)(x))))(lambda _: lambda f: lambda x: x)(lambda T: lambda F: T(lambda x: x))
```

Question: How do I reduce my SUB1 function to pred? I don't think we can go further with only $$\beta$$-reduction, and there must be some advanced reduction techniques unknown to me. A step-by-step solution would be greatly appreciated. Note that this is not a homework problem, though; I am doing the exercise just for fun.
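As a sanity check that the quoted "standard magical encoding" really computes the predecessor, it can be exercised directly (decoding with `lambda x: x + 1` and `0`, exactly as in the question's own tests):

```python
ZERO = lambda f: lambda x: x
ADD1 = lambda n: lambda f: lambda x: f(n(f)(x))

# The standard encoding of predecessor quoted in the question:
pred = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda y: x)(lambda x: x)

def church(k):
    """Build the Church numeral for a non-negative int k."""
    n = ZERO
    for _ in range(k):
        n = ADD1(n)
    return n

def decode(n):
    """Read a Church numeral back as a Python int."""
    return n(lambda x: x + 1)(0)

print(decode(pred(church(3))))  # 2
print(decode(pred(church(5))))  # 4
print(decode(pred(church(0))))  # 0 (the predecessor of zero is zero)
```

Behaviorally it agrees with the pair-based SUB1 above, which is exactly what the requested chain of reductions would establish.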
https://tex.stackexchange.com/questions/461510/what-is-state-start
# What is "state=start"?

Throughout ConTeXt's web site and documentation, there are examples that have state=start. The reference manual also has this, but it isn't clear to me what this means. In the manual, for different commands, it can take the value of start, stop, keep, none, high, empty, nomarking, etc. Can someone clarify?

• Which command are you referring to? Not all commands take state as an argument. – Henri Menke Nov 24 '18 at 3:19
• BTW, the reference manual is not a good resource, see my comments here: github.com/hmenke/context-examples/blob/master/… – Henri Menke Nov 24 '18 at 3:36
• Hi @HenriMenke, I just saw it everywhere, but didn't have specific commands in mind. How about \setupinteraction[state=start] and \setupcolors[state=start]? Another one I saw was \definelogo. I assumed that since it is such a common property, it had similar meanings everywhere. – Roxy Nov 24 '18 at 3:43
• \setupcolors[state=start] is not necessary in MkIV. Colors are enabled by default. – Henri Menke Nov 24 '18 at 4:35

The state property is usually associated with global properties of the document. Take for example \setupinteraction[state=start] This enables document interaction (hyperlinks). If you wanted to switch it off (maybe only temporarily), you'd use \setupinteraction[state=stop] Searching the ConTeXt command reference for state = gives me 46 matches. Most of them are for some internal commands, or for commands where state=start is the default. In the second part you asked what it means for state to be something other than start or stop. I could find this for \setuplayouttext, where you have state = start stop empty high none normal nomarking NAME. To be honest, I have no idea what these mean, because I have never used this command directly and there seems to be no documentation on the Wiki. There is documentation for \setupheader (for which I never used the state property), which is implemented in terms of \setuplayouttext.
Maybe as an addition to Henri Menke's nice answer: there are cases where start and stop might not have the meaning you would expect, e.g. when talking about layers (from the wiki). The available options for the "state" of a layer are:

• start: layer appears only on the current page
• stop: layer doesn't show up
• repeat: layer prints on all pages
• next: layer appears on the following page
• continue: layer appears on all pages except the first

In one of your comments you are asking about \setupcolors. The wiki page tells you that it accepts four states: local, global, start and stop. In MkIV, colors are enabled by default and only accept the states start and stop (see setup-en.pdf and the sources). This is just to exemplify that you should always check (and never fully trust) the documentation. If something is unclear, just ask a question, there will be people helping you :)
http://stsievert.com/blog/2015/12/09/generating-rvs/
Since computers were invented we have spent a lot of time generating uniform random numbers. A quick search on Google Scholar for "Generating a uniform random variable" gives 850,000 results. But what if we want to generate another random variable? Maybe a Gaussian random variable or a binomial random variable? These are both extremely useful.1 We've spent so long focusing on generating uniform random variables that they must be useful. That is, computers try to generate $$U \sim \textrm{uniform}(0, 1)$$, a random number between 0 and 1 with equal probability of any number happening. It is exactly the variable that has received so much attention and has seen many algorithms developed to generate it. We've developed pseudo-random number generators for this, and it's what rand implements in many programming languages.

The cumulative distribution function (CDF) defines our random variable: it gives the probability that the random variable lies below a certain value. The cumulative distribution function of $X$ is defined to be $$F_X(x) = \Pr(X \le x)$$ Then let's define our variable through the inverse of the CDF we'd like, $F(x)$: $$X = F^{-1}(U)$$ Because we defined our random variable as $X = F^{-1}(U)$, the CDF of $X$ is \begin{aligned} F_X(x) &= \Pr(X \le x)\\ &= \Pr(F^{-1}(U) \le x)\\ &= \Pr(U \le F(x))\\ &= F(x) \end{aligned} where in the last step we use that $\Pr(U \le t) = t$ because $U$ is a uniform random variable between 0 and 1 (its CDF is a line of slope 1 between 0 and 1). We see that $F_X(x) = F(x)$, exactly as we wanted! This means that we only need a uniform random number and $F_X^{-1}$ to generate any random variable. After we work out $F^{-1}$ (which may be difficult) we can generate any random variable! For an example, let's use the exponential random variable.
The cumulative distribution function of this random variable is $$F_X(x) = (1 - e^{-\lambda x}) u(x)$$ where $u(x)$ is the unit step function. To generate this random variable, we define $$X = F_X^{-1}(U) = -\frac{1}{\lambda}\log(1 - U)$$

The following Python generates that random variable and tests it against theory, and we can view the results with a histogram. A quick check at np.random shows that it generates exponential random variables the same way.

1. I won't cover this here, but the Gaussian random variable is useful almost everywhere, and the binomial random variable can represent performing many tests that can either pass or fail.
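As a concrete sketch of the inverse transform method for the exponential distribution (the rate $\lambda = 2$, the sample count, and the variable names here are illustrative choices, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 2.0        # rate parameter lambda (illustrative choice)
n = 100_000      # number of samples

# inverse transform sampling: X = F^{-1}(U) = -log(1 - U) / lambda
u = rng.uniform(0.0, 1.0, size=n)
x = -np.log(1.0 - u) / lam

# test against theory: exponential(lambda) has mean 1/lambda and
# variance 1/lambda^2
print(x.mean())  # should be close to 1/lam = 0.5
print(x.var())   # should be close to 1/lam**2 = 0.25
```

A histogram of `x` (e.g. with `matplotlib.pyplot.hist(x, bins=100, density=True)`) can then be compared against the density $\lambda e^{-\lambda x}$.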
https://www.physicsforums.com/threads/tension-weight-and-forces.125194/
# Tension, weight and forces

1. Jul 3, 2006

### soljaragz

There is a problem I'm looking at, and in its explanation for the answer it says "...We know the sum of forces acting on m is T-mg which is equal to ma. Therefore, T=m(g-a)..."

Um... shouldn't it be T=m(g+a)?

2. Jul 3, 2006

### Andrew Mason

Yes, provided the signs of g and a are opposite. This is explicit in T = m(g-a), where g and a are the magnitudes of the vectors $\vec{g} \text{ and } \vec{a}$.

AM

3. Jul 3, 2006

### 0rthodontist

Doesn't make sense to me. a should be oriented so that + is in the direction of the tension and - is in the direction of gravity. It shouldn't be an absolute value. Anyway, if they want to use it as an absolute value in the second part, they should have been doing that in the first part.
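A concrete case makes the sign convention explicit. In the sketch below (the numbers are made up for illustration), the mass is being lowered with a downward acceleration of magnitude a_down; taking up as positive, Newton's second law reads T - mg = -m*a_down, so T = m(g - a_down), which is less than the weight. This matches the quoted T = m(g - a) only when the acceleration points downward:

```python
g = 9.81  # gravitational acceleration in m/s^2

def tension(m, a_down):
    """Tension (N) in a rope lowering mass m (kg) with downward
    acceleration of magnitude a_down (m/s^2).

    Up-positive axis: T - m*g = m*(-a_down)  =>  T = m*(g - a_down)
    """
    return m * (g - a_down)

print(tension(10.0, 2.0))  # about 78.1 N, less than the 98.1 N weight
print(tension(10.0, 0.0))  # hanging at rest: tension equals the weight
```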
http://www.stata.com/stata-news/news28-4/
Stata News, Vol 28 No 4 (2013 quarter 4)

In this News, we focus on the new features added in Stata 13.1, a free update to Stata 13. Power and sample-size analysis is now available for ANOVA, and you can extend the power command with your own tests and statistics; censored outcomes can now be modeled with endogenous covariates, sample selection, random effects, and more; and several new features have been added for analyzing univariate time series. That is just what we added from Stata 13 to 13.1. If you haven't already upgraded to Stata 13, you are missing one of the most exciting releases of Stata ever.

— Vince Wiggins, Vice President, Scientific Development

## Stata 13.1 overview — Free update to Stata 13

Censored outcomes. Models with censored outcomes and interval-measured outcomes can now include endogenous covariates, sample selection, random effects and coefficients, treatment effects (ATEs), multivariate models, unobserved components, and endogenous switching. All of these features may be combined in a model.

Power and sample size. The power command that was introduced in Stata 13 has new methods for analyzing one-way, two-way, and repeated-measures ANOVA models. Like other power methods, you can compute (1) sample size, (2) power, or (3) effect size. Compute any of the three given the other two.

Time series. IRFs (impulse-response functions) and parametric autocorrelations can now be estimated after ARIMA and ARFIMA models. Stability checks are also available for ARIMA models, and parametric spectral densities are available for seasonal models.

## Highlights of Stata 13

If you don't already have Stata 13, Stata 13 adds features and statistics for virtually every user in every field. Here are the highlights.

Treatment effects. You can now estimate the effect of treatments such as a new drug regimen, a surgical procedure, or a training program using inverse-probability weights (IPW), propensity-score matching, doubly robust methods, and more.
Handle binary or multivalued treatments.

Multilevel models and panel data. Need to handle binary, ordered, count, and categorical outcomes in panel or repeated-measures data? Now estimate models with random effects or random coefficients, including multilevel and crossed models.

Generalized SEM. Tired of just linear SEMs? Stata 13 adds multilevel nested and crossed models. Analyze binary, count, categorical, and ordered outcomes. Estimate a dizzying array of new models that span every discipline.

Power and sample size. Get tables, graphs, or both at the click of a button. Enter lists of known or possible values, and solve for power, sample size, minimum detectable effect, or effect size. Do everything from an integrated Control Panel.

Forecasting. Estimate any number of models and produce time-series forecasts from all the models. Create dynamic or one-step-ahead forecasts. Compare alternative scenarios, and more.

Long strings. Maximum string length increases from 244 characters to 2 billion! Handle binary large objects (BLOBs) such as Word documents and JPEG images.

Project Manager. Keep your Stata projects organized. Filter on filename, and click to open or run do-files, ado-files, datasets, raw files, graphs, etc.

And so much more. And there are many more substantial additions, such as effect sizes, Poisson regression with endogenous regressors, probit with sample selection, more univariate time series, and import delimited with preview. See all the details or upgrade now at stata.com/stata13.

## New power and sample size for ANOVA

Stata 13.1, a free update to Stata 13, adds three new methods for power and sample-size analysis of ANOVA models—oneway, twoway, and repeated:

• power oneway performs analyses for one-way ANOVA
• power twoway performs analyses for two-way ANOVA
• power repeated performs analyses for repeated-measures ANOVA

These new facilities work just like the existing facilities for comparisons of means, proportions, correlations, and variances.
You can specify single values or ranges of values for power and effect size to compute required sample size. You can specify sample size and effect size to compute power. Or you can specify power and sample size to compute effect size.

## New features for censored outcomes and tobit models

New with the Stata 13.1 update, you can now estimate models with censored or interval-measured Gaussian outcomes that also include Heckman-style selection, endogenous treatments to obtain average treatment effects (ATEs), covariate measurement error, and unobserved components. You can include endogenous regressors in any part of the models. You can also estimate these models in a panel-data or multilevel-data context with random effects (intercepts) and random coefficients in any part or all parts of the model. All of these models can be estimated as parts of larger multivariate systems. Censored or interval-measured outcomes can even participate in endogenous switching models.

## In the spotlight: New univariate time-series features added in 13.1

Stata 13.1 introduces four new features for univariate time series:

1. IRFs (impulse-response functions) for ARIMA and ARFIMA models
2. Parametric autocorrelation estimates from ARIMA and ARFIMA models
3. A check of stability conditions for ARIMA models
4. Spectral density estimation from seasonal ARIMA models

Individually, none of these features are earth shattering. However, the first three are some of my go-to concepts when teaching time-series analysis. Let's use an example to see why.

## New distribution functions

Stata 13.1 adds three new functions that compute aspects of the noncentral chi-squared distribution:

• nchi2den(): density
• reverse cumulative
• inverse of reverse cumulative

## In the spotlight: Adding your own methods to analyze power and sample size

In some cases, you may want to compute sample size or power yourself.
For example, you may need to do this by simulation, or you may want to use a method that is not available in any software package. power makes it easy for you to add your own method. All you need to do is write a program that computes sample size, power, or effect size, and the power command will do the rest for you. It will deal with the support of multiple values in options and with automatic generation of graphs and tables of results.

## Stata Conference Boston 2014

### July 31-August 1, 2014

Come join us in historic Boston, home to Fenway Park and the Harvard Museum of Natural History, for two days of networking and Stata exploration. Don't miss this opportunity to connect with colleagues and fellow researchers as well as Stata developers. Submissions for presentations are now being accepted.

## Visit us at ASSA 2014

### Philadelphia, Pennsylvania, January 3-5, 2014

Stata representatives, including David M. Drukker, Director of Econometrics, will be on hand to answer your questions on all things Stata. We're interviewing at ASSA! Submit your completed application before December 16, 2013, and let us know that you would like to be considered for an interview at the meetings.

## Econometrics Winter School Using Stata

Timberlake (Portugal) and the Faculty of Economics at the University of Porto are jointly organizing a set of applied econometrics courses using Stata. The aim of these courses is to familiarize the participants with the basic econometric tools commonly used in applied research. The courses include a quick discussion of the relevant econometric theory as well as an in-depth discussion of empirical applications using real data. The course will take place at FEP, University of Porto, on January 21-24, 2014.
## Public training courses

| Course | Dates | Location | Cost |
|---|---|---|---|
| Using Stata Effectively | January 7-8, 2014 | Washington, DC | $950 |
| Estimating Average Treatment Effects Using Stata | March 6-7, 2014 | Washington, DC | $1,295 |
| Structural Equation Modeling Using Stata | March 24-25, 2014 | Washington, DC | $1,295 |
| Multilevel/Mixed Models Using Stata | April 23-24, 2014 | Washington, DC | $1,295 |

## NetCourses™

| Course | Dates | Cost |
|---|---|---|
| Introduction to Stata | Jan 17-Feb 28, 2014 | $95 |
| Introduction to Stata Programming | Jan 17-Feb 28, 2014 | $125 |
| Introduction to Survival Analysis Using Stata (NEW) | Jan 17-Feb 28, 2014 | $295 |
| Advanced Stata Programming | Jan 17-Mar 7, 2014 | $150 |
| Introduction to Univariate Time Series with Stata | Jan 17-Mar 7, 2014 | $295 |

## New from Stata Press

#### Discovering Structural Equation Modeling Using Stata, Revised Edition

Discovering Structural Equation Modeling Using Stata, Revised Edition is an excellent resource both for those who are new to SEM and for those who are familiar with SEM but new to fitting these models in Stata. It is useful as a text for courses covering SEM as well as for researchers performing SEM. The Revised Edition includes output, syntax, and instructions for fitting models with the SEM Builder that have been updated for Stata 13.

## New from the Stata Bookstore

#### Applied Logistic Regression, Third Edition

The third edition of Applied Logistic Regression, by David W. Hosmer, Jr., Stanley Lemeshow, and Rodney X. Sturdivant, is the definitive reference on logistic regression models. Most of the analyses in the book were performed using Stata and can be replicated using Stata and the data from the text. Also noteworthy is the book's use of multinomial fractional polynomial models that can be fit using Stata's mfp command.

#### Econometric Analysis of Panel Data, Fifth Edition

Econometric Analysis of Panel Data, Fifth Edition, by Badi H. Baltagi, is a standard reference for performing estimation and inference on panel datasets from an econometric standpoint.
This book provides a rigorous introduction to standard panel estimators as well as concise explanations of many newer, more advanced techniques. Because of its wide range of topics and detailed exposition, Econometric Analysis of Panel Data, Fifth Edition, can serve as both a graduate-level textbook and a handy desk reference for seasoned researchers.

#### Applied Longitudinal Data Analysis for Epidemiology: A Practical Guide, Second Edition

Applied Longitudinal Data Analysis for Epidemiology: A Practical Guide, Second Edition, by Jos W. R. Twisk, provides a practical introduction to the estimation techniques used by epidemiologists for longitudinal data.

## More titles online!

The Stata Bookstore contains nearly 200 titles, all carefully selected to meet the needs of our users. Check out the Bookstore online.

## Go green!

The Stata News is now available via email. Be eco-friendly and go paperless. Have the News delivered straight to your inbox. Sign up to receive future issues of the Stata News in electronic format before it arrives in the mail. We will send you an email with the News content as soon as it is available. The electronic format of the Stata News has all the same information as the printed version. The only difference is the convenience.
https://math.stackexchange.com/questions/3217605/a-geometry-problem-about-triangle-angles-and-perimeter
# A geometry problem about triangle angles and perimeter

Consider $$\Delta ABC$$ with three acute angles. We draw its altitudes and form the triangle $$\Delta MNP$$. If $$\frac{PN}{KN}=\frac{3}{2}$$ and $$\frac{\sin{\alpha}}{\cos{\frac{\alpha}{2}}}+\frac{\sin{\theta}}{\cos{\frac{\theta}{2}}}+\frac{\sin{\gamma}}{\cos{\frac{\gamma}{2}}}=\frac{288}{100}$$ then calculate $$\frac{MN}{AB+BC+CA}$$ Note that $$\alpha,\theta,\gamma$$ are the angles of $$\Delta MNP$$ and $$K$$ is the intersection point of $$MN$$ and $$CP$$.

I think it is a famous geometry problem; I can't remember where I first saw it, but I think it was a well-known question... I have thought about this problem a lot but have no idea how to solve it, except that the fraction $$\frac{288}{100}$$ equals $$2\cdot\frac{144}{100}$$ and I think I should use this... Maybe I should take the square root of this fraction. Am I right?

I will attempt to solve this problem with as little trigonometry as possible. The value of $$\frac{144}{100}=(1.2)^2$$ is actually a red herring. First we denote by $$a$$, $$b$$, $$c$$, $$\angle{A}$$, $$\angle{B}$$, $$\angle{C}$$, $$S$$, $$R$$ and $$r$$ the sides, angles, area, circumradius and inradius of $$ABC$$. Note that: $$\frac{\sin \alpha}{\cos \frac{\alpha}{2}}=\frac{2\sin \frac{\alpha}{2}\cos \frac{\alpha}{2}}{\cos \frac{\alpha}{2}}=2\sin\frac{\alpha}{2}=2\sin\frac{\angle{NMP}}{2}=2\sin\angle{AMP}=2\sin\angle{NBA}=2\sin(90^{\circ}-\angle{BAC})=2\cos\angle{BAC}=2\cos A$$

So we have that $$\cos A+\cos B+\cos C=\frac{144}{100}$$. Now we will prove that in any triangle we have $$\cos A+\cos B+\cos C = 1+\frac{r}{R}$$. It can be proven in many ways, but one of the nicer ones is this:

Consider the midpoints $$D$$, $$E$$, $$F$$ of $$BC$$, $$CA$$, $$AB$$ respectively, which are also the projections of the point $$O$$ (the circumcentre of $$ABC$$) onto its sides.
Denoting by $$x$$, $$y$$ and $$z$$ the lengths of $$OD$$, $$OE$$, $$OF$$ and applying the Ptolemy theorem to the cyclic quadrilateral $$AEOF$$ we obtain: $$AE \cdot OF + AF \cdot OE = AO \cdot EF$$ $$\frac{b}{2} \cdot z + \frac{c}{2} \cdot y = R \cdot \frac{a}{2}$$ $$bz+cy=aR$$

Writing analogous equations and adding them up we get: $$x(b+c)+y(c+a)+z(a+b)=R(a+b+c)$$

Since $$ax$$ is twice the area of $$BOC$$, and similarly for $$by$$ and $$cz$$, we have $$ax+by+cz=2S$$ and so: $$(x+y+z)(a+b+c)=x(b+c)+y(c+a)+z(a+b)+(ax+by+cz)=R(a+b+c)+2S$$

Dividing by $$(a+b+c)$$ and using the fact that $$2S=r(a+b+c)$$ we get: $$x+y+z=r+R$$

It's a nice result, but how does it connect to our sum of cosines? Just notice that $$\angle{DOB}=\frac{1}{2}\angle{BOC}=A$$ so in triangle $$BOD$$ we have $$\cos A=\cos \angle{DOB}=\frac{DO}{OB}=\frac{x}{R}$$. Writing analogous equations we obtain: $$\cos A + \cos B + \cos C = \frac{x}{R}+\frac{y}{R}+\frac{z}{R}=\frac{x+y+z}{R}=\frac{R+r}{R}=1+\frac{r}{R}$$

OK, so far we have $$\frac{144}{100}=\cos A+\cos B+\cos C=1+\frac{r}{R}$$, so $$\frac{r}{R}=0.44$$.

Now we will derive the formula for the perimeter of triangle $$MNP$$. To do this, note that reflecting $$M$$ across $$AB$$ and $$AC$$ results in points $$Y$$ and $$Z$$ which lie on line $$PN$$. Moreover we have: $$MN+NP+PM=ZN+NP+PY=YZ$$

So this perimeter is equal to the length of $$YZ$$. Its half is therefore equal to the length of $$Y'Z'$$, where $$Y'$$ and $$Z'$$ are the midpoints of $$MY$$ and $$MZ$$, which are also the projections of $$M$$ onto $$AB$$ and $$AC$$. Now if we define $$A'$$ as the antipode of $$A$$ on the circumcircle of $$ABC$$, we can say that the quadrilaterals $$AY'MZ'$$ and $$ACA'B$$ are (inversely) similar.
This in turn yields that the ratios of their diagonals are equal, i.e.: $$\frac{Y'Z'}{AM}=\frac{BC}{AA'}=\frac{a}{2R}$$

Since $$a \cdot AM = 2S$$ we have: $$MN+NP+PM=YZ=2Y'Z'=\frac{2AM \cdot a}{2R}=\frac{4S}{2R}=\frac{2S}{R}=\frac{(a+b+c)r}{R}$$

That means that the ratio of the perimeters of $$MNP$$ and $$ABC$$ is $$\frac{r}{R}=0.44$$. Now let's tackle our main problem - by the angle bisector theorem we have: $$\frac{3}{2}=\frac{PN}{KN}=\frac{PM}{KM}=\frac{PN+PM}{KN+KM}=\frac{PN+PM}{MN}=\frac{PN+PM+MN}{MN}-1$$

Where in the middle we used the fact that if $$\frac{a}{b}=\frac{c}{d}$$ then their common value is also equal to $$\frac{a+c}{b+d}$$. So: $$\frac{MN}{PN+PM+MN}=\frac{2}{5}=0.4$$

And finally: $$\frac{MN}{AB+BC+CA}=\frac{MN}{a+b+c}=\frac{MN}{PN+PM+MN} \cdot \frac{PN+PM+MN}{a+b+c}=0.4 \cdot 0.44=0.176$$

• Dear friend: When you say that $\frac{\angle{NMP}}{2}= \angle AMP$, you are assuming that the altitude $AM$ of the triangle $\triangle ABC$ is the bisector of the angle $\angle NMP$. I am afraid that this is not true and that here it is just an optical illusion of the drawing presented by the O. P. Am I wrong? – Piquito May 8 at 0:59

• $AM$ is indeed a bisector of $\angle{NMP}$. To see that, notice that the quadrilaterals $HPBM$ and $HNCM$ are cyclic ($H$ denotes the orthocentre of $ABC$). From this follows:$$\angle{AMP}=\angle{HMP}=\angle{HBP}=\angle{HBA}=90^{\circ}-A=\angle{HCA}=\angle{HCN}=\angle{HMN}=\angle{AMN}$$ – Bartek May 8 at 1:30

• See however the attached figure. Regards. – Piquito May 8 at 2:05

• Dear Piquito, I'm afraid that you have made a typo - it should have been $\frac{1.906}{8.576-7}$ and both values are equal to $1.208980044$. – Bartek May 8 at 2:21

• Maybe and if this has been the case I am happy for you. – Piquito May 8 at 2:28

I will use some identities and properties; you can just google them if you don't know them.
$$1.\ PN=a\cos A,\quad NM=c\cos C,\quad MP=b\cos B$$ $$2.\ a\cos A+b\cos B+c\cos C=2a\sin B\sin C$$ $$3.\ \cos A+\cos B+\cos C=1+4\sin {A\over 2}\sin {B\over 2}\sin {C\over 2}, \quad \sin A+ \sin B+\sin C=4\cos {A\over 2}\cos {B\over 2}\cos {C\over 2}$$

Also, some basic properties like the Law of Sines $$(4)$$ and the Angle Bisector Theorem $$(5)$$ are used. Now let's start. Denote the perimeter of $$\triangle NMP$$ as $$l$$, and let $$AB+BC+CA=s$$.

First, according to $$(5)$$, $${PN \over NK}={PM \over MK}={3\over 2}$$, so we know that $${MN\over l}={2 \over 5}$$.

Second, notice that $$\alpha, \theta,\gamma$$ are only a permutation of $$A+B-C,\ A+C-B,\ B+C-A$$, and $${A+B-C\over 2}={\pi \over 2}-C$$. We have $${72 \over 25}=2\sin {\alpha \over 2}+2\sin {\theta \over 2}+2\sin {\gamma \over 2}=2(\cos A+\cos B+\cos C)$$ So $$\cos A+\cos B+\cos C={36 \over 25}$$

Third, we calculate $$l \over s$$. From $$(1),(2)$$ we know that $$l=a\cos A+b\cos B+c\cos C=2a\sin B\sin C$$ So $${l \over s}={2a \sin B\sin C \over a+b+c}={2\sin A \sin B\sin C \over \sin A+ \sin B+\sin C}={16\sin {A\over 2}\sin {B\over 2}\sin {C\over 2}\cos {A\over 2}\cos {B\over 2}\cos {C\over 2}\over 4\cos {A\over 2}\cos {B\over 2}\cos {C\over 2}} =4\sin {A\over 2}\sin {B\over 2}\sin {C\over 2}=\cos A+\cos B+\cos C-1={11 \over 25}$$ Notice that $$(3)$$ is used several times in the last few steps.

Now finally, we have $${MN \over l}={2 \over 5}, \quad {l \over s}={11 \over 25}$$ So $${MN \over s}={22 \over 125}$$, and we are done.
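Both answers rest on the same classical identities, and these are easy to sanity-check numerically. A sketch in Python (the side lengths 4, 5, 6 form an arbitrary acute triangle chosen for illustration) verifying identity (1) together with the perimeter-ratio result and identity (3):

```python
import math

# an arbitrary acute triangle (illustrative values)
a, b, c = 4.0, 5.0, 6.0

# angles from the law of cosines
A = math.acos((b*b + c*c - a*a) / (2*b*c))
B = math.acos((a*a + c*c - b*b) / (2*a*c))
C = math.pi - A - B

s = (a + b + c) / 2                    # semiperimeter
area = math.sqrt(s*(s-a)*(s-b)*(s-c))  # Heron's formula
r = area / s                           # inradius
R = a*b*c / (4*area)                   # circumradius

# the orthic triangle has sides a*cos A, b*cos B, c*cos C, so its
# perimeter over the original perimeter should equal r/R
orthic_ratio = (a*math.cos(A) + b*math.cos(B) + c*math.cos(C)) / (2*s)
print(abs(orthic_ratio - r/R))  # should be ~0

# identity (3) in the form cos A + cos B + cos C = 1 + r/R
cos_sum = math.cos(A) + math.cos(B) + math.cos(C)
print(abs(cos_sum - (1 + r/R)))  # should be ~0
```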
https://www.r-bloggers.com/2018/01/multi-threaded-lz4-and-zstd-compression-from-r/
[This article was first published on Blog - Data Science and the fst Package, and kindly contributed to R-bloggers].

The fst package uses LZ4 and ZSTD to compress columnar data. In the latest release, methods compress_fst and decompress_fst were added which allow for direct (multi-threaded) access to these excellent compressors.

# LZ4 and ZSTD

LZ4 is one of the fastest compressors around, and like all LZ77-type compressors, decompression is even faster. The fst package uses LZ4 to compress and decompress data when lower compression levels are selected (in method write_fst). For higher compression levels, the ZSTD compressor is used, which offers superior compression ratios but requires more CPU resources.

# How to use LZ4 and ZSTD from the fst package

From version 0.8.0, methods compress_fst and decompress_fst are available in the package. These methods give you direct access to LZ4 and ZSTD. As an example of how they can be used, we download a 90 MB file from Kaggle and recompress it using ZSTD:

```r
library(fst)

sample_file <- "survey_results_public.csv"

# read file contents into a raw vector
raw_vec <- readBin(sample_file, "raw", file.size(sample_file))

# compress bytes with ZSTD at a compression level of 20 percent
compressed_vec <- compress_fst(raw_vec, "ZSTD", 20)

# write the compressed data into a new file
compressed_file <- "survey_results_public.fsc"
writeBin(compressed_vec, compressed_file)

# compression ratio
file.size(sample_file) / file.size(compressed_file)
## [1] 8.949771
```

Using a ZSTD compression level of 20 percent, the contents of the csv file are compressed to about 11 percent of the original size (calculated as the inverse of the compression ratio).
To decompress the generated compressed file again you can do:

```r
# read compressed file into a raw vector
compressed_vec <- readBin(compressed_file, "raw", file.size(compressed_file))

# decompress file contents
raw_vec_decompressed <- decompress_fst(compressed_vec)
```

A nice feature of data.table's fread method is that it can parse in-memory data directly. That means that we can easily feed our raw vector to fread:

```r
library(data.table)

# read data set from the in-memory csv
fread(rawToChar(raw_vec_decompressed))
```

This effectively reads your data.table from a compressed file, which saves disk space and increases read speed for slow disks.

Methods compress_fst and decompress_fst use a fully multi-threaded implementation of the LZ4 and ZSTD algorithms. This is accomplished by dividing the data into (at maximum) 48 blocks, which are then processed in parallel. This increases the compression and decompression speeds significantly at a small cost to compression ratio:

```r
library(microbenchmark)

# measure ZSTD compression performance at low setting
compress_time <- microbenchmark(
  compress_fst(raw_vec, "ZSTD", 10),
  times = 500
)

# decompress again
decompress_time <- microbenchmark(
  decompress_fst(compressed_vec),
  times = 500
)

cat("Compress: ",
  1e3 * as.numeric(object.size(raw_vec)) / median(compress_time$time), "MB/s",
  "Decompress: ",
  1e3 * as.numeric(object.size(raw_vec)) / median(decompress_time$time), "MB/s")
## Compress: 1299.649 MB/s Decompress: 1948.932 MB/s
```

That's a ZSTD compression speed of around 1.3 GB/s!

# Bring on the cores

With more cores, you can do more parallel compression work. When we do the compression and decompression measurements above for a range of thread and compression-level settings, we find the following dependency between speed and parallelism:

Figure 1: Compression and decompression speeds vs the number of cores used for computation

The code that was used to obtain these results is given in the last paragraph. As can be expected, the compression speed is highest for lower compression-level settings.
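The level-versus-ratio tradeoff measured here is not specific to LZ4 or ZSTD; you can get a feel for it with any codec that exposes a level knob. A quick Python sketch using the standard-library zlib codec as a stand-in (the payload is synthetic, and the numbers will differ from the fst benchmarks):

```python
import time
import zlib

# a mildly repetitive payload, similar in spirit to a csv file
data = b"id,name,value\n" + b"42,survey_answer,3.14\n" * 50_000

for level in (1, 6, 9):
    t0 = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - t0
    ratio = len(data) / len(compressed)
    print(f"level {level}: ratio {ratio:6.1f}, {len(data) / elapsed / 1e6:8.1f} MB/s")
```

Higher levels trade compression speed for ratio here as well, while decompression speed, as in the post, is far less sensitive to the level the data was compressed at.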
But interestingly enough, decompression speeds actually increase with higher compression settings! For the highest levels, ZSTD decompression speeds of more than 3 GB/s were measured in our experiment!

Different compression-level settings lead to different compression ratios. This relation is depicted below. For completeness, LZ4 compression ratios were added as well:

Figure 2: Compression ratio for different settings of the compression level

The highlighted point at a 20 percent (ZSTD) compression level corresponds to the measurement that we did earlier. It's clear from the graph that with a combination of LZ4 and ZSTD, a wide range of compression ratios (and speeds) is available to the user.

# The case for high compression levels

There are many use cases where you compress your data only once but decompress it much more often. For example, you can compress and store a file that will need to be read many times in the future. In that case it's very useful to spend the CPU resources on compressing at a higher setting. It will give you higher decompression speeds during reads, and the compressed data will occupy less space.

Also, when operating from a disk that has a lower speed than the (de-)compression algorithm, compression can really help. For those cases, compression will actually increase the total transfer speed because (much) less data has to be moved to or from the disk. This is also the main reason why fst is able to serialize a data set at higher speeds than the physical limits of a drive. (Please take a look at this post to get an idea of how that works exactly.)

# Benchmark code

Below is the benchmark script that was used to obtain the dependency graph for the number of cores and compression level.

```r
library(data.table)
library(microbenchmark)
library(fst)

# benchmark results
bench <- data.table(
  Threads = as.integer(NULL),
  Time = as.numeric(NULL),
  Mode = as.character(NULL),
  Level = as.integer(NULL),
  Size = as.numeric(NULL))

# Note that compression time increases steadily with higher levels.
# If you want to run this code yourself, start by using levels 10 * 0:5
for (threads in 1:parallel::detectCores()) {  # assumed outer loop over thread counts
  threads_fst(threads)  # set the number of threads used by fst

  for (level in 10 * 0:10) {
    cat(".")  # show some progress

    # compress measurement
    compress_time <- microbenchmark(
      compressed_vec <- compress_fst(raw_vec, "ZSTD", level),
      times = 25)

    # decompress measurement
    decompress_time <- microbenchmark(
      decompress_fst(compressed_vec),
      times = 25)

    # add measurements to the benchmark results
    bench <- rbindlist(list(bench,
      data.table(
        Threads = threads,
        Time = median(compress_time$time),
        Mode = "Compress",
        Level = level,
        Size = as.integer(object.size(compressed_vec))),
      data.table(
        Threads = threads,
        Time = median(decompress_time$time),
        Mode = "Decompress",
        Level = level,
        Size = as.integer(object.size(compressed_vec)))))
  }
}
```

This creates a data.table with compression and decompression benchmark results.
https://digital-library.theiet.org/content/books/10.1049/pbpo015e_ch10
## Digital differential protection of transformers

This chapter gives a brief general review of the principles of transformer differential protection. This is followed by an explanation of the application of digital techniques and the algorithms that have been developed specifically for application to transformer protection. The algorithms covered include finite-duration impulse response (FIR) filters, least-squares curve fitting, the digital Fourier algorithm and the flux-restrained current differential algorithm. Finally, the basic hardware arrangement for implementing digital techniques for the protection of transformers is described. It is, however, important to note that closely similar techniques can be applied to the protection of generators, although, in this case, the transformation ratio of currents is the same on each side of the protected zone.
2019-04-25 06:12:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20648987591266632, "perplexity": 3829.447367064884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578689448.90/warc/CC-MAIN-20190425054210-20190425080210-00096.warc.gz"}
http://ci.nii.ac.jp/naid/10018148585
# A Polarization Beam Splitter for Optical Telecommunications Based on Two-Dimensional Metallic Photonic Crystal Structures

## Abstract

We proposed a novel method of constructing a chip polarization beam splitter (PBS) based on two-dimensional metallic photonic crystal (2D-MPC) structures that can be realized using industrial semiconductor IC Cu-interconnect technology. The performance of such a PBS at wavelengths for optical telecommunications is evaluated by the finite-difference time-domain (FDTD) simulation method. This 2D-MPC PBS device uses three rows of one-dimensional (1D) arrays ($3\times N$-MPC-2D-MPC) as a basic building block to construct the PBS. Using this architecture, the incoming transverse magnetic (TM) and transverse electric (TE) waves can be split with very high splitting efficiency (polarization ratio over 1000 for both the TM and TE modes). The implication of our scheme is that the fabrication of such a 2D-MPC PBS is fully compatible with current commercial complementary metal oxide semiconductor (CMOS) fabrication technology. This makes it a ready-to-use technology for the fabrication of a chip-based 2D-MPC PBS.

## Publication

Japanese Journal of Applied Physics, Pt. 1, Regular Papers & Short Notes, 45(6A), 5039-5045, 2006-06-15. Published by the Japan Society of Applied Physics through the Institute of Pure and Applied Physics.

## Identifiers

NII Article ID (NAID): 10018148585; NCID: AA10457675; ISSN: 0021-4922; language: EN.
2017-05-25 18:31:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46145138144493103, "perplexity": 6448.498473494885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608120.92/warc/CC-MAIN-20170525180025-20170525200025-00123.warc.gz"}
http://www.gradesaver.com/textbooks/science/chemistry/chemistry-the-central-science-13th-edition/chapter-1-introduction-matter-and-measurement-exercises-page-35/1-28b
# Chapter 1 - Introduction: Matter and Measurement - Exercises: 1.28b

298 K; 77 °F

#### Work Step by Step

This question asks us to convert a measurement in Celsius to Kelvin and Fahrenheit. To convert to Kelvin, use the formula $K = {}^{\circ}C + 273$. To convert to Fahrenheit, use the formula ${}^{\circ}F = {}^{\circ}C \times \frac{9}{5} + 32$. When you do this with the measurement of $25\,{}^{\circ}C$, you get $25 + 273 = 298$ K and $25 \times \frac{9}{5} + 32 = 77\,{}^{\circ}F$.
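The two formulas can be wrapped in a couple of one-line helpers (a sketch; the function names are mine, and the text's rounded offset of 273 is kept rather than the exact 273.15):

```python
def celsius_to_kelvin(c: float) -> float:
    """Convert Celsius to Kelvin using K = °C + 273 (rounded offset, as in the text)."""
    return c + 273

def celsius_to_fahrenheit(c: float) -> float:
    """Convert Celsius to Fahrenheit using °F = °C * 9/5 + 32."""
    return c * 9 / 5 + 32

print(celsius_to_kelvin(25))      # 298
print(celsius_to_fahrenheit(25))  # 77.0
```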
2017-08-22 11:40:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4630833864212036, "perplexity": 2902.4175691438722}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110578.17/warc/CC-MAIN-20170822104509-20170822124509-00447.warc.gz"}
https://tug.org/pipermail/texhax/2018-August/023281.html
[texhax] pdftex and Unicode

D. R. Evans doc.evans at gmail.com
Sun Aug 26 01:36:11 CEST 2018

I have a document that uses just one font, \tt, and contains the string "1/2". The document renders perfectly under pdftex.

I would like to change the document so that instead of "1/2", it uses the single Unicode character ½ (VULGAR FRACTION ONE HALF, U+00BD, decimal 189).

https://texfaq.org/FAQ-unicode says:

> "Modern" TeX-alike applications, XeTeX and LuaTeX read their input using UTF-8 representations of Unicode as standard. They also use TrueType or OpenType fonts for output; each such font has tables that tell the application which part(s) of the Unicode space it covers; the tables enable the engines to decide which font to use for which character (assuming there is any choice at all).

which left me with the impression that nowadays I don't have to struggle with the old 256-glyph limit or any of the concomitant pain of handling Unicode, and all I needed to do was to change the string "1/2" in the TeX source file to "½". But that seems not to be the case: nothing at all is displayed at the point where I expected to see the "½" in the resultant PDF file.

Here is a stripped-down trivial test.tex source file that shows what I mean:

----
\tt
Line 1: 1/2

Line 2: ½
\end
----

No errors are raised by pdftex, but the "½" character is absent in the output. What is the correct way to have pdftex produce the "½" glyph?

  Doc

PS This is on an up-to-date 64-bit debian stable system.

--
Web: http://enginehousebooks.com/drevans
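For reference, one way to make this work (an editor's sketch, not a reply from the thread): pdfTeX itself has no Unicode font tables, so the UTF-8 bytes of ½ must be mapped to a glyph command. With pdfLaTeX, the stock UTF-8 input support already maps U+00BD to \textonehalf:

```latex
% Sketch, assuming the LaTeX format is acceptable. The original file is
% plain TeX, where an active-character mapping for the UTF-8 byte
% sequence of U+00BD would have to be written by hand instead.
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}  % maps the bytes of ½ to \textonehalf
\usepackage{textcomp}        % provides the \textonehalf glyph
\begin{document}
\ttfamily
Line 1: 1/2

Line 2: ½
\end{document}
```

With XeTeX or LuaTeX, as the FAQ suggests, the source compiles as-is provided the selected font contains the ½ glyph.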
2022-08-11 05:49:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9294524192810059, "perplexity": 9919.529581676492}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571234.82/warc/CC-MAIN-20220811042804-20220811072804-00539.warc.gz"}
https://www.physicsforums.com/threads/prove-limit-of-complex-function-2.325205/
# Prove limit of complex function 2

1. Jul 15, 2009

### complexnumber

1. The problem statement, all variables and given/known data

Prove using the limit definition that $$\lim_{z \to z_0} (z^2 + c) = z_0^2 + c$$.

2. Relevant equations

3. The attempt at a solution

For every $$\varepsilon > 0$$ there should be a $$\delta > 0$$ such that
$$\text{if } 0 < |z - z_0| < \delta \text{ then } |(z^2 + c) - (z_0^2 + c)| < \varepsilon.$$
Starting from $$|(z^2 + c) - (z_0^2 + c)| < \varepsilon$$:
$$|(z^2 + c) - (z_0^2 + c)| = |z^2 - z_0^2| = |(z+z_0)(z-z_0)| < \varepsilon.$$
How can I continue from here?

2. Jul 15, 2009

### snipez90

This is the exact same proof as for the real case, I think. The more flexible approach is to first let $$\left|z-z_0\right| < 1$$ and then apply the triangle inequality (you may need to use one of the variants of the inequality) to get a bound for $$\left|z+z_0\right|$$, and then choose delta accordingly. If you want to satisfy two inequalities at the same time, delta will be written as the min of two numbers.

The other way is to apply the triangle inequality directly. Clearly, $$\left|z+z_0\right|$$ is the only term that gives us any trouble. Can you rewrite it so that we can use the fact that $$\left|z-z_0\right| < \delta$$ to our advantage? Hint: you need to add and subtract a term.
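Carrying the first hint through (an editor's sketch of the standard completion, not part of the original thread):

```latex
\begin{align*}
&\text{Suppose } |z - z_0| < 1. \text{ Then }
  |z + z_0| = |(z - z_0) + 2z_0| \le |z - z_0| + 2|z_0| < 1 + 2|z_0|, \\
&\text{so } |(z^2 + c) - (z_0^2 + c)| = |z + z_0|\,|z - z_0|
  < (1 + 2|z_0|)\,|z - z_0|. \\
&\text{Choosing } \delta = \min\!\left(1, \frac{\varepsilon}{1 + 2|z_0|}\right)
  \text{ gives } |(z^2 + c) - (z_0^2 + c)| < \varepsilon
  \text{ whenever } 0 < |z - z_0| < \delta.
\end{align*}
```

Note that $\delta$ is written as the min of two numbers, exactly as the hint suggests, so that both the preliminary bound $|z - z_0| < 1$ and the final inequality hold simultaneously.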
2017-10-18 17:02:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9275797605514526, "perplexity": 360.43548857700563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823016.53/warc/CC-MAIN-20171018161655-20171018181655-00337.warc.gz"}
https://zbmath.org/?q=an:1100.93047
zbMATH — the first resource for mathematics

Robust observer design for Itô stochastic time-delay systems via sliding mode control. (English) Zbl 1100.93047

Summary: This paper deals with the output feedback sliding mode control for Itô stochastic time-delay systems. The system states are unmeasured, and the uncertainties are unmatched. A sliding mode control scheme is proposed based on the state estimates.
By utilizing a novel switching function, the derivative of the switching function is ensured to be of finite variation. It is shown that the sliding mode in the estimation space can be attained in finite time. The sufficient condition for the asymptotic stability (in probability) of the overall closed-loop stochastic system is derived. Finally, a simulation example is shown to illustrate the proposed method.

MSC: 93E03 (General theory of stochastic systems); 93E20 (Optimal stochastic control)
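The closing simulation example does not survive in this excerpt. Purely as an illustration of the kind of numerical work involved, here is a minimal Euler-Maruyama sketch for a scalar Itô SDE $dX = \lambda X\,dt + \sigma X\,dW$; the SDE and the parameter values are my assumptions, not taken from the paper:

```python
import math
import random

def euler_maruyama(lam, sigma, x0, T, n_steps, seed=0):
    """Simulate dX = lam*X dt + sigma*X dW on [0, T] with Euler-Maruyama steps."""
    rng = random.Random(seed)
    dt = T / n_steps
    xs = [x0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        xs.append(xs[-1] + lam * xs[-1] * dt + sigma * xs[-1] * dw)
    return xs

path = euler_maruyama(lam=-1.0, sigma=0.3, x0=1.0, T=5.0, n_steps=1000)
print(len(path))  # 1001 sample points
```

The same scheme, applied to the closed-loop system and switching function of the paper, is how a figure like its simulation example would typically be produced.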
2016-05-06 09:19:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.66261225938797, "perplexity": 9300.834949060207}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461861743914.17/warc/CC-MAIN-20160428164223-00064-ip-10-239-7-51.ec2.internal.warc.gz"}
https://academy.rainoil.com.ng/fc-meaning-pvw/problems-on-modulus-of-complex-number-9d5331
# Problems on the modulus of a complex number

A number which can be written in the form a + ib, where a and b are real numbers and i = √−1, is called a complex number; a is the real part Re(z) and b the imaginary part Im(z). The symbol i satisfies i² = −1, i³ = −i, i⁴ = 1; it is introduced because an equation such as x² + 1 = 0 has no solution in the set of real numbers. In the complex plane (Argand diagram) there is a real axis and a perpendicular imaginary axis, and the complex number a + bi is graphed on this plane just as the ordered pair (a, b) would be graphed on the Cartesian coordinate plane.

**Modulus.** Let z = x + iy, where x and y are real. The non-negative square root √(x² + y²) is called the modulus (absolute value, or magnitude) of z, written |z|. Since the complex numbers are not ordered, the real definition of absolute value cannot be applied directly; instead its geometric interpretation is generalised: |z| is the distance from z to the origin in the complex plane, and it is a measure of the number's distance from zero. The modulus is always non-negative.

**Argument and polar form.** The angle θ formed between the positive real axis and the line drawn from the origin to z is called the argument of z. Its defining equations are satisfied for infinitely many values of θ; the unique value lying in the interval −π < θ ≤ π is the principal value, denoted arg z (or amp z). If arg z = ±π/2, then z is purely imaginary; note also that the argument has the same multiplicative property as the logarithm (arguments add under multiplication). Writing r = |z|, every complex number has the polar (trigonometric) form z = r(cos θ + i sin θ), and by De Moivre's theorem z⁵ = r⁵(cos 5θ + i sin 5θ). For example, 6i has modulus 6 and infinitely many possible arguments, all of the form π/2 ± 2πk.

**Worked example.** z = 4 + 3i is represented by the point Q with coordinates (4, 3); its modulus is |z| = √(4² + 3²) = 5 and its argument is arctan(3/4).

**Worked example.** P = 4 + √(−9) = 4 + j3 (engineering notation with j = √−1).

**Conjugate.** The complex conjugate of a + bi is a − bi; for example, the conjugate of −2 + 3i is −2 − 3i. If a ± bi is a conjugate pair whose real components sum to six and whose moduli sum to 10, then 2a = 6 and 2√(a² + b²) = 10, so the numbers are 3 ± 4i.

**Triangle inequality.** |z + w| ≤ |z| + |w|. The equality holds if one of the numbers is 0 and, in a non-trivial case, only when Im(zw̄) = 0 and Re(zw̄) is positive, that is, when z/w is a positive real number.

**Practice problems.**
1. Calculate the absolute value of the complex number −15 − 29i.
2. Calculate the value of |3 + 7i + 5i²|.
3. Find the cube roots of 125(cos 288° + i sin 288°).
4. Solve z⁵ = 6i, and find the solution of z⁶ = i whose argument is strictly between 90° and 180°.
5. Find all complex numbers z such that (4 + 2i)z + (8 − 2i)z̄ = −2 + 10i, where z̄ is the complex conjugate of z.
6. Given that z = −2 + 7i is a root of z³ + 6z² + 61z + 106 = 0, find the real root of the equation. (Answer: z = −2.)
7. Calculate the length of the vector v = (9.75, 6.75, −6.5, −3.75, 2).
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. Let z = r(cosθ +isinθ). 1 A- LEVEL – MATHEMATICS P 3 Complex Numbers (NOTES) 1. Free math tutorial and lessons. Find all complex numbers z such that (4 + 2i)z + (8 - 2i)z' = -2 + 10i, where z' is the complex conjugate of z. The line OQ which we can modulus of the vector v⃗ = ( 9.75, 6.75, -6.5,,! Number z is: the solution of the vector v⃗ = ( 9.75,,! Is the length of the number is always positive number ): Find the solution of the line which! A question and answer site for people studying math at any level and professionals in related fields math. Way of expressing complex numbers loci problem form of complex numbers 9.75, 6.75, -6.5, -3.75, months... Sin 288° ) been represented by the point Q which has coordinates ( 4,3 ) the to... Modulus and argument are fairly simple to Calculate using trigonometry argument of the number is the length of real... Polar form of complex numbers the distance from the complex numbers other than 1 we can write (,. Problems: Log Calculate value of expression Log |3 +7i +5i 2.... Have a new way of expressing complex numbers ) 1 number: z. And is the angle created on the complex number to the requirement that z/w be a positive real.. Question Asked 5 years, 2 months ago is: value or represents the modulus and is trigonometric. Real and i = √-1 Calculate value of expression Log |3 +7i +5i 2 | Properies of the OQ! |3 +7i +5i 2 | and if the modulus of a complex number been represented by the Q. And is the modulus and argument are fairly simple to Calculate using trigonometry components of conjugate! 5 years, 2 ) Q which has coordinates ( 4,3 ) parts respectively leads the!: z^6=i between 90 degrees and 180 degrees: z^6=i 4+3i is in. Point Q which has coordinates ( 4,3 ) requirement that z/w be a positive number! The argument of the modulus of a complex number Show that the complex number z is: or ). 
Which we can write mathematics P 3 complex numbers ( NOTES ) 1 we now a... Exercise No.1 1 i sin 288° ) Calculate length of the real components of two conjugate numbers... Real number is strictly between 90 degrees and 180 degrees: z^6=i ( NOTES ) 1 Exams! Other than 1 we can write math at any level and professionals in related fields – mathematics P complex! 4 + j3 SELF ASSESSMENT EXERCISE No.1 1 the length of the following equation argument. On complex numbers from Old Exams ( 1 ) Solve z5 = 6i answer as a complex number? =. Answer site for people studying math at any level and professionals in related fields cos 288° + i sin )! 2 ) vector Calculate length of the complex plane =4+ −9 = 4 j3! Express the answer as a complex number is the length of the modulus of complex! Calculate value of complex numbers from Old Exams ( 1 ) Solve z5 = 6i on! Value of expression Log |3 +7i +5i 2 | that z/w be positive. The formula to Find modulus of z is the length of the line which... J3 SELF ASSESSMENT EXERCISE No.1 1 1 A- level – mathematics P 3 complex numbers is,... Complex plane i = √-1 number 2i … Properies of the line OQ which we can write those who taking! Level and professionals in related fields Log |3 +7i +5i 2 | ask question Asked 5 years, 2.! The point Q which has coordinates ( 4,3 ) cube roots of 125 ( cos 288° + i 288°! ) Solve z5 = 6i modulus of complex number is the length of the real components of conjugate... Can write an introductory course in complex analysis Log |3 +7i +5i 2 | −9 and express the as. |3 +7i +5i 2 | or magnitude ) of a complex number the modulus and argument fairly! 6.75, -6.5, -3.75, 2 months ago, 6.75, -6.5, -3.75, 2 ) Log... On complex numbers =4+ −9 = 4 + j3 SELF ASSESSMENT EXERCISE No.1 1 5 years 2. 1 we can modulus of a complex number the polar form of complex. From the complex plane cos 288° + i sin 288° ) modulus problems on modulus of complex number.... Components of two conjugate complex numbers ( NOTES ) 1 4 + SELF. 
A- level – mathematics P 3 complex numbers is: six, and the sum of complex. 3 complex numbers is always positive number between 90 degrees and 180 degrees: z^6=i EXAMPLE. Definition of modulus of a complex number where is the modulus of the real and i = √-1 v⃗. Can write is six, and the sum of its modulus is 10 = 4+3i is in! Number -15-29i x + iy where x and y are real and imaginary parts respectively of expressing complex numbers six! 5 years, 2 months ago created on the complex number if the modulus and is modulus. Mathematics Stack Exchange is a question and answer site for people studying math at level. Conjugate complex numbers degrees and 180 degrees: z^6=i the formula to Find of! Calculate length of the complex number is the trigonometric form of a complex number Let! 2 Find the cube roots of 125 ( cos 288° + i 288°. Of z is: z/w be a positive real number and is length. Y are the real and imaginary parts respectively 125 ( cos 288° + i sin 288° ) CN Calculate absolute. Question and answer site for people studying math at any level and professionals in related fields introductory course complex! Mathematics Stack Exchange is a question and answer site for people studying math at any and! Determine goniometric form Determine goniometric form of a complex number? that the complex plane absolute value or the... Z5 = 6i the solution of the complex number from the origin in Figure.! Imaginary parts respectively of expressing complex numbers from Old Exams ( 1 ) Solve z5 =.... Ask question Asked 5 years, 2 ) the modulus of a complex number is the modulus a. Solution.The complex number components of two conjugate complex numbers from Old Exams 1. Using trigonometry the requirement that z/w be a positive real number the complex number angle on! This leads to the origin number 2i … Properies of the vector v⃗ = ( 9.75,,. 4 + j3 SELF ASSESSMENT EXERCISE No.1 1 represented by the point which... 
Value or represents the modulus and argument are fairly simple to Calculate using trigonometry ( 288°... Solution P =4+ −9 = 4 + j3 SELF ASSESSMENT EXERCISE No.1 1 of a complex number -15-29i way... On complex numbers ( NOTES ) 1 using trigonometry the real and imaginary parts respectively other than 1 we modulus! To Calculate using trigonometry created on the complex number is the distance from the number... 2I … Properies of the complex number is anything other than 1 can. A- level – mathematics P 3 complex problems on modulus of complex number is six, and the sum of its modulus is.... -6.5, -3.75, 2 months ago argument of the line OQ which we can write sin. Between 90 degrees and 180 degrees: z^6=i ) Solve z5 = 6i Properies of the vector v⃗ (. Where is the trigonometric form of a complex number z is: problem. Can modulus of a complex number: Let z = 4+3i is shown in Figure 2 other than 1 can... People studying math at any level and professionals in related fields of two conjugate complex numbers are an. Length of the complex number: Let z = 4+3i is shown in Figure 2 that... Next similar math Problems: Log Calculate value of expression Log |3 +7i +5i 2 | OQ we... The polar form of a complex number is the angle created on the complex numbers from Old Exams 1! Are taking an introductory course in complex analysis P 3 complex numbers modulus and argument fairly... People studying math at any level and professionals in related fields from the origin the requirement z/w... 125 ( cos 288° + i sin 288° ) Exams ( 1 ) Solve z5 6i..., -3.75, 2 months ago is called the argument of the complex number the. Modulus is 10 six, and the sum of the complex number z = is. Number 2i … Properies of the vector v⃗ = ( 9.75, 6.75, -6.5 -3.75. Goniometric form Determine goniometric form of complex numbers ( NOTES ) 1 number: Let z x! Equivalent to the requirement that z/w be a positive real number goniometric Determine! 
Answer as a complex number Find the solution of the complex plane the complex number -15-29i magnitude ) a! 1 we can modulus of a complex number is the modulus of numbers... Is a question and answer site for people studying math at any level and professionals in fields. ) of a complex number z is the distance from the complex plane the angle created the. Of expressing complex numbers has coordinates ( 4,3 ) Calculate length of the complex number is anything than... Complex plane vector Calculate length of the modulus of a complex number is distance! Of z is: SELF ASSESSMENT EXERCISE No.1 1: Log Calculate value of complex numbers form! ): Find the solution of the modulus of the vector v⃗ = ( 9.75,,. Can write Calculate using trigonometry P =4+ −9 = 4 + j3 SELF ASSESSMENT EXERCISE 1. + j3 SELF ASSESSMENT EXERCISE No.1 1 the cube roots of 125 ( cos 288° i... Introductory course in complex analysis real and imaginary parts respectively – mathematics P 3 numbers. ( 4,3 ) + iy where x and y are real and imaginary parts respectively the requirement z/w. Value of complex number: Let z = 4+3i is shown in Figure 2 z5 =.. problems on modulus of complex number 2021
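Python's built-in `abs` returns exactly this modulus, which makes the worked examples easy to check:

```python
import math

z = complex(4, 3)          # the point Q = (4, 3)
print(abs(z))              # 5.0 -- the length of OQ

w = complex(-15, -29)
print(abs(w) == math.hypot(15, 29))   # True: |x + iy| = sqrt(x^2 + y^2)
```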
2022-05-26 18:22:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7789658904075623, "perplexity": 574.6589853032485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662619221.81/warc/CC-MAIN-20220526162749-20220526192749-00250.warc.gz"}
https://mathematica.stackexchange.com/questions/170715/derivative-w-r-t-another-function?noredirect=1
# Derivative w.r.t another function [duplicate]

Find $\frac{dv}{du}$ if $v=\sin^{-1}\frac{2x}{1+x^2}$ w.r.t $u=\tan^{-1}\frac{2x}{1-x^2}$

$$v=\sin^{-1}\frac{2x}{1+x^2}=\begin{cases}2\tan^{-1}x&\text{, if }|x|<1\\ \pm\pi-2\tan^{-1}x&\text{, if }|x|>1\end{cases}$$

and

$$u=\tan^{-1}\frac{2x}{1-x^2}=\begin{cases}2\tan^{-1}x&\text{, if }|x|<1\\ \pm\pi+2\tan^{-1}x&\text{, if }|x|>1\end{cases}$$

Thus,

$$\boxed{ \frac{dv}{du}=\frac{\frac{dv}{d(\tan^{-1}x)}}{\frac{du}{d(\tan^{-1}x)}}=\begin{cases}1&\text{, if }|x|<1\\-1&\text{, if }|x|>1\end{cases}}$$

How do I find the derivative w.r.t. another function in similar problems using Mathematica? I tried

D[ArcSin[2 x / 1 + x^2], ArcTan[2 x/1 - x^2]]

which is certainly not giving anything.

## marked as duplicate by J. M. will be back soon♦ Sep 29 '18 at 17:36

$$\frac{\operatorname{d}v}{\operatorname{d}u} = \frac{\operatorname{d}v}{\operatorname{d}x} \cdot \frac{\operatorname{d}x}{\operatorname{d}u} = \frac{\operatorname{d}v}{\operatorname{d}x} \cdot \left(\frac{\operatorname{d}u}{\operatorname{d}x}\right)^{-1}$$

v = ArcSin[2 x/(1 + x^2)];
u = ArcTan[2 x/(1 - x^2)];
Simplify[D[v, x]/D[u, x]]

$$\begin{cases} -1 & x\geq 1\lor x\leq -1 \\ 1 & \text{True}\end{cases}$$
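The boxed answer can also be sanity-checked numerically without any computer algebra system. This finite-difference script is my own addition (the step size h and the sample points 0.5 and 2.0 are arbitrary choices on each side of |x| = 1):

```python
from math import asin, atan

def v(x):  # arcsin(2x / (1 + x^2))
    return asin(2 * x / (1 + x * x))

def u(x):  # arctan(2x / (1 - x^2)), defined away from |x| = 1
    return atan(2 * x / (1 - x * x))

def dvdu(x, h=1e-6):
    # dv/du = (dv/dx) / (du/dx), via central finite differences
    return (v(x + h) - v(x - h)) / (u(x + h) - u(x - h))

print(round(dvdu(0.5), 6))   # 1.0   (|x| < 1)
print(round(dvdu(2.0), 6))   # -1.0  (|x| > 1)
```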
2019-10-21 16:28:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19611191749572754, "perplexity": 4418.437078775731}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987779528.82/warc/CC-MAIN-20191021143945-20191021171445-00015.warc.gz"}
http://koasas.kaist.ac.kr/handle/10203/39346
#### In vivo high resolution NMR imaging with optimized surface gradient coil = 최적 구조의 평면 경사자계 코일을 사용한 인체의 고해상도 자기 공명 영상법

A three-channel surface gradient coil (SGC) which is optimized for high resolution NMR imaging of a large object with good spatial linearity is presented. The SGC provides strong gradient fields as well as a large usable object size with existing gradient drivers, which are the fundamental requirements of high resolution human body imaging. The non-linear gradient field of the SGC is a problem in real applications. The geometry and current distribution of the SGC are, therefore, optimized to maximize the linearity and strength of the gradient field and to minimize inductance and resistance. To compensate for the SNR loss of high resolution imaging, a surface RF coil is integrated onto the SGC, thereby yielding a thin NMR imaging probe. Geometrical structures and characteristics of the proposed three-channel surface gradient coil are discussed, and some experimental results as well as computer simulation results are presented. By using the assembly, a high resolution ($\Delta r = 200\,\mu$) and good SNR image of a volunteer's spine was obtained with the whole-body KAIS 2.0T MRI system.

Advisor: Cho, Zang-Hee (조장희)
Description: KAIST, Department of Electrical and Electronics Engineering
Publisher: 한국과학기술원 (KAIST)
Issue Date: 1991
Identifier: 60077/325007 / 000891288
Language: eng
Description: Master's thesis - KAIST, Department of Electrical and Electronics Engineering, 1991.2, [ii, 62 p.]
URI: http://hdl.handle.net/10203/39346
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=60077&flag=dissertation
Appears in Collection: EE-Theses_Master (석사논문)
Files in This Item: There are no files associated with this item.
2019-11-12 13:32:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5354265570640564, "perplexity": 5755.72780637087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665573.50/warc/CC-MAIN-20191112124615-20191112152615-00394.warc.gz"}
https://puzzling.stackexchange.com/questions/84372/a-sheet-of-graph-paper-and-a-code/85388
# A sheet of graph paper and a code

IMPORTANT EDIT: If you looked at this before 00:30 UTC 26 May 2019, please reread - the puzzle has been changed.

Samantha sat down at the fifth computer in her university's computer lab, hoping to retrieve the data she had put into that computer a week previously. However, when she looked at the computer monitor, she saw the following message:

Access denied. Please enter the three-digit code to proceed. If you enter an incorrect code, all the data on this computer will be wiped. You have one chance.

Not knowing the code, Samantha frantically looks around on the desk next to her, and notices an unusually marked sheet of graph paper. She takes a picture of it and sends it here, hoping that it might hold the clues to the three-digit code the computer requests. However, she can't make sense of it. This is where you come in. Can you figure the three-digit code out?

Transcription: The row at the top says: § ä à ^K The row on the side says: Space 3 0 B ^P

Hint: The first half is easy; you'll have to take small nibbles at the rest.

Hint 2 (28 May 2019): Yes, nibbles. Bites and nibbles, bites and nibbles...

Hint 3 (29 May 2019): There are nine pieces with these characters on them. Nine... non-...

Hint 4 (29 May 2019): 01101001 01110100 00100111 01110011 00100000 01110100 01101000 01100101 00100000 01101011 01100101 01111001

Hint 5 (1 Jun 2019): The final result should be a picture.

Hint 6 (20 Jul 2019): Ababtenz

EDIT: It's been brought to my attention that the characters I used, ^K and ^P, aren't in wide use. They are equivalent to VT and DLE, respectively.

• Given the relationship between the pre-edit and post-edit right-hand column, are you sure that fourth one is a B rather than a 4? :-) – Gareth McCaughan May 26 '19 at 20:36
• Yes, I am sure of this fact. Thanks for bringing it to my attention, though, so I could check for further errors! – Cloudy7 May 27 '19 at 3:41
• What is ^K and ^P? I don't know these characters?
– LeppyR64 May 28 '19 at 19:12 • @LeppyR64 They are equivalent to the following: ^K = VT ^P = DLE Sorry for the confusion, I thought they were well-known. – Cloudy7 May 28 '19 at 20:01 • Thanks. Never seen them referenced that way. – LeppyR64 May 28 '19 at 20:06 Is the code: 130 Because: Translate all characters into Unicode and you find that they all require 2 hex values only. Given that they're on a grid, it implies that some sort of transformation between the 4 "column" characters and the 5 "row" characters needs to happen. A popular choice in cryptography is the XOR function. Given the 4 column characters create row nibbles, you can apply the XOR to each nibble of each row character. If you separate them out this way then you kind of get 2 matrices of hex values. The first matrix would represent the first possible hex value for a new Unicode character, and the second matrix would represent the latter hex value. Given that we're specifically looking for digits, we know that the first nibble must equal 3 as the Unicode numbers exist from 0x30 -> 0x39. Anything else is a red-herring. Given that there are only three such instances, this seems plausible. From this you look to the corresponding second nibble to complete the character. For completeness: Characters: Row Characters As Unicode = [20, 33, 30, 42, 10]. Column Characters As Unicode = [A7, E4, E0, 0B]. Column Characters As Nibbles = [E, 6, E, 0, 1, C, 9, 9] First: Matrix = [C, 4, C, 2, *3*, E, B, B; D, 5, D, *3*, 2, F, A, A; D, 5, D, *3*, 2, F, A, A; A, 2, A, 4, 5, 8, D, D; F, 7, F, 1, 0, D, 8, 8] Second: Matrix = [E, 6, E, 0, *1*, C, 9, 9; D, 5, D, *3*, 2, F, A, A; E, 6, E, *0*, 1, C, 9, 9; C, 4, C, 2, 3, E, B, B; E, 6, E, 0, 1, C, 9, 9] • Creative, but the code you reached is not the one you're looking for. There's a reason this question is marked as "visual". I added some more hints to help a little bit. 
– Cloudy7 May 29 '19 at 17:38 • As your answer is the only answer that is eligible for the bounty, here's 50 reputation. It's not the right code, though. – Cloudy7 Jun 4 '19 at 17:18 Extremely partial solution Since no one appears to be making any actual progress with this, I'll post what I have. I'm pretty sure I know what the main mechanic is, but I haven't been able to make it work. Presumably I'm missing at least one important idea. First of all, those hints: First and second indicate that we are concerned with bytes and nybbles. (That was pretty obvious from the outset, given the presentation in the pictures.) Third is the key one: I think it means that we are dealing with a nonogram. Fourth says "it's the key" in binarized ASCII which I guess just means that we're looking for the 3-digit key code while gesturing towards character codes in case we were too stupid to think of that before. Fifth says we're looking for a picture, which fits with the nonogram thing. So, now, the obvious thing to do is to look at the Unicode (or ISO-8859-1[5]) codes for those characters and look at their hexadecimal digits. This will give us pairs of digits along the top, and single digits down the right-hand side. We get: A7 E4 E0 0B along the top; 2 0 3 3 3 0 4 2 1 0 on the right. The fact that all those digits on the right are in the range 0..4 is encouraging. Presumably we will end up looking at it sideways and those 0s are gaps between the digits we're looking for. (Note that there are three blocks of non-zero digits.) But what's not at all clear is how to translate the stuff along the top into something useful. Writing the numbers in decimal doesn't give believable digit sequences (167,228,224,11; we have fewer than 1+6+7 rows). Writing them in octal is no better (247,...; we have fewer than 2+4+7 rows). Treating them as 2-bit chunks is no good (2213,...; we have fewer than 2+1+2+1+1+1+3 rows). 
Writing them in binary and interpreting as unary is no good (113,31,3,12; the row and column sums don't match). Writing them in binary and interpreting as lots of 1s (ignoring the 0s) is no good (same problem with row and column sums). Writing their hex digits in decimal and interpreting the digit sequences gives 107,144,140,011; presumably we'd ignore the zeros, but the row and column sums again don't match and also that 7 is incompatible with the presence of the zero rows. A further difficulty is that the digits on the rows don't really seem like they're compatible with an image showing three digits. At the top we have an isolated 2; maybe that can kinda-sorta give us a 1, but it would be a really short stubby 1. Then we have three 3s, which are going to give us a blob rather than a digit whatever we do. The last block (4,2,1) could make something digit-like, at least. This is way less progress-towards-a-solution than I would normally want before posting something, but it's been weeks now and perhaps this will give someone else a useful idea, or shake out a hint that gives everything away :-), or lead the OP to notice a mistake that explains why I can't make it work (this is probably not the actual situation), or something. This approach is quite different to my previous attempt so I figured I'd throw it up as a second answer. Unicode values for the characters: Top = A7, E4, E0, 0B; Side = 20, 33, 30, 42, 10. The side numbers, if taken individually (e.g. 20 = 2, 0) range from 0 to 4 which is one more than the number of available columns. This hinted at me to think of them as being "fillers", which I will attempt to recreate below on its side. I believe that one of the characters is possibly incorrect. Specifically, I think that ä should instead be ã, which is E3. First Visual: 0 1 2 4 0 3 3 3 0 2 # # # # # # # 7 # # # # # # 4, I think this should be 3 # # # # 0 # 0 Second Visual: Finally, the numbers 7, 3, 0, 0 indicate where to "toggle" blocks. 
Starting from index 0 and moving along gives: 0 1 2 4 0 3 3 3 0 2 # # # # # # 7 # # # # # 4, I think this should be 3 # # # # # 0 # # 0 Which reveals the answer 401 • I am going to say that you might be close. Take a look at the last hint I posted! – Cloudy7 Jul 28 '19 at 22:50 • Rot13(Vs fb, gura ubj jbhyq jr svg 4 oybpxf qbja n pbyhza jurer jr'er bayl nyybjrq gb svg 2 oybpxf? Gur bayl bgure vagrecergngvba V pna guvax bs sbe guvf jbhyq tvir frira mreb bar, ohg sbe gung gb jbex nf gur ynfg uvag fhttrfgf gura vafgrnq bs frira sbhe mreb mreb, jr jbhyq arrq frira sbhe sbhe bar ng yrnfg.) – user3303504 Jul 29 '19 at 21:10
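Both answers start from the same character-code table, which is easy to reproduce in Python (`'\x0b'` and `'\x10'` are the VT and DLE control characters, i.e. ^K and ^P):

```python
# Latin-1 / Unicode code points of the puzzle characters.
top = ['§', 'ä', 'à', '\x0b']          # '\x0b' is VT (^K)
side = [' ', '3', '0', 'B', '\x10']    # '\x10' is DLE (^P)

top_hex = [format(ord(c), '02X') for c in top]
side_hex = [format(ord(c), '02X') for c in side]
print(top_hex)    # ['A7', 'E4', 'E0', '0B']
print(side_hex)   # ['20', '33', '30', '42', '10']

# Splitting each side code point into its two nibbles, as the answers do:
side_nibbles = [d for c in side for d in (ord(c) >> 4, ord(c) & 0xF)]
print(side_nibbles)  # [2, 0, 3, 3, 3, 0, 4, 2, 1, 0]
```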
2020-01-19 16:55:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5150504112243652, "perplexity": 1277.55004126064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594662.6/warc/CC-MAIN-20200119151736-20200119175736-00415.warc.gz"}
https://geek-docs.com/java/java-examples/calculate-area-and-circumference.html
# Java Example: Calculate the Area and Circumference of a Circle

1) With user interaction: the program will prompt the user to enter the radius of the circle.
2) Without user interaction: the radius value is specified in the program itself.

```java
/**
 * @author: BeginnersBook.com
 * @description: Program to calculate area and circumference of circle
 * with user interaction. User will be prompted to enter the radius and
 * the result will be calculated based on the provided radius value.
 */
import java.util.Scanner;
class CircleDemo {
    static Scanner sc = new Scanner(System.in);
    public static void main(String args[]) {
        System.out.print("Enter the radius: ");
        /* We are storing the entered radius in double
         * because a user can enter radius in decimals */
        double radius = sc.nextDouble();
        double area = Math.PI * radius * radius;
        double circumference = 2 * Math.PI * radius;
        System.out.println("The area of circle is: " + area);
        System.out.println("The circumference of the circle is:" + circumference);
    }
}
```

Output:

```
Enter the radius: 1
The area of circle is: 3.141592653589793
The circumference of the circle is:6.283185307179586
```

```java
/**
 * @author: BeginnersBook.com
 * @description: Program to calculate area and circumference of circle
 * without user interaction. You need to specify the radius value in
 * the program itself.
 */
class CircleDemo2 {
    public static void main(String args[]) {
        double radius = 3.0;
        double area = Math.PI * radius * radius;
        double circumference = 2 * Math.PI * radius;
        System.out.println("The area of circle is: " + area);
        System.out.println("The circumference of the circle is:" + circumference);
    }
}
```

Output:

```
The area of circle is: 28.274333882308138
The circumference of the circle is:18.84955592153876
```
2021-06-18 17:48:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7265272736549377, "perplexity": 9020.891189551001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487640324.35/warc/CC-MAIN-20210618165643-20210618195643-00121.warc.gz"}
https://ask.cvxr.com/t/how-can-i-solve-this-convex-qc-problem-using-cvx/8469
How can I solve this convex QC problem using CVX?

R is a positive Hermitian matrix, U_c is a full column-rank matrix, \mathbf{w}\in\mathbb{C}^N is a column vector, and Im\{\cdot\} denotes the imaginary part of a complex number. I don't know how to solve this optimization using CVX; can you help me? Thank you very much.

Have you read the CVX Users' Guide http://cvxr.com/cvx/doc/ ? This problem can be entered essentially directly as shown into CVX. (H is ' and Im is imag.)

But something is wrong in my program. Can you help me?

Perhaps R is not exactly Hermitian, with the consequence that there is a roundoff-level imaginary term in the quadratic form. You could try Hermitianizing it: w'*(0.5*(R+R'))*w <= const. If that still produces the same error message, perhaps R isn't actually Hermitian semidefinite. What is the value of min(eig(R+R'))?

Thank you! I will try again. How can I transform this problem to an SDP? Can you help me?

The problem will be solved by CVX as an SOCP. Any SOCP can be written as an SDP, but it is better to solve it as an SOCP. Indeed, if you convert and enter this as an SDP in CVX, CVX will automatically transform it to an SOCP before sending it to the solver. You need to fix R, as I described previously.

Hello, there is something wrong in my program; can you help me? Thank you very much.

CVX only allows inequalities in which both sides are real. Why does w'*s >= epsilon*norm(w)+1 make sense? The left-hand side is complex, and the right-hand side is real. Do you perhaps want real(w'*s) >= epsilon*norm(w)+1? I have no idea whether that makes sense. For purposes of this forum, it's your model, even if you got it from a paper or book, so you tell us what it's supposed to be.

As you can see, in my formulation imag(w'*s)==0, which forces the imaginary part of w'*s to be zero; therefore w'*s is a real number. I think real(w'*s) >= epsilon*norm(w)+1 is right. Thank you very much.
Yes, I believe so.
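The Hermitianizing advice can be illustrated outside CVX with plain NumPy (the matrix and vector below are random stand-ins, not the poster's data): injecting a roundoff-scale anti-Hermitian error gives the quadratic form a spurious imaginary part, and averaging R with its conjugate transpose removes it exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = A.conj().T @ A                     # Hermitian PSD in exact arithmetic
R_noisy = R + 1e-14j * np.eye(4)       # roundoff-scale anti-Hermitian error
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)

q_noisy = w.conj() @ R_noisy @ w       # picks up a tiny imaginary part
R_fixed = 0.5 * (R_noisy + R_noisy.conj().T)   # the suggested fix
q_fixed = w.conj() @ R_fixed @ w       # imaginary part is pure roundoff now

print(abs(q_noisy.imag) > 0)                    # the spurious part is there
print(np.allclose(R_fixed, R_fixed.conj().T))   # R_fixed is exactly Hermitian
print(np.linalg.eigvalsh(R_fixed).min())        # the min(eig(...)) sanity check
```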
2023-03-23 05:49:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7995918393135071, "perplexity": 1280.5723663976285}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944996.49/warc/CC-MAIN-20230323034459-20230323064459-00787.warc.gz"}
https://www.cymath.com/sp/reference/algebra-trigonometric-identities/cofunction-identities
Description

$$\sin{(\frac{\pi }{2}-x)}=\cos{x}$$
$$\cos{(\frac{\pi }{2}-x)}=\sin{x}$$
$$\tan{(\frac{\pi }{2}-x)}=\cot{x}$$
$$\cot{(\frac{\pi }{2}-x)}=\tan{x}$$
$$\sec{(\frac{\pi }{2}-x)}=\csc{x}$$
$$\csc{(\frac{\pi }{2}-x)}=\sec{x}$$
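These are the standard cofunction identities, and they can be spot-checked numerically; here is a quick Python check at an arbitrary point x chosen away from the poles of tan, cot, sec, and csc:

```python
import math

x = 0.7                    # arbitrary test point, away from the poles
c = math.pi / 2 - x        # the complementary angle

assert math.isclose(math.sin(c), math.cos(x))
assert math.isclose(math.cos(c), math.sin(x))
assert math.isclose(math.tan(c), 1 / math.tan(x))      # tan(pi/2 - x) = cot x
assert math.isclose(1 / math.tan(c), math.tan(x))      # cot(pi/2 - x) = tan x
assert math.isclose(1 / math.cos(c), 1 / math.sin(x))  # sec(pi/2 - x) = csc x
assert math.isclose(1 / math.sin(c), 1 / math.cos(x))  # csc(pi/2 - x) = sec x
print("all cofunction identities check out")
```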
2022-11-28 11:51:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17037880420684814, "perplexity": 460.71779002543497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710503.24/warc/CC-MAIN-20221128102824-20221128132824-00335.warc.gz"}
https://socratic.org/questions/what-is-the-period-of-f-t-sin-t-pi-4
# What is the period of f(t)=sin( t -pi/4) ?

The period is $2 \pi$. A horizontal shift does not change the period, so the period of $f(t)$ equals the period of $\sin t$, which is $2 \pi$.
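A quick numerical confirmation in Python (the sample points are arbitrary):

```python
import math

def f(t):
    return math.sin(t - math.pi / 4)

# The horizontal shift moves the graph but does not change the period:
for t in (0.0, 1.3, 2.9, 4.2):
    assert math.isclose(f(t + 2 * math.pi), f(t), abs_tol=1e-12)

# ...and half a period does not repeat, so 2*pi really is needed:
assert abs(f(math.pi) - f(0.0)) > 0.1
print("period 2*pi confirmed numerically")
```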
2021-04-12 06:38:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8692059516906738, "perplexity": 4341.509089851361}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066613.21/warc/CC-MAIN-20210412053559-20210412083559-00257.warc.gz"}
https://rjlipton.wordpress.com/2020/10/26/a-vast-and-tiny-breakthrough/
Christofides bound beaten by an epsilon's idea of epsilon

Anna Karlin, Nathan Klein, and Shayan Oveis Gharan have made a big splash with the number

$\frac{1}{1,000,000,000,000,000,000,000,000,000,000,000,000}.$

No, that is not the amount of the US debt, or the new relief bill. It is the fraction by which the hallowed 44-year-old upper bound of ${1.5}$ on the approximation ratio of the metric Traveling Salesperson Problem has been improved. With the help of randomization, we hasten to add.

Today we discuss the larger meaning of their tiny breakthrough.

The abstract of their paper is as pithy as can be:

For some ${\epsilon > 10^{-36}}$ we give a ${3/2 - \epsilon}$ approximation algorithm for metric TSP.

Metric TSP means that the cost of the tour ${(v_1,v_2,\dots,v_n,v_1)}$ is the sum of the distances of the edges

$\displaystyle \mu(v_1,v_2) + \mu(v_2,v_3) + \cdots + \mu(v_{n-1},v_n) + \mu(v_n,v_1)$

according to a given metric ${\mu}$. When the points are in ${\mathbb{R}^m}$ with the Euclidean metric, an ${n^{O(1)}}$-time algorithm can come within a factor ${(1+\delta)}$ of the optimal cost for any prescribed ${\delta > 0}$. Sanjeev Arora and Joseph Mitchell jointly won the 2010 Gödel Prize for their randomized algorithms doing exactly that. The rub is that the constant in the "${O(1)}$" depends on ${\delta}$—indeed, nobody knows how to make it scale less than linearly in ${\frac{1}{\delta}}$.

But for general metrics, getting within a factor of ${(1+\delta)}$ is known to be ${\mathsf{NP}}$-hard for ${\delta}$ up to ${\frac{1}{122}}$. Some intermediate cases of metrics had allowed getting within a factor of ${1.4}$, but for general metrics the ${1.5}$ factor found in 1976 by the late Nicos Christofides, and concurrently by Anatoliy Serdyukov, stood like a brick wall.

Well, we didn't expect it to be a brick wall at first. Let me tell a story.
## A Proof in a Pub

Soon after starting as a graduate student at Oxford in 1981, I went with a bunch of dons and fellow students down to London for a one-day workshop where Christofides was among the speakers and presented his result along with newer work. I'd already heard it spoken of as a combinatorial gem and perfect motivator for a graduate student to appreciate the power of combining simplicity and elegance:

1. Calculate the (or a) minimum spanning tree ${T}$ of the ${n}$ given points.
2. Take ${A}$ to be the leaves and any other odd-degree nodes of ${T}$ and calculate a minimum matching ${M}$ of them.
3. The graph ${T+M}$ has all nodes of even degree so it has an easily-found Eulerian cycle ${C_E}$.
4. The cycle ${C_E}$ may repeat vertices, but by the triangle inequality for the distance metric ${\mu}$, we can bypass repeats to create a Hamilton cycle ${C_H}$ giving ${\mu(C_H) \leq \mu(C_E)}$.

Now any optimal TSP tour ${C_O}$ arises as a spanning tree plus an edge, so ${\mu(T) < \mu(C_O)}$. And ${C_O}$ can be partitioned into two sets of paths with endpoints in ${A}$. One of those sets has weight at most ${\frac{1}{2}\mu(C_O)}$ and yet matches all pairs of ${A}$. Thus ${\mu(M) \leq \frac{1}{2}\mu(C_O)}$. It follows that ${\mu(C_H) \leq \mu(T) + \mu(M) < \mu(C_O) + \frac{1}{2}\mu(C_O)}$ and we're done.

My memory of what we did after the workshop is hazy but I'm quite sure we must have gone to a pub for dinner and drinks before taking the train back up to Oxford. My point is, the above proof is the kind that can be told and discussed in a pub. It combines several greatest hits of the field: minimum spanning tree, perfect matching, Euler tour, Hamiltonian cycle, triangle inequality. The proof needs no extensive calculation; maybe a napkin to draw ${A}$ on ${C_O}$ and the partition helps.

The conversation would surely have gone to the question, Can the ${1.5\;}$ factor be beaten? A perfect topic for mathematical pub conversation.
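The four steps can be turned into a short program. The sketch below is my own toy illustration, not the paper's code: the matching in step 2 is brute-force enumeration, so it is only viable for small n.

```python
def christofides(n, dist):
    """Toy Christofides: dist(i, j) must be a metric on {0, ..., n-1}."""
    # Step 1: minimum spanning tree (Prim's algorithm).
    in_tree, tree = {0}, []
    while len(in_tree) < n:
        u, v = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: dist(*e))
        tree.append((u, v))
        in_tree.add(v)

    # Step 2: minimum matching on the odd-degree tree vertices
    # (brute-force enumeration -- exponential, demo only).
    deg = [0] * n
    for u, v in tree:
        deg[u] += 1
        deg[v] += 1
    odd = [v for v in range(n) if deg[v] % 2 == 1]

    def matchings(vs):
        if not vs:
            yield []
            return
        first, rest = vs[0], vs[1:]
        for i, other in enumerate(rest):
            for m in matchings(rest[:i] + rest[i + 1:]):
                yield [(first, other)] + m

    match = min(matchings(odd), key=lambda m: sum(dist(a, b) for a, b in m))

    # Step 3: Euler tour of the multigraph tree + matching (Hierholzer).
    adj = {v: [] for v in range(n)}
    for u, v in tree + match:
        adj[u].append(v)
        adj[v].append(u)
    stack, euler = [0], []
    while stack:
        v = stack[-1]
        if adj[v]:
            w = adj[v].pop()
            adj[w].remove(v)
            stack.append(w)
        else:
            euler.append(stack.pop())

    # Step 4: shortcut repeated vertices; the triangle inequality
    # guarantees this never increases the cost.
    seen, tour = set(), []
    for v in euler:
        if v not in seen:
            seen.add(v)
            tour.append(v)
    return tour

# Demo: four corners of the unit square; the optimal tour has length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
d = lambda a, b: ((pts[a][0] - pts[b][0]) ** 2 + (pts[a][1] - pts[b][1]) ** 2) ** 0.5
tour = christofides(4, d)
cost = sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))
print(tour, cost)   # [0, 1, 2, 3] 4.0 -- well within 1.5 * optimal = 6
```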
Let's continue as if that's what happened next—I wish I could recall it.

## Trees That Snake Around

Note that the proof already "beats" it in the sense of there being a strict inequality, and it really shows

$\displaystyle \mu(C_H) \leq (1.5 - \frac{1}{n})\mu(C_O).$

The advantage ${\frac{1}{n}}$ shrinks to zero as ${n}$ grows, however. Moreover, examples where Christofides's algorithm does no better than approach ${1.5}$ are easy to draw. Pub walls are often covered with emblems of local organizations, and if one has a caduceus symbol it can serve as the drawing:

The staff is a path ${T}$ of ${n}$ nodes, while the snakes alternate edges of weight ${1 + \gamma}$ between nodes two apart on the path. Going up one snake and down the other gives an optimal tour of weight ${(1 + \gamma)(n-2) + 2}$ (using the two outermost path edges to switch between the snakes), which is ${\sim (1 + \gamma)n}$. The snake edges don't change the path's being the minimum spanning tree, and for ${C_H}$ this costs ${n-1}$ plus the weight required to match the path's endpoints. The extra weight is reckoned as the length of one snake, which is ${\sim n\frac{1+\gamma}{2}}$, so the ratio approaches ${\frac{3}{2}}$ as ${\gamma \rightarrow 0}$ and ${n \rightarrow \infty}$.

Here are some tantalizing aspects:

- The ${n-2}$ snake edges, plus one path edge to connect them, make a maximum-weight spanning tree ${T'}$ in the graph ${G}$ formed by the two kinds of edges. Yet ${T'}$ followed by the same steps 2–4 of Christofides's algorithm would yield an optimum tour.
- When one is given only the ${n}$ points plus the graph metric ${\mu}$ induced by ${G}$, not ${G}$ itself, then there are much worse spanning trees. The single edge connecting the endpoints ${(v_1,v_n)}$ of the previous path has weight ${\mu(v_1,v_n) \approx \frac{n}{2}}$.
- Thus ${T'}$ has relatively low weight compared to these possible other trees. And its weight approaches that of ${T}$ as ${\gamma \rightarrow 0}$.
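A quick back-of-napkin check of the caduceus asymptotics; the closed forms below are the approximations stated in the text (matching cost reckoned as one snake's length), not an exact simulation of the instance:

```python
def christofides_ratio(n, gamma):
    """Approximate Christofides-to-optimal ratio on the staff-and-snakes
    example from the text."""
    opt = (1 + gamma) * (n - 2) + 2      # up one snake, down the other
    mst = n - 1                          # the staff (path) is the MST
    matching = (n / 2) * (1 + gamma)     # ~ length of one snake
    return (mst + matching) / opt

# The ratio creeps toward 3/2 as gamma -> 0 and n -> infinity.
r = christofides_ratio(10**6, 1e-6)
```

For ${n = 10^6}$ and ${\gamma = 10^{-6}}$ the ratio already sits just below ${1.5}$.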
This means that small changes in the size of the tree yield large changes in the quality of the induced tour.

- The advantage of ${T'}$ is that its odd-valence nodes have small distance under ${\mu}$. As a path it snakes around so that its ends are near each other, unlike those of the minimum spanning tree ${T}$. This raises the question of weighting spanning trees according to a slightly different measure ${\mu'}$ that incorporates a term for "odd-closeness."

In 1981, we would not have known about Arora's and Mitchell's results, so we would have felt fully on the frontier by embedding the points in the plane and sketching spanning trees and cycles on a piece of paper. After a couple of pints of ale we might have felt sure that a simple proof with such evident slack ought to yield to a more sophisticated attack.

There is one idea that we might have come up with in a pub. The motivation for choosing ${T}$ to be a minimum spanning tree is that many of its edges go into the Euler tour ${C_E}$, and those bound the final ${C_H}$ even if ${C_H}$ shortcuts them. So making the total edge weight of ${T}$ minimum seems to be the best way to help at that stage. We might have wondered, however, whether there is a way to create ${T}$ to have a stronger direct relation to good tours, if not to the optimal tour.

Oveis Gharan did have such an idea jointly with a different group of authors a decade ago, in the best paper of SODA 2010. We cannot seem to get our hands on the optimal tour, nor even a "good" tour if that means a better-than-${1.5}$-factor approximation—that is what we are trying to find to begin with. But there is another "tour" ${O^*}$ that we can compute. This is an optimum of the linear programming relaxation of TSP, whose relation to the exact-TSP methods of Michael Held and Dick Karp we covered long back.
${O^*}$ is not a single tour but rather an ensemble of "fractional tours" where each edge ${e}$ has a rational number ${z_e}$ representing its contribution to the LP solution. The higher ${z_e}$, the more helpful the edge. The objective then becomes to design distributions ${{\cal T}}$ of spanning trees ${T}$ so that:

1. Sampling ${T \leftarrow {\cal T}}$ is polynomial-time efficient.
2. For every edge ${e}$, ${\Pr_{T \leftarrow {\cal T}}[e \in T] \propto (1 + \delta_n) z_e}$, where ${\delta_n}$ is tiny.
3. The distribution ${{\cal T}}$ promotes trees ${T}$ with fewer leaves and odd-valence interior nodes.

The algorithmic strategy this fits into is to sample ${T}$ from ${{\cal T}}$, plug ${T}$ into the first step of the Christofides algorithm, and continue as before.

## The Proof and the Pudding

The first two conditions are solidly defined. Considerable technical details in the SODA 2010 paper, and in another paper at FOCS 2011 that was joint with Amin Saberi and Mohit Singh, are devoted to them. A third desideratum is that the distribution ${{\cal T}}$ not be over-constrained but rather have maximum entropy, so that for efficiently computable numbers ${\lambda_e}$ approaching ${z_e}$ one has also:

$\displaystyle \Pr_{\cal T}(T) \propto \prod_{e \in T}\lambda_e.$

The third condition, however, follows the maxim, "the proof of the pudding is in the eating." As our source makes clear, this does not refer to American-style dessert pudding, but rather to savory British pub fare going back to 1605 at least. The point is that we ultimately know a choice of ${{\cal T}}$ is good by proving it gives a better approximation factor than ${\frac{3}{2}}$. In America, we tend to say the maxim a different way: "the proof is in the pudding."

The new paper uses the "pudding" from the 2011 paper but needed to deepen the proof. Here is where we usually say to refer to the paper for the considerable details.
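A product-form distribution ${\Pr(T) \propto \prod_{e \in T}\lambda_e}$ can be made concrete on a graph small enough to enumerate. This is our toy illustration (the graph and the ${\lambda_e}$ values are made up); it computes the edge marginals that condition 2 asks to line up with the LP values ${z_e}$:

```python
import itertools
import math

def spanning_trees(nodes, edges):
    """Enumerate all spanning trees of a small graph by brute force:
    every acyclic (n-1)-edge subset is automatically a spanning tree."""
    for subset in itertools.combinations(edges, len(nodes) - 1):
        parent = {v: v for v in nodes}
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        acyclic = True
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            yield subset

# A 4-cycle plus one chord; the lambda values are invented for illustration.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
lam = dict(zip(edges, [1.0, 2.0, 1.0, 2.0, 0.5]))

trees = list(spanning_trees(nodes, edges))
weight = lambda T: math.prod(lam[e] for e in T)
Z = sum(weight(T) for T in trees)

# Edge marginals Pr[e in T] under Pr(T) proportional to the lambda product.
marginals = {e: sum(weight(T) for T in trees if e in T) / Z for e in edges}
```

A handy sanity check: since every spanning tree has exactly ${n-1}$ edges, the marginals always sum to ${n-1}$, no matter how the ${\lambda_e}$ are chosen.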
But in this case we find that a number of the beautiful concepts laid out in the paper's introduction, such as real stability and strong Rayleigh distributions, are more accessibly described in the notes for the first half of a course taught last spring by Oveis Gharan with Klein as TA. One nub is that if a set of complex numbers all have positive imaginary part, then any product ${z = z_1 z_2}$ of two of the numbers has real part less than the product of the real parts, and if the latter product is positive, then ${z}$ is not a real number. This rules out assignments drawn from the set from being solutions to certain polynomials, as well as setting up odd/even parity properties elsewhere.

## Rigidity of the TSP Universe

I'll close instead with some remarks, while admitting that my own limited time—I have been dealing with more chess cases—prevents them from being fully informed. The main remark is to marvel that the panoply of polynomial properties and deep analysis buys such a tiny improvement. It is hard to believe that the true space of TSP approximation methods is so rigid. In this I am reminded of Scott Aaronson's calculation that a collision of two stellar black holes a mere 3,000 miles away would stretch space near you by only a millimeter.

There is considerable belief that the approximation factor ought to be improvable at least as far as ${\frac{4}{3}}$. It strikes me that the maximum-entropy condition, while facilitating the analysis, works against the objective of making the trees more special. It cannot come near the kind of snaky tree ${T_O}$ obtained by deleting any edge from a good tour ${O}$, such that plugging ${T_O}$ into step 1 yields ${O}$ back again. The theory of polynomials and distributions that they develop has a plug-and-play element, so that they can condition the distributions ${{\cal T}}$ toward the third objective using the parity properties.
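The "nub" is elementary to verify: ${\mathrm{Re}(z_1 z_2) = \mathrm{Re}(z_1)\mathrm{Re}(z_2) - \mathrm{Im}(z_1)\mathrm{Im}(z_2)}$, and when the real parts share a sign, both terms of ${\mathrm{Im}(z_1 z_2) = \mathrm{Re}(z_1)\mathrm{Im}(z_2) + \mathrm{Im}(z_1)\mathrm{Re}(z_2)}$ share it too. A quick empirical check of our own:

```python
import random

rng = random.Random(0)

def check(trials=1000):
    """Spot-check the nub: for z1, z2 with positive imaginary parts,
    Re(z1*z2) < Re(z1)*Re(z2); and if that product of real parts is
    positive, z1*z2 cannot be real."""
    for _ in range(trials):
        z1 = complex(rng.uniform(-5, 5), rng.uniform(0.1, 5))
        z2 = complex(rng.uniform(-5, 5), rng.uniform(0.1, 5))
        z = z1 * z2
        assert z.real < z1.real * z2.real
        if z1.real * z2.real > 0:
            # same-sign real parts force a nonzero imaginary part
            assert z.imag != 0
    return True
```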
But their framework has an inflexibility, represented by needing to postulate a real-valued function on the optimum edges whose expectation is of order the square of a parameter ${\eta}$ already given the tiny value ${10^{-12}}$. Of the requirement that ${\eta}$ be a small fraction of their governing epsilon parameter, they say in section 3:

> This forces us to take ${\eta}$ very small, which is why we get only a "very slightly" improved approximation algorithm for TSP. Furthermore, since we use OPT edges in our construction, we don't get a new upper bound on the integrality gap. We leave it as an open problem to find a reduction to the "cactus" case that doesn't involve using a slack vector for OPT (or a completely different approach).

What may be wanting is a better way of getting the odd-valence tree nodes to be closer, not just fewer in number. To be sure, ideas for "closer" might wind up presupposing a metric topology on the ${n}$ given points, leading to cases that have already been improved by other means.

## Open Problems

Will the tiny but fixed wedge below ${\frac{3}{2}}$ become a lever by which to find better approximations?

There is also the kvetch that the algorithm is randomized, whereas the original by Christofides and Serdyukov is deterministic. Can the new methods be derandomized?
https://simons.berkeley.edu/workshops/abstracts/6685
# Abstracts

### Tuesday, November 27th, 2018

9:30 am – 10:10 am
Speaker: Piotr Indyk (Massachusetts Institute of Technology)

Classical streaming algorithms typically do not leverage data properties or patterns in their input. We propose to augment such algorithms with a learning model that enables them to exploit data properties without being specific to a particular pattern or property. We focus on the problem of estimating the frequency of elements in a data stream, a fundamental problem with applications in network measurements, natural language processing, and security. We propose a new framework for designing frequency estimation streaming algorithms that automatically learn to leverage the properties of the input data. We present a theoretical analysis of the proposed algorithms and prove that, under natural assumptions, they have lower space complexity than prior algorithms. We also evaluate our algorithms on two problems, monitoring Internet traffic and tracking the popularity of search queries, and demonstrate their performance gains. Joint work with Chen-Yu Hsu, Dina Katabi and Ali Vakilian.

10:20 am – 11:00 am
Speaker: Eric Price (University of Texas at Austin)

In sparse recovery/compressed sensing, one can estimate a k-sparse vector in n dimensions with only Theta(k log n) nonadaptive linear measurements. With adaptivity -- if each measurement can be based on the previous ones -- this reduces to O(k log log n). But what happens if the measurement matrices can only be chosen in a few rounds, as seen (for example) in constant-pass streaming algorithms? This talk will give upper and lower bounds, showing (up to a log^* k factor) that R rounds of adaptivity require Theta(k log^{1/R} n) measurements.

11:30 am – 12:10 pm
Speaker: Vladimir Braverman (Johns Hopkins University)

Streaming and sketching algorithms have found many applications in computer science and other areas. A typical sketching algorithm approximates one function.
Given a class of functions F, it is natural to ask if it is possible to compute a single sketch S that will approximate every function f from F. We call S a "universal sketch" for F. In this talk we will discuss results on universal sketches for several classes of functions. For example, we will describe a sketch that approximates a sub-class of symmetric norms (a norm is symmetric if it is invariant under sign-flips and coordinate-permutations) and outline a connection between universal sketches and concentration of measure and Milman's theorem. Also, we will describe a recent result for subset (i.e. 0-1 weighted) l0 and l1 norms. For these problems we obtain nearly optimal upper and lower bounds on streaming space complexity. We will discuss the applicability of universal sketches for Software Defined Networks (SDN). For SDN, we will present the UnivMon (short for Universal Monitoring) framework that can simultaneously achieve both generality and high fidelity across a broad spectrum of monitoring tasks. This talk is based on joint works with Jaroslaw Blasiok, Stephen R. Chestnut, Robert Krauthgamer and Lin F. Yang (STOC 2017), with Robert Krauthgamer and Lin F. Yang (submitted), and with Zaoxing Liu, Antonis Manousis, Gregory Vorsanger, and Vyas Sekar (HotNets 2015, SIGCOMM 2016).

2:00 pm – 2:40 pm
Speaker: Christian Sohler (Technische Universität Dortmund and Google Switzerland)

Let (X,d) be an n-point metric space. We assume that (X,d) is given in the distance oracle model, i.e., X={1,...,n} and for every pair of points x,y from X we can query their distance d(x,y) in constant time. A k-nearest neighbor (k-NN) graph for (X,d) is a directed graph G=(V,E) that has, for every vertex v, an edge to each of v's k nearest neighbors. We use cost(G) to denote the sum of edge weights of G. In this paper, we study the problem of approximating cost(G) in sublinear time, when we are given oracle access to the metric space (X,d) that defines G.
Our goal is to develop an algorithm that solves this problem faster than the time required to compute G. To this end, we develop an algorithm that in time ~O(min(n k^{3/2} / eps^6, n^2 / (eps^2 k))) computes an estimate K for cost(G) that satisfies, with probability at least 2/3,

|cost(G) - K| <= eps (cost(G) + mst(X)),

where mst(X) denotes the cost of the minimum spanning tree of (X,d). Joint work with Artur Czumaj. Work was done as part of the speaker's affiliation with Google Switzerland.

2:50 pm – 3:30 pm
Speaker: Clement Canonne (Stanford University)

Independent samples from an unknown probability distribution p on a domain of size k are distributed across n players, with each player holding one sample. Each player can send a message to a central referee in a simultaneous message passing (SMP) model of communication, whose goal is to solve a pre-specified inference problem. The catch, however, is that each player cannot simply send their own sample to the referee; instead, the message they send must obey some (local) information constraint. For instance, each player may be limited to communicating only L bits, where L << log k; or they may seek to reveal as little information as possible, and preserve local differential privacy. We propose a general formulation for inference problems in this distributed setting, and instantiate it to two fundamental inference questions, learning and uniformity testing. We study the role of randomness for those questions, and obtain striking separations between public- and private-coin protocols for the latter, while showing the two settings are equally powerful for the former. (Put differently, sharing with your neighbors does help a lot for the test, but not really for the learning.) Based on joint works with Jayadev Acharya (Cornell University), Cody Freitag (Cornell University), and Himanshu Tyagi (IISc Bangalore).
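As a toy illustration of the kind of local constraint in the last abstract, here is classic binary randomized response, the textbook mechanism satisfying eps-local differential privacy. This example is ours, not from the talk, and shows only the simplest case of one bit per player:

```python
import math
import random

def randomized_response(bit, eps, rng):
    """Report the true bit with probability e^eps / (1 + e^eps), else flip it.
    Each player's message then satisfies eps-local differential privacy."""
    p = math.exp(eps) / (1 + math.exp(eps))
    return bit if rng.random() < p else 1 - bit

def estimate_mean(reports, eps):
    """Debias the aggregate: E[report] = (2p - 1) * mu + (1 - p)."""
    p = math.exp(eps) / (1 + math.exp(eps))
    return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)

rng = random.Random(0)
true_bits = [1 if rng.random() < 0.3 else 0 for _ in range(20000)]
reports = [randomized_response(b, 1.0, rng) for b in true_bits]
# estimate_mean(reports, 1.0) recovers roughly 0.3 despite the noise
```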
4:00 pm – 4:40 pm

We initiate the study of the role of erasures in local decoding and use our understanding to prove a separation between erasure-resilient and tolerant property testing. Local decoding in the presence of errors has been extensively studied, but has not been considered explicitly in the presence of erasures. Motivated by applications in property testing, we begin our investigation with local *list* decoding in the presence of erasures. We prove an analog of a famous result of Goldreich and Levin on local list decodability of the Hadamard code. Specifically, we show that the Hadamard code is locally list decodable in the presence of a constant fraction of erasures, arbitrarily close to 1, with list sizes and query complexity better than in the Goldreich-Levin theorem. We use this result to exhibit a property which is testable with a number of queries independent of the length of the input in the presence of erasures, but requires a number of queries that depends on the input length, $n$, for tolerant testing. We further study *approximate* locally list decodable codes that work against erasures and use them to strengthen our separation by constructing a property which is testable with a constant number of queries in the presence of erasures, but requires $n^{\Omega(1)}$ queries for tolerant testing. Next, we study the general relationship between local decoding in the presence of errors and in the presence of erasures. We observe that every locally (uniquely or list) decodable code that works in the presence of errors also works in the presence of twice as many erasures (with the same parameters up to constant factors). We show that there is also an implication in the other direction for locally decodable codes (with unique decoding): specifically, that the existence of a locally decodable code that works in the presence of erasures implies the existence of a locally decodable code that works in the presence of errors and has related parameters.
However, it remains open whether there is an implication in the other direction for locally *list* decodable codes. (Our Hadamard result shows that there has to be some difference in parameters for some settings.) We relate this question to other open questions in local decoding. Based on joint work with Noga Ron-Zewi and Nithin Varma.

4:50 pm – 5:30 pm
Speaker: Ronitt Rubinfeld (Massachusetts Institute of Technology)

No abstract available.

### Wednesday, November 28th, 2018

9:30 am – 10:30 am
Speaker: Moses Charikar (Stanford University)

Locality sensitive hashing (LSH) is a popular technique for nearest neighbor search in high dimensional data sets. Recently, a new view of LSH as a biased sampling technique has been fruitful for density estimation problems in high dimensions. Given a set of points and a query point, the goal (roughly) is to estimate the density of the data set around the query. One way to formalize this is by kernel density estimation: given a function that decays with distance and represents the "influence" of a data point at the query, sum up this influence function over the data set. Yet another way to formalize this problem is by counting the number of data points within a certain radius of the query. While these problems can easily be solved by making a linear pass over the data, this can be prohibitive for large data sets and multiple queries. Can we preprocess the data so as to answer queries efficiently? This talk will survey several recent papers that use locality sensitive hashing to design unbiased estimators for such density estimation problems and their extensions. This talk will survey joint works with Arturs Backurs, Piotr Indyk, Vishnu Natchu, Paris Syminelakis and Xian (Carrie) Wu.

11:00 am – 12:00 pm
Speaker: Sanjoy Dasgupta (UC San Diego)

We consider algorithms that take an unlabeled data set and label it in its entirety, given the ability to interact with a human expert.
The goal is to minimize the amount of interaction while producing a labeling that satisfies an (epsilon, delta) guarantee: with probability at least 1-delta over the randomness in the algorithm, at most an epsilon fraction of the labels are incorrect.

Scenario 1: The algorithm asks the expert for labels of specific points. This is the standard problem of active learning, except that the final product is a labeled data set rather than a classifier.

Scenario 2: The expert also provides "weak rules" or helpful features.

We will summarize the state of the art on these problems, in terms of promising algorithms and statistical guarantees, and identify key challenges and open problems.

2:00 pm – 2:40 pm
Speaker: Michael Kapralov (Ecole Polytechnique Federale de Lausanne)

We consider the problem of estimating the value of MAX-CUT in a graph in the streaming model of computation. At one extreme, there is a trivial $2$-approximation for this problem that uses only $O(\log n)$ space, namely, count the number of edges and output half of this value as the estimate for the size of the MAX-CUT. At the other extreme, for any fixed $\epsilon > 0$, if one allows $\tilde{O}(n)$ space, a $(1+\epsilon)$-approximate solution to the MAX-CUT value can be obtained by storing an $\tilde{O}(n)$-size sparsifier that essentially preserves the MAX-CUT value. Our main result is that any (randomized) single-pass streaming algorithm that breaks the $2$-approximation barrier requires $\Omega(n)$ space, thus resolving the space complexity of any non-trivial approximations of the MAX-CUT value to within polylogarithmic factors in the single-pass streaming model. We achieve the result by presenting a tight analysis of the Implicit Hidden Partition Problem introduced by Kapralov et al. [SODA'17] for an arbitrarily large number $k$ of players.
In this problem a number of players receive random matchings of $\Omega(n)$ size together with random bits on the edges, and their task is to determine whether the bits correspond to parities of some hidden bipartition, or are just uniformly random. Unlike all previous Fourier-analytic communication lower bounds, our analysis does not directly use bounds on the $\ell_2$ norm of Fourier coefficients of a typical message at any given weight level that follow from hypercontractivity. Instead, we use the fact that the graphs received by players are sparse (matchings) to obtain strong upper bounds on the $\ell_1$ norm of the Fourier coefficients of the messages of individual players using their special structure, and then argue, using the convolution theorem, that similarly strong bounds on the $\ell_1$ norm are essentially preserved (up to an exponential loss in the number of players) once messages of different players are combined. We feel that our main technique is likely of independent interest.

2:50 pm – 3:30 pm
Speaker: Sepehr Assadi (University of Pennsylvania)

Any graph with maximum degree Delta admits a proper vertex coloring with Delta+1 colors that can be found via a simple sequential greedy algorithm in linear time and space. But can one find such a coloring via a sublinear algorithm? In this talk, I present new algorithms that answer this question in the affirmative for several canonical classes of sublinear algorithms, including graph streaming, sublinear time, and massively parallel computation (MPC) algorithms. At the core of these algorithms is a remarkably simple meta-algorithm for the (Delta+1) coloring problem: sample O(log n) colors for each vertex uniformly at random from the Delta+1 colors and then find a proper coloring of the graph using the sampled colors; our main structural result states that the sampled set of colors with high probability contains a proper coloring of the input graph.
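For reference, the sequential greedy baseline that the last abstract starts from can be sketched as follows. This is our illustration of the classic linear-time algorithm, not the talk's sublinear one:

```python
def greedy_coloring(adj):
    """Sequential greedy coloring: scan vertices in order and give each
    the smallest color not used by an already-colored neighbor.
    Uses at most max_degree + 1 colors, in linear time and space."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A 5-cycle has maximum degree 2, so at most 3 colors are ever needed.
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
coloring = greedy_coloring(cycle5)
```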
4:00 pm – 4:40 pm
Speaker: Dustin Mixon (Ohio State University)

Efficient algorithms for k-means clustering frequently converge to suboptimal partitions, and given a partition, it is difficult to detect k-means optimality. We discuss an a posteriori certifier of approximate optimality for k-means clustering based on Peng and Wei's semidefinite relaxation of k-means.

4:50 pm – 5:30 pm
Speaker: Soledad Villar (New York University)

Efficient algorithms for k-means clustering frequently converge to suboptimal partitions, and given a partition, it is difficult to detect k-means optimality. In this paper, we develop an a posteriori certifier of approximate optimality for k-means clustering. The certifier is a sub-linear Monte Carlo algorithm based on Peng and Wei's semidefinite relaxation of k-means. In particular, solving the relaxation for small random samples of the dataset produces a high-confidence lower bound on the k-means objective, and being sub-linear, our algorithm is faster than k-means++ when the number of data points is large. If the data points are drawn independently from any mixture of two Gaussians over R^m with identity covariance, then with probability 1 - O(1/m), our poly(m)-time algorithm produces a 3-approximation certificate with 99% confidence (no separation required). We also introduce a linear-time Monte Carlo algorithm that produces an O(k) additive approximation lower bound.

### Thursday, November 29th, 2018

9:30 am – 10:10 am
Speaker: Constantine Caramanis (University of Texas at Austin)

We provide a novel, and to the best of our knowledge the first, algorithm for high dimensional sparse regression with corruptions in explanatory and/or response variables. Our algorithm recovers the true sparse parameters in the presence of a constant fraction of arbitrary corruptions. Our main contribution is a robust variant of Iterative Hard Thresholding. Using this, we provide accurate estimators with sub-linear sample complexity.
Our algorithm consists of a novel randomized outlier removal technique for robust sparse mean estimation that may be of interest in its own right: it is orderwise more efficient computationally than existing algorithms, and succeeds with high probability, thus making it suitable for general use in iterative algorithms.

10:20 am – 11:00 am

We study the PCA and column subset selection problems in matrices in an online setting, where the columns arrive one after the other. In the context of column subset selection, the goal is to decide whether to include or discard a column, as it arrives. We design a simple algorithm that includes at most O(k polylog n) columns overall and achieves a multiplicative (1+epsilon) error compared to the best rank-k approximation of the full matrix. This result may be viewed as an analog of the classic result of Myerson on online clustering.

11:30 am – 12:10 pm
Speaker: Samory Kpotufe (Princeton University)

The need for fast computation typically requires tradeoffs with statistical accuracy; here we are interested in whether computation can be significantly improved without trading off accuracy. In particular, for best possible accuracy in NN prediction, the number of neighbors generally needs to grow as a root of n (sample size), consequently limiting NN-search (by any technique) to order-of-root-of-n complexity; in other words, expensive prediction seems unavoidable, even while using fast search methods, if accuracy is to be optimal. Unfortunately, the usual alternative is to trade off accuracy. Interestingly, we show that it is possible to maintain accuracy while reducing computation (at prediction time) to just O(log n), through simple bias and/or variance correction tricks applied after data quantization or subsampling, together with (black box) fast search techniques. Furthermore, our analysis yields clear insights into how much quantization or subsampling is tolerable if optimal accuracy is to be achieved.
Our theoretical insights are validated through extensive experiments with large datasets from various domains. The talk is based on a series of works with N. Verma and with L. Xue.

2:00 pm – 2:40 pm
Speaker: Alex Andoni (Columbia University)

We establish a generic reduction from nonlinear spectral gaps of metric spaces to space partitions, in the form of data-dependent Locality-Sensitive Hashing. This yields a new approach to the high-dimensional Approximate Near Neighbor Search problem (ANN). Using this reduction, we obtain a new ANN data structure under an arbitrary d-dimensional norm, where the query algorithm makes only a sublinear number of probes into the data structure. Most importantly, the new data structure achieves an O(log d) approximation for an arbitrary norm. The only other such generic approach, via John's ellipsoid, would achieve a square-root-of-d approximation only. Joint work with Assaf Naor, Aleksandar Nikolov, Ilya Razenshteyn, and Erik Waingarten.

2:50 pm – 3:30 pm
Speaker: Ilya Razenshteyn (Microsoft Research)

I will show the first approximate nearest neighbor search data structure for a general d-dimensional normed space with sub-polynomial in d approximation. The main tool is a finite-dimensional quantitative version of a theorem of Daher, which yields a Hölder homeomorphism between small perturbations of a normed space of interest and a Euclidean space. To make Daher's theorem algorithmic, we employ convex programming to compute the norm of a vector in a space which is the result of complex interpolation between two given normed spaces. Based on a joint work (FOCS 2018) with Alex Andoni, Assaf Naor, Sasho Nikolov and Erik Waingarten.

4:00 pm – 4:40 pm
Speaker: Rasmus Pagh (IT University of Copenhagen)

Theoretical work on high-dimensional nearest neighbor search has focused on the setting where a single point is sought within a known search radius, and an acceptable approximation ratio c is given.
Locality Sensitive Hashing is a powerful framework for addressing this problem. In practice one usually seeks the (exact) k nearest points, the search radius is unknown, and the parameter c must be chosen in a way that depends on the data distribution. Though reductions of the latter problem to the former exist, they incur polylogarithmic overhead in time and/or space, which in turn makes them unattractive in many practical settings. We address this discrepancy between theory and practice by suggesting new, simple, more efficient reductions for solving the k-Nearest Neighbor search problem using Locality Sensitive Hashing. Joint work with Tobias Christiani and Mikkel Thorup.

### Friday, November 30th, 2018

9:30 am – 10:10 am
Speaker: C. Seshadhri (UC Santa Cruz)

The main story of this talk is how theoretical ideas on randomized sampling algorithms became part of production code at Twitter. The specific context is finding pairs of similar items: a classic algorithmic problem that is an integral part of recommendation systems. In most incarnations, it boils down to finding high inner products among a large collection of vectors, or alternately high entries in a matrix product. Despite a rich literature on this topic (and despite Twitter's significant compute resources), none of the existing methods scaled to "industrial sized" inputs, which exceed hundreds of billions of non-zeros. I will talk about a distributed algorithm for this problem that combines low-dimension projections (hashes) with path-sampling techniques (wedges). There is some cute math behind the algorithm, and we were able to run it in production on Twitter's recommendation system. Joint work with Aneesh Sharma (Twitter) and Ashish Goel (Stanford).
10:20 am – 11:00 am
Speaker: Aaron Sidford (Stanford University)

In this talk I will discuss how to recover spectral approximations to broad classes of structured matrices using only a polylogarithmic number of adaptive linear measurements of either the matrix or its inverse. Leveraging this result, I will discuss how to achieve faster algorithms for solving a variety of linear algebraic problems, including solving linear systems in the inverse of symmetric M-matrices (a generalization of Laplacian systems), solving linear systems that are constant spectral approximations of Laplacians (or more generally, SDD matrices), and recovering a spectral sparsifier of a graph using only a polylogarithmic number of matrix-vector multiplies. More broadly, this talk will show how to leverage a number of recent approaches to spectral sparsification toward expanding the robustness and scope of recent nearly-linear-time linear system solving research, and providing general matrix recovery machinery that may serve as a stepping stone for faster algorithms. This talk reflects joint work with Arun Jambulapati and Kiran Shiragur.

11:30 am – 12:10 pm
Speaker: Jeff Phillips (University of Utah)

Spatial Scan Statistics measure and detect anomalous spatial behavior; specifically, they identify geometric regions where significantly more of a measured characteristic is found than would be expected from the background distribution. These techniques have been used widely in geographic information science, such as to pinpoint disease outbreaks. However, until recently, available algorithms and software only scaled to at most a thousand or so spatial records. In this work I will describe how, using coresets, efficient constructions, and scanning algorithms, we have developed new algorithms and software that easily scale to millions or more data points. Along the way we provide new efficient algorithms and constructions for eps-samples and eps-nets for various geometric range spaces.
This is a case where subtle theoretical improvements of old structures from discrete geometry actually result in substantial empirical improvements.
https://www.sarthaks.com/2217515/the-number-of-solutions-of-log-2-x-1-2-log-2-x-3-is
# The number of solutions of $\log_2(x-1) = 2\log_2(x-3)$

The number of solutions of $\log_2(x-1) = 2\log_2(x-3)$ is

A. 2
B. 1
C. 6
D. 7
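For $x > 3$ (the common domain of both logarithms), the equation is equivalent to $x-1 = (x-3)^2$, i.e. $x^2 - 7x + 10 = 0$, with roots $x=2$ and $x=5$; only $x=5$ lies in the domain, so there is exactly one solution, option B. A quick check:

```python
import math

# log2(x-1) = 2*log2(x-3)  <=>  x - 1 = (x - 3)**2  <=>  x**2 - 7x + 10 = 0
a, b, c = 1.0, -7.0, 10.0
disc = b * b - 4 * a * c
roots = sorted(((-b - math.sqrt(disc)) / (2 * a),
                (-b + math.sqrt(disc)) / (2 * a)))
valid = [r for r in roots if r > 3]   # domain of both logs: x > 3
print(roots, valid)                   # [2.0, 5.0] [5.0]

# verify the surviving root in the original equation
assert math.isclose(math.log2(valid[0] - 1), 2 * math.log2(valid[0] - 3))
```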
https://kb.osu.edu/handle/1811/29770?show=full
Dublin Core record (Ohio State University Knowledge Bank):

- Creators: B. Abel, H. Hamann, N. Lange, J. Troe
- Date issued: 1995
- Identifier: 1995-TD-03, http://hdl.handle.net/1811/29770
- Author institution: Universität Göttingen, Tammannstr. 6, 37077 Göttingen, Germany
- Title: LIFETIME DISTRIBUTIONS, STATE DENSITIES, AND THE CHARACTERIZATION OF PREDISSOCIATING STATES OF HIGHLY EXCITED $NO_{2}$ FROM OPTICAL DOUBLE-RESONANCE SPECTROSCOPY IN A FREE JET
- Publisher: Ohio State University; Type: article; Language: English; Format: image/jpeg, 56886 bytes

Abstract: The $NO_{2}$ molecule is an excellent model system for quantum-state-resolved investigations of unimolecular dissociation, IVR, and collisional relaxation of small molecules and radicals at chemically significant internal energies. We have used a high-resolution "fluorescence depletion pumping" (FDP) double-resonance technique in a free jet to access and assign predissociating rovibronic states of $NO_{2}$ above, and stable molecular eigenstates below, the dissociation threshold $E_{0}$. From the double-resonance spectra, linewidth distributions and densities of states $\rho(E)$ at around $25130\ \mathrm{cm}^{-1}$ have been determined. From sequential double-resonance spectra just below $E_{0}$, the nature of the predissociating states above the threshold has been inferred. Controlling the conditions of the jet expansion, we have been able to measure the linewidths of a large number of isolated transitions to predissociating states with defined energy E, symmetry, and total angular momentum J. The homogeneous linewidths have been converted to specific rate constants k(E, J). The rate constants and their fluctuations will be compared with recent results from "state resolved" SACM calculations, a statistical lifetime distribution model, and ps time-domain measurements.
https://answerbun.com/mathoverflow/a-determinant-identity/
# A determinant identity

The following identity involving determinants essentially appears in E. L. Ince's book on Ordinary Differential Equations: Let $A$ be an $n \times n$ matrix, $n \geq 3$. Denote by $A_{j_1,\ldots,j_r}^{k_1,\ldots,k_r}$ the $(n-r) \times (n-r)$ matrix obtained from $A$ by erasing the $j_1$-th, ..., $j_r$-th rows and the $k_1$-th, ..., $k_r$-th columns. Then
$$\left|A\right| \left|A_{n-1,n}^{1,n}\right| = \left|A_{n-1}^{1}\right|\left|A_{n}^{n}\right| - \left|A_{n-1}^{n}\right|\left|A_{n}^{1}\right|.$$
Any ideas of how to prove it? Is this a special case of a more general identity?

MathOverflow Asked by Vassilis Papanicolaou on January 4, 2021

A polynomial identity holds in general if it holds on an open set. So it's enough to prove the identity for matrices $A$ in the open set where $A^{1,n}_{n-1,n}$ is invertible. That is, we can assume $A^{1,n}_{n-1,n}$ is invertible. Now premultiply $A$ by the inverse of the matrix
$$\begin{pmatrix} 0 & 1 & 0 \\ A^{1,n}_{n-1,n} & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
where the $1$s are $1 \times 1$ matrices with entry $1$ and the $0$s are row or column matrices of the appropriate sizes, filled with zeros. This does not affect the truth of the theorem and allows us to assume $A^{1,n}_{n-1,n}$ is the identity, after which row and column operations render the desired equality trivial.
Answered by Steven Landsburg on January 4, 2021
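The identity is also easy to sanity-check numerically. A sketch with numpy (the `minor_det` helper is ad hoc, not a library function; the identity is a case of Jacobi's theorem on complementary minors, a relative of the Desnanot–Jacobi/Dodgson condensation identity):

```python
import numpy as np

def minor_det(M, rows, cols):
    # determinant of M with the given rows/cols (0-indexed) erased
    keep_r = [i for i in range(M.shape[0]) if i not in rows]
    keep_c = [j for j in range(M.shape[1]) if j not in cols]
    return np.linalg.det(M[np.ix_(keep_r, keep_c)])

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

# |A| |A_{n-1,n}^{1,n}| = |A_{n-1}^1||A_n^n| - |A_{n-1}^n||A_n^1|
# (1-indexed rows/cols in the identity become 0-indexed below)
lhs = np.linalg.det(A) * minor_det(A, {n - 2, n - 1}, {0, n - 1})
rhs = (minor_det(A, {n - 2}, {0}) * minor_det(A, {n - 1}, {n - 1})
       - minor_det(A, {n - 2}, {n - 1}) * minor_det(A, {n - 1}, {0}))
print(lhs, rhs)
assert abs(lhs - rhs) <= 1e-9 * max(1.0, abs(lhs))
```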
http://math.stackexchange.com/questions/70324/algorithm-for-optimizing-width-length-of-classes-of-an-ordered-list-of-data-poin
# Algorithm for optimizing width length of classes of an ordered list of data points under certain conditions

I have the following problem: I have an ordered list of $n$ data points jiggling around $0$ with no apparent order. The order this list is in should not be affected by the following procedure. I want to divide this list into 5 classes under the following conditions:

• Average of class no. 2 should be maximally positive, with variance the lower the better
• Average of class no. 4 should be maximally negative, with variance the lower the better

The average/variance of the other three classes doesn't matter at the moment. How can I proceed in optimizing the width/cutoff points of the 5 classes (some very crude tests support the belief that this should be possible in general)? Are there perhaps even implementations/tools for such tasks? Is this problem known under some other name?

- A version of this question was also posted at stats.stackexchange.com/questions/16556/…. It appears to be an instance of a changepoint problem, but it's not yet posed well enough to admit a definite solution, because it does not describe a single objective function to be optimized. To achieve this, somehow a balance has to be made among two variances, five bin widths, and two vague determinations of "max positive" and "max negative." – whuber Oct 6 '11 at 22:54

@whuber: Do you think that the missing objective function is the problem? So let's take the "signal-to-noise ratio" as the function to optimize, i.e. the ratio of mean to standard deviation (en.wikipedia.org/wiki/Signal-to-noise_ratio). – vonjd Oct 7 '11 at 6:14

S/N ratio appears to have little to do with the original question. How would you propose using that to partition the domain into five intervals? Changepoint procedures will optimize some measure of deviation between the data and the fit (such as the residual variance), adding a penalty for the freedom to change the fitted parameters at four flexibly chosen intermediate cutpoints.
If that's what you're looking for, many solutions are described on the stats.SE site. If it's not, it would be helpful to know how your objective differs from this standard one. – whuber Oct 7 '11 at 14:20
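To make the comments concrete: once a single objective is fixed, for example maximizing the mean/standard-deviation ratio of class 2 minus that of class 4, the four cutpoints can simply be brute-forced on small inputs. This is a toy sketch of one possible objective, not a scalable changepoint method; the `score` function and the data are illustrative assumptions.

```python
import itertools
import statistics

def score(xs):
    # signal-to-noise-style ratio; classes too small or with zero
    # spread are ruled out
    if len(xs) < 2:
        return float("-inf")
    sd = statistics.pstdev(xs)
    return statistics.mean(xs) / sd if sd > 0 else float("-inf")

def best_cutpoints(data):
    n = len(data)
    best, best_val = None, float("-inf")
    # four ordered cutpoints split the order-preserved list into 5 classes
    for c1, c2, c3, c4 in itertools.combinations(range(1, n), 4):
        cls2, cls4 = data[c1:c2], data[c3:c4]
        val = score(cls2) - score(cls4)   # want cls2 high, cls4 low
        if val > best_val:
            best, best_val = (c1, c2, c3, c4), val
    return best, best_val

data = [0.1, 0.9, 1.1, 1.0, -0.1, 0.2, -1.0, -1.2, -0.9, 0.05]
best, val = best_cutpoints(data)
print(best, val)
```

The search is $O(n^4)$ classes of cutpoints, so for realistic $n$ one would switch to a dynamic-programming changepoint routine optimizing the same objective.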
https://www.physicsforums.com/threads/fluid-mechanic-problem.441796/
# Fluid mechanics problem

Given an incompressible steady flow through a converging duct, the outlet velocity can be found just by using the mass continuity equation, $v_1 A_1 = v_2 A_2$. However, given a time-dependent inlet velocity, i.e. an oscillating velocity, how do I get the outlet velocity? Assume the flow is incompressible and inviscid. I tried looking at the Navier–Stokes equations but had no clue what to do with them.

Gold Member

Continuity still holds even if your inlet is time-dependent.

So the equation $a_1 v_1 = a_2 v_2$ is applicable unless it is compressible flow? The last I remembered, it is only for steady flow, and none of the books has a worked example for a time-dependent inlet flow.

minger
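Since continuity holds at each instant for incompressible flow, the outlet velocity simply tracks the inlet: $v_2(t) = (A_1/A_2)\,v_1(t)$. A minimal sketch with assumed numbers and an assumed oscillating inlet $v_1(t) = V_0 + V_a \sin(\omega t)$:

```python
import math

A1, A2 = 0.04, 0.01            # inlet/outlet duct areas, m^2 (assumed values)
V0, Va, omega = 2.0, 0.5, 3.0  # mean inlet speed, oscillation amplitude, rad/s

def v_out(t):
    v_in = V0 + Va * math.sin(omega * t)   # oscillating inlet velocity
    return (A1 / A2) * v_in                # instantaneous mass continuity

for t in (0.0, 0.5, 1.0):
    print(f"t={t:.1f} s  v2={v_out(t):.3f} m/s")
```

The oscillation passes through unchanged in shape, only scaled by the area ratio; unsteadiness would matter for the pressure field (the unsteady Bernoulli term), not for the velocity given by continuity.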
http://icl.cs.utk.edu/projectsfiles/magma/doxygen/group__magma__hetrd.html
MAGMA  2.3.0 Matrix Algebra for GPU and Multicore Architectures sy/hetrd: Tridiagonal reduction ## Functions magma_int_t magma_chetrd (magma_uplo_t uplo, magma_int_t n, magmaFloatComplex *A, magma_int_t lda, float *d, float *e, magmaFloatComplex *tau, magmaFloatComplex *work, magma_int_t lwork, magma_int_t *info) CHETRD reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... magma_int_t magma_chetrd2_gpu (magma_uplo_t uplo, magma_int_t n, magmaFloatComplex_ptr dA, magma_int_t ldda, float *d, float *e, magmaFloatComplex *tau, magmaFloatComplex *A, magma_int_t lda, magmaFloatComplex *work, magma_int_t lwork, magmaFloatComplex_ptr dwork, magma_int_t ldwork, magma_int_t *info) CHETRD2_GPU reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... magma_int_t magma_chetrd_gpu (magma_uplo_t uplo, magma_int_t n, magmaFloatComplex_ptr dA, magma_int_t ldda, float *d, float *e, magmaFloatComplex *tau, magmaFloatComplex *A, magma_int_t lda, magmaFloatComplex *work, magma_int_t lwork, magma_int_t *info) CHETRD_GPU reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... magma_int_t magma_chetrd_mgpu (magma_int_t ngpu, magma_int_t nqueue, magma_uplo_t uplo, magma_int_t n, magmaFloatComplex *A, magma_int_t lda, float *d, float *e, magmaFloatComplex *tau, magmaFloatComplex *work, magma_int_t lwork, magma_int_t *info) CHETRD reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... 
magma_int_t magma_dsytrd (magma_uplo_t uplo, magma_int_t n, double *A, magma_int_t lda, double *d, double *e, double *tau, double *work, magma_int_t lwork, magma_int_t *info) DSYTRD reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... magma_int_t magma_dsytrd2_gpu (magma_uplo_t uplo, magma_int_t n, magmaDouble_ptr dA, magma_int_t ldda, double *d, double *e, double *tau, double *A, magma_int_t lda, double *work, magma_int_t lwork, magmaDouble_ptr dwork, magma_int_t ldwork, magma_int_t *info) DSYTRD2_GPU reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... magma_int_t magma_dsytrd_gpu (magma_uplo_t uplo, magma_int_t n, magmaDouble_ptr dA, magma_int_t ldda, double *d, double *e, double *tau, double *A, magma_int_t lda, double *work, magma_int_t lwork, magma_int_t *info) DSYTRD_GPU reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... magma_int_t magma_dsytrd_mgpu (magma_int_t ngpu, magma_int_t nqueue, magma_uplo_t uplo, magma_int_t n, double *A, magma_int_t lda, double *d, double *e, double *tau, double *work, magma_int_t lwork, magma_int_t *info) DSYTRD reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... magma_int_t magma_ssytrd (magma_uplo_t uplo, magma_int_t n, float *A, magma_int_t lda, float *d, float *e, float *tau, float *work, magma_int_t lwork, magma_int_t *info) SSYTRD reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... 
magma_int_t magma_ssytrd2_gpu (magma_uplo_t uplo, magma_int_t n, magmaFloat_ptr dA, magma_int_t ldda, float *d, float *e, float *tau, float *A, magma_int_t lda, float *work, magma_int_t lwork, magmaFloat_ptr dwork, magma_int_t ldwork, magma_int_t *info) SSYTRD2_GPU reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... magma_int_t magma_ssytrd_gpu (magma_uplo_t uplo, magma_int_t n, magmaFloat_ptr dA, magma_int_t ldda, float *d, float *e, float *tau, float *A, magma_int_t lda, float *work, magma_int_t lwork, magma_int_t *info) SSYTRD_GPU reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... magma_int_t magma_ssytrd_mgpu (magma_int_t ngpu, magma_int_t nqueue, magma_uplo_t uplo, magma_int_t n, float *A, magma_int_t lda, float *d, float *e, float *tau, float *work, magma_int_t lwork, magma_int_t *info) SSYTRD reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... magma_int_t magma_zhetrd (magma_uplo_t uplo, magma_int_t n, magmaDoubleComplex *A, magma_int_t lda, double *d, double *e, magmaDoubleComplex *tau, magmaDoubleComplex *work, magma_int_t lwork, magma_int_t *info) ZHETRD reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... magma_int_t magma_zhetrd2_gpu (magma_uplo_t uplo, magma_int_t n, magmaDoubleComplex_ptr dA, magma_int_t ldda, double *d, double *e, magmaDoubleComplex *tau, magmaDoubleComplex *A, magma_int_t lda, magmaDoubleComplex *work, magma_int_t lwork, magmaDoubleComplex_ptr dwork, magma_int_t ldwork, magma_int_t *info) ZHETRD2_GPU reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... 
magma_int_t magma_zhetrd_gpu (magma_uplo_t uplo, magma_int_t n, magmaDoubleComplex_ptr dA, magma_int_t ldda, double *d, double *e, magmaDoubleComplex *tau, magmaDoubleComplex *A, magma_int_t lda, magmaDoubleComplex *work, magma_int_t lwork, magma_int_t *info) ZHETRD_GPU reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... magma_int_t magma_zhetrd_mgpu (magma_int_t ngpu, magma_int_t nqueue, magma_uplo_t uplo, magma_int_t n, magmaDoubleComplex *A, magma_int_t lda, double *d, double *e, magmaDoubleComplex *tau, magmaDoubleComplex *work, magma_int_t lwork, magma_int_t *info) ZHETRD reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. More... ## Function Documentation magma_int_t magma_chetrd ( magma_uplo_t uplo, magma_int_t n, magmaFloatComplex * A, magma_int_t lda, float * d, float * e, magmaFloatComplex * tau, magmaFloatComplex * work, magma_int_t lwork, magma_int_t * info ) CHETRD reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. Parameters [in] uplo magma_uplo_t = MagmaUpper: Upper triangle of A is stored; = MagmaLower: Lower triangle of A is stored. [in] n INTEGER The order of the matrix A. N >= 0. [in,out] A COMPLEX array, dimension (LDA,N) On entry, the Hermitian matrix A. If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. 
On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are over- written by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. [in] lda INTEGER The leading dimension of the array A. LDA >= max(1,N). [out] d COMPLEX array, dimension (N) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). [out] e COMPLEX array, dimension (N-1) The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower. [out] tau COMPLEX array, dimension (N-1) The scalar factors of the elementary reflectors (see Further Details). [out] work (workspace) COMPLEX array, dimension (MAX(1,LWORK)) On exit, if INFO = 0, WORK[0] returns the optimal LWORK. [in] lwork INTEGER The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_chetrd_nb(). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. [out] info INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value ## Further Details If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors Q = H(n-1) . . . H(2) H(1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i). 
If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n-1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i). The contents of A on exit are illustrated by the following examples with n = 5: if UPLO = MagmaUpper: if UPLO = MagmaLower: ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). magma_int_t magma_chetrd2_gpu ( magma_uplo_t uplo, magma_int_t n, magmaFloatComplex_ptr dA, magma_int_t ldda, float * d, float * e, magmaFloatComplex * tau, magmaFloatComplex * A, magma_int_t lda, magmaFloatComplex * work, magma_int_t lwork, magmaFloatComplex_ptr dwork, magma_int_t ldwork, magma_int_t * info ) CHETRD2_GPU reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. This version passes a workspace that is used in an optimized GPU matrix-vector product. Parameters [in] uplo magma_uplo_t = MagmaUpper: Upper triangle of A is stored; = MagmaLower: Lower triangle of A is stored. [in] n INTEGER The order of the matrix A. N >= 0. [in,out] dA COMPLEX array on the GPU, dimension (LDDA,N) On entry, the Hermitian matrix A. If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. 
On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are over- written by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. [in] ldda INTEGER The leading dimension of the array A. LDDA >= max(1,N). [out] d COMPLEX array, dimension (N) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). [out] e COMPLEX array, dimension (N-1) The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower. [out] tau COMPLEX array, dimension (N-1) The scalar factors of the elementary reflectors (see Further Details). [out] A (workspace) COMPLEX array, dimension (LDA,N) On exit the diagonal, the upper part (if uplo=MagmaUpper) or the lower part (if uplo=MagmaLower) are copies of DA [in] lda INTEGER The leading dimension of the array A. LDA >= max(1,N). [out] work (workspace) COMPLEX array, dimension (MAX(1,LWORK)) On exit, if INFO = 0, WORK[0] returns the optimal LWORK. [in] lwork INTEGER The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_chetrd_nb(). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. [out] dwork (workspace) COMPLEX array on the GPU, dim (MAX(1,LDWORK)) [in] ldwork INTEGER The dimension of the array DWORK. 
LDWORK >= ldda*ceil(n/64) + 2*ldda*nb, where nb = magma_get_chetrd_nb(n), and 64 is for the blocksize of magmablas_chemv. [out] info INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value ## Further Details If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors Q = H(n-1) . . . H(2) H(1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i). If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n-1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i). The contents of A on exit are illustrated by the following examples with n = 5: if UPLO = MagmaUpper: if UPLO = MagmaLower: ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). magma_int_t magma_chetrd_gpu ( magma_uplo_t uplo, magma_int_t n, magmaFloatComplex_ptr dA, magma_int_t ldda, float * d, float * e, magmaFloatComplex * tau, magmaFloatComplex * A, magma_int_t lda, magmaFloatComplex * work, magma_int_t lwork, magma_int_t * info ) CHETRD_GPU reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. Parameters [in] uplo magma_uplo_t = MagmaUpper: Upper triangle of A is stored; = MagmaLower: Lower triangle of A is stored. [in] n INTEGER The order of the matrix A. N >= 0. [in,out] dA COMPLEX array on the GPU, dimension (LDDA,N) On entry, the Hermitian matrix A. 
If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are over- written by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. [in] ldda INTEGER The leading dimension of the array A. LDDA >= max(1,N). [out] d COMPLEX array, dimension (N) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). [out] e COMPLEX array, dimension (N-1) The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower. [out] tau COMPLEX array, dimension (N-1) The scalar factors of the elementary reflectors (see Further Details). [out] A (workspace) COMPLEX array, dimension (LDA,N) On exit the diagonal, the upper part (if uplo=MagmaUpper) or the lower part (if uplo=MagmaLower) are copies of dA [in] lda INTEGER The leading dimension of the array A. LDA >= max(1,N). [out] work (workspace) COMPLEX array, dimension (MAX(1,LWORK)) On exit, if INFO = 0, WORK[0] returns the optimal LWORK. [in] lwork INTEGER The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_chetrd_nb(). 
If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. [out] info INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value ## Further Details If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors Q = H(n-1) . . . H(2) H(1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i). If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n-1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i). The contents of A on exit are illustrated by the following examples with n = 5: if UPLO = MagmaUpper: if UPLO = MagmaLower: ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). magma_int_t magma_chetrd_mgpu ( magma_int_t ngpu, magma_int_t nqueue, magma_uplo_t uplo, magma_int_t n, magmaFloatComplex * A, magma_int_t lda, float * d, float * e, magmaFloatComplex * tau, magmaFloatComplex * work, magma_int_t lwork, magma_int_t * info ) CHETRD reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. Parameters [in] ngpu INTEGER Number of GPUs to use. ngpu > 0. [in] nqueue INTEGER The number of GPU queues used for update. 10 >= nqueue > 0. [in] uplo magma_uplo_t = MagmaUpper: Upper triangle of A is stored; = MagmaLower: Lower triangle of A is stored. 
[in] n INTEGER The order of the matrix A. N >= 0. [in,out] A COMPLEX array, dimension (LDA,N) On entry, the Hermitian matrix A. If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. [in] lda INTEGER The leading dimension of the array A. LDA >= max(1,N). [out] d REAL array, dimension (N) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). [out] e REAL array, dimension (N-1) The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower. [out] tau COMPLEX array, dimension (N-1) The scalar factors of the elementary reflectors (see Further Details). [out] work (workspace) COMPLEX array, dimension (MAX(1,LWORK)) On exit, if INFO = 0, WORK[0] returns the optimal LWORK. [in] lwork INTEGER The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_chetrd_nb(). 
If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. [out] info INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value ## Further Details If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors Q = H(n-1) . . . H(2) H(1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i). If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n-1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i). The contents of A on exit are illustrated by the following examples with n = 5: if UPLO = MagmaUpper: if UPLO = MagmaLower: ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). magma_int_t magma_dsytrd ( magma_uplo_t uplo, magma_int_t n, double * A, magma_int_t lda, double * d, double * e, double * tau, double * work, magma_int_t lwork, magma_int_t * info ) DSYTRD reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. Parameters [in] uplo magma_uplo_t = MagmaUpper: Upper triangle of A is stored; = MagmaLower: Lower triangle of A is stored. [in] n INTEGER The order of the matrix A. N >= 0. [in,out] A DOUBLE PRECISION array, dimension (LDA,N) On entry, the symmetric matrix A. 
If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are over- written by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. [in] lda INTEGER The leading dimension of the array A. LDA >= max(1,N). [out] d DOUBLE PRECISION array, dimension (N) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). [out] e DOUBLE PRECISION array, dimension (N-1) The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower. [out] tau DOUBLE PRECISION array, dimension (N-1) The scalar factors of the elementary reflectors (see Further Details). [out] work (workspace) DOUBLE PRECISION array, dimension (MAX(1,LWORK)) On exit, if INFO = 0, WORK[0] returns the optimal LWORK. [in] lwork INTEGER The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_dsytrd_nb(). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. 
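The LWORK = -1 workspace-query convention described for lwork above is shared by all of the *sytrd/*hetrd drivers. Since calling magma_dsytrd itself requires a GPU build of MAGMA, the following plain-C stand-in mimics only the convention; the fixed nb = 64 is an assumption for illustration, whereas the real blocksize comes from magma_get_dsytrd_nb():

```c
#include <assert.h>

/* Stand-in that follows the same query convention as magma_dsytrd:
 * lwork == -1 means "workspace query" -- report the optimal size in
 * work[0] and perform no computation.  Returns 0 on success, or -9
 * (lwork is the 9th argument of magma_dsytrd) if lwork is too small. */
static int dsytrd_like(int n, double *work, int lwork)
{
    const int nb = 64;               /* assumed; real code queries MAGMA */
    const int optimal = n * nb;      /* LWORK >= N*NB, per the docs      */
    if (lwork == -1) {               /* workspace query                  */
        work[0] = (double)optimal;
        return 0;
    }
    if (lwork < optimal)
        return -9;                   /* info = -i: i-th argument illegal */
    /* ... the actual reduction would run here ... */
    return 0;
}
```

Calling code uses the two-call pattern: first pass lwork = -1 to learn the optimal size from work[0], allocate that much, then call again to do the work.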
[out] info INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value ## Further Details If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors Q = H(n-1) . . . H(2) H(1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i). If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n-1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i). The contents of A on exit are illustrated by the following examples with n = 5: if UPLO = MagmaUpper: if UPLO = MagmaLower: ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). magma_int_t magma_dsytrd2_gpu ( magma_uplo_t uplo, magma_int_t n, magmaDouble_ptr dA, magma_int_t ldda, double * d, double * e, double * tau, double * A, magma_int_t lda, double * work, magma_int_t lwork, magmaDouble_ptr dwork, magma_int_t ldwork, magma_int_t * info ) DSYTRD2_GPU reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. This version passes a workspace that is used in an optimized GPU matrix-vector product. Parameters [in] uplo magma_uplo_t = MagmaUpper: Upper triangle of A is stored; = MagmaLower: Lower triangle of A is stored. [in] n INTEGER The order of the matrix A. N >= 0. [in,out] dA DOUBLE PRECISION array on the GPU, dimension (LDDA,N) On entry, the symmetric matrix A. 
If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are over- written by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. [in] ldda INTEGER The leading dimension of the array A. LDDA >= max(1,N). [out] d DOUBLE PRECISION array, dimension (N) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). [out] e DOUBLE PRECISION array, dimension (N-1) The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower. [out] tau DOUBLE PRECISION array, dimension (N-1) The scalar factors of the elementary reflectors (see Further Details). [out] A (workspace) DOUBLE PRECISION array, dimension (LDA,N) On exit the diagonal, the upper part (if uplo=MagmaUpper) or the lower part (if uplo=MagmaLower) are copies of DA [in] lda INTEGER The leading dimension of the array A. LDA >= max(1,N). [out] work (workspace) DOUBLE PRECISION array, dimension (MAX(1,LWORK)) On exit, if INFO = 0, WORK[0] returns the optimal LWORK. [in] lwork INTEGER The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_dsytrd_nb(). 
If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. [out] dwork (workspace) DOUBLE PRECISION array on the GPU, dim (MAX(1,LDWORK)) [in] ldwork INTEGER The dimension of the array DWORK. LDWORK >= ldda*ceil(n/64) + 2*ldda*nb, where nb = magma_get_dsytrd_nb(n), and 64 is for the blocksize of magmablas_dsymv. [out] info INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value ## Further Details If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors Q = H(n-1) . . . H(2) H(1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i). If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n-1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i). The contents of A on exit are illustrated by the following examples with n = 5: if UPLO = MagmaUpper: if UPLO = MagmaLower: ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). magma_int_t magma_dsytrd_gpu ( magma_uplo_t uplo, magma_int_t n, magmaDouble_ptr dA, magma_int_t ldda, double * d, double * e, double * tau, double * A, magma_int_t lda, double * work, magma_int_t lwork, magma_int_t * info ) DSYTRD_GPU reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. 
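The reflector form H(i) = I - tau * v * v' from Further Details can be exercised directly in plain C. In the sketch below, the vector v, the scalar tau, and the expected result are illustrative values chosen to satisfy the LAPACK reflector convention for x = (3, 4), not output of any MAGMA routine:

```c
#include <assert.h>
#include <math.h>

/* Apply H = I - tau * v * v^T to x (length n), in place.
 * This is the elementary-reflector form described in Further Details:
 * H x = x - tau * (v . x) * v. */
static void apply_reflector(int n, double tau, const double *v, double *x)
{
    double dot = 0.0;
    for (int i = 0; i < n; i++)
        dot += v[i] * x[i];
    for (int i = 0; i < n; i++)
        x[i] -= tau * dot * v[i];
}
```

For x = (3, 4), the standard choice v = (1, 0.5), tau = 1.6 sends x to (-5, 0): the reflector annihilates the trailing entry, which is exactly how the reduction introduces the zeros below (or above) the tridiagonal band.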
Parameters [in] uplo magma_uplo_t = MagmaUpper: Upper triangle of A is stored; = MagmaLower: Lower triangle of A is stored. [in] n INTEGER The order of the matrix A. N >= 0. [in,out] dA DOUBLE PRECISION array on the GPU, dimension (LDDA,N) On entry, the symmetric matrix A. If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are over- written by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. [in] ldda INTEGER The leading dimension of the array A. LDDA >= max(1,N). [out] d DOUBLE PRECISION array, dimension (N) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). [out] e DOUBLE PRECISION array, dimension (N-1) The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower. [out] tau DOUBLE PRECISION array, dimension (N-1) The scalar factors of the elementary reflectors (see Further Details). [out] A (workspace) DOUBLE PRECISION array, dimension (LDA,N) On exit the diagonal, the upper part (if uplo=MagmaUpper) or the lower part (if uplo=MagmaLower) are copies of dA [in] lda INTEGER The leading dimension of the array A. LDA >= max(1,N). 
[out] work (workspace) DOUBLE PRECISION array, dimension (MAX(1,LWORK)) On exit, if INFO = 0, WORK[0] returns the optimal LWORK. [in] lwork INTEGER The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_dsytrd_nb(). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. [out] info INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value ## Further Details If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors Q = H(n-1) . . . H(2) H(1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i). If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n-1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i). The contents of A on exit are illustrated by the following examples with n = 5: if UPLO = MagmaUpper: if UPLO = MagmaLower: ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). magma_int_t magma_dsytrd_mgpu ( magma_int_t ngpu, magma_int_t nqueue, magma_uplo_t uplo, magma_int_t n, double * A, magma_int_t lda, double * d, double * e, double * tau, double * work, magma_int_t lwork, magma_int_t * info ) DSYTRD reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. Parameters [in] ngpu INTEGER Number of GPUs to use. 
ngpu > 0. [in] nqueue INTEGER The number of GPU queues used for update. 10 >= nqueue > 0. [in] uplo magma_uplo_t = MagmaUpper: Upper triangle of A is stored; = MagmaLower: Lower triangle of A is stored. [in] n INTEGER The order of the matrix A. N >= 0. [in,out] A DOUBLE PRECISION array, dimension (LDA,N) On entry, the symmetric matrix A. If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are over- written by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. [in] lda INTEGER The leading dimension of the array A. LDA >= max(1,N). [out] d DOUBLE PRECISION array, dimension (N) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). [out] e DOUBLE PRECISION array, dimension (N-1) The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower. [out] tau DOUBLE PRECISION array, dimension (N-1) The scalar factors of the elementary reflectors (see Further Details). [out] work (workspace) DOUBLE PRECISION array, dimension (MAX(1,LWORK)) On exit, if INFO = 0, WORK[0] returns the optimal LWORK. [in] lwork INTEGER The dimension of the array WORK. 
LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_dsytrd_nb(). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. [out] info INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value ## Further Details If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors Q = H(n-1) . . . H(2) H(1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i). If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n-1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i). The contents of A on exit are illustrated by the following examples with n = 5: if UPLO = MagmaUpper: if UPLO = MagmaLower: ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). magma_int_t magma_ssytrd ( magma_uplo_t uplo, magma_int_t n, float * A, magma_int_t lda, float * d, float * e, float * tau, float * work, magma_int_t lwork, magma_int_t * info ) SSYTRD reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. Parameters [in] uplo magma_uplo_t = MagmaUpper: Upper triangle of A is stored; = MagmaLower: Lower triangle of A is stored. [in] n INTEGER The order of the matrix A. N >= 0. [in,out] A REAL array, dimension (LDA,N) On entry, the symmetric matrix A. 
If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are over- written by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. [in] lda INTEGER The leading dimension of the array A. LDA >= max(1,N). [out] d REAL array, dimension (N) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). [out] e REAL array, dimension (N-1) The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower. [out] tau REAL array, dimension (N-1) The scalar factors of the elementary reflectors (see Further Details). [out] work (workspace) REAL array, dimension (MAX(1,LWORK)) On exit, if INFO = 0, WORK[0] returns the optimal LWORK. [in] lwork INTEGER The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_ssytrd_nb(). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. 
[out] info INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value ## Further Details If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors Q = H(n-1) . . . H(2) H(1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i). If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n-1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i). The contents of A on exit are illustrated by the following examples with n = 5: if UPLO = MagmaUpper: if UPLO = MagmaLower: ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). magma_int_t magma_ssytrd2_gpu ( magma_uplo_t uplo, magma_int_t n, magmaFloat_ptr dA, magma_int_t ldda, float * d, float * e, float * tau, float * A, magma_int_t lda, float * work, magma_int_t lwork, magmaFloat_ptr dwork, magma_int_t ldwork, magma_int_t * info ) SSYTRD2_GPU reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. This version passes a workspace that is used in an optimized GPU matrix-vector product. Parameters [in] uplo magma_uplo_t = MagmaUpper: Upper triangle of A is stored; = MagmaLower: Lower triangle of A is stored. [in] n INTEGER The order of the matrix A. N >= 0. [in,out] dA REAL array on the GPU, dimension (LDDA,N) On entry, the symmetric matrix A. 
If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are over- written by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. [in] ldda INTEGER The leading dimension of the array A. LDDA >= max(1,N). [out] d REAL array, dimension (N) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). [out] e REAL array, dimension (N-1) The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower. [out] tau REAL array, dimension (N-1) The scalar factors of the elementary reflectors (see Further Details). [out] A (workspace) REAL array, dimension (LDA,N) On exit the diagonal, the upper part (if uplo=MagmaUpper) or the lower part (if uplo=MagmaLower) are copies of DA [in] lda INTEGER The leading dimension of the array A. LDA >= max(1,N). [out] work (workspace) REAL array, dimension (MAX(1,LWORK)) On exit, if INFO = 0, WORK[0] returns the optimal LWORK. [in] lwork INTEGER The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_ssytrd_nb(). 
If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. [out] dwork (workspace) REAL array on the GPU, dim (MAX(1,LDWORK)) [in] ldwork INTEGER The dimension of the array DWORK. LDWORK >= ldda*ceil(n/64) + 2*ldda*nb, where nb = magma_get_ssytrd_nb(n), and 64 is for the blocksize of magmablas_ssymv. [out] info INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value ## Further Details If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors Q = H(n-1) . . . H(2) H(1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i). If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n-1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i). The contents of A on exit are illustrated by the following examples with n = 5: if UPLO = MagmaUpper: if UPLO = MagmaLower: ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). magma_int_t magma_ssytrd_gpu ( magma_uplo_t uplo, magma_int_t n, magmaFloat_ptr dA, magma_int_t ldda, float * d, float * e, float * tau, float * A, magma_int_t lda, float * work, magma_int_t lwork, magma_int_t * info ) SSYTRD_GPU reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. 
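The DWORK bound for the *sytrd2_gpu variants, LDWORK >= ldda*ceil(n/64) + 2*ldda*nb as stated above, is easy to precompute on the host. A minimal sketch, with nb passed in as a parameter because the real value comes from magma_get_ssytrd_nb(n) (the function name below is hypothetical):

```c
#include <assert.h>

/* Minimum DWORK size for the *sytrd2_gpu routines, per the
 * documentation: ldda*ceil(n/64) + 2*ldda*nb.  The 64 is the
 * magmablas_*symv blocksize mentioned in the ldwork description. */
static long sytrd2_gpu_ldwork(long n, long ldda, long nb)
{
    long ceil_n_64 = (n + 63) / 64;   /* integer ceil(n/64) */
    return ldda * ceil_n_64 + 2 * ldda * nb;
}
```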
Parameters [in] uplo magma_uplo_t = MagmaUpper: Upper triangle of A is stored; = MagmaLower: Lower triangle of A is stored. [in] n INTEGER The order of the matrix A. N >= 0. [in,out] dA REAL array on the GPU, dimension (LDDA,N) On entry, the symmetric matrix A. If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are over- written by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. [in] ldda INTEGER The leading dimension of the array A. LDDA >= max(1,N). [out] d REAL array, dimension (N) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). [out] e REAL array, dimension (N-1) The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower. [out] tau REAL array, dimension (N-1) The scalar factors of the elementary reflectors (see Further Details). [out] A (workspace) REAL array, dimension (LDA,N) On exit the diagonal, the upper part (if uplo=MagmaUpper) or the lower part (if uplo=MagmaLower) are copies of dA [in] lda INTEGER The leading dimension of the array A. LDA >= max(1,N). 
[out] work (workspace) REAL array, dimension (MAX(1,LWORK)) On exit, if INFO = 0, WORK[0] returns the optimal LWORK. [in] lwork INTEGER The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_ssytrd_nb(). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. [out] info INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value ## Further Details If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors Q = H(n-1) . . . H(2) H(1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i). If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n-1). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i). The contents of A on exit are illustrated by the following examples with n = 5: if UPLO = MagmaUpper: if UPLO = MagmaLower: ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). magma_int_t magma_ssytrd_mgpu ( magma_int_t ngpu, magma_int_t nqueue, magma_uplo_t uplo, magma_int_t n, float * A, magma_int_t lda, float * d, float * e, float * tau, float * work, magma_int_t lwork, magma_int_t * info ) SSYTRD reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. Parameters [in] ngpu INTEGER Number of GPUs to use. ngpu > 0. 
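The exit layout shown in Further Details assumes column-major storage with leading dimension lda, so in C the 1-based formulas D(i) = A(i,i) and, for MagmaLower, E(i) = A(i+1,i) become 0-based index arithmetic on A[i + j*lda]. A hypothetical helper illustrating just the indexing (the routines already return d and e directly, so this is for exposition only):

```c
#include <assert.h>

/* Read the tridiagonal T back out of a column-major array that a
 * MagmaLower routine has overwritten: D(i) = A(i,i), E(i) = A(i+1,i).
 * The documentation's formulas are 1-based; C indexing is 0-based. */
static void extract_tridiag_lower(int n, const double *A, int lda,
                                  double *d, double *e)
{
    for (int i = 0; i < n; i++)
        d[i] = A[i + i * lda];            /* diagonal           */
    for (int i = 0; i < n - 1; i++)
        e[i] = A[(i + 1) + i * lda];      /* first subdiagonal  */
}
```

With MagmaUpper the off-diagonal instead sits at E(i) = A(i,i+1), i.e. A[i + (i+1)*lda] in 0-based C.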
- `nqueue` [in]: INTEGER. The number of GPU queues used for update. 10 >= nqueue > 0.
- `uplo` [in]: magma_uplo_t.
  = MagmaUpper: Upper triangle of A is stored;
  = MagmaLower: Lower triangle of A is stored.
- `n` [in]: INTEGER. The order of the matrix A. N >= 0.
- `A` [in,out]: REAL array, dimension (LDA,N). On entry, the symmetric matrix A. If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details.
- `lda` [in]: INTEGER. The leading dimension of the array A. LDA >= max(1,N).
- `d` [out]: REAL array, dimension (N). The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i).
- `e` [out]: REAL array, dimension (N-1). The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower.
- `tau` [out]: REAL array, dimension (N-1). The scalar factors of the elementary reflectors (see Further Details).
- `work` [out]: (workspace) REAL array, dimension (MAX(1,LWORK)). On exit, if INFO = 0, WORK[0] returns the optimal LWORK.
- `lwork` [in]: INTEGER. The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_ssytrd_nb(). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA.
- `info` [out]: INTEGER.
  = 0: successful exit;
  < 0: if INFO = -i, the i-th argument had an illegal value.

## Further Details

If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors

    Q = H(n-1) . . . H(2) H(1).

Each H(i) has the form

    H(i) = I - tau * v * v'

where tau is a real scalar, and v is a real vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i).

If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors

    Q = H(1) H(2) . . . H(n-1).

Each H(i) has the form

    H(i) = I - tau * v * v'

where tau is a real scalar, and v is a real vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i).

The contents of A on exit are illustrated by the following examples with n = 5:

    if UPLO = MagmaUpper:         if UPLO = MagmaLower:

    ( d   e   v2  v3  v4 )        ( d                  )
    (     d   e   v3  v4 )        ( e   d              )
    (         d   e   v4 )        ( v1  e   d          )
    (             d   e  )        ( v1  v2  e   d      )
    (                 d  )        ( v1  v2  v3  e   d  )

where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i).

    magma_int_t magma_zhetrd(
        magma_uplo_t uplo, magma_int_t n,
        magmaDoubleComplex *A, magma_int_t lda,
        double *d, double *e,
        magmaDoubleComplex *tau,
        magmaDoubleComplex *work, magma_int_t lwork,
        magma_int_t *info)

ZHETRD reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T.

Parameters

- `uplo` [in]: magma_uplo_t.
  = MagmaUpper: Upper triangle of A is stored;
  = MagmaLower: Lower triangle of A is stored.
- `n` [in]: INTEGER. The order of the matrix A. N >= 0.
- `A` [in,out]: COMPLEX_16 array, dimension (LDA,N). On entry, the Hermitian matrix A.
  If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details.
- `lda` [in]: INTEGER. The leading dimension of the array A. LDA >= max(1,N).
- `d` [out]: DOUBLE PRECISION array, dimension (N). The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i).
- `e` [out]: DOUBLE PRECISION array, dimension (N-1). The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower.
- `tau` [out]: COMPLEX_16 array, dimension (N-1). The scalar factors of the elementary reflectors (see Further Details).
- `work` [out]: (workspace) COMPLEX_16 array, dimension (MAX(1,LWORK)). On exit, if INFO = 0, WORK[0] returns the optimal LWORK.
- `lwork` [in]: INTEGER. The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_zhetrd_nb(). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA.
- `info` [out]: INTEGER.
  = 0: successful exit;
  < 0: if INFO = -i, the i-th argument had an illegal value.

## Further Details

If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors

    Q = H(n-1) . . . H(2) H(1).

Each H(i) has the form

    H(i) = I - tau * v * v'

where tau is a complex scalar, and v is a complex vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i).

If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors

    Q = H(1) H(2) . . . H(n-1).

Each H(i) has the form

    H(i) = I - tau * v * v'

where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i).

The contents of A on exit are illustrated by the following examples with n = 5:

    if UPLO = MagmaUpper:         if UPLO = MagmaLower:

    ( d   e   v2  v3  v4 )        ( d                  )
    (     d   e   v3  v4 )        ( e   d              )
    (         d   e   v4 )        ( v1  e   d          )
    (             d   e  )        ( v1  v2  e   d      )
    (                 d  )        ( v1  v2  v3  e   d  )

where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i).

    magma_int_t magma_zhetrd2_gpu(
        magma_uplo_t uplo, magma_int_t n,
        magmaDoubleComplex_ptr dA, magma_int_t ldda,
        double *d, double *e,
        magmaDoubleComplex *tau,
        magmaDoubleComplex *A, magma_int_t lda,
        magmaDoubleComplex *work, magma_int_t lwork,
        magmaDoubleComplex_ptr dwork, magma_int_t ldwork,
        magma_int_t *info)

ZHETRD2_GPU reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T. This version passes a workspace that is used in an optimized GPU matrix-vector product.

Parameters

- `uplo` [in]: magma_uplo_t.
  = MagmaUpper: Upper triangle of A is stored;
  = MagmaLower: Lower triangle of A is stored.
- `n` [in]: INTEGER. The order of the matrix A. N >= 0.
- `dA` [in,out]: COMPLEX_16 array on the GPU, dimension (LDDA,N). On entry, the Hermitian matrix A.
  If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details.
- `ldda` [in]: INTEGER. The leading dimension of the array A. LDDA >= max(1,N).
- `d` [out]: DOUBLE PRECISION array, dimension (N). The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i).
- `e` [out]: DOUBLE PRECISION array, dimension (N-1). The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower.
- `tau` [out]: COMPLEX_16 array, dimension (N-1). The scalar factors of the elementary reflectors (see Further Details).
- `A` [out]: (workspace) COMPLEX_16 array, dimension (LDA,N). On exit the diagonal and the upper part (if uplo = MagmaUpper) or the lower part (if uplo = MagmaLower) are copies of dA.
- `lda` [in]: INTEGER. The leading dimension of the array A. LDA >= max(1,N).
- `work` [out]: (workspace) COMPLEX_16 array, dimension (MAX(1,LWORK)). On exit, if INFO = 0, WORK[0] returns the optimal LWORK.
- `lwork` [in]: INTEGER. The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_zhetrd_nb(). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA.
- `dwork` [out]: (workspace) COMPLEX_16 array on the GPU, dimension (MAX(1,LDWORK)).
- `ldwork` [in]: INTEGER. The dimension of the array DWORK. LDWORK >= ldda*ceil(n/64) + 2*ldda*nb, where nb = magma_get_zhetrd_nb(n), and 64 is for the blocksize of magmablas_zhemv.
- `info` [out]: INTEGER.
  = 0: successful exit;
  < 0: if INFO = -i, the i-th argument had an illegal value.

## Further Details

If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors

    Q = H(n-1) . . . H(2) H(1).

Each H(i) has the form

    H(i) = I - tau * v * v'

where tau is a complex scalar, and v is a complex vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i).

If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors

    Q = H(1) H(2) . . . H(n-1).

Each H(i) has the form

    H(i) = I - tau * v * v'

where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i).

The contents of A on exit are illustrated by the following examples with n = 5:

    if UPLO = MagmaUpper:         if UPLO = MagmaLower:

    ( d   e   v2  v3  v4 )        ( d                  )
    (     d   e   v3  v4 )        ( e   d              )
    (         d   e   v4 )        ( v1  e   d          )
    (             d   e  )        ( v1  v2  e   d      )
    (                 d  )        ( v1  v2  v3  e   d  )

where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i).

    magma_int_t magma_zhetrd_gpu(
        magma_uplo_t uplo, magma_int_t n,
        magmaDoubleComplex_ptr dA, magma_int_t ldda,
        double *d, double *e,
        magmaDoubleComplex *tau,
        magmaDoubleComplex *A, magma_int_t lda,
        magmaDoubleComplex *work, magma_int_t lwork,
        magma_int_t *info)

ZHETRD_GPU reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T.
Parameters

- `uplo` [in]: magma_uplo_t.
  = MagmaUpper: Upper triangle of A is stored;
  = MagmaLower: Lower triangle of A is stored.
- `n` [in]: INTEGER. The order of the matrix A. N >= 0.
- `dA` [in,out]: COMPLEX_16 array on the GPU, dimension (LDDA,N). On entry, the Hermitian matrix A. If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details.
- `ldda` [in]: INTEGER. The leading dimension of the array A. LDDA >= max(1,N).
- `d` [out]: DOUBLE PRECISION array, dimension (N). The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i).
- `e` [out]: DOUBLE PRECISION array, dimension (N-1). The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower.
- `tau` [out]: COMPLEX_16 array, dimension (N-1). The scalar factors of the elementary reflectors (see Further Details).
- `A` [out]: (workspace) COMPLEX_16 array, dimension (LDA,N). On exit the diagonal and the upper part (if uplo = MagmaUpper) or the lower part (if uplo = MagmaLower) are copies of dA.
- `lda` [in]: INTEGER. The leading dimension of the array A. LDA >= max(1,N).
- `work` [out]: (workspace) COMPLEX_16 array, dimension (MAX(1,LWORK)). On exit, if INFO = 0, WORK[0] returns the optimal LWORK.
- `lwork` [in]: INTEGER. The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_zhetrd_nb(). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA.
- `info` [out]: INTEGER.
  = 0: successful exit;
  < 0: if INFO = -i, the i-th argument had an illegal value.

## Further Details

If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors

    Q = H(n-1) . . . H(2) H(1).

Each H(i) has the form

    H(i) = I - tau * v * v'

where tau is a complex scalar, and v is a complex vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i).

If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors

    Q = H(1) H(2) . . . H(n-1).

Each H(i) has the form

    H(i) = I - tau * v * v'

where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i).

The contents of A on exit are illustrated by the following examples with n = 5:

    if UPLO = MagmaUpper:         if UPLO = MagmaLower:

    ( d   e   v2  v3  v4 )        ( d                  )
    (     d   e   v3  v4 )        ( e   d              )
    (         d   e   v4 )        ( v1  e   d          )
    (             d   e  )        ( v1  v2  e   d      )
    (                 d  )        ( v1  v2  v3  e   d  )

where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i).

    magma_int_t magma_zhetrd_mgpu(
        magma_int_t ngpu, magma_int_t nqueue,
        magma_uplo_t uplo, magma_int_t n,
        magmaDoubleComplex *A, magma_int_t lda,
        double *d, double *e,
        magmaDoubleComplex *tau,
        magmaDoubleComplex *work, magma_int_t lwork,
        magma_int_t *info)

ZHETRD reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**H * A * Q = T.
Parameters

- `ngpu` [in]: INTEGER. Number of GPUs to use. ngpu > 0.
- `nqueue` [in]: INTEGER. The number of GPU queues used for update. 10 >= nqueue > 0.
- `uplo` [in]: magma_uplo_t.
  = MagmaUpper: Upper triangle of A is stored;
  = MagmaLower: Lower triangle of A is stored.
- `n` [in]: INTEGER. The order of the matrix A. N >= 0.
- `A` [in,out]: COMPLEX_16 array, dimension (LDA,N). On entry, the Hermitian matrix A. If UPLO = MagmaUpper, the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = MagmaLower, the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, if UPLO = MagmaUpper, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = MagmaLower, the diagonal and first subdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details.
- `lda` [in]: INTEGER. The leading dimension of the array A. LDA >= max(1,N).
- `d` [out]: DOUBLE PRECISION array, dimension (N). The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i).
- `e` [out]: DOUBLE PRECISION array, dimension (N-1). The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = MagmaUpper, E(i) = A(i+1,i) if UPLO = MagmaLower.
- `tau` [out]: COMPLEX_16 array, dimension (N-1). The scalar factors of the elementary reflectors (see Further Details).
- `work` [out]: (workspace) COMPLEX_16 array, dimension (MAX(1,LWORK)). On exit, if INFO = 0, WORK[0] returns the optimal LWORK.
- `lwork` [in]: INTEGER. The dimension of the array WORK. LWORK >= N*NB, where NB is the optimal blocksize given by magma_get_zhetrd_nb(). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA.
- `info` [out]: INTEGER.
  = 0: successful exit;
  < 0: if INFO = -i, the i-th argument had an illegal value.

## Further Details

If UPLO = MagmaUpper, the matrix Q is represented as a product of elementary reflectors

    Q = H(n-1) . . . H(2) H(1).

Each H(i) has the form

    H(i) = I - tau * v * v'

where tau is a complex scalar, and v is a complex vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in A(1:i-1,i+1), and tau in TAU(i).

If UPLO = MagmaLower, the matrix Q is represented as a product of elementary reflectors

    Q = H(1) H(2) . . . H(n-1).

Each H(i) has the form

    H(i) = I - tau * v * v'

where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), and tau in TAU(i).

The contents of A on exit are illustrated by the following examples with n = 5:

    if UPLO = MagmaUpper:         if UPLO = MagmaLower:

    ( d   e   v2  v3  v4 )        ( d                  )
    (     d   e   v3  v4 )        ( e   d              )
    (         d   e   v4 )        ( v1  e   d          )
    (             d   e  )        ( v1  v2  e   d      )
    (                 d  )        ( v1  v2  v3  e   d  )

where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i).
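The reflector convention described above can be sanity-checked outside MAGMA. The sketch below is a plain, unoptimized Python illustration (not MAGMA code, and not its blocked algorithm): it applies Householder reflectors H(i) = I - tau * v * v' in the MagmaLower style, using the standard LARFG-like formulas for v and tau, and verifies that a small symmetric matrix becomes tridiagonal.

```python
import math

def matmul(X, Y):
    """Naive square-matrix product."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tridiagonalize(A):
    """Reduce a real symmetric matrix to tridiagonal form with reflectors
    H(i) = I - tau * v * v', where v(1:i) = 0 and v(i+1) = 1
    (the MagmaLower convention described in the Further Details above)."""
    n = len(A)
    A = [row[:] for row in A]          # work on a copy
    taus = []
    for i in range(n - 2):
        x = [A[k][i] for k in range(i + 1, n)]     # column i below the diagonal
        normx = math.sqrt(sum(t * t for t in x))
        if normx == 0.0:
            taus.append(0.0)
            continue
        alpha = -math.copysign(normx, x[0])        # H x = alpha * e1
        v = [0.0] * n
        v[i + 1] = 1.0
        for k in range(1, len(x)):
            v[i + 1 + k] = x[k] / (x[0] - alpha)
        tau = (alpha - x[0]) / alpha
        # Embed H = I - tau * v * v' in the full n x n identity
        H = [[(1.0 if r == c else 0.0) - tau * v[r] * v[c]
              for c in range(n)] for r in range(n)]
        A = matmul(H, matmul(A, H))                # similarity transform
        taus.append(tau)
    return A, taus

A = [[4.0, 1.0, 2.0, 0.5],
     [1.0, 3.0, 0.0, 1.0],
     [2.0, 0.0, 5.0, 1.5],
     [0.5, 1.0, 1.5, 2.0]]
T, taus = tridiagonalize(A)
```

After the call, `T` is tridiagonal (entries with |i - j| > 1 are zero up to rounding), its diagonal and subdiagonal play the roles of the `d` and `e` outputs, and the similarity transform preserves the trace.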
https://math.stackexchange.com/questions/2791532/calculating-px-ex-epsilon-using-chebyshev-inequality-for-discrete-rando
# Calculating $P(|X-E(X)|<\epsilon)$ using Chebyshev inequality for discrete random variable

Suppose that $X$ is the number of failures until getting the $r$-th success in an infinite sequence of Bernoulli trials with probability $p$ of success. I have calculated the following results:

$EX=\frac{r(1-p)}{p}$

$\operatorname{Var}(X)=\frac{r(1-p)}{p^2}$

Now I want to find a lower bound for the probability $P(\frac{r(1-p)}{p}-\epsilon<X<\frac{r(1-p)}{p}+\epsilon)$ using the Chebyshev inequality (which states that $P(|X-E(X)|\geq\delta)\leq\frac{\operatorname{Var}(X)}{\delta^2}$).

From the results above and using the inequality I get that

$P(\frac{r(1-p)}{p}-\epsilon<X<\frac{r(1-p)}{p}+\epsilon)=P(|X-E(X)|<\epsilon)=1-P(|X-E(X)|\geq\epsilon)\geq1-\frac{r(1-p)}{p^2\epsilon^2}$

Here I got stuck and couldn't continue to find a good lower bound. I know that $X$ is a discrete random variable, so $P(X\leq x_i)=\sum_{x\leq x_i}P(X=x)$. Can anyone give me an answer or a hint on how I can continue from here?

Thank you, Michael

• If you need a better lower bound, you may need other inequalities like Chernoff's. – poyea May 22 '18 at 14:13
• Your $X$ is negative binomial, so you may search for bounds of the negative binomial distribution. – StubbornAtom May 22 '18 at 14:41
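As a numerical aside (not part of the original thread, with $r$, $p$ and $\epsilon$ chosen arbitrarily for illustration): since $X$ is negative binomial with pmf $P(X=k)=\binom{k+r-1}{k}p^r(1-p)^k$, the exact probability can be summed directly and compared against the Chebyshev lower bound $1-\frac{r(1-p)}{p^2\epsilon^2}$. A pure-Python sketch:

```python
import math

def exact_prob_within(r, p, eps):
    """P(|X - EX| < eps) for X = number of failures before the r-th success,
    summing the negative-binomial pmf P(X=k) = C(k+r-1, k) p^r (1-p)^k."""
    mean = r * (1 - p) / p
    lo = max(0, math.ceil(mean - eps))
    hi = math.floor(mean + eps)
    total = 0.0
    for k in range(lo, hi + 1):
        if abs(k - mean) < eps:                    # strict inequality
            total += math.comb(k + r - 1, k) * p**r * (1 - p)**k
    return total

def chebyshev_lower_bound(r, p, eps):
    # 1 - Var(X) / eps^2 with Var(X) = r(1-p)/p^2
    return 1 - r * (1 - p) / (p**2 * eps**2)

r, p, eps = 5, 0.4, 12.0
exact = exact_prob_within(r, p, eps)
bound = chebyshev_lower_bound(r, p, eps)
```

Here $EX = 7.5$ and $\operatorname{Var}(X) = 18.75$; the exact probability comfortably exceeds the Chebyshev bound, which illustrates why sharper inequalities (e.g. Chernoff-type bounds, as suggested in the comments) can pay off.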
https://cdsweb.cern.ch/collection/PH-EP%20Preprints?ln=sk
# PH-EP Preprints

Latest additions:

2016-09-26 07:23
**Sivers asymmetry extracted in SIDIS at the hard scale of the Drell-Yan process at COMPASS** / COMPASS Collaboration
Proton transverse-spin azimuthal asymmetries are extracted from the COMPASS 2010 semi-inclusive hadron measurements in deep inelastic muon-nucleon scattering in those four regions of the photon virtuality $Q^2$, which correspond to the four regions of the di-muon mass $\sqrt{Q^2}$ used in the ongoing analysis of the COMPASS Drell-Yan measurements. [...]
arXiv:1609.07374 ; CERN-EP-2016-250. - 2016. - 13 p.
Preprint - Full text

2016-09-22 09:24
**Measurement of $W$ boson angular distributions in events with high transverse momentum jets at $\sqrt{s}=$ 8 TeV using the ATLAS detector** / ATLAS Collaboration
The $W$ boson angular distribution in events with high transverse momentum jets is measured using data collected by the ATLAS experiment from proton-proton collisions at a centre-of-mass energy $\sqrt{s}=$ 8 TeV at the Large Hadron Collider, corresponding to an integrated luminosity of 20.3 fb$^{-1}$. [...]
arXiv:1609.07045 ; CERN-EP-2016-182. - 2016. - 38 p.
Previous draft version - Preprint - Full text

2016-09-22 08:20
**Azimuthal asymmetries of charged hadrons produced in high-energy muon scattering off longitudinally polarised deuterons** / COMPASS Collaboration
Single hadron azimuthal asymmetries in the cross sections of positive and negative hadron production in muon semi-inclusive deep inelastic scattering off longitudinally polarised deuterons are determined using the 2006 COMPASS data and also all deuteron COMPASS data. [...]
arXiv:1609.06062 ; CERN-EP-2016-245. - 2016. - 15 p.
Preprint - Full text

2016-09-20 15:36
**Measurements of long-range azimuthal anisotropies and associated Fourier coefficients for $pp$ collisions at $\sqrt{s}=5.02$ and 13 TeV and $p$+Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV with the ATLAS detector** / ATLAS Collaboration
ATLAS measurements of two-particle correlations are presented for $\sqrt{s} = 5.02$ and 13 TeV $pp$ collisions and for $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV $p$+Pb collisions at the LHC. [...]
arXiv:1609.06213 ; CERN-EP-2016-200. - 2016. - 57 p.
Previous draft version - Preprint - Full text

2016-09-19 19:09
**Measurement of the WZ production cross section in pp collisions at $\sqrt{s} =$ 7 and 8 TeV and search for anomalous triple gauge couplings at $\sqrt{s} =$ 8 TeV** / CMS Collaboration
The WZ production cross section is measured by the CMS experiment at the CERN LHC in proton-proton collision data samples corresponding to integrated luminosities of 4.9 fb$^{-1}$ collected at $\sqrt{s} =$ 7 TeV, and 19.6 fb$^{-1}$ at $\sqrt{s} =$ 8 TeV. [...]
arXiv:1609.05721 ; CERN-EP-2016-205 ; CMS-SMP-14-014. - 2016. - 44 p.
Preprint - Full text

2016-09-19 10:32
**Measurement of matter-antimatter differences in beauty baryon decays** / LHCb Collaboration
Differences in the behaviour of matter and antimatter have been observed in $K$ and $B$ meson decays, but not yet in any baryon decay. [...]
arXiv:1609.05216 ; LHCB-PAPER-2016-030 ; CERN-EP-2016-212. - 2016. - 19 p.
Related data file(s) - Preprint - Full text - Related supplementary data file(s)

2016-09-18 10:28
**Search for narrow resonances in dilepton mass spectra in proton-proton collisions at $\sqrt{s} =$ 13 TeV and combination with 8 TeV data** / CMS Collaboration
A search for narrow resonances in dielectron and dimuon invariant mass spectra has been performed using data obtained from proton-proton collisions at $\sqrt{s} =$ 13 TeV collected with the CMS detector. [...]
arXiv:1609.05391 ; CERN-EP-2016-209 ; CMS-EXO-15-005. - 2016. - 32 p.
Preprint - Full text

2016-09-17 23:40
**Measurement of inclusive jet cross sections in pp and PbPb collisions at $\sqrt{s_{\mathrm{NN}}} =$ 2.76 TeV** / CMS Collaboration
Inclusive jet spectra from pp and PbPb collisions at a nucleon-nucleon center-of-mass energy of 2.76 TeV, collected with the CMS detector at the LHC, are presented. [...]
arXiv:1609.05383 ; CERN-EP-2016-217 ; CMS-HIN-13-005. - 2016. - 34 p.
Preprint - Full text

2016-09-17 15:52
**Measurement and QCD analysis of double-differential inclusive jet cross-sections in pp collisions at $\sqrt{s} =$ 8 TeV and ratios to 2.76 and 7 TeV** / CMS Collaboration
A measurement of the double-differential inclusive jet cross section as a function of the jet transverse momentum $p_{\mathrm{T}}$ and the absolute jet rapidity $|y|$ is presented. [...]
arXiv:1609.05331 ; CERN-EP-2016-196 ; CMS-SMP-14-001. - 2016. - 48 p.
Preprint - Full text

2016-09-16 11:16
**Search for anomalous electroweak production of $WW/WZ$ in association with a high-mass dijet system in $pp$ collisions at $\sqrt{s}=8$ TeV with the ATLAS detector** / ATLAS Collaboration
A search is presented for anomalous quartic gauge boson couplings in vector-boson scattering. [...]
arXiv:1609.05122 ; CERN-EP-2016-171. - 2016. - 37 p.
Previous draft version - Preprint - Full text
http://mathematica.stackexchange.com/questions/45972/understanding-transpose
# Understanding Transpose

It is very likely that it is my lack of math skills that is showing up here. However, I think Transpose is such an important function that I need to master it. So I am going to ask, because I do not understand the information given in the documentation:

> Transpose[list, {n1, n2, …}] transposes list so that the k-th level in list is the n_k-th level in the result.

Would anyone be able to provide a simple example and point out in a matrix what is going on? I am in particular struggling with the n_k-th and k-th level. How does that work? I want to master Transpose[list, {…}]. Please explain the case where the k-th and n_k-th level are to be transposed, since this is where I am struggling.

If this is considered a mathematical question, and therefore not in the right place, I would appreciate a comment so that I could delete the post.

- There are copious examples in the Scope section of the function in the documentation. Look at the results (the pattern of indexes in particular) of those results, should be clear. – ciao Apr 13 '14 at 0:15
- As a mathematical question, "What is the transpose?" is actually pretty complex. In short, it's a canonical isomorphism between $V\otimes W^*$ and $V^*\otimes W$ that exists whenever $V,W$ are Hilbert spaces. – Alex Becker Apr 13 '14 at 6:07
- Transpose[$m, {3, 2, 1}] === Flatten[$m, {{3}, {2}, {1}}], so you might find this discussion of Flatten useful. – WReach Apr 13 '14 at 14:54

Here is a visualization of the 3-dimensional case. A part of the tensor is indexed by tensor[[l1, l2, l3]], where l1, l2, l3 are the indices at levels 1, 2, 3 respectively. Transposing switches how the values are indexed. For example, if new = Transpose[old, {2, 3, 1}], then new[[l3, l1, l2]] == old[[l1, l2, l3]], or equivalently new[[l1, l2, l3]] == old[[l2, l3, l1]]. The first equality corresponds to how the result is described in Transpose.
In the visualization below, the colors are transposed according to the permutation labeling the graphics. Level 1 corresponds to hue, level 2 to saturation, and level 3 to brightness. The upper left is the identity permutation and corresponds to the original tensor. The labels on the axes correspond to the level in the original tensor.

    tensor = Table[{i, j, k}, {i, 4}, {j, 4}, {k, 4}];
    cf[i_, j_, k_] := Hue[(i - 1)/4, j/4, (k + 3)/9];
    g[p_] := Graphics3D[{
        PointSize[0.1],
        Point[Flatten[tensor, 2],
          VertexColors -> cf @@@ Flatten[Transpose[tensor, p], 2]]
      },
      PlotRange -> {{0.8, 4.2}, {0.8, 4.2}, {0.8, 4.2}},
      PlotLabel -> p, Axes -> True, Ticks -> None,
      AxesLabel -> Ordering@p];
    GraphicsGrid[Partition[Table[g[p], {p, Permutations[{1, 2, 3}]}], 3]]

I hope that this example will help with understanding how higher-dimensional tensors are transposed.

---

This is more of a math question, but in the spirit of being helpful: I think if you run this code and look at the colors of each matrix you might understand better what Transpose does.

    m = Table[Graphics[{RGBColor[0, .33 i, .33 j], Disk[]}], {i, 1, 3}, {j, 1, 3}] // MatrixForm;
    mT = Transpose@Table[Graphics[{RGBColor[0, .33 i, .33 j], Disk[]}], {i, 1, 3}, {j, 1, 3}] // MatrixForm;
    Row[{m, mT}]

Transpose basically reflects elements across the diagonal. Notice how the first column of the original matrix is the same as the first row of the transposed matrix.

- I think the OP had in mind the syntax with the second argument of Transpose, which allows complex array reshuffles for arrays of higher dimensions, rather than the (trivial) single-arg Transpose. – Leonid Shifrin Apr 13 '14 at 2:32
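The index bookkeeping in the accepted answer can also be checked outside Mathematica. The short Python sketch below (an illustration, not Mathematica code) builds the same 3-level tensor of 1-based index triples and verifies that, for the permutation {2, 3, 1}, new[[l3, l1, l2]] == old[[l1, l2, l3]]:

```python
# old[l1][l2][l3] stores its own (1-based) index triple, like
# Table[{i, j, k}, {i, 4}, {j, 4}, {k, 4}] in the answer above.
n = 4
old = [[[(i + 1, j + 1, k + 1) for k in range(n)]
        for j in range(n)] for i in range(n)]

# Transpose[old, {2, 3, 1}]: level 1 of old becomes level 2 of the result,
# level 2 becomes level 3, level 3 becomes level 1, so
# new[l3][l1][l2] == old[l1][l2][l3].
new = [[[old[l1][l2][l3] for l2 in range(n)]
        for l1 in range(n)] for l3 in range(n)]

ok = all(new[l3][l1][l2] == old[l1][l2][l3]
         for l1 in range(n) for l2 in range(n) for l3 in range(n))
```

Reading off a single entry makes the rule concrete: new[0][1][2] equals old[1][2][0], i.e. the value (2, 3, 1) in 1-based terms.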
https://imathworks.com/tex/tex-latex-how-to-force-the-table-orders-in-latex/
# [Tex/LaTex] How to force the table orders in Latex

floats | longtable

I have several tables and some of them are long tables. LaTeX displays them in a different order than I want. Say I would like to order them as

table 1
table 2
table 3 (longtable)
table 4
table 5
table 6 (longtable)

But LaTeX orders them like

table 1
table 2
table 3 (longtable)
table 6 (longtable)
table 4
table 5

How can I force the order I want?

LaTeX floating tables never float out of order. However, as is mentioned in the documentation, longtable doesn't float, so tables can float past it, out of order. The simplest solution is to put \clearpage before the longtable to prevent floats floating past it.
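A minimal sketch of that fix, with placeholder table contents (illustrative only):

```latex
\documentclass{article}
\usepackage{longtable}

\begin{document}

\begin{table}[htp]
  \centering
  \begin{tabular}{ll}
    a & b \\
  \end{tabular}
  \caption{A normal floating table}
\end{table}

% Flush all pending floats so none can drift past the longtable
\clearpage

\begin{longtable}{ll}
  \caption{A long table, typeset in place}\\
  a & b \\
\end{longtable}

\end{document}
```

Since longtable is typeset in place rather than floated, the \clearpage guarantees every earlier float has been placed before the long table starts.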
https://imathworks.com/tex/tex-latex-how-to-know-the-column-width-of-a-two-column-article/
# [Tex/LaTex] How to know the column width of a two column article

article | lengths | two-column | width

I'm writing an article with 2 columns. If my document is this:

```latex
\documentclass[10pt,a4paper, twocolumn]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[
left=2.00cm, right=2.00cm,
top=2.00cm, bottom=2.00cm
]{geometry}
\usepackage{blindtext}
\begin{document}
\blinddocument
\end{document}
```

Is there a way to know exactly the width of the column (see picture)? I need it for drawing in other software using this length.

The column width is stored in the length \columnwidth. Its value can be turned into a text representation using \the\columnwidth. Either use this inside \message{...} to print it to the LaTeX compiler output and log file, or directly in the text if you want. The value is in points (pt), which are 1/72.27 of an inch. (Note that PDF and PostScript use points of 1/72 of an inch to simplify calculations; these are called "big points" (bp) in TeX.)

```latex
\documentclass[10pt,a4paper, twocolumn]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[
left=2.00cm, right=2.00cm,
top=2.00cm, bottom=2.00cm
]{geometry}
\usepackage{lipsum}
\begin{document}
\section{bla}
some text
\message{The column width is: \the\columnwidth}
The column width is: \the\columnwidth
\lipsum
\lipsum
\end{document}
```

Which gives me:

The column width is: 236.84843pt
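Since the goal is to reproduce this width in a drawing program, a quick conversion from TeX points to centimetres helps. A small illustrative snippet (not part of the original answer):

```python
# TeX points (pt) are 1/72.27 of an inch; most drawing software
# works in cm or inches, so convert the reported \columnwidth.
PT_PER_INCH = 72.27
CM_PER_INCH = 2.54

def pt_to_inch(pt):
    """Convert TeX points to inches."""
    return pt / PT_PER_INCH

def pt_to_cm(pt):
    """Convert TeX points to centimetres."""
    return pt_to_inch(pt) * CM_PER_INCH

width_pt = 236.84843  # the \columnwidth reported above
print(round(pt_to_cm(width_pt), 3))  # 8.324
```

So the column is roughly 8.32 cm wide, which can be entered directly in the external drawing tool.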
http://medicalnanoscience.se/simply-chess-amd/b7d9cc-advantage-of-passive-transducer-is
# Advantage of Passive Transducer

We will consider the passive elements like resistor, inductor and capacitor.
A transducer is an electrical or electronic component whose main function is to change one kind of energy into another. Most quantities to be measured are non-electrical, such as temperature, pressure, displacement, humidity, fluid flow and speed, and these cannot be measured directly; they must be sensed and converted into some other form for easy measurement. Familiar examples of transducers are the microphone, the solar cell, the incandescent light bulb, the electric motor, and the small loudspeaker drivers in a pair of headphones.

Advantages of electrical transducers:

1. The electrical output can easily be amplified or attenuated and brought up to a level suitable for various devices.
2. The signal can easily be transmitted and processed for the purpose of measurement.
3. The power requirement of the transducer is very small, so the electrical system can be controlled with a small level of power.
4. Thanks to IC fabrication techniques, the transducer can be implemented in a small chip.

Transducers may be classified by: (I) the stimulus being measured, i.e. the application; (II) active versus passive operation; (III) the transduction principle used; (IV) primary versus secondary transducers; and (V) transducers versus inverse transducers.

An active transducer generates an electric current or voltage directly in response to the stimulation, drawing the required energy from the measurand itself, so it does not require any external power source. The best examples are the thermocouple, the photovoltaic cell and the piezoelectric transducer.

A passive transducer, by contrast, converts the measured quantity into a change in a passive electrical parameter (resistance, capacitance or inductance) and therefore requires an external power source for its excitation. The potentiometer, resistance strain gauge, thermistor, LVDT and differential transformer are typical examples. Depending on which passive element is varied, we get the following three passive transducers.

Resistive transducers. A passive transducer is said to be resistive when it produces a variation (change) in resistance. The resistance of a metal conductor is given by

$R = \rho l / A$

where $\rho$ is the resistivity of the conductor, $l$ its length and $A$ its cross-sectional area. Resistance increases with resistivity and with length, and decreases as the cross-sectional area increases, so a resistive transducer can be built around the variation of any one of these three parameters. A potentiometer (POT) is a resistive sensor used to measure rotary motion as well as linear displacement: it contains a resistance element provided with a slider or wiper, whose motion may be translatory or rotational. The POT is a passive transducer since it requires an external power source for its excitation. Applications of the strain gauge include (i) strain measurement, (ii) residual stress measurement, (iii) vibration measurement, (iv) torque measurement, (v) bending and deflection measurement, and (vi) compression and tension measurement. The advantages of a semiconductor strain gauge over the wire-wound strain gauge are its very high sensitivity, a high gauge factor in the range of 100 to 200, and low hysteresis. A thermistor has the advantage of producing a fast and stable response.

Capacitive transducers. A capacitive transducer is a device which changes its capacitance with a change in the physical phenomenon to be measured. For a parallel-plate capacitor

$C = \varepsilon A / d$

where $\varepsilon$ is the permittivity (dielectric constant), $A$ the effective area of the two plates and $d$ the distance between them. Capacitance increases with permittivity and plate area, and decreases as the plate separation grows. The transduction element may be a parallel-plate, cylindrical or angular capacitor. The capacitive transducer is very sensitive, and used as a secondary transducer it can measure weight, force and pressure.

Inductive transducers. A passive transducer is said to be inductive when it produces a variation in inductance. For a coil

$L = N^2 / S$

where $N$ is the number of turns and $S$ the reluctance of the coil; since reluctance falls as the permeability $\mu$ of the core rises, inductance is directly proportional to permeability. Inductance increases with the square of the number of turns and decreases as the reluctance increases. Inductive transducers may be of the passive type or the self-generating type; the differential transformer (LVDT) is the best-known example. Their advantages include high sensitivity (greater than 40 V/mm), high resolution, very simple design, frictionless operation, low power consumption (less than about 1 W), and the ability to work in environmental conditions such as high humidity and high temperature.

Flow measurement. The flow of a fluid can be measured by clamping a set of ultrasonic transducers onto a pipe. The main advantage is that the measurement is non-invasive, and the ultrasonic flow meter provides numerous advantages over other types of flow-measuring meters.

In summary, these types of transducers change physical energies into easily measured electrical ones. The main difference between them is that an active transducer draws its energy from the measurand source, while a passive transducer performs the transduction by changing a physical property of the material with the help of an external supply.
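The three passive-element relations discussed above can be checked numerically. The sketch below is illustrative only; all the numeric values are assumed example figures, not data from the article:

```python
# Illustrative check of the three passive-element relations
# (all values below are assumed example numbers, SI units).

def resistance(rho, l, A):
    """R = rho * l / A: grows with resistivity and length,
    shrinks as the cross-sectional area grows."""
    return rho * l / A

def capacitance(eps, A, d):
    """C = eps * A / d: grows with permittivity and plate area,
    shrinks as the plate separation grows."""
    return eps * A / d

def inductance(N, S):
    """L = N**2 / S: grows with the square of the turn count,
    shrinks as the reluctance of the coil grows."""
    return N ** 2 / S

# Doubling the conductor length doubles the resistance:
print(resistance(1.7e-8, 2.0, 1e-6) / resistance(1.7e-8, 1.0, 1e-6))  # 2.0

# Halving the plate separation doubles the capacitance:
print(capacitance(8.85e-12, 1e-3, 0.5e-3) / capacitance(8.85e-12, 1e-3, 1e-3))  # 2.0

# Doubling the turns quadruples the inductance:
print(inductance(200, 1e6) / inductance(100, 1e6))  # 4.0
```

Each ratio reflects one of the proportionalities stated in the text, which is exactly how a passive transducer encodes a physical change as an electrical one.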
https://blog.jverkamp.com/2015/04/17/its-all-greek-to-me/
# It's all Greek to me

A few days ago an interesting article came across my RSS feeds: It’s All Greek (or Chinese or Spanish or…) to Me. Basically, in English, when you’re confused, you’ll often say ‘It’s all Greek to me’. It turns out that many (if not all) languages around the world have a similar saying, but the target varies. Luckily, Wikipedia has a lovely page about it: Greek to me.

When I posted the link to Facebook, I got a quick question: are there any cycles? While one could just scan through the document, it would be a lot more interesting (at least to me!) if you could do it automatically. Let’s toss together a quick script to do it.

First thing we need: a way to get the content of the Wikipedia page. Python is great for this, with requests to grab the page and BeautifulSoup to process it:

```python
import collections
import re

import bs4
import requests

content = requests.get('https://en.wikipedia.org/wiki/Greek_to_me').text
soup = bs4.BeautifulSoup(content)

table = soup.find('table', {'class': 'wikitable sortable'})

pairs = collections.defaultdict(set)
for row in table.findAll('tr'):
    cols = row.findAll('td')
    if not cols:
        continue

    # Rows using a rowspan only list the source language once
    if len(cols) == 5:
        srcs = [src.strip() for src in cols[0].text.split(',')]

    dsts = [dst.strip() for dst in cols[-1].text.split(',')]
    for i, dst in enumerate(dsts):
        dsts[i] = re.sub(r'[\[(].*?[\])]', '', dst)

    for src in srcs:
        if ' ' in src:
            continue

        for dst in dsts:
            if ' ' in dst:
                continue

            pairs[src].add(dst)
```

Basically, we download the page. Then we go through each of the rows (tr). Skip any rows without column elements (td) as that’s probably the header; otherwise, pull them out. The first column (index 0) is the language with the idiom (English in the example) while the last column (index -1) is the target (Greek). There’s one caveat, though: sometimes the table uses a rowspan when one source has multiple targets but is only listed once. We handle that by only changing the srcs when there are 5 columns. Parse through all of that and what do you have?
```python
>>> import pprint
>>> pprint.pprint(dict(pairs))
{u'': set([]),
 u'Afrikaans': set([u'Greek']),
 u'Albanian': set([u'Chinese']),
 u'Arabic': set([u'Chinese', u'Garshuni']),
 ...
 u'Vietnamese': set([u'Cambodian']),
 u'Volap\xfck': set([]),
 u'Yiddish': set([u'Aramaic'])}
```

Exactly what I was looking for. Okay, next step: find any cycles in the graph. This is straightforward enough by performing a depth-first search:

```python
def cycle(node, seen):
    for neighbor in pairs[node]:
        new_seen = seen + [neighbor]
        if neighbor in seen:
            yield new_seen[new_seen.index(neighbor):]
        else:
            for recur in cycle(neighbor, new_seen):
                yield recur
```

The basic idea is to make a generator that returns each cycle as it finds it. It does so by searching down each branch, maintaining a list of all nodes it has seen. If it sees the same node twice, that’s a cycle. Otherwise, try all of the neighbors. We avoid infinite loops since there’s a guaranteed base case to the recursion: seen is always one bigger on each step and its maximum size is the number of nodes in the graph.

So how does it work?

```python
>>> for result in cycle('English', ['English']):
...     print result
...
['English', u'Greek', u'Chinese', u'English']
['English', u'Greek', u'Turkish', u'Arabic', u'Chinese', u'English']
['English', u'Greek', u'Turkish', u'French', u'Chinese', u'English']
['English', u'Greek', u'Turkish', u'French', u'Hebrew', u'Chinese', u'English']
['English', u'Dutch', u'Chinese', u'English']
```

Neat! We’ve already found 5 cycles that involve English alone. But how many cycles are there all together? For that, we need a way to determine if a cycle is actually unique. If you have the cycle A -> B -> C -> A, that’s the same as B -> C -> A -> B. You can do this by putting the cycles in lexical order (so that the ‘smallest’ element in the cycle is first).
```python
def reorder(cycle):
    if cycle[0] == cycle[-1]:
        cycle = cycle[1:]

    smallest = min(cycle)
    for el in list(cycle):
        if el == smallest:
            break
        else:
            cycle = cycle[1:] + [cycle[0]]

    return cycle
```

It also is smart enough that if we pass it a list with the first and last node the same (as we will), it trims that off automatically.

```python
>>> reorder(['A', 'B', 'C', 'A'])
['A', 'B', 'C']
>>> reorder(['B', 'C', 'A', 'B'])
['A', 'B', 'C']
```

Bam. So we use that and a set to keep track of what we’ve seen:

```python
>>> seen = set()
>>> for src in pairs.keys():
...     for result in cycle(src, [src]):
...         result = reorder(result)
...         if not str(result) in seen:
...             seen.add(str(result))
...             print(result)
...
[u'Chinese', u'English', u'Greek']
[u'Chinese', u'English', u'Dutch']
[u'Arabic', u'Chinese', u'English', u'Greek', u'Turkish']
[u'Chinese', u'English', u'Greek', u'Turkish', u'French']
[u'Chinese', u'English', u'Greek', u'Turkish', u'French', u'Hebrew']
```

Huh. So they all go through English. I didn’t actually expect that. :) Still, it’s cool to be able to unify them like that.

Okay, one last trick. Let’s visualize them. Luckily, there’s a nice Python interface for graphviz that we can use:

```python
import graphviz

# --- Render a nice graph ---
g = graphviz.Digraph()
for src in pairs.keys():
    for dst in pairs[src]:
        g.edge(src, dst)

g.graph_attr['overlap'] = 'false'
g.graph_attr['splines'] = 'true'
g.format = 'png'
g.engine = 'neato'
g.render('greek-to-me')
```

Awesome. It’s not the easiest thing in the world to read, but if you look carefully you can pick out a few interesting things. Let’s tweak it a bit to color nodes if and only if they have both an inward edge and an outward one:

```python
for src in pairs.keys():
    # Does this node lead to another
    has_out = pairs[src]

    # Does any node lead to this one
    has_in = False
    for dst in pairs.keys():
        if src in pairs[dst]:
            has_in = True
            break

    # If both, color it
    if has_out and has_in:
        g.node(src, color = 'blue')
```

That’s a little better: all of the nodes in any cycle are in there.
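The two pieces above — the cycle generator and the canonicalizer — can be checked in isolation on a toy graph, independent of the Wikipedia data. This is a standalone Python 3 sketch (the data and the compact `reorder` here are mine, not the post's):

```python
import collections

# Toy data, not the Wikipedia table: a single 3-cycle plus a dead end.
pairs = collections.defaultdict(set)
pairs['A'].add('B')
pairs['B'].add('C')
pairs['C'].add('A')
pairs['C'].add('D')

def cycle(node, seen):
    # Depth-first search, yielding each cycle as it is found.
    for neighbor in pairs[node]:
        new_seen = seen + [neighbor]
        if neighbor in seen:
            yield new_seen[new_seen.index(neighbor):]
        else:
            yield from cycle(neighbor, new_seen)

def reorder(cyc):
    # Canonical form: drop the duplicated endpoint, rotate smallest first.
    if cyc[0] == cyc[-1]:
        cyc = cyc[1:]
    while cyc[0] != min(cyc):
        cyc = cyc[1:] + [cyc[0]]
    return cyc

# list(pairs) snapshots the keys, since cycle() may touch new ones ('D').
found = {tuple(reorder(c)) for start in list(pairs) for c in cycle(start, [start])}
print(found)  # {('A', 'B', 'C')}
```

Starting from A, B, or C finds the same loop; canonicalization collapses all three to one entry, which is exactly the dedup the full script relies on.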
Let’s go ahead and show all of the edges in any cycle:

```python
# Get all edges that are part of a cycle
cycle_edges = set()
for cycle in cycles:
    for src, dst in zip(cycle, cycle[1:]):
```
https://www.physicsforums.com/threads/radius-of-coil-in-magnetic-field.155476/
# Radius of coil in magnetic field

1. Feb 9, 2007

### larkinfan11

1. The problem statement, all variables and given/known data

Two coils have the same number of circular turns and carry the same current. Each rotates in a magnetic field in a setup similar to the square coil in the figure below. Coil 1 has a radius of 4.2 cm and rotates in a 0.19 T field. Coil 2 rotates in a 0.42 T field. Each coil experiences the same maximum torque. What is the radius (in cm) of coil 2?

http://www.webassign.net/CJ/21-21.gif

2. Relevant equations

Torque = NIABsin(phi)

3. The attempt at a solution

I thought that since each coil experienced the same maximum torque and carried the same current, I could set their respective NIABsin(phi) expressions equal to each other, cancel out the current, and solve for the area of the second coil, but the answer that I got was incorrect (8.899 cm). So I know that there has to be another way to solve this question, but I'm at a loss as to how to approach it. Can anyone offer any guidance or insight on what I'm doing wrong?

2. Feb 9, 2007

### Staff: Mentor

I don't understand how you got your answer; show exactly what you did.

3. Mar 31, 2007

### prettyinpink

Help needed

I am stuck on the same problem. I tried B = μI/(2πr), using the coil where we have all the information except for I. I then put this I into a new equation to solve for r. I just thought it was simple substitution. No clue after that.

4. Mar 31, 2007

### mjsd

$$\vec \tau = \vec \mu \times \vec B$$ where $$\vec \mu = N i \vec A$$ is the magnetic dipole moment. So you are given that $$\vec \tau$$ is the same, and I guess the current too, so it is just a matter of finding the area and then the radius. You should expect the area, i.e. the radius, of loop 2 to be smaller, since the field is stronger there.

5. Apr 1, 2007

### prettyinpink

I figured it out.

6. Feb 24, 2009

### einsteinoid

OK, I'm also trying to figure this one out and having trouble. I'm assuming I somehow need to use the area of the coil with the given radius and the magnitude of its magnetic field to find the area of the other coil? I'm just confused as to how.
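For readers stuck at the same point: the torque-balance route mjsd outlines gives, for equal maximum torque with the same $N$ and $i$, $N i \pi r_1^2 B_1 = N i \pi r_2^2 B_2$, hence $r_2 = r_1\sqrt{B_1/B_2}$. A quick check with the thread's numbers (this worked solution is mine, not a post from the thread):

```python
import math

# Equal max torque, same N and I, circular area A = pi * r**2:
#   N*I*(pi*r1**2)*B1 = N*I*(pi*r2**2)*B2  =>  r2 = r1 * sqrt(B1/B2)
r1_cm, B1, B2 = 4.2, 0.19, 0.42   # values given in the problem (cm, T, T)
r2_cm = r1_cm * math.sqrt(B1 / B2)
print(round(r2_cm, 2))  # about 2.82 cm -- smaller, as mjsd predicted
```

Note the answer is smaller than 4.2 cm because coil 2 sits in the stronger field; an answer larger than r1 (like 8.899 cm) signals the ratio was inverted somewhere.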
https://ask.sagemath.org/answers/15594/revisions/
# Revision history [back] here is the output : root@kali:/home/nico/sage-5.12# df . Filesystem 1K-blocks Used Available Use% Mounted on /dev/disk/by-uuid/f24ed693-917a-493b-a837-5fd40e066b1a 64008052 11062768 49693780 19% / root@kali:/home/nico/sage-5.12# mount ... tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=789244k,mode=755) /dev/disk/by-uuid/f24ed693-917a-493b-a837-5fd40e066b1a on / type ext4 (rw,relatime,errors=remount-ro,data=ordered) ... /dev/sdb1 on /media/Data type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) Actually you are right about the fact that I did unzip the sage tarball on a mounted disk (sdb1), but afterwards I copied it onto my main disk (/dev/disk/by-uuid/f24.....) where the filesystem is, and it doesn't show any noexec options. Got any other ideas? Ps: I've looked into the spkg and the Makefile, and I don't understand why you would use a special pipe script that uses the normal one..?
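One quick way to rule the noexec theory in or out is to read the mount options of the filesystem that actually contains the directory, rather than eyeballing the full mount table. A sketch, assuming a Linux system with /proc/mounts (the helper name and the path checked are illustrative, not from the question):

```python
# Hypothetical helper: report the mount options of the filesystem
# containing a path, by picking the longest matching mount point
# from /proc/mounts (Linux only).
import os

def mount_options(path):
    path = os.path.realpath(path)
    best, opts = "", ""
    with open("/proc/mounts") as f:
        for line in f:
            fields = line.split()
            mnt, o = fields[1], fields[3]
            if path == mnt or path.startswith(mnt.rstrip("/") + "/"):
                if len(mnt) > len(best):
                    best, opts = mnt, o
    return opts.split(",")

# e.g. for the sage directory you would pass that path instead of "/"
print("noexec" in mount_options("/"))
```

If this prints True for the directory holding the sage tree, executables there cannot be run regardless of file permissions.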
https://pos.sissa.it/282/1080/
Volume 282 - 38th International Conference on High Energy Physics (ICHEP2016) - Poster Session Upgrade of the CMS muon trigger system in the barrel region D. Rabady,* C. Battilana, R. Carlin, G. Codispoti, M. Dallavalle, J. Erö, G. Flouris, C. Foudas, J. Fulcher, L. Guiducci, N. Loukas, S. Mallios, N. Manthos, I. Papadopoulos, E. Paradas, T. Reis, H. Sakulin, P. Sphicas, A. Triossi, A. Venturi, C. Wulz on behalf of the CMS collaboration *corresponding author Full text: pdf Pre-published on: February 06, 2017 Published on: April 19, 2017 Abstract To maintain the excellent performance of the LHC during its Run-1 also in Run-2, the Level-1 Trigger of the Compact Muon Solenoid experiment underwent a significant upgrade. One part of this upgrade was the re-organisation of the muon trigger path from a subsystem-centric view in which hits in the drift tubes, the cathode strip chambers, and the resistive plate chambers were treated separately in dedicated track-finding systems, to one in which complementary detector systems for a given region (barrel, overlap, and endcap) are merged already at the track-finding level. This also required the development of a new system to sort as well as cancel-out the muon tracks found by each system. An overview will be given of the new track-finder system for the barrel region, the Barrel Muon Track Finder (BMTF) as well as the cancel-out and sorting layer, the upgraded Global Muon Trigger ($\mu$GMT). While the BMTF improves on the proven and well-tested algorithms used in the Drift Tube Track Finder during Run-1, the $\mu$GMT is an almost complete re-development due to the re-organisation of the underlying systems from complementary track finders to regional track finders. Additionally, the $\mu$GMT can calculate a muon isolation using energy information that will be received from the calorimeter trigger in the future. This information is added to the muon objects forwarded to the Global Trigger. 
Finally, first results of the muon trigger performance including the barrel region are shown. Both the trigger efficiency and the rate reduction show satisfactory performance, with improvements planned for the near future. DOI: https://doi.org/10.22323/1.282.1080
https://hal.archives-ouvertes.fr/hal-01555002
# The Accretion Flow - Discrete Ejection Connection in GRS 1915+105

Abstract : The microquasar GRS 1915+105 is known for its spectacular discrete ejections. They occur unexpectedly, thus their inception has escaped direct observation. It has been shown that the X-ray flux increases in the hours leading up to a major ejection. In this article, we consider the serendipitous interferometric monitoring of a modest version of a discrete ejection described in Reid et al. that would have otherwise escaped detection in daily radio light curves. The observation begins ∼1 hr after the onset of the ejection, providing unprecedented accuracy on the estimate of the ejection time. The astrometric measurements allow us to determine the time of ejection as ${\rm{MJD}}\,{56436.274}_{-0.013}^{+0.016}$, i.e., within a precision of 41 minutes (95% confidence). Just like larger flares, we find that the X-ray luminosity increases in the last 2–4 hr preceding the ejection. Our finite temporal resolution indicates that this elevated X-ray flux persists to within ${21.8}_{-19.1}^{+22.6}$ minutes of the ejection with 95% confidence, the highest temporal precision of the X-ray–superluminal ejection connection to date. This observation provides direct evidence that the physics that launches major flares occurs on smaller scales as well (lower radio flux and shorter ejection episodes). The observation of an X-ray spike prior to a discrete ejection, although of very modest amplitude, suggests that the process linking accretion behavior to ejection is general, from the smallest scales to high-luminosity major superluminal flares.

Document type : Journal articles
https://hal.archives-ouvertes.fr/hal-01555002
Submitted on : Monday, July 3, 2017 - 6:31:20 PM
Last modification on : Wednesday, April 17, 2019 - 9:22:09 AM

### Citation

Brian Punsly, Jerome Rodriguez, Sergei A. Trushkin. The Accretion Flow - Discrete Ejection Connection in GRS 1915+105.
Astrophys.J., 2016, 826 (1), pp.5. ⟨10.3847/0004-637X/826/1/5⟩. ⟨hal-01555002⟩
https://www.aimsciences.org/article/doi/10.3934/proc.2013.2013.837
# American Institute of Mathematical Sciences 2013, 2013(special): 837-845. doi: 10.3934/proc.2013.2013.837 ## Anosov diffeomorphisms 1 LIAAD-INESC TEC and Department of Mathematics, School of Technology and Management, Polytechnic Institute of Bragança, Campus de Santa Apolónia, Ap. 1134, 5301-857 Bragança, Portugal 2 Departamento de Matemática, IME-USP, Caixa Postal 66281, CEP 05315-970 São Paulo, Brazil 3 LIAAD-INESC TEC and Department of Mathematics, Faculty of Sciences, University of Porto, Rua do Campo Alegre, 4169-007 Porto, Portugal 4 Warwick Systems Biology & Mathematics Institute, University of Warwick, Coventry CV4 7AL, United Kingdom Received  September 2012 Revised  October 2013 Published  November 2013 We use Adler, Tresser and Worfolk decomposition of Anosov automorphisms to give an explicit construction of the stable and unstable $C^{1+}$ self-renormalizable sequences. Citation: João P. Almeida, Albert M. Fisher, Alberto Adrego Pinto, David A. Rand. Anosov diffeomorphisms. Conference Publications, 2013, 2013 (special) : 837-845. doi: 10.3934/proc.2013.2013.837 ##### References: [1] R. Adler, C. Tresser and P. A. Worfolk, Topological conjugacy of linear endomorphisms of the 2-torus, Trans. Amer. Math. Soc., 349 (1997), 1633-1652. [2] J. P. Almeida, A. M. Fisher, A. A. Pinto and D. A. Rand, Anosov and circle diffeomorphisms, in "Dynamics Games and Science I" (eds. M. Peixoto, A. Pinto and D. Rand), Springer Proceedings in Mathematics, Springer Verlag, 2011, 11-23. [3] J. P. Almeida, A. A. Pinto and D. A. Rand, Renormalization of circle diffeomorphism sequences and Markov sequences, to appear in "Proceedings of the Conference NOMA11," Évora, Portugal, Springer Proceedings in Mathematics, Springer Verlag, 2012. [4] V. I. Arnol'd, Small denominators I: On the mapping of a circle into itself, Investijia Akad. Nauk. Math., 25 (1961), 21-96; Transl. A.M.S., 2nd series, 46, 213-284. [5] E. 
Cawley, The Teichmüller space of an Anosov diffeomorphism of $T^2$, Inventiones Mathematicae, 112 (1993), 351-376. [6] P. Coullet and C. Tresser, Itération d'endomorphismes et groupe de renormalisation, Journal de Physique Colloques, 39 (1978), C5-25-C5-28. [7] J. Franks, Anosov diffeomorphisms, in "Global Analysis" (ed. S. Smale), Proc. Sympos. Pure Math., 14, Amer. Math. Soc., Providence, R.I., 1970, 61-93. [8] E. Ghys, Rigidité différentiable des groupes Fuchsiens, Publ. IHES, 78 (1993), 163-185. [9] M. R. Herman, Sur la conjugaison différentiable des difféomorphismes du cercle à des rotations, Publ. IHES, 49 (1979), 5-233. [10] Y. Jiang, Teichmüller structures and dual geometric Gibbs type measure theory for continuous potentials, preprint, (2008), 1-67. [11] Y. Jiang, Metric invariants in dynamical systems, Journal of Dynamics and Differentiable Equations, 17 (2005), 51-71. [12] O. Lanford, Renormalization group methods for critical circle mappings with general rotation number, in "VIIIth International Congress on Mathematical Physics," World Sci. Publishing, Singapore, 1987, 532-536. [13] R. de la Llave, Invariants for smooth conjugacy of hyperbolic dynamical systems II, Commun. Math. Phys., 109 (1987), 369-378. [14] A. Manning, There are no new Anosov diffeomorphisms on tori, Amer. J. Math., 96 (1974), 422-429. [15] R. Manẽ, "Ergodic Theory and Differentiable Dynamics," Springer-Verlag, Berlin, 1987. [16] J. M. Marco, and R. Moriyon, Invariants for Smooth conjugacy of hyperbolic dynamical systems I, Commun. Math. Phys., 109 (1987), 681-689. [17] J. M. Marco, and R. Moriyon, Invariants for Smooth conjugacy of hyperbolic dynamical systems III, Commun. Math. Phys., 112 (1989), 317-333. [18] H. Masur, Interval exchange transformations and measured foliations, The Annals of Mathematics. 2nd Ser., 115 (1982), 169-200. [19] W. de Melo and S. van Strien, "One-dimensional Dynamics," A series of Modern Surveys in Mathematics, Springer-Verlag, New York, 1993. [20] R. 
C. Penner and J. L. Harer, "Combinatorics of Train-Tracks," Princeton University Press, Princeton, New Jersey, 1992. [21] A. A. Pinto, J. P. Almeida and A. Portela, Golden tilings, Transactions of the American Mathematical Society, 364 (2012), 2261-2280. [22] A. A. Pinto, J. P. Almeida and D. A. Rand, Anosov and renormalized circle diffeomorphisms, submitted, (2012), 1-33. [23] A. A. Pinto and D. A. Rand, Train-tracks with $C^{1+}$ self-renormalisable structures, Journal of Difference Equations and Applications, 16 (2010), 945-962. [24] A. A. Pinto and D. A. Rand, Solenoid functions for hyperbolic sets on surfaces, in "Dynamics, Ergodic Theory and Geometry" (ed. Boris Hasselblatt), 54, MSRI Publications, 2007, 145-178. [25] A. A. Pinto and D. A. Rand, Rigidity of hyperbolic sets on surfaces, J. London Math. Soc., 71 (2004), 481-502. [26] A. A. Pinto and D. A. Rand, Smoothness of holonomies for codimension 1 hyperbolic dynamics, Bull. London Math. Soc., 34 (2002), 341-352. [27] A. A. Pinto and D. A. Rand, Teichmüller spaces and HR structures for hyperbolic surface dynamics, Ergodic Theory & Dynamical Systems, 22 (2002), 1905-1931. [28] A. A. Pinto and D. A. Rand, Existence, uniqueness and ratio decomposition for Gibbs states via duality, Ergodic Theory & Dynamical Systems, 21 (2001), 533-543. [29] A. A. Pinto and D. A. Rand, Characterising rigidity and flexibility of pseudo-Anosov and other transversally laminated dynamical systems on surfaces, Warwick preprint, 1995. [30] A. A. Pinto, D. A. Rand and F. Ferreira, Arc exchange systems and renormalization, Journal of Difference Equations and Applications, 16 (2010), 347-371. [31] A. A. Pinto, D. A. Rand and F. Ferreira, Cantor exchange systems and renormalization, Journal of Differential Equations, 243 (2007), 593-616. [32] A. A. Pinto, D. A. Rand and F. Ferreira, "Fine structures of hyperbolic diffeomorphisms," Springer Monographs in Mathematics, Springer, 2009. [33] A. A. Pinto and D. 
Sullivan, The circle and the solenoid, Dedicated to Anatole Katok On the Occasion of his 60th Birthday, DCDS A, 16 (2006), 463-504. [34] M. Shub, "Global Stability of Dynamical Systems," Springer-Verlag, 1987. [35] Ya. Sinai, Markov Partitions and C-diffeomorphisms, Anal. and Appl., 2 (1968), 70-80. [36] W. Thurston, On the geometry and dynamics of diffeomorphisms of surfaces, Bull. Amer. Math. Soc., 19 (1988), 417-431. [37] W. Veech, Gauss measures for transformations on the space of interval exchange maps, The Annals of Mathematics, 2nd Ser., 115 (1982), 201-242. [38] R. F. Williams, Expanding attractors, Publ. I.H.E.S., 43 (1974), 169-203. [39] R. F. Williams, The "DA" maps of Smale and structural stability, in "Global Analysis" (ed. S. Smale), Proc. Symp. in Pure Math., 14, Amer. Math. Soc., Providence, RI, 1970, 329-334. [40] J. C. Yoccoz, Conjugaison différentiable des difféomorphismes du cercle dont le nombre de rotation vérifie une condition diophantienne, Ann. Scient. Éc. Norm. Sup., 4 série, t., 17 (1984), 333-359. [1] Dominic Veconi. Equilibrium states of almost Anosov diffeomorphisms. Discrete and Continuous Dynamical Systems, 2020, 40 (2) : 767-780. doi: 10.3934/dcds.2020061 [2] Maria Carvalho. First homoclinic tangencies in the boundary of Anosov diffeomorphisms. Discrete and Continuous Dynamical Systems, 1998, 4 (4) : 765-782. doi: 10.3934/dcds.1998.4.765 [3] Matthieu Porte. Linear response for Dirac observables of Anosov diffeomorphisms. Discrete and Continuous Dynamical Systems, 2019, 39 (4) : 1799-1819. doi: 10.3934/dcds.2019078 [4] Christian Bonatti, Nancy Guelman. Axiom A diffeomorphisms derived from Anosov flows. Journal of Modern Dynamics, 2010, 4 (1) : 1-63. doi: 10.3934/jmd.2010.4.1 [5] Zemer Kosloff. On manifolds admitting stable type Ⅲ$_{\textbf1}$ Anosov diffeomorphisms. Journal of Modern Dynamics, 2018, 13: 251-270. doi: 10.3934/jmd.2018020 [6] Andrey Gogolev, Misha Guysinsky.
$C^1$-differentiable conjugacy of Anosov diffeomorphisms on three dimensional torus. Discrete and Continuous Dynamical Systems, 2008, 22 (1&2) : 183-200. doi: 10.3934/dcds.2008.22.183 [7] Andrey Gogolev. Smooth conjugacy of Anosov diffeomorphisms on higher-dimensional tori. Journal of Modern Dynamics, 2008, 2 (4) : 645-700. doi: 10.3934/jmd.2008.2.645 [8] Shigenori Matsumoto. A generic-dimensional property of the invariant measures for circle diffeomorphisms. Journal of Modern Dynamics, 2013, 7 (4) : 553-563. doi: 10.3934/jmd.2013.7.553 [9] Yury Neretin. The group of diffeomorphisms of the circle: Reproducing kernels and analogs of spherical functions. Journal of Geometric Mechanics, 2017, 9 (2) : 207-225. doi: 10.3934/jgm.2017009 [10] Stefano Galatolo, Alfonso Sorrentino. Quantitative statistical stability and linear response for irrational rotations and diffeomorphisms of the circle. Discrete and Continuous Dynamical Systems, 2022, 42 (2) : 815-839. doi: 10.3934/dcds.2021138 [11] Antonio Pumariño, José Ángel Rodríguez, Enrique Vigil. Renormalizable Expanding Baker Maps: Coexistence of strange attractors. Discrete and Continuous Dynamical Systems, 2017, 37 (3) : 1651-1678. doi: 10.3934/dcds.2017068 [12] Antonio Pumariño, Joan Carles Tatjer. Attractors for return maps near homoclinic tangencies of three-dimensional dissipative diffeomorphisms. Discrete and Continuous Dynamical Systems - B, 2007, 8 (4) : 971-1005. doi: 10.3934/dcdsb.2007.8.971 [13] Kazuhiro Sakai. The oe-property of diffeomorphisms. Discrete and Continuous Dynamical Systems, 1998, 4 (3) : 581-591. doi: 10.3934/dcds.1998.4.581 [14] Stefan Haller, Tomasz Rybicki, Josef Teichmann. Smooth perfectness for the group of diffeomorphisms. Journal of Geometric Mechanics, 2013, 5 (3) : 281-294. doi: 10.3934/jgm.2013.5.281 [15] Enrique R. Pujals, Federico Rodriguez Hertz. Critical points for surface diffeomorphisms. Journal of Modern Dynamics, 2007, 1 (4) : 615-648. doi: 10.3934/jmd.2007.1.615 [16] Baolin He. 
Entropy of diffeomorphisms of line. Discrete and Continuous Dynamical Systems, 2017, 37 (9) : 4753-4766. doi: 10.3934/dcds.2017204 [17] Masoumeh Gharaei, Ale Jan Homburg. Random interval diffeomorphisms. Discrete and Continuous Dynamical Systems - S, 2017, 10 (2) : 241-272. doi: 10.3934/dcdss.2017012 [18] Robert McOwen, Peter Topalov. Groups of asymptotic diffeomorphisms. Discrete and Continuous Dynamical Systems, 2016, 36 (11) : 6331-6377. doi: 10.3934/dcds.2016075 [19] Sheldon Newhouse. Distortion estimates for planar diffeomorphisms. Discrete and Continuous Dynamical Systems, 2008, 22 (1&2) : 345-412. doi: 10.3934/dcds.2008.22.345 [20] Jinpeng An. Hölder stability of diffeomorphisms. Discrete and Continuous Dynamical Systems, 2009, 24 (2) : 315-329. doi: 10.3934/dcds.2009.24.315 Impact Factor: