url
stringlengths
14
2.42k
text
stringlengths
100
1.02M
date
stringlengths
19
19
metadata
stringlengths
1.06k
1.1k
https://proxieslive.com/tag/strings/
## Sorting array of strings (with repetitions) according to a given ordering We get two arrays: ``ordering = ["one", "two", "three"] `` and ``input = ["zero", "one", "two", "two", "three", "three", "three", "four"]; `` We want to find the array `output` so that ``output = ["one", "two", "two", "three", "three", "three", "zero", "four"] // or output = ["one", "two", "two", "three", "three", "three", "four", "zero"] `` The strings (with possible repetitions) should be sorted as in the `ordering` array. Not found/contained strings should be put at the end of the new array and their order doesn’t matter. The $$n^{2}$$ solution is obvious, can we do better? The memory doesn’t matter and it doesn’t have to be an in-place algorithm. ## Finding encryption algorithm from known and encrypted password strings [duplicate] I am working with a piece of software which seems to use some type of lightweight, “home baked” password encryption algorithm. I know a number of clear text passwords as well as their corresponding encrypted as they are stored in the database — Does anyone know of a tool or a means to find the underlying algorithm and/or hash type which might be used? I would like to be able to decrypt and use these in a use-case application of my own. Examples (clear text -> encrypted): ``test123 -> 2404483248 tb -> 43971963 ks -> 43912691 mm -> 43937163 et -> 43941139 `` ( ## Counting strings with balanced substrings Consider a string of characters $$a, b, c$$ only. Such a string is called good if the number of $$a$$‘s + number of $$b$$‘s is equal to the number of $$c$$‘s. Given an integer $$n$$, find the number of strings of length $$n$$ consisting only of characters $$a,b,c$$ such that all of its substrings of length $$k$$ are good. Example: $$n = 3 ,k = 2$$ is $$6$$, $$n = 2,k = 1$$ is $$0$$ I could only solve when there are only two characters but can anyone help me how to solve when there are three characters. ## Given a list of strings, find every pair \$(x,y)\$ where \$x\$ is a substring of \$y\$. Possible to do better than \$O(n^2)\$? Consider the following algorithmic problem: Given a list of strings $$L = [s_1, s_2, \dots, s_n]$$, we want to know all pairs $$(x,y)$$ where $$x$$ is a substring of $$y$$. A trivial algorithm would be this: ``foreach x in L: foreach y in L: if x is substring of y: OUTPUT x,y `` However, this takes $$O(n^2)$$ $$x$$ substring of $$y$$ operations – I am curious to know whether there is a faster algorithm? ## Spliting strings into groups of similar strings I would like to group a list of strings into groups of strings differing by max 1 character: For instance, given: ``[John, Alibaba, Johny, Alidaba, Mary] `` I would expect three groups: ``[John, Johny], [Alibaba, Alidaba], [Mary] `` My first thought was about using some clustering algorithm with Levenshtein distance but that seems like an overkill to me. Is there a better approach? ## Constructing Generalised Suffix Tree from a large set of strings Is there a published method to construct a generalised suffix tree from a large set of strings (~ 500 000) without the need of concatenating them? I would like to use the resulting suffix tree for a pattern search problem. ## Length of strings accepted by DFA Problem: Given a DFA $$D$$, find all possible lengths of strings accepted by the $$D$$. It makes sense that these lengths can be represented as $$a_i+kb_i$$. What might be the algorithm to find all such pairs $$(a_i, b_i)$$? ## How to facilitate the export of secret strings from an offline system? I want to use Shamir’s Secret Sharing algorithm to store a randomly generated passphrase securely by spreading the secret shares on paper for example. The passphrase is generated on an offline system. I am looking for a way to ease the process of “exporting” those secrets which can be quite long (~100 hexadecimal characters). First I converted the secrets from hexadecimal to base64. That is not bad but not enough. Then I tried to compress the strings using different methods but because it is random data it does not compress well (or at all). Then I though of printing them as QR code, it works fine but the issue comes later when I need to import the secrets back, because I would need a camera. Is there anything else I could try? ## Regular expression for strings not starting with 10 How can I construct a regular expression for the language over $$\{0,1\}$$ which is the complement of the language represented by the regular expression $$10(0+1)^*$$? ## Regular expressions for set of all strings on alphabet \$\{a, b\}\$ I came across following regular expressions which equals $$(a+b)^*$$ (set of all strings on alphabet $$\{a, b\}$$): • $$(a^*+bb^*)^*$$ • $$(a^*b+b^*a)^*$$ • $$(a^*bb^*+b^*ab^*)^*(a^*b+b^*a)^*b^*a^*$$ I want to generalise different ways in which we can append to original regular expression $$(a+b)^*$$, to not to change its meaning and still get set of all strings on alphabet $$\{a, b\}$$. I think we can do this in two ways : • P1: We can concatenate anything to $$a$$ and $$b$$ inside brackets of $$(a+b)^*$$ • P2: We can concatenate $$(a+b)^*$$ with any regular expression which has star at most outer level ($$(…)^*$$) • P3: I know $$(a+b)^* = (a^*+b)^* = (a+b^*)^*= (a^*+b^*)^*$$. So I guess P1 and P2 also applies to them. Am I correct with P’s? Q. Also I know $$(a+b)^*=(a^*b^*)^*=b^*(a^*b)^*=(ab^*)^*a^*$$. Can we append some pattern of regular expressions to these also to not to change their original meaning?
2020-02-23 14:29:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 40, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3088451325893402, "perplexity": 525.96175925176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145774.75/warc/CC-MAIN-20200223123852-20200223153852-00278.warc.gz"}
https://www.mysciencework.com/publication/show/enhanced-direct-cp-violation-b-pm-0-pi-+-pi-k-pm-0-f72d3ec7
# Enhanced direct CP violation in $B^{\pm,0} \to \pi^{+} \pi^{-} K^{\pm,0}$ Authors Type Published Article Publication Date Oct 31, 2002 Submission Date Aug 22, 2002 Identifiers DOI: 10.1103/PhysRevD.66.096008 Source arXiv License Unknown External links ## Abstract We investigate in a phenomenological way, direct CP violation in the hadronic decays $B^{\pm,0} \to \pi^{+} \pi^{-} K^{\pm,0}$ where the effect of $\rho - \omega$ mixing is included. If $N_{c}^{eff}$ (the effective parameter associated with factorization) is constrained using the most recent experimental branching ratios (to $\rho^{0}K^{0}, \rho^{\pm}K^{\mp}, \rho^{\pm}K^{0}, \rho^{0} K^{\pm}$ and $\omega K^{\pm}$) from the BABAR, BELLE and CLEO Collaborations, we get a maximum CP violating asymmetry, $a_{max}$, in the range -25% to $+49%$ for $B^{-} \to \pi^{+}\pi^{-} K^{-}$ and -24% to $+55%$ for ${\Bar B}^{0} \to \pi^{+}\pi^{-} {\Bar K}^{0}$. We also find that CP violation is strongly dependent on the Cabibbo-Kobayashi-Maskawa matrix elements. Finally, we show that the sign of $\sin \delta$ is always positive in the allowed range of $N_{c}^{eff}$ and hence, a measurement of direct $CP$ violation in $B^{\pm,0} \to \pi^{+} \pi^{-} K^{\pm,0}$ would remove the mod$(\pi)$ ambiguity in ${\rm arg}[ - \frac{V_{ts}V_{tb}^{\star}}{V_{us}V_{ub}^{\star}}]$. Seen <100 times
2019-01-21 05:05:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9141932725906372, "perplexity": 1406.7166501859244}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583763149.45/warc/CC-MAIN-20190121050026-20190121072026-00026.warc.gz"}
https://rj722.github.io/articles/19/obm
# Crafting my future with OBM July 12, 2019 It’s been a couple of years maybe, when I read ‘i want 2 do project tell me wat 2 do’ - which by the way, you should too! since I first came across Operation Blue Moon (OBM), a project aimed towards time management and getting things done. It’s run single-handedly by Shakthi Kannan (~mbuf) (who is also the author of ‘i want 2 do project, tell me wat 2 do’ ). Not only does it borrows it’s name, but also the kind of disciple practiced, from our miliary counterparts. The practices here, build upon the years of experience Shakthi has dealing with people trying, failing, and trying ~harder~ again in their conquest with these utterly useful traits. DISCLAIMER: With this blog, I mean to give an overview of what were my takeaways with OBM, and in no way serves as a guide about how you should go about attempting it. One thing is for sure, it’s difficult, which creates the scarcity and you, the winner. OBM, at it’s core, provides one with a framework (the ability to define WHY, WHAT, WHEN and HOW) to align their thoughts, expectations and action practically, along with the ability to monitor your success. How does it do so? Read on. Every participant of OBM has a plan file every plan is inside a particular track (theme), eg. data scientist, devops, etc. - this serves as WHY (an emacs org file), which enlists their goals - long term, short term, secondary, etc. - all goals - The WHAT. Then, under each subsequent heading, define what tasks do you need to do in order to achieve that goal - The HOW. The tasks have the following propoerties: *** TODO Write one blog post :PROPERTIES: :ESTIMATED: 3 :ACTUAL: :OWNER: RJ722 :ID: WRITE.1562247371 :END: ESTIMATED - The time you estimate you would take for completing the task. ACTUAL - The actual amount it takes for you to complete it. OWNER, ID and TASKID are there for better visualizations. (more on this later) I’ve the following function (courtesy of mbuf) in my spacemacs config to help me generate these tasks: (define-skeleton insert-org-entry "Prompt for task, estimate and category" nil '(setq timestamp (format-time-string "%s")) ":PROPERTIES:" \n ":ESTIMATED: " estimate \n ":ACTUAL:" \n ":OWNER: RJ722" \n ":ID: " category "." timestamp \n ":TASKID: " category "." timestamp \n ":END:") The more important question is WHEN. Here’s where the ‘sprints’ chime-in. If you’re participating in OBM, you’re always sprinting (which makes perfect sense, since ideally you wouldn’t want yourself ‘unmanaging’ your time). A sprint, generally lasts for around 14-18 days. Before the sprint starts, move the tasks you want to get done in that sprint, under it’s ‘tab’ (which exists in the plan file, an example here). You also need to enter an average amount of time you can dedicate on a per day basis for this sprint. Shakthi would then move such tasks from all participants to a file dedicated to that sprint. The participants can now clock their tasks (track the amount of time they have spend doing each of these tasks) - in emacs’ org mode (org-clock-in, org-clock-out). If you have done everything until now correctly, you should have a holistic view of how well your performance was for the last sprint. And, this is the most important step. Introspection - See what you did you wrong, what factor did you forget to take into account - how much was the difference between the actual and expected time of completion. Were you not able to complete all the tasks - why? what could you do to improve your estimates? You don’t even have to do this formally (although, it helps). Just doing the work, clocking it, and sending it over is enough to spark an introspective impulse. Just stick with the plan long enough and you’d see improvements. You’d see major improvements. I want to attribute the reason for OBM’s success to it’s simplicity, but it really is the discipline, showing up everyday and doing the work. If you currently find yourself in a position, where it feels like you’re stuck - you know what you want to do, but there’s this ‘something’ stopping you, and it feels forever since you’ve been wanting to do this thing, but there’s been no tangible progress so far. Well, then OBM is exactly the right thing for you. With this framework, you’re forced to formalize things, diagnose the ‘something’ stopping you, make changes to your current schedule and to see the different between the direction you aim to go towards and the one where you’re heading (with your current planning). Last, but not the least, I want to thank Shakthi for all the energy and motivation he’s been pumping into the project himself. Thank you so much Shakthi! Have fun experiencing OBM! Crafting my future with OBM - July 12, 2019 - Rahul Jha
2019-09-16 23:07:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44204989075660706, "perplexity": 2590.9330844519486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572964.47/warc/CC-MAIN-20190916220318-20190917002318-00360.warc.gz"}
https://math.stackexchange.com/questions/2192845/continuous-dual-of-banach-space-embedded-in-hilbert-space
# Continuous dual of Banach space embedded in Hilbert space Let $W$ be a separable infinite-dimensional Banach space and let $W^*$ denote its continuous dual. Suppose that $\mu$ is a Borel measure on $W$ such that $W^*\subseteq L^2(W,\mu)$. Let $K$ denote the closure of $W^*$ in $L^2(W,\mu)$. What is an example of an element of $K$ that is not in $W^*$? I am especially interested in the special case $W=C([0,1])$ when $\mu$ is Wiener measure. Some context: Elements of $L^2(W,\mu)$ are square-integrable random variables on $W$, and elements of $W^*$ are random variables represented by bounded linear functionals. In the special case above, elements of $W^*$ are centered Gaussian random variables. $K$ is the dual of the Cameron-Martin space. I believe it consists of all centered Gaussian random variables in $L^2(W,\mu)$, but I am not sure how to construct a Gaussian random variable on $W$ that is not given by a bounded linear functional. • Sorry, there seems to be a more basic misunderstanding here. Any element of $K$ will be representable by some (a.s. defined) linear functional, just not a bounded one. The point is that any $L^2$ limit of bounded linear functionals is itself an (a.s. defined) linear functional, just not a necessarily bounded one. This is because convergence in $L^2$ implies convergence a.s. along a subsequence. – Shalop Mar 19 '17 at 3:14 • Thanks for pointing this out, this answers my question. To prevent further confusion, I also added the worded "bounded" in the final sentence of the "some context" paragraph in my question above. – pre-kidney Mar 19 '17 at 3:21 • Okay, but there is an explicit answer to your question. Elaborating on my first (deleted) comment, the functionals in $K$ which are not in $W^*$ are precisely those of the form $\phi(g) = \int_0^1 g(x)f''(dx)$ where $f \in H_0^1[0,1]$ and $f''$ exists as a distribution which is not a finite (signed) measure. – Shalop Mar 19 '17 at 4:36
2019-05-23 07:23:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9372469782829285, "perplexity": 99.94527183452644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257156.50/warc/CC-MAIN-20190523063645-20190523085645-00467.warc.gz"}
https://ora.ox.ac.uk/objects/uuid:5cdf0ce1-9f41-413a-b4b2-f44d170fc657
Journal article ### Measurement of the $Ξ_b^-$ and $Ω_b^-$ baryon lifetimes Abstract: Using a data sample of pp collisions corresponding to an integrated luminosity of $3~ \rm fb^{-1}$, the $\Xi_b^-$ and $\Omega_b^-$ baryons are reconstructed in the $\Xi_b^- \rightarrow J/\psi \Xi^-$ and $\Omega_b^- \rightarrow J/\psi \Omega^-$ decay modes and their lifetimes measured to be $\tau (\Xi_b^-) = 1.55\, ^{+0.10}_{-0.09}~{\rm(stat)} \pm 0.03\,{\rm(syst)}$ ps, $\tau (\Omega_b^-) = 1.54\, ^{+0.26}_{-0.21}~{\rm(stat)} \pm 0.05\,{\rm(syst)}$ ps. These are the most precise determin... Publication status: Published ### Authors Journal: Phys. Lett. B Volume: 736 Issue: 4 Pages: 154-162 Publication date: 2014-05-07 DOI: ISSN: 0370-2693 Source identifiers: 21618 Keywords: Pubs id: pubs:21618 UUID: uuid:5cdf0ce1-9f41-413a-b4b2-f44d170fc657 Local pid: pubs:21618 Deposit date: 2014-09-21 If you are the owner of this record, you can report an update to it here: Report update to this record TO TOP
2022-07-04 07:06:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8886332511901855, "perplexity": 2758.5824036597783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104354651.73/warc/CC-MAIN-20220704050055-20220704080055-00687.warc.gz"}
http://mymathforum.com/real-analysis/23058-ode-improper-integral.html
My Math Forum ODE & Improper Integral Real Analysis Real Analysis Math Forum December 4th, 2011, 10:54 AM #1 Member   Joined: Nov 2010 Posts: 48 Thanks: 0 ODE & Improper Integral Can someone please help me with this integral $\int_0^\infty e^{-x^2}sinx\,\mathrm{d}x$? I put F(K)=$\int_0^\infty e^{-x^2}sinx\,\mathrm{d}x$ but I have problem with the F(0). December 4th, 2011, 10:55 AM #2 Member   Joined: Nov 2010 Posts: 48 Thanks: 0 Re: ODE & Improper Integral Sorry, I put F(k)=$\int_0^\infty e^{-x^2}sinkx\,\mathrm{d}x$ but I stuck everytime in F(0). December 4th, 2011, 02:04 PM #3 Senior Member   Joined: Oct 2011 From: Belgium Posts: 522 Thanks: 0 Re: ODE & Improper Integral $\int e^{-x^2} \sin (k x) \, dx=\int e^{-x^2} \frac{\left(e^{ikx}-e^{-ikx}\right)}{2i}dx$ $=\frac{1}{2i}\int e^{-x^2+\text{ikx}}-e^{-x^2-\text{ikx}}dx$ $=\frac{1}{2i}\int e^{-\left(x-\frac{\text{ik}}{2}\right)^2-\frac{k^2}{4}}-e^{-\left(x+\frac{\text{ik}}{2}\right)^2-\frac{k^2}{4}}dx$ $=\frac{\sqrt{\pi }}{4i}e^{\frac{-k^2}{4}}\left(\text{Erf}\left(x-\frac{\text{ik}}{2}\right)-\text{Erf}\left(x+\frac{\text{ik}}{2}\right)\right )$ $\int_0^\infty e^{-x^2} \sin (k x) \, dx=\lim_{x \to \infty }\frac{\sqrt{\pi }}{4i}e^{\frac{-k^2}{4}}\left(\text{Erf}\left(x-\frac{\text{ik}}{2}\right)-\text{Erf}\left(x+\frac{\text{ik}}{2}\right)\right ) - \frac{\sqrt{\pi }}{4i}e^{\frac{-k^2}{4}}\left(\text{Erf}\left(-\frac{\text{ik}}{2}\right)-\text{Erf}\left(\frac{\text{ik}}{2}\right)\right)$ $=\lim_{x \to \infty }\frac{\sqrt{\pi }}{4i}e^{\frac{-k^2}{4}}\left(\text{Erf}\left(x-\frac{\text{ik}}{2}\right)-\text{Erf}\left(x+\frac{\text{ik}}{2}\right)\right ) + \frac{\sqrt{\pi }}{2i}e^{\frac{-k^2}{4}}\left(\text{Erf}\left(\frac{\text{ik}}{2}\ right)\right)$ $=\lim_{x \to \infty }\frac{\sqrt{\pi }}{4i}e^{\frac{-k^2}{4}}\left(\text{Erf}\left(x-\frac{\text{ik}}{2}\right)-\text{Erf}\left(x+\frac{\text{ik}}{2}\right)\right ) + \frac{\sqrt{\pi }}{2}e^{\frac{-k^2}{4}}\left(\text{Erfi}\left(\frac{\text{k}}{2}\ right)\right)$ $=\lim_{x \to \infty }\frac{\sqrt{\pi }}{4i}e^{\frac{-k^2}{4}}\left(\text{Erf}\left(x-\frac{\text{ik}}{2}\right)-\text{Erf}\left(x+\frac{\text{ik}}{2}\right)\right ) + \frac{\sqrt{\pi }}{2}e^{\frac{-k^2}{4}}\left(\text{Erfi}\left(\frac{\text{k}}{2}\ right)\right)$ $=DawsonF(\frac{k}{2})$ $\lim_{x \to \infty }\frac{\sqrt{\pi }}{4i}e^{\frac{-k^2}{4}}\left(\text{Erf}\left(x-\frac{\text{ik}}{2}\right)-\text{Erf}\left(x+\frac{\text{ik}}{2}\right)\right )=0$ because integrand goes to 0 at infinity + analytic function December 5th, 2011, 05:19 AM #4 Member   Joined: Nov 2010 Posts: 48 Thanks: 0 Re: ODE & Improper Integral I must solve the improper equation by using Ordinary Differential Equations, but I can't here is what I do: $F(k)$=$\int_0^\infty e^{-x^2}sinkx\,\mathrm{d}x$ $dF/dk$$=$$\int_0^\infty xe^{-x^2}coskx\,\mathrm{d}x$$=$$-1/2$$\int_0^\infty -2xe^{-x^2}coskx\,\mathrm{d}x$$=$$-1/2$$\int_0^\infty (e^{-x^2})'coskx\,\mathrm{d}x$=$[(-1/2)e^{-x^2}coskx]_0^\infty$$-k/2$$\int_0^\infty e^{-x^2}sinkx\,\mathrm{d}x$$=$$(-k/2)F(k)$ Thus: $dF/dk$$=$$(-k/2)F(k)$ I solve the differential equation and then: $F(k)=ce^{(-k^2)/4}$ to find $c$ I put $k=0$ and there I have problem. How I can solve the improper integral using ODE and If it can't be solved by using ODE how I can solve it using complex analysis? December 5th, 2011, 06:29 AM #5 Senior Member   Joined: Oct 2011 From: Belgium Posts: 522 Thanks: 0 Re: ODE & Improper Integral I think it should be $\frac{\mathrm{d} F}{\mathrm{d} k}=\frac{1}{2}-\frac{k}{2}F$ $F(0)=0$ December 5th, 2011, 06:45 AM   #6 Senior Member Joined: Oct 2011 From: Belgium Posts: 522 Thanks: 0 Re: ODE & Improper Integral Quote: Originally Posted by wnvl I think it should be $\frac{\mathrm{d} F}{\mathrm{d} k}=\frac{1}{2}-\frac{k}{2}F$ $F(0)=0$ Checked this ODE and the solution is indeed again $=DawsonF(\frac{k}{2})$. December 5th, 2011, 07:04 AM #7 Member   Joined: Nov 2010 Posts: 48 Thanks: 0 Re: ODE & Improper Integral why it should be $dF/dk=(1/2)-k/2F$ where I did wrong to the integral? But mostly why $F(0)$? if $k=0$ then the integral becomes $\int_0^\infty 0\,\mathrm{d}x$ which is not . December 5th, 2011, 07:15 AM   #8 Senior Member Joined: Oct 2011 From: Belgium Posts: 522 Thanks: 0 Re: ODE & Improper Integral Quote: Originally Posted by talisman why it should be $dF/dk=(1/2)-k/2F$ where I did wrong to the integral? But mostly why $F(0)$? if $k=0$ then the integral becomes $\int_0^\infty 0\,\mathrm{d}x$ which is not . you made an error in the evaluation of $[(-1/2)e^{-x^2}coskx]_0^\infty$ for 0 concerning your second question, when the integrand is zero over the entire range of integration then the integral should be zero. December 5th, 2011, 07:44 AM #9 Member   Joined: Nov 2010 Posts: 48 Thanks: 0 Re: ODE & Improper Integral Can you explain how I will find the $[(-1/2)e^{-x^2}coskx]_0^\infty$? and also why the integral of zero over the entire range of integration is zero? I asked 2 teachers in my university and told me that the integral of zero can not be defined. December 5th, 2011, 08:04 AM   #10 Senior Member Joined: Oct 2011 From: Belgium Posts: 522 Thanks: 0 Re: ODE & Improper Integral Quote: Originally Posted by talisman Can you explain how I will find the $[(-1/2)e^{-x^2}coskx]_0^\infty$? and also why the integral of zero over the entire range of integration is zero? I asked 2 teachers in my university and told me that the integral of zero can not be defined. $[(-1/2)e^{-0^2}cosk0]=(-1/2)*1*1=-1/2$ $[(-1/2)e^{-\infty^2}cosk\infty]=(-1/2)*0*(something between -1 and 1)=0$ Quote: Originally Posted by talisman and also why the integral of zero over the entire range of integration is zero? I asked 2 teachers in my university and told me that the integral of zero can not be defined. For me this is trivial, but I don't have the background to give a strict proof. Tags improper, integral, ode Thread Tools Display Modes Linear Mode Similar Threads Thread Thread Starter Forum Replies Last Post unwisetome3 Calculus 2 April 8th, 2013 10:45 AM mathrookie2012 Calculus 7 April 21st, 2012 08:14 PM chapsticks Calculus 1 February 26th, 2012 11:59 AM citbquinn Calculus 2 March 15th, 2011 10:42 AM izseekzu Calculus 1 April 13th, 2010 03:37 PM Contact - Home - Forums - Cryptocurrency Forum - Top
2017-08-18 20:11:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 55, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9588214755058289, "perplexity": 1808.8156199220887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105108.31/warc/CC-MAIN-20170818194744-20170818214744-00704.warc.gz"}
http://mathematica.stackexchange.com/questions/18017/value-of-basedirectory-within-ant-script/18020
# Value of $BaseDirectory within Ant script? I'm using an ant script to install my application from the Workbench (as suggested in this answer). In past I've used two ways of specifying the destination for the build: 1. Hard-coded directory, like /Users/me/Library/Mathematica/Applications. This would be fine for a single computer, but I store the code in a versioning system, and access it from computers having different filesystems (e.g., MacOSX laptop, linux desktop). It is inconvenient to change the ant build file for each build. 2. Setting an environment variable on each computer ($MMA_APP_DIR, say) and using <property environment="env"/> within ant to get that variable using ${env.MMA_APP_DIR}. Much better than option #1, since I don't have to change the code itself, but still an inconvenience. Mathematica's, $BaseDirectory and $UserBaseDirectory are the destinations I'm after. How can I access the values of these variables from the ant script? They are built into the Workbench framework, showing up as destination folders when using the Application View to export to an archive or to a folder. I've yet to discover how to access them. - I added a section answering your specific question on accessing $UserBaseDirectory etc. –  Leonid Shifrin Jan 18 '13 at 18:30 @LeonidShifrin, that answers the particular question, yet your property file suggestion is easier to manage, so I'll go with that. –  JxB Jan 20 '13 at 5:33 ## 1 Answer ### Ant property files I would use the Ant property files instead. Getting some Mathematica-related variables by calling Mathematica from Ant is possible (see the second part of this answer), but more complicated and error-prone too. Ant property files are exactly the mechanism used by Ant to separate the parameters that vary from machine to machine, from those which are universal. These files has the .properties extension, and are stored separately. Your project may look like Test Test Kernel init.m Test.m Scripts build.xml build.properties Test.nb Then the build.properties file may look like mathematicaInstallDir = C:/Program Files/Wolfram Research/Mathematica/8.0 mathExe = ${mathematicaInstallDir}/MathKernel.exe userBaseDirectory = C:/Users/Archie/AppData/Roaming/Mathematica and your build.xml may look like <project name="Test" basedir=".." default="build"> <property name="rootdir" value="${basedir}"/> <property name= "pacletName" value = "Test"/> <property name= "dist" value = "${rootdir}/Build/${pacletName}"/> <property file="build.properties"/> <property name="destination" value="${userBaseDirectory}"/> <target name="clean" > <delete dir = "${dist}"/> </target> <target name = "build" depends = "clean"> <mkdir dir="${dist}"/> <mkdir dir = "${dist}/Kernel"/> <copy todir="${dist}/Kernel"> <fileset dir="Test/Kernel"/> </copy> <copy file="Test/Test.m" todir="${dist}"/> </target> <target name = "undeploy"> <delete dir = "${destination}"/> </target> <!-- Copy the project to the final destination --> <target name = "deploy" depends = "undeploy, build"> <copy todir="${destination}"> <fileset dir="${dist}"/> </copy> </target> </project> What happens is that the "destination" is read in by the build.xml file from build.properties file, and used in the targets defined after that. The build.properties file you don't commit to the version control, it is different for different machines. This is the standard practice with Ant, and I think this will be a better solution than trying to automate things further, call Mathematica from Ant, etc. ### How to communicate results from Mathematica to Ant If you really want it, there is a way to communicate the results from Mathematica to Ant. I will show a simple example - the $UserBaseDirectory will be assigned to some Ant property. Here are two additional targets to add to the build.xml script, which illustrate this: <property name="jlinkpath" value="${mathematicaInstallDir}/SystemFiles/Links/JLink/"/> <target name="initMathematicaTask" unless="JLinkLoaded"> <path id="jlink.lib"> <fileset dir="${jlinkpath}"> <include name="**/JLink.jar"/> <include name="**/SystemFiles/*"/> </fileset> </path> <!-- Load JLink --> <taskdef name="mathematica" classname="com.wolfram.jlink.util.MathematicaTask" > <classpath refid="jlink.lib"/> </taskdef> <property name="JLinkLoaded" value="true"/> </target> <target name = "testUserBaseDirSet" depends = "initMathematicaTask"> <mathematica exe="${mathExe}" fresh="true" quit="true"> <![CDATA[ AntSetProperty["userBaseDir", ToString[$UserBaseDirectory]]; ]]> </mathematica> <echo message="The vaue of 'userBaseDir' is: \${userBaseDir} "/> </target> The property "jlinkpath" you can set at any place after the "mathematicaInstallDir" property has been defined (you will need this one in any case, and it has been defined in my example in the build.properties file). The "initMathematicaTask" target is an auxiliary target to init Mathematica. With it, one can use Mathematica in the Ant builds. The target "testUserBaseDirSet" illustrates how you can run Mathematica code by Ant and set Ant variables (properties) from within Mathematica. To see a larger example of this, search for notebook.xml, which is a part of the documentation build script and should reside somewhere in the Workbench distribution. - Thanks Leonid, +1! –  JxB Jan 20 '13 at 5:31
2015-05-24 13:30:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49354425072669983, "perplexity": 9106.865739780067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928019.31/warc/CC-MAIN-20150521113208-00226-ip-10-180-206-219.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/350270/shortstack-with-empty-line
# \shortstack with empty line How can I force \shortstack to preserve an empty line? For example, \shortstack[c]{hoge \\ foo \\ bar} \shortstack[c]{hoge \\ \\ bar} provides . But I would like to keep the empty line in the middle. • Do you need to use \shortstack? Why not just a tabular (which will preserve line heights and empty rows)? – Werner Jan 24 '17 at 20:22 \shortstack is really unsuitable for uniformly stacking characters, because it just takes into account the character's height and depth, adding just 3pt between them. Look at the result with \documentclass{article} \begin{document} \shortstack{a\\c\\e} \shortstack{g\\y\\q} \shortstack{fg\\ly\\pt} \shortstack{l\\a\\f} \end{document} The result is surely not what you expected it to be. The command is only useful when its argument consists of capital letters only (without diacritics). \documentclass{article} \newcommand{\bettershortstack}[2][c]{% \begin{tabular}[b]{@{}#1@{}}#2\end{tabular}% } \begin{document} \bettershortstack{a\\c\\e} \bettershortstack{g\\y\\q} \bettershortstack{fg\\ly\\pt} \bettershortstack{l\\a\\f} \end{document} \documentclass{article} \newcommand{\bettershortstack}[2][c]{% \begin{tabular}[b]{@{}#1@{}}#2\end{tabular}% } \begin{document} \bettershortstack{hoge \\ foo \\ bar} \bettershortstack{hoge \\ \\ bar} \end{document} For things to line up properly, use a \vphantom, since line heights are relative to the entry: \documentclass{article} \usepackage{stackengine} \begin{document} \shortstack[c]{hoge \\ foo \\ bar} \shortstack[c]{hoge \\ \\ bar} \shortstack[c]{hoge \\ \vphantom{foo} \\ bar} \end{document}
2021-01-18 14:58:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 3452.1303270297553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514796.13/warc/CC-MAIN-20210118123320-20210118153320-00130.warc.gz"}
https://www.physicsforums.com/threads/confirm-this-problem-which-involves-friction.118054/
# Confirm this problem which involves friction 1. Apr 19, 2006 ### danielI God day! The magnitude of force P is slowly increased. Does the homogeneous box of mass $$m$$ slip or tip first? State the value of P which would cause each occurence. Neglect any effect of the size of the small feet. http://img75.imageshack.us/img75/629/friction5mb.png [Broken] This is my work which I wish anyone could check. $$N + P_y - mg = 0\Rightarrow N = mg - P_y$$ We also know that $$P_y = P\sin30\Rightarrow N = mg - P\sin30$$ The resultant force of the forces that will affect the body in the y-axis is $$R_y = mg - P\sin30$$ And in the x-axis it will be the force that is pulling the body minus the friction, i.e., $$R_x = P\cos30 - \frac{mg-P\sin30}{2}$$ Now is everything correct? Could I have missed some force or misscalculated something? My strategy is now to check when $$R_x = 0$$ and $$R_y = 0$$, that is, before the storm breaks loose. $$R_y = 0$$ for $$P = 2mg$$ $$R_x = 0$$ for $$P = \frac{4mg}{4\cos30+1}$$ Since $$\frac{4mg}{4\cos30+1}\leq 2mg$$ it will be starting to move in x-direction before tilting. And it will do this for $$P > 2mg$$ Thank you and have a god day! Last edited by a moderator: May 2, 2017 2. Apr 19, 2006 ### durt You'll have to consider the torques when it tilts. 3. Apr 19, 2006 ### danielI Hello durt Okay, since the tilting part was wrong we keep the $$R_x$$ and remove the $$R_y$$. Let the blue dot be origo. We will have two torques, the one by gravity and the one created by the force P. We start of with the one created by gravity. $$M_1 = mgd$$ This will be counterclockwise. Then we look at the other one $$M_2 = P_yz$$ we know that $$x = d/\tan30$$. This gives $$z = (2d + d/\tan30)\sin30 = (d + d/(2\tan30))$$ So, $$M_2 = P(d + d/(2\tan30))$$ Which is clockwise. We set $$M_1=M_2$$ and get $$P = \frac{mg}{1+\frac{1}{2tan30}}$$ Then the body will tilt first if $$\frac{1}{1+\frac{1}{2tan30}} < \frac{1}{\cos30 + 1/4}$$ which clearly is true (well, if you use the calculator ). And hence, the body will tilt when $$P > \frac{mg}{1+\frac{1}{2tan30}}$$ #### Attached Files: • ###### Copy of friction.PNG File size: 11.1 KB Views: 169 Last edited: Apr 20, 2006
2018-01-18 22:04:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7363182306289673, "perplexity": 1337.3635713434435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887621.26/warc/CC-MAIN-20180118210638-20180118230638-00267.warc.gz"}
http://math.stackexchange.com/questions/291160/find-a-relation-between-and-y-that-does-not-involve-logarithms
# Find a relation between and y that does not involve logarithms Could I please have a solution to this, I've spent an hour on it so far -_- Thanks in advance. $$\log_{10}(1+y) - \log_{10}( 1-y) = x$$ - do you mean $\log_{10} (1 + y) - \log_{10} (1-y) = x$? – Calvin Lin Jan 31 '13 at 6:10 We want $\log_{10} \frac {1+y}{1-y} = x$, which gives $\frac {1+y}{1-y} = 10^x$, which gives $$y = \frac {10^x -1}{10^x + 1}.$$ You can either expand it directly to make $y$ the subject of the formula, or use the 'trick' that if $\frac {a}{b} = \frac {c}{d}$, then $\frac {a-b}{a+b} = \frac {c-d}{c+d}$. – Calvin Lin Jan 31 '13 at 6:17 @197 Multiply both sides by $1-y$ to get $10^x-10^xy=1+y$, collect like terms to get $(10^x+1)y=10^x-1$, then divide to get $y$ alone. – Mike Jan 31 '13 at 7:00
2016-07-27 05:45:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9023111462593079, "perplexity": 266.9020264227368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257825366.39/warc/CC-MAIN-20160723071025-00075-ip-10-185-27-174.ec2.internal.warc.gz"}
https://doc.simo.com.hk/Functions/transpose/
• A should be a matrix and it returns the transpose of A. • It gives the same result as using the transpose operator ', i.e., A'.
2018-12-14 00:22:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.643500030040741, "perplexity": 682.4483145654413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825123.5/warc/CC-MAIN-20181214001053-20181214022553-00153.warc.gz"}
https://cs.stackexchange.com/questions/64951/methods-to-prevent-overfitting
# Methods to prevent overfitting I am aware of three approaches to prevent over-fitting of data when trying to model it on a neural net. The first two approaches I know suggest to train on more data and employ bootstrap aggregating. The third approach is to use a model that has the right capacity, one that has enough to fit the true regularities but not the weaker/dubious regularities (the noise). However, I don't quite understand this 2nd approach. How do we create a distinction and choose a model that detects this? Can someone expand on the second approach? I am aware of ways to limit the capacity such as early stopping(which is self explanatory), penalty for peculiar weights, etc. But an explanation on why these specific methods limit the capacity to fit the right data but not the wrong one would be helpful. • You mentioned :"second approaches(2) suggest to train on more data and employ bootstrap aggregating". Can you give more info about this approach? Did you read it from a research paper/journal etc? – Alwyn Mathew Oct 22 '16 at 10:46 • "this 2nd approach" - what do you mean by this? are you referring to bootstrap aggregating? If so, what research have you done? What have you read so far? Have you read en.wikipedia.org/wiki/Bootstrap_aggregating? What gave you the idea that bootstrap aggregating reduces over-fitting? Or, are you asking about reducing capacity? If so, it's confusing to call it "the third approach" and then refer to it as "this 2nd approach". What do you mean by "create a distinction and choose a model that detects this"? – D.W. Oct 22 '16 at 10:52 • Also, what's your specific question? "Can someone expand on the second approach?" is vague and open-ended. We seek narrowly focused technical questions that can be answered in a few paragraphs. What specifically are you confused about? Also, it sounds like you have in mind a whole list of approaches that fall under that category and want us to explain something about all of them. That seems too broad; it amounts to multiple different questions. Finally, there's lots written about overfitting; what have you read? – D.W. Oct 22 '16 at 10:55 • stats.stackexchange.com/a/187700/2921 – D.W. Oct 22 '16 at 11:10 • I think OP is referring to "The third approach", i.e. "choosing a model with the right capacity". – Ariel Oct 22 '16 at 11:42 First, convince yourself why complicated models can cause overfitting. Suppose I'm trying to learn some real valued function $f$, and I'm given $n$ samples of the form $S=(x_1,y_1),...,(x_n,y_n)$. If i choose to model my data by some polynomial (of arbitrary degree) then for any set of samples $S$, i can find a function in my model with zero training error, although it is likely to have high generalization error (a degree $n$ polynomial passing through all points in $S$ can always be found, regardless of any structure of the points in $S$). Perhaps using polynomials of degree$\le 3$ will be good enough to model my data (it doesn't suffer from the previous problem). Why 3? I don't know, this depends on the specific types of functions I'm trying to learn. A more concrete example, suppose you want to learn what is the appropriate dosage of some medicine, and you are given samples of the form $(x_1,y_1),...,(x_n,y_n)$ where $x_i\in\mathbb{R}$ denotes the dosage, and $y_i\in\left\{\text{'good'},\text{'bad'}\right\}$ tells you whether the dosage $x_i$ worked (it can be too high or too low). Allowing your algorithm to choose some arbitrary function can result in strange outputs. For example if you received the samples $(1,\text{'bad'}),(2,\text{'good'}),(3,\text{'good'})$ then a function $f:\mathbb{R}\rightarrow\left\{-1,1\right\}$ which satisfies $f(1)=-1,f(2)=f(3)=1$, but is random everywhere else, is consistent with the samples. Such $f$ will probably have no indication on the true effect of the medicine. Now, I'm no doctor, but perhaps it is worth considering functions of the form: $I_{a,b}=\begin{cases} +1, & x\in [a,b] \\ -1, & \text{otherwise} \end{cases}$.
2019-12-12 07:06:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7236237525939941, "perplexity": 485.6228319534098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540537212.96/warc/CC-MAIN-20191212051311-20191212075311-00280.warc.gz"}
https://ncertmcq.com/selina-concise-mathematics-class-10-icse-solutions-chapter-4-linear-inequations-ex-4a/
## Selina Concise Mathematics Class 10 ICSE Solutions Chapter 4 Linear Inequations (In one variable) Ex 4A These Solutions are part of Selina Concise Mathematics Class 10 ICSE Solutions. Here we have given Selina Concise Mathematics Class 10 ICSE Solutions Chapter 4 Linear Inequations Ex 4A. Other Exercises Question 1. State, true or false : Solution: Question 2. State whether the following statements are true or false: (i) If a < b, then a – c < b – c (ii) If a > b, then a + c > b + c (iii) If a < b, then ac > bc (iv) If a > b, then $$\frac { a }{ b }$$ < $$\frac { b }{ c }$$ (v) If a – c > b – d; then a + d > b + c (vi) If a < b, and c > 0, then a – c > b – c where a, b, c and d are real numbers and c ≠ 0. Solution: (i) True (ii) True (iii) False (iv) False (v) True (vi) False Question 3. If x ∈ N, find the solution set of inequations, (i) 5x + 3 ≤ 2x + 18 (ii) 3x – 2 < 19 – 4x Solution: x = {1, 2} Question 4. If the replacement set is the set of whole numbers, Solve: (i) x + 7 ≤ 11 (ii) 3x – 1 > 8 (iii) 8 – x > 5 (iv) 7 – 3x ≥ – $$\frac { 1 }{ 2 }$$ (v) x – $$\frac { 3 }{ 2 }$$ < $$\frac { 3 }{ 2 }$$ – x (vi) 18 ≤ 3x – 2 Solution: Question 5. Solve the inequation : 3 – 2x ≥ x – 12 given that x ∈ N. [1987] Solution: 3 – 2x ≥ x – 12 ⇒ – 2x – x ≥ – 12 – 3 ⇒ – 3x ≥ -15 ⇒ – x ≥ – 5 ⇒ x ≤ 5 Solution Set= {1, 2, 3, 4, 5} or {x ∈ N : x ≤ 5} Question 6. If 25 – 4x ≤ 16, find: (i) the smallest value of x, when x is a real number (ii) the smallest value of x, when x is an integer. Solution: 25 – 4x ≤ 16 ⇒ – 4x ≤ 16 – 25 ⇒ – 4x ≤ – 9 ⇒ 4x ≥ 9 x ≥ $$\frac { 9 }{ 4 }$$ (i) The smallest value of x, when x is a real number $$\frac { 9 }{ 4 }$$ or 2.25 (ii) The smallest value of x, when x is an integer 3. Question 7. If the replacement set is the set of real numbers, solve: Solution: Question 8. Find the smallest value of x for which 5 – 2x < 5$$\frac { 1 }{ 2 }$$ – $$\frac { 5 }{ 3 }$$ x, where x is an integer. Solution: Question 9. Find the largest value of x for which 2 (x – 1) ≤ 9 – x and x ∈ W. Solution: 2 (x – 1) ≤ 9 – x ⇒ 2x – 2 ≤ 9 – x ⇒ 2x + x ≤ 9 + 2 ⇒ 3x ≤ 11 ⇒ x ≤ $$\frac { 11 }{ 3 }$$ ⇒ x ≤ 3$$\frac { 2 }{ 3 }$$ x ∈ W and value of x is largest x = 3 Question 10. Solve the inequation: 12 + 1$$\frac { 5 }{ 6 }$$ x ≤ 5 + 3x and x ∈ R. (1999) Solution: Question 11. Given x ∈ (integers), find the solution set of: -5 ≤ 2x – 3 < x + 2. Solution: -5 ≤ 2x – 3 < x + 2 (i) -5 ≤ 2x – 3 ⇒ -2x ≤ -3 + 5 ⇒ -2x ≤ 2 ⇒ x ≤ -1 ⇒ -1 ≤ x (ii) 2x – 3 < x + 2 ⇒ 2x – x < 2 + 3 ⇒ x < 5 From (i) and (ii), -1 ≤ x < 5 x = {-1, 0, 1, 2, 3, 4} Question 12. Given x ∈ (whole numbers), find the solution set of: -1 ≤ 3 + 4x < 23. Solution: -1 ≤ 3 + 4x < 23 (i) -1 ≤ 3 + 4x ⇒ -1 – 3 ≤ 4x ⇒ -4 < 4x ⇒ -1 ≤ x (ii) 3 + 4x < 23 ⇒ 4x < 23 – 3 ⇒ 4x < 20 ⇒ x < 5 From (i) and (ii) -1 ≤ x < 5 and x ∈ W Solution set = {0, 1, 2, 3, 4} Hope given Selina Concise Mathematics Class 10 ICSE Solutions Chapter 4 Linear Inequations Ex 4A are helpful to complete your math homework. If you have any doubts, please comment below. Learn Insta try to provide online math tutoring for you.
2022-12-10 09:38:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23434123396873474, "perplexity": 1691.1654267699662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710421.14/warc/CC-MAIN-20221210074242-20221210104242-00261.warc.gz"}
https://blog.xa0.de/post/Kernels%20---%20Additional-Intuition/
If you're here, you probably checked out a few blog posts or Wikipedia to understand Kernels. So this is just a few additional thoughts. Firstly, why even do this 'kernel trick'? The application we will be looking at in this post is clustering a set of points. For this, it is of course important to have some kind of distance measure between the points, in fact that is the main function needed. Consider a dataset of 4-dimensional points. Pick a point (x1,x2,x3,x4). For the sake of this example you find that to separate the points in multiple clusters you will have to move them into the 10-dimensional space using the following formula: $(\sum_i^4 x_iy_i)^2$ Fully expanded, each of the 4-dimensional points is now the following 10-dimensional point: $\{x_1^2 y_1^2 , 2 x_1 x_2 y_2 y_1 , 2 x_1 x_3 y_3 y_1 , 2 x_1 x_4 y_4 y_1 , x_2^2 y_2^2 , x_3^2 y_3^2 , x_4^2 y_4^2 , 2 x_2 x_3 y_2 y_3 , 2 x_2 x_4 y_2 y_4 , 2 x_3 x_4 y_3 y_4\}$ You could just convert all points into this and then apply a clustering algorithm, which will use some kind of distance measure on those points such as euclidean distance, but observe all these calculations.. For each point you have to do $10$ calculations. Now imagine if you did not have to convert all points. You use the polynomial kernel: $(\sum_i^4 x_iy_i)^2 = (x^Ty)^2$ The crucial point here is that the kernel gives us a kind of similarity measure (due to the dot product), which is what we want. Since this is basically the only thing we need for a successful clustering, using a kernel works here. Of course, if you needed the higher dimension for something more complicated, a kernel would not suffice any longer. The second crucial point here is that calculating the kernel is much faster. You do not have to first convert all the points to 10 dimensions and then apply some kind of distance measure, no, you do that in just one dot operation, i.e. one sum loop. To be precise, you iterate $N = 4$ times with a kernel, but you do it $N*2.5=10$ times with the naive way of converting points to higher dimension. Published on
2021-03-01 07:02:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6141388416290283, "perplexity": 249.98790201757683}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362133.53/warc/CC-MAIN-20210301060310-20210301090310-00503.warc.gz"}
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-the-central-science-13th-edition/chapter-3-chemical-reactions-and-reaction-stoichiometry-exercises-page-113/3-10a
## Chemistry: The Central Science (13th Edition) $CaO(s)+H_2O(l)\rightarrow Ca(OH)_2(aq)$ $CaO(s)+H_2O(l)\rightarrow Ca(OH)_2(aq)$ This equation is balanced because each side includes 2 Ca, 2 O and 2 H atoms.
2019-11-12 03:05:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21388433873653412, "perplexity": 6407.839297783171}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664567.4/warc/CC-MAIN-20191112024224-20191112052224-00397.warc.gz"}
https://stats.stackexchange.com/questions/529653/how-to-get-the-weights-of-coefficients-of-treatment-effect-from-de-chaisemartin
# How to get the weights of coefficients of treatment effect from De Chaisemartin,2020? Thanks to the suggestion from @Ariel in this discussion, I visit this paper and face a problem. The DID equation is $$Y_{i,g,t} = \alpha_g + \alpha_t + \beta D_{g,t}+e_{igt}$$ $$Y_{i,g,t}$$ is the outcome (dependent variable) of unit $$i$$ in group $$g$$ at period $$t$$ on group fixed effects, period fixed effects, and $$D_{g,t}$$ is the treatment in group $$g$$ at period $$t$$ And, from the page 2965, they describe $$\beta$$ as $$\beta = \mathbb{E}{\Large{[}}\sum_{(g,t);D_{g,t}=1}w_{g,t}\Delta_{g,t}\Large{]}$$ where $$\Delta_{g,t}$$ is the average treatment effect (ATE) in group $$g$$ and period $$t$$ and the weight $$w_{g,t}$$ sum to 1 but maybe negative. I have two questions as below: 1> From this discussion, ATE should be the average treatment over time rather than the average treatment of a group $$g$$ at a specific time $$t$$. 2> I did not see where in this paper the author mentioned the way to calculate $$w_{g,t}$$ so I did not understand why its sum is 1 but maybe negative
2021-09-21 12:24:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 17, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.789652407169342, "perplexity": 458.4224823422729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057202.68/warc/CC-MAIN-20210921101319-20210921131319-00418.warc.gz"}
https://www.mathdoubts.com/cos-sum-to-product-identity-proof/
Proof of Sum to Product identity of Cosine functions The sum to product identity of cosine functions is written in trigonometry popularly in any one of the following three forms. $(1). \,\,\,$ $\cos{\alpha}+\cos{\beta}$ $\,=\,$ $2\cos{\Bigg(\dfrac{\alpha+\beta}{2}\Bigg)}\cos{\Bigg(\dfrac{\alpha-\beta}{2}\Bigg)}$ $(2). \,\,\,$ $\cos{x}+\cos{y}$ $\,=\,$ $2\cos{\Bigg(\dfrac{x+y}{2}\Bigg)}\cos{\Bigg(\dfrac{x-y}{2}\Bigg)}$ $(3). \,\,\,$ $\cos{C}+\cos{D}$ $\,=\,$ $2\cos{\Bigg(\dfrac{C+D}{2}\Bigg)}\cos{\Bigg(\dfrac{C-D}{2}\Bigg)}$ Now, let’s learn how to derive the sum to product transformation identity of cosine functions. If $\alpha$ and $\beta$ represent the angles of right triangles, then the cosine of angle alpha is written as $\cos{\alpha}$ and cosine of angle beta is written as $\cos{\beta}$ in mathematics. Express the Addition of Cosine functions Write the both cosine functions in a row by displaying a plus sign between them to express the addition of the cosine functions in mathematical form. $\implies$ $\cos{\alpha}+\cos{\beta}$ Expand each cosine function in the expression Let’s take that $\alpha = a+b$ and $\beta = a-b$. Now, replace the equivalent values of $\alpha$ and $\beta$ in the trigonometric expression. $\implies$ $\cos{\alpha}+\cos{\beta}$ $\,=\,$ $\cos{(a+b)}$ $+$ $\cos{(a-b)}$ The cosines of compound angles can be expanded by the angle sum and angle difference trigonometric identities of cosine functions. $\implies$ $\cos{\alpha}+\cos{\beta}$ $\,=\,$ $\Big(\cos{a}\cos{b}$ $-$ $\sin{a}\sin{b}\Big)$ $+$ $\Big(\cos{a}\cos{b}$ $+$ $\sin{a}\sin{b}\Big)$ Simplify the Trigonometric expression Let’s focus on simplifying the trigonometric expression in the right hand side of the equation by the fundamental mathematical operations. $=\,\,\,$ $\cos{a}\cos{b}$ $-$ $\sin{a}\sin{b}$ $+$ $\cos{a}\cos{b}$ $+$ $\sin{a}\sin{b}$ $=\,\,\,$ $\cos{a}\cos{b}$ $+$ $\cos{a}\cos{b}$ $-$ $\sin{a}\sin{b}$ $+$ $\sin{a}\sin{b}$ $=\,\,\,$ $2\cos{a}\cos{b}$ $+$ $\require{cancel} \cancel{\sin{a}\sin{b}}$ $-$ $\cancel{\sin{a}\sin{b}}$ $\,\,\, \therefore \,\,\,\,\,\,$ $\cos{\alpha}+\cos{\beta}$ $\,=\,$ $2\cos{a}\cos{b}$ The trigonometric expression is successfully simplified and it expresses the transformation of sum of the cosine functions into product form. However, the product form is in terms of $a$ and $b$. Hence, we should express them in terms of $\alpha$ and $\beta$. We have assumed that $\alpha = a+b$ and $\beta = a-b$. Now, let’s evaluate $a$ and $b$ in terms of $\alpha$ and $\beta$. They are actually calculated by the fundamental operations of the mathematics. Add both algebraic equations firstly to evaluate the value of $a$. $\implies$ $\alpha+\beta$ $\,=\,$ $(a+b)+(a-b)$ $\implies$ $\alpha+\beta$ $\,=\,$ $a+b+a-b$ $\implies$ $\alpha+\beta$ $\,=\,$ $a+a+b-b$ $\implies$ $\alpha+\beta$ $\,=\,$ $2a+\cancel{b}-\cancel{b}$ $\implies$ $\alpha+\beta \,=\, 2a$ $\implies$ $2a \,=\, \alpha+\beta$ $\,\,\, \therefore \,\,\,\,\,\,$ $a \,=\, \dfrac{\alpha+\beta}{2}$ Now, subtract the equation $\beta = a-b$ from the equation $\alpha = a+b$ to evaluate $b$ in terms of $\alpha$ and $\beta$. $\implies$ $\alpha-\beta$ $\,=\,$ $(a+b)-(a-b)$ $\implies$ $\alpha-\beta$ $\,=\,$ $a+b-a+b$ $\implies$ $\alpha-\beta$ $\,=\,$ $a-a+b+b$ $\implies$ $\alpha-\beta$ $\,=\,$ $\cancel{a}-\cancel{a}+2b$ $\implies$ $\alpha-\beta \,=\, 2b$ $\implies$ $2b \,=\, \alpha-\beta$ $\,\,\, \therefore \,\,\,\,\,\,$ $b \,=\, \dfrac{\alpha-\beta}{2}$ Therefore, we have successfully calculated both $a$ and $b$. Now, substitute them in the trigonometric equation $\cos{\alpha}+\cos{\beta}$ $\,=\,$ $2\cos{a}\cos{b}$. $\,\,\, \therefore \,\,\,\,\,\,$ $\cos{\alpha}+\cos{\beta}$ $\,=\,$ $2\cos{\Big(\dfrac{\alpha+\beta}{2}\Big)}\cos{\Big(\dfrac{\alpha-\beta}{2}\Big)}$ Therefore, the sum of the cosine functions is successfully transformed into product form of the trigonometric functions and this equation is called the sum to product identity of cosine functions. Thus, we can prove the sum to product transformation identity of cosine functions in terms of $x$ and $y$ and also in terms of $C$ and $D$ by following the same procedure. Latest Math Topics A best free mathematics education website for students, teachers and researchers. Maths Topics Learn each topic of the mathematics easily with understandable proofs and visual animation graphics. Maths Problems Learn how to solve the maths problems in different methods with understandable steps. Learn solutions Subscribe us You can get the latest updates from us by following to our official page of Math Doubts in one of your favourite social media sites.
2021-12-07 01:50:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8657525777816772, "perplexity": 183.47075554255943}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363332.1/warc/CC-MAIN-20211207014802-20211207044802-00584.warc.gz"}
https://iacr.org/cryptodb/data/paper.php?pubkey=28792
## CryptoDB ### Paper: Optimizing Authenticated Garbling for Faster Secure Two-Party Computation Authors: Jonathan Katz Samuel Ranellucci Mike Rosulek Xiao Wang DOI: 10.1007/978-3-319-96878-0_13 (login may be required) Search ePrint Search Google CRYPTO 2018 Wang et al. (CCS 2017) recently proposed a protocol for malicious secure two-party computation that represents the state-of-the-art with regard to concrete efficiency in both the single-execution and amortized settings, with or without preprocessing. We show here several optimizations of their protocol that result in a significant improvement in the overall communication and running time. Specifically:We show how to make the “authenticated garbling” at the heart of their protocol compatible with the half-gate optimization of Zahur et al. (Eurocrypt 2015). We also show how to avoid sending an information-theoretic MAC for each garbled row. These two optimizations give up to a 2.6$\times$× improvement in communication, and make the communication of the online phase essentially equivalent to that of state-of-the-art semi-honest secure computation.We show various optimizations to their protocol for generating AND triples that, overall, result in a 1.5$\times$× improvement in the communication and a 2$\times$× improvement in the computation for that step. ##### BibTeX @inproceedings{crypto-2018-28792, title={Optimizing Authenticated Garbling for Faster Secure Two-Party Computation}, booktitle={Advances in Cryptology – CRYPTO 2018}, series={Lecture Notes in Computer Science}, publisher={Springer}, volume={10993}, pages={365-391}, doi={10.1007/978-3-319-96878-0_13}, author={Jonathan Katz and Samuel Ranellucci and Mike Rosulek and Xiao Wang}, year=2018 }
2023-03-21 18:09:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4261209964752197, "perplexity": 5929.036118420257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00559.warc.gz"}
https://stats.stackexchange.com/questions/180703/skewness-why-is-this-distribution-right-skewed?answertab=votes
# Skewness - why is this distribution right skewed? If I interpret the accepted answer here correctly, I would say that the distribution below is right skewed (because of the points above the straight line on the far right). I don't really understand this however. My thinking is that: • The sample distribution (on the y-axis) 'wins quantiles faster' on the far right than the theoretical distribution (on the x-axis). • Hence there is higher density on the right for the sample distribution than for the theoretical distribution. • Hence the distribution is left skewed (like on the image below). I have two questions: 1. What is wrong about my reasoning? Can anyone please give me a step by step argument? 2. Could anyone please give me an example of what would happen if the points on the far left would deviate from the straight line (i.e. when would the distribution then be left/right skewed)?
2019-09-16 16:55:11
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8001383543014526, "perplexity": 409.1603451543254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572879.28/warc/CC-MAIN-20190916155946-20190916181946-00406.warc.gz"}
http://www.mathguru.com/level1/decimals-2008093000032051.aspx
If you like what you see in Mathguru Subscribe Today For 12 Months US Dollars 12 / Indian Rupees 600 Available in 20 more currencies if you pay with PayPal. Buy Now No questions asked full moneyback guarantee within 7 days of purchase, in case of Visa and Mastercard payment Post to: Explanation: Decimal The decimal numeral system (also called base ten or occasionally denary) has ten as its base. It is the numerical base most widely used by modern civilizations. A decimal is a tenth part, and decimals become a series of nested tenths http://en.wikipedia.org/wiki/Decimal Kilogram The kilogram (symbol: kg) is the base unit of mass in the International System of Units (SI, from the French Le Système International d'Unitès), which is the modern standard governing the metric system. The kilogram is defined as being equal to the mass of the International Prototype Kilogram (IPK), which is almost exactly equal to the mass of one liter of water. ## SI multiples SI multiples for gram (g) Submultiples Multiples Value Symbol Name Value Symbol Name 10−1 g dg decigram 101 g dag decagram 10−2 g cg centigram 102 g hg hectogram 10−3 g mg milligram 103 g kg kilogram(Our solved example in mathguru.com uses this concept) 10−6 g µg microgram (mcg) 106 g Mg megagram (tonne) 10−9 g ng nanogram 109 g Gg gigagram 10−12 g pg picogram 1012 g Tg teragram 10−15 g fg femtogram 1015 g Pg petagram 10−18 g ag attogram 1018 g Eg exagram 10−21 g zg zeptogram 1021 g Zg zettagram 10−24 g yg yoctogram 1024 g Yg yottagram Common prefixes are in bold face. http://en.wikipedia.org/wiki/Kilogram 3 + 2 = 5 with apples Addition is a mathematical operation that represents combining collections of objects together into a larger collection. It is signified by the plus sign (+). For example, in the picture on the right, there are 3 + 2 apples-meaning three apples and two other apples-which is the same as five apples. Therefore, 3 + 2 = 5. (Our solved example in mathguru.com uses this concept) Besides counting fruits, addition can also represent combining other physical and abstract quantities using different kinds of numbers: numbers, fractions, irrational numbers, vectors, decimals and more. # Subtraction "5 − 2 = 3" (verbally, "five minus two equals three") An example problem Subtraction is one of the four basic binary arithmetic operations; it is the inverse of addition, meaning that if we start with any number and add any number and then subtract the same number we added, we return to the number we started with. Subtraction is denoted by a minus sign in infix notation. Subtraction is used to model four related processes: 1.  From a given collection, take away (subtract) a given number of objects. For example, 5 apples minus 2 apples leaves 3 apples. 2.  From a given measurement, take away a quantity measured in the same units. If I weigh 200 pounds, and lose 10 pounds, then I weigh 200 − 10 = 190 pounds. 3.  Compare two like quantities to find the difference between them. For example, the difference between $800 and$600 is $800 −$600 = \$200. Also known as comparative subtraction. 4.  To find the distance between two locations at a fixed distance from starting point. For example if, on a given highway, you see a mileage marker that says 150 miles and later see a mileage marker that says 160 miles, you have traveled 160 − 150 = 10 miles. (Our solved example in mathguru.com uses this concept) http://en.wikipedia.org/wiki/Subtraction
2018-06-23 00:28:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45898717641830444, "perplexity": 2586.2500267121573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864848.47/warc/CC-MAIN-20180623000334-20180623020334-00507.warc.gz"}
http://math.stackexchange.com/questions/411134/complex-equation-to-be-solved
# complex equation to be solved I need to find all solutions to the complex equation $e^{1/z} = \sqrt{e}$ Then I need to show that all these solutions are on the circle $|z-1|=1$ Using the fact that $e^{2\pi i}=1$, I solved the equation to find $z = \frac{2}{1-4ik\pi}$ but that is not what's in the back of my book. Any help from the math community is welcome. - ...and what's what is in your book's back, if may we ask? – DonAntonio Jun 4 '13 at 16:04 z=(2-8kpi*i)/(1+16k²pi²), but when I put my solution set in the equation of the circle, and calculate the absolute value, it comes out to be 1!!! – imranfat Jun 4 '13 at 16:07 I've no idea how they did reach that answer ... but ...it may depend on some specific branch of the square root function or something like that...? Are you sure you copied correctly the equality? – DonAntonio Jun 4 '13 at 16:12 Absolutely, I triple checked. The book's answer made no sense to me, but along it says that"it is easy to show that it lies on the specified circle" so I did not look up at the wrong answer section either. I am pretty confident with this material, so when I found such a different answer in the back→hence this post. – imranfat Jun 4 '13 at 16:17 The book's answer and your answer agree. Just multiply the numerator and the denominator of your answer by $1+4ik\pi$ and then replace $k$ with $-k$ to get the book's answer. – Thomas Andrews Jun 4 '13 at 16:22 Hints: $$e^{\frac1z}=e^{\frac12}\iff \frac1z-\frac12=2k\pi i\;,\;\;k\in\Bbb Z\;\ldots\ldots$$ - Yes, although your approach slightly different (but cool nonetheless), it comes down to the same answer as what I had. I think something in the back of the book ain't right – imranfat Jun 4 '13 at 16:14 Two things: (1) what book is that, and (2) Be sure to be checking the answer to the correct section in the correwc t chapter and in the correct section. – DonAntonio Jun 4 '13 at 16:16 It is a course book written by the faculty of my school (abroad) so it is not published or anything. Mistakes can happen so to say. Looking at your answer as well as Andre's, I think the back of the book is wrong. – imranfat Jun 4 '13 at 16:33 My calculation gives $z=\frac{2}{1+4k\pi i}$. Of course that is essentially the same as your answer. When we subtract $1$, we get $\frac{1-4k\pi i}{1+4k \pi i}$, which has norm $1$. - Your answer's exactly the same as mine, but why did you substract one? To check whether the solutions lay on some circle? – DonAntonio Jun 4 '13 at 16:19 Yes, it is the verification that $|z-1|=1$. – André Nicolas Jun 4 '13 at 16:26 Allright, thanks! I am think I am good now. This question is solved... – imranfat Jun 4 '13 at 16:34 Your answer and the book's answer are essentially the same since: $$\frac{2}{1-4ik\pi} \cdot \frac{1+4ik\pi}{1+4ik\pi}= \frac{2+8ik\pi}{1+16k^2\pi^2}$$ That means your answer for $k$ gives the book's answer for $-k$. Andre's answer shows why all the solutions are are the circle $|z-1|=1$. - to see clearly why the solution set lies on the given circle, first use the transform $w=\frac1z$, the equation becomes $w-\frac12 = 2k\pi i$ thus all solutions for $w$ lie on the "vertical" straight line $\mathfrak{Re}(w)=\frac12$. under the reverse transform $z=\frac1w$ this line is then mapped to the circle $|z-1|=1$ because $\infty \rightarrow 0$ and the two intersections of the straight line with the unit circle, $\frac12(1 \pm \sqrt{3}i)$ are merely interchanged. it is then easy to show that the three points $0, \frac12(1 \pm \sqrt{3}i)$ all lie at unit distance from 1. -
2016-02-08 22:11:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8667219877243042, "perplexity": 365.75616996835174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701154221.36/warc/CC-MAIN-20160205193914-00168-ip-10-236-182-209.ec2.internal.warc.gz"}
https://zbmath.org/?q=an%3A1253.94045
## Improved single-key attacks on 8-round AES-192 and AES-256.(English)Zbl 1253.94045 Abe, Masayuki (ed.), Advances in cryptology – ASIACRYPT 2010. 16th international conference on the theory and application of cryptology and information security, Singapore, December 5–9, 2010. Proceedings. Berlin: Springer (ISBN 978-3-642-17372-1/pbk). Lecture Notes in Computer Science 6477, 158-176 (2010). Summary: AES is the most widely used block cipher today, and its security is one of the most important issues in cryptanalysis. After 13 years of analysis, related-key attacks were recently found against two of its flavors (AES-192 and AES-256). However, such a strong type of attack is not universally accepted as a valid attack model, and in the more standard single-key attack model at most 8 rounds of these two versions can be currently attacked. In the case of 8-round AES-192, the only known attack (found 10 years ago) is extremely marginal, requiring the evaluation of essentially all the $$2^{128}$$ possible plaintext/ciphertext pairs in order to speed up exhaustive key search by a factor of 16. In this paper we introduce three new cryptanalytic techniques, and use them to get the first non-marginal attack on 8-round AES-192 (making its time complexity about a million times faster than exhaustive search, and reducing its data complexity to about 1/32,000 of the full codebook). In addition, our new techniques can reduce the best known time complexities for all the other combinations of 7-round and 8-round AES-192 and AES-256. For the entire collection see [Zbl 1202.94006]. ### MSC: 94A60 Cryptography Full Text:
2022-08-19 23:29:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3412386178970337, "perplexity": 1254.9263674905096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573849.97/warc/CC-MAIN-20220819222115-20220820012115-00379.warc.gz"}
http://math.stackexchange.com/questions/593/is-there-a-relationship-between-e-and-the-sum-of-n-simplexes-volumes
# Is there a relationship between $e$ and the sum of $n$-simplexes volumes? When I look at the Taylor series for $e^x$ and the volume formula for oriented simplexes, it makes $e^x$ look like it is, at least almost, the sum of simplexes volumes from $n$ to $\infty$. Does anyone know of a stronger relationship beyond, "they sort of look similar"? Here are some links: Volume formula http://en.wikipedia.org/wiki/Simplex#Geometric_properties - –  kennytm Jul 23 '10 at 17:58 The function e^x is the solution of functional equation exp(x+y)=exp(x)exp(y) s.t. exp'(0)=1. I wonder, if one can see that the generating function for simplex volumes satisfies this equation... –  Grigory M Jul 23 '10 at 18:00 @Kenny I messed up the question, I meant e^x, is that what is confusing you? –  Jonathan Fischoff Jul 23 '10 at 18:15 @Jon: No, I was asking the definition of "sum of simplexes volumes from n to infinity". –  kennytm Jul 23 '10 at 19:09 Oh yes. But not neccarily unity. Depends on what x is. Like e^1.5i could be thought of as adding and subtracting oriented volumes that are not unity ... I think –  Jonathan Fischoff Jul 23 '10 at 20:32 ## 1 Answer The answer is, it's just a fact “cone over a simplex is a simplex” rewritten in terms of the generating function: observe that because n-simplex is a cone over (n-1)-simplex $\frac{\partial}{\partial x}vol(\text{n-simplex w. edge x}) = vol(\text{(n-1)-simplex w. edge x})$; in other words $e(x):=\sum_n vol\text{(n-simplex w. edge x)}$ satisfies an equvation $e'(x)=e(x)$. So $e(x)=Ce^x$ -- and C=1 because e(0)=1. - I think understand the basic idea. The relationship between the border of the simplex and its volume, is such it can phrased in a way that satisfy's the same functional equation that equation that e satisfies, mainly that it's own derivative? Is that close? –  Jonathan Fischoff Jul 26 '10 at 18:26 @Jonathan Yes, something like this (I'd say "n-dimensional simplex is constructed from (n-1)-dimensional in such way that..."). In combinatorics such things happen quite often: you write down a generating function for something and then observe that it satisfies some simple differential equation (coming from reccurence relation on that something); and when you're solving differential equation you often encounter something like e^x (because it satisfies f'=f, indeed). –  Grigory M Jul 27 '10 at 5:52
2015-09-01 06:53:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9246548414230347, "perplexity": 872.0508583216968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645167576.40/warc/CC-MAIN-20150827031247-00269-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.physicsoverflow.org/21225/matrix-integral-identity
# Matrix integral identity + 7 like - 0 dislike 192 views 1) How to prove that $N\times N$ matrix integral over complex matrices $Z$ $$\int d Z d Z^\dagger e^{-Tr Z Z^\dagger} \frac{x_1\det e^Z -x_2 \det e^{AZ^\dagger}}{\det(1-x_1e^Z)\det(1-x_2e^{AZ^\dagger})}$$ does not depend on the external Hermitian matrix $A$? $x_1$ and $x_2$ are numbers. The statement is trivial for $1\times1$ case. 2)The same for $$\int d Z d Z^\dagger e^{-Tr Z Z^\dagger} \frac{x_1\det e^Z -x_2 \det e^{AZ^\dagger}}{\det(1-x_1e^Zg)\det(1-x_2e^{AZ^\dagger}g)}$$ where g - arbitrary $GL(N)$ matrix. This post imported from StackExchange MathOverflow at 2014-07-29 11:48 (UCT), posted by SE-user Sasha retagged Jul 29, 2014 I understand that $dZ$ is the Lebesgue measure on $N\times N$ complex matrices, that is $dZ=\prod_{i,j}d\Re z_{ij}d\Im z_{ij}$, but what $dZ^\dagger$ stands for ? This post imported from StackExchange MathOverflow at 2014-07-29 11:48 (UCT), posted by SE-user Adrien Hardy This is just a notation sometimes used in the mathphys literature to show that you integrate over $2N^2$ real variables contrary to $N^2$ for the Hermitian model. This post imported from StackExchange MathOverflow at 2014-07-29 11:48 (UCT), posted by SE-user Sasha How exactly is the integral defined? Isn't the denominator always going to hit 0? For instance in the 1 by 1 case, the denominator contains $1-x_1 e^z$, where $z$ is a complex number, and then at the points $z=\log (x_1)^{-1}$ the integrand diverges. If it were true, you would just do a unitary transformation (which leaves the measure invariant) to diagonalize A, and then rescale the coordinates to get rid of the dependence on the eignevalues. But this integral is A dependent in the 1 by 1 case, which is not trivial. Did you try to prove it for $N=1$? If it does not hold in that case I do not think it is valid for $N>1$. Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the "link" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor)   Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\varnothing$ in the following word:p$\hbar$ys$\varnothing$csOverflowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register.
2018-08-22 03:22:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.766556978225708, "perplexity": 876.4503052805684}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219469.90/warc/CC-MAIN-20180822030004-20180822050004-00403.warc.gz"}
https://stats.stackexchange.com/questions/21565/how-do-i-fit-a-constrained-regression-in-r-so-that-coefficients-total-1/177430
# How do I fit a constrained regression in R so that coefficients total = 1? I see a similar constrained regression here: Constrained linear regression through a specified point but my requirement is slightly different. I need the coefficients to add up to 1. Specifically I am regressing the returns of 1 foreign exchange series against 3 other foreign exchange series, so that investors may replace their exposure to that series with a combination of exposure to the other 3, but their cash outlay must not change, and preferably (but this is not mandatory), the coefficients should be positive. I have tried to search for constrained regression in R and Google but with little luck. • Are you sure this is a constrained regression problem? As I read the question, you seek a relationship of the form $y_4$ (one Forex series) = $\beta_1 y_1 + \beta_2 y_2 + \beta_3 y_3$ (plus, I presume, a fourth term representing a prevailing safe rate of return). That's independent of the investment decision. If a customer wants to invest $c$ capital in $y_4$ using $y_1$, $y_2$, and $y_3$ as proxies, then they would just invest $c\beta_1$ in $y_1$, $c\beta_2$ in $y_2$, and $c\beta_3$ in $y_3$. That adds no special complication to the regression, does it? – whuber Jan 23 '12 at 17:03 • It does because if you model this you will find that B1 + B2 + B3 > 1 in many cases (or < 1 in others). That is because the currency one is trying to replicate with the descriptors will typically have a larger or smaller volatility than the others, and so the regression will give you smaller or larger weights in response. This requires the investor either not to be fully invested, or to leverage, which I do not want. As for safe rate of return no. All we are trying to do is replicate series1 using other variables. Being a finance guy and not a statistician perhaps I have misnamed my question. – Thomas Browne Jan 23 '12 at 18:49 • The reason for including a term for a safe rate of return is that sometimes it will have a nonzero coefficient. Presumably, safe instruments (overnight bank deposits) are available to everyone at low cost, so anyone ignoring this as a component of their investment basket could be choosing suboptimal combinations. Now, if the coefficients do not add to unity, so what? Just invest as much as you wish in the proportions estimated by the regression. – whuber Jan 23 '12 at 19:06 • right..... simple as that. Thanks. I feel a bit silly now haha. – Thomas Browne Jan 23 '12 at 19:11 • Not silly at all. Merely asking this question reflects a high level of thought. I was just checking my own understanding of your question to make sure you got an effective answer. Cheers. – whuber Jan 23 '12 at 19:14 If I understand correctly, your model is $$Y = \pi_1 X_1 + \pi_2 X_2 + \pi_3 X_3 + \varepsilon,$$ with $\sum_k \pi_k = 1$ and $\pi_k\ge0$. You need to minimize $$\sum_i \left(Y_i - (\pi_1 X_{i1} + \pi_2 X_{i2} + \pi_3 X_{i3}) \right)^2$$ subject to these constraints. This kind of problem is known as quadratic programming. Here a few line of R codes giving a possible solution ($X_1, X_2, X_3$ are the columns of X, the true values of the $\pi_k$ are 0.2, 0.3 and 0.5). > library("quadprog"); > X <- matrix(runif(300), ncol=3) > Y <- X %*% c(0.2,0.3,0.5) + rnorm(100, sd=0.2) > Rinv <- solve(chol(t(X) %*% X)); > C <- cbind(rep(1,3), diag(3)) > b <- c(1,rep(0,3)) > d <- t(Y) %*% X > solve.QP(Dmat = Rinv, factorized = TRUE, dvec = d, Amat = C, bvec = b, meq = 1) $solution [1] 0.2049587 0.3098867 0.4851546$value [1] -16.0402 $unconstrained.solution [1] 0.2295507 0.3217405 0.5002459$iterations [1] 2 0 $Lagrangian [1] 1.454517 0.000000 0.000000 0.000000$iact [1] 1 I don’t know any results on the asymptotic distribution of the estimators, etc. If someone has pointers, I’ll be curious to get some (if you wish I can open a new question on this). • Actually quick question. Shouldn't I be minimizing variance rather than sum? Isn't that what a regression does is minimize the variance of the square of the errors? – Thomas Browne Jan 23 '12 at 19:13 • This is clever, Elvis, but couldn't you accomplish the same thing simply by reparameterizing the regression? E.g., let $Y = \alpha_1 X_1 + \alpha_2 X_2 + (1-\alpha_1-\alpha_2)X_3 +\varepsilon$ That's equivalent to $Y-X_3 = \alpha_1(X_1-X_3) + \alpha_2(X_2-X_3)+\varepsilon$. The estimates and standard errors of the $\pi_i$ are straightforward to compute from the estimates and var-covar matrix of $\alpha_1$ and $\alpha_2$. – whuber Jan 23 '12 at 19:30 • @whuber Yes but with more noisy data, or with some of the $\pi_k$ close to $0$, you’d violate easily the constraint $\pi_k > 0$, which is the "hard" part of the problem. – Elvis Jan 23 '12 at 19:34 • A positive coefficient tells you to buy a foreign currency; a negative coefficient tells you to sell it. If you don't own that currency already, you need to borrow it in order to sell it ("selling short"). Because unrestricted borrowing can get people into trouble, there are constraints on the amount of borrowing and how it is paid for ("margin requirements" and "capital carrying costs" and "mark-to-market" procedures). Therefore, borrowing is possible but is often avoided except by major players in the markets or unless it confers large advantages. – whuber Jan 23 '12 at 19:50 • Many thanks to all for all the help. Actually just to make a comment on FX markets in general, they are more easy to short than equities or bonds because one does not have to borrow a stock before short selling. One simply flips the denominator and numerator currencies. So for example selling EURUSD and selling USDEUR are exactly equivalent trades in terms of the risk department, but they are of course exactly opposite positions. That's why FX is such a great playground for quant traders because they don't have to worry much about directional frictions which are much more important in equities – Thomas Browne Jan 24 '12 at 9:42 As mentioned by whuber, if you are interested only in the equality constraints, you can also just use the standard lm() function by rewriting your model: \begin{eqnarray} Y&=&\alpha+\beta_1 X_1+\beta_2 X_2+\beta_3 X_3+\epsilon\\ &=& \alpha+\beta_1 X_1+\beta_2 X_2+(1-\beta_1-\beta_2) X_3+\epsilon\\ &=& \alpha + \beta_1( X_1-X_3) +\beta_2 (X_2-X_3)+ X_3+\epsilon \end{eqnarray} But this does not guarantee that your inequality constraints are satisfied! In this case, it is however, so you get exactly the same result as using the quadratic programming example above (putting the X3 on the left): X <- matrix(runif(300), ncol=3) Y <- X %*% c(0.2,0.3,0.5) + rnorm(100, sd=0.2) X1 <- X[,1]; X2 <-X[,2]; X3 <- X[,3] lm(Y-X3~-1+I(X1-X3)+I(X2-X3)) • In the above case by Matifou, what's to prevent the third coefficient from being negative? For example, had the optimal coefficients for $\beta_1=0.75$ and $\beta_2=0.5$ we would get that $(1-\beta_1-\beta_2)=-0.25$ which implies here that our third coefficient is negative and therefore does not hold based on our desired regression. – A.S. Feb 21 '16 at 19:02 • Thanks @A.S. for pointing this out. Indeed, this solution works only for the equality constraints, not the inequality ones. I edited the text accordingly. – Matifou Feb 21 '16 at 20:30 As I understand your model, you're seeking to find $$\bar{\bar{x}} \cdot \bar{b} = \bar{y}$$ such that $$\sum \left [ \begin{matrix} \bar{b} \end{matrix} \right ] =1$$ I've found the easiest way to treat these sorts of problems is to use matrices' associative properties to treat $\bar{b}$ as a function of other variables. E.g. $\bar{b}$ is a function of $\bar{c}$ via the transform block $\bar{\bar{T_c}}$. In your case, $r$ below is $1$. $$\bar{b} = \left [ \begin{matrix} k_0 \\ k_1 \\ k_2 \end{matrix} \right ] = \bar{\bar{T_c}} \cdot \bar{c} = \left [ \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & -1 & 1 \end{matrix} \right ] \cdot \left[ \begin{matrix} k_0 \\ k_1 \\ r \end{matrix} \right ]$$ Here we can separate our $k$nowns and $u$nknowns. $$\bar{c} = \left[ \begin{matrix} k_0 \\ k_1 \\ r \end{matrix} \right ] = \bar{\bar{S_u}} \cdot \bar{c_u} + \bar{\bar{S_k}} \cdot \bar{c_k} = \left[ \begin{matrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{matrix} \right] \cdot \left [ \begin{matrix} k_0 \\ k_1 \end{matrix} \right ] + \left [ \begin{matrix} 0 \\ 0 \\ 1 \end{matrix} \right ] \cdot r$$ While I could combine the different transform/separation blocks, that gets cumbersome with more intricate models. These blocks allow knowns and unknowns to be separated. $$\bar{\bar{x}} \cdot \bar{\bar{T_c}} \cdot \left ( \bar{\bar{S_u}} \cdot \bar{c_u} + \bar{\bar{S_k}} \cdot \bar{c_k} \right ) = \bar{y} \\ \bar{\bar{v}} = \bar{\bar{x}} \cdot \bar{\bar{T_c}} \cdot \bar{\bar{S_u}} \\ \bar{w} = \bar{y} - \bar{\bar{x}} \cdot \bar{\bar{T_c}} \cdot \bar{\bar{S_k}} \cdot \bar{c_k}$$ Finally the problem is in a familiar form. $$\bar{\bar{v}} \cdot \bar{c_u} = \bar{w}$$ Old question but since I'm facing the same problem I thought to post my 2p... Use quadratic programming as suggested by @Elvis but using sqlincon from the pracma package. I think the advantage over quadrpog::solve.QP is a simpler user interface to specify the constraints. (In fact, lsqlincon is a wrapper around solve.QP). Example: library(pracma) set.seed(1234) # Test data X <- matrix(runif(300), ncol=3) Y <- X %*% c(0.2, 0.3, 0.5) + rnorm(100, sd=0.2) # Equality constraint: We want the sum of the coefficients to be 1. # I.e. Aeq x == beq Aeq <- matrix(rep(1, ncol(X)), nrow= 1) beq <- c(1) # Lower and upper bounds of the parameters, i.e [0, 1] lb <- rep(0, ncol(X)) ub <- rep(1, ncol(X)) # And solve: lsqlincon(X, Y, Aeq= Aeq, beq= beq, lb= lb, ub= ub) [1] 0.1583139 0.3304708 0.5112153 Same results as Elvis's: library(quadprog) Rinv <- solve(chol(t(X) %*% X)); C <- cbind(rep(1,3), diag(3)) b <- c(1,rep(0,3)) d <- t(Y) %*% X solve.QP(Dmat = Rinv, factorized = TRUE, dvec = d, Amat = C, bvec = b, meq = 1)\$solution EDIT To try to address gung's comment here's some explanation. sqlincon emulates matlab's lsqlin which has a nice help page. Here's the relevant bits with some (minor) edits of mine: X Multiplier matrix, specified as a matrix of doubles. C represents the multiplier of the solution x in the expression C*x - Y. C is M-by-N, where M is the number of equations, and N is the number of elements of x. Y Constant vector, specified as a vector of doubles. Y represents the additive constant term in the expression C*x - Y. Y is M-by-1, where M is the number of equations. Aeq: Linear equality constraint matrix, specified as a matrix of doubles. Aeq represents the linear coefficients in the constraints Aeq*x = beq. Aeq has size Meq-by-N, where Meq is the number of constraints and N is the number of elements of x beq Linear equality constraint vector, specified as a vector of doubles. beq represents the constant vector in the constraints Aeq*x = beq. beq has length Meq, where Aeq is Meq-by-N. lb Lower bounds, specified as a vector of doubles. lb represents the lower bounds elementwise in lb ≤ x ≤ ub. ub Upper bounds, specified as a vector of doubles. ub represents the upper bounds elementwise in lb ≤ x ≤ ub.
2019-10-20 00:59:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.709790050983429, "perplexity": 1416.4630438523423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986700560.62/warc/CC-MAIN-20191020001515-20191020025015-00478.warc.gz"}
https://proofwiki.org/wiki/Sum_of_Strictly_Positive_Real_Numbers_is_Strictly_Positive
# Sum of Strictly Positive Real Numbers is Strictly Positive ## Theorem $x, y \in \R_{>0} \implies x + y \in \R_{>0}$ ## Proof $\displaystyle x$ $>$ $\displaystyle 0$ $\, \displaystyle \land \,$ $\displaystyle y$ $>$ $\displaystyle 0$ Real Number Ordering is Compatible with Addition $\displaystyle \leadsto \ \$ $\displaystyle x + y$ $>$ $\displaystyle 0 + 0$ Real Number Inequalities can be Added $\displaystyle$ $=$ $\displaystyle 0$ Real Number Axioms: $\R \text A 3$: Identity $\blacksquare$
2020-08-15 15:48:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8628388047218323, "perplexity": 183.61993542418165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740929.65/warc/CC-MAIN-20200815154632-20200815184632-00452.warc.gz"}
https://oxforderewhon.wordpress.com/2008/11/28/rdf-and-the-time-dimension-part-1/
## RDF and the Time Dimension – Part 1 In RDF – an Introduction I claimed that introducing any kind of continuous dimension (for example, a time dimension) is not possible, if you follow the official interpretation given in the RDF specifications. Actually it is even worse: In basic RDF even discrete dimensions cannot be modeled. In this post I will elaborate on my claims giving a detailed description of the problem. In part 2 I will propose a new interpretation of RDF Graphs, allow for dimensions into RDF. If you are new to RDF, or terms such as reification, entailment, fact or model don’t mean much to you, you might want to read my introduction to RDF since we need these terms to talk about RDF’s incapability of modeling dimensions. I will try to present everything in a semi formal way, using some mathematical notation, but to always try to keep the post understandable for those that would not define themselves as “math people”. However, I feel that a certain amount of formality is necessary, to outline the problem and proposed solution. Continuous and Discrete Dimensions Let’s start by trying to give you an idea, of what I mean by continuous and discrete dimensions in RDF. Think of a dimension as a variable $d$ that can take values from a specified set (e.g. 1 and 2). You now define your triples (or facts) relative to the $d$. This means, that for $d = 1$ you have a different set of facts than for $d = 2$. Whether I now speak of a continuous or discrete dimension depends on the cardinality (number of elements) of the value set for $d$. If the value set contains an infinite number of elements I speak of a continuous dimension and if the number of elements is finite I speak of a discrete dimension. Since in our example the cardinality of the value set was 2 ($|{1,2}|=2$) we have a discrete dimension. Here is an example for a discrete dimension. In my introduction to RDF I asked you, how you would model the following in RDF: August is a summer month if you are in the northern hemisphere. If you are in the southern hemisphere it is a winter month. Let’s think of this in terms of dimensions. We could say that we have a dimension $d$ that can take one of two values: “northern hemisphere” or “southern hemisphere”. For $d =$ “northern hemisphere”, we define the fact: “August is a summer month” and for $d =$ “southern hemisphere” we define the fact: “August is a winter month”. So what would be an example for a continuous dimension? The easiest I can think of (and probably the most important) is time. Here our variable $d$ can take any moment in time which allows us to specify facts for any moment in time and thereby to model change. With this we would, for example, be able to model that from 1990 to 1999 person A was married to person B, then got divorced and from 2001 until today person A is married to person C. For all moments in time between 1990 and 1999 (an infinite number) we would assert the fact that person A is married to person B and for those moments between 2001 and today we would assert the fact that person A is married to person C. RDF does not Support the Concept of Dimensions Now, let’s get back to my claim that RDF does not support the concept of dimensions and give an idea of how a formal proof could be constructed. Suppose RDF did support the concept of dimensions. In this case an RDF Graph would allow us to store semantically contradicting triples and provide some means of distinguishing between different contexts (dimension values). If we now take all the triples distinguishing between the different contexts out of the graph, thereby creating a true subgraph, the following holds: A model satisfying the original graph does not satisfy the subgraph. Since for the original graph the model has to distinguish between the different contexts, it is not able to make all contradicting facts true simultaniously which would be needed to satisfy the constructed subgraph. This however does violate the so called subgraph lemma (see section Entailment in my RDF Introduction), stating: “A graph entails all its subgraphs.” Therefore the assumption cannot be true and RDF cannot support the concept of dimensions. Let’s follow the argumentation looking at our example of a discrete dimension: August is a summer month if you are in the northern hemisphere. If you are in the southern hemisphere it is a winter month. An RDF Graph describing the above situation would necessarily contain the triples “August is a summer month” and “August is a winter month”. Additional triples would be used to distinguish between the different situations (being in the northern or the southern hemisphere). If we now constructed a subgraph by removing those additional triples, a model for the original graph would not satisfy the constructed subgraph since for the subgraph August must be a winter and a summer month. This would however violate the subgraph lemma, and therefore no RDF Graph can exist that describes the above example. Why are Dimensions Important? As we have seen, RDF does not support the concept of dimensions. But why should we care? Are there any good reasons why we should want to model dimensions in RDF? I believe the answer is yes: There are many good reasons, and we should try to incorporate dimensions in RDF. Take FOAF as an example. FOAF stands for Friend of a Friend and is a widely used ontology for describing people. You could, for example, use FOAF and RDF to say, that there is a person called John Doe and that he is part of the S-A-M-P-L-E project. But what happens if John leaves the project. FOAF and RDF only allow us to say that either John is part of the project, that John is not part of the project or, to make no assertion at all about whether or not John is part of the project. An odd thing, isn’t it? People change, situations change, the web changes. RDF is static. There is no way of properly modeling change in RDF. Even our very simple example, about August being a summer or a winter month depending on the point of view, cannot be expressed in RDF. You can say, August is a summer month. You can say August is a winter month. You can even say August is a summer and a winter month. However, you cannot say that it depends your location whether August is a summer or a winter month. In OxPoints we need, for example, to be able to say that a college has changed its name over time. Oxford University has existed for almost 800 years and things have changed. To be able to model Oxford University in RDF, we need RDF to support a time dimension. An introduction to OxPoints can be found at https://oxforderewhon.wordpress.com/2008/11/18/oxpoints-providing-geodata-for-the-university-of-oxford/. Named Graphs – The Solution for Discrete Dimensions I admit, I exaggerated a bit. Things are not really as bad as I outlined them. There is an extension to RDF that allows you to model discrete dimension: Named Graphs. The idea behind named graphs is rather simple. Instead of one RDF Graph, you create multiple graphs. This allows you to make assertions on the RDF Graphs and since you can have multiple graphs you can easily implement our example: Create one graph for the northern hemisphere and one graph for the southern hemisphere and you’re done. Even though named graphs are not directly a part of the RDF specifications many RDF tools support the idea in one form or another. Sesame [http://www.openrdf.org/] and Jena [http://jena.sourceforge.net/] for example, two RDF triple stores written in Java, allow you to specify a context for each triple. These contexts are then used to group triples together (thereby creating a named graph). This concept of assigning a context to a triple is often referred to as quads: subject predicate object context. Named Graphs – No Solution for Continuous Dimensions So if there is a solution, what exactly is my problem? The answer is easy: Named graphs do not work for continuous dimensions. Let’s take time as an example. Should we create one graph per year, one graph per day or one graph per second? It is easy to see, that you end up creating hundreds and thousands of graphs, not really capturing the idea of time, but modelling a discrete subset. Let’s suppose you went for a graph per year. Not only are you unable to say anything about something being valid only for a couple of months. If you realized that you wanted to include a new dimension into your data – let’s only use our simple northern/southern hemisphere example – you would end up doubling the number of graphs you have to maintain. And this just because of a very simple dimension. It is obvious, apart from not really being able to model continuous dimensions, named graphs would not scale even if you reduced your dimension to a discrete subset. Conclusion We have seen that basic RDF is not able to support the outlined concept of dimensions and that the named graph extension is only able to support discrete dimensions. I hope I could convince you, that support for continuous dimensions in RDF would be a very helpful extension, since it would allow us to use RDF to model change. It believe that we can solve this problem by slightly changing the interpretation of RDF Graphs. I will outline my ideas in Part 2 of this post. If you have also faced (or solved) this problem, I’d be delighted to hear from you. ### 5 Responses to RDF and the Time Dimension – Part 1 1. […] will elaborate on this in my next blogpost (RDF and the Time Dimension). Until then, as a thinking exercise, try to encode the following into RDF: “August is a […] 2. […] and the Time Dimension – Part 2 In part 1 I claimed that you will run into problems when you try to model dimensional data in RDF: In basic […] 3. […] RDF and the Time Dimension – Part 1 […] 4. […] RDF and the Time Dimension Part 1 — in this post the author explains succinctly where the problem lies although the example used is flawed because it contains hidden context (i.e. “August is a summer month…” is not true in general and needs the context “…for those in the Northern Hemisphere”, which can be modelled in RDF). The post also settles on named graphs as a solution but claims they cannot be used for continuous dimensions such as time (missing the solution of using something like OWL-Time to represent intervals and relative timings). […] 5. […] RDF and the Time Dimension Part 1 — in this post the author explains succinctly where the problem lies although the example used is flawed because it contains hidden context (i.e. “August is a summer month…” is not true in general and needs the context “…for those in the Northern Hemisphere”, which can be modelled in RDF). The post also settles on named graphs as a solution but claims they cannot be used for continuous dimensions such as time (missing the solution of using something like OWL-Time to represent intervals and relative timings). […]
2018-01-16 07:44:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5506693720817566, "perplexity": 687.6891740294959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886237.6/warc/CC-MAIN-20180116070444-20180116090444-00514.warc.gz"}
https://peeterjoot.wordpress.com/2012/05/
Peeter Joot's (OLD) Blog. Math, physics, perl, and programming obscurity. papasu on PHY450H1S. Relativistic Electr… papasu on Energy term of the Lorentz for… lidiodu on PHY450H1S. Relativistic Electr… lidiodu on PHY450H1S. Relativistic Electr… lidiodu on bivector form of Stokes t… • 307,917 Archive for May, 2012 updated compilation of class notes for phy454h1s, continuum mechanics. Posted by peeterjoot on May 28, 2012 As mentioned previously I’ve got a compilation of class notes here: This has been updated now with many changes that should make it easier to read. These notes are no longer follow the exact order of the lectures, but are grouped by topic, with the various problems incorporated into the chapter content as Problems/Solutions sections. Some stuff is moved to appendixes, and some parts just plain deleted. There are a number of cosmetic changes too, the biggest of which is style overhaul resulting from a switch to the book class to the classicthesis style (not strictly for thesis content since it provides an elegant general book template and framework). latex colors Posted by peeterjoot on May 16, 2012 I’d seen in wikipedia a list of colors for latex, but some of them didn’t work for me. Looking in the .log file from my latex compilation, I found the path to the color definitions in effect for my document and ran a bit of search and replace to create the following \documentclass[openany]{memoir} \usepackage[svgnames]{xcolor} \begin{document} \color{AliceBlue}{AliceBlue} \color{AntiqueWhite}{AntiqueWhite} \color{Aqua}{Aqua} \color{Aquamarine}{Aquamarine} \color{Azure}{Azure} \color{Beige}{Beige} \color{Bisque}{Bisque} \color{Black}{Black} \color{BlanchedAlmond}{BlanchedAlmond} \color{Blue}{Blue} \color{BlueViolet}{BlueViolet} \color{Brown}{Brown} \color{BurlyWood}{BurlyWood} \color{Chartreuse}{Chartreuse} \color{Chocolate}{Chocolate} \color{Coral}{Coral} \color{CornflowerBlue}{CornflowerBlue} \color{Cornsilk}{Cornsilk} \color{Crimson}{Crimson} \color{Cyan}{Cyan} \color{DarkBlue}{DarkBlue} \color{DarkCyan}{DarkCyan} \color{DarkGoldenrod}{DarkGoldenrod} \color{DarkGray}{DarkGray} \color{DarkGreen}{DarkGreen} \color{DarkGrey}{DarkGrey} \color{DarkKhaki}{DarkKhaki} \color{DarkMagenta}{DarkMagenta} \color{DarkOliveGreen}{DarkOliveGreen} \color{DarkOrange}{DarkOrange} \color{DarkOrchid}{DarkOrchid} \color{DarkRed}{DarkRed} \color{DarkSalmon}{DarkSalmon} \color{DarkSeaGreen}{DarkSeaGreen} \color{DarkSlateBlue}{DarkSlateBlue} \color{DarkSlateGray}{DarkSlateGray} \color{DarkSlateGrey}{DarkSlateGrey} \color{DarkTurquoise}{DarkTurquoise} \color{DarkViolet}{DarkViolet} \color{DeepPink}{DeepPink} \color{DeepSkyBlue}{DeepSkyBlue} \color{DimGray}{DimGray} \color{DimGrey}{DimGrey} \color{DodgerBlue}{DodgerBlue} \color{FireBrick}{FireBrick} \color{FloralWhite}{FloralWhite} \color{ForestGreen}{ForestGreen} \color{Fuchsia}{Fuchsia} \color{Gainsboro}{Gainsboro} \color{GhostWhite}{GhostWhite} \color{Gold}{Gold} \color{Goldenrod}{Goldenrod} \color{Gray}{Gray} \color{Green}{Green} \color{GreenYellow}{GreenYellow} \color{Grey}{Grey} \color{Honeydew}{Honeydew} \color{HotPink}{HotPink} \color{IndianRed}{IndianRed} \color{Indigo}{Indigo} \color{Ivory}{Ivory} \color{Khaki}{Khaki} \color{Lavender}{Lavender} \color{LavenderBlush}{LavenderBlush} \color{LawnGreen}{LawnGreen} \color{LemonChiffon}{LemonChiffon} \color{LightBlue}{LightBlue} \color{LightCoral}{LightCoral} \color{LightCyan}{LightCyan} \color{LightGoldenrod}{LightGoldenrod} \color{LightGoldenrodYellow}{LightGoldenrodYellow} \color{LightGray}{LightGray} \color{LightGreen}{LightGreen} \color{LightGrey}{LightGrey} \color{LightPink}{LightPink} \color{LightSalmon}{LightSalmon} \color{LightSeaGreen}{LightSeaGreen} \color{LightSkyBlue}{LightSkyBlue} \color{LightSlateBlue}{LightSlateBlue} \color{LightSlateGray}{LightSlateGray} \color{LightSlateGrey}{LightSlateGrey} \color{LightSteelBlue}{LightSteelBlue} \color{LightYellow}{LightYellow} \color{Lime}{Lime} \color{LimeGreen}{LimeGreen} \color{Linen}{Linen} \color{Magenta}{Magenta} \color{Maroon}{Maroon} \color{MediumAquamarine}{MediumAquamarine} \color{MediumBlue}{MediumBlue} \color{MediumOrchid}{MediumOrchid} \color{MediumPurple}{MediumPurple} \color{MediumSeaGreen}{MediumSeaGreen} \color{MediumSlateBlue}{MediumSlateBlue} \color{MediumSpringGreen}{MediumSpringGreen} \color{MediumTurquoise}{MediumTurquoise} \color{MediumVioletRed}{MediumVioletRed} \color{MidnightBlue}{MidnightBlue} \color{MintCream}{MintCream} \color{MistyRose}{MistyRose} \color{Moccasin}{Moccasin} \color{NavajoWhite}{NavajoWhite} \color{Navy}{Navy} \color{NavyBlue}{NavyBlue} \color{OldLace}{OldLace} \color{Olive}{Olive} \color{OliveDrab}{OliveDrab} \color{Orange}{Orange} \color{OrangeRed}{OrangeRed} \color{Orchid}{Orchid} \color{PaleGoldenrod}{PaleGoldenrod} \color{PaleGreen}{PaleGreen} \color{PaleTurquoise}{PaleTurquoise} \color{PaleVioletRed}{PaleVioletRed} \color{PapayaWhip}{PapayaWhip} \color{PeachPuff}{PeachPuff} \color{Peru}{Peru} \color{Pink}{Pink} \color{Plum}{Plum} \color{PowderBlue}{PowderBlue} \color{Purple}{Purple} \color{Red}{Red} \color{RosyBrown}{RosyBrown} \color{RoyalBlue}{RoyalBlue} \color{Salmon}{Salmon} \color{SandyBrown}{SandyBrown} \color{SeaGreen}{SeaGreen} \color{Seashell}{Seashell} \color{Sienna}{Sienna} \color{Silver}{Silver} \color{SkyBlue}{SkyBlue} \color{SlateBlue}{SlateBlue} \color{SlateGray}{SlateGray} \color{SlateGrey}{SlateGrey} \color{Snow}{Snow} \color{SpringGreen}{SpringGreen} \color{SteelBlue}{SteelBlue} \color{Tan}{Tan} \color{Teal}{Teal} \color{Thistle}{Thistle} \color{Tomato}{Tomato} \color{Turquoise}{Turquoise} \color{Violet}{Violet} \color{VioletRed}{VioletRed} \color{Wheat}{Wheat} \color{White}{White} \color{WhiteSmoke}{WhiteSmoke} \color{Yellow}{Yellow} \color{YellowGreen}{YellowGreen} \end{document} This produces the following xcolor svgnames Posted in Math and Physics Learning. | Tagged: , | Leave a Comment » Fun with platform linker inconsistencies (AIX vs. Linux) Posted by peeterjoot on May 10, 2012 Imagine we have three source files, to be built into a pair of shared libs (one with a dependency on the other) and an exe as in: // s1.C extern "C" void foo(void){} // s2.C extern "C" void foo(void) ; extern "C" void bar(void){foo();} // m.C extern "C" void foo(void) ; extern "C" void bar(void) ; int main() { bar() ; foo() ; return 0 ; } On Linux, we can compile and link these with command line of the following sort # g++ -shared s1.C -o libs1.so -fpic # g++ -shared s2.C -o libs2.so -fpic -L. -ls1 # g++ -L. -ls2 m.C -Wl,-rpath-link,pwd -Wl,-rpath,pwd Notice that we’ve not explicitly linked to libs1.so on Linux, even though we are using a symbol from it explictly. The linker picks up dependencies from other things that you choose to link to. On AIX the equivalent commands to create a pair of shared libraries and link the exe to it fails at that exe link # xlC -q64 -qmkshrobj s1.C -o shr1.o # ar -X64 -crv libs1.a shr1.o # xlC -q64 -qmkshrobj s2.C -o shr2.o -L. -ls1 # ar -X64 -crv libs2.a shr2.o # xlC -q64 -L. -ls2 m.C ld: 0711-317 ERROR: Undefined symbol: .foo ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information. You’ve got to add -ls1 to those link flags to get the exe to find its dependencies. I wonder which of these two link time behaviours is more common? Posted in C/C++ development and debugging. | Tagged: , , , , | Leave a Comment » A common anti-pattern: mutex acquire is freeable. Posted by peeterjoot on May 7, 2012 Again and again in DB2 code, developers appear to have discovered a “clever” way to manage shared memory cleanup of memory that is protected by a mutex (what is called a latch internally in the DB2 codebase). The pattern is roughly of the following form: struct myStuff { mutex m ; // ... other stuff. } ; void myStuffCleanup( myStuff * pSharedMem ) { if ( pSharedMem->m.acquire() ) { freeMemory( pSharedMem ) ; } } The developer coding this, based on other state and knowledge of when the myStuffCleanup function is executed, knows that if they get to this point in the code, no new attempts to acquire the mutex will ever be made. However, before the memory can be freed, there needs to be some sort of guarentee that all the old users of the memory (and mutex) are gone. For cleanup problems like this we appear to have a number of developers that have been clever enough to realize that the last thing any consumer of this memory will ever do is release the mutex. So, they code an acquire guard like the above, believing that if the mutex can be acquired at this point in the code, they can “safely” free the memory. However, it is actually unfortunately that the developer has been clever enough to figure out this easy way to handle the cleanup, because it is not safe. First off, observe that unless the developer knows the internals of the mutex implementation, this isn’t a terribly safe thing to do. For example, the mutex could internally use something like a Unix semaphore or a Windows EVENT, so cleanup code may be required before the memory containing the mutex is freed. However, in this case, the mutex implementation in question historically has never required any sort of cleanup (provided it wasn’t in use at the cleanup point). As it happens, we didn’t even historically have a cleanup method for this particular mutex implementation. We have one now, defined for consistency with some of our other mutex implementations which do require explicit cleanup, but it states that it is just for consistency and is a no-op. Reading that documentation (or bad assumptions) is probably what leads the developers to believe that they can free memory containing this mutex type even if it is held. [An aside.  Our mutex implementation Windows actually does use an EVENT to manage the wait for the myStuffCleanup() acquire caller, and that EVENT HANDLE will still be allocated, assigned to the mutex even after the mutex release.  Our clever developer has gotten lucky because we happen to clean up that EVENT HANDLE in the acquire() call (presuming there really was no other use of the mutex).] Despite having nothing to cleanup after this last man out acquire, this sort of pattern is neither correct nor safe. What the developer doesn’t know is what our release method does internally. It happens that the operation that logically releases the mutex, allowing it to be acquired, isn’t necessarily the last operation or access of the mutex memory performed by our release code. Our release code has roughly the following form: void mutex::release() { validate() ; markMutexFree() ; wakeupAnyWaitersIfAppropriate() ; waitlistOperationsIfAny() ; // some platforms only. releaseInternalMutexForWaiterManagement() ; // some platforms only. logMutexStateToTraceBufferIfTraceIsOn() ; validate() ; } After that markMutexFree() point in the code, there are a number of possible accesses to the mutex memory. If that “final” release() caller gets context switched out just after that point, and the memory freed before it resumes (or the SMP system is fast enough to allow the free to proceed concurrently while the mutex::release() code executes), then we will be in trouble when the release() code resumes. Here’s an enumeration of the memory accesses to the mutex after the internal-“release” that makes it available for new acquire: 1. waitlistOperationsIfAny().  On our Unix platforms for this mutex type we keep what we call a waitlist, one for all the write (exclusive) waiters, and one for all the read (shared) waiters for the mutex.  Similarily in our Windows mutex implementation we have an EVENT HANDLE pointer in the mutex (although we don’t update that in release after the wakeup like we do on Unix).  After we’ve released the mutex, we’ll wake up any waiters, and then store the new waitlist values in the mutex.  In this scenerio we’ll be storing only a zero as the new waitlist value, because there’s either no waiters, or the only waiter should be the cleanup caller, and we’ll have just woken up that “last waiter”.  We happen to avoid a requirement for actually storing the waitlist separately for our 64-bit Unix implementation, but we do still unfortunately ship a 32-bit server version of DB2 (a developer only version that runs on Linux ia32).  Long story made short, if the memory is recycled after acquire and that happens before these waitlist stores, these zero stores could be corrupting memory (or attempting to access memory that could be unmapped). 2. releaseInternalMutexForWaiterManagement().  Our Windows and 32-bit Unix implementations currently have an internal mutex (in this case, a plain old spinlock, not a reader/writer mutex) that we use for storing either our waitlist pointer or an pointer to our EVENT handles for the mutex.  This internal mutex free will result in a store (i.e a store zero), plus some memory barrier instructions if appropriate.  Again, if the mutex memory has been free()’d, this would be bad. 3. logMutexStateToTraceBufferIfTraceIsOn().  This is problem determination and performance related tracing.  It may or may not be enabled at runtime, but should be alllowed to look at the mutex state if executed.  If the memory has been free()’d and unmapped then this trace code would look at the mutex memory, and could trap. 4. Our final validate() call.  This regularly finds code that uses this free-if-I-can-acquire pattern, since our mutex is fairly stateful, and good for catching this pattern.  In particular, the memset to 0xDD that occurs in our free() code will set inappropriate bits that validate() complains about. There have been developers that have objected to our final validate() call, saying that we shouldn’t be doing it (or any other operation on the mutex) after the logical release point.   They’d like this to be our bug, not theirs.  To win this working-as-designed argument, we essentially have to argue that what has been done is free memory that has been passed to a function that is still operating on it. I’d be curious to know if this pattern of acquire mutex and then destroy memory containing it is common in other codebases (using other mutex types).  If this is a common pattern, I wonder how many other types of mutex implementations can tolerate a free of the mutex memory while the release code for an uncontended mutex is still executing?  Our pure spinlock implementation happens to be able to tolerate this, and does no further access to the mutex memory after the logical release.  However, our reader-writer mutex (the mutex type in question here), cannot tolerate this sort of free while in use on some platforms … but we abort unconditionally if detected on any. A Fourier series refresher. Posted by peeterjoot on May 3, 2012 [Click here for a PDF of this post with nicer formatting] Motivation. I’d used the wrong scaling in a Fourier series over a $[0, 1]$ interval. Here’s a reminder to self what the right way to do this is. Guts Suppose we have a function that is defined in terms of a trigonometric Fourier sum \begin{aligned}\phi(x) = \sum c_k e^{i \omega k x},\end{aligned} \hspace{\stretch{1}}(2.1) where the domain of interest is $x \in [a, b]$. Stating the problem this way avoids any issue of existence. We know $c_k$ exists, but just want to find what they are given some other representation of the function. Multiplying and integrating over our domain we have \begin{aligned}\begin{aligned}\int_a^b \phi(x) e^{-i \omega m x} dx &= \sum c_k \int_a^b e^{i \omega (k -m) x} dx \\ &= c_m (b - a) + \sum_{k \ne m} \frac{e^{i \omega(k-m) b} - e^{i \omega(k-m)a}}{i \omega (k -m)} .\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.2) We want all the terms in the sum to be be zero, requiring equality of the exponentials, or \begin{aligned}e^{i \omega (k -m) (b -a )} = 1,\end{aligned} \hspace{\stretch{1}}(2.3) or \begin{aligned}\omega = \frac{2 \pi}{b - a}.\end{aligned} \hspace{\stretch{1}}(2.4) This fixes our Fourier coefficients \begin{aligned}c_m = \frac{1}{{b - a}} \int_a^b \phi(x) e^{- 2 \pi i m x/(b - a)} dx.\end{aligned} \hspace{\stretch{1}}(2.5) Given this, the correct (but unnormalized) Fourier basis for a $[0, 1]$ interval would be the functions $e^{2 \pi i x}$, or the sine and cosine equivalents. References Posted in Math and Physics Learning. | Tagged: , | Leave a Comment »
2017-12-16 13:04:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.489776611328125, "perplexity": 4335.96407796887}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588072.75/warc/CC-MAIN-20171216123525-20171216145525-00704.warc.gz"}
https://blog.webernetz.net/2015/01/19/considerations-about-ipsec-pre-shared-keys-psks/
# Considerations about IPsec Pre-Shared Keys Pre-shared keys (PSK) are the most common authentication method for site-to-site IPsec VPN tunnels. So what’s to say about the security of PSKs? What is its role for the network security? How complex should PSKs be? Should they be stored additionally? What happens if an attacker catches my PSKs? I am listing my best practice steps for generating PSKs. ## Pre-Shared Keys in IPsec The following section is related to site-to-site VPNs only and NOT to remote access VPNs. 1. The pre-shared key is merely used for authentication, not for encryption! IPsec tunnels rely on the ISAKMP/IKE protocols to exchange the keys for encryption, etc. But before IKE can work, both peers need to authenticate each other (mutual authentication). This is the only part in which the PSKs are used (RFC 2409). 2. If static IP addresses are used on both sides (= main mode can be used), an attacker who has the PSK must also spoof/redirect these public addresses over himself in order to establish a VPN connection. That is: Even if an attacker has a PSK, he must spoof a public IP address to use it to authenticate against the other side. This is quite unrealistic for normal persons with common ISP connections. Even skilled hackers must be able to inject falsified BGP routes or to sit nearby the customers default gateway/router. 3. But: If one remote side has only a dynamic IP address, IKE must use the aggressive mode for its authentication. In this scenario, a hash from the PSK traverses the Internet. An attacker can do an offline brute-force attack against this hash. That is: If the PSK is not complex enough, the attacker could succeed and would be able to establish a VPN connection to the network (if he furthermore knows the IDs of the site-to-site VPN peers which is no problem since they traverse through the Internet in plaintext, too). ## Best Practice for PSKs Since the PSKs must be configured on each side only once, it should be no problem to write 20-40 letters on the firewall. Thereby, a really complex key can be generated and used for the authentication of the VPN peer. Here are my tips: 1. Generate a new/different PSK for every VPN tunnel. 2. Use a password/passphrase generator for the creation of the PSK. 3. Generate a long PSK with at least 30 chars, to resist a brute-force attack. (See my article about password complexity.) To avoid problems, use only alphanumeric chars. Since the PSK with 30 chars is really long, the “small” character set of only 62 alphabets and numerals is no problem. The security level in this example would be round about 178 bit (since $log_{2}(62^{30})=178$). 4. Do NOT send the PSK to your peer over the Internet, but via phone, fax, or SMS. 5. There is no need to store the PSK anywhere else. If it is configured on both sides, you can discard it. In the worst case, you need to generate and transfer a new one. ## 7 thoughts on “Considerations about IPsec Pre-Shared Keys” 1. aaa says: Is there a way to require the PSK to expire? 1. What exactly do you mean? A kind of expiration timer that automatically blocks the VPN if the same PSK is used for x days? This must be a firewall feature, but I have not heard of a feature like that. Or do you mean whether it is a security issue if the PSK is never changed? Well, as long as both sites use static IPs, and as long as the PSK is complex enough, there is no reason to change the PSK. However, if it is never changed, this is not “good” either. Maybe it is exposed through another way (social engineering, etc.). So, in my opinion, a PSK change every 3-5 years is a good choice. But even more it is relevant to check every 3-5 years if appropriate security algorithms (ciphers) are used for phase 1 and phase 2. If you are still using “no-DH” or “DH-2”, this is NOT secure anymore. That is: Change your PSK every 3-5 years AND review your P1 and P2 proposals. 😉 2. Saghar says: Hello, What do you mean by “Generate a new PSK for every VPN tunnel”? Do the communicating parties need to exchange a nonce/random everytime (with which they generate a new PSK)? Imagine that we have several embedded devices that they need to authenticate whenever they want to communicate with each other. Do you think that authentication with PSK is a good idea ? Do you know of any mechanism with which we can securely distribute the PSK to all these devices? Or should we configure the PSK seperately on each device? Thanks 1. Well, it depends. If you have multiple embedded devices, you should consider using authentication via certificates. There are options to distribute certificates automatically. Concerning “Generate a new PSK for every VPN tunnel”: If you are a company that has 10-50 static VPN tunnels that do not change that often (i.e., the IP addresses of the partners do not change that ofen), you can use PSKs for authentication. The PSK must be configured only once (!) during the setup of the VPN. It must not be changed later on. But if you have a few VPNs coming from dynamic IP addresses, I do not recommend to use the same PSK for these VPNs, but to use a different/new PSK for each of these. If one PSK is exposed, it can be deleted without the need for a change of the PSKs from the other VPNs.
2017-01-19 23:55:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3233882188796997, "perplexity": 2085.6261472679557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00024-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/pythagoras-theorem-its-converse-o-interior_5667
# Question - Pythagoras Theorem and Its Converse Account Register Share Books Shortlist ConceptPythagoras Theorem and Its Converse #### Question From a point O in the interior of a ∆ABC, perpendicular OD, OE and OF are drawn to the sides BC, CA and AB respectively. Prove that : (ii) AF^2 + BD^2 + CE^2 = AE^2 + CD^2 + BF^2 #### Solution You need to to view the solution Is there an error in this question or solution? S
2017-08-18 04:59:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21529319882392883, "perplexity": 3468.705069524814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104565.76/warc/CC-MAIN-20170818043915-20170818063915-00617.warc.gz"}
http://onmyphd.com/?p=mosfet.short.channel.effects
# MOSFET short channel effects ## What are short channel effects? The main drives for reducing the size of the transistors, i.e., their lengths, is increasing speed and reducing cost. When you make circuits smaller, their capacitance reduces, thereby increasing operating speed. In the same token, smaller circuits allow more of them in the same wafer, dividing the total cost of a single wafer among more dies. However, with great reduction come great problems, in this case in the form of unwanted side effects, the so called short-channel effects. When the channel of the MOSFET becomes the same order of magnitude as the depletion layer width of source and drain, the transistors start behaving differently, which impacts performance, modeling and reliability. These effects can be divided among the following: ## Drain-Induced Barrier Lowering (DIBL) This effect is better understood when we see the potential barrier profile that an electron has to overcome to go from source to drain. Under normal conditions ($V_{DS} = 0$ and $V_{GS} = 0$), there is a potential barrier that stops the electrons from flowing from source to drain. The gate voltage has the function of lowering this barrier down to the point where electrons are able to flow (left side of figure). Ideally, this would be the only voltage that would affect the barrier. However, as the channel becomes shorter, a larger $V_{D}$ widens the drain depletion region to a point that reduces the potential barrier (right side of figure). For this reason, this effect is aptly called Drain Induced Barrier Lowering (DIBL). The top figure shows a cut of a short channel (solid line) and a long-channel (dashed line) MOSFETs. The bottom part shows the potential barrier profile along the surface of the channel (from source to drain). In the left side, $V_{DS} = 0$, while in the right the drain voltage is raised to show the DIBL effect. If this a hard concept to grasp, think about it in terms of depletion regions only. The drain is close enough to the source to easily form the depletion region normally created by the gate. That is, the drain depletion region extends to the source, forming a unique depletion region. This is known as punchthrough. Therefore, a high drain voltage can open the bottleneck and contribute to turn on the transistor as a gate would. This is essentially equivalent to reducing the threshold voltage of the transistor, which leads to higher leakage current. The DIBL effect can be calculated by measuring the threshold voltage as a function of two extreme drain voltages, $V_{th}(V_D)$: $$DIBL=\frac{V_{th}(V_D^{low}) - V_{th}(V_{supply})}{V_{supply} - V_D^{low}}$$ where $V_D^{low}$ is a very low drain voltage and $V_{supply}$ is the supply voltage (the highest drain voltage that can be applied). This function is always positive and no DIBL effect would return 0. ## Surface scattering The velocity of the charge carriers is defined by the mobility of that carrier times the electric field along the channel. When the carriers travel along the channel, they are atracted to the surface by the electric field created by the gate voltage. As a result, they keep crashing and bouncing against the surface, during their travel, following a zig-zagging path. This effectively reduces the surface mobility of the carriers, in comparison with their bulk mobility. The change in carrier mobility impacts the current-voltage relationship of the transistor. As the electron travels through the channel, it is atracted to the $Si-SiO_2$ interface and bounces against it. This effect reduces its mobility. You may be wondering why this is a short-channel effect... Indeed, as the length of the channel becomes shorter, the lateral electric field created by $V_{DS}$ becomes stronger. To compensate that, the vertical electric field created by the gate voltage needs to increase proportionally, which can be achieved by reducing the oxide thickness. As a side effect, surface scattering becomes heavier, reducing the effective mobility in comparison with longer channel technology nodes. ## Velocity saturation The velocity of charge carriers, such as electrons or holes, is proportional to the electric field that drives them, but that is only valid for small fields. As the field gets stronger, their velocity tends to saturate. That means that above a critical electric field, they tend to stabilize their speed and eventually cannot move faster. Velocity saturation is specially seen in short-channel MOSFET transistors, because they have higher electric fields. ### When does the velocity of charge carriers saturate? The critical velocity is defined by the material the charge carriers are flowing through. In particular, in diffusions it is defined by their doping concentration. As a first-order approximation, the carrier velocity is defined as: $$$$v_d = \frac{\mu E}{1 + E/E_c}\label{eq:vel}$$$$ where $\mu$ is the carrier mobility, $E$ is the electric field and $E_c$ is the critical electric field (the point at which the velocity tends to saturate). The velocity saturates when $E \gg E_c$ and it becomes $v_{d} = \mu E_c = v_{sat}$ (when $E \ll E_c$, $v_{d} = \mu E$ as expected). In silicon, for electrons it is ~107 cm/s and for holes around 0.6*107 cm/s. ### What effect has velocity saturation in the drain current? To see the effect of velocity saturation on transistor operation, we must see how the limit in the velocity of carriers influences the current. We go back to see how the current is derived for the transistor model. The current is the change in charge through time. Charge in a slice of length of the transistor is $$dQ(x) = -C_{ox} Wdx (V_{GS} - V_{TH} - V(x)).$$ The current is the derivative of this charge through time: $$I_{DS} = -\frac{dQ(x)}{dt} = \frac{dQ(x)}{dx}\frac{dx}{dt}.$$ This must be true for any $x$, since the current is equal throughout the channel. $\frac{dx}{dt}$ is the velocity of the carriers and it is equal to the expression $\eqref{eq:vel}$. For long channel transistor, we assume $v_d = \mu E$, so the difference between current with or without velocity saturation is a division by $1 + E/E_c$. That is: $$$$I_{DS_{short}} = \frac{I_{DS_{long}}}{1 + E/E_c} = \frac{\mu C_{ox} W ((V_{GS} - V_{TH})V_{DS} - V_{DS}^2/2))}{L(1 + E/E_c)}\label{eq:short}$$$$ But velocity saturation is only apparent when the current saturates due to velocity saturation before saturating due to pinch-off. That means that the drain-source saturation voltage will be lower than $V_{GS} - V_{TH}$ in short channel transistors. To find out this saturation voltage, we must look for when a change in the drain-source voltage does not change the current, i.e., when $dI_{DS}/dV_{DS} = 0$. Differentiating $\eqref{eq:short}$ with respect to $V_{DS}$ and finding its zero will lead to (keep in mind that $E_c = v_{sat}/\mu$ and $E = V_{DS}/L$): $$V_{DS_{SAT}} = \frac{2(V_{GS} - V_{TH})}{1 + \sqrt{1 + \frac{2\mu(V_{GS} - V_{TH})}{v_{sat}L} }}.$$ Given that the term $\frac{2\mu(V_{GS} - V_{TH})}{v_{sat}L}$ is positive, $V_{DS_{SAT}}$ will be smaller than $V_{GS} - V_{TH}$. ## Impact ionization As mentioned earlier, short-channel transistors create strong lateral electric fields, since the distance between source and drain is very small. This electric field endows the charge carriers with high velocity, and therefore, high energy. The carriers that have high enough energy to cause troubles are called "hot" carriers. These normally appear close to the drain, where they have the most energy. Since they are traveling through a Silicon lattice, there is a possibility that they collide with an atom of the structure. Given enough energy, the energy passed to the atom upon collision can knock out an electron out of the valence band to the conduction band. This originates an electron-hole pair: the hole is attracted to the bulk while the generated electron moves on to the drain. The substrate current is a good way to measure the impact ionization effect. When an electron collides with an atom of the Silicon lattice structure, the energy passed to the atom upon collision can knock out an electron out of the valence band to the conduction band, creating an electron-hole pair. The hole is attracted to the bulk while the generated electron moves on to the drain. In case the generation of electron-hole pairs is very agressive, two catastrophic effects can happen. One of them relates to the parasitic bipolar transistor that is formed by the junctions between source-bulk-drain. This transistor is normally turned off because the bulk is biased at the lowest voltage of the circuit. However, when holes are flowing through the bulk, they are causing a voltage drop at the parasitic resistance of the bulk itself. This, in turn, can active the BJT if the base-emitter (bulk-source) voltage exceeds 0.6-0.7 V. With the transistor on, electrons start flowing from the source to the bulk and drain, which can lead to even more generation of electron-hole pairs. Holes flowing through the bulk create a voltage drop that may turn on the parasitic bipolar transistor. When it turns on, electrons can flow to the bulk and drain by the BJT instead of the channel created by the MOSFET. The most catastrophic case happens when the newly generated electrons become themselves hot carriers and knock out other atoms of the lattice. This in turn can create an avalanche effect, eventually leading to an overrun current that the gate voltage cannot control. ## Hot Carrier Injection (HCI) The hot carrier accelerated by the high electric field can have a different fate as well. The energy it contains may be sufficient to enter the oxide and get trapped in it. The trapped electrons alter the transistor response to the gate voltage in the form of increased threshold voltage. Over time, the accumulation of electrons in the oxide causes the so called "ageing" of transistors. Interestingly, FLASH memories use the same effect to memorize bits: the negative charge stored in the floating gate through injection of "hot carriers" changes the threshold voltage and this change is interpreted as a 1 or 0. A "hot" electron manages to enter the oxide and gets trapped in it. To reduce the formation of "hot" carriers and their negative effects, the electric field is artificially weakened with the implantation of lighty-doped drains, beside the heavily-doped drains. The electric field only needs to be weakened at the drain, but since the drain terminal is only defined by the operating point, the implant is added to both terminals of the MOSFET. The reasoning here is that the depletion regions of the lightly-doped implant are wider. With wider depletion regions there is a larger distance between different potentials, which reduces the electric field. The other side of the coin is that the parasitic resistances of source and drain are increased. Lightly doped drains help reduce the strength of lateral electric fields and therefore, reduce the formation of "hot" carriers. If I helped you in some way, please help me back by liking this website on the bottom of the page or clicking on the link below. It would mean the world to me! Tweet
2018-06-19 10:39:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.7485221028327942, "perplexity": 717.456392756847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267862248.4/warc/CC-MAIN-20180619095641-20180619115641-00539.warc.gz"}
http://astrophysicsformulas.com/astronomy-formulas-astrophysics-formulas/keplerian-orbital-velocity/
# Keplerian Orbital Velocity ### Circular Orbital Velocity under Gravitational Forces In the case of a two-body problem and simple circular motion due to only gravitational forces, the Keplerian orbital velocity can be found by simply equating centrapetal force to gravitational force. In general the two masses, say $m$ and $M$, orbit around the center of mass of the system, with an effective mass equal to the reduced mass. In the case that $m \ll M$ the center of mass is near the center of gravity of $M$, and the Keplerian orbital velocity is $v = \sqrt{\left(\frac{GM}{r}\right)}$ or $v \sim 30 \left(\frac{M}{M_{\odot}}\right)^{\frac{1}{2}} \left(\frac{1 {\rm AU}}{r}\right)^{\frac{1}{2}} \ \ {\rm km \ s^{-1}}$ or $v \sim 66,600 \left(\frac{M}{M_{\odot}}\right)^{\frac{1}{2}} \left(\frac{1 {\rm AU}}{r}\right)^{\frac{1}{2}} \ \ {\rm miles \ per \ hour}$ where 1 AU is 1 astronomical unit, and $M_{\odot}$ is a solar mass. In all of the above, $r$ is the radius of the orbit. Another convenient form (again for $m \ll M$) that expresses the Keplerian velocity as a fraction of the speed of light, $c$, is $\frac{v}{c} = \sqrt{ \frac{r_{g}}{r} }$ where $r_{g}$ is the gravitational radius, for which convenient forms are $\begin{array} rr_{g} & = 1.4822 \times 10^{13}M_{8} \ {\rm cm} \\ \\ r_{g} & = 1.4822 (M/M_{\odot}) \ {\rm km} \\ \\ r_{g} & \sim M_{8} \ {\rm AU}, \\ \end{array}$ where $M_{8}$ is the central is in units of $10^{8}$ solar masses. ### Orbital Velocity when One Mass is Not Negligible In the case that one of the masses is not negligible compared to the other, simply use the total mass ($M+m$) in place of $M$ in the above formulas for orbital velocity. ### Orbital Velocity for Elliptical Orbits In the general case of elliptical orbits, simply multiply any of the above equations for orbital velocity by the following factor, $f_{\rm elliptical}$: $f_{\rm elliptical} \ \equiv \frac{v_{\rm elliptical}}{v_{\rm circular}} \ = \ \left(2 – \frac{r}{a}\right)^{\frac{1}{2}}$ where $a$ is the semimajor axis of the ellipse, and $r$ is the length of the line joining the two masses (obviously, for elliptical orbits the speed varies around the ellipse).
2017-10-23 02:18:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8662282228469849, "perplexity": 224.80789692419683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825510.59/warc/CC-MAIN-20171023020721-20171023040721-00614.warc.gz"}
http://html.rhhz.net/qxxb_en/html/20170410.htm
J. Meteor. Res.  2017, Vol. 31 Issue (4): 767-773 PDF http://dx.doi.org/10.1007/s13351-017-6177-4 The Chinese Meteorological Society 0 Article Information Yan LI, Weican ZHOU, Xianyan CHEN, Dexian FANG, Qianqian ZHANG . 2017. Influences of the Three Gorges Dam in China on Precipitation over Surrounding Regions. 2017. J. Meteor. Res., 31(4): 767-773 http://dx.doi.org/10.1007/s13351-017-6177-4 Article History in final form March 22, 2017 Influences of the Three Gorges Dam in China on Precipitation over Surrounding Regions Yan LI1, Weican ZHOU1, Xianyan CHEN2, Dexian FANG3, Qianqian ZHANG1 1. Key Laboratory of Meteorological Disaster of Ministry of Education, Joint International Research Laboratory of Climate and Environment Change, and Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters, Nanjing University of Information Science & Technology, Nanjing 210044; 2. National Climate Center, Beijing 100081; 3. Chongqing Institute of Meteorological Sciences, Chongqing 401147 ABSTRACT: Impacts of the Three Gorges Dam (TGD) in China on the regional pattern and annual amount of precipitation around the Three Gorges Reservoir (TGR) are examined by comparing observations before and after the operation of TGD (1984–2003 and 2004–13). Empirical orthogonal function (EOF) analysis of the annual precipitation anomalies clearly indicates that the land-use change associated with the construction of TGD has not significantly changed the precipitation pattern. To investigate the impacts of TGD on the rainfall amount, we compare the relative variations of atmospheric variables related to precipitation formation in three spatial bands: over TGR, near TGR, and far from TGR. It is found that the differences in annual rainfall over TGD between the two periods before and after the operation of TGD are small, suggesting a weak impact of TGD on the rainfall amount. The TGD water level increased from 66 m before June 2003 to 175 m after 2010, and this may have slightly reduced precipitation on the local scale. Key words: Three Gorges Dam     precipitation     empirical orthogonal function (EOF) analysis 1 Introduction The Three Gorges Dam (TGD) in China—located on the Yangtze River and controlling a section of the river—is the largest hydro power project in the world. The associated Three Gorges Reservoir (TGR), a huge artificial lake, had a water level of 66 m before the operation of TGD, which then increased to 135 m in June 2003, 155 m in October 2006, approximately 172 m in 2008 and 2009, and 175 m on 25 October 2010, after which it has remained stable. At the point, its water level rose to 175 m, TGR had extended to 660 km along the waterway, with its average width having increased from 0.6 to 1–2 km, and the total area having reached 1084 km2. Therefore, an increasing amount of attention is being paid to the forcing exerted by TGD on the climate. However, the impact in this regard, due to the land-use change related to TGD, has not yet been clarified. Some studies, based on several specific field experiments (Chen et al., 2009, 2013; Li and Tang, 2014) and idealized numerical simulations (Zhang et al., 2004), have indicated that the influence of TGR on climate is weak and confined only to the local (i.e., km) scale. The surrounding topography is complex and may couple with TGR to impact the precipitation in nearby areas. Two numerical simulations, control and TGD, were performed by Miller et al. (2005) to investigate the influence of TGD, and the results indicated that the presence of TGR leads to strong local evaporation that cools the air, thus enhancing descending motion and, as a result, there is almost no influence on precipitation locally due to TGR. Wu et al. (2006) examined the impact of TGD on regional precipitation and found that TGR land-use change enhances the rainfall between the Daba and Qinling mountains and weakens it over the TGD region (28°–34°N, 106°–112°E). Xiao et al. (2010) analyzed the decadal variation of precipitation in the vicinity of TGD using daily data from 1960 to 2005, and argued that there is no significant evidence that the opposite sign of change in precipitation in the areas north and south of the Yangtze River has been affected by TGD. Hence, the effect of land-use change associated with TGD on the surrounding precipitation is controversial. Studies based on high-resolution numerical simulations (Zhang et al., 2004; Wu et al., 2006) and observational analysis have limited temporal coverage, only partially covering the period of TGD's existence (Wu et al., 2006; Xiao et al., 2010). Thus, further studies on the influences of TGD on the surrounding pattern and amounts of precipitation, using longer observational data, are required. The abovementioned research on the climate change induced by TGR (Wu et al., 2006; Xiao et al., 2010) only used data up until the year 2005, when the water level was less than 155 m, and there were only two years of observations used for discussion. In fact, TGD has now been in use—either partially or completely—for more than 10 yr, which provides us with a sufficiently long observational record to reanalyze the climatic impact of TGD. In addition, the rainfall over East Asia experienced a significant decadal shift around the mid 1990s (Zhu et al., 2014; Ren et al., 2017); the decadal turning point of the winter rainfall over southern China occurred around 1993 (Li et al., 2016). Therefore, in this study, 10 yr (2004–13) of observations after TGR came into use, and another two 10-yr observational periods (1984–93 and 1994–2003) before its use, are compared to reveal the possible impact of TGR on the surrounding patterns and amounts of precipitation. 2 Data and methods The daily observations of precipitation (mm), temperature (℃), evaporation (kg m–2), and relative humidity (%) during 1984–2013 are from the China Daily Ground Climate Dataset, Version 3.0, of the China Meteorological Data Sharing Service System (http://cdc.nmic.cn). The data were collected and quality-controlled by the National Meteorological Information Center of the China Meteorological Administration. After an initial quality assessment, we identified a set of stations to be applied in this study (Fig. 1). To understand effectively the precipitation variations (both spatial and temporal) in the vicinity of TGD (28°–34°N, 106°–112°E), empirical orthogonal function (EOF) analysis is applied. Figure 1 Location of Three Gorges Dam (TGD) and the observational stations used in this study. Each colored region represents the three spatial bands used for elucidating the influence of TGD on the local climate. Black dots indicate the station locations and the "#" symbol represents the location of TGD. We divided up the study zone based on distance from the reservoir (Degu et al., 2011). Specifically, we compared the climatic variations of temperature, humidity, evaporation, and precipitation in the following three spatial bands (Fig. 1): (1) over TGR, (2) near TGR (a band between 10 and 100 km away from the center of TGR), and (3) far from TGR (a band between 100 and 200 km away from the center of TGR). The climatic variation associated with the Pacific Decadal Oscillation and other decadal climate modes generally influences a much larger region, including the three spatial bands of the present study. However, the climate change induced by the reservoir is considered as being greater in the closer TGR region (Li et al., 2011). Hence, the relative variation from one band to another is considered as being mostly the results of the reservoir's impact. The influence of internal variations of temperature, humidity, and precipitation should be very random in the three bands, and thus negligible in the 10-or 30-yr average pattern. For a similar reason, we define a new time series as the difference between the time series over TGR and the vicinity of TGD (28°–34°N, 106°–112°E). Since the influence of TGD reflected in the time series over the vicinity of TGD is relatively small (Wu et al., 2006), the new time series should significantly reduce the signal of natural internal variation. 3 Precipitation variability around TGD The spatiotemporal characteristics of annual-mean precipitation in the TGD region are analyzed by using the EOF method for the three decades (1984–93, 1994–2003, and 2004–13), separately. The results show a similar pattern of precipitation anomalies in the first and second modes, so we put the three decades together to investigate the precipitation change over time. Figure 2 shows the spatial pattern and the corresponding spatiotemporal variability of the first and second EOF modes during 1984–2013 using the EOF method. The first mode explains 36% of the total variance and reflects a precipitation pattern with same anomalous signs over southern and northern Yangtze River (Fig. 2a). The second mode accounts for 14.9% of the total variance and shows opposite signs of anomalies in northern and southern Yangtze River (Fig. 2b). This is consistent with the findings of Xiao et al. (2010) and Li and Tang (2014). The time series of the first EOF mode mainly reflects the interannual variation (Fig. 2c). However, the time series of the second EOF mode shows not only the significant interannual variability, but also the interdecadal variability; the 5-yr running mean clearly shows two turning points, in 1993 and 2003, which is consistent with the split point of the study periods. From the mid 1990s to the early 2000s, precipitation decreased to the north and increased to the south. The precipitation patterns were similar during the 1980s and 2004–13, when the precipitation increased to the north of the Yangtze River but decreased to the south. The above EOF analysis agrees with previous studies that the natural precipitation has an opposite signal on northern and southern Yangtze River. Furthermore, we should be cautious of the fact that the turning point of the second EOF mode is around 2003, when TGR was only partially in use. Figure 2 Spatial patterns of the EOF's leading modes and the corresponding spatiotemporal variability of mean annual precipitation (mm day–1) in the area of TGD (28°–33°N, 105°–113°E) from 1984 to 2013: (a) spatial pattern of the first mode, (b) spatial pattern of the second mode, (c) principal component of the first mode, and (d) principal component of the second mode. The red line in (d) is the 5-yr running mean, and the blue horizontal lines denote the decadal mean. To investigate this further, the observation is divided into three 10-yr periods: 1984–93, 1994–2003, and 2004–13 (the former two representing decades before TGR was in use and the latter representing a decade after TGR came into use). The differences in the annual-mean precipitation between 1994–2003 and 1984–93, as well as between 2004–13 and 1994–2003, were calculated and plotted in Fig. 3. The results clearly show opposite precipitation changes over northern and southern sides of the Yangtze River, which is consistent with the second EOF mode. During the recent 10-yr period, after TGD's water level was raised in 2003, the annual-mean precipitation increased to the north and decreased to the south, as compared with 1994–2003. Wu et al. (2006) suggested that this phenomenon might have been influenced by the construction of TGD. However, Xiao et al. (2010) believed that such an opposite tendency of precipitation variation occurred before TGD's construction, meaning that it may have been due to natural variability. From Fig. 2d, we can see that the magnitude of the time series during 1984–93 is significantly smaller than that during 2004–13. Thus, this raises a new question as to whether the construction of TGD enhanced the natural variability of precipitation. Figure 3 Differences in mean annual precipitation (mm day–1) derived from the two periods: (a) between 1994–2003 and 1984–93; (b) between 2004–13 and 1994–2003. To address this question, the time series of annual-mean precipitation anomalies for the areas to the north and south of the Yangtze River were calculated. Figure 4a shows the time series of the average annual-mean precipitation over the northern side, from which it can be seen that the precipitation enhanced after 1998. The time series for the southern side (Fig. 4b) shows opposite anomalies; the precipitation started to decrease gradually from 2000. It is clear that there was no significant change in the precipitation trend in 2004, both on the northern and southern sides. These results strongly suggest that the patterns of increased precipitation on the northern side and decreased precipitation on the southern side in recent years are therefore not due to the expanded water area, but instead to a spatially homogeneous change in precipitation associated with natural climate change. The turning point in 2004 of the second EOF mode (Fig. 2d) may be due to the different natural variability between the north and south side areas. Hence, the pattern of opposite precipitation variation in the areas on southern and northern sides of TGR may not be influenced by TGD. Figure 4 Mean annual precipitation (mm day–1) on the (a) north side and (b) south side of the Yangtze River in the area of TGD from 1984 to 2013. The dotted line denotes the 9-yr running mean curve. 4 Possible impact of TGD on the annual rainfall amount To further investigate the possible impact of TGD on regional precipitation, we focus on the 30-yr daily mean near-surface (10 m) temperature (Tmean; ℃), maximum temperature (Tmax; ℃), minimum temperature (Tmin; ℃), precipitation (mm), relative humidity (%), and surface evaporation (kg m–2). The relative variation from one band to another is studied by using the three spatial bands as depicted in Fig. 1. Figure 5 shows the 30-yr spatial gradient for Tmean, Tmax, and Tmin in three scenarios: (1) Scenario Ⅰ: the percentage difference in temperature between the TGR and near-TGR regions, $\displaystyle\frac{{{\rm{TGR}}\! -\;{\rm{TGR}}_{\rm{near}}}}{{\;{\rm{TGR}}_{\rm{near}}}}\!\! \times \!\!100$ ; (2) Scenario Ⅱ: the percentage difference between the regions near and far from TGR, $\displaystyle\frac{{{\rm{TGR}}_{\rm{near}} - {\rm{TGR}}_{\rm{far}}}}{{{\rm{TGR}}_{\rm{far}}}} \times 100$ ; and Scenario Ⅲ: the percentage difference between TGR and the region far away, $\displaystyle\frac{{{\rm{TGR}}\! -\! {\rm{TGR}}_{\rm{far}}}}{{{\rm{TGR}}_{\rm far}}}\!\! \times \!\!100$ . During the recent 30-yr period, Scenarios Ⅰ and Ⅲ show a significantly increased gradient, and Scenario Ⅱ shows no significant change in Tmean and Tmin after 2003 (Figs. 5a, c). In other words, Tmean and Tmin over the TGR region increased faster than those in the near-TGR and far-TGR regions. In addition, the relative variation from the near-TGR region to the far-TGR region is small. However, the relative variability in Tmax from one band to another is weak in the recent 30-yr period, as shown in Fig. 5b. It is evident that TGR has exerted a stronger effect on local-scale (over the TGR region) Tmean, which is associated with a warming in Tmin (comparing Figs. 5b, 6c). These spatial gradient results for temperature agree with the conclusions of some previous studies (Chen et al., 2009, 2013). Figure 5 Spatial gradients of daily temperature (℃), where the y-axis is the percentage increase (%) of the climatological mean from one spatial band to another: (a) temperature, (b) maximum temperature, and (c) minimum temperature for Scenarios Ⅰ, Ⅱ, and Ⅲ. The spatial gradients of the 10-yr mean surface evaporation and relative humidity in the three periods are shown in Fig. 6. The surface evaporation and relative humidity show opposite changes in the past 30 years. The daily mean evaporation gradient between the TGR and near-TGR regions increased continuously; the increases are 0.42% and 2.45% in the two successive 10-yr periods, respectively. However, the daily mean evaporation gradient decreased between the TGD and far-TGR regions; the decreases are –1.74% and –2.77% in the two successive 10-yr periods, respectively. In Fig. 6b, the spatial gradient of relative humidity also demonstrates this point. It is shown that the change in the relative humidity gradient between the TGR and near-TGR regions is small in the pre-TGD period (1984–2003) but it becomes larger in the post-TGD period (2004–13), as does the gradient between the near-TGR and far-TGR regions. Figure 6 As in Fig. 5, but for (a) surface evaporation (kg m–2) and (b) relative humidity (%). Next, we explored the corresponding influence on precipitation. Figure 7 shows the spatial gradients of the daily total precipitation for the periods 1984–2013, 1984–93, 1994–2003, and 2004–13. Although the spatial gradients of precipitation are opposite in the post-and pre-TGD eras in Scenario Ⅲ, this is caused by the dramatic decrease in the gradient between the TGR and near-TGR regions (Scenario Ⅰ), as well as that between the near-TGR and far-TGR regions (Scenario Ⅱ) in the post-TGD period. Because the impact of the expanded watery area gradually decreased with increasing distance from TGR, a similar trend from the TGR region to the near-TGR region and an opposite trend from the above two regions to the far-TGR region mean that the effect on the precipitation associated with TGD is in the first 100 km (local scale, including the TGR and near-TGR regions). Figure 7 As in Fig. 5, but for precipitation (mm). Following the above findings, we investigated the influence of TGD on the local rainfall using a new time series, which is the difference of the two time series over the following regions: over the TGR region, and the whole vicinity of the TGD area (28°–34°N, 106°–112°E). For the new time series, the impact of natural variation should be reduced because it should be very similar in the above two regions. Figure 8 indicates a small difference in the mean annual precipitation between these two regions, with the difference ranging from –0.4 to 0.4 mm day–1. There are 7 yr when the difference is more than 0.2 mm day–1, among which 4 yr occurred before 2004, with a maximum of 0.35 mm day–1, and 3 yr occurred after 2004, with a maximum of 0.34 mm day–1. Moreover, there is a negative trend in the annual-mean precipitation difference. The slope of the negative trend is –0.13 (30 yr)–1, and it is not statistically significant at the 90% confidence level. Although the negative trend in the precipitation difference can be seen over the 30-yr period, the last decade's mean precipitation anomalies are much weaker than those in the first two decades. This suggests that the water level of TGD increasing by 69 m after 2003 may have had weak impacts on the rainfall amount in its vicinity, which may have slightly reduced precipitation on the local scale. Figure 8 Time series of the precipitation difference between the TGR region and the whole vicinity of the TGD area (28°–34°N, 106°–112°E). The horizontal lines indicate the average difference in precipitation in the pre-TGD and post-TGD periods, and the dotted line denotes the linear trend. 5 Conclusions In summary, the EOF analysis of annual precipitation anomalies suggests that TGR is located in a transition zone between opposing precipitation variation in the areas north and south of the Yangtze River. In addition, there has been no significant change in precipitation since 2004, both in the north and south. Thus, the increased precipitation on the northern side and decreased precipitation on the southern side since 2004 is probably due to the different natural variability, which is associated with natural climate change. The relative climate variation from one band to another in terms of temperature, humidity, evaporation, and precipitation, in three spatial bands (over TGR, near TGR, and far from TGR) has been investigated. The results indicate that the land-use changes associated with TGD may have weakly impacted the rainfall amount on the local scale, possibly reducing the precipitation over TGR slightly. The TGD project may influence both the social environment and climate change (Wu et al., 2006). We have tried in this study to investigate the influence of TGD on precipitation, to answer the question as to whether TGD could have modified the pattern and rainfall amounts of precipitation on the local or regional scale. The results provide useful information to the government for flood prevention and mitigation. It should, however, be pointed out that precipitation varies spatially due to the complex topography. The influence of TGD on the regional atmospheric circulations closely related to the precipitation has not been addressed in the present work; further studies using high-resolution numerical simulations are needed. References Chen, X. Y. , Q. Zhang, D. X. Ye, et al. , 2009: Regional climate change over Three Gorges reservoir area. Resour. Environ. Yangtze Basin, 18, 47-51. (in Chinese) Chen X. Y., Song L. C., Guo Z. F., et al., 2013: Climate change over the Three Gorge reservoir and upper Yangtze with its possible effect. Resour. Environ. Yangtze Basin, 22, 1466–1471. Degu A. M., Hossain F., Niyogi D., et al., 2011: The influence of large dams on surrounding climate and precipitation patterns. Geophys. Res. Lett., 38, L04405. DOI:10.1029/2010GL046482 Li H. Tang, 2014: Local precipitation changes induced by the Three Gorges reservoir based on TRMM observations. Resour. Environ. Yangtze Basin, 23, 617–625. Li W. Zhu, J. J. Dong, 2017: A new mean-extreme vector for the trends of temperature and precipitation over China during 1960–2013. Meteor. Atmos. Phys., 129, 273–282. DOI:10.1007/s00703-016-0464-y Li Y., Gao Y. H., Chen X. Y., et al., 2011: The impact of the land use change associated with the Three Gorges Dam on regional climate change. J. Nanjing Univ. (Nat. Sci.), 47, 330–338. Miller L., N. M. Jin, J. F. Tsang, 2005: Local climate sensitivity of the Three Gorges Dam. Geophys. Res. Lett., 32, L16704. DOI:10.1029/2005GL022821 Ren Q., Zhu Z. W., Hao L. P., et al., 2017: The enhanced relationship between southern China winter rainfall and warm pool ocean heat content. Int. J. Climatol., 37, 409–419. DOI:10.1002/joc.4714 Wu G., L. Zhang, Q. H. Jiang, 2006: Three Gorges Dam affects regional precipitation. Geophys. Res. Lett., 33, L6704. DOI:10.1029/2006GL026780 Xiao C. Yu, C. F. Fu, 2010: Precipitation characteristics in the Three Gorges Dam vicinity. Int. J. Climatol., 30, 2021–2024. DOI:10.1002/joc.1963 Zhang T., H. H. Zhu, C. Zhang, 2004: Numerical modeling of microclimate effects produced by the formation of the Three Gorges reservoir. Resour. Environ. Yangtze Basin, 13, 133–137. Zhu W., Z. Li, T. H. He, 2014: Out-of-phase relationship between boreal spring and summer decadal rainfall changes in southern China. J. Climate, 27, 1083–1099. DOI:10.1175/JCLI-D-13-00180.1
2017-09-20 09:39:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6252174973487854, "perplexity": 2629.8014859716673}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686983.8/warc/CC-MAIN-20170920085844-20170920105844-00224.warc.gz"}
https://aplusphysics.com/community/index.php?/topic/3233-projectile/
• 0 # projectile ## Question A body is projected downward at an angle of 30 degrees to the horizontal from the top of a building 170m high. It's initial speed is 40m/s. a) How long will it take before striking the ground? How far from the foot of the building will it strike? c) At what angle with the horizontal will it strike? ## Recommended Posts • 0 I'd start by making a table of information with the initial velocity, final velocity, displacement, acceleration, and time for both the vertical and horizontal components of motion.  Once you have the information in the table, your kinematic equations will be of help.  If you're not sure how to do that, the following video might be a great place to start: ##### Share on other sites • 0 It not an assignment am just putting the things I learnt from your video on projectile to work. I have tried every method possible but am confused with the case of downward projection. Please kindly put me through. Thanks. ##### Share on other sites • 0 Glad to help -- why don't you show what you've done so far -- i.e., show the tables of information for vertical and horizontal motion that you've put together.  That'll give me an idea how to guide you to a solution most efficiently. ##### Share on other sites • 0 Vertical competent Vi= 40sin30 Vf d=170m a= 9.8ms-2 t= Horizontal component V avg= 34.6ms-1 d= t= ##### Share on other sites • 0 For the vertical component, let's assume down is the negative direction...  You can solve for time in the air using the quadratic: $d=v_{i}t+\frac{1}{2}at^{2}$ and the quadratic equation, or make your life simpler and solve for vf first, then time. Once you know how long the projectile is in the air, use that time in your horizontal equation to find how far it travels horizontally. As for the angle with which it strikes, use the final vertical velocity component and the horizontal velocity component (along with a little trig) to solve for the impact angle. ## Join the conversation You can post now and register later. If you have an account, sign in now to post with your account. ×   Pasted as rich text.   Paste as plain text instead Only 75 emoji are allowed.
2022-07-02 05:24:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5666110515594482, "perplexity": 1023.202427283205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103984681.57/warc/CC-MAIN-20220702040603-20220702070603-00670.warc.gz"}
http://math.stackexchange.com/questions/255174/if-i-have-a-poisson-random-variable-with-parametery-mu-whats-this-condition
# If I have a Poisson random variable with parametery $\mu$, what's this conditional probability? I have the following question for homework and I'm not sure how to get it started. It says to suppose that $N$ is a Poisson random variable with parameter $\mu$. Given $N=n$, random variables $X_1,X_2,X_3,\ldots,X_n$ are independent with uniform $\sim (0,1)$ distribution. So there are a random number of $X$'s. Given $N=n$ what is the probability that all the $X$'s are less than $t$? So I set up the problem as I'm looking for: \begin{align}P(X\lt t\mid N=n) =\frac{P(X\lt t, N=n)}{P(N=n)}\end{align} I'm unsure of how to find the joint density function for the numerator. Since they aren't independent, because the probability of $X$ depends on $N$. I'm also unsure of the $t$ as well, so would this mean that if $X$ was unconditioned that$$P(X_j\lt t)=\frac{1}{1-t}$$?? - Given that $N=n$, we don't need to worry about the fact that $N$ is Poisson, we actually know $n$. (Presumably this is the beginning of a more elaborate problem in which the distribution of $N$ will be relevant.) For fixed $n$, calculating the probability is, I imagine, easy for you.
2015-05-29 15:01:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9518798589706421, "perplexity": 71.12137465108606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930143.90/warc/CC-MAIN-20150521113210-00339-ip-10-180-206-219.ec2.internal.warc.gz"}
http://www.bioinbrief.com/
## To develop a peptide vaccine for cancers sufferers using the HLA-A26 To develop a peptide vaccine for cancers sufferers using the HLA-A26 allele, which really is a minor inhabitants worldwide, we investigated the immunological replies of HLA-A26+/A26+ cancers sufferers to four different CTL epitope peptides below personalized peptide vaccine regimens. sufferers could not end up being signed up for this research because they didn’t have significantly more than 2 positive IgG replies among the 4 pooled peptides. As a result, the introduction of extra HLA-A26 limited peptides is Duloxetine manufacture necessary. Furthermore, if sufferers have both HLA-A2, -A24, or A3 HLA-A26 and superfamily, it might be suitable to make use of both peptides matched up using the HLA-A2, -A24, or A3 superfamily and the ones matched using the HLA-A26 allele, as we’ve been performing the vaccination using both peptides using the effective enhancing of peptide-specific CTL replies as reported previously.11,12,21,23 This is a small research with a restricted variety of sufferers for investigation from the PPV-induced immunological replies in HLA-A26+/A26+ cancers sufferers. Therefore, scientific benefits weren’t established as the supplementary objective. However, maybe it’s important to offer available information in the scientific outcome of the sufferers under PPV. We’ve previously reported the chance of extending the entire survival in scientific studies of PPV in each body organ cancer sufferers, as well as for sufferers who display humoral replies and T cell replies especially. As a total result, there have been no comprehensive response, no incomplete response, 13 steady disease (SD), 7 intensifying disease, and 1 unidentified. Five sufferers (one each with stage IV small-cell lung cancers, Duloxetine manufacture adenocarcinoma lung cancers, invasive ductal breasts cancer, pancreatic cancers, and cancer of the colon sufferers) cannot receive the initial routine of vaccination due to rapid disease development and passed away within 80?times of the initial vaccination. The various other 16 sufferers received at least six vaccinations (median, 15 vaccinations; range, 3C32 vaccinations), and their median success period was 949?times (range, 47C1820?times). Best scientific replies had been SD (n?=?12), progressive disease (n?=?2), and unknown (n?=?1). In this scholarly study, there is no factor in possibly cell surface inflammatory or markers cytokines between before and after PPV. This may be due mainly to small amounts of sufferers with numerous kinds of cancers. These total results, nevertheless, were partly like the prior cell surface area marker study where PPV induced boosts and reduced the regularity of PD1+Compact disc4+ T cells which of PD1+Compact disc8+ T cells in colaboration with favorable overall success.25 In conclusion, this study showed that PPV with these four different CTL epitope peptides could possibly be simple for HLA-A26+ advanced cancer patients due to the safety from the regimens and high rates of immunological responses. Acknowledgments This scholarly research was backed partly with the Japan Company for Medical Analysis and advancement, AMED, a comprehensive analysis plan from the Regional Invention Cluster Plan from the Ministry of Education, Culture, Sports, Technology and Research of Japan, and a grant in the Sendai Kousei Medical center. Disclosure Declaration Akira Yamada is certainly a Board person in the Green Peptide Co., Duloxetine manufacture Ltd. Kyogo Itoh received bureau honorarium and it is a expert/advisory plank member. Kyogo Itoh received analysis money from Taiho Pharmaceutical Co., Ltd. No Efnb2 issues of interests had been declared with the other authors.. ## History: Cross-sectional studies have indicated that vitamin D serostatus is inversely History: Cross-sectional studies have indicated that vitamin D serostatus is inversely associated with adiposity. multivariate mixed linear regression models. Results: Vitamin DCdeficient children had an adjusted 0.1/y greater change in BMI than did vitamin DCsufficient children (for trend = 0.05). Similarly, vitamin DCdeficient children had a 0.03/y (95% CI: 0.01, 0.05/y) greater change in subscapular-to-triceps skinfold-thickness 446859-33-2 IC50 ratio and a 0.8 cm/y (95% CI: 0.1, 1.6 cm/y) greater change in waist circumference than did vitamin DCsufficient children. Vitamin D deficiency was related to slower linear growth in girls (?0.6 cm/y, = 0.04) but not in boys (0.3 cm/y, = 0.34); however, an interaction with sex was not statistically significant. Conclusion: Vitamin D serostatus was inversely associated with the development of adiposity in school-age children. INTRODUCTION Many regions worldwide are undergoing a rapid nutrition transition through which obesity-related chronic conditions account for an increasing percentage of the condition burden (1). The fast upsurge in the prices of weight problems in school-age kids (2) is specially concerning because years as a child obesity can be a risk element for weight problems (3) and related risk elements for cardiometabolic disease (4) later on in life. It is very important to recognize modifiable risk elements that get excited about the early advancement of adiposity to steer future avoidance and treatment attempts. Supplement D insufficiency is prevalent in the globe highly; it’s estimated that 1 billion folks have 25-hydroxyvitamin D [25(OH)D] concentrations in keeping with insufficiency (75 nmol/L) (5). Actually children who reside in subtropical climates are in risk of supplement D deficiency relating to recent research in Brazil (6) and Costa Rica (7). Inadequate supplement D status is actually a risk element for childhood weight problems. Vitamin D impacts lypolysis 446859-33-2 IC50 (8, 9) and adipogenesis (10, 11) in human being adipocytes through its part in regulating intracellular calcium mineral concentrations. Cross-sectional research indicated that plasma 25(OH)D concentrations are inversely connected with body mass index (BMI; in kg/m2) (12C14) and waistline circumference (15, 16) in kids. Nevertheless, the interpretation of the associations is bound because supplement D could be sequestered from the bloodstream and in to the bigger adipose cells mass of obese topics due to its hydrophobic properties (17). The cross-sectional nature of previous studies precludes the making of an inference regarding the directionality of the association between vitamin D and adiposity. We conducted a prospective study to evaluate the associations between vitamin D serostatus assessed in subjects at enrollment and changes in indicators of adiposity, including BMI, subscapular-to-triceps skinfold-thickness ratio, and waist circumference, over 3 y of follow-up in a representative sample of low- and middle-income school-age children from Bogota, Colombia. In addition, we assessed the association between vitamin D serostatus and linear growth. SUBJECTS AND METHODS Study population and field procedures In February 2006, we recruited 3202 children aged 5C12 y from public schools in Bogota, Colombia as part of an observational longitudinal study in nutrition and health. Details on recruitment procedures and study design were previously published (18). In summary, we used a cluster random-sampling strategy in which clusters were defined as classes of all public primary schools in the city by the end of 2005. Because the public school system enrolled 57% of all primary school children in the city, and that 89% of them came from low- and middle-income socioeconomic backgrounds (19), the study 446859-33-2 IC50 population was representative of low- and middle-income families who lived in Bogota. At the time of enrollment, we distributed a self-administered questionnaire to parents through which we collected information on sociodemographic characteristics including age, parity, education level, and household socioeconomic status. The response rate for the survey was 81%. During the following weeks, trained research assistants stopped at the educational classes to acquire anthropometric measurements and fasting blood samples from the kids. Height was assessed without shoes towards the nearest 1 mm Rabbit Polyclonal to TDG having a wall-mounted portable Seca 202 stadiometer (Seca, Hanover, MD), pounds was assessed in light clothes towards the nearest 0.1 kg about Tanita HS301 solar-powered digital scales (Tanita, Arlington Heights, IL), skinfold thicknesses had been measured towards the nearest 0.5 mm with SlimGuide Skinfold Calipers (Creative Health Products Inc, Plymouth, MI), and waist circumference was measured towards the nearest 1 mm having a nonextensible measuring tape at the amount of the umbilicus relating to standard protocols (20). In June Follow-up anthropometric measurements were obtained. ## A modified colorimetric high-throughput display based on pH changes combined with A modified colorimetric high-throughput display based on pH changes combined with an amidase inhibitor capable of distinguishing between nitrilases and nitrile hydratases. is believed that for the most part, they are involved in a cascade reaction with amidases, affording carboxylic acids from nitriles passing through an amide intermediate ( Yasano 1980 ). Nitrilases are also present in many different species and afford a carboxylic acid directly from a nitrile compound ( Prasad 2010 ) ( Figure 1 ). Figure 1 Natural pathways for enzymatic conversion of nitriles to carboxylic acids. A number of screening assays for nitrile-converting enzymes based on continuous and stopped methods are well documented in the literature ( Asano 2002 , Martinkova 2008 ; Reisinger 2006 , Santoshkumar 2010 ; CEBPE He 2011 ; Zheng 2011, Yazbeck 2006 , Wang 2012 ). However, as nitrilases and nitrile hydratase-amidases afford the same final product, it is important to design a screening assay able to distinguish between the two enzymatic pathways. Herein, we describe a colorimetric high-throughput screening assay based on pH changes coupled with the use of an amidase inhibitor. This screen is based on a binary response allowing differentiation between nitrilases and nitrile hydratase-amidases enzymatic systems and is suitable for the first step of hierarchical screening projects. A Banerjee-modified colorimetric and pH sensitive assay coupled with an amidase inhibitor was performed for testing nitrilase and nitrile hydratase-amidase enzymes. Commercially obtainable microorganisms potentially including the nitrile hydratase Gleevec and nitrilase enzymatic systems had been utilized as positive settings. All nitriles and their related amides and carboxylic acids and an amidase inhibitor had been evaluated to identify any feasible color modification interferences inside the enzymatic assay Gleevec program. It had been assumed how the strains that didn’t collect the amide during nitrile degradation indicated nitrilase activity. The intermediate build up of the related amide during nitrile rate of metabolism coupled with carboxylic acidity formation was used as a sign of the lifestyle of the nitrile hydratase-amidase program ( Layh 1997 ). The manifestation of nitrile hydratases was induced by acetonitrile or benzonitrile for aromatic and aliphatic nitriles, respectively. Mandelonitrile was as well toxic towards the microorganisms ahead of enzymatic induction so that it was not utilized as an inducing agent. The well-known amidase inhibitor diethyl phosphoramidate, DEPA ( Bauer 1998 ), was selected for this testing since its color didn’t influence the assay readout and in addition it isn’t affected by pH adjustments during the assay. The usage of an amidase inhibitor allowed the accumulation from the amide intermediate, therefore permitting the discrimination between nitrile hydratase-amidases and nitrilases when only 1 of the enzymatic systems was present ( Brady 2004 ). Nevertheless, when the microorganism got both enzymatic systems, it had been not possible to attain a definitive summary. Furthermore, a microbial control test can be important because the creation and/or excretion of acidic metabolites in to the extracellular press in concentrations high plenty of to trigger color adjustments in the pH sign may bargain the assay validity. The testing assays could possibly be supervised by basic microtiter plate visible inspection ( Shape 2 ). Additionally, a colorimetric get better at plate was utilized as research color scale. Shape 2 Testing for nitrile hydratase and nitrilase creating strains using mandelonitrile like a substrate inside a microplate. Row A: control tests: A1CA3 mandelonitrile, A4CA6: mandelamide, A7CA9: mandelic acidity; A10CA12: … Needlessly to say, Pseudomonas putida CCT 2357 and Pseudomonas fluorescens CCT 3178 are specifically nitrilase creating strains ( Desk 1 ). This total result can be backed by proof through the books ( Chen 2009 , Prasad 2010 ) and may be rationalized from the maintenance of yellow color in the existence or lack of amidase inhibitor. Alternatively, Nocardia simplex CCT 3022 Gleevec generates just nitrile hydratase-amidase enzymes because the carboxylic acidity formation was recognized in the test without amidase inhibitor however, not in the assay with the help of DEPA. If a nitrilase was present, color modification would be anticipated in the test out DEPA addition, nevertheless, no color modification was noticed. The strains Rhodococcus ruber CCT 1879, Rhodococcus equi CCT 0541, Rhodococcus erythropolis CCT 1878 and Nocardia brasiliensis CCT 3439 create both a nitrilase and nitrile hydratase-amidase. Gleevec In this full case, the testing assay cannot provide a conclusive response. Tests were finished in the current presence of a nitrilase inhibitor (AgNO3),. ## Background Until recently, malaria in human beings was misdiagnosed as in Background Until recently, malaria in human beings was misdiagnosed as in the human population in Malaysia and to investigate four suspected fatal cases. Old World monkeys [1]. Naturally-acquired knowlesi infections in humans were thought to be rare until we explained a large focus of cases in the Kapit Division of Sarawak State, Malaysian Borneo [2]. In that study, all infections diagnosed as by microscopy were MK-0974 or other non-species using a nested-PCR assay. and are hard to distinguish microscopically leading to parasite species misidentification. Symptomatic malaria attributed to infections in adults has been reported in other parts of Malaysia, suggesting that this emergence of in humans may lengthen beyond the Kapit Division. At the time of our initial publication in 2004 [2], (reported as is generally connected with low parasitemia and an easy clinical training course, this raises the chance that malaria could become serious. In today’s series of research, we’ve MK-0974 motivated the distribution of in various places in Malaysian Borneo and Peninsular Malaysia utilizing a extremely particular nested-PCR assay, and evaluated all obtainable demographic, lab and clinical data in the 4 fatal situations. METHODS Human bloodstream examples The present research was accepted by the Medical Analysis Ethics Sub-Committee from the Malaysian Ministry of Wellness. In Sarawak it really is government plan to hospitalize all slide-positive malaria situations regardless of scientific severity. During several intervals between March 2001 and March 2006, a complete of 960 bloodstream spots on filtration system paper had been gathered from unselected sufferers accepted with slide-positive malaria to 12 clinics across Sarawak; Bau, Lundu, Betong, Serian, Sibu, Sarikei, Kanowit, Kapit, Marudi, Miri, Lawas and Limbang (for places and sample quantities see Body 1a). Hospitalization with microscopy-positive malaria was the just criteria employed for bloodstream spot collection. The samples from Kapit exclude those MK-0974 reported [2] previously. Parasite id by regular diagnostic microscopy documented 428 (44.6%) 2 (02%) and 2 (0.2%) blended attacks (Desk 1). The sufferers had been mostly male (758%) using a mean age group of 369 (range 02-91) years. body 1 Distribution and prevalence of individual knowlesi malaria in Malaysia Desk 1 Evaluation of outcomes for recognition of Plasmodium types by PCR and microscopy. In Malaysia there’s a requirement of malaria-positive bloodstream films used district clinics and health treatment centers to be delivered to the particular state Vector-Borne Illnesses Control Program (VBDCP) head office for re-examination and types verification by microscopy. These slides are kept for seven years. In response to your obtain microscopy-confirmed archival bloodstream films, a complete of 49 stained bloodstream smears defined as had been extracted from 15 Rabbit Polyclonal to EFNA2 administrative districts in Sabah, Malaysian Borneo. Of the, 13 had been from 2003, 10 from 2004 and 26 from 2005. Five archival bloodstream films defined as by microscopy had been extracted from 4 districts in Pahang, MK-0974 Peninsular Malaysia (3 in 2004 and 2 in 2005). Furthermore, bloodstream films extracted from four fatal malaria situations and reported as displaying by microscopy had been extracted from the Sarawak Wellness Section. DNA was extracted from every one of the archival bloodstream movies received for verification of Plasmodium types by nested-PCR. DNA removal and nested-PCR study of examples DNA was extracted from bloodstream spots on filtration system papers and entire bloodstream as defined previously [2, 3]. At least one harmful control blood spot from an uninfected individual was included for each and every 11 patient blood spots. Positive settings for and were included in all nested-PCR speciation assays and steps to prevent cross-contamination were as explained previously [2]. For blood films on microscope slides, DNA was extracted by moistening the blood film with one drop of Tris-EDTA (TE) buffer, pH 8, and scraping the film of blood into a microcentrifuge tube comprising 100 l TE buffer. Ten l of 10 mg/ml Proteinase K (Amresco, USA) and 100 l of lysis buffer (5 mM EDTA, 05% Sodium dodecyl sulfate, 200 mM NaCl and 100 mM Tris-Cl, pH 8) was added to the tube and incubated inside a thermomixer at 56 C with shaking at 900 rpm for 10 min. An equal volume of phenol-chloroform isoamyl alcohol (Amresco, USA) was then added to each sample, followed by strenuous combining for 15 sec and centrifugation for 2 min at 14,000 rpm. After. ## Although smartphone applications represent the most frequent data consumer tool from Although smartphone applications represent the most frequent data consumer tool from your citizen perspective in environmental applications, they can also be used for in-situ data collection and production in diverse scenarios, such as geological sciences and biodiversity. using alternate uncompressed and compressed types. data acquisition in assorted scenarios such as geological sciences [11,12], epidemiology [13], biodiversity [14], and noise pollution monitoring [8,9]. In these good examples smartphones play either a consumer or maker part as standard clients inside a client-server architecture. Nevertheless, they could become intermediaries or customer aggregators also. For example, in low-connectivity circumstances, a mobile program may consume and procedure data from close by receptors and upload aggregated datasets towards the corresponding machines when network links are restored [15C17]. In this specific case, smartphones might possibly gather huge levels of data to become additional published to remote control machines, which might be a significant impediment with regards to performance. Customers and Suppliers exchange sensor data through conversation protocols. Internet and Cellular Sensor Systems (WSN) are types of energetic conversation stations that connect sensor systems and customer applications. Whatever the particular route selected, communication is based on internationally used standard protocols [18]. The use of standard protocols to exchange info between smartphones and sensor infrastructures (servers, services, (SWE) initiative is a platform that specifies interfaces and metadata encodings to enable real-time integration of heterogeneous sensor networks. It provides solutions and encodings to enable the creation of web-accessible sensor property [26]. SWE is an attempt to define the foundations for the vision, a worldwide system where sensor networks of any kind can be connected [27C29]. It includes specifications for services interfaces such as (SOS) [30] and (SPS) [31], as well as encodings such as (O&M) [32] and the (SensorML) [33]. In this article we particularly focus on SOS, 107008-28-6 SensorML and O&M as they are the main specifications involved in the exchange of most sensor data between clients and servers. We consider in our experiments versions 1.0.0 of SOS and O&M and version 1.0.1 of SensorML, because although newer versions of SOS and O&M have been recently approved (as of April 2012), the older ones are still widely used. SOS-based services provide access to observations from a range of sensor systems, including remote, in-situ, fixed and mobile sensors, in a standard way. The information exchanged between clients and servers, as a general rule, will follow the O&M specification for observations and the SensorML specification for descriptions of sensors or system of sensors (both referred by the term allows clients to access metadata about the capabilities provided by the server. allows to retrieve descriptions of procedures. is used to retrieve observational data from the server. This data can be filtered using several parameters, such as procedures, observed phenomena, location, time intervals and instants. The offers support for data producers to upload observations into SOS servers. Using and 107008-28-6 operations, data producers can register its sensor systems and insert observations into the server, respectively. The service interfaces and data models in SWE fit nicely in the creation of information systems according to service-oriented architectures (SOA). The main SOA design principles such as loose-coupling between service implementations and interfaces, independence, reusability and composability, encourage the use of SWE specifications and data models in such information systems [14,34]. Therefore, these specifications such as for example 107008-28-6 SOS Mouse monoclonal antibody to COX IV. Cytochrome c oxidase (COX), the terminal enzyme of the mitochondrial respiratory chain,catalyzes the electron transfer from reduced cytochrome c to oxygen. It is a heteromericcomplex consisting of 3 catalytic subunits encoded by mitochondrial genes and multiplestructural subunits encoded by nuclear genes. The mitochondrially-encoded subunits function inelectron transfer, and the nuclear-encoded subunits may be involved in the regulation andassembly of the complex. This nuclear gene encodes isoform 2 of subunit IV. Isoform 1 ofsubunit IV is encoded by a different gene, however, the two genes show a similar structuralorganization. Subunit IV is the largest nuclear encoded subunit which plays a pivotal role in COXregulation solutions and O&M data versions have become common artifacts in the look and creation of SOA-based applications dealing with the integration and administration of observations and sensor systems. However, inside our opinion, their software to the cellular realm is bound due to the massive amount exchanged information, which exceeds the control capabilities of cell phones frequently. The necessity to decrease data conversation can be an essential element after that, which pertains to data formats found in communication protocols inevitably. XML (eXtensible Markup Vocabulary) is probable one of the most widely used. ## Nitric oxide, ?Zero, is among the most important substances in the Nitric oxide, ?Zero, is among the most important substances in the biochemistry of living microorganisms. imino nitroxides, correspondingly. An EPR strategy for discriminative ?Zero and recognition using liposome-encapsulated NNs originated HNO. The membrane hurdle of liposomes protects NNs against decrease in natural systems while is certainly permeable to both analytes, ?Zero and HNO. The awareness of this strategy for the recognition of the rates of ?NO/HNO generation is about 1 nM/s. The application of encapsulated NNs for real-time discriminative ?NO/HNO detection might become a valuable tool in nitric oxide related studies. =0.4104 M?1s?1 [44]. Fig. 4A shows the initial rates of hcPTIO conversion to hcPTI measured at wavelength 299 nm after addition of AS to solutions made up of hcPTIO/cPTIO combination in the absence (= (5.50.9)= 2.2104 M?1 s?1. The obtained value of the rate constant of HNO reaction with cPTIO is an order of magnitude lower than that reported by Samuni et al.[35]. The authors did not take into account cPTIO-induced acceleration Rabbit Polyclonal to Smad1 of AS decomposition, which resulted in overestimation of the rate constant of the reaction of cPTIO with HNO. Physique 4 The measurement of the rate constant of the reaction of cPTIO with HNO using NH2OH as competitive agent. (A) The decrease of absorbance at 299 nm in the mixture of 0.3 mM 1400W 2HCl supplier cPTIO and 0.3 mM hcPTIO measured after addition of 0.5 mM Angelis salt … EPR detection of ?NO and HNO by nitronyl nitroxide/hydroxylamine detecting system The capacity of EPR detection of ?NO and HNO by NN in the presence of hIN was explored for two NNs, cPTIO and NN+ (see Plan 1) using PAPA 1400W 2HCl supplier NONOate and AS as sources of ?NO and HNO, correspondingly. Amount 5 displays the prices from the EPR-measured NN decay and IN deposition in the current presence of PAPA NONOate so that as. The prices of NN decay linearly depended over the concentrations of PAPA NONOate so that as yielding half lifetimes of their decomposition, 1/2 = (882) min and 1/2 = (14.00.2) min, correspondingly, in an excellent contract with the books data. Nevertheless, the stoichiometry from the change, [NN]/[IN], was discovered to vary considerably, being near 1:2 in case there is ?NO generation and 1:1 in case of HNO generation. This is in agreement with the different mechanisms of ?NO and HNO reactions with the NNs. In case of ?NO generation, the observed transformations are described by equations: (6) $hIN+NO2?IN+HNO2$ (7) with the net equation $NN+hIN+?NO2IN+HNO2$ (8) being consistent with EPR-observed stoichiometry [NN]/[IN] close to 1:2 (Fig. 5A). Number 5 The dependencies of the rates of NN decay () and IN formation () on concentration of PAPA NONOate (A) and AS (B) measured by EPR using 0.5 mM cPTIO/0.5 mM hcPTI (filled symbols) or 0.5 mM NN+/5 mM hIN+ (bare symbols). Lines symbolize … In case of HNO generation, the observed transformations are explained by equations (1-3) becoming consistent with EPR-observed stoichiometry [NN]/[IN] close to 1:1 (Fig. 5B). The observed ?NO- and HNO-induced EPR spectral changes of NNs in the presence of hIN can be used as calibration for further quantitative EPR measurements of the rates of ?NO/HNO generation. The different stoichiometry of the [NN]/[IN] transformation can be utilized 1400W 2HCl supplier for discriminative ?NO and HNO detection mainly because demonstrated in the next section. Discriminative detection of nitric oxide and HNO by encapsulated nitronyl nitroxides The application of NNs in biological systems is limited due to the fast reduction of NNs and INs into diamagnetic EPR-silent product[29, 32, 45]. Encapsulation of membrane-impermeable NNs into the inner aqueous space of the liposomes has been previously used to protect NNs against bioreduction[29, 30, 46, 47]. Here we explore the power of encapsulated NN+ for discriminative detection of ?NO and HNO. Both varieties, HNO and ?NO, freely diffuse across phospholipid membrane, react using the NN+ forming different items after that, paramagnetic IN+ and diamagnetic hIN+, correspondingly. The last mentioned should bring about the various spectra adjustments as proven 1400W 2HCl supplier in Amount 6 illustrating the idea of this EPR strategy. Amount 6 Schematic style of liposome-encapsulated paramagnetic sensor for the discriminative EPR recognition of ?Zero and HNO. Encapsulation protects NN+ (R=N+(CH3)3) from reducing realtors. Both ?Zero and freely diffuse over the phospholipid membrane HNO, … To demonstrate the capability of encapsulated membrane-impermeable NN+ probe for discriminative recognition of ?Simply no and we performed EPR measurements using PAPA NONOate simply because HNO ?Zero AS and donor as HNO donor. An addition of Regarding the liposome-encapsulated NN+ led to several-fold acceleration of AS decomposition. ## OBJECTIVEGhrelin is a gut-derived peptide and an endogenous ligand for the OBJECTIVEGhrelin is a gut-derived peptide and an endogenous ligand for the growth hormones (GH) secretagogue receptor. phosphorylation (an alleged second messenger for ghrelin) in skeletal muscle mass. CONCLUSIONSGhrelin infusion induces lipolysis and insulin level of resistance independently of GH and cortisol acutely. We hypothesize which the metabolic ramifications of ghrelin give a methods to partition blood sugar to glucose-dependent ADL5747 IC50 tissue during circumstances of energy lack. Ghrelin, an endogenous ligand for the growth hormones (GH) secretagogue receptor (GHS-R), stimulates GH and adrenocorticotropic hormone (ACTH) secretion (1) furthermore to presenting orexigenic and gastrokinetic results (2,3). The observation that GHS-R is situated in peripheral tissues shows that ghrelin may exert immediate effects (4). The consequences of ghrelin on substrate in human beings are uncertain, but insulin level of resistance and arousal of lipolysis have already been reported (5C7). Nevertheless, it continues to be tough to segregate immediate results from results linked to cortisol and GH, and we’ve recently showed that somatostatin infusion does not sufficiently suppress ghrelin-induced GH and cortisol secretion (8). Hormonally changed hypopituitary sufferers constitute a way for learning putative GH- and cortisol-independent ramifications of ghrelin in individual topics in vivo. We ERCC6 aimed to review potential direct ramifications of ghrelin on substrate insulin and fat burning capacity awareness in the postabsorptive condition. In one test in healthful adults, we evaluated whether ghrelin-induced GH discharge translated into GH signaling in skeletal muscles, in case of which the need for abrogating indirect ramifications of ghrelin is normally apparent. Second, we examined the consequences of ghrelin publicity on whole-body and local substrate fat burning capacity in ADL5747 IC50 the basal and insulin-stimulated condition in hypopituitary sufferers on stable replacing with GH and hydrocortisone. Analysis DESIGN AND Strategies The studies had been conducted relative to the Helsinki Declaration and following approval by the neighborhood ethics committee, the Danish Medications Agency, and the nice Clinical Practice (GCP) device of Aarhus School Medical center. Both protocols had been signed up (Clinicaltrials.gov id study 1: “type”:”clinical-trial”,”attrs”:”text”:”NCT00116025″,”term_id”:”NCT00116025″NCT00116025 and research 2: “type”:”clinical-trial”,”attrs”:”text”:”NCT00139945″,”term_id”:”NCT00139945″NCT00139945). Planning of artificial ghrelin. Synthetic individual acylated ghrelin (NeoMPS, Strasbourg, France) was dissolved in isotonic saline and sterilized by dual passing through a 0.8/0.2-m pore-size filter (Super Acrodisc; Gelman Sciences, Ann Arbor, MI). Research 1: topics and study process. Six healthy guys (aged 23 1 years, BMI 23.5 0.4 kg/m2) were examined seeing that previously described (6). They received a continuing infusion of saline or ghrelin (5 pmol kg?1 min?1) beginning in 0 min. At 90 min, a muscles biopsy was extracted from ADL5747 IC50 the lateral vastus muscles using a Bergstr?m biopsy needle (Fig. 1). FIG. 1. Study protocol. Please refer to study design and methods for further details. Study 2: subjects and study protocol. Eight hypopituitary males (aged 53 4 years, BMI 31.6 1.0 kg/m2) about stable replacement therapy with GH and hydrocortisone (for >3 months) participated. None of the individuals experienced diabetes (A1C 5.7 0.1% [range 4.9C6.0]) or any concomitant chronic disease. Each individual was analyzed on two occasions with 5-h infusions of saline or ghrelin (5 pmol kg?1 min?1) inside a randomized double-blind, cross-over design. Both study days commenced at 0800 h after an over night fast (>9 h), with the subjects remaining fasting. One intravenous cannula was put in the antecubital region for infusion, and one intravenous cannula was put in a heated dorsal hand vein for sampling of arterialized blood. At = 0 min, saline or a primed-continuous ghrelin infusion (5 pmol kg?1 min?1) was commenced. The bolus dose was estimated from your elimination rate constant of ghrelin (= 0 min and continued throughout. Glucose rate of appearance (test when appropriate. ideals <0.05 were considered significant. Statistical analysis was performed using SPSS version ADL5747 IC50 14.0 for Windows. RESULTS Study 1. Ghrelin infusion stimulated endogenous GH secretion, which peaked at = 60 min (1.1 0.9 g/l [saline] vs. 33.3 8.0 g/l [ghrelin]; = 0.008). A significant elevation in serum FFA levels was recorded (0.4 0.04 g/l [saline] vs. 1.0 0.1 g/l ADL5747 IC50 [ghrelin]; = 0.003). The levels of serum cortisol (268 24 nmol/l [saline] vs. 400 57 nmol/l [ghrelin]; = 0.06) and plasma glucose (5.2 0.1 mmol/l [saline]. ## We aimed to investigate the design of manifestation and clinical need We aimed to investigate the design of manifestation and clinical need for isocitrate dehydrogenase 1(IDH1) in esophageal squamous cell carcinoma (ESCC). potential biomarker for prognosis and diagnosis. and [18]. IDH1 takes on driving jobs in the rate of metabolism of glucose, essential fatty acids, and glutamine aswell as the maintenance of mobile redox status; IDH1 is situated in the peroxisomes and cytoplasm [19]. Latest research about IDH1 in cancers possess centered on the mutations from the gene primarily. mutations were within low-grade glioma and supplementary glioblastoma, severe myeloid leukemia, chondrosarcoma, intrahepatic cholangiocarcinoma, and melanoma [22C24]. These studies for the gene indicate that mutation may affect tumorigenesis and tumor progression significanty. wild-type allele. Ward et al. recommended and validated that wild-type encourages cell growth and proliferation [25] after that. Aberrant protein manifestation, as the principal functional gene result, matches genome initiatives 17 alpha-propionate IC50 and can be an important phenotypic characteristic of cancer. The association of protein biomarkers with clinical characteristics and outcomes of cancer patients may elucidate the underlying molecular mechanisms of cancer initiation and progression [26]. Studies on wild-type IDH1 protein as a diagnostic and prognostic biomarker remain inadequate. IDH1 protein has been identified as a novel biomarker for the diagnosis of non-small cell lung cancer [27]. A study using genome-wide RNA-Seq indicates that IDH1 expression is usually higher in ESCC tissues than in normal tissues [28]. However, the protein expression of IDH1 17 alpha-propionate IC50 in ESCC and its correlation with 5-year overall survival (OS) rates and progression-free survival (PFS) are undetermined. In the current study, we compared the expression of IDH1 in the tumor 17 alpha-propionate IC50 tissue with that in the paracancerous tissue by quantitative real-time PCR (qRTCPCR), immunohistochemistry, and Western blot analysis. The serum expression in patients and healthy controls were used to assess the value of IDH1 as a diagnostic biomarker. Moreover, the association of IDH1 with the clinicopathological characteristics of patients with ESCC and the prognostic value of IDH1 were analyzed. CCK8 and clonal efficiency assays were used for observing if IDH1 could affect growth and proliferation of ESCC cells. RESULTS IDH1 expression in frozen tissues IDH1 expression was analyzed by IHC, qRTCPCR, and Traditional western blot evaluation. The IDH1 appearance in the formalin-fixed paraffin inserted (FFPE) tissue examples was dependant on IHC. The IDH1 proteins was mainly distributed in the cytoplasm of ESCC cells (Body ?(Figure1).1). Cancerous 17 alpha-propionate IC50 examples demonstrated 22 (+++), 8 (++), 6 (+), and 2 (C), whereas 17 alpha-propionate IC50 paracancerous tissue demonstrated 34 (C) and 4 (+). Therefore, it had been portrayed in 22 cancerous tissue and 0 paracancerous tissue extremely, and a big change was indicated (Desk ?(Desk1,1, < 0.001). By qRTCPCR evaluation, IDH1 in cancerous tissue was upregulated in accordance with that in paracancerous tissue in 38 sufferers (Body ?(Body2A,2A, < 0.001). To verify CSP-B the IDH1 level, American blot evaluation was performed with 10 pairs of cancerous and paracancerous tissue (Body ?(Figure2B).2B). The outcomes recommended that IDH1 appearance was higher in cancerous tissue than in paracancerous tissue (Body 2C, 2D, < 0.001). Body 1 IDH1 appearance in sufferers with ESCC was analyzed by executing immunohistochemistry Desk 1 Quantification from the appearance of IDH1 in cancerous and paracancerous tissue via IHC staining Body 2 IDH1 appearance in cancerous tissues weighed against that in paracancerous tissues was discovered at (A) mRNA level by RTCPCR Diagnostic worth of serum IDH1 We evaluated the serum degrees of IDH1 in 67 sufferers with ESCC and 67 healthful handles by enzyme-linked immunosorbent assay (ELISA) (Body ?(Figure3A).3A). The mean worth of. ## The severe acute respiratory syndrome (SARS) epidemic of 2003 was responsible The severe acute respiratory syndrome (SARS) epidemic of 2003 was responsible for 774 deaths and caused significant economic damage worldwide. problem that is regularly experienced in PCR-based assays. Furthermore, the RCA technology provides a faster, more sensitive, and economical substitute for available PCR-based strategies currently. Severe severe respiratory symptoms (SARS) can be an rising disease due to the book SARS coronavirus (SARS-CoV) (2, 38390-45-3 supplier 4, 5, 14). In July 2003 By the finish from the SARS epidemic, a complete of 8,096 SARS situations have been reported from 30 countries, with 774 fatalities. Whether potential outbreaks of SARS shall occur is unknown at the moment. However, provided the latest SARS situations in southern China due to an unknown supply and several laboratory-related attacks (12), it’s important to be ready for such a chance. In the lack of a SARS-CoV vaccine or antiviral medications, the usage of rigorous infection control insurance policies and early medical diagnosis with speedy, sensitive, and extremely specific laboratory strategies are crucial for the first administration of SARS-CoV an infection. From epidemiological linkages Apart, the radiographic and scientific top features of the disease aren’t SARS particular, identifying a dependence on specific laboratory lab tests that may confirm SARS-CoV an infection early throughout the disease. Recognition of SARS-CoV-specific antibodies is normally a particular and delicate but isn’t feasible at medical demonstration (6, 14). Recognition of SARS-CoV by invert transcription-PCR (RT-PCR) in medical specimens allows analysis in the first stage of the condition. However, as opposed to many other severe respiratory infections, just low degrees of SARS-CoV are usually present through the early symptomatic stage of infection. Based on the outcomes of first-generation RT-PCR assays, SARS-CoV RNA could be detected having a level of sensitivity of just ca. 30 to 50% in one respiratory specimen. An increased level of sensitivity may be accomplished if serial examples are collected, especially through the second week of disease when maximal disease shedding happens (13, 14). The sort of clinical test (e.g., nasopharyngeal aspirate, neck swabs, stool examples, urine, etc.) also impacts the level of sensitivity of RT-PCR (21). Lately, the energy of circularizable oligonucleotides, or padlock probes, continues to be proven for the recognition of focus on nucleic acidity sequences; this process shows greater level of sensitivity than regular PCR (3, 8, 16). Upon hybridization to a focus on RNA or DNA series, both ends from the probe become juxtaposed and may become became a member of by DNA ligase (Fig. ?(Fig.1).1). The circularized DNA probe produces a highly effective template for an exponential after that, or hyperbranching, rolling-circle amplification (RCA) response (Fig. ?(Fig.1),1), catalyzed with a processive DNA polymerase 38390-45-3 supplier with strand displacement activity highly. In isothermal circumstances, hyperbranching RCA can be with the capacity of a 109-collapse signal amplification of every group within 90 min (8). The RCA technique can be delicate extremely, and a circularized DNA probe destined to an individual target template could be effectively recognized (3). The RCA assay offers many advantages over additional amplification methods: the ligation needs Watson-Crick foundation pairing at both ends from the probe hybridize with ideal complementarity, not Terlipressin Acetate merely permitting the recognition of the single-nucleotide polymorphism but avoiding the nonspecific amplification generated simply by conventional PCR also. Circularizable probes could be useful for the reputation of both RNA and DNA web templates, eliminating the necessity for RT and developing a standard assay format for both RNA and DNA recognition (11). Single-stranded DNA displaced from the DNA polymerase could be 38390-45-3 supplier easily certain by primers, thus enabling the reaction to be performed under isothermal conditions and removing the need for a thermocycler. We describe here 38390-45-3 supplier a simple, scalable assay using RCA technology that allows the rapid, sensitive, and efficient detection of cultured SARS virus in both liquid and solid phases and present preliminary results on a small number of clinical respiratory specimens. FIG. 1. Pictorial representation of the RCA method. (A) Padlock probe containing target-complementary segment hybridization to a target DNA or RNA sequence. (B) The padlock probe can be.
2017-08-18 19:56:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40133312344551086, "perplexity": 12079.936613313794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105108.31/warc/CC-MAIN-20170818194744-20170818214744-00663.warc.gz"}
https://math.stackexchange.com/questions/1511128/is-the-equation-true-that-pa-cap-bc-pacpbc-while-a-b-are-independe
Is the equation true that $P(A\cap B|C) = P(A|C)P(B|C)$ while A, B are independent. Since I got into grad school to study computer sicence, I have been a T.A. and mid-term procter. Plus, I should check students answering sheets. To make answers, I would like to check the equation true which is $P(A\cap B|C) = P(A|C)P(B|C)$ while A, B are independent. I think that is true. But I am still little bit not sure. If that is true, could you prove why? Thank you for helping in advance. • What does the "comma " mean? – SchrodingersCat Nov 3 '15 at 13:48 • A intersection B. – Woonghee Lee Nov 3 '15 at 13:57 • Try it out on $C=A\triangle B$. Then the LHS takes value $0$ but the RHS can easily take a positive value. – drhab Nov 3 '15 at 14:10 Let $A$ and $B$ be the events that two independently tossed coins come up heads. Let $C$ be the event "exactly one coin comes up heads". Then the LHS is $0$, while the RHS is $1/4$. • No, that is not true. Independence of $A$ and $B$ doesn't imply that $A$ and $B$ are conditionally independent on $C$ - $C$ could in principle tell you everything about $A$ if you know $B$ as well. – Daniel Littlewood Nov 12 '15 at 18:17
2018-08-19 15:56:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.765271008014679, "perplexity": 435.8314337344375}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215222.74/warc/CC-MAIN-20180819145405-20180819165405-00669.warc.gz"}
https://www.semanticscholar.org/paper/Nagarajan-Subramanian-and-Ayhan-Esi-THE-N%C3%96RLUND-SP-Subramanian-Esi/fc01454a3f5e84a366a7aa16ae73e91a2c42a4d5
# Nagarajan Subramanian and Ayhan Esi THE NÖRLUND SPACE OF DOUBLE ENTIRE SEQUENCES • Published 2010 #### Abstract Let Γ denote the spaces of all double entire sequences. Let Λ denote the spaces of all double analytic sequences. This paper is devoted to a study of the general properties of Nörlund double entire sequence space η ( Γ ) , Γ and also study some of the properties of η ( Γ ) and η ( Λ ) . ### Cite this paper @inproceedings{Subramanian2010NagarajanSA, title={Nagarajan Subramanian and Ayhan Esi THE N{\"{O}RLUND SPACE OF DOUBLE ENTIRE SEQUENCES}, author={N. Subramanian and Ayhan Esi}, year={2010} }
2017-12-12 20:44:27
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9650841951370239, "perplexity": 9570.581964859519}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517917.20/warc/CC-MAIN-20171212192750-20171212212750-00361.warc.gz"}
https://docs.sciml.ai/stable/modules/SciMLSensitivity/ode_fitting/prediction_error_method/
# Prediction error method (PEM) When identifying linear systems from noisy data, the prediction-error method [Ljung] is close to a gold standard when it comes to the quality of the models it produces, but is also one of the computationally more expensive methods due to its reliance on iterative, gradient-based estimation. When we are identifying nonlinear models, we typically do not have the luxury of closed-form, non-iterative solutions, while PEM is easier to adopt to the nonlinear setting.[Larsson] Fundamentally, PEM changes the problem from minimizing a loss based on the simulation performance, to minimizing a loss based on shorter-term predictions. There are several benefits of doing so, and this example will highlight two: • The loss is often easier to optimize. • In addition to an accurate simulator, you also obtain a prediction for the system. • With PEM, it's possible to estimate disturbance models. The last point will not be illustrated in this tutorial, but we will briefly expand upon it here. Gaussian, zero-mean measurement noise is usually not very hard to handle. Disturbances that affect the state of the system may, however, cause all sorts of havoc on the estimate. Consider wind affecting an aircraft, deriving a statistical and dynamical model of the wind may be doable, but unless you measure the exact wind affecting the aircraft, making use of the model during parameter estimation is impossible. The wind is an unmeasured load disturbance that affects the state of the system through its own dynamics model. Using the techniques illustrated in this tutorial, it's possible to estimate the influence of the wind during the experiment that generated the data and reduce or eliminate the bias it otherwise causes in the parameter estimates. We will start by illustrating a common problem with simulation-error minimization. Imagine a pendulum with unknown length that is to be estimated. A small error in the pendulum length causes the frequency of oscillation to change. Over sufficiently large horizon, two sinusoidal signals with different frequencies become close to orthogonal to each other. If some form of squared-error loss is used, the loss landscape will be horribly non-convex in this case, indeed, we will illustrate exactly this below. Another case that poses a problem for simulation-error estimation is when the system is unstable or chaotic. A small error in either the initial condition or the parameters may cause the simulation error to diverge and its gradient to become meaningless. In both of these examples, we may make use of measurements we have of the evolution of the system to prevent the simulation error from diverging. For instance, if we have measured the angle of the pendulum, we can make use of this measurement to adjust the angle during the simulation to make sure it stays close to the measured angle. Instead of performing a pure simulation, we instead say that we predict the state a while forward in time, given all the measurements up until the current time point. By minimizing this prediction rather than the pure simulation, we can often prevent the model error from diverging even though we have a poor initial guess. We start by defining a model of the pendulum. The model takes a parameter $L$ corresponding to the length of the pendulum. using DifferentialEquations, Optimization, OptimizationOptimJL, OptimizationPolyalgorithms, Plots, Statistics, DataInterpolations, ForwardDiff tspan = (0.1f0, Float32(20.0)) tsteps = range(tspan[1], tspan[2], length = 1000) u0 = [0f0, 3f0] # Initial angle and angular velocity function simulator(du,u,p,t) # Pendulum dynamics g = 9.82f0 # Gravitational constant L = p isa Number ? p : p[1] # Length of the pendulum gL = g/L θ = u[1] dθ = u[2] du[1] = dθ du[2] = -gL * sin(θ) end We assume that the true length of the pendulum is $L = 1$, and generate some data from this system. prob = ODEProblem(simulator,u0,tspan,1.0) # Simulate with L = 1 sol = solve(prob, Tsit5(), saveat=tsteps, abstol = 1e-8, reltol = 1e-6) y = sol[1,:] # This is the data we have available for parameter estimation plot(y, title="Pendulum simulation", label="angle") We also define functions that simulate the system and calculate the loss, given a parameter p corresponding to the length. function simulate(p) _prob = remake(prob,p=p) solve(_prob, Tsit5(), saveat=tsteps, abstol = 1e-8, reltol = 1e-6)[1,:] end function simloss(p) yh = simulate(p) e2 = yh e2 .= abs2.(y .- yh) return mean(e2) end simloss (generic function with 1 method) We now look at the loss landscape as a function of the pendulum length: Ls = 0.01:0.01:2 simlosses = simloss.(Ls) fig_loss = plot(Ls, simlosses, title = "Loss landscape", xlabel="Pendulum length", ylabel = "MSE loss", lab="Simulation loss") This figure is interesting, the loss is of course 0 for the true value $L=1$, but for values $L < 1$, the overall slope actually points in the wrong direction! Moreover, the loss is oscillatory, indicating that this is a terrible function to optimize, and that we would need a very good initial guess for a local search to converge to the true value. Note, this example is chosen to be one-dimensional in order to allow these kinds of visualizations, and one-dimensional problems are typically not hard to solve, but the reasoning extends to higher-dimensional and harder problems. We will now move on to defining a predictor model. Our predictor will be very simple, each time step, we will calculate the error $e$ between the simulated angle $\theta$ and the measured angle $y$. A part of this error will be used to correct the state of the pendulum. The correction we use is linear and looks like $Ke = K(y - \theta)$. We have formed what is commonly referred to as a (linear) observer. The Kalman filter is a particular kind of linear observer, where $K$ is calculated based on a statistical model of the disturbances that act on the system. We will stay with a simple, fixed-gain observer here for simplicity. To feed the sampled data into the continuous-time simulation, we make use of an interpolator. We also define new functions, predictor that contains the pendulum dynamics with the observer correction, a prediction function that performs the rollout (we're not using the word simulation to not confuse with the setting above) and a loss function. y_int = LinearInterpolation(y,tsteps) function predictor(du,u,p,t) g = 9.82f0 L, K, y = p # pendulum length, observer gain and measurements gL = g/L θ = u[1] dθ = u[2] yt = y(t) e = yt - θ du[1] = dθ + K*e du[2] = -gL * sin(θ) end predprob = ODEProblem(predictor,u0,tspan,nothing) function prediction(p) p_full = (p..., y_int) _prob = remake(predprob,u0=eltype(p).(u0),p=p_full) solve(_prob, Tsit5(), saveat=tsteps, abstol = 1e-8, reltol = 1e-6)[1,:] end function predloss(p) yh = prediction(p) e2 = yh e2 .= abs2.(y .- yh) return mean(e2) end predlosses = map(Ls) do L p = (L, 1) # use K = 1 predloss(p) end plot!(Ls, predlosses, lab="Prediction loss") Once gain we look at the loss as a function of the parameter, and this time it looks a lot better. The loss is not convex, but the gradient points in the right direction over a much larger interval. Here, we arbitrarily set the observer gain to $K=1$, we will later let the optimizer learn this parameter. For completeness, we also perform estimation using both losses. We choose an initial guess we know will be hard for the simulation-error minimization just to drive home the point: L0 = [0.7] # Initial guess of pendulum length optf = Optimization.OptimizationFunction((x,p)->simloss(x), adtype) optprob = Optimization.OptimizationProblem(optf, L0) ressim = Optimization.solve(optprob, PolyOpt(), maxiters = 5000) ysim = simulate(ressim.u) plot(tsteps, [y ysim], label=["Data" "Simulation model"]) p0 = [0.7, 1.0] # Initial guess of length and observer gain K optf2 = Optimization.OptimizationFunction((p,_)->predloss(p), adtype) optfunc2 = Optimization.instantiate_function(optf2, p0, adtype, nothing) optprob2 = Optimization.OptimizationProblem(optfunc2, p0) respred = Optimization.solve(optprob2, PolyOpt(), maxiters = 5000) ypred = simulate(respred.u) plot!(tsteps, ypred, label="Prediction model") The estimated parameters $(L, K)$ are respred.u Now, we might ask ourselves why we used a correct on the form $Ke$ and didn't instead set the angle in the simulation equal to the measurement. The reason is twofold 1. If our prediction of the angle is 100% based on the measurements, the model parameters do not matter for the prediction and we can thus not hope to learn their values. 2. The measurement is usually noisy, and we thus want to fuse the predictive power of the model with the information of the measurements. The Kalman filter is an optimal approach to this information fusion under special circumstances (linear model, Gaussian noise). We thus let the optimization learn the best value of the observer gain in order to make the best predictions. As a last step, we perform the estimation also with some measurement noise to verify that it does something reasonable: yn = y .+ 0.1f0 .* randn.(Float32) y_int = LinearInterpolation(yn,tsteps) # redefine the interpolator to contain noisy measurements optf = Optimization.OptimizationFunction((x,p)->predloss(x), adtype) optprob = Optimization.OptimizationProblem(optf, p0) resprednoise = Optimization.solve(optprob, PolyOpt(), maxiters = 5000) yprednoise = prediction(resprednoise.u) plot!(tsteps, yprednoise, label="Prediction model with noisy measurements") resprednoise.u This example has illustrated basic use of the prediction-error method for parameter estimation. In our example, the measurement we had corresponded directly to one of the states, and coming up with an observer/predictor that worked was not too hard. For more difficult cases, we may opt to use a nonlinear observer, such as an extended Kalman filter (EKF) or design a Kalman filter based on a linearization of the system around some operating point. As a last note, there are several other methods available to improve the loss landscape and avoid local minima, such as multiple-shooting. The prediction-error method can easily be combined with most of those methods. References: • LjungLjung, Lennart. "System identification–-Theory for the user". • LarssonLarsson, Roger, et al. "Direct prediction-error identification of unstable nonlinear systems applied to flight test data."
2022-09-27 07:31:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7948046326637268, "perplexity": 1292.9991459827918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00448.warc.gz"}
http://web.stanford.edu/~yyye/correction.html
# List of Corrections Page 99, line -10 : change $\ge$ to $\le$. Also, in the bottom paragraph, change "increased" to "decreased". Page 98, line 6 : add a transpose between the two vector product. Page 95, line 4 : change the power -2 in the last term to 2. Page 89, line -1 : remove the power 2. (Thanks to Jianfu Wang of University Leiden, Netherlands for discovering these typos.) Page 49, 53 : both $a_j$ and $a_{.j}$ represent the $j$th column of $A$; Example 2.1: (3/) should be (3/2). Exercise 2.12: Change "2" to "1" in the (dual) potential functions. Page 13 : 6th line "function" should be "functions"; Page 44: in the inequality of Theorem 2.1, "$n$" should be "$m$". (Thanks to Cris Choi of UI for discovering these typos.) Page 170 in Algorithm 5.2. In the line "While ... do," "$\|(x^k,s^k)\|_1 < \frac{2}{\theta^k\rho}$" has to be replaced by "$\|(x^k,s^k)\|_1 \leq \frac{2}{\theta^k\rho}$". (Thanks to Rouven Lepenis of Uni-Wuezburg for discovering this error.) Reference [41]: Change "E.-W. Bai and Y. Ye" to "E.-W. Bai, Y. Ye, and R. Tempo". I apologize to Roberto Tempo for this error. Reference [52]: Change "D. P. Bertsekas" to "D. Bertsimas". I apologize to Bertsimas for this error. Page 13: Change "$C\bullet X$ and $\det(X)$ are homogeneous of degree $1$..." to "$C\bullet X$ and $\det(X)$ are homogeneous of degree $1$ and $n$, respectively, ...". Theorem 3.11 and Exercise 3.8: Change "\eta(x,s)\le 1" to "\eta(x,s)\le 1/\sqrt{2}". Page 260, equation (8.24): Change "(x/\tau)^T\nabla f(x/\tau)^T" to "(x/\tau)^T\nabla f(x/\tau)". Yinyu Ye
2014-07-23 08:36:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5892117619514465, "perplexity": 7342.932963452475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997877644.62/warc/CC-MAIN-20140722025757-00115-ip-10-33-131-23.ec2.internal.warc.gz"}
https://www.zbmath.org/authors/?q=ai%3Agalkowski.jeffrey
## Galkowski, Jeffrey Compute Distance To: Author ID: galkowski.jeffrey Published as: Galkowski, Jeffrey; Galkowski, J. Documents Indexed: 30 Publications since 2012, including 1 Book Co-Authors: 13 Co-Authors with 20 Joint Publications 274 Co-Co-Authors all top 5 ### Co-Authors 10 single-authored 5 Spence, Euan A. 4 Toth, John Andrew 3 Canzani, Yaiza 2 Marchand, Pierre 2 Wunsch, Jared 2 Zworski, Maciej 1 Bruno, Oscar P. 1 Chatterjee, Sourav 1 Dyatlov, Semyon 1 Léautaud, Matthieu 1 Müller, Eike Hermann 1 Smith, Hart F. 1 Zelditch, Steve all top 5 ### Serials 4 Communications in Mathematical Physics 3 Journal of Spectral Theory 2 Pure and Applied Analysis 1 Israel Journal of Mathematics 1 Nonlinearity 1 Annales de l’Institut Fourier 1 Duke Mathematical Journal 1 Integral Equations and Operator Theory 1 Journal of Differential Geometry 1 Journal für die Reine und Angewandte Mathematik 1 Memoirs of the American Mathematical Society 1 Numerische Mathematik 1 Proceedings of the London Mathematical Society. Third Series 1 Transactions of the American Mathematical Society 1 IMRN. International Mathematics Research Notices 1 The Journal of Geometric Analysis 1 Communications in Partial Differential Equations 1 SIAM Journal on Mathematical Analysis 1 Advances in Computational Mathematics 1 The Journal of Fourier Analysis and Applications 1 Journal of the Institute of Mathematics of Jussieu 1 Journal of Physics A: Mathematical and Theoretical 1 Analysis & PDE 1 Annals of PDE all top 5 ### Fields 25 Partial differential equations (35-XX) 10 Global analysis, analysis on manifolds (58-XX) 6 Quantum theory (81-XX) 4 Differential geometry (53-XX) 4 Numerical analysis (65-XX) 3 Operator theory (47-XX) 2 Ordinary differential equations (34-XX) 1 Potential theory (31-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Probability theory and stochastic processes (60-XX) 1 Mechanics of particles and systems (70-XX) 1 Systems theory; control (93-XX) ### Citations contained in zbMATH Open 24 Publications have been cited 97 times in 58 Documents Cited by Year Restriction bounds for the free resolvent and resonances in lossy scattering. Zbl 1347.35188 Galkowski, Jeffrey; Smith, Hart F. 2015 Wavenumber-explicit analysis for the Helmholtz $$h$$-BEM: error estimates and iteration counts for the Dirichlet problem. Zbl 1414.35054 Galkowski, Jeffrey; Müller, Eike H.; Spence, Euan A. 2019 Averages of eigenfunctions over hypersurfaces. Zbl 1393.53031 Canzani, Yaiza; Galkowski, Jeffrey; Toth, John A. 2018 Eigenfunction scarring and improvements in $$L^\infty$$ bounds. Zbl 1386.35307 Galkowski, Jeffrey; Toth, John A. 2018 Distribution of resonances in scattering by thin barriers. Zbl 1453.35002 Galkowski, Jeffrey 2019 Optimal constants in nontrapping resolvent estimates and applications in numerical analysis. Zbl 1439.35142 Galkowski, Jeffrey; Spence, Euan A.; Wunsch, Jared 2020 A quantitative Vainberg method for black box scattering. Zbl 1372.35204 Galkowski, Jeffrey 2017 Pointwise bounds for Steklov eigenfunctions. Zbl 1407.35239 Galkowski, Jeffrey; Toth, John A. 2019 The quantum Sabine law for resonances in transmission problems. Zbl 1407.35149 Galkowski, Jeffrey 2019 Wavenumber-explicit regularity estimates on the acoustic single- and double-layer operators. Zbl 1417.31008 Galkowski, Jeffrey; Spence, Euan A. 2019 Defect measures of eigenfunctions with maximal $$L^\infty$$ growth. (Mesures de défaut de fonctions propres avec croissance $$L^\infty$$ maximale.) Zbl 1428.35106 Galkowski, Jeffrey 2019 Pointwise bounds for joint eigenfunctions of quantum completely integrable systems. Zbl 1441.81101 Galkowski, Jeffrey; Toth, John A. 2020 The $$L^{2}$$ behavior of eigenfunctions near the glancing set. Zbl 1362.53046 Galkowski, Jeffrey 2016 Quantum ergodicity for a class of mixed systems. Zbl 1290.81037 Galkowski, Jeffrey 2014 On the growth of eigenfunction averages: microlocalization and geometry. Zbl 1471.35213 Canzani, Yaiza; Galkowski, Jeffrey 2019 Resonances for thin barriers on the circle. Zbl 1349.35262 Galkowski, Jeffrey 2016 Eigenfunction concentration via geodesic beams. Zbl 1470.35243 Canzani, Yaiza; Galkowski, Jeffrey 2021 Nonlinear instability in a semiclassical problem. Zbl 1258.35026 Galkowski, Jeffrey 2012 Fractal Weyl laws and wave decay for general trapping. Zbl 1380.35021 Dyatlov, Semyon; Galkowski, Jeffrey 2017 Pseudospectra of semiclassical boundary value problems. Zbl 1317.35146 Galkowski, Jeffrey 2015 Domains without dense Steklov nodal sets. Zbl 1445.35271 Bruno, Oscar P.; Galkowski, Jeffrey 2020 Control from an interior hypersurface. Zbl 1436.35255 Galkowski, Jeffrey; Léautaud, Matthieu 2020 Eigenvalues of the truncated Helmholtz solution operator under strong trapping. Zbl 1480.35111 Galkowski, Jeffrey; Marchand, Pierre; Spence, Euan A. 2021 Analytic hypoellipticity of Keldysh operators. Zbl 1481.35100 Galkowski, Jeffrey; Zworski, Maciej 2021 Eigenfunction concentration via geodesic beams. Zbl 1470.35243 Canzani, Yaiza; Galkowski, Jeffrey 2021 Eigenvalues of the truncated Helmholtz solution operator under strong trapping. Zbl 1480.35111 Galkowski, Jeffrey; Marchand, Pierre; Spence, Euan A. 2021 Analytic hypoellipticity of Keldysh operators. Zbl 1481.35100 Galkowski, Jeffrey; Zworski, Maciej 2021 Optimal constants in nontrapping resolvent estimates and applications in numerical analysis. Zbl 1439.35142 Galkowski, Jeffrey; Spence, Euan A.; Wunsch, Jared 2020 Pointwise bounds for joint eigenfunctions of quantum completely integrable systems. Zbl 1441.81101 Galkowski, Jeffrey; Toth, John A. 2020 Domains without dense Steklov nodal sets. Zbl 1445.35271 Bruno, Oscar P.; Galkowski, Jeffrey 2020 Control from an interior hypersurface. Zbl 1436.35255 Galkowski, Jeffrey; Léautaud, Matthieu 2020 Wavenumber-explicit analysis for the Helmholtz $$h$$-BEM: error estimates and iteration counts for the Dirichlet problem. Zbl 1414.35054 Galkowski, Jeffrey; Müller, Eike H.; Spence, Euan A. 2019 Distribution of resonances in scattering by thin barriers. Zbl 1453.35002 Galkowski, Jeffrey 2019 Pointwise bounds for Steklov eigenfunctions. Zbl 1407.35239 Galkowski, Jeffrey; Toth, John A. 2019 The quantum Sabine law for resonances in transmission problems. Zbl 1407.35149 Galkowski, Jeffrey 2019 Wavenumber-explicit regularity estimates on the acoustic single- and double-layer operators. Zbl 1417.31008 Galkowski, Jeffrey; Spence, Euan A. 2019 Defect measures of eigenfunctions with maximal $$L^\infty$$ growth. (Mesures de défaut de fonctions propres avec croissance $$L^\infty$$ maximale.) Zbl 1428.35106 Galkowski, Jeffrey 2019 On the growth of eigenfunction averages: microlocalization and geometry. Zbl 1471.35213 Canzani, Yaiza; Galkowski, Jeffrey 2019 Averages of eigenfunctions over hypersurfaces. Zbl 1393.53031 Canzani, Yaiza; Galkowski, Jeffrey; Toth, John A. 2018 Eigenfunction scarring and improvements in $$L^\infty$$ bounds. Zbl 1386.35307 Galkowski, Jeffrey; Toth, John A. 2018 A quantitative Vainberg method for black box scattering. Zbl 1372.35204 Galkowski, Jeffrey 2017 Fractal Weyl laws and wave decay for general trapping. Zbl 1380.35021 Dyatlov, Semyon; Galkowski, Jeffrey 2017 The $$L^{2}$$ behavior of eigenfunctions near the glancing set. Zbl 1362.53046 Galkowski, Jeffrey 2016 Resonances for thin barriers on the circle. Zbl 1349.35262 Galkowski, Jeffrey 2016 Restriction bounds for the free resolvent and resonances in lossy scattering. Zbl 1347.35188 Galkowski, Jeffrey; Smith, Hart F. 2015 Pseudospectra of semiclassical boundary value problems. Zbl 1317.35146 Galkowski, Jeffrey 2015 Quantum ergodicity for a class of mixed systems. Zbl 1290.81037 Galkowski, Jeffrey 2014 Nonlinear instability in a semiclassical problem. Zbl 1258.35026 Galkowski, Jeffrey 2012 all top 5 ### Cited by 69 Authors 15 Galkowski, Jeffrey 10 Spence, Euan A. 6 Wunsch, Jared 3 Tacy, Melissa 3 Wyman, Emmett L. 2 Canzani, Yaiza 2 Escapil-Inchauspé, Paul 2 Gomes, Sean P. 2 Jerez-Hanckes, Carlos 2 Lotoreichik, Vladimir 2 Marchand, Pierre 2 Toth, John Andrew 2 Xi, Yakun 2 Zelditch, Steve 1 Anand, Akash 1 Barnett, Alex H. 1 Baskin, Dean 1 Behrndt, Jussi 1 Betcke, Timo 1 Boubendir, Yassine 1 Bruno, Oscar P. 1 Chandler-Wilde, Simon N. 1 Chen, Linchong 1 Deleporte, Alix 1 Di Cristo, Michele 1 Dwarka, Vandana 1 Dyatlov, Semyon 1 Ecevit, Fatih 1 Embree, Mark 1 Ganesh, Mahadevan 1 Gannot, Oran 1 Gélat, Pierre 1 Gibbs, Andrew 1 Graham, Ivan G. 1 Han, Xiaolong 1 Haqshenas, Seyyed R. 1 Hassell, Andrew 1 Hillairet, Luc 1 Iosevich, Alex 1 Jiang, Ying 1 Kuo, Frances Y. 1 Lafontaine, David 1 Langer, Matthias 1 Lazergui, Souaad 1 Léautaud, Matthieu 1 Li, Xiaolin 1 Lin, Fang Hua 1 Moiola, Andrea 1 Müller, Eike Hermann 1 Nonnenmacher, Stéphane 1 Pembery, O. R. 1 Rivière, Gabriel 1 Rohleder, Jonathan 1 Rondi, Luca 1 Saffari, Nader 1 Sloan, Ian Hugh 1 Smyshlyaev, Valery P. 1 Taylor, Michael Eugene 1 van ’t Wout, Elwin 1 Vodev, Georgi 1 Vögel, Martin 1 Vuik, Cornelis 1 Wang, Bo 1 Wang, Xing 1 Yang, Mengxuan 1 Yu, Dandan 1 Zhang, Cheng 1 Zhu, Jiuyi 1 Zworski, Maciej all top 5 ### Cited in 43 Serials 4 Communications in Mathematical Physics 3 Numerische Mathematik 3 SIAM Journal on Mathematical Analysis 2 Annales de l’Institut Fourier 2 Duke Mathematical Journal 2 Journal of Functional Analysis 2 The Journal of Geometric Analysis 2 Communications in Partial Differential Equations 2 Advances in Computational Mathematics 2 Annales Mathématiques du Québec 1 Analysis Mathematica 1 Applicable Analysis 1 Computers & Mathematics with Applications 1 Israel Journal of Mathematics 1 Journal of Computational Physics 1 Nonlinearity 1 Reviews in Mathematical Physics 1 Advances in Mathematics 1 American Journal of Mathematics 1 BIT 1 Indiana University Mathematics Journal 1 Integral Equations and Operator Theory 1 Journal of Computational and Applied Mathematics 1 Journal of Differential Geometry 1 Journal für die Reine und Angewandte Mathematik 1 Mathematische Annalen 1 Memoirs of the American Mathematical Society 1 Transactions of the American Mathematical Society 1 Systems & Control Letters 1 Forum Mathematicum 1 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 1 Numerical Algorithms 1 Applied Mathematical Modelling 1 SIAM Journal on Scientific Computing 1 The Journal of Fourier Analysis and Applications 1 Documenta Mathematica 1 Journal of the European Mathematical Society (JEMS) 1 Annales Henri Poincaré 1 Journal of the Institute of Mathematics of Jussieu 1 Bulletin of Mathematical Sciences 1 SIAM/ASA Journal on Uncertainty Quantification 1 Séminaire Laurent Schwartz. EDP et Applications 1 Pure and Applied Analysis all top 5 ### Cited in 22 Fields 46 Partial differential equations (35-XX) 19 Global analysis, analysis on manifolds (58-XX) 16 Numerical analysis (65-XX) 8 Quantum theory (81-XX) 7 Operator theory (47-XX) 6 Optics, electromagnetic theory (78-XX) 4 Differential geometry (53-XX) 3 Potential theory (31-XX) 3 Dynamical systems and ergodic theory (37-XX) 3 Harmonic analysis on Euclidean spaces (42-XX) 2 Several complex variables and analytic spaces (32-XX) 2 Ordinary differential equations (34-XX) 2 Integral equations (45-XX) 2 Systems theory; control (93-XX) 1 Number theory (11-XX) 1 Algebraic geometry (14-XX) 1 Functions of a complex variable (30-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Geometry (51-XX) 1 Probability theory and stochastic processes (60-XX) 1 Mechanics of particles and systems (70-XX) 1 Statistical mechanics, structure of matter (82-XX)
2022-06-26 03:15:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4552238881587982, "perplexity": 8619.401688522988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036363.5/warc/CC-MAIN-20220626010644-20220626040644-00100.warc.gz"}
https://calculator.academy/annualized-turnover-calculator/
Enter the number of months, the average number of employees, and the average number of employees leaving to determine the annualized turnover. ## Annualized Turnover Formula The following formula is used to calculate an annualized turnover. ATR = (E/L/M)*12*100 • Where ATR is the annualized turnover rate (%) • E is the average number of people employed during the time period • L is the number of employees that left during the time period • M is the total number of months of the period To calculate an annualized turnover rate, divide the number of people employed during the time period by the number of employees that left, then again by the number of month, then finally multiply by 1200. ## Annualized Turnover Definition An annualized turnover is defined as the rate or percentage of employees that leave a company over a year with respect to the total number of employees. For example, if there is a company with 10 employees on average, and 5 people left over a year, the turnover rate is 50%. (5/10 * 100 = 50) ## Example Problem How to calculate annualized turnover? 1. First, determine the length of the period to be analyzed. For this example, the number of months that the company has turnover data for is 3 months. 2. Next, determine the average number of employees during the time period. In this case, the number of employees was 101, 100, and 99 for each month respectively. This leads to an average of 100 employees. 3. Next, determine the number of employees that left of the time period. In this scenario, the number of employees that left was 20. 4. Finally, calculate the annualized turnover rate using the formula above. ATR = (E/L/M)*12*100 ATR = (20/100/3)*12*100 ATR = 80%
2023-03-29 18:51:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.741237461566925, "perplexity": 778.6594966829532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949025.18/warc/CC-MAIN-20230329182643-20230329212643-00304.warc.gz"}
http://www.gradesaver.com/textbooks/science/chemistry/chemistry-a-molecular-approach-3rd-edition/chapter-8-sections-8-1-8-9-exercises-review-questions-page-375/25a
## Chemistry: A Molecular Approach (3rd Edition) The alkali metals have one valance electron and are among the most reactive metals because their electron configuration $(ns^{1})$ is one electron beyond a nobles gas configuration. They readily react to lose $(ns^{1})$ electron obtaining a noble gas configuration. This explains why alkali metals tend to form +1 cations.
2017-11-25 09:47:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21877849102020264, "perplexity": 3703.5668424838673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809746.91/warc/CC-MAIN-20171125090503-20171125110503-00399.warc.gz"}
https://athhallcamreview.com/binary-neutron-star-mergers-as-a-probe-of-quark-hadron-crossover-equations-of-state-arxiv2203-05461v2-gr-qc-updated/
##### What's Hot It is anticipated that the gravitational radiation detected in future gravitational wave (GW) detectors from binary neutron star (NS) mergers can probe the high-density equation of state (EOS). We perform the first simulations of binary NS mergers which adopt various parametrizations of the quark-hadron crossover (QHC) EOS. These are constructed from combinations of a hadronic EOS (\$n_{b} < 2~n_0\$) and a quark-matter EOS (\$n_{b} > 5~n_0\$), where \$n_{b}\$ and \$n_0\$ are the baryon number density and the nuclear saturation density, respectively. At the crossover densities (\$2~ n_0 < n_{b} < 5~ n_0\$) the QHC EOSs continuously soften, while remaining stiffer than hadronic and first-order phase transition EOSs, achieving the stiffness of strongly correlated quark matter. This enhanced stiffness leads to significantly longer lifetimes of the postmerger NS than that for a pure hadronic EOS. We find a dual nature of these EOSs such that their maximum chirp GW frequencies \$f_{max}\$ fall into the category of a soft EOS while the dominant peak frequencies (\$f_{peak}\$) of the postmerger stage fall in between that of a soft and stiff hadronic EOS. An observation of this kind of dual nature in the characteristic GW frequencies will provide crucial evidence for the existence of strongly interacting quark matter at the crossover densities for QCD. Share.
2023-03-31 20:10:58
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8599546551704407, "perplexity": 12292.050517496486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949678.39/warc/CC-MAIN-20230331175950-20230331205950-00099.warc.gz"}
https://www.hackmath.net/en/math-problem/8278
# Curiosity factor A blogger starts a new website, initially the number of the traffic is 293 due to their curiosity factor. The business owner estimated that the traffic will increase by 2,6% per week. What will be the number of it in week 5?. Result t =  333 #### Solution: $t_{0}=293 \ \\ q=2.6 \%=1 + \dfrac{ 2.6 }{ 100 }=1.026 \ \\ \ \\ t=t_{0} \cdot \ q^{ 5 }=293 \cdot \ 1.026^{ 5 } \doteq 333.1229 \doteq 333$ Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephasing the example. Thank you! Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...): Be the first to comment! Tips to related online calculators Do you want to convert time units like minutes to seconds? ## Next similar math problems: 1. Theorem prove We want to prove the sentence: If the natural number n is divisible by six, then n is divisible by three. From what assumption we started? 2. Summerjob The temporary workers planted new trees. Of the total number of 500 seedlings, they managed to plant 426. How many percents did they meet the daily planting limit? 3. The crime The crime rate of a certain city is increasing by exactly 7% each year. If there were 600 crimes in the year 1990 and the crime rate remains constant each year, determine the approximate number of crimes in the year 2025. 4. Five members Write first 5 members geometric sequence and determine whether it is increasing or decreasing: a1 = 3 q = -2 5. Six terms Find the first six terms of the sequence a1 = -3, an = 2 * an-1 6. Tenth member Calculate the tenth member of geometric sequence when given: a1=1/2 and q=2 7. GP - 8 items Determine the first eight members of a geometric progression if a9=512, q=2 8. Geometric progression 2 There is geometric sequence with a1=5.7 and quotient q=-2.5. Calculate a17. 9. A perineum A perineum string is 10% shorter than its original string. The first string is 24, what is the 9th string or term? 10. Geometric sequence 4 It is given geometric sequence a3 = 7 and a12 = 3. Calculate s23 (= sum of the first 23 members of the sequence). 11. Researchers Researchers ask 200 families whether or not they were the homeowner and how many cars they had. Their response was homeowner: 14 no car or one car, two or more cars 86, not homeowner: 38 no car or one car, two or more cars 62. What percent of the families 12. Profit gain If 5% more is gained by selling an article for Rs. 350 than by selling it for Rs. 340, the cost of the article is: 13. Five harvests In the seed company, they know that, out of 100 grains of a new variety, they get an average of 2000 grains after harvest. Approximately how many grains do they get out of 100 grains after five harvests? 14. Machine Price of the new machine is € 62000. Every year is depreciated 15% of residual value. What will be the value of the machine after 3 years? 15. The city At the end of 2010 the city had 248000 residents. The population increased by 2.5% each year. What is the population at the end of 2013? 16. The city 2 Today lives 167000 citizens in city. How many citizens can we expect in 11 years if their annual increase is 1%? 17. Quotient Determine the quotient and the second member of the geometric progression where a3=10, a1+a2=-1,6 a1-a2=2,4.
2020-04-01 07:20:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39048412442207336, "perplexity": 1920.1637491153008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505550.17/warc/CC-MAIN-20200401065031-20200401095031-00266.warc.gz"}
http://physics.stackexchange.com/tags/electronics/hot
# Tag Info ## Hot answers tagged electronics 7 Virtual ground refers to a circuit element not directly connected to ground, held at a reference voltage. This reference voltage need not be the same voltage as ground either. For example many op-amp circuits were originally designed for dual power supplies (say +12V and -12V) and could handle filtering or modification of a signal that was oscillating ... 7 "Ground" refers to a particular voltage, generally taken to be "zero", or the voltage of the earth. A "virtual ground" is a wire in a circuit whose voltage is held to be zero not because it is directly connected to the true ground, but instead because it is actively driven to that voltage typically by feedback mechanisms. Here's an example of a virtual ... 7 I'll take it step by step here. First I'll write the answer for the first few cases with circuit analysis. Then I'll apply a reduction to show the pattern that the problem arrives at. N=1 $$Z = R+R=2R$$ N=2 $$Z = R+\frac{1}{\frac{1}{R}+\frac{1}{R + R}} = R \left( 1+\frac{1}{1+\frac{1}{1 + 1}} \right)=\frac{5}{3} R$$ N=3 $$Z = ... 6 For any given n, you can work it out via the rules for series and parallel resistors, but to get a general formula, valid for all n, doesn't look easy to me. The best way I know of is to get a recursive relationship giving the resistance of an n-step ladder in terms of an (n-1)-step ladder. If I'm not mistaken, the n-step ladder can be thought of ... 6 You might find the Yahoo "home_transistor" group a useful resource. There's also a series of videos on YouTube by Jeri Ellsworth including some where she makes transistors. In one, in particular, she takes the crystal out of a germanium point-contact diode and turns the crystal into a point-contact transistor (much like the Bell Labs transistor.) There ... 5 Power consumption is about linear with frequency. The processor contains millions of complementary FETs as shown. When the input goes low the small capacitance gets charged and it will hold a small amount of energy. A same amount is lost during the charging. When the input goes high again the charge will be drained to ground and be lost. So with each ... 5 In experimental physics it is required to use electronics as instruments. You must know how they work(amplifiers, ADC's, MCA's etc) in order to fully understand and design an experiment. Usually, you don't need too much electronics(filters, amplifiers, transistors, digital electronics-boolean algebra) is more often than not, more than enough. You need ... 5 The key difference between a Zener diode and a normal diode is that the Zener diode has a low breakdown voltage - typically in the few volts range. The breakdown voltage is low because the heavy doping means the depletion layer is very thin, and even at a low voltage the field strength over this thin depletion layer is very high. With a conventional diode ... 4 I spent many years working in the video graphics design industry. One of our problems is the opposite problem, that is, we have limits on how much electromagnetic radiation our products can produce. Before we can ship any new design, we have to test it to make sure it meets the limits. (The tests are done on samples, not on every item shipped.) The limits ... 4 First of all, an electric spark is moving of the electric charge through the air. This is somehow curious, as air itself is an electric insulator and does not conduct charges. However, when the electric field in air exceeds certain value, air gets ionized and highly conductive, enabling the movement of the charge. The next step to understand is why we get ... 4 To add to the "linear with frequency" point, there is also an additional factor. As that "dynamic power" increases, the temperature of the die will increase and this will also increase the leakage current through the millions of transistors, which will cause more dissipation (termed "static power") There's a long Anandtech thread taking lots of values and ... 4 The A/W units refer to the current (in Ampère) produced per Watt of light incident on the photodiode. This current-production happens when the diode operates in the so-called photoconductive mode. Since your question wasn't on the inner workings of a photodiode, I won't expand on this, but Wikipedia contains some more information if desired. 4 From "The Transistor, A Semi-Conductor Triode", by J. Bardeen and W. H. Brattain, Phys Rev. 74(2), 230-231 (1948): "The device consists of three electrodes placed on a block of germanium as shown schematically in Fig. 1. Two, called the emitter and collector, are of the point-contact rectifier type and are placed in close proximity (separation ~0.005 to ... 3 Cosmic rays do have noticeable affect on electronics. The most prevalent effect is from memory bit flips (known as "soft errors"). The degree of significance of the effect depends on the application. A typical soft error rate for static RAM is in the region of 400 FITs/Mbit [1]. (Failures in time=failures per billion device hours) So if you have 1 Gb of ... 3 For the most part cosmic rays do nothing to consumer electronics. This is not to say that they can't flip bits or even damage elements, but the rate for such effects is very, very low. Radiation effects are routinely observed in electronics placed in accelerator experimental halls (where the radiation levels are at lethal-dose-in-minutes levels when the ... 3 From the specifications of your battery, that is 1.5V and 2700mAh, you can compute that there is 14580 Joules of energy stored in your batteries. The formula P=U\cdot I relates power to voltage and current. You battery specs give voltage and capacity (that is total charge stored). The former is in Volt, the latter in milli-Ampère-hour. The product is ... 3 I'm not sure why the resistor to ground from B is there, but you are incorrect at point D, the capacitor doesn't pass the DC level as you've indicated. It's a high-pass filter with C and R, so basically you need to move the DC-level on the Vd plot to ground - but keep the two transients like you've plotted them. That is, the curve should start at ground and ... 3 It's surprisingly difficult to find a nice simple description of how a transistor works. This description is from my old physics book - I suspect this may be oversimplified and I'm sure a complete description would run to lots of equations! Anyhow, this is what an NPN transistor looks like: so as you say, the collector-base junction is reverse biased and ... 3 FORWARD BIAS OF A P-N JUNCTION As the electrons move towards the positive terminal and the holes towards the negative, they will come to the depletion layer. This is a very narrow layer around the junction (i.e. around the interface of the two semiconductors.) In the depletion layer, electrons and holes can recombine, but the recombination rate is not high ... 3 The answer is that the whole circuit is full of electrons. I think you may be thinking along the lines of "if I switch a tap on, the water takes time v/L to reach the end of a hose of length L. So, if I switch a light on, the electrons must take analogous time the reach the light". Because the circuit is full of electrons, the energy source shoves the ... 3 Calling it a built-in voltage is something of a misnomer. People usually think of "voltage" as "what you measure with a voltmeter". So "voltage" is normally synonymous with "electrochemical potential of electrons" (in stat mech terminology) and with "difference in fermi level" (in semiconductor terminology). Under this definition, the built-in "voltage" is ... 2 In fact, Schottky Diodes have the lowest forward voltages. This means, that this is not a question of band gap "voltage" (this is a energy difference originally!) but of technology. Second are Germanium point contact diodes with gold wires. "Diodes" made from Galena maybe are very low too, but due to the wiggely properties I would not dare to write ... 2 I would start from power consumption. It's a little unclear to me if you're talking about electron use in all of the components or the CPU, but since 3 Volts was used, I'll take the discussion to be limited to CPU. Let's say 30 W of power consumption.$$I=\frac{P}{V}=\frac{30 W}{3 V} = 10 A$$Now, for the electron flux (denoted \phi), we can just ... 2 You have (1.5V)(2.7A)(3600s) = 14580 Watt-seconds = 14580 Joules of energy in the battery. BUT you can only get that much energy out if you extract it the same way the manufacturer did it during their optimized tests, which is typically at a much lower amperage than what you want to drive your motor. Also, remember that there are friction and other losses. ... 2 A flip-flop (bistable multivibrator) is, in simple terms, two transistors wired together in such a way that there are two stable conditions: (1) one transistor is full "on", while the other if full "off" (2) vice versa If the circuit happens to be in a state "in-between" these two states, it will, due to positive feedback, very rapidly move towards one of ... 2 If the weight is F in pounds, the coefficient of friction \mu and the speed of v in mph then the power W required to maintain this motion in Watts is$$ W \approx 2.0 \mu \cdot F \cdot v $$The coefficient of 2.0 comes from the conversion into metric units. To move 200 lbs at 10 mph with a coefficient of friction of \mu=0.4 is$$ W \approx ... 2 For a given circuit in a given technology, power increases at a rate proportional to $f^3$ or worse. You can see by looking at the graph in @Martin Thompson's answer that power is superlinear in frequency. $P=c V^2 f + P_S$ is correct, but only superficially so because $f$ and $P_S$ are functions of $V$ and $V_{th}$ (the threshold voltage.) In practice ... 2 Well, the biasing of the Base-Emitter (BE) and Collector-Emitter (CE) junctions is determined by the operation mode. Based on the "common emitter" in the title and your goal of determining input characteristics I am guessing that you are interested in the operation of a BJT transistor as a common-emitter small-signal amplifier. For this case the transistor ... 2 The complete explanation takes a few lectures - it is simply impossible to provide this amount of information as an answer. Very general explanation: Let's take a look at NMOS transistor (the one shown in the schematic attached to the question). It has 4 pins which you can force potentials on: Gate Bulk Source Drain In order to understand how the ... 2 You've correctly deduced how the circuit works. This particular configuration is better known as a bridge rectifier and is often packaged as a single component containing 4 diodes. There are two uses for this - rectifying alternating current as depicted in your question, and creating circuits that can handle direct current with reversed polarity (for ... Only top voted, non community-wiki answers of a minimum length are eligible
2014-04-18 21:22:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7953295111656189, "perplexity": 862.1380577519133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/charges-and-field.374356/
# Homework Help: Charges and field 1. Feb 1, 2010 ### bennyngreal A line charge of uniform charge density lambda forms a circle of radius b that lies in the x-y plane with its centre at the origin. a)Find the electric field E at the point (0,0,h). b)At what value of h will E in parta) be a maximum?What is this maximum? ans: a)E=人hb/(2e0(h^2+b^2)^1/2 >k(direction of K) b)h=b/2^1/2 ; Emax=人/(3^3/2*e0*b) 2. Feb 1, 2010 ### jdwood983 If you have the answers, what is the problem? 3. Feb 3, 2010 ### bennyngreal the method how to do it 4. Feb 3, 2010 ### jdwood983 I think using the equation for an electric field due to a line charge is a good place to start: $$\mathbf{E}(\mathbf{r})=\frac{1}{4\pi\varepsilon_0}\int_\mathcal{P}\frac{\lambda(\mathbf{r}')}{\mathcal{R}^2}\hat{\mathbf{\mathcal{R}}}dl'$$ Fortunately for you, the line charge is of a uniform density, so this can be reduced to $$\mathbf{E}(\mathbf{r})=\frac{\lambda}{4\pi\varepsilon_0}\int_\mathcal{P}\frac{1}{\mathcal{R}^2}\hat{\mathbf{\mathcal{R}}}dl'$$ What can you tell me about $\mathcal{R}$--the separation length--and $\hat{\mathbf{\mathcal{R}}}$--the separation vector?
2018-09-20 15:04:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46450889110565186, "perplexity": 2059.292082132351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156513.14/warc/CC-MAIN-20180920140359-20180920160759-00259.warc.gz"}
https://mathematica.stackexchange.com/questions/134107/mapping-valueq-over-a-list-of-symbols-gives-answer-different-from-direct-evaluat
# Mapping ValueQ over a list of symbols gives answer different from direct evaluation If I start with a symbol q that I've assigned a value to, q = 0; and one, x, that I haven't, I get different answers from ValueQ depending on how I call ValueQ. If I map ValueQ over a list, Map[ ValueQ, {q,x} ] Mathematica returns {False, False}. If I apply ValueQ directly, however, {ValueQ[q], ValueQ[x]} Mathematica returns a different answer, {True, False}. I've tried {Names["q"], Names["x"]} which returns {{q},{}} before evaluating the two ValueQ calls, and {{q},{x}} after the calls. There's obviously some sort of side effect from the call to ValueQ taking place behind the scenes. If I quit the kernel to start with a clean slate each time, however, and reverse the order in which I call the two versions (Map or no Map) I get the same results. This is troubling behavior. What's going on? I'm using Mathematica 11.0.1.0 on Linux x86 (64-bit). • Please, see Section "Properties & Relations" in Documentation on ValueQ. The effect I suppose is concerned with HoldAll attribute of ValueQ, so that ValueQ/@Unevaluated[{q, x}] gives right answer {True, False}. – Alx Dec 23 '16 at 4:57 Map is evaluating its arguments, so you end up with ValueQ[0] (False) instead of ValueQ[q]] (True); In[3]:= Map[ValueQ, Unevaluated[{q, x}]] Out[3]= {True, False} • That makes sense now. I've seen other functions with HoldAll attribute behave the same way. – Rodney Price Dec 24 '16 at 1:35 There're actually 2 issues in your question. One is the evaluation control problem, this has been explained by Brett Champion and Alx. The other issue is: Why {Names["q"], Names["x"]} returns {{"q"}, {}} before the ValueQ calls? The answer is: ValueQ isn't relevant at all, the key point is, the symbol x has appeared in the code Map[ValueQ, {q, x}] so it's created. A simpler way to reproduce the issue is: Remove[x] Names["x"] (* {} *) x; Names["x"] (* {"x"} *)
2020-10-31 07:34:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5235864520072937, "perplexity": 5363.401105786958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107916776.80/warc/CC-MAIN-20201031062721-20201031092721-00607.warc.gz"}
https://jeremykun.com/2013/04/15/probabilistic-bounds-a-primer/?like_comment=24907&_wpnonce=fa08618870
# Probabilistic Bounds — A Primer Probabilistic arguments are a key tool for the analysis of algorithms in machine learning theory and probability theory. They also assume a prominent role in the analysis of randomized and streaming algorithms, where one imposes a restriction on the amount of storage space an algorithm is allowed to use for its computations (usually sublinear in the size of the input). While a whole host of probabilistic arguments are used, one theorem in particular (or family of theorems) is ubiquitous: the Chernoff bound. In its simplest form, the Chernoff bound gives an exponential bound on the deviation of sums of random variables from their expected value. This is perhaps most important to algorithm analysis in the following mindset. Say we have a program whose output is a random variable $X$. Moreover suppose that the expected value of $X$ is the correct output of the algorithm. Then we can run the algorithm multiple times and take a median (or some sort of average) across all runs. The probability that the algorithm gives a wildly incorrect answer is the probability that more than half of the runs give values which are wildly far from their expected value. Chernoff’s bound ensures this will happen with small probability. So this post is dedicated to presenting the main versions of the Chernoff bound that are used in learning theory and randomized algorithms. Unfortunately the proof of the Chernoff bound in its full glory is beyond the scope of this blog. However, we will give short proofs of weaker, simpler bounds as a straightforward application of this blog’s previous work laying down the theory. If the reader has not yet intuited it, this post will rely heavily on the mathematical formalisms of probability theory. We will assume our reader is familiar with the material from our first probability theory primer, and it certainly wouldn’t hurt to have read our conditional probability theory primer, though we won’t use conditional probability directly. We will refrain from using measure-theoretic probability theory entirely (some day my colleagues in analysis will like me, but not today). ## Two Easy Bounds of Markov and Chebyshev The first bound we’ll investigate is almost trivial in nature, but comes in handy. Suppose we have a random variable $X$ which is non-negative (as a function). Markov’s inequality is the statement that, for any constant $a > 0$, $\displaystyle \textup{P}(X \geq a) \leq \frac{\textup{E}(X)}{a}$ In words, the probability that $X$ grows larger than some fixed constant is bounded by a quantity that is inversely proportional to the constant. The proof is quite simple. Let $\chi_a$ be the indicator random variable for the event that $X \geq a$ ($\chi_a = 1$ when $X \geq a$ and zero otherwise). As with all indicator random variables, the expected value of $\chi_a$ is the probability that the event happens (if this is mysterious, use the definition of expected value). So $\textup{E}(\chi_a) = \textup{P}(X \geq a)$, and linearity of expectation allows us to include a factor of $a$: $\textup{E}(a \chi_a) = a \textup{P}(X \geq a)$ The rest of the proof is simply the observation that $\textup{E}(a \chi_a) \leq \textup{E}(X)$. Indeed, as random variables we have the inequality $a \chi_a \leq X$. Whenever $X < a$, the value of $a \chi_a = 0$ while $X$ is nonnegative by definition. And whenever $a \leq X$,the value of $a \chi_a = a$ while $X$ is by assumption at least $a$. It follows that $\textup{E}(a \chi_a) \leq \textup{E}(X)$. This last point is a simple property of expectation we omitted from our first primer. It usually goes by monotonicity of expectation, and we prove it here. First, if $X \geq 0$ then $\textup{E}(X) \geq 0$ (this is trivial). Second, if $0 \leq X \leq Y$, then define a new random variable $Z = Y-X$. Since $Z \geq 0$ and using linearity of expectation, it must be that $\textup{E}(Z) = \textup{E}(Y) - \textup{E}(X) \geq 0$. Hence $\textup{E}(X) \leq \textup{E}(Y)$. Note that we do require that $X$ has a finite expected value for this argument to work, but if this is not the case then Markov’s inequality is nonsensical anyway. Markov’s inequality by itself is not particularly impressive or useful. For example, if $X$ is the number of heads in a hundred coin flips, Markov’s inequality ensures us that the probability of getting at least 99 heads is at most 50/99, which is about 1/2. Shocking. We know that the true probability is much closer to $2^{-100}$, so Markov’s inequality is a bust. However, it does give us a more useful bound as a corollary. This bound is known as Chebyshev’s inequality, and its use is sometimes referred to as the second moment method because it gives a bound based on the variance of a random variable (instead of the expected value, the “first moment”). The statement is as follows. Chebyshev’s Inequality: Let $X$ be a random variable with finite expected value and positive variance. Then we can bound the probability that $X$ deviates from its expected value by a quantity that is proportional to the variance of $X$. In particular, for any $\lambda > 0$, $\displaystyle \textup{P}(|X - \textup{E}(X)| \geq \lambda) \leq \frac{\textup{Var}(X)}{\lambda^2}$ And without any additional assumptions on $X$, this bound is sharp. Proof. The proof is a simple application of Markov’s inequality. Let $Y = (X - \textup{E}(X))^2$, so that $\textup{E}(Y) = \textup{Var}(X)$. Then by Markov’s inequality $\textup{P}(Y \geq \lambda^2) \leq \frac{\textup{E}(Y)}{\lambda^2}$ Since $Y$ is nonnegative $|X - \textup{E}(X)| = \sqrt(Y)$, and $\textup{P}(Y \geq \lambda^2) = \textup{P}(|X - \textup{E}(X)| \geq \lambda)$. The theorem is proved. $\square$ Chebyshev’s inequality shows up in so many different places (and usually in rather dry, technical bits), that it’s difficult to give a good example application.  Here is one that shows up somewhat often. Say $X$ is a nonnegative integer-valued random variable, and we want to argue about when $X = 0$ versus when $X > 0$, given that we know $\textup{E}(X)$. No matter how large $\textup{E}(X)$ is, it can still be possible that $\textup{P}(X = 0)$ is arbitrarily close to 1. As a colorful example, let $X$ is the number of alien lifeforms discovered in the next ten years. We might debate that $\textup{E}(X)$ can arbitrarily large: if some unexpected scientific and technological breakthroughs occur tomorrow, we could discover an unbounded number of lifeforms. On the other hand, we are very likely not to discover any, and probability theory allows for such a random variable to exist. If we know everything about $\textup{Var}(X)$, however, we can get more informed bounds. Theorem: If $\textup{E}(X) \neq 0$, then $\displaystyle \textup{P}(X = 0) \leq \frac{\textup{Var}(X)}{\textup{E}(X)^2}$. Proof. Simply choose $\lambda = \textup{E}(X)$ and apply Chebyshev’s inequality. $\displaystyle \textup{P}(X = 0) \leq \textup{P}(|X - \textup{E}(X)| \geq \textup{E}(X)) \leq \frac{\textup{Var}(X)}{\textup{E}(X)^2}$ The first inequality follows from the fact that the only time $X$ can ever be zero is when $|X - \textup{E}(X)| = \textup{E}(X)$, and $X=0$ only accounts for one such possibility. $\square$ This theorem says more. If we know that $\textup{Var}(X)$ is significantly smaller than $\textup{E}(X)^2$, then $X > 0$ is more certain to occur. More precisely, and more computationally minded, suppose we have a sequence of random variables $X_n$ so that $\textup{E}(X_n) \to \infty$ as $n \to \infty$. Then the theorem says that if $\textup{Var}(X_n) = o(\textup{E}(X_n)^2)$, then $\textup{P}(X_n > 0) \to 1$. Remembering one of our very early primers on asymptotic notation, $f = o(g)$ means that $f$ grows asymptotically slower than $g$, and in terms of this fraction $\textup{Var}(X) / \textup{E}(X)^2$, this means that the denominator dominates the fraction so that the whole thing tends to zero. ## The Chernoff Bound The Chernoff bound takes advantage of an additional hypothesis: our random variable is a sum of independent coin flips. We can use this to get exponential bounds on the deviation of the sum. More rigorously, Theorem: Let $X_1 , \dots, X_n$ be independent random $\left \{ 0,1 \right \}$-valued variables, and let $X = \sum X_i$. Suppose that $\mu = \textup{E}(X)$. Then the probability that $X$ deviates from $\mu$ by more than a factor of $\lambda > 0$ is bounded from above: $\displaystyle \textup{P}(X > (1+\lambda)\mu) \leq \frac{e^{\lambda \mu}}{(1+\lambda)^{(1+\lambda)\mu}}$ The proof is beyond the scope of this post, but we point the interested reader to these lecture notes. We can apply the Chernoff bound in an easy example. Say all $X_i$ are fair coin flips, and we’re interested in the probability of getting more than 3/4 of the coins heads. Here $\mu = n/2$ and $\lambda = 1/2$, so the probability is bounded from above by $\displaystyle \left ( \frac{e}{(3/2)^3} \right )^{n/4} \approx \frac{1}{5^n}$ So as the number of coin flips grows, the probability of seeing such an occurrence diminishes extremely quickly to zero. This is important because if we want to test to see if, say, the coins are biased toward flipping heads, we can simply run an experiment with $n$ sufficiently large. If we observe that more than 3/4 of the flips give heads, then we proclaim the coins are biased and we can be assured we are correct with high probability. Of course, after seeing 3/4 of more heads we’d be really confident that the coin is biased. A more realistic approach is to define some $\varepsilon$ that is small enough so as to say, “if some event occurs whose probability is smaller than $\varepsilon$, then I call shenanigans.” Then decide how many coins and what bound one would need to make the bad event have probability approximately $\varepsilon$. Finding this balance is one of the more difficult aspects of probabilistic algorithms, and as we’ll see later all of these quantities are left as variables and the correct values are discovered in the course of the proof. ## Chernoff-Hoeffding Inequality The Hoeffding inequality (named after the Finnish statistician, Wassily Høffding) is a variant of the Chernoff bound, but often the bounds are collectively known as Chernoff-Hoeffding inequalities. The form that Hoeffding is known for can be thought of as a simplification and a slight generalization of Chernoff’s bound above. Theorem: Let $X_1, \dots, X_n$ be independent random variables whose values are within some range $[a,b]$. Call $\mu_i = \textup{E}(X_i)$, $X = \sum_i X_i$, and $\mu = \textup{E}(X) = \sum_i \mu_i$. Then for all $t > 0$, $\displaystyle \textup{P}(|X - \mu| > t) \leq 2e^{-2t^2 / n(b-a)^2}$ For example, if we are interested in the sum of $n$ rolls of a fair six-sided die, then the probability that we deviate from $(7/2)n$ by more than $5 \sqrt{n \log n}$ is bounded by $2e^{(-2 \log n)} = 2/n^2$. Supposing we want to know how many rolls we need to guarantee with probability 0.01 that we don’t deviate too much, we just do the algebra: $2n^{-2} < 0.01$ $n^2 > 200$ $n > \sqrt{200} \approx 14$ So with 15 rolls we can be confident that the sum of the rolls will lie between 20 and 85. It’s not the best possible bound we could come up with, because we’re completely ignoring the known structure on dice rolls (that they follow a uniform distribution!). The benefit is that it’s a quick and easy bound that works for any kind of random variable with that expected value. Another version of this theorem concerns the average of the $X_i$, and is only a minor modification of the above. Theorem: If $X_1, \dots, X_n$ are as above, and $X = \frac{1}{n} \sum_i X_i$, with $\mu = \frac{1}{n}(\sum_i \mu_i)$, then for all $t > 0$, we get the following bound $\displaystyle \textup{P}(|X - \mu| > t) \leq 2e^{-2nt^2/(b-a)^2}$ The only difference here is the extra factor of $n$ in the exponent. So the deviation is exponential both in the amount of deviation ($t^2$), and in the number of trials. This theorem comes up very often in learning theory, in particular to prove Boosting works. Mathematicians will joke about how all theorems in learning theory are just applications of Chernoff-Hoeffding-type bounds. We’ll of course be seeing it again as we investigate boosting and the PAC-learning model in future posts, so we’ll see the theorems applied to their fullest extent then. Until next time! ## 12 thoughts on “Probabilistic Bounds — A Primer” 1. albert Hands down best place to learn new math on the internet. Liked by 1 person 2. kdros hello – enjoying your write up, but I’m having a bit of trouble following the markov inequality proof, where we have a*X_a <= X. Why is it that when a < X, the left side of the equation is zero, and the right side is non-negative? (ie, wouldn't a < X imply X_a == 0?) Like • The left side is zero because when a < X, by definition the indicator random variable $\chi_a = 0$ (as you said). So the LHS of that inequality is a * 0 = 0. On the other hand, X was assumed to be a non-negative random variable, so it is at least zero no matter what. Like • Levi There is a typo there. You meant to write Whenever X < a, the former is equal to zero while the latter is nonnegative. Whenever a < X, the former is equal to zero while the latter is nonnegative. Like 3. The Chernoff bound was exactly what I needed for something and I’d never heard of it before. Thanks! Like 4. I see a minor problem: It looks like the backslashes have disappeared from the TeX formulas. Like • What a * bizarre* bug. Luckily old revisions have all of the backslashes in them. I blame the fact that WordPress is written in PHP (in spite of my obvious biases). Like 5. Synaptic_Sage Um. Perhaps I’ve missed something, but, in the paragraph on monotonicity of expectation, shouldn’t it say, $E[Y] \ge E[X]$ instead of $E[X] \ge E[Y]$? Like • good catch Like 6. Synaptic_Sage Wow, that was fast. Anyhow, excellent primer, thanks! Like 7. Levi There are two typos in the following paragraph: “Whenever a X instead of a < X and a \leq X instead of a \geq X. Like
2019-12-12 20:02:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 115, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9360036253929138, "perplexity": 206.3835673757419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540545146.75/warc/CC-MAIN-20191212181310-20191212205310-00327.warc.gz"}
https://www.jneurosci.org/highwire/markup/585580/expansion?width=1000&height=500&iframe=true&postprocessors=highwire_tables%2Chighwire_reclass%2Chighwire_figures%2Chighwire_math%2Chighwire_inline_linked_media%2Chighwire_embed
Table 1. Prior reports of the hemisphere controlling head movements in humans IpsilateralContralateralBilateral Beevor (1909)aHanajima et al. (1998)bPenfield and Rasmussen (1950)b Balagura and Katz (1980)aKang et al. (2011)aBenecke et al. (1988)b Willoughby and Anderson (1984)aGandevia and Applegate (1988)b Berardelli et al. (1991)b Mastaglia et al. (1986)aOdergren and Rimpilainen (1996)b Manon-Espaillat and Ruff (1988)aOdergren et al. (1997)b Thompson et al. (1997)b Anagnostou et al. (2011)aDeToledo and Dow (1998)a • The hemisphere controlling movements was taken from the results reported by each study, because the interpretations sometimes did not match the actual results provided. • a Studies of stroke or epilepsy cases. • b Studies of electrical or magnetic stimulation of the precentral gyrus.
2020-08-08 18:08:06
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8428109288215637, "perplexity": 13465.118902055325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738015.38/warc/CC-MAIN-20200808165417-20200808195417-00277.warc.gz"}
https://fossnik.gitlab.io/portfolio/tags/react/
## Ramda.js Ramda is a library designed specifically for a functional programming style. It makes it easy to create functional programming pipelines, and never mutates user data. JavaScript’s Array.prototype class has some functionally-flavored methods such as map, reduce, and filter. However, these only operate on arrays. Ramda is described as more generic, being that it can work with strings, objects, and user-defined data types. There are other convenience libraries that allow functional operations on objects, such as lodash, but Ramda is much more functional by design; explicitly eliminating the possibility side-effects, and facilitating a divergence from the very imperative style of JavaScript to a more declarative model of programming. ## Composition with Functions So why would I want to use Ramda when lodash is easier to understand? Ramda allows functions to be used as first-order components. ## Cypress E2E Integration Testing The cypress testing framework is an end-to-end integration test framework for React.js, which is quickly surpassing alternatives such as Jest, Mocha, and Enzyme, Chai, and others. Unit testing had long been an area of React.js where a stark lack of consensus existed. Myriad different frameworks had been available, all with their own selling points. Many developers won over by Cypress are now abandoning this patchwork of other unit testing tools in droves. ## Using React Hooks API ### An alternative to Redux The hooks api is a game-changing new framework introduced by the Facebook team in React 16.8. It represents a radical foray into a totally new paradigm of React.js development. # Notes on Hooks Hooks are like primitive composable components for React. With hooks, instead of using ES2015 classes and complex third party frameworks such as Redux, we can use simple function() expressions throughout. ## Functional Programs They are more functional - essentially they trade the complexity of how classes work for the complexity of how closures work. Hooks are more consistent with functional programming. ## Side-Effects I had previously associated side-effects with a negative connotation, but in this context, side-effects are very much intentional. A side-effect, or simply effect is employed anytime we perform a fetch operation or write something out to the console in a class component. ## UseState One of the most exciting parts of the Hooks API is the global state functionality. With useState, we have native functionality in React that might finally make it possible to avoid Redux entirely. At long last, React developers can make use of global state without cluttering up our applications with loads of messy connect functions and the 10 pounds of actions, reducers, and lifecycle triggers code it takes to get an ounce of functionality in React. ## So what is Typing anyway? To understand what TypeScript is, first you must be familiar with the concept of typing. JavaScript developers will probably be aware of the difference between Equality (==) and Identity (===) Operators. These operators differentiate between entities like the string "123" and the integer 123. This is because numbers and characters are fundamentally stored differently in memory. In JavaScript, all variables are declared using the keyword var (or const/let). However, this is not the case in strongly typed languages, such as Java and C, which CANNOT declare any variable without an explicitly defined data type. #### JavaScript is ‘interpreted’, not ‘compiled’ A data type is a formalization of the way in which information is represented in a computer system, such as the case with Integers and Strings. Languages such as Java and C are referred to as strongly typed languages, because they depend on these types being defined at compile time, before the high-level language is converted into machine code or byte-code. JavaScript is an interpreted language (it is not compiled) It uses dynamic typing because, as the name implies, JavaScript was created with script-building in mind. Scripts generally automate the execution of code that is exterior to the script itself (such as compiled programs). As JavaScript development has become increasingly complex, there has arisen greater need for more explicit typing. This need was partially met by tools such as PropTypes. However, TypeScript, which was created by Microsoft, is especially helpful for validating the consistency of user-designed data objects. ## Type Checking Type checking is the idea that the shape of a data object remains consistent as it is passed between different parts of a program. TypeScript extends the JavaScript language, making it possible to explicitly define the shape of objects when they are declared, or passed into / returned from a function. # What is Jest? Jest is a popular Node.js unit testing framework, frequently used with React. Like React itself, Jest is used internally by Facebook as a primary development tool. Jest allows for tests to be run in parallel, greatly increasing efficiency. ## Why use tests at all? Even after you complete an app and get everything finally working, this is always the possibility of future modifications introducing new bugs. Unit testing simply makes it easier to guard against this scenario. ## How do I install Jest? Jest is included with React (and React Native). Otherwise, it can be installed using your package manager of choice npm install --save-dev jest
2020-01-26 11:23:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21678680181503296, "perplexity": 2901.6005707847207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251688806.91/warc/CC-MAIN-20200126104828-20200126134828-00082.warc.gz"}
http://lists.w3.org/Archives/Public/w3c-math-erb/msg00454.html
# Re: html markup of previous sample The following is how i would mark up the example posted by Ron Whitney using MINSE. Some notes about Ron's markup and what is done here are included first (where i refer to Ron's points, i mean the ones in the more recent HTML posting unless otherwise specified). ------------------------------------------------------- notes on example 1. Ron mentioned his use of <mo> </mo> to indicate that the entity (&nu; \cdot \nabla) is an operator (Ron's point 4). With MINSE, i think this won't be necessary. The "oper" compound indicates operator application and implies that its first argument is an operator, much like "apply" means function application, while implying that the first argument is a function. "oper" typically may be rendered just by placing its two arguments side by side. This lets you write things like "(D^2 + 3D + 1)f" where the D is the derivative operator. The convention for parentheses which is described in Bruce's current proposal makes it impossible to do this without special provisions like "&FunctionApplication;", as i previously noted. 2. Note that what Ron has as (&nu; \cdot \nabla) i marked up as (?nu? .dot 'Grad), assuming that the \cdot meant "dot product" and that the \nabla meant "gradient". Similarly, i have assumed that &otimes; means a tensor product. If, as i suspect, i have chosen the wrong meanings because i have no expertise in the subject of this paper, these can be replaced with the appropriate compound names. 3. Ron also mentioned his use of &FunctionApplication; to indicate the application of the new operators "div" and "curl", and possibly to indicate the application of "max" and "lim" (Ron's points 5 and 9). can just add context definitions for "div" and "curl". Here the name i chose reflects my assumption that "div" means "divergence". 4. In many situations multiple comparisons are written in a chain, which happens here under the "max" compounds with "0 <= i <= T". How does your notation deal with this and the issue of operator associativity? I ran up against this problem of serial and chained operators and solved it before the prototype went up on June 2 by placing all the relational operators on the same precedence level with the special associativity "chain", allowing the definition of a "comparison" compound which can infer that "0 <= i <= T" means "(0 <= i) and (i <= T)". 5. Ron mentioned that "R" is used for two different meanings (his point 1 in the TeX posting). There is no such ambiguity if you use "'Real" to mean the reals and "R" to mean a named variable. 6. With regard to Ron's point 2, i didn't know the meaning of D&nu;/Dt, but if i knew that it was a special kind of derivative i could have defined a new compound to take care of it. For now i have just treated the D as the name of an operator. 7. A new compound "funcdep" is defined to indicate the "functional dependence" which Ron mentioned (his point 3 in the TeX posting). I don't know what it means to place "t" as a prescript in front of the pair "?nu?;1, ?nu?;2", so i was not able to choose a more meaningful compound name. The new compound is attained with a context definition. 8. As with Ron's example, alignment is left for later (Ron's point 3). 9. Ron marked up "absolute value" using &leftvert; and &rightvert; (Ron's point 7). How is the grouping ability of these symbols declared? I used an absolute-value compound because i thought it was more appropriate here. The ability to write things this way is only a convenience, but a provision for bracketing constructs shouldn't be hard to add to MINSE's notation parser. This would let you define new bracketing constructs easily in the notation definition, so that you could write [0, 'inf) if you wanted to. 10. The ad hoc matrix multiplication operator (Ron's point 7 in the TeX posting) is again handled by simply making a new compound. If it were used frequently, the author could just add a new binary operator to the notation definition and the precedence and associativity would be defined there. In Ron's markup, i'm not at all sure what the consequences of an operator with undefined precedence would be (Ron's point 11). 11. On the very last line of Ron's markup we've got &sum;_{i,j}a_{ij}b_{ij} This doesn't seem to make it clear that {ij} is to be a double subscript as opposed to an identifier "ij" or something else. With MINSE i wrote "a;i;j" because the index operator ";" is left-associative and can indicate that "a" is indexed with "i" and then indexed with "j". Alternatively, "a" could be indexed with the pair "i,j" using the comma operator. Notice also that here Ron uses the underscore to represent both the conditions on the summation and the indices on the variables. I'm not certain that this distinction can always be unambiguously made; an index operator makes this much clearer. 12. Superscripts abound in this example, and as Ron noted they have the various meanings of product and index (Ron's point 5 in the TeX posting). In the cases where they indicate an index, as in L^1 and C^\infty, they are so distinguished with the compound "upidx". The lack of a more meaningful name is again due to my lack of experience. 13. Parens are also used for all sorts of meanings in this example (Ron's point 6 in the TeX posting), and i think it's impossible to tell the difference between the interval "(0,&infinity;)" and the pair "(&nu;_1,&nu;_2)" the way Ron has it marked up. It is also very unclear when the parens indicate function application, as in "&nu;^&epsilon;(x,t)". This is all distinguished in the MINSE markup using different compounds. Function application is the only case implied just by the parentheses; the compound "openopen" is used for writing an interval open at both ends, alleviating this ambiguity. 14. The excerpt doesn't reach the end of the HTML ordered list <ol>. 15. Ron wrote: > BTW, in using the convenient screen display form for $elements, > I've segregated these elements so that the text of the paragraph after >$ starts on a new line. I take it this adds significant space in an > SGML document. I'm not sure i understand what is meant here -- adding any kind of extra whitespace to an HTML document doesn't change the spacing, except in PRE elements. The layout is handled in the following markup using ordinary HTML elements, treating the SE elements as part of the text flow. I used paragraphs because there is currently no means of setting out figures in HTML. Upon seeing that figures would be a useful layout idiom for more than just images, i proposed something for it a while ago, but not much happened to it. See http://www.acl.lanl.gov/HTML_WG/html-wg-96q1.messages/0598.html for details. The closest thing we have now might be MS Internet Explorer's "floating frame", but that is by no means standard. Excerpt from the above URL: > This gives us an element for surrounding and positioning *anything* > in a box with an optional caption, credit, or overlay. [...] --------------------------------------- suggested HTML markup of example ... <body> ... <p>The Euler equations for an inviscid incompressible 2-D fluid flow are given by <p align=center> <se>'quot(D .oper ?nu?,D .oper t) = -'grad(p)</se>, <se>x .eltof 'Real^2</se>, <se>t > 0</se> <br> <se>'diverg(?nu?) = 0</se>, <se>?nu?(x,0) = ?nu?;0(x)</se> </p> <p>where <se>?nu? = 'funcdep('pair(?nu?;1,?nu;2),t)</se> is the fluid velocity, <se>p</se> is the scalar pressure, <se>'quot(D .oper ?nu?,D .oper t) = 'partial(?nu?,t) + 'oper(?nu? .dot 'Grad, ?nu?)</se>, and <se>?nu?;0</se> is an initial incompressible velocity field, <i class=latin>i.e.</i> <se>'diverg(?nu?;0) = 0</se>. <p>In this paper, we study the detailed limiting behaviour of approximate solution sequences for 2-D Euler with vortex sheet initial data. A sequence of smooth velocity fields <se>(?nu? .upidx ?epsilon?)(x,t)</se> is an <dfn>approximate solution sequence</dfn> for 2-D Euler provided that the <se>?nu?</se> is incompressible, <i class=latin>i.e.</i> <se>'diverg(?nu?) = 0</se>, and satisfies the following properties: <ol> <li>The velocity fields <se>?nu? .upidx ?epsilon?</se> have uniformly bounded local kinetic energy, <i class=latin>i.e.</i> <p align=center> <se>'max('integ('abs((?nu? .upidx ?epsilon?)(x,t))^2,x,'abs(x) <= R), 0 <= i <= T) <= C</se> </p> <p>for any <se>R, T > 0</se>. <li>The corresponding vorticity, <se>?omega?^2 = 'curl(?nu? .upidx ?epsilon?)</se>, is uniformly bounded in <se>L .upidx 1</se>, <i class=latin>i.e.</i> <p align=center> <se>'max('integ('abs((?omega? .upidx ?epsilon?)(x,t)),x), 0 <= i <= T) <= C</se> </p> <p>for any <se>T > 0</se>. <li>The vortex field <se>?nu? .upidx ?epsilon?</se> is weakly consistent with 2-D Euler, i.e. for all smooth test functions, <se>?phi? .eltof (C .upidx 'inf)('Real^2 .cartprod 'openopen(0,'inf))</se> with <se>'diverg(?phi?) = 0</se>, <p align=center> <se>'lim('integ('integ(?phi?;t .dot ?nu?^2 + ?nu? .upidx ?epsilon? .tensorprod ?nu? .upidx ?epsilon?), x), t), ?epsilon? .approach 0) = 0</se>. </p> <p>Here <se>?nu? .tensorprod ?nu? = 'pair(v;i,v;j)</se>, and <se>'matprod(A,B)</se> denotes the matrix product <se>'Sum(a;i;j,b;i;j,(i,j))</se>. We remark in passing... ------------------------------------------------------------- discussion On the whole, i think i'd have to say that the proliferation of homonyms in Ron's example makes me rather uncomfortable. Parens, superscripts, and juxtaposition have so many different meanings in the HTML markup he posted that -- even if it were possible for mapping rules to choose which meaning is intended -- i don't think i would just trust the rules to pick the right one every time, and guessing exactly how to appease them by manipulating the notation would quickly get troublesome. I would much prefer getting into the habit of consistently saying what i mean instead of hoping that it gets interpreted right. Moreover, what if authors later want to define new meanings for juxtaposition or parentheses? There seems to be no provision for this because the juxtaposition itself is used to figure out the meaning. I think it makes more sense to go the other way, i.e. from the meaning to the notation instead of guessing the meaning from things like juxtaposition and parentheses.
2015-12-01 20:15:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8921215534210205, "perplexity": 5835.8868234739}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398471436.90/warc/CC-MAIN-20151124205431-00312-ip-10-71-132-137.ec2.internal.warc.gz"}
https://mmas.github.io/optimization-scipy
# Optimization methods in Scipy numerical-analysis optimization python numpy scipy Mathematical optimization is the selection of the best input in a function to compute the required value. In the case we are going to see, we'll try to find the best input arguments to obtain the minimum value of a real function, called in this case, cost function. This is called minimization. In the next examples, the functions scipy.optimize.minimize_scalar and scipy.optimize.minimize will be used. The examples can be done using other Scipy functions like scipy.optimize.brent or scipy.optimize.fmin_{method_name}, however, Scipy recommends to use the minimize and minimize_scalar interface instead of these specific interfaces. Finding a global minimum can be a hard task. Scipy provides different stochastic methods to do that, but it won't be covered in this article. Let's start: import numpy as np from scipy import optimize, special import matplotlib.pyplot as plt ## 1D optimization For the next examples we are going to use the Bessel function of the first kind of order 0, here represented in the interval (0,10]. x = np.linspace(0, 10, 500) y = special.j0(x) plt.plot(x, y) plt.show() Golden section method minimizes a unimodal function by narrowing the range in the extreme values: optimize.minimize_scalar(special.j0, method='golden') fun: -0.40275939570255315 x: 3.8317059773846487 nfev: 43 Brent's method is a more complex algorithm combination of other root-finding algorithms: optimize.minimize_scalar(special.j0, method='brent') fun: -0.40275939570255309 nfev: 10 nit: 9 x: 3.8317059554863437 plt.plot(x, y) plt.axvline(3.8317059554863437, linestyle='--', color='k') plt.show() As we can see in this example, Brent method minimizes the function in less objective function evaluations (key nfev) than the Golden section method. ## Multivariate optimization The Rosenbrock function is widely used for tests performance in optimization algorithms. The Rosenbrock function has a parabolic-shaped valley with the global minimum in it. The function definition is: $f\left(x,y\right)=\left(a-x{\right)}^{2}+b\left(y-{x}^{2}{\right)}^{2}$ It has a global minimum at $\left(x,y\right)=\left(a,{a}^{2}\right)$, where $f\left(x,y\right)=0$. Scipy provides a multidimensional Rosenbrock function, a variant defined as: $f\left(X\right)=\sum _{i=1}^{N-1}100\left({x}_{i+1}-{x}_{i}^{2}{\right)}^{2}+\left(1-{x}_{i}{\right)}^{2}\phantom{\rule{1em}{0ex}}where\phantom{\rule{1em}{0ex}}X=\left[{x}_{i},...,{x}_{N}\right]\in \mathbb{R}$ x, y = np.mgrid[-2:2:100j, -2:2:100j] plt.pcolor(x, y, optimize.rosen([x, y])) plt.plot(1, 1, 'xw') plt.colorbar() plt.axis([-2, 2, -2, 2]) plt.title('Rosenbrock function') plt.xlabel('x') plt.ylabel('y') plt.show() For the next examples, we are going to use it as a 2D function, with a global minimum found at (1,1). Initial guess: x0 = (0., 0.) Nelder-Mead and Powell methods are used to minimize functions without the knowledge of the derivative of the function, or gradient. Nelder-Mead method, also known as downhill simplex is an heuristics method to converge in a non-stationary point: optimize.minimize(optimize.rosen, x0, method='Nelder-Mead') status: 0 nfev: 146 success: True fun: 3.6861769151759075e-10 x: array([ 1.00000439, 1.00001064]) message: 'Optimization terminated successfully.' nit: 79 Powel's conjugate direction method is an algorithm where the minimization is achieved by the displacement of vectors by a search for a lower point: optimize.minimize(optimize.rosen, x0, method='Powell') status: 0 success: True direc: array([[ 1.54785007e-02, 3.24539372e-02], [ 1.33646191e-06, 2.53924992e-06]]) nfev: 462 fun: 1.9721522630525295e-31 x: array([ 1., 1.]) message: 'Optimization terminated successfully.' nit: 16 As we can see in this case, Powell's method finds the minimum in less steps (iterations, key nit), but with more function evaluations than Nelder-Mead method. Modifying the initial direction of the vectors we may get a better result with less function evaluations. Let's try setting and initial direction of (1,0) from our initial guess, (0,0): optimize.minimize( optimize.rosen, x0, method='Powell', options={'direc': (1, 0)}) status: 0 success: True direc: array([ 1., 0.]) nfev: 52 fun: 0.0 x: array([ 1., 1.]) message: 'Optimization terminated successfully.' nit: 2 We'll find the minimum in considerably less function evaluations of the different points. Sometimes we can use gradient methods, like BFGS, without knowing the gradient: optimize.minimize(optimize.rosen, x0, method='BFGS') status: 2 success: False njev: 32 nfev: 140 hess_inv: array([[ 0.48552643, 0.96994585], [ 0.96994585, 1.94259477]]) fun: 1.9281078336062298e-11 x: array([ 0.99999561, 0.99999125]) message: 'Desired error not necessarily achieved due to precision loss.' jac: array([ -1.07088609e-05, 5.44565446e-06]) In this case, we haven't achieved the optimization. We will see more about gradient-based minimization in the next section. These methods will need the derivatives of the cost function, in the case of the Rosenbrock function, the derivative is provided by Scipy, anyway, here's the simple calculation in Maxima: rosen: (1-x)^2 + 100*(y-x^2)^2; $100\left(y-{x}^{2}{\right)}^{2}+\left(1-x{\right)}^{2}$ rosen_d: [diff(rosen, x), diff(rosen, y)]; $\left[-400x\left(y-{x}^{2}\right)-2\left(1-x\right),200\left(y-{x}^{2}\right)\right]$ Conjugate gradient method is similar to a simpler gradient descent but it uses a conjugate vector and in each iteration the vector moves in a direction conjugate to the all previous steps: optimize.minimize(optimize.rosen, x0, method='CG', jac=optimize.rosen_der) status: 0 success: True njev: 33 nfev: 33 fun: 6.166632297117924e-11 x: array([ 0.99999215, 0.99998428]) message: 'Optimization terminated successfully.' jac: array([ -7.17661535e-06, -4.26162510e-06]) BFGS calculates an approximation of the hessian of the objective function in each iteration, for that reason it is a Quasi-Newton method (more on Newton method later): optimize.minimize(optimize.rosen, x0, method='BFGS', jac=optimize.rosen_der) status: 0 success: True njev: 26 nfev: 26 hess_inv: array([[ 0.48549325, 0.96988373], [ 0.96988373, 1.94247917]]) fun: 1.1678517168020135e-16 x: array([ 1.00000001, 1.00000002]) message: 'Optimization terminated successfully.' jac: array([ -2.42849398e-07, 1.30055966e-07]) BFGS achieves the optimization on less evaluations of the cost and jacobian function than the Conjugate gradient method, however the calculation of the hessian can be more expensive than the product of matrices and vectors used in the Conjugate gradient. L-BFGS is a low-memory aproximation of BFGS. Concretely, the Scipy implementation is L-BFGS-B, which can handle box constraints using the bounds argument: optimize.minimize(optimize.rosen, x0, method='L-BFGS-B', jac=optimize.rosen_der) status: 0 success: True nfev: 25 fun: 1.0433892998247468e-13 x: array([ 0.99999971, 0.9999994 ]) jac: array([ 4.74035377e-06, -2.66444085e-06]) nit: 21 ### Hessian-based optimization Newton's method uses the first and the second derivative (jacobian and hessian) of the objective function in each iteration. This is the hessian matrix of the Rosenbrock function, calculated with Maxima: rosen_d2: [[diff(rosen_d[1], x), diff(rosen_d[1], y)], [diff(rosen_d[2], x), diff(rosen_d[2], y)]]; $\left[\left[-400\left(y-{x}^{2}\right)+800{x}^{2}+2,-400x\right],\left[-400x,200\right]\right]$ Minimization Rosenbrock function using Newton's method with jacobian and hessian matrix: optimize.minimize(optimize.rosen, x0, method='Newton-CG', jac=optimize.rosen_der, hess=optimize.rosen_hess) status: 0 success: True njev: 85 nfev: 53 fun: 1.4946283615394615e-09 x: array([ 0.99996137, 0.99992259]) message: 'Optimization terminated successfully.' nhev: 33 jac: array([ 0.01269975, -0.00637599]) Here we don't have a lower evaluations than the previous methods but we can use Newton's method for twice-differentiable quadratic functions to get faster results.
2018-11-19 20:19:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5655474066734314, "perplexity": 3914.1577004467754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746110.52/warc/CC-MAIN-20181119192035-20181119214035-00284.warc.gz"}
http://www.physicsforums.com/showthread.php?s=5b0f8ea30d66c50e581c76b0a78e3ccb&p=4200808
## Error in binary arithmetics Hello everyone ! I got a problem, I can't figure out why when I compute $A+B$ and $2\times (\frac{A}{2}+\frac{B}{2})$ the result is sometimes different. Can anyone explain why the results of the two operations are different ? Edit : A and B are coded in binary of course and we compute a binary addition. PhysOrg.com engineering news on PhysOrg.com >> NASA: Austin, calling Austin. 3-D pizzas to go>> Research suggests modular design competence can benefit new product development>> Army ground combat systems adopts tool for choosing future warfighting vehicles Recognitions: Gold Member Science Advisor Are you using integer or floating point ? My guess is former , and one of the numerators is an odd number. Integers I forgot to mention sorry. Recognitions: Gold Member ## Error in binary arithmetics Well there ya go. Got Basic? FOR I = 1 , 10 , 1 PRINT I, INT( I/2) NEXT I END Quote by jim hardy Well there ya go. Got Basic? FOR I = 1 , 10 , 1 PRINT I, INT( I/2) NEXT I END I see thank you :) . Recognitions:
2013-05-23 06:08:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4383543133735657, "perplexity": 4032.889939089249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702900179/warc/CC-MAIN-20130516111500-00017-ip-10-60-113-184.ec2.internal.warc.gz"}
http://cs.masseffect.wikia.com/wiki/Planets
## FANDOM 190 Pages Milky Way Clusters Systems Planets Moons Asteroids Space stations Starships Missions Assignments Locations Planet Index Planets are massive celestial bodies that orbit a star or stellar remnant. They are of many different sizes and compositions. Planets may have natural satellites (moons) and may or may not be inhabited. Every planet has a unique blurb, but most of them cannot be interacted with. Travel from planet to planet requires use of the Galaxy Map. Within each solar system, there will usually only be one location to land / dock. Most often this will be a planet, but it may be a moon or starship instead. Many planets can be scanned from space for minerals or other collectibles, but not landed upon. There are even asteroids containing valuable metals or other objects hidden in the belts. Main article: Mass Effect System Guide For an alphabetical list of planets see Category:Planets (or the Planet Index). ## Properties Editovat Mass Effect provides the radius of planets (in kilometers), and for planets with a solid surface it further provides the surface gravity (as a proportion of Earth's surface gravity). From these, it is possible to derive the mass (and density) for any planet with a surface gravity. Isaac Newton's gravitational constant is 6.67×10-11 m3kg-1s-2 which would be 6.67×10-20 km3kg-1s-2 when dimensioned for kilometers. Earth's surface gravity, g, is 9.81 ms-2, which is 9.81×10-3 kms-2. According to Newton's Law of Universal Gravitation, the mass of any body, M, is ar2G-1, where a is gravity, r the radius, and G the gravitational constant. Since the gravity of a planetary body in Mass Effect, s, is given as a proportion of Earth's, a = sg. Therefore, given values for s (dimensionless) and r (in km), and given that the mass of the Earth is 5.9736×1024 kg, the mass of a planet will be: $M = \dfrac{a r^2}{G} = \dfrac{s g r^2}{G} = \dfrac{s \cdot 9.81 \times 10^{-3} \cdot r^2}{6.674 \times 10^{-20}} \; \mbox{kg} \; = \; 1.46 \times 10^{17} \cdot s \cdot r^2\;\mbox{kg} \; = \; 2.44 \times 10^{-8} \cdot s \cdot r^2\;\mbox{Earth Masses}$
2018-09-22 06:05:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7864000201225281, "perplexity": 1642.4751449493651}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158045.57/warc/CC-MAIN-20180922044853-20180922065253-00067.warc.gz"}
https://homework.study.com/explanation/what-frequency-of-sound-travels-in-air-at-20-degree-c-has-a-wavelength-equal-to-1-7m.html
# What frequency of sound travels in air at 20 degree C has a wavelength equal to 1.7m? ## Question: What frequency of sound travels in air at 20 degree C has a wavelength equal to 1.7m? ## The Wave Velocity Formula: The velocity of the wave indicates how fast the wave is moving between two positions. For longitudinal wave, a number of cycles performed by the wave per second is called a frequency of the given wave. The distance between any successive rarefaction or compression is called a wavelength of the wave. For the constant-frequency wave, the wavelength of the wave is directly proportional to the wave velocity.
2023-03-22 00:08:03
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.828231155872345, "perplexity": 382.27172380065815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943747.51/warc/CC-MAIN-20230321225117-20230322015117-00109.warc.gz"}
https://www.physicsforums.com/threads/purdue-vs-urbana-champaign-undergrduate-computer-engineering.216353/
# Purdue vs Urbana Champaign? : Undergrduate Computer Engineering 1. Feb 19, 2008 ### ishanwahi Hi, Im from India having not much knowledge of the American Universities. I have recvd admit confirmations from both Purdue & Univ of Illinois at Urbana Champaign for their 4 year undergraduate programme in Computer Engineering and am in the usual dilemma... which to confirm? Rankings ... Illinois is better ranked but Purdue is not ranked really much below and has a reputation of being a pure engineering institute. I dont know much about the quality of their courses,programmes, faculty and facilities in Computer Engineering undergarduate department. Kindly guide me. Thanks a ton Last edited: Feb 19, 2008 2. Feb 19, 2008 ### qspeechc Then maybe your decision should be based on factors other than the ranking- like fees, travel involved, campus life, residences, schlarships/bursaries etc. 3. Feb 19, 2008 ### h2oski1326 First off, as a Boilermaker (Purdue) I am obviously biased toward my school. Second, I am an ME and don't claim to know all of the details of the ECE program at Purdue nor Illinois. That said, take this for whatever it is worth. As far as curriculum, the ME curriculum's between the two schools are VERY similar and I would assume the same is true for most engineering disciplines at the two schools. So, as far as knowledge you will gain, it is a tossup. Both are situated in the middle of the American heartland. The towns are small and most at the university are from the state, so the culture shock will be similar. Champaign is a much neater town that West Lafayette, IMO. So the edge for location would have to go to Champaign, but very slightly. The biggest difference is probably reputation. Purdue seems to have a better reputation than Illinois, particularly outside of the mid-west region. This is particularly important when it comes to job placement. This just comes from my personal experience with co-oping, internships, other job interviews, and etc. I have had several people who were convinced that Purdue was an Ivy League school before I met them. Of course, these probably won't be the people who are hiring you but I found it amusing. I was born and raised a Boiler and can't really imagine being anywhere else, so I would convince you to become one as well ;). If anyone else knows anything about Purdue or UOI ECE programs please chime in.
2018-04-23 02:17:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22182989120483398, "perplexity": 2430.1461170256825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945669.54/warc/CC-MAIN-20180423011954-20180423031954-00276.warc.gz"}
https://miktex.org/packages/smartref
Version: Copyright: License: Packaged on: 11/5/2010 2:17:04 PM Number of files: 6 Size on disk: 327.61 kB Extend the capability of the the \ref command: whenever a label is set this package records, along with the label, the values of some other counters (which, can be selected by the user). Then, the value of these counters can be recalled with a command similar to \pageref. Moreover, this package adds a command (\s[name]ref) for each counter added that displays something only if the value of the [name] counter is changed from when the label is set.
2019-06-17 11:12:43
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.900806725025177, "perplexity": 3170.0490793133285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998473.44/warc/CC-MAIN-20190617103006-20190617125006-00315.warc.gz"}
https://testbook.com/question-answer/an-antenna-with-an-efficiency-of-90-has-a-maximum--5f52f946cf9ede6c7f75cbcd
# An antenna with an efficiency of 90% has a maximum radiation intensity of 0.5 W/Steradian. Calculate the directive gain of the antenna when the input power to the antenna is 0.4 W. This question was previously asked in ISRO Scientist ECE: 2020 Official Paper View all ISRO Scientist EC Papers > 1. 18.23 2. 17.4 3. 11.2 4. 21.6 ## Answer (Detailed Solution Below) Option 2 : 17.4 Free CT 3: Building Materials 2962 10 Questions 20 Marks 12 Mins ## Detailed Solution Directive Gain (D): It is defined as the ratio of radiation intensity due to the test antenna to isotropic antenna (hypothetical antenna that radiates uniformly in all direction) $$D = \frac{U}{U_0}=\frac{4\pi U}{P_{rad}}$$ Where, U = radiation intensity due to test antenna, in watts per unit solid angle U0  = radiation intensity due to an isotropic antenna, in watts per unit solid angle Prad =total power radiated in watts Since U is a directional dependent quantity, the directive gain of an antenna depends on the angle θ and Φ. If the radiation intensity assumes its maximum value then the directive gain is called the Directivity (D0). $$D_0=\frac{U_{max}}{U_0}=\frac{4\pi U_{max}}{P_{in}}$$ Calculation: Given: Efficiency, η = 0.90, Umax = 0.5 W/sr Also, with Pin = 0.4 W, the radiated power will be: Prad = 0.4 × 0.9 = 0.36 W ∴ The directivity will be: $$D= \frac{{4\pi \; \times \;0.5}}{{0.36}}$$ D = 17.46
2021-10-24 10:31:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.910187304019928, "perplexity": 3739.617740347901}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585916.29/warc/CC-MAIN-20211024081003-20211024111003-00018.warc.gz"}
http://itensor.org/support/2367/non-hermitian-dmrg-in-julia
# non-Hermitian DMRG in Julia +1 vote Hi, I am trying to do an example by obtaining the ground state of a non-Hermitian Hamiltonian. Is it possible to do it now in Julia (or in the future version when there is Arnoldi eigensolver). I already know there's one in the C++ version. Thank you! +1 vote Hi Terry, The short answer is that this is not currently available in the Julia version at the level of an option you can pass to the dmrg function, if that's what you are asking. Similarly for the C++ version there is no such option, though as you noted we do offer an Arnoldi algorithm code. But for both versions you could modify the DMRG code to use Arnoldi. For the Julia version this should actually be pretty straightforward, because the eigsolve function we call from Julia is a function offered by the package KrylovKit.jl and that same eigsolve function offers the ability to change the algorithm to Arnoldi. Here is a documentation page on eigsolve explaining about this: https://jutho.github.io/KrylovKit.jl/stable/man/eig/ However, not having written non-Hermitian DMRG myself, I'm not sure off hand what other changes might be needed to the rest of the DMRG algorithm. Perhaps not very many other than changing the eigen solver? Best regards, Miles commented by (290 points) Hi Miles, For the last part, I guess you are probably correct. See this paper for example: https://arxiv.org/pdf/2001.07088.pdf Best, Tianqi commented by (14.1k points) You could try changing this line here: https://github.com/ITensor/ITensors.jl/blob/master/src/mps/dmrg.jl#L108 to ishermitian::Bool = false. You can also change the line to ishermitian::Bool = get(kwargs, :ishermitian, true) to have DMRG use the keyword argument ishermitian, which gets passed to the eigensolver from KrylovKit as Miles alluded to. I didn't make it available yet since I hadn't tested that functionality at all. Could you try that out for your problem and let us know if it works? We can make it available if we get some confirmation that it works ok. -Matt commented by (290 points) Hi Matt, Thank you for your suggestion. I will be working on this for some non-Hermitian Hamiltonian. Just a technical question here as I am a newbie for Julia: should I write a new julia function, say nHdmrg.jl or I create my own package based on ITensor.jl as it is written here: https://itensor.github.io/ITensors.jl/stable/AdvancedUsageGuide.html -Tianqi commented by (14.1k points) Hi Tianqi, No need to make your own package or make a new function. You can follow the instructions in this section: to make your own local "development" version of ITensors. Unfortunately that section needs more details and is a work in progress, but all you have to do is type the command julia>] dev ITensors (where ] enters package mode at the Julia REPL). Then, if you edit the dmrg function in the file ~/.julia/dev/ITensors/src/mps/dmrg.jl, you will see the changes you make when you use ITensors with using ITensors. As that section mentions, if you install the package Revise with julia>] add Revise, it makes it so that when you edit your local version of ITensors found in ~/.julia/dev/ITensors, you don't even have to restart Julia to see the change you make. You can test this by adding a print statement, like println("Testing Revise"), inside the dmrg function, and then run the dmrg function without restarting your Julia session. To switch back to the "official" ITensors version you had installed, you can type julia>] free ITensors. The development version will still be saved and you can use it again by typing julia>] dev ITensors. This is the preferred way to develop a package, and you can use it to make a pull request to ITensors. Let me know if you have any questions or if the process is confusing, I'll try to update that page with more details about the development process. -Matt commented by (290 points) Hi Matt, Thanks for your explanation in detail. It is very clear. I will try this and let you all know about my results here at a later time. Tianqi
2022-05-16 09:02:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31539127230644226, "perplexity": 1097.972747747616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510097.3/warc/CC-MAIN-20220516073101-20220516103101-00097.warc.gz"}
https://zbmath.org/?q=an:07000579
# zbMATH — the first resource for mathematics Energy gaps for $$p$$-Yang-Mills fields over compact Riemannian manifolds. (English) Zbl 1408.58011 The main purpose of the present paper is to obtain a Simons-type inequality for $$p$$-Yang-Mills fields over an oriented compact manifold without boundary. Next, the author establishes some $$L^r$$-energy gaps of $$p$$-Yang-Mills fields. The problem of energy gaps for $$p$$-Yang-Mills fields over complete noncompact Riemannian manifolds remains open. ##### MSC: 5.8e+16 Variational problems concerning extremal problems in several variables; Yang-Mills functionals 5.8e+21 Harmonic maps, etc. Full Text:
2021-09-27 18:59:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22987771034240723, "perplexity": 2640.2531330055485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058467.95/warc/CC-MAIN-20210927181724-20210927211724-00649.warc.gz"}
https://artofproblemsolving.com/wiki/index.php?title=1993_AIME_Problems/Problem_6&direction=next&oldid=112623
# 1993 AIME Problems/Problem 6 ## Problem What is the smallest positive integer that can be expressed as the sum of nine consecutive integers, the sum of ten consecutive integers, and the sum of eleven consecutive integers? ## Solution ### Solution 1 Denote the first of each of the series of consecutive integers as $a,\ b,\ c$. Therefore, $n = a + (a + 1) \ldots (a + 8) = 9a + 36 = 10b + 45 = 11c + 55$. Simplifying, $9a = 10b + 9 = 11c + 19$. The relationship between $a,\ b$ suggests that $b$ is divisible by $9$. Also, $10b -10 = 10(b-1) = 11c$, so $b-1$ is divisible by $11$. We find that the least possible value of $b = 45$, so the answer is $10(45) + 45 =$\boxed{495}$. === Solution 2 === Let the desired integer be$ (Error compiling LaTeX. ! Missing $inserted.)n$. From the information given, it can be determined that, for positive integers$a, \ b, \ c$:$n = 9a + 36 = 10b + 45 = 11c + 55$This can be rewritten as the following congruences:$n \equiv 0 \pmod{9}$$(Error compiling LaTeX. ! Missing inserted.)n \equiv 5 \pmod{10}$$ (Error compiling LaTeX. ! Missing$ inserted.)n \equiv 0 \pmod{11}$Since 9 and 11 are relatively prime, n is a multiple of 99. It can then easily be determined that the smallest multiple of 99 with a units digit 5 (this can be interpreted from the 2nd congruence) is$\boxed{495}$=== Solution 3 === Let$ (Error compiling LaTeX. ! Missing $inserted.)n$be the desired integer. From the given information, we have <cmath> \begin{align*}9x &= a \\ 11y &= a \\ 10z + 5 &= a, \end{align*}</cmath> here,(Error compiling LaTeX. ! Package amsmath Error: \begin{align*} allowed only in paragraph mode.)x,$and$y$are the middle terms of the sequence of 9 and 11 numbers, respectively. Similarly, we have$z$as the 4th term of the sequence. Since,$a$is a multiple of$9$and$11,$it is also a multiple of$\text{lcm}[9,11]=99.$Hence,$a=99m,$for some$m.$So, we have$10z + 5 = 99m.$It follows that$99(5) = \boxed{495}is the smallest integer that can be represented in such a way. === Solution 4 === By the method in Solution 1, we find that the number$(Error compiling LaTeX. ! Missing$ inserted.)n$can be written as$9a+36=10b+45=11c+55$for some integers$a,b,c$. From this, we can see that$n$must be divisible by 9, 5, and 11. This means$n$must be divisible by 495. The only multiples of 495 that are small enough to be AIME answers are 495 and 990. From the second of the three expressions above, we can see that$n$cannot be divisible by 10, so$n$must equal$\boxed{495}\$. Solution by Zeroman.
2022-05-24 09:55:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 32, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997325539588928, "perplexity": 903.3332872527144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662570051.62/warc/CC-MAIN-20220524075341-20220524105341-00094.warc.gz"}
https://kb.osu.edu/handle/1811/11834?show=full
dc.creator Gudeman, Christopher S. en_US dc.creator Begemann, Marianne H. en_US dc.creator Saykally, R. J. en_US dc.date.accessioned 2006-06-15T14:46:32Z dc.date.available 2006-06-15T14:46:32Z dc.date.issued 1983 en_US dc.identifier 1983-MF-09 en_US dc.identifier.uri http://hdl.handle.net/1811/11834 dc.description Author Institution: Department of Chemistry, University of California en_US dc.description.abstract Doppler shifts in the transition frequencies of molecular ions produced in DC glow discharges were first reported by woods and co-$workers^{1}$ for pure rotational spectra in the 3 = region. These Doppler shifts were $\sim 10$ times smaller than the pressure broadened linewidths and were therefore too small to produce well-resolved red- and blue-shifted components, which could provide velocity modulation of absorption signals for lock-in detection. Because pressure broadening is usually negligible at infrated wavelengths, and high ion velocities (drift random) can berealized in light gases such as $H_{2}$ and He, velocity-modulated ion absorption spectroscopy in audio frequency discharges with lock-in detection at the discharge frequency becomes a straight forward and powerful technique for ion absorption spectroscopy when used in conjunction with a narrow bandwidth tunable laser source. Absorptions due to neutral atoms and molecules, which are much more abundant in these discharges than charged species, are suppressed by about a factor of 100. Furthermore, a first derivative line shape is characteristic of ion signals, while a single Gaussian shape is observed for neutrals, providing unambiguous differentiation between charged and neutral spectra. In this paper we describe a velocity modulation absorption spectrometer which consists of a commercial color center laser (Burleigh FCL-20), an 1 cm $\times$ 1 m liquid nitrogen cooled AC discharge cell, an InSb photovoltaic detector, and a lock-in amplifier. Operating characteristics, including sensitivity, lineshapes, and spectral and dynamical information obtained, will be discussed. The extension of this method to absorption spectroscopy with visible dye laser will be $described.^{1}$ R. C. Woods. T. A. Dixon. R. J. Saykally, and P. C. Szanto, Phys. Rev. Lett. 35, 1269 (1975): C. S. Gudeman, N. D. Piltch, and R. C. Woods, 37th Symposium on Molecular Spectroscopy, Columbus, OH, June 1982, paper TB7. $^{*}$Supported by NSF Grant \# CHE8207307. en_US dc.format.extent 139755 bytes dc.format.mimetype image/jpeg dc.language.iso English en_US dc.publisher Ohio State University en_US dc.title VELOCITY MODULATED LASER ABSORPTION SPECTROSCOPY OF MOLECULAR IONS en_US dc.type article en_US 
2020-08-04 23:46:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5954691171646118, "perplexity": 6780.979061804209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735885.72/warc/CC-MAIN-20200804220455-20200805010455-00211.warc.gz"}
http://openstudy.com/updates/55e884d7e4b022720612ad51
anonymous one year ago help! :) 1. anonymous @Michele_Laino 2. anonymous @calculusxy 3. geerky42 With what? 4. anonymous can you explain what opposites are and give examples of real world situations? :) 5. anonymous @Michele_Laino just explain it in your own words :) 6. Michele_Laino for example -5 and +5 are opposite with respect to addition, since we have: -5+(+5)=0, and 0 is the neutral element of addition of the set of integers 7. anonymous ok thats the example? 8. Michele_Laino yes it is an example which comes from mathematics 9. anonymous ok and can you explain what opposites are 10. Michele_Laino here is another example from physics: north pole and south pole are opposite, since between them there is an attractive force 11. Michele_Laino I meant norh pole and south pole of a magnet 12. anonymous ok i get the examples, but can you explain what opposites are not in an example please :) 13. Michele_Laino referring to the quantum electrodynamics processes, I can say that opposites element are such that when they are both presents, they disappear and a new form of energy will appears 14. Michele_Laino such process is called annihilation 15. anonymous ok next question! :) 16. anonymous can you describe multiplying and dividing signed numbers in the real world? 17. Michele_Laino yes, I know an example from physics, nevertheless it is difficult to understand. 18. anonymous 19. Michele_Laino it is related to a solid state physics of semiconductors. When we study semiconductors, we have to introduce the so called effective mass of a carrier of electricity, and such effective mass is negative. Of course such effective mass can be multiplied for a positive quantity in order to get another quantity 20. Michele_Laino another simpler example comes from the Coulomb law, which is related to the interaction between a positive charge and a negative charge, for example. So in order to get the magnitude of the interaction force between a positive and a negative electric charges, I have to multiply a negative quantity with a positive quantity, using the subsequent algebrauc expression: $\Large F = K\frac{{{Q_1}{Q_2}}}{{{r^2}}}$ where Q1>0, Q2<0, r>0 and K>0 21. anonymous this is all so confusing!
2017-01-18 22:59:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7232967019081116, "perplexity": 1243.2136017876976}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00031-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/nhm.2017013
Article Contents Article Contents # A macroscopic traffic model with phase transitions and local point constraints on the flow • * Corresponding author: Massimiliano D. Rosini • In this paper we present a macroscopic phase transition model with a local point constraint on the flow. Its motivation is, for instance, the modelling of the evolution of vehicular traffic along a road with pointlike inhomogeneities characterized by limited capacity, such as speed bumps, traffic lights, construction sites, toll booths, etc. The model accounts for two different phases, according to whether the traffic is low or heavy. Away from the inhomogeneities of the road the traffic is described by a first order model in the free-flow phase and by a second order model in the congested phase. To model the effects of the inhomogeneities we propose two Riemann solvers satisfying the point constraints on the flow. Mathematics Subject Classification: Primary: 35L65, 90B20; Secondary: 82C26. Citation: • Figure 1.  Geometrical meaning of the notations used through the paper. In particular, $\Omega_{\rm f} = \Omega_{\rm f}^- \cup \Omega_{\rm f}^+$ and $\Omega_{\rm c}$ are the free-flow and congested domains, respectively; $V_{\rm f}^+$ and $V_{\rm f}^-$ are the maximal and minimal speeds in the free-flow phase, respectively, and $V_{\rm c}$ is the maximal speed in the congested phase Figure 2.  Geometrical meaning of the cases (T11a) and (T11b). Above $u_\ell'$ and $u_\ell''$ are $u_\ell$ in two different cases Figure 3.  Geometrical meaning of the cases (T12a) and (T12b). Above $u_\ell'$ and $u_\ell''$ are $u_\ell$ in two different cases Figure 4.  Geometrical meaning of the cases (T21a), (T21b) and (T22a). Above $u_\ell'$ and $u_\ell''$ are $u_\ell$ in two different cases Figure 5.  $(\rho_1, v_1) \doteq \mathcal{R}_1^{\rm c}[u_\ell,u_r]$ and $(\rho_2, v_2) \doteq \mathcal{R}_2^{\rm c}[u_\ell,u_r]$ in the case considered in Example 1. Figure 6.  The invariant domains described in Proposition 6 and Proposition 9 Figure 7.  $u_1 \doteq \mathcal{R}_1^{\rm c}[u_0,u_0]$ and $u_2 \doteq \mathcal{R}_2^{\rm c}[u_0,u_0]$ in the case considered in Example 2. Above $\hat{u}_1,\check{u}_1$ are given by (T11b) and $\hat{u}_2,\check{u}_2$ by (T21b); we let $w_0 = W(u_0)$, $\check{v}_i = V(\check{u}_i)$, $\hat{w}_2 = W(\hat{u}_2)$, $\check{w}_i = W(\check{u}_i)$ Figure 8.  Notations used in Section 5 Figure 9.  The solutions constructed in Subsection 5.1 on the left and in Subsection 5.2 on the right represented in the $(x,t)$-plane. The red thick curves are phase transitions. In particular, those along $x=0$ are stationary undercompressive phase transitions Figure 10.  Quantitative representation of density, on the left, and velocity, on the right, corresponding to the solutions constructed in Subsection 5.1 and Subsection 5.2. Recall that the two solutions coincide up to the interaction $i_5$ Figure 11.  Quantitative representation of density, on the left, and velocity, on the right, corresponding to the solution constructed in Subsection 5.2 Figures(11)
2023-03-28 16:05:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.7662357687950134, "perplexity": 518.5942010476919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00036.warc.gz"}
http://teachingintrotocs.blogspot.com/2011/11/spanning-trees-with-few-hops.html
## Wednesday, November 9, 2011 ### Spanning trees with few hops Minimum spanning tree problems often become NP-hard when one introduces side constraints. One popular such constraint is to limit the depth of the tree - the maximum number of "hops" when traversing a path from the root to a leaf in the tree. That's the k-hops minimum spanning tree problem, supposedly motivated by network applications, because delays along a path are roughly proportional to the number of hops. In this problem, unlike the minimum spanning tree problem, the input edge weights do not necessarily define a metric. Indeed, in the minimum spanning tree problem, we usually start by saying: "Without loss of generality we can assume that the triangular inequality holds; otherwise, replace costly edges by shortest paths." But in the k-hop minimum spanning tree problem, such replacements may be invalid because they increase the number of hops. When the input weights do not define a metric, known algorithms are quite poor. If there exists a tree with k hops and weight w, then one can produce a tree with O(k log n) hops and weight O(w log n), but doing better is impossible. However, the situation is much brighter with random inputs drawn from a nice distribution. In a 2008 arxiv paper by Angel, Flaxman and Wilson, one assumes that the input is a complete graph and that the edges are drawn independently from the same distribution, for example the uniform U[0,1] distribution, or an exponential, or some other reasonable distribution (i.e. with similar density function near zero). Then there is a simple algorithm to produce a near-optimal tree. Here is the algorithm. First, assume that the bound k on the number of hops is less than log log n. Then the algorithm guesses x1, x2,..., xk, where xi is the number of nodes that are i hops away from the root. Then it performs a greedy Prim-like algorithm, going level by level, where, to build level i, you pick the xi nodes that can most cheaply be connected to the nodes in the part of the tree already built. (The "guess" relies on probabilistic analysis to compute the optimal sequence x(i)). Second, assume that the bound k on the number of hops is more than log log n. Then the algorithm builds an (unconstrained) minimum spanning tree, then greedily breaks it up into small-depth clusters, then reconnects the clusters using a straightforward extension of the first algorithm, (where the goal is now not to connect the entire graph but to create a small depth subtree intersecting all the clusters). Although the independent weight assumption is quite restrictive, the algorithm itself is natural. In fact, the unexpected bonus is that when k>log log n, not only is the output tree near-optimal, but its cost is almost the same as the cost of the unconstrained MST: in other words, one get tree of depth O(log log n) at no extra cost. So one is left to wonder: how good is this algorithm in practice?
2014-11-28 18:59:19
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8975218534469604, "perplexity": 498.8904425387312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931010792.55/warc/CC-MAIN-20141125155650-00121-ip-10-235-23-156.ec2.internal.warc.gz"}
https://espacemath.com/cross-product-calculator
# cross product calculator First vector X Y Z Second vector X Y Z #### RESULTS Fill the calculator form and click on Calculate button to get result here Get cross product calculator For Your Website Having trouble figuring out the sum of two trajectories? No worries, you’re at the right place! Our free normal vector calculator is here to save the day for you.  Simply input values in this tool and you’re good to go! ## What is Cross-Product To understand it, let’s first get our heads around vector: It is a mathematical tool with a clearly defined magnitude and direction. It is utilized in Physics, Mathematics, Engineering and Computer Science. The cross-product (don’t confuse with dot-product), to put it simply, is a binary operation on two trajectories in 3-dimensional space. It is represented by the sign ‘x’ (read: Cross). Consider two linearly self-determining of these, ‘a’ and ‘b’; the cross product of these two vectors would be a trajectory that is perpendicular to both a and b. ## Cross-Product Formula Again, let’s consider the ‘a x b’ with ‘x’ being the result of their multiplication. So, the formula goes like this: $$\mathbf{C = a \times b = |a| |b| sin(\theta) n}$$ Where, $$C=?$$ a – is the first vector, b – is the second trajectory, θ - is an angle between both the above-mentioned vectors, n – the resulting third path-line perpendicular to both a and b. ## How to do Cross-Product If you know how to work out the multiplications, then it becomes quite simple actually. All you need to do is use the above-stated formula and you’re good to go. Additionally, as you may already know by now that the result of two path-lines is a third route that is at a right angle to both the previous vectors. However, it is pertinent to note here that when both the trajectories ‘a’ and ‘b’ point in the same or the opposite direction, the length of the third path-line is 0. However, when both ‘a’ and ‘b’ are at right angle relative to each other, the length of the third one is maximized. Let’s define ‘a’ and ‘b’ with coordinates ax, ay and az and bx, by and bz respectively. Now naturally, let’s assume that the resultant vector ‘c’ goes with coordinates cx, cy and cz. Consider the values: $$a = (4,5,6)$$ $$b = (7,8,9)$$ $$c =?$$ Let’s do the math and find out! $$\mathbf{cx = aybz - azby} = 5\times9 - 6\times8 = 45 - 48 = -3$$ $$\mathbf{cy = azbx - axbz} = 6*7 - 4*9 = 42 - 36 = 6$$ $$\mathbf{cz = axby - aybx} = 4*8 - 5*7 = 32 - 35 = -3$$ your answer = $$-3,6,-3$$ Or, if you aren’t interested in doing all the math, you can simply use our cross multiply calculator and get the answer automatically in a split second. ## Cross-Product of two vectors The two vectors ‘a’ and ‘b’ follow the rules given below: $$(ya) x b=y (a x b)=a x (yb)$$, $$a x (b+c)=a × b+a × c$$ $$(b+c) × a=b × a+c × a$$ Where c is the sum value while y is a scaler. We can use these properties, to come up with a formula for the multiplication result in relation to components. ## How to use Cross-Product Calculator Coming back to our digital gizmo, you can determine the path-line results with our vector multiplication calculator. It is extremely easy to use. There are two ways you can determine the answer. Either you can use, coordinates method or the initial points method. Both options are given in our cartesian product calculator. You can select either of these two options for the purpose of calculation. ## Coordinates Method and Initial points Method 1. In Coordinates approach, you have to input the coordinates (x,y,z) for the unit trajectories whose product trajectory you want to determine and that is that. 2. On the other hand, in initial points approach, you have to input the initial points as well as the terminal points of both the unit path-lines to get the answer. All that you have to do is simply follow the steps given below: • Input the values (Coordinates or initial points) of the two trajectories • Click on ‘calculate’ to find the answer. ## dot product vs cross product People often wonder “Is dot product same as the cross product?” these two are polar opposites. The dot multiplication is scaler in nature and a scaler is not defined by a particular direction while vector on the other hand, is described with a specific direction. Other Languages
2021-12-09 03:44:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7912123203277588, "perplexity": 776.803279520206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363659.21/warc/CC-MAIN-20211209030858-20211209060858-00080.warc.gz"}
http://clay6.com/qa/21246/a-point-source-of-light-is-taken-away-from-the-experimental-setup-of-photoe
Want to ask us a question? Click here Browse Questions Ad 0 votes # A point source of light is taken away from the experimental setup of photoelectric effect, then which is the most appropriate statement? $\begin{array}{1 1} \text{(a) Saturation photocurrent remains same, while stopping potential increases.} \\ \text{(b) Saturation photocurrent and stopping potential both decrease.} \\ \text{(c) Saturation photocurrent decreases while stopping potential remains same.}\\ \text{(d) Saturation photocurrent decreases and stopping potential increases.} \end{array}$ Can you answer this question? ## 1 Answer 0 votes Ans (c) As the source of light is taken away, the intensity of light at the location of experiment setup decreases. Photocurrent ~ Intensity So, photocurrent decreases. Now, $V_{\circ} = hc/\lambda_{\circ}e – \phi/e$ which is independent of intensity. answered Dec 24, 2013 edited Mar 13, 2014 0 votes 1 answer 0 votes 1 answer 0 votes 1 answer 0 votes 1 answer 0 votes 1 answer 0 votes 1 answer 0 votes 1 answer
2017-02-21 05:09:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8838292956352234, "perplexity": 10788.172623820743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00556-ip-10-171-10-108.ec2.internal.warc.gz"}
http://cosmo.astro.uni.torun.pl/foswiki/bin/view/Cosmo/CosmoCurvature
<< WebHome # Cosmic Curvature (standard ruler) Some of the most important parameters of the metric are those that directly affect the curvature of three-dimensional comoving spatial sections. This mainly includes and . The first (and possibly the only, so far) simultaneous constraint on both and from within a single survey, using a standard ruler method, is AstroPh:0106135 - Roukema, Mamon, Bajtlik (2002). This topic is generally referred to today as BAO - the use of baryonic acoustic oscillations - although in fact can be more general, i.e. make less assumptions. The best-cited example of BAO usage is AstroPh:0501171 SDSS z < 0.47 Eisenstein et al., who estimated Omega_total = 1.010\pm0.009. Updating Roukema, Mamon & Bajtlik (2002) on newer QSO redshift surveys should result in preciser, and hopefully more accurate, estimates of these metric parameters. Topic revision: r2 - 06 Sep 2010, BoudRoukema English Français Polski Copyright © CC-BY-SA by the contributing authors. All material on this collaboration platform is copyrighted under CC-BY-SA by the contributing authors unless otherwise noted. Ideas, requests, problems regarding Foswiki? Send feedback
2022-12-08 10:20:59
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8162457942962646, "perplexity": 6701.72224003098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711286.17/warc/CC-MAIN-20221208082315-20221208112315-00343.warc.gz"}
https://pythonforundergradengineers.com/index3.html
## Quiver plots using Python, matplotlib and Jupyter notebooks A quiver plot is a type of 2D plot that shows vector lines as arrows. Quiver plots are useful in electrical engineering to visualize electrical potential and valuable in mechanical engineering to show stress gradients. In this post, we will build three quiver plots using Python, matplotlib, numpy, and Jupyter notebooks. ## Plotting a stress-strain curve with four libraries: matplotlib, pandas, altair and bokeh After watching a great webinar about plotting with different python libraries, I wanted to see what it was like to make a stress strain curve using four different modules: pandas, matplotlib, altair and bokeh (with holoviews). ## Plotting Histograms with matplotlib and Python Histograms are a useful type of statistics plot for engineers. A histogram is a type of bar plot that shows the frequency or number of values compared to a set of value ranges. Histogram plots can be created with Python and the plotting package matplotlib. The plt.hist() function creates … ## Statistics in Python using the statistics module In this post, we'll look at a couple of statistics functions in Python. These statistics functions are part of the Python Standard Library in the statistics module. The four functions we'll use in this post are common in statistics: • mean - average value • median - middle value • mode - most often value • standard … ## Solving Two Equations for Two Unknowns and a Statics Problem with SymPy and Python SymPy (http://www.sympy.org) is a Python library for symbolic math. In symbolic math, symbols represent mathematical expressions. In a numerical calculation, the value of pi is stored as an estimate of pi, a floating point number close to 3.14.... In a symbolic math expression, the value of …
2019-05-24 17:38:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4038650095462799, "perplexity": 1948.2903035221348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257699.34/warc/CC-MAIN-20190524164533-20190524190533-00423.warc.gz"}
https://www.gradesaver.com/textbooks/math/geometry/CLONE-68e52840-b25a-488c-a775-8f1d0bdf0669/chapter-6-section-6-1-circles-and-related-segments-and-angles-exercises-page-275/11i
## Elementary Geometry for College Students (6th Edition) $m\angle 6=30^{\circ}$ In part e we determined that the $m\angle 2=120^{\circ}$. All radii of a circle are congruent, therefore $\overline{CQ}=\overline{BQ}$, creating an isosceles triangle. The two base angles of an isosceles triangle are congruent. So if we subtract the vertex angle ($\angle 2$) from $180^{\circ}$ and then divide the resulting number by two (two base angles), we will find the measure of the missing angle. $180-120=60\div2=30$ $m\angle 6=30^{\circ}$
2019-10-16 22:19:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6395873427391052, "perplexity": 305.7157230410664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986670928.29/warc/CC-MAIN-20191016213112-20191017000612-00539.warc.gz"}
https://library.kiwix.org/sound.stackexchange.com_en_all_2021-05/A/question/37142.html
## How to ensure uniform sound levels in multiple WAV files? 5 3 I require that the sound levels of all the songs for my project are uniform. The songs are from different albums and different artists, hence they are not uniform from the perspective of volume level. I found MP3Gain for MP3 files. But people say it does not work with the WAV file format. There are some problems like reducing audio quality when it comes to Audacity. Which programs like MP3Gain which can be successfully utilized for wave files? Can I use SoX for this, and how? 1 I'm not sure Wav or Aif can hold ReplayGain data; flac, ogg, mp3 & aac can, so you might be better off using one of those formats - https://en.wikipedia.org/wiki/ReplayGain has an explanation & links to available software. – Tetsujin – 2015-09-25T06:57:35.173 2 You are talking about perceived loudness. This is quite unlike peak level. What you can do is measure the perceived loudness for all your songs and then adjust the gain accordingly. The best way currently available to measure perceived loudness is using the R128 standard. I hope you're on a Mac, because then you can use this free commandline tool: https://github.com/audionuma/r128x Perceived loudness according to R128 is measured in LUFS (loudness units full scale), but these are identical to deciBels, except that they are measuring an RMS window over a certain time (which is of course the only way to measure perceived loudness). Now comes the tricky thing: if your song has a loud part and a more quiet part, statistics dictates that the perceived integrated loudness for the the song you're measuring is a function of the entire song. What you can do: • Isolate the parts of all your songs that you want to sound the same to the listener. If a song has several drops, you can just take a single portion of a few seconds that is representative of the loudness of the song. • Measure all these isolated parts using the above tool or another R128 measuring tool. You will have to differentiate between M, S and I loudness values. You can simply ignore M (momentary) and S (short term) loudness and just look at the I (integrated) level. • R128 wants tv programs to all be at -23 LUFS, but you don't care about this. Just make notes of all the integrated levels of your song snippets. As they are digital full scale measurements, the measurements will yield negative results, like -9 LUFS and -15 LUFS. • Like we've seen, LUFS are just RMS deciBels, so you can gain scale the entire song using any linear gain plugin or adjustment. Take the softest song, e.g. the one that measures -15 LUFS and use that one as a reference. You can now gain scale a song that measured -9 LUFS using a -6 dB gain adjustment, and the song will then have the same perceived loudness for the parts you selected. • Voila, album mastered. You may want to be sure your tracks are all mastered in terms of dynamics and EQ before you do all of the above. You can even normalize them first, so you know you are not wasting bits because all songs are way too quiet. If you are very technical inclined, you can even measure the LRA (loudness range) using R128 tools and compare those of songs you think should pretty much be in the same loudness range. If all else fails, I'd be happy to master your songs for you, but it won't be free! Good luck, Hens Zimmerman http://henszimmerman.audio In case the OP doesn't run Mac OS, he can have a look at https://github.com/jiixyj/loudness-scanner which uses the same library that r128x uses. – audionuma – 2016-02-01T06:35:44.313 1 And FFmpeg also includes loudness measurement : http://ffmpeg.org/ffmpeg-filters.html#ebur128 – audionuma – 2016-02-01T06:37:53.057 1 Replaygain is a metadata based system - it analyses the absolute peak value of your audio, then writes a scaling value which the player reads and uses to amplify the audio data when you play it back. The wave format doesn't natively support the metadata used for it so you have to either alter the actual audio data to normalise it, or use a hack to the format - I refer you to this old hydrogenaudio post: https://www.hydrogenaud.io/forums/index.php?s=&showtopic=54958&view=findpost&p=492968 Can't guarantee interoperability with any of your other apps. Simplest thing to do would be to use foobar, do a batch convert with a normalisation processing step and save copies of the files for your project use. 1 MP3 metadata has a sort of dynamics compression hack where an individual track can say to the player “play me at 110% volume” and another track can say “play me at 90% volume” and the result is that those 2 MP3’s seem to be at the same perceived volume. With lossless audio, you don’t have this hack. You have to use actual dynamics compression. So the way you compress the dynamics is to run the audio files through the same dynamics compressor and create new lossless audio files that all play at the same perceived volume. This is classic audio editing. Note that this has nothing to do with normalization. You can normalize 10 WAV files and they can all still have a different perceived volume. Normalizing is math, but perceived volume comes from art. Each song has a different dynamic range because of the way the people who made it expressed themselves artistically. A dynamics compressor is how you can make 2 songs from 2 different sources match. There is an art to finding the right settings for your dynamics compressor based on your source material. That is why the tools that set one MP3 at 110% and another at 90% do an analysis of the audio first. With lossless audio, you will listen in real-time to the output of the dynamics compressor and make the adjustments you want. Some dynamics compressors also have automatic features that simplify their use and are equivalent to what is done with MP3. Audacity probably has a built-in dynamics compressor and if not, then you can probably use a plug-in. If you are running on a Mac there is one built-in called “AUDynamicsCompressor.” If you were only going to have one audio processor, it would be a dynamics processor. It’s the most fundamental of audio processors. It would be hard to imagine any kind of audio editing going on without a dynamics processor. 0 For Windows users. Since utilities to calculate replay gain/peak values are mostly designed for compressed "lossy" formats such as mp3, enabling ReplayGain for wav tracks/albums, is a problem. I found a workaround was encoding mp3s from wav sources and using Winamp (v5.8) to both calculate and write the ReplayGain to the mp3 id3v2.3 tags, then applying those values to the original wavs. There's a Windows freeware tool that copies id3 tags directly onto an existing wav file called Mp3tag. Below is a screenshot: In your player that supports ReplayGain playback simply enable it. In VLC the option called 'Replay gain mode' allows you to select: 'Album', 'Track' or 'None'. I hope this answer still has relevance. ReplayGain is especially useful for chamber music in my limited experience. For that track above I also measured the 'Perceived Loudness' (before ReplayGain) in dB. It was, Left: -11.82; Right: -12.22. The whole track should hence be louder, much louder, when Winamp's ReplayGain value -5.26 dB is applied, however, in practice, track playback becomes somewhat quieter on good headphones. My understanding is that -5.26 dB track gain is an average figure when applied that falls below the left/right peak amplitudes to avoid clipping artefacts above 0 dB. Average 'Loudness', another value, is subtracted from a -14 dB pink noise reference. ReplayGain attenuates or raises the peak amplitude in line with either the Album or the Track average. I have encountered so far significantly more tracks in which volume decreases as a result of using ReplayGain. The net effect is usually to normalize the loudness and in the process lower the volume. I may be wrong of course, but whether in dB or LUFS, 'Perceived Loudness' is that which you want to manipulate or change. People write volumes trying to explain all this. Edit: It turns out that ReplayGain multiplies each sample value by a constant. Ten raised to the power of one-twentieth of replay gain. In short, if the ReplayGain is put at -5.26 dB (as computed by Winamp), sample values across the board should be multiplied by 0.54575786109127092251134492267094. May this be accurate. I'm citing from: http://wiki.hydrogenaud.io/index.php?title=ReplayGain_1.0_specification Conclusion: Having normalized the WAV file above by 54.575786109127092251134492267094 percent and compared it to the MP3 version that contains ReplayGain, I must regrettably report that VLC (3.0.11 Vetinari) on Windows 10 fails to respect the metadata in the MP3 tag, if ReplayGain mode is enabled in the Audio Preferences section. ReplayGain ID3 tags inserted into WAV tracks are ignored as well, unsurprisingly. While Winamp does respect these values in the ID3v2.3 tags in MP3s, as does PowerAmp on Android, Winamp also does not recognize the WAV file tags, to my astonishment. It implements a default gain in the absence of proper ReplayGain. Back to the question: How to ensure uniform sound levels in multiple WAV files? My normalized WAV played back in Winamp sounds exactly the same as the MP3 with RG. The formula "ten raised to the power of one-twentieth of replay gain" therefore, in this instance, replicated ReplayGain accurately. You can probably rely on the formula, using Winamp to obtain ReplayGain. A better option might be to normalize loudness in music tracks of varying loudnesses employing one of the other standards. EBU R128 set to -23 LUFS has been recommended, I think. Edit: Minor correction. VLC applies ReplayGain to MP3s (not to WAV files), provided the RG metadata exists. You must relaunch the player (in Windows) after you enable Replay gain mode under Audio Preferences. Then it works fine. My mistake. -2 If you're needing auto-gain to be applied in realtime then your best bet would be to convert the WAV's in question to MP3 using the highest possible quality settings in Audacity (trust me, if no one is specifically listening for audio artifacts/deformities, no one will notice any change in quality). If you don't need auto-gain to be applied in realtime you can just use Audacity's Amplify effect and let it automatically determine the exact amount of gain to apply to bring the audio up to 0dB. Proceed to render the WAV to the disk and use where needed. 1Normalising is not the same as adjusting for replay gain – Tetsujin – 2015-09-25T06:55:31.747
2021-09-19 18:05:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3004639446735382, "perplexity": 2638.931211593781}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056892.13/warc/CC-MAIN-20210919160038-20210919190038-00591.warc.gz"}
http://math.stackexchange.com/questions/284113/calculate-the-variance
# Calculate the Variance This is what the question that I'm having trouble with: I know how to calculate the variance in general but I'm not sure how they got this: - By definition of the variance, you have that $$\mathrm{Var}(X_i)=E\left[(X_i-E[X_i])^2\right]=E\left[(X_i-p)^2\right].$$ Let us call $Y_i=(X_i-p)^2$. Then $Y_i$ is a random variable given by $$Y_i= \begin{cases} (1-p)^2\quad&\text{when }X_i=1,\\ p^2&\text{when }X_i=0, \end{cases}$$ from which we see that $P(Y_i=(1-p)^2)=P(X_i=1)=p$ and $P(Y_i=p^2)=P(X_i=0)=1-p$, and thus its expectation is $$E[Y_i]=(1-p)^2\times p+p^2\times (1-p)=p-p^2.$$ - Why is it the other way round from the question though? I.e. P{X_i=0]=p and P{X_i=1}=p –  Mathlete Jan 22 '13 at 10:45 What? The other way around how? In both your question and this answer we have: $P(X_i=0)=1-p$ and $P(X_i=1)=p$. –  Stefan Hansen Jan 22 '13 at 10:52 Sorry, my mistake - I misread your line of working. Thanks for clarifying. –  Mathlete Jan 22 '13 at 10:55 we know var($X_{i}$)=$E[\,(X_{i}-E(X_{i}))^{2} ]=E[\,(X_{i}-p))^{2} ]$ (because $E(X_{i})=P\{X_{i}=0\}*0+P\{X_{i}=1\}*1=p$), a more useful formula is $var(X_{i})=E(X_{i}^{2})-E(X_{i})^{2}$ which can be obtained by expanding var($X_{i}$)=$E[\,(X_{i}-E(X_{i}))^{2} ]$ - OK but why would you multiply by (1-p)^2 and p^2? –  Mathlete Jan 22 '13 at 9:30 You have that $Var(X_i)=E((X_i-E(X_i))^2)=E(X_i^2-2X_iE(X_i)+(E(X_i))^2)=E(X_i^2)-(E(X_i))^2.$ Now $X_i^2$ is a RV that equals $1^2=1$ with probability $p$ and $0^2=0$ with probability $1-p$. As such, its expected value is $1\cdot p + 0\cdot (1-p) = p$. On the other hand, since you $E(X_i)=p$, you get that $(E(X_i))^2=p^2$. Combining these two results gives you that $Var(X_i)=E(X_i^2)-(E(X_i))^2=p-p^2$. - I got that answer too doing it that way - I was just wondering how they went about their method? –  Mathlete Jan 22 '13 at 10:32
2015-09-02 00:58:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9506216645240784, "perplexity": 302.47360408741025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645235537.60/warc/CC-MAIN-20150827031355-00028-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.edreports.org/reports/detail/jump-math-2019-7
## Alignment: Overall Summary The instructional materials reviewed for JUMP Math Grade 7 partially meet expectations for alignment. The instructional materials meet expectations for focus and coherence by assessing grade-level content, devoting the majority of class time to the major work of the grade, and being coherent and consistent with the progressions in the Standards. The instructional materials partially meet expectations for rigor and the mathematical practices. The instructional materials partially meet the expectations for rigor by attending to conceptual understanding and procedural skill and fluency, and they also partially meet expectations for practice-content connections by identifying the mathematical practices and using them to enrich grade-level content. | ## Gateway 1: ### Focus & Coherence 0 7 12 14 13 12-14 Meets Expectations 8-11 Partially Meets Expectations 0-7 Does Not Meet Expectations ## Gateway 2: ### Rigor & Mathematical Practices 0 10 16 18 12 16-18 Meets Expectations 11-15 Partially Meets Expectations 0-10 Does Not Meet Expectations | ## Gateway 3: ### Usability 0 22 31 38 N/A 31-38 Meets Expectations 23-30 Partially Meets Expectations 0-22 Does Not Meet Expectations ## The Report - Collapsed Version + Full Length Version ## Focus & Coherence #### Meets Expectations + - Gateway One Details The instructional materials reviewed for JUMP Math Grade 7 meet expectations for Gateway 1. The instructional materials meet expectations for focus within the grade by assessing grade-level content and spending the majority of class time on the major work of the grade. The instructional materials meet expectations for being coherent and consistent with the Standards as they connect supporting content to enhance focus and coherence, have an amount of content that is viable for one school year, and foster coherence through connections at a single grade. ### Criterion 1a Materials do not assess topics before the grade level in which the topic should be introduced. 2/2 + - Criterion Rating Details The instructional materials reviewed for JUMP Math Grade 7 meet expectations for not assessing topics before the grade level in which the topic should be introduced. Above-grade-level assessment items are present and can be modified or omitted without significant impact on the underlying structure of the instructional materials. ### Indicator 1a The instructional material assesses the grade-level content and, if applicable, content from earlier grades. Content from future grades may be introduced but students should not be held accountable on assessments for future expectations. 2/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 meet expectations for assessing grade-level content. Above-grade-level assessment Items are present but could be modified or omitted without a significant impact on the underlying structure of the instructional materials. The Sample Unit Quizzes and Tests along with Scoring Guides and Rubrics were reviewed for this indicator. Examples of grade-level assessment items include: • Teacher Resource, Part 1, Sample Quizzes and Tests, Unit 2 Test, Item 3, “Subtract. Then find the distance apart. a. +8 − (−5) = ______, so +8 and −5 are _____ units apart. b. +4 − (+9) = ______, so +4 and +9 are _____ units apart. c. −6 − (+2) = ______, so −6 and +2 are _____ units apart. d. −3 − (−10) = ______, so −3 and −10 are _____ units apart.” Students are subtracting rational numbers and finding their distance apart. (7.NS.1c) • Teacher Resource, Part 1, Sample Quizzes and Tests, Unit 3 Quiz, Item 5, “Factor the expression. Use the GCF of the numbers. a. 6x + 8 b. 12x − 15 c. 3x + 12 d. 8x − 4.” Students factor an expression using the Greatest Common Factor. (7.EE.1) • Teacher Resource, Part 2, Sample Quizzes and Tests, Unit 5 Test, Item 4, “a. Use long division to write $$\frac{7}{8}$$ as a decimal. b. How do you know that the division is finished at this point?” Students use long division to change a fraction to a decimal and explain that the decimal form of a rational number terminates in 0s. (7.NS.2d) • Teacher Resource, Part 2, Sample Quizzes and Tests, Unit 6 Quiz, Item 2, “What shape is the cross-section?” Students name the 2-dimensional figure that results from slicing a cross-section of a 3-dimensional figure. (7.G.3) • Teacher Resource, Part 2, Sample Quizzes and Tests, Unit 7 Quiz, Item 2, “Box plots A and B represent sets that have 500 data values. a. Which set has the greater median? __________ b. Which set has the greater IQR? __________” Students compare two data sets and assess the degree of overlap. (7.SP.3) The following are examples of assessment Items that are aligned to standards above Grade 7, but these can be modified or omitted without compromising the instructional materials: • Teacher Resource, Part 2, Sample Quizzes and Tests, Unit 3, Item Bonus, “2(x + 3) = 3x - 7” Students expand an expression and collect terms. (8.EE.8) • Teacher Resource, Part 2, Sample Quizzes and Tests, Unit 4, Item 2, “Evaluate the variables. Include the units.” Students find exterior angles based on a picture. (8.G.5) ### Criterion 1c - 1f Coherence: Each grade's instructional materials are coherent and consistent with the Standards. 7/8 + - Criterion Rating Details The instructional materials reviewed for JUMP Math Grade 7 meet expectations for being coherent and consistent with the Standards. The instructional materials connect supporting content to enhance focus and coherence, include an amount of content that is viable for one school year, and foster connections at a single grade. However, the instructional materials contain off-grade-level material and do not relate grade-level concepts explicitly to prior knowledge from earlier grades. ### Indicator 1c Supporting content enhances focus and coherence simultaneously by engaging students in the major work of the grade. 2/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 meet expectations that supporting content enhances focus and coherence simultaneously by engaging students in the major work of the grade. When appropriate, the supporting work enhances and supports the major work of the grade level. Examples where connections are present include the following: • Teacher Resource, Part 1, Unit 6, Lessons G7-8, G7-9, and G7-10 connect 7.RP.2 with 7.G.1 as students are expected to recognize and represent proportional relationships between quantities in order to solve problems involving scale drawings of geometric figures. • Teacher Resource, Part 2, Unit 4, Lessons G7-11, G7-12, and G7-13 connect 7.EE.4a with 7.G.5 as students are expected to solve word problems leading to equations of the form px + q = r and p(x + q) = r, where p, q, and r are specific rational numbers, that arise from using facts about supplementary, complementary, vertical, and adjacent angles in a figure. • Teacher Resource, Part 2, Unit 4, Lessons G7-14, G7-15, and G7-16 connect 7.EE.4 with 7.G.6 as students are expected to construct simple equations and inequalities to solve problems by reasoning about the quantities that arise from real-world and mathematical problems involving area of two-dimensional objects composed of triangles, quadrilaterals, and polygons. • Teacher Resource, Part 2, Unit 7, Lesson SP7-16 connects 7.RP.2 with 7.SP.2 as students are expected to recognize and represent proportional relationships between quantities in order to draw inferences about a population with an unknown characteristic of interest. ### Indicator 1d The amount of content designated for one grade level is viable for one school year in order to foster coherence between grades. 2/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 meet expectations for having an amount of content designated for one grade level that is viable for one school year in order to foster coherence between grades. Overall, the amount of time needed to complete the lessons is approximately 154 days, which is appropriate for a school year of approximately 140-190 days. • The materials are written with 15 units containing a total of 168 lessons. • Each lesson is designed to be implemented during the course of one 45 minute class period per day. In the materials, there are 168 lessons, and of those, 29 are Bridging lessons. These 29 Bridging lessons have been removed from the count because the Teacher Resource states that they are not counted as part of the work for the year, so the number of lessons examined for this indicator is 139 lessons. • There are 15 unit tests which are counted as 15 extra days of instruction. • There is a short quiz every 3-5 lessons. Materials expect these quizzes to take no more than 10 minutes, so they are not counted as extra days of instruction. ### Indicator 1e Materials are consistent with the progressions in the Standards i. Materials develop according to the grade-by-grade progressions in the Standards. If there is content from prior or future grades, that content is clearly identified and related to grade-level work ii. Materials give all students extensive work with grade-level problems iii. Materials relate grade level concepts explicitly to prior knowledge from earlier grades. 1/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 partially meet expectations for being consistent with the progressions in the Standards. Overall, the materials address the standards for this grade level and provide all students with extensive work on grade-level problems. The materials make connections to content in future grades, but they do not explicitly relate grade-level concepts to prior knowledge from earlier grades. The materials develop according to the grade-by-grade progressions in the Standards, and content from prior or future grades is clearly identified and related to grade-level work. The Teacher Resource contains sections that highlight the development of the grade-by-grade progressions in the materials, identify content from prior or future grades, and state the relationship to grade-level work. • At the beginning of each unit, "This Unit in Context" provides a description of prior concepts and standards students have encountered during the grade levels before this one. The end of this section also makes connections to concepts that will occur in future grade levels. For example, "This Unit in Context" from Unit 8, Statistics and Probability: Probability Models, of Teacher Resource, Part 1 describes the topics from Measurement and Data that students encountered in Grades K through 5, specifically organizing and representing data with scaled picture and bar graphs and line plots with measurements in fractions of a unit, and from Statistics and Probability in Grade 6, specifically developing an understanding of statistical variability and summarizing and describing distributions. The description then includes the topic of probability, specifically referring to using different tools to find probabilities, and it concludes with how the work of this unit builds to the statistical topic of bivariate data in Grade 8. There are some lessons that are not labeled Bridging lessons that contain off-grade-level material, but these lessons are labeled as “preparation for” and can be connected to grade-level work. For example, Teacher Resource, Part 2, Unit 2, Lesson RP7-33 addresses solving addition and multiplication equations including negative addends and coefficients, and the lesson is labeled as "preparation for 7.EE.4." The materials give all students extensive work with grade-level problems. The lessons also include Extensions, and the problems in these sections are on grade level. • Whole class instruction is used in the lessons, and all students are expected to do the same work throughout the lesson. Individual, small-group, or whole-class instruction occurs in the lessons. • The problems in the Assessment & Practice books align to the content of the lessons, and they provide on-grade-level problems that "were designed to help students develop confidence, fluency, and practice." (page A-54, Teacher Resource) • In the Advanced Lessons, students get the opportunity to engage with more difficult problems, but the problems are still aligned to grade-level standards. For example, the problems in Teacher Resource, Part 2, Unit 3, Lesson EE7-27 engage students in solving inequalities where the coefficient of the variable is negative, which is more difficult than when the coefficient is positive, but these problems still align to 7.EE.4b. Also, the problems in Teacher Resource, Part 2, Unit 5, Lesson NS7-52 that have students simplifying numerical expressions that include repeating decimals align to standards from 7.NS. The instructional materials do not relate grade-level concepts explicitly to prior knowledge from earlier grades. Examples of missing explicit connections include: • Every lesson identifies “Prior Knowledge Required” even though the prior knowledge identified is not aligned to any grade-level standards. For example, Teacher Resource, Part 2, Unit 2, Lesson RP7-28 states that its goal is to solve problems involving ratios with fractional terms, and the prior knowledge required is that students can divide fractions, can multiply fractions by whole numbers, can multiply whole numbers by fractions, can find equivalent ratios, and can understands ratio tables. • There are 29 lessons identified as Bridging lessons, but none of these lessons are explicitly aligned to standards from prior grades even though they do state for which grade-level standards they are preparation. For example, in Teacher Resource, Part 1, Unit 4, four of the seven lessons are Bridging lessons labeled as "preparation for 7.NS.1," and two of the seven are Bridging lessons labeled as "preparation for 7.NS.2." However, none of these six Bridging lessons are explicitly aligned to standards prior to Grade 7. Also, Teacher Resource, Part 2, Unit 3, Lesson EE7-1 is a Bridging lesson labeled as "preparation for 7.EE.4" that has students substituting values for a variable into an expression, but the lesson is not explicitly aligned to standards prior to Grade 7. ### Indicator 1f Materials foster coherence through connections at a single grade, where appropriate and required by the Standards i. Materials include learning objectives that are visibly shaped by CCSSM cluster headings. ii. Materials include problems and activities that serve to connect two or more clusters in a domain, or two or more domains in a grade, in cases where these connections are natural and important. 2/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 meet expectations for fostering coherence through connections at a single grade, where appropriate and required by the standards. Overall, materials include learning objectives that are visibly shaped by CCSSM cluster headings and make connections within and across domains. In the materials, the units are organized by domains and are clearly labeled. For example, Teacher Resource, Part 1, Unit 3 is titled Expressions and Equations: Equivalent Expressions, and Teacher Resource, Part 2, Unit 6 is titled Geometry: Volume, Surface Area, and Cross Sections. Within the units, there are goals for each lesson, and the language of the goals is visibly shaped by the CCSSM cluster headings. For example, in Teacher Resource, Part 1, Unit 8, the goal for Lesson SP7-9 states "Students will design and use a simulation to determine probabilities of compound events." The language of this goal is visibly shaped by 7.SP.C, "Investigate chance processes and develop, use, and evaluate probability models." The instructional materials include problems and activities that serve to connect two or more clusters in a domain or two or more domains in a grade. Examples of these connections include the following: • In Teacher Resource, Part 2, Unit 4, Lessons G7-22 and G7-23, the materials connect 7.G.A with 7.G.B as students draw, construct, and describe geometrical figures; describe the relationships between them; and solve problems involving angle measure and area. • In Teacher Resource, Part 2, Unit 7, Lesson SP7-13, the materials connect 7.SP.A with 7.SP.B as students use random sampling to draw inferences about a population and informal comparative inferences about two populations. • In Teacher Resource, Part 2, Unit 3, Lesson EE7-19, the materials connect 7.RP with 7.EE as students are expected to recognize and represent proportional relationships between quantities and rewrite expressions in different forms in a problem context to shed light on the problem and how the quantities in it are related. ### Criterion 1b Students and teachers using the materials as designed devote the large majority of class time in each grade K-8 to the major work of the grade. 4/4 + - Criterion Rating Details The instructional materials reviewed for JUMP Math Grade 7 meet expectations for students and teachers using the materials as designed and devoting the majority of class time to the major work of the grade. Overall, instructional materials spend approximately 73 percent of class time on the major clusters of the grade. ### Indicator 1b Instructional material spends the majority of class time on the major cluster of each grade. 4/4 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 meet expectations for spending the majority of class time on the major clusters of each grade. Overall, approximately 73 percent of class time is devoted to major work of the grade. The materials for Grade 7 include 15 units. In the materials, there are 168 lessons, and of those, 29 are Bridging lessons. According to the materials, Bridging lessons should not be “counted as part of the work of the year” (page A-56), so the number of lessons examined for this indicator is 139 lessons. The supporting clusters were also reviewed to determine if they could be factored in due to how strongly they support major work of the grade. There were connections found between supporting clusters and major clusters, and due to the strength of the connections found, the number of lessons addressing major work was increased from the approximately 96 lessons addressing major work as indicated by the materials themselves to 102 lessons. Three perspectives were considered: the number of units devoted to major work, the number of lessons devoted to major work, and the number of instructional days devoted to major work including days for unit assessments. The percentages for each of the three perspectives follow: • Units – Approximately 67 percent, 10 out of 15; • Lessons – Approximately 73 percent, 102 out of 139; and • Days – Approximately 73 percent, 112 out of 154. The number of instructional days, approximately 73 percent, devoted to major work is the most reflective for this indicator because it represents the total amount of class time that addresses major work. ## Rigor & Mathematical Practices #### Partially Meets Expectations + - Gateway Two Details The instructional materials reviewed for JUMP Mathematics Grade 7 partially meet expectations for Gateway 2. The instructional materials partially meet expectations for rigor by developing conceptual understanding of key mathematical concepts, giving attention throughout the year to procedural skill and fluency, and spending some time working with routine applications. The instructional materials do not always treat the three aspects of rigor together or separately, but they do place heavier emphasis on procedural skill and fluency. The instructional materials partially meet expectations for practice-content connections. Although the instructional materials meet expectations for identifying and using the MPs to enrich mathematics content, they partially attend to the full meaning of each practice standard. The instructional materials partially attend to the specialized language of mathematics. ### Criterion 2a - 2d Rigor and Balance: Each grade's instructional materials reflect the balances in the Standards and help students meet the Standards' rigorous expectations, by helping students develop conceptual understanding, procedural skill and fluency, and application. 6/8 + - Criterion Rating Details The instructional materials reviewed for JUMP Mathematics Grade 7 partially meet expectations for rigor by developing conceptual understanding of key mathematical concepts, giving attention throughout the year to procedural skill and fluency, and spending some time working with routine applications. The instructional materials do not always treat the three aspects of rigor together or separately, but they do place heavier emphasis on procedural skill and fluency. ### Indicator 2a Attention to conceptual understanding: Materials develop conceptual understanding of key mathematical concepts, especially where called for in specific content standards or cluster headings. 2/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 meet expectations for developing conceptual understanding of key mathematical concepts, especially where called for in specific standards or cluster headings. The materials include lessons designed to support students’ conceptual understanding. Examples include: • Teacher Resource, Part 1, Unit 5, Lesson RP7-22, Extensions, Item 1, “Jake makes a 2.5% commission on the sale of a $30 item. How much money does he make in commission? Justify your answer.” In the extensions for this lesson problems exist where students use different forms. • Teacher Resource, Part 1, Unit 5, Lesson RP7-16, Exercises, Item 2, “a. Check that $$\frac{1}{20}$$ = 0.05 b. Use the fact that $$\frac{1}{20}$$ = 0.05 to write the fraction as a decimal. i. $$\frac{2}{20}$$ ii. $$\frac{13}{20}$$ iii. $$\frac{7}{20}$$” • Teacher Resource, Part 1, Unit 3, Lesson EE7-2, Exercises, Item 1, “Use the commutative property of multiplication to complete the equation. a. (9-7) x (3+4) = ____ b. (8-5) x (8÷4)= ____” Conceptual understanding is built with this lesson. • Teacher Resource, Part 2, Unit 3, Lesson EE7-15, Exercises, Item 1, “Move all the variable terms to the left side and all the constant terms to the right side. a. 3x + 3 - x = 5.” ### Indicator 2b Attention to Procedural Skill and Fluency: Materials give attention throughout the year to individual standards that set an expectation of procedural skill and fluency. 2/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 meet expectations for attending to those standards that set an expectation of procedural skill and fluency. The materials place an emphasis on fluency, giving many opportunities to practice standard algorithms and to work on procedural knowledge. Cluster 7.NS.A develops procedural skill in completing addition, subtraction, multiplication, and division with rational numbers. Examples include: • Teacher Resource, Part 1, Unit 2, Lesson NS7-5, Exercise, Item e, “(+2) + (-3) Do the previous exercises without using a number line. Make sure you get all the same answers.” Students first use the number line to add integers and then apply noticed patterns to addition problems without a number line. • Teacher Resource, Part 1, Lesson NS7-14, Extensions, Item 5, “Lynn says that -4$$\frac{1}{5}$$+ 3$$\frac{2}{5}$$= -1$$\frac{3}{5}$$ because -4 + 3 = -1 and 1 + 2 = 3. Do you agree with Lynn? Why or why not?” Students apply the patterns associated with adding and subtracting integers to add and subtract decimals and fractions that contain negatives. • Student Resource, Assessment & Practice Book, Part 1, Lessons NS7-27, Item 2a, “-8 x 5 = 0 - ____=_____.” Students use the distributive property to further understand multiplying integers. Standard 7.EE.1 expects students to use procedural skills in developing equivalent, linear expressions with rational coefficients through addition, subtraction, factoring, and multiplication. Examples include: • Teacher Resource, Part 1, Unit 3, Lesson EE7-9, Exercises,“Write an equivalent expression without brackets. Then simplify your expression. a. 3x − (5 + 6x) b. 5x + 4 − (2x + 9).” Students simplify expressions by combining like terms using properties of operations. • Teacher Resource, Part 1, Unit 3, Lesson EE7-11, Exercises, “Simplify. a. 3x − 2(x + 5).” Students use pictures and area models to write equivalent expressions that involve multiplication and factoring. Standard 7.EE.4 expects students to develop procedural skill in constructing and solving linear equations in the form px+q=r or p(x + q)=r, and inequalities in the form px+q>r and px+q<r. Examples include: • Teacher Resource, Part 2, Unit 3, Lesson EE7-14, Exercises, “Solve the equation in two steps. a. 3x + (-5) =13 b. (-4)y - (-2) = 34 c. (-4) + 9z =14” Students undo operations to solve equations with rational numbers. Students solve many problems including, one-step equations, two-step equations, equations using the distributive property, and equations with complex fractions involving cross multiplying. • Teacher Resource, Part 2, Unit 3, Lesson EE7-23, Exercises, Item 1, “Write the description using symbols. a. x is 16 or more b. x is 25 or less c. x is -0.5 or less.” Students use symbols to write inequalities to represent conditions and show solutions on a number line. • Teacher Resource, Part 2, Unit 3, Lesson EE7-25, Exercises, “Write an inequality to represent the weights on a balance.” Pictures of balances are shown with different weights. Students use a balance model to solve inequalities. ### Indicator 2c Attention to Applications: Materials are designed so that teachers and students spend sufficient time working with engaging applications of the mathematics, without losing focus on the major work of each grade 1/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 partially meet expectations for being designed so that teachers and students spend sufficient time working with engaging applications of mathematics without losing focus on the major work of each grade. Overall, many of the application problems are routine in nature and replicate the examples given during guided practice, and problems given for independent work are heavily scaffolded. Examples include: • Book 2 Unit 2 has limited real-world problems for students to solve. The focus of the unit is on learning specific algorithms to solve ratio problems. • Teacher Resource, Part 2, Unit 2, Lesson RP7-27, Exercises, “Write a ratio with whole-number terms. a. $$\frac{3}{8}$$ of a pizza for every 3 people.” However, none of the independent student work includes real-world scenarios. (7.RP.A) Students use fractional ratios and write equivalent whole number ratios. • Teacher Resource, Part 2, Unit 2, Lesson RP7-28, Extensions, Item 2, “John skates $$\frac{7}{2}$$ km in 10 minutes and bikes $$\frac{19}{4}$$ km in 12 minutes. Does he skate or bike faster?” (7.RP.A) Students are using proportional relationships to solve the problem. • Teacher Resource, Part 1, Unit 7, Lesson NS7-28, Exercises, “Write a multiplication equation to show the amount of change. a. Ted gained$10 every hour for 5 hours.” (7.NS.3) Students are given a variety of real-world contexts and are asked to write expressions and equations for each context. Students are also asked to solve equations using multiplication and addition. • Teacher Resource, Part 2, Unit 1, Lesson NS7-32, Exercises, “Will the recipe turn out? a. I’m making 5 $$\frac{1}{2}$$ batches of gravy, and each batch needs $$\frac{3}{8}$$ cup of flour. I use 2 cups of flour.” (7.NS.3) Students are using the four operations with rational numbers to solve problems. However, students are presented with few opportunities to solve real-world problems involving the four operations of rational numbers. When real-world problems are given, students are encouraged to follow the given examples and the problems do not have room for multiple strategies. • Teacher Resource, Part 1, Unit 5, Lesson RP7-22, Exercises, “The amount of tax is 5%. Multiply the original price by 1.05 to calculate the price after taxes. a. a $30 sweater b. a$12 CD.” (7.EE.3) The application questions follow given examples closely. For example, students solve percentage increase problems by being shown the structure of the problems before this set of exercises. • Teacher Resource, Part 1, Unit 8, Lesson PS7-9, Exercises, “Which part in the exercises above has the same answer as the given problem? a. 40% of students in the class are boys. Students are picked at random once a week for five weeks. Estimate the probability that a boy will be chosen in at least two consecutive weeks. b. 40% of blood donors have Type O blood. What is the probability that none of the first six donors asked have Type O blood?” (7.SP.8) The application questions follow given examples closely. Non-routine problems are occasionally found in the materials. For example: • In Book 1, Unit 1 Lesson RP7-11, Extensions, Item 5, “Raj mixes 3 cups of white paint with 1 cup of blue paint. He meant to mix 1 cup of white paint with 3 cups of blue paint. How much blue paint does he need to add to get the color he originally wanted?” (7.RP.A) Students are shown how to use ratio tables to help them solve problems with proportional relationships. ### Indicator 2d Balance: The three aspects of rigor are not always treated together and are not always treated separately. There is a balance of the 3 aspects of rigor within the grade. 1/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 partially meet expectations that the three aspects of rigor are not always treated together and are not always treated separately. All three aspects of rigor are present in the materials, but there is an over-emphasis on procedural skills and fluency. The curriculum addresses conceptual understanding, procedural skill and fluency, and application standards, when called for, and evidence of opportunities where multiple aspects of rigor are used to support student learning and mastery of the standards. There are multiple lessons where one aspect of rigor is emphasized. The materials emphasize fluency, procedures, and algorithms. Examples of conceptual understanding, procedural skill and fluency, and application presented separately in the materials include: • Conceptual Understanding: Teacher Resource, Part 1, Unit 2, Lesson NS7-5, Exercises, “Add using a number line. a. (-4) + (+1).” Students are introduced to adding integers using a number line. In the guided practice of the teachers edition, cut out arrows are moved around a number line drawn on the board to show students how adding negative numbers is done on a number line. • Application: Teacher Resource, Part 2, Unit 2, Lesson RP7-35, Exercise, “A shirt cost $25. After taxes, it costs$30. What percent of the original price are the taxes?” Students use contexts to learn to cross multiply to arrive at an equation and then solve the equation. • Procedural Skill and Fluency: In Part 2, Unit 5, Lesson NS7-47, Extensions, Item 1, “Investigate if the estimate is more likely to be correct when the divisor is closer to the rounded number you used to make your estimate. For example, when the divisor is 31 rounded to 30, is your estimate more likely to be correct than when the divisor is 34 rounded to 30? Try these examples: 31⟌243, 31⟌249, 31⟌257, 31⟌265, 31⟌274, 34⟌243, 34⟌249, 34⟌257, 34⟌265, 34⟌274.” Students are given opportunities to develop fluency with division with rational numbers. Examples of where conceptual understanding, procedural skill and fluency, and application are presented together in the materials include: • Student Resource, Student Assessment and Practice Book, Part 2, Lesson G7-24, Item 9, “The dimensions of a cereal box are 7 $$\frac{7}{8}$$ inches by 3 $$\frac{1}{3}$$ inches by 11 $$\frac{4}{5}$$ inches. What is the volume of the cereal box in cubic inches?” This problem has students using application and procedural skill and fluency using the formula to solve the word problem. • Teacher Resource, Part 1, Lesson RP7-16, Exercises, “Draw models to multiply. a. 2 × 4.01 b. 3 × 3.12” develops conceptual understanding of multiplying decimals by modeling the multiplication while using procedural fluency. • Teacher Resource, Part 2, Lesson G7-18, Exercises, Item a, “The area of an Olympic ice rink is 1,800 m$$^2$$. A school builds an ice rink to the scale (Olympic rink) : (school rink) = 5 : 4. What is the area of the school rink?” Students develop procedural fluency when they practice calculating areas given scales while solving application problems. ### Criterion 2e - 2g.iii Practice-Content Connections: Materials meaningfully connect the Standards for Mathematical Content and the Standards for Mathematical Practice 6/10 + - Criterion Rating Details The instructional materials reviewed for JUMP Math Grade 7 partially meet expectations for practice-content connections. Although the instructional materials meet expectations for identifying and using the MPs to enrich mathematics content, they partially attend to the full meaning of each practice standard. The instructional materials partially attend to the specialized language of mathematics. ### Indicator 2e The Standards for Mathematical Practice are identified and used to enrich mathematics content within and throughout each applicable grade. 2/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 meet expectations for identifying the Standards for Mathematical Practice and using them to enrich mathematics content within and throughout the grade level. All 8 MPs are clearly identified throughout the materials, with few or no exceptions. Examples include: • The Mathematical Practices are identified at the beginning of each unit in the “Mathematical Practices in this Unit.” • “Mathematical Practices in this Unit” gives suggestions on how students can show they have met a Mathematical Practice. For example, in Teacher Resource, Part 1, Unit 8, Mathematical Practices in this Unit, “MP.1: SP7-2 Extension 2, SP7-9 Extension 2.” • “Mathematical Practices in this Unit” gives the Mathematical Practices that can be assessed in the unit. For example, in Teacher Resources, Part 2, Unit 7, Mathematical Practices in this Unit, “In this unit, you will have the opportunity to assess MP.1 to MP.4 and MP.6 and MP.8.” • The Mathematical Practices are also identified in the materials in the lesson margins. • In optional Problem Solving Lessons designed to develop specific problem-solving strategies, MPs are identified in specific components/ problems in the lesson. ### Indicator 2f Materials carefully attend to the full meaning of each practice standard 1/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 partially meet expectations for carefully attending to the full meaning of each practice standard. The materials do not attend to the full meaning of MPs 4 and 5. Examples of the materials carefully attending to the meaning of some MPs include: • MP2: Teacher Resource, Part 1, Unit 3, Lesson EE7-6, Extensions, Item 3, “a. Sketch a circle divided into the following fractions. i. thirds, ii. fourths, iii. fifths. b. Evaluate the expression 360x for i. x = $$\frac{1}{3}$$, ii. x = $$\frac{1}{4}$$, iii. x = $$\frac{1}{5}$$, c. Use your answers to b and a protractor to check the accuracy of your sketches in a.” In this question, students take the quantitative work with the sketches of circles and connect it to the abstract work of evaluating expressions. • MP2: Teacher Resource, Part 2, Unit 5, Lesson NS7-46, Extensions, Item 3, “Bev made a grape drink by mixing $$\frac{1}{3}$$ cup of ginger ale with $$\frac{1}{2}$$ cup of grape juice. She used all her ginger ale, but she still has lots of grape juice. She wants to make 30 cups of the grape drink for a party. How many 355 mL cans of ginger ale does she need to buy?” In the solution, students get a remainder in their division, so they must interpret that remainder as needing to buy more cans. • MP6: Teacher Resource, Part 2, Unit 3, Lesson EE7-15, Extensions, Item 6, “Clara’s Computer Company is making a new type of computer and Clara wants to advertise it. A 30-second commercial costs $1,500,000. Clara plans to sell the computer at a profit of$45.00. Clara determines that 8,600,000 people watched the commercial. a. What percentage of people who watched the commercial would have to buy the product to pay for the price of the commercial? Show your work using equations. Say what each equation means in the situation. b. What facts did you need to use to do part a? c. What place value did you round your answer in part a? Explain your choice. d. Do you think the commercial was a good idea for Clara? Explain.” Students attend to precision throughout the problem to determine if the commercial was a good idea. • MP6: Teacher Resource, Part 1, Unit 2, Lesson NS7-2, Extensions, Item 5, “Liz has red, blue, and white paint in the ratio 3:2:1. She mixes equal parts of all three colors to make light purple paint. If she uses all her white paint, what is the ratio of red to blue paint that she has leftover? Use a T-table or a tape diagram with clear labels.” MP6 is developed as students are encouraged to use clear labels in models to ensure they can understand their calculations. This would help students be precise with their ratio calculations. • MP7: Teacher Resource, Part 1, Unit 2, Lesson NS7-3, Extensions, Item 2, “Look for shortcut ways to add the gains and losses. a. -4 - 5 - 6 +7 +8 + 9.” Students are shown how to group numbers together to make the addition easier, looking for addends that combine to make 10, and looking for opposites to cancel out. Students use structure to complete the problem. • MP7: Teacher Resource, Part 2, Unit 1, Lesson NS7-37, Extensions 2, “Without doing the division, which do you expect to be greater? -21,317.613 ÷ $$\frac{1}{2}$$ or -21,317.613 ÷ $$\frac{3}{5}$$? Explain.” Students use the structure of dividing by fractions to help them reason about which answer would be greater. Examples of the materials not carefully attending to the meaning of MPs 4 and 5 include: • MP4: Teacher Resource, Part 1, Unit 5, Lesson RP7-19, Extensions, Item 4, “Ethan bought a house for $80,000. He spent$5,000 renovating it. Two years after he bought the house, the value increased by 20%. If he sells the house, what would his annual profit be, per year?” Because students are working very similar problems before this set of problems, students do not model with mathematics. • MP4: Teacher Resource, Part 2, Unit 6, Lesson G7-25, Exercise, Item 1, “The base of a free-standing punching bag is an octagon. The area of the base is 3.5 ft$$^2$$ and the height is 3 ft. a. What is the volume of the punching bag? b. A 30 kg bag of sand fills $$\frac{2}{3}$$ ft$$^2$$. How many bags of sand do you need to fill the punching bag?” Because students are working very similar problems before this set of problems, students do not model with mathematics. • MP5: Teacher Resource, Part 1, Unit 2, Lesson NS7-2, Extensions, Item 5, “Use a T-table or a tape diagram with clear labels. Which was faster?...What does the tape diagram show you that the T-table does not?” Students are told which tools to use. • MP5: Teacher Resource, Part 2, Unit 6, Lesson G7-24, Exercises, “a. Find the volume of the prism in three different ways.” A picture of a rectangular prism with sides of 13 cm, 2 cm, and 5 cm is shown. “b. Which way is the easiest to calculate mentally? Solutions: a. 13 x 2 x 5 = 26 x 5 = 130, so V = 130 cm$$^3$$, 13 x 5 x 2 = 65 x 2 = 130, so V = 130 cm$$^3$$, 5 x 2 x 13 = 130, so V = 130 cm$$^3$$; b) 5 cm x 2 cm x 13 cm is the easiest to calculate because 5 x 2 = 10 and it is easy to multiply by 10.” This problem has students deciding which way to multiply the numbers is easiest, which does not require the use of tools. ### Indicator 2g Emphasis on Mathematical Reasoning: Materials support the Standards' emphasis on mathematical reasoning by: 0/0 ### Indicator 2g.i Materials prompt students to construct viable arguments and analyze the arguments of others concerning key grade-level mathematics detailed in the content standards. 1/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 partially meet expectations for prompting students to construct viable arguments and analyze the arguments of others concerning key grade-level mathematics detailed in the content standards. Students explain their thinking, compare answers with a partner, or understand the error in a problem. However, this is done sporadically within extension questions, and often the materials identify questions as MP3 when there is not an opportunity for students to analyze situations, make conjectures, and justify conclusions. At times, the materials prompt students to construct viable arguments and critique the reasoning of others. Examples that demonstrate this include: • Teacher Resource, Part 2, Unit 1, Lesson NS7-37, Extensions, Item 2, “a. Without doing the division, which do you expect to be greater, −21,317,613 ÷ $$\frac{1}{2}$$ or −21,317,613 ÷ $$\frac{3}{5}$$? Explain. b. In pairs, explain your answers to part a. Do you agree with each other? Discuss why or why not. Use math words.” • In Teacher Resource, Part 2, Unit 3, Lesson EE7 -17, Extensions, Item 4, students complete the following problem: “a. After the first two numbers in a sequence, each number is the sum of all previous numbers in the sequence. If the 20th term is 393,216, what is the 18th term? Look for a fast way to solve the problem. b. In pairs, explain why the way you chose in part a works. Do you agree with each other? Discuss why or why not.” • In Teacher Resources, Part 2, Unit 5, Lesson NS7-49, Extensions, Item 3, students are told, “Eddy painted a square wall. Randi is painting a square wall that is twice as wide as the wall that Eddy painted. Eddy used 1.3 gallons of paint. Randi says she will need 2.6 gallons of paint because that is twice as much as Eddy needed and the wall she is painting is twice as big. So you agree with Randi? Why or Why not?” In questions where students must explain an answer or way of thinking, the materials identify the exercise as MP3. As a result, questions identified as MP3 are not arguments and not designed to establish conjectures and build a logical progression of a statement to explore the truth of the conjecture. Examples include: • Teacher Resource, Part 1, Unit 3, Lesson EE7-2, Extensions, Item 4, “Which value for w makes the equation true? Justify your answers. a. 2 x 5 - w x 2 b. w x 6 = 6 x 3” • Teacher Resource, Part 1, Unit 8, Lesson SP7 - 1, Extensions, Item 4, “Sam randomly picks a marble from a bag. The probability of picking a red marble is $$\frac{2}{5}$$. What is the probability of not picking red? Explain.” • Teacher Resource, Part 2, Unit 3, Lesson EE7 -28, Extensions, Item 3, “a. The side lengths of a triangle are x, 2x +1, and 10. What can x be? Justify your answer. b. In pairs, explain your answers to part a. Do you agree with each other? Discuss why or why not.” • Many MP3 problems in the extension sections follow a similar structure. Students are given a problem and “explain.” Then, students compare their answers with a partner and discuss if they agree or not. This one dimensional approach does not offer guidance to students on how to construct an argument or critique the reasoning of others. For example, Teacher Resource, Part 2, Unit 7, Lesson SP7-14, Extensions, Item 4, “a. What shape is the cross section of the cube? Explain how you know using math words. b. In pairs, discuss your answers to part a. Do you agree with each other? Discuss why or why not.” • Students are given extension questions when they are asked to analyze the math completed by a fictional person. For example, Teacher Resource, Part 1, Unit 1, Lesson RP7-7, Extensions, Item 4, students are asked to determine if another student is correct with their reasoning. “Two whole numbers are in the ratio 1 : 3. Rob says they cannot add to an odd number. Is he right? Explain.” These problems begin to develop students’ ability to analyze the mathematical reasoning of others but do not fully develop this skill. Students analyze an answer given by another, but do not develop an argument or present counterexamples. ### Indicator 2g.ii Materials assist teachers in engaging students in constructing viable arguments and analyzing the arguments of others concerning key grade-level mathematics detailed in the content standards. 1/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 partially meet expectations for assisting teachers in engaging students in constructing viable arguments and analyzing the arguments of others concerning key grade-level mathematics detailed in the content standards. Some guidance is provided to teachers to initiate students in constructing arguments and critiquing others; however, the guidance lacks depth and structure, and there are multiple missed opportunities to assist students to engage in constructing and critiquing mathematical arguments. The materials have limited support for the teacher to develop MP3. Generally, the materials encourage students to work with a partner as a way to construct arguments and critique each other. In the teacher information section, teachers are provided with the following information: • Page A-14: “Promote communication by encouraging students to work in pairs or in small groups. Support students to organize and justify their thinking by demonstrating how to use mathematical terminology symbols, models and manipulatives as they discuss and share their ideas. Student grouping should be random and vary throughout the week.” The material provides no further guidance on thoughtful ways to group students and only limited structures that would encourage collaboration. • Page A-49: “Classroom discussion in the lesson plans include prompts such as SAY, ASK and PROMPT. SAY is used to provide a sample wording of an explanation, concept or definition that is at grade level, precise, and that will not lead to student misconceptions. ASK is used for probing questions, followed by sample answers in parentheses. Allow students time to think before providing a PROMPT, which can be a simple re-wording of the question or a hint to guide students in the correct direction to answer the question….You might also have students discuss their thinking and explain their reasoning with a partner, or write down their explanations individually. This opportunity to communicate thinking, either orally or in writing, helps students consolidate their learning and facilitates the assessment of many Standards for Mathematical Practice.” While this direction would help teachers facilitate discussion in the classroom, it would not help teachers to develop student’s ability to construct arguments or critique the reasoning of others. • Page A-49: There are sentence starters that are referenced that mostly show teachers how to facilitate discussions among students. The materials state, “When students work with a partner, many of them will benefit from some guidance, such as displaying question or sentence stems on the board to encourage partners to understand and challenge each other’s thinking, use of vocabulary, or choice of tools or strategies. For example: • I did ___ the same way but got a different answer. Let’s compare our work. • What does ___ mean? • Why is ___ true? • Why do you think that ___ ? • I don’t understand ___. Can you explain it a different way? • Why did you use ___? (a particular strategy or tool) • How did you come up with ___? (an idea or strategy)” Once all students have answered the ASK question, have volunteers articulate their thinking to the whole class so other students can benefit from hearing their strategies” While this direction would help teachers facilitate discussion in the classroom, it would not help them to develop student’s ability to construct arguments or critique the reasoning of others. • A rubric for the Mathematical Practices is provided for teachers on page L-71. For MP3, a Level 3 is stated as, “Is able to use objects, drawings, diagrams, and actions to construct an argument” and “Justifies conclusions, communicates them to others, and responds to the arguments of others.” This rubric would provide some guidance to teachers about what to look for in student answers but no further direction is provided about how to use it to coach students to improve their arguments or critiques. • In the Math Practices in this Unit Sections, MP3 is listed multiple times. The explanation of MP3 in the unit often consists of a general statement. For example, in Teacher Resources, Part 1, Unit 3, the MP3 portion of the section states, “In EE7-7 Extension 3, students construct and critique arguments when they discuss in pairs the reasons why they agree or disagree with the statement 0 ÷ 0 = 1, and when they ask questions to understand and challenge each other’s thinking.” These explanations do not provide guidance to teachers to get students constructing arguments or critiquing the reasoning of others. There are limited times when specific guidance is provided to teachers for specific problems. Examples include: • Some guidance is provided to teachers for constructing a viable argument when teachers are provided solutions to questions labeled as MP3 in the extension questions. Some of these questions include wording that could be used as an exemplar response about what a viable argument is. For example, Teacher Resource, Part 1, Unit 1, Lesson RP7-10, Extensions, Item 6 students are asked to tell if the given quantities are in a proportional relationship. Teachers are provided with the sample solutions, “Sample solutions: a. The quantities are proportional. We made a table with headings “side length,” “area of square,” and “square of perimeter.” The ratio for area to square of perimeter was always 1 to 16, so the two quantities are proportional…” • In Teacher Resource, Part 2, Unit 3, Lesson EE7-17, Extensions, Item 4 students are asked in pairs to explain why the way they chose in part a) works. Students are asked if they agree with each other and to discuss why or why not. Answers and a teacher NOTE are provided: “NOTE: In part b), encourage partners to ask questions to understand and challenge each other’s thinking (MP.3)—see page A-49 for sample sentence and question stems." Frequently, problems are listed as providing an opportunity for students to engage in MP3, but miss the opportunity to give detail on how a teacher will accomplish this. Examples include: • In Teacher Resource, Part 1, Unit 4, Lesson NS7-22, Extensions, Item 3, students are given the following problem and asked to explain: “b. Len placed a table 1.23 m long along a wall 3 m long. If his bed is 2.13 m long, will it fit along the same wall? Explain.” The answer is provided but no guidance is provided to teachers to help students explain. • In Teacher Resource, Part 2, Unit 3, Lesson RP7-15, Extensions, Item 5, students are asked, “How would you shift the decimal point to divide by 10,000,000? Explain.” Teachers are given the sample response, “Move the decimal 7 places (because there are 7 zeros in 10,000,000) to the left (because I am dividing).” This is not facilitating the development of mathematical arguments. ### Indicator 2g.iii Materials explicitly attend to the specialized language of mathematics. 1/2 + - Indicator Rating Details The instructional materials reviewed for JUMP Math Grade 7 partially meet expectations for explicitly attending to the specialized language of mathematics. Accurate mathematics vocabulary is present in the materials; however, while vocabulary is identified throughout the materials, there is no explicit directions for instruction of the vocabulary in the teacher materials of the lesson. Examples include, but are not limited to: • Vocabulary is identified in the Terminology section at the beginning of each unit. • Vocabulary is identified at the beginning of each lesson. • The vocabulary words and definitions are bold within the lesson. • There is not a glossary. • There is not a place for the students to practice the new vocabulary in the lessons. ## Usability #### Not Rated + - Gateway Three Details This material was not reviewed for Gateway Three because it did not meet expectations for Gateways One and Two ### Criterion 3a - 3e Use and design facilitate student learning: Materials are well designed and take into account effective lesson structure and pacing. ### Indicator 3a The underlying design of the materials distinguishes between problems and exercises. In essence, the difference is that in solving problems, students learn new mathematics, whereas in working exercises, students apply what they have already learned to build mastery. Each problem or exercise has a purpose. N/A ### Indicator 3b Design of assignments is not haphazard: exercises are given in intentional sequences. N/A ### Indicator 3c There is variety in what students are asked to produce. For example, students are asked to produce answers and solutions, but also, in a grade-appropriate way, arguments and explanations, diagrams, mathematical models, etc. N/A ### Indicator 3d Manipulatives are faithful representations of the mathematical objects they represent and when appropriate are connected to written methods. N/A ### Indicator 3e The visual design (whether in print or online) is not distracting or chaotic, but supports students in engaging thoughtfully with the subject. N/A ### Criterion 3f - 3l Teacher Planning and Learning for Success with CCSS: Materials support teacher learning and understanding of the Standards. ### Indicator 3f Materials support teachers in planning and providing effective learning experiences by providing quality questions to help guide students' mathematical development. N/A ### Indicator 3g Materials contain a teacher's edition with ample and useful annotations and suggestions on how to present the content in the student edition and in the ancillary materials. Where applicable, materials include teacher guidance for the use of embedded technology to support and enhance student learning. N/A ### Indicator 3h Materials contain a teacher's edition (in print or clearly distinguished/accessible as a teacher's edition in digital materials) that contains full, adult-level explanations and examples of the more advanced mathematics concepts in the lessons so that teachers can improve their own knowledge of the subject, as necessary. N/A ### Indicator 3i Materials contain a teacher's edition (in print or clearly distinguished/accessible as a teacher's edition in digital materials) that explains the role of the specific grade-level mathematics in the context of the overall mathematics curriculum for kindergarten through grade twelve. N/A ### Indicator 3j Materials provide a list of lessons in the teacher's edition (in print or clearly distinguished/accessible as a teacher's edition in digital materials), cross-referencing the standards covered and providing an estimated instructional time for each lesson, chapter and unit (i.e., pacing guide). N/A ### Indicator 3k Materials contain strategies for informing parents or caregivers about the mathematics program and suggestions for how they can help support student progress and achievement. N/A ### Indicator 3l Materials contain explanations of the instructional approaches of the program and identification of the research-based strategies. N/A ### Criterion 3m - 3q Assessment: Materials offer teachers resources and tools to collect ongoing data about student progress on the Standards. ### Indicator 3m Materials provide strategies for gathering information about students' prior knowledge within and across grade levels. N/A ### Indicator 3n Materials provide strategies for teachers to identify and address common student errors and misconceptions. N/A ### Indicator 3o Materials provide opportunities for ongoing review and practice, with feedback, for students in learning both concepts and skills. N/A ### Indicator 3p Materials offer ongoing formative and summative assessments: N/A ### Indicator 3p.i Assessments clearly denote which standards are being emphasized. N/A ### Indicator 3p.ii Assessments include aligned rubrics and scoring guidelines that provide sufficient guidance to teachers for interpreting student performance and suggestions for follow-up. N/A ### Indicator 3q Materials encourage students to monitor their own progress. N/A ### Criterion 3r - 3y Differentiated instruction: Materials support teachers in differentiating instruction for diverse learners within and across grades. ### Indicator 3r Materials provide strategies to help teachers sequence or scaffold lessons so that the content is accessible to all learners. N/A ### Indicator 3s Materials provide teachers with strategies for meeting the needs of a range of learners. N/A ### Indicator 3t Materials embed tasks with multiple entry-points that can be solved using a variety of solution strategies or representations. N/A ### Indicator 3u Materials suggest support, accommodations, and modifications for English Language Learners and other special populations that will support their regular and active participation in learning mathematics (e.g., modifying vocabulary words within word problems). N/A ### Indicator 3v Materials provide opportunities for advanced students to investigate mathematics content at greater depth. N/A ### Indicator 3w Materials provide a balanced portrayal of various demographic and personal characteristics. N/A ### Indicator 3x Materials provide opportunities for teachers to use a variety of grouping strategies. N/A ### Indicator 3y Materials encourage teachers to draw upon home language and culture to facilitate learning. N/A ### Criterion 3aa - 3z Effective technology use: Materials support effective use of technology to enhance student learning. Digital materials are accessible and available in multiple platforms. ### Indicator 3aa Digital materials (either included as supplementary to a textbook or as part of a digital curriculum) are web-based and compatible with multiple internet browsers (e.g., Internet Explorer, Firefox, Google Chrome, etc.). In addition, materials are "platform neutral" (i.e., are compatible with multiple operating systems such as Windows and Apple and are not proprietary to any single platform) and allow the use of tablets and mobile devices. N/A ### Indicator 3ab Materials include opportunities to assess student mathematical understandings and knowledge of procedural skills using technology. N/A ### Indicator 3ac Materials can be easily customized for individual learners. i. Digital materials include opportunities for teachers to personalize learning for all students, using adaptive or other technological innovations. ii. Materials can be easily customized for local use. For example, materials may provide a range of lessons to draw from on a topic. N/A Materials include or reference technology that provides opportunities for teachers and/or students to collaborate with each other (e.g. websites, discussion groups, webinars, etc.). N/A ### Indicator 3z Materials integrate technology such as interactive tools, virtual manipulatives/objects, and/or dynamic mathematics software in ways that engage students in the Mathematical Practices. N/A abc123 Report Published Date: 2020/09/17 Report Edition: 2019 Title ISBN Edition Publisher Year Teacher Resource for Grade 7, New US Edition 978‑1‑77395‑082‑2 JUMP Math 2019 Student Assessment & Practice Book 7.1 978‑1‑927457‑47‑4 JUMP Math 2019 Student Assessment & Practice Book 7.2 978‑1‑927457‑48‑1 JUMP Math 2019 ## Math K-8 Review Tool The mathematics review criteria identifies the indicators for high-quality instructional materials. The review criteria supports a sequential review process that reflect the importance of alignment to the standards then consider other high-quality attributes of curriculum as recommended by educators. For math, our review criteria evaluates materials based on: • Focus and Coherence • Rigor and Mathematical Practices • Instructional Supports and Usability The K-8 Evidence Guides complements the review criteria by elaborating details for each indicator including the purpose of the indicator, information on how to collect evidence, guiding questions and discussion prompts, and scoring criteria. ## Math K-8 K‑8 Evidence Guide K‑8 Review Criteria The EdReports rubric supports a sequential review process through three gateways. These gateways reflect the importance of alignment to college and career ready standards and considers other attributes of high-quality curriculum, such as usability and design, as recommended by educators. Materials must meet or partially meet expectations for the first set of indicators (gateway 1) to move to the other gateways. Gateways 1 and 2 focus on questions of alignment to the standards. Are the instructional materials aligned to the standards? Are all standards present and treated with appropriate depth and quality required to support student learning? Gateway 3 focuses on the question of usability. Are the instructional materials user-friendly for students and educators? Materials must be well designed to facilitate student learning and enhance a teacher’s ability to differentiate and build knowledge within the classroom. In order to be reviewed and attain a rating for usability (Gateway 3), the instructional materials must first meet expectations for alignment (Gateways 1 and 2). Alignment and usability ratings are assigned based on how materials score on a series of criteria and indicators with reviewers providing supporting evidence to determine and substantiate each point awarded. Alignment and usability ratings are assigned based on how materials score on a series of criteria and indicators with reviewers providing supporting evidence to determine and substantiate each point awarded. For ELA and math, alignment ratings represent the degree to which materials meet expectations, partially meet expectations, or do not meet expectations for alignment to college- and career-ready standards, including that all standards are present and treated with the appropriate depth to support students in learning the skills and knowledge that they need to be ready for college and career. For science, alignment ratings represent the degree to which materials meet expectations, partially meet expectations, or do not meet expectations for alignment to the Next Generation Science Standards, including that all standards are present and treated with the appropriate depth to support students in learning the skills and knowledge that they need to be ready for college and career. For all content areas, usability ratings represent the degree to which materials meet expectations, partially meet expectations, or do not meet expectations for effective practices (as outlined in the evaluation tool) for use and design, teacher planning and learning, assessment, differentiated instruction, and effective technology use.
2021-04-11 10:27:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36676743626594543, "perplexity": 3168.841058051927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061820.19/warc/CC-MAIN-20210411085610-20210411115610-00068.warc.gz"}
http://artofmemory.com/wiki/Giordano_Bruno_System_(renaissance)
# Giordano Bruno System (renaissance) Making Images ## On the subject of the images Of course, this conceiving and impressing of the images of visible things is proven not to have happened like that which is practiced on other materials (plainly on the surfaces of their bodies), where the greater ones have greater bodies and the smaller ones have smaller. Indeed, on many a tablet or paper we depict many images; on a larger scale we paint large things and write large characters. Yet here is each undivided substance, which likewise conceives the figures and characters of many things, as well as of great ones. For instance, we conceive in the center of our eyes' pupils a whole forest of things at one undivided glance, and each thing's mass we contract and judge at one single probe. Not only the interior but, in a certain way, the more spiritual power of the soul receives these species and arranges them, existing in a phantastic spirit,1 and should be judged to be something individual, from the genus of light, illumination and the act of the perceptible thing and form, differing from external sight, which is formed by another alien light, because it is both light and sight at the same time, just as, proportionally, the sun's light is distinguished from the moon's. For, just as natural light is visible from without, the other the light phantasm is visible from within. Finally, eyesight differs from the seeing power of the internal spirit, as a mirror that sees should be distinguished from a mirror that does not see, but is only distinguished by him who represents it as a mirror informed and illuminated by itself, and which is both light and mirror at the same time, and in which a perceptible object is one with a percipient subject. This is a certain world and inlet, as it were, brimming over with forms and species, which contains not only the species of those things conceived externally according to their size and number but also adds, by virtue of the imagination, size to size and number to number. And, on the other hand, just as in nature innumerable species are composed of and coalesce out of small elements,2 so too by the action of this intrinsic cause not only are the forms of natural species preserved within this most ample inlet, but also they will be able to be multiplied there for the multiplication of the innumerable images conceivable beyond compare, just as when we figure winged centaurs from a man and a stag, or winged rational beings from a man and a horse and a bird,3 we can produce, by a similar mingling, the infinite from the countless, more ample than all the words which are composed by the various kinds of combination and coordination out of the numbered elements of many languages. 1For the term "phantastic," see Dedicatory Epistle fn. 8, above. 2While Bruno appears to be speaking about atoms, he more likely is referring to the Ramusian mnemonic system in which things were classified into their parts, these split into their own parts, and so on practically ad infinitum. Cf. Yates (1966). 3Here "rational" is an adjectival form of ratio, Bruno's term for conception, logic, etc., on which we have remarked above several times, so that the word is by no means "rational" in the colloquial sense but, rather, something achieved by a mental process. Bruno's centaurs are not, then, those winged creatures, part man and part horse, which, in classical mythology, drew the chariot of Dionysus or were ridden by Eros. He is not, after all, using them out of belief or even for mythic purposes here.—Rather, he is arguing that they are a mental fusion, as would be the unified mind-picture of man+horse+bird, using this as an example to suggest that all sorts of other imagistic fusions are likewise possible, in fact, an infinite variety of such beings. Cf. i-1-8 and i-1-10. ## Images of the Roman God's A is Jove's attendants and surrounders, Mercury's winged shoes, Sol's attendants. В is Jove's palace, Mercury's wings, the Demiurge. С is Luna's waning, Saturn's death, those against attending Sol, those against Luna's contrasts. D is Cupid's right-hand things. E is Cupid's left-hand things. F is the Courts of Mars and Saturn, those that shun Æsculapius, those that shun Arion, Jove's left hand. G is Æsculapius, Mercury's court, the contraries to Saturn's court, and other species according to Nature. H is those that attend Jove, the Crown,2 the eye of Mercury; in the second rank, Mercury's right hand. H is those that are against the fisher, falsity and Mercury's left hand.3 I is Saturn's doubt, against those that assist Jove, Mercury's left hand, contraries to the eye, against those that assist Pallas, the first and second shunners, the waning of Luna. К is Intent, Influence, winged shoes, wings, Mercury's attend ants, flying.4 L is Cupid, Cupid's right hand and wings, with the caduceus and Mercury's eye. M is Mercury's attendants and right hand. N is the First Ass of Cyllenius.5 О is Mercury's right hand. P is Mercury's left hand's first, second, third and fourth fingers.6 Q is the Demiurge, with Pallas's handmaidens and Mercury's wings. R is Sol's attendants of the first and second ranks, Luna's waxing and others according to their ways of containing, and the contraries to Saturn's poverty, and Cupid's swift flight. S is Æsculapius, Orpheus, Jupiter, Apollo, Sol, contrary Saturn, Mars, Luna's waning. T is the court of Venus and Cupid V is Luna's waxing, and others according to the different appearances of their size, as knowledge from the house of Pallas and Mercury, or wealth from the house of Sol and Pluto. X is the waning of Luna and the conditions from Saturn's court with the suppression of Mars. Y is Jove's throne and what is in his region, and other species, as in Mercury the speaker, Sol the possessor, or Venus the lovely. Z is those that assist Sol according to species or Luna's contrasts. 1Bruno has Pro., which ap$Insert formula here$pears to mean "proposed" here. Also, for the significance of the word chart and alphabet, see our introduction to this work. 2The "crown" seems to refer to Jupiter's crown. 3In the 1591 edition this entry repeats the "H" from the above paragraph, and seems to be a simple continuation of it. 4Bruno has volat.. 5Cyllene is the mountain where Mercury was born, sometimes also described as his mother. Hence "Cyllenian" is often an epithet for Mercury. However, there is no association of a donkey with Mercury. It could be that we have another reference here to Bruno's lost donkey satire. Cf. ii-6 fn. 80. 6Bruno does not specify what these ordinal numbers refer to. ## Bruno's Alphabet for memorizing verses: [A] Apistos has a man dressed in black vestments with red hair. Ornamented with a beard he holds a fire constantly in his hand. (This probably is a term for asbestos, mentioned in antiquity in Pliny Natural History, 37.10.54 par. 146. It was also associated with Arcadia.) [B] Baroptenus likewise has someone wearing black clothes, who has a chest dyed with blood and snow in his hands, as he walks upon the waters. (The term may be barippe, an unknown precious stone, black with red and white spots, mentioned in Pliny 37.10.55 par. 150.) [C] Brontia has Jove striking Prometheus with lightning. (Or brontea, "thunder-stone," possibly a meteorite.) [D] Chelidonia has a teen-age girl dressed in purple who is freckled with black spots. (Celandine, "swallow-spice," mentioned in Pliny 25.8.50 par. 154. It was fabled to be found in a swallow's stomach.) [E] Cinedia shows a very old fish of that name riding upon the waters, that also holds a diviners' rod. (Cinædia, an unknown precious stone found in the brain, it was supposed, of the fish cinædus, mentioned in Pliny 37.10.56 par. 153 and 33.11.53 par. 146. Cf. i-2-13 Fields fn. 31.) [F] Chbrites contains an image of Circe in an iron chariot who is gutting a scylla bird. (Chlorite, a precious stone, grass green in color, possibly the same as smasragdoprasus, mentioned in Pliny in 37.10.56 par. 156. In Hyginus the scylla is a fish in Fabulæ 198, but in Ovid Metamorphoses 8.151, and in all other known citations, Scylla is a bird. Scylla and Circe were bitter love rivals, per Ovid Metamorphoses 7.65, 13.732 and 14.1-764.) [G] Dracontias shows someone holding a sliced-off dragon's hand in his left hand, a gem in his right. (Dracontias is an unknown precious stone mentioned in Pliny 37.10.58 par. 160. Its name comes from the genitive of draco ("dragon"), hence Bruno's image.) [H] Eumetris shows Mercury or Apollo standing next to a girl who is asleep with her head placed on flint The god is dressed in clothes dyed in a hue midway between green and white. (Eumetris is an unknown precious stone mentioned in Pliny 37.10.88 par. 160. Its name suggests "well-measured" or "well-proportioned;" thus its associations here. Cf. i-2-13 Fields fn. 46.) [T] Galactites shows a woman nursing, white-hued and generally in white clothes. Gasidane shows a pregnant woman crowned with flowers.(Galactites is milk-stone, an otherwise unknown precious stone mentioned in Pliny 37.10.59 par.162. Gasidane (Gassinades) is a semiprecious stone mentioned in Pliny 37.10.59 par. 193.) [K] Glossopetra shows Philomela who is fleeing Diana. (This is tongue-stone, a stone in the shape of a human tongue, mentioned in Pliny 37.10.59 par.193; the story of Philomela transformed into a nightingale appears in Ovid Metamorphoses 6.668.) [L] Gorgonia shows Perseus changing men into stones when he shows them Medusa's head. (Gorgonia is "gorgon-stone," so called because it hardens when taken from water and exposed to air. It is a form of coral, and is mentioned in Pliny 37.10.59 par. 164.) [M] Heliotropium shows a boy who has red hair in greenish muslin who is staring at the sun. Around him are clouds. (This is girasole (It.) or heliotrope ("turnsole"), mentioned in Pliny, 2.41.41 par. 109 and 22.21.29 par. 57.) [N] Hephestites shows a gleaming girl in a black cloak, crowned with a golden wheel, who, with her right hand, is extinguishing a fire's flame with water, while holding a mirror in her left one.(Hephæstitis is "vulcan-stone," an unknown precious stone mentioned in Pliny 37.10.60 par. 166.) [O] Hamonites shows a youth, dressed in a saffron robe with ram's horns, sleeping under a laurel tree.(This precious stone, mentioned in Pliny 37.10.60 par. 197, comes from the term Hammonis cornu (Amnion's horn); it is typically shaped like a ram's horn and is associated with the Egyptian ramheaded god, Ammon, identified with Zeus by the Greeks and Jupiter by the Romans. Cf. i-2-13 Fields fn. 38.) [P] Hienia shows a young man holding a rod in his right hand and putting an animal's eye in his mouth. A top his head a crow sits.(This is an unknown precious stone mentioned by Pliny at 32.11.54 par. 154; its name suggests Hyænia gemma (hyena stone).) [Q] Liparis shows a woman who gives off a scent. To her vapor the forms of wild animals come running. (Liparis is an unknown precious stone is mentioned by Pliny in 37.10.62 par. 62. Its name suggests a connection with the Islands of Lipari in the Tyrrhenian Sea off Sicily, which include Vulcano, Stromboli and Lipari itself. In ancient times they were known as the Æoliæ Insulæ (Islands of Æolius, the wind god).) [R] Mitrax contains an effigy of Venus or Flora in a multicolored field. (This Persian precious stone, Mithrax, is connected somehow with Mithra, the Persian sun god, perhaps because of its brilliant color. It is mentioned in Pliny 37.10.63 par. 173. Flora is a Roman goddess of flowering plants, a fertility goddess.) [S] Meroctes shows a shepherd sprinkling milk on his face. (We have been unable to discover anything concerning this stone.) [T] Nebrides carries Bacchusls image. It has the same shape as Brontia. (This stone, Nebrides, was a precious stone sacred to Dionysus. The name comes from nebris, the fawnskin worn by the Bacchæ at the Bacchanalia. The name suggests a brown, dappled gem, probably more properly a semi-precious stone. It is mentioned in Pliny 37.10.64 par. 175.) [U] Orites shows a salamander in a fire.(The name suggests oritis (mountain-stone) or perhaps sideritis (star- or steel-stone); it is an unknown semi-precious stone mentioned in Pliny 37.10.65 par. 176. Cf. i-2-13 Fields fn. 27.) [X] Paneres holds Venus's image.(Paneres or panerastos is a precious stone with the supposed property of making fruitful, mentioned in Pliny 37.10.66 par. 178.) [Y] Peantis has the figure of Lucina, who assists a woman in childbirth. (The stone peanitis takes its name from Рæаn the Healer, an epithet for Apollo; see C. Julius Solinus, a Latin grammarian of the third century AD, in chapter nine of his book on grammar. Cf. Pliny 37.10.66 par. 180.) [Z] Alectorius has a soldier disembowelling a rooster's belly. (This must be alectoria gemma (rooster gem), a gem found in a rooster's mouth. See Pliny 37.10.54 par. 144.)
2017-06-27 08:56:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5595527291297913, "perplexity": 10583.000578637486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321306.65/warc/CC-MAIN-20170627083142-20170627103142-00631.warc.gz"}
http://soleadea.org/cfa-formulas-practice-mode
# CFA Formulas App Using this app, you can learn your CFA level 1 & level 2 formulas anytime and anywhere. Now – the Formulas App is in its trial mode – which means only a few formulas get generated. To gain unlimited access to our Free CFA Formulas App you need to set up your account. ## Free CFA Formulas Appinside Your Free Personalized Study Plan: (4 mini-steps only) Own-Price Elasticity of Demand Click to show formula $E^{d}_{p_{x}} = \frac{\%\Delta Q^{d}_{x}}{\%\Delta P_{x}}\ or\ E^{d}_{p_{x}} = (\frac{\Delta Q^{d}_{x}}{\Delta P_{x}})\times (\frac{P_{x}}{Q^{d}_{x}})$ • $E^{d}_{p_{x}}$ – own-price elasticity of demand • $Q^{d}_{x}$ – quantity demanded of good "X" • $P_{x}$ – price per unit of good "X"
2018-07-16 14:44:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6785949468612671, "perplexity": 7921.799377750647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589350.19/warc/CC-MAIN-20180716135037-20180716155037-00408.warc.gz"}
https://www.edaboard.com/threads/initial-value-depending-on-the-input.381177/
# Initial value depending on the input Status Not open for further replies. #### bremenpl ##### Member level 3 Hello there, I am designing a simple debounce mechanism in VHDL. I have the following problem. An internal counter (a variable) should be initially set to 0 or some value (ie. 100), depending on either the input is high or low at power up. I have no idea how to implement this. Is it even possible? At the moment my internal "key" state is initialized with input value, but for some reason this doesnt work. The first transition always fails (no debounce). I am using additional boolean generic's in order to enable the debounce for either up or low transitions. Code: -- Libraries ------------------------------------------------------------------- --! Main library library ieee; --! std logic components use ieee.std_logic_1164.all; -- Entity ---------------------------------------------------------------------- --! bouncer signals entity entity_debounce is generic ( --! Stores the maximum hysteresis value. max : natural; --! Disables the up counting hysteresis when true hystup_dis : boolean; --! Disables the down counting hysteresis when true hystdown_dis : boolean ); port ( --! Clock signal. Rising edge triggers the counter. clk : in std_logic; --! Input signal that needs debouncing input : in std_logic; --! clean, debounced output output : out std_logic ); end entity_debounce; -- Architecture ---------------------------------------------------------------- --! Debounce hysteresis mechanism implementation architecture arch_debounce of entity_debounce is --! internal key memory state signal key : std_logic := input; begin process_debounce : process(clk) --! Hysteresis counter variable cnt : natural := 0; begin if (rising_edge(clk)) then if (input = '1') then if ((cnt < max) and (hystup_dis = false)) then cnt := cnt + 1; elsif (key = '0') then key <= '1'; cnt := max; end if; else if ((cnt > 0) and (hystdown_dis = false)) then cnt := cnt - 1; elsif (key = '1') then key <= '0'; cnt := 0; end if; end if; end if; end process; output <= key; end arch_debounce; . . . In the top vhd file: Code: --! Debouncer instance 20 ms hysteresis debounce time debounce_ledbtn20ms : entity_debounce generic map ( max => 656, hystup_dis => true, hystdown_dis => false ) port map ( clk => clk_32k, input => pin_ledbtn, output => pins_led(0) ); On the scope one can see that 1st transition is not correct, since falling edge should have debounce added (hystdown_dis => false). There should never be debounce added on rising edges and that is OK (hystup_dis => true). I would appreciate all help. Last edited: #### FvM ##### Super Moderator Staff member An initial value is either synthesized as constant or power-on-reset value, depending on the logic, in your case the latter. POR value can't depend on input, instead you'll need an explicit reset that stores the initial value. Simple, straightforward. You should consider that some logic families don't support asynchronous set function and need to emulate it by a logic loop latch, e.g. all newer Intel FPGA. As a drawback, the POR value of this latch will be undetermined. #### bremenpl ##### Member level 3 you'll need an explicit reset that stores the initial value. Simple, straightforward. I tried doing just that- it did not help as well. I think in my situation (MachXO2 from Lattice) it is somewhat not possible to initialize the values this way. For example, when I created a separate bolean variable that is initialy false, and then in the behavioral code I check either it is false and set it to true forever, the synthesizer gave me a warning that my variable will always be true. So, its like it was never initialized with false. Also, the code didnt work so this had to be the case... #### FvM ##### Super Moderator Staff member You didn't show what you have tried. As said, the posted code, using an input value as initializer doesn't work. Reading an input during reset and initializing a counter respectively should work. I have coded similar hardware more than once. #### bremenpl ##### Member level 3 You didn't show what you have tried. As said, the posted code, using an input value as initializer doesn't work. Reading an input during reset and initializing a counter respectively should work. I have coded similar hardware more than once. You are right, but the thing is to make the input being read only once. Please take a look at this MWE: Code: --! Debounce hysteresis mechanism implementation architecture arch_debounce of entity_debounce is begin process_debounce : process(clk) variable test : bolean := false; begin if (test = false) test = true; -- this code is never run end if; end process; The problem lies in the test variable. The if statement was designed to run some portion of code only once per whole device reset. Why doesnt it work? #### FvM ##### Super Moderator Staff member Yes the code synthesis to nothing for two reasons: - nothing depends on the test signal - it's rewritten in combinational code Need to synthesize a register, in other words assign the new value on clock event. #### bremenpl ##### Member level 3 This is just an example. In the if statement part I had another statement that sets the counter variable, depending on the input signal. That did not work as well. How else can I create a 1 time execution situation? #### FvM ##### Super Moderator Staff member Basically something like below. A reset synchronizer may be required to assure consistent action of the first clock edge. Code VHDL - [expand]1 2 3 4 5 6 7 8 9 10 11 process (reset, clk) begin if reset = '1' then init_done <= '0'; elsif rising_edge(clk) then if init_done = '0' then init_done <= '1'; init_reg <= init_inp; end if; end if; end process; #### bremenpl ##### Member level 3 I see... But the problem here is that the mentioned reset signal is probably a signal entering the component. This design would require a physical reset to be performed a startup, this is not automatic, do I understand correctly? #### KlausST ##### Super Moderator Staff member Hi, I'm not sure wheter I understand the problem correctly. I assume you have one input and one output...and at powerup you want the output to have the same logic state as the input, but as soon as possible. The dilemma is, that the usual timing is: * Power up (FPGA signals are not initiated yet, thus high impedance) * time to load the FPGA configuration (FPGA signals are not initiated yet, thus high impedance) * FPGA is initiated and thus the output only can be configured as fix HIGH, LOW or Z (but you don't want fix H or L here) * FPGA is operating and now can react an output a signal depending on an input (but now it's too late for your idea) If my assumptions above are correct, then the only solution I see is to use an external resistor between input and output. This makes the "output" to get the same state as the "input" as long as the "output" is in Z state. As soon as you switch the "output" to H or L the FPGA controls the state (overriding the resistor) Klaus #### FvM ##### Super Moderator Staff member External hardware reset is the preferred method and should be always provided in a good design. Problems arise particularly, if the input clock is already present when the internal power on reset is released. It's nevertheless possible to generate an internal reset, although it can't be guaranteed to be completely free of metastable events. Code VHDL - [expand]1 2 3 4 5 6 7 8 9 10 11 12 SIGNAL reset_int           : STD_LOGIC := '1'; SIGNAL reset_cnt           : INTEGER RANGE 0 TO 7 := 0; PROCESS (clk) BEGIN IF rising_edge(clk) THEN IF reset_cnt = 7 THEN reset_int <= '0'; ELSE reset_cnt <= reset_cnt + 1; END IF; END IF; END PROCESS; - - - Updated - - - I'm not sure whether I understand the problem correctly. I understand that he wants to perform a specific action (set a register according to input signal) once after the POR has been released. Obviously, the FPGA IOs are already configured at this time. Status Not open for further replies.
2020-07-07 03:26:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33029696345329285, "perplexity": 6020.136974992025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655891640.22/warc/CC-MAIN-20200707013816-20200707043816-00183.warc.gz"}
https://archive.lib.msu.edu/crcmath/math/math/f/f263.htm
## Fourier Matrix The Square Matrix with entries given by (1) for , 1, 2, ..., , and normalized by to make it a Unitary. The Fourier matrix is given by (2) and the matrix by (3) In general, (4) with (5) where is the Identity Matrix. Note that the factorization (which is the basis of the Fast Fourier Transform) has two copies of in the center factor Matrix. Strang, G. Wavelet Transforms Versus Fourier Transforms.'' Bull. Amer. Math. Soc. 28, 288-305, 1993.
2021-12-04 01:19:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9679720997810364, "perplexity": 697.185993763743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362923.11/warc/CC-MAIN-20211204003045-20211204033045-00117.warc.gz"}
https://gmatclub.com/forum/if-the-average-arithmetic-mean-of-the-five-numbers-x-7-2-16-and-103134.html
GMAT Question of the Day - Daily to your Mailbox; hard ones only It is currently 16 Oct 2019, 00:09 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and Author Message TAGS: ### Hide Tags Intern Joined: 25 Sep 2010 Posts: 12 If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 19 Oct 2010, 00:58 21 00:00 Difficulty: 55% (hard) Question Stats: 65% (01:43) correct 35% (01:46) wrong based on 449 sessions ### HideShow timer Statistics If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and 11 is equal to the median of five numbers, what is the value of X (1) 7 < x < 11 (2) x is the median of the five numbers first statement is definitely sufficient, but in the second statement, if X is the median, it can take three numbers, 10. 9, 8, so ...how is it sufficient, i might be missing some basic point here,,,,so need help,,,,as the second statement is also sufficient for this problem Math Expert Joined: 02 Sep 2009 Posts: 58364 Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 20 Oct 2010, 08:46 7 5 vitamingmat wrote: satishreddy wrote: if the average arithmetic mean of the five numbers X, 7,2,16.11 is equal to the median of five numbers, what is the value of X 1) 7<X<11 2) X is the median of five numbers first statement is definitely sufficient, but in the second statement, if X is the median, it can take three numbers, 10. 9, 8, so ...how is it sufficient, i might be missing some basic point here,,,,so need help,,,,as the second statement is also sufficient for this problem I am not convinced with the above answers... from 1 ) 7 < X< 11 === > X is like 7.1, or 7.2 or 7.5 or 8,8.1 or 9 0r 9.5 ... there are so many possibilites so i can not take only 8,9,10 as X so i can not decide from 1 from 2) X is the median of five numbers then === > X, 7,2,16,11 .... ==> how can i arrange if i dont know the value of X in either descending or ascending.. so from the both also it wont be possible to get the ans i believe... So ans E ... Do let me know if i am wrong.... There is one more thing we know from the stem: "the average arithmetic mean of the five numbers x, 7, 2, 16, 11 is equal to the median of five numbers" --> $$\frac{x+2+7+11+16}{5}=median$$. As there are odd number of terms then the median is the middle term when arranged in ascending (or descending) order but we don't know which one (median could be x, 7 or 11). (1) 7 < x < 11 --> when ordered we'll get 2, 7, x, 11, 16 --> $$\frac{x+2+7+11+16}{5}=median=x$$ --> $$\frac{x+36}{5}=x$$ --> $$x=9$$. Sufficient. (2) x is the median of five numbers --> the same info as above: $$\frac{x+2+7+11+16}{5}=median=x$$ --> $$\frac{x+36}{5}=x$$ --> $$x=9$$. Sufficient. _________________ ##### General Discussion Manager Joined: 08 Sep 2010 Posts: 150 Location: India WE 1: 6 Year, Telecom(GSM) Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 19 Oct 2010, 03:22 1 satishreddy wrote: if the average arithmetic mean of the five numbers X, 7,2,16.11 is equal to the median of five numbers, what is the value of X 1) 7<X<11 2) X is the median of five numbers first statement is definitely sufficient, but in the second statement, if X is the median, it can take three numbers, 10. 9, 8, so ...how is it sufficient, i might be missing some basic point here,,,,so need help,,,,as the second statement is also sufficient for this problem For finding out median you have to arrange the numbers in increasing or decresing order, then only you have to take the middle term as median. So now the order will be 2,7,x,11,16 And in that way ,you will have the same median value i.e, 9. Manager Joined: 25 Aug 2010 Posts: 55 Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 20 Oct 2010, 06:24 satishreddy wrote: if the average arithmetic mean of the five numbers X, 7,2,16.11 is equal to the median of five numbers, what is the value of X 1) 7<X<11 2) X is the median of five numbers first statement is definitely sufficient, but in the second statement, if X is the median, it can take three numbers, 10. 9, 8, so ...how is it sufficient, i might be missing some basic point here,,,,so need help,,,,as the second statement is also sufficient for this problem I am not convinced with the above answers... from 1 ) 7 < X< 11 === > X is like 7.1, or 7.2 or 7.5 or 8,8.1 or 9 0r 9.5 ... there are so many possibilites so i can not take only 8,9,10 as X so i can not decide from 1 from 2) X is the median of five numbers then === > X, 7,2,16,11 .... ==> how can i arrange if i dont know the value of X in either descending or ascending.. so from the both also it wont be possible to get the ans i believe... So ans E ... Do let me know if i am wrong.... Intern Joined: 20 Sep 2013 Posts: 2 Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 25 Sep 2013, 03:52 1 1) allow us to say that the first and last numbers of the series are 2 and 16 giving an average = 9 and, therefore, a median = 9. giving that x must be a value between 8, 9, 10 we came to the conclusion that x=9 -> sufficient 2) the information that x=median therefore x=average is sufficient to allow to conclude that x=9. This information, in fact allow to exclude x<2 and x>16 and at the same time provide indications that the value is somewhere between the second and 4th value of the series. Manager Joined: 31 Mar 2013 Posts: 60 Location: India GPA: 3.02 Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 09 Oct 2013, 11:17 3 The above set is not evenly spaced. Is it possible to have a non-evenly spaced set where $$mean=median$$? Math Expert Joined: 02 Sep 2009 Posts: 58364 Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 10 Oct 2013, 02:43 2 2 emailmkarthik wrote: The above set is not evenly spaced. Is it possible to have a non-evenly spaced set where $$mean=median$$? Yes, it is. Consider: {0, 1, 1, 2}. _________________ Current Student Joined: 29 Jul 2015 Posts: 14 Location: India Concentration: Strategy, General Management Schools: Ivey '19 (A\$) GMAT 1: 660 Q49 V32 GPA: 3.7 WE: Supply Chain Management (Consulting) Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 29 Mar 2016, 12:07 Dear Bunuel , I have a doubt on statement B Statement B says - x is the median of the 5 no's. So x can be 7,8,9,10,11 . Then how can we determine the value of x using B alone ? Math Expert Joined: 02 Sep 2009 Posts: 58364 Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 29 Mar 2016, 12:19 3 ThePlayer wrote: Dear Bunuel , I have a doubt on statement B Statement B says - x is the median of the 5 no's. So x can be 7,8,9,10,11 . Then how can we determine the value of x using B alone ? This is explained above: if-the-average-arithmetic-mean-of-the-five-numbers-x-103134.html#p803613 There is one more thing we know from the stem: "the average arithmetic mean of the five numbers x, 7, 2, 16, 11 is equal to the median of five numbers" --> $$\frac{x+2+7+11+16}{5}=median$$. As there are odd number of terms then the median is the middle term when arranged in ascending (or descending) order but we don't know which one (median could be x, 7 or 11). (1) 7 < x < 11 --> when ordered we'll get 2, 7, x, 11, 16 --> $$\frac{x+2+7+11+16}{5}=median=x$$ --> $$\frac{x+36}{5}=x$$ --> $$x=9$$. Sufficient. (2) x is the median of five numbers --> the same info as above: $$\frac{x+2+7+11+16}{5}=median=x$$ --> $$\frac{x+36}{5}=x$$ --> $$x=9$$. Sufficient. _________________ Board of Directors Joined: 17 Jul 2014 Posts: 2514 Location: United States (IL) Concentration: Finance, Economics GMAT 1: 650 Q49 V30 GPA: 3.92 WE: General Management (Transportation) Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 09 Apr 2016, 13:54 oh man..I just had this question in my GMATPrep test, and I panicked and picked A... quick question to Bunuel, can we consider only INTEGER values, knowing that the question stem tells that we have NUMBERS (aka any numbers)? 1. x has values 8, 9, 10. we know that (36+x)/5 = x 36+x=5x x=9 so only value if x=9 2. x is the median of the five 36+x=5x x=9 yes, 9 is the median. D I dismissed B because I considered non-integer values for x...but still..illogical..because if we apply what we are given..we always get x=9... Manager Joined: 26 Dec 2015 Posts: 236 Location: United States (CA) Concentration: Finance, Strategy WE: Investment Banking (Venture Capital) Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 16 Sep 2017, 16:48 If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and 11 is equal to the median of five numbers, what is the value of X (1) 7 < x < 11 (2) x is the median of the five numbers Important to note: This is a purely CONCEPTUAL problem. Meaning, there's HARDLY ANY MATH REQUIRED/INVOLVED. You should be able to eye this and take 30-45 seconds MAX. I'll show you how. 1- set up #s in increasing order (common step whenever you see the word "median"): 2, 7, 11, 16, x --> quickly see 2+7=9...9+11=20, 20+16=36. 2- average = $$\frac{sum of #s in a set}{total numbers in a set}$$ --> $$\frac{36+x}{5}$$ 3- set up question: $$\frac{36+x}{5}$$ = median. > key here is to find out what x is (or what the median is)! 1) TELLS US X=MEDIAN! How do you know this? Well, there are 5 terms in the set. > 2 terms are BELOW 7. > 2 terms are ABOVE 11. -- Since there are only 5 terms in a set, the 3rd term (middle) = median. Bingo. Eliminate B, C, E. 2) SAME AS 1! -- Elim A. Ans: D Intern Joined: 10 May 2017 Posts: 5 Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 18 Oct 2017, 01:37 Bunuel wrote: andrewwal wrote: russ9 1) I believe that it can also be 11 since if x was 11 the median in fact would be 11 (but the median could never be 16 since if x=16, then you'd have 2, 7, 11, 16, 16 and the median would be 11 not 16). 2) Statement 1 and 2 are essentially saying the say thing, since the constraint "the mean is equal to the median" and that is the only value [9] that satisfies that constraint -- therefore it is D. Correct. 15 was a typo there. Edited. Bunuel Hi Bunuel, I understood the solution however, Elementary doubt here. Since Mean=Median, Shouldn't the numbers in the set be consecutive, However I cannot seem understand the consecutive nature of the above mentioned set. Thanks Math Expert Joined: 02 Sep 2009 Posts: 58364 Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 18 Oct 2017, 01:48 neilphilip10 wrote: Bunuel Hi Bunuel, I understood the solution however, Elementary doubt here. Since Mean=Median, Shouldn't the numbers in the set be consecutive, However I cannot seem understand the consecutive nature of the above mentioned set. Thanks For an evenly spaced set (arithmetic progression), the median equals to the mean. Though the reverse is not necessarily true. Consider {0, 1, 1, 2} --> median = mean = 1 but the set is not evenly spaced. _________________ Intern Joined: 05 Mar 2015 Posts: 41 Location: Azerbaijan GMAT 1: 530 Q42 V21 GMAT 2: 600 Q42 V31 GMAT 3: 700 Q47 V38 Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 22 May 2018, 12:05 Hi, I just found a mistake in a question and would like to let you know You can answer the question even without looking at the statements 1 and 2 - the question stem itself gives sufficient information to find the value of x, which is 9 My reasoning: if we order the numbers in an increasing order we get 2, 7, 11, 16. we also have x. Regardless of the value of x, the median will be a number that is 7≤X ≤11 The question also says that median equals the mean. (36+x)/5 is the mean, which is also median. (36+x)/5 this tells us that x cannot be 7, because 36/5 is already greater than 7. If x were 7 the median would not be equal to the mean. If x were 8, the median would be 8, but the mean would be greater than mean. You can check all numbers that satisfies 7≤X ≤11. The only number that satisfies 7≤X ≤11 and also makes median equal to the mean is 9 You can answer the question without looking at the statements Math Expert Joined: 02 Sep 2009 Posts: 58364 Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 23 May 2018, 05:58 kablayi wrote: Hi, I just found a mistake in a question and would like to let you know You can answer the question even without looking at the statements 1 and 2 - the question stem itself gives sufficient information to find the value of x, which is 9 My reasoning: if we order the numbers in an increasing order we get 2, 7, 11, 16. we also have x. Regardless of the value of x, the median will be a number that is 7≤X ≤11 The question also says that median equals the mean. (36+x)/5 is the mean, which is also median. (36+x)/5 this tells us that x cannot be 7, because 36/5 is already greater than 7. If x were 7 the median would not be equal to the mean. If x were 8, the median would be 8, but the mean would be greater than mean. You can check all numbers that satisfies 7≤X ≤11. The only number that satisfies 7≤X ≤11 and also makes median equal to the mean is 9 You can answer the question without looking at the statements That's not correct. From the stem, x can be -1, 9 or 19. If x = -1, then the set is {-1, 2, 7, 11, 16} --> median = mean = 7. If x = 9, then the set is {-2, 7, 9, 11, 16} --> median = mean = 9. If x = 19, then the set is {2, 7, 11, 16, 19} --> median = mean = 11. _________________ Intern Joined: 11 Feb 2019 Posts: 12 Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 28 Mar 2019, 08:04 Does this question even need any of the answer choices? As in, this question can be answered without using either of the two options. Is there any other question type where value of x needs to be calculated in series where the answer choices really are needed? Bunuel wrote: vitamingmat wrote: satishreddy wrote: if the average arithmetic mean of the five numbers X, 7,2,16.11 is equal to the median of five numbers, what is the value of X 1) 7<X<11 2) X is the median of five numbers first statement is definitely sufficient, but in the second statement, if X is the median, it can take three numbers, 10. 9, 8, so ...how is it sufficient, i might be missing some basic point here,,,,so need help,,,,as the second statement is also sufficient for this problem I am not convinced with the above answers... from 1 ) 7 < X< 11 === > X is like 7.1, or 7.2 or 7.5 or 8,8.1 or 9 0r 9.5 ... there are so many possibilites so i can not take only 8,9,10 as X so i can not decide from 1 from 2) X is the median of five numbers then === > X, 7,2,16,11 .... ==> how can i arrange if i dont know the value of X in either descending or ascending.. so from the both also it wont be possible to get the ans i believe... So ans E ... Do let me know if i am wrong.... There is one more thing we know from the stem: "the average arithmetic mean of the five numbers x, 7, 2, 16, 11 is equal to the median of five numbers" --> $$\frac{x+2+7+11+16}{5}=median$$. As there are odd number of terms then the median is the middle term when arranged in ascending (or descending) order but we don't know which one (median could be x, 7 or 11). (1) 7 < x < 11 --> when ordered we'll get 2, 7, x, 11, 16 --> $$\frac{x+2+7+11+16}{5}=median=x$$ --> $$\frac{x+36}{5}=x$$ --> $$x=9$$. Sufficient. (2) x is the median of five numbers --> the same info as above: $$\frac{x+2+7+11+16}{5}=median=x$$ --> $$\frac{x+36}{5}=x$$ --> $$x=9$$. Sufficient. Senior Manager Status: Whatever it takes! Joined: 10 Oct 2018 Posts: 383 GPA: 4 Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 28 Mar 2019, 08:27 prekshita wrote: Does this question even need any of the answer choices? As in, this question can be answered without using either of the two options. Is there any other question type where value of x needs to be calculated in series where the answer choices really are needed? It is not possible. Without the statements, you CANNOT get a single solution. X can be anything! Statements are given either to find a solution or to prove something asked in the question. P.S: By taking your statement into consideration, why would the test takers give such a question where the statements are useless? _________________ ALL ABOUT GMAT- $$https://exampal.com/gmat/blog/gmat-score-explained$$ Senior Manager Status: Gathering chakra Joined: 05 Feb 2018 Posts: 433 Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and  [#permalink] ### Show Tags 10 Jun 2019, 11:46 Bunuel chetan2u I quickly recognized that only 9 will give an int answer (45/9=5) and satisfies the condition, and that there's only 1 number that gives us med = avg in that constraint. But, is this a bad assumption since the prompt doesn't specify ints? What if in a different problem with a similar constraint the average and median were a non-int? We know that there's still only 1 correct value where med=avg, even if we don't calculate, right? Re: If the average (arithmetic mean) of the five numbers x, 7, 2, 16 and   [#permalink] 10 Jun 2019, 11:46 Display posts from previous: Sort by
2019-10-16 07:09:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7571366429328918, "perplexity": 1136.1405710608046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666467.20/warc/CC-MAIN-20191016063833-20191016091333-00485.warc.gz"}
https://www.encyclopediaofmath.org/index.php?title=Simply-connected_group&curid=7550&diff=42436&oldid=39777
# Difference between revisions of "Simply-connected group" A topological group (in particular, a Lie group) for which the underlying topological space is simply-connected. The significance of simply-connected groups in the theory of Lie groups is explained by the following theorems. 1) Every connected Lie group $G$ is isomorphic to the quotient group of a certain simply-connected group (called the universal covering of $G$) by a discrete central subgroup isomorphic to $\pi_1(G)$. 2) Two simply-connected Lie groups are isomorphic if and only if their Lie algebras are isomorphic; furthermore, every homomorphism of the Lie algebra of a simply-connected group $G_1$ into the Lie algebra of an arbitrary Lie group $G_2$ is the derivation of a (uniquely defined) homomorphism of $G_1$ into $G_2$. The centre $Z$ of a simply-connected semi-simple compact or complex Lie group $G$ is finite. It is given in the following table for the various kinds of simple Lie groups. $$\begin{array}{||c||c|c|c|c|c|c|c|c|c|c||} \hline \\ G & A_n & B_n & C_n & D_{2n} & D_{2n+1} & E_6 & E_7 & E_8 & F_4 & G_2 \\ \hline Z & \mathbf{Z}_{n+1} & \mathbf{Z}_2 & \mathbf{Z}_2 & \mathbf{Z}_2 \times \mathbf{Z}_2 & \mathbf{Z}_4 & \mathbf{Z}_3 & \mathbf{Z}_2 & \{e\} & \{e\} & \{e\} \\ \hline \end{array}$$ In the theory of algebraic groups, a simply-connected group is a connected algebraic group $G$ not admitting any non-trivial isogeny $\phi : \tilde G \rightarrow G$, where $\tilde G$ is also a connected algebraic group. For semi-simple algebraic groups over the field of complex numbers this definition is equivalent to that given above. #### References [a1] G. Hochschild, "The structure of Lie groups" , Holden-Day (1965) MR0207883 Zbl 0131.02702 [a2] R. Hermann, "Lie groups for physicists" , Benjamin (1966) MR0213463 Zbl 0135.06901 [a3] J.E. Humphreys, "Linear algebraic groups" , Springer (1975) pp. Sect. 35.1 MR0396773 Zbl 0325.20039 How to Cite This Entry: Simply-connected group. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Simply-connected_group&oldid=39777 This article was adapted from an original article by E.B. Vinberg (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
2018-01-23 11:57:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9058468341827393, "perplexity": 400.08709296064717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891926.62/warc/CC-MAIN-20180123111826-20180123131826-00755.warc.gz"}
https://ask.sagemath.org/question/53502/functions-in-polynomials-rings/?sort=oldest
# Functions in polynomials rings I want to define a function in a polynomial ring in several variables. R.<x1,x3,x5>=PolynomialRing(QQ) I am trying to define a function that takes $(i,j)$ to $x_i^j$. I tried def f(i,j): return xi^j This does not work. I tried replacing xi with x[i], that doesn't work. Can someone please tell me what I am doing wrong and how to fix it? If instead of taking 3 variables I take only 1 variable then the method works. edit retag close merge delete Sort by » oldest newest most voted You want to access the generators of R as a tuple: sage: R = PolynomialRing(QQ, 3, names='x'); R Multivariate Polynomial Ring in x0, x1, x2 over Rational Field sage: x = R.gens(); x (x0, x1, x2) sage: x[0] x0 If you want to use some strange alternative indexing, then you can achieve it with a function. more One can produce strings and have the polynomial ring eat them. String formatting is easy thanks to Python. Define a polynomial ring as in the question: R.<x1, x3, x5> = PolynomialRing(QQ) Define a "generator power" function as follows: def f(i, j): r""" Return the polynomial variable xi raised to the j-th power. """ return R('x{}^{}'.format(i, j)) Example: sage: f(3, 2) x3^2 more
2021-03-04 22:09:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2883376479148865, "perplexity": 1994.3871219395874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369523.73/warc/CC-MAIN-20210304205238-20210304235238-00400.warc.gz"}
https://socratic.org/questions/the-triangle-inequality-theorem-states-that-the-sum-of-the-lengths-of-any-two-si-1
# The triangle inequality theorem states that the sum of the lengths of any two sides of a triangle is greater than the length of the third side. The lengths of two sides of a triangle are 15 ft and 27 ft. Find the possible lengths of the third side? ## The third side must have a length greater than ? ft and less than ? ft. Mar 8, 2018 #### Answer: Possible length of third side is $12 < x < 42 \mathmr{and} x | \left(12 , 42\right)$ ft. #### Explanation: 15+27=42 ; 27-15=12 . Therefore the third side must be less than $42$ ft ant greater than $12$ ft. Therefore possible length of third side of size $x$ is $12 < x < 42 \mathmr{and} x | \left(12 , 42\right)$ ft [Ans]
2018-12-11 00:55:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9094692468643188, "perplexity": 328.25578660326835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823516.50/warc/CC-MAIN-20181210233803-20181211015303-00386.warc.gz"}
https://en.wikibooks.org/wiki/Wikibooks:PROPOSALS
(Redirected from Wikibooks:PROPOSALS) Wikibooks Discussion Rooms Discussions Assistance Requests General | Proposals | Projects | Featured books General | Technical | Administrative Deletion | Undeletion | Import | Permissions ## Now under construction: Wikibooks Stacks As part of the infrastructure overhaul I've been doing for, at this writing, just over 23 months, and following from the previous discussion here in the ongoing series of threads (link), I'm now developing a replacement for the current subject hierarchy, in the form of a book called Wikibooks Stacks. I'm not currently asking for help with this, tbh. Somewhat embarrassingly, given the collaborative nature of wikis, just atm I really need to do this carefully, step by step, myself, because there's still new design work involved at each step. But I do want to let everyone know what I'm doing, and perhaps folks here will offer advice (or point out that I'm making a huge mistake somewhere!). When I'm all done, all our 3000-or-so books will be filed in both the "old" subject hierarchy and the "new" stacks, and I'll be able to do the equivalent of flipping one of those big high-voltage switches and suddenly the categories visible on each book main page will be shelves instead of subjects, and then I can start the process of carefully mothballing the old subject pages, one by one. Then it'll be time to start in earnest on the final(?) stage of this multi-year overhaul of our infrastructure, the introduction of topical categories that list pages as well as books, which will enable us to provide much better targets for incoming links from sister projects, including from Wikipedia. Grouping all of this machinery in a book is more convenient, organizationally, than the Subject: namespace, as it happens. The new pages, equivalent to subjects, have name prefix Shelf: or, at the top level, Department:, which are not recognized by the wiki platform as namespace prefixes, so these pages are all technically in mainspace, as is the book. Our infrastructure templates such as {{BookCat}} and {{BOOKNAME}} know to associate these name prefixes with book Wikibooks Stacks, which is convenient because most of the pages involved don't have to have the name of the book built into them at all, they can just use markup {{BOOKNAME|Shelf:}} (which expands to Wikibooks Stacks). Shelves correspond to subjects that use {{subject page}}, departments to subjects that use {{root subject}}. There are shelf categories, each with an associated allbooks category, just as there are subject categories with associated allbooks categories. When I set up the machinery of the subject hierarchy, I arranged that when any of the pages involved detected a problem, it would flag it out, and provide buttons to help a human operator implement likely actions to fix it. This time around, I've made some improvements to this semi-automation while I was about it. I also very much want to arrange for dialog-based assistants to replace the older-style editing buttons (with the older-style buttons reappearing if the dialog tools are not detected — thus, graceful degradation when things aren't working right). This would be very cutting-edge use of the dialog tools, and I very much want to learn as much as I can from the experience, about how to make effective use of the dialog tools. Which is actually part of what's holding me up just atm: I could be marching forward with setting up shelves, but then I'd be missing out on this major opportunity to gain experience with dialog. --Pi zero (discusscontribs) 19:51, 3 June 2018 (UTC) Progress report: All our books have been shelved; they're also all listed in subjects. The shelf categories are hidden, the subject categories are visible; but I'm now in a position to switch that, so the shelf categories are visible and the subject categories hidden. Then I can start shutting down the subjects, which also has to be done manually. Strangely, I've got a discrepancy between the number of shelves and the number of subjects, whose cause should eventually come out during the manual shutdown. I'm not sure what to do about possible incoming links to subject pages that are now going to be either nonexistant or redirects. --Pi zero (discusscontribs) 02:26, 29 September 2018 (UTC) Cheers for all the work you have made already. I noticed two things which I'm not sure whether they are glitches or not: 1) Departments do not list any featured books. 2) Under Wikibooks Stacks/Departments the Wikijunior department correctly lists the Wikijunior shelf, but the Help department does not list the Help shelf. -- Vito F. (discuss) 23:46, 10 October 2018 (UTC) On the second point, actually the link provided is to the help shelf, rather than to the help department. I'm unsure whether that should be treated differently, or if instead the wikijunior department should be treated differently. --Pi zero (discusscontribs) 04:03, 11 October 2018 (UTC) I think I've got the department featured books problem fixed. --Pi zero (discusscontribs) 06:43, 12 October 2018 (UTC) I've improved both of those displays on the departments page. --Pi zero (discusscontribs) 12:44, 13 October 2018 (UTC) ## Merge Template:One-page book into Template:BookCat The one-page books like DBMS don't need a dedicated category like Category:Book:DBMS, because categories are meant to gather groups of pages. That's why I propose to merge Template:One-page book into Template:BookCat, and to replace this couple of templates by {{BookCat|1p=yes}}, so it would add Category:One-page books and not Category:Book:. JackPotte (discusscontribs) 12:57, 1 August 2018 (UTC) It's an interesting thought; off hand, though, I'd think it apt to make infrastructural coding more complicated, multiplying special cases depending on features of a book. It's simple to assume if there's a book main page, there's a book category. --Pi zero (discusscontribs) 13:24, 1 August 2018 (UTC) That said, it might be useful to do a cross-check in {{BookCat}} regarding the number of pages in the book category versus membership in Category:One-page books. --Pi zero (discusscontribs) 13:26, 1 August 2018 (UTC) ## Making use of autoreview That's just a permission which is rarely used which I think can be made into better use. Currently it is a rarely-used permission granted my sysops under their discretion (I, for one, give the permission when I feel that user is contributing on few pages). As the much more popular reviewer permission is auto-granted upon meeting some conditions, I propose that the autoreview permission also be auto-granted upon meeting a lesser set of conditions. Maybe something like 1. 35 edits 30 reviewed edits and 2. 10 days old account? The conditions are designed in such a way to ensure that it is not too hard to achieve while keeping a barrier for bad-faith users. I also propose that users who already have autoreview and later gain reviewer should (automatically) have the autoreview flag removed, as reviewer is a superset of autoreview. I plan to be manually doing this very soon to 'clean up' the list of autoreviewed users (which in itself is few). Leaderboard (discusscontribs) 07:11, 14 August 2018 (UTC) The best would be to get the whole picture before deciding the rights limits. I think about the sysop candidates, the voters limits, and even meta:Admin activity review/Local inactivity policies. That's why I had manufactured a kind of state of the art on the French projects myself, and then we have voted to uniformized the first steps numbers on the French Wikiversity to 100 editions plus one month old account (for voting to anything and getting the autoreview flag). Anyway, 35 seems very low to me, especially with the typos maintenance. So we would have to decide the meaning of a significant edition, and its value (10 pages of qualities = 1,000 typo corrections?). JackPotte (discusscontribs) 14:54, 14 August 2018 (UTC) I see three suppositions here: 1. that the only concern with conferring autoreview is bad-faith edits; 2. that autoreview of one's own edits, when compared to review of others' edits, becomes safe at a significantly lesser level of project experience; and 3. that the project suffers significantly from the cost of dealing with accumulating un-autoreviewed edits during the interval between sufficient project experience for autoreview, and sufficient project experience for general review. In case it's not perfectly clear, let ${\displaystyle x}$ be the amount of project experience where it becomes safe to give the user autoreview, and ${\displaystyle y}$ the amount of project experience where it becomes safe to give the user review. The ratio between the two, ${\displaystyle x/y}$, must be significantly less than one, or we wouldn't want to complicate things by conferring autoreview separately. If this ratio is, say, 0.95, that means the amount of project experience where it becomes safe to confer autoreview is 95% of the amount where it becomes safe to confer general review, and if the ratio is that high, I for one doubt it'd be worth it. The purpose of requiring project experience before conferring review is more than just filtering out bad-faith editors. The review autopromotion threshold is concerned with the user's acclimation to the local culture of the project, as distinct from other projects, such as Wikipedia (from which cultural hegemony is a perennial problem for the smaller projects). In my experience, though, a project-inexperienced user is more likely to run barefoot through the project imposing cultural misapprehensions to their own edits. Therefore, I'm inclined to reject supposition (1), and therefore to mostly reject supposition (2). Even if the ratio ${\displaystyle (y-x)/y}$ were significantly less than one, I would still be dubious of supposition (3). With the primary exception of Wikijunior, review on en.wb just isn't that urgent. We certainly do want to deal with it, but when we fall behind it's not a disaster, unless of course the edits we haven't reviewed are severely problematic. We try to notice problematic edits when they happen, and review is our safety net for when we don't catch them immediately. If it takes a while for us to collect things that get caught by the safety net, that's still better than not having the net. --Pi zero (discusscontribs) 17:16, 14 August 2018 (UTC) On a purely practical point, I question whether it is worth the effort to request automation of the allocation of autoreview given how few people request it. To extend Pi zero's thoughts, we need to be cautious with setting the level too low. Often the most problematic editors are class projects who can rack up 35 edits in an hour which are often full of copyright violations and other problems. Removing the requirement to contribute more broadly before edits are automatically accepted reduces our chance to course-correct these editors. The impact can then be mass deletion of their work later. QuiteUnusual (discusscontribs) 17:22, 14 August 2018 (UTC) ┌──────┘ Thank you all for your views. One valid concern I see is the concern of sufficient project experience before granting autoreview, and hence I'll make a small modification to my proposal: instead of just 35 edits, there should be 30 reviewed mainspace edits (excluding talk for instance). The reason is that the autoreview permission is meant for those who make contributive edits to only a few books, and there should be evidence of that. This should also solve the issue of 35 edits being too less; it's far tougher for 30 reviewed edits to be bad than 35 unreviewed edits. The lack of clear documentation regarding autoreview is the major reason why so few people request it. I remember once querying about autoreviewing edits way back in 2014 in one of my RFP's, and some users wondered what that is. The RFP did not mention autoreview, and the related userright template did not mention it either then. your comment on "decide the meaning of a significant edition, and its value" got me thinking, and I tried to model this. A quick and dirty one gave me something like (where ${\displaystyle }$: average size of each edit, E: no of edits required): ${\displaystyle E=50-(\ln )^{2}}$ (with constraint ${\displaystyle \min(E)=10}$). That could also help counter the issue of users trying to evade the rule by making small bad edits. Leaderboard (discusscontribs) 18:01, 14 August 2018 (UTC) Your proposal does not appear to be solving any existing problem. Clearly there are drawbacks, and we get nothing in return for those. --Pi zero (discusscontribs) 18:22, 14 August 2018 (UTC) ## Proposal to delete Simple English Wikiquote and Wikibooks There is a now a proposal to delete Simple English Wikiquote and Wikibooks. Agusbou2015 (discusscontribs) 22:29, 26 August 2018 (UTC) Proposal withdrawn, and the projects will not be deleted. StevenJ81 (discusscontribs) 14:51, 28 August 2018 (UTC) ## Restricting page creations and raising autoconfirmed limit ### Restrict new page creations to users with account I propose that the ability to create new pages in the main namespace be taken away from anonymous users. The reasoning is simple; ~100% of pages created by them are nonsense, test edits or otherwise get speedily deleted. While it may raise concerns that it is against the principle of free access, I believe that this restriction is justified. This change is not without precedent; en.wikipedia currently restricts general page creation to autoconfirmed users. For what it's worth, I do not propose requiring autoconfirmed; while that is certainly an option, I think there could be genuine cases where users (eg, as part of a class project) create a new page immediately. (If that's what you prefer, then it will be necessary to create a "page creation request" option wherein users could request pages for creation to be reviewed by admins in a queue) I had already thought about that time when the most part of the anonymous new pages would be trash, impossible to handle by our team. Today, I don't think that we've reached this critical point but I think that the problem isn't limited the the creations: all modifications are concerned. My URL abuse filter for the newcomer has stopped the spammers, so we "just" have to delete around 10 test pages per day, which seems doable to me in order to preserve the wiki spirit, where anyone can publish anything valuable quickly. JackPotte (discusscontribs) 08:56, 27 August 2018 (UTC) Maybe, but has any IP user ever published a sensible article? Modifications on its own I can understand; many IP users do make contributive edits, but I can't see that for page creations. JackPotte's filter army (especially filter 38) is incredible; I'm surprised with the junk which users try to insert into Wikibooks which one never sees (and hence I don't mind the occasional false positive on that). But that filter does not help with creating blank or gibberish articles. Leaderboard (discusscontribs) 09:12, 27 August 2018 (UTC) IPs do make useful contributions here and there. I think there might be a book or two around that were created by IPs, though I couldn't lay my hands on them readily. --Pi zero (discusscontribs) 10:38, 27 August 2018 (UTC) Just stumbled across an example of a book created by IP, How to Type. Makes me wonder if there are more of them than I'd thought, since I happened across one so soon after the question was raised. --Pi zero (discusscontribs) 13:15, 27 August 2018 (UTC) The book in question consists of only three articles and was created way back in 2011 (were things different back then?). While admittedly still a surprise for me, I am inclined to think that this is a rare instance of an IP who has created pages which would come in the top 0.1% of all mainspace pages created by an IP. Leaderboard (discusscontribs) 13:53, 27 August 2018 (UTC) I just crossed paths with another one: How to Tie a Tie. As you might guess, where I found those two has to do with the partly-alphabetical order in which I'm doing my current walk through the collection. Beware of convenient reasons for disregarding information that doesn't fit your current thesis. I suspect you'd find the whole Wikibooks project was most active in its early years and has slowly declined since —a pattern whose cause, if it's really there, may be a deep, complex mix of factors— so it may be that the starting date of a book has good odds of being some years ago regardless of who created it. Also, asking whether only 0.1% of IP book creations are useful may be asking the wrong question; not only are we intensely interested in not turning away positive contributions, but we are also crucially interested in not turning way positive contributors, both because we want to retain them as contributors if possible —they are the pool from which the next generation of Wikibookians must come, if it's to come from anywhere— and because, regardless of whether they stay in the long run, we want them to come away with positive feelings about the project so that those feelings spread outward from them memetically to other potential contributors. --Pi zero (discusscontribs) 15:02, 27 August 2018 (UTC) ### Introduce edit count limit for autoconfirmed The current Wikibooks policies state that autoconfirmed users need to be four days old. There are a few issues with that: 1. It does not make sense for users who have never edited to be able to edit semi-protected pages by just waiting. 2. Because users do not need to edit to get autoconfirmed, some edit filters (eg: the often-hit filter 38) have to be modified to take edit count into account. This means that users often get unknowingly caught, and we can't simplify things by just saying that users need to get autoconfirmed to insert external links. Hence I propose that users need to make a few edits (in addition to the four-day rule) in order to get autoconfirmed status. I think 5 edits would do, but some other wikis use 10, so I leave that to you. Leaderboard (discusscontribs) 08:33, 27 August 2018 (UTC) That's what Wikibooks:Autoconfirmed users says; I've no idea whether it's true. And I'd also be interested, before changing our criteria, whatever they now are, in how much trouble we now get from autoconfirmed users who would be excluded by the altered criteria. --Pi zero (discusscontribs) 10:46, 27 August 2018 (UTC) It's true; I remember one instance of an user caught by the edit filter; he had made only one edit and was listed as autoconfirmed. Leaderboard (discusscontribs) 11:42, 27 August 2018 (UTC) Playing devil's advocate, an autoconfirmed class that doesn't require any edits is still a potentially useful preliminary filter against a large class of casual troublemakers with short attention span; a determined troublemaker with a nontrivial attention span can get past any plausible set of autoconfirmation criteria. --Pi zero (discusscontribs) 13:24, 27 August 2018 (UTC) ## Tag Proposal Tags are used throughout Wikibooks to draw attention to something that needs attention, for example, there's something not neutral about a book, a fact is in dispute..etc... At present, the current mechanism for tag use is to tag it and discuss the changes and arrive at consensus either for or against the requested change. However, what happens if something is simply tagged without discussion, like this example, which really did happen. This article was tagged by a bot that is no longer active back in 2007. No discussion was started and the tag remained. That's 11 years with no discussion and the tag still present. Or, take for instance this example ] which was tagged in 2006 by Sirakim, and again no discussion was ever started by that user or any other user. While I commend the users for tagging the book, without discussion, there can be no consensus, and without the consensus, the requested changes (if they're actually requested at all ) can't be made. The current thought on this is to simply leave the tags in place unless consensus exists to have them removed. However, with the tags staying on the books for years without discussion, or with a dead discussion, they serve to make the books less likely to be read. Since the end goal of a book is for it to be read, these tags without discussion or with dead discussions (at least 3 years old like this one which was posted in 2012 and has had no responses whatsoever) , actually fail to serve that purpose. They , in fact, serve to make the books less desirable to be be read. In light of that , I'd like to make a proposal. My proposal is this, that tag can be removed in cases where they are left without a discussion on the talk page, or if a discussion has stopped for a period of at least 3 years . My rationale is: That the tag on the page without a discussion is useless, no one can read the mind of the person placing the tag thus making the tag useless. In the case of a discussion that's dead, like 3 years dead, even though there is no time limit, allowing 3 years for a discussion and coming to no conclusion makes the tag useless as well. Further by allowing tags to remain for years without any removal leaves open the concept of tagging as harrassment (en.wikipedia has had this problem and for that reason, allows for IAR removal of tags if it's been determined that they serve no purpose other than to harass.) My remedy for this would be that the tag could be removed by an user, if , within 3 years there is either no discussion or the discussion has died without consensus, without predjudice, meaning the tag can be placed back in, as long as the editor is wiling to start a discussion and explain why this tag needs to be placed on this book. Personally, I really think it should be a case of IAR, but as User:Leaderboard has objected to my doing this, I thought it prudent to start a discussion to see if a consensus might be raised. So, what do you think ? 16:18, 29 August 2018 (UTC) You're misunderstanding some important points, here. (I commend your presentation, btw.) It's taken me years to grok some of this, and I rather welcome the opportunity to try to lay some of it out where it can be seen as a whole. • Your first statement is incorrect; you say tags are "to draw attention" to a problem, but no, that is not their primary purpose. Their placement is chosen to draw attention, but that drawing of attention is merely a secondary enhancement to their primary function. Their primary function is to declare that in somebody's judgement, there is a problem. It would be grossly destructive to erase our institutional memory of such concerns, and would to a disservice to readers, to potential writers, and to the books involved. • Wikibooks has fundamentally different community structure that Wikipedia. • If you think of Wikipedia, Wikibooks, and Wikinews (I'm most familiar with those three projects, and have found them a useful progression to consider) simply as coherent projects, obviously Wikipedia is the largest, Wikinews the smallest, and Wikibooks of intermediate size. However, Wikipedia is, to a significant extent, a uniform pool of millions of articles, and it's expected that individual users, in general, roam moderately freely across many articles. On Wikinews, a given news article is only worked on by, typically, two people — a reporter and a reviewer. And Wikibooks has yet a different structure: in a sense, Wikibooks isn't a single project at all, but rather a confederation of about three thousand micro-projects, most (if not all) of them so much smaller that Wikinews is gigantic by comparison. These micro-projects are called "books". Each of them is far too small to warrant a whole administrative infrastructure on its own, and despite their many idiosyncracies they do have some properties in common, so they band together for a common administrative infrastructure. There is a fairly small community of users here who concern themselves with Wikibooks as a whole —some of those users are admins, a bunch more are not— and then each book has, at least in concept, its own community. However, the community of a book is profoundly different from that of a large project like Wikipedia, or even than the central-infrastructure community of Wikibooks. • Small projects, I have observed, have greater respect for users who came before; as project size shrinks, respect for the positions of past users rises. Wikipedia has far less; Wikinews has more; and on individual books, this effect reaches its highest form. It's not uncommon for a given book to have only one major contributor at a time; adopting a book is a common process here (one I've been through myself, both partial adoption and full adoption). There might be only one major editor on a book in a given year, or a given book might go for years without a major contributor. When one comes to a book, one generally puts much thought into understanding the approach to the book taken by one's predecessor(s) on the book, and if one does make significant changes it's after careful consideration and some notification and conceivably even inquiry on one or more user talk pages. It's sort-of as if past editors on a book are ghostly participants in current decision-making. The idea of removing tags on book after some fixed number of years is just fundamentally contrary to both the time-scale and the attitude toward predecessors. --Pi zero (discusscontribs) 17:19, 29 August 2018 (UTC) I must agree with Pi zero's first point; tags are to inform users that there is a problem as much as it is done to attract attention. However, I do think that tags should not be kept for too long; certainly not 11 years. There should be some appropriate discussion made to that effect (but not removal after x years like what you suggest); after all, a freshly-placed tag could be taken more seriously than a tag which was kept ages ago. Leaderboard (discusscontribs) 17:54, 29 August 2018 (UTC) To be clear —this may be compatible with what you're driving at— seems to me if a tag has been around for 11 years that's certainly a positive motive to do something about it, but how one goes about that is the same as it would be if one had acted much earlier. An old tag might still be taken seriously, I think; it's entirely dependent on the specifics of the situation. --Pi zero (discusscontribs) 18:51, 29 August 2018 (UTC) Sounds like User:Leaderboardand I are thinking in approximately the same direction, that tags should just languish forever. What I'm saying, is if a tag is placed on the page, whomever places the tag should start a discussion and state what's wrong. A tag without a discussion, and no additional comments on the tag means that we can't see what the tagger saw. I'm certainly not about to remove a tag that was placed on a day ago, even a week ago or a month ago where a discussion was started but no discussion is continuing, we all have real life to deal with :) . But if it's left on 3 years with no discussion started or the discussion has died off, sure. That's time enough to have at least made some change or reach some consensus, dont' you think. If that doesn't sound right, we can set some other criteria to it, like a greater year length with no discussion or a dead discussion and no consensus is reached, or some other kind of criteria. I'm certainly willing ! 19:48, 29 August 2018 (UTC) One of Wikipedia's faults (amongst others :-) is its red tape. Bureaucratic rules. I'd like not to import that feature from Wikipedia to other projects. If one has a case of that sort to remove a tag (if I'm following you, the argument is essentially, the tagger didn't explain, I don't see the problem, and after all this time there's no evidence that anyone else saw the problem and either commented or attempted to solve it, either), I'd suggest making the case for tag removal on the associated talk page, perhaps inviting comments from people (projects reading room possibly?), give it some time, and after a while, then remove the tag. Or some variant on that, depending on particular circumstances. --Pi zero (discusscontribs) 21:16, 29 August 2018 (UTC) Keep it simple in my opinion: If the book is inactive, and the person who placed the tag is inactive, and you as an informed, active, editor think that the tag is no longer required or was misplaced originally then remove it with a comment on the talk page. QuiteUnusual (discusscontribs) 08:06, 30 August 2018 (UTC) User:QuiteUnusual I agree with the keep it simple approach, and that's what I originally intended to do (Kind of an IAR thing we have over at Wikipedia (IAR = Ignore all rules) ). In fact, one of my first edits was this one, where I invoked IAR (with a detailed explanation of why I was removing the tag), in this case, Mike's Bot tagged the page as being non-neutral, back in 2007, and opened no discussion, nor had anyone else, and the user running Mike's bot has retired, so he can't be asked why this page was tagged, so I thought this was a good candidate for an IAR removal under those circumstances, that edit was reverted . I was messaged about it on my page and after a quick discussion I stated I wouldn't touch the tags on any page for the time being, that's the genesis of this whole discussion. There's no actual mechanism for removing tags, so I thought it might be a good idea to have some sort of mechanism for doing so when it's obvious that nothing is being done about the tag and more than enough time elapsed. Yes, I agree that there's plenty wrong with Wikipedia, that's why I'm over here and not over there, but that doesn't mean throw the whole model away, there are some things they do that are actually pretty useful, like IAR. Speaking of, what criteria would you use to determine if a tag should remain on a page or be removed ? 14:01, 30 August 2018 (UTC) ## Conversion of Books into Other eBook Formats While books on Wikibooks are, in many cases, supplied in both webpage and PDF formats, these formats are not necessarily portable to all use-cases. In particular, PDFs do not natively support word-wrapping, which makes them difficult to read on small devices or with bad eyesight. I propose that, in addition to offering in-browser and PDF documents, Wikibooks should offer either (or both) ePub format books and/or easily downloadable archives of the webpages that collectively make up individual books. These formats are simple, widely supported, and allow for both easy distribution and increased accessibility while offline. Thank you for your consideration. --198.119.59.10 (discuss) 15:49, 18 October 2018 (UTC) Actually it has already been implemented in 2012: Special:Book. JackPotte (discusscontribs) 21:10, 18 October 2018 (UTC)
2018-11-17 22:02:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42152947187423706, "perplexity": 1758.0245790933109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743854.48/warc/CC-MAIN-20181117205946-20181117231946-00550.warc.gz"}
http://crypto.stackexchange.com/questions/6090/hashing-a-password-with-the-password/6091
I was wondering if hashing a password with the password would be a good way of encrypting the password. So, the user must know his/her password to get the same result as the one in the database. Also, this would prevent decrypting of the password without the password... right? I've read articles and other SO questions/answers on hashing passwords, but was wondering if this would actually work. - First, "hashing a password with the password" is an undefined statement. Regardless, you are well into "inventing your own crypto" territory. Assuming you mean something along the lines of SHA-256(password + password), this is a phenomenally bad password digesting scheme. Being unsalted, your approach is vulnerable to rainbow attacks. Not using key stretching, your approach is vulnerable to brute forcing with GPGPUs. BCrypt is vulnerable to none of these. It uses cryptographically random salts, ensuring that duplicate passwords do not hash to the same value. And it requires a significant amount of work to be performed by an attacker in order to generate each hash, requiring tens to hundreds of thousands of times more computation per password attempt. Edit: In the comments, it was clarified that the meaning was something along the lines of SHA-256(password + SHA-256(password)). This is susceptible to precisely the same attacks: identical passwords hash to identical values, which is a serious flaw. And again, hashing is fast. Fast enough to brute force a huge number of your passwords, if given your password database. Real key derivation functions significantly increase the cost of brute forcing past the point where it's no longer a practical attack vector. - I understood it as HMAC(password, password) or something like that, but you are right. Plus it doesn't really have any functional advantage over just hashing the password, and may in fact be weaker due to over-engineering. –  Thomas Jan 23 '13 at 19:51 Well, I meant more along the lines of hash(pass,hash(pass)); I get what you are saying though and will look into BCrypt –  SchautDollar Jan 23 '13 at 19:55 Alright, thanks for clarifying. –  SchautDollar Jan 23 '13 at 22:31
2015-09-05 08:05:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4334648847579956, "perplexity": 1204.1956887625404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645396463.95/warc/CC-MAIN-20150827031636-00148-ip-10-171-96-226.ec2.internal.warc.gz"}
http://www.math.toronto.edu/~drorbn/Talks/MUGS-1501/index.html
Handout: Commutators-Handout.pdf. Slides: Commutators-Slides.nb. Talk video: . Sources: pensieve. © | << < ? > >> | Dror Bar-Natan: Talks: # Commutators ## Math Union Guest Speaker ### 4PM Friday January 16 2015, Bahen B024 Abstract. The commutator of two elements $x$ and $y$ in a group $G$ is $xyx^{-1}y^{-1}$. That is, $x$ followed by $y$ followed by the inverse of $x$ followed by the inverse of $y$. In my talk I will tell you how commutators are related to the following four riddles: 1. Can you send a secure message to a person you have never communicated with before (neither privately nor publicly), using a messenger you do not trust? 2. Can you hang a picture on a string on the wall using $n$ nails, so that if you remove any one of them, the picture will fall? 3. Can you draw an $n$-component link (a knot made of $n$ non-intersecting circles) so that if you remove any one of those $n$ components, the remaining $n-1$ will fall apart? 4. Can you solve the quintic in radicals? Is there a formula for the zeros of a degree $5$ polynomial in terms of its coefficients, using only the operations on a scientific calculator? Prerequisites. 1. The first week of any class on group theory. 2. Knowing that every complex number other than to $0$ has exactly $n$ roots of order $n$, and how to compute them.
2022-07-04 05:10:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5049663186073303, "perplexity": 593.6274856458368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104354651.73/warc/CC-MAIN-20220704050055-20220704080055-00424.warc.gz"}
https://tex.stackexchange.com/questions/78362/miktex-cant-write-on-pdf-file
# MiKTeX can't write on pdf file [closed] I am using MiKTeX 2.9 on Windows 7. I'm getting the error: "can't write on pdf file." The pdf doesn't reflect what I wrote. What did I do wrong? My input: \documentclass[12pt]{article} \usepackage{epsf} \usepackage{amssymb} \usepackage{graphicx} \usepackage{setspace} \usepackage{amsmath} \usepackage{rotate} \usepackage{endnotes} %\usepackage{harvard} \usepackage{vmargin} \usepackage{natbib} \bibliographystyle{apsr} \oddsidemargin=1.0in\evensidemargin=0.in \textheight=8.5in\textwidth=6.6 in \baselineskip=18pt \parskip=6pt TEXT \newpage \baselineskip=12pt \doublespacing \bibliographystyle{apsa} \bibliography{Dissertation_Proposal} \end{document} • You're missing a \begin{document}. I'm guessing the log file will state this clearly or your editor. Oct 19, 2012 at 20:38 • No, I have: \oddsidemargin=1.0in\evensidemargin=0.in \textheight=8.5in\textwidth=6.6 in \baselineskip=18pt \parskip=6pt %opening \title{Title} \author{Author} \begin{document} \maketitle \begin{abstract} Oct 19, 2012 at 20:40 • I can't see what you posted in the comment in the actual question. Oct 19, 2012 at 20:43 • This can happen when the output PDF file is locked by another application, e.g. it is open in Adobe Reader. Oct 19, 2012 at 20:54 • You were right. The error was caused by the pdf already being open. Closing it fixed the problem. Thanks! Oct 19, 2012 at 21:07 The error message is generated by the prompt_file_name procedure in TeX. Most likely causes: • The output PDF file already exists and is locked by another application, e.g. Adobe Reader (on Windows). • In this case, one can usually close the open document (or close the PDF viewer entirely), retype the output file name in TeX's prompt, and press ENTER to try again. • If this doesn't work, try deleting the PDF file in Windows Explorer, and hopefully it will tell you what application is the culprit. • You could also try a tool like Unlocker. (If you are interested in finding a PDF viewer that does not lock open documents, see this related question on superuser.) • Insufficient hard disk space. • Insufficient memory. See https://tex.stackexchange.com/a/16901/17427 • Trying to write to a directory that doesn't exist. (This is more likely if you are writing external files through something like tikzexternalize, rather than for the main document output file)*. • Trying to write in a protected/unsafe location. See for example http://www.tex.ac.uk/cgi-bin/texfaq2html?label=includeother and https://tex.stackexchange.com/a/2214/17427 A couple of things to note: • One cannot type x at this prompt to abort. That will write on a file called x.pdf. • In pdflatex, pressing ENTER without retyping the filename unhelpfully writes on a file called .pdf. • In lualatex, pressing ENTER without retyping the filename prompts again. • In xelatex, this prompt doesn't occur. Instead, one seems to get the message ** ERROR ** Unable to open ... and then it exits. *Thanks to Sextus Empiricus for pointing this one out. • I've deleted the pdf and it worked. Thank you Jul 1, 2015 at 20:01 • @cyberSingularity I came across the same problem "can't write on" in my case it was related to Tikz wanting to write in a folder (external figures) that did not exist. You might wish to add this to the solution. Dec 10, 2019 at 21:24 • @SextusEmpiricus: Sorry for the belated action, but I've now done this. Many thanks! Jan 31, 2020 at 11:01 • I think I found an additional cause: I got the same error because of Avast antivirus "protecting" the documents folder against MikiText or some app in the Latex editor. That happened to me when rendering RMarkdown to pdf in RStudio. – Pere Oct 26, 2020 at 17:00
2022-08-14 05:50:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8245830535888672, "perplexity": 3110.57230097914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571996.63/warc/CC-MAIN-20220814052950-20220814082950-00376.warc.gz"}
http://codeforces.com/problemset/problem/568/E
E. Longest Increasing Subsequence time limit per test 1.5 seconds memory limit per test 128 megabytes input standard input output standard output Note that the memory limit in this problem is less than usual. Let's consider an array consisting of positive integers, some positions of which contain gaps. We have a collection of numbers that can be used to fill the gaps. Each number from the given collection can be used at most once. Your task is to determine such way of filling gaps that the longest increasing subsequence in the formed array has a maximum size. Input The first line contains a single integer n — the length of the array (1 ≤ n ≤ 105). The second line contains n space-separated integers — the elements of the sequence. A gap is marked as "-1". The elements that are not gaps are positive integers not exceeding 109. It is guaranteed that the sequence contains 0 ≤ k ≤ 1000 gaps. The third line contains a single positive integer m — the number of elements to fill the gaps (k ≤ m ≤ 105). The fourth line contains m positive integers — the numbers to fill gaps. Each number is a positive integer not exceeding 109. Some numbers may be equal. Output Print n space-separated numbers in a single line — the resulting sequence. If there are multiple possible answers, print any of them. Examples Input 31 2 3110 Output 1 2 3 Input 31 -1 331 2 3 Output 1 2 3 Input 2-1 222 4 Output 2 2 Input 3-1 -1 -151 1 1 1 2 Output 1 1 2 Input 4-1 -1 -1 241 1 2 2 Output 1 2 1 2 Note In the first sample there are no gaps, so the correct answer is the initial sequence. In the second sample there is only one way to get an increasing subsequence of length 3. In the third sample answer "4 2" would also be correct. Note that only strictly increasing subsequences are considered. In the fifth sample the answer "1 1 1 2" is not considered correct, as number 1 can be used in replacing only two times.
2018-04-25 17:29:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23270000517368317, "perplexity": 672.1269509120325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947931.59/warc/CC-MAIN-20180425154752-20180425174752-00199.warc.gz"}
https://www.esaral.com/q/distance-of-the-point-3-4-5-40603
# Distance of the point (3, 4, 5) Question: Distance of the point (3, 4, 5) from the origin (0, 0, 0) is (A) √50 (B) 3 (C) 4 (D) 5 Solution: (A) √50 Explanation: Let $P$ be the point whose coordinate is $(3,4,5)$ and $Q$ represents the origin. From distance formula we can write as $P Q=\sqrt{(3-0)^{2}+(4-0)^{2}+(5-0)^{2}}$ $=\sqrt{9+16+25}$ $=\sqrt{50}$ $\therefore$ Distance of the point $(3,4,5)$ from the origin $(0,0,0)$ is $\sqrt{50}$ units. Hence, option (A) is the only correct choice.
2023-02-07 21:01:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9133753776550293, "perplexity": 375.44981980592524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500641.25/warc/CC-MAIN-20230207201702-20230207231702-00066.warc.gz"}
https://openreview.net/forum?id=nhN-fqxmNGx
## A Comparison of Hamming Errors of Representative Variable Selection Methods 29 Sept 2021, 00:35 (modified: 24 Mar 2022, 05:18)ICLR 2022 PosterReaders: Everyone Keywords: Lasso, Hamming error, phase diagram, rare and weak signals, elastic net, SCAD, thresholded Lasso, forward selection, forward backward selection Abstract: Lasso is a celebrated method for variable selection in linear models, but it faces challenges when the covariates are moderately or strongly correlated. This motivates alternative approaches such as using a non-convex penalty, adding a ridge regularization, or conducting a post-Lasso thresholding. In this paper, we compare Lasso with 5 other methods: Elastic net, SCAD, forward selection, thresholded Lasso, and forward backward selection. We measure their performances theoretically by the expected Hamming error, assuming that the regression coefficients are ${\it iid}$ drawn from a two-point mixture and that the Gram matrix is block-wise diagonal. By deriving the rates of convergence of Hamming errors and the phase diagrams, we obtain useful conclusions about the pros and cons of different methods. One-sentence Summary: A theoretical comparison of the Hamming errors for 6 different variable selection methods Supplementary Material: zip 5 Replies
2023-02-02 18:31:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6534084677696228, "perplexity": 2832.9140688660545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500035.14/warc/CC-MAIN-20230202165041-20230202195041-00528.warc.gz"}
http://www.tug.org/twg/mfg/mail-html/1998-01/msg00274.html
# Re: Binary Relations, draft 1 • To: math-font-discuss@cogs.susx.ac.uk • Subject: Re: Binary Relations, draft 1 • From: Hans Aberg <haberg@matematik.su.se> • Date: Mon, 16 Nov 1998 15:37:43 +0100 • Content-Length: 1577 ```At 22:58 +0100 1998/11/15, Taco Hoekwater wrote: >>>>>> "MI" == MicroPress Inc <support@micropress-inc.com> writes: > MI> 2. Chars like \includeequal (176 for example): > MI> perhaps the line should be shorter than the "\include" part? > >Will experiment. It seems that this in print is often equal to the full length if the \include part. The main point though is that it looks good from the graphical point of view (from the mathematical point of view, I do not think it makes a difference). > MI> 4. What are you going to do when you run out of math families (16!)? > >Panic! > >Seriously, there are 15 families I can use, so that should be enough. >And the current layout is not at all final. We might move about half >of the characters out to "special purpose" fonts (as we have now in >TeX, where the wasy2 and stmaryrd fonts fill this role). I think that the blackboard bold characters should be added to Unicode, for the reason that they are used in mathematics as symbols with semantic meaning, and not as a special style font. For example, it is pretty much standard to use blackboard bold R, Z denoting the sets of reals and integers, and it is becoming to be more common to use say blackboard bold small letters to denote certain fixed objects (such as blackboard bold i, j, k to denote the imaginary units in quaternions). Hans Aberg * Email: Hans Aberg <mailto:haberg@member.ams.org>
2017-10-22 01:11:00
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8180391192436218, "perplexity": 5477.066736290138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824931.84/warc/CC-MAIN-20171022003552-20171022023552-00354.warc.gz"}
https://forum.allaboutcircuits.com/threads/fpga-board-design-with-2-mipi-csi-2-camera-modules-at-4k-60fps.178795/
# FPGA board design with 2 MIPI CSI-2 camera modules at 4k 60FPS #### sijafae Joined Nov 12, 2020 4 Hello, I am trying to design an FPGA board that has 2 MIPI cameras on it with 4 data lanes each providing a 4k resolution and 60 FPS data stream to a computer. I am thinking of using a Lattice ECP5 for the FPGA board, but it seems as if its MIPI CSI-2 receiver can only do a max 900 Mbps data transfer per lane, while I need about 1.2 Gbps (assuming RAW8). I see that the Lattice's Crosslink can go up to 1.5 Gbps per lane, but this FPGA only got about 5k LUTs. The reason the size of Crosslink is a problem is that I want the FPGA to do some image processing before being sent as a live data stream to the computer. I have seen e.g. the Embedded Vision Development Board that uses both a Crosslink and ECP5 separated on 2 different boards: 1 to receive the data stream from the cameras (Crosslink) and merges the 2 video streams, and the other for processing the video stream (ECP5). However, I am not sure what interface they are using between these two FPGAs (MIPI CSI-2 I believe?). Thus, my question is... Is it possible to receive the camera video stream with the crosslink, and convert it to a form that can be sent to ECP5, such that 1.2 Gbps transfer per lane (4.8 Gbps per video stream) is possible from Crosslink to ECP5 instead of the 900 Mbps that is the limit on the ECP5's MIPI CSI-2 receiver? #### sijafae Joined Nov 12, 2020 4 Hello FlyingDutch, I have looked at using Xilinx FPGA's and SOC's to make this system. However, these components are extremely expensive, where 1 IC can cost about $1000. Instead, I want to make the system using Lattice FPGA due to the low cost to make the system. From what I believe I understand so far, the Crosslink board can receive a higher data rate per lane on the MIPI CSI-2. The Crosslink board can then convert this input to a different interface. This new interface will then be connected to an ECP5, where the image processing happens before using e.g. USB3.0 (CYUSB3014) or Ethernet ( DP 83867R) to transfer about 5 Gbps to the computer. #### FlyingDutch Joined Mar 16, 2021 49 Hello FlyingDutch, I have looked at using Xilinx FPGA's and SOC's to make this system. However, these components are extremely expensive, where 1 IC can cost about$1000. Instead, I want to make the system using Lattice FPGA due to the low cost to make the system. Hello @sijafae, I understand your point of view, but This SoM module don't cost 1000$- just 250$ (all SOM module). You need of course design PCB module for this SOM and needed peripherals. See Xilinx product page: https://www.xilinx.com/products/som/kria/k26c-commercial.html BYW: Development kit SOM+motherboard cost about 230\$ at many distributors. Best Regards Last edited:
2021-06-19 10:15:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17369717359542847, "perplexity": 3549.782621402442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487647232.60/warc/CC-MAIN-20210619081502-20210619111502-00568.warc.gz"}
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ndjfl/1039724885
### Computable Models of Theories with Few Models Bakhadyr Khoussainov, Andre Nies, and Richard A. Shore Source: Notre Dame J. Formal Logic Volume 38, Number 2 (1997), 165-178. #### Abstract In this paper we investigate computable models of -categorical theories and Ehrenfeucht theories. For instance, we give an example of an -categorical but not -categorical theory such that all the countable models of except its prime model have computable presentations. We also show that there exists an -categorical but not -categorical theory such that all the countable models of except the saturated model, have computable presentations. First Page: Primary Subjects: 03C15 Secondary Subjects: 03C35, 03C57 Full-text: Open access Permanent link to this document: http://projecteuclid.org/euclid.ndjfl/1039724885 Mathematical Reviews number (MathSciNet): MR1489408 Digital Object Identifier: doi:10.1305/ndjfl/1039724885 Zentralblatt MATH identifier: 0891.03013 ### References [1] Baldwin, J., and A. Lachlan, On strongly minimal sets,'' The Journal of Symbolic Logic, vol. 36 (1971), pp. 79--96. Mathematical Reviews (MathSciNet): MR44:3851 Zentralblatt MATH: 0217.30402 Digital Object Identifier: doi:10.2307/2271517 [2] Ershov, Yu., Constructive Models and Problems of Decidability, Nauka, Moskow, 1980. Mathematical Reviews (MathSciNet): MR598465 [3] Goncharov, S., Constructive models of $\omega_1$-categorical theories,'' Matematicheskie Zametki, vol. 23 (1978), pp. 885--9. Mathematical Reviews (MathSciNet): MR80g:03029 Zentralblatt MATH: 0403.03025 [4] Goncharov, S., Strong constructivability of homogeneous models,'' Algebra and Logic, vol. 17 (1978), pp. 363--8. Mathematical Reviews (MathSciNet): MR538302 [5] Logic Notebook, edited by Yu. Ershov and S. Goncharov, Novosibirsk University, Novosibirsk, 1986. Mathematical Reviews (MathSciNet): MR88k:03003 [6] Khissamiev, N.,On strongly constructive models of decidable theories,'' Izvestiya AN Kaz. SSR, vol. 1 (1974), pp. 83--4. Mathematical Reviews (MathSciNet): MR354344 [7] Kudeiberganov, K., On constructive models of undecidable theories,'' Siberian Mathematical Journal, vol. 21 (1980), pp. 155--8. Mathematical Reviews (MathSciNet): MR592228 [8] Harrington, L., Recursively presentable prime models,'' The Journal of Symbolic Logic, vol. 39 (1973), pp. 305--9. Mathematical Reviews (MathSciNet): MR50:4292 Zentralblatt MATH: 0332.02055 Digital Object Identifier: doi:10.2307/2272643 [9] Millar, T., The theory of recursively presented models,'' Ph.D. Dissertation, Cornell University, Ithaca, 1976. [10] Morley, M., Decidable models,'' Israel Journal of Mathematics, vol. 25 (1976), pp. 233--40. Mathematical Reviews (MathSciNet): MR56:15405 Zentralblatt MATH: 0361.02067 Digital Object Identifier: doi:10.1007/BF02757002 [11] Peretýatkin, M., On complete theories with finite number of countable models,'' Algebra and Logic, vol. 12 (1973), pp. 550--70. Zentralblatt MATH: 0298.02047 Mathematical Reviews (MathSciNet): MR354347 [12] Rosenstein, J., Linear Orderings, Academic Press, New York, 1982. Mathematical Reviews (MathSciNet): MR84m:06001 Zentralblatt MATH: 0488.04002
2013-12-06 16:40:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4824419617652893, "perplexity": 13231.299310661034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052204/warc/CC-MAIN-20131204131732-00009-ip-10-33-133-15.ec2.internal.warc.gz"}
http://gateoverflow.in/39545/gate-2016-2-04
+6 votes 807 views Consider the system, each consisting of $m$ linear equations in $n$ variables. 1. If $m < n$, then all such systems have a solution. 2. If $m > n$, then none of these systems has a solution. 3. If $m = n$, then there exists a system which has a solution. Which one of the following is CORRECT? 1. $I, II$ and $III$ are true. 2. Only $II$ and $III$ are true. 3. Only $III$ is true. 4. None of them is true. asked edited | 807 views ## 2 Answers +9 votes Best answer Correct answer => C) why ? I) This is false. Consider a system with m < n, which are incosistent like a+b+c = 2 a+b+c = 3 Here m < n but no solution because of inconsistency ! II) m > n but no solution for none of system => What if this system of equations have 2 equations which are dependent ? ex => a+b = 2 2a + 2b = 4 a-b = 0 Then a = 1, b= 1 is solutions . II) Is false. III) this is true, M = 2, N = 2 a+b = 2 a-b = 0 Then m= 1, n= 1 Now there exists system which has solution . III) is correct. Answer is C ! answered by Veteran (40.7k points) selected but for the case of two parallel lines example y=x+5 and y=x+6 for these equations no solution so c should also be false. You need to read the statement III) Clearly. What you are deducing is incorrect ! If m = n , the there exists a system which has a solution * there exists * Counter example is used for disproving, for all, not * there exists * –1 vote Answer D: I & II are false already but similarly for III also for the case of two parallel lines example y=x+5 and y=x+6 for these equations no solution so c should also be false. Hence D is correct ans. answered by (163 points) You need to read the statement III) Clearly. What you are deducing is incorrect ! If m = n , the there exists a system which has a solution * there exists * Counter example is used for disproving, for all, not * there exists * Answer: +4 votes 2 answers 1 +3 votes 1 answer 2 +3 votes 2 answers 3
2017-01-21 00:06:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.618189811706543, "perplexity": 1417.5124190425365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00210-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.bookofproofs.org/branches/domain-of-discourse/
Welcome guest You're not logged in. 283 users online, thereof 0 logged in After defining what truth values are, we will next explain, in which context it makes sense to assign a truth value to a string of a given formal language. Imagine, we want to assign a truth value to the following English sentence: “The equation $x+4 = 2$ has always a solution.” Remember, that we are now at the model level of our study and this sentence is up to now a string of some formal language without any meaning. If we want to construct a logical calculus, in which we can assign a truth value to this string, we have to pay attention to the context, in which we do so. Otherwise, our assignment might be ambiguous. For instance, if the context is integers, and our logical calculus is laying out the corresponding theory, then our logical calculus should be able to assign the value true to this string because the integer $$x = -2$$ is a solution. But if the context is natural numbers, then the string is false, since there is no such natural number $x$, for which $x+4=2$. The context, in which we are studying the logic of given strings is known as the domain of discourse. Now, we will define it formally: ## Definition: Domain of Discourse A domain of discourse (also called universe of discourse or just universe) is a non-empty universal set $U$, in which we study a given formal language $$L\subseteq (\Sigma^*,\cdot)$$ over an alphabet $$\Sigma$$. | | | | | created: 2016-10-04 23:30:07 | modified: 2020-05-04 18:53:16 | by: bookofproofs, guest
2022-08-11 14:04:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6427198052406311, "perplexity": 390.4128552277902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571472.69/warc/CC-MAIN-20220811133823-20220811163823-00015.warc.gz"}
https://www.biostars.org/p/9475555/
How to generate GTF file from BLAST output? 2 0 Entering edit mode 5 weeks ago Hello All, How to create a GTF file, I have only sequence level of information available (fasta format) Note: I am working on mitochondrial genome, no GTF file available in any repository Regards Mathavan M GTF • 293 views 0 Entering edit mode A GTF file of what? In general you question is lacking some crucial bits of information to get meaningful answers/support here. 0 Entering edit mode I am working on bovine genome 0 Entering edit mode Since the title of the question refers to converting a BLAST output to GTF here is one representative tool (LINK). You will need to see if it works for your purpose. There are others but YMMV. 3 Entering edit mode 5 weeks ago The Ensembl GTFs contain the mitochondrial genes. 0 Entering edit mode Mam I working on Bubalus bubalis (water buffalo) 2 Entering edit mode 5 weeks ago I don't think this is what the OP wants, but to answer the actual question: Probably the easiest way to get BLAST output as a GTF file is to use commandline BLAST, and to set the outfmt to 17 (SAM), and then convert the SAM output to gff. And the easiest way to do that might be sam2bed and then bed2gff from bedops.
2021-07-27 05:00:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.256582647562027, "perplexity": 7423.630850219974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152236.64/warc/CC-MAIN-20210727041254-20210727071254-00491.warc.gz"}
http://taggedwiki.zubiaga.org/new_content/f3471c5c2891a5de499a2f6ff2308dba
ROC curve of three epitope predictors. In signal detection theory, a receiver operating characteristic (ROC), or simply ROC curve, is a graphical plot of the sensitivity vs. (1 - specificity) for a binary classifier system as its discrimination threshold is varied. The ROC can also be represented equivalently by plotting the fraction of true positives (TPR = true positive rate) vs. the fraction of false positives (FPR = false positive rate). Also known as a Relative Operating Characteristic curve, because it is a comparison of two operating characteristics (TPR & FPR) as the criterion changes.[1] ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from (and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way to cost/benefit analysis of diagnostic decision making. The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battle fields, also known as the signal detection theory. ROC analysis has more recently been used in medicine, radiology, psychology, and other areas for many decades, and it has been introduced relatively recently in other areas like machine learning and data mining. ## Basic concept true positive (TP) eqv. with hit true negative (TN) eqv. with correct rejection false positive (FP) eqv. with false alarm, Type I error false negative (FN) eqv. with miss, Type II error true positive rate (TPR) eqv. with hit rate, recall, sensitivity TPR = TP / P = TP / (TP + FN) false positive rate (FPR) eqv. with false alarm rate, fall-out FPR = FP / N = FP / (FP + TN) accuracy (ACC) ACC = (TP + TN) / (P + N) specificity (SPC) or True Negative Rate SPC = TN / N = TN / (FP + TN) = 1 − FPR positive predictive value (PPV) eqv. with precision PPV = TP / (TP + FP) negative predictive value (NPV) NPV = TN / (TN + FN) false discovery rate (FDR) FDR = FP / (FP + TP) Matthews Correlation Coefficient (MCC) $\mbox{MCC} = (\mbox{TP}\; \mbox{TN} - \mbox{FP}\; \mbox{FN})/ \sqrt{P N P' N'}$ Source: Fawcett (2004). A classification model (classifier or diagnosis) is a mapping of instances into a certain class/group. The classifier or diagnosis result can be in a real value (continuous output) in which the classifier boundary between classes must be determined by a threshold value, for instance to determine whether a person has hypertension based on blood pressure measure, or it can be in a discrete class label indicating one of the classes. Let us consider a two-class prediction problem (binary classification), in which the outcomes are labeled either as positive (p) or negative (n) class. There are four possible outcomes from a binary classifier. If the outcome from a prediction is p and the actual value is also p, then it is called a true positive (TP); however if the actual value is n then it is said to be a false positive (FP). Conversely, a true negative has occurred when both the prediction outcome and the actual value are n, and false negative is when the prediction outcome is n while the actual value is p. To get an appropriate example in a real-world problem, consider a diagnostic test that seeks to determine whether a person has a certain disease. A false positive in this case occurs when the person tests positive, but actually does not have the disease. A false negative, on the other hand, occurs when the person tests negative, suggesting they are healthy, when they actually do have the disease. Let us define an experiment from P positive instances and N negative instances. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows: actual value p n total prediction outcome p' True Positive False Positive P' n' False Negative True Negative N' total P N ## ROC space The ROC space and plots of the four prediction examples. (Note: this diagram is incorrect; see discussion.) The contingency table can derive several evaluation metrics (see infobox). To draw a ROC curve, only the true positive rate (TPR) and false positive rate (FPR) are needed. TPR determines a classifier or a diagnostic test performance on classifying positive instances correctly among all positive samples available during the test. FPR, on the other hand, defines how many incorrect positive results occur among all negative samples available during the test. A ROC space is defined by FPR and TPR as x and y axes respectively, which depicts relative trade-offs between true positive (benefits) and false positive (costs). Since TPR is equivalent with sensitivity and FPR is equal to 1 - specificity, the ROC graph is sometimes called the sensitivity vs (1 - specificity) plot. Each prediction result or one instance of a confusion matrix represents one point in the ROC space. The best possible prediction method would yield a point in the upper left corner or coordinate (0,1) of the ROC space, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives). The (0,1) point is also called a perfect classification. A completely random guess would give a point along a diagonal line (the so-called line of no-discrimination) from the left bottom to the top right corners. An intuitive example of random guessing is a decision by flipping coins (head or tail). The diagonal line divides the ROC space in areas of good or bad classification/diagnostic. Points above the diagonal line indicate good classification results, while points below the line indicate wrong results (although the prediction method can be simply inverted to get points above the line). Let us look into four prediction results from 100 positive and 100 negative instances: A B C C' TP=63 FP=28 91 FN=37 TN=72 109 100 100 200 TP=77 FP=77 154 FN=23 TN=23 46 100 100 200 TP=24 FP=88 112 FN=76 TN=12 88 100 100 200 TP=88 FP=24 112 FN=12 TN=76 88 100 100 200 TPR = 0.63 TPR = 0.77 TPR = 0.24 TPR = 0.88 FPR = 0.28 FPR = 0.77 FPR = 0.88 FPR = 0.24 ACC = 0.68 ACC = 0.50 ACC = 0.18 ACC = 0.82 Plots of the four results above in the ROC space are given in the figure. The result A clearly shows the best among A, B, and C. The result B lies on the random guess line (the diagonal line), and it can be seen in the table that the accuracy of B is 50%. However, when C is mirrored onto the diagonal line, as seen in C', the result is even better than A. Since this mirrored C method or test simply reverses the predictions of whatever method or test produced the C contingency table, the C method has positive predictive power simply by reversing all of its decisions. When the C method predicts p or n, the C' method would predict n or p, respectively. In this manner, the C' test would perform the best. While the closer a result from a contingency table is to the upper left corner the better it predicts, the distance from the random guess line in either direction is the best indicator of how much predictive power a method has, albeit, if it is below the line, all of its predictions including its more often wrong predictions must be reversed in order to utilize the method's power. ## Curves in ROC space Discrete classifiers, such as decision tree or rule set, yield numerical values or binary label. When a set is given to such classifiers, the result is a single point in the ROC space. For other classifiers, such as naive Bayesian and neural network, they produce probability values representing the degree to which class the instance belongs to. For these methods, setting a threshold value will determine a point in the ROC space. For instance, if probability values below or equal to a threshold value of 0.8 are sent to the positive class, and other values are assigned to the negative class, then a confusion matrix can be calculated. Plotting the ROC point for each possible threshold value results in a curve. ## Further interpretations How a ROC curve can be interpreted Sometimes, the ROC is used to generate a summary statistic. Three common versions are: • the intercept of the ROC curve with the line at 90 degrees to the no-discrimination line • the area between the ROC curve and the no-discrimination line • the area under the ROC curve, or "AUC", or A' (pronounced "a-prime") [2] • d' (pronounced "d-prime"), the distance between the mean of the distribution of activity in the system under noise-alone conditions and its distribution under signal plus noise conditions, divided by their standard deviation, under the assumption that both these distributions are normal with the same standard deviation. Under these assumptions, it can be proved that the shape of the ROC depends only on d'. The AUC is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one.[3]It can be shown that the area under the ROC curve is equivalent to the Mann-Whitney U, which tests for the median difference between scores obtained in the two groups considered if the groups are of continuous data. It is also equivalent to the Wilcoxon test of ranks. The AUC has been found to be related to the Gini coefficient(G) by the following formula[4] G1 + 1 = 2xAUC, where: $G_1 = 1 - \sum_{k=1}^{n} (X_{k} - X_{k-1}) (Y_{k} + Y_{k-1})$ In this way, it is possible to calculate the AUC by using an average of a number of trapezoidal approximations. However, any attempt to summarize the ROC curve into a single number loses information about the pattern of tradeoffs of the particular discriminator algorithm. The machine learning community most often uses the ROC AUC statistic for model comparison[5]. This measure can be interpreted as the probability that when we randomly pick one positive and one negative example, the classifier will assign a higher score to the positive example than to the negative. In engineering, the area between the ROC curve and the no-discrimination line is often preferred, because of its useful mathematical properties as a non-parametric statistic. This area is often simply known as the discrimination. In psychophysics, d' is the most commonly used measure. The illustration at the top right of the page shows the use of ROC graphs for the discrimination between the quality of different epitope predicting algorithms. If you wish to discover at least 60% of the epitopes in a virus protein, you can read out of the graph that about 1/3 of the output would be falsely marked as an epitope. The information that is not visible in this graph is that the person that uses the algorithms knows what threshold settings give a certain point in the ROC graph. Sometimes it can be more useful to look at a specific region of the ROC Curve rather than at the whole curve. It is possible to compute partial AUC.[6] For example, one could focus on the region of the curve with low false positive rate, which is often of prime interest for population screening tests.[7] ## History The ROC curve was first used during World War II for the analysis of radar signals before it was employed in signal detection theory.[8] Following the attack on Pearl Harbor in 1941, the United States army began new research to increase the prediction of correctly detected Japanese aircraft from their radar signals. In the 1950s, ROC curves were employed in psychophysics to assess human (and occasionally non-human animal) detection of weak signals.[8] In medicine, ROC analysis has been extensively used in the evaluation of diagnostic tests.[9][10] ROC curves are also used extensively in epidemiology and medical research and are frequently mentioned in conjunction with evidence-based medicine. In radiology, ROC analysis is a common technique to evaluate new radiology techniques.[11]. In the social sciences, ROC analysis is often called the ROC Accuracy Ratio, a common technique for judging the accuracy of default probability models. ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms.[12] ## References 1. ^ Signal detection theory and ROC analysis in psychology and diagnostics : collected papers; Swets, 1996 2. ^ J. Fogarty, R. Baker, S. Hudson (2005). "Case studies in the use of ROC curve analysis for sensor-based estimates in human computer interaction". ACM International Conference Proceeding Series, Proceedings of Graphics Interface 2005, Waterloo, Ontario, Canada: Canadian Human-Computer Communications Society. 3. ^ Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognition Letters, 27, 861-874. 4. ^ Hand, D.J., & Till, R.J. (2001). A simple generalization of the area under the ROC curve to multiple class classification problems. Machine Learning, 45, 171-186. 5. ^ Hanley, JA; BJ McNeil (1983-09-01). "A method of comparing the areas under receiver operating characteristic curves derived from the same cases". Radiology 148 (3): 839-843. Retrieved on 2008-12-03. 6. ^ McClish, Donna Katzman (1989-08-01). "Analyzing a Portion of the ROC Curve". Med Decis Making 9 (3): 190-195. doi:10.1177/0272989X8900900307. Retrieved on 2008-09-29. 7. ^ Dodd, Lori E.; Margaret S. Pepe (2003). "Partial AUC Estimation and Regression". Biometrics 59 (3): 614-623. doi:10.1111/1541-0420.00071. Retrieved on 2007-12-18. 8. ^ a b D.M. Green and J.M. Swets (1966). Signal detection theory and psychophysics. New York: John Wiley and Sons Inc.. ISBN 0-471-32420-5. 9. ^ M.H. Zweig and G. Campbell (1993). "Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine". Clinical chemistry 39 (8): 561–577. PMID 8472349. 10. ^ M.S. Pepe (2003). The statistical evaluation of medical tests for classification and prediction. New York: Oxford. 12. ^ Spackman, K. A. (1989). "Signal detection theory: Valuable tools for evaluating inductive learning". Proceedings of the Sixth International Workshop on Machine Learning: 160–163, San Mateo, CA: Morgan Kaufman. ### General references • Balakrishnan, N., Handbook of the Logistic Distribution, Marcel Dekker, Inc., 1991, ISBN-13: 978-0824785871. • Gonen M., Analyzing Receiver Operating Characteristic Curves Using SAS, SAS Press, 2007, ISBN: 978-1-59994-298-1. • Green, William H., Econometric Analysis, fifth edition, Prentice Hall, 2003, ISBN 0-13-066189-9. • Hosmer, David W. and Stanley Lemeshow, Applied Logistic Regression, 2nd ed., New York; Chichester, Wiley, 2000, ISBN 0-471-35632-8. • Lasko, T. A., J.G. Bhagwat, K.H. Zou and L. Ohno-Machado (Oct. 2005). The use of receiver operating characteristic curves in biomedical informatics. Journal of Biomedical Informatics 38(5):404-415. PMID 16198999 • Mason, S. J. and N.E. Graham, Areas beneath the relative operating characteristics (ROC) and relative operating levels (ROL) curves: Statistical significance and interpretation. Q.J.R. Meteorol. Soc. (2002), 128, pp. 2145–2166. • Pepe, M. S. (2003). The statistical evaluation of medical tests for classification and prediction. Oxford. ISBN 0198565828 • Stephan, Carsten, Sebastian Wesseling, Tania Schink, and Klaus Jung. Comparison of Eight Computer Programs for Receiver-Operating Characteristic Analysis. Clin. Chem., Mar 2003; 49: 433 - 439. [1] • Swets, J.A. (1995). Signal detection theory and ROC analysis in psychology and diagnostics: Collected papers. Lawrence Erlbaum Associates. • Swets, J., Dawes, R., and Monahan, J. Better Decisions through Science. Scientific American, Oct 2000, pp. 82-87.
2021-10-24 21:16:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6386997103691101, "perplexity": 1868.573065813288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587606.8/warc/CC-MAIN-20211024204628-20211024234628-00334.warc.gz"}
https://www.physicsforums.com/threads/modelling-the-spread-of-disease-in-matlab-runge-kutta.527024/
Homework Help: Modelling the spread of disease (in MatLab, Runge-Kutta) 1. Sep 4, 2011 Corrigan 1. The problem statement, all variables and given/known data Hi! I'm to build a model which solves the basic dynamic eqns in the so called SIR model in epidemology. I'm using MatLab and RK4. There are tree populations in this model: Succeptible S, infected I and recovered R. 2. Relevant equations The eqations are dS/dt = -beta * S*I/N dI/dt = beta * S*I/N - alpha*I dR/dt = alpha*I where alpha and beta are rates of recovery and infection (constants). Vaccination will be introduced aswell, but I have set that to 0 for now. 3. The attempt at a solution My base .m-file is this: Code (Text): clear all close all %Step size and number of iterations are defined dt=1; N=1000;         %days P = 1000000; %The initial conditions are entered as the first row in the array z. z(1)=P-1;             % = S z(2)=1;               % = I z(3)=1;               % = R t(1)=0; %RK4 on function for n=1:(N) k1=P1f(t(n),z(n,:)); k2=P1f(t(n)+dt/2,z(n,:)+1/2*k1*dt); k3=P1f(t(n)+dt/2,z(n,:)+1/2*k2*dt); k4=P1f(t(n)+dt,z(n,:)+k3*dt); z(n+1,:)=z(n,:)+1/6*(k1+2*k2+2*k3+k4)*dt; t(n+1)=t(n)+dt; end %The resluts are plotted subplot(1,2,2) plot(t,z(:,2),'r') axis([0 1000 0 100000]) xlabel(''),ylabel('') subplot(1,2,1) plot(t,z(:,3)) axis([0 1000 0 100000]) xlabel(''),ylabel('') with the function P1f.m: Code (Text): function f=P1f(t,z) %Def constants alpha = 0.25; sigma = 50; T = 0.1; beta = sigma*T; gamma= 0; f(1) = -beta*z(1)*z(2)/z(3) - gamma*z(1); f(2) = beta*z(1)*z(2)/z(3) - alpha*z(2); f(3) = gamma*z(1) + alpha*z(2); end The constants k in the RK4 routine all become NaN, and I don't understand why. This happenes during the first iteration. Most of my problem is that I don't fully understand how the 'function' definition in matlab works. As you can see, I only define the RHS of the original eqations for the first 3 elements in z. How does the function calculate the values past that first row? I know the method is sound because I've used it before, last year, and I've forgotten how it works. Very grateful for any help, Eric EDIT: Major mistake was dividing by R instead of N = R + I + S in the right hand sides. Seems to be working now, as far as I can see. Last edited: Sep 4, 2011
2018-12-13 22:37:27
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8113390803337097, "perplexity": 3934.3417073848773}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825112.63/warc/CC-MAIN-20181213215347-20181214000847-00541.warc.gz"}
https://se.mathworks.com/help/phased/ref/gain2aperture.html
# gain2aperture Convert gain to effective aperture ## Syntax ``A = gain2aperture(GdB,lambda)`` ## Description example ````A = gain2aperture(GdB,lambda)` returns the effective aperture of an antenna corresponding to an antenna gain of `GdB` for an incident electromagnetic wave with wavelength `lambda`.``` ## Examples collapse all An antenna has a gain of 3 dB. Calculate the antenna's effective aperture when used to capture an electromagnetic wave with a wavelength of 10 cm. `a = gain2aperture(3,0.1)` ```a = 0.0016 ``` ## Input Arguments collapse all Antenna gains, specified as a scalar or as an N-element real-valued vector. If `GdB` is a vector, each element of `GdB` corresponds to the effective aperture of the same element in the output argument `A`. See Gain and Effective Aperture for a discussion of aperture and gain. Units are in dBi. Data Types: `double` Wavelength of the incident electromagnetic wave, specified as a positive scalar. The wavelength of an electromagnetic wave is the ratio of the wave propagation speed to the frequency. Units are in meters. Data Types: `double` ## Output Arguments collapse all Antenna effective aperture, returned as a positive scalar or as an N-element vector of positive values. The elements of `A` represent the effective apertures for the corresponding elements of `GdB`. The size of `A` equals the size of `GdB`. Data Types: `double` collapse all ### Gain and Effective Aperture The effective aperture describes how much energy is captured by an antenna from an incident electromagnetic plane wave. The effective area of the antenna and is not the same as the actual physical area. The array gain of an antenna G is related to its effective aperture Ae by: `$G=\frac{4\pi }{{\lambda }^{2}}{A}_{e}$` where λ is the wavelength of the incident electromagnetic wave. For a fixed wavelength, the antenna gain is proportional to the effective aperture. For a fixed effective aperture, the antenna gain is inversely proportional to the square of the wavelength. The gain expressed in dBi (GdB) is `$GdB=10{\mathrm{log}}_{10}G=10{\mathrm{log}}_{10}\left(\frac{4\pi {A}_{g}}{{\lambda }^{2}}\right).$` The effective antenna aperture can be derived from the gain in dB using `${A}_{e}={10}^{GdB/10}\frac{{\lambda }^{2}}{4\pi }.$` ## References [1] Skolnik, M. Introduction to Radar Systems, 3rd Ed. New York: McGraw-Hill, 2001. [2] Richards, M. Fundamentals of Radar Signal Processing, New York: McGraw-Hill, 2005. ## Version History Introduced in R2011a
2023-03-24 16:19:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9798410534858704, "perplexity": 1203.6930226122336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945287.43/warc/CC-MAIN-20230324144746-20230324174746-00536.warc.gz"}
http://openstudy.com/updates/4f1f7de6e4b076dbc34850dc
## anonymous 4 years ago Pls help!:) The line x+y=10 intersects the line y=x, the x-axis and the y-axis at the points A, B, C respectively. Find the coordinates of A, B & C 1. anonymous (10,0),(0,10).(5,5) 2. anonymous for a, b, and c 3. anonymous Coordinates of A: $2x = 10 \rightarrow x=5$ Sub x=5 into x+y=10 (or y=x) to get y=5, so A is (5,5). Coordinates of B: $x+0=10 \rightarrow x=10$ Sub x=10 into x+y=10 giving y=0, hence B is (10,0) Coordinates of C: $0+y=10 \rightarrow y=10$ sub y=10 into x+y=10 giving x=0, hence C is (0,10)
2016-10-22 05:27:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7934290170669556, "perplexity": 4292.111960100239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718426.35/warc/CC-MAIN-20161020183838-00428-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.edaboard.com/threads/what-is-the-procedure-for-pic-timer0-to-generate-a-1-ms-or-1-sec-interrupt.52636/#post-237188
Continue to Site # What is the procedure for PIC timer0 to generate a 1 ms or 1 sec interrupt? Status Not open for further replies. #### zahidkhan ##### Member level 2 hi all what is the procedure for pic timer0 to generate a 1 ms or 1 sec interrupt. regards... #### gidimiz timer0 calculator Hi, Its very easy... First and most importnat is that you will know that an interupt on the timers will happen only when they reach the end, i.e. 0xFFFF +1. So lets says this, you are using external OSC at 4MHz, and your times dosnt have any prescalar or postscale set. and you are using the OSC as timer. So to get an interup, you first need to know how fast the timer will incramet. this is done by diving the OSC by 4 ( thats the MIPS of the PIC ) and then cahnge is to the time domain: f = OSC/4 = 4,000,000 / 4 = 1,000,000 or 1MHz To the time domain: t = 1/f = 1/1,000,000 = 0.000001 Sec = 1uSec So your timer will incrament every 1uSec. To get to 1mSec you will have to do: 1mSec/1uSec = 1000 clock cycles. Now we need to load the timer with this settings, as i said before the timer will interupr only with it will reach the end, so we need to take the end and subtract the clock cycle that will pass, i.e.: TMR0 = 0xFFFF - (time we want ) = 0xFFFF - 0x03E8 ( 1000 in Hex ) = 0xFC17 This is very easy to do with the PC calculator, that enable you to change from Dec. to Hex. in one button. So the answer we have is 0xFC17. Load this value each time to your timer and then you will get an interupt every 1mSec. Now you can play around and figure out, what is the max time you can get. And i you use the pre-scalare the your original clock wile have to divid even more. See the drawing in the data sheet from more info. If you found my answer useful, click on the button that says Helped me. ( NO points will be taken from you! And you will get 3 Points) Good luck. Points: 2 Points: 2 ### dibagay Points: 2 #### zahidkhan ##### Member level 2 pic timer0 calculator bUT TMR0 can hold maximum of 256 i.e it is an 8-bit timer. Isnt it ?.how can we load it with 0xFC17. thanks #### IanP pic timer0 You are right: TIMER0 is an 8-bit timer .. Have a look at this: Simple and fast system to get reliable timer periods from a PIC controller. This system (PIC assembler source is provided) gives a simple, fast way to generate regular periods with a PIC at any clock speed. Great for one second events like simple clocks. You can use any crystal you have, 4.0 MHz, 12.137253 MHz, (ANY crystal) and ANY prescaler value, and still get perfect one second timing. It will generate reliable periods from milliseconds to many seconds, with very fast code execution. https://www.romanblack.com/one_sec.htm Regards, IanP Points: 2 V Points: 2 ### eagle1109 Points: 2 #### contagiouseddie ##### Member level 2 timer0 pic Here's the equation to calculate: Period = (256 - TMR0)*(4/fosc)*(Prescaler) For example, if Period = 1 ms and fosc = 4 MHz and using Prescaler 1:4 1ms = (256 - TMR0)(1us)(1) TMR0 = 6 = 0x06 This is thevalue you must load to the timer 0 register. Remember to turn on the timer and clear the timer interrupt flag each time after the timer interrupt occurs. Careful with the global interrupt also because any unclear interrupt flag would trigger another interrupt. For 1 s of period, choose proper prescaler value which is suitable. Remember* timer 0 is only 8-bit. In order to have more precise timing, use 16 bit timer. All the best. Points: 2 Points: 2 ### eagle1109 Points: 2 #### gidimiz pic tmr0 Hi, zahidkhan said: bUT TMR0 can hold maximum of 256 i.e it is an 8-bit timer. Isnt it ?.how can we load it with 0xFC17. thanks You are correct is cases. As i didnt know what PIC you are using, i just explained how all the timers work. In the PIC18 the TMR0 has 8 or 16 bit, so either way, the idea was more importnant. Also, contagiouseddie is very correct. I forgot to mention that, you have to reset the TMR0IF flag. You dont have to set to interupt if you dont want to, just wait until the TMR0IF will change. Each way has its advantages and disadvantages. Good lcuk. Points: 2 V Points: 2 ### eagle1109 Points: 2 #### zahidkhan ##### Member level 2 pic timer 0 Thank U all . Now i am getting my concepts clear. but having some problems in the code to generate a clock .I am using Proton+.Here the code. Code: Device 16F84A XTAL = 4.0 Declare Lcd_DTPin PortB.4 Declare Lcd_ENPin Portb.3 Declare Lcd_RSPin Portb.2 Symbol T0IF = INTCON.2 ' TMR0 Overflow Interrupt Flag Symbol T0IE = INTCON.5 ' TMR0 Overflow Interrupt Enable Symbol GIE = INTCON.7 ' Global Interrupt Enable Symbol PS0 = OPTION_REG.0 ' Prescaler Rate Select Symbol PS1 = OPTION_REG.1 ' Prescaler Rate Select Symbol PS2 = OPTION_REG.2 ' Prescaler Rate Select Symbol PSA = OPTION_REG.3 ' Prescaler Assignment Symbol T0CS = OPTION_REG.5 DIM MS AS WORD DIM SECOND AS BYTE DIM MINUTE AS BYTE DIM HOUR AS BYTE PS0 = 1 PS1 = 0 PS2 = 0 PSA = 0 T0CS = 0 'set the clock source for internal oscillator T0IF = 0 'clear the interrupt flag T0IE = 1 'enable tmr0 interrupt GIE = 1 'enable Global interrupts TMR0 = 6 On Interrupt goto InterruptServiceRoutine GoTo Main 'start of program, jumps straight to main Disable 'disable interrupts InterruptServiceRoutine: T0IF = 0 'clear the interrupt flag TMR0 = 6 INC MS IF MS >= 999 THEN MS = 0 INC SECOND IF SECOND >= 59 THEN SECOND = 0 INC MINUTE IF MINUTE >= 59 THEN MINUTE = 0 INC HOUR ENDIF ENDIF ENDIF Resume 'return back to where the main code was running 'before the interrupt Enable 're enanble interupts Main: Print at 1,1, DEC HOUR," ",DEC MINUTE," ", DEC SECOND," ",DEC MS GoTo Main #### thunderboympm ##### Full Member level 5 Re: timer0 pic Here's the equation to calculate: Period = (256 - TMR0)*(4/fosc)*(Prescaler) For example, if Period = 1 ms and fosc = 4 MHz and using Prescaler 1:4 1ms = (256 - TMR0)(1us)(1) TMR0 = 6 = 0x06 This is thevalue you must load to the timer 0 register. Remember to turn on the timer and clear the timer interrupt flag each time after the timer interrupt occurs. Careful with the global interrupt also because any unclear interrupt flag would trigger another interrupt. For 1 s of period, choose proper prescaler value which is suitable. Remember* timer 0 is only 8-bit. In order to have more precise timing, use 16 bit timer. All the best. what about for getting 50mSec timer for 20MHz crystal at a 256 prescaler V Points: 2 #### cubanflyer ##### Full Member level 5 Re: timer0 pic what about for getting 50mSec timer for 20MHz crystal at a 256 prescaler This is can not be achieved using tmr0 and 20Mhz crystal directly. You could look at using additional variables handled by the interrupt routine. You will need 250000 clock cycles at 20Mhz to get this timing period, you could look at using timer1 (16 bit) Last edited: V Points: 2 #### wyiancwc88 ##### Newbie level 4 Re: pic timer0 How to get a 2 MHz pulse with 30% duty cycle from 20 MHz crystal? Can anyone provide me the code? thanks.. #### foggysail ##### Newbie level 6 Re: pic timer0 This is a great thread, I am new to this especially C, working similar timing problems to apply to a pic18f4550. But I do have a question. I notice that the system oscillator was used in the above examples and that ran at 4MHz which is too high for the internal oscillator. Second, if one uses the system oscillator/clock for timers, does that not tie up the entire PIC? I will have at least a "while loop" for counting in the timer ..........I THINK, have not ventured far enough into this yet. I can just visualize tying up everything by using the system clock. Am I missing something here? Thanks-- Foggy Status Not open for further replies.
2023-03-26 21:46:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4475690722465515, "perplexity": 8846.379353628288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946535.82/warc/CC-MAIN-20230326204136-20230326234136-00032.warc.gz"}