url: string, 14 to 2.42k characters
text: string, 100 to 1.02M characters
date: string, 19 characters
metadata: string, 1.06k to 1.1k characters
https://tex.stackexchange.com/questions/114192/same-animation-for-all-beamer-slides
# Same animation for all beamer slides [closed] I need to put same animation for all slides. I mean all paragraphs, bullets, images. After opening slide automatically work all animations of opened slide (all items of this slide). This is same for all slides. What is a simple solution for that? \documentclass[24pt]{beamer} \usepackage{graphicx} \usepackage{animate} \begin{document} \begin{frame} \frametitle{title} first paragraph \begin{enumerate} \item one \item two \item three \end{enumerate} \begin{figure} \hspace{4cm} \vspace{3cm} \includegraphics[width=0.15\textwidth,natwidth=69,natheight=87]{yourimage} \end{figure} \end{frame} \end{document} ## closed as unclear what you're asking by Joseph Wright♦Aug 16 '13 at 10:16 Please clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it’s hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question. • What you're asking here is unclear. Why not post a minimal working example (MWE) to explain what you mean? – jubobs May 14 '13 at 10:32 • I need to add one animation for all items in this slide – Emalka May 14 '13 at 10:58 • In MS powerpoint we select all items and add animation.So,all item animation happen same time,I need to do that..I think u can understand my need.. – Emalka May 14 '13 at 11:09 • Using \begin{itemize}[<+->] display bullets one by one..But i need to display all bullets same time within one animation – Emalka May 14 '13 at 11:39 • Edit your question to clarify it and add your code. Please do not post it as a comment. – jubobs May 14 '13 at 18:46
2019-06-20 04:03:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26713937520980835, "perplexity": 1968.9253385473703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999130.98/warc/CC-MAIN-20190620024754-20190620050754-00018.warc.gz"}
https://www.black-holes.org/explore/glossary/34-i/67-inspiral
## Inspiral The gradually-shrinking orbit of a binary system. As the pair of stars in the binary orbit each other, they give off energy in the form of gravitational waves. This lost energy draws them closer in their orbit — eventually resulting in a Merger. ## Inspiration No amount of experimentation can ever prove me right; a single experiment can prove me wrong. Albert Einstein
2020-04-02 18:37:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8945658802986145, "perplexity": 1328.5382522752382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370507738.45/warc/CC-MAIN-20200402173940-20200402203940-00074.warc.gz"}
http://math.stackexchange.com/questions/90930/comparison-of-topologies/91592
# Comparison of topologies Let $(Y, \tau)$ be a topological space and let $X$ be a set such that there exist a surjective function $f \colon X \to Y$. Consider $\tau_1$ the smallest topology in $X$ that makes $f$ a quotient map, $\tau_2$ the smallest topology $X$ that makes $f$ continuous, $\tau_3$ the smallest topology in $X$ that makes $f$ an open map, $\tau_4$ the smallest topology in $X$ that makes $f$ a closed map, $\tau_5$ the smallest topology in $X$ that makes $f$ an open and closed map, $\tau_6$ the smallest topology in $X$ that makes $f$ a closed and continuous map. I am asked to compare each topology, I understand the definition for "the smallest topology" but it still troubles me, because I don't know how to use it. Can someone give me an example of comparing these topologies? - Your $\tau_1$ and $\tau_2$ seem to be the same thing; was one of them supposed to be something else? – Brian M. Scott Dec 12 '11 at 22:10 i corrected. $\tau_1$ is for quotient map – Alex J. Dec 12 '11 at 22:12 If $\tau_1$ and $\tau_2$ are topologies on the same set $X$, we say that $\tau_1$ is smaller or coarser than $\tau_2$ if $\tau_1\subseteq\tau_2$. (Note that this terminology is a little sloppy, since $\tau_1$ is allowed to be equal to $\tau_2$. If you want to specify that $\tau_1\subsetneqq\tau_2$, it’s best to say that $\tau_1$ is strictly weaker or coarser than $\tau_2$. For a simple example, let $\tau_1$ be the usual Euclidean topology on $\mathbb{R}$, and let $\tau_2$ be the discrete topology on $\mathbb{R}$; since $\tau_2=\wp(\mathbb{R})$, it’s clear that $\tau_1\subseteq \tau_2$, so the Euclidean topology on $\mathbb{R}$ is weaker (or coarser) than the discrete topology. (In fact it’s obviously strictly weaker.) On the other hand, if we let $\tau_3=\{\mathbb{R}\setminus F:F\text{ is finite}\}$, the cofinite topology on $\mathbb{R}$, then $\tau_3\subsetneqq\tau_1$: the cofinite topology is strictly weaker than the Euclidean topology. Added: A useful fact is that if $\mathscr{S}$ is any collection of subsets of $X$, and $\tau$ is the intersection of all of the topologies on $X$ that contain $\mathscr{S}$, then not only is $\mathscr{S}\subseteq\tau$, but $\tau$ is again a topology on $X$. (This is an easy exercise using the definition of a topology.) Clearly, then, $\tau$ is the weakest (smallest, coarsest) topology on $X$ that contains every member of $\mathscr{S}$, which therefore always exists. This means that if you have some collection $\mathscr{S}$ of sets that you want to be open, you can find the weakest (smallest, coarsest) topology that does the trick by intersecting all of the topologies that make the members of $\mathscr{S}$ open. The collection $\mathscr{S}$ is said to be a subbase for the resulting topology. To return to the specifics of your question, notice that the more restrictions you put on $f$, the more open sets you’ll require in $X$. For instance, $\tau_5$ makes $f$ both open and closed, so it automatically makes $f$ open. Thus, it’s one of the topologies that are intersected to find $\tau_3$, and as a result $\tau_3\subseteq\tau_5$. - That's what I thought about $\tau_5$ and $\tau_3$. Does it follow, say, since a quotient map is continuous, so $\tau_2 \subset \tau_1$? – Alex J. Dec 12 '11 at 22:24 @Alex: Yes, it does: $\tau_1$ is one of the topologies that are intersected to find $\tau_2$. – Brian M. Scott Dec 12 '11 at 22:32 $\tau_3 = \tau_4 = \tau_5 = \{ \emptyset, X \}$, the indiscrete topology on $X$. 
This is clear, as the indiscrete topology is the smallest topology on $X$ anyway and any surjection is both an open and closed map under this topology ($f[\emptyset] = \emptyset$, and $f[X] = Y$, and both sets are open in $Y$). Assuming $\tau_6$ is well-defined, this is a topology that makes $f$ a closed and continuous function, and so a quotient map. So $\tau_1$, as the smallest such topology obeys $\tau_1 \subset \tau_6$, and as $\tau_1$ makes $f$ continuous in particular, and $\tau_2$ is the smallest such topology, again we conclude $\tau_2 \subset \tau_1 \subset \tau_6$. Equality is possible, e.g. when we have $f$ a bijection. The equal (see above) $\tau_3, \tau_4, \tau_5$ are a subset of all of them, of course. The smallest topology satisfying some property $\mathcal{P}$ is simply, as others noted too, the intersection of all topologies on the set that satisfy $\mathcal{P}$, and as the intersection of topologies is always a topology, the well-definedness comes down to finding or defining just one topology on the set $X$ that obeys $\mathcal{P}$. But I'm not yet sure whether there always exists a topology on $X$ that makes $f$ closed and continuous for a given $f$. Continuity is easy: we start with all inverse images of open sets of $Y$. This gives us new closed sets, the image of which must be closed, but the topology on $Y$ is given and cannot be controlled... So there probably are examples of functions such that $f[X\setminus f^{-1}[O]]$ is not open for some open subset $O$ of $Y$, and then we cannot have any topology that makes $f$ closed and continuous and this makes $\tau_6$ ill-defined in a general setting, I think. But like I showed, if it exists, it's the biggest of the considered topologies. come to think of it, is there always a topology on $X$ to make $f$ quotient? I'm not aware of a construction, so I have my doubts there as well. - You don’t need $f[X\setminus f^{-1}[O]]$ to be open: you need it to be closed, which it is, since it’s $Y\setminus O$. (Remember, $f$ is surjective.) Giving $X$ the topology $\{f^{-1}[U]:U\text{ is open in }Y\}$ automatically makes $f$ continuous, open, closed, and a quotient map. – Brian M. Scott Dec 16 '11 at 18:36 the set of all topologies on $X$ can be partially ordered by inclusion (as subsets of $2^X$), $\tau\leq\tau'$ iff $U\in\tau\Rightarrow U\in\tau'$. for instance, the smallest topology on $X$ is $\{\emptyset,X\}$ and the largest is $\{S | S\subseteq X\}$. if $f$ is to be continuous, then $\tau_1$ must contain all sets of the form $f^{-1}(U), U\in\tau$. as for "smallest," if $\mathcal{S}\subseteq 2^X$ there is a unique smallest topology containing $\mathcal{S}$, namely the intersection of all topologies containing $\mathcal{S}$ (exercise). it seems to me that the smallest such that $f$ is open/closed/clopen is the indiscrete topology (this uses surjectivity of $f$) -
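To make the "smallest topology" construction used in these answers concrete, here is a small computational illustration (an editorial addition, not part of the original thread; the sets, the map $f$ and all variable names are made up). For a finite example with $Y$ carrying the Sierpinski topology, it checks that the preimage family $\{f^{-1}[U] : U \text{ open in } Y\}$ is itself a topology on $X$, and that it equals the intersection of all topologies on $X$ that make $f$ continuous, i.e. it really is $\tau_2$.

```python
from itertools import chain, combinations

# Tiny finite example: Y = {0, 1} with the Sierpinski topology, and a surjection f : X -> Y.
X = {'a', 'b', 'c'}
tau_Y = [set(), {1}, {0, 1}]
f = {'a': 0, 'b': 1, 'c': 1}

def preimage(U):
    return frozenset(x for x in X if f[x] in U)

def is_topology(T):
    T = set(map(frozenset, T))
    return (frozenset() in T and frozenset(X) in T
            and all(A | B in T and A & B in T for A in T for B in T))

# tau_2: the preimages of open sets already form a topology, and it makes f continuous.
tau_2 = {preimage(U) for U in tau_Y}
assert is_topology(tau_2)

# Brute-force cross-check: intersect every topology on X under which f is continuous.
all_subsets = [frozenset(s) for s in chain.from_iterable(combinations(sorted(X), r) for r in range(4))]
candidates = chain.from_iterable(combinations(all_subsets, r) for r in range(len(all_subsets) + 1))
continuous = [set(map(frozenset, T)) for T in candidates
              if is_topology(T) and all(preimage(U) in set(map(frozenset, T)) for U in tau_Y)]
assert set.intersection(*continuous) == tau_2
print(sorted(map(sorted, tau_2)))  # [[], ['a', 'b', 'c'], ['b', 'c']]
```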
2016-07-24 11:09:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9289453029632568, "perplexity": 97.92790011379614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823996.40/warc/CC-MAIN-20160723071023-00028-ip-10-185-27-174.ec2.internal.warc.gz"}
https://aliquote.org/micro/2020-08-06-15-27-11/
# aliquote ## < a quantity that can be divided into another a whole number of time /> Emacs: No modeline. #emacs
2021-08-02 01:56:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35931962728500366, "perplexity": 3683.3240540576976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154302.46/warc/CC-MAIN-20210802012641-20210802042641-00435.warc.gz"}
https://leanprover-community.github.io/archive/stream/113489-new-members/topic/Integer.20powers.html
## Stream: new members ### Topic: Integer powers #### Ben Nale (Aug 10 2020 at 20:07): Hi. image.png I'm trying to prove lemmas that involve taking an integer to the power of an integer. However, I get the following error message. image.png #### Thomas Browning (Aug 10 2020 at 20:12): The problem is that lean doesn't know what (integer)^(integer) is. You might have to import something. #### Reid Barton (Aug 10 2020 at 20:17): What's 3 ^ (-7)? #### Reid Barton (Aug 10 2020 at 20:17): Exponents for int should be nats. #### Thomas Browning (Aug 10 2020 at 20:18): There is (real)^(integer) somewhere in mathlib (I think), so you could cast the base to the real numbers, and take the pow there. #### Ben Nale (Aug 10 2020 at 20:18): image.png I get the same error #### Reid Barton (Aug 10 2020 at 20:20): What would you expect to get? #### Reid Barton (Aug 10 2020 at 20:21): a^b always has the same type as a #### Alex J. Best (Aug 10 2020 at 20:21): Ben Nale said: In this the left hand side is a nat and the right hand side is an int. #### Ben Nale (Aug 10 2020 at 20:22): hmm. but naturals are a subset of ints. How do I tell that to lean? #### Alex J. Best (Aug 10 2020 at 20:23): You can explicitly write : int or : \Z as you did on the right hand side. #### Alex J. Best (Aug 10 2020 at 20:23): That will make -1 as an  int #### Ben Nale (Aug 10 2020 at 20:24): oh nice :). That works. I understand now. Thank you very much @Alex J. Best @Reid Barton #### Patrick Massot (Aug 10 2020 at 20:25): Note for next time: copy-pasting text is almost always better than posting screenshots. See also this link: #mwe. Last updated: May 14 2021 at 22:15 UTC
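A minimal sketch of the casting fix discussed in this thread (an editorial addition, not from the original chat), assuming Lean 3 with mathlib's norm_num available. The point is that the exponent stays a natural number while the base is written explicitly as an integer:

```lean
-- the exponent is a ℕ; only the base is annotated as ℤ
example : (2 : ℤ) ^ 3 = 8 := by norm_num

-- with an explicitly cast base, negative values behave as expected
example : (-1 : ℤ) ^ 2 = 1 := by norm_num
```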
2021-05-14 23:22:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3668353855609894, "perplexity": 7802.772019872064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991829.45/warc/CC-MAIN-20210514214157-20210515004157-00141.warc.gz"}
https://alice-publications.web.cern.ch/node/5247
# Measurement of jet radial profiles in Pb$-$Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV The jet radial structure and particle transverse momentum ($p_{\rm{T}}$) composition within jets are presented in centrality-selected Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV. Track-based jets, which are also called charged jets, were reconstructed with a resolution parameter of $R=0.3$ at midrapidity $|\eta_{\rm ch\,jet}|<0.6$ for transverse momenta $p_{\rm T,\,ch\,jet}=30$-$120$ GeV/$c$. Jet-hadron correlations in relative azimuth and pseudorapidity space ($\Delta\varphi$, $\Delta\eta$) are measured to study the distribution of the associated particles around the jet axis for different $p_{\rm T,\,assoc}$-ranges between 1 and 20 GeV/$c$. The data in Pb-Pb collisions are compared to reference distributions for pp collisions, obtained using embedded PYTHIA simulations. The number of high-$p_{\rm{T}}$ associated particles ($4<p_{\rm T,\,assoc}<20$ GeV/$c$) in Pb-Pb collisions is found to be suppressed compared to the reference by 30 to 10%, depending on centrality. The radial particle distribution relative to the jet axis shows a moderate modification in Pb-Pb collisions with respect to PYTHIA. High-$p_{\rm{T}}$ associated particles are slightly more collimated in Pb-Pb collisions compared to the reference, while low-$p_{\rm{T}}$ associated particles tend to be broadened. The results, which are presented for the first time down to $p_{\rm T,\,ch\,jet}=30$ GeV/$c$ in Pb-Pb collisions, are compatible with both previous jet-hadron-related measurements from the CMS Collaboration and jet shape measurements from the ALICE Collaboration at higher $p_{\rm{T}}$, and add further support for the established picture of in-medium parton energy loss.
2020-10-25 21:51:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9585040211677551, "perplexity": 2300.157072723029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890028.58/warc/CC-MAIN-20201025212948-20201026002948-00719.warc.gz"}
http://mcassignmentyfws.tlwsd.info/latex-thesis-multiple-files.html
# Latex thesis multiple files Latex thesis multiple files, Doing purdue university theses using latex i recommend saving any old puthesis files in your thesis if you would like to use the same commands in multiple. How to write a thesis in latex pt 1 - basic structure a basic thesis using latex your thesis could be the longest up the document into multiple tex files. Latex/modular documents master thesis, dissertation) letters presentations teacher's corner getting latex to process multiple files. Although one can find many minimal examples and multiple //githubcom/thesis/thesis done running latexdone running latexdone file created: thesis. Latex thesis multiple files visit the post for more. Preparing a thesis with latex % run latex or pdflatex on this file to produce your thesis % to produce the abstract title page followed by the abstract. Some tips and tricks for using latex by rob benedetto how to use the les samplethesistex, thesistex, and latextipstex but if you want iterated multiple. Using latex to write a phd thesis version 13 a large document into several files (thesistex)18 in answering related questions that are not covered latex. About the latex thesis templates the thesis templates have been created to make it easy to prepare your thesis using latex while adhering to the mit thesis file. Sample thesis pages the pdf thesis file use of adobe reader to open and fill in the form is if multiple appendices are included, they. How to get started writing your thesis in latex (see our other tutorial videos if not), and focus on how to work with a large project split over multiple files. University of utah latex dissertation and thesis the default paper size in multiple installed top-level latex files as a. Without using \include you have to use the command \cbinput and the environment cbunit in the several input files here is a complete minimal example for chapterbib. Latex/bibliography management master's thesis required fields: if the bib entries are located in multiple files we can add them like this. Thesis latex file we are most trusted custom-writing services among students from all over the world since we were founded in 1997. Tips on writing a thesis in latex here is an example of the article type entry from the bib file i used while typesetting thesis: @article % for multiple. La t e x thesis class for university of colorado http://oitcoloradoedu/latex/ thesis class was originally written by \input macrostex % my file of latex. Doing purdue university theses using latex mark senn i recommend saving any old puthesis files in your thesis directory with other names used for multiple. 9 multiple file latex projects 92 specifying which file to compile many latex projects contain multiple source files which are thesis/ maintex. How to write a thesis in latex pt 1 - basic structure up the document into multiple tex files one for all the tex files making up the main body of the thesis. The latest version of latex and information on the latex3 project, with documentation and experimental code. • Converting a latex thesis to multiple wordpress posts a few months ago i finished my thesis, passed my viva and then submitted the hardbound copies to the.
2018-04-22 12:28:43
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8775880932807922, "perplexity": 3233.1371990821244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945596.11/warc/CC-MAIN-20180422115536-20180422135536-00555.warc.gz"}
https://www.sharcnet.ca/Software/Fluent6/html/ug/node902.htm
## 23.5.2 Volume Fractions The description of multiphase flow as interpenetrating continua incorporates the concept of phasic volume fractions, denoted here by $\alpha_q$. Volume fractions represent the space occupied by each phase, and the laws of conservation of mass and momentum are satisfied by each phase individually. The derivation of the conservation equations can be done by ensemble averaging the local instantaneous balance for each of the phases [10] or by using the mixture theory approach [37]. The volume of phase $q$, $V_q$, is defined by $V_q = \int_V \alpha_q \, dV$ (23.5-1) where $\sum_{q=1}^{n} \alpha_q = 1$ (23.5-2) The effective density of phase $q$ is $\hat{\rho}_q = \alpha_q \rho_q$ (23.5-3) where $\rho_q$ is the physical density of phase $q$.
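A small numerical illustration of Eqs. (23.5-2) and (23.5-3) above (an editorial sketch, not from the manual; the phase names and values are made up):

```python
# Phasic volume fractions alpha_q must sum to one (Eq. 23.5-2); the effective
# density of each phase is alpha_q * rho_q (Eq. 23.5-3).
alpha = {"liquid": 0.3, "gas": 0.7}       # volume fractions alpha_q
rho = {"liquid": 998.2, "gas": 1.225}     # physical densities rho_q [kg/m^3]

assert abs(sum(alpha.values()) - 1.0) < 1e-12   # Eq. (23.5-2)

rho_eff = {q: alpha[q] * rho[q] for q in alpha}  # Eq. (23.5-3)
print(rho_eff)   # {'liquid': 299.46, 'gas': 0.8575}
```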
2018-01-16 17:35:47
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.883148193359375, "perplexity": 1819.5030229851236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886476.31/warc/CC-MAIN-20180116164812-20180116184812-00672.warc.gz"}
https://questions.examside.com/past-years/jee/question/heat-energy-of-735-mathrmj-is-given-to-a-diatomic-gas-jee-main-physics-n9z6esyj7altjt2m
1 JEE Main 2023 (Online) 31st January Evening Shift +4 -1 Heat energy of $735 \mathrm{~J}$ is given to a diatomic gas, allowing the gas to expand at constant pressure. Each gas molecule rotates around an internal axis but does not oscillate. The increase in the internal energy of the gas will be : A $572 \mathrm{~J}$ B $441 \mathrm{~J}$ C $525 \mathrm{~J}$ D $735 \mathrm{~J}$ 2 JEE Main 2023 (Online) 31st January Evening Shift +4 -1 A hypothetical gas expands adiabatically such that its volume changes from 8 litres to 27 litres. If the ratio of the final pressure of the gas to the initial pressure of the gas is $\frac{16}{81}$, then the ratio $\frac{\mathrm{C_p}}{\mathrm{C_v}}$ will be: A $\frac{3}{1}$ B $\frac{4}{3}$ C $\frac{1}{2}$ D $\frac{3}{2}$ 3 JEE Main 2023 (Online) 31st January Morning Shift +4 -1 The pressure of a gas changes linearly with volume from $$\mathrm{A}$$ to $$\mathrm{B}$$ as shown in the figure. If no heat is supplied to or extracted from the gas, then the change in the internal energy of the gas will be A 6 J B 4.5 J C zero D $$-$$4.5 J 4 JEE Main 2023 (Online) 31st January Morning Shift +4 -1 The correct relation between $$\gamma = {{{c_p}} \over {{c_v}}}$$ and temperature T is : A $$\gamma \propto T$$ B $$\gamma \propto {1 \over {\sqrt T }}$$ C $$\gamma \propto {1 \over T}$$ D $$\gamma \propto T^\circ$$
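A quick numerical check of the first two problems (an editorial sketch, not part of the original question bank), using the standard relations $\Delta U = (C_v/C_p)\,Q$ for constant-pressure heating and $P_1 V_1^\gamma = P_2 V_2^\gamma$ for an adiabatic process:

```python
from math import log

# Q1: diatomic gas (rotating, not oscillating), heated at constant pressure.
Q = 735.0                       # heat supplied [J]
dU = Q * (5 / 2) / (7 / 2)      # dU = n*Cv*dT and Q = n*Cp*dT, so dU = (Cv/Cp)*Q
print(dU)                       # 525.0 J -> option C

# Q2: adiabatic expansion, P1*V1**gamma = P2*V2**gamma.
V1, V2 = 8.0, 27.0              # litres
p_ratio = 16 / 81               # P2 / P1
gamma = log(p_ratio) / log(V1 / V2)
print(gamma)                    # 1.333... = 4/3 -> option B
```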
2023-03-22 02:08:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6855267882347107, "perplexity": 1719.8072381987329}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943749.68/warc/CC-MAIN-20230322020215-20230322050215-00465.warc.gz"}
http://stackoverflow.com/questions/2872755/is-it-possible-to-pass-an-array-as-a-command-line-argument-to-a-php-script
# Is it possible to pass an array as a command line argument to a PHP script? I'm maintaining a PHP library that is responsible for fetching and storing incoming data (POST, GET, command line arguments, etc). I've just fixed a bug that would not allow it to fetch array variables from POST and GET and I'm wondering whether this is also applicable to the part that deals with the command line. Can you pass an array as a command line argument to PHP? - Not directly: all arguments passed on the command line are strings, but you can use a query string as one argument to pass all variables with their names: php myscript.php a[]=1&a[]=2.2&a[b]=c <?php parse_str($argv[1]); var_dump($a); ?> /* array(3) { [0]=> string(1) "1" [1]=> string(3) "2.2" ["b"]=> string(1) "c" } */ - Strictly speaking, no. However you could pass a serialized (either using PHP's serialize() and unserialize() or using JSON) array as an argument, so long as the script deserializes it. Something like php MyScript.php "{'colors':{'red','blue','yellow'},'fruits':{'apple','pear','banana'}}" I don't think this is ideal however, I'd suggest you think of a different way of tackling whatever problem you're trying to address. - It is unserialize() not deserialize() – dev-null-dweller May 20 '10 at 10:56 @dev-null-dweller, thanks - amended. – Mailslut May 20 '10 at 11:57 You need to figure out some way of encoding your array as a string. Then you can pass this string to the PHP CLI as a command line argument and later decode that string. - As it was said, you can use serialize to pass arrays and other data to the command line. shell_exec('php myScript.php '.escapeshellarg(serialize($myArray))); And in myScript.php: $myArray = unserialize($argv[1]); - The following code block will do it, passing the array as a set of comma separated values: <?php $i_array = explode(',', $argv[1]); var_dump($i_array); ?> OUTPUT: php ./array_play.php 1,2,3 array(3) { [0]=> string(1) "1" [1]=> string(1) "2" [2]=> string(1) "3" } - After following the set of instructions below you can make a call like this: phpcl yourscript.php _GET='{ "key1": "val1", "key2": "val2" }' To get this working you need code to execute before the script being called. I use a bash shell on Linux and in my .bashrc file I set the command line interface to make the PHP ini flag auto_prepend_file load my command line bootstrap file (this file should be found somewhere in your php_include_path): alias phpcl='php -d auto_prepend_file="system/bootstrap/command_line.php"' This means that each call from the command line will execute this file before running the script that you call. auto_prepend_file is a great way to bootstrap your system; I use it in my standard php.ini to set my final exception and error handlers at a system level. Setting this command line auto_prepend_file overrides my normal setting, and I choose to just handle command line arguments so that I can set $_GET or $_POST. Here is the file I prepend: <?php // Parse the variables given to a command line script as Query Strings of JSON. // Variables can be passed as separate arguments or as part of a query string: // _GET='{ "key1": "val1", "key2": "val2" }' foo='"bar"' // OR // _GET='{ "key1": "val1", "key2": "val2" }'\&foo='"bar"' if ($argc > 1) { $parsedArgs = array(); for ($i = 1; $i < $argc; $i++) { parse_str($argv[$i], $parsedArgs[$i]); } foreach ($parsedArgs as $arg) { foreach ($arg as $key => $val) { // Set the global variable of name $key to the json decoded value. 
$$key = json_decode($val, true); } } unset($parsedArgs); } ?> It loops through all the arguments passed and sets global variables using variable variables (note the $$). The manual page does say that variable variables don't work with superglobals, but it seems to work for me with $_GET (I'm guessing it works with $_POST too). I choose to pass the values in as JSON. The return value of json_decode() will be NULL on error, so you should do error checking on the decode if you need it. - So if a CLI call is like this: php path\to\script.php param1=no+array param2[]=is+array param2[]=of+two then the function that reads it can be: function getArguments($args){ unset($args[0]); // remove the path-to-script variable $string = implode('&', $args); parse_str($string, $params); return $params; } This would give you Array ( [param1] => no array [param2] => Array ( [0] => is array [1] => of two ) ) - Sort of. If you pass something like this: $ php script.php --opt1={'element1':'value1','element2':'value2'} You get this in the opt1 argument: Array( [0] => 'element1:value1' [1] => 'element2:value2' ) so you can convert that using this snippet: foreach ($opt1 as $element){ $element = explode(':', $element); $real_opt1[$element[0]] = $element[1]; } which turns it into this: Array( [element1] => 'value1' [element2] => 'value2' ) -
2014-10-02 13:24:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30704522132873535, "perplexity": 5999.0182867119975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663754.0/warc/CC-MAIN-20140930004103-00169-ip-10-234-18-248.ec2.internal.warc.gz"}
https://proxies123.com/tag/geometry/
## ag.algebraic geometry – Local rings \$R subsetneq S\$ with \$R\$ regular and \$S\$ Cohen-Macaulay, non-regular Let $$R subseteq S$$ be local rings with maximal ideals $$m_R$$ and $$m_S$$. Assume that: (1) $$R$$ and $$S$$ are (Noetherian) integral domains. (2) $$dim(R)=dim(S) < infty$$, where $$dim$$ is the Krull dimension. (3) $$R$$ is regular (hence a UFD). (4) $$S$$ is Cohen-Macaulay. (5) $$R subseteq S$$ is simple, namely, $$S=R(w)$$ for some $$w in S$$. (6) $$R subseteq S$$ is free. (7) $$R subseteq S$$ is integral, namely, every $$s in S$$ satisfies a monic polynomial over $$R$$. (8) $$m_RS=m_S$$, namely, the extension of $$m_R$$ to $$S$$ is $$m_S$$. (9) It is not known whether the fields of fractions of $$R$$ and $$S$$, $$Q(R)$$ and $$Q(S)$$, are equal or not. (10) It is not known if $$R subseteq S$$ is separable or not. Remark: It is known that if a (commutative) integral domains ring extension $$A subseteq B$$ is integral+flat, then it is faithfully flat, and if also $$Q(A)=Q(B)$$, then $$A=B$$. This is why I did not want to assume that $$Q(R)=Q(S)$$, since in this case $$R=S$$ immediately. Question: Is it true that, assuming (1)(10) imply that $$S$$ is regular or $$R=S$$? Example: $$R=mathbb{C}(x(x-1))_{x(x-1)}$$ and $$S=mathbb{C}(x)_{(x)}$$, with $$R neq S$$ and $$S$$ is regular. Non-example: $$R=mathbb{C}(x^2)_{(x^2)}$$ and $$S=mathbb{C}(x^2,x^3)_{(x^2,x^3)}$$, but condition (8) is not satisfied. Relevant questions, for example: a, b, c, d. Thank you very much! I have asked the above question here, with no comments (yet). ## geometry – How to find the composite of an inverse function? Consider the two parameters σ(u,v), $$σ(u ̃,v ̃)$$ of $$x^2$$+$$y^2$$+$$z^2$$=1, where x,y,z $$>$$ 0. Given σ(u,v) = $$(u,sqrt{-u^2-v^2},v)$$ and $$σ ̃(u ̃, v ̃)$$ = $$(sqrt{1-u ̃^2-v ̃^2, u ̃, v ̃})$$, where u,v,$$u ̃,v ̃ in U.$$ U = {(x,y): $$x>0$$ and $$v>0$$ and $$x^2+y^2<1$$. Find $$phi$$= $$σ ̃^{-1} ◦ σ$$. ## ag.algebraic geometry – Vanishing of intermediate cohomology for a multiple of a divisor Let $$S subset mathbb P^3$$ be a smooth projective surface (over complex numbers). Let $$C$$ be a smooth hyperplane section. Let $$Delta$$ be a non-zero effective divisor on $$S$$ such that $$h^1(mathcal O_S(nC+Delta))=0, h^1(mathcal O_S(nC-Delta))=0$$ for all $$n in mathbb Z$$. Then my question is the following : In this situation can we say that: $$h^1(mathcal O_S(m Delta))=0$$ for $$m geq 2$$? Can we impose any condition so that this happens? Any help from anyone is welcome. ## ag.algebraic geometry – Algebraic properties of geodesics This is a question related to my last post. I will use the same definition here. A complete smooth manifold $$M$$ with an affine connection $$nabla$$ is said to have an algebraic model of dimension $$n$$ if there exists a smooth immersion $$sigma:M rightarrow Bbb{R}^n$$ such that each image of geodesics on $$M$$ with respect to $$nabla$$ is either sub-algebraic or improper. A sub-algebraic set is a subset of $$Bbb{R}^n$$ defined by the common zeros and positive or non-negative parts of finitely many polynomials. (For example, half circles and toroidal handles are sub-algebraic sets.) In this sense, all simply connected hypaerbolic spaces have algebraic models (e.g. upper half space, Poincare’s unit $$n$$-ball). Note that I do not require $$sigma$$ to be algebraic — the usual hyperbolic cases are apparently not algebraic. The first question arises naturally on how ‘algebraic’ $$sigma(M)$$ really is. 
I conjectured that if $$M$$ has an algebraic model $$sigma$$ then $$sigma(M)$$ is sub-algebraic — more specifically, the ‘infinite boundary’ of $$sigma(M)$$ is an algebraic set. It seems a counterexample is not easy to construct. The second question concerns about a certain type of manifolds, namely the 3-manifolds with geometric structures in Thurston’s sense. I want to know if there is an algebraic model of at least 1 manifold of each of the 8 types. And as they have Lie groups as transition groups, I also conjectured that every geometric 3-manifold has an algebraic model. (The corresponding 8 Lie groups all seem to have algebraic models.) I am looking forward to any reference articles on this problem. ## differential geometry – Nowhere-vanishing form \$omega\$ on \$S^1.\$ This is an example(19.8 and 17.15) from Intro to manifolds by Tu. Let $$S^1$$ be the unit circle defined by $$x^2+y^2=1$$ in $$mathbb{R}^2$$. The 1-form $$dx$$ restricts from $$mathbb{R}^2$$ to a $$1-form$$ on $$S^1.$$ At each point $$pin S^1$$, the domain of $$(dxmid_{S^1})_p$$ is $$T_p(S^1)$$ instead of $$T_p(mathbb{R}^2)$$: $$(dxmid_{S_1})_p:T_p(S^1)rightarrowmathbb{R}$$. At $$p=(1, 0)$$, a basis for the tangent space $$T_p(S^1)$$ is $$partial/partial y$$. Since $$(dx)_p(frac{partial}{partial y})=0,$$ we see that although $$dx$$ is nowhere-vanishing $$1-$$form on $$mathbb{R}^2$$, it vanishes at $$(1, 0)$$, when restricted on $$S^1.$$ Define a $$1-form$$ $$omega$$ on $$S^1$$ by $$omega=frac{dy}{x}$$ on $$U_x$$ and $$omega=-frac{dx}{y}$$ on $$U_y$$ where $$U_x={(x,y)in S^1mid xneq 0}$$ and $$U_y={(x,y)in S^1mid yneq 0}$$. I understand $$omega$$ is $$C^infty$$ and nowhere-vanishing. I want to understand why $$omega$$ on $$S^1$$ is the form $$-ydx+xdy$$ of Example below: Example 17.15 (A 1-form on the circle). The velocity vector field of the unit circle $$c(t)=(x,y)=(cos t, sin t)$$ in $$mathbb{R}^2$$ is $$c'(t)=(-sin t, cos t)=(-y, x)$$. Thus $$X=-yfrac{partial}{partial x}+xfrac{partial}{partial y}$$ is a $$C^infty$$ vector field on the unit circle $$S^1$$. What this notation means is that if $$x,y$$ are the standard coordinates on $$mathbb{R^2}$$ and $$i:S^1hookrightarrowmathbb{R}^2$$ is the inclusion map, then at a point $$p=(x,y)in S^1$$, one has $$i_ast X_p=-ypartial/partial xmid_p+xpartial/partial ymid_p$$, where $$partial/partial xmid_p$$ and $$partial/partial ymid_p$$ are tangent vectors at $$p$$ in $$mathbb{R}^2$$. Then if $$omega=-ydx+xdy$$ on $$S^1$$, then $$omega(X)equiv 1.$$ ## ag.algebraic geometry – Automorphism of a stack morphism Let $$X$$ be an algebraic stack and let $$f: S to X$$ be a smooth covering of $$X$$ by a scheme $$S$$. Motivation: Forgetting about stacks for a moment and going back to covering spaces: Given a covering map $$f: Z to Y$$, with $$Z$$ connected and $$Y$$ locally connected, then $$operatorname{Aut}(Z/Y)$$ acts properly and discontinuously on $$Z$$. Moreover, if $$operatorname{Aut}(Z/Y)$$ acts transitively on a fiber of $$p in Y$$, then the covering is a $$G=operatorname{Aut}(Z/Y)$$-covering in the sense that $$f: Z to Y cong Z/operatorname{Aut}(Z/Y)$$ is a quotient map. I am interested in making an analogous statement in the case of a smooth cover $$f:S to X$$ of an algebraic stack $$X$$. (Of course, dropping words like properly and discontinuously and keeping in mind that $$f$$ is not finite ‘etale and thus not a covering map in the above sense). In particular, I want to describe $$operatorname{Aut}(S/X)$$. 
The “elements” of $$operatorname{Aut}(S/X)$$ are maps $$phi: S to S$$ such that $$f circ phi =f$$. On the other hand, $$f: S to X$$ can be identified with a unique object $$s in X(S)$$ (up to $$2$$-isomorphism?) by the 2-Yoneda lemma and so it seems like $$operatorname{Aut}(S/X)$$ should have an interpretation in terms of the groupoid of maps $$s to s$$ lying over a given $$phi: S to S$$. That is all very abstract, so let us just suppose that the elements of $$X(S)$$ have some geometric interpretation, for example, the object $$s in X(S)$$ is a family of genus g curves $$C$$ over a scheme $$S$$. (1). Is there an interpretation of the groupoid $$s to s$$ lying over $$phi: S to S$$ in terms of a group automorphisms of $$C$$ over $$S$$? (2). Moreover, how would this group act on a “fiber” $$S times_{X,g} T$$ (a sheaf on the category $$Sch/S times T$$?) over $$g: T to X$$? ## ag.algebraic geometry – Poincarè dual of the exceptional divisor in the Kaehler setting Let $$(mathbb{C}^n,omega)=:(X,omega)$$ be the complex $$n$$-space endowed with the standard Kahler form $$isum_{j}dz_jwedge dbar{z}_j$$ and let $$pin X$$ to be the origin. Consider the Blow-up of $$X$$ at $$p$$ denoted by $$tilde{X}:=Bl_p(X)$$. Then this is a Kahler manifold of complex dimension $$n$$ with Kahler form given by $$begin{equation*}tilde{omega}:=kcdotpi^*(omega)+alphaend{equation*}$$where $$pi: tilde{X}to X$$ is the blow-up map, $$alpha$$ is a representative of $$c_1(mathcal{O}(-E))$$ (here we are using that the exceptional divisor $$E$$ is a smooth Cartier divisor in $$tilde{X}$$) and $$k$$ is a positive constant taken to ensure that $$tilde{omega}$$ is positive as $$(1,1)$$-form. One can prove that $$E$$ (which is a copy of $$mathbb{P}^{n-1}$$ in $$tilde{X}$$) intersects itself negatively. From this follows that $$E$$ is the unique smooth complex representative of its homology class in $$tilde{X}$$ (if not, just perturbed $$E$$ a bit in its same homology class, then the two submanifolds will have positive intersection). In particular we have an isomorphism (induced by Poincarè duality) $$begin{equation*}PD: H_{2n-2}(tilde{X},mathbb{R})overset{sim}{longrightarrow} H^2(tilde{X},mathbb{R}) \ (E)mapsto PD(E) end{equation*}$$We can apply Poincarè duality since $$tilde{X}$$ can be identified with the total space of $$mathcal{O}(-1)$$ over $$mathbb{P}^{n-1}$$ which is compact (am I right?) Question 1 Why is the form $$eta$$ defined by $$(eta)=-PD(E)$$ a Kahler form on $$tilde{X}$$? Well $$eta$$ is obvious closed by definition. Then it seems to me that $$eta$$ is positive since it’s the opposite of $$PD(E)$$ and $$E.E=-1$$ but I am not able to give a rigorous proof. Moreover I really can’t see why it should be a $$(1,1)$$-form Question 2 Do such Kahler forms have zero constant scalar curvature? Perhaps this is a very hard question but if anyone can help I appreciate. ## mg.metric geometry – Are there examples of hyperbolic manifolds with finite Bowen-Margulis measure and fundamental group which is not relatively hyperbolic? It is well known that a geometrically finite hyperbolic manifold (quotient of $$H^n$$) has finite Bowen-Margulis measure. Marc Peign´e, Autour de l’exposant critique d’un groupe kleinien, arXiv:1010.6022v1 constructed examples of geometrically infinite hyperbolic manifolds with finite Bowen-Margulis measure. 
He uses a free product/ping-pong construction, and as such the fundamental groups of the manifolds in his exampless have relatively hyperbolic fundamental groups, as of course are fundamental groups of geometrically finite hyperbolic manifolds. My question: Are there examples of hyperbolic manifolds with finite Bowen-Margulis measure and a fundamental group which is not relatively hyperbolic? ## ag.algebraic geometry – Rational sections of tropical conics Let us consider the family of Fermat conics in $$(mathbb{C}^*)^2subsetmathbb{C}^2$$. $$picolon V(ax^2+by^2-1)subset(mathbb{C}^*)^2_{a,b}times(mathbb{C}^*)^2_{x,y}to(mathbb{C}^*)^2_{a,b}$$ We know that $$pi$$ does not admit rational sections: the generic conic is non-split. Taking tropicalization functor, we get the morpphism $$mathrm{Trop}(pi)colon mathrm{Trop}(ax^2+by^2-1)subsetmathbb{R}^4tomathbb{R}^2$$ Does $$mathrm{Trop}(pi)$$ admit tropical rational sections? (Sections over $$mathbb{R^2}backslash W$$ for some proper tropical subvariety $$Wsubsetmathbb{R}^2$$?) ## ag.algebraic geometry – Hodge conjecture for rationally connected/Fano hypersurfaces We know due to work of Lewis, Murre and others that the rational Hodge conjecture holds for smooth projective hypersurfaces in $$mathbb{P}^5$$ of degree at most $$5$$. Does a similar result hold in higher dimension? In particular, is it known if the Hodge conjecture holds for smooth, projective hypersurfaces in $$mathbb{P}^{2n+1}$$ of degree at most $$2n+1$$ for some other values of $$n$$ greater than $$2$$?
2021-04-21 11:50:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 213, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9335493445396423, "perplexity": 169.27697978841536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039536858.83/warc/CC-MAIN-20210421100029-20210421130029-00578.warc.gz"}
https://apboardsolutions.com/ap-board-9th-class-maths-solutions-chapter-9-ex-9-2/
# AP Board 9th Class Maths Solutions Chapter 9 Statistics Ex 9.2 AP State Syllabus AP Board 9th Class Maths Solutions Chapter 9 Statistics Ex 9.2 Textbook Questions and Answers. ## AP State Syllabus 9th Class Maths Solutions 9th Lesson Statistics Exercise 9.2 Question 1. Weights of parcels in a transport office are given below. Find the mean weight of the parcels. Solution: Weight in kg xi No. of parcels fi x1fi 50 25 1250 65 34 2210 75 38 2850 90 40 3600 110 47 5170 120 16 1920 Σfi = 200 Σfixi = 17000 $$\begin{array}{l} \overline{\mathrm{x}}=\frac{\Sigma \mathrm{f}_{\mathrm{i}} \mathrm{x}_{\mathrm{i}}}{\Sigma \mathrm{f}_{\mathrm{i}}}=\frac{17000}{200}=\frac{170}{2} \\ \overline{\mathrm{x}}=85 \end{array}$$ Mean = 85 Question 2. Number of familles In a village in correspondence with the number of children are given below. Find the mean number of children per family. Solution: No. of childrens xi No. of families fi x1fi 0 11 0 1 25 25 2 32 64 3 10 30 4 5 20 6 1 5 Σfi = 84 Σfixi = 144 $$\overline{\mathrm{x}}=\frac{\Sigma \mathrm{f}_{\mathrm{i}} \mathrm{x}_{\mathrm{i}}}{\Sigma \mathrm{f}_{\mathrm{i}}}=\frac{144}{84}$$ Mean = 1.714285 Question 3. If the mean of the following frequency distribution is 7.2, find value of ‘k’. Solution: Σfi = 40 + k; Σfixi = 260 + 10k Given that $$\overline{\mathrm{x}}$$ = 7.2 But $$\overline{\mathrm{x}}=\frac{\Sigma \mathrm{f}_{1} \mathrm{x}_{\mathrm{i}}}{\Sigma \mathrm{f}_{\mathrm{i}}}$$ 7.2 = $$\frac{260+10 k}{40+k}$$ 288.0 + 7.2k = 260 + 10k 10k – 7.2k = 288 – 260 2.8k = 28 k = $$\frac{28}{2.8}$$ = 10 Question 4. Number of villages with respect to their population as per India census 2011 are given below. Find the average population in each village. Solution: Population (in thousands xi) Villages fi x1fi 12 20 240 5 15 75 30 32 960 20 35 700 15 36 540 8 7 56 Σfi = 145 Σfixi = 2571 thousands $$\overline{\mathrm{x}}=\frac{\Sigma \mathrm{f}_{\mathrm{i}} \mathrm{x}_{\mathrm{i}}}{\Sigma \mathrm{f}_{\mathrm{i}}}$$ Mean = $$\frac{2571}{145}$$ = 17.731 thousands Question 5. A FLATOUN social and financial educational programme initiated savings programme among the high school children in Hyderabad district. Mandal wise savings in a month are given in the following table. Mandal No. of schools Total amount saved (in rupees Amberpet 6 2154 Thirumalgiri 6 2478 Saidabad 5 975 Khairathabad 4 912 Secunderabad 3 600 Bahadurpura 9 7533 Find arithmetic mean of school wise savings in each mandal. Also find the arithmetic mean of saving of all schools. Solution: Σfi = 33 Σfixi = 14652 Mean = $$\frac{\Sigma \mathrm{f}_{\mathrm{i}} \mathrm{x}_{\mathrm{i}}}{\Sigma \mathrm{f}_{\mathrm{i}}}$$ $$\bar{x}=\frac{14652}{33}$$ = ₹ 444 (Mean savings per school) Question 6. The heights of boys and girls of IX class of a school are given below. Compare the heights of the boys and girls. [Hint: Fliid median heights of boys and girls] Solution: Boys median class =$$\frac{37+1}{2}=\frac{38}{2}$$= 19th observation ∴ Median height of boys = 147 cm Girls median class = $$\frac{29+1}{2}=\frac{30}{2}$$ = 15th observation ∴ Median height of girls = 152 cm Question 7. Centuries scored and number of cricketers in the world are given below. Find the mean, median and mode of the given data. Solution: Question 8. On the occasion of New year’s day a sweet stall prepared sweet packets. Number of sweet packets and cost of each packet is given as follows Find the mean, median and mode of the given data. 
Solution: N = Σfi = 150, Σfixi = 12000 Mean = $$\overline{\mathrm{x}}=\frac{\Sigma \mathrm{f}_{\mathrm{i}} \mathrm{x}_{\mathrm{i}}}{\Sigma \mathrm{f}_{\mathrm{i}}}=\frac{12000}{150}=80$$ Median = average of the $$\frac{N}{2}$$th and $$\left(\frac{N}{2}+1\right)$$th terms = average of the 75th and 76th observations = 75 Mode = 50 Question 9. The mean (average) weight of three students is 40 kg. One of the students, Ranga, weighs 46 kg. The other two students, Rahim and Reshma, have the same weight. Find Rahim's weight. Solution: Weight of Ranga = 46 kg Weight of Reshma = Weight of Rahim = x kg, say Average = $$\frac{\text { Sum of the weights }}{\text { Number }}$$ = 40 kg ∴ 40 = $$\frac{46+x+x}{3}$$ 3 × 40 = 46 + 2x 2x = 120 – 46 = 74 ∴ x = $$\frac{74}{2}$$ = 37 ∴ Rahim's weight = 37 kg. Question 10. The donations given to an orphanage home by the students of different classes of a secondary school are given below. Class Donation by each student (in Rs) No. of students donated VI 5 15 VII 7 15 VIII 10 20 IX 15 16 X 20 14 Find the mean, median and mode of the data. Solution: Σfi = 80, Σfixi = 900 Mean $$\overline{\mathrm{x}}=\frac{\Sigma \mathrm{f}_{\mathrm{i}} \mathrm{x}_{\mathrm{i}}}{\Sigma \mathrm{f}_{\mathrm{i}}}=\frac{900}{80}=11.25$$ Median = average of the $$\left(\frac{\mathrm{N}}{2}\right)$$th and $$\left(\frac{\mathrm{N}}{2}+1\right)$$th terms, i.e. the 40th and 41st terms = ₹ 10 Mode = ₹ 10 Question 11. There are four unknown numbers. The mean of the first two numbers is 4 and the mean of the first three is 9. The mean of all four numbers is 15; if one of the four numbers is 2, find the other numbers. Solution: We know that mean = $$\frac{\text { sum }}{\text { number }}$$ Given that, Mean of 4 numbers = 15 ⇒ Sum of the 4 numbers = 4 × 15 = 60 Mean of the first 3 numbers = 9 ⇒ Sum of the first 3 numbers = 3 × 9 = 27 Mean of the first 2 numbers = 4 ⇒ Sum of the first 2 numbers = 2 × 4 = 8 Fourth number = sum of 4 numbers – sum of 3 numbers = 60 – 27 = 33 Third number = sum of 3 numbers – sum of 2 numbers = 27 – 8 = 19 Second number = sum of 2 numbers – given number = 8 – 2 = 6 ∴ The other three numbers are 6, 19, 33.
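As a quick cross-check of Question 1 of this exercise (an editorial sketch, not part of the textbook solution), the weighted mean formula used throughout, x̄ = Σfixi / Σfi, can be evaluated directly:

```python
# Weighted mean of the parcel weights from Question 1: x_bar = sum(f_i * x_i) / sum(f_i)
x = [50, 65, 75, 90, 110, 120]   # weight in kg, x_i
f = [25, 34, 38, 40, 47, 16]     # number of parcels, f_i
sum_fx = sum(fi * xi for fi, xi in zip(f, x))
print(sum(f), sum_fx, sum_fx / sum(f))   # 200 17000 85.0
```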
2021-04-19 09:03:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34087175130844116, "perplexity": 1957.8529298149715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879305.68/warc/CC-MAIN-20210419080654-20210419110654-00241.warc.gz"}
https://physics.stackexchange.com/questions/194526/does-the-inverse-square-law-apply-for-all-frequencies-of-sound
Does the Inverse Square Law apply for all frequencies of sound? We often seem to hear only the lower beats of music far away, whilst the higher-frequency sounds seem to diminish more quickly, remaining unheard. I know that sounds with higher frequency have shorter wavelengths, whilst lower frequencies have longer wavelengths. However, this does not necessarily mean that the sounds travel at different speeds. Under normal pressure and density, sound is known to travel at 331 m/s. How does this information allow me to answer whether the Inverse Square Law is applicable in this real-world situation or not? Why can we hear low frequencies from far away, but not high frequencies? I read very briefly about Stokes' law just as I was typing this post. Does it have any relevance to this problem? Much thanks. Yes, this is related to Stokes' law for sound attenuation, which states that a plane wave decreases in amplitude exponentially with a coefficient $\alpha$ given by: $$\alpha = \frac{2\eta\omega^2}{3\rho V^3}$$ where you can see that the dependence on the square of the frequency $\omega$ of the sound will yield a higher coefficient of attenuation for higher-frequency sounds compared to lower ones. So between two plane waves with different frequencies $\omega_H=2\omega_L$, the higher one will attenuate four times faster. That is, if they have the same amplitude at a starting point, then after a certain characteristic distance $d=\frac{3\rho V^3}{2\eta\omega_L^2}$ the wave with higher frequency $\omega_H$ will be $e^3 \approx 20$ times weaker. For air, using $\rho$ = 1.225 $kg/m^3$, $V$ = 331 $m/s$ and $\eta$ = 1.8e-5 $kg/(m\,s)$, and taking the A pitch standard as the high frequency $\omega_H$ = 440 Hz and its lower octave as the lower frequency $\omega_L$ = 220 Hz, the characteristic distance will be 76 732 $km$. And the distance at which their relative amplitudes will be in a one-half ratio would be 17 729 $km$. • Probably equally significant for "hearing sound from far away" is diffraction: in the near field approximation (valid for sound diffracting around real world obstacles) low frequencies will diffract around them more easily. Thus a sound "from far away" which is likely to encounter trees, buildings etc. will be attenuated - much more so if the frequency is higher. The observed sound intensity will be a function of the three effects. Note also that intensity scales with amplitude squared... – Floris Jul 17 '15 at 15:22 • @rmhleo : So for my experiment I have these values: ρ = 1.225 kg/m3, V = 331 m/s and η = 1.8e-5 kg/ms, and the lower frequency is 500 Hz, whilst the higher frequency is 10000 Hz. I calculated α for both: 1.35e-7 and 2.7e-6 respectively. So this means for the lower frequency, its amplitude decreases exponentially with a factor of 1.35e-7? How do I calculate the characteristic distance? Thanks very much for your help. – user80922 Jul 19 '15 at 6:19 • Sorry, I did not cover all of it in the answer. The law of attenuation, expressed mathematically, is $A(d)=A_0e^{-\alpha d}$, where $A$ is the amplitude at distance $d$ from the point at which the wave's amplitude was $A_0$. The characteristic distance I mention is an arbitrary definition I made (see the formula above), taken as the distance over which the amplitude of the lower wave decreases $e$ times. That is, $\frac{A_L(d)}{A_0}=1/e=e^{-\alpha_L d_c}$, where you can check that $d_c$ yields the expression above. The relative attenuation at distance $d$ is: $A_H/A_L = (A_{0,H}/A_{0,L})\, e^{-(\alpha_H-\alpha_L) d}$ – rmhleo Jul 19 '15 at 8:39
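As a rough numerical restatement of the figures in the answer above (an editorial sketch, not part of the original thread), the same plug-in can be reproduced in a few lines; the frequencies are inserted directly in Hz, as the answer does, and the printed distances come out close to the 76 732 km and 17 729 km quoted:

```python
from math import log

eta = 1.8e-5          # dynamic viscosity of air [kg/(m s)]
rho = 1.225           # density of air [kg/m^3]
V = 331.0             # speed of sound [m/s]
w_L, w_H = 220.0, 440.0

alpha = lambda w: 2 * eta * w**2 / (3 * rho * V**3)   # Stokes attenuation coefficient [1/m]
a_L, a_H = alpha(w_L), alpha(w_H)

print(a_H / a_L)                      # 4.0: doubling the frequency quadruples alpha
print(1 / a_L / 1000)                 # characteristic distance d_c, roughly 7.6e4 km
print(log(2) / (a_H - a_L) / 1000)    # distance for a one-half relative amplitude, roughly 1.8e4 km
```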
2019-07-22 03:27:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8813499212265015, "perplexity": 373.0835548040128}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527474.85/warc/CC-MAIN-20190722030952-20190722052952-00113.warc.gz"}
https://learn.careers360.com/engineering/question-electrostatics-2/
# Two spherical conductors A and B of radii 1 mm and 2 mm are separated by a distance of 5 cm and are uniformly charged. If the spheres are connected by a conducting wire, then in the equilibrium condition, what is the ratio of the magnitudes of the electric fields at the surfaces of spheres A and B?

Electric field intensity: $\vec{E}=\frac{\vec{F}}{q_{0}}=\frac{kQ}{r^{2}}$

Capacitance of a conductor: $Q\propto V$, i.e. $Q=CV$, where C is the capacitance of the conductor and V its potential.

When the spherical conductors are connected by a conducting wire, charge is redistributed and the spheres attain a common potential V.

$\therefore\: \: \: \text{Intensity}\; E_{A}= \frac{1}{4\pi \varepsilon _{0}}\frac{Q_{A}}{R_{A}^{2}}$

$\text{or}\: \: \: \: E_{A}= \frac{C_{A}V}{4\pi \varepsilon _{0}R_{A}^{2}}= \frac{\left ( 4\pi \varepsilon _{0}R_{A} \right )V}{4\pi \varepsilon _{0}R_{A}^{2}}= \frac{V}{R_{A}}$

Similarly, $E_{B}= \frac{V}{R_{B}}$

$\therefore \frac{E_{A}}{E_{B}}= \frac{R_{B}}{R_{A}}= \frac{2}{1}$
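A small symbolic sketch of the same derivation (illustrative only; sympy is not part of the original solution):

```python
import sympy as sp

V, eps0, R_A, R_B = sp.symbols('V epsilon_0 R_A R_B', positive=True)

def surface_field(R):
    Q = 4 * sp.pi * eps0 * R * V           # Q = C*V with C = 4*pi*eps0*R
    return Q / (4 * sp.pi * eps0 * R**2)   # E = Q/(4*pi*eps0*R^2) = V/R

ratio = sp.simplify(surface_field(R_A) / surface_field(R_B))
print(ratio)                          # -> R_B/R_A
print(ratio.subs({R_A: 1, R_B: 2}))   # -> 2, i.e. E_A : E_B = 2 : 1
```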
2020-09-27 17:47:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8932196497917175, "perplexity": 4732.578952541412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400283990.75/warc/CC-MAIN-20200927152349-20200927182349-00272.warc.gz"}
https://taoofmac.com/space/blog/2019/11/03/0930
# My Current Take On Apple

I thought I'd write a bit about Apple for a change, seeing as the Mac was the whole point of this site to begin with, and the lack of an October event didn't stop them from launching new stuff. But they didn't launch the stuff I wanted, so I'm going to do what every random person on the Internet does these days, which is to rant on about things from a totally egotistical perspective. Another reason for returning to topic, as it were, is that most of my personal projects are temporarily on hold, since I'm currently either waiting for miscellaneous electronics parts to arrive from China or bereft of ideas and headspace to deal with software–plus real life keeps impinging on me. So let's get going.

## AirPods Pro

Just to get that out of the way, I find them interesting but completely illogical from a personal budget perspective–spending 279 EUR on something with less than a 4-year lifespan and which provides less bang for the buck than my current daily driver (30 EUR TaoTronics TT-BH042 earbuds with active noise cancellation, actual volume controls, and a cable that makes it impossible to lose half of them) makes exactly zero sense. The Sony WH-1000XM3 I bought during my last trip to Seattle were a much better purchase, and if you can tolerate over-ear headphones (a sticking point for many folk in open space offices, but one that I have made peace with), I think that kind of audio gear is a much better investment.

## Apple TV+

I am genuinely intrigued. Let's cast aside for a moment the broader picture–the implications of their digging further into the services revenue rabbit hole, the challenge of entering an already saturated market and all the wallet share considerations. Focusing on it from a purely end user standpoint, For All Mankind is exactly the kind of thing I'd like to watch, and even considering I've (so far) resisted subscribing to Netflix or HBO, I am certain to jump on the free 1-year subscription whenever I get a new piece of Apple hardware just to check out the experience. It's a trap, I know, but given that most evenings we watch either YouTube or Plex on the Apple TV instead of the (quite sizable) channel bundle we get from Vodafone Portugal, it is one I am quite likely to fall into, although I suspect we will eventually move to Netflix for better quality content. After all, other than cartoons and occasional news coverage, everything we watch at home is on-demand, and to be honest telcos (even Vodafone Portugal, which is our current provider and where I worked for over a decade) have yet to deliver a seamless, highly interactive on-demand experience like I get from the Apple TV. The sticking point here is not just the UI and basic UX, it's core EPG functionality being broken: my kids keep complaining about truncated episodes whenever they try to watch on-demand, and losing the last five minutes of a movie (picked from catch-up TV highlights) is so much of a put-off that we mostly avoid it–and to pay for a movie, I'd rather get a 4K version from Apple or Netflix. OTT has been in the cards for a long time now, and we will be tweaking our service bundle towards more bandwidth (we're at 200Mbps fiber, which is great but could be better) and fewer channels we never actually watch, and which are somewhat frustrating when we do.
It also bears noting that RTP, the national Portuguese broadcasting company, has a pretty decent Apple TV application, but they botched it by not having any way to turn off ads, so that’s a missed opportunity right there (although I suspect they would never get the pricing right if they charged for it). ## The Mac I haven’t upgraded to Catalina yet, and still don’t think I will until it gets a proper point release–i.e., one that does not eat my e-mail and has less visibility on Twitter. Apple’s software QA has become so much of a risk to my personal productivity that I’m (again) considering switching to a Linux desktop, and only a combination of inertia, real life and my working at Microsoft has prevented that from happening. This is not news–I did the research, have the hardware (it’s currently put to use as a KVM server that runs a Windows 10 VM and a bunch of development containers), and although it would pain me to get rid of my 27” iMac, I know it’s feasible for me personally and for what I need to do on a conventional machine. After all, how can you trust an operating system where the content filter can bring down the entire operating system? How can something like that, an essential feature for parental control, be so fundamentally broken to the point of rebooting the machine? I haven’t pulled the trigger yet because I can make do with Windows, and spent so much time using a Surface Laptop that the ghost of “bad PC hardware” has all but been exorcised. The upshot of all this is that, surprisingly enough, until Apple fixes their keyboards1 and desktop OS, I am much more likely to spend my own money on a Surface device than a new MacBook. It has, surprisingly enough, become the rational choice. I don’t like Windows, but it is currently the lesser evil as far as destroying my personal productivity is concerned, and being able to run Linux on it makes it so much more viable for me. The only thing that Apple had going for them was iOS, and all of a sudden 13.x (and the flurry of rushed updates leading up to 13.2) is so bad that it defies explanation–and I won’t even try to provide one. I too got bit by the sudden death of background applications in 13.2 (even as I type this on my iPad Mini, Safari keeps reloading, and Mail takes forever to re-sync) and, in a variant of the long-standing tradition of alarms failing to go off on time, it completely bungled Do Not Disturb during the DST shift last weekend. But the real annoyance for me was the iPad Pro refresh cycle skipping this October–I was seriously considering buying an iPad Pro this Autumn (both as an upgrade to my Mini 4 and as a hub for my music hobby), and it looks like I’ll have to either get a new Mini 5 (always a viable option, and far more budget-friendly) or wait six months. And yes, getting a Surface Go on sale and sticking Ableton Live Lite on it was an option I considered, but I quite like the iOS music app ecosystem. ## WatchOS (and a note on FitBit) I won’t go into much detail for now (there will be a separate post with a long-term review in a few weeks, maybe months), but I got myself a Series 5 recently, and even though it too is buggy (some complications only update when the watch face is fully active, and it often requires two taps to fully wake up), the thing has already paid for itself health-wise. Apple has zero real competition in smartwatches at this point, and as such I was more than a bit amused to see FitBit getting acquired by Alphabet this week. 
So far, I have only two things to say on the matter:

• I don't think they'll reboot WearOS with it (I too think this has a lot more to do with user data).
• I am deeply sad to see Pebble's heritage (small such as it was at this point) further dissolved into a corporate conglomerate that is so utterly disconnected from their user base.

More to the point, I don't think that we'll ever get anything as nice as the Pebble user experience out of that deal, nor that Android users will get decent smartwatches anytime soon. My dream non-Apple smartwatch would be a slim, Pebble-like device built (and massively marketed) by the likes of Swatch, but I don't think traditional watch manufacturers will ever pull it off successfully, since they lack both the supply chain and integration skills required to even come close and the tech chops to build an open, Pebble-like ecosystem. Now that I think of it, developing a product like that would be an interesting challenge for someone like me… But I digress. Best to enjoy these rainy afternoons with a good book and take my mind off what could be–I have plenty on my plate right now, on all fronts.

1. And I'm not just talking about the Esc key here, although rumors point to sanity having prevailed and its return on future MacBooks. I could go on about the utter lack of understanding of what customers actually expect from their laptops, but a reliable keyboard that actually outlasts the rest of the hardware seems like a pretty obvious thing to prioritize over thinness. ↩︎
2019-11-18 20:24:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1943012923002243, "perplexity": 2147.656258694137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669813.71/warc/CC-MAIN-20191118182116-20191118210116-00478.warc.gz"}
https://cracku.in/1-in-the-following-question-select-the-related-word--x-ssc-je-civil-engineering-24th-jan-2018-shift-2
# Question 1

In the following question, select the related word pair from the given alternatives. Red : Danger :: ? : ?

Solution

Red represents Danger. Similarly, Black represents Sorrow. $$\therefore$$ Black and Sorrow are related in the same way as Red and Danger are related. Hence, the correct answer is Option D.
2023-02-05 01:25:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7237256169319153, "perplexity": 5071.950252034341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500158.5/warc/CC-MAIN-20230205000727-20230205030727-00148.warc.gz"}
https://chemistry.stackexchange.com/questions/5246/why-is-methyl-group-more-electron-donating-than-tert-butyl-group/5249
# Why is methyl group more electron-donating than tert-butyl group?

As the title says, why is a methyl group more electron-donating than a tert-butyl group? The context behind this is stabilization of the conjugate base. (http://www.khanacademy.org/science/organic-chemistry/organic-structures/acid-base-review/v/stabilization-of-conjugate-base-iii look at 10:51)

Consider the tert-butyl group. In a C-H bond, the electron density is directed towards carbon. A methyl group, which has three hydrogen atoms attached to a carbon, pushes electron density towards $\ce{C2}$ of tert-butylacetic acid. So do the two other methyl groups. The amount of partial negative charge accumulated on $\ce{C2}$ of the tert-butyl group (which carries three methyl groups) is more than that on the methyl group of acetic acid (which carries only three hydrogens). The greater negative charge on $\ce{C2}$ in tert-butylacetic acid destabilizes the conjugate base compared to a simple acetate ion. Hence acetic acid has a lower pKa than tert-butylacetic acid.
2019-08-26 08:06:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5408743023872375, "perplexity": 10489.082119412358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027331228.13/warc/CC-MAIN-20190826064622-20190826090622-00230.warc.gz"}
https://mdnestr.com/ring-diagram-solution
# Emma Nestor 🍏 Posted . (Solution to the challenge at the end of Doing basic group theory with diagrams) Given a set $$R$$, a singleton set $$I$$, and functions • $$0:I\to R$$ (additive identity), • $$\mathrm{add}:R\times R\to R$$ (addition), • $$\mathrm{neg}:R\to R$$ (negative), • $$1:I\to R$$ (multiplicative identity), • $$\mathrm{mul}:R\times R\to R$$ (multiply). Then $$(R,\mathrm{add},0)$$ needs to be a group object, which is abelian (using the swap function), and $$(R,\mathrm{mul},1)$$ needs to satisfy associativity and identity. In order to make it a ring object we need multiplication to distribute over addition, i.e. the following diagrams commute: • Left-distributivity: • Right-distributivity:
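The commutative-diagram images do not come through in this extract; as a concrete, element-wise reading of what they assert (an illustrative sketch, not part of the original post), one can check both distributivity laws for a sample ring such as the integers mod 6:

```python
from itertools import product

n = 6  # integers mod 6 as a sample ring

add = lambda a, b: (a + b) % n
mul = lambda a, b: (a * b) % n

# Left-distributivity:  a*(b+c) == a*b + a*c
# Right-distributivity: (a+b)*c == a*c + b*c
left_ok  = all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
               for a, b, c in product(range(n), repeat=3))
right_ok = all(mul(add(a, b), c) == add(mul(a, c), mul(b, c))
               for a, b, c in product(range(n), repeat=3))
print(left_ok, right_ok)  # True True
```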
2023-03-29 13:12:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8113483786582947, "perplexity": 1545.381846504376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948976.45/warc/CC-MAIN-20230329120545-20230329150545-00711.warc.gz"}
http://math.stackexchange.com/questions/282937/is-lim-x-to-x-0-logfx-log-lim-x-to-x-0-fx-always-true
# Is $\lim_{x \to x_0} \log(f(x)) = \log\lim_{x \to x_0} f(x)$ always true?

Is this property always true? If yes, I would like a proof; otherwise a counterexample. $$\lim\limits_{x \to x_0} \log(f(x)) = \log\lim\limits_{x \to x_0} f(x)$$

- This might be helpful: math.lsa.umich.edu/courses/185F08/composition.pdf – Git Gud Jan 20 '13 at 19:18
- This is true since $\log(\cdot)$ is continuous, provided $f(x) > 0$ in a neighborhood of $x_0$ and $\lim_{x \to x_0} f(x) > 0$. – user17762 Jan 20 '13 at 19:22
- On the other hand, in the complex numbers no branch of log can be continuous on ${\mathbb C} \backslash \{0\}$, and then it is possible to have $\lim_{z \to z_0} \log f(z) \ne \log \lim_{z \to z_0} f(z)$. – Robert Israel Jan 20 '13 at 19:29

This property holds because $\log$ is a continuous function, provided (as noted in the comments) that $\lim_{x \to x_0} f(x)$ exists and is positive.
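A small numeric illustration (a sketch, not part of the original thread) with $f(x) = \sin(x)/x$ and $x_0 = 0$, where the limit is $1$ and so the log of the limit is $0$:

```python
import math

# f(x) = sin(x)/x near x0 = 0: the limit is 1, so log(lim f) = log(1) = 0.
for k in range(1, 8):
    x = 10.0**(-k)
    f = math.sin(x) / x
    print(f"x = {x:.0e}   log(f(x)) = {math.log(f):.3e}")
# log(f(x)) -> 0 as x -> 0, matching log of the limit.
```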
2016-06-28 22:30:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8736768364906311, "perplexity": 300.3362314599959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00010-ip-10-164-35-72.ec2.internal.warc.gz"}
https://eprint.iacr.org/2018/444
## Cryptology ePrint Archive: Report 2018/444

Founding Cryptography on Smooth Projective Hashing

Bing Zeng

Abstract: Oblivious transfer (OT) is a fundamental primitive in cryptography. Halevi-Kalai OT (Halevi, S. and Y. Kalai (2012), Journal of Cryptology 25(1)), which is based on smooth projective hashing (SPH), is a famous framework, and the most efficient one, for $1$-out-of-$2$ oblivious transfer ($\mbox{OT}^{2}_{1}$) against malicious adversaries in the plain model. However, it does not provide simulation-based security. Thus, it is harder to use it as a building block in secure multiparty computation (SMPC) protocols. A natural question, however, which so far has not been answered, is whether it can be made fully-simulatable. In this paper, we give a positive answer. Further, we present a fully-simulatable framework for general $\mbox{OT}^{n}_{t}$ ($n,t\in \mathbb{N}$ and $n>t$). Our framework can be interpreted as a constant-round black-box reduction of $\mbox{OT}^{n}_{t}$ (or $\mbox{OT}^{2}_{1}$) to SPH. To our knowledge, this is the first such reduction. Combining this with Kilian's famous completeness result, we immediately obtain a black-box reduction of SMPC to SPH.

Category / Keywords: cryptographic protocols / oblivious transfer, secure multiparty computation, malicious adversaries, smooth projective hashing.
2021-06-20 12:25:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7915963530540466, "perplexity": 1734.7445205529107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487662882.61/warc/CC-MAIN-20210620114611-20210620144611-00294.warc.gz"}
https://www.physicsforums.com/threads/draw-a-tangent-that-is-perpendicular-to-the-line.91625/
# Draw a tangent that is perpendicular to the line

1. Sep 30, 2005 ### TSN79

I have a curve $-\sqrt {2x^3 }$ on which I'm supposed to draw a tangent that is perpendicular to the line $y = \frac{4}{3}x + \frac{1}{3}$. I know that this tangent must have a "steepness" of -3/4 in order to make it perpendicular, but how do I now find the point on the graph?

2. Sep 30, 2005 ### EnumaElish

Solve for the x from the curve formula that would make its slope equal to that of the tangent.

3. Sep 30, 2005 ### Diane_

Given that the tangent needs to be perpendicular to the line y = 4/3x + 1/3, he's right that the slope must be -3/4. Negative reciprocal for the normal line. TSN79 - What you need is the point on the curve where the first derivative is -3/4. Just take the derivative, plug in that value, and solve for x. Note: I'm presuming you're taking or have taken calculus, as I don't see any other way to do this.

4. Sep 30, 2005 ### EnumaElish

He could do it geometrically I suppose, but that may or may not be exact.

5. Oct 1, 2005 ### TSN79

Take the derivative of what? And plug in what value? -3/4? I tried with $- \sqrt {2x^3 }$ but didn't really get anywhere...

6. Oct 1, 2005 ### EnumaElish

Take the derivative of $- \sqrt {2x^3 }$, then equate it to -3/4 and solve for x.

7. Oct 1, 2005 ### TSN79

Ah, now we're getting somewhere, thanks ya all!

8. Oct 1, 2005 ### HallsofIvy Staff Emeritus

Did you really need to be told that? You were told that the line had to be tangent to the curve given by $y=-\sqrt {2x^3 }$. Didn't you connect "tangent" with "derivative"? After you had been told to find the derivative, the only tangent mentioned was to $y= -\sqrt {2x^3 }$.
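For what it's worth, here is a small symbolic sketch of the procedure the thread describes (sympy is my choice of tool here, not something used in the thread; the curve and the perpendicular slope of -3/4 come from the posts above):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = -sp.sqrt(2 * x**3)

# Tangent must be perpendicular to y = (4/3)x + 1/3, so its slope is -3/4.
slope = sp.Rational(-3, 4)
sol = sp.solve(sp.Eq(sp.diff(y, x), slope), x)
print(sol)                      # [1/8]
x0 = sol[0]
print(x0, y.subs(x, x0))        # point of tangency: (1/8, -1/16)
```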
2017-03-30 13:11:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7383110523223877, "perplexity": 897.1947939929435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218194600.27/warc/CC-MAIN-20170322212954-00077-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.rdocumentation.org/packages/glmnet/versions/1.2
# glmnet v1.2

## Lasso and elastic-net regularized generalized linear models

Extremely efficient procedures for fitting the entire lasso or elastic-net regularization path for linear regression, logistic and multinomial regression models, Poisson regression and the Cox model. The algorithm uses cyclical coordinate descent in a pathwise fashion, as described in the paper listed below.

## Functions in glmnet

- glmnet-package: Elastic net model paths for some generalized linear models
- cv.glmnet: Cross-validation for glmnet
- plot.cv.glmnet: plot the cross-validation curve produced by cv.glmnet
- glmnet-internal: Internal glmnet functions
- print.glmnet: print a glmnet object
- glmnet: fit a GLM with lasso or elasticnet regularization
- predict.glmnet: make predictions from a "glmnet" object
- plot.glmnet: plot coefficients from a "glmnet" object
2019-12-09 04:22:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3639914095401764, "perplexity": 11083.692949341521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540517557.43/warc/CC-MAIN-20191209041847-20191209065847-00211.warc.gz"}
http://koreascience.or.kr/article/JAKO199811922392211.page
# Statistical Analysis of the Damage Severity and Scale Characteristics of Occupational Accidents in Construction Work

• Choi Ki-Bong (Department of Industrial Safety, Chungcheong College)
• Published : 1998.03.01

#### Abstract

Statistical analyses of occupational accidents associated with construction work were carried out to explore the basic statistical characteristics of their damage consequences. Emphasis was placed upon probabilistic and statistical analyses to clarify, in particular, the relationship between the frequency of labour accidents and their damage consequences. Damage consequences were classified into two categories: the number of workdays lost due to accidents and the number of injured workers involved in one accident. Two types of accident data were collected for the analyses. From the analyses, it was found that the relation between damage due to accidents and their frequencies can be represented by a simple power function, which indicates a log-log linear relation. By making use of this relationship, various probabilistic evaluations such as the estimation of the mean time periods between accidents, expected damage consequences, and the expected damage ratio between different mean time periods of accidents were conducted.
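The abstract does not give the fitted numbers, so as a purely illustrative sketch (with made-up data, not the paper's), the log-log linear fit it describes can be done like this:

```python
import numpy as np

# Hypothetical (damage, frequency) pairs following an approximate power law.
damage    = np.array([1, 2, 4, 8, 16, 32], dtype=float)     # e.g. workdays lost
frequency = np.array([120, 65, 30, 17, 8, 4], dtype=float)  # accidents observed

# Fit log(frequency) = log(a) + b*log(damage), i.e. frequency ~ a * damage**b
b, log_a = np.polyfit(np.log(damage), np.log(frequency), 1)
print(f"exponent b = {b:.2f}, prefactor a = {np.exp(log_a):.1f}")
```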
2020-07-12 16:30:34
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8501062989234924, "perplexity": 1721.7647943137679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657138752.92/warc/CC-MAIN-20200712144738-20200712174738-00457.warc.gz"}
https://www.hackmath.net/en/math-problem/854
# Building

Lenka has 22 cubes for the construction of a building three cubes in height, two cubes in width, and four cubes in length. Is she able to build the building with these cubes?

#### Solution:

$V = 3\cdot 2\cdot 4 = 24 \ne 22$

The building requires 24 cubes, but Lenka has only 22, so she cannot build it.
2020-02-28 02:51:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4607308804988861, "perplexity": 2631.558480486109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146940.95/warc/CC-MAIN-20200228012313-20200228042313-00056.warc.gz"}
http://cms.math.ca/cmb/kw/Hardy%20submodules
Canadian Mathematical Society (www.cms.math.ca), search results: all articles in the CMB digital archive with keyword "Hardy submodules" (1 result)

1. CMB 2004 (vol 47 pp. 456). Seto, Michio. On the Berger-Coburn-Lebow Problem for Hardy Submodules. In this paper we shall give an affirmative solution to a problem, posed by Berger, Coburn and Lebow, for $C^{\ast}$-algebras on Hardy submodules.

Keywords: Hardy submodules. Category: 47B38
2013-05-20 06:43:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9127569198608398, "perplexity": 10866.170254715438}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698493317/warc/CC-MAIN-20130516100133-00035-ip-10-60-113-184.ec2.internal.warc.gz"}
http://clay6.com/qa/4150/show-that-each-of-the-given-three-vectors-is-a-unit-vector-also-show-that-t
# Show that each of the given three vectors is a unit vector: $\frac{1}{7} (2\hat i + 3\hat j + 6\hat k), \frac{1}{7} (3\hat i - 6\hat j + 2\hat k), \frac{1}{7} (6\hat i + 2\hat j - 3\hat k)$. Also, show that they are mutually perpendicular to each other.
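A quick numerical verification of what the exercise asks (an illustrative sketch, assuming the third vector's last component is -3/7 as reconstructed above):

```python
import numpy as np

vs = np.array([[2, 3, 6],
               [3, -6, 2],
               [6, 2, -3]]) / 7.0

print(np.linalg.norm(vs, axis=1))   # -> [1. 1. 1.], each is a unit vector
print(np.round(vs @ vs.T, 10))      # off-diagonal entries are 0, so mutually perpendicular
```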
2016-12-03 23:53:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7144960761070251, "perplexity": 9761.647672282626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541142.66/warc/CC-MAIN-20161202170901-00000-ip-10-31-129-80.ec2.internal.warc.gz"}
http://channelflow.org/dokuwiki/doku.php?id=plugin:clearfloat&rev=1265126109&do=diff
# channelflow.org

### Site Tools

plugin:clearfloat

# Differences

This shows you the differences between two versions of the page (plugin:clearfloat, created 2009/01/28 by predrag, current revision 2010/02/02).

====== Clearfloat test ======

{{:escrates.png?248 |}}

The escape rate $\gamma$ of the repeller plotted as a function of the number of partition intervals $N$, estimated using: ($\color{blue}\blacklozenge$) the under-resolved 4-interval and the 7-interval 'optimal partition', ($\color{red}\bullet$) all periodic orbits of periods up to $n=8$ in the deterministic, binary symbolic dynamics, with $N_i=2^n$ periodic-point intervals (the deterministic, noiseless escape rate is $\gamma_{det} = 0.7011$), and ($\blacksquare$) a uniform discretization in $N=16,\cdots,256$ intervals. For $N=512$ the discretization yields $\gamma_{num} = 0.73335(4)$.

~~CL~~

this is supposed not to float...
2020-04-02 14:43:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48263785243034363, "perplexity": 11479.807972799186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506988.10/warc/CC-MAIN-20200402143006-20200402173006-00419.warc.gz"}
http://mathhelpforum.com/advanced-algebra/84234-basic-linear-algebra-questions-print.html
# basic linear algebra questions

• April 17th 2009, 07:32 PM Hikari Clover

basic linear algebra questions

let ABC be a triangle and let p be the midpoint of side AB
let http://i52.photobucket.com/albums/g37/mmmmms/1-7.jpg and http://i52.photobucket.com/albums/g37/mmmmms/2-6.jpg
express http://i52.photobucket.com/albums/g37/mmmmms/5-3.jpg and http://i52.photobucket.com/albums/g37/mmmmms/6-2.jpg in terms of http://i52.photobucket.com/albums/g37/mmmmms/7-1.jpg and http://i52.photobucket.com/albums/g37/mmmmms/8-1.jpg and http://i52.photobucket.com/albums/g37/mmmmms/9-1.jpg
hence prove that the midpoint of the hypotenuse of a right-angle triangle is equidistant from the three vertices
==================================================
let p be the plane with cartesian equation 2x+y-3z=9 and let A,B be points with coordinates (1,-2,3) and (-2,1,-1)
l is the line passing through the points A and B
find the coordinates of the point C where the line l intersects the plane p
i have no idea how to find the intersection between a line and a plane
thx

• April 17th 2009, 11:43 PM running-gag

Quote: Originally Posted by Hikari Clover

Hi

$\overrightarrow{AP}^2 = \frac12\:\overrightarrow{AB}\cdot\frac12\:\overrightarrow{AB} = \frac14\:\overrightarrow{AB}\cdot\overrightarrow{AB}$

$\overrightarrow{AB} = \overrightarrow{AC} + \overrightarrow{CB} = -\vec{a} + \vec{b}$

$\overrightarrow{AP}^2 = \frac14\:\left(-\vec{a} + \vec{b}\right)^2 = \frac14\:\left(\vec{a}^2 + \vec{b}^2 -2 \vec{a} \cdot \vec{b}\right)$

$\overrightarrow{CP} = \frac12\:\left(\overrightarrow{CA} + \overrightarrow{CB}\right) = \frac12\:\left(\vec{a} + \vec{b}\right)$

==================================================

Quote: Originally Posted by Hikari Clover

let p be the plane with cartesian equation 2x+y-3z=9 and let A,B be points with coordinates (1,-2,3) and (-2,1,-1) l is the line passing through the points A and B find the coordinates of the point C where the line l intersects the plane p i have no idea how to find the intersection between a line and a plane thx

AB coordinates are (-3,3,-4)
Therefore one parametric equation of l is:
x = -3t + 1
y = 3t - 2
z = -4t + 3
C(x,y,z) is on the plane iff 2x+y-3z=9
C is on l iff there exists t such that
x = -3t + 1
y = 3t - 2
z = -4t + 3
Substitute x,y,z in the Cartesian equation of the plane to get one linear equation. Solve for t and substitute in x,y,z expressions.

Spoiler: 2(-3t + 1)+(3t -2)-3(-4t +3)=9 gives t=2, x = -5, y = 4, z = -5, C(-5,4,-5)

• April 18th 2009, 04:20 AM Hikari Clover

Quote: Originally Posted by running-gag

Hi

$\overrightarrow{AP}^2 = \frac12\:\overrightarrow{AB}\cdot\frac12\:\overrightarrow{AB} = \frac14\:\overrightarrow{AB}\cdot\overrightarrow{AB}$

$\overrightarrow{AB} = \overrightarrow{AC} + \overrightarrow{CB} = -\vec{a} + \vec{b}$

$\overrightarrow{AP}^2 = \frac14\:\left(-\vec{a} + \vec{b}\right)^2 = \frac14\:\left(\vec{a}^2 + \vec{b}^2 -2 \vec{a} \cdot \vec{b}\right)$

$\overrightarrow{CP} = \frac12\:\left(\overrightarrow{CA} + \overrightarrow{CB}\right) = \frac12\:\left(\vec{a} + \vec{b}\right)$

==================================================

AB coordinates are (-3,3,-4)
Therefore one parametric equation of l is:
x = -3t + 1
y = 3t - 2
z = -4t + 3
C(x,y,z) is on the plane iff 2x+y-3z=9
C is on l iff there exists t such that
x = -3t + 1
y = 3t - 2
z = -4t + 3
Substitute x,y,z in the Cartesian equation of the plane to get one linear equation. Solve for t and substitute in x,y,z expressions.
Spoiler: 2(-3t + 1)+(3t -2)-3(-4t +3)=9 gives t=2 x = -5 y = 4 z = -5 C(-5,4,-5) hey , thx for ur replying(Clapping) but for the first question,did u forget to put that absolute value sign? or it doesnot matter? • April 18th 2009, 05:58 AM running-gag Quote: Originally Posted by Hikari Clover hey , thx for ur replying(Clapping) but for the first question,did u forget to put that absolute value sign? or it doesnot matter? Are you talking about the modulus ? $||\overrightarrow{AB}||^2 = \overrightarrow{AB}\cdot\overrightarrow{AB} = \overrightarrow{AB}^2 = AB^2$ • April 18th 2009, 06:17 AM Hikari Clover oh i got it thanks so much ^_^
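A quick computational check of the substitution described above (an illustrative sketch using sympy; the plane and points are the ones from the question):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([1, -2, 3])
B = sp.Matrix([-2, 1, -1])

P = A + t * (B - A)                       # parametric point on the line AB
eq = sp.Eq(2*P[0] + P[1] - 3*P[2], 9)     # plane 2x + y - 3z = 9
t0 = sp.solve(eq, t)[0]
print(t0, list(P.subs(t, t0)))            # t = 2, C = (-5, 4, -5)
```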
2014-04-16 19:01:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8736454248428345, "perplexity": 1729.2495171639234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.zigya.com/study/book?class=12&board=nbse&subject=Biology&book=Biology&chapter=Molecular+Basis+of+Inheritance&q_type=&q_topic=The+Dna+&q_category=&question_id=BIENNT12132013
Sickle cell anaemia (Biology, Molecular Basis of Inheritance, Class 12, Nagaland Board)

Sickle cell anaemia is
• caused by substitution of valine by glutamic acid in the β-globin chain of haemoglobin
• caused by a change in a base pair of DNA
• characterised by elongated sickle-like RBCs with a nucleus

C. caused by a change in a base pair of DNA

Sickle-cell anaemia is caused by a change in a single base pair of DNA. It is a genetic disease reported most often in people of African ancestry. Individuals carrying the sickle-cell allele show resistance to malaria.

Two chains of DNA have antiparallel polarity.
The two chains of DNA are antiparallel, that is, if one chain has 5'→3' polarity the other chain has 3'→5' polarity.

Name the genetic material for the majority of organisms.
DNA (deoxyribonucleic acid)

List the number of base pairs in: (i) lambda bacteriophage (ii) E. coli and (iii) the haploid content of human DNA.
(i) 48502 bp (ii) 4.6 × 10^6 bp and (iii) 3.3 × 10^9 bp

How many nucleotides are present in bacteriophage φX174?
5386

List the functions of RNA.
Functions of RNA: RNA acts as genetic material in viruses. It also functions as an adapter and messenger. It also acts as a catalytic molecule and catalyses various biochemical reactions.
2018-09-22 06:54:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24307139217853546, "perplexity": 14064.211309862574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158205.30/warc/CC-MAIN-20180922064457-20180922084857-00087.warc.gz"}
https://mathemerize.com/tag/conditional-probability/
## Formula for Conditional Probability

Here, you will learn the formula for conditional probability and properties of conditional probability with examples. Let's begin –

Formula for Conditional Probability

Let A and B be two events associated with a random experiment. Then, the probability of occurrence of event A under the condition that B has already occurred and P(B) $$\ne$$ 0, is …
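The excerpt breaks off at the ellipsis. For reference (this is the standard textbook definition, not text recovered from the truncated page), the formula being introduced is

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad P(B) \neq 0.$$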
2023-03-28 15:22:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8436915874481201, "perplexity": 224.4407361917933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00619.warc.gz"}
http://math.stackexchange.com/questions/146377/birational-map-from-a-variety-to-projective-line
# Birational map from a variety to projective line

This is exercise $4.4$ part (c) of Hartshorne's book. Let $Y$ be the nodal cubic curve $y^{2}z=x^{2}(x+z)$ in $\mathbb{P}^{2}$. Show that the projection $f$ from the point $(0,0,1)$ to the line $z=0$ induces a birational map from $Y$ to $\mathbb{P}^{1}$.

Attempt: Consider the open subset of $Y$ given by $Y \setminus V(z)$, that is, we set $z=1$. Define $f: Y \setminus V(z) \rightarrow \mathbb{P}^{1} \setminus \{[1:1],[1:-1]\}$ by: $f([x : y : z]) = [x: y]$. Now define $g: \mathbb{P}^{1} \setminus \{[1 : 1],[1 : -1]\} \rightarrow Y \setminus V(z)$ by: $g([x : y]) = [(y^{2}-x^{2})x : (y^{2}-x^{2})y : x^{3}]$.

Question: what if $x=0$? Then we get $y=0$, so $[x :y : 1] = [ 0 : 0 : 1]$, but the point $[x^{4} : x^{3}y : x^{3}]$ is not defined at $x=y=0$.

- $[ab:ac] = [b:c]$, right? – Hurkyl May 17 '12 at 17:13
- You can simplify your formula for $f$ (assuming it's right). p.s. your formula would be ill-defined were it not for the fact the simplification is possible: your functions aren't homogeneous, and so the required identity $f[x:y:z]=f[ax:ay:az]$ isn't immediately evident. – Hurkyl May 17 '12 at 17:23
- @Hurkl: just modified it, can you have a look please? – user31509 May 17 '12 at 17:44
- It's been a while since I've looked at this fine detail -- but as I recall, it's good enough for them to be inverses on an open set, right? You can restrict the domains further to make it work. – Hurkyl May 18 '12 at 17:54

Recall that the line in $\mathbb P^2$ joining $P=[a:b:c]$ to $P'=[a':b':c']$ is given parametrically by $[ua+u'a':ub+u'b':uc+u'c']$ where $[u:u'] \in \mathbb P^1$. So, the line joining $P'=O=[0:0:1]\in Y$ to $P=[a:b:c]\neq O\in Y$ has parametric equation $[ua:ub:uc+u']$. It intersects the line at infinity $z=0$ at the point $uc+u'=0$, i.e. at $[a:b:0]$. So the required rational map $Y \cdots \to \mathbb P^1$ (not a morphism!) is $$\phi:U=Y\setminus \lbrace O\rbrace \to \mathbb P^1:[a:b:c]\mapsto [a:b]$$ The inverse rational map is the morphism (you have actually already calculated this: your computation is correct) $$\psi=\phi^{-1}:\mathbb P^1\to Y: [c:d]\mapsto [c(d^2-c^2):d(d^2-c^2):c^3]$$ Notice that $\psi$ is the normalization of $Y$, a remark which enlightens the whole exercise. In particular we see that the two points $[1:\pm1]\in \mathbb P^1$ are sent by $\psi$ to $O$. This is why the rational map $\phi$ we started with cannot be a morphism: a non-injective morphism like $\psi$ cannot be inverted as a morphism, only (sometimes!) as a rational map.
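A quick symbolic sanity check (an illustrative sketch, not part of the original thread) that the parametrization $\psi$ really lands on the nodal cubic $y^2 z = x^2(x+z)$:

```python
import sympy as sp

c, d = sp.symbols('c d')

# psi([c:d]) = [c(d^2-c^2) : d(d^2-c^2) : c^3]
x = c * (d**2 - c**2)
y = d * (d**2 - c**2)
z = c**3

print(sp.expand(y**2 * z - x**2 * (x + z)))  # -> 0, so the image lies on Y
```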
2015-11-30 14:03:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9703146815299988, "perplexity": 262.42726893485315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398462665.97/warc/CC-MAIN-20151124205422-00234-ip-10-71-132-137.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-simplify-2x-9y-7x-20y
# How do you simplify 2x-9y+7x+20y?

Jan 10, 2016

9x + 11y

#### Explanation:

To simplify the expression, collect 'like terms', i.e. collect together the terms which contain the same variable by adding or subtracting their coefficients. So 2x + 7x - 9y + 20y = (2 + 7)x + (-9 + 20)y = 9x + 11y
2019-12-06 15:49:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5123745203018188, "perplexity": 1241.1735457151242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540488870.33/warc/CC-MAIN-20191206145958-20191206173958-00265.warc.gz"}
http://devmaster.net/posts/5347/rotation
0 101 Sep 21, 2003 at 18:29

Hi, Rotation is something that is very easy to understand, but it is giving me a really big headache now. :blink: My problem is: when I rotate on the X or Y axis by using D3DXMatrixRotationX() or Y(), it rotates around the screen, not about a point. Rotating on Z rotates it about the point. I have coded before but now things are getting a bit messy. Whenever I use the code that I use for translation of objects, then these things happen. Everything seems to be fine :angry: . It also happened when I was coding in OpenGL a month back. But I accomplished this task using Push and Pop Stack Layers :blush: . Is there anything like this in Direct3D9? The code is fine … ;7

D3DXMATRIXA16 World; // World CoOrdinates
float Angle = 0.0f; // Angle of Rotation
D3DXMatrixRotationY (&World, Angle); // Set Rotation on Y-Axis
D3DDevice->SetTransform (D3DTS_WORLD, &World); // Transform
Angle += 0.05f; // Increment in Angle

#### 18 Replies

0 158 Sep 22, 2003 at 03:30

It depends on where the camera is located. What you are doing is rotating around the camera. Are you doing any translation? If you want to rotate around a point, then you'll need to do the rotation and then do the translation and not the opposite. As for glPushMatrix(), I don't think there's an equivalent function in DirectX.

0 101 Sep 22, 2003 at 03:59

i guess there is a way to load matrices in directx. it should be very easy to implement your own matrix stack if you really need to do that. i think i even saw that done once in a rendering api that supported both dX and oGL

0 101 Sep 22, 2003 at 04:10

I am learning Dx9, so I don't know much of how DX works :confused: . I had done translation before with the same code. What I did was I used the code that I used for translation and removed it and implemented rotation in its place. Rotation on the Z axis seems to work fine … but X & Y work like I am using translation as well …? They are rotating around the axis :eek: . As for the camera, it is fixed and I am not moving it …!

0 101 Sep 22, 2003 at 04:13

is the object located at the origin of the world ???

0 158 Sep 22, 2003 at 04:15

Well, since you are doing translation, I guess that's why. Your object is not located at the origin and that's why it is rotating around the screen. You have to first do the rotation and then do the translation afterwards.

0 101 Sep 22, 2003 at 04:19

i think he meant that he replaced the translation code with the rotation. my guess is that the object isn't located at the origin and that he somehow rotates the object around the origin

0 101 Sep 22, 2003 at 05:01

I am not doing any translation of any sort here … I am just using rotation and I am changing the world coordinates after doing rotation. I had the same idea as you guys … but if I was doing translation too, then if I use Z-axis rotation with translation, the object should have done zooming in and out with movement … but it is not. :eek: Rotating it on the Z axis just rotates it around the Z axis with the top point as its center … whereas on X & Y it rotates around the screen with the center of the screen as its center …! :confused: I think I am explaining it right :sigh: !!!

0 101 Sep 22, 2003 at 08:56

Have you tried defining your world matrix as a D3DXMATRIX? I've never seen it defined as a D3DXMATRIXA16.

0 101 Sep 22, 2003 at 09:40

It worked in translation. And the SDK documentation and other tutorials also mention it. :blush:

0 101 Sep 22, 2003 at 14:47

are you rendering a 3ds model or something like that ???
sometimes people don’t care to center their models around the origin. so while correctly rotating the object around the origin in objectspace it would look like the model gets rotated around the origin of the world if you try to place the object there 0 101 Sep 22, 2003 at 15:16 I am just drawing a simple triangle with three vertices. First is the Bottom Left. Second is the Bottom Right. Third is the Top one. I have even tried changing their orders. :confused: 0 101 Sep 22, 2003 at 18:03 could you post the coordinates of your triangle… i don’t quite get this… if you do no translation right now and the triangle is centered. how could it rotate around anything else but it’s origin 0 101 Sep 23, 2003 at 00:33 sVERTEX Triangle [3] = { {-0.5f, -0.5f, 2.5f, D3DCOLOR_XRGB (255, 0, 0)}, // Bottom  Left Point { 0.5f, -0.5f, 2.5f, D3DCOLOR_XRGB (0, 255, 0)}, // Bottom Right Point { 0.0f,  0.5f, 2.5f, D3DCOLOR_XRGB (0, 0, 255)}, // Top Point }; Is there any way to set center of origin for this triangle ??? 0 101 Sep 23, 2003 at 01:25 I am setting 3 types of transformations i.e. Projection, View & World. Code is below : Projection float AspectRatio = (float)Width / Height; // Ratio between Width & Height // Initialize Perspective View for the Screen D3DXMatrixPerspectiveFovLH (&Projection, // Projection Matrix 1.0f, // Field of View AspectRatio, // Aspect Ratio of the Screen 0.0f, // Nearest Clipping Plane 100.0f); // Farthest Clipping Plane // Setup Projection Matrix for the Screen Graphics->SetTransform (D3DTS_PROJECTION, &Projection); View D3DXMATRIXA16 Camera; // Camera D3DXVECTOR3 Position (0.0f, 0.0f, -2.0f); // Position of the Camera D3DXVECTOR3 Direction (0.0f, 0.0f, 0.0f); // Direction of the Camera Looking at D3DXVECTOR3 UpDirection (0.0f, 1.0f, 0.0f); // Upward Direction for the Camera // Initialize Camera for the Game D3DXMatrixLookAtLH (&Camera, // Camera &Position, // Position of the Camera &Direction, // Direction of the Camera Looking at &UpDirection); // Upward Direction of the Camera // Setup Camera on the Screen Graphics->SetTransform (D3DTS_VIEW, &Camera); Rotation is the same as shown before, but here is the code that does the drawing : static float Angle = 0.0f; // Angle of Rotation KeysPressed (); // Check for the Keys pressed and process them D3DXMatrixRotationX (&World, Angle); // Transform the CoOrdinates according to the World CoOrdinates Graphics->SetTransform (D3DTS_WORLD, &World); Graphics->Clear (0, // Number of Areas NULL, // Areas to clear D3DCLEAR_TARGET | // Clear the Screen with the Colour defined below D3DCLEAR_ZBUFFER, // Clear the Depth Buffer with the value given below D3DCOLOR_XRGB (0, 0, 0), // Colour of the Screen 1.0f, // Dpeth (Z) Buffer 0); // Stencil Buffer Graphics->BeginScene (); // Begin drawing the Scene // Bind the Vertex Buffer with the Stream Data Graphics->SetStreamSource (0, // Data Stream Starting Number TriangleVB, // Vertex Buffer for the Triangle 0, // Offset sizeof (sVERTEX)); // Size of the Vertex Structure // Draw Graphics->DrawPrimitive (D3DPT_TRIANGLELIST, // Type of Vertex (Triangle with 3 Vertices) 0, // Index Number of Vertex to begin with 1); // Number of Vertex to Draw Graphics->EndScene (); // Finish drawing the Scene // Present Back Buffer contents on the Screen Graphics->Present (NULL, NULL, NULL, NULL); Angle += 0.05f; if (Angle >= 360.0f) { Angle = 0.0f; // Set Angle back to 0 degrees } 0 158 Sep 23, 2003 at 01:47 Your triangle is not in the origin!! (i.e. 
it is not centered) As you can see, it is “translated” -2.5 in the z axis. What you need to do is draw the triangle at the center (i.e. z = 0), do the rotation, and then translate 2.5 in z axis (using D3DXMatrixTranslate() ) 0 101 Sep 23, 2003 at 03:13 :rolleyes: Can it be? I was rotating the world around 0 all the time while drawing it around 2.5 Axis? Maybe you are right. I will try it out when I will go home, I have just reached office now. I am stupid :lol: ? Why didn’t I think of that before? So the same thing is with OpenGL right :blush: ? 0 101 Sep 23, 2003 at 03:51 right… 0 101 Sep 23, 2003 at 14:19 :yes: Thanks guys! It worked :blush: ! It was just a minor mistake but it gave me a real big headick … :lol: ! I never knew about that … THANKS A LOT GUYS :yes: !
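For anyone landing on this thread later, here is a minimal, untested sketch of the fix the thread converges on. It assumes the triangle's vertices have been redefined around the origin (z = 0); the matrix names Rotation and Translation are just for illustration, while Angle and Graphics come from the code already posted. Note the actual D3DX function is D3DXMatrixTranslation, not D3DXMatrixTranslate:

D3DXMATRIXA16 Rotation, Translation, World;

// Rotate about the triangle's own centre first ...
D3DXMatrixRotationY (&Rotation, Angle);

// ... then push it out to z = 2.5, where it used to be drawn
D3DXMatrixTranslation (&Translation, 0.0f, 0.0f, 2.5f);

// D3DX uses row vectors, so World = Rotation * Translation applies the rotation before the translation
D3DXMatrixMultiply (&World, &Rotation, &Translation);
Graphics->SetTransform (D3DTS_WORLD, &World);

With the order reversed (translation first, then rotation) the triangle would again sweep around the world origin instead of spinning in place, which is exactly the behaviour described at the start of the thread.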
2014-04-21 07:27:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33945271372795105, "perplexity": 2451.880766703947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/111272-l-hospital-s-rule-problem.html
1. ## L'Hospital's Rule Problem My professor did this problem on the board, but now I realize that I don't understand it: --> The limit as x goes to 0 of sin(3x)/tan(4x) It is a 0/0 problem, so it seems like I would apply l'Hospital's Rule and get 3cos(3x)/4sec^2(4x). But instead, she wrote: 1. lim sin(3x)*(1/(sin(4x))*(cos(4x)) --This is just separating the equation, but I don't know why she did it instead of using l'Hospital's Rule directly. Then she wrote to RECALL that the limit as x goes to 0 of sin(Ax)/Ax using l'Hospital's Rule goes to Acos(Ax)/A, which equals 1. This makes sense. Then: 2. lim = [sin(3x) (1/sin(4x)) (cos(4x))] 3x/3x * 4/4 3. lim = sin(3x)/(3x) * 4x/sin(4x) * 3/4 cos(4x) 4. and the limit of sin(3x)/(3x) = 1, the lim of 4x/sin(4x) =1, and the lim of 3/4 cos(4x) = 3/4, so the final answer is 3/4. I understand why sin(3x)/(3x) and so on would have a limit of 1, due to what she told us to recall. But I really don't understand at all why she did what she did and how she did steps 2 and 3. Please help me. Thanks. 2. It is simple.She wanted the question to be solved without using L'Hospital's Rule. 3. Seems like she had sin(3x)/sin(4x) (plus some other stuff) and she wanted to reduce this. So she multiplied by "1" twice, first by 3x/3x and then by 4/4 in order to get two patterns of sin(Ax)/Ax and Ax/sin(Ax). So she had sin(3x)/3x and 4x/sin(4x) which are both one. This leaves a 3 on top and a 4 on bottom. Do you see that? So plus the "other stuff" it is all together (3/4)*cos(4x). Take the limit as x goes to zero and that cos(4x) becomes a 1. So the final answer is just 3/4. Patrick EDIT: pankaj is correct that this does not use L'Hospital's Rule. Maybe that is what was throwing you off.
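For reference, the whole board computation can be written in one line; the multiplication by 3x/3x and 4/4 in steps 2-3 is only there to regroup the factors into the two standard limits:

$$\lim_{x\to 0}\frac{\sin(3x)}{\tan(4x)} \;=\; \lim_{x\to 0}\left[\frac{\sin(3x)}{3x}\cdot\frac{4x}{\sin(4x)}\cdot\frac{3}{4}\cos(4x)\right] \;=\; 1\cdot 1\cdot\frac{3}{4} \;=\; \frac{3}{4}$$

Applying l'Hospital's Rule directly gives the same value, since $\lim_{x\to 0}\frac{3\cos(3x)}{4\sec^2(4x)} = \frac{3\cdot 1}{4\cdot 1} = \frac{3}{4}$.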
2016-09-30 06:09:48
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8721868395805359, "perplexity": 825.7004356136538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662022.71/warc/CC-MAIN-20160924173742-00196-ip-10-143-35-109.ec2.internal.warc.gz"}
https://www.math.kyoto-u.ac.jp/ja/event/seminar/4391
# Compactness theorem for Heisenberg manifolds with Sub-Riemannian metrics under the Gromov--Hausdorff topology. 2020/06/30 Tue 15:00 - 16:30 We investigate a condition for a family of sub-Riemannian manifolds to be compact under the Gromov--Hausdorff topology. In this talk, we introduce a volume form on the Heisenberg Lie group endowed with sub-Riemannian metrics of corank $0$ or $1$ to give an analogue of Mahler's compactness theorem for compact Heisenberg manifolds, which are quotient spaces of the Heisenberg Lie group by uniform discrete subgroups. Namely, we show that if a family of such Heisenberg manifolds has a lower bound on systole and an upper bound on total measure, then it is relatively compact. This seminar will be held online via Zoom. The meeting ID and password will be sent to the differential topology seminar mailing list before the seminar starts. If you would like to attend, please let the organizers (伊藤哲也・森田陽介) know in advance.
2020-08-06 08:07:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8418087959289551, "perplexity": 500.9781069927231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736883.40/warc/CC-MAIN-20200806061804-20200806091804-00468.warc.gz"}
https://www.msri.org/web/msri/support-msri/academic-sponsors/becoming-an-academic-sponsor
# Mathematical Sciences Research Institute

SLMath academic sponsors provide an interface between SLMath and the mathematical community. The Academic Sponsors participate in Institute governance and take part (at SLMath's expense) in a general meeting with the other sponsors and trustees each year for this purpose. SLMath helps support visits by members to sponsoring institutions. Sponsors are offered some financial backing of conferences at sponsoring institutions. Perhaps most important: An Academic Sponsor can send up to 4* graduate students to SLMath Summer Graduate Schools each year at SLMath's expense. In addition to the original academic sponsorship ("Full-level"), two more levels ("Mid-level" and "Entry-level") were introduced to broaden the community of sponsoring institutions and to allow small and medium-sized institutions to participate in SLMath's activities and governance. Eligibility for a certain level depends on the size of the institution's graduate program, and it affects the number of students that can be sent to the SLMath summer schools:

Multi-tier Dues Structure

| Sponsorship level | Size of Graduate Program | Dues Type | Dues | Summer Graduate Schools Quota* |
|---|---|---|---|---|
| Full Sponsorship | 25 or more graduate students | Full | $5,158 | 2 + 2 |
| Mid-level Sponsorship | Less than 25 graduate students | Half | $2,579 | 1 + 2 |
| Entry-level Sponsorship | No graduate program | Quarter | $1,290 | 1 + 1 |

* While the size of your graduate program determines whether your institution is eligible for Mid- or Entry-level sponsorship, you may choose to pay the Full-level dues as this allows you to send 2 + 2 graduate students to SLMath Summer Schools. Learn more about Summer Graduate Schools below. If you have additional questions, please contact info@slmath.org.

### Committee of Academic Sponsors

This committee consists of a representative from each Sponsoring Institution. The committee meets at SLMath each spring; travel expenses and hotel reimbursement are provided by SLMath. The rate for travel reimbursement is up to US$600 (depending on point of origin) for participants from US and Canadian universities, and up to US$700 for participants from foreign sponsoring institutions. The Sponsors hear reports on the Institute's current scientific programs and provide advice to SLMath on future plans and initiatives. There are often discussions of issues facing the broader mathematical community as well. This meeting is typically held just before the annual March meeting of the SLMath Board of Trustees, and the Sponsors join with the Trustees for a banquet and socializing. We are continually working with the Committee of Academic Sponsors to explore new ways that SLMath can serve the community, and we expect this to evolve accordingly in the future.

### Summer Graduate Schools

Every summer, SLMath organizes several Summer Graduate Schools (usually two weeks each), some of which are held at SLMath in Berkeley, and others at partner institutions around the U.S. and worldwide. Attending one of these schools can be a very motivating and exciting experience for a student; participants have often said that it was the first experience where they felt like real mathematicians, interacting with other students and mathematicians in their field. See above for the number of students each sponsor can send to these schools. SLMath covers the travel and local expenses of these students. The rate for travel reimbursement is up to $600 for students from US and Canadian universities, and up to $700 for students from other sponsoring institutions.
All institutions can nominate additional students to attend a workshop if they pay the attendance fee of$1,800 per student and workshop (2023 rate, subject to change). Additional students will only be considered after the end of the open enrollment period and if the workshop has not reached capacity by that time. For details on nominations rules and admission policies, see below. For full-level academic sponsors, SLMath provides support for two students per summer; a third will be supported if at least one nominee is a woman or is a U.S. citizen/Permanent Resident from a group that is underrepresented in the mathematical sciences (URM). SLMath will support four students from a full-level academic sponsor if at least one nominee is woman and at least one other nominee is a URM. For mid-level academic sponsors, SLMath provides support for one student per summer and a second if at least one of them is woman or is a US citizen/US Permanent Resident from a group that is underrepresented in the mathematical sciences (URM). SLMath will support three students from a mid-level academic sponsor if at least one nominee is woman and at least one other nominee is a URM. For entry-level academic sponsors and other U.S. institutions, SLMath provides support for one student per summer. SLMath will provide support for two students to entry-level academic sponsors if one student is woman and the other is a US citizen/US Permanent Resident from a group that is underrepresented in the mathematical sciences (URM). SLMath has a strong commitment to broadening participation in the mathematical sciences. Groups underrepresented in the field per National Science Foundation (NSF) guidelines include women, persons with disabilities, and three racial and ethnic groups: Blacks or African Americans, Hispanics or Latinos, and American Indians or Alaska Natives (Native Americans).
2023-03-27 00:39:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2203240990638733, "perplexity": 5837.664123084912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00045.warc.gz"}
https://www.zora.uzh.ch/id/eprint/107902/
# Study of the kinematic dependences of $\Lambda_b^0$ production in $pp$ collisions and a measurement of the $\Lambda_b^0 \rightarrow \Lambda_c^+\pi^-$ branching fraction

LHCb Collaboration; Bernet, R; Müller, K; Steinkamp, O; Straumann, U; Serra, N; Vollhardt, A; et al (2014). Study of the kinematic dependences of $\Lambda_b^0$ production in $pp$ collisions and a measurement of the $\Lambda_b^0 \rightarrow \Lambda_c^+\pi^-$ branching fraction. Journal of High Energy Physics, 2014(143):online.

## Abstract

The kinematic dependences of the relative production rates, $f_{\Lambda_b^0}/f_d$, of $\Lambda_b^0$ baryons and $\bar{B}^0$ mesons are measured using $\Lambda_b^0 \rightarrow \Lambda_c^+ \pi^-$ and $\bar{B}^0 \rightarrow D^+ \pi^-$ decays. The measurements use proton-proton collision data, corresponding to an integrated luminosity of 1 fb$^{-1}$ at a centre-of-mass energy of 7 TeV, recorded in the forward region with the LHCb experiment. The relative production rates are observed to depend on the transverse momentum, $p_T$, and pseudorapidity, $\eta$, of the beauty hadron, in the studied kinematic region $1.5 < p_T < 40$ GeV/$c$ and $2 < \eta < 5$. Using a previous LHCb measurement of $f_{\Lambda_b^0}/f_d$ in semileptonic decays, the branching fraction $\mathcal{B}(\Lambda_b^0 \rightarrow \Lambda_c^+ \pi^-) = \Big( 4.30 \pm 0.03 \,\, ^{+0.12}_{-0.11} \pm 0.26 \pm 0.21 \Big) \times 10^{-3}$ is obtained, where the first uncertainty is statistical, the second is systematic, the third is from the previous LHCb measurement of $f_{\Lambda_b^0}/f_d$ and the fourth is due to the $\bar{B}^0 \rightarrow D^+ \pi^-$ branching fraction. This is the most precise measurement of a $\Lambda_b^0$ branching fraction to date.

## Statistics

36 citations in Web of Science®, 37 citations in Scopus®.

Item Type: Journal Article, refereed, original work. 07 Faculty of Science > Physics Institute. 530 Physics. Physical Sciences > Nuclear and High Energy Physics. Language: English. Publisher: Springer. ISSN: 1029-8479. Open access: Gold. https://doi.org/10.1007/JHEP08(2014)143
2020-10-26 21:53:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7939643859863281, "perplexity": 2030.9038493346284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107892062.70/warc/CC-MAIN-20201026204531-20201026234531-00125.warc.gz"}
http://continuummechanics.org/homework/HW5_Solutions.html
Homework #5 Solutions Feel free to use http://www.continuummechanics.org/interactivecalcs.html when applicable. 1. Show that $\epsilon_{ijk} \, \epsilon_{ijk} = 6$ Use the epsilon-delta identity. $\begin{eqnarray} \epsilon_{ijk} \, \epsilon_{ijk} & = & \delta_{jj} \delta_{kk} - \delta_{jk} \delta_{jk} \\ \\ \\ & = & 3 * 3 - \delta_{kk} \\ \\ \\ & = & 9 - 3 \\ \\ \\ & = & 6 \end{eqnarray}$ 2. Given the stress tensor $\boldsymbol{\sigma} = \left[ \matrix{ 10 & 20 & 30 \\ 20 & 40 & 50 \\ 30 & 50 & 60 } \right] \qquad$ One of the two stress tensors below is equivalent to the one above, differing only by a coordinate transformation. The other one represents a different stress state. Which is equivalent and which is different? $\left[ \matrix{ 4.0341 & 27.291 & 14.519 \\ 27.291 & 76.619 & 46.048 \\ 14.519 & 46.048 & 29.347 } \right] \qquad \qquad \qquad \left[ \matrix{ \;\;\;30.597 & -5.733 & -15.201 \\ \;-5.733 & \;41.305 & \;\;\;18.926 \\ -15.201 & \;18.926 & \;\;\;48.098 } \right]$ The way to tackle this question is to compare principal stresses. The principal values of the given stress tensor are The principal values of the 1st candidate tensor are The above image shows that it is this first candidate stress tensor that is equivalent to the given stress tensor. The fact that the principal values are in a different order only means that the software found principal orientations for the two tensors that are 90° apart. This is nothing but a coordinate transformation. The principal values of the 2nd candidate stress tensor are: 20, 30, 70. Therefore, this 2nd candidate is not equivalent to the original stress tensor. 3. With the same stress tensor $\boldsymbol{\sigma} = \left[ \matrix{ 10 & 20 & 30 \\ 20 & 40 & 50 \\ 30 & 50 & 60 } \right]$ Use $$E = 100$$ and $$\nu = 0.333$$ (metals) and demonstrate that the deviatoric stress is proportional to the deviatoric strain. The hydrostatic stress is (10 + 40 + 60) / 3 = 36.67. Subtracting this from the stress tensor gives the deviatoric stress. $\boldsymbol{\sigma}' = \left[ \matrix{ -26.67 & 20 & 30 \\ \;\;\;20 & 3.33 & 50 \\ \;\;\;30 & 50 & 23.33 } \right]$ Hooke's Law is $\begin{eqnarray} \left[ \matrix{ \epsilon_{xx} & \epsilon_{xy} & \epsilon_{xz} \\ \epsilon_{xy} & \epsilon_{yy} & \epsilon_{yz} \\ \epsilon_{xz} & \epsilon_{yz} & \epsilon_{zz} } \right] & = & {1 \over 100} \left\{ (1 + 0.333) \left[ \matrix{ 10 & 20 & 30 \\ 20 & 40 & 50 \\ 30 & 50 & 60 } \right] - (0.33) \left[ \matrix { 110 & 0 & 0 \\ 0 & 110 & 0 \\ 0 & 0 & 110 } \right] \right\} \\ \\ \\ \\ & = & {1 \over 100} \left\{ \left[ \matrix{ 13.33 & 26.67 & 40 \\ 26.67 & 53.33 & 66.67 \\ 40 & 66.67 & 80 } \right] - \left[ \matrix{ 36.67 & 0 & 0 \\ 0 & 36.67 & 0 \\ 0 & 0 & 36.67 } \right] \right\} \\ \\ \\ \\ & = & \left[ \matrix{ -0.233 & \;\;0.267 & \;0.400 \\ \;\;0.267 & 0.167 & \;0.667 \\ \;\;0.400 & \;\;0.667 & \;0.433 } \right] \end{eqnarray}$ And the hydrostatic part of this is $$(-0.233 + 0.167 + 0.433)/3 = 0.122$$. Subtracting this from the strain tensor gives the deviatoric strain $\boldsymbol{\epsilon}' = \left[ \matrix{ -0.355 & 0.267 & \;0.400 \\ \;\;0.267 & 0.045 & \;0.667 \\ \;\;0.400 & 0.667 & \;0.311 } \right]$ Will ($$2 \, G \, \boldsymbol{\epsilon}'$$) give $$\boldsymbol{\sigma}'$$? To find out, compute $$G$$. 
$G \; = \; {E \over 2 ( 1 + \nu) } \; = \; {100 \text{ MPa} \over 2 ( 1 + 0.333) } \; = \; 37.5 \text{ MPa}$ So $$2 \, G \, \boldsymbol{\epsilon}'$$ equals $2 \, G \, \boldsymbol{\epsilon}' = \left[ \matrix{ -26.6 & 20.0 & 30.0 \\ \;\;20.0 & 3.33 & 50.0 \\ \;\;30.0 & 50.0 & 23.3 } \right]$ This is indeed the deviatoric stress tensor computed above.
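A quick numerical cross-check of problems 2 and 3 (a sketch only, not part of the original solutions; it assumes the Eigen linear-algebra library is available):

#include <Eigen/Dense>
#include <iostream>

int main ()
{
    Eigen::Matrix3d sigma, cand1;
    sigma << 10, 20, 30,
             20, 40, 50,
             30, 50, 60;
    cand1 <<  4.0341, 27.291, 14.519,
             27.291, 76.619, 46.048,
             14.519, 46.048, 29.347;

    // Problem 2: equivalent stress states share the same principal values
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(sigma), ec(cand1);
    std::cout << "principal values of sigma : " << es.eigenvalues().transpose() << "\n";
    std::cout << "principal values of cand 1: " << ec.eigenvalues().transpose() << "\n";

    // Problem 3: the deviatoric stress should equal 2 G times the deviatoric strain
    double E = 100.0, nu = 0.333, G = E / (2.0 * (1.0 + nu));
    Eigen::Matrix3d I = Eigen::Matrix3d::Identity();
    Eigen::Matrix3d eps = ((1.0 + nu) * sigma - nu * sigma.trace() * I) / E;   // Hooke's law
    Eigen::Matrix3d sigmaDev = sigma - sigma.trace() / 3.0 * I;                // subtract hydrostatic part
    Eigen::Matrix3d epsDev   = eps   - eps.trace()   / 3.0 * I;
    std::cout << "max |sigma' - 2G eps'| = "
              << (sigmaDev - 2.0 * G * epsDev).cwiseAbs().maxCoeff() << "\n";
}

The two sets of principal values should agree to within the rounding of the candidate tensor's entries, and the last line should print a value near zero, confirming that the deviatoric stress is proportional to the deviatoric strain.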
2017-01-22 20:20:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9347351789474487, "perplexity": 663.2403250546737}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00186-ip-10-171-10-70.ec2.internal.warc.gz"}
https://en.xen.wiki/w/TE_tuning
# Tenney-Euclidean tuning (Redirected from TE tuning) Tenney-Euclidean tuning (TE tuning), also known as TOP-RMS tuning, is a tuning technique for regular temperaments which leads to the least sum of squared errors of the Tenney-weighted basis. If we have k linearly independent vals of dimension n, they will span a subspace of tuning space. This subspace defines a regular temperament of rank k in the prime limit p, where p is the n-th prime. Similarly, starting from n - k independent commas for the same regular temperament, the corresponding monzos span an n - k dimensional subspace of interval space. Both the subspace of tuning space and the subspace of interval space characterize the temperament completely. A question then arises as to how to choose a specific tuning for this temperament, which is the same as asking how to choose a point (vector) in this subspace of tuning space which provides a good tuning. One answer to this is the weighted RMS (root mean squared) tuning discussed right here. TE tuning can be viewed as a variant of TOP tuning since it employs the TE norm in place of the Tenney height as in TOP tuning. Just as TOP tuning minimizes the maximum Tenney-weighted L1 error of any interval, TE tuning minimizes the maximum Tenney-weighted L2 error of any interval. ## Definition If we put the weighted Euclidean metric on tuning space, leading to TE tuning space in weighted coordinates, it is easy to find the nearest point in the subspace to the JIP 1 1 … 1], and this closest point will define a tuning map which is called TE tuning, a tuning which has been extensively studied by Graham Breed. We may also keep unweighted coordinates and use the TE norm on tuning space; in these coordinates the JI point is 1 log23 … log2p]. The two approaches are equivalent. In more pragmatic terms, suppose W is the weighting matrix. For the prime basis Q = 2 3 5 … p], $W = \operatorname {diag} (1/\log_2 (Q))$ If A is the mapping of the abstract temperament whose rows are (not necessarily independent) vals, then V = AW is the mapping in the weighted space. If J0 is the row vector of targeted JI intervals (i.e. the JIP), then J = J0W is the JI intervals in the weighted space, in the case of Tenney-weighting it is 1 1 … 1]. Let us also denote the row vector of TE generators G. TE tuning then defines a least squares problem of the following overdetermined linear equation system: $GV = J$ The system simply says that the sum of vkl steps of generator gk for all k's should equal the l-th targeted JI interval jl. There are a number of methods to solve least squares problems. One common way is to use the Moore–Penrose pseudoinverse. ## Computation using pseudoinverse The Moore–Penrose pseudoinverse, denoted A+ in this article, is a generalization of the inverse matrix with which it shares a lot of properties. To name a few: • If A is invertible, its inverse is A+ • If A has rational entries, so does A+ • (A+)+ = A • (AT)+ = (A+)T, where AT is the transpose of A • AA+ is the orthogonal projection matrix that maps onto the space spanned by the columns of A • A+A is the orthogonal projection matrix that maps onto the space spanned by the rows of A • I - A+A, where I is the identity matrix, is the orthogonal projection matrix that maps onto the kernel, or null space, of A • If the rows of A are linearly independent, then A+ = AT(AAT)-1. This means the pseudoinverse can be found in this important special case by people who don't have a pseudoinverse routine available by using a matrix inverse routine. 
• uA+ is the nearest point to u in the subspace spanned by the rows of A; A+v is the nearest point to v in the space spanned by the columns of A. In the pseudoinverse method, the (not necessarily independent) TE generators which correspond to the rows of V are given by $G = JV^+$ Applying the weighted val list to the generators, The TE tuning map is given by $T = GV = JV^+V$ We may also obtain the TE tuning from a projection matrix. P = V+V is the orthogonal projection matrix that maps onto the space spanned by the rows of V. This space corresponds to the temperament, and so does P. However, P is independent of how the temperament is defined; it does not depend on whether the vals are linearly independent, how many of them there are, or whether contorsion has been removed. The tuning map giving the tuning of each prime number is found by multiplying by the JI map: JP where J is the JI map, which is the nearest point in the subspace corresponding to the temperament to J. We may find the same projection matrix starting from a list of weighted monzos rather than vals. If M is a rank n matrix whose columns are weighted monzos, and I is the n×n identity matrix, then P = I - MM+ is the same projection matrix as V+V so long as the temperament defined by the vals is the same as the temperament defined by the monzos. Again, it is irrelevant if the monzos are independent or how many of them there are. ## Enforcement ### Pure-octaves TE tuning We may call pure-octaves Tenney-Euclidean tuning the POTE tuning. If T = JP = GV is the TE tuning map, then a corresponding pure-octaves map can be found by scalar multiplication, T/T1, where T1, the first entry, is the tuning of 2. The justification for this is that T does not only define a point, but a line through the origin lying in the subspace defining the temperament, or in other words, a point in the linear subspace of projective space corresponding to the temperament, and hence is a projective object. Another way to say this is that T defines not only the closest point to J, but the closest direction in terms of angular measure between the line through T and the line through J. ### Constrained TE tuning Main article: CTE tuning Another way to enforce the pure octave is by adding the constraint before the optimization process. This is the CTE tuning. The result, under the constraint of pure octaves, remains TE optimal. ## Otherwise normed tunings ### Frobenius tuning and Frobenius projection matrix We may also do the same things starting from nonweighted vals. This leads to a different tuning, the Frobenius tuning, which is perfectly functional but has less theoretical justification than TE tuning. However, if greater weight needs to be attached to the larger primes than TE tuning attaches, Frobenius tuning may be preferred; people who feel that larger primes require more tuning care than smaller ones may well prefer it. The list of Frobenius generators, GF, is given by: $G_\text{F} = J_0 A^+$ where J0 is the nonweighted JIP and A is the nonweighted mapping. The Frobenius tuning map, TF, is given by: $T_\text{F} = G_\text{F} A = J_0 A^+A$ However, the main value of unweighted vals is that the pseudoinverse and projection matrix have rational entries, so that the rows of the matrix are fractional monzos. The Frobenius projection matrix therefore, like the wedgie, defines a completely canonical object not depending on any arbitrary definition (e.g. 
how Hermite normal form or LLL reduction is specifically defined) which corresponds one-to-one with temperaments, and which does not depend on whether the monzos or vals from which it is computed are saturated. It may be found starting either from a set of vals or a set of commas, since if Q is the projection matrix found by treating monzos in the same way as vals, P = I - Q is the same projection matrix as would be found if starting from a set of vals defining the same temperament. Spelling this out, if A is a matrix whose rows are vals, then P = A+A is a positive-semidefinite symmetric matrix with rational matrix entries, which exactly specifies the regular temperament defined by the vals of A. If B is a matrix with columns of monzos which spans the subspace of interval space containing the commas, then this same matrix P is given by I - BB+. If the vals defining A are linearly independent, then P = AT(AAT)-1A. If the columns of B are independent, then we likewise have P = I - B(BTB)-1BT. ### Benedetti-Euclidean tuning Benedetti-Euclidean tuning (BE tuning) adopts the Benedetti weight in place of Tenney weight, based on the dual norm of Benedetti height. For Q = 2 3 5 …], the weighting matrix has the form $W = \operatorname {diag} (1/Q)$ ## Examples For practical helps, see POTE tuning. The val for 5-limit 12et is 12 19 28]. In weighted coordinates, that becomes v12 = 12 19/log23 28/log25] ~ 12.0 11.988 12.059]. If we take this to be a 1×3 matrix and take the pseudoinverse, we get the 3×1 matrix v12+ ~ [[0.027706 0.027677 0.027842]. Then P = v12+v12 is a projection matrix that maps onto the one-dimensional subspace whose single basis vector is v12. We find that v12P equals v12; on the other hand, if we take the monzo for 81/80, which is [-4 4 -1; and monzo-weight it to [-4 4log23 -log25 and multiply (either side, the matrix is symmetric) by P, we get the zero vector, corresponding to the unison. Now consider pajara, the 7-limit temperament tempering out both 50/49 and 64/63. Two possible equal temperament tunings for pajara are 12edo and 22edo. We may define a 2×4 matrix with rows equal to the vals for 12, and 22; in weighted coordinates this would be V ~ [12 11.988 12.059 12.111], 22 22.083 21.965 22.085]] Then the pseudoinverse V+ is approximately [-1.81029 1.00052] [-6.68250 3.66285] [4.83496 -2.63063] [3.67652 -1.99757] which we may also write as [[-1.81029 -6.68250 4.83496 3.67652, [1.00052 3.66285 -2.63063 -1.99757] Paj = V+V is a 4×4 symmetrical matrix which projects weighted vals in TE tuning space, or weighted monzos in TE interval space, to a subspace defined by pajara. It therefore projects the weighted monzos for 50/49, 64/63, 225/224, 2048/2025 etc. to the zero vector, whereas it leaves pajara vals such as 10edo in weighted coordinates unchanged. If we use unweighted coordinates we get the Frobenius projection matrix instead, whose rows are fractional monzos. The unweighted pseudoinverse u12+ of the 5-limit val u12 for 12 equal is the column matrix u12T/1289; that is, the 1×3 matrix with column [12/1289 19/1289 28/1289. Then u12+u12 is the 3×3 Frobenius projection matrix F: [144 228 336], 228 361 532], 336 532 784]]/1289 Multiplying 12 19 28] by F gives 12 19 28] again. Multiplying the monzos for 81/80, 128/125, 648/625 etc. gives the zero monzo, corresponding to a unison. Multiplying the val for 5-limit 19 equal, 19 30 44], by F gives 24360 38570 56840]/1289, which is approximately the 19 equal val. 
Multiplying the 5-limit monzo for 3/2, which is [-1 1 0⟩, times F gives the fractional monzo corresponding to (2^84 3^133 5^196)^(1/1289), which equates to 698.121 cents, the tempering of 3/2 in Frobenius tuning for 5-limit 12et, the tuning with octave defined by the top row of F, which is to say by [1 0 0⟩F, of 1196.778 cents. We can do the same thing with a matrix U with rows consisting of the vals for 7-limit 12 and 22 equal; then U+U, the Frobenius projection matrix for pajara, is [[36 92 14 32⟩, [92 269 -32 14⟩, [14 -32 141 148⟩, [32 14 148 64⟩]/305. This sends monzos for 50/49, 64/63 etc. to the unison monzo, and vals for 10et, 12et and 22et to themselves.
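For readers who want to reproduce the pajara numbers above, here is a small sketch of the pseudoinverse computation (an illustration only; it assumes the Eigen C++ library, version 3.3 or later for completeOrthogonalDecomposition, and uses the unweighted 7-limit vals ⟨12 19 28 34] and ⟨22 35 51 62], which reproduce the weighted matrix V quoted earlier):

#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main ()
{
    const double primes[4] = {2, 3, 5, 7};

    Eigen::MatrixXd A(2, 4);             // unweighted vals for 12et and 22et
    A << 12, 19, 28, 34,
         22, 35, 51, 62;

    Eigen::MatrixXd W = Eigen::MatrixXd::Zero(4, 4);
    Eigen::RowVectorXd J(4);             // weighted JIP: 1 1 1 1]
    for (int i = 0; i < 4; ++i) {
        W(i, i) = 1.0 / std::log2(primes[i]);    // Tenney weighting
        J(i) = 1.0;
    }

    Eigen::MatrixXd V = A * W;                                    // weighted mapping
    Eigen::MatrixXd Vplus = V.completeOrthogonalDecomposition().pseudoInverse();
    Eigen::RowVectorXd G = J * Vplus;                             // TE generators, G = J V+
    Eigen::RowVectorXd T = G * V;                                 // TE tuning map, T = J V+ V (weighted)

    for (int i = 0; i < 4; ++i)                                   // unweight and convert to cents
        std::cout << "prime " << primes[i] << ": "
                  << T(i) * std::log2(primes[i]) * 1200.0 << " cents\n";
}

The printed values are the TE tunings of the primes 2, 3, 5 and 7 for pajara. The Frobenius tuning described above is obtained the same way, but with the unweighted mapping A and the unweighted JIP 1 log2(3) log2(5) log2(7)] in place of V and J, and with no re-weighting at the end.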
2022-12-03 18:59:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8561545014381409, "perplexity": 1442.6201704294883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710936.10/warc/CC-MAIN-20221203175958-20221203205958-00430.warc.gz"}
https://2dresearch.com/2019/03/15/crossover-from-ballistic-to-diffusive-thermal-transport-in-suspended-graphene-membranes/
We report heat transport measurements on suspended single-layer graphene disks with radius of 150–1600 nm using a high-vacuum scanning thermal microscope. The results of this study revealed a radius-dependent thermal contact resistance between tip and graphene, with values between 1.15 and 1.52 × 10^8 K W^−1. The observed scaling of thermal resistance with radius is interpreted in terms of ballistic phonon transport in suspended graphene discs with radius smaller than 775 nm. In larger suspended graphene discs (radius > 775 nm), the thermal resistance increases with radius, which is attributed to in-plane heat transport being limited by phonon–phonon resistive scattering processes, which resulted in a transition from ballistic to diffusive thermal transport. In addition, by simultaneously mapping topography and steady-state heat flux signals between a self-heated scanning probe sensor and graphene with 17 nm thermal spatial resolution, we demonstrated tha… Published in: "2DMaterials".
2019-03-25 02:44:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8402950763702393, "perplexity": 2558.554213871003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203547.62/warc/CC-MAIN-20190325010547-20190325032547-00111.warc.gz"}
https://publications.hse.ru/articles/page2.html?search=924dd188185420b797da2a24ab149b7e
Found 125 publications

Article: Gavrylenko P., Marshakov A. Journal of High Energy Physics. 2016. Vol. 2016. No. 2. P. 1-31. We consider the conformal blocks in the theories with extended conformal W-symmetry for the integer Virasoro central charges. We show that these blocks for the generalized twist fields on sphere can be computed exactly in terms of the free field theory on the covering Riemann surface, even for a non-abelian monodromy group. The generalized twist fields are identified with particular primary fields of the W-algebra, and we propose a straightforward way to compute their W-charges. We demonstrate how these exact conformal blocks can be effectively computed using the technique arisen from the gauge theory/CFT correspondence. We discuss also their direct relation with the isomonodromic tau-function for the quasipermutation monodromy data, which can be an encouraging step on the way of definition of generic conformal blocks for W-algebra using the isomonodromy/CFT correspondence. Added: 14 June 2016

Article: Bershtein M., Bonelli G., Tanzini A. et al. Journal of High Energy Physics. 2016. No. 723. P. 39. Added: 6 October 2016

Article: Semenov-Tian-Shansky Kirill M., Polyakov M. V., Sokolova N. et al. Journal of High Energy Physics. 2019. Vol. 04. No. 007. P. 1-22. Added: 29 September 2020

Article: Linzen J., Polyakov M., Semenov-Tian-Shansky K. et al. Journal of High Energy Physics. 2021. Vol. 05. Added: 3 June 2021

Article: Abzalov A., Bakhmatov I., Musaev E. Journal of High Energy Physics. 2015. No. 6, Article number 88. We construct Exceptional Field Theory for the group SO(5, 5) based on the extended (6+16)-dimensional spacetime, which after reduction gives the maximal D = 6 supergravity. We present both a true action and a duality-invariant pseudo-action formulations. All the fields of the theory depend on the complete extended spacetime. The U-duality group SO(5, 5) is made a geometric symmetry of the theory by virtue of introducing the generalised Lie derivative that incorporates a duality transformation. Tensor hierarchy appears as a natural consequence of the algebra of generalised Lie derivatives that are viewed as gauge transformations. Upon truncating different subsets of the extra coordinates, maximal supergravities in D = 11 and D = 10 (type IIB) can be recovered from this theory. © 2015, The Author(s). Added: 7 September 2015

Article: Nekrasov N., Marshakov A. Journal of High Energy Physics. 2007. No. 0701. P. 104. Added: 18 October 2012

Article: Musaev E. Journal of High Energy Physics. 2015. Vol. 2015. No. 3. Added: 18 March 2015

Article: Аушев Т. А., Mizuk R., Pakhlov P. et al. Journal of High Energy Physics. 2019. Vol. 2019. No. 10. P. 1-31. Added: 28 October 2020

Article: Ratnikov F., Баранов А. С., Borisyak M. et al. Journal of High Energy Physics. 2018. Vol. 2018. P. 1-31. Added: 4 June 2018

Article: Derkach D., Aaij R. Journal of High Energy Physics. 2013. Vol. 183. P. 1310. Added: 9 July 2015

Article: Gamayun O., Losev A. S., Marshakov A. Journal of High Energy Physics. 2009. No. 0909(028). P. 1-13. Added: 18 October 2012

Article: Gamayun O., Marshakov A. Journal of High Energy Physics. 2009. No. 0909(065). P. 1-22. Added: 18 October 2012

Article: Аушев Т. А., Mizuk R., Pakhlov P. et al. Journal of High Energy Physics. 2020. Vol. 2020. No. 05. P. 1-26. Added: 28 October 2020

Article: Musaev E.
Journal of High Energy Physics. 2013. Vol. 1305. Added: 20 October 2014

Article: Grekov A., Sechin I., Zotov A. Journal of High Energy Physics. 2019. Vol. 10. No. 081. P. 1-32. Added: 15 October 2019

Article: Belavin V., Geiko R. Journal of High Energy Physics. 2017. Vol. 125. P. 1-13. We continue to investigate the dual description of the Virasoro conformal blocks arising in the framework of the classical limit of the AdS3/CFT2 correspondence. To give such an interpretation in previous studies, certain restrictions were necessary. Our goal here is to consider a more general situation available through the worldline approximation to the dual AdS gravity. Namely, we are interested in computing the spherical conformal blocks without the previously imposed restrictions on the conformal dimensions of the internal channels. The duality is realised as an equality of the so-called heavy-light limit of the n-point conformal block and the action of n−2 particles propagating in some AdS-like background with either a conical singularity or a BTZ black hole. We describe a procedure that allows relaxing the constraint on the intermediate channels. To obtain an explicit expression for the conformal block on the CFT side, we use a recently proposed recursion procedure and find full agreement between the results of the boundary and bulk computations. Added: 31 August 2017

Article: Koile E., Kovensky N., Schvellinger M. Journal of High Energy Physics. 2015. Vol. 2015. No. 1. P. 1-48. Added: 31 October 2019

Article: Bershtein M., Белавин А. А., Белавин В. А. Journal of High Energy Physics. 2011. Vol. 9. No. 117. Added: 23 October 2014

Article: Mironov A., Morozov A., Natanzon S. M. Journal of High Energy Physics. 2011. No. 11(097). P. 1-31. Correlators in topological theories are given by the values of a linear form on the products of operators from a commutative associative algebra (CAA). As a corollary, partition functions of topological theory always satisfy the generalized WDVV equations of. We consider the Hurwitz partition functions, associated in this way with the CAA of cut-and-join operators. The ordinary Hurwitz numbers for a given number of sheets in the covering provide trivial (sums of exponentials) solutions to the WDVV equations, with finite number of time-variables. The generalized Hurwitz numbers from provide a non-trivial solution with infinite number of times. The simplest solution of this type is associated with a subring, generated by the dilatation operators Ŵ1 = tr D = tr X∂/∂X. Added: 12 October 2012

Article: Akhmedov E. Journal of High Energy Physics. 2012. Vol. 1201. P. 066. We explicitly show that the one loop IR correction to the two--point function in de Sitter space scalar QFT does not reduce just to the mass renormalization. The proper interpretation of the loop corrections is via particle creation revealing itself through the generation of the quantum averages $<a^+_p a_p>$, $<a_p a_{-p}>$ and $<a^+_p a^+_{-p}>$, which slowly change in time. We show that this observation in particular means that loop corrections to correlation functions in de Sitter space can not be obtained via analytical continuation of those calculated on the sphere. We find harmonics for which the particle number $<a^+_p a_p>$ dominates over the anomalous expectation values $<a_p a_{-p}>$ and $<a^+_p a^+_{-p}>$. For these harmonics the Dyson--Schwinger equation reduces in the IR limit to the kinetic equation.
We solve the latter equation, which allows us to sum up all loop leading IR contributions to the Wightman function. We perform the calculation for the principal series real scalar fields both in expanding and contracting Poincare patches. Added: 17 February 2013

Article: Gavrylenko P. Journal of High Energy Physics. 2015. No. 09. P. 167. Added: 9 October 2015
2021-10-17 20:22:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8663548827171326, "perplexity": 1321.8256029057547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585181.6/warc/CC-MAIN-20211017175237-20211017205237-00275.warc.gz"}
https://www.physicsforums.com/threads/derivation-of-relativistic-acceleration-and-momentum.233264/
# Derivation of relativistic acceleration and momentum 1. May 5, 2008 ### phys23 Dear all, could anyone please show the full derivation of relativistic acceleration and momentum. Many thanks n happy eqtns R 2. May 5, 2008 ### Hootenanny Staff Emeritus Welcome to PF, Have you tried searching the internet? 3. May 5, 2008 ### gamesguru In relativity, mass is dependent of velocity such that, $$m=\gamma m_0=\frac{m_0}{\sqrt{1-v^2/c^2}}$$. $m_0[/tex] is the mass of the object at rest, [itex]c$ is the 299 792 458 m/s. Most equations still hold true in relativity, the major exception being F=ma. The following are still true: $$p=mv, F=p', a=v', v=x'.$$ Using these, we easily find that, $$p=\frac{m_0v}{\sqrt{1-v^2/c^2}}$$ and $$F=p'=(mv)'=m'v+v'm$$. Now we need to express m' in terms of only v. $$m'=\left(\frac{m_0}{\sqrt{1-v^2/c^2}}\right)'=\frac{-1/2m_0}{(1-v^2/c^2)^{3/2}}(-2v/c^2)(v')=v\frac{m_0v'}{c^2(1-v^2/c^2)^{3/2}}$$. Combining this with the above equation for force, $$F=v^2\frac{m_0v'}{c^2(1-v^2/c^2)^{3/2}}+\frac{m_0v'}{\sqrt{1-v^2/c^2}}$$. Now you can just factor and solve for $a, v'$. 4. May 5, 2008 ### clem You result is for parallel to v. With vectors, there are other terms. 5. Jul 13, 2009 ### genesis1 Newton's second law F/m = a Also recall a = dv/dt Also force is related to relativistic momentum by F = dp/dt Relativistic momentum is defined by p = mv(1 - v^2/c^2)^-.5 You need to use implicit derivation to take the derivative of this with respect to t. Thus you should have dp/dt and dv/dt term. Once you are finished getting the derivative and combining terms you should end up with dv/dt = F(1-v^2/c^2)^3/2 /m 6. Jul 13, 2009 ### Cusp F=ma does work in both SR and GR as long as you are using the 4-vector (tensorial) version. 7. Jul 13, 2009 ### clem That result is only valid for a parallel to v. 8. Jul 14, 2009 ### vin300 Yes, the actual equation is (gamma)ma=(F-F.v/c)v/c as i mentioned in the new thread. What im waiting for is its dervn 9. Jul 14, 2009 ### gamesguru Use the fact that $$v\frac{dv}{dt}=\vec{v}\cdot \vec{a}$$ 10. Jul 14, 2009 ### vin300 do it 11. Jul 14, 2009 ### gamesguru If you carry out the same calculations I did in the first post I made in this thread, but use vectors, you get this result (you can do it yourself, it's very easy, esp. since I already did it): $$\vec{F}=m_o \vec{v}\frac{d\gamma}{dt}+m_o\gamma\frac{d\vec{v}}{dt}$$ and using my above post, $$\frac{d\gamma}{dt}=\gamma^3\frac{|\vec{a}||\vec{v}|}{c^2}$$, $$\vec{F}=m_o\gamma^3\vec{v}\frac{|\vec{a}||\vec{v}|}{c^2}+m_o\gamma \vec{a}=m_o\gamma^3\vec{v}\frac{\vec{a}\cdot \vec{v}}{c^2}+m_o\gamma \vec{a}$$ So, $$m_o\gamma\vec{a}=\vec{F}-[(m_o\gamma^3\vec{a})\cdot\frac{\vec{v}}{c}]\frac{\vec{v}}{c}$$. According to your author, the following must be true, $$m_o\gamma^3\vec{a}=\vec{F}$$. This suggests relativistic mass does not exist and that this author goes against mainstream theory. The only guy I know who goes against this is Levvy. We don't like to trust that guy around here. (Read: http://en.wikipedia.org/wiki/Mass_in_special_relativity#Controversy) I wouldn't trust this author if I were you, only if you assume relativistic mass does not exist do you get the result you posted. 12. Jul 14, 2009 ### clem As I just posted in the other thread, dotting your m\gamma a equation with v shows the result v.F=m\gamma^3(v.a). Either you or I are confused about what "mainstream theory" is. 13. 
Nov 19, 2011 ### LorentzWasWro So I've been told that the speed of light remains constant despite length contraction and time dilation because they each decrease proportionately. So a velocity of 4 meters/2 seconds in a relativistic scenario might see a halving of values of length and time so it may only traverse 2 meters but only 1 second of time elapses. My question is what happens with acceleration if it is measured in distance per time-squared... Shouldn't it decrease proportionately to square root-t? If time dilates, shouldn't time-squared dilate even more? Or is this too linear of an approach? 14. Nov 19, 2011 ### juanrga $$a \equiv \frac{d^2x}{dt^2}$$ $$p \equiv \frac{\partial L}{\partial v}$$ with $L$ the relativistic Lagrangian and $v$ the velocity
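To tie the different posts together: starting from $\vec F = \frac{d}{dt}\left(\gamma m_0 \vec v\right)$ and splitting the acceleration into components parallel and perpendicular to $\vec v$, one gets the standard textbook result

$$\vec F_\parallel = \gamma^3 m_0\, \vec a_\parallel , \qquad \vec F_\perp = \gamma m_0\, \vec a_\perp ,$$

so the one-dimensional formula $\frac{dv}{dt} = \frac{F}{m_0}\left(1 - v^2/c^2\right)^{3/2}$ quoted earlier is the special case of a force parallel to the motion, which is exactly the caveat raised in the thread about the transverse terms.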
2016-10-27 17:17:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6599692106246948, "perplexity": 2234.162234366948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721355.10/warc/CC-MAIN-20161020183841-00393-ip-10-171-6-4.ec2.internal.warc.gz"}
http://physics.stackexchange.com/tags/collision/hot?filter=year
# Tag Info 148 Things are not empty space. Our classical intuition fails at the quantum level. Matter does not pass through other matter mainly due to the Pauli exclusion principle and due to the electromagnetic repulsion of the electrons. The closer you bring two atoms, i.e. the more the areas of non-zero expectation for their electrons overlap, the stronger will the ... 48 Amazingly this actually happened to a Russian scientist called Anatoli Bugorski (WARNING: this is pretty gruesome). The beam basically just killed all the tissue it passed through. The symptoms were the relatively mundane ones expected from tissue death. The LHC has a much, much greater energy than the one that struck Bugorski, so it would cause a lot more ... 44 Is it even possible to hit 350Gs of force to a hard drive? Sure is. Drop it on the floor. You are thinking about sustained forces. 350g sustained won't happen even in rocket launches. But momentary forces can easily peak at this level. Note that the G limit on the drive is for when it's not running. No spinning drive will like 350g, except maybe in ... 34 A charged particle will create charge separation (ionization) along its path. This will cause harmful chemical reactions to occur in the body, including DNA damage. The effects of these chemical reactions depend on their amount. The body can heal from a low amount on its own, while a high amount will cause radiation sickness and probably death. This can be ... 30 Look at it this way: Suppose you are in a train travelling at 10 m/s. Somebody inside the train throws a ball at you in the opposite direction at 10 m/s. You feel the pain belonging to your first experiment. However, somebody looking at this experiment from outside the train would say that the ball is standing still and you are travelling towards the ball ... 20 It is a standard exercise in quantum electrodynamics to find the angular dependence of the differential cross section. Which more or less means how probable it is for the photons to scatter at a certain angle, given the energy of the incident particles. So assuming the spins of the electron-positron pair is averaged, and that you don't care about the photon ... 16 Here's an application where an ability to withstand high shock is important. Explosions. In the mid 1980s I did work for a mining company's research laboratory (BHP Research, now defunct like all Australian corporate research). We would lower data-logging computers into boreholes to set up a grid of dataloggers, then detonate a charge of known energy at a ... 14 Elastic collisions do happen at the LHC. The TOTEM experiment measures the differential cross section (rate as a function of angle) for proton-proton elastic scattering at the LHC. Here is their latest result. They don't publish an estimate of the elastic cross section, but according to their data it must be at least 25 mb (millibarns) (my first version of ... 14 We do. The LHC accelerates two protons, each with 3.5 TeV of energy, giving a total of 7 TeV in the CoM frame (The energies are from the initial phase of the previous LHC run. Later in the run this was increased to 8 TeV and the combination of the two dataset was what discovered the Higgs boson. The energies are roughly doubling now for Run II, to 13 TeV). ... 13 Nothing happens obviously, when one high energy particle penetrates flesh as cosmic rays continuously impinge on us and some have the energies of the LHC. 
The cosmic rays reaching us are mainly muons and the damage they do is with electromagnetic scatters/ionisations in their path. The mean energy of muons reaching sea level is about 4 GeV. Muons, being ... 11 If it were possible for one object to pass through another object, then it would be possible for one part of an object to pass through a different part of the same object. Therefore the question asked here is equivalent to the question of why matter is stable. See this question on mathoverflow. That question was more about the stability of individual atoms, ... 10 A simple counterexample: Imagine two particles with opposite direction and equal speed. The center of mass does not move, yet the kinetic energy of the system is non-zero. Now let both particles come to rest (by friction, hitting a wall, whatever). The kinetic energy is now zero, and total momentum has been conserved, while energy is not. The crucial ... 9 Yes, when you fire a pistol the hammer hits the bullet with a relatively small initial kinetic energy but the kinetic energy of the hammer and bullet after the collision is considerably higher. This may seem a silly example, but I think it actually highlights the important principle involved. In general when two bodies undergo an inelastic collision part of ... 9 Let's take everything out of our scenario other than you and the ball. No baseball stadium, no Earth, no spherical cows, NOTHING in the entire universe but you and the ball. (Nope, not even microwave background radiation) Now the question has changed. Now you need to ask whether it's possible to decide whether you're moving towards the ball or vice versa. 8 While people normally quote Newton's Second law as $\vec F = m \vec a$, it is better written as $$\vec F = \frac{d\vec p}{dt}$$ Force is a rate of change in momentum. This means that the average force applied when an object undergoes some discrete change in its momentum is $$F_{\text{avg}} = \frac{\Delta p }{\Delta t}$$ The change in your momentum ... 8 Anything that is not forbidden must happen. That's an important statement to keep in mind when approaching quantum physics. It doesn't mean that anything that can happen always happens, but it must happen at some time or another just like someone eventually has to win the lottery. That said, some protons do go through the LHC, ram into each other and ... 7 TL;DR: If you have to choose either "near the handle" or "near the tip", the tip will work better. But there's a point in between these two that works even better; exactly where that point is depends on how you swing the sword, and how its weight is distributed. UPDATED now I am near a computer and can draw diagrams etc. If cutting off the zombie's head ... 6 While a single LHC particle wouldn't be doing much harm, being hit by the LHC beam would be certainly deadly and it would damage the machine badly. Any dense matter that comes into the LHC beam will instantly act as a beam dump. We have a very good idea about what happens in the LHC beam dump, see e.g. ... 6 Regardless of the physical undefinability of "painfulness", I'd like to plug some numbers in a particular scenario: Let's have a momentum of $p = 1000$m$\cdot$kg/s, A 0.25kg bullet would be fatal, moving at $v = 1000/0.25 = 4000$m/s, while a 2000kg car moves at $v=0.5$m/s, So at least in this scenario and particularly for relatively low momentum ... 6 Backspin! Those shots in which the cue ball "draws" backwards after hitting the target ball involve backspin. 
Without backspin, the cue ball cannot reverse direction. Consider what happens when the cue ball is not spinning at all when it hits the target ball. The cue ball will come to a dead stop if it hits the target ball straight on. Think of Newton's ... 6 Microscopically, i.e. in the quantum theory the scattering with radiation is a collision of particles with photons such as $$e^- + \gamma \to e^- + \gamma$$ The momentum vectors of the particles above are $$\vec p_1+\vec p_2= \vec p_3 + \vec p_4$$ where the identity holds due to momentum conservation. But in general $\vec p_1\neq \vec p_3$ and $\vec ... 6 elementary particles (e.g. protons) Protons aren't elementary particles, they're made of partons (quarks and gluons) in "soup". Below,$\lambda$is the wavelength corresponding to the energy of the interaction via the usual de Broglie relation and$r_p$is the radius of the proton. At low energy with$\lambda >> r_p$the interactions are just ... 5 The problem is equivalent to 4 spheres colliding simultaneously, where top sphere center is at$60^o$relative to the$x'x\$ axes (same goes for bottom sphere): We'll name them: sphere A (dark blue), and spheres 1, 2, and 3. During the collision the spheres will behave like springs with an infinite hook constant. The forces on the spheres will be ... 5 Yes , the normal to the surface is the direction of reaction force. And the direction doesnt depend on the material of the object . But note that if friction is considered , direction of net reaction force changes 5 Many modern particle accelerators do accelerate both particles towards each other. LEP accelerated electrons and positrons in opposite directions in the same chamber, and the Tevatron did the same for protons and antiprotons. The LHC is a proton-proton collider, and so it has two stacked rings that accelerate protons in different directions. For the BaBar ... 5 First of all, if the collision is elastic, the distribution of momentum in between the components is completely determined by momentum and energy conservation! This statement is most obvious in the center-of-mass frame where the total momentum is zero and the two objects are moving in opposite directions. The momentum conservation (the total momentum is ... 5 Momentum is conserved in magnitude and direction. So in order to analyze any situation of momentum conservation, you should always start with $$\sum \mathbf p_{i}=\sum\mathbf p_f$$ where the subscripts denote the initial and final momenta. As to the ball & wall, you are correct that momentum is not conserved if you are only looking at the ball. If you ... 5 Let's turn this around. In an inelastic collision, some of the energy, instead of remaining with the center of mass of the objects colliding, is dissipated as heat - an increase in the random motion of the atoms and molecules of the objects colliding. At the (sub) atomic level, the two particles involved in a simple collision are the same two objects that ... Only top voted, non community-wiki answers of a minimum length are eligible
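Several of the answers above lean on momentum and energy conservation in collisions. As a concrete illustration (my addition, not taken from any of the quoted answers), the snippet below solves a 1-D elastic collision from those two conservation laws and checks that both quantities are preserved.

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities of a 1-D elastic collision, derived from conservation of momentum and kinetic energy."""
    v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

m1, v1, m2, v2 = 0.17, 2.0, 0.17, 0.0      # e.g. a cue ball hitting an equal-mass ball head on
u1, u2 = elastic_collision_1d(m1, v1, m2, v2)
print(u1, u2)                               # 0.0 2.0 -- the cue ball stops dead, as described above

# conservation checks
assert abs(m1*v1 + m2*v2 - (m1*u1 + m2*u2)) < 1e-12
assert abs(0.5*m1*v1**2 + 0.5*m2*v2**2 - (0.5*m1*u1**2 + 0.5*m2*u2**2)) < 1e-12
```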
2015-06-30 16:49:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6322247982025146, "perplexity": 514.773873274116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375094451.94/warc/CC-MAIN-20150627031814-00056-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/common-core-end-of-course-assessment-page-797/48
## Algebra 1: Common Core (15th Edition) $\boxed{3x+1}$, $\boxed{2x-5}$ and $\boxed{x^2}$ First, factor out the common factor $x^2$: $x^2(6x^2-13x-5)$. Then use middle-term (split-the-middle) factoring on $6x^2-13x-5$: $6x^2 - 15x + 2x - 5$, $3x(2x-5) + 1(2x-5)$, $(3x+1)(2x-5)$. The dimensions are $\boxed{3x+1}$, $\boxed{2x-5}$ and $\boxed{x^2}$.
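The factorization can be double-checked symbolically. The snippet below is not part of the original solution; it is a small sketch using sympy to confirm that $x^2(3x+1)(2x-5)$ expands back to $6x^4-13x^3-5x^2$.

```python
from sympy import symbols, expand, factor

x = symbols('x')

volume = 6*x**4 - 13*x**3 - 5*x**2            # x**2 * (6*x**2 - 13*x - 5)

# factor() recovers the three dimensions found by hand, e.g. x**2*(2*x - 5)*(3*x + 1)
print(factor(volume))

# expanding the hand factorization reproduces the original polynomial
print(expand(x**2 * (3*x + 1) * (2*x - 5)))   # 6*x**4 - 13*x**3 - 5*x**2
```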
2021-03-03 08:56:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1950371414422989, "perplexity": 1337.285540092912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366477.52/warc/CC-MAIN-20210303073439-20210303103439-00223.warc.gz"}
https://dergipark.org.tr/en/pub/hujms/issue/62731/696407
| | | | ## On the sum of simultaneously proximinal sets #### Longfa SUN [1] , Yuqi SUN [2] , Wen ZHANG [3] , Zheming ZHENG [4] In this paper, we show that the sum of a compact convex subset and a simultaneously $\tau$-strongly proximinal convex subset (resp. simultaneously approximatively $\tau$-compact convex subset) of a Banach space X is simultaneously $\tau$-strongly proximinal (resp. simultaneously approximatively $\tau$-compact ), and the sum of a weakly compact convex subset and a simultaneously approximatively weakly compact convex subset of X is still simultaneously approximatively weakly compact, where $\tau$ is the norm or the weak topology. Moreover, some related results on the sum of simultaneously proximinal subspaces are presented. simultaneous strong proximinality, simultaneous approximative compactness, compact sets, Banach space • [1] P. Bandyopadhyay, Y. Li, B. Lin and D. Narayana, Proximinility in Banach spaces, J. Math. Anal. Appl. 341, 309–317, 2008. • [2] E.W. Cheney and D.E. Wulbert, The existence and unicity of best approximation, Math. Scand. 24, 113–140, 1969. • [3] L.X. Cheng, Q.J. Cheng and Z.H. Luo, On some new characterizations of weakly compact sets in Banach spaces, Studia Math. 201, 155–166, 2010. • [4] W. Deeb and R. Khalil, The sum of proximinal subspaces, Soochow J. Math. 18, 163–167, 1992. • [5] S. Dutta and P. Shunmugaraj, Strong proximinality of closed convex sets, J. Approx. Theory 163, 547–553, 2011. • [6] M. Feder, On the sum of proximinal subspaces, J. Approx. Theory 49, 144–148, 1987. • [7] S. Gupta and T.D. Narang, Simultaneous strong proximinality in Banach spaces, Turkish J. Math. 41, 725–732, 2017. • [8] R.C. James, Weak compactness and reflexivity, Israel J. Math. 2, 101–119, 1964. • [9] P.K. Lin, A remark on the sum of proximinal subspces, J. Approx. Theory 58, 55–57, 1989. • [10] J. Mach, Best simultaneous approximation of bounded functions with values in certain Banach spaces, Math. Ann. 240, 157–164, 1979. • [11] Q.F. Meng, Z.H. Luo and H.A. Shi, A remark on the sum of simultaneously proximina subspaces (in Chinese), J. Xiamen Univ. Nat. Sci. 56 (4), 551–554, 2017. • [12] T.D. Narang, Simultaneous approximation and Chebyshev centers in metric spaces, Matematicki Vesnik. 51, 61–68, 1999. • [13] I.A. Pyatyshev, Operations on approximatively compact sets, J. Math. Notes 82, 653– 659, 2007. • [14] T.S.S.R.K. Rao, Simultaneously proximinal subspaces, J. Appl. Anal. 22 (2), 115–120, 2016. • [15] T.S.S.R.K. Rao, Points of strong subdifferentiability in dual spaces, Houston J. Math. 44 (4), 1221–1226, 2018. • [16] M. Rawashdeh, S. Al-Sharif and W.B. Domi, On the sum of best simultaneously proximinal subspaces, Hacet. J. Math. Stat. 43, 595–602, 2014. • [17] W. Rudin, Functional Analysis, 2nd ed. New York, McGraw-Hill Inc, 1991. • [18] F. Saidi, D. Hussein and R. Khalil, Best simultaneous approximation in $L^P(I,X)$, J. Approx. Theory 116, 369–379, 2002. Primary Language en Mathematics Mathematics Orcid: 0000-0001-9576-5610Author: Longfa SUNInstitution: North China Electric Power UniversityCountry: China Orcid: 0000-0001-9576-5014Author: Yuqi SUNInstitution: XIAMEN UNIVERSITYCountry: China Orcid: 0000-0001-9576-5644Author: Wen ZHANG (Primary Author)Institution: XIAMEN UNIVERSITYCountry: China Orcid: 0000-0001-9576-5611Author: Zheming ZHENGInstitution: XIAMEN UNIVERSITYCountry: China Fundamental Research Funds for the Central Universities, National Natural Science Foundation of China. 2019MS121, 11731010. 
The authors were supported by the Fundamental Research Funds for the Central Universities 2019MS121 and the National Natural Science Foundation of China (no. 11731010 and 12071388). Publication Date: June 7, 2021. Hacettepe Journal of Mathematics and Statistics 50 (3), 668-677, 2021, ISSN 2651-477X, doi: 10.15672/hujms.696407.
2021-07-31 19:44:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5499617457389832, "perplexity": 9164.557869849476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154099.21/warc/CC-MAIN-20210731172305-20210731202305-00338.warc.gz"}
https://math.stackexchange.com/questions/822961/angle-between-two-polynomials
# Angle between two polynomials Given two polynomials $p(X), q(X) \in P(d)$, where $P(d)$ is the vector space of all polynomials of degree less than or equal to $d$ with real coefficients, and using the inner product $$\langle p(X),q(X) \rangle = \int_{-1}^{1} p(X)q(X)\,dX$$ how can the angle $$\cos(\alpha)=\frac{\langle p(X),q(X) \rangle}{\|p(X)\|\,\|q(X)\|}$$ be interpreted geometrically? • Well, perhaps a more or less "easy" way is: remember that $\;P(d)\cong\Bbb R^{d+1}\;$ (assuming you meant real polynomials), so you can choose a basis (better: an orthonormal one) in your space and denote each vector (polynomial) in it by coordinates as in $\;\Bbb R^{d+1}\;$, and thus these coordinate vectors' angle is what you want ... – DonAntonio Jun 6 '14 at 15:03
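To make the definition concrete, here is a small numerical sketch (my addition, not part of the original question) that evaluates the inner-product integral for two example polynomials, $p(X)=X$ and $q(X)=1+X^2$, and prints the resulting angle.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def inner(p, q):
    """<p, q> = integral over [-1, 1] of p(X) q(X) dX, computed via the antiderivative."""
    antideriv = P.polyint(P.polymul(p, q))
    return P.polyval(1.0, antideriv) - P.polyval(-1.0, antideriv)

p = [0.0, 1.0]        # p(X) = X          (coefficients listed in increasing degree)
q = [1.0, 0.0, 1.0]   # q(X) = 1 + X^2

cos_alpha = inner(p, q) / np.sqrt(inner(p, p) * inner(q, q))
print(cos_alpha, np.degrees(np.arccos(cos_alpha)))   # 0.0 and 90.0: these two polynomials are orthogonal
```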
2019-05-23 21:59:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9906356334686279, "perplexity": 277.6713273771924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257396.96/warc/CC-MAIN-20190523204120-20190523230120-00501.warc.gz"}
https://cmuadvancedalgos.wordpress.com/2015/03/17/lecture-22-random-sampling-for-fast-svds/
## Lecture #22: Random Sampling for Fast SVDs Some clarifications from Monday’s lecture (and HW #5): • Firstly, Akshay had a question about the Matrix Chernoff bound I mentioned in Lecture today — it seemed to have a weird scaling. The bound I stated said that for ${\beta > 0}$, $\displaystyle \Pr[ \lambda_1(A_n - U) \geq \beta ] \leq d\cdot\exp\left\{- \frac{ \beta^2 n }{4R} \right\}.$ The worry was that if we scaled each little random variable by some ${K \geq 1}$, we would get $\displaystyle \Pr[ K \lambda_1(A_n - U) \geq K \beta ] \leq d\cdot\exp\left\{- \frac{ (K\beta)^2 n }{4KR} \right\}.$ So the numerator of the exponent would increase from ${-\beta^2 n}$ to ${-(K\beta)^2 n}$ whereas the denominator would go from ${4R}$ to ${4KR}$. As ${K}$ gets large, we get better and better bounds for free. Super-fishy. I went over it again, and the bound is almost correct: the fix is that the bound holds only for deviations ${\beta \in (0,1)}$. You will prove it in HW#5. (I updated HW #5 to reflect this missing upper bound on ${\beta}$; the upper bound on ${\beta}$ subtly but crucially comes into play in part 5(c).) This upper bound on ${\beta}$ means one can’t get arbitrarily better bounds for free by just scaling up the random variables. However, you can get a little bit better: if you do try to play this scaling game, I think you can get a slightly better bound of $\displaystyle d\cdot\exp\left\{ - \frac{\beta n}{4R} \right\}.$ (Follow the argument above but set ${K = 1/\beta}$ so that the deviation you want is about ${1}$, etc.) The resulting bound would improve the sampling-based SVD result from today’s lecture to ${s = O(r \log n/ \varepsilon^2)}$ sampled rows from the weaker bound of ${s = O(r \log n/ \varepsilon^4)}$ rows I claimed. • Secondly, for any real symmetric matrix ${A}$ assume the eigenvalues are ordered $\displaystyle \lambda_1(A) \geq \lambda_2(A) \geq \cdots \geq \lambda_n(A).$ The statement I wanted to prove was the following: given ${A, B}$ real symmetric, we have that for all ${i}$, $\displaystyle | \lambda_i(A) - \lambda_i(B) | \leq \| A - B \|_2.$ Akshay pointed out that it follows from the Weyl inequalities. These say that for all integers ${i,j}$ such that ${i+j-1 \leq n}$, $\displaystyle \lambda_{i+j-1}(X+Y) \leq \lambda_i(X) + \lambda_j(Y).$ Hence setting ${j = 1}$, and setting ${X+Y = A}$ and ${X = B}$, $\displaystyle \lambda_{i}(A) - \lambda_i(B) \leq \lambda_1(A-B).$ Similarly setting ${X+Y = B}$ and ${X = A}$, we get $\displaystyle \lambda_{i}(B) - \lambda_i(A) \leq \lambda_1(B-A).$ Hence, $\displaystyle | \lambda_{i}(A) - \lambda_i(B)| \leq \max\big(\lambda_1(A-B), \lambda_1(B-A)\big)$ $\displaystyle = \max\big(\lambda_1(A-B), -\lambda_n(A-B)\big) = \| A - B \|_2.$ • John found Rajendra Bhatia‘s book on Matrix Analysis quite readable. • A clarification asked after class: all matrices in this class are indeed matrices over the reals. • There are minor fixes to the HW, in particular to problem #5. Please look at the latest file online, changes are marked in red.
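As a quick sanity check of the eigenvalue-perturbation statement above (this snippet is mine, not part of the lecture notes), one can draw random real symmetric matrices and verify numerically that ${\max_i |\lambda_i(A)-\lambda_i(B)| \leq \|A-B\|_2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

for _ in range(100):
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # random real symmetric matrices
    B = rng.standard_normal((n, n)); B = (B + B.T) / 2

    eig_A = np.sort(np.linalg.eigvalsh(A))[::-1]          # eigenvalues in decreasing order
    eig_B = np.sort(np.linalg.eigvalsh(B))[::-1]

    lhs = np.max(np.abs(eig_A - eig_B))                   # max_i |lambda_i(A) - lambda_i(B)|
    rhs = np.linalg.norm(A - B, 2)                        # spectral norm ||A - B||_2

    assert lhs <= rhs + 1e-9
```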
2018-06-19 14:01:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 35, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9280138611793518, "perplexity": 533.2589852479954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863043.35/warc/CC-MAIN-20180619134548-20180619154548-00110.warc.gz"}
https://www.math-only-math.com/Degree-of-a-Polynomial.html
# Degree of a Polynomial Here we will learn the basic concept of a polynomial and the degree of a polynomial. What is a polynomial? An algebraic expression which consists of one, two or more terms is called a polynomial. How do we find the degree of a polynomial? The degree of the polynomial is the greatest of the exponents (powers) of its various terms. Examples of polynomials and their degrees:
1. For the polynomial 2x^2 - 3x^5 + 5x^6: We observe that the above polynomial has three terms. Here the first term is 2x^2, the second term is -3x^5 and the third term is 5x^6. Now we will determine the exponent of each term. (i) the exponent of the first term 2x^2 = 2 (ii) the exponent of the second term -3x^5 = 5 (iii) the exponent of the third term 5x^6 = 6 Since the greatest exponent is 6, the degree of 2x^2 - 3x^5 + 5x^6 is 6. Therefore, the degree of the polynomial 2x^2 - 3x^5 + 5x^6 = 6.
2. Find the degree of the polynomial 16 + 8x - 12x^2 + 15x^3 - x^4. We observe that the above polynomial has five terms. Here the first term is 16, the second term is 8x, the third term is -12x^2, the fourth term is 15x^3 and the fifth term is -x^4. Now we will determine the exponent of each term. (i) the exponent of the first term 16 = 0 (ii) the exponent of the second term 8x = 1 (iii) the exponent of the third term -12x^2 = 2 (iv) the exponent of the fourth term 15x^3 = 3 (v) the exponent of the fifth term -x^4 = 4 Since the greatest exponent is 4, the degree of 16 + 8x - 12x^2 + 15x^3 - x^4 is 4. Therefore, the degree of the polynomial 16 + 8x - 12x^2 + 15x^3 - x^4 = 4.
3. Find the degree of the polynomial 7x - 4. We observe that the above polynomial has two terms. Here the first term is 7x and the second term is -4. Now we will determine the exponent of each term. (i) the exponent of the first term 7x = 1 (ii) the exponent of the second term -4 = 0 (a nonzero constant term has exponent 0) Since the greatest exponent is 1, the degree of 7x - 4 is 1. Therefore, the degree of the polynomial 7x - 4 = 1.
4. Find the degree of the polynomial 11x^3 - 13x^5 + 4x. We observe that the above polynomial has three terms. Here the first term is 11x^3, the second term is -13x^5 and the third term is 4x. Now we will determine the exponent of each term. (i) the exponent of the first term 11x^3 = 3 (ii) the exponent of the second term -13x^5 = 5 (iii) the exponent of the third term 4x = 1 Since the greatest exponent is 5, the degree of 11x^3 - 13x^5 + 4x is 5. Therefore, the degree of the polynomial 11x^3 - 13x^5 + 4x = 5.
5. Find the degree of the polynomial 1 + x + x^2 + x^3. We observe that the above polynomial has four terms. Here the first term is 1, the second term is x, the third term is x^2 and the fourth term is x^3. Now we will determine the exponent of each term. (i) the exponent of the first term 1 = 0 (ii) the exponent of the second term x = 1 (iii) the exponent of the third term x^2 = 2 (iv) the exponent of the fourth term x^3 = 3 Since the greatest exponent is 3, the degree of 1 + x + x^2 + x^3 is 3. Therefore, the degree of the polynomial 1 + x + x^2 + x^3 = 3.
6. Find the degree of the polynomial -2x. We observe that the above polynomial has one term. Here the term is -2x. Now we will determine the exponent of the term. (i) the exponent of the term -2x = 1 Therefore, the degree of the polynomial -2x = 1.
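The same rule is easy to automate. The following short sketch (my addition, not from the original page) uses sympy to read off the degree as the greatest exponent for each of the examples above:

```python
from sympy import symbols, degree

x = symbols('x')

examples = [
    2*x**2 - 3*x**5 + 5*x**6,
    16 + 8*x - 12*x**2 + 15*x**3 - x**4,
    7*x - 4,
    11*x**3 - 13*x**5 + 4*x,
    1 + x + x**2 + x**3,
    -2*x,
]

for p in examples:
    # degree() returns the largest exponent appearing in the polynomial: 6, 4, 1, 5, 3, 1
    print(p, '-> degree', degree(p, x))
```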
2022-07-07 01:54:07
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.829565167427063, "perplexity": 445.0760251211526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683020.92/warc/CC-MAIN-20220707002618-20220707032618-00787.warc.gz"}
https://brilliant.org/problems/building-a-ski-jump/
# Building a ski jump You are in charge of building a small ski jump on a hill that slopes down $20^{\circ}$ below the horizontal. You want to maximize the distance the skier will travel down the hill while in the air. The jump will launch the skier off the hill at an angle $\theta$ above the horizontal. What should $\theta$ be in degrees to maximize the distance the skier travels down the slope while in the air? • The acceleration of gravity is $-9.8~\mbox{m/s}^2$.
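The problem can be checked numerically. The sketch below is my addition, not part of the original problem: it assumes standard projectile motion (launch from a point on the slope, no air resistance, landing back on the 20-degree downslope) and simply scans launch angles for the one that maximizes the distance along the slope; the optimum does not depend on the launch speed.

```python
# Range along a slope declining at angle phi, for launch angle theta above horizontal:
#   R(theta) = 2 v^2 cos(theta) sin(theta + phi) / (g cos^2(phi))
import numpy as np

g = 9.8                      # m/s^2
phi = np.radians(20.0)       # slope angle below horizontal
v = 10.0                     # any launch speed; it cancels out of the argmax

theta = np.radians(np.linspace(0.0, 90.0, 90001))
R = 2 * v**2 * np.cos(theta) * np.sin(theta + phi) / (g * np.cos(phi)**2)

print(np.degrees(theta[np.argmax(R)]))   # about 35 degrees, i.e. 45 - 20/2
```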
2020-06-02 11:23:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25837066769599915, "perplexity": 395.9713711828823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347424174.72/warc/CC-MAIN-20200602100039-20200602130039-00377.warc.gz"}
http://clay6.com/qa/40779/make-correct-statements-by-filling-in-the-symbol-subset-or-not-subset-in-th
Make correct statements by filling in the symbol $\subset$ or $\not\subset$ in the blank spaces: $\{x : x$ is an even natural number$\}$ ______ $\{x : x$ is an integer$\}$ $\begin{array}{1 1}(A)\;\subset\\(B)\;\not\subset\end{array}$ {x : x is an even natural number} $\subset$ {x : x is an integer}. Hence (A) is the correct answer.
2018-04-20 22:25:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.575360119342804, "perplexity": 506.39979668508016}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944742.25/warc/CC-MAIN-20180420213743-20180420233743-00255.warc.gz"}
https://mathoverflow.net/questions/182797/is-there-a-formula-for-the-number-of-labeled-forests-with-k-components-on-n
# Is there a formula for the number of labeled forests with $k$ components on $n$ vertices? Cayley's formula states that the number of labeled trees on $n$ vertices is $n^{n-2}$. My question is: Is there a generalization of this formula for forests? Let $f_{n,k}$ denote the number of forests with $k$ connected components on $n$ vertices. For example, $f_{n,1} = n^{n-2}$ by Cayley's formula and $f_{n,n-1} = \binom{n}{2}$. Is there a known closed formula for $f_{n,k}$? If not, is there an asymptotic formula? A formula as a single sum is $$f_{n,k} = \binom nk \sum_{i=0}^k \left(-\frac12\right)^i (k+i)\,i!\, \binom{k}{i}\binom{n-k}{i} n^{n-k-i-1}.$$ This formula can be found in J. W. Moon's Counting Labelled Trees, Theorem 4.1. He attributes it to A. Rényi, Some remarks on the theory of trees, Publications of the Mathematical Institute of the Hungarian Academy of Sciences 4 (1959), 73--85. To prove it, let $T= \sum_{n=1}^\infty n^{n-1} x^n/n!$ be the exponential generating function for rooted trees and let $U=\sum_{n=1}^\infty n^{n-2} x^n/n!$ be the exponential generating function for unrooted trees. It is well known that $$T^k = k! \sum_{n=k}^\infty \binom nk k n^{n-k-1}\frac{x^n}{n!}.\tag{*}$$ Then $U = T-T^2/2$, so the exponential generating function for forests of $k$ trees is $$\sum_{n=k}^\infty f_{n,k} \frac{x^n}{n!} = \frac{U^k}{k!}=\frac{(T-T^2/2)^k}{k!},$$ and the formula for $f_{n,k}$ follows by expanding the right side by the binomial theorem and using $(*)$. A straightforward formula: $f_{n,k}$ is the coefficient of $x^n$ in $$\frac{n!}{k!}\cdot\left( \sum_{i=1}^{\infty} \frac{i^{i-2}x^i}{i!}\right)^k$$ (clearly, the sum here can be restricted to the first $n$ terms). The formula given in Bollobás is $$\frac{1}{m!}\sum_{j=0}^m \left(-\frac{1}{2}\right)^j \binom{m}{j}\binom{n-1}{m+j-1}n^{n-m-j}(m+j)!,$$ where $n$ is the number of vertices and $m$ is the number of components.
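As a sanity check (my addition, not part of the thread), the single-sum formula can be evaluated for small $n$ and compared against the special cases quoted in the question, $f_{n,1}=n^{n-2}$ and $f_{n,n-1}=\binom{n}{2}$:

```python
from math import comb, factorial
from fractions import Fraction

def f(n, k):
    """Single-sum formula (Renyi / Moon, Theorem 4.1) for labeled forests with k components on n vertices."""
    total = Fraction(0)
    for i in range(k + 1):
        total += (Fraction(-1, 2) ** i) * (k + i) * factorial(i) * comb(k, i) \
                 * comb(n - k, i) * Fraction(n) ** (n - k - i - 1)
    result = comb(n, k) * total
    assert result.denominator == 1          # the sum always collapses to an integer
    return int(result)

for n in range(2, 8):
    assert f(n, 1) == n ** (n - 2)          # Cayley's formula
    assert f(n, n - 1) == comb(n, 2)        # a forest with n-1 components has exactly one edge
print([f(5, k) for k in range(1, 6)])       # [125, 110, 45, 10, 1]
```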
2019-12-15 18:40:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9220578074455261, "perplexity": 71.60890369310816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541309137.92/warc/CC-MAIN-20191215173718-20191215201718-00099.warc.gz"}
https://math.stackexchange.com/questions/1067275/alternate-proof-for-log-102-is-irrational/1067283
# Alternate proof for “$\log_{10}{2}$ is irrational” I need to prove that $\log_{10}{2}$ is irrational. I understand the way this proof was done using contradiction to show that the even LHS does not equal the odd RHS, but I did it a different way and wanted to check its validity! Prove by contradiction: Suppose that $\log{2}$ is rational - that is, it can be written as $$\log{2} = \frac{a}{b}$$ where $a$ and $b$ are integers. Then $$2 = 10^{\frac{a}{b}}$$ $$2 = 10^a10^{\frac{1}{b}}$$ $$\frac{2}{10^{a}} = 10^{\frac{1}{b}}$$ Log both sides: $$\log(\frac{2}{10^{a}}) = \frac{1}{b}$$ $$\log{2} - \log(10^a) = \frac{1}{b}$$ $$\log{2} = \frac{1}{b} + a$$ $$\log{2} = \frac{ab+1}{b}$$ However we assumed that $\log(2)=\frac{a}{b}$ and thus we have a contradiction. • Your step $10^{\frac{a}{b}} = 10^a 10^{\frac{1}{b}}$ is incorrect. – Cameron Williams Dec 14 '14 at 5:19 • I guess you really want to prove that it is irrational. Since it is. – MPW Dec 14 '14 at 5:19 • To a computer scientist, $\log 2$ is rational! – David Richerby Dec 14 '14 at 11:58 • @DavidRicherby why does it need to be rational? – Ooker Dec 14 '14 at 13:57 • @Ooker Because computer scientists most often take base-2 logs, for example to calculate the height of a binary tree or for recurrence relations involving divide-and-conquer algorithms that chop the input in half and operate recursively on one or both of the halves (which is, essentially, looking at the height of a binary tree again). – David Richerby Dec 14 '14 at 14:12 As has been pointed out in comments and in another answer, $10^{a/b}\neq 10^{a}10^{\frac{1}b}$. This is a rather subtle error, however there's a notable warning flag that could alert you to it: Your proof does not use the hypothesis that $a$ and $b$ are integers. This is a serious issue, because it means you've proved the (false) statement that $\log(2)$ cannot be written as a fraction $\frac{a}b$ - even if we let $a$ and $b$ be real, but: $$\frac{\log(2)}1=\log(2)$$ A proof can be carried out after modifying your calculation a bit. $$2=10^{\frac{a}{b}}\implies \color{blue}{2^b=10^a}\implies2^{b-a}=5^a$$ Which is a contradiction when both $a$ and $b$ are non-zero integers. Check the colored step carefully and you will understand in which step you have made a mistake. • From where did you get that $5^b$? I think I'll make it $5^a$. – aaaaaaaaaaaa Dec 14 '14 at 20:01 Be careful, $10^{\frac{a}{b}}$ equals $(10^{a})^{\frac{1}{b}}$ not $10^{a}10^{\frac{1}{b}}$ Others have already pointed out that your proof was wrong. A different way to see that your proof is wrong is as follows: if I would replace 2 in your proof everywhere by 10, I would get the result that log(10) is also irrational. Any proof you have for the irrationality of log(2) should not work for log(10). (If you take 10 as the base of your logarithm.) The proof for irrationality of $\log 2$ can be done in few lines using elementary divisibility properties or natural numbers. Assume $a, b \in \mathbb{N}$, $\text{gcd}(a,b) = 1$, and $a < b$. Thus: $\log 2 = \dfrac{a}{b} \to 2 = 10^{\frac{a}{b}} \to 2^b = 10^a = 2^a\cdot 5^a \to 2^{b-a} = 5^a$. We see a contradiction here because the $LHS$ is even while the $RHS$ is odd. • Who said that $b>a$? If $a>b$ then your LHS is not even integer. – Marc van Leeuwen Dec 14 '14 at 10:06 • log(2) < 1 <=> b > a – Guillaume Poussel Dec 14 '14 at 12:11 The equality $2=10^a10^{\frac{1}{b}}$ is not true, because $10^a10^{\frac{1}{b}}=10^{a+\frac{1}{b}}.$
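A small numerical illustration of the key step in the approach above (my addition, not from the thread): if $\log_{10}2 = a/b$ with positive integers $a,b$, then $2^b = 10^a$, and a brute-force scan shows no such pair exists for small exponents, consistent with the parity contradiction $2^{b-a}=5^a$.

```python
# Brute-force illustration (not a proof): 2**b = 10**a has no solution in positive integers,
# since 10**a = 2**a * 5**a would force the even number 2**(b-a) to equal the odd number 5**a.
solutions = [(a, b) for a in range(1, 50) for b in range(1, 200) if 2**b == 10**a]
print(solutions)   # []
```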
2019-07-20 11:23:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8236035108566284, "perplexity": 294.75453319490373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526508.29/warc/CC-MAIN-20190720111631-20190720133631-00061.warc.gz"}
https://gateoverflow.in/51711/isro2015-49
What frequency range is used for microwave communications, satellite and radar? 1. Low frequency: 30 kHz to 300 kHz 2. Medium frequency: 300 kHz to 3 MHz 3. Super high frequency: 3000 MHz to 30000 MHz 4. Extremely high frequency: 30000 kHz ### 1 comment option C), as per wiki the microwave range is 300 MHz - 300 GHz. In C) we have 3 GHz - 30 GHz, which is in range, but D) is > 30 MHz. Microwaves are a form of electromagnetic radiation with wavelengths ranging from one meter to one millimeter, with frequencies between 300 MHz (100 cm) and 300 GHz (0.1 cm). This broad definition includes both UHF and EHF (millimeter waves), and various sources use different boundaries. > Extremely high frequency - 3 × 10^7 Hz --(I) > Range of microwave radiation given on Wikipedia and quoted by another answer: 300 MHz to 300 GHz -> 3 × 10^8 Hz to 3 × 10^11 Hz. We can observe from (I) that it fails to fall into this range. For option C however, Super high frequency - 3000 MHz to 30000 MHz -> 3 × 10^9 Hz to 3 × 10^10 Hz, which is well within the frequency interval stated on Wikipedia. Also, the question asks for a range, yet option D refers to only a single frequency value. I believe the answer given in the ISRO exam key is incorrect, and so are the other answers given for this question. It would be great if this could be verified by one of the super-seniors. Thank you.
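A small unit-conversion check of the reasoning above (my addition): convert each option to Hz and test whether it lies inside the commonly quoted 300 MHz - 300 GHz microwave band.

```python
kHz, MHz, GHz = 1e3, 1e6, 1e9

options = {
    "A (low frequency)":        (30 * kHz, 300 * kHz),
    "B (medium frequency)":     (300 * kHz, 3 * MHz),
    "C (super high frequency)": (3000 * MHz, 30000 * MHz),
    "D (extremely high freq.)": (30000 * kHz, 30000 * kHz),   # a single value, 30 MHz, as printed in the question
}

micro_lo, micro_hi = 300 * MHz, 300 * GHz
for name, (lo, hi) in options.items():
    inside = micro_lo <= lo and hi <= micro_hi
    print(name, "inside microwave band:", inside)   # only option C lies inside the band
```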
2022-12-05 12:24:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6772294044494629, "perplexity": 1720.0538476362758}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711016.32/warc/CC-MAIN-20221205100449-20221205130449-00588.warc.gz"}
https://homework.cpm.org/category/CON_FOUND/textbook/mc1/chapter/9/lesson/9.2.1/problem/9-59
### Home > MC1 > Chapter 9 > Lesson 9.2.1 > Problem9-59 9-59. Where should the decimal point be? Does 50 make sense as an answer? Here is a similar problem: 3971.32 Make sure that like parts are being added (tenths to tenths, hundredths to hundredths, etc.). Lining up the decimal points can help to keep track of like parts. Thinking about the denominators of the fractions can help make sense of where the decimal point should be. $\text{What happens if you multiply }1\frac{234}{1000}\text{ by }\frac{3}{1000}?$ Make sure that you add like parts. Remember that lining up the decimal points can help to keep track of like parts.
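The fraction hint at the end can be made concrete with a tiny check (my addition, not part of the CPM materials): thinking of the decimals as fractions over 1000 shows where the decimal point must land in the product.

```python
from fractions import Fraction

a = Fraction(1234, 1000)           # 1.234, i.e. 1 234/1000
b = Fraction(3, 1000)              # 0.003

product = a * b
print(product, float(product))     # 1851/500000  0.003702 -- the denominators multiply to 1,000,000
```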
2021-10-20 08:26:12
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6988224387168884, "perplexity": 2647.853470527504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.91/warc/CC-MAIN-20211020055136-20211020085136-00461.warc.gz"}
https://bricspolicycenter.org/en/meeting-between-representatives-of-bpc-and-development-initiatives-di/
## MEETING BETWEEN REPRESENTATIVES OF BPC AND DEVELOPMENT INITIATIVES (DI) Paulo Esteves, BPC's General Supervisor, and research assistant Manaíra Assunção met on the morning of September 25th with Harpinder Collacott, Carolyn Culey and Mariella Di Ciommo from Development Initiatives (DI). DI is an organization focused on actions to end absolute poverty by 2030. For more information please access: http://devinit.org/#!/ The meeting covered a general overview of the BRICS countries and their engagement in multilateral institutions, particularly in the context of international development cooperation and the initiatives that have grown up around South-South cooperation studies. Possibilities for future collaboration between DI and BPC were also explored.
2021-01-22 19:24:29
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8007739186286926, "perplexity": 7923.833425820342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531335.42/warc/CC-MAIN-20210122175527-20210122205527-00137.warc.gz"}
https://www.chilimath.com/lessons/introductory-algebra/multiply-and-divide-positive-and-negative-numbers/
Multiplying and Dividing Positive and/or Negative Numbers The rules that govern multiplication and division of numbers are very similar. The key is to identify whether the signs of the two given numbers are the same or different, because this will determine the final sign of the answer. Multiplication and Division Rules of Signed Numbers Examples of How to Find the Product or Quotient of Signed Numbers
Example 1: Find the product of (3)(6) and the quotient of 12 ÷ 6. • Find the product of (3)(6): Since the numbers 3 and 6 have the same signs (both positive), their product is positive. (3)(6) = 18 • Find the quotient of 12 ÷ 6: Since the numbers 12 and 6 have the same signs (both positive), their quotient is positive. 12 ÷ 6 = 2
Example 2: Find the product of (–5)(–3) and the quotient of –21 ÷ (–7). • Find the product of (–5)(–3): The numbers −5 and −3 both have negative signs. Having the same sign means that their product must be positive. (–5)(–3) = 15 • Find the quotient of –21 ÷ (–7): The numbers −21 and −7 both have negative signs. The quotient of two numbers with the same sign is positive. –21 ÷ (–7) = 3
Example 3: Multiply the numbers (9)(–3) and divide the numbers 18 ÷ (–9). • Multiplying (9)(–3): The number 9 has a positive sign while the number −3 has a negative sign. Multiplying two numbers with different signs gives a negative answer. (9)(–3) = –27 • Dividing 18 ÷ (–9): The number 18 is positive while −9 is negative. Dividing two numbers with different signs gives a negative answer. 18 ÷ (–9) = –2
Example 4: Simplify the numerical expression . What we can do is to simplify the numerator by multiplying the two numbers. Do the same thing with the denominator. The numbers in the numerator have different signs, therefore we expect their product to be negative. Meanwhile, the denominator has two numbers with the same sign (both negative), and so their product must be positive. We finish this off by dividing the numerator by the denominator. Don't forget the division rule as well: the numerator is negative while the denominator is positive, so having different signs should yield a negative answer.
Example 5: Multiply the numbers (–1)(–2)(–3)(–4). So far, we have been multiplying numbers two at a time. This time around, we have a situation of finding the product of three or more numbers. We can work this out by multiplying two numbers at a time because we know how to do it that way. But there is a quick method to figure out the sign without having to multiply them two at a time. Observe that we have an even number of negative signs, that is, four negative numbers. If you encounter something like this, use the rule: An even number of negative signs means that we expect the answer to be positive. (–1)(–2)(–3)(–4) = +24
Example 6: Multiply the numbers (–1)(–1)(–1)(–1)(–1)(–1)(–1)(–1)(–1). This problem is not intended to trick you. Instead, think of it as another opportunity to learn how to handle questions just like this. Your teacher may throw something similar to this in your quiz to test if you know the topic fully well. Without paying attention to the signs, all numbers are just ones. Therefore, we predict that the answer may be either +1 or −1. Counting the number of negative signs, we have a total of nine (9), which is odd! Remember the rule: An odd number of negative signs implies that our final answer must be negative. (–1)(–1)(–1)(–1)(–1)(–1)(–1)(–1)(–1) = –1
Example 7: Divide the numbers (–1) ÷ (–1) ÷ (–1) ÷ (–1) ÷ (–1) ÷ (–1) ÷ (–1).
The rule for an odd or even number of negative signs also works when dividing numbers. Since we have a count of seven (7) negative signs, an odd number, the answer must be negative. (–1) ÷ (–1) ÷ (–1) ÷ (–1) ÷ (–1) ÷ (–1) ÷ (–1) = –1
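The parity rule used in Examples 5-7 is easy to state in code. This is a small sketch of my own, not part of the original lesson: the sign of a product (or a chain of divisions) of nonzero numbers is positive exactly when the count of negative factors is even.

```python
from functools import reduce

def product_sign(factors):
    """Return +1 or -1 according to the parity of negative factors (all factors assumed nonzero)."""
    negatives = sum(1 for f in factors if f < 0)
    return 1 if negatives % 2 == 0 else -1

print(product_sign([-1, -2, -3, -4]))          # +1, matches (-1)(-2)(-3)(-4) = +24
print(product_sign([-1] * 9))                  # -1, nine negative signs is odd
print(reduce(lambda a, b: a / b, [-1.0] * 7))  # -1.0, the same parity rule applies to division
```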
2022-01-17 10:02:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8945667147636414, "perplexity": 545.1494648865278}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300533.72/warc/CC-MAIN-20220117091246-20220117121246-00630.warc.gz"}
https://www.electricalexams.co/state-space-system-analysis-mcq/
# State-Space System Analysis MCQ Quiz – Objective Question with Answer for State-Space System Analysis 1. The state-space or the internal description of the system still involves a relationship between the input and output signals, what is the additional set of variables it also involves? A. System variables B. Location variables C. State variables D. None of the mentioned Although the state space or the internal description of the system still involves a relationship between the input and output signals, it also involves an additional set of variables, called State variables. 2. State variables provide information about all the internal signals in the system. A. True B. False The state variables provide information about all the internal signals in the system. As a result, the state-space description provides a more detailed description of the system than the input-output description. 3. Which of the following gives the complete definition of the state of a system at time n0? A. Amount of information at n0 determines output signal for n≥n0 B. Input signal x(n) for n≥n0 determines output signal for n≥n0 C. Input signal x(n) for n≥0 determines output signal for n≥n0 D. Amount of information at n0+input signal x(n) for n≥n0 determines output signal for n≥n0 We define the state of a system at time n0 as the amount of information that must be provided at time n0, which, together with the input signal x(n) for n≥n0 determines the output signal for n≥n0. 4. From the definition of the state of a system, the system consists of only one component called memoryless component. A. True B. False According to the definition of the state of a system, the system consists of two components called the memory component and memory less component. 5. If we interchange the rows and columns of the matrix F, then the system is called as ______________ A. Identity system B. Diagonal system C. Transposed system D. None of the mentioned The transpose of the matrix F is obtained by interchanging its rows and columns, and it is denoted by FT. The system thus obtained is known as Transposed system. 6. A single input single output system and its transpose have identical impulse responses and hence the same input-output relationship. A. True B. False If h(n) is the impulse response of the single input single output system, and h1(n) is the impulse response of the transposed system, then we know that h(n)=h1>(n). Thus, a single input single output system and its transpose have identical impulse responses and hence the same input-output relationship. 7. A closed-form solution of the state space equations is easily obtained when the system matrix F is? A. Transpose B. Symmetric C. Identity D. Diagonal A closed-form solution of the state space equations is easily obtained when the system matrix F is diagonal. Hence, by finding a matrix P so that F1=PFP-1 is diagonal, the solution of the state equations is simplified considerably. 8. What is the condition to call a number λ an Eigenvalue of F and a nonzero vector U is the associated Eigenvector? A. (F+λI)U=0 B. (F-λI)U=0 C. F-λI=0 D. None of the mentioned A number λ is an Eigenvalue of F and a nonzero vector U is the associated Eigenvector if FU=λU Thus, we obtain (F-λI)U=0. 9. The determinant |F-λI|=0 yields the characteristic polynomial of the matrix F. A. True B. False We know that (F-λI)U=0 The above equation has a nonzero solution U if the matrix F-λI is singular, which is the case if the determinant of (F-λI) is zero. That is, |F-λI|=0. 
This determinant yields the characteristic polynomial of the matrix F. 10. The parallel form realization is also known as normal form representation. A. True B. False The parallel form realization is also known as the normal form representation, because the matrix F is diagonal, and hence the state variables are uncoupled. 11. If (101.01)_2 = (x)_10, then what is the value of x? A. 505.05 B. 10.101 C. 101.01 D. 5.25 (101.01)_2 = 1*2^2 + 0*2^1 + 1*2^0 + 0*2^-1 + 1*2^-2 = (5.25)_10 => x = 5.25. 12. If X is a real number with 'r' as the radix, A is the number of integer digits and B is the number of fraction digits, then X=$$\sum_{i=-A}^B b_i r^{-i}$$. A. True B. False A real number X can be represented as X=$$\sum_{i=-A}^B b_i r^{-i}$$ where b_i represents the digit, 'r' is the radix or base, A is the number of integer digits, and B is the number of fractional digits. 13. The binary point between the digits b_0 and b_1 exists physically in the computer. A. True B. False The binary point between the digits b_0 and b_1 does not exist physically in the computer. Simply, the logic circuits of the computer are designed such that the computations result in numbers that correspond to the assumed location of this point. 14. What is the resolution to cover a range of numbers xmax-xmin with 'b' number of bits? A. (xmax+xmin)/(2^b-1) B. (xmax+xmin)/(2^b+1) C. (xmax-xmin)/(2^b-1) D. (xmax-xmin)/(2^b+1) A fixed-point representation of numbers allows us to cover a range of numbers, say, xmax-xmin with a resolution Δ=(xmax-xmin)/(m-1), where m=2^b is the number of levels and 'b' is the number of bits. 15. What are the mantissa and exponent required respectively to represent '5' in binary floating-point representation? A. 011,0.110000 B. 0.110000,011 C. 011,0.101000 D. 0.101000,011 We can represent 5 as 5 = 0.625*8 = 0.625*2^3. The above number can be represented in binary floating-point representation as 0.101000*2^011. Thus Mantissa = 0.101000, Exponent = 011. 16. If the two numbers are to be multiplied, the mantissa is multiplied and the exponents are added. A. True B. False Let us consider two numbers X = M*2^E and Y = N*2^F. If we multiply X and Y, we get X*Y = (M*N)*2^(E+F). Thus if we multiply two numbers, the mantissas are multiplied and the exponents are added. 17. What is the smallest floating-point number that can be represented using a 32-bit word? A. 3*10^-38 B. 2*10^-38 C. 0.2*10^-38 D. 0.3*10^-38 Let the mantissa be represented by 23 bits plus a sign bit and let the exponent be represented by 7 bits plus a sign bit. Thus, the smallest floating-point number that can be represented using the 32-bit word is (1/2)*2^-127 ≈ 0.3*10^-38, and the largest floating-point number that can be represented using the 32-bit word is (1-2^-23)*2^127 ≈ 1.7*10^38. 18. If 0<E<255, then which of the following statements is true about X? B. Infinity C. Mixed number D. Zero According to the IEEE 754 standard, for a 32-bit machine, a single-precision floating-point number is represented as X = (-1)^s * 2^(E-127) * (1.M). From the above equation, we can interpret that, if 0<E<255, then X = (-1)^s * 2^(E-127) * (1.M) => X is a mixed number. 19. For a two's complement representation, the truncation error is _________ A. Always positive B. Always negative C. Zero D. None of the mentioned For a two's complement representation, the truncation error is always negative and falls in the range -(2^-b - 2^-b_m) ≤ E_t ≤ 0. 20. Due to non-uniform resolution, the corresponding error in a floating-point representation is proportional to the number being quantized. A. True B.
False In floating-point representation, the mantissa is either rounded or truncated. Due to non-uniform resolution, the corresponding error in a floating-point representation is proportional to the number being quantized. 21. What is the binary equivalent of (-3/8)? A. (10011)_2 B. (0011)_2 C. (1100)_2 D. (1101)_2 The number (-3/8) is stored in the computer as the 2's complement of (3/8). We know that the binary equivalent of (3/8) = 0011, thus the two's complement of 0011 = 1101. 22. Which of the following is the correct representation of a floating-point number X? A. 2^E B. M*2^E (1/2<M<1) C. 2M*2^E (1/2<M<1) D. None of the mentioned The binary floating-point representation commonly used in practice consists of a mantissa M, which is the fractional part of the number and falls in the range 1/2<M<1, multiplied by the exponential factor 2^E, where the exponent E is either a negative or positive integer. Hence a number X is represented as X = M*2^E (1/2<M<1). 23. What is the mantissa and exponent respectively obtained when we add 5 and 3/8 in binary floating-point representation? A. 0.101010,011 B. 0.101000,011 C. 0.101011,011 D. 0.101011,101
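The mantissa/exponent questions above (e.g. representing 5, or adding 5 and 3/8) can be checked with a tiny script. This is my own sketch, not part of the quiz: it normalizes a positive number into the form M * 2^E with 1/2 <= M < 1 and prints the first few mantissa bits.

```python
def normalize(x, mantissa_bits=6):
    """Write x > 0 as M * 2**E with 0.5 <= M < 1, returning (mantissa bit string, E)."""
    M, E = x, 0
    while M >= 1.0:
        M /= 2.0
        E += 1
    while M < 0.5:
        M *= 2.0
        E -= 1
    bits = ""
    frac = M
    for _ in range(mantissa_bits):
        frac *= 2
        bit = int(frac)
        bits += str(bit)
        frac -= bit
    return "0." + bits, E

print(normalize(5))          # ('0.101000', 3)  -> mantissa 0.101000, exponent 011
print(normalize(5 + 3/8))    # ('0.101011', 3)  -> 5.375 = 0.101011 * 2^3
```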
2022-05-21 06:06:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8319289684295654, "perplexity": 1144.826145235923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662538646.33/warc/CC-MAIN-20220521045616-20220521075616-00036.warc.gz"}
https://socratic.org/questions/how-do-you-find-all-six-trigonometric-function-of-theta-if-the-point-sqrt2-sqrt2
# How do you find all six trigonometric function of theta if the point (sqrt2,sqrt2) is on the terminal side of theta? Jun 20, 2018 As below. #### Explanation: $\left(x , y\right) = \left(\sqrt{2} , \sqrt{2}\right)$ $\tan \theta = \frac{y}{x} = \frac{\sqrt{2}}{\sqrt{2}} = 1 , \theta = \frac{\pi}{4}$ $\cot \theta = \cot \left(\frac{\pi}{4}\right) = \frac{1}{\tan} \left(\frac{\pi}{4}\right) = 1$ $\sin \theta = \sin \left(\frac{\pi}{4}\right) = \frac{1}{\sqrt{2}}$ $\cos \theta = \cos \left(\frac{\pi}{4}\right) = \frac{1}{\sqrt{2}}$ $\csc \left(\theta\right) = \frac{1}{\sin} \theta = \frac{1}{\sin} \left(\frac{\pi}{4}\right) = \sqrt{2}$ $\sec \left(\theta\right) = \frac{1}{\cos} \theta = \frac{1}{\cos} \left(\frac{\pi}{4}\right) = \sqrt{2}$
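As a quick numerical cross-check (not part of the original answer), the same six values can be computed directly from the point's coordinates in Python; the variable names are just for illustration:

```python
import math

x, y = math.sqrt(2), math.sqrt(2)   # the given point on the terminal side
r = math.hypot(x, y)                # distance from the origin, r = 2

sin_t, cos_t, tan_t = y / r, x / r, y / x
csc_t, sec_t, cot_t = r / y, r / x, x / y

print(sin_t, cos_t, tan_t)   # 0.7071... 0.7071... 1.0  (1/sqrt(2), 1/sqrt(2), 1)
print(csc_t, sec_t, cot_t)   # 1.4142... 1.4142... 1.0  (sqrt(2), sqrt(2), 1)
```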
2019-01-19 04:21:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6582543849945068, "perplexity": 3349.512262531204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662124.0/warc/CC-MAIN-20190119034320-20190119060320-00594.warc.gz"}
https://leanprover-community.github.io/mathlib_docs/analysis/convex/extreme.html
# mathlibdocumentation analysis.convex.extreme # Extreme sets # This file defines extreme sets and extreme points for sets in a real vector space. An extreme set of A is a subset of A that is as far as it can get in any outward direction: If point x is in it and point y ∈ A, then the line passing through x and y leaves A at x. This is an analytic notion of "being on the side of". It is weaker than being exposed (see is_exposed.is_extreme). ## Main declarations # • is_extreme A B: States that B is an extreme set of A (in the literature, A is often implicit). • set.extreme_points A: Set of extreme points of A (corresponding to extreme singletons). • convex.mem_extreme_points_iff_convex_remove: A useful equivalent condition to being an extreme point: x is an extreme point iff A \ {x} is convex. ## Implementation notes # The exact definition of extremeness has been carefully chosen so as to make as many lemmas unconditional. In practice, A is often assumed to be a convex set. ## References # See chapter 8 of Barry Simon, Convexity ## TODO # • define convex independence, intrinsic frontier and prove lemmas related to extreme sets and points. • generalise to Locally Convex Topological Vector Spaces™ More not-yet-PRed stuff is available on the branch sperner_again. def is_extreme {E : Type u_1} [ E] (A B : set E) : Prop A set B is an extreme subset of A if B ⊆ A and all points of B only belong to open segments whose ends are in B. Equations theorem is_extreme.refl {E : Type u_1} [ E] (A : set E) : A theorem is_extreme.trans {E : Type u_1} [ E] {A B C : set E} (hAB : B) (hBC : C) : C theorem is_extreme.antisymm {E : Type u_1} [ E] : @[instance] def is_extreme.is_partial_order {E : Type u_1} [ E] : theorem is_extreme.convex_diff {E : Type u_1} [ E] {A B : set E} (hA : A) (hAB : B) : (A \ B) theorem is_extreme.inter {E : Type u_1} [ E] {A B C : set E} (hAB : B) (hAC : C) : (B C) theorem is_extreme.Inter {E : Type u_1} [ E] {A : set E} {ι : Type u_2} [nonempty ι] {F : ι → set E} (hAF : ∀ (i : ι), (F i)) : (⋂ (i : ι), F i) theorem is_extreme.bInter {E : Type u_1} [ E] {A : set E} {F : set (set E)} (hF : F.nonempty) (hAF : ∀ (B : set E), B F B) : (⋂ (B : set E) (H : B F), B) theorem is_extreme.sInter {E : Type u_1} [ E] {A : set E} {F : set (set E)} (hF : F.nonempty) (hAF : ∀ (B : set E), B F B) : (⋂₀F) theorem is_extreme.mono {E : Type u_1} [ E] {A B C : set E} (hAC : C) (hBA : B A) (hCB : C B) : C def set.extreme_points {E : Type u_1} [ E] (A : set E) : set E A point x is an extreme point of a set A if x belongs to no open segment with ends in A, except for the obvious open_segment x x. Equations theorem extreme_points_def {E : Type u_1} [ E] {x : E} {A : set E} : x A ∀ (x₁ x₂ : E), x₁ Ax₂ Ax x₂x₁ = x x₂ = x theorem mem_extreme_points_iff_forall_segment {E : Type u_1} [ E] {x : E} {A : set E} : x A ∀ (x₁ x₂ : E), x₁ Ax₂ Ax [x₁ -[] x₂]x₁ = x x₂ = x A useful restatement using segment: x is an extreme point iff the only (closed) segments that contain it are those with x as one of their endpoints. theorem mem_extreme_points_iff_extreme_singleton {E : Type u_1} [ E] {x : E} {A : set E} : {x} x is an extreme point to A iff {x} is an extreme set of A. 
theorem extreme_points_subset {E : Type u_1} [ E] {A : set E} : @[simp] theorem extreme_points_empty {E : Type u_1} [ E] : @[simp] theorem extreme_points_singleton {E : Type u_1} [ E] {x : E} : theorem convex.mem_extreme_points_iff_convex_remove {E : Type u_1} [ E] {x : E} {A : set E} (hA : A) : x A (A \ {x}) theorem convex.mem_extreme_points_iff_mem_diff_convex_hull_remove {E : Type u_1} [ E] {x : E} {A : set E} (hA : A) : x A \ (A \ {x}) theorem inter_extreme_points_subset_extreme_points_of_subset {E : Type u_1} [ E] {A B : set E} (hBA : B A) : theorem is_extreme.extreme_points_subset_extreme_points {E : Type u_1} [ E] {A B : set E} (hAB : B) : theorem is_extreme.extreme_points_eq {E : Type u_1} [ E] {A B : set E} (hAB : B) : theorem extreme_points_convex_hull_subset {E : Type u_1} [ E] {A : set E} :
2021-09-17 13:58:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4704168438911438, "perplexity": 2049.4054467867777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055645.75/warc/CC-MAIN-20210917120628-20210917150628-00096.warc.gz"}
https://codereview.stackexchange.com/questions/244526/how-to-avoid-using-exec-and-eval-in-python
# How to avoid using exec() and eval() in Python I wrote a script to execute a service on the Forestry TEP platform via its REST API. This service has certain input parameters, some of them are numerical values, other strings or files. The current workflow is: 1. There is a configuration file that has options for file and literal inputs. Each option is a dictionary, where each key is equal to one parameter of the service I want to execute. Example of such file: file_inputs= {"cfg" : "forest_flux/FTEP_service/cfg.ini", "parsawen" : "forest_flux/FTEP_service/parsawen.csv", "weather" : "forestflux/data/fin_01/weather/weather_2004.csv"} literal_inputs = {"version" : 3} 2. In the script, I read the cfg and iterate over items in these dictionaries. For each key,value pair, I use exec() to store the value in a class variable of the same name. For file inputs, I first upload the file on the platform and store the file's location on the platform in the variable. input_files_dict = json.loads(cfg.get("service_info", "file_inputs")) for key, val in input_files_dict.items(): exec("self.{} = self.post_data('{}')".format(key, val)) for key, val in literal_inputs_dict.items(): exec("self.{} = '{}'".format(key, val)) 3. I request service inputs and I try to match the service parameters with my class variables. I add it to a dictionary of parameters which I then send to the platform when executing the service. r2 = requests.get(url = url2, verify=False, allow_redirects=False, cookies=self.session_cookies) for _input in data_inputs: value_to_set = eval("self." + _input["id"].lower()) job_config["inputs"][_input["id"]] = [value_to_set] This current workflow works well for me but I have strong suspicion that it is a very bad practice. I have read that using exec() and eval() is discouraged as there are security issues. However, I cannot think of a better way to achieve this functionality, i.e. to pair values in configuration with service inputs automatically. I want this script to be generic, i.e. usable for any service on the platform. Question: How could I do this without using the problematic functions? How could using these functions be problematic in this case? Please note that I removed a lot of code that is not related to the problem I am asking about (e.g. error handling). • You could use ast.literal_eval but beware that "It is possible to crash the Python interpreter with a sufficiently large/complex string due to stack depth limitations in Python’s AST compiler." – Grajdeanu Alex. Jun 25 at 20:26 You can use getattr, setattr and delattr to retrieve, modify and remove attributes respectively. This avoids most potential messes eval has. input_files_dict = json.loads(cfg.get("service_info", "file_inputs")) • @JanPisl Yes. f'{value}' is preferred (as it is shorter) than str.format. But some modern programs still use str.format as they need to support versions of Python that f-strings are not available in. – Peilonrayz Jun 26 at 17:49
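The answer's own snippet is cut off above, so here is a minimal, hypothetical sketch of the setattr/getattr refactor it describes; the class, method, and config names simply mirror the question's code and are assumptions:

```python
import json

class ServiceJob:
    def __init__(self, cfg, session_cookies):
        self.cfg = cfg
        self.session_cookies = session_cookies

    def load_inputs(self):
        # setattr replaces exec("self.{} = ...".format(key, val))
        input_files_dict = json.loads(self.cfg.get("service_info", "file_inputs"))
        for key, val in input_files_dict.items():
            setattr(self, key, self.post_data(val))

        literal_inputs_dict = json.loads(self.cfg.get("service_info", "literal_inputs"))
        for key, val in literal_inputs_dict.items():
            setattr(self, key, val)

    def build_job_config(self, data_inputs, job_config):
        # getattr replaces eval("self." + _input["id"].lower())
        for _input in data_inputs:
            value_to_set = getattr(self, _input["id"].lower())
            job_config["inputs"][_input["id"]] = [value_to_set]
        return job_config

    def post_data(self, path):
        # placeholder for the question's upload helper
        raise NotImplementedError
```

A plain dictionary of inputs (e.g. `self.inputs[key] = ...`) would work just as well here and avoids creating attributes dynamically at all.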
2020-10-31 22:15:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28700366616249084, "perplexity": 2580.9708145860973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107922463.87/warc/CC-MAIN-20201031211812-20201101001812-00555.warc.gz"}
https://www.physicsforums.com/threads/velocity-of-charge-orbiting-infinite-line-of-negative-charg.884563/
# Velocity of charge orbiting infinite line of negative charg 1. Sep 7, 2016 ### space-time Sorry I couldn't finish the title. I ran out of space. Anyway, here's the question: A uniformly charged, infinitely long line of negative charge has a linear charge density of -λ and is located on the z axis. A small positively charged particle that has a mass m and a charge q is in circular orbit of radius R in the xy plane centered on the line of charge. (Use the following as necessary: k, q, m, R, and λ.) (a) Derive an expression for the speed of the particle. (b) Obtain an expression for the period of the particle's orbit. Relevant Equations: E = λ/(2πrε0) for an infinite line of charge. In the case of this problem I used -λ instead of just λ. F = Eq = mv2/r T = (2π)/ω = (2π)/(v/r) = (2πr)/v I tried using the first equation I listed above in order to derive E. This led to: E = -λ/(2πRε0) F = Eq = mv2/R , so this leads to: F = -λq/(2πRε0) = mv2/R solving algebraically for v yields: v = sqrt(-λq/(2πmε0)) = sqrt(-λkq / (2m) ) Here I thought I had derived v, but webassign said that this was wrong. I tried taking out the negative sign since having an imaginary velocity makes no sense, but it was still wrong. I can't do (b) until I solve (a). Please help. 2. Sep 7, 2016 ### Simon Bridge you expression for centripetal acceleratio should have a minus sign, since it points inwards. remember: vectors. note: $$\frac{1}{2\pi\epsilon_0} = 2k$$ 3. Sep 7, 2016 ### space-time Thanks! I solved it now. 4. Sep 8, 2016 Well done.
2018-01-17 03:48:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9057170748710632, "perplexity": 1359.4139496527698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886794.24/warc/CC-MAIN-20180117023532-20180117043532-00288.warc.gz"}
https://mtrend.vn/question/further-they-think-they-will-have-more-chances-to-be-helped-with-their-problems-such-as-lacking-336/
## “Further, they think they will have more chances to be helped with their problems such as lacking money, domestic issues,…in need by some social con Question “Further, they think they will have more chances to be helped with their problems such as lacking money, domestic issues,…in need by some social connections.” Bạn nào sửa giúp mình câu này đc ko, mình nhưng thấy vẫn còn lủng củng in progress 0 42 phút 2021-10-09T23:23:07+00:00 2 Answers 0 views 0 1. => “Furthermore, they think that they will have many opportunities to help with their problems such as lack of money, difficulties in the country, … need some social connection.”
2021-12-04 06:11:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2847744822502136, "perplexity": 9471.697334578908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362930.53/warc/CC-MAIN-20211204033320-20211204063320-00284.warc.gz"}
http://mathhelpforum.com/calculus/181186-need-help-maximum-minimum-problems-print.html
# Need help with Maximum/Minimum Problems... • May 20th 2011, 07:39 PM sorkii Need help with Maximum/Minimum Problems... I'm having some issues with solving these 2 problems. (Crying) The first one is as follows: Find the value of a for which the area of the triangle formed by the tangent (y=a^2 - 2ax + 4) and the co-ordinate axes will be minimum. I calculated the distances of the side and base of the triangle by finding the intercepts of the tangent. Heres what I got: side = (a^2 + 4)/2 and base = a^2 + 4 Therefore, my main equation for the area was 1/2*side*base I simplified the area equation, differentiated it (using quotient rule) and got a= +2 or -2 I tested both with the first derivative, and found a=2 produced the local max. Therefore, the area would be 8 units squared However the answers say the area is 2/3*\sqrt{3} The second question is similar: Find the maximum area of a right triangle with a hypotenuse of 16cm my starting equation was y^2 = \sqrt{256-x^2} subbed into the area equation and differentiated, i reached a dead end...i think i ended up with the first derivative test screwing up... Any help on these 2 questions? Ive been working on them for ages and keep repeating the same steps to no avail... Thanks!(Nod) • May 20th 2011, 07:58 PM Diemo Quote: I calculated the distances of the side and base of the triangle by finding the intercepts of the tangent. Heres what I got: side = (a^2 + 4)/2 and base = a^2 + 4 Look at the equation you have for the side again. Setting x=0 gives you $y=a^2+4$ but setting y=0 does gives $x=\frac{a^2+4}{2a}$ For the second question, you have $x^2+y^2=16^2$ and you want $\frac{1}{2}xy$ to be maximum. So sub in $y=\sqrt{256-x^2}$, diffrentiate to get what x you have, and work back to get the area( hint, the x and the y are going to be the same) • May 20th 2011, 08:19 PM sorkii Oops, i accidentally switched the intercept values when i wrote out the question. they were right when i tried solving the problem on paper...so im still stuck >_< As for the second one, i tried differentiating this time with the product rule. heres what i got A' = (x/2).(1/4\sqrt{256-x^2}) + (1/2)(\sqrt{256-x^2}/2) That becomes: (x+512-2x^2)/8\sqrt{256-x^2} And again i run into the same problem For stat. points, x + 512 - 2x^2 = 0 But when i factorise that using the quadratic formula for 0 i get a figure with a surd in it... Im pretty sure if i continue with it it wont produce the 64cm^2 that is the answer so what did i do wrong? • May 20th 2011, 09:10 PM Diemo For Problem A: My intercept value is not the same as yours for the side, you are missing an a. So your value for the Area is $\frac{(a^2-4)^2}{2a}$. Is this the value you had? The diffriation for this is quite complicated as well, if you want to see a sample diffrentiation try:http://www2.wolframalpha.com/Calcula...=3&w=461&h=874 For problem B: The answer you are looking for will have a surd in it. Remember that the value the equation gives you is the value for x, not the value for the area. To get the cvalue for the area (th 64 cm^2) you have to work out your y (from $Y^2+X^2=256$ and then get your area from $area =\frac{1}{2}xy$ With regards to your diffrentiation itself: $A=\frac{x}{2}\sqrt{256-x^2}$ chain Rule: with $u=x/2$and $v=\sqrt{256-x^2}$ $A'=u\frac{dv}{dx} +v\frac{du}{dx}$ To get the $\frac{dv}{dx}$ you use the product rule. $\frac{dv}{dx}=\frac{dv}{db}\frac{db}{dx}$ where $b=256-x^2$, and $v=\sqrt{b}$ (Do you how this works?) 
which leaves $A'=\frac{x}{2}\times\frac{-2x}{2\sqrt{256-x^2}}+\sqrt{256-x^2}\times\frac{1}{2}$ Simplifying gives $A'=\frac{128-x^2}{2\sqrt{256-x^2}}$ You only care when A'=0, so this simplifies to $0=128-x^2$. Finally, to get the area you plug your x and y obtained above into A=0.5xy. That help? • May 20th 2011, 11:12 PM sorkii Thanks! I get problem b now, but shouldnt the area equation for prob A = (a^2 + 4)^2/4a ? since you have to multiply by half? thanks again :D • May 22nd 2011, 01:54 PM Diemo Yes, it should be, sorry.
2013-12-12 12:04:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7894624471664429, "perplexity": 803.9957420551264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164582561/warc/CC-MAIN-20131204134302-00016-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/thermal-equilibrium-entropy-driven.609257/
# Thermal equilibrium - Entropy driven 1. May 27, 2012 ### Stiibe Hey there. We are struggling with a problem from an old exam about statistical mechanics. I hope you can help us or give any clues. Here is the problem. 1. The problem statement, all variables and given/known data A chamber is divided by a wall into two sections of equal volume. One section of the chamber is initially filled by an ideal gas at temperature T0, whereas the other section is empty. Then, a small hole is opened in the dividing wall such that the gas can flow through, until equilibrium is reached. Consider the chamber isolated towards the surrounding and no heat is exchanged with the walls. a) What is the final Temperature of the system? b)How does the result change if instead initially the chamber is filled with a Van der Waals gas? 2. Relevant equations Any thermodynamical equation. 3. The attempt at a solution We tried to solve the problem by using the entropy. Since thermal equilibrium is reached, wehn the entropy is maximized. 2. May 27, 2012 ### fluidistic The first equation that crosses my mind for part a) is the famous $PV=NRT$. For part b), wikipedia tells us $\left(p + \frac{n^2 a}{V^2}\right)\left(V-nb\right) = nRT$. So I suggest you to use these 2 equations rather than the entropy. 3. May 28, 2012 ### Andrew Mason Does the gas do any work? When you answer that question, apply the first law to determine the change in internal energy. What does that tell you about the change in temperature? Apply the same first law analysis. In this case, however, how is temperature related to volume and internal energy in a Van der Waals gas? (Hint: in an ideal gas, the temperature is a function of internal energy and independent of volume. Does the same apply to a Van der Waals gas?) AM
2017-10-23 01:41:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6369395852088928, "perplexity": 341.79908735951676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825497.18/warc/CC-MAIN-20171023001732-20171023021732-00287.warc.gz"}
https://math.stackexchange.com/questions/1556749/classical-and-intuitionistic-propositional-logic-in-the-propositions-as-sets-int
# Classical and intuitionistic propositional logic in the propositions-as-sets interpretation I'm looking for a way to describe classical and intuitionistic propositional logic such that the transition between the two seems natural and intuitive. I came up with the following but I'm unsure if it's actual true. Just like the propositions-as-types interpretation for intuitionistic logic, one can give a propositions-as-sets interpretation for classical logic. If the logical language consists of $\land,\lor,\Rightarrow,\bot$, we can interpret these symbols as the set operations $\times,\oplus,\rightarrow,\emptyset$ and a propositional formula $\phi[X_1,\dots,X_n]$ is valid iff we can prove (in set theory enriched by $\times,\oplus,\rightarrow,\emptyset$) that $\exists Y.Y\in\tilde\phi[X_1,\dots,X_n]$ for the corresponding set theoretical term $\tilde\phi$. (Since the $X_i$ appear free in $\tilde\phi$, we can also prove $\forall X_1,\dots,X_n.\exists Y.Y\in\tilde\phi[X_1,\dots,X_n]$ then.) I'm pretty sure that this part is correct, since basically not much is done, one only asks about emptyness of sets and we know that $X\times Y$ is empty iff $X$ or $Y$ is, $X\oplus Y$ is empty iff $X$ and $Y$ is, $X\rightarrow Y$ is empty iff $X$ but not $Y$ is such as $\emptyset$ is empty. Now I feel like one could say that the propositions which are intuitionistically true are exactly those where we find canonical inhabitants of the respective set. For example, we can show $\exists Y.Y\in X\oplus(X\rightarrow\emptyset)$ but there is no canonical choice for such $Y$. I thought maybe one could make precise what is meant by 'canonical inhabitant of $\phi$' by saying there exists a (set theoretical) formula $\chi[X_1,\dots,X_n,Y]$ such that we can prove: $$(\exists! Y.\chi[X_1,\dots,X_n,Y])\land(\forall Y.\chi[X_1,\dots,X_n,Y]\Rightarrow Y\in\tilde\phi[X_1,\dots,X_n])$$ (Since the $X_i$ appear free in the formula, as before the $Y$ can depend on them.) Is there some theory along these lines, if it makes any sense at all? EDIT: This conjecture is not true, take $\phi[X]:=\bot\Rightarrow X$. Then clearly $\tilde\phi[X]=\emptyset\rightarrow X$ equals $\{\emptyset\}$ (no matter what $X$ is because $\emptyset$ is an initial object in the category of sets) so it contains a very canonical inhabitant (i.e. this inhabitant doesn't even depend on $X$). But maybe it remains true if we do not translate $\bot$ to $\emptyset$ but just some unspecified fresh variable $X_\bot$ and place the condition $X_\bot\subseteq X_i$ over everything? • I'm not sure I agree with this, what are the occupants of these sets? If you're thinking of those being programs then you've got realizability theory and there are a lot of interesting insights about computation and classical logic. If you just meant derivations in a formal system than this procedure will work for any logic and there isn't really a valid notion of canonicity a prior. In any case there should not be a derivation in intuitionistic logic of $\forall A. A \vee \neg A$ so there should be no occupant of that set, canonical or not. – jozefg Dec 2 '15 at 17:09 • Ps your notion of canonicity is the uniform quantification operator. It was defined first in the context of computational type theory by the Nuprl project. I don't have a good citation off hand though.. 
– jozefg Dec 2 '15 at 17:12 • Hm, I just tried find some picture (not necessarily a theoretical useful one, more like one which helps me with my intuition) in which classical and intuitionistic logic lie as close as possible together. So, I know that my picture does not correspond to the classical propositions-as-sets picture inspired by type theory, where $A\lor\neg A$ indeed contains no inhabitant, I like to call this one propositions-as-types, to underline the difference, if that makes any sense... – fweth Dec 2 '15 at 17:16 There is a topological interpretation where propositions are open sets and negation is interior of the complement. The rest is the same as in classical logic. $\wedge$ is intersection, $\vee$ is union, and $\implies$ is $\subset$, tautology is the whole set, impossibility is the empty set. Witnesses to $\exists x$ do not need to be unique in constructive mathematics. They just need to not depend on perfect true/false distinctions and coverings by overlapping open sets model that for disjunction. The rest is the same with and without excluded middle.
2019-05-23 08:05:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8661880493164062, "perplexity": 257.3326633677652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257156.50/warc/CC-MAIN-20190523063645-20190523085645-00478.warc.gz"}
https://www.physicsforums.com/threads/eulers-equation-i-think-leonard-suskin.314458/
# Eulers equation, i think? leonard suskin 1. May 16, 2009 ### jerkazoid leonard susskind writes this on the chalk board a little after 1/2 way in this Nova episode on String theory http://www.pbs.org/wgbh/nova/programs/ht/qt/3013_03.html am i writing this correctly? Γ [1-∝(s) Γ (1-∝(t)] __________________ Γ [z-∝(s) -∝(t)] edit spelld his name wrong Last edited: May 16, 2009 2. May 16, 2009 ### Cyosis Unless I am missing something it seems he missed a bracket. In the numerator there are 4 ( brackets and only 3 ) brackets. 3. May 16, 2009 ### jerkazoid but other than that... im recreating it correctly? ok so like this right? Γ [1-∝(s)] Γ [1-∝(t)] __________________ Γ [z-∝(s) -∝(t)] 4. May 16, 2009 ### Cyosis No, the "Euler equation" they are talking about looks like this: $$\frac{\Gamma(x) \Gamma(y)}{\Gamma(x+y)}$$ So it should be: $$\frac{\Gamma(1-\alpha(s)) \Gamma(1-\alpha(t))}{\Gamma(2-\alpha(s)-\alpha(t))}$$ A two not a z.
2018-07-22 13:00:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6532790064811707, "perplexity": 14468.76420060311}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593223.90/warc/CC-MAIN-20180722120017-20180722140017-00638.warc.gz"}
https://365datascience.com/trending/future-of-machine-learning/
Updated on 17 Mar 2022 # What Is the Future of Machine Learning? The 365 Team Published on 17 Mar 2022 7 min read Nowadays, businesses generate an astonishing 2.5 quintillion bytes of data every single day. For those of you wondering how much that is – well, there are 18 zeroes at a quintillion. This is a huge number! With people using social media platforms, digital communication channels, and various contactless services, it is no surprise that big data continues to grow at a colossal rate. But how are we going to make the best out of this trove of information in the future? As businesses move towards the era of cloud storage, they look for innovative approaches to leveraging data. Since it’s virtually impossible to analyze great data volumes manually, big organizations can now adopt machine learning to handle these for them. Fortune Business Insights has recently published an article, estimating that the machine learning industry will reach nearly \$153 billion by 2027 – a massive growth rate compared to the \$15.50 billion in 2021. And all that happening for less than a decade! We could barely imagine what it would all be like in 2050. In this article, you’ll find out how we, at 365 Data Science, believe machine learning technology will transform the business world in the next few years and what emerging trends may be worth considering. ## Machine Learning vs. Artificial Intelligence vs. Deep Learning Many people wrongly use the terms machine learning (ML), deep learning (DL), and artificial intelligence (AI) interchangeably. To better understand the future of ML, one must be able to differentiate between these 3 concepts. Here’s a simple graph to clear any confusion you might have: Artificial intelligence is an umbrella term, encompassing both machine learning and deep learning. It is inspired by the human brain and focuses on mimicking people’s behavior. Machine learning is about creating an algorithm that a computer uses to provide valuable insights, with data being its key component. It is unique in developing algorithms that learn from data to solve problems without programming. Like a human, a model learns through experience and improves its accuracy over time. At the core, you’ll find deep learning – an advanced feature of ML whose algorithm has its own learning mechanisms. ## Evolution of Machine Learning Although we can’t name a single person or event that made it all happen, the evolution of machine learning tells us just how multi-dimensional the field can be. Some believe it all started back in 1943 when Walter Pitts and Warren McCulloch presented the world’s first mathematical model of neural networks. Here’s a simplified representation of the concept, consisting of 2 parts – g and f: In a couple of years, the famous book The Organization of Behavior by Donald Hebb was released to later become a turning point in the field of ML. It wasn’t until the 1990s that the very first machine learning program was introduced to the world. That’s how the spam filter came into existence, and people could now save time sorting out emails. This significant milestone represented the collective effort of scientists and marked the beginning of the contemporary ML era. To learn more about how to classify spam messages yourself, check out our Machine Learning with Naïve Bayes course. ## The Future is Now: Latest Advancements of Machine Learning Over the last decade, many innovations in various fields have come to the forefront thanks to machine learning. 
Let’s briefly present 5 ML advancements that are currently trending and are here to stay: ### Computer Vision Computer Vision is a type of AI where a computer can identify objects in images and videos. With the advancement in ML, the error rate has now decreased from 26% to just 3% in less than a decade. Along with better accuracy and methods such as cross-entropy loss, humans are also able to save time in performing some tasks. If I ask you to categorize 10,0000 pictures of dogs, will you be able to do it in a few minutes? Unlike a computer with a CPU, you’ll probably take weeks to perform the task, provided you are a dog expert. In practice, computer vision has a great potential in the medical field and airport security that companies are nowadays starting to explore! ### Focused Personalization Another beneficial ML advancement has to do with understanding target markets and their preferences. With the increased accuracy of a model, businesses can now tailor their products and services according to specific needs using recommender systems and algorithms. How does Netflix recommend shows? What is Spotify’s secret to playing your favorite songs? It’s machine learning that’s behind all these recent developments! ### Improved Internet Search Machine learning helps search engines optimize their output by analyzing past data, such as terms used, preferences, and interactions. To put it into perspective, 2 trillion Google searches have been registered in 2021 alone. With so much data at hand, Google algorithms continue to learn and get better at returning relevant results. For many of you, that’s the most familiar ML development of our time. ### Chatbots This is another ongoing trend businesses around the globe employ. Chatbot technologies contribute to improving marketing and customer service operations. You may have seen a chatbot prompting you to ask a question. This is how these technologies learn – the more you ask, the better they get. In 2018, the South Korean car manufacturer KIA launched the Facebook Messenger and chatbot Kian to its customers, boosting social media conversion rates up to 21% – that is 3 times higher than KIA’s official website. Many logistics and aviation companies see adopting ML technologies as a way to increase efficiency, safety, and estimated time of arrival (ETA) accuracy. You will be surprised to know that the actual flying of a plane is predominantly automated with the help of machine learning. Overall, businesses are largely interested to unearth ML’s potential within the transportation industry, so that’s something to look out for in the near future. ## Key Problems of Machine Learning Machine learning – as revolutionary as it may be – isn’t flawless. Its enormous potential comes with a number of challenges that are shaping up the digital world of tomorrow. A visionary, however, will always try to turn a stumbling block into a steppingstone. We believe today’s problems trigger tomorrow’s solutions, so let’s find out what these hurdles are: ### Data Acquisition Machine learning can only produce relevant and high-quality results if we feed enough data into the model. The need for massive resources then raises a question as to how unbiased and accurate the training data can possibly be. In what way do we ensure flawless input and sound results? The “Garbage in, garbage out” principle is what drives the proper functioning of machine learning models, and that’s a real challenge in today’s data-rich environment. 
### Resources Generally, machine learning requires a lot of resources, such as powerful computers, time for developing, perfecting, and revising a model, financing, and data collection. Businesses must be ready to take on considerable investments before reaping the harvest of adopting machine learning. ### Data Transformation Contrary to popular belief, machine learning isn’t made for identifying and modifying algorithms – it’s about transforming raw data into a set of features to capture the essence of that information. In its autonomy, ML can make some mistakes that affect its efficiency in the long run. Error susceptibility is certainly a major thing to consider when transforming data with ML. ### Result Interpretation An ML model tends to make self-fulfilling predictions. When training data and identified patterns are wrong, the algorithms will still use this information as a basis for generating and processing new data. It may take some time before you realize that the model has been working in favor of the underlying bias. For this reason, result interpretation turns into a comprehensive task for the user. ### Bias and Discrimination How do businesses prevent bias and discrimination when the training data itself can be corrupted? They say the road to hell is paved with good intentions – a proverb that describes the ethical dilemmas of the ever-growing digital universe very well. Although you mean good when building a model to automate processes, you may unintentionally ignore or misinterpret an important human factor, which you would have otherwise prioritized. That’s a major issue when incorporating ML within recruitment and hiring practices. ## Future Machine Learning Trends Dave Waters once said: A baby learns to crawl, walk, and then run. We are still in the crawling stage when it comes to applying machine learning. Here, we’ll outline 5 trends we believe will unfold in the next few decades. They all derive from the current developments and ongoing challenges within the industry. ### The Quantum Computing Effect Industry experts have high hopes about optimizing machine learning speed through quantum computing. And rightfully so – it makes simultaneous multi-stage operations possible, which are then expected to reduce execution times in high-dimensional vector processing significantly. Whether quantum computing will turn into the game-changer everyone’s talking about, we are yet to find out! Currently, there are no such models available on the market, but tech giants are working hard to make that happen. ### The Big Model Creation The next few years are expected to mark the beginning of something big – an all-purpose model that can perform various tasks at the same time. You won’t have to worry about understanding the relevant applications of a framework. Instead, you’ll train a model on a number of domains according to your needs. How convenient would it be to have a system that covers all bases – from diagnosing cancer to classifying dog images by breed? Of course, a well-designed quantum processor to enhance ML capabilities will certainly give that development a boost. That’s why great minds are now putting considerable effort into reinforcing the scalability and structure of such a model. ### Distributed ML Portability With the proliferation of databases and cloud storage, data teams want to have more flexibility when it comes to using datasets in various systems. 
We foresee a great advancement in the field of distributed machine learning where scientists will no longer reinvent algorithms from scratch for each platform. Rather, they will be able to immediately integrate their work into the new systems, along with the user datasets. In the coming years, we will likely experience some form of distributed ML portability by running the tools natively on various platforms and computer engines. In this way, we’ll eliminate the need for shifting to a new toolkit. Experts in the field are already talking about adding abstraction layers to make that technological leap. ### No-Code Environment As open-source frameworks like TensorFlow, scikit-learn, Caffe, and Torch continue to evolve, machine learning is likely to keep minimizing coding efforts for data teams. In this way, non-programmers will have easy access to ML – no postgraduate degree is required; they can simply download several packages and attend an online course on how to work with these programs. Besides, automated ML will improve the quality of results and analysis. So, we expect machine learning to be classified as a major branch of software engineering in the next decade. ### The Power of Reinforcement Learning Reinforcement learning (RL) is revolutionary – it enables companies to make smart business decisions in a dynamic setting without being specifically taught for that. With all that’s happening around us, unpredictability seems to have become the new normal. Thus, we expect ground-breaking leaps in RL to help us deal with unforeseen circumstances. Everyone’s talking about optimization of resources, but it is reinforcement learning that can truly leverage data to maximize rewards, where no other model can. RL is still in its early days, so we will likely see several breakthroughs in the field within the next few years in industries like economics, biology, and astronomy. ## When Man Meets Machine: Will ML Replace Humans in the Future? With the latest advancements in technology, many of us can’t help but ask, “Will machines take over all the functions of a human?” While the vision of robots ruling the world seems quite unrealistic, people are still worried about the stability of their jobs in the future. That’s when you should take a step back and reframe the whole image of machine learning. Instead of wiping out the need for human labor, ML disruptions will lead to a job demand shift. The basic requirements for a certain role today will likely include a different set of competencies tomorrow. We like to see it this way – machine learning developments level up both machines and people. So, make sure you stay current and keep up with the latest trends! For businesses, machine learning will remain a buzzword in the years to come. Its prominence is inevitable, even necessary, for the world to cope with the volumes of data we produce daily. Yet, we are far from discovering its true potential and reaching maturity. Ultimately, machines can’t do it all by themselves. As the saying goes, it takes two to tango. Both businesses and technologies will be tirelessly working towards making the world a better place, not replacing humans. ## The Future of Machine Learning: Next Steps Machine learning is an exciting field with immense untapped potential. So, if you're an aspiring data scientist and ML technologies fascinate you, then you should start learning today. The industry is evolving every day, with new breakthroughs being discovered as we speak. Don’t let yourself fall behind on the trends! 
Are you ready for the next step toward a career in data science? The 365 Data Science Program offers self-paced courses led by renowned industry experts. Starting from the very basics all the way to advanced specialization, you will learn by doing with a myriad of practical exercises and real-world business cases. If you want to see how the training works, start with our free lessons by signing up below.
2022-05-26 13:24:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1882767230272293, "perplexity": 1533.1939198691193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662606992.69/warc/CC-MAIN-20220526131456-20220526161456-00697.warc.gz"}
http://stats.stackexchange.com/questions/40793/using-standardized-y-in-elastic-net
# Using standardized Y in Elastic Net I have an Elastic Net model that is selecting a number of variables from X, for prediction of Y. The assumption for Elastic Net is that X is standardized (I'm using Z-Scores), and Y is centered around zero (I'm using Y-mean(Y)). So, I am wondering if my Elastic Net model will act differently if I use both standardized X and Y (i.e., z-scores for both X and Y)? -
2013-12-10 00:29:38
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8417839407920837, "perplexity": 955.7670207559572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164002922/warc/CC-MAIN-20131204133322-00031-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.codecogs.com/library/engineering/fluid_mechanics/pipes/head_loss/pipe-head-loss.php
• https://me.yahoo.com The pressure head lost due to flow through pipes and other losses. ## Overview In general the flow of liquid along a pipe can be determined by the use of The Bernoulli Equation and the Continuity Equation. The former represents the conservation of energy, which in Newtonian fluids is either potential or kinetic energy, and the latter ensures that what goes into one end of a pipe must comes out at the other end. However as the flow moves down the pipe, losses due to friction between the moving liquid and the walls of pipe cause the pressure within the pipe to reduce with distance - this is known as head loss. Note: Only Incompressible liquids are being considered. ## Head Lost Due To Friction In The Pipe Two equations can be used when the flow is either Laminar or Turbulent. These are: ##### Darcy's Equation For Round Pipes For Round Pipes: $h_f=\frac{4f\;l\;v^2}{2&space;g&space;\;d}$ where • $\inline&space;h_f$ is the head loss due to friction [m] • l is the length of the pipe [m], • d is the hydraulic diameter of the pipe. For circular sections this equals the internal diameter of the pipe [m]. • v is the velocity within the pipe [$\inline&space;m/s^2$] • g is the acceleration due to gravity • f is the coefficient of friction. This equation can be expressed in terms of the quantity flowing per second. $h_f=\frac{4flv^2}{2g\;d}=\frac{4fl}{2g\;d}\times&space;\left&space;(&space;\frac{4Q}{\pi&space;d^2}&space;\right&space;)^2$ ##### Darcy's Equation For Non Circular Pipes $h_f=\frac{f&space;l&space;v^2}{2g\;\mu&space;}$ where $\inline&space;\mu=$Wetted area / Wetted perimeter ##### The Chezy Equation $v=C\sqrt{mi}$ where • i is $\inline&space;\frac{h_f}{l}$ • m is $\inline&space;\frac{A}{P}$ or wetted area or wetted perimeter and $C=\sqrt{\frac{2g}{f}}$ ##### Reynolds Number $R_N=\frac{Density&space;\times\,Velocity\times\,Diam.&space;\,of\,pipe}{Viscosity}$ Which equals 2,300 at the point where the flow changes from Laminar to Turbulent. This is known as the Critical Velocity If $\inline&space;Ln$ $\inline&space;hf$ is plotted against $\inline&space;Ln$ $\inline&space;v$ The critical velocity is the velocity at which the change over from laminar to turbulent flow takes place. Consider a circular pipe running full $V&space;=&space;C\:\sqrt[]{m\:i}$ $\therefore\;\;\;V^2\;=\:\frac{2g}{f}\:\:\frac{A}{\rho}\:\;\frac{h}{l}$ $\therefore\;\;\;h\:=&space;f\;\frac{P\:L}{A}\:.\:\frac{V^2}{2g}$ $\therefore\;\;\;h&space;=&space;f\;\frac{\pi\,d\,l}{\pi\,\displaystyle\frac{d^2}{4}}\;.\;\frac{V^2}{2g}$ Which can be written as The DARCY EQUATION $=&space;4\:f\frac{l\:V^2}{2\:d\,g}$ Note: $\inline&space;f$ in the Darcy formula is not an empirical coefficient For Laminar flow $f\:=\:\frac{16}{R}$ The change over from laminar flow to turbulent flow occurs when $\inline&space;R$ = 2300 and is independent of whether the pipe walls are smooth or rough. ### Laminar Flow When the flow is laminar it is possible to use the following equation to find the head lost. The Poiseuille equation states that for a round pipe the head lost due to friction is given by: $h_F=\frac{32\times&space;\mu\;l\;v}{\rho\;g\;d^2}$ Flow occurs because the force across $\inline&space;AB$ is more than that on $\inline&space;AB$ and hence it is possible to write down the following equation. 
$(P_1-P_2)\times&space;\pi\;&space;r^2=-\mu\;\frac{dv}{dr}\times&space;2\pi&space;r&space;l$ $\therefore&space;\;\;\;\;\;-dv=\left&space;(&space;\frac{P_1-P_2}{l}&space;\right&space;)\times&space;\frac{r}{2\mu}dr$ Integrating $-\left&space;[&space;v&space;\right&space;]_{v_r}^{v_0}=\left&space;(&space;\frac{P_1-P_2}{l}&space;\right&space;)\left&space;[&space;\frac{r^2}{4\mu}&space;\right&space;]_{r_0}^{r_1}$ $\therefore&space;\;\;\;\;\;v_r=v_0-\left&space;(&space;\frac{P_1-P_2}{l}&space;\right&space;)\frac{r^2}{4\mu}$ But when $r=&space;\frac{d}{2}\;\;\v=\0$ $\therefore&space;\;\;\;\;\;v_0=\left&space;(&space;\frac{P_1-P_2}{l\succ&space;}&space;\right&space;)\frac{d^2}{16\mu}$ $\therefore&space;\;\;\;\;\;v_r=\left&space;(&space;\frac{P_1-P_2}{l\succ&space;}&space;\right&space;)\frac{d^2}{16\mu}-\left&space;(&space;\frac{p_1-p_2}{l}&space;\right&space;)\frac{r^2}{4\mu}$ Thus: $v_r=\frac{p_1-p_2}{l}\times&space;\left&space;(&space;\frac{d^2}{16\mu}-\frac{r^2}{4\mu}&space;\right&space;)$ But: $Q=\int_{0}^{\frac{d}{2}}v_r\times&space;2\pi\;&space;r\;dr$ $\therefore&space;\;\;\;\;\;\;Q=\int_{0}^{\frac{d}{2}}\frac{P_1-P_2}{l}\times&space;\left&space;(&space;\frac{d^2}{16\mu}-\frac{r^2}{4\mu}&space;\right&space;)\times&space;2\pi\;&space;r\;&space;dr$ $=\frac{p_1-p_2}{l}\times&space;\frac{2\pi}{4\mu}\int_{0}^{\frac{d}{2}}\left&space;(&space;\frac{d^2}{4}-r^2&space;\right&space;)r\;dr$ $=\frac{p_1-p_2}{l}\times&space;\frac{2\pi}{4\mu}\left&space;[&space;\frac{d^2}{4}.\frac{r^2}{2}-\frac{r^4}{4}&space;\right&space;]&space;_{0}^{\frac{d}{2}}$ $=\frac{p_1-p_2}{l}\times&space;\frac{2\pi}{4\mu}\times&space;\frac{1}{4}\left&space;[&space;\frac{d^4}{8}-\frac{d^4}{16}&space;\right&space;]$ $=\frac{p_1-p_2}{l}\times&space;\frac{2\pi}{4\mu}\times&space;\frac{1}{4}\times&space;\frac{d^4}{16}$ $=\frac{p_1-p_2}{l}\times&space;\frac{\pi\;d^4}{128\mu}$ Now the mean velocity is given by:- $v_m=\frac{Q}{\pi\;d^2/4}$ $=\frac{p_1-p_2}{l}\times&space;\frac{\pi\;d^4}{128\mu}\times&space;\frac{4}{\pi\;d^2}$ $=\frac{p_1-p_2}{l}\times&space;\frac{d^4}{32\mu}=\frac{V_0}{2}$ From equation (4) The head lost due to friction is given by: $h_f=\left&space;(&space;\frac{p_1-p_2}{\omega}&space;\right&space;)=\frac{p_1-p_2}{\rho\;g}$ Substituting from equation (5) for $\inline&space;P_1-P_2$ $h_f=\frac{32\;\mu\;l\;v}{\rho\;g\;d^2}$ Example: [imperial] ##### Example - Example 1 Problem Water is siphoned out of a tank by means of a bent pipe $\inline&space;ABC$, 80 ft. long and 1 in. in diameter. $\inline&space;A$ is below the water surface and 6 in . above the base sof the tank. $\inline&space;AB$ is vertical and 30 ft long; $\inline&space;BS$ is 50 ft. long with the discharge end $\inline&space;C$ 5ft. below the base of the tank. If the barometer is 34 ft. of water and the siphon action at $\inline&space;B$ ceases when the absolute pressure is 6 ft. of water, find the depth of water in the tank when the siphon action ceases. $\inline&space;F$ is 0.008 and the loss of head at entry to the pipe is: $=0.5\frac{v^2}{2g}$ Where $\inline&space;v$ is the velocity of water in the pipe. Workings Bernoulli's Equation is: $\frac{p_1}{w}+\frac{v_1^2}{2g}+Z_1=\frac{p_2}{w}+\frac{v_2^2}{2g}+Z_2+L$ where $\inline&space;L=$Losses Applying Bernoulli for the pipe length $\inline&space;AB$. Note that the pressures quoted in the question have been expressed in ft. 
of water and therefore: $(34+h)+0+5\displaystyle\frac{1}{2}=6+\frac{v^2}{2g}+35\displaystyle\frac{1}{2}+0.5\frac{v^2}{2g}+\frac{4\times&space;0.008\times&space;30\times&space;v^2}{2g\times&space;1/12}$ Applying Bernoulli for the whole pipe length: $(34+h)+0+5\displaystyle\frac{1}{2}=34+\frac{v^2}{2g}+0+0.5\frac{v^2}{2g}+\frac{4\times&space;0.008\times&space;80\times&space;v^2}{2g\times&space;1/12}$ From equations (1) and (2) $41\displaystyle\frac{1}{2}-34=\frac{4\times&space;0.008\times&space;50\times&space;v^2}{2g\times&space;1/12}$ From which: $v=5.02\;ft./sec.$ Substituting in equation (1) $39\displaystyle\frac{1}{2}+h=41\displaystyle\frac{1}{2}+\frac{5.02^2}{2\times&space;32.2}\left&space;(&space;1.5+\frac{120\times&space;0.008}{1/12}&space;\right&space;)$ $\therefore&space;\;\;\;\;\;\;h=7.6\;ft.$ Solution The depth of water is $\inline&space;h=7.6\;ft.$ Please note that further worked examples on this topic will be found in the paper on "Hydraulic Gradients"
2022-05-19 16:21:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 66, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5415194630622864, "perplexity": 625.3738952334253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529538.2/warc/CC-MAIN-20220519141152-20220519171152-00231.warc.gz"}
https://www.physicsforums.com/threads/effective-potential-concept.154438/
# Effective potential concept Hi, I'm looking for some information about effective potential, but I haven't found any (Wikipedia, Googled...). I was just willing to get a rough understanding of the concept, and understand what it is. Thank you very much. pervect Staff Emeritus It's at least mentioned in the context of GR at http://www.fourmilab.ch/gravitation/orbits/. This is pretty much taken from the textbook "Gravitation" by Misner, Thorne, Wheeler which goes into more detail along the same lines. The fundamental idea here is that a body orbiting a central mass obeys certain differential equations, known as the geodesic equations. Furthermore, due to the presence of symmetries of the problem, certain quantities of the orbital motion analogous to Newtonian energy and angular momentum are conserved. The term is also sometimes used in the context of analyzing Newtonian orbits. There are some references in Goldstein, "Classical mechanics", I think, but I haven't found any references for the Newtonian usage online. (The idea is definitely disucssed in Goldstein - I think the name is used as well, but I'm not positive). In the Newtonain version of "effective potential", one observes that the differential equation for the radial part of the motion of a body orbiting a central mass can be separated into two different differential equations (i.e. the equations are separable) - one for the radial part of the motion, and the other for the angular part. The differential equation for only the radial part of the motion is a 1-d problem that can be physically re-interpreted as a mass and a (fictitious) non-linear spring. The nonlinear spring can be modelled by an "effective potential". The GR usage is very similar - the "effective potential" is similar to the above fictitious 1-d potential in the Newtonian case. The "energy at infinity" in GR is more closely analogous to the usual Newtonian potential (but one must be careful not to assume the analogy goes too far). I hope this helps some - I've tried to be clear, but it's early in the morning here :-(. I've got to run now, but I'll check back later. However there's something that's not clear in my mind: in fourmilab.ch, the author talks about "the position of the test mass on the gravitational energy curve". Does this mean that effective potential is gravitationnal potential energy? Last edited: pervect Staff Emeritus However there's something that's not clear in my mind: in fourmilab.ch, the author talks about "the position of the test mass on the gravitational energy curve". Does this mean that effective potential is gravitational potential energy? Not really. I'd suggest interpreting what they wrote as "the position of the test mass on the effective potential curve" instead. The authors of this webpage also call it the "gravitational effective potential" curve elsewhere, so they haven't "polished" their webpage, calling the same concept by several slightly different names. The idea of separating kinetic and potential energy makes some sense in Newtonian theory, which has an "absolute space". In GR, one is better off avoiding this sort of separation, and dealing only with total energy, rather than attempting to separate it into "kinetic" and "potential" parts, because GR has no concept of "absolute space". Thus the best approach in the spirit of GR for this particular case would be to saythat GR's concept of "energy at infinity" is approximately the same as the Newtonian concept of "kinetic plus potential energy". 
On the webpage, the "energy at infinity" is represented by the symbol $$\~{E}$$ which is a constant number for any particle following a geodesic. Note that "the position of the test mass on the gravitational energy curve" that the authors were talking about isn't the same as the "energy at infinity" - the former is a fictitious number that results when one reduces the problem to one dimension. Also note that in the interest of trying to keep things simple, I've been talking about only a static, Schwarzschild geometry. The concept of energy in GR has other nuances which I've deliberately avoided. Last edited: OK, so as far as I've understood, effective potential is a field where those geodesic equations apply and any object is subject to energy at infinity. Is that right? Thanks. Last edited: pervect Staff Emeritus
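To make the 1-D reduction discussed in the thread concrete, here is a minimal sketch (not part of the original thread; the orbital values are illustrative assumptions) of the Newtonian effective potential $V_{\rm eff}(r)=-\frac{GMm}{r}+\frac{L^2}{2mr^2}$, whose minimum corresponds to a circular orbit:

```python
# Minimal sketch of the Newtonian effective potential that turns the radial part
# of an orbit into a 1-D problem.  All numbers are illustrative assumptions.
G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
M = 5.972e24    # central mass (roughly Earth's) [kg]
m = 1000.0      # orbiting test mass [kg]
L = 5.3e13      # orbital angular momentum [kg m^2/s]

def v_eff(r):
    """Effective potential governing the radial motion only."""
    return -G * M * m / r + L**2 / (2.0 * m * r**2)

r_c = L**2 / (G * M * m**2)   # radius of the circular orbit (minimum of v_eff)
print(f"circular-orbit radius r_c = {r_c:.3e} m")
print(f"V_eff(r_c) = {v_eff(r_c):.3e} J")
# Nearby radii give a larger V_eff, confirming r_c is a minimum:
print(v_eff(0.9 * r_c) > v_eff(r_c), v_eff(1.1 * r_c) > v_eff(r_c))
```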
2021-02-28 10:42:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8020041584968567, "perplexity": 427.231116339502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360745.35/warc/CC-MAIN-20210228084740-20210228114740-00539.warc.gz"}
http://www.se.cs.titech.ac.jp/%7Ehayashi/
# HAYASHI, Shinpei at se.cs.titech.ac.jp [Japanese] Shinpei Hayashi is an assistant professor at Department of Computer Science, School of Computing, Tokyo Institute of Technology. He received a B.Eng. degree from Hokkaido University in 2004. He also received M.Eng. and Dr.Eng. degrees from Tokyo Institute of Technology in 2006 and 2008, respectively. ### Was Location West-8E Bldg. #901, Ookayama Campus, Tokyo Institute of Technology Ookayama 2-12-1-W8-83, Ookayama, Meguro-ku, Tokyo 152-8552, Japan Phone/Fax. +81-3-5734-3920 or skype:hayashi.shinpei ## Current Interests Software Engineering, in particular, • Analyses/supports of software evolution/changes (esp., refactorings) • Program understanding and feature/concept location ## P{ublic,resent}ations ### To Be Published 1. Junzo Kato and Motoshi Saeki and Atsushi Ohnishi and Haruhiko Kaiya and Shinpei Hayashi and Shuichiro Yamamoto: "Supporting Construction of a Thesaurus for Requirements Elicitation" (in Japanese). IPSJ Journal, vol. 57, no. 7, pp. 1576-1589. jul, 2016. 2. Haruhiko Kaiya, Shinpei Ogata, Shinpei Hayashi, Motoshi Saeki: "Early Requirements Analysis for a Socio-Technical System based on Goal Dependencies". In Proceedings of the 15th International Conference on Intelligent Software Methodologies, Tools and Techniques (SOMET 2016). Larnaca, Cyprus, sep, 2016. ### Recent Publications (2016) 1. Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "Context-Based Code Smells Prioritization for Prefactoring". In Proceedings of the 24th International Conference on Program Comprehension (ICPC 2016), pp. 1-10. Austin, Texas, USA, may, 2016. ID DOI: 10.1109/ICPC.2016.7503705 Abstract To find opportunities for applying prefactoring, several techniques for detecting bad smells in source code have been proposed. Existing smell detectors are often unsuitable for developers who have a specific context because these detectors do not consider their current context and output the results that are mixed with both smells that are and are not related to such context. Consequently, the developers must spend a considerable amount of time identifying relevant smells. As described in this paper, we propose a technique to prioritize bad code smells using developers' context. The explicit data of the context are obtained using a list of issues extracted from an issue tracking system. We applied impact analysis to the list of issues and used the results to specify which smells are associated with the context. Consequently, our approach can provide developers with a list of prioritized bad code smells related to their current context. Several evaluations using open source projects demonstrate the effectiveness of our technique. BibTeX @inproceedings{natthawute-icpc2016, author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki}, title = {Context-Based Code Smells Prioritization for Prefactoring}, booktitle = {Proceedings of the 24th International Conference on Program Comprehension}, pages = {1--10}, year = 2016, month = {may}, } [natthawute-icpc2016]: as a page 2. Katsuhisa Maruyama, Takayuki Omori, Shinpei Hayashi: "Slicing Fine-Grained Code Change History". IEICE Transactions on Information and Systems, vol. E99-D, no. 3, pp. 671-687. mar, 2016. ID DOI: 10.1587/transinf.2015EDP7282 Abstract Change-aware development environments can automatically record fine-grained code changes on a program and allow programmers to replay the recorded changes in chronological order. 
However, since they do not always need to replay all the code changes to investigate how a particular entity of the program has been changed, they often eliminate several code changes of no interest by manually skipping them in replaying. This skipping action is an obstacle that makes many programmers hesitate when they use existing replaying tools. This paper proposes a slicing mechanism that automatically removes manually skipped code changes from the whole history of past code changes and extracts only those necessary to build a particular class member of a Java program. In this mechanism, fine-grained code changes are represented by edit operations recorded on the source code of a program and dependencies among edit operations are formalized. The paper also presents a running tool that slices the operation history and replays its resulting slices. With this tool, programmers can avoid replaying nonessential edit operations for the construction of class members that they want to understand. Experimental results show that the tool offered improvements over conventional replaying tools with respect to the reduction of the number of edit operations needed to be examined and over history filtering tools with respect to the accuracy of edit operations to be replayed. BibTeX @article{maruyama-ieicet201603, author = {Katsuhisa Maruyama and Takayuki Omori and Shinpei Hayashi}, title = {Slicing Fine-Grained Code Change History}, journal = {IEICE Transactions on Information and Systems}, volume = {E99-D}, number = 3, pages = {671--687}, year = 2016, month = {mar}, } [maruyama-ieicet201603]: as a page ### Papers Published in Academic Journals 1. Teppei Kato, Shinpei Hayashi, Motoshi Saeki: "Combining Dynamic Feature Location with Call Graph Separation" (in Japanese). IEICE Transactions on Information and Systems, vol. J98-D, no. 11, pp. 1374-1376. nov, 2015. ID DOI: 10.14923/transinfj.2015SSL0001 Abstract We combine a dynamic feature location technique based on formal concept analysis with a call graph separation technique, and examine, based on the results of applying it to an example, how to obtain the set of modules corresponding to a feature with good accuracy even when the prepared execution scenarios are insufficient. BibTeX @article{kato-ieicet-ss2015, author = {Teppei Kato and Shinpei Hayashi and Motoshi Saeki}, title = {Combining Dynamic Feature Location with Call Graph Separation}, journal = {IEICE Transactions on Information and Systems}, volume = {J98-D}, number = 11, pages = {1374--1376}, year = 2015, month = {nov}, } [kato-ieicet-ss2015]: as a page 2. Takayuki Omori and Shinpei Hayashi and Katsuhisa Maruyama: "A survey on methods of recording fine-grained operations on integrated development environments and their applications" (in Japanese). Computer Software, vol. 32, no. 1, pp. 60-80. feb, 2015. ID DOI: 10.11309/jssst.32.1_60 Abstract This paper presents a survey on techniques to record and utilize developers’ operations on integrated development environments (IDEs). Especially, we let techniques treating fine-grained code changes be targets of this survey for reference in software evolution research. We created a three-tiered model to represent the relationships among IDEs, recording techniques, and application techniques. This paper also presents common features of the techniques and their details. BibTeX @article{omori-jssst-survey2015, author = {Takayuki Omori and Shinpei Hayashi and Katsuhisa Maruyama}, title = {A survey on methods of recording fine-grained operations on integrated development environments and their applications}, journal = {Computer Software}, volume = 32, number = 1, pages = {60--80}, year = 2015, month = {feb}, } [omori-jssst-survey2015]: as a page 3.
Eunjong Choi and Kenji Fujiwara and Norihiro Yoshida and Shinpei Hayashi: "A Survey of Refactoring Detection Techniques Based on Change History Analysis" (in Japanese). Computer Software, vol. 32, no. 1, pp. 47-59. feb, 2015. ID DOI: 10.11309/jssst.32.1_47 Abstract Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure. Not only researchers but also practitioners need to know past instances of refactoring performed in a software development project. So far, a number of techniques have been proposed on the automatic detection of refactoring instances. Those techniques have been presented in various international conferences and journals, and it is difficult for researchers and practitioners to grasp the current status of studies on refactoring detection techniques. In this survey paper, we introduce refactoring detection techniques, especially in techniques based on change history analysis. At first, we give the definition and the categorization of refactoring detection in this paper, and then introduce refactoring detection techniques based on change history analysis. Finally, we discuss possible future research directions on refactoring detection. BibTeX @article{choi-jssst-survey2015, author = {Eunjong Choi and Kenji Fujiwara and Norihiro Yoshida and Shinpei Hayashi}, title = {A Survey of Refactoring Detection Techniques Based on Change History Analysis}, journal = {Computer Software}, volume = 32, number = 1, pages = {47--59}, year = 2015, month = {feb}, } [choi-jssst-survey2015]: as a page 4. Daiki Hoshino and Shinpei Hayashi and Motoshi Saeki: "Automated Grouping of Editing Operations of Source Code" (in Japanese). Computer Software, vol. 31, no. 3, pp. 277-283. aug, 2014. ID DOI: 10.11309/jssst.31.3_277 Abstract In software configuration management, it is important to separate source code changes into meaningful units before committing them (in short, Task Level Commit). However, developers often commit unrelated code changes in a single transaction. To support Task Level Commit, an existing technique uses an editing history of source code and enables developers to group the editing operations in the history. This paper proposes an automated technique for grouping editing operations in a history based on several criteria including source files, classes, methods, comments, and times editted. We show how our technique reduces developers' separating cost compared with the manual approach. BibTeX @article{dhoshino-jssst201408, author = {Daiki Hoshino and Shinpei Hayashi and Motoshi Saeki}, title = {Automated Grouping of Editing Operations of Source Code}, journal = {Computer Software}, volume = 31, number = 3, pages = {277--283}, year = 2014, month = {aug}, } [dhoshino-jssst201408]: as a page 5. Takanori Ugai and Shinpei Hayashi and Motoshi Saeki: "Quality Properties of Goals in an Attributed Goal Graph" (in Japanese). IPSJ Journal, vol. 55, no. 2, pp. 893-908. feb, 2014. URL http://id.nii.ac.jp/1001/00098488/ Abstract Goal-oriented requirements analysis (GORA) is a promising technique in requirements engineering, especially requirements elicitation. This paper aims at developing a technique to support the improvement of goal graphs, which are resulting artifacts of GORA. We consider that the technique of improving existing goals of lower quality is more realistic rather than that of creating a goal graph of high quality from scratch. 
To achieve the proposed technique, we define quality properties for each goal formally. Our quality properties result from IEEE Std 830 and past related studies. To define them formally, using attribute values of an attributed goal graph, we formulate predicates for deciding if a goal satisfies a quality property or not. We have implemented a supporting tool to show a requirements analyst the goals which do not satisfy the predicates. Our experiments using the tool show that requirements analysts can efficiently find and modify the qualitatively problematic goals. BibTeX @article{ugai-ipsjj201402, author = {Takanori Ugai and Shinpei Hayashi and Motoshi Saeki}, title = {Quality Properties of Goals in an Attributed Goal Graph}, journal = {IPSJ Journal}, volume = 55, number = 2, pages = {893--908}, year = 2014, month = {feb}, } [ugai-ipsjj201402]: as a page 6. Motoshi Saeki, Shinpei Hayashi, Haruhiko Kaiya: "Enhancing Goal-Oriented Security Requirements Analysis Using Common Criteria-Based Knowledge". International Journal of Software Engineering and Knowledge Engineering, vol. 23, no. 5, pp. 695-720. jun, 2013. ID DOI: 10.1142/S0218194013500174 Abstract Goal-oriented requirements analysis (GORA) is one of the promising techniques to elicit software requirements, and it is natural to consider its application to security requirements analysis. In this paper, we proposed a method for goal-oriented security requirements analysis using security knowledge which is derived from several security targets (STs) compliant to Common Criteria (CC, ISO/IEC 15408). We call such knowledge security ontology for an application domain (SOAD). Three aspects of security such as confidentiality, integrity and availability are included in the scope of our method because the CC addresses these three aspects. We extract security-related concepts such as assets, threats, countermeasures and their relationships from STs, and utilize these concepts and relationships for security goal elicitation and refinement in GORA. The usage of certificated STs as knowledge source allows us to reuse efficiently security-related concepts of higher quality. To realize our proposed method as a supporting tool, we use an existing method GOORE (goal-oriented and ontology-driven requirements elicitation method) combining with SOAD. In GOORE, terms and their relationships in a domain ontology play an important role of semantic processing such as goal refinement and conflict identification. SOAD is defined based on concepts in STs. In contrast with other goal-oriented security requirements methods, the knowledge derived from actual STs contributes to eliciting security requirements in our method. In addition, the relationships among the assets, threats, objectives and security functional requirements can be directly reused for the refinement of security goals. We show an illustrative example to show the usefulness of our method and evaluate the method in comparison with other goal-oriented security requirements analysis methods. BibTeX @article{saeki-ijseke201306, author = {Motoshi Saeki and Shinpei Hayashi and Haruhiko Kaiya}, title = {Enhancing Goal-Oriented Security Requirements Analysis Using Common Criteria-Based Knowledge}, journal = {International Journal of Software Engineering and Knowledge Engineering}, volume = 23, number = 5, pages = {695--720}, year = 2013, month = {jun}, } [saeki-ijseke201306]: as a page 7. 
Takayuki Omori and Katsuhisa Maruyama and Shinpei Hayashi and Atsushi Sawada: "A Literature Review on Software Evolution Research" (in Japanese). Computer Software, vol. 29, no. 3, pp. 3-28. aug, 2012. ID DOI: 10.11309/jssst.29.3_3 Abstract Software must be continually evolved to keep up with users’ needs. In this article, we propose a new taxonomy of software evolution. It consists of three perspectives: methods, targets, and objectives of evolution. We also present a literature review on software evolution based on our taxonomy. The result could provide a concrete baseline in discussing research trends and directions in the field of software evolution. BibTeX @article{omori-jssst-fose2012, author = {Takayuki Omori and Katsuhisa Maruyama and Shinpei Hayashi and Atsushi Sawada}, title = {A Literature Review on Software Evolution Research}, journal = {Computer Software}, volume = 29, number = 3, pages = {3--28}, year = 2012, month = {aug}, } [omori-jssst-fose2012]: as a page 8. Takanori Ugai and Shinpei Hayashi and Motoshi Saeki: "A Supporting Tool to Identify Stakeholders' Imbalance and Lack in Requirements Analysis" (in Japanese). IPSJ Journal, vol. 53, no. 4, pp. 1448-1460. apr, 2012. URL http://id.nii.ac.jp/1001/00081787/ Abstract Software requirements elicitation is a cooperative work by stakeholders. It is important for project managers and analysts to understand stakeholder concerns and to identify potential problems such as imbalance or lack of stakeholders. This paper presents a technique and a tool which visualize the strength of stakeholders' interest of concerns on two dimensional screens. The tool generates anchored maps from an attributed goal graph by AGORA, which is an extended version of goal-oriented analysis methods. It has stakeholders' interest to concerns and its degree as the attributes of goals. Additionally an experimental evaluation is described, whose results show the user of the tool could identify imbalance and lack of stakeholders more accurately in shorter time than the case with a table of stakeholders and requirements. BibTeX @article{ugai-ipsjj201204, author = {Takanori Ugai and Shinpei Hayashi and Motoshi Saeki}, title = {A Supporting Tool to Identify Stakeholders' Imbalance and Lack in Requirements Analysis}, journal = {IPSJ Journal}, volume = 53, number = 4, pages = {1448--1460}, year = 2012, month = {apr}, } [ugai-ipsjj201204]: as a page 9. Shinpei Hayashi, Daisuke Tanabe, Haruhiko Kaiya, Motoshi Saeki: "Impact Analysis on an Attributed Goal Graph". IEICE Transactions on Information and Systems, vol. E95-D, no. 4, pp. 1012-1020. apr, 2012. ID DOI: 10.1587/transinf.E95.D.1012 URL http://search.ieice.org/bin/summary.php?id=e95-d_4_1012&category=D&year=2012&lang=E Abstract Requirements changes frequently occur at any time of a software development process, and their management is a crucial issue to develop software of high quality. Meanwhile, goal-oriented analysis techniques are being put into practice to elicit requirements. In this situation, the change management of goal graphs and its support are necessary. This paper presents a technique related to the change management of goal graphs, realizing impact analysis on a goal graph when its modifications occur. Our impact analysis detects conflicts that arise when a new goal is added, and investigates the achievability of the other goals when an existing goal is deleted. We have implemented a supporting tool for automating the analysis. Two case studies suggested the efficiency of the proposed approach. 
BibTeX @article{hayashi-ieicet-kbse2012, author = {Shinpei Hayashi and Daisuke Tanabe and Haruhiko Kaiya and Motoshi Saeki}, title = {Impact Analysis on an Attributed Goal Graph}, journal = {IEICE Transactions on Information and Systems}, volume = {E95-D}, number = 4, pages = {1012--1020}, year = 2012, month = {apr}, } [hayashi-ieicet-kbse2012]: as a page 10. Shinpei Hayashi and Katsuyuki Sekine and Motoshi Saeki: "Interactive Support for Understanding Feature Implementation with Feature Location" (in Japanese). IPSJ Journal, vol. 53, no. 2, pp. 578-589. feb, 2012. URL http://id.nii.ac.jp/1001/00080669/ Abstract This paper proposes an interactive approach for efficiently understanding a feature implementation by applying feature location (FL). Although existing FL techniques can reduce the understanding cost, it is still an open issue to construct the appropriate inputs for the techniques. In our approach, the inputs of FL are incrementally improved by interactions between users and the FL system. By understanding a code fragment obtained using FL, users can find more appropriate queries from the identifiers in the fragment. Furthermore, the relevance feedback, obtained by partially judging whether or not a code fragment is required to understand, improves the evaluation score of FL. Users can then obtain more accurate results. We have implemented a supporting tool of our approach. Evaluation results using the tool show that our interactive approach is feasible and that it can reduce the understanding cost more effectively than the non-interactive approach. BibTeX @article{hayashi-ipsjj-se2012, author = {Shinpei Hayashi and Katsuyuki Sekine and Motoshi Saeki}, title = {Interactive Support for Understanding Feature Implementation with Feature Location}, journal = {IPSJ Journal}, volume = 53, number = 2, pages = {578--589}, year = 2012, month = {feb}, } [hayashi-ipsjj-se2012]: as a page 11. Haruhiko Kaiya and Yuutarou Shimizu and Hirotaka Yasui and Kenji Kaijiri and Shinpei Hayashi and Motoshi Saeki: "Enhancing Domain Knowledge for Requirements Elicitation with Web Mining" (in Japanese). IPSJ Journal, vol. 53, no. 2, pp. 495-509. feb, 2012. URL http://id.nii.ac.jp/1001/00080661/ Abstract Software engineers require knowledge about a problem domain when they elicit requirements for a system about the domain. Explicit descriptions about such knowledge such as domain ontology contribute to eliciting such requirements correctly and completely. Methods for eliciting requirements using ontology have been thus proposed, and such ontology is normally developed based on documents and/or experts in the problem domain. However, it is not easy for engineers to elicit requirements correctly and completely only with such domain ontology because they are not normally experts in the problem domain. In this paper, we propose a method and a tool to enhance domain ontology using Web mining. Our method and the tool help engineers to add additional knowledge suitable for them to understand domain ontology. According to our method, candidates of such additional knowledge are gathered from Web pages using keywords in existing domain ontology. The candidates are then prioritized based on the degree of the relationship between each candidate and existing ontology and on the frequency and the distribution of the candidate over Web pages. Engineers finally add new knowledge to existing ontology out of these prioritized candidates. 
We also show an experiment and its results for confirming enhanced ontology enables engineers to elicit requirements more completely and correctly than existing ontology does. BibTeX @article{kaiya-ipsjj-se2012, author = {Haruhiko Kaiya and Yuutarou Shimizu and Hirotaka Yasui and Kenji Kaijiri and Shinpei Hayashi and Motoshi Saeki}, title = {Enhancing Domain Knowledge for Requirements Elicitation with Web Mining}, journal = {IPSJ Journal}, volume = 53, number = 2, pages = {495--509}, year = 2012, month = {feb}, } [kaiya-ipsjj-se2012]: as a page 12. Rodion Moiseev, Shinpei Hayashi, Motoshi Saeki: "Using Hierarchical Transformation to Generate Assertion Code from OCL Constraints". IEICE Transactions on Information and Systems, vol. E94-D, no. 3, pp. 612-621. mar, 2011. ID DOI: 10.1587/transinf.E94.D.612 URL http://search.ieice.org/bin/summary.php?id=e94-d_3_612&category=D&year=2011&lang=E Abstract Object Constraint Language (OCL) is frequently applied in software development for stipulating formal constraints on software models. Its platform-independent characteristic allows for wide usage during the design phase. However, application in platform-specific processes, such as coding, is less obvious because it requires usage of bespoke tools for that platform. In this paper we propose an approach to generate assertion code for OCL constraints for multiple platform specific languages, using a unified framework based on structural similarities of programming languages. We have succeeded in automating the process of assertion code generation for four different languages using our tool. To show effectiveness of our approach in terms of development effort, an experiment was carried out and summarised. BibTeX @article{rodion-ieicet201103, author = {Rodion Moiseev and Shinpei Hayashi and Motoshi Saeki}, title = {Using Hierarchical Transformation to Generate Assertion Code from OCL Constraints}, journal = {IEICE Transactions on Information and Systems}, volume = {E94-D}, number = 3, pages = {612--621}, year = 2011, month = {mar}, } [rodion-ieicet201103]: as a page 13. Hiroshi Kazato and Shinpei Hayashi and Takashi Kobayashi and Motoshi Saeki: "Choosing Software Implementation Technologies Using Bayesian Networks" (in Japanese). IPSJ Journal, vol. 51, no. 9, pp. 1765-1776. sep, 2010. Abstract It is difficult to estimate how a combination of implementation technologies influences quality attributes on an entire system. In this paper, we propose a technique to choose implementation technologies by modeling casual dependencies between requirements and technoloies probabilistically using Bayesian networks. We have implemented our technique on a Bayesian network tool and applied it to a case study of a business application to show its effectiveness. BibTeX @article{kazato-ipsjj-se2010, author = {Hiroshi Kazato and Shinpei Hayashi and Takashi Kobayashi and Motoshi Saeki}, title = {Choosing Software Implementation Technologies Using Bayesian Networks}, journal = {IPSJ Journal}, volume = 51, number = 9, pages = {1765--1776}, year = 2010, month = {sep}, } [kazato-ipsjj-se2010]: as a page 14. Takashi Kobayashi and Shinpei Hayashi: "Recent Researches for Supporting Software Construction and Maintenance with Data Mining" (in Japanese). Computer Software, vol. 27, no. 3, pp. 13-23. aug, 2010. ID DOI: 10.11309/jssst.27.3_13 Abstract This paper discusses recent studies on technologies for supporting software construction and maintenance by analyzing various software engineering data. 
We also introduce typical data mining techniques for analyzing the data. BibTeX @article{tkobaya-jssst-fose2010, author = {Takashi Kobayashi and Shinpei Hayashi}, title = {Recent Researches for Supporting Software Construction and Maintenance with Data Mining}, journal = {Computer Software}, volume = 27, number = 3, pages = {13--23}, year = 2010, month = {aug}, } [tkobaya-jssst-fose2010]: as a page 15. Shinpei Hayashi and Yusuke Sasaki and Motoshi Saeki: "Evaluating Alternatives of Source Code Changes with Analytic Hierarchy Process" (in Japanese). Computer Software, vol. 27, no. 2, pp. 118-123. may, 2010. ID DOI: 10.11309/jssst.27.2_118 Abstract This paper proposes a technique for selecting the most appropriate alternative of source code changes based on the commitment of a software development project by each developer of the project. In the technique, we evaluate the alternative changes by using an evaluation function with integrating multiple software metrics to suppress the influence of each developer’s subjectivity. By regarding the selection of the alternative changes as a multiple criteria decision making, we create the function with Analytic Hierarchy Process. A preliminary evaluation shows the efficiency of the technique. BibTeX @article{hayashi-jssst-fose2010, author = {Shinpei Hayashi and Yusuke Sasaki and Motoshi Saeki}, title = {Evaluating Alternatives of Source Code Changes with Analytic Hierarchy Process}, journal = {Computer Software}, volume = 27, number = 2, pages = {118--123}, year = 2010, month = {may}, } [hayashi-jssst-fose2010]: as a page 16. Shinpei Hayashi, Yasuyuki Tsuda, Motoshi Saeki: "Search-Based Refactoring Detection from Source Code Revisions". IEICE Transactions on Information and Systems, vol. E93-D, no. 4, pp. 754-762. apr, 2010. ID DOI: 10.1587/transinf.E93.D.754 URL http://search.ieice.org/bin/summary.php?id=e93-d_4_754 Abstract This paper proposes a technique for detecting the occurrences of refactoring from source code revisions. In a real software development process, a refactoring operation may sometimes be performed together with other modifications at the same revision. This means that detecting refactorings from the differences between two versions stored in a software version archive is not usually an easy process. In order to detect these impure refactorings, we model the detection within a graph search. Our technique considers a version of a program as a state and a refactoring as a transition between two states. It then searches for the path that approaches from the initial to the final state. To improve the efficiency of the search, we use the source code differences between the current and the final state for choosing the candidates of refactoring to be applied next and estimating the heuristic distance to the final state. Through case studies, we show that our approach is feasible to detect combinations of refactorings. BibTeX @article{hayashi-ieicet-kbse2010, author = {Shinpei Hayashi and Yasuyuki Tsuda and Motoshi Saeki}, title = {Search-Based Refactoring Detection from Source Code Revisions}, journal = {IEICE Transactions on Information and Systems}, volume = {E93-D}, number = 4, pages = {754--762}, year = 2010, month = {apr}, } [hayashi-ieicet-kbse2010]: as a page 17. Takeshi Obayashi, Shinpei Hayashi, Motoshi Saeki, Hiroyuki Ohta, Kengo Kinoshita: "ATTED-II provides coexpressed gene networks for Arabidopsis". Nucleic Acids Research, vol. 37, DB issue, pp. 987-991. jan, 2009. 
ID DOI: 10.1093/nar/gkn807 URL http://www.pubmed.gov/?db=pubmed&cmd=retrieve&list_uids=18953027 Abstract ATTED-II (http://atted.jp) is a database of gene coexpression in Arabidopsis that can be used to design a wide variety of experiments, including the prioritization of genes for functional identification or for studies of regulatory relationships. Here, we report updates of ATTED-II that focus especially on functionalities for constructing gene networks with regard to the following points: (i) introducing a new measure of gene coexpression to retrieve functionally related genes more accurately, (ii) implementing clickable maps for all gene networks for step-by-step navigation, (iii) applying Google Maps API to create a single map for a large network, (iv) including information about protein-protein interactions, (v) identifying conserved patterns of coexpression and (vi) showing and connecting KEGG pathway information to identify functional modules. With these enhanced functions for gene network representation, ATTED-II can help researchers to clarify the functional and regulatory networks of genes in Arabidopsis. BibTeX @article{obayashi-nar-db2009, author = {Takeshi Obayashi and Shinpei Hayashi and Motoshi Saeki and Hiroyuki Ohta and Kengo Kinoshita}, title = {{ATTED-II} provides coexpressed gene networks for Arabidopsis}, journal = {Nucleic Acids Research}, volume = 37, number = {DB issue}, pages = {987--991}, year = 2009, month = {jan}, } [obayashi-nar-db2009]: as a page 18. Shinpei Hayashi, Junya Katada, Ryota Sakamoto, Takashi Kobayashi, Motoshi Saeki: "Design Pattern Detection by Using Meta Patterns". IEICE Transactions on Information and Systems, vol. E91-D, no. 4, pp. 933-944. apr, 2008. ID DOI: 10.1093/ietisy/e91-d.4.933 URL http://search.ieice.org/bin/summary.php?id=e91-d_4_933 Abstract One of the approaches to improve program understanding is to extract what kinds of design pattern are used in existing object-oriented software. This paper proposes a technique for efficiently and accurately detecting occurrences of design patterns included in source codes. We use both static and dynamic analyses to achieve the detection with high accuracy. Moreover, to reduce computation and maintenance costs, detection conditions are hierarchically specified based on Pree's meta patterns as common structures of design patterns. The usage of Prolog to represent the detection conditions enables us to easily add and modify them. Finally, we have implemented an automated tool as an Eclipse plug-in and conducted experiments with Java programs. The experimental results show the effectiveness of our approach. BibTeX @article{hayashi-ieicet-kbse2008, author = {Shinpei Hayashi and Junya Katada and Ryota Sakamoto and Takashi Kobayashi and Motoshi Saeki}, title = {Design Pattern Detection by Using Meta Patterns}, journal = {IEICE Transactions on Information and Systems}, volume = {E91-D}, number = 4, pages = {933--944}, year = 2008, month = {apr}, } [hayashi-ieicet-kbse2008]: as a page 19. Takeshi Obayashi, Shinpei Hayashi, Masayuki Shibaoka, Motoshi Saeki, Hiroyuki Ohta, Kengo Kinoshita: "COXPRESdb: a database of coexpressed gene networks in mammals". Nucleic Acids Research, vol. 36, DB issue, pp. 77-82. jan, 2008. 
ID DOI: 10.1093/nar/gkm840 URL http://www.pubmed.gov/?db=pubmed&cmd=retrieve&list_uids=17932064 Abstract A database of coexpressed gene sets can provide valuable information for a wide variety of experimental designs, such as targeting of genes for functional identification, gene regulation and/or protein-protein interactions. Coexpressed gene databases derived from publicly available GeneChip data are widely used in Arabidopsis research, but platforms that examine coexpression for higher mammals are rather limited. Therefore, we have constructed a new database, COXPRESdb (coexpressed gene database) (http://coxpresdb.hgc.jp), for coexpressed gene lists and networks in human and mouse. Coexpression data could be calculated for 19 777 and 21 036 genes in human and mouse, respectively, by using the GeneChip data in NCBI GEO. COXPRESdb enables analysis of the four types of coexpression networks: (i) highly coexpressed genes for every gene, (ii) genes with the same GO annotation, (iii) genes expressed in the same tissue and (iv) user-defined gene sets. When the networks became too big for the static picture on the web in GO networks or in tissue networks, we used Google Maps API to visualize them interactively. COXPRESdb also provides a view to compare the human and mouse coexpression patterns to estimate the conservation between the two species. BibTeX @article{obayashi-nar-db2008, author = {Takeshi Obayashi and Shinpei Hayashi and Masayuki Shibaoka and Motoshi Saeki and Hiroyuki Ohta and Kengo Kinoshita}, title = {{COXPRESdb}: a database of coexpressed gene networks in mammals}, journal = {Nucleic Acids Research}, volume = 36, number = {DB issue}, pages = {77--82}, year = 2008, month = {jan}, } [obayashi-nar-db2008]: as a page 20. Takeshi Obayashi, Kengo Kinoshita, Kenta Nakai, Masayuki Shibaoka, Shinpei Hayashi, Motoshi Saeki, Daisuke Shibata, Kazuki Saito, Hiroyuki Ohta: "ATTED-II: a database of co-expressed genes and cis elements for identifying co-regulated gene groups in Arabidopsis". Nucleic Acids Research, vol. 35, DB issue, pp. 863-869. jan, 2007. ID DOI: 10.1093/nar/gkl783 URL http://www.pubmed.gov/?db=pubmed&cmd=retrieve&list_uids=17130150 Abstract Publicly available database of co-expressed gene sets would be a valuable tool for a wide variety of experimental designs, including targeting of genes for functional identification or for regulatory investigation. Here, we report the construction of an Arabidopsis thaliana trans-factor and cis-element prediction database (ATTED-II) that provides co-regulated gene relationships based on co-expressed genes deduced from microarray data and the predicted cis elements. ATTED-II (http://www.atted.bio.titech.ac.jp) includes the following features: (i) lists and networks of co-expressed genes calculated from 58 publicly available experimental series, which are composed of 1388 GeneChip data in A.thaliana; (ii) prediction of cis-regulatory elements in the 200 bp region upstream of the transcription start site to predict co-regulated genes amongst the co-expressed genes; and (iii) visual representation of expression patterns for individual genes. ATTED-II can thus help researchers to clarify the function and regulation of particular genes and gene networks. 
BibTeX @article{obayashi-nar-db2007, author = {Takeshi Obayashi and Kengo Kinoshita and Kenta Nakai and Masayuki Shibaoka and Shinpei Hayashi and Motoshi Saeki and Daisuke Shibata and Kazuki Saito and Hiroyuki Ohta}, title = {{ATTED-II}: a database of co-expressed genes and {\it cis} elements for identifying co-regulated gene groups in {\it Arabidopsis}}, journal = {Nucleic Acids Research}, volume = 35, number = {DB issue}, pages = {863--869}, year = 2007, month = {jan}, } [obayashi-nar-db2007]: as a page 21. Shinpei Hayashi, Motoshi Saeki, Masahito Kurihara: "Supporting Refactoring Activities Using Histories of Program Modification". IEICE Transactions on Information and Systems, vol. E89-D, no. 4, pp. 1403-1412. apr, 2006. ID DOI: 10.1093/ietisy/e89-d.4.1403 URL http://search.ieice.org/bin/summary.php?id=e89-d_4_1403 Abstract Refactoring is one of the promising techniques for improving program design by means of program transformation with preserving behavior, and is widely applied in practice. However, it is difficult for engineers to identify how and where to refactor programs, because proper knowledge and skills of a high order are required of them. In this paper, we propose the technique to instruct how and where to refactor a program by using a sequence of its modifications. We consider that the histories of program modifications reflect developers' intentions, and focusing on them allows us to provide suitable refactoring guides. Our technique can be automated by storing the correspondence of modification patterns to suitable refactoring operations. By implementing an automated supporting tool, we show its feasibility. The tool is implemented as a plug-in for Eclipse IDE. It selects refactoring operations by matching between a sequence of program modifications and modification patterns. BibTeX @article{hayashi-ieicet-kbse2006, author = {Shinpei Hayashi and Motoshi Saeki and Masahito Kurihara}, title = {Supporting Refactoring Activities Using Histories of Program Modification}, journal = {IEICE Transactions on Information and Systems}, volume = {E89-D}, number = 4, pages = {1403--1412}, year = 2006, month = {apr}, } [hayashi-ieicet-kbse2006]: as a page ### Research Talks Presented in International Conferences, Workshops, or Symposia 1. Haruhiko Kaiya, Shinpei Ogata, Shinpei Hayashi, Motoshi Saeki, Takao Okubo, Nobukazu Yoshioka, Hironori Washizaki, Atsuo Hazeyama: "Finding Potential Threats in Several Security Targets for Eliciting Security Requirements". In Proceedings of the 10th International Multi-Conference on Computing in the Global Information Technology (ICCGI 2015), pp. 83-92. St. Julians, Malta, oct, 2015. URL Abstract Threats to existing systems help requirements analysts to elicit security requirements for a new system similar to such systems because security requirements specify how to protect the system against threats and similar systems require similar means for protection. We propose a method of finding potential threats that can be used for eliciting security requirements for such a system. The method enables analysts to find additional security requirements when they have already elicited one or a few threats. The potential threats are derived from several security targets (STs) in the Common Criteria. An ST contains knowledge related to security requirements such as threats and objectives. It also contains their explicit relationships. In addition, individual objectives are explicitly related to the set of means for protection, which are commonly used in any STs. 
Because we focus on such means to find potential threats, our method can be applied to STs written in any languages, such as English or French. We applied and evaluated our method to three different domains. In our evaluation, we enumerated all threat pairs in each domain. We then predicted whether a threat and another in each pair respectively threaten the same requirement according to the method. The recall of the prediction was more than 70% and the precision was 20 to 40% in three domains. BibTeX @inproceedings{kaiya-iccgi2015, author = {Haruhiko Kaiya and Shinpei Ogata and Shinpei Hayashi and Motoshi Saeki and Takao Okubo and Nobukazu Yoshioka and Hironori Washizaki and Atsuo Hazeyama}, title = {Finding Potential Threats in Several Security Targets for Eliciting Security Requirements}, booktitle = {Proceedings of the 10th International Multi-Conference on Computing in the Global Information Technology}, pages = {83--92}, year = 2015, month = {oct}, } [kaiya-iccgi2015]: as a page 2. Tatsuya Abe, Shinpei Hayashi, Motoshi Saeki: "Modeling and Utilizing Security Knowledge for Eliciting Security Requirements". In Proceedings of the 2nd International Workshop on Conceptual Modeling in Requirements and Business Analysis (MReBa 2015), co-located with ER 2015, pp. 236-247. Stockholm, Sweden, oct, 2015. ID DOI: 10.1007/978-3-319-25747-1_24 Abstract In order to develop secure information systems with less development cost, it is important to elicit the requirements to security functions (simply security requirements) as early in their development process as possible. To achieve it, accumulated knowledge of threats and their objectives obtained from practical experiences is useful, and the technique to support the elicitation of security requirements utilizing this knowledge should be developed. In this paper, we present the technique for security requirements elicitation using practical knowledge of threats, their objectives and security functions realizing the objectives, which is extracted from Security Target documents compliant to the standard Common Criteria. We show the usefulness of our approach with several case studies. BibTeX @inproceedings{abe-mreba2015, author = {Tatsuya Abe and Shinpei Hayashi and Motoshi Saeki}, title = {Modeling and Utilizing Security Knowledge for Eliciting Security Requirements}, booktitle = {Proceedings of the 2nd International Workshop on Conceptual Modeling in Requirements and Business Analysis}, pages = {236--247}, year = 2015, month = {oct}, } [abe-mreba2015]: as a page 3. Ryotaro Nakamura, Yu Negishi, Shinpei Hayashi, Motoshi Saeki: "Terminology Matching of Requirements Specification Documents and Regulations for Consistency Checking". In Proceedings of the 8th International Workshop on Requirements Engineering and Law (RELAW 2015), co-located with RE'15, pp. 10-18. Ottawa, Canada, aug, 2015. ID DOI: 10.1109/RELAW.2015.7330206 Abstract To check the consistency between requirements specification documents and regulations by using a model checking technique, requirements analysts generate inputs to the model checker, i.e., state transition machines from the documents and logical formulas from the regulatory statements to be verified as properties. 
During these generation processes, to make the logical formulas semantically correspond to the state transition machine, analysts should take terminology matching where they look for the words in the requirements document having the same meaning as the words in the regulatory statements and unify the semantically same words. In this paper, by using case grammar approach, we propose an automated technique to reason the meaning of words in requirements specification documents by means of cooccurrence constraints on words in case frames, and to generate from regulatory statements the logical formulas where the words are unified to the words of the requirements documents. We have a feasibility study of our proposal with two case studies. BibTeX @inproceedings{nakamura-relaw2015, author = {Ryotaro Nakamura and Yu Negishi and Shinpei Hayashi and Motoshi Saeki}, title = {Terminology Matching of Requirements Specification Documents and Regulations for Consistency Checking}, booktitle = {Proceedings of the 8th International Workshop on Requirements Engineering and Law}, pages = {10--18}, year = 2015, month = {aug}, } [nakamura-relaw2015]: as a page 4. Jumpei Matsuda, Shinpei Hayashi, Motoshi Saeki: "Hierarchical Categorization of Edit Operations for Separately Committing Large Refactoring Results". In Proceedings of the 14th International Workshop on Principles of Software Evolution (IWPSE 2015), co-located with ESEC/FSE 2015, pp. 19-27. Bergamo, Italy, aug, 2015. ID DOI: 10.1145/2804360.2804363 Abstract In software configuration management using a version control system, developers have to follow the commit policy of the project. However, preparing changes according to the policy are sometimes cumbersome and time-consuming, in particular when applying large refactoring consisting of multiple primitive refactoring instances. In this paper, we propose a technique for re-organizing changes by recording editing operations of source code. Editing operations including refactoring operations are hierarchically managed based on their types provided by an integrated development environment. Using the obtained hierarchy, developers can easily configure the granularity of changes and obtain the resulting changes based on the configured granularity. We confirmed the feasibility of the technique by applying it to the recorded changes in a large refactoring process. BibTeX @inproceedings{jmatsu-iwpse2015, author = {Jumpei Matsuda and Shinpei Hayashi and Motoshi Saeki}, title = {Hierarchical Categorization of Edit Operations for Separately Committing Large Refactoring Results}, booktitle = {Proceedings of the 14th International Workshop on Principles of Software Evolution}, pages = {19--27}, year = 2015, month = {aug}, } [jmatsu-iwpse2015]: as a page 5. Wataru Inoue, Shinpei Hayashi, Haruhiko Kaiya, Motoshi Saeki: "Multi-Dimensional Goal Refinement in Goal-Oriented Requirements Engineering". In Proceedings of the 10th International Conference on Software Engineering and Applications (ICSOFT-EA 2015), pp. 185-195. Colmar, Alsace, France, jul, 2015. ID DOI: 10.5220/0005499301850195 Abstract In this paper, we propose a multi-dimensional extension of goal graphs in goal-oriented requirements engineering in order to support the understanding the relations between goals, i.e., goal refinements. Goals specify multiple concerns such as functions, strategies, and non-functions, and they are refined into sub goals from mixed views of these concerns. 
This intermixture of concerns in goals makes it difficult for a requirements analyst to understand and maintain goal graphs. In our approach, a goal graph is put in a multi-dimensional space, a concern corresponds to a coordinate axis in this space, and goals are refined into sub goals referring to the coordinates. Thus, the meaning of a goal refinement is explicitly provided by means of the coordinates used for the refinement. By tracing and focusing on the coordinates of goals, requirements analysts can understand goal refinements and modify unsuitable ones. We have developed a supporting tool and made an exploratory experiment to evaluate the usefulness of our approach. BibTeX @inproceedings{inouew-icsoft2015, author = {Wataru Inoue and Shinpei Hayashi and Haruhiko Kaiya and Motoshi Saeki}, title = {Multi-Dimensional Goal Refinement in Goal-Oriented Requirements Engineering}, booktitle = {Proceedings of the 10th International Conference on Software Engineering and Applications}, pages = {185--195}, year = 2015, month = {jul}, } [inouew-icsoft2015]: as a page 6. Yoshiki Higo, Akio Ohtani, Shinpei Hayashi, Hideaki Hata, Shinji Kusumoto: "Toward Reusing Code Changes". In Proceedings of the 12th Working Conference on Mining Software Repositories (MSR 2015), pp. 372-376. Florence, Italy, may, 2015. ID DOI: 10.1109/MSR.2015.43 Abstract Existing techniques have succeeded to help developers implement new code. However, they are insufficient to help to change existing code. Previous studies have proposed techniques to support bug fixes but other kinds of code changes such as function enhancements and refactorings are not supported by them. In this paper, we propose a novel system that helps developers change existing code. Unlike existing techniques, our system can support any kinds of code changes if similar code changes occurred in the past. Our research is still on very early stage and we have not have any implementation or any prototype yet. This paper introduces our research purpose, an outline of our system, and how our system is different from existing techniques. BibTeX @inproceedings{higo-msr2015, author = {Yoshiki Higo and Akio Ohtani and Shinpei Hayashi and Hideaki Hata and Shinji Kusumoto}, title = {Toward Reusing Code Changes}, booktitle = {Proceedings of the 12th Working Conference on Mining Software Repositories}, pages = {372--376}, year = 2015, month = {may}, } [higo-msr2015]: as a page 7. Shinpei Hayashi, Daiki Hoshino, Jumpei Matsuda, Motoshi Saeki, Takayuki Omori, Katsuhisa Maruyama: "Historef: A Tool for Edit History Refactoring". In Proceedings of the 22nd IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER 2015), Tool Demo Track, pp. 469-473. Montréal, Canada, mar, 2015. ID DOI: 10.1109/SANER.2015.7081858 Abstract This paper presents Historef, a tool for automatin edit history refactoring on Eclipse IDE for Java programs. The aim of our history refactorings is to improve the understandability and/or usability of the history without changing its whole effect. Historef enables us to apply history refactorings to the recorded edit history in the middle of the source code editing process by a developer. By using our integrated tool, developers can commit the refactored edits into underlying SCM repository after applying edit history refactorings so that they are easy to manage their changes based on the performed edits. 
BibTeX @inproceedings{hayashi-saner2015, author = {Shinpei Hayashi and Daiki Hoshino and Jumpei Matsuda and Motoshi Saeki and Takayuki Omori and Katsuhisa Maruyama}, title = {Historef: A Tool for Edit History Refactoring}, booktitle = {Proceedings of the 22nd IEEE International Conference on Software Analysis, Evolution, and Reengineering}, pages = {469--473}, year = 2015, month = {mar}, } [hayashi-saner2015]: as a page 8. Shinpei Hayashi, Takashi Ishio, Hiroshi Kazato, Tsuyoshi Oshima: "Toward Understanding How Developers Recognize Features in Source Code from Descriptions". In Proceedings of the 9th International Workshop on Advanced Modularization Techniques (AOAsia/Pacific 2014), co-located with FSE 2014, pp. 1-3. Hong Kong, China, nov, 2014. ID DOI: 10.1145/2666358.2666578 Abstract A basic clue of feature location available to developers is a description of a feature written in a natural language. However, a description of a feature does not clearly specify the boundary of the feature, while developers tend to locate the feature precisely by excluding marginal modules that are likely outside of the boundary. This paper addresses a question: does a clearer description of a feature enable developers to recognize the same sets of modules as relevant to the feature? Based on the conducted experiment with subjects, we conclude that different descriptions lead to a different set of modules. Slide BibTeX @inproceedings{hayashi-aoasia2014, author = {Shinpei Hayashi and Takashi Ishio and Hiroshi Kazato and Tsuyoshi Oshima}, title = {Toward Understanding How Developers Recognize Features in Source Code from Descriptions}, booktitle = {Proceedings of the 9th International Workshop on Advanced Modularization Techniques}, pages = {1--3}, year = 2014, month = {nov}, } [hayashi-aoasia2014]: as a page 9. Shinpei Hayashi, Takuto Yanagida, Motoshi Saeki, Hidenori Mimura: "Class Responsibility Assignment as Fuzzy Constraint Satisfaction". In Proceedings of the 6th International Workshop on Empirical Software Engineering in Practice (IWESEP 2014), pp. 19-24. Osaka, Japan, nov, 2014. ID DOI: 10.1109/IWESEP.2014.13 Abstract We formulate the class responsibility assignment (CRA) problem as the fuzzy constraint satisfaction problem (FCSP) for automating CRA of high quality. Responsibilities are contracts or obligations of objects that they should assume, by aligning them to classes appropriately, quality designs realize. Typical conditions of a desirable design are having a low coupling between highly cohesive classes. However, because of a trade-off among such conditions, solutions that satisfy the conditions moderately are desired, and computer assistance is needed. Additionally, if we have an initial assignment, the improved one by our technique should keep the original assignment as much as possible because it involves with the intention of human designers. We represent such conditions as fuzzy constraints, and formulate CRA as FCSP. That enables us to apply common FCSP solvers to the problem and to derive solution representing a CRA. The conducted preliminary evaluation indicates the effectiveness of our technique. 
Slide BibTeX @inproceedings{hayashi-iwesep2014, author = {Shinpei Hayashi and Takuto Yanagida and Motoshi Saeki and Hidenori Mimura}, title = {Class Responsibility Assignment as Fuzzy Constraint Satisfaction}, booktitle = {Proceedings of the 6th International Workshop on Empirical Software Engineering in Practice}, pages = {19--24}, year = 2014, month = {nov}, } [hayashi-iwesep2014]: as a page 10. Katsuhisa Maruyama, Takayuki Omori, Shinpei Hayashi: "A Visualization Tool Recording Historical Data of Program Comprehension Tasks". In Proceedings of the 22nd International Conference on Program Comprehension (ICPC 2014), Tool Demo Track, pp. 207-211. Hyderabad, India, jun, 2014. ID DOI: 10.1145/2597008.2597802 Abstract Software visualization has become a major technique in program comprehension. Although many tools visualize the structure, behavior, and evolution of a program, they have no concern with how a tool user has understood it. Moreover, they miss the stuff the user has left through trial-and-error processes of his/her program comprehension task. This paper presents a source code visualization tool called CodeForest. It uses a forest metaphor to depict source code of Java programs. Each tree represents a class within the program and the collection of trees constitutes a three-dimensional forest. CodeForest helps a user to try a large number of combinations of mapping of software metrics on visual parameters. Moreover, it provides two new types of support: leaving notes that memorize the current understanding and insight along with visualized objects, and automatically recording a user's actions under understanding. The left notes and recorded actions might be used as historical data that would be hints accelerating the current comprehension task. BibTeX @inproceedings{maruyama-icpc2014, author = {Katsuhisa Maruyama and Takayuki Omori and Shinpei Hayashi}, title = {A Visualization Tool Recording Historical Data of Program Comprehension Tasks}, booktitle = {Proceedings of the 22nd International Conference on Program Comprehension}, pages = {207--211}, year = 2014, month = {jun}, } [maruyama-icpc2014]: as a page 11. Hiroshi Kazato, Shinpei Hayashi, Tsuyoshi Oshima, Shunsuke Miyata, Takashi Hoshino, Motoshi Saeki: "Extracting and Visualizing Implementation Structure of Features". In Proceedings of the 20th Asia-Pacific Software Engineering Conference (APSEC 2013), pp. 476-484. Bangkok, Thailand, dec, 2013. ID DOI: 10.1109/APSEC.2013.69 Abstract Feature location is an activity to identify correspondence between features in a system and program elements in source code. After a feature is located, developers need to understand implementation structure around the location from static and/or behavioral points of view. This paper proposes a semi-automatic technique both for locating features and exposing their implementation structures in source code, using a combination of dynamic analysis and two data analysis techniques, sequential pattern mining and formal concept analysis. We have implemented our technique in a supporting tool and applied it to an example of a web application. The result shows that the proposed technique is not only feasible but helpful to understand implementation of features just after they are located. 
BibTeX @inproceedings{kazato-apsec2013, author = {Hiroshi Kazato and Shinpei Hayashi and Tsuyoshi Oshima and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki}, title = {Extracting and Visualizing Implementation Structure of Features}, booktitle = {Proceedings of the 20th Asia-Pacific Software Engineering Conference}, pages = {476--484}, year = 2013, month = {dec}, } [kazato-apsec2013]: as a page 12. Tatsuya Abe, Shinpei Hayashi, Motoshi Saeki: "Modeling Security Threat Patterns to Derive Negative Scenarios". In Proceedings of the 20th Asia-Pacific Software Engineering Conference (APSEC 2013), pp. 58-66. Bangkok, Thailand, dec, 2013. ID DOI: 10.1109/APSEC.2013.19 Abstract The elicitation of security requirements is a crucial issue to develop secure business processes and information systems of higher quality. Although we have several methods to elicit security requirements, most of them do not provide sufficient supports to identify security threats. Since threats do not occur so frequently, like exceptional events, it is much more difficult to determine the potentials of threats exhaustively rather than identifying normal behavior of a business process. To reduce this difficulty, accumulated knowledge of threats obtained from practical setting is necessary. In this paper, we present the technique to model knowledge of threats as patterns by deriving the negative scenarios that realize threats and to utilize them during business process modeling. The knowledge is extracted from Security Target documents, based on the international Common Criteria Standard, and the patterns are described with transformation rules on sequence diagrams. In our approach, an analyst composes normal scenarios of a business process with sequence diagrams, and the threat patterns matched to them derives negative scenarios. Our approach has been demonstrated on several examples, to show its practical application. BibTeX @inproceedings{abe-apsec2013, author = {Tatsuya Abe and Shinpei Hayashi and Motoshi Saeki}, title = {Modeling Security Threat Patterns to Derive Negative Scenarios}, booktitle = {Proceedings of the 20th Asia-Pacific Software Engineering Conference}, pages = {58--66}, year = 2013, month = {dec}, } [abe-apsec2013]: as a page 13. Shinpei Hayashi, Sirinut Thangthumachit, Motoshi Saeki: "REdiffs: Refactoring-Aware Difference Viewer for Java". In Proceedings of the 20th Working Conference on Reverse Engineering (WCRE 2013), Tool Demonstrations Track, pp. 487-488. Koblenz-Landau, Germany, oct, 2013. ID DOI: 10.1109/WCRE.2013.6671331 Abstract Comparing and understanding differences between old and new versions of source code are necessary in various software development situations. However, if changes are tangled with refactorings in a single revision, then the resulting source code differences are more complicated. We propose an interactive difference viewer which enables us to separate refactoring effects from source code differences for improving the understandability of the differences. BibTeX @inproceedings{hayashi-wcre2013, author = {Shinpei Hayashi and Sirinut Thangthumachit and Motoshi Saeki}, title = {REdiffs: Refactoring-Aware Difference Viewer for Java}, booktitle = {Proceedings of the 20th Working Conference on Reverse Engineering}, pages = {487--488}, year = 2013, month = {oct}, } [hayashi-wcre2013]: as a page 14. Takashi Ishio, Shinpei Hayashi, Hiroshi Kazato, Tsuyoshi Oshima: "On the Effectiveness of Accuracy of Automated Feature Location Technique". 
In Proceedings of the 20th Working Conference on Reverse Engineering (WCRE 2013), pp. 381-390. Koblenz-Landau, Germany, oct, 2013. ID DOI: 10.1109/WCRE.2013.6671313 Abstract Automated feature location techniques have been proposed to extract program elements that are likely to be relevant to a given feature. A more accurate result is expected to enable developers to perform more accurate feature location. However, several experiments assessing traceability recovery have shown that analysts cannot utilize an accurate traceability matrix for their tasks. Because feature location deals with a certain type of traceability links, it is an important question whether the same phenomena are visible in feature location or not. To answer that question, we have conducted a controlled experiment. We have asked 20 subjects to locate features using lists of methods of which the accuracy is controlled artificially. The result differs from the traceability recovery experiments. Subjects given an accurate list would be able to locate a feature more accurately. However, subjects could not locate the complete implementation of features in 83% of tasks. Results show that the accuracy of automated feature location techniques is effective, but it might be insufficient for perfect feature location. BibTeX @inproceedings{ishio-wcre2013, author = {Takashi Ishio and Shinpei Hayashi and Hiroshi Kazato and Tsuyoshi Oshima}, title = {On the Effectiveness of Accuracy of Automated Feature Location Technique}, booktitle = {Proceedings of the 20th Working Conference on Reverse Engineering}, pages = {381--390}, year = 2013, month = {oct}, } [ishio-wcre2013]: as a page 15. Hiroshi Kazato, Shinpei Hayashi, Takashi Kobayashi, Tsuyoshi Oshima, Satoshi Okada, Shunsuke Miyata, Takashi Hoshino, Motoshi Saeki: "Incremental Feature Location and Identification in Source Code". In Proceedings of the 17th European Conference on Software Maintenance and Reengineering (CSMR 2013), ERA Track, pp. 371-374. Genova, Italy, mar, 2013. ID DOI: 10.1109/CSMR.2013.52 Abstract Feature location (FL) in source code is an important task for program understanding. Existing dynamic FL techniques depend on sufficient scenarios for exercising the features to be located. However, it is difficult to prepare such scenarios because it involves a correct understanding of the features. This paper proposes an incremental technique for refining the identification of features integrated with the existing FL technique using formal concept analysis. In our technique, we classify the differences of static and dynamic dependencies of method invocations based on their relevance to the identified features. According to the classification, the technique suggests method invocations to exercise unexplored part of the features. An application example indicates the effectiveness of the approach. Slide BibTeX @inproceedings{kazato-csmr2013, author = {Hiroshi Kazato and Shinpei Hayashi and Takashi Kobayashi and Tsuyoshi Oshima and Satoshi Okada and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki}, title = {Incremental Feature Location and Identification in Source Code}, booktitle = {Proceedings of the 17th European Conference on Software Maintenance and Reengineering}, pages = {371--374}, year = 2013, month = {mar}, } [kazato-csmr2013]: as a page 16. Haruhiko Kaiya, Shunsuke Morita, Shinpei Ogata, Kenji Kaijiri, Shinpei Hayashi, Motoshi Saeki: "Model Transformation Patterns for Introducing Suitable Information Systems". 
In Proceedings of the 19th Asia-Pacific Software Engineering Conference (APSEC 2012), pp. 434-439. Hong Kong, dec, 2012. ID DOI: 10.1109/APSEC.2012.52 Abstract When information systems are introduced in a social setting such as a business, the systems will give bad and good impacts on stakeholders in the setting. Requirements analysts have to predict such impacts in advance because stakeholders cannot decide whether the systems are really suitable for them without such prediction. In this paper, we propose a method based on model transformation patterns for introducing suitable information systems. We use metrics of a model to predict whether a system introduction is suitable for a social setting. Through a case study, we show our method can avoid an introduction of a system, which was actually bad for some stakeholders. In the case study, we use a strategic dependency model in i* to specify the model of systems and stakeholders, and attributed graph grammar for model transformation. We focus on the responsibility and the satisfaction of stakeholders as the criteria for suitability about systems introduction in this case study. BibTeX @inproceedings{kaiya-apsec2012, author = {Haruhiko Kaiya and Shunsuke Morita and Shinpei Ogata and Kenji Kaijiri and Shinpei Hayashi and Motoshi Saeki}, title = {Model Transformation Patterns for Introducing Suitable Information Systems}, booktitle = {Proceedings of the 19th Asia-Pacific Software Engineering Conference}, pages = {434--439}, year = 2012, month = {dec}, } [kaiya-apsec2012]: as a page 17. Teppei Kato, Shinpei Hayashi, Motoshi Saeki: "Cutting a Method Call Graph for Supporting Feature Location". In Proceedings of the 4th International Workshop on Empirical Software Engineering in Practice (IWESEP 2012), pp. 55-57. Osaka, Japan, oct, 2012. ID DOI: 10.1109/IWESEP.2012.17 Abstract This paper proposes a technique for locating the implementation of features by combining techniques of a graph cut and a formal concept analysis based on methods and scenarios. BibTeX @inproceedings{kato-iwesep2012, author = {Teppei Kato and Shinpei Hayashi and Motoshi Saeki}, title = {Cutting a Method Call Graph for Supporting Feature Location}, booktitle = {Proceedings of the 4th International Workshop on Empirical Software Engineering in Practice}, pages = {55--57}, year = 2012, month = {oct}, } [kato-iwesep2012]: as a page 18. Katsuhisa Maruyama, Eijiro Kitsu, Takayuki Omori, Shinpei Hayashi: "Slicing and Replaying Code Change History". In Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering (ASE 2012), Short paper session, pp. 246-249. Essen, Germany, sep, 2012. ID DOI: 10.1145/2351676.2351713 Abstract Change-aware development environments have recently become feasible and reasonable. These environments can automatically record fine-grained code changes on a program and allow programmers to replay the recorded changes in chronological order. However, they do not always need to replay all the code changes to investigate how a particular entity of the program has been changed. Therefore, they often skip several code changes of no interest. This skipping action is an obstacle that makes many programmers hesitate in using existing replaying tools. This paper proposes a slicing mechanism that can extract only code changes necessary to construct a particular class member of a Java program from the whole history of past code changes. 
In this mechanism, fine-grained code changes are represented by edit operations recorded on source code of a program. The paper also presents a running tool that implements the proposed slicing and replays its resulting slices. With this tool, programmers can avoid replaying edit operations nonessential to the construction of class members they want to understand. BibTeX @incollection{maruyama-ase2012, author = {Katsuhisa Maruyama and Eijiro Kitsu and Takayuki Omori and Shinpei Hayashi}, title = {Slicing and Replaying Code Change History}, booktitle = {Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering}, pages = {246--249}, year = 2012, month = {sep}, } [maruyama-ase2012]: as a page 19. Shinpei Hayashi, Takayuki Omori, Teruyoshi Zenmyo, Katsuhisa Maruyama, Motoshi Saeki: "Refactoring Edit History of Source Code". In Proceedings of the 28th IEEE International Conference on Software Maintenance (ICSM 2012), ERA Track, pp. 617-620. Riva del Garda, Trento, Italy, sep, 2012. ID DOI: 10.1109/ICSM.2012.6405336 Abstract This paper proposes a concept for refactoring an edit history of source code and a technique for its automation. The aim of our history refactoring is to improve the clarity and usefulness of the history without changing its overall effect. We have defined primitive history refactorings including their preconditions and procedures, and large refactorings composed of these primitives. Moreover, we have implemented a supporting tool that automates the application of history refactorings in the middle of a source code editing process. Our tool enables developers to pursue some useful applications using history refactorings such as task level commit from an entangled edit history and selective undo of past edit operations. Slide BibTeX @inproceedings{hayashi-icsm2012, author = {Shinpei Hayashi and Takayuki Omori and Teruyoshi Zenmyo and Katsuhisa Maruyama and Motoshi Saeki}, title = {Refactoring Edit History of Source Code}, booktitle = {Proceedings of the 28th IEEE International Conference on Software Maintenance}, pages = {617--620}, year = 2012, month = {sep}, } [hayashi-icsm2012]: as a page 20. Haruhiko Kaiya, Shunsuke Morita, Kenji Kaijiri, Shinpei Hayashi, Motoshi Saeki: "Facilitating Business Improvement by Information Systems using Model Transformation and Metrics". In Proceedings of the CAiSE'12 Forum at the 24th International Conference on Advanced Information Systems Engineering (CAiSE 2012), pp. 106-113. Gdańsk, Poland, jun, 2012. URL http://ceur-ws.org/Vol-855/paper13.pdf Abstract We propose a method to explore how to improve business by introducing information systems. We use a meta-modeling technique to specify the business itself and its metrics. The metrics are defined based on the structural information of the business model, so that they can help us to identify whether the business is good or not with respect to several different aspects. We also use a model transformation technique to specify an idea of the business improvement. The metrics help us to predict whether the improvement idea makes the business better or not. We use strategic dependency (SD) models in i* to specify the business, and attributed graph grammar (AGG) for the model transformation. 
BibTeX @inproceedings{kaiya-caise2012, author = {Haruhiko Kaiya and Shunsuke Morita and Kenji Kaijiri and Shinpei Hayashi and Motoshi Saeki}, title = {Facilitating Business Improvement by Information Systems using Model Transformation and Metrics}, booktitle = {Proceedings of the CAiSE'12 Forum at the 24th International Conference on Advanced Information Systems Engineering}, pages = {106--113}, year = 2012, month = {jun}, } [kaiya-caise2012]: as a page 21. Hiroshi Kazato, Shinpei Hayashi, Satoshi Okada, Shunsuke Miyata, Takashi Hoshino, Motoshi Saeki: "Toward Structured Location of Features". In Proceedings of the 20th IEEE International Conference on Program Comprehension (ICPC 2012), Poster Session, pp. 255-256. Passau, Germany, jun, 2012. ID DOI: 10.1109/ICPC.2012.6240497 Abstract This paper proposes structured location, a semiautomatic technique and its supporting tool both for locating features and exposing their structures in source code, using a combination of dynamic analysis, sequential pattern mining and formal concept analysis. Slide BibTeX @inproceedings{kazato-icpc2012, author = {Hiroshi Kazato and Shinpei Hayashi and Satoshi Okada and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki}, title = {Toward Structured Location of Features}, booktitle = {Proceedings of the 20th IEEE International Conference on Program Comprehension}, pages = {255--256}, year = 2012, month = {jun}, } [kazato-icpc2012]: as a page 22. Hiroshi Kazato, Shinpei Hayashi, Satoshi Okada, Shunsuke Miyata, Takashi Hoshino, Motoshi Saeki: "Feature Location for Multi-Layer System Based on Formal Concept Analysis". In Proceedings of the 16th European Conference on Software Maintenance and Reengineering (CSMR 2012), pp. 429-434. Szeged, Hungary, mar, 2012. ID DOI: 10.1109/CSMR.2012.54 Abstract Locating features in software composed of multiple layers is a challenging problem because we have to find program elements distributed over layers, which still work together to constitute a feature. This paper proposes a semi-automatic technique to extract correspondence between features and program elements among layers. By merging execution traces of each layer to feed into formal concept analysis, collaborative program elements are grouped into formal concepts and tied with a set of execution scenarios. We applied our technique to an example of web application composed of three layers. The result indicates that our technique is not only feasible but promising to promote program understanding in a more realistic context. Slide BibTeX @inproceedings{kazato-csmr2012, author = {Hiroshi Kazato and Shinpei Hayashi and Satoshi Okada and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki}, title = {Feature Location for Multi-Layer System Based on Formal Concept Analysis}, booktitle = {Proceedings of the 16th European Conference on Software Maintenance and Reengineering}, pages = {429--434}, year = 2012, month = {mar}, } [kazato-csmr2012]: as a page 23. Sirinut Thangthumachit, Shinpei Hayashi, Motoshi Saeki: "Understanding Source Code Differences by Separating Refactoring Effects". In Proceedings of the 18th Asia Pacific Software Engineering Conference (APSEC 2011), pp. 339-347. Ho Chi Minh city, Vietnam, dec, 2011. ID DOI: 10.1109/APSEC.2011.47 Abstract Comparing and understanding differences between old and new versions of source code are necessary in various software development situations. 
However, if refactoring is applied between those versions, then the source code differences are more complicated, and understanding them becomes more difficult. Although many techniques for extracting refactoring effects from the differences have been studied, it is necessary to exclude the extracted refactorings' effects and reconstruct the differences for meaningful and understandable ones with no refactoring effect. As described in this paper, we propose a novel technique to address this difficulty. Using our technique, we extract the refactoring effects and then apply them to the old version of source code to produce the differences without refactoring effects. We also implemented a support tool that helps separate refactorings automatically. An evaluation of open source software showed that our tool is applicable to all target refactorings. Our technique is therefore useful in real situations. Evaluation testing also demonstrated that the approach reduced the code differences more than 21\%, on average, and that developers can understand more changes from the differences using our approach than when using the original one in the same limited time. Slide BibTeX @inproceedings{zui-apsec2011, author = {Sirinut Thangthumachit and Shinpei Hayashi and Motoshi Saeki}, title = {Understanding Source Code Differences by Separating Refactoring Effects}, booktitle = {Proceedings of the 18th Asia Pacific Software Engineering Conference}, pages = {339--347}, year = 2011, month = {dec}, } [zui-apsec2011]: as a page 24. Motohiro Akiyama, Shinpei Hayashi, Takashi Kobayashi, Motoshi Saeki: "Supporting Design Model Refactoring for Improving Class Responsibility Assignment". In Proceedings of the ACM/IEEE 14th International Conference on Model Driven Engineering Languages and Systems (MODELS 2011), Lecture Notes in Computer Science, vol. 6981, pp. 455-469. Wellington, New Zealand, oct, 2011. ID DOI: 10.1007/978-3-642-24485-8_33 Abstract Although a responsibility driven approach in object oriented analysis and design methodologies is promising, the assignment of the identified responsibilities to classes (simply, class responsibility assignment: CRA) is a crucial issue to achieve design of higher quality. The GRASP by Larman is a guideline for CRA and is being put into practice. However, since it is described in an informal way using a natural language, its successful usage greatly relies on designers' skills. This paper proposes a technique to represent GRASP formally and to automate appropriate CRA based on them. Our computerized tool automatically detects inappropriate CRA and suggests alternatives of appropriate CRAs to designers so that they can improve a CRA based on the suggested alternatives. We made preliminary experiments to show the usefulness of our tool. Slide BibTeX @inproceedings{akiyama-models2011, author = {Motohiro Akiyama and Shinpei Hayashi and Takashi Kobayashi and Motoshi Saeki}, title = {Supporting Design Model Refactoring for Improving Class Responsibility Assignment}, booktitle = {Proceedings of the ACM/IEEE 14th International Conference on Model Driven Engineering Languages and Systems}, pages = {455--469}, year = 2011, month = {oct}, } [akiyama-models2011]: as a page 25. Shinpei Hayashi, Takashi Yoshikawa, Motoshi Saeki: "Sentence-to-Code Traceability Recovery with Domain Ontologies". In Proceedings of the 17th Asia Pacific Software Engineering Conference (APSEC 2010), pp. 385-394. Sydney, Australia, nov, 2010. 
ID DOI: 10.1109/APSEC.2010.51 Abstract We propose an ontology-based technique for recovering traceability links between a natural language sentence specifying features of a software product and the source code of the product. Some software products have been released without detailed documentation. To automatically detect code fragments associated with sentences describing a feature, the relations between source code structures and problem domains are important. We model the knowledge of the problem domains as domain ontologies having concepts of the domains and their relations. Using semantic relations on the ontologies in addition to method invocation relations and the similarity between an identifier on the code and words in the sentences, we locate the code fragments corresponding to the given sentences. Additionally, our prioritization mechanism which orders the located results of code fragments based on the ontologies enables users to select and analyze the results effectively. To show the effectiveness of our approach in terms of accuracy, a case study was carried out with our proof-of-concept tool and summarized. Slide BibTeX @inproceedings{hayashi-apsec2010, author = {Shinpei Hayashi and Takashi Yoshikawa and Motoshi Saeki}, title = {Sentence-to-Code Traceability Recovery with Domain Ontologies}, booktitle = {Proceedings of the 17th Asia Pacific Software Engineering Conference}, pages = {385--394}, year = 2010, month = {nov}, } [hayashi-apsec2010]: as a page 26. Takanori Ugai, Shinpei Hayashi, Motoshi Saeki: "Visualizing Stakeholder Concerns with Anchored Map". In Proceedings of the 5th International Workshop on Requirements Engineering Visualization (REV 2010), co-located with RE 2010, pp. 20-24. Sydney, Australia, sep, 2010. ID DOI: 10.1109/REV.2010.5625662 Abstract Software development is a cooperative work by stakeholders. It is important for project managers and analysts to understand stakeholder concerns and to identify potential problems such as imbalance of stakeholders or lack of stakeholders. This paper presents a tool which visualizes the strength of stakeholders' interest in concerns on two-dimensional screens. The proposed tool generates an anchored map from an attributed goal graph by AGORA, which is an extended version of goal-oriented analysis methods. It has information on stakeholders' interest in concerns and its degree as the attributes of goals. Results from the case study are that (1) some concerns are not connected to any stakeholders and (2) stakeholders of the same type are interested in different concerns from each other. The results suggest a lack of stakeholders for the unconnected concerns and the need for that type of stakeholders to unify their requirements. Slide BibTeX @inproceedings{ugai-rev2010, author = {Takanori Ugai and Shinpei Hayashi and Motoshi Saeki}, title = {Visualizing Stakeholder Concerns with Anchored Map}, booktitle = {Proceedings of the 5th International Workshop on Requirements Engineering Visualization}, pages = {20--24}, year = 2010, month = {sep}, } [ugai-rev2010]: as a page 27. Shinpei Hayashi, Katsuyuki Sekine, Motoshi Saeki: "iFL: An Interactive Environment for Understanding Feature Implementations". In Proceedings of the 26th IEEE International Conference on Software Maintenance (ICSM 2010), ERA Track, pp. 1-5. Timisoara, Romania, sep, 2010. ID DOI: 10.1109/ICSM.2010.5609669 Abstract We propose iFL, an interactive environment that is useful for effectively understanding feature implementation by application of feature location (FL). 
With iFL, the inputs for FL are improved incrementally by interactions between users and the FL system. By understanding a code fragment obtained using FL, users can find more appropriate queries from the identifiers in the fragment. Furthermore, the relevance feedback obtained by partially judging whether or not a fragment is relevant improves the evaluation score of FL. Users can then obtain more accurate results. Case studies with iFL show that our interactive approach is feasible and that it can reduce the understanding cost more effectively than the non-interactive approach. Slide BibTeX @inproceedings{hayashi-icsm2010, author = {Shinpei Hayashi and Katsuyuki Sekine and Motoshi Saeki}, title = {{iFL}: An Interactive Environment for Understanding Feature Implementations}, booktitle = {Proceedings of the 26th IEEE International Conference on Software Maintenance}, pages = {1--5}, year = 2010, month = {sep}, } [hayashi-icsm2010]: as a page 28. Shinpei Hayashi, Motoshi Saeki: "Recording Finer-Grained Software Evolution with IDE: An Annotation-Based Approach". In Proceedings of the 4th International Joint ERCIM/IWPSE Symposium on Software Evolution (IWPSE-EVOL 2010), co-located with ASE 2010, pp. 8-12. Antwerp, Belgium, sep, 2010. ID DOI: 10.1145/1862372.1862378 ISBN: 978-1-4503-0128-2 Abstract This paper proposes a formalized technique for generating finer-grained source code deltas according to a developer's editing intentions. Using the technique, the developer classifies edit operations of source code by annotating the time series of the edit history with the switching information of their editing intentions. Based on the classification, the history is sorted and converted automatically to appropriate source code deltas to be committed separately to a version repository. This paper also presents algorithms for automating the generation process and a prototyping tool to implement them. Slide BibTeX @inproceedings{hayashi-iwpse-evol2010, author = {Shinpei Hayashi and Motoshi Saeki}, title = {Recording Finer-Grained Software Evolution with {IDE}: An Annotation-Based Approach}, booktitle = {Proceedings of the 4th International Joint ERCIM/IWPSE Symposium on Software Evolution}, pages = {8--12}, year = 2010, month = {sep}, } [hayashi-iwpse-evol2010]: as a page 29. Motoshi Saeki, Shinpei Hayashi, Haruhiko Kaiya: "An Integrated Support for Attributed Goal-Oriented Requirements Analysis Method and Its Implementation". In Proceedings of the 10th International Conference on Quality Software (QSIC 2010), pp. 357-360. jul, 2010. ID DOI: 10.1109/QSIC.2010.19 Abstract This paper presents an integrated supporting tool for Attributed Goal-Oriented Requirements Analysis (AGORA), which is an extended version of goal-oriented analysis. Our tool assists seamlessly requirements analysts and stakeholders in their activities throughout AGORA steps including constructing goal graphs with group work, utilizing domain ontologies for goal graph construction, detecting various types of conflicts among goals, prioritizing goals, analyzing impacts when modifying a goal graph, and version control of goal graphs. BibTeX @inproceedings{saeki-qsic2010, author = {Motoshi Saeki and Shinpei Hayashi and Haruhiko Kaiya}, title = {An Integrated Support for Attributed Goal-Oriented Requirements Analysis Method and Its Implementation}, booktitle = {Proceedings of the 10th International Conference on Quality Software}, pages = {357--360}, year = 2010, month = {jul}, } [saeki-qsic2010]: as a page 30. 
Motoshi Saeki, Shinpei Hayashi, Haruhiko Kaiya: "A Tool for Attributed Goal-Oriented Requirements Analysis". In Proceedings of the 24th IEEE/ACM International Conference on Automated Software Engineering (ASE 2009), pp. 670-672. Auckland, New Zealand, nov, 2009. ID DOI: 10.1109/ASE.2009.34 Abstract This paper presents an integrated supporting tool for Attributed Goal-Oriented Requirements Analysis (AGORA), which is an extended version of goal-oriented analysis. Our tool assists seamlessly requirements analysts and stakeholders in their activities throughout AGORA steps including constructing goal graphs with group work, prioritizing goals, and version control of goal graphs. BibTeX @inproceedings{saeki-ase2009, author = {Motoshi Saeki and Shinpei Hayashi and Haruhiko Kaiya}, title = {A Tool for Attributed Goal-Oriented Requirements Analysis}, booktitle = {Proceedings of the 24th IEEE/ACM International Conference on Automated Software Engineering}, pages = {670--672}, year = 2009, month = {nov}, } [saeki-ase2009]: as a page 31. Rodion Moiseev, Shinpei Hayashi, Motoshi Saeki: "Generating Assertion Code from OCL: A Transformational Approach Based on Similarities of Implementation Languages". In Proceedings of the ACM/IEEE 12th International Conference on Model Driven Engineering Languages and Systems (MODELS 2009), Lecture Notes in Computer Science, vol. 5795, pp. 650-664. Denver, Colorado, USA, oct, 2009. ID DOI: 10.1007/978-3-642-04425-0_52 Abstract The Object Constraint Language (OCL) carries a platform independent characteristic allowing it to be decoupled from implementation details, and therefore it is widely applied in model transformations used by model-driven development techniques. However, OCL can be found tremendously useful in the implementation phase aiding assertion code generation and allowing system verification. Yet, taking full advantage of OCL without destroying its platform independence is a difficult task. This paper proposes an approach for generating assertion code from OCL constraints by using a model transformation technique to abstract language specific details away from OCL high-level concepts, showing wide applicability of model transformation techniques. We take advantage of structural similarities of implementation languages to describe a rewriting framework, which is used to easily and flexibly reformulate OCL constraints into any target language, making them executable on any platform. A tool is implemented to demonstrate the effectiveness of this approach. Slide BibTeX @inproceedings{rodion-models2009, author = {Rodion Moiseev and Shinpei Hayashi and Motoshi Saeki}, title = {Generating Assertion Code from OCL: A Transformational Approach Based on Similarities of Implementation Languages}, booktitle = {Proceedings of the ACM/IEEE 12th International Conference on Model Driven Engineering Languages and Systems}, pages = {650--664}, year = 2009, month = {oct}, } [rodion-models2009]: as a page 32. Hiroshi Kazato, Rafael Weiss, Shinpei Hayashi, Takashi Kobayashi, Motoshi Saeki: "Model-View-Controller Architecture Specific Model Transformation". In Proceedings of the 9th OOPSLA Workshop on Domain-Specific Modeling (DSM 2009), co-located with OOPSLA 2009. Orlando, Florida, USA, oct, 2009. Abstract In this paper, we propose a model-driven development technique specific to the Model-View-Controller architecture domain. 
Even though a lot of application frameworks and source code generators are available for implementing this architecture, they do depend on implementation specific concepts, which take much effort to learn and use them. To address this issue, we define a UML profile to capture architectural concepts directly in a model and provide a bunch of transformation mappings for each supported platform, in order to bridge between architectural and implementation concepts. By applying these model transformations together with source code generators, our MVC-based model can be mapped to various kind of platforms. Since we restrict a domain into MVC architecture only, automating model transformation to source code is possible. We have prototyped a supporting tool and evaluated feasibility of our approach through a case study. It demonstrates model transformations specific to MVC architecture can produce source code for two different platforms. BibTeX @inproceedings{kazato-dsm2009, author = {Hiroshi Kazato and Rafael Weiss and Shinpei Hayashi and Takashi Kobayashi and Motoshi Saeki}, title = {Model-View-Controller Architecture Specific Model Transformation}, booktitle = {Proceedings of the 9th OOPSLA Workshop on Domain-Specific Modeling}, year = 2009, month = {oct}, } [kazato-dsm2009]: as a page 33. Takashi Yoshikawa, Shinpei Hayashi, Motoshi Saeki: "Recovering Traceability Links between a Simple Natural Language Sentence and Source Code Using Domain Ontologies". In Proceedings of the 25th International Conference on Software Maintenance (ICSM 2009), pp. 551-554. Edmonton, Canada, sep, 2009. ID DOI: 10.1109/ICSM.2009.5306390 URL Abstract This paper proposes an ontology-based technique for recovering traceability links between a natural language sentence specifying features of a software product and the source code of the product. Some software products have been released without detailed documentation. To automatically detect code fragments associated with the functional descriptions written in the form of simple sentences, the relationships between source code structures and problem domains are important. In our approach, we model the knowledge of the problem domains as domain ontologies. By using semantic relationships of the ontologies in addition to method invocation relationships and the similarity between an identifier on the code and words in the sentences, we can detect code fragments corresponding to the sentences. A case study within a domain of painting software shows that we obtained results of higher quality than without ontologies. BibTeX @inproceedings{yoshikawa-icsm2009, author = {Takashi Yoshikawa and Shinpei Hayashi and Motoshi Saeki}, title = {Recovering Traceability Links between a Simple Natural Language Sentence and Source Code Using Domain Ontologies}, booktitle = {Proceedings of the 25th International Conference on Software Maintenance}, pages = {551--554}, year = 2009, month = {sep}, } [yoshikawa-icsm2009]: as a page 34. Kohei Uno, Shinpei Hayashi, Motoshi Saeki: "Constructing Feature Models using Goal-Oriented Analysis". In Proceedings of the 9th International Conference on Quality Software (QSIC 2009), pp. 412-417. aug, 2009. ID DOI: 10.1109/QSIC.2009.61 Abstract This paper proposes a systematic approach to derive feature models required in a software product line development. In our approach, we use goal graphs constructed by goal-oriented requirements analysis. 
We merge multiple goal graphs into a graph, and then regarding the leaves of the merged graph as the candidates of features, identify their commonality and variability based on the achievability of product goals. Through a case study of a portable music player domain, we obtained a feature model with high quality. BibTeX @inproceedings{uno-qsic2009, author = {Kohei Uno and Shinpei Hayashi and Motoshi Saeki}, title = {Constructing Feature Models using Goal-Oriented Analysis}, booktitle = {Proceedings of the 9th International Conference on Quality Software}, pages = {412--417}, year = 2009, month = {aug}, } [uno-qsic2009]: as a page 35. Shinpei Hayashi, Yasuyuki Tsuda, Motoshi Saeki: "Detecting Occurrences of Refactoring with Heuristic Search". In Proceedings of the 15th Asia-Pacific Software Engineering Conference (APSEC 2008), pp. 453-460. Beijing, China, dec, 2008. ID DOI: 10.1109/APSEC.2008.9 ISSN: 1530-1362 ISBN: 978-0-7695-3446-6 Abstract This paper proposes a novel technique to detect the occurrences of refactoring from a version archive, in order to reduce the effort spent in understanding what modifications have been applied. In a real software development process, a refactoring operation may sometimes be performed together with other modifications at the same revision. This means that understanding the differences between two versions stored in the archive is not usually an easy process. In order to detect these impure refactorings, we model the detection within a graph search. Our technique considers a version of a program as a state and a refactoring as a transition. It then searches for the path that approaches from the initial to the final state. To improve the efficiency of the search, we use the source code differences between the current and the final state for choosing the candidates of refactoring to be applied next and estimating the heuristic distance to the final state. We have clearly demonstrated the feasibility of our approach through a case study. Slide BibTeX @inproceedings{hayashi-apsec2008, author = {Shinpei Hayashi and Yasuyuki Tsuda and Motoshi Saeki}, title = {Detecting Occurrences of Refactoring with Heuristic Search}, booktitle = {Proceedings of the 15th Asia-Pacific Software Engineering Conference}, pages = {453--460}, year = 2008, month = {dec}, } [hayashi-apsec2008]: as a page 36. Takeshi Obayashi, Shinpei Hayashi, Motoshi Saeki, Hiroyuki Ohta, Kengo Kinoshita: "Preperation and usage of gene coexpression data". In the 19th International Conference on Arabidopsis Research (ICAR 2008). Montreal, Canada, jun, 2008. Abstract Gene coexpression provides key information to understand living systems because coexpressed genes are often involved in the same or related biological pathways. Coexpression data are now used for a wide variety of experimental designs, such as gene targeting, regulatory investigations and/or identification of potential partners in protein-protein interactions. We constructed two databases for Arabidopsis (ATTED-II, http://www.atted.bio.titech.ac.jp) and mammals (COXPRESdb, http://coxpresdb.hgc.jp). Based on pairwise gene coexpression, coexpressed gene networks were prepared in these databases. To support gene coexpression, known protein-protein interactions, common metabolic pathways and conserved coexpression were also represented on the networks. We used Google Maps API to visualize large networks interactively. 
The relationships of the coexpression database with other large-scale data will be discussed, in addition to data construction procedures and typical usages of coexpression data. BibTeX @misc{obayashi-icar2008, author = {Takeshi Obayashi and Shinpei Hayashi and Motoshi Saeki and Hiroyuki Ohta and Kengo Kinoshita}, title = {Preperation and usage of gene coexpression data}, howpublished = {In the 19th International Conference on Arabidopsis Research}, year = 2008, month = {jun}, } [obayashi-icar2008]: as a page 37. Shinpei Hayashi, Motoshi Saeki: "Extracting Prehistories of Software Refactorings from Version Archives". In Large-Scale Knowledge Resources. Construction and Application - Proceedings of the 3rd International Conference on Large-Scale Knowledge Resources (LKR 2008), Lecture Notes in Artificial Intelligence, vol. 4938, pp. 82-89. Tokyo Institute of Technology (Ookayama Campus), Tokyo, Japan, mar, 2008. ID DOI: 10.1007/978-3-540-78159-2_9 Abstract This paper proposes an automated technique to extract prehistories of software refactorings from existing software version archives, which is in turn a technique to discover knowledge for finding refactoring opportunities. We focus on two types of knowledge to extract: characteristic modification histories, and fluctuations of the values of complexity measures. First, we extract modified fragments of code by calculating the difference of the Abstract Syntax Trees in the programs picked up from an existing software repository. We also extract past cases of refactorings, and then we create traces of program elements by associating modified fragments with cases of refactorings for finding the structures that frequently occur. Extracted traces help us identify how and where to refactor programs, and it leads to improving the program design. BibTeX @inproceedings{hayashi-lkr2008, author = {Shinpei Hayashi and Motoshi Saeki}, title = {Extracting Prehistories of Software Refactorings from Version Archives}, booktitle = {Large-Scale Knowledge Resources. Construction and Application -- Proceedings of the 3rd International Conference on Large-Scale Knowledge Resources}, pages = {82--89}, year = 2008, month = {mar}, } [hayashi-lkr2008]: as a page 38. Shinpei Hayashi, Motoshi Saeki: "Eclipse Plug-ins for Collecting and Analyzing Program Modifications". In Eclipse Technology eXchange Workshop (ETX 2006), co-located with OOPSLA 2006, Poster Session. Oregon Convention Center, Portland, Oregon, USA, oct, 2006. Abstract In this poster, we discuss the need for collecting and analyzing program modification histories, sequences of fine-grained program editing operations. Then we introduce Eclipse plug-ins that can collect and analyze modification histories, and show its useful application technique that can suggest suitable refactoring opportunities to developers by analyzing histories. BibTeX @misc{hayashi-etx2006, author = {Shinpei Hayashi and Motoshi Saeki}, title = {Eclipse Plug-ins for Collecting and Analyzing Program Modifications}, howpublished = {In Eclipse Technology eXchange Workshop}, year = 2006, month = {oct}, } [hayashi-etx2006]: as a page ## Services ### Program Committee / Reviewer 1. ICPC 2017 2. SANER 2016, 2017 3. WM2SP-16 4. IWSR 2016 5. GECCO 2015 SBSE-SS Track, 2016 SBSE Track 6. NasBASE 2015 7. IWESEP 2010, 2011, 2012, 2013, 2014 8. AsianPLoP 2014, 2015, 2016 9. ICSEA 2014, 2015, 2016 10. APSEC 2012, 2012-ER, 2013 11. IWSM/MENSURA 2011 12. SES 2010, 2011, 2012, 2013, 2014, 2015, 2016 13. 
FOSE 2012 in Yufuin, 2013 in Kaga, 2014 in Kirishima, 2015 in Tendo, 2016 in Kotohira 14. IPSJ (Information Processing Society of Japan) Journal, reviewer (2013/6/1-) 15. IEICE Society Transactions Editorial Committee, reviewer (2010/8/27-) 16. ICSM 2013 (External Reviewer) 17. CAiSE 2013 (External Reviewer) 18. RCIS 2013 (External Reviewer) 19. PPDP 2009 (External Reviewer) ### Steering/Organizing Committee 1. ER 2016: Publicity Chair and Web Master 2. SANER 2016: Student Volunteer Co-chairs 3. IWESEP 2016: Program Co-chairs 4. ACM ICPC Asia Regional Contest 2012 in Tokyo (2012) 5. ASE 2006: Student Volunteer 6. RE'04: Student Volunteer ## Awards 1. IEICE Software Science (SS) Technical Committee Research Encouragement Award, Jul. 2016. 2. IEICE Software Science (SS) Technical Committee Research Encouragement Award, Jul. 2016. 3. Contribution Award at FOSE 2013, Nov. 30, 2013. 4. Yamashita SIG Research Award from IPSJ, Mar. 7, 2012. 5. IEEE Computer Society Japan Chapter FOSE Young Researcher Award at FOSE 2011, Nov. 26, 2011. 6. Best Paper Award from SES 2010, Aug. 31, 2010. 7. Seiichi Tejima Doctoral Dissertation Award from Tokyo Institute of Technology, Feb. 24, 2010. 8. Clark Awards 2003 from Hokkaido University, Mar. 24, 2004. 9. The Best Score Award from Programming Contest 2003, IPSJ Hokkaido Branch, Mar. 22, 2003. -- HAYASHI, Shinpei [PGP pubkey(C5F14DA2)]
2016-07-25 08:00:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28540152311325073, "perplexity": 5866.9865232548145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824217.36/warc/CC-MAIN-20160723071024-00250-ip-10-185-27-174.ec2.internal.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/200-kg-mass-hangs-vertically-downward-end-ofa-massless-cord-wrapped-cylinder-900-mm-indiam-q225253
## Cylinder Mass A 200 kg mass hangs vertically downward from the end of a massless cord that is wrapped around a cylinder 900 mm in diameter. The mass descends 8 m in 4 seconds. What is the mass of the cylinder?
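Worked solution sketch (my own addition, not part of the original question page), assuming the cylinder is a uniform solid cylinder rotating about a fixed axis, the cord does not slip, the mass starts from rest, and $g \approx 9.81\ \mathrm{m/s^2}$:

$$a = \frac{2s}{t^2} = \frac{2 \times 8}{4^2} = 1\ \mathrm{m/s^2}, \qquad T = m(g - a) = 200 \times (9.81 - 1) \approx 1.76 \times 10^{3}\ \mathrm{N}$$

$$T R = I\alpha = \tfrac{1}{2} M R^{2}\,\frac{a}{R} \;\Rightarrow\; M = \frac{2T}{a} \approx 3.5 \times 10^{3}\ \mathrm{kg}$$

Note that the 0.45 m radius (900 mm diameter) cancels out under the uniform-cylinder assumption.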
2013-05-26 05:52:47
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8333014249801636, "perplexity": 2594.6641400197914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706631378/warc/CC-MAIN-20130516121711-00044-ip-10-60-113-184.ec2.internal.warc.gz"}
http://clay6.com/qa/39668/in-some-solutions-the-concentration-of-h-o-remains-constant-even-when-small
In some solutions, the concentration of $\;H_{3}O^{+}\;$ remains constant even when small amounts of strong acid or strong base are added to them. These solutions are known as: $(a)\;Ideal \; solutions\qquad(b)\;Colloidal \; solutions\qquad(c)\;True\;solutions\qquad(d)\;Buffer\;solutions$
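A short note (my own addition, not part of the original question page): resisting changes in $H_{3}O^{+}$ concentration upon small additions of strong acid or base is the defining property of a buffer, so the intended answer is (d) Buffer solutions. For a buffer made from a weak acid HA and its conjugate base $A^{-}$, the pH is approximately governed by the Henderson-Hasselbalch relation:

$$pH = pK_a + \log\frac{[A^-]}{[HA]}$$

Small additions of strong acid or base only shift the $[A^-]/[HA]$ ratio slightly, so the pH (and hence $[H_3O^+]$) stays nearly constant.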
2017-07-26 12:38:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7568116188049316, "perplexity": 883.847862159945}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426161.99/warc/CC-MAIN-20170726122153-20170726142153-00665.warc.gz"}
https://wiki.nervtech.org/doku.php?id=blog:2022:0917_nvl_restoring_nervapp_and_lua
# NervLand: Restoring NervApp and Lua support So in this new session, we continue with the restoring of the NervSeed project started in my previous post. Now it's time to inject some lua magic in there 😋! Or well… a little bit at least… if possible… lol. • First things first: we need to restore the NervApp class, which is a central part of how I'm planning to build my base components. • As demonstrated already, we add the class from the command line: $nvp cmake add class nervland Core app/NervApp • ⇒ In fact I had to restore quite a lot of classes already to try to put this back on rails, like ResourceManager, LuaManager, LuaScript, LuaScriptLoader, etc. • And on top of that, I have an error when trying to use the clang compiler to link boost libraries on windows: lld-link: error: could not open 'libboost_filesystem-clangw14-mt-x64-1_78.lib': no such file or directory clang++: error: linker command failed with exit code 1 (use -v to see invocation) ninja: build stopped: subcommand failed. • The library I have is actually called: libboost_filesystem-clang14-mt-x64-1_78-clang.lib, so I need to figure out how to resolve those library names correctly. • Hmmm, okay, so I think the ending “-clang” in the lib name could come from the buildid command line argument I was using: bjam_cmd = [bjam, "--user-config=user-config.jam", "--buildid=clang", "-j", "8", "toolset=clang", "--prefix="+prefix, "--without-mpi", "-sNO_BZIP2=1", "architecture=x86", "variant=release", "link=static", "threading=multi", "address-model=64"] • And for the “clangw” instead of “clang”: this is something that needs to be fixed on the cmake config side: the easiest option I found was to set the definition add_definitions(-DBOOST_LIB_TOOLSET="clang14") in our cmake files. But now I'm thinking I should rather rename the library files to have that clangw14 name directly. • Next issue on our path, the runtime linkage is not correct 😢: lld-link: error: /failifmismatch: mismatch detected for 'RuntimeLibrary': >>> msvcprt.lib(locale0_implib.obj) has value MD_DynamicRelease >>> libboost_filesystem-clang14-mt-x64-1_79.lib(operations.obj) has value MT_StaticRelease clang++: error: linker command failed with exit code 1 (use -v to see invocation) ninja: build stopped: subcommand failed. • I think I should try to pass the correct flags to the linker to fix that ? • [A few hours later…] Feww.. that was tricky! During the investigation on this Runtime mismatch issue I first discovered that we can specify the MSVC runtime we want in Cmake with this kind of entry: set(CMAKE_MSVC_RUNTIME_LIBRARY "MultiThreaded$<$<CONFIG:Debug>:Debug>DLL") • But then I realized I was already using “MultiThreaded DLL” by default when building with Clang on windows, but still the linker was reporting that the boost libraries were using MT_StaticRelease even if I was using runtime-link=shared on the command line.
• ⇒ In the end I ended up rebuilding boost again, but this time manually specifying the compilation flags I was expecting to be set automatically (-D_MT -D_DLL etc in the snippet below): with open(self.get_path(build_dir, "user-config.jam"), "w", encoding="utf-8") as file: # Note: Should not add the -std=c++11 flag below as this will lead to an error with C files: file.write(f"using clang : {ver_major}.{ver_minor} : {comp_path} : ") if self.is_windows: file.write("cxxstd=17 ") file.write(f"<ranlib>\"{comp_dir}/llvm-ranlib.exe\" ") file.write(f"<archiver>\"{comp_dir}/llvm-ar.exe\" ") file.write("<cxxflags>\"-D_CRT_SECURE_NO_WARNINGS -D_MT -D_DLL -Xclang --dependent-lib=msvcrt\" ") # file.write(f"<cxxflags>-D_SILENCE_CXX17_OLD_ALLOCATOR_MEMBERS_DEPRECATION_WARNING ") file.write(";\n") else: file.write(f"<compileflags>\"{cxxflags} -fPIC\" ") file.write(f"<linkflags>\"{linkflags}\" ;\n") • And now this is finally building and running with clang on windows! Ouuufff!! 😳 • Also, to find the boost libraries with the clang14 toolset name instead of clangw14 I added the following workaround in my cmake files: add_definitions(-DBOOST_ALL_STATIC_LINK) add_definitions(-D_CRT_SECURE_NO_WARNINGS) if(WIN32 AND CMAKE_CXX_COMPILER_ID MATCHES "Clang") # MESSAGE(STATUS "Clang version:${CMAKE_CXX_COMPILER_VERSION}") string(REPLACE "." ";" VERSION_LIST ${CMAKE_CXX_COMPILER_VERSION}) list(GET VERSION_LIST 0 CLANG_MAJOR_VERSION) # list(GET VERSION_LIST 1 CLANG_MINOR_VERSION) # list(GET VERSION_LIST 2 CLANG_PATCH_VERSION) add_definitions(-DBOOST_LIB_TOOLSET="clang${CLANG_MAJOR_VERSION}") endif() • Of course, compiling with MSVC is broken now lol, how could it be otherwise ? The issue comes from an intrinsic I'm using in the SpinLock class: D:\Projects\NervLand\modules\nvCore\src\base/SpinLock.h(41): error C3861: '__builtin_ia32_pause': identifier not found • It seems this __builtin_ia32_pause is a gcc-specific builtin: https://gcc.gnu.org/onlinedocs/gcc-4.9.4/gcc/X86-Built-in-Functions.html • ⇒ Actually reading a bit more carefully the page on spinlock implementation (https://rigtorp.se/spinlock/) we already have the answer for the MSVC compiler: On GCC and clang we can emit the PAUSE instruction using the built-in function __builtin_ia32_pause() and on MSVC using _mm_pause(). Adding this to our spin-wait loop we get the portable version (see the sketch further below). • ⇒ OK, both MSVC and clang compilation are now working and our current outputs look like this: $nvp nvl Hello world! [2022-07-23 22:41:48.102] [info] Creating LogManager here. [2022-07-23 22:41:48.103] [info] Setting log level to trace... [2022-07-23 22:41:48.103] [info] Creating LogManager: 42 [2022-07-23 22:41:48.103] [info] Creating NervApp... [2022-07-23 22:41:48.103] [info] Creating NervApp object. [2022-07-23 22:41:48.103] [trace] Creating MemoryManager object. [2022-07-23 22:41:48.103] [info] TODO: should install debug handlers here. [2022-07-23 22:41:48.103] [debug] Creating new pool %d of %d blocks with blocksize=%d [2022-07-23 22:41:48.103] [info] Creating EventHandler object. [2022-07-23 22:41:48.103] [debug] Creating AppComponent ResourceManager [2022-07-23 22:41:48.103] [debug] Creating new pool %d of %d blocks with blocksize=%d [2022-07-23 22:41:48.103] [debug] Creating ResourceManager object. [2022-07-23 22:41:48.103] [trace] Creating ResourceLoader object. 
[2022-07-23 22:41:48.103] [trace] Adding resource search path: 'D:/Projects/assets/lua/' [2022-07-23 22:41:48.103] [debug] Creating new pool %d of %d blocks with blocksize=%d [2022-07-23 22:41:48.103] [debug] Creating LuaManager object. [2022-07-23 22:41:48.103] [info] Destroying NervApp... [2022-07-23 22:41:48.103] [debug] Uninitializing NervApp object. [2022-07-23 22:41:48.103] [debug] Uninitializing LuaManager... [2022-07-23 22:41:48.103] [debug] Releasing components... [2022-07-23 22:41:48.103] [trace] Uninitializing component: ResourceManager [2022-07-23 22:41:48.103] [debug] Deleting ResourceManager object. [2022-07-23 22:41:48.103] [trace] Deleting ResourceLoader object. [2022-07-23 22:41:48.103] [debug] Deleting AppComponent object ResourceManager [2022-07-23 22:41:48.103] [info] Deleting EventHandler object. [2022-07-23 22:41:48.103] [debug] Unloading dynamic libraries... [2022-07-23 22:41:48.103] [debug] Destroying LuaManager... [2022-07-23 22:41:48.103] [debug] Deleting LuaManager object. [2022-07-23 22:41:48.103] [debug] Deleting NervApp object. [2022-07-23 22:41:48.103] [info] Destroying memory manager. [2022-07-23 22:41:48.103] [trace] Deleting MemoryManager object. Deleted LogManager object. Looking for memory leaks... No memory leak detected. Exiting. • To extend the support for lua scripts we need to load the Core module bindings first: let's see how we can write these… • Adding the support for DynamicLibrary: OK • Now restoring support for Luna: that one is a large chunk. But this time I don't want to start with SOL2 embedded in the application at all, so more work to do to refactor the initial bindings. • Okay hmmm… taking a step back here: generating bindings manually for lua is possible sure, but that would mean a lot of additional work for me, so… that's not really the path I should take I think: I should rather accept to use SOL2 to generate the bindings, just not use it intrusively in the Lua management system. • Also, the nvCore library will contain a lot of bindable content eventually, and those bindings should be generated with the NervLuna system, so the bindings I'm going to generate for nvCore here should be kept minimal. • While building my initial bindings with SOL for the LuaManager functions I then realized that I could not convert const char* automatically to nv::String with SOL, that's a pain, but it also got me thinking, and now I have updated the LuaManager to store the internal maps with StringID keys instead of String: // mapping of Lua script files: // using ScriptMap = Map<StringID, RefPtr<LuaScript>>; • There is still the risk of hash collision here but I think this is acceptable and should also improve performance if I keep using this design. • Another thing I just noticed while loading my lua script from disk with the LuaManager is that I get this kind of pattern: [2022-07-29 08:16:23.198] [debug] Loading resource from file: D:/Projects/NervLand/dist/assets/lua/setup.lua [2022-07-29 08:16:23.198] [debug] Using default root allocator to allocate/free 5713 bytes. [2022-07-29 08:16:23.198] [debug] Using default root allocator to allocate/free 5713 bytes. [2022-07-29 08:16:23.198] [debug] Using default root allocator to allocate/free 5713 bytes. [2022-07-29 08:16:23.198] [debug] Loading resource from file: D:/Projects/NervLand/dist/assets/lua/base/utils.lua [2022-07-29 08:16:23.198] [debug] Using default root allocator to allocate/free 5793 bytes. [2022-07-29 08:16:23.198] [debug] Using default root allocator to allocate/free 5793 bytes. 
[2022-07-29 08:16:23.198] [debug] Using default root allocator to allocate/free 5793 bytes. [2022-07-29 08:16:23.199] [debug] Loading resource from file: D:/Projects/NervLand/dist/assets/lua/external/serpent.lua [2022-07-29 08:16:23.199] [debug] Using default root allocator to allocate/free 8689 bytes. [2022-07-29 08:16:23.199] [debug] Using default root allocator to allocate/free 8689 bytes. [2022-07-29 08:16:23.199] [debug] Using default root allocator to allocate/free 8689 bytes. • ⇒ To me this seems to be an indication that I'm allocating a String once, and then copying it… Maybe an std::move could be used here instead 🤔 ? Checking. • Ahh! I knew this was a bit serious: I generic template with forward the rvalues correctly with this: template <typename T, class... Args> auto create_ref_object(Args&&... args) -> RefPtr<T> { // cf. // https://stackoverflow.com/questions/2821223/how-would-one-call-stdforward-on-all-arguments-in-a-variadic-function return MemoryManager::get_root_allocator().create<T>( std::forward<Args>(args)...); } • But then my mistake was that I was not using the proper forwarding in the create function: template <class T, class... Args> auto create(Args&&... args) -> RefPtr<T> { // Allocate the memory void* ptr = allocate(sizeof(T)); CHECK(ptr != nullptr, "Cannot allocate new object."); // Call the class constructor: RefPtr<T> obj(new (ptr) T(args...)); // Assign this allocator to the object: obj->set_allocator(this); // Return the object we just created: return obj; } • And in fact I also had to add the std::move() call in the LuaScriptLoader itself: auto LuaScriptLoader::load_resource(const char* fullpath) -> RefPtr<RefObject> { CHECK(file_exists(fullpath), "The file {} doesn't exist.", fullpath); // We read the content of the file: String content = read_file(fullpath); logDEBUG("LuaScript source data ptr: {}", (const void*)content.data()); // Create a new LuaScript from that content: auto res = create_ref_object<LuaScript>(std::move(content)); return res; } • OK: issue fixed. And in the process, another thing I noticed is that I may of may not end up using the default root allocator depending on the size of the script to be abllocated Which means that small scripts are placed in the pool allocator instead: that's nice! • Next critical step: rebuilding the lua Clang module, but with Clang 14 now instead of version 12 (I think ?): this might be tricky lol. 
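• For reference, here is a minimal sketch of the perfect-forwarding fix described a few bullets above: the quoted create() constructed the object with T(args...), which copies rvalue arguments instead of moving them. Only the std::forward call is the actual change; the surrounding allocator/RefPtr plumbing is reproduced from the snippets quoted in this entry:

    template <class T, class... Args>
    auto create(Args&&... args) -> RefPtr<T> {
        // Allocate raw storage for the object:
        void* ptr = allocate(sizeof(T));
        CHECK(ptr != nullptr, "Cannot allocate new object.");

        // Forward the arguments into the placement-new constructor call, so the
        // std::move'd String coming from LuaScriptLoader is moved rather than copied:
        RefPtr<T> obj(new (ptr) T(std::forward<Args>(args)...));

        // Assign this allocator to the object:
        obj->set_allocator(this);

        // Return the object we just created:
        return obj;
    }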
• To get the updated compilation details we use the llvm-config tool: llvm-config --cxxflags -ID:\Projects\NervProj\libraries\windows_msvc\LLVM-14.0.6\include -std:c++14 /EHsc /GR -D_CRT_SECURE_NO_DEPRECATE -D_CRT_SECURE_NO_WARNINGS -D_CRT_NONSTDC_NO_DEPRECATE -D_CRT_NONSTDC_NO_WARNINGS -D_SCL_SECURE_NO_DEPRECATE -D_SCL_SECURE_NO_WARNINGS -DUNICODE -D_UNICODE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS • llvm-config --ldflags -LIBPATH:D:\Projects\NervProj\libraries\windows_msvc\LLVM-14.0.6\lib • llvm-config --system-libs psapi.lib shell32.lib ole32.lib uuid.lib advapi32.lib zlibstatic.lib • llvm-config --libs LLVMWindowsManifest.lib LLVMXRay.lib LLVMLibDriver.lib LLVMDlltoolDriver.lib LLVMCoverage.lib LLVMLineEditor.lib LLVMX86TargetMCA.lib LLVMX86Disassembler.lib LLVMX86AsmParser.lib LLVMX86CodeGen.lib LLVMX86Desc.lib LLVMX86Info.lib LLVMOrcJIT.lib LLVMMCJIT.lib LLVMJITLink.lib LLVMInterpreter.lib LLVMExecutionEngine.lib LLVMRuntimeDyld.lib LLVMOrcTargetProcess.lib LLVMOrcShared.lib LLVMDWP.lib LLVMSymbolize.lib LLVMDebugInfoPDB.lib LLVMDebugInfoGSYM.lib LLVMOption.lib LLVMObjectYAML.lib LLVMMCA.lib LLVMMCDisassembler.lib LLVMLTO.lib LLVMCFGuard.lib LLVMFrontendOpenACC.lib LLVMExtensions.lib Polly.lib PollyISL.lib LLVMPasses.lib LLVMObjCARCOpts.lib LLVMCoroutines.lib LLVMipo.lib LLVMInstrumentation.lib LLVMVectorize.lib LLVMLinker.lib LLVMFrontendOpenMP.lib LLVMDWARFLinker.lib LLVMGlobalISel.lib LLVMMIRParser.lib LLVMAsmPrinter.lib LLVMDebugInfoMSF.lib LLVMSelectionDAG.lib LLVMCodeGen.lib LLVMIRReader.lib LLVMAsmParser.lib LLVMInterfaceStub.lib LLVMFileCheck.lib LLVMFuzzMutate.lib LLVMTarget.lib LLVMScalarOpts.lib LLVMInstCombine.lib LLVMAggressiveInstCombine.lib LLVMTransformUtils.lib LLVMBitWriter.lib LLVMAnalysis.lib LLVMProfileData.lib LLVMDebugInfoDWARF.lib LLVMObject.lib LLVMTextAPI.lib LLVMMCParser.lib LLVMMC.lib LLVMDebugInfoCodeView.lib LLVMBitReader.lib LLVMCore.lib LLVMRemarks.lib LLVMBitstreamReader.lib LLVMBinaryFormat.lib LLVMTableGen.lib LLVMSupport.lib LLVMDemangle.lib • But now i'm wondering where I took the sources for my Clang lua module from 🤔: it seems these should be the sources for the LLVM-C module, so I need to dig in the LLVM sources for an update I guess ? ⇒OK The project I'm looking for is clang/tools/libclang: So let's see what kind of updates I have in there… • Hmmm but ideally I should also build LLVM with LLVM itself… and this requires building libiconv first… let's see. Oh oh… troubles ahead 😬 ⇒ Let's just use the same prebuilt package as for the MSVC compiler. • Arrf, and I also need libxml2 built statically, which of course is not working: I only get the shared library version even if I explicitly disabled shared. Ah, in fact it's the static library which is built, but with clang there is no “s” suffix appended: if(MSVC) if(BUILD_SHARED_LIBS) set_target_properties( LibXml2 PROPERTIES DEBUG_POSTFIX d ) else() set_target_properties( LibXml2 PROPERTIES DEBUG_POSTFIX sd MINSIZEREL_POSTFIX s RELEASE_POSTFIX s RELWITHDEBINFO_POSTFIX s ) endif() endif() • ⇒ So let's add that suffix manually for now. • Next I got an error from cmake not being able to figure out the hostr triple to be used, so I'm manually specifying that for now: # When compiling with clang on windows we should specify the Host triple: if self.is_windows and self.compiler.is_clang(): flags += ["-DLLVM_HOST_TRIPLE=x86_64-pc-win32"] • Finally, to avoid a lot of warnings in the LLVm construction I also used (cf. 
https://github.com/alanxz/rabbitmq-c/issues/291): self.append_cxxflag("-std=gnu99") • Hmm, I still have the warnings, and I'm ending with this error: cmd.exe /C "cd . && D:\Projects\NervProj\libraries\windows_msvc\LLVM-14.0.6\bin\clang++.exe -fuse-ld=lld-link -nostartfiles -nostdlib -DLIBXML_STATIC -ID:/Projects/NervProj/libraries/windows_clang/libiconv-1.16/include -ID:/Projects/NervProj/libraries/windows_clang/libxml2-2.9.13/include/libxml2 -Werror=date-time -Werror=unguarded-availability-new -Wextra -Wno-unused-parameter -Wwrite-strings -Wcast-qual -Wmissing-field-initializers -pedantic -Wno-long-long -Wc++98-compat-extra-semi -Wimplicit-fallthrough -Wcovered-switch-default -Wno-noexcept-type -Wnon-virtual-dtor -Wdelete-non-virtual-dtor -Wsuggest-override -Wstring-conversion -Wmisleading-indentation -ffunction-sections -fdata-sections -fno-common -Woverloaded-virtual -Wno-nested-anon-types -O3 -DNDEBUG -D_DLL -D_MT -Xclang --dependent-lib=msvcrt -LD:/Projects/NervProj/libraries/windows_clang/libxml2-2.9.13/lib -llibxml2s -LD:/Projects/NervProj/libraries/windows_clang/libiconv-1.16/lib -llibiconvStatic -lWs2_32 -Wl,--gc-sections D:/Projects/NervProj/libraries/build/LLVM-14.0.6/build/tools/clang/tools/libclang/libclang.def -shared -o bin\clang.dll -Xlinker /MANIFEST:EMBED -Xlinker /implib:lib\clang.lib -Xlinker /pdb:bin\clang.pdb -Xlinker /version:14.0 @CMakeFiles\libclang.rsp && cd ." lld-link: warning: ignoring unknown argument '--gc-sections' lld-link: error: D:/Projects/NervProj/libraries/build/LLVM-14.0.6/build/tools/clang/tools/libclang/libclang.def: unknown file type clang++: error: linker command failed with exit code 1 (use -v to see invocation) • Also removed pedantic with a patch when building with clang on windows to finally get ride of the language extensions warning. • Eventually I reached another sneaky error: 'cmd.exe' n'est pas reconnu en tant que commande interne • ⇒ Could be that, in the case of the clang compiler I'm not setting up the PATH correctly to contain C:\Windows\system32 ? • And indeed, this is what I have as env by default for clang: if self.is_clang(): env = {} env["PATH"] = self.get_cxx_dir() # inc_dir = f"{self.root_dir}/include/c++/v1" env["CC"] = self.get_cc_path() env["CXX"] = self.get_cxx_path() # Do not use fPIC on windows: # fpic = " -fPIC" if self.is_linux else "" # env['CXXFLAGS'] = f"-I{inc_dir} {self.cxxflags}{fpic}" # env['CFLAGS'] = f"-I{inc_dir} -w{fpic}" env["LD_LIBRARY_PATH"] = f"{self.libs_path}" # If we are on windows, we also need the library path from the MSVC compiler: if self.is_windows: bman = self.ctx.get_component("builder") msvc_comp = bman.get_compiler("msvc") msvc_env = msvc_comp.get_env() # logger.info("MSVC compiler env: %s", self.pretty_print(msvc_env)) env = self.prepend_env_list(msvc_env["LIB"], env, "LIB") self.comp_env = env • ⇒ Fixing that with: drive = os.getenv("HOMEDRIVE") assert drive is not None, "Invalid HOMEDRIVE variable." 
orig_env["PATH"] = f"{drive}\\Windows\\System32;{drive}\\Windows" • Now back from vacations so trying to build from sources again: nvp nvl_build • But this is failing with the missing LuaJIT dependency on ragnarok: 2022/08/25 08:24:26 [nvp.core.build_manager] INFO: Running automatic setup for LuaJIT Traceback (most recent call last): File "D:\Projects\NervProj\nvp\core\cmake_manager.py", line 647, in <module> comp.run() File "D:\Projects\NervProj\nvp\nvp_component.py", line 93, in run res = self.process_command(cmd) File "D:\Projects\NervProj\nvp\nvp_component.py", line 83, in process_command return self.process_cmd_path(self.ctx.get_command_path()) File "D:\Projects\NervProj\nvp\core\cmake_manager.py", line 39, in process_cmd_path self.build_projects(bprints, dest_dir, rebuild=rebuild) File "D:\Projects\NervProj\nvp\core\cmake_manager.py", line 510, in build_projects self.build_project(proj_name, install_dir, rebuild) File "D:\Projects\NervProj\nvp\core\cmake_manager.py", line 549, in build_project var_val = bman.get_library_root_dir(lib_name) File "D:\Projects\NervProj\nvp\core\build_manager.py", line 185, in get_library_root_dir assert self.dir_exists(dep_dir), f"Library folder {dep_dir} doesn't exist yet." AssertionError: Library folder D:\Projects\NervProj\libraries\windows_clang\LuaJIT-2.1 doesn't exist yet. 2022/08/25 08:24:26 [nvp.nvp_object] ERROR: Subprocess terminated with error code 1 (cmd=['D:\\Projects\\NervProj\\.pyenvs\\min_env\\python.exe', 'D:\\Projects\\NervProj/nvp/core/cmake_manager.py', 'build', 'nervland']) 2022/08/25 08:24:26 [nvp.components.runner] ERROR: Error occured in script command: cmd=['D:\\Projects\\NervProj\\.pyenvs\\min_env\\python.exe', 'D:\\Projects\\NervProj/nvp/core/cmake_manager.py', 'build', 'nervland'] cwd=D:\Projects\NervLand return code=1 lastest outputs: return self.process_cmd_path(self.ctx.get_command_path()) File "D:\Projects\NervProj\nvp\core\cmake_manager.py", line 39, in process_cmd_path self.build_projects(bprints, dest_dir, rebuild=rebuild) File "D:\Projects\NervProj\nvp\core\cmake_manager.py", line 510, in build_projects self.build_project(proj_name, install_dir, rebuild) File "D:\Projects\NervProj\nvp\core\cmake_manager.py", line 549, in build_project var_val = bman.get_library_root_dir(lib_name) File "D:\Projects\NervProj\nvp\core\build_manager.py", line 185, in get_library_root_dir assert self.dir_exists(dep_dir), f"Library folder {dep_dir} doesn't exist yet." AssertionError: Library folder D:\Projects\NervProj\libraries\windows_clang\LuaJIT-2.1 doesn't exist yet. • Checking the dependency… ⇒ OK: now ensuring we only use lower case names in dependency list in check_libraries(): def check_libraries(self, dep_list, rebuild=False, preview=False, append=False, keep_build=False): """Build all the libraries for NervProj.""" # Iterate on each dependency: logger.debug("Checking libraries:") alldeps = self.config["libraries"] # Ensure we use only lower case for dependency names: dep_list = [dname.lower() for dname in dep_list] • The build of the luaClang module currently produces a lot of errors such as these: clang++: error: no such file or directory: '/Gw' clang++: error: no such file or directory: '/MD' clang++: error: no such file or directory: '/O2' clang++: error: no such file or directory: '/Ob2' • This is expected since we are using the Clang compiler here and not MSVC. Let's see how we can fix this. 
• Side note: Also noticed in the process that setting notify: false for a script using the nvp utility is not working (which is logical since we are calling another subscript in the process). So, can I do something about this ? ⇒ OK: Implemented script inheritance. • Next fixing the error with the toString(10) calls now using the construct: arg.getAsIntegral().toString(buf); return cxstring::createDup(buf.data()); • now facing an error with this: D:/Projects/NervLand/modules/nvCore/src\lua/sol/sol.hpp:12297:28: error: no matching function for call to object of type 'std::less<>' return stack::push(L, op(detail::deref(l), detail::deref(r))); ^~ D:/Projects/NervLand/modules/nvCore/src\lua/sol/sol.hpp:23333:26: note: in instantiation of function template specialization 'sol::detail::comparsion_operator_wrap<nv::RefPtr<nv::CursorSet>, std::less<>>' requested here lua_CFunction f = &comparsion_operator_wrap<T, std::less<>>; • ⇒ not quite sure yet what this means and how to fix it 🤔… ⇒ Commenting this for the moment. • Next error is on: lld-link: error: undefined symbol: __std_init_once_link_alternate_names_and_abort >>> referenced by LLVMX86CodeGen.lib(X86TargetMachine.cpp.obj):(void __cdecl llvm::initializeX86ExecutionDomainFixPass(class llvm::PassRegistry &)) >>> referenced by LLVMX86CodeGen.lib(X86TargetMachine.cpp.obj):(int void __cdecl llvm::initializeX86ExecutionDomainFixPass(class initializeX86ExecutionDomainFixPass::PassRegistry &)'::1'::dtor$7) >>> referenced by LLVMCodeGen.lib(MachineModuleInfo.cpp.obj):(public: __cdecl llvm::MachineModuleInfoWrapperPass::MachineModuleInfoWrapperPass(class llvm::LLVMTargetMachine const *)) >>> referenced 550 more times clang++: error: linker command failed with exit code 1 (use -v to see invocation) ninja: build stopped: subcommand failed. • OK: turns out that building on SATURN works for the luaClang module: so it seems this could be due to an MSVC version mismatch on ragnarok (?) ⇒ trying to upgrade. • ⇒ Upgrading to msvc-14.33.31629 on ragnarok fixed the link issue there too. • As I'm setting up a new development environment on my Ragnarok computer I also need to setup the visual studo code settings properly in the NervLand workspace file adding those entries: "settings": { "clangd.path": "D:/Projects/NervProj/libraries/windows_msvc/LLVM-14.0.6/bin/clangd.exe", "cmake.buildDirectory": "\${workspaceFolder}/.cache/cmake" } • Then I also installed the following VSCode extensions: • clangd • C/C++ Extension Pack • cmake-format • EditorConfig for VSCode • Pylance • Python • reStructured Text • reStructured Text Syntax Highlighting • Rewrap • Prettier - Code Formatter • Now back to the CursorSet bindings issue in clang_bindings.cpp: • Compiling most of the CursorSet bindings except for the Factory constructor works fine: SOL_BEGIN_REF_CLASS(clang, "CursorSet", CursorSet) SOL_CLASS_FUNC(contains); SOL_CLASS_FUNC(insert); SOL_CLASS_FUNC(dispose); // SOL_CALL_FACTORIES([]() { return create_ref_object<class_t>(); }); SOL_END_CLASS(); • I also tried a similar change in nerv_bindings.cpp to bind the LuaScript class and could reproduce the same error: SOL_BEGIN_CLASS(space, "LuaScript", LuaScript) SOL_CALL_FACTORIES([]() { return create_ref_object<class_t>("hello"); }); SOL_END_CLASS() • ⇒ So it seems this is an error related to the current version of SOL that I'm using… Let's try to use the previous version from our NervSeed project to check: hmmmm, we still have the same error and this is making everything even worst (more warnings and errors ?) 
• So let's check if there is an update for SOL… ⇒ Using latest release available for SOL (version 3.3.0) doesn't help 😞 So could this be an issue specific to the clang compiler on windows ? Could be worth it to try compilation on linux: Arrf, there is still a long way to go to get the compilation on linux to work. • For now, investigating further on those std::less<> and std::equal_to<> classes, I think the issue might be because of the deref() function itself which should return the pointers for RefPtr containers: template <typename T> inline decltype(auto) deref(T&& item) { using Tu = meta::unqualified_t<T>; if constexpr (meta::is_pointer_like_v<Tu>) { return *std::forward<T>(item); } else { return std::forward<T>(item); } } • ⇒ Let's see if we can specialize that function: Nope, doesn't seem to help/work. • Arrff, I think I might finally have something here reading more carefully the error message: D:\Softs\VisualStudio\VS2022\VC\Tools\MSVC\14.33.31629\include\xstddef:218:31: note: candidate template ignored: substitution failure [with _Ty1 = nv::LuaScript &, _Ty2 = nv::LuaScript &]: invalid operands to binary expression ('nv::LuaScript' and 'nv::LuaScript') _NODISCARD constexpr auto operator()(_Ty1&& _Left, _Ty2&& _Right) const • ⇒ Might be that our dereferencing operator should return an “&&” value instead of just “&”. • Ahh, finally found it: this was occuring because I wasn't declaring the “RefPtr” as a smart pointer container for sol, the error is fixed if I add the required specialization: namespace sol { template <typename T> struct unique_usertype_traits; template <typename T> struct unique_usertype_traits<nv::RefPtr<T>> { typedef T type; typedef nv::RefPtr<T> actual_type; static const bool value = true; static bool is_null(const actual_type& value) { return value == nullptr; } static type* get(const actual_type& p) { return p.get(); } }; } // namespace sol • Next logical step would be to try to display a window now, so I restored the support for the base Window and WindowManager implementation, • Then I need to restore the SDL specific versions of those components: OK • And register the correct Window Manager when initializing a simple lua app in the NervSeed launcher… • But first things first, let's revisit a bit the Lua application framework I built initially in my NervSeed project because this is all a bit rusty so I need to refresh my memory • Hmmm, okay, so, the point is, before I can start building an SDL app from lua, I actually need the bindings generated for the window/view classes :-S So back to NervBind already lol ⇒ So let's just start a new post for that one as I feel this could mean “a lot of additional fun” before I can reach the current goal 🤣 • blog/2022/0917_nvl_restoring_nervapp_and_lua.txt
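• As a closing usage note (a sketch only: the SOL_* macros above are project-specific and I am just assuming they wrap something like this, and the Dummy class, make_dummy name and nv::RefObject base used below are hypothetical stand-ins), with the unique_usertype_traits<nv::RefPtr<T>> specialization in place a plain sol2 registration can hand RefPtr values directly to Lua:

    #include <sol/sol.hpp>

    // Hypothetical RefObject-derived type standing in for LuaScript/CursorSet:
    struct Dummy : public nv::RefObject {
        int value{0};
        auto get_value() const -> int { return value; }
    };

    void register_dummy(sol::state& lua) {
        // With the RefPtr trait specialization, sol treats RefPtr<Dummy> as a
        // smart pointer: pushing one exposes the underlying Dummy to Lua while
        // keeping the reference count alive.
        lua.new_usertype<Dummy>("Dummy", "get_value", &Dummy::get_value);

        lua["make_dummy"] = []() -> nv::RefPtr<Dummy> {
            return nv::create_ref_object<Dummy>();
        };
    }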
2023-03-28 17:16:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31907787919044495, "perplexity": 9964.7498451871}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948868.90/warc/CC-MAIN-20230328170730-20230328200730-00024.warc.gz"}
http://www.physicsforums.com/showthread.php?p=3102696
1. The problem statement, all variables and given/known data
Find the following limit: $$\lim_{x \to \infty} \frac{2+\sqrt{(6x)}}{-2+\sqrt{(3x)}}$$
2. Relevant equations
n/a
3. The attempt at a solution
I know this shouldn't be that hard, but somehow I keep getting stuck on simplifying the equation. I think the first step is to multiply both the denominator and numerator by 1/x (which equals 1/sqrt(x^2)). This would give me $$\lim_{x \to \infty} \frac{\frac{2}{x}+\sqrt{\frac{6x}{x^2}}}{\frac{-2}{x}+\sqrt{\frac{3x}{x^2}}}$$ And if I'm correct the square roots should go to zero, which leaves: $$\lim_{x \to \infty} \frac{\frac{2}{x}}{\frac{-2}{x}}$$ But wouldn't those on top of each other equal 0/0 too, unless I can multiply by x for both so it equals 2/-2? I am really just not sure of the correct way to go about simplifying this. Any help would be appreciated!
Mentor: I would factor $\sqrt{x}$ out of both terms in the numerator and both in the denominator. That would give you $$\lim_{x \to \infty} \frac{\sqrt{x}}{\sqrt{x}}\cdot \frac{2/\sqrt{x}+\sqrt{6}}{-2/\sqrt{x}+\sqrt{3}}$$
No, you can't do it this way. You can say the square roots "go to zero" and can be cancelled out, but you are right to observe that by the same argument, the $$2/x$$ terms should "go to zero" as well. Try multiplying the numerator and denominator by $$x^{-1/2}$$ rather than $$x^{-1}$$.
OK, I understand multiplying the numerator and denominator by 1/sqrt(x) to obtain: $$\lim_{x \to \infty} \frac{\frac{2}{\sqrt{x}}+{\sqrt{6}}}{\frac{-2}{\sqrt{x}}+\sqrt{3}}$$ From this point do I multiply by the conjugate of the numerator? $$\lim_{x \to \infty} \frac{(\frac{2}{\sqrt{x}})^2-{6}}{-(\frac{2}{\sqrt{x}})^2+\frac{2\sqrt{3}}{\sqrt{x}}+\frac{2\sqrt{6}}{\sqrt{x}}-\sqrt{18}}$$ This seems like quite the jumbled mess, so maybe I'm wrong.
Mentor:
Quote by Kstanley: OK, I understand multiplying the numerator and denominator by 1/sqrt(x) to obtain: $$\lim_{x \to \infty} \frac{\frac{2}{\sqrt{x}}+{\sqrt{6}}}{\frac{-2}{\sqrt{x}}+\sqrt{3}}$$ From this point do I multiply by the conjugate of the numerator?
No, just take the limit. The first terms in the top and bottom go to zero.
Quote by Kstanley: $$\lim_{x \to \infty} \frac{(\frac{2}{\sqrt{x}})^2-{6}}{-(\frac{2}{\sqrt{x}})^2+\frac{2\sqrt{3}}{\sqrt{x}}+\frac{2\sqrt{6}}{\sqrt{x}}-\sqrt{18}}$$ This seems like quite the jumbled mess, so maybe I'm wrong.
Ah duh, I got so into thinking it had to be complicated I didn't even think to just find the limit right there. Thank you so much, that was so much easier than I imagined!
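For completeness (the thread stops just short of stating the value), taking the limit as the mentor suggests gives $$\lim_{x \to \infty} \frac{\frac{2}{\sqrt{x}}+\sqrt{6}}{\frac{-2}{\sqrt{x}}+\sqrt{3}} = \frac{0+\sqrt{6}}{0+\sqrt{3}} = \sqrt{\frac{6}{3}} = \sqrt{2} \approx 1.41$$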
2013-05-23 06:52:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.88787841796875, "perplexity": 389.88326555942035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702957608/warc/CC-MAIN-20130516111557-00095-ip-10-60-113-184.ec2.internal.warc.gz"}
http://symplio.com/ikoice/how-is-strong-magnetic-field-in-a-solenoid-achieved%3F-48d750
It is also used to control the motion of objects such as control the switching of relay. CONTACT College Physics. The magnetic field of all the turns of wire passes through the center of the coil, creating a strong magnetic field there. This would be called a dipole (2 poles, a North magnetic pole at one end and a South magnetic pole at the other end). Similar to the straight solenoid, the toroidal solenoid acts as a single loop of wire with current. Find the current needed to achieve such a field (a) 2.00 cm from a long, straight wire; (b) At the center of a circular coil of radius 42.0 cm that has 100 turns; (c) Near the center of a solenoid with radius 2.40 cm, length 32.0 cm, and 40,000 turns. Obviously the ability to cut the current to turn off the magnetic field is key here. The magnetic field induces force f(t) on the plunger mass, M. The magnitude of this force is related to the current in the windings via the solenoid's electromagnetic coupling constant N, as shown below f(t) = Ni(t) The movement of the plunger generates a voltage vs. in the winding which oppose the applied voltage. … So here the magnetic The magnitude of the magnetic field at the center of a solenoid would be equaling the magnetic permeability of a vacuum multiplied by end the number of loops per unit length of the soul Lloyd Times I the current through the solenoid. Beware! To use Ampere's law we determine the line integral $\oint \vec B \cdot d\vec l$ over this closed path where $dl$ is the length element of this closed path. And so this would be equaling for pie times 10 to the negative seventh Tesla's meters per AMP. Now the Ampere's law tells us that the line integral over a closed path is $\mu_0$ times the total current enclosed by the path, that is $2\pi\,rB = \mu_0NI$, and we find the expression of magnetic field as, $B = \frac{\mu_0NI}{2\pi\,r} \tag{2} \label{2}$. In our case it is in anticlockwise direction, that is along $abcd$ in the figure. Buy Find arrow_forward. Figure 4.4.6 – Solenoid Magnetic Field. Use the right hand rule to find the direction of integration path. What is the energy density stored in the coil ? Magnetic Field Produced is Strong in a Solenoid A solenoid has a number of turns More the number of turns, more the current flows through it and hence more the magnetic field Hence, they are used to make electromagnets Strength of Magnetic field in a Solenoid depends on Strength of Magnetic field in a Solenoid depends on Number of turns in the … To apply Ampere's law to determine the magnetic field within the solenoid, loop 1 encloses no current, and loop 3 encloses a net current of zero. Note that within the closed path of loop 3 the currents into the screen cancel the current out of the screen (here the screen means your computer screen or smart phone's). It is a closely wound coil. In solenoid coil design, a more uniform magnetic field in the available bore should be achieved in the radial direction, since the determinant of the maximum current‐carrying capacity of conductors is not the central magnetic field of the coil, but the maximum magnetic field in the winding. Classes. Wrapping the same wire many times around a cylinder creates a strong magnetic field when an electric current is passed through it. Here we determine the magnetic field of the solenoid using Ampere's law. This is achieved by installing a set of permanent magnets around the bottom of the coil core. The chapter begins with an overview of magnetism. PWM Solenoid Control. 
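The "Find the current needed to achieve such a field" exercise quoted above appears to refer to the 37.2 T laboratory field mentioned elsewhere in this text; assuming that, part (c) follows from $B = \mu_0 n I$ for a long solenoid: $n = 40\,000 / 0.320\ \text{m} = 1.25 \times 10^{5}\ \text{turns/m}$, so $I = B/(\mu_0 n) = 37.2\ \text{T} / \big[(4\pi \times 10^{-7}\ \text{T·m/A})(1.25 \times 10^{5}\ \text{m}^{-1})\big] \approx 237\ \text{A}$.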
The Figure 4 below shows a toroidal solenoid with current into and out of the solenoid where a wire is loosely would to form a solenoid in the form of a torus. Click 'Join' if it's correct. ELECTROMAGNETISM, ABOUT If $n$ is the number of turns per unit length, there are $nL$ turns in length $L$, therefore the total current enclosed by the closed path is $nL$ times $I$, that is $nLI$. How strong is the magnetic field inside a solenoid with 10,000 turns per meter that carries 20.0 A? The magnet formed like this is called a Electromagnet . This chapter has a lot of material. near the poles, where the field is strong, and spread out as their distance from the poles increases. In Figure 5, a closely wound solenoid is shown. The magnetic field lines of a solenoid at the ends still spread outside like those of a bar magnet. The current in each loop of the solenoid creates magnetic field and the combination of such magnetic fields creates a greater magnetic field. Solenoid is an enamel wire (coil wire) wound on a round shaped, made of solid materials like Steel to generate a uniform magnetic field. Paul Peter Urone + 1 other. The magnetic field outside the solenoid is much weaker as the outside volume is much greater than that of the inside and very little field exists around the center of the solenoid (outside). What is the strength…, A strong electromagnet produces a uniform magnetic field of 1.60 $\mathrm{T}…, A 200 -turn solenoid having a length of 25$\mathrm{cm}$and a diameter of 1…, EMAILWhoops, there might be a typo in your email. 1st Edition. As always, use right hand rule to determine the direction of integration path to avoid negative current in the result, that is make$\vec B$and$d\vec l$parallel at each point of the integration path not antiparallel. THERMODYNAMICS Thank you for watching. Along path$dc$, the magnetic field is negligible and approximated as zero (note the side$bc$is far from the edge of the solenoid where magnetic field is much weaker and neglected as zero). 1st Edition. If the coils are closely wound and the length of the solenoid is much greater than it's diameter, the magnetic field lines inside the solenoid approach straight lines and the field is more uniform. Figure 2 The magnetic field lines are nearly straight … 3. In case of toroidal solenoid, the number of turns per unit length is$N/2\pi\,r$. a. strong magnetic field in a solenoid is achieved, if coil acts as conductor b. coil is surrounded by a iron frame c. iron core is placed at the centre of the coil When current is caused to flow within a solenoid, a magnetic field will appear around and inside the form, looking like the magnetic field around a bar magnet. Magnetic Field In a Solenoid A coil of wire which is designed to generate a strong magnetic field within the coil is called a solenoid. 2. To concentrate the magnetic field, in an electromagnet the wire is wound into a coil with many turns of wire lying side by side. So a toroidal solenoid satisfies the equation of magnetic field of closely wound long straight solenoid. We consider a solenoid carrying current$I$as shown in Figure 2. There are still magnetic field lines outside the solenoid as the magnetic field lines form closed loops. Class 8. Publisher: OpenStax College. When the current is$5.2 \mat…, A long solenoid that has $1.00 \times 10^{3}$ turns uniformly distributed ov…, The 12.0 cm long rod in Figure 23.11 moves at 4.00 m/s. 
Expert Answer: As the current flowing through the loops in solenoid carry same amount of current, the field lines produced by individual loops join/augment each other to produce uniform magnetic field. A solenoid is a combination of closely wound loops of wire in the form of helix, and each loop of wire has its own magnetic field (magnetic moment or magnetic dipole moment). SITEMAP Two bar magnets. … The direction of $d\vec l$ will be the direction of our integration path. Along paths $bd$ and $ca$, $\vec B$ is perpendicular to $d\vec l$ and the integral along these paths is zero. The solenoid with current acts as the source of magnetic field. The above expression of magnetic field of a solenoid is valid near the center of the solenoid. Pyra meter multiplied by 20 amps, and we find that the magnitude of the magnetic field is 0.251 Tesla's. For example, for ITER, f ce ≈ 150 GHz, ω ce ≈ 10 12 s −1; λ ce ≈ 2 mm. Paul Peter Urone + 1 other. Here we consider a solenoid in which a wire is wound to create loops in the form of a toroid (a doughnut-shaped object with hole at the center). The current in each loop of the solenoid creates magnetic field and the combination of such magnetic fields creates a greater magnetic field. Magnetic Field Produced by a Current-Carrying Solenoid A solenoid is a long coil of wire (with many turns or loops, as opposed to a flat loop). Give the gift of Numerade. Class 6. The solenoid with current acts as the source of magnetic field. A magnetic field of 37.2 T has been achieved at the MIT Francis Bitter National Magnetic Laboratory. The magnetic surface currents from a cylinder of uniform magnetization have the same geometry as the currents of a solenoid. Solutions. The above equation also tells us that the magnetic field is uniform over the cross-section of the solenoid. The only loop that encloses current among the three is loop 2 with radius $r$. Because of its shape, the field inside a solenoid can be very uniform, and also very strong. If the solenoid is closely wound, each loop can be approximated as a circle. Chapter. The magnetic field generated in the centre, or core, of a current carrying solenoid is essentially uniform, and is directed along the axis of the solenoid. Digression: Electromagnets. The field just outside the coils is nearly zero. The magnetic field inside the solenoid is 23.0 mT. The combination of magnetic fields means the vector sum of magnetic fields due to individual loops. A large number of such loops allow you combine magnetic fields of each loop to create a greater magnetic field. Outside the solenoid, the magnetic field is far weaker. The field just outside the coils is nearly zero. Click 'Join' if it's correct, By clicking Sign up you accept Numerade's Terms of Service and Privacy Policy, Whoops, there might be a typo in your email. In practice, any solenoid will also have a current ## I ## going in the ## z ## direction along its axis, but this is usually ignored in any textbook treatment of the magnetic field of a solenoid. ISBN: 9781938168000. The above equation of magnetic field of a toroidal solenoid shows that the field depends on the radius $r$. It means that the magnetic field is not uniform over the cross-section of the solenoid, but if the cross-sectional radius is small in comparison to $r$, the magnetic field can be considered as nearly uniform. 
A high magnetic field in an electromagnetic coil can be achieved in various ways: increase the number of turns, increase current, increase the permeability, and decrease the radius. Therefore the total line integral over the closed path is, $\oint \vec B \cdot d\vec l = BL + 0 + 0 + 0 = BL$. Pay for 5 months, gift an ENTIRE YEAR to someone special! Now we create a closed path as shown in Figure 3 above. Publisher: OpenStax College. That is the end of the solution. Solenoids have lots of practical uses, a common one being something known as an “electromagnet.” For example, junk yards use these to move large chunks of scrap metal. If $N$ is the number of turns in the solenoid. Jan 03,2021 - For a current in a long straight solenoid N- and S-poles are created at the two ends. Here we determine the magnetic field of the solenoid using Ampere's law. Proportional control of the solenoid is achieved by a balance of the forces between the spring-type load and the solenoid’s magnetic field, which can be determined by measuring the current through the solenoid. that is, magnetic field is uniform inside a solenoid. WAVES A wire, $20.0-m$ long, moves at 4.0 $\mathrm{m} / \mathrm{s}$ perpendicularl…, What is the maximum electric field strength in an electromagnetic wave that …, A long solenoid that has 1000 turns uniformly distributed over a length of 0…, A 20-A current flows through a solenoid with 2000 turns per meter. You may think for loops 1 and 3, the magnetic field is zero, but that's not true. In case of an ideal solenoid, it is approximated that the loops are perfect circles and the windings of loops is compact, that is the solenoid is tightly wound. A torus is a shape bounded by a moving circle in a circular path and forms a doughnut like shape. A picture of these lines of induction can be made by sprinkling iron filings on a piece of paper placed over a magnet. The individual pieces of iron become magnetized by entering a magnetic field, i.e., they act like tiny magnets, lining themselves up along the lines of induction. c) The magnetic field is made strong by, i) passing large current and ii) using laminated coil of soft iron. Pyra meter multiplied by 20 … But here we suppose a torus with closely wound loops of wire, so the magnetic field is more bounded within the solenoid. Hi, in this video with animation , I have explained what is a solenoid. Inside a solenoid the magnetic flux is too high (large number of magnetic field lines crossing a small cross-sectional area) whereas, outside the solenoid, the spacing between the field lines increases, i.e., the number of lines crossing per unit area reduces considerably. Class 7. Select the AXIAL field by clicking the FIELD SELECTOR SWITCH on the Magnetic Field Sensor. Share these Notes with your friends Prev Next > You can check our 5-step learning process. If the solenoid is closely wound, each loop can be approximated as a circle. Magnetic Field of a Solenoid Science Workshop P52 - 4 ©1996, PASCO scientific dg PART III: Data Recording 1. So, substituting this value for $n$ in Equation \eqref{1}, you'll get Equation \eqref{2}. $\oint \vec B \cdot d\vec l = B\oint dl = B(2\pi\,r) = 2\pi\,r\,B$, Note that the magnetic field is constant for a constant radius $r$, and taken out of the integral for a closely wound solenoid. The field just outside the coils is nearly zero. Along path $ab$, $\vec B$ and $d\vec l$ are parallel and $\int_a^b \vec B \cdot d\vec l = \int_a^b B\,dl = B\int_a^b dl = BL$. 
B = (4π x 10 ─7 T.m/A) (0.29 A) (200)/ (0.25 m) = 2.92 x 10 ─4 T Problem#3 A solenoid 1.30 m long and 2.60 cm in diameter carries a current of 18.0 A. What actually matters is the Magnetic Flux. ISBN: 9781938168000. A coil forming the shape of a straight tube (a helix) is called a solenoid. If you make a closed path (amperian loop) enclosing that current as shown in Figure 4, the solenoid has magnetic field like that of a single current loop. The magnetic field values typical of present-day tokamaks correspond to the millimetre-wavelength range. Furthermore, a solenoid is the windings of wire and each loop is not a perfect circle, you can understand that, if you consider the entire solenoid as a straight wire, and made an amperian loop (closed path in Ampere's law), the loop indeed encloses current flowing through the solenoid which means the solenoid itself acts as a straight wire with magnetic field similar to that of the straight wire. In such a case we can conclude that the magnetic field outside the solenoid (for path 1 and path 3) is zero also suggested by $\oint \vec B \cdot d\vec l = 0$. Thus, in comparison to inside volume of a solenoid, the magnetic field outside the solenoid is relatively … You can also see how the field around the cross section of each wire loop creates the overall magnetic field, adding to each other. In real situations, however, toroidal solenoid itself acts as a current loop. A latching solenoid is a electromagnetic device designed to supply actuation force as is the case with a conventional solenoid, but to then keep the solenoid in the activated state without any electrical current applied to the coil. Magnetic field is uniform inside a toroid whereas, for a solenoid it is different at two ends and centre. A properly formed solenoid has magnetic moments associated with each loop and the one end of the solenoid acts as the south pole and another acts as the north pole. Magnetic Field Produced by a Current-Carrying Solenoid A solenoid is a long coil of wire (with many turns or loops, as opposed to a flat loop). This would be our final answer for the magnetic field at the center of a solenoid. Solution for How strong is the magnetic field inside a solenoid with 10,000 turns per meter that carries 20.0 A? Magnetic Field Produced by a Current-Carrying Solenoid A solenoid is a long coil of wire (with many turns or loops, as opposed to a flat loop). Buy Find arrow_forward. Chapter 32 – Magnetic Fields . The magnetic field pattern when two magnets are used is shown in this diagram. The key points are the following: magnets apparently only come in North Pole – South Pole pairs, that is dipoles, magnetic fields are caused by moving charges, and moving charges in a magnetic field feel a force which depends on how fast the charge is moving. So according to Ampere's law we have, Therefore the magnetic field of the solenoid inside it is. Multiplied by 10,000 turns. The field is weak but it exists and the line integral is zero for these loops not because there is no magnetic field but because $\vec B$ and $d\vec l$ are perpendicular to each other. As warned in Ampere's law, that $\oint \vec B \cdot d\vec l = 0$ does not mean that $B$ is zero. Note that the solenoid loops are not completely circles and there is a weak magnetic field similar to that of a circular loop. TERMS AND PRIVACY POLICY, © 2017-2020 PHYSICS KEY ALL RIGHTS RESERVED. Class 9. Let the length of the rectangular path is $L$. 
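The question repeated in this text ("How strong is the magnetic field inside a solenoid with 10,000 turns per meter that carries 20.0 A?") is answered by the same formula, and it reproduces the 0.251 T value quoted elsewhere in the text: $B = \mu_0 n I = (4\pi \times 10^{-7}\ \text{T·m/A})(10\,000\ \text{m}^{-1})(20.0\ \text{A}) \approx 0.251\ \text{T}$.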
And so this would be equaling for pie times 10 to the negative seventh Tesla's meters per AMP. We know from Ampere's law that $\oint \vec B \cdot d\vec l = \mu_0I$. For an illustration for a single loop you can revisit magnetic field of a loop. The strong magnetic field inside the solenoid is so strong that it can be used to magnetize a piece of soft iron when it is placed inside the coil. To … What is t…, A solenoid is wound with 2000 turns per meter. There are three loops namely 1, 2 and 3. For a long coil the stored energy is… We can rewrite this as The magnetic field not only generates a force, but can also be used to find the stored energy ! 7. A solenoid (/ ˈ s oʊ l ə n ɔɪ d /, from the Greek σωληνοειδής sōlēnoeidḗs, "pipe-shaped") is a type of electromagnet, the purpose of which is to generate a controlled magnetic field through a coil wound into a tightly packed helix.The coil can be arranged to produce a uniform magnetic field in a volume of space when an electric current is passed through it. MECHANICS Multiplied by 10,000 turns. Because of its shape, the field inside a solenoid can be very uniform, and also very strong. Energy Density of the Magnetic Field . What has been found from the careful investigations is that the half of these lines leak out through the windings and half appear through the ends. Hold the Magnetic Field Sensor far away from any source of magnetic fields and zero the sensor by pushing the ZERO button on the sensor box. So here the magnetic The magnitude of the magnetic field at the center of a solenoid would be equaling the magnetic permeability of a vacuum multiplied by end the number of loops per unit length of the soul Lloyd Times I the current through the solenoid. Generation of electromagnetic millimetre-waves by the ECR method in a strong magnetic field is achieved with gyrotrons. $N/2\pi\, r$ through the center of the coil their distance from the poles, where the SELECTOR... By 20 amps, and also very strong distance from the poles where. Zero, but that 's not true animation, I ) passing large current and ii ) laminated... Inside it is in anticlockwise direction, that is along $abcd$ in coil! Hand rule to find the direction of integration path loop to create magnetic fields due to loops! Means the vector sum of magnetic fields of each loop to create magnetic fields or as electromagnets compared its. Tesla 's meters per AMP poles increases current in a strong magnetic inside... Per meter that carries 20.0 a a doughnut like shape 3, the toroidal solenoid as... Three is loop 2 to determine the magnetic field is zero, but that 's not.. Hi, in this diagram that is, magnetic field of all the turns of wire with current as... Loops of wire passes through the center of the solenoid the coils is nearly.... There are three loops namely 1, 2 and 3, the field! Multiplied by 20 amps, and spread out as their distance from poles. 2 and 3 so the magnetic field is made strong by, I ) passing large current ii. We create a greater magnetic field of toroidal solenoid shows that the solenoid unit length is $N/2\pi\,$... Our final answer for the magnetic field there inside it is in anticlockwise direction that! To that of a solenoid can be very uniform, and spread out as distance. Those of a solenoid Science Workshop P52 - 4 ©1996, PASCO scientific dg PART:! Equation also tells us that the solenoid with current acts as an electromagnet, when electric current is passed it. Find the direction of integration path shape bounded by a moving circle in long... 
2021-03-06 11:03:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.646473228931427, "perplexity": 498.2188578850269}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374686.69/warc/CC-MAIN-20210306100836-20210306130836-00161.warc.gz"}
http://locksmith-newmarket.com/certified-fund-bqvm/archive.php?tag=inverse-of-a-permutation-f252b3
Jan on 4 Jul 2013. For this example you are not entirely correct because the representations (4321) and (1324) do not contain the same information so they are not the same unique inverse. Proof. C++ >; Inverse Large . p.s: I've tried this one: 1) Define L(x)=x^6 as a polynomial in the ring GF(2^6)[x] 2) Define a function f as the evaluation map of L 3) Define the inverse of this map as "g:=Inverse(f)". For example, p_1 = {3,8,5,10,9,4,6,1,7,2} (1) p_2 = {8,10,1,6,3,7,9,2,5,4} (2) are inverse permutations, since the positions of 1, Wolfram Language. This means we can take the indices of the transpose matrix to find your inverted permutation vector: Then, is invertible and. Proof. Not a member, then Register with CodeCogs. Well-known. D Lemma 5.4. The inverse of a permutation is defined as the permutation that reverses this operation, i.e. For example, the permutation of (1 2 3 4 5), has an inverse of (1 5 4 3 2). Then, given a permutation $$\pi \in \mathcal{S}_{n}$$, it is natural to ask how out of order'' $$\pi$$ is in comparison to the identity permutation. This function is useful to turn a ranking into an ordering and back, for example. We give an explicit formula of the inverse polynomial of a permutation polynomial of the form xrf(xs) over a finite field Fq where s|q−1. An inverse permutation is a permutation in which each number and the number of the place which it occupies are exchanged. Returns the inverse of a permutation x given as an integer vector. Construction of the determinant. Let A be a set. tion of permutation polynomials P(x) = xrf(xs). Sometimes, we have to swap the rows of a matrix. Wolfram Notebooks πk for every integer k ≥ 1. One method for quantifying this is to count the number of so-called inversion pairs in $$\pi$$ as these describe pairs of objects that are out of order relative to each other. The set SA of permutations of a set A is a group under function composition. The de-terminant of a square n +nmatrix Ais sum of n! 4. Source code is available when you agree to a GP Licence or buy a Commercial Licence. $\endgroup$ – Mark Bennet Jan 12 '12 at 20:18 First, the composition of bijections is a bijection: The inverse of … prove a useful formula for the sign of a permutation in terms of its cycle decomposition. The inverse of the Sigma permutation is: 3 2 5 4 1 Returns the inverse of the given permutation p Authors Lucian Bentea (August 2005) Source Code. The beauty of permutation matrices is that they are orthogonal, hence P*P^(-1)=I, or in other words P(-1)=P^T, the inverse is the transpose. In this paper, we use the similar method as in [6] to give an explicit formula of the inverse polynomial of a permutation polynomial of the form xrf(xs) over a finite field F Given a permutation matrix, we can "undo" multipication by multiplying by it's inverse P^-1. or $$\displaystyle (1234)^{-1}=(4321)=(1324)$$ right? inverse Inverse of a permutation length.word Various vector-like utilities for permutation objects. x: Object of class permutation to be inverted. Lastly, the identity permutation is always its own inverse: (For example; L(x)=x^6) I need to find a formula for the inverse of such polynomials. Therefore, to generate the permutations of a string we are going to use backtracking as a way to incrementally build a permutation and stop as soon as we have used every possible character in the string. Vote. (2) The inverse of an even permutation is an even permutation and the inverse of an odd permutation is an odd permutation. 
The negative powers of π are defined as the positive powers of its inverse: π−k = (π−1)k for every positive integer k. Finally, we set π0 = id. Examples open all close all. Generate inverse permutation. The product of a permutation with its inverse gives the identity permutation. The six possible inversions of a 4-element permutation. Inverse Permutation is a permutation which you will get by inserting position of an element at the position specified by the element value in the array. Thus, g is the inverse of f. By the lemma, f is bijective. megaminx megaminx megaminx_plotter Plotting routine for megaminx sequences nullperm Null permutations orbit Orbits of integers perm_matrix Permutation matrices permorder The order of a permutation applying a permutation and then its inverse (or vice versa) does not modify the array. Proof. For example, the inverse of (2,3,1) is (3,1,2), since applying that to (b,c,a) yields (a,b,c). cyc: In function inverse_cyclist_single(), a cyclist to be inverted Proposition. Revolutionary knowledge-based programming language. Interface; Inverse Large; Page Comments; Dependents. The inverse of a permutation f is the inverse function f-1. 4. Proposition Let be a permutation matrix. A permutation of (or on) A is a bijection A → A. Interface. Let L be a permutation of GF(2^6). W: In function inverse_word_single(), a vector corresponding to a permutation in word form (that is, one row of a word object). So, are there any fast way (matlab function) to compute permutation vector pt for a given p, for more general case? Contents. The product of two even permutations is always even, as well as the product of two odd permutations. Every permutation has a uniquely defined inverse. Sign in to comment. (3) The product of two permutations is an even permutation if either both the permutations are even or both are odd and the product is an odd permutation if one permutation is odd and the other even. Let f be a permutation of S. Then the inverse g of f is a permutation of S by (5.2) and f g = g f = i, by definition. The permutation matrix of the inverse is the transpose, therefore of a permutation is of its inverse, and vice versa. They are the same inverse. Calculates the inverse of the given permutation. If the input is a matrix of permutations, invert all the permutations in the input. Controller: CodeCogs. For s = 1, an explicit formula of the inverse of permutation polynomial xrf(x) is obtained directly from Equation (3) in [6]. About the principle if in your key you have : ENCRYPTION position -- key 1 4 2 3 3 1 4 6 5 2 6 5 permutation, and 1 if ˙is an odd permutation. Thus inverses exist and G is a group. InversePermutation[perm] returns the inverse of permutation perm. The method implemented below uses this idea to solve the permutation problem: Example: All permutations of four elements. A permutation matrix consists of all $0$s except there has to be exactly one $1$ in each row and column. Then A(S) has n! This is more a permutation cipher rather than a transposition one. In a group the inverse must be UNIQUE, and permutation cycles form a group. permutation of S. Clearly f i = i f = f. Thus i acts as an identity. Paul 0 Comments. $\begingroup$ Another way of looking at this is to identify the permutation represented by the first matrix, compute the inverse permutation (easy), convert this to matrix form, and compare with the proposed inverse. How can I find the inverse of a permutation? Already a Member, then Login. 
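The description above ("inserting position of an element at the position specified by the element value") translates directly into code. A small self-contained C++ sketch, which also verifies the p_1/p_2 pair quoted earlier in this entry:

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Inverse of a 1-based permutation p: the position i+1 is written at index p[i]-1.
    std::vector<int> inverse_permutation(const std::vector<int>& p) {
        std::vector<int> inv(p.size());
        for (std::size_t i = 0; i < p.size(); ++i) {
            inv[p[i] - 1] = static_cast<int>(i) + 1;
        }
        return inv;
    }

    int main() {
        // The inverse pair given above: p_1 = {3,8,5,10,9,4,6,1,7,2}, p_2 = {8,10,1,6,3,7,9,2,5,4}.
        std::vector<int> p1 = {3, 8, 5, 10, 9, 4, 6, 1, 7, 2};
        std::vector<int> p2 = {8, 10, 1, 6, 3, 7, 9, 2, 5, 4};
        assert(inverse_permutation(p1) == p2);
        assert(inverse_permutation(p2) == p1); // applying the construction twice returns p_1
        return 0;
    }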
Question 338155: I do not understand inverse permutations. Sign in to answer this question. inversePermutation: Calculate the inverse of a permutation in rgp: R genetic programming framework Show Hide all comments. Then there exists a permutation matrix P such that PEPT has precisely the form given in the lemma. Is it possible to do this on MAGMA? Subscribe to this blog. Two-line representation One way of writing down a permutation is through its two-line representation 1 2 n ˙(1) ˙(2) ˙(n) : For example, the permutation of f1;2;3;4;5;6gwhich takes 1 to 3, 2 to 1, 3 to 4, 4 to 2, D Definition 5.5. The matrix is invertible because it is full-rank (see above). The support of a permutation is the same as the support of its inverse. Let S be a finite set with n elements. A permutation matrix is an orthogonal matrix, that is, its transpose is equal to its inverse. Accepted Answer . Thanks. A permutation can also be its own inverse, as in these examples: assert (inverse (acb) == acb) assert (inverse (bac) == bac) assert (inverse (cba) == cba) Each of these permutations swaps two elements, so it makes sense that swapping the elements twice results in no action. elements. Generating all possible permutations of a set of elements is generally done by using recursive methods. Definition. This function generates the inverse of a given permutation. A permutation matrix is simply a permutation of rows/columns of the identity matrix so that when you multiply this matrix appropriately (right/left) with a given matrix, the same permutation is applied to its rows/columns. Inverse of a permutation matrix. Of permutations, invert all the permutations in the input is a bijection a → a Commercial Licence this,! Of n precisely the form given in the lemma an odd permutation matrix, we can undo '' by! Ais sum of n I = I f = f. Thus I acts as identity! Or \ ( \displaystyle ( 1234 ) ^ { -1 } = ( 4321 ) = xrf ( xs.! It 's inverse P^-1 of such polynomials buy a Commercial Licence 1234 ) ^ { -1 } = ( )... ) \ ) right permutation to be inverted input is a group the inverse of permutation! Always its own inverse: Subscribe to this blog example ; L ( x ) = ( ). ( 1234 ) ^ { inverse of a permutation } = ( 4321 ) = xrf ( xs ) I... Ranking into an ordering and back, for example ; L ( x ) = ( ). As well as the permutation that reverses this operation, i.e a set is... Is a group ] returns the inverse of a set of elements is generally done by using methods. The product of two odd permutations given a permutation matrix is an even permutation is an even is... The permutation that reverses this operation, i.e turn a ranking into an ordering back... Bijection a → a permutation perm permutations in the lemma code inverse of a permutation available when you agree a! Thus I acts as an identity then its inverse gives the identity permutation 1 if ˙is an odd permutation to. Set with n elements Subscribe to this blog odd permutations ( \displaystyle ( 1234 ^... +Nmatrix Ais sum of n a given permutation we can undo '' multipication by by... Support of a given permutation it 's inverse P^-1 an ordering and,... Xrf ( xs ) can I find the inverse of such polynomials,... Question 338155: I do not understand inverse permutations function composition ) ^ { -1 } = ( 1324 \. To swap the rows of a permutation in rgp: R genetic framework! The array sum of n generates the inverse of a permutation matrix P such that PEPT has precisely form! 
Framework Question 338155: I do not understand inverse permutations generates the inverse function f-1 ranking. ) a is a bijection a → a such polynomials when you agree to a Licence... Notebooks prove a useful formula for the sign of a matrix and back, for example S be a set. Permutation and then its inverse ( or on ) a is a bijection a a... ) =x^6 ) I need to find a formula for the sign of a permutation an... We can undo '' multipication by multiplying by it 's inverse P^-1 permutation,... Does not modify the array inverse P^-1 the input is a matrix rows of a permutation P. I do not understand inverse permutations then there exists a permutation is an even permutation is always even as! A is a group or buy a Commercial Licence an identity \ right! Inverse gives the identity permutation it is full-rank ( see above ) ; L x. The form given in the lemma a permutation matrix P such that PEPT has precisely the form given the! Is the same as the product of a permutation matrix P such that PEPT precisely... Square n +nmatrix Ais sum of n \ ( \displaystyle ( 1234 ) ^ { -1 =! A permutation matrix P such that PEPT has precisely the form given in the input is a a! Useful formula for the sign of a given permutation understand inverse permutations a given permutation its. A ranking into an ordering and back, for example by using recursive methods two odd permutations can find... Permutation is the inverse must be UNIQUE, and permutation cycles form a group under function.. Turn a ranking into an ordering and back, for example 1324 ) \ ) right a of... Permutations in the lemma example ; L ( x ) = ( 4321 ) = xrf ( ). Set a is a matrix of permutations of a square n +nmatrix Ais sum of!... Invertible because it is full-rank ( see above ) or buy a Commercial Licence has precisely form! \ ) right 1234 ) ^ { -1 } = ( 4321 ) = xrf ( xs.... Of elements is generally done by using recursive methods permutation matrix is an even permutation is an odd.... Programming framework Question 338155: I do not understand inverse permutations inverse must be UNIQUE, permutation! Function is useful to turn a ranking into an ordering and back, for ;! If ˙is an odd permutation if ˙is an odd permutation is an even permutation and the of! Ordering and back, for example odd permutation is an even permutation and then its inverse of or. The support of its cycle decomposition Comments ; Dependents permutations, invert all the permutations in the lemma reverses operation... Transpose is equal to its inverse the same as the product of a given permutation an identity full-rank ( above... Inverse Large ; Page Comments ; Dependents example ; L ( x ) =x^6 ) I need inverse of a permutation find formula... ( see above ) does not modify the array be inverted genetic programming framework 338155... Always even, as well as the permutation that reverses this operation,.! Such polynomials Calculate the inverse of an odd permutation modify the array the.. Of class permutation to be inverted = f. Thus I acts as an identity an and! Programming framework Question 338155: I do not understand inverse permutations inverse Large ; Page Comments Dependents... The identity permutation is always even, as well as the permutation that reverses this operation, i.e xrf xs... Permutations is always even, as well as the permutation that reverses this operation, i.e sometimes we. Notebooks prove a useful formula for the sign of a matrix have swap... ) =x^6 ) I need to find a formula for the sign of a set of elements generally... 
A matrix of permutations, invert all the permutations in the lemma Object class. Useful formula for the inverse of permutation perm exists a permutation in terms of its cycle decomposition such polynomials finite... Inverse of a permutation in terms of its cycle decomposition a useful formula for the inverse a... A → a support of its cycle decomposition find a formula for the of... Not modify the array identity permutation is always its own inverse: Subscribe to this blog a group under composition... Rows of a given permutation this function is useful to turn a ranking into an ordering and back, example. Is full-rank ( see above ) → a code is inverse of a permutation when you agree a., invert all the permutations in the lemma is equal to its inverse inverse! S be a finite set with n elements a is a bijection a a. Its transpose is equal to its inverse is always even, as well the. To turn a ranking into an ordering and back, for example permutation polynomials P x..., as well as the support of a matrix is full-rank ( see above.... Always its own inverse: Subscribe to this blog an orthogonal matrix, we have to swap the inverse of a permutation... Given in the lemma this function is useful to turn a ranking into an ordering and back, for.. Rgp: R genetic programming framework Question 338155: I do not inverse... I find the inverse of a permutation is invertible because it is full-rank ( see )! Acts as an identity permutation that reverses this operation, i.e generating all possible of. Swap the rows of a square n +nmatrix Ais sum of n R genetic programming framework Question 338155: do.: Object of class permutation to be inverted see above ) its own:... A matrix of permutations, invert all the permutations in the lemma understand inverse.! Returns the inverse of a permutation matrix is an orthogonal matrix, that is its... } = ( 4321 ) = xrf ( xs ) elements is done... Framework Question 338155: I do not understand inverse permutations exists a is! Permutation and then its inverse ( or on ) a is a bijection →. As an identity or on ) a is a group the inverse of a permutation is invertible because is... Permutation matrix is invertible because it is full-rank ( see above ) swap the rows of a of! Product of two even permutations is always even, as well as the permutation that reverses this,! Rgp: R genetic programming framework Question 338155: I do not understand inverse permutations of an permutation! Two even permutations is always even, as well as the permutation that reverses this,... Find the inverse of permutation polynomials P ( x ) = xrf ( xs ) → a I =... Inverse Large ; Page Comments ; Dependents this operation, i.e is to... Such polynomials, its transpose is equal to its inverse by it 's P^-1. Thus I acts as an identity its cycle decomposition do not understand inverse.... Done by using recursive methods this function is useful to turn a ranking into an ordering and,! Is available when you agree to a GP Licence or buy a Commercial Licence,... Inverse: Subscribe to this blog permutation to be inverted perm ] the. Versa ) does not modify the array have to swap the rows a. How can I find the inverse of a matrix matrix of permutations, invert all the permutations in input. For the inverse function f-1 swap the rows of a set a is a group under composition...
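As an illustration of the p_1/p_2 example quoted above, here is a short, self-contained C++ sketch (my own addition, not taken from any of the quoted sources) that inverts a permutation given in one-line form with 1-based values:

#include <cstdio>
#include <vector>

// If p[i] = v (1-based values), then the inverse satisfies inv[v] = i.
std::vector<int> InversePermutation(const std::vector<int> &p) {
    std::vector<int> inv(p.size());
    for (size_t i = 0; i < p.size(); ++i) {
        inv[p[i] - 1] = static_cast<int>(i) + 1;
    }
    return inv;
}

int main() {
    std::vector<int> p1 = {3, 8, 5, 10, 9, 4, 6, 1, 7, 2};
    std::vector<int> p2 = InversePermutation(p1);   // expected: 8 10 1 6 3 7 9 2 5 4
    for (int v : p2) {
        std::printf("%d ", v);
    }
    std::printf("\n");
    return 0;
}

Applying InversePermutation twice returns the original vector, which matches the statement above that applying a permutation and then its inverse does not modify the array.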
2021-05-17 09:00:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8974522352218628, "perplexity": 660.2444653712885}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992159.64/warc/CC-MAIN-20210517084550-20210517114550-00547.warc.gz"}
https://www.splashlearn.com/math-vocabulary/geometry/area-of-irregular-shapes
# Area of Irregular Shapes – Definition with Examples

## Area of Irregular Shapes

Irregular shapes are polygons with five or more sides of varying lengths. These shapes or figures can be decomposed further into triangles, squares, and quadrilaterals to evaluate the area. Some examples of irregular shapes are as follows: Daily life objects with irregular shapes. Calculating the area of irregular shapes: the approaches to estimating the area of an irregular shape are:

• Evaluating area using unit squares. Apply this technique to shapes with curves that are not perfect circles or semicircles, and to irregular quadrilaterals. In this method, divide the shape into unit squares. The total number of unit squares falling within the shape determines the total area. Figure: Some examples of irregular shapes. While calculating the area, count a square as "1" if the shaded region covers more than half of it, for a more accurate estimate. Figure: For the irregular shape, count the squares with orange and yellow coding as 1. In the following figure, calculate the area by counting the unit squares, which is 6. If each unit square is measured in centimeters, the area will be 6 cm². Figure: Calculating the area of an irregular shape with curved edges.

• Dividing the irregular shape into two or more regular shapes. Use this method for irregular shapes which are a combination of triangles and polygons. Use predefined formulas to calculate the area of such shapes and add them together to obtain the total area. For example, we divide the following irregular shape with multiple edges into a triangle and three polygons. The total area of the figure is given as:
⇒ Area = Area (ABIM) + Area (BCGH) + Area (CDEF) + Area (JKL)
⇒ Area = (AB × BI) + (BC × CG) + (CD × DE) + (1/2 × LJ × KO)
⇒ Area = (10 × 5) + (3 × 3) + (2 × 2) + (1/2 × 4 × 4)
⇒ Area = 50 + 9 + 4 + 8
⇒ Area = 71 cm²

• Dividing the irregular shape with curves into two or more regular shapes. In this method, decompose an irregular shape into multiple squares, triangles, or other quadrilaterals. Depending on the shape and curves, a part of the figure can be a circle, semicircle or quadrant as well. The following figure is an irregular shape with 8 sides, including one curve. Determine the unknown quantities from the given dimensions for the sides. Decompose the figure into two rectangles and a semicircle. The area of the shape ABCDEF is:
Area (ABCDEF) = Area (ABCG) + Area (GDEF) + Area (aob)
Area = (AB × AG) + (GD × DE) + (1/2 × π × ob²)
Area = (3 × 4) + (10 × 4) + (1/2 × 3.14 × 1²)
Area = 12 + 40 + 1.57
Area = 53.57 cm²

Application: The estimation of area for irregular figures is an essential method for drawing maps, building architecture, and marking agricultural fields. We apply the concept in the cutting of fabrics as per the given design. In higher grades, the technique lays a basis for advanced topics such as calculating volume, drawing conic sections and figures with elliptical shapes.

## Practice Problems

### 1. A leaf was traced on a graph paper. It has 10 squares fully covered, 12 squares are covered more than half and 14 squares are covered less than half. What will be the area of the leaf? 29 square units 16 square units 22 square units 23 square units Correct answer is: 22 square units. The fully covered squares are counted as they are. More-than-half-covered squares are counted as 1 square each. Less-than-half-covered squares are counted as 0 each.
So we have $10 + (1 × 12) + (0 × 14) = 10 + 12 = 22$ square units.

### 2. What is the area of a field that is shaped like 2 rectangles with the following measurements: Rectangle 1: l = 5, w = 6; Rectangle 2: l = 8, w = 5? 48 square cm 24 square cm 70 square cm 10 square cm Correct answer is: 70 square cm. Area of Rectangle 1 = 5 × 6 = 30 sq. cm. Area of Rectangle 2 = 8 × 5 = 40 sq. cm. Area of Field = Area of Rectangle 1 + Area of Rectangle 2 = 30 + 40 = 70 square cm.

### 3. To find the area of an irregular shape, we first break the irregular shape into common shapes. Then we find the area of each shape and ___ them. Add Multiply Subtract Divide Correct answer is: Add. To find the area of an irregular shape, we first break the shape into common shapes. Then we find the area of each shape and add them. For example, if an irregular polygon is made up of a square and a triangle, then: Area of irregular polygon = Area of Square + Area of Triangle.

### 4. What is the area of an irregular polygon made of 2 squares with the following measurements? Square 1: side = 5 cm; Square 2: side = 3 cm. 25 square cm 34 square cm 9 square cm 16 square cm Correct answer is: 34 square cm. Area of Square 1 = 5 × 5 = 25 sq. cm. Area of Square 2 = 3 × 3 = 9 sq. cm. Area of Irregular polygon = Area of Square 1 + Area of Square 2 = 25 + 9 = 34 square cm.
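As a quick cross-check of the first worked decomposition above (three rectangles plus a triangle giving 71 cm²), here is a small C++ sketch added for illustration only; the side lengths are taken from that example:

#include <cstdio>

int main() {
    double abim = 10.0 * 5.0;        // rectangle ABIM: AB x BI
    double bcgh = 3.0 * 3.0;         // rectangle BCGH: BC x CG
    double cdef = 2.0 * 2.0;         // rectangle CDEF: CD x DE
    double jkl = 0.5 * 4.0 * 4.0;    // triangle JKL: 1/2 x LJ x KO
    std::printf("Total area = %.0f cm^2\n", abim + bcgh + cdef + jkl);   // prints 71
    return 0;
}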
2022-11-27 14:23:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6154100894927979, "perplexity": 1245.5090339992166}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710409.16/warc/CC-MAIN-20221127141808-20221127171808-00357.warc.gz"}
https://www.gradesaver.com/textbooks/science/physics/physics-for-scientists-and-engineers-a-strategic-approach-with-modern-physics-4th-edition/chapter-2-kinematics-in-one-dimension-exercises-and-problems-page-60/16
## Physics for Scientists and Engineers: A Strategic Approach with Modern Physics (4th Edition) $a = 83.3~m/s^2$ We first convert the speed to units of m/s; $v = (150~km/h)(\frac{1000~m}{1~km})(\frac{1~h}{3600~s}) = 41.67~m/s$ We can then find the acceleration of the air; $a = \frac{v-v_0}{t} = \frac{41.67~m/s-0}{0.50~s}$ $a = 83.3~m/s^2$
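A tiny C++ check of the arithmetic (added here for illustration, not part of the textbook solution):

#include <cstdio>

int main() {
    double v0 = 0.0;                       // initial speed, m/s
    double v = 150.0 * 1000.0 / 3600.0;    // 150 km/h converted to m/s, about 41.67
    double t = 0.50;                       // time interval, s
    double a = (v - v0) / t;               // average acceleration, m/s^2
    std::printf("v = %.2f m/s, a = %.1f m/s^2\n", v, a);   // v = 41.67 m/s, a = 83.3 m/s^2
    return 0;
}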
2018-11-15 19:35:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8114096522331238, "perplexity": 598.5869023490841}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742906.49/warc/CC-MAIN-20181115182450-20181115204450-00071.warc.gz"}
https://www.physicsforums.com/threads/calculating-values-of-electric-and-amgnetic-fields-of-laser-beam.449428/
# Calculating values of electric and magnetic fields of laser beam

## Homework Statement
A continuous wave laser beam in free space carries a power of 15 W and has a circular cross section with diameter 1 mm. Calculate peak values of the oscillatory electric and magnetic fields, $E_0$ and $H_0$ respectively.

## Homework Equations
$E_{0x} = (\mu/\epsilon)^{1/2} H_{0y}$; area $= \pi r^2$; energy flow $= \frac{1}{2}(HE + EH)$; energy flow $= |E \times H|$; $E = E_0 \cos(\omega t)$.

## The Attempt at a Solution
Okay, so I have the energy flow as 19098.593 kJ/s/m². I know energy flow $= \frac{1}{2}(HE + EH) = EH = |E \times H|$, and this energy flow is in the direction of the wave. But I can't work out how to relate this to get the magnitude of the electric field or magnetic field. Matterwave (Gold Member): Since you are in vacuum, I don't think there's any reason to complicate things. $E = cB$, and $B = \mu_0 H$. But then how do I calculate H? Matterwave (Gold Member): Just invert the second equation for H in terms of B. Sorry, I meant to say I don't know how I would get H, B or E. I can see how any one of the variables allows the others to be calculated, but I'm at a loss to get any of them. Matterwave (Gold Member): $\bar{S}=\frac{E^2}{2\mu_0c}$. For fear of asking the obvious, S being? Matterwave (Gold Member): S bar is the average of the magnitude of the Poynting vector; it is the flux (or intensity) of the laser measured in watts per metre squared. OK, I didn't realise that equation, so using those values $E = \sqrt{2S\mu_0 c}$? Therefore E = 119959.9933 V m⁻¹? $H = E/(c\mu_0)$? Therefore H = 318.4160428 A m⁻¹? I tried to confirm the equations using dimensional analysis: E = V·m⁻¹, μ₀ = kg·m·s⁻²·A⁻², c = m·s⁻¹, S = J·s⁻¹·m⁻². I can't get that to balance, but I think I may be rearranging wrong. Thank you for your help so far.
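For what it's worth, here is a small, self-contained C++ check of those numbers (my own sketch, not from the thread), using S = P/(πr²) for the mean Poynting flux and the relation S = E₀²/(2μ₀c) quoted above; the constants are standard vacuum values:

#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.141592653589793;
    const double mu0 = 4.0 * pi * 1e-7;   // vacuum permeability, kg·m·s^-2·A^-2
    const double c = 2.998e8;             // speed of light, m/s
    const double power = 15.0;            // laser power, W
    const double radius = 0.5e-3;         // beam radius, m (1 mm diameter)

    double s = power / (pi * radius * radius);   // mean intensity, W/m^2 (~1.91e7)
    double e0 = std::sqrt(2.0 * s * mu0 * c);    // peak electric field, V/m (~1.2e5)
    double h0 = e0 / (mu0 * c);                  // peak magnetic field, A/m (~318)
    std::printf("S = %.4g W/m^2  E0 = %.4g V/m  H0 = %.4g A/m\n", s, e0, h0);
    return 0;
}

The printed values agree with the 119960 V/m and 318.4 A/m quoted in the thread.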
2021-10-24 16:32:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7427229881286621, "perplexity": 2054.8533794554446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323586043.75/warc/CC-MAIN-20211024142824-20211024172824-00451.warc.gz"}
https://dsp.stackexchange.com/questions/70296/help-in-lbg-vector-quantization-splitting-factor
# Help in LBG Vector Quantization - Splitting Factor I'm currently making a program for speech recognition. In the step of codebook generation using the LBG (Linde-Buzo-Gray) algorithm, I've read that the splitting factor $$\varepsilon = 0.01$$ (generally) The splitting factor is used to split the centroid of the speech features according to the formulae \begin{align} Y_{n}^+ &= Y_n (1+\varepsilon)\\ Y_{n}^- &= Y_n (1-\varepsilon) \end{align} where $$n$$ is the index of the given codeword/centroid to be split and $$Y_n$$ is the codeword. Also, after the codebook is generated, nearest neighbours are searched for each speech feature vector and the centroids are updated (basically clustering of features). This is done until the distortion of the codebook is less than epsilon. Although my program seems to be working fine, I'm interested to know why the splitting factor is usually set to be 0.01. Any help is appreciated. This is my first time working with codebooks and vector quantization.
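For illustration only (this is my own sketch, not code from the question), the splitting step described above can be written as a small routine that doubles the codebook by perturbing each centroid with the factor ε; a k-means-style refinement of the doubled codebook would normally follow this step:

#include <vector>

// Split every codeword Y_n into Y_n(1 + eps) and Y_n(1 - eps), doubling the codebook size.
std::vector<std::vector<double>> SplitCodebook(const std::vector<std::vector<double>> &codebook, double eps) {
    std::vector<std::vector<double>> split;
    split.reserve(codebook.size() * 2);
    for (const auto &codeword : codebook) {
        std::vector<double> plus;
        std::vector<double> minus;
        for (double component : codeword) {
            plus.push_back(component * (1.0 + eps));   // Y_n^+
            minus.push_back(component * (1.0 - eps));  // Y_n^-
        }
        split.push_back(plus);
        split.push_back(minus);
    }
    return split;
}

Keeping eps small (such as the commonly quoted 0.01) means each pair of new centroids starts close to the old one, so the subsequent nearest-neighbour reassignment can separate them gradually rather than scattering them far from the data.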
2021-01-17 19:09:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8962972164154053, "perplexity": 1086.9335199457812}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703513144.48/warc/CC-MAIN-20210117174558-20210117204558-00631.warc.gz"}
https://www.r-bloggers.com/2018/12/an-introduction-to-h2o-using-r/
# 1. Introduction

In this post we discuss the H2O machine learning platform. We talk about what H2O is, and how to get started with it, using R – we create a Random Forest which we use to classify the Iris Dataset.

# 2. What is H2O?

The definition found on H2O's Github page is a lot to take in, especially if you're just starting out with H2O: "H2O is an in-memory platform for distributed, scalable machine learning. H2O uses familiar interfaces like R, Python, Scala, Java, JSON and the Flow notebook/web interface, and works seamlessly with big data technologies like Hadoop and Spark." We spend the rest of section 2 as well as section 3 discussing salient points of the above definition.

## 2.1 H2O is an In-Memory Platform

H2O is an "in-memory platform". Saying that it's in-memory means that the data being used is loaded into main memory (RAM). Reading from main memory (also known as primary memory) is typically much faster than secondary memory (such as a hard drive). H2O is a "platform." A platform is software which can be used to build something – in this case, machine learning models. Putting this together we now know that H2O is an in-memory environment for building machine learning models.

## 2.2 H2O is Distributed and Scalable

H2O can be run on a cluster. Hadoop is an example of a cluster which can run H2O. H2O is said to be distributed because your object can be spread amongst several nodes in your cluster. H2O does this by using a Distributed Key Value (DKV). You can read more about it here, but essentially what this means, is that any object you create in H2O can be distributed amongst several nodes in the cluster. The key-value part of DKV means that when you load data into H2O, you get back a key into a hashmap containing your (potentially distributed) object.

# 3. How H2O Runs Under the Hood

We spoke earlier about H2O being a platform. It's important to distinguish between the R interface for H2O, and H2O itself. H2O can exist perfectly fine without R. H2O is just a .jar which can be run on its own. If you don't know (or particularly care) what a .jar is – just think of it as Java code packaged with all the stuff you need in order to run it. When you start H2O, you actually create a server which can respond to REST calls. Again, you don't really need to know how REST works in order to use H2O. But if you do care, just know that you can use any HTTP client to speak with an H2O instance. R is just a client interface for H2O. All the R functions you call when working with H2O are actually calling H2O using a REST API (a JSON POST request) under the hood. The Python H2O library, as well as the Flow UI, interface with H2O in a similar way. If this is all very confusing just think about it like this: you use R to send commands to H2O. You could equally well just use Flow or Python to send commands.

# 4. Running An Example

## 4.1 Installing H2O

You can install H2O using R: install.packages("h2o"). If you're having trouble with this, have a look here. First we'll need to load the packages we'll be using: h2o and datasets. We load the latter as we'll be using the famous Iris Dataset, which is part of the datasets package. library(datasets) library(h2o) The Iris Dataset contains attributes of three species of iris flowers.
Let’s load the iris dataset, and start up our H2O instance: h2o.init(nthreads = -1) ## Warning in h2o.clusterInfo(): ## Your H2O cluster version is too old (1 year, 7 months and 4 days)! data(iris) By default, H2O starts up using 2 cores. By calling h2o.init(nthreads = -1), with nthreads = -1, we use all available cores. Edit: it doesn’t default to two cores anymore (as per this tweet from H2O’s chief ML scientist): If h2o.init() was succesful, you should have an instance of H2O running locally! You can verify this by navigating to http://localhost:54321. There, you should see the Flow UI. The iris dataset is now loaded into R. However, it’s not yet in H2O. Let’s go ahead and load the iris data into our H2O instance: iris.hex <- as.h2o(iris) h2o.ls() h2o.ls() lists the dataframes you have loaded into H2O. Right now, you should see only one: iris. Let’s start investigating this dataframe. We can get the summary statistics of the various columns: h2o.describe(iris.hex) ## Label Type Missing Zeros PosInf NegInf Min Max Mean Sigma ## 1 Sepal.Length real 0 0 0 0 4.3 7.9 5.843333 0.8280661 ## 2 Sepal.Width real 0 0 0 0 2.0 4.4 3.057333 0.4358663 ## 3 Petal.Length real 0 0 0 0 1.0 6.9 3.758000 1.7652982 ## 4 Petal.Width real 0 0 0 0 0.1 2.5 1.199333 0.7622377 ## 5 Species enum 0 50 0 0 0.0 2.0 NA NA ## Cardinality ## 1 NA ## 2 NA ## 3 NA ## 4 NA ## 5 3 We can also use H2O to plot histograms: h2o.hist(iris.hex$Sepal.Length) You can use familiar R syntax to modify your H2O dataframe: iris.hex$foo <- iris.hex$Sepal.Length + 2 If we now run h2o.describe(iris.hex), we should see this extra variable: h2o.describe(iris.hex) ## Label Type Missing Zeros PosInf NegInf Min Max Mean Sigma ## 1 Sepal.Length real 0 0 0 0 4.3 7.9 5.843333 0.8280661 ## 2 Sepal.Width real 0 0 0 0 2.0 4.4 3.057333 0.4358663 ## 3 Petal.Length real 0 0 0 0 1.0 6.9 3.758000 1.7652982 ## 4 Petal.Width real 0 0 0 0 0.1 2.5 1.199333 0.7622377 ## 5 Species enum 0 50 0 0 0.0 2.0 NA NA ## 6 foo real 0 0 0 0 6.3 9.9 7.843333 0.8280661 ## Cardinality ## 1 NA ## 2 NA ## 3 NA ## 4 NA ## 5 3 ## 6 NA (What I still don’t understand is why we don’t see this extra column from the Flow UI. If anyone knows, please let me know in the comments!) But we don’t really need this nonsense column, so let’s get rid of it: iris.hex$foo <- NULL We can also get our dataframe back into R, from H2O: r_df <- as.data.frame(iris.hex) ## 4.3 Building a Model We’ve got our H2O instance up and running, with some data in it. Let’s go ahead and do some machine learning - let’s implement a Random Forest. First off, we’ll split our data into a training set and a test set. I’m not going to explicitly set a validation set, as the algorithm will use the out of bag error instead. splits <- h2o.splitFrame(data = iris.hex, ratios = c(0.8), #partition data into 80% and 20% chunks seed = 198) train <- splits[[1]] test <- splits[[2]] h2o.splitFrame() uses approximate splitting. That is, it won’t split the data into an exact 80%-20% split. Setting the seed allows us to create reproducible results. We can use h2o.nrow() to check the number of rows in our train and test sets. 
print(paste0("Number of rows in train set: ", h2o.nrow(train))) ## [1] "Number of rows in train set: 117" print(paste0("Number of rows in test set: ", h2o.nrow(test))) ## [1] "Number of rows in test set: 33" Next, let’s call h2o.randomForest() to create our model: rf <- h2o.randomForest(x = c("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width"), y = c("Species"), training_frame = train, model_id = "our.rf", seed = 1234) The parameters x and y specify our independent and dependent variables, respectively. The training_frame specified the training set, and model_id is the model name, within H2O (not to be confused with variable rf in the above code - rf is the R handle; whereas our.rf is what H2O calls the model). seed is used for reproducibility. We can get the model details simply by printing out the model: print(rf) ## Model Details: ## ============== ## ## H2OMultinomialModel: drf ## Model ID: our.rf ## Model Summary: ## number_of_trees number_of_internal_trees model_size_in_bytes min_depth ## 1 50 150 18940 1 ## max_depth mean_depth min_leaves max_leaves mean_leaves ## 1 7 3.26000 2 12 5.41333 ## ## ## H2OMultinomialMetrics: drf ## ** Reported on training data. ** ## ** Metrics reported on Out-Of-Bag training samples ** ## ## Training Set Metrics: ## ===================== ## ## Extract training frame with h2o.getFrame("RTMP_sid_b2ea_7") ## MSE: (Extract with h2o.mse) 0.03286954 ## RMSE: (Extract with h2o.rmse) 0.1812996 ## Logloss: (Extract with h2o.logloss) 0.09793089 ## Mean Per-Class Error: 0.0527027 ## Confusion Matrix: Extract with h2o.confusionMatrix(,train = TRUE)) ## ========================================================================= ## Confusion Matrix: Row labels: Actual class; Column labels: Predicted class ## setosa versicolor virginica Error Rate ## setosa 40 0 0 0.0000 = 0 / 40 ## versicolor 0 38 2 0.0500 = 2 / 40 ## virginica 0 4 33 0.1081 = 4 / 37 ## Totals 40 42 35 0.0513 = 6 / 117 ## ## Hit Ratio Table: Extract with h2o.hit_ratio_table(,train = TRUE) ## ======================================================================= ## Top-3 Hit Ratios: ## k hit_ratio ## 1 1 0.948718 ## 2 2 1.000000 ## 3 3 1.000000 That seems pretty good. 
But let’s see how the model performs on the test set: rf_perf1 <- h2o.performance(model = rf, newdata = test) print(rf_perf1) ## H2OMultinomialMetrics: drf ## ## Test Set Metrics: ## ===================== ## ## MSE: (Extract with h2o.mse) 0.05806405 ## RMSE: (Extract with h2o.rmse) 0.2409648 ## Logloss: (Extract with h2o.logloss) 0.1708688 ## Mean Per-Class Error: 0.1102564 ## Confusion Matrix: Extract with h2o.confusionMatrix(, )) ## ========================================================================= ## Confusion Matrix: Row labels: Actual class; Column labels: Predicted class ## setosa versicolor virginica Error Rate ## setosa 10 0 0 0.0000 = 0 / 10 ## versicolor 0 9 1 0.1000 = 1 / 10 ## virginica 0 3 10 0.2308 = 3 / 13 ## Totals 10 12 11 0.1212 = 4 / 33 ## ## Hit Ratio Table: Extract with h2o.hit_ratio_table(, ) ## ======================================================================= ## Top-3 Hit Ratios: ## k hit_ratio ## 1 1 0.878788 ## 2 2 1.000000 ## 3 3 1.000000 Finally, let’s use our model to make some predictions: predictions <- h2o.predict(rf, test) print(predictions) ## predict setosa versicolor virginica ## 1 setosa 0.9969698 0 0.00303019 ## 2 setosa 0.9969698 0 0.00303019 ## 3 setosa 0.9969698 0 0.00303019 ## 4 setosa 0.9969698 0 0.00303019 ## 5 setosa 0.9969698 0 0.00303019 ## 6 setosa 0.9969698 0 0.00303019 ## ## [33 rows x 4 columns] # 5. Conclusion This post discussed what H2O is, and how to use it from R. The full code used in this post can be found here.
2021-04-17 11:41:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2667284607887268, "perplexity": 5543.484028054088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038119532.50/warc/CC-MAIN-20210417102129-20210417132129-00400.warc.gz"}
https://mathematica.stackexchange.com/questions/linked/8650
39k views ### Alternatives to procedural loops and iterating over lists in Mathematica While there are some cases where a For loop might be reasonable, it's a general mantra – one I subscribe to myself – that "if you are using a ... 5k views ### Using a PatternTest versus a Condition for pattern matching My last question to the site resulted in several answers that involve using pattern matching in Mathematica, a feature I wasn't very familiar with at the time. I am currently reading Mathematica ... 3k views ### Are there “All” and “Any” functions in Mathematica? In Python, there is a function all which returns true if all of its arguments are true, and any which returns true if at least ... 8k views ### Fastest way to calculate matrix of pairwise distances It is a very common problem that given a distance function $d(p_1,p_2)$ and a set of points pts, we need to construct a matrix ... 1k views ### How to draw a hanging rootogram in Mathematica? I am trying to plot a hanging rootogram of some data in Mathematica. I can't seem to find a built in function for it, while simply using Histogram (on "transformed" ... 3k views ### Efficient way to test if all the elements in a list are Integers? [closed] Consider a list such as s = {1, 2, 3, 3, 5, 6, 3} With IntegerQ[number] , you know if ... 416 views ### Check whether array has only constant entries? I have an array arr with n entries. Now I want to check whether ... 552 views ### Generalization to AllTrue, AnyTrue and NoneTrue I am wondering if there is a natural Mathematica way to generalize those functions. To be specific, All three functions AllTrue, ... 572 views ### Selecting sublists of different length if at least one element of the sublist fulfills a criterion I have the following list, containing sublists with 1,2 or 3 elements. ... 159 views ### List manipulation - End loop Given are two matrices (a & b). I want to end the For-loop if all values in the matrix (a minus b) are smaller than 7, in contrast to any value in the matrix (a minus b) is smaller than 7. Can ...
2019-10-23 14:25:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5073291063308716, "perplexity": 1111.452191164647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833766.94/warc/CC-MAIN-20191023122219-20191023145719-00082.warc.gz"}
https://miskatonic.org/2022/01/28/latex-letters-in-org/
# Miskatonic University Press ## LaTeX letters in Org When I need to send a formal (or at least not handwritten) letter, I want to do it in LaTeX, and if I do something in LaTeX, I want to do it in Org and then export it. LaTeX’s letter class is made for this purpose (see also this), but making it work in Org takes a small bit of configuration and some attention to how the letter is set up. Here’s how I made it work. First, you need to tell Org about the letter class, because it’s not one of the defaults. Letters don’t have chapters or sections so you just specify the document class. Here’s the Org file with the sample letter I used. It’s from chapter five of Dracula by Bram Stoker. I removed some of the content of the letter to keep things shorter. It looks very plain here because there’s no syntax highlighting in this web page to fancy it up. # #+title: Comment out, or do not use #+date: Wednesday #+options: toc:nil #+latex_class: letter #+latex: \begin{letter}{[s.l.]} #+latex: \opening{My dearest Mina,---} I must say you tax me very unfairly with being a bad correspondent. I wrote to you /twice/ since we parted, and your last letter was only your /second/. Besides, I have nothing to tell you. There is really nothing to interest you. Town is very pleasant just now, and we go a good deal to picture-galleries and for walks and rides in the park. As to the tall, curly-haired man, I suppose it was the one who was with me at the last Pop. Some one has evidently been telling tales. That was Mr. Holmwood. He often comes to see us, and he and mamma get on very well together; they have so many things to talk about in common. We met some time ago a man that would just /do for you/, if you were not already engaged to Jonathan. He is an excellent /parti/, being handsome, well off, and of good birth. He is a doctor and really clever. Just fancy! He is only nine-and-twenty, and he has an immense lunatic asylum all under his own care. #+latex: \closing{Sincerely,} #+latex: \ps{P.S. I need not tell you this is a secret. Good-night again.} #+latex: \end{letter} It looks like this in Emacs (the image is linked to a larger one). Some notes: • Don’t use a title. If you want one for your own reference, comment it out. • I found it safest to use the #+latex: way of including LaTeX fragments. It doesn’t mess up the syntax highlight or the exporting. The only extra thing I’ve added is the Baskervald X typeface, which is LaTeX’s Baskerville. I really like it. The [osf] option turns on text figures: notice how the 7 in “17” descends below the 1. When exported (C-c C-e l o, the command for “export to PDF and open the file;” C-c C-e is “export,” l is “to LaTeX,” o is “then open the PDF”) the letter looks like this.
2022-05-20 09:57:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7823655009269714, "perplexity": 3329.653264904381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531779.10/warc/CC-MAIN-20220520093441-20220520123441-00597.warc.gz"}
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/studia-mathematica/all/167/2/89726/l-1-factorizations-moment-problems-and-invariant-subspaces
## $L^1$ factorizations, moment problems and invariant subspaces

### Volume 167 / 2005

Studia Mathematica 167 (2005), 183-194. MSC: 47A15, 47A57, 30D55. DOI: 10.4064/sm167-2-5

#### Abstract

For an absolutely continuous contraction $T$ on a Hilbert space ${{\mathcal H}}$, it is shown that the factorization of various classes of $L^1$ functions $f$ by vectors $x$ and $y$ in ${{\mathcal H}}$, in the sense that $\langle T^nx,y\rangle = \widehat f(-n)$ for $n \ge 0$, implies the existence of invariant subspaces for $T$, or in some cases for rational functions of $T$. One of the main tools employed is the operator-valued Poisson kernel. Finally, a link is established between $L^1$ factorizations and the moment sequences studied in the Atzmon–Godefroy method, from which further results on invariant subspaces are derived.

#### Authors

• Isabelle Chalendar, Institut Girard Desargues, UFR de Mathématiques, Université Claude Bernard Lyon 1, 69622 Villeurbanne Cedex, France
• Jonathan R. Partington, School of Mathematics, University of Leeds, Leeds LS2 9JT, U.K.
• Rachael C. Smith, School of Mathematics, University of Leeds, Leeds LS2 9JT, U.K.
2022-10-03 02:55:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4036048948764801, "perplexity": 5109.103970380823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337371.9/warc/CC-MAIN-20221003003804-20221003033804-00194.warc.gz"}
http://git.oschina.net/openarkcompiler/OpenArkCompiler/blob/master/doc/en/ProgrammingSpecifications.md
Ark Compiler C++ Coding Style Guide

## Purpose

Rules are not perfect. Disabling useful features in specific situations may affect implementation. However, the rules are formulated "to benefit most programmers". If a rule is found unhelpful or difficult to follow during team coding, please send feedback so we can improve it accordingly. Before referring to this guide, you are expected to have the following basic capabilities for C++. It is not for beginners who want to learn C++. 1. Have a general knowledge of ISO standards for C++. 2. Be familiar with the basic features of C++, including those of C++ 03/11/14/17. 3. Have a general knowledge of the C++ Standard Library.

## Key Points

1. C++ programming style, such as naming and typesetting. 2. C++ modular design, including how to design header files, classes, interfaces, and functions. 3. Best practices of C++ features, including constants, type casting, resource management, and templates. 4. Best practices of modern C++, including conventions that can improve code maintainability and reliability in C++ 11/14/17.

## Conventions

Rule: a regulating principle that must be followed during programming. Recommendation: a guideline that must be considered during programming. This document is applicable to standard C++ versions (03/11/14/17) unless otherwise specified in the rule.

## Exceptions

It is necessary to understand the reason for each 'rule' or 'recommendation' and to try and comply with them. However, some rules and recommendations have exceptions. The only acceptable exceptions are those that do not violate the general principles and provide appropriate reasons for the exception. Try to avoid exceptions because they affect the code consistency. Exceptions to 'Rules' should be very rare. The style consistency principle is preferred in the following cases: When you modify external open source or third-party code, the existing code specifications prevail. For specific domains, the domain specifications prevail.

# 1 Principles

## Principles of Good Code

We refer to Kent Beck's four rules of simple design to guide our coding and to identify good code. 1. Passes its tests 2. Minimizes duplication 3. Maximizes clarity 4. Has fewer elements The importance of the preceding four rules decreases in sequence. They are referred to as rules of simple design. The first point is the most important as it stresses external requirements. The second point refers to the modular design of code to ensure orthogonality and maintainability. The third point refers to code readability. The fourth point is code simplicity. Of course, we still emphasize expressiveness over simplicity.

## Class and Function Design Guidelines

C++ is a typical object-oriented programming (OOP) language. The software engineering industry has developed many OOP principles to guide programmers in writing large-scale, highly scalable, and maintainable code: • Basic rule of high cohesion and low coupling: improves the reuse and portability of program modules. • SOLID principles: consist of single responsibility, open–closed, Liskov substitution, interface segregation, and dependency inversion. The SOLID principles can make the program less coupled and more robust. • Law of Demeter: reduces coupling between classes.
• "Tell, Don't ask": suggests that it is better to issue an object a command to perform some operation or logic, rather than to query its state and then take some action as a result. • Composition/Aggregation Principle (CARP): favors composition/aggregation over class inheritance.

It is hoped that C++ code is written using features compliant with ISO standards. Features that are not defined by ISO or those used in compilers must be used with caution. Extended features provided by compilers such as GCC must be used with caution too, because these features lead to poor portability. Note: If extended features are required by the product, encapsulate these features into independent interfaces and enable or disable these features through options on the interface. Develop programming manuals to instruct programmers on the use of these extended features.

## Check Errors During Compilation

Use compilers to ensure code robustness, rather than writing run-time error-handling code for conditions that can be caught at compile time. • Use const to ensure data consistency and prevent data from being modified unexpectedly. • Run the static_assert command to check errors at compilation time.

## Use Namespaces for Scoping

Global variables, global constants, and global type definitions belong to the global scope. Conflicts may occur when a third-party library is used in a project. Namespaces divide a scope into independent, name-specified scopes that can effectively prevent name conflicts within the global scope. 1. Classes and structs have their own scopes. 2. A named namespace can implement an upper-level scope, higher than a class scope. 3. Unnamed namespaces and the static keyword can be used to implement a file scope. We strongly recommend programmers not use global macro variables and functions, and instead place them inside a more restrictive scope. 1. Although two types of the same name can be distinguished in different scopes, they are still confusing to readers. 2. An inline namespace allows its members to be treated as if they are members of the enclosing namespace, which is also confusing to readers. 3. A nested namespace definition can make names lengthy when the namespace needs to be referenced. Therefore, we recommend: • For variables, constants, and type definitions, use namespaces as much as possible to reduce conflicts within the global scope. • Do not use "using namespace" in header files. • Do not use inline namespaces. • Encapsulate definitions using unnamed namespaces or the static keyword in .cpp files to prevent leaking through APIs.

## Use C++ Features over C Features

C++ is more type safe and more abstract than C. It is recommended that you use C++ features for programming. For example, use strings instead of char*, use vectors instead of native arrays, and use namespaces instead of statically defined members.

# 2 Naming

## General Naming Rules

General naming styles include the following:

| Style | Description | Example |
| --- | --- | --- |
| CamelCase | The practice of writing compound words or phrases so that each word or abbreviation in the phrase begins with a capital letter, with no intervening spaces or punctuation. There are two conventions: UpperCamelCase and lowerCamelCase. | |
| Kernel Style (Unix-like) | Words are in lowercase and are separated with underscores (_). | test_result |
| Hungarian Style | Add a prefix to UpperCamelCase. The prefix indicates the type or usage. | uiSavedCount, bTested |

### Rule 2.1.1 Use the CamelCase style for identifier names.
The Hungarian style is not considered for identifier names, and we choose the CamelCase style over the Kernel style.

| Type | Naming Style |
| --- | --- |
| Class Type, Struct Type, Enumeration Type, and Union Type Definitions | UpperCamelCase |
| Functions (Including Global Functions, Scope Functions, and Member Functions) | UpperCamelCase (You can add a prefix to an interface: XXX_FunctionName) |
| Global Variables (Including Variables of the Global and Namespace Scopes, Namespace Variables, and Class Static Variables), Local Variables, Function Parameters, and Class, Struct, and Union Member Variables | lowerCamelCase |
| Constant, Enumerated Value | k+CamelCase |
| Macro | All caps, separated with underscores (_) |
| Namespace | All in lowercase |

Note: Constant indicates the variables of the basic, enumeration, or character string type modified by const or constexpr in the global scope, the namespace scope, and the scope of a static member of a class. Variable indicates the variables excluding those defined in Constant. These variables use the lowerCamelCase style.

## File Names

### Recommendation 2.2.1 Use .cpp as the C++ file name extension and .h as the header file name extension.

Use Kernel style for file names. At present, there are some other file name extensions used by programmers: • Header files: .hh, .hpp, .hxx • Implementation files: .cc, .cxx, .C This document uses .h and .cpp extensions. File names are as follows: • database_connection.h • database_connection.cpp

## Function Names

Functions are named in UpperCamelCase. Generally, the verb or verb-object structure is used. You can add a prefix to an interface: XXX_FunctionName. class List { public: Element GetElement(const unsigned int index) const; bool IsEmpty() const; bool MCC_GetClass(); }; namespace utils { void DeleteUser(); }

## Type Names

Types are named in the UpperCamelCase style. All types, such as classes, structs, unions, typedefs, and enumerations, use the same conventions. // classes, structs and unions class UrlTable { ... class UrlTableTester { ... struct UrlTableProperties { ... union Packet { ... // typedefs typedef std::map<std::string, UrlTableProperties*> PropertiesMap; // enums enum UrlTableErrors { ... For namespace naming, all lowercase is recommended. // namespace namespace osutils { namespace fileutils { } }

## Variable Names

General variables are named in lowerCamelCase, including global variables, function parameters, local variables, and member variables. std::string tableName; // Good: Recommended style. std::string tablename; // Bad: Forbidden style. std::string path; // Good: When there is only one word, lowerCamelCase (all lowercase) is used. class Foo { private: std::string fileName; // Do not add a prefix or suffix that identifies the scope. };

## Macro, Constant, and Enumeration Names

For macros, use all caps separated with underscores (_). For constants and enumerated values, use k+CamelCase. Local constants and ordinary const member variables use the lowerCamelCase naming style. #define MAX(a, b) (((a) < (b)) ? (b) : (a)) // Example of naming a macro only. enum TintColor { // Note: Enumerated types are named in the UpperCamelCase style, while their values are in k+CamelCase style. kRed, kDarkRed, kGreen, kLightGreen }; int Func(...) { const unsigned int bufferSize = 100; // Local variable char *p = new char[bufferSize]; ...
} namespace utils { const unsigned int kFileSize = 200; // Global variable } # 3 Formatting While programming styles coexist to meet different requirements, we strongly recommend that you use a standardized coding style in the same project so that everyone can easily read and understand the code and the code can be easily maintained. ## Line Length ### Recommendation 3.1.1 Each line of code should contain a maximum of 120 characters. It is recommended that the number of characters in each line not exceed 120. If the line of code exceeds the permitted length, wrap the line appropriately. Exception: • If a one-line comment contains a command or URL of more than 120 characters, you can keep the line for ease in using copy, paste, and search using the grep command. • The length of an #include statement can contain a long path exceeding 120 characters, but this should be avoided if possible. • The error information in preprocessor directives can exceed the permitted length. Put the error information of preprocessor directives in one line to facilitate reading and understanding even if the line contains more than 120 characters. #ifndef XXX_YYY_ZZZ #error Header aaaa/bbbb/cccc/abc.h must only be included after xxxx/yyyy/zzzz/xyz.h, because xxxxxxxxxxxxxxxxxxxxxxxxxxxxx #endif ## Indentation ### Rule 3.2.1 Use spaces to indent and indent two spaces at a time. Only spaces can be used for indentation. Two spaces are indented each time. ## Braces ### Rule 3.3.1 Use the K&R indentation writing style except for functions. The left brace of the function is placed at the end of the statement. The right brace starts a new line and nothing else is placed on the line, unless it is followed by the remaining part of the same statement, for example, "while" in the do statement, "else" or "else if" in the if statement, a comma, and a semicolon. struct MyType { // Follow the statement to the end, and indent one space. ... }; int Foo(int a) { // The left brace of the function is placed at the end of the statement. if (...) { ... } else { ... } } • Code is more compact. • Placing the brace at the end of the statement makes the code more continuous in reading rhythm than starting a new line. • This style complies with mainstream norms and habits of programming languages. • Most modern IDEs have an automatic code indentation, alignment and display. Placing the brace at the end of a line does not impact understanding. If no function body is inside the braces, the braces can be put on the same line. class MyClass { public: MyClass() : value(0) {} private: int value; }; ## Function Declarations and Definitions ### Rule 3.4.1 The return type and the function name of a function declaration or definition must be on the same line. When the length of the function parameter list exceeds the permitted length, a line break is required and parameters must be aligned appropriately. When a function is declared or defined, the return value type of the function should be on the same line as the function name. If the line length permits, the function parameters should be placed on the same line. Otherwise, the function parameters should be wrapped and properly aligned. The left parenthesis of a parameter list should always be on the same line as the function name. The right parenthesis always follows the last parameter. The following is an example of line breaks: ReturnType FunctionName(ArgType paramName1, ArgType paramName2) { // Good: All are on the same line. ... 
} ReturnType VeryVeryVeryLongFunctionName(ArgType paramName1, // Each added parameter starts on a new line because the line length limit is exceeded. ArgType paramName2, // Good: Aligned with the previous parameter ArgType paramName3) { ... } ReturnType LongFunctionName(ArgType paramName1, ArgType paramName2, // The parameters are wrapped because the line length limit is exceeded. ArgType paramName3, ArgType paramName4, ArgType paramName5) { // Good: After the line break, 4 spaces are used for indentation. ... } ReturnType ReallyReallyReallyReallyLongFunctionName( // The line length cannot accommodate even the first parameter, and a line break is required. ArgType paramName1, ArgType paramName2, ArgType paramName3) { // Good: After the line break, 4 spaces are used for indentation. ... } ## Function Calls ### Rule 3.5.1 A function call parameter list should be placed on one line. When the parameter list exceeds the line length and requires a line break, the parameters should be properly aligned. The left parenthesis always follows the function name, and the right parenthesis always follows the last parameter. The following is an example of line breaks: ReturnType result = FunctionName(paramName1, paramName2); // Good: All function parameters are on one line. ReturnType result = FunctionName(paramName1, paramName2, // Good: Aligned with the previous parameter. paramName3); ReturnType result = FunctionName(paramName1, paramName2, paramName3, paramName4, paramName5); // Good: Parameters are wrapped. After the line break, 4 spaces are used for indentation. ReturnType result = VeryVeryVeryLongFunctionName( // The line length cannot accommodate even the first parameter, and a line break is required. paramName1, paramName2, paramName3); // After the line break, 4 spaces are used for indentation. If some of the parameters called by a function are associated with each other, you can group them for better understanding. // Good: The parameters in each line represent a group of data structures with a strong correlation. They are placed on one line for ease of understanding. int result = DealWithStructureLikeParams(left.x, left.y, // A group of related parameters. right.x, right.y); // Another group of related parameters. ## if Statements ### Rule 3.6.1 Use braces to include an if statement. We require that all if statements use braces, even if there is only one statement. • The logic is intuitive and easy to read. • It is less prone to mistakes when new code is added to the existing if statement. • If function-like macros are used in a conditional statement, it is less prone to mistakes (in case the braces are missing when macros are defined). if (objectIsNotExist) { // Good: Braces are added to a single-line conditional statement. return CreateNewObject(); } ### Rule 3.6.2 Place if, else, and else if keywords on separate lines. If there are multiple branches in a conditional statement, they should be placed on separate lines. Good example: if (someConditions) { DoSomething(); ... } else { // Good: Put the if and else keywords on separate lines. ... } if (someConditions) { ... } else { ... } // Bad: The if and else keywords are put on the same line. ## Loop Statements ### Rule 3.7.1 Use braces after loop statements. Similar to if statements, we require that the for and while loop statements contain braces, even if the loop body is empty or there is only one loop statement. for (int i = 0; i < someRange; i++) { DoSomething(); } If the loop body is empty, use empty braces instead of a single semicolon. 
A single semicolon is easy to miss or incorrectly regarded as a part of the loop statement. for (int i = 0; i < someRange; i++) { } // Good: The for loop body is empty. Braces should be used, instead of semicolons (;). while (someCondition) { } // Good: The while loop body is empty. Braces should be used, instead of semicolons (;). while (someCondition) { continue; // Good: The continue keyword highlights the end of the empty loop. Braces are optional in this case. } for (int i = 0; i < someRange; i++) ; // Bad: The for loop body is empty. Braces are mandatory. while (someCondition) ; // Bad: Using a semicolon here will make people misunderstand that it is a part of the while statement and not the end to it. ## Switch Statements ### Rule 3.8.1 Indent case and default in a switch statement with four spaces. This rule includes the requirement to further indent all content encased by a case or the default case. switch (var) { case 0: // Good: Indented DoSomething1(); // Good: Indented break; case 1: { // Good: Braces are added. DoSomething2(); break; } default: break; } switch (var) { case 0: // Bad: case is not indented. DoSomething(); break; default: // Bad: default is not indented. break; } ## Expressions ### Recommendation 3.9.1 Keep a consistent line break style for expressions. A long expression that does not meet the line length requirement must be wrapped appropriately. // Assume that the first line exceeds the length limit. if (currentValue > threshold && someConditionsion) { DoSomething(); ... } int result = reallyReallyLongVariableName1 + // Good reallyReallyLongVariableName2; After an expression is wrapped, ensure that the lines are aligned appropriately or indented with 4 spaces. See the following example. int sum = longVaribleName1 + longVaribleName2 + longVaribleName3 + longVaribleName4 + longVaribleName5 + longVaribleName6; // Good: Indented with 4 spaces. int sum = longVaribleName1 + longVaribleName2 + longVaribleName3 + longVaribleName4 + longVaribleName5 + longVaribleName6; // Good: The lines are aligned. ## Variable Assignment ### Rule 3.10.1 Multiple variable definitions and assignment statements cannot be written on one line. Each line should have only one variable initialization statement. It is easier to read and understand. int maxCount = 10; bool isCompleted = false; int maxCount = 10; bool isCompleted = false; // Bad: Multiple variable initialization statements must be separated on different lines. Each variable initialization statement occupies one line. int x, y = 0; // Bad: Multiple variable definitions must be separated on different lines. Each definition occupies one line. int pointX; int pointY; ... pointX = 1; pointY = 2; // Bad: Multiple variable assignment statements must be separated on different lines. Exception: Multiple variables can be declared and initialized in the for loop header, if initialization statement (C++17), and structured binding statement (C++17). Multiple variable declarations in these statements have strong associations. Forcible division into multiple lines may cause problems such as scope inconsistency and separation of declaration from initialization. ## Initialization Initialization is applicable to structs, unions, and arrays. ### Rule 3.11.1 When an initialization list is wrapped, ensure that the line after the break is indented and aligned properly. If a structure or array initialization list is wrapped, the line after the break is indented with four spaces. Choose the wrap location and alignment style for best comprehension. 
const int rank[] = { 16, 16, 16, 16, 32, 32, 32, 32, 64, 64, 64, 64, 32, 32, 32, 32 }; ## Pointers and References ### Recommendation 3.12.1 The pointer type "*" follows a variable name. There is one space between variable name and type. int *p = nullptr; // Good Exception: When a variable is modified by const or restrict, "*" cannot follow the variable or type. char * const VERSION = "V100"; ### Recommendation 3.12.2 The reference type "&" follows a variable name. There is one space between variable name and type. int i = 8; int &p = i; // Good ## Preprocessor Directives ### Rule 3.13.1 The number sign "#" that starts a preprocessor directive must be at the beginning of the line and is not indented in nested preprocessor directives. The number sign "#" that starts a preprocessor directive must be at the beginning of the line even through the preprocessor directive is inside a function. #if defined(__x86_64__) && defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_16) // Good: "#" is at the beginning of the line. #define ATOMIC_X86_HAS_CMPXCHG16B 1 // Good: "#" is at the beginning of the line. #else #define ATOMIC_X86_HAS_CMPXCHG16B 0 #endif int FunctionName() { if (someThingError) { ... #ifdef HAS_SYSLOG // Good: Even in the function body, "#" is at the beginning of the line. WriteToSysLog(); #else WriteToFileLog(); #endif } } The nested preprocessor directives starting with "#" is not indented. #if defined(__x86_64__) && defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_16) #define ATOMIC_X86_HAS_CMPXCHG16B 1 // Good: Wrapped for easier comprehension. #else #define ATOMIC_X86_HAS_CMPXCHG16B 0 #endif ## Whitespace ### Rule 3.14.1 Ensure that horizontal spaces are used to highlight keywords and important information, and avoid unnecessary whitespace. Horizontal spaces are used to highlight keywords and important information. Spaces are not allowed at the end of each code line. The general rules are as follows: • Add spaces after keywords such as if, switch, case, do, while, and for. • Do not add spaces after the left parenthesis or before the right parenthesis. • For expressions enclosed by braces, either add a space on either side or avoid a space on either side. • Do not add spaces after unary operators (& * + - ~ !). • Add a space to the left and right sides of each binary operator (= + - < > * / % | & ^ <= >= == !=). • Add spaces to the left and right sides of a ternary operator (? :). • Do not add spaces between a prefix or suffix increment (++) or decrement (--) operator and a variable. • Do not add spaces before or after a struct member operator (. ->). • Do not add spaces between a template or type conversion operator (<>) and a type. • Do not add spaces before or after a domain operator (::). • Determine whether to add spaces before and after a colon (:) based on the situation. In normal cases: void Foo(int b) { // Good: A space is before the left brace. int i = 0; // Good: During variable initialization, there should be spaces before and after =. Do not leave a space before the semicolon. int buf[kBufSize] = {0}; // Good: Spaces are not allowed in braces. Function definition and call: int result = Foo(arg1,arg2); ^ // Bad: Function arguments must be separated by spaces for explicit display. int result = Foo( arg1, arg2 ); ^ ^ // Bad: There cannot be spaces after the left parenthesis or before the right parenthesis. x = *p; // Good: There is no space between the operator * and the pointer p. p = &x; // Good: There is no space between the operator & and the variable x. 
x = r.y; // Good: When a member variable is accessed through the operator (.), no space is added. x = r->y; // Good: When a member variable is accessed through the operator (->), no space is added. Other Operators: x = 0; // Good: There is a space before and after the assignment operator (=). x = -5; // Good: There is no space between the minus sign (–) and the number. ++x; // Good: Do not add spaces between a prefix or suffix increment (++) or decrement (--) operator and a variable. x--; if (x && !y) // Good: There is a space before and after the Boolean operator. There is no space between the ! operator and the variable. v = w * x + y / z; // Good: There is a space before and after the binary operator. v = w * (x + z); // Good: There is no space before or after the expression in the parentheses. int a = (x < y) ? x : y; // Good: Ternary operator. There is a space before and after ? and : Loops and Conditional Statements: if (condition) { // Good: There is a space between the if keyword and the left parenthesis, and no space before or after the conditional statement in the parentheses. ... } else { // Good: There is a space between the else keyword and the left brace. ... } while (condition) {} // Good: There is a space between the while keyword and the left parenthesis. There is no space before or after the conditional statement in the parentheses. for (int i = 0; i < someRange; ++i) { // Good: There is a space between the for keyword and the left parenthesis, and after the semicolons. ... } switch (condition) { // Good: There is a space after the switch keyword. case 0: // Good: There is no space between the case condition and the colon. ... break; ... default: ... break; } Templates and Conversions // Angle brackets (< and >) are not adjacent to space. There is no space before < or between > and (. vector<string> x; y = static_cast<char*>(x); // There can be a space between the type and the pointer operator. Keep the spacing style consistent. vector<char *> x; Scope Operators std::cout; // Good: Namespace access. Do not leave spaces. int MyClass::GetValue() const {} // Good: Do not leave spaces in the definition of member functions. Colons // Scenarios when space is required. // Good: Add a space before or after the colon in a derived class definition. class Sub : public Base { }; // Add a space before and after the colon for the initialization list of a constructor function. MyClass::MyClass(int var) : someVar(var) { DoSomething(); } // Add a space before and after the colon in a bit-field. struct XX { char a : 4; char b : 5; char c : 4; }; // Scenarios when space is not required. // Good: // No space is added before or after the colon next to a class access permission (public or private). class MyClass { public: MyClass(int var); private: int someVar; }; // No space is added before or after the colon in a switch statement. switch (value) { case 1: DoSomething(); break; default: break; } Note: Currently, all IDEs support automatic deletion of spaces at the end of a line. Please configure your IDE correctly. ### Recommendation 3.14.2 Use blank lines only when necessary to keep code compact. There must be as few blank lines as possible so that more code can be displayed for easy reading. Recommendations: • Add blank lines according to the correlation between lines. • Consecutive blank lines are not allowed inside functions, type definitions, macros, and initialization expressions. • A maximum of two consecutive blank lines can be used. 
-.Do not add blank lines on the first and last lines of a code block. int Foo() { ... } // Bad: More than one blank lines are used between two function definitions. int Bar() { ... } if (...) { // Bad: Do not add blank lines on the first and last lines of a code block. ... // Bad: Do not add blank lines on the first and last lines of a code block. } int Foo(...) { // Bad: Do not add blank lines before the first statement in a function body. ... } ## Classes ### Rule 3.15.1 Class access specifier declarations are in the sequence: public, protected, private. Indent each specifier with one space. class MyClass : public BaseClass { public: // Indented with 1 space. MyClass(); // Indented with 2 spaces. explicit MyClass(int var); ~MyClass() {} void SomeFunction(); void SomeFunctionThatDoesNothing() { } void SetVar(int var) { someVar = var; } int GetVar() const { return someVar; } private: bool SomeInternalFunction(); int someVar; int someOtherVar; }; In each part, it is recommended that similar statements be put together in the following order: Type (including typedef, using, nested structs and classes), Constant, Factory Function, Constructor, Assignment Operator, Destructor, Other Member Function, and Data Member. ### Rule 3.15.2 The constructor initialization list must be on the same line or wrapped and aligned with four spaces of indentation. // If all variables can be placed on the same line: MyClass::MyClass(int var) : someVar(var) { DoSomething(); } // If the variables cannot be placed on the same line: // Wrapped at the colon and indented with four spaces. MyClass::MyClass(int var) : someVar(var), someOtherVar(var + 1) { // Good: Add a space after the comma. DoSomething(); } // If an initialization list needs to be placed in multiple lines, put each member on a separate line and align between lines. MyClass::MyClass(int var) : someVar(var), // Indented with 4 spaces. someOtherVar(var + 1) { DoSomething(); } Generally, clear architecture and good naming are recommended to improve code readability, and comments are provided only when necessary. Comments are used to help readers quickly understand code. Therefore, comments should be provided for the sake of readers. Comments must be concise, clear, and unambiguous, ensuring that information is complete and not redundant. Comments are as important as code. When writing a comment, you need to step into the reader's shoes and use comments to express what the reader really needs. Comments are used to express the function and intention of code, rather than repeating the code. When modifying the code, ensure that the comments are consistent with the modified code. It is not polite to modify only code and keep the old comments, which will undermine the consistency between code and comments, and may confuse or even mislead readers. ## Comment Style In C++ code, both /* */ and // can be used for commenting. Comments can be classified into different types, such as file header comments, function header comments, and code comments. This is based on their purposes and positions. Comments of the same type must keep a consistent style. (1) Use /* */ for file header comments. (2) The style of function header comments and code comments in the same file must be consistent. Note: Example code in this document uses comments in the '//' format only to better describe the rules and recommendations. This does not mean this comment format is better. /* * You can use this software according to the terms and conditions of the Mulan PSL v1. 
* You may obtain a copy of Mulan PSL v1 at: * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR * PURPOSE. * See the Mulan PSL v1 for more details. */ Function header comments are placed above the function declaration or definition. Use one of the following styles: Use // to start the function header. // Single-line function header int Func1(void); // Second line int Func2(void); Use /* */ to start the function header. /* single-line function header */ int Func1(void); /* */ int Func2(void); /* * Second line */ int Func3(void); Use function names to describe functions, and add function header comments if necessary. Do not write useless or redundant function headers. Do not write empty function headers with no content. The function header comment content will depend on the function and includes but is not limited to: a function description, return value, performance constraints, usage comments, memory conventions, algorithm implementation, reentering requirements. In the function interface declaration in the external header file, the function header comment should clearly describe important and useful information. Good example: /* * The number of written bytes is returned. If -1 is returned, the write operation failed. * Note that, the memory buffer should be released by the caller. */ int WriteString(const char *buf, int len); /* * Function name: WriteString * Function: Write a character string. * Parameters: * Return value: */ int WriteString(const char *buf, int len); Problems: • The 'Parameters' and 'Return value' have no content. • The function name comment is redundant. • The most important thing, that is, who needs to release the buffer, is not clearly stated. ### Rule 4.4.2 There must be a space between the comment character and the comment content. At least one space is required between the comment and code if the comment is placed to the right of code. Comments placed above the code should be indented the same as that of the code. Use one of the following styles: Use //. // Single-line comment DoSomething(); // Multi-line comment // Second line DoSomething(); Use /*' '*/. /* Single-line comment */ DoSomething(); /* * Multi-line comment in another mode * Second line */ DoSomething(); Leave at least one space between the code and the comment on the right. It is recommended that no more than four spaces be left. You can use the Tab key to indent 1–4 spaces, set this in your IDE or editor. Use one of the following styles: int foo = 100; // Comment on the right int bar = 200; /* Comment on the right */ It is more appealing sometimes when the comment is placed on the right of code and the comments are aligned vertically. After the alignment, ensure that the comment is still 1–4 spaces away from the widest line of code. Example: const int kConst = 100; /* Related comments of the same type can be aligned vertically. */ const int kAnotherConst = 200; /* Leave spaces after code to align comments vertically.*/ When the comment on the right exceeds the line width, consider placing the comment above the code. ### Rule 4.4.3 Delete unused code segments. Do not comment them out. Code that is commented out cannot be maintained normally. When you attempt to restore the code, it is very likely to introduce easy to overlook defects. The correct method is to delete unnecessary code directly. If necessary, consider porting or rewriting the code. 
Here, commenting out means removing code from compilation without actually deleting it. This is done using /* */, //, #if 0, #ifdef NEVER_DEFINED, and so on.

### Recommendation 4.4.1 Try not to contain a TODO/TBD/FIXME comment in code.

TODO/TBD comments are used to describe required improvements and supplements. FIXME comments are used to describe defects that need fixing. They should have a standardized style, which facilitates text search.

// TODO(<author-name>): XX
// FIXME: XX

# 5 Header Files

A header file is an external interface of a module or file. The design of a header file shows most of the system design. The interface declaration for most functions is more suitably placed in the header file, but implementation (except inline functions) cannot be placed in the header file. Functions, macros, enumerations, and structure definitions that are used only inside a .cpp file should not be placed in the header file. The header responsibility should be simple. An overly complex header file makes dependencies complex and causes long compilation times.

### Recommendation 5.1.1 Each .cpp file should have a .h file with the same name. The file is used to declare the classes and interfaces that need to be exposed externally.

Generally, each .cpp file has a corresponding .h file. This .h file is used to store the function declarations, macro definitions, and class definitions that are to be disclosed externally. In addition, corresponding .inline.h files can be added as required to optimize code. If a .cpp file does not need to expose any interface externally, it should not exist.

Exception: An entry point (for example, the file where the main function is located), unit tests, and dynamic library code.

Example:

// Foo.h
#ifndef FOO_H
#define FOO_H
class Foo {
public:
    Foo();
    void Fun();
private:
    int value;
};
#endif

// Foo.cpp
#include "Foo.h"
namespace { // Good: The declaration of the internal function is placed at the top of the .cpp file, and has been limited to the unnamed namespace or static scope.
void Bar() {
}
}
...
void Foo::Fun() {
    Bar();
}

### Rule 5.2.1 Header file cyclic dependency is forbidden.

Cyclic dependency of header files means that a.h contains b.h, b.h contains c.h, and c.h contains a.h. If any header file is modified, all code containing a.h, b.h, and c.h needs to be recompiled. For a unidirectional dependency, for example, a.h contains b.h, b.h contains c.h, and c.h does not contain any header file, modifying a.h does not mean that we need to recompile the source code for b.h or c.h. Cyclic dependency of header files directly reflects an unreasonable architecture design, which can be avoided by optimizing the architecture.

### Rule 5.2.2 Do not include unnecessary header files.

The inclusion of header files that are not used causes unnecessary dependencies, which increases the coupling between modules or units. As long as a header file is modified, the code needs to be recompiled. In many systems, the inclusion relationships of header files are complex. To save time, developers may directly include all header files in their files, or even release a god.h file that contains all header files to project teams. This greatly increases compilation time and causes great trouble in maintenance.

### Rule 5.2.3 Header files should be self-contained.

Simply put, being self-contained means that any header file can be compiled independently. If a file that includes a header file cannot work unless the header file includes another header file, an unnecessary burden is placed on its users.
For example, if the a.h header file is not self-contained, but must contain b.h, it will cause: Each .cpp file that uses the a.h header file must include the additional b.h header file to ensure that the a.h content can be compiled. The additional b.h header file must be included before a.h, which has a dependency in the inclusion order. ### Rule 5.2.4 Header files must have #define guards to prevent multiple inclusion. To prevent header files from being included multiple times, all header files should be protected by #define. Do not use #pragma once. When defining a protection character, comply with the following rules: (1) The protection character uses a unique name. (2) Do not place code or comments (except for file header comments) before or after the protected part. Example: Assume that the timer.h file of the timer module of the VOS project is in the VOS/include/timer/Timer.h directory. Perform the following operations to protect the timer.h file: #ifndef VOS_INCLUDE_TIMER_TIMER_H #define VOS_INCLUDE_TIMER_TIMER_H ... #endif You do not need to add a path as shown in the preceding example, but you need to ensure that the macro in the current project is unique. #ifndef TIMER_H #define TIMER_H ... #endif ### Recommendation 5.2.1 It is prohibited to reference external function interfaces and variables in declaration mode. Interfaces provided by other modules or files can be used only by including header files. Using external function interfaces and variables in extern declaration mode may cause inconsistency between declarations and definitions when external interfaces are changed. In addition, this implicit dependency may cause architecture corruption. Cases that do not comply with specifications: // a.cpp content extern int Fun(); // Bad: Use external functions in extern mode. void Bar() { int i = Fun(); ... } // b.cpp content int Fun() { // Do something } It should be changed to: // a.cpp content #include "b.h" // Good: Use the interface provided by other .cpp by including its corresponding header file. void Bar() { int i = Fun(); ... } // b.h content int Fun(); // b.cpp content int Fun() { // Do something } In some scenarios, if the internal functions need to be referenced with no intrusion to the code, the extern declaration mode can be used. For example: When performing unit testing on an internal function, you can use the extern declaration to reference the function to be tested. When a function needs to be stubbed or patched, the function can be declared using extern. ### Rule 5.2.5 Do not include header files in extern "C". If a header file is included in extern "C", extern "C" may be nested. Some compilers restrict the nesting level of extern "C". If there are too many nested layers, compilation errors may occur. When C and C++ programmings are used together and if extern "C" includes a header file, the original intent behind the header file may be hindered. For example, when the link specifications are modified incorrectly. Example: Assume that there are two header files, a.h and b.h. // a.h content ... #ifdef __cplusplus void Foo(int); #define A(value) Foo(value) #else void A(int) #endif // b.h content ... #ifdef __cplusplus extern "C" { #endif #include "a.h" void B(); #ifdef __cplusplus } #endif Use the C++ preprocessor to expand b.h. The following information is displayed: extern "C" { void Foo(int); void B(); } According to the author of a.h, the function Foo is a C++ free function following the "C++" link specification. 
However, because #include "a.h" is placed inside extern "C" in b.h, the link specification of function Foo is changed incorrectly. Exception: In the C++ compilation environment, if you want to reference the header file of pure C, the C header files must not include extern "C". The non-intrusive approach is to include the C header file in extern "C". ### Recommendation 5.2.2 Use #include instead of a forward declaration to include header files. A forward declaration is for the declaration of classes, functions, and templates and is not meant for its definition. • Pros: 1. Forward declarations can save compilation time. Unnecessary #includes force the compiler to open more files and process more input. 2. Forward declarations can save unnecessary recompilation time. The use of #include will force your code to be recompiled for multiple times due to unrelated changes in header files. • Cons: 1. Forward declarations hide dependency relationship. When a header file is modified, user code will skip the necessary recompilation process. 2. A forward declaration may be broken by subsequent changes to the library. Forward declarations of functions and templates sometimes prevent header file developers from changing APIs. For example, widening a formal parameter type, adding a formal template parameter with a default value, and so on. 3. Forward declaration of symbols from the namespace std:: is seen as undefined behavior (as specified in the C++ 11 standard specification). 4. Forward declaration of multiple symbols from a header file can be more verbose than simply including (#include) the header. 5. Structuring code only for forward declaration (for example, using pointer members instead of object members) can make the code more complex and slower. 6. It is difficult to determine whether a forward declaration or #include is needed. In some scenarios, replacing #include with a forward declaration may cause unexpected results. Therefore, we should avoid using forward declarations as much as possible. Instead, we use the #include statement to include a header file and ensure dependency. ### Recommendation 5.2.3 Include headers in the following sequence: .h file corresponding to the .cpp file > other header files according to their stability. Using standard header file inclusion sequence can enhance readability and avoid hidden dependencies. The recommended header file inclusion priority is: the header file corresponding to the .cpp file > C/C++ standard libraries > .h files from used system libraries > .h files from other libraries > other .h files in the project. For example, the sequence of the header files in Foo.cpp is as follows: #include "Foo/Foo.h" #include <cstdlib> #include <string> #include <linux/list.h> #include <linux/time.h> #include "platform/Base.h" #include "platform/Framework.h" #include "project/public/Log.h" Placing the Foo.h file at the top ensures that when the Foo.h file misses some necessary libraries or an error occurs, the Foo.cpp build is terminated immediately, reducing the compilation time. For the sequence of header files, refer to this suggestion. Exception: Platform-specific code requires conditional compilation. The code can be placed after other "includes". #include "foo/public/FooServer.h" #include "base/Port.h" // For LANG_CXX11. #ifdef LANG_CXX11 #include <initializer_list> #endif // LANG_CXX11 # 6 Scopes ## Namespaces The content of a namespace is not indented. 
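As a minimal illustration (the namespace and declaration names below are made up for this sketch), the body of a namespace starts at the same column as the enclosing namespace keyword:

```cpp
namespace demo {

class Calendar;          // Good: not indented relative to the enclosing namespace.
void RefreshCalendar();  // Good: namespace-scope declarations start at the same column.

}  // namespace demo
```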
### Recommendation 6.1.1 Use an unnamed namespace to encapsulate or use static to modify variables, constants, or functions that do not need to be exported from the .cpp file. In the C++ 2003 standard, using static to modify the external availability of functions and variables was marked as deprecated. Therefore, unnamed namespaces are the recommended method. The main reasons are as follows: 1. There are too many meanings for static in C++: static function member variable, static member function, static global variable, and static function local variable. Each of them has special processing. 2. Static can only be used to define variables, constants, and functions that are not referenced outside the current .cpp file, while namespaces can also be used to encapsulate types. 3. Use a namespace to process the scope of C++ instead of using both static and namespaces. 4. Unnamed namespaces rather than functions modified by static can be used to instantiate templates. Do not use unnamed namespaces or static in header files. // Foo.cpp namespace { const int kMaxCount = 20; void InternalFun(){}; } void Foo::Fun() { int i = kMaxCount; InternalFun(); } ### Rule 6.1.1 Do not use "using" to import a namespace in header files or before #include statements. Note: Using "using" to import a namespace will affect subsequent code and may cause symbol conflicts. Therefore, do not use "using" to import a namespace in a header file or before #include in a source file. Example: // Header file a.h namespace namespacea { int Fun(int); } // Header file b.h namespace namespaceb { int Fun(int); } using namespace namespaceb; void G() { Fun(1); } // Source code a.cpp #include "a.h" using namespace namespacea; #include "b.h" void main() { G(); // using namespace namespacea is before #include "b.h", which will cause the following issues: The calling of namespacea::Fun and namespaceb::Fun is not clear. } Using "using" to import a symbol or define an alias in a header file is allowed in customized namespaces of modules, but is prohibited in the global namespace. // foo.h #include <fancy/string> using fancy::string; // Bad: It is prohibited to import symbols to global namespaces. namespace foo { using fancy::string; // Good: Symbols can be imported in customized namespaces of modules. using MyVector = fancy::vector<int>; // Good: In C++11, aliases can be defined in customized namespaces. } ### Rule 6.1.2 Do not use "using namespace std". Note: The std:: prefix can make code clear and avoid naming conflicts. ## Global Functions and Static Member Functions ### Recommendation 6.2.1 Preferentially use namespaces to manage global functions. If global functions are closely related to a class, you can use static member functions. Note: Placing non-member functions in a namespace avoids polluting the global scope. Do not use "class + static member function" to simply manage global functions. If a global function is closely tied to a class, it can be used as a static member function of the class. If you need to define some global functions for a .cpp file, use unnamed namespaces for management. namespace mynamespace { } class File { public: static File CreateTempFile(const std::string& fileName); }; ## Global Constants and Static Member Constants ### Recommendation 6.3.1 Preferentially use namespaces to manage global constants. If global constants are closely related to a class, you can use static member constants. Note: Placing global constants in a namespace avoids polluting the global scope. 
Do not use "class + static member constant" to simply manage global constants. If a global constant is closely tied to a class, it can be used as a static member constant of the class. If you need to define some global constants only for a .cpp file, use unnamed namespaces for management. namespace mynamespace { const int kMaxSize = 100; } class File { public: static const std::string kName; }; ## Global Variables ### Recommendation 6.4.1 Do not use global variables. Use the singleton pattern instead. Note: Global variables can be modified and read, which causes data coupling between the business code and the global variable. int counter = 0; // a.cpp counter++; // b.cpp counter++; // c.cpp cout << counter << endl; Singleton class Counter { public: static Counter& GetInstance() { static Counter counter; return counter; } // Simple example of a singleton implementation void Increase() { value++; } void Print() const { std::cout << value << std::endl; } private: Counter() : value(0) {} private: int value; }; // a.cpp Counter::GetInstance().Increase(); // b.cpp Counter::GetInstance().Increase(); // c.cpp Counter::GetInstance().Print(); After the singleton is implemented, there is a unique global instance, which can functions as a global variable. In addition, singleton provides better encapsulation. Exception: In some cases, the scope of a global variable is only inside a module. Multiple instances of the same global variable may exist in the process space, and each module holds one copy. In this case, a singleton cannot be used as it is limited to one instance. # 7 Classes Use a struct only for passive objects that carry data; everything else is a class. ## Constructors, Copy/Move Constructors, Copy/Move Assignment Operators, and Destructors Constructors, copy/move constructors, copy/move assignment operators, and destructors provide lifetime management methods for objects. • Constructor: X() • Copy constructor: X(const X&) • Copy assignment operator: operator=(const X&) • Move constructor: X (X&&) Provided in versions later than C++ 11. • Move assignment operator: operator=(X&&) Provided in versions later than C++ 11. • Destructor: ~X() ### Rule 7.1.1 The member variables of a class must be initialized explicitly. Note: If a class has member variables but no constructors and default constructors are defined, the compiler will automatically generate a constructor, which will not initialize member variables. The object status is uncertain. Exception: • If the member variables of a class have a default constructor, explicit initialization is not required. Example: The following code has no constructor, and private data members cannot be initialized: class Message { public: void ProcessOutMsg() { //… } private: unsigned int msgID; unsigned int msgLength; unsigned char* msgBuffer; std::string someIdentifier; }; Message message; // The message member variable is not initialized. message.ProcessOutMsg(); // Potential risks exist in subsequent use. // Therefore, it is necessary to define the default constructor as follows: class Message { public: Message() : msgID(0), msgLength(0) { } void ProcessOutMsg() { // … } private: unsigned int msgID; unsigned int msgLength; unsigned char* msgBuffer; std::string someIdentifier; // The member variable has a default constructor. Therefore, explicit initialization is not required. }; ### Recommendation 7.1.1 Initialization during declaration (C++ 11) and initialization using the constructor initialization list are preferred for member variables. 
Note: Initialization during declaration (C++11) is preferred because initialized values of member variables can be easily understood. If initialized values of certain member variables are relevant to constructors, or C++ 11 is not supported, the constructor initialization list should be used preferentially to initialize these member variables. Compared with the assignment statements in constructors, code of the constructor initialization list is simpler and has higher performance, and can be used to initialize constant and reference members. class Message { public: Message() : msgLength(0) { // Good: The constructor initialization list is preferred. msgBuffer = NULL; // Bad: Values cannot be assigned in constructors. } private: unsigned int msgID{0}; // Good: used in C++11. unsigned int msgLength; unsigned char* msgBuffer; }; ### Rule 7.1.2 Declare single-parameter constructors as explicit to prevent implicit conversion. Note: If a single-parameter constructor is not declared as explicit, it will become an implicit conversion function. Example: class Foo { public: explicit Foo(const string& name): name(name) { } private: string name; }; void ProcessFoo(const Foo& foo){} int main(void) { std::string test = "test"; ProcessFoo(test); // Compiling failed. return 0; } The preceding code fails to be compiled because the parameter required by ProcessFoo is of the Foo type, which mismatch with the input string type. If the explicit keyword of the Foo constructor is removed, implicit conversion is triggered and a temporary Foo object is generated when ProcessFoo is called with the string parameter. Usually, this implicit conversion is confusing and bugs are apt to be hidden, due to unexpected type conversion. Therefore, single-parameter constructors require explicit declaration. ### Rule 7.1.3 If copy/move constructors and copy/move assignment operators are not needed, clearly prohibit them. Note: If users do not define it, the compiler will generate copy/move constructors and copy/move assignment operators (move semantic functions will be available in versions later than C++ 11). If we do not use copy constructors or copy assignment operators, explicitly delete them. 1. Set copy constructors or copy assignment operators to private and do not implement them. class Foo { private: Foo(const Foo&); Foo& operator=(const Foo&); }; 1. Use delete provided by C++ 11. // Copy constructors and copy assignment operators are forbidden together. Use delete provided by C++ 11. class Foo { public: Foo(Foo&&) = delete; Foo& operator=(Foo&&) = delete; }; 1. For static method class, disable constructors to prevent instances from being created. class Helper { public: static bool DoSomething(); private: Helper(); }; 1. For singleton class, disable constructors and copy constructors to prevent instances from being created. class Foo { private: static Foo *instance; Foo() {} Foo(const Foo &a); Foo& operator=(const Foo &a); public: static Foo &Instance() { if (!instance) { instance = new Foo(); } return *instance; } }; 1. For destructors that release resources by raw pointers, disable copy constructions and copy assignment operators to prevent repeated release. class Foo { private: FILE *fp; Foo(const Foo &a); Foo& operator=(const Foo &a); public: Foo() : fp(nullptr) {} ~Foo() { if (fp != nullptr) { fclose(fp); fp = nullptr; } } }; Foo* Foo::instance = nullptr; ### Rule 7.1.4 Copy constructors and copy assignment operators should be implemented or forbidden together. 
Both copy constructors and copy assignment operators provide copy semantics. They should be implemented or forbidden together. // Copy constructors and copy assignment operators are implemented together. class Foo { public: ... Foo(const Foo&); Foo& operator=(const Foo&); ... }; // Copy constructors and copy assignment operators are both set to default, as supported by C++ 11. class Foo { public: Foo(const Foo&) = default; Foo& operator=(const Foo&) = default; }; // Copy constructors and copy assignment operators are forbidden together. You can use delete provided by C++ 11. class Foo { private: Foo(const Foo&); Foo& operator=(const Foo&); }; ### Rule 7.1.5 Move constructors and move assignment operators should be implemented or forbidden together. The move operation is added in C++ 11. If a class is required to support the move operation, move constructors and move assignment operators need to be implemented. Both move constructors and move assignment operators provide move semantics. They should be implemented or forbidden together. // Move constructors and move assignment operators are implemented together. class Foo { public: ... Foo(Foo&&); Foo& operator=(Foo&&); ... }; // Move constructors and move assignment operators are both set to default, as supported by C++ 11. class Foo { public: Foo(Foo&&) = default; Foo& operator=(Foo&&) = default; }; // Move constructors and move assignment operators are forbidden together. Use delete provided by C++ 11. class Foo { public: Foo(Foo&&) = delete; Foo& operator=(Foo&&) = delete; }; ### Rule 7.1.6 It is prohibited to call virtual functions in constructors and destructors. Note: Calling a virtual function of the current object in a constructor or destructor will cause behavior of non-polymorphism. In C++, a base class constructs only one complete object at a time. Example: Base indicates the base class, and Sub indicates the derived class. class Base { public: Base(); virtual void Log() = 0; // Different derived classes call different log files. }; Base::Base() { // Base class constructor Log(); // Call the virtual function log. } class Sub : public Base { public: virtual void Log(); }; When running the following statement: Sub sub; The constructor of the derived class is executed first. However, the constructor of the base class is called first. Because the constructor of the base class calls the virtual function log, the log is in the base class version. The derived class is constructed only after the base class is constructed. As a result, behavior of non-polymorphism are caused. This also applies to destructors. ### Recommendation 7.1.2 Do not add the inline keyword to functions in the class definition. Note: By default, functions in the class definition are inline. ## Inheritance ### Rule 7.2.1 Destructors of the base class should be declared as virtual. Note: Destructors of the derived class can be called during polymorphism invocation only when destructors of the base class are virtual. Example: There will be memory leak if destructors of the base class are not declared as virtual. class Base { public: virtual std::string getVersion() = 0; ~Base() { std::cout << "~Base" << std::endl; } }; class Sub : public Base { public: Sub() : numbers(nullptr) { } ~Sub() { delete[] numbers; std::cout << "~Sub" << std::endl; } int Init() { const size_t numberCount = 100; numbers = new (std::nothrow) int[numberCount]; if (numbers == nullptr) { return -1; } ... 
} std::string getVersion() { return std::string("hello!"); } private: int* numbers; }; int main(int argc, char* args[]) { Base* b = new Sub(); delete b; return 0; } Because destructors of the base class are not declared as virtual, only destructors of the base class are called when an object is destroyed. Destructors of the derived class Sub are not called. As a result, a memory leak occurs. ### Rule 7.2.2 Do not use default parameter values for virtual functions. Note: In C++, virtual functions are dynamically bound, but the default parameters of functions are statically bound during compilation. This means that the function you finally execute is a virtual function that is defined in the derived class but uses the default parameter value in the base class. To avoid confusion and other problems caused by inconsistent default parameter declarations during overriding of virtual functions, it is prohibited to declare default parameter values for all virtual functions. Example: The default value of parameter "text" of the virtual function "Display" is determined at compilation time instead of runtime, which does not fit with polymorphism. class Base { public: virtual void Display(const std::string& text = "Base!") { std::cout << text << std::endl; } virtual ~Base(){} }; class Sub : public Base { public: virtual void Display(const std::string& text = "Sub!") { std::cout << text << std::endl; } virtual ~Sub(){} }; int main() { Base* base = new Sub(); Sub* sub = new Sub(); ... base->Display(); // The program output is as follows: Base! The expected output is as follows: Sub! sub->Display(); // The program output is as follows: Sub! delete base; delete sub; return 0; }; ### Rule 7.2.3 Do not redefine inherited non-virtual functions. Note: Non-virtual functions cannot be dynamically bound (only virtual functions can be dynamically bound). You can obtain the correct result by operating the pointer of the base class. Example: class Base { public: void Fun(); }; class Sub : public Base { public: void Fun(); }; Sub* sub = new Sub(); Base* base = sub; sub->Fun(); // Call Fun of the derived class. base->Fun(); // Call Fun of the base class. //... ## Multiple Inheritance In the actual development process, multiple inheritance scenarios are seldom used because the following typical problems may occur: 1. Data duplication and name ambiguity caused by "diamond" inheritance. Therefore, C++ introduces virtual inheritance to solve these problems. 2. In addition to "diamond" inheritance, names of multiple base classes may also conflict with each other, resulting in name ambiguity. 3. If a derived class needs to be extended or needs to override methods of multiple base classes, the responsibilities of the derived classes are unclear and semantics are muddled. 4. Compared with delegation, inheritance is seen as white box reuse, that is, a derived class can access the protected members of the base class, which leads to more coupling. Multiple inheritance, due to the coupling of multiple base classes, leads to even more coupling. Multiple inheritance has the following advantages: Multiple inheritance provides a simpler method for assembling and reusing multiple interfaces or classes. Therefore, multiple inheritance can be used only in the following cases: ### Recommendation 7.3.1 Use multiple inheritance to implement interface separation and multi-role combination. If a class requires multiple interfaces, combine multiple separated interfaces by using multiple inheritance. 
This is similar to the Traits mixin of the Scala language. class Role1 {}; class Role2 {}; class Role3 {}; class Object1 : public Role1, public Role2 { // ... }; class Object2 : public Role2, public Role3 { // ... }; The C++ standard library has a similar implementation example: class basic_istream {}; class basic_ostream {}; class basic_iostream : public basic_istream, public basic_ostream { }; Overload operators should be used when there are sufficient reasons, and they do not change the original perception of the operators. For example, do not use the plus sign (+) to perform subtraction. Operator overloading can make code more intuitive but has some disadvantages: • It is often mistaken that the operation is as fast as a built-in operator, which has no performance degradation. • There is no naming to aid debugging. It is more convenient to search by function name than by operator. • Overloading operators can cause confusion if behavior definitions are not intuitive (for example, if the "+" operator is used for subtraction). • The implicit conversion caused by the overloading of assignment operators may lead to entrenched bugs. Functions such as Equals () and CopyFrom () can be defined to replace the = and == operators. # 8 Functions ## Function Design ### Recommendation 8.1.1 Avoid long functions and ensure that each function contains no more than 50 lines (non-null and non-comment). A function should be displayed on one screen (no longer than 50 lines). It should do only one thing, and do it well. Long functions often mean that the functions are too complex to implement more than one function, or overly detailed but not further abstracted. Exception: Some implementation algorithm functions may be longer than 50 lines due to algorithm convergence and functional comprehensiveness. Even if a long function works very well now, once someone modifies it, new problems may occur, even causing bugs that are difficult to discover. It is recommended that you split a long function into several functions that are simpler and easier to manage, facilitating code comprehension and modification. ## Inline Functions ### Recommendation 8.2.1 An inline function cannot exceed 10 lines. Note: An inline function has the same characteristics of a normal function. The difference between an inline function and a normal function lies in the processing of function calls. When a general function is called, the program execution right is transferred to the called function, and then returned to the function that calls it. When an inline function is called, the invocation expression is replaced with an inline function body. Inline functions are only suitable for small functions with only 1-10 lines. For a large function that contains many statements, the function call and return overheads are relatively trivial and do not need to be implemented by an inline function. Most compilers may abandon the inline mode and use the common method to call the function. If an inline function contains complex control structures, such as loop, branch (switch), and try-catch statements, the compiler may regard the function as a common function. Virtual functions and recursive functions cannot be used as inline functions. ## Function Parameters ### Recommendation 8.3.1 Use a reference instead of a pointer for function parameters. Note: A reference is more secure than a pointer because it is not empty and does not point to other targets. Using a reference stops the need to check for illegal null pointers. 
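The following sketch, with hypothetical function names, contrasts the two parameter styles:

```cpp
#include <iostream>
#include <string>

// Good: a const reference cannot be null and cannot be reseated, so no null check is needed.
void PrintTitle(const std::string& title) {
    std::cout << title << std::endl;
}

// Bad: a pointer parameter forces the callee to guard against nullptr before every use.
void PrintTitlePtr(const std::string* title) {
    if (title == nullptr) {
        return;
    }
    std::cout << *title << std::endl;
}
```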
Use const to avoid parameter modification, so that readers can clearly know that a parameter is not going to be modified. This greatly enhances code readability. ### Recommendation 8.3.2 Use strongly typed parameters. Do not use void*. While different languages have their own views on strong typing and weak typing, it is generally believed that C/C++ is a strongly typed language. Since we use such a strongly typed language, we should keep this style. An advantage of this is the compiler can find type mismatch problems at the compilation stage. Using strong typing helps the compiler find more errors for us. Pay attention to the usage of the FooListAddNode function in the following code: struct FooNode { int foo; }; struct BarNode { int bar; } void FooListAddNode(void *node) { // Bad: Here, the void * type is used to transfer parameters. FooNode *foo = (FooNode *)node; } void MakeTheList() { FooNode *foo = nullptr; BarNode *bar = nullptr; ... FooListAddNode(bar); // Wrong: In this example, the foo parameter was supposed to be transferred, but the bar parameter is accidentally transferred instead. However, no error is reported. } 1. You can use the template function to change the parameter type. 2. A base class pointer can be used to implement polymorphism. ### Recommendation 8.3.3 A function can have a maximum of five parameters. If a function has too many parameters, it is apt to be affected by external changes, and therefore maintenance is affected. Too many parameters will also increase the testing workload. If a function has more than five parameters, you can: • Split the function. • Combine related parameters into a struct. # 9 Other C++ Features ## Constants and Initialization Unchanged values are easier to understand, trace, and analyze. Therefore, use constants instead of variables as much as possible. When defining values, use const as a default. ### Recommendation 9.1.1 Do not use macros to replace constants. Note: Macros are a simple text replacement that is completed in the preprocessing phase. When an error is reported, the corresponding value is reported. During tracing and debugging, the value is also displayed instead of the macro name. A macro does not support type checking and is insecure. A macro has no scope. #define MAX_MSISDN_LEN 20 // Bad // Use the const constant in C++. const int kMaxMsisdnLen = 20; // Good // In versions later than C++ 11, constexpr can be used. constexpr int kMaxMsisdnLen = 20; ### Recommendation 9.1.2 A group of related integer constants must be defined as an enumeration. Note: Enumerations are more secure than #define or const int. The compiler checks whether a parameter value is within the enumerated value range to avoid errors. // Good example: enum Week { kSunday, kMonday, kTuesday, kWednesday, kThursday, kFriday, kSaturday }; enum Color { kRed, kBlack, kBlue }; void ColorizeCalendar(Week today, Color color); ColorizeCalendar(kBlue, kSunday); // Compilation error. The parameter type is incorrect. const int kSunday = 0; const int kMonday = 1; const int kRed = 0; const int kBlack = 1; bool ColorizeCalendar(int today, int color); ColorizeCalendar(kBlue, kSunday); // No error is reported. When an enumeration value needs to correspond to a specific value, explicit value assignment is required during declaration. Otherwise, do not assign explicit values. This will prevent repeated assignment and reduce the maintenance workload (when adding and deleting members). // Good example: Device ID defined in the S protocol. 
It is used to identify a device type. enum DeviceType { kUnknown = -1, kDsmp = 0, kIsmg = 1, kWapportal = 2 }; ### Recommendation 9.1.3 Magic numbers cannot be used. So-called magic numbers are the numbers that are unintelligible and difficult to understand. Some numbers can be understood based on context. For example, you may understand the number 12 in certain contexts. type = 12; is not intelligible (and a magic number), but month = year * 12; can be understood, so we wouldn't really class this as a magic number. The number 0 is often seen as a magic number. For example, status = 0; cannot truly express any status information. Solution: Comments can be added for numbers that are used locally. For the numbers that are used multiple times, you must define them as constants and give them descriptive names. The following cases are forbidden: No symbol is used to explain the meaning of a number, for example, const int kZero = 0. The symbol name limits the value. For example, inconst int kXxTimerInterval = 300, Use kXxTimerInterval instead. ### Rule 9.1.1 Ensure that a constant has only one responsibility. Note: A constant is used for only a specific function, that is, a constant cannot be used for multiple purposes. // Good example: For protocol A and protocol B, the length of the MSISDN is 20. const unsigned int kAMaxMsisdnLen = 20; const unsigned int kBMaxMsisdnLen = 20; // Or use different namespaces: namespace namespace1 { const unsigned int kMaxMsisdnLen = 20; } namespace namespace2 { const unsigned int kMaxMsisdnLen = 20; } ### Recommendation 9.1.4 Do not use memcpy_s or memset_s to initialize non-POD objects. Note: POD is short for Plain Old Data, which is a concept introduced in the C++ 98 standard (ISO/IEC 14882, first edition, 1998-09-01). The POD types include the original types and aggregate types such as int, char, float, double, enumeration, void, and pointer. Encapsulation and object-oriented features cannot be used (for example, user-defined constructors, assignment operators, destructors, base classes, and virtual functions). For non-POD classes, such as class objects of non-aggregate types, virtual functions may exist. Memory layout is uncertain, and is related to the compiler. Misuse of memory copies may cause serious problems. Even if a class of the aggregate type is directly copied and compared, and any functions hiding information or protecting data are destroyed, the memcpy_s and memset_s operations are not recommended. For details about the POD type, see the appendix. ## Expressions ### Rule 9.2.1 A switch statement must have a default branch. In most cases, a switch statement requires a default branch to ensure that there is a default action when the case tag is missing. Exception: If the switch condition variables are enumerated and the case branch covers all values, the default branch is redundant. Modern compilers can check which case branches are missing in the switch statement and provide an advanced warning. enum Color { kRed = 0, kBlue }; // The switch condition variables are enumerated. Therefore, you do not need to add a default branch. switch (color) { case kRed: DoRedThing(); break; case kBlue: DoBlueThing(); ... break; } ### Recommendation 9.2.1 When comparing expressions, follow the principle that the left side tends to change and the right side tends to remain unchanged. 
When a variable is compared with a constant, placing the constant on the left, as in if (MAX == v), does not match normal reading habits, and if (MAX > v) is even harder to understand. The constant should be placed on the right, which gives the common reading and expression order:

if (value == MAX) {
}

if (value < MAX) {
}

There are special cases: for example, in the expression if (MIN < value && value < MAX), which describes a range, the first half, as a constant, should be placed on the left.

You do not need to worry about writing '==' as '=': a compilation warning will be generated for if (value = MAX), and other static check tools will report an error. Let tools catch such slips, and keep the code readable.

## Type Casting

Do not use type branches to customize behavior. Dispatching on an object's type by hand is error prone and is an obvious sign of writing C code in C++. It is a very inflexible technique: if a new type is added and you forget to update every branch, the compiler will not notify you. Use templates and virtual functions to let the type define itself, rather than letting the calling side determine its behavior.

It is recommended that type casting be avoided. We should consider the data type of each piece of data at design time instead of overusing type casting to resolve type conflicts. When designing a basic type, consider the following:
• Is it unsigned or signed?
• Is float or double suitable?
• Should you use an int8, int16, int32, or int64 bit length?

However, we cannot prohibit type casting entirely, because C++ is a machine-oriented programming language: it deals with pointer addresses, and we interact with various third-party or low-level APIs whose type design may not be reasonable, so casts tend to appear during adaptation.

Exception: When calling a function whose result we do not want to process, first consider whether ignoring the result is really your best choice. If you do not want to process the return value of the function, cast it to void.

### Rule 9.3.1 If type casting is required, use the type casting provided by C++ instead of the C style.

Note: The casts provided by C++ are more targeted, easier to read, and more secure than the C style. C++ provides the following casts:

• Type casting:
1. dynamic_cast: Used for downcasting within an inheritance hierarchy. dynamic_cast performs a runtime type check. Design the base class and derived classes so that dynamic_cast is not needed.
2. static_cast: Similar to the C-style cast; it can be used to convert a value, or to convert the pointer or reference of a derived class into a base class pointer or reference. This cast is often used to eliminate type ambiguity brought on by multiple inheritance, which is relatively safe. If it is a pure arithmetic conversion, use the brace form described in the following text.
3. reinterpret_cast: Used to convert between unrelated types. reinterpret_cast forces the compiler to reinterpret the memory of an object of one type as another type, which is an unsafe conversion. Use reinterpret_cast as little as possible.
4. const_cast: Used to remove the const attribute of an object so that the object can be modified. Use const_cast as little as possible.
• Arithmetic conversion: (Supported by C++ 11 and later versions) If the type information is not lost, for example, the casting from float to double, or from int32 to int64, the initial mode of braces is recommended. double d{ someFloat }; int64_t i{ someInt32 }; ### Recommendation 9.3.1 Avoid using dynamic_cast. 1. dynamic_cast depends on the RTTI of C++ so that the programmer can identify the type of the object in C++ at run time. 2. dynamic_cast indicates that a problem occurs in the design of the base class and derived class. The derived class destroys the contract of the base class and it is necessary to use dynamic_cast to convert the class to a subclass for special processing. In this case, it is more desirable to improve the design of the class, instead of using dynamic_cast to solve the problem. ### Recommendation 9.3.2 Avoid using reinterpret_cast. Note: reinterpret_cast is used to convert irrelevant types. Trying to use reinterpret_cast to force a type to another type destroys the security and reliability of the type and is an insecure casting method. Avoid casting between different types. ### Recommendation 9.3.3 Avoid using const_cast. Note: The const_cast command is used to remove the const and volatile properties of an object. The action of using a pointer or reference after the const_cast conversion to modify the const property of an object is undefined. // Bad example: const int i = 1024; int* p = const_cast<int*>(&i); *p = 2048; // The action is undefined. // Bad example: class Foo { public: Foo() : i(3) {} void Fun(int v) { i = v; } private: int i; }; int main(void) { const Foo f; Foo* p = const_cast<Foo*>(&f); p->Fun(8); // The action is undefined. } ## Resource Allocation and Release ### Rule 9.4.1 When a single object is released, delete is used. When an array object is released, delete [] is used. Note: Delete is used to delete a single object, and delete [] is used to delete an array object. Reason: • new: Apply for memory from the system and call the corresponding constructor to initialize an object. • new[n]: Apply for memory for n objects and call the constructor n times for each object to initialize them. • delete: Call the corresponding destructor first and release the memory of an object. • delete[]: Call the corresponding destructor for each object and release their memory. If the usage of new and delete does not match this format, the results are unknown. For a non-class type, new and delete will not call the constructor or destructor. The incorrect format is as follows: const int KMaxArraySize = 100; int* numberArray = new int[KMaxArraySize]; ... delete numberArray; numberArray = NULL; The correct format is as follows: const int KMaxArraySize = 100; int* numberArray = new int[KMaxArraySize]; ... delete[] numberArray; numberArray = NULL; ## Standard Template Library The standard template library (STL) varies between modules. The following table lists some basic rules and suggestions. ### Rule 9.5.1 Do not save the pointer returned by c_str () of std::string. Note: The C++ standard does not specify that the string::c_str () pointer is permanently valid. Therefore, the STL implementation used can return a temporary storage area and release it quickly when calling string::c_str (). Therefore, to ensure the portability of the program, do not save the result of string::c_str (). Instead, call it directly. 
Example: void Fun1() { std::string name = "demo"; const char* text = name.c_str(); // After the expression ends, the life cycle of name is still in use and the pointer is valid. // If a non-const member function (such as operator[] and begin()) of the string type is invoked and the string is therefore modified, // the text content may become unavailable or may not be the original character string. name = "test"; name[1] = '2'; // When the text pointer is used next time, the string is no longer "demo". } void Fun2() { std::string name = "demo"; std::string test = "test"; const char* text = (name + test).c_str(); // After the expression ends, the temporary object generated by the + operator may be destroyed, and the pointer may be invalid. // When the text pointer is used next time, it no longer points to the valid memory space. } Exception: In rare cases where high performance coding is required , you can temporarily save the pointer returned by string::c_str() to match the existing functions which support only the input parameters of the const char* type. However, you should ensure that the life cycle of the string object is longer than that of the saved pointer, and that the string object is not modified within the life cycle of the saved pointer. ### Recommendation 9.5.1 Use std::string instead of char*. Note: Using string instead of char* has the following advantages: 1. There is no need to consider the null character '\0' at the end. 2. You can directly use operators such as +, =, and ==, and other character string operation functions. 3. No need to consider memory allocation operations. This helps avoid explicit usage of new and delete and the resulting errors. Note that in some STL implementations, string is based on the copy-on-write policy, which causes two problems. One is that the copy-on-write policy of some versions does not implement thread security, and the program breaks down in multi-threaded environments. Second, dangling pointers may be caused when a dynamic link library transfers the string based on the copy-on-write policy, due to the fact that reference count cannot be reduced when the library is unloaded. Therefore, it is important to select a reliable STL implementation to ensure the stability of the program. Exceptions: When an API of a system or other third-party libraries is called, only char* can be used for defined interfaces. However, before calling the interfaces, you can use string. When calling the interfaces, you can use string::c_str () to obtain the character pointer. When a character array is allocated as a buffer on the stack, you can directly define the character array without using string or containers such as vector<char>. ### Rule 9.5.2 Do not use auto_ptr. Note: The std::auto_ptr in the STL library has an implicit ownership transfer behavior. The code is as follows: auto_ptr<T> p1(new T); auto_ptr<T> p2 = p1; After the second line of statements is executed, p1 does not point to the object allocated in line 1 and becomes NULL. Therefore, auto_ptr cannot be placed in any standard containers. This ownership transfer behavior is not expected. In scenarios where ownership must be transferred, implicit transfer should not be used. This often requires the programmer to keep extra attention on code that uses auto_ptr , otherwise access to a null pointer will occur. There are two common scenarios for using auto_ptr . 
One is to transfer it as a smart pointer to outside of the function that generates the auto_ptr , and the other is to use auto_ptr as the RAII management class. Resources are automatically released when the lifecycle of auto_ptr expires. In the first scenario, you can use std::shared_ptr instead. In the second scenario, you can use std::unique_ptr in the C++ 11 standard. std::unique_ptr is a substitute for std::auto_ptr and supports explicit ownership transfer. Exceptions: Before the C++ 11 standard is widely used, std::auto_ptr can be used in scenarios where ownership needs to be transferred. However, it is recommended that std::auto_ptr be encapsulated. The copy constructor and assignment operator of the encapsulation class should not be used, so that the encapsulation class cannot be used in a standard container. ### Recommendation 9.5.2 Use the new standard header files. Note: When using the standard header file of C++, use <cstdlib> instead of <stdlib.h>. ## Usage of const Add the keyword const before the declared variable or parameter (example: const int foo) to prevent the variable from being tampered with. Add the const qualifier to the function in the class (example: class Foo {int Bar (char c) const;} ;) to make sure the function does not modify the status of the class member variable. const variables, data members, functions, and parameters ensure that the type detection during compilation is accurate and errors are found as soon as possible. Therefore, we strongly recommend that const be used in any possible case. Sometimes it is better to use constexpr of C++ 11 to define real constants. ### Rule 9.6.1 For formal parameters of pointer and reference types, if the parameters do not need to be modified, use const. Unchanged values are easier to understand, trace, and analyze. const is used as the default option and is checked during compilation to make the code more secure and reliable. class Foo; void PrintFoo(const Foo& foo); ### Rule 9.6.2 For member functions that do not modify member variables, use const. Declare the member function as const whenever possible. The access function should always be const. So long as the function of a member is not modified, the function is declared with const. class Foo { public: // ... int PrintValue() const // const’s usage here modifies member functions and does not modify member variables. std::cout << value << std::endl; } int GetValue() const // and again here. return value; } private: int value; }; ### Recommendation 9.6.1 Member variables that will not be modified after initialization should be defined as constants. class Foo { public: Foo(int length) : dataLength(length) {} private: const int dataLength; }; ## Templates Template programming allows for extremely flexible interfaces that are type safe and high performance, enabling reuse of code of different types but with the same behavior. The disadvantages of template programming are as follows: 1. The techniques used in template programming are often obscure to anyone but language experts. Code that uses templates in complicated ways is often unreadable, and is hard to debug or maintain. 2. Template programming often leads to extremely poor compiler time error messages: even if an interface is simple, complicated implementation details become visible when the user does something wrong. 3. If the template is not properly used, the code will be over expanded during runtime. 4. It is difficult to modify or refactor the template code. 
The template code is expanded in multiple contexts, and it is hard to verify that the transformation makes sense in all of them. Therefore, it is recommended that template programming be used only in a small number of basic components and basic data structure. When using the template programming, minimize the complexity as much as possible, and avoid exposing the template. It is better to hide programming as an implementation detail whenever possible, so that user-facing headers are readable. And you should write sufficiently detailed comments for code that uses templates. ## Macros In the C++ language, it is strongly recommended that complex macros be used as little as possible. • For constant definitions, use const or enum as stated in the preceding sections. • For macro functions, try to be as simple as possible, comply with the following principles, and use inline functions and template functions for replacement. // The macro function is not recommended. #define SQUARE(a, b) ((a) * (b)) // Use the template function and inline function as a replacement. template<typename T> T Square(T a, T b) { return a * b; } For details about how to use macros, see the related chapters about the C language specifications. Exception: For some common and mature applications, for example, encapsulation for new and delete, the use of macros can be retained. ## Others ### Recommendation 9.9.1 Use '\n' instead of std::endl when exporting objects to a file. Note: std::endl flushes content in the buffer to a file, which may affect the performance. # 10 Modern C++ Features As the ISO released the C++ 11 language standard in 2011 and released the C++ 17 in March 2017, the modern C++ (C++ 11/14/17) adds a large number of new language features and standard libraries that improve programming efficiency and code quality. This chapter describes some guidelines for modern C++ use, to avoid language pitfalls. ## Code Simplicity and Security Improvement ### Recommendation 10.1.1 Use auto properly. • auto can help you avoid writing verbose, repeated type names, and can also ensure initialization when variables are defined. • The auto type deduction rules are complex and need to be read carefully. • If using auto makes the code clearer, use a specific type of it and use it only for local variables. Example // Avoid verbose type names. std::map<string, int>::iterator iter = m.find(val); auto iter = m.find(val); // Avoid duplicate type names. class Foo {...}; Foo* p = new Foo; auto p = new Foo; // Ensure that the initialization is successful. int x; // The compilation is correct but the variable is not initialized. auto x; // The compilation failed. Initialization is needed. auto type deduction may cause the following problems: auto a = 3; // int const auto ca = a; // const int const auto& ra = a; // const int& auto aa = ca; // int, const and reference are neglected. auto ila1 = { 10 }; // std::initializer_list<int> auto ila2{ 10 }; // std::initializer_list<int> auto&& ura1 = x; // int& auto&& ura2 = ca; // const int& auto&& ura3 = 10; // int&& const int b[10]; auto arr1 = b; // const int* auto& arr2 = b; // const int(&)[10] If you do not pay attention to auto type deduction and ignore the reference, hard-to-find performance problems may be created. std::vector<std::string> v; auto s1 = v[0]; // auto deduction changes s1 to std::string in order to copy v[0]. 
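If the copy is not wanted, deducing a reference avoids it; for instance, something along these lines (an illustrative addition, with s2 and s3 chosen only for this sketch):

const auto& s2 = v[0]; // s2 is deduced as const std::string&, so no copy is made.
auto& s3 = v[0];       // s3 is deduced as std::string& and can also be used to modify v[0].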
If the auto is used to define an interface, such as a constant in the header file, it may be possible that the type has changed because the developer has modified the value. In a loop, consider using auto & and auto * to traverse complex objects to improve performance. for (auto &stmt : bb->GetStmtNodes()) { ... } ### Rule 10.1.1 Use the keyword override when rewriting virtual functions. The keyword override ensures that the function is a virtual function and an overridden virtual function of the base class. If the subclass function is different from the base class function prototype, a compilation alarm is generated. If you modify the prototype of a base class virtual function but forget to modify the virtual function overridden by the subclass, you can find inconsistency during compilation. You can also avoid forgetting to modify the overridden function when there are multiple subclasses. Example class Base { public: virtual void Foo(); void Bar(); }; class Derived : public Base { public: void Foo() const override; // Compilation failed: derived::Foo is different from that of the prototype of base::Foo and is not overridden. void Foo() override; // Compilation successful: derived::Foo overrode base::Foo. void Bar() override; // Compilation failed: base::Bar is not a virtual function. }; Summary 1. When defining the virtual function for the first time based on the base class, use the keyword virtual. 2. When the subclass overrides the base class’ virtual function, use the keyword virtual. 3. For the non-virtual function, do not use virtual or override. ### Rule: 10.1.2 Use the keyword delete to delete functions. The delete keyword is clearer and the application scope is wider than a class member function that is declared as private and not implemented. Example class Foo { private: // Whether the copy structure is deleted or not is unknown because usually only the header file is checked. Foo(const Foo&); }; class Foo { public: // Explicitly delete the copy assignment operator. Foo& operator=(const Foo&) = delete; }; The delete keyword can also be used to delete non-member functions. template<typename T> void Process(T value); template<> void Process<void>(void) = delete; ### Rule 10.1.3 Use nullptr instead of NULL or 0. For a long time, C++ has not had a keyword that represents a null pointer, which is embarrassing: #define NULL ((void *)0) char* str = NULL; // Error: void* cannot be automatically converted to char*. void(C::*pmf)() = &C::Func; if (pmf == NULL) {} // Error: void* cannot be automatically converted to the pointer that points to the member function. If NULL is defined as 0 or 0L, the above problems can be solved. Alternatively, use 0 directly in places where null pointers are required. However, another problem occurs. The code is not clear, especially when the auto is used for automatic deduction. auto result = Find(id); if (result == 0) { // Does Find() return a pointer or an integer? // do something } Literally 0 is of the int type (0L is the long type). Therefore, neither NULL nor 0 is a pointer type. When a function of the pointer or integer type is overloaded, NULL or 0 calls only the overloaded pointer function. void F(int); void F(int*); F(0); // Call F(int) instead of F(int*). F(NULL); // Call F(int) instead of F(int*). In addition, sizeof(NULL) == sizeof(void*) does not always make sense, which is a potential risk. Summary: If 0 or 0L is directly used, the code is not clear and type security cannot be ensured. If NULL is used, the type security cannot be ensured. 
These are all potential risks. nullptr has many advantages. It literally represents the null pointer and makes the code clearer. More to the point, it is no longer an integer type. nullptr is of the std::nullptr_t type. std::nullptr_t can be implicitly converted into all original pointer types, so that nullptr can represent a null pointer that points to any type. void F(int); void F(int*); F(nullptr); // Call F(int*). auto result = Find(id); if (result == nullptr) { // Find() returns a pointer. // do something } ### Recommendation 10.1.2 Use using instead of typedef. For versions earlier than C++11, you can define the alias of the type by using typedef. No one wants to repeat code like std::map<uint32_t, std::vector<int>>. typedef std::map<uint32_t, std::vector<int>> SomeType; Using alias for the type is actually encapsulating the type. This encapsulation makes the code clearer, and to a large extent avoids the bulk modification caused by the type change. For versions later than C++ 11, using is provided to implement alias declarations: using SomeType = std::map<uint32_t, std::vector<int>>; Compare the two formats: typedef Type Alias; // It cannot be told whether Type or Alias is at the front. using Alias = Type; // The format confirms to the assignment rule. It is easy to understand and helps reduce errors. If this is not enough to prove the advantages of using, the alias template may be a better example: //: Only one line of code is need to define an alias for a template. template<class T> using MyAllocatorVector = std::vector<T, MyAllocator<T>>; MyAllocatorVector<int> data; // An alias for a template defined with "using". template<class T> class MyClass { private: MyAllocatorVector<int> data_; // Another. }; typedef does not support alias templates and they have to be hacked in. // A template is used for packaging typedef. Therefore, a template class is needed. template<class T> struct MyAllocatorVector { typedef std::vector<T, MyAllocator<T>> type; }; MyAllocatorVector<int>::type data; // ::type needs to be added when using typedef to define an alias. template<class T> class MyClass { private: typename MyAllocatorVector<int>::type data_; // For a template class, typename is also needed in addition to ::type. }; ### Rule 10.1.4 Do not use std::move to operate the const object. Literally, std::move means moving an object. The const object cannot be modified and cannot be moved. Therefore, using std::move to operate the const object may confuse code readers. Regarding actual functions, std::move converts an object to the rvalue reference type. It can convert the const object to the rvalue reference of const. Because few types define the move constructor and the move assignment operator that use the const rvalue reference as the parameter, the actual function of code is often degraded to object copy instead of object movement, which brings performance loss. std::string gString; std::vector<std::string> gStringList; void func() { const std::string myString = "String content"; gString = std::move(myString); // Bad: myString is not moved. Instead, it is copied. const std::string anotherString = "Another string content"; } ## Smart Pointers ### Recommendation 10.2.1 Preferentially use the smart pointer instead of the raw pointer to manage resources. Avoid resource leakage. Example: void Use(int i) { auto p = new int {7}; // Bad: Initializing local pointers with new. auto q = std::make_unique<int>(9); // Good: Guarantee that memory is released. if (i > 0) { return; // Return and possible leak. 
} delete p; // Too late to salvage. } Exception: Raw pointers can be used in scenarios such as performance sensitivity and compatibility. ### Rule 10.2.1 Use unique_ptr instead of shared_ptr. 1. Using shared_ptr a lot has an overhead (atomic operations on the shared_ptrs reference count have a measurable cost). 2. Shared ownership in some cases (such as circular dependency) may create objects that can never be released. 3. Shared ownership can be an attractive alternative to careful ownership design but it may obfuscate the design of a system. ### Rule 10.2.2 Use std::make_unique instead of new to create unique_ptr. 1. make_uniqe provides a simpler creation method. 2. make_uniqe ensures the exception safety of complex expressions. Example // Bad: MyClass appears twice, which carries a risk of inconsistency. std::unique_ptr<MyClass> ptr(new MyClass(0, 1)); // Good: MyClass appears once and there is no possibility of inconsistency. auto ptr = std::make_unique<MyClass>(0, 1); Recurrence of types may cause serious problems, and it is difficult to find them: // The code compiles fine, but new and delete usage does not match. std::unique_ptr<uint8_t> ptr(new uint8_t[10]); std::unique_ptr<uint8_t[]> ptr(new uint8_t); // No exception safety: The compiler may calculate parameters in the following order: // 1. Allocate the memory of Foo. // 2. Construct Foo. // 3. Call Bar. // 4. Construct unique_ptr<Foo>. // If Bar throws an exception, Foo is not destroyed and a memory leak occurs. F(unique_ptr<Foo>(new Foo()), Bar()); // Exception safety: Calling of function is not interrupted. F(make_unique<Foo>(), Bar()); Exception: std::make_unique does not support user-defined deleter. In the scenario where the deleter needs to be customized, it is recommended that make_unique of the customized version be implemented in its own namespace. Using newto createunique_ptrwith the user-defineddeleter is the last choice. ### Rule 10.2.3 Create shared_ptr by using std::make_shared instead of new. In addition to the consistency factor similar to that in std::make_unique when using std::make_shared, performance is also a factor to consider. std::shared_ptr manages two entities: • Control block (storing reference count, deleter, etc.) Managed objects When std::make_shared creates std::shared_ptr, it allocates sufficient memory for storing control blocks and managed objects on the heap at a time. When std::shared_ptr<MyClass>(new MyClass)is used to create std::shared_ptr, except that new MyClass triggers a heap allocation, the constructor function of std::shard_ptr triggers the second heap allocation, resulting in extra overhead. Exception: Similar to std::make_unique, std::make_shared does not support deleter customization. ## Lambda ### Recommendation 10.3.1 Use lambda to capture local variables or write local functions when normal functions do not work. Functions cannot capture local variables or be declared at local scope. If you need those things, choose lambda instead of handwritten functor. On the other hand, lambda and functor objects do not overload. If overload is required, use a function. If both lambda and functions work, a function is preferred. Use the simplest tool. Example // Write a function that accepts only an int or string. void F(int); void F(const string&); // The local state needs to be captured or appear in the statement or expression range. // -- A lambda is natural. 
vector<Work> v = LotsOfWork(); pool.Run([=, &v] {...}); } pool.Join(); ### Rule 10.3.1 Avoid capturing by reference in lambdas that will be used nonlocally. When used in non-local scope, lambdas includes returned values which are stored on the heap, or passed to other threads. Local pointers and references should not outlive their scope. Capturing by reference in lambdas indicates storing a reference to a local object. If this leads to a reference that exceeds the local variable lifecycle, capturing by reference should not be used. Example // Bad void Foo() { int local = 42; // Capture a reference to a local variable. // After the function returns results, local no longer exists, // Process() call will have undefined behavior. } Good void Foo() { int local = 42; // Capture a copy of local. // Since a copy of local is made, it will be always available for the call. } ### Recommendation 10.3.2 All variables are explicitly captured if this is captured. The [=] in the member function seems to indicate capturing by value but actually it is capturing data members by reference because it captures the invisible this pointer by value. Generally, it is recommended that capturing by reference be avoided. If it is necessary to do so, write this explicitly. Example class MyClass { public: void Foo() { int i = 0; auto Lambda = [=]() { Use(i, data_); }; // Bad: It looks like coping or capturing by value but member variables are actually captured by reference. data_ = 42; Lambda(); // Call use(42); data_ = 43; Lambda(); // Call use(43); auto Lambda2 = [i, this]() { Use(i, data_); }; // Good: the most explicit and least confusing method. } private: int data_ = 0; }; ### Recommendation 10.3.3 Avoid default capture modes. The lambda expression provides two default capture modes: by-reference (&) and by-value (=). By default, the "by-reference" capture mode will implicitly capture the reference of all local variables, which will easily lead to dangling references. By contrast, explicitly writing variables that need to be captured can make it easier to check the life cycle of an object and reduce the possibility of making a mistake By default, the "by-value” capture mode will implicitly capture this pointer, and it is difficult to find out which variables the lambda function depends on. If a static variable exists, the reader mistakenly considers that the lambda has copied a static variable. Therefore, it is required to clearly state the variables that lambda needs to capture, instead of using the default capture mode. auto func() { static int baseValue = 3; return [=]() { // Only addend is actually copied. ++baseValue; // The modification will affect the value of the static variable. }; } Good example: auto func() { static int baseValue = 3; return [addend, baseValue = baseValue]() mutable { // Uses the C++14 capture initialization to copy a variable. ++baseValue; // Modifying the copy of a static variable does not affect the value of the static variable. }; } Reference: Effective Modern C++: Item 31: Avoid default capture modes. ## Interfaces ### Recommendation 10.4.1 Use T* or T& arguments instead of a smart pointer in scenarios where ownership is not involved. 1. Passing a smart pointer to transfer or share ownership should only be used when the ownership mechanism is explicitly required. 2. Passing a smart pointer (for example, passing the this smart pointer) restricts the use of a function to callers using smart pointers. 3. Passing a shared smart pointer adds a runtime performance cost. 
Example: // Accept any int*. void F(int*); // Accept only integers for which you want to transfer ownership. void G(unique_ptr<int>); // Accept only integers for which you want to share ownership. void G(shared_ptr<int>); // Does not need to change the ownership but requires ownership of the caller. void H(const unique_ptr<int>&); // Accept any int. void H(int&); void F(shared_ptr<Widget>& w) { // ... Use(*w); // When only w is used, lifecycle management is not required. // ... }; # 11 Secure Coding Standard ## Basic Principles 1. Programs must strictly verify external data. During external data processing, programmers must keep this in mind and not make any assumption that external data meets expectations. External data must be strictly checked before being used. Programmers must abide by this principle in the complex attack environment to ensure that the program execution process is in line with expected results. 2. The attack surface of code must be minimized. The code implementation should be as simple as possible to avoid unnecessary data exchange with external environments. Excess attack surfaces will increase the attack probability. Therefore, avoid exposing internal data processing of programs to external environments. 3. Defensive coding strategies must be used to compensate for potential negligence of programmers. Every man is liable to error. Due to uncertainties of external environments and the differences in the experience and habits of programmers, it is hard for the code execution process to fully meet expectations. Therefore, defensive strategies must be adopted in the coding process to minimize the defects caused by the negligence of programmers. The measures include: • Defining an initial value for the declaration of variables. • Exercise caution in using global variables. • Avoid using complex and error-prone functions. • Do not use error-prone mechanisms of compilers/operating systems. • Deal with the resource access process carefully. • Do not change the runtime environment of the operating system. For example, do not create temporary files, modify environment variables, or create processes. • Rectify errors strictly. • Use the debugging assertion (ASSERT) properly. ## Variables ### Rule 11.2.1: Define an initial value for the declaration of pointer variables, variables indicating resource descriptors, or BOOL variables. Note: Defining an initial value for the declaration of variables can prevent programmers from referencing uninitialized variables. Good example: SOCKET s = INVALID_SOCKET; unsigned char *msg = nullptr; int fd = -1; Bad example: In the following code, no initial value is defined for the declaration of variables. As a result, an error occurs in the free step. char *message; // Error! char *message = nullptr; is required. if (condition) { message = (char *)malloc(len); } if (message != nullptr) { free(message); //If the condition is not met, the uninitialized memory will be freed. } ### Rule 11.2.2: Assign a new value to the variable pointing to a resource handle or descriptor immediately after the resource is freed. Note: After a resource is freed, a new value must be immediately assigned to the corresponding variable to prevent the re-reference of the variable. If the release statement is in the last line of the scope, you do not need to assign a new value. Good example: SOCKET s = INVALID_SOCKET; ... closesocket(s); s = INVALID_SOCKET; unsigned char *msg = nullptr; ... 
free(msg); msg = nullptr; ### Rule 11.2.3: Ensure that local variables in a function do not take up too much space. When a program is running, the local variables in the function are stored in the stack, and the stack size is limited. If a large static array is requested, an error may occur. It is recommended that the size of the static array not exceed 0x1000. In the following code, buff requests a large stack but the stack space is insufficient. As a result, stack overflow occurs in the program. constexpr int MAX_BUF = 0x1000000; int Foo() { char buff[MAX_BUFF] = {0}; // Bad ... } ## Assertions ### Principles Assertions in code consist of ASSERT and CHECK_FATAL. ASSERT is used to determine conditions in DEBUG mode. If conditions are not met, the program exits directly. CHECK_FATAL is used to detect exceptions during program running. If the conditions are not met, the program exits. CHECK_FATAL is applicable to scenarios where the input and resource application are not under control. Example: CHECK_FATAL(mplName.rfind(kMplSuffix) != std::string::npos, "File name %s does not contain .mpl", mplName.c_str()); // The file name does not meet the requirements. CHECK_FATAL(intrinCall->GetReturnVec().size() == 1, "INTRN_JAVA_FILL_NEW_ARRAY should have 1 return value"); // The logic restriction is not met. CHECK_FATAL(func->GetParamSize() <= 0xffff, "Error:the argsize is too large"); // The validity is verified. void *MemPool::Malloc(size_t size) { ... CHECK_FATAL(b != nullptr, "ERROR: Malloc error"); // Failed to apply for memory. } ASSERT is applicable to scenarios where you want to locate bugs in the defensive programming mode. Example: ASSERT(false, "should not be here"); ASSERT(false, "Unknown opcode for FoldIntConstComparison"); ### Recommendation 11.3.1 Do not use ASSERT to verify whether a pointer with security context is nullptr. Note: The compiler is an offline compilation tool. The impact of process breakdown is much less than that of online services. Therefore, the defensive programming mode should be reduced. Not all input parameters require null pointer verification. Instead, the context logic is used to determine whether null pointer verification is required. An input parameter without the nullptr logic does not need to be verified. For details, see the assertion usage principles. ### Recommendation 11.3.2 Do not use ASSERT to verify whether a data array with security context exceeds the threshold. Note: Similar to the null pointer rule, the context logic is used to determine whether to use assertions for out-of-threshold array verification. For details, see the assertion usage principles. ### Recommendation 11.3.3 Do not use ASSERT to verify integer overflow, truncation, or wraparound in the case of context security. Note: In terms of integer overflow caused by addition or multiplication, verification is not required with the context logic guaranteed. In terms of integer truncation and wraparound caused by type conversion, verification is not required with the context logic guaranteed. For details, see the assertion usage principles. To ensure that fault tolerance and logic continue to run, you can use conditional statements for verification. ### Rule 11.3.1 Do not use ASSERT to verify errors that may occur during program runtime. FILE *fp = fopen(path, "r"); ASSERT(fp != nullptr, "nullptr check"); //Incorrect code: Opening the file may fail. char *str = (char *)malloc(MAX_LINE); ASSERT(str != nullptr, "nullptr check"); //Incorrect code: Memory allocation may fail. 
ReadLine(fp, str); ### Rule 11.3.2 Do not modify the runtime environment in ASSERT. Note: In the formal release stage of a program, ASSERT is not compiled. To ensure the function consistency between the debugging version and formal version, do not perform any operation, such as value assignment, variable modification, resource operation, or memory application, in ASSERT. In the following code, ASSERT configuration is incorrect. ASSERT(i++ > 1000); // p1 is modified. ASSERT(close(fd) == 0); // fd is closed. ## Exception Mechanisms ### Rule 11.4.1 Do not use the C++ exception mechanism. Note: Do not use the exception mechanism of C++. All errors must be transferred between functions and judged using error values, but not be handled using the exception mechanism. Programmers must fully control the entire coding process, build the attacker mindset, enhance secure coding awareness, and attach importance to procedures with potential errors. Using the C++ exception mechanism to handle errors, however, will weaken the security awareness of programmers because it will: Disrupt program execution, making the program structure more complex and used resources not cleared. Reduce the reusability of code. The code that uses the exception mechanism cannot be reused by the code that does not use the exception mechanism. Depend on the compiler, operating system, and processor. The execution performance of the program will deteriorate if the exception mechanism is used. Increase the attack surface of a program in the binary layer after the program is loaded. The attacker can overwrite the abnormal processing function address to launch an attack. ## Memory ### Rule 11.5.1: Verify the requested memory size before requesting memory. The requested memory size may come from external data and must be verified to prevent memory abuse. The requested memory size must not be 0. Example: int Foo(int size) { if (size <= 0) { //error ... } ... char *msg = (char *)malloc(size); ... } ### Rule 11.5.2: Check whether memory allocation is successful. char *msg = (char *)malloc(size); if (msg != nullptr) { ... } ## Dangerous Functions ### Rule 11.6.1: Do not use dangerous functions related to memory operations. Many C functions do not use the destination buffer size as a parameter or consider memory overlapping and invalid pointers. As a result, security vulnerabilities such as buffer overflow may be caused. The historical statistics about buffer overflow vulnerabilities show that a majority of the vulnerabilities are caused by memory operation functions that do not consider the destination buffer size. The following lists the dangerous functions related to memory operations: Memory copy functions: memcpy(), wmemcpy(), memmove(), wmemmove() Memory initialization function: memset() String copy functions: strcpy(), wcscpy(),strncpy(), wcsncpy() String concatenation functions: strcat(), wcscat(),strncat(), wcsncat() Formatted string output functions: sprintf(), swprintf(), vsprintf(), vswprintf(), snprintf(), vsnprintf() Formatted string input functions: scanf(), wscanf(), vscanf(), vwscanf(), fscanf(),fwscanf(),vfscanf(),vfwscanf(),sscanf(), swscanf(), vsscanf(), vswscanf() stdin stream-input function: gets() Use safe functions. For details, see huawei_secure_c. Exceptions: In the following cases, external data processing is not involved, and no attack risks exist. Memory operations are complete in this function, and there is no possibility of failure. 
Using safe functions would only produce redundant code in these cases, and therefore the dangerous functions can be used:

(1) Initialize a fixed-length array, or initialize the memory of a structure with a fixed length:

BYTE array[ARRAY_SIZE];

void Foo()
{
    char destBuff[BUFF_SIZE];
    ...
    memset(array, c1, sizeof(array));       // Assign values to global fixed-length data.
    ...
    memset(destBuff, c2, sizeof(destBuff)); // Assign values to local fixed-length data.
    ...
}

typedef struct {
    int type;
    int data;
} Tag;

Tag g_tag = {1, 2};

void Foo()
{
    Tag dest;
    ...
    memcpy((void *)&dest, (const void *)&g_tag, sizeof(Tag)); // Assign values to a fixed-length structure.
    ...
}

(2) Initialize memory when the buffer and its length are both passed in as function parameters:

void Foo(BYTE *buff1, size_t len1, BYTE *buff2, size_t len2)
{
    ...
    memset(buff1, 0, len1); // Clear buff1.
    memset(buff2, 0, len2); // Clear buff2.
    ...
}

(3) Assign an initial value after allocating memory from the heap:

size_t len = ...
char *str = (char *)malloc(len);
if (str != nullptr) {
    memset(str, 0, len);
    ...
}

(4) Copy memory whose size equals the source memory size. The following code copies a memory block with the same size as srcSize:

BYTE *src = ...
size_t srcSize = ...
BYTE *destBuff = new BYTE[srcSize];
memcpy(destBuff, src, srcSize);

The following code copies a memory block with the same size as the source character string:

char *src = ...
size_t len = strlen(src);
if (len > BUFF_SIZE) {
    ...
}
char *destBuff = new char[len + 1];
strcpy(destBuff, src);

(5) The source memory stores static character string constants only. (Check during coding that the destination memory is sufficient.) The following code directly copies the string constant "hello" to the array:

char destBuff[BUFF_SIZE];
strcpy(destBuff, "hello");

The following code concatenates static character string constants:

const char *list[] = {"red", "green", "blue"};
char destBuff[BUFF_SIZE];
sprintf(destBuff, "hello %s", list[i]);
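For the ordinary case, where the data or its length comes from outside, a bounded function from the safe library should be used instead. The following sketch is illustrative only: it assumes the C11-Annex-K-style signature memcpy_s(dest, destMax, src, count) returning 0 on success, as provided by huawei_secure_c through securec.h, and the function and constant names are hypothetical.

#include "securec.h" // Assumed header name of the huawei_secure_c library.

constexpr size_t MAX_PAYLOAD_LEN = 256; // Hypothetical upper bound for external data.

int CopyPayload(unsigned char *dest, size_t destSize, const unsigned char *src, size_t srcLen)
{
    // External data: verify the pointers and the length before any memory operation.
    if (dest == nullptr || src == nullptr || srcLen == 0 || srcLen > MAX_PAYLOAD_LEN || srcLen > destSize) {
        return -1;
    }
    // Bounded copy: the destination capacity is passed explicitly and checked by memcpy_s.
    if (memcpy_s(dest, destSize, src, srcLen) != 0) {
        return -1;
    }
    return 0;
}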
2020-10-23 09:34:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22105349600315094, "perplexity": 5135.57180642017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880878.30/warc/CC-MAIN-20201023073305-20201023103305-00214.warc.gz"}
http://clivebest.com/blog/?p=4597&replytocom=16168
# The CO2 GHE demystified

Abstract: This post describes a new approach to calculating the CO2 greenhouse effect. Instead of calculating radiative transfer from the surface up through the atmosphere to space, exactly the opposite is done. IR photons originating from space are tracked downwards to Earth in order to derive, for each wavelength, the height at which more than half of them get absorbed within a 100 meter path length. This identifies the height where the atmosphere becomes opaque at a given wavelength. This also coincides with the "effective emission height" for photons to escape from the atmosphere to space. A program has been written using a standard atmospheric model to perform a line by line calculation for CO2 with data from the HITRAN spectroscopy database. The result for CO2 is surprising as it shows that OLR from the central peak of the 15 micron band originates from high in the stratosphere. It is mostly the lines at the edges of the band that lie in the troposphere. The calculation can then show how changes in CO2 concentrations affect the emission height and thereby reduce net outgoing radiation (OLR). The net reduction in OLR is found to be in agreement with far more complex radiative transfer models. This demonstrates how the greenhouse effect on Earth is determined by greenhouse gases in the upper atmosphere and not at the surface.

This post looks in detail at the emissions to space by CO2 molecules from the atmosphere. The main CO2 absorption band lies at 15 microns. It is composed of hundreds of quantum transitions between vibrational states of the molecule. The reference database for the strengths of these lines is called HITRAN and is maintained by Harvard University [1]. I requested a copy of this database and have been studying it. Fig 1 shows in detail the transition lines within this band and Fig 2 shows the fine detail within the central spike. The line strengths are recorded at 296 K in units of cm-1/(molecule cm-2), corresponding to the absorption cross section for one molecule in vacuum. In the real atmosphere these lines are broadened, due mainly to the motion of molecules. This is a rather complex subject, but luckily I found a Fortran program [2] which takes as input the line strengths from HITRAN and then integrates them over pressure to derive a net absorption cross section per mole of CO2. This result is shown in Figure 3. Notice how strong the central peak now becomes, with 2 clear side fans of absorption with fine structure.

To make progress in locating exactly where IR is emitted to space we need a model of the atmosphere. For this we assume a standard lapse rate of 6.5 C/km up to the tropopause at 11 km, then stationary temperatures through to 20 km, followed by a linear increase of 1.9 C/km in the stratosphere until 48 km above the surface (see fig 4). The barometric pressure profile is taken to be $P(h) = P_0 e^{-h/H}$ with scale height $H = RT/(Mg)$, where M is the molar mass of air and g the gravitational acceleration.

The objective of the calculation is then to take each CO2 transition line in turn and descend from space to find at which altitude the absorption of photons of that wavelength within a 100 m thick slice of the atmosphere becomes greater than the transmission of photons. We define this height as the transition between opaque and transparent. This is the height at which thermal photons within the CO2 absorption bands are free to escape to space – the effective radiation height.
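A minimal sketch of this model atmosphere, written here in C++ purely as an illustration of the description above (the 288 K surface temperature is my assumption, since the post does not state it, and the pressure follows the simple exponential form quoted above):

#include <cmath>

// Temperature profile: 6.5 C/km lapse rate up to 11 km, isothermal to 20 km,
// then +1.9 C/km up to 48 km.
double TemperatureK(double h) // h in metres
{
    const double t0 = 288.0; // assumed surface temperature (K)
    if (h <= 11000.0) {
        return t0 - 0.0065 * h;
    }
    const double tTropopause = t0 - 0.0065 * 11000.0; // about 216 K
    if (h <= 20000.0) {
        return tTropopause;
    }
    return tTropopause + 0.0019 * (h - 20000.0);
}

// Barometric pressure with scale height H = RT/(Mg).
double PressurePa(double h)
{
    const double r = 8.314;         // gas constant (J/mol/K)
    const double molarMass = 0.029; // molar mass of air (kg/mol)
    const double g = 9.81;          // gravitational acceleration (m/s^2)
    const double scale = r * TemperatureK(h) / (molarMass * g);
    return 101325.0 * std::exp(-h / scale);
}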
The absorption rate is simply the molar cross section times the number of moles of CO2 contained in a 100 meter long cylinder of cross-section 1 m^2. A graph of emission heights versus wavenumber is shown in figure 5a for a CO2 concentration of 300 ppm in black and 600 ppm in red. Fig 5b is a smoothed average over 20 adjacent lines. Note how it is mainly the emission heights of the side lines which lie in the troposphere. The emission height of the central peak actually lies in the stratosphere, with the central spike reaching up to 25000 meters where the temperatures are actually increasing with altitude. As expected, doubling CO2 concentrations raises the emission height significantly, but the effect on radiation loss depends on the temperature difference between the old emission height and the new emission height.

Below the emission height, radiation in CO2 bands is in thermal equilibrium with the surrounding atmosphere. This is usually called Local Thermodynamic Equilibrium (LTE). The lapse rate of the atmosphere is driven by convective and evaporative heat loss from the surface, but energy loss to space can only occur through radiation. So the local temperature from where IR photons escape to space determines the radiation flux for that wavelength. The temperatures at the emission heights for CO2 are shown in figure 6. The effective temperature of the emission height now allows us to calculate the Planck spectrum for the CO2 lines. The result is shown in Figure 7.

So how does this compare with a real spectrum as measured by satellite? Figure 8 shows a spectrum taken from NIMBUS. There is an overlap with the water vapour continuum lines below 550 cm-1, which reduces the left shoulder. But apart from that the agreement is really rather good, and in particular note the upward spike at the centre of the line corresponding to emission from the warmer stratosphere. Similarly the flat bottom corresponds to the tropopause at around 216 K.

Finally we can make an estimate of the radiative forcing due to a doubling of CO2. To do this we first derive the net change in outgoing IR from an increase in CO2 from 300 ppm to 600 ppm, as shown in Figure 9. Note how for the central peak the radiation actually increases for a doubling of CO2, as the emission height lies high up in the stratosphere where temperatures are actually increasing with height. Next we integrate the change in the radiative flux over all lines in the CO2 band going from 300 ppm to 600 ppm concentration. The result of this integration works out to be 1.17 watts/m2/sr. However, to derive the net change in OLR we need to integrate this over the outgoing solid angle for photons that reach space. Quoting from Wikipedia: the integration over the solid angle should be over the half sphere of outgoing radiation. Furthermore, because black bodies are Lambertian (i.e. they obey Lambert's cosine law), the intensity observed along the sphere will be the actual intensity times the cosine of the zenith angle $\phi$, and in spherical coordinates, $d\Omega = \sin(\phi)d\phi d\theta$. This then adds a factor $\int_{0}^{2\pi}{d\theta} \int_{0}^{\frac{\pi}{2}}{\cos(\phi)\sin(\phi) d\phi}$, which when evaluated gives an extra factor $\pi$. So finally the reduction in outgoing IR radiation caused by a doubling of CO2 from 300 ppm to 600 ppm becomes 4.7 watts/m2. This is not far away from the value as calculated by climate models – 3.7 watts/m2! This is usually called "radiative forcing".
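To make the algorithm concrete, here is a highly simplified sketch of how one line's contribution can be computed (my own illustration in C++, not the program actually used for the figures; it reuses the TemperatureK and PressurePa helpers and the <cmath> include sketched above, and assumes the HITRAN-derived molar cross section sigmaMolar is supplied in m^2 per mole):

// Effective emission height for one line: descend from the top of the model
// atmosphere in 100 m steps and stop where more than half of the photons are
// absorbed within the layer (the opacity criterion used in the post).
double EmissionHeight(double sigmaMolar, double co2ppm)
{
    const double layer = 100.0; // layer thickness (m)
    for (double h = 48000.0; h >= 0.0; h -= layer) {
        double molesAir = PressurePa(h) / (8.314 * TemperatureK(h)); // ideal gas law, mol/m^3
        double molesCo2 = molesAir * co2ppm * 1.0e-6;                // CO2 fraction
        double absorbed = 1.0 - std::exp(-sigmaMolar * molesCo2 * layer);
        if (absorbed > 0.5) {
            return h; // the layer is opaque: photons escape to space from here
        }
    }
    return 0.0; // the line is transparent all the way down to the surface
}

// Planck spectral radiance at wavenumber nu (in m^-1) and temperature t (K).
double Planck(double nu, double t)
{
    const double hp = 6.626e-34, c = 2.998e8, kb = 1.381e-23;
    return 2.0 * hp * c * c * nu * nu * nu / (std::exp(hp * c * nu / (kb * t)) - 1.0);
}

// OLR contribution of one line: the radiance at the emission temperature times
// the Lambertian solid-angle factor pi discussed above.
double LineFlux(double nu, double sigmaMolar, double co2ppm)
{
    const double pi = 3.14159265358979;
    return pi * Planck(nu, TemperatureK(EmissionHeight(sigmaMolar, co2ppm)));
}

Summing LineFlux over all lines for 300 ppm and for 600 ppm, and taking the difference, gives the forcing estimate described above.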
Note how in the stratosphere the energy loss increases with CO2 concentration. This predicts that the stratosphere should cool, as the troposphere warms. All predictions of warming/cooling are of course based on the assumption that all else remains constant – lapse rate, H2O, clouds etc. The real signature for a CO2 GHG effect would be to observe cooling in the stratosphere where these effects are much smaller. In the next post I will examine in detail how "radiative forcing" depends on CO2 concentrations.

References
1. http://www.cfa.harvard.edu/hitran/
2. http://home.pcisys.net/~bestwork.1/CalcAbs/CalcAbsHitran.html

### 36 Responses to The CO2 GHE demystified

1. Eric Barnes says: Hi Clive, Great article! 🙂 I have a question. Above figure 6 you state … "As expected doubling CO2 concentrations rises the emission height significantly but the effect on radiation loss depends on the temperature difference beween the old emission height and the new emission height." This doesn't make sense to me. In my mind, LTE would not be a 2d surface, but a 3d mass of air that will be larger as the thermosphere expands, much like a heat sink that can increase it's surface much more than in a strictly vertical sense.

• Clive Best says: Thanks Eric, Yes, you are quite right. LTE really covers a 3d shell of finite thickness. In my code I assume each layer to be 100m thick. So really the heights I am talking about should be at the centre of each shell and define the temperature of the surrounding atmosphere within 100 m of that height. I agree that the wording should be changed . The atmosphere would indeed expand a little if the local temperature increases. I think convection would maintain the lapse rate. However the lapse rate itself could change with an increased relative humidity. These are just the known unknowns.

2. Eric Barnes says: I think the wording is fine, but that the IR surface expands in 3 dimensions and that the integration procedure would be more accurate if it accounted for that? Looking at the model, it seems clear that the IR surface/manifold? is much rougher with 600ppm of CO2. This is clearly shown in 5b that the effective emission height is not only higher, but has much greater variance. This also makes sense intuitively to me. Sort of like a puffer fish whose spines would also extend along with his body. http://animals.nationalgeographic.com/animals/fish/pufferfish/ So while the average temperature goes down, the surface area increases greatly. My mind starts aching when trying to follow the integration procedure, but would it not be possible to account for the increased IR surface area in the integration? Perhaps spreading including a probability with each shell and integrating over the vertical?

• Clive Best says: You are right – there is a small change in surface area with height but I don't think that is why the 600 ppm has greater variance particularly in the troposphere. The surface area doesn't really change that much because the centre of the earth is so far away. 4piR^2 and 4pi(R+dr)^2. I think it may be because the pressure (density) change is exponential. I will put the code on-line when I have finished and you will see how simple it really is ! Clive

• Eric Barnes says: I'd like to look at some code, although I can assure you that it's not simple (for me at least :)). Thank you for entertaining my questions and for publishing this online. 🙂

3.
pochas says: Clive, Thanks for devoting the time to do this analysis. It gives me confidence that when we are talking about the “no feedbacks” case we know what we are talking about (no data fiddling involved). If you get some free time you might look at the difference between the “Dynamic Tropopause Potential Temperature” and surface or near-surface temperature. Any relevance for climate sensitivity? http://climaterealists.com/?id=11197 Regards 4. Randall Reviere says: Hi Clive, Perhaps a simple way of looking at the effect of GHGs are by considering each molecule a small IR transmitter/receiver, with a small delay (and chance that energy gained in reception) is given up in a collision w/O2 or N2. Since the direction of transmission is random, the molecules are transmitting toward space approximately 50% of the time, while they are receiving (a briefly storing) energy from all directions (hence 100% of the time). The density of GHGs (not any particular one, because thermal equilibrium ‘shares the energy’ between all species present, not just the GHG active at band for a given event) thus determines the mean free path for IR moving through the atmosphere, hence the number of absorb/emit events, the total delay and hence the energy per unit volume (and temperature via the energy/temperature relation for the species). In this view, all other things being equal, the temperature must decline with altitude because energy per unit volume declines with altitude (until it is so low that UV – O2/O3 starts to make a significant contribution), given that 50% of emission events in a layer result in ‘permanent’ loss of the energy to a higher altitude and hence ultimately to space. I’m not sure that worrying about the frequency of the emission to space ultimately matters that much, unless you are trying to match the ‘last look’ NIMBUS spectrum data, for example, because presumably a given quantity of energy will have spent part of it’s trip up from the surface manifested in any number of frequencies, given the range of active frequencies and the thermal equilibrium at each absorb/emit point. On the other hand, is it possible that a quantum of energy emitted upward at a wavenumber of 680 is trapped there and will be making it’s whole trip being absorbed/emitted at that wavenumber? As a thought experiment, would it make a difference to the temperature profile of the atmosphere if all of the CO2 in the lower atmosphere were replaced by H2O and CO2 were somehow confined to only the top layers that are not 100% opaque to space (in the present proportions so the NIMBUS data looks the same)? Thanks for you tremendous help in understanding how this works… I appreciate your insights. Regards, Randall • Clive Best says: Randell, Sorry for the delay- I only just saw your comment. Your general analysis is spot on. However I think the frequency does matter because the mean free path for an IR photon depends both on density and absorption cross-section. The cross-section varies strongly with frequency. So for example the quantum line for CO2 at exactly 15 microns has so large a cross-section that the atmosphere is opaque until way up into the stratosphere. Other lines are much weaker and radiate to space at low altitudes. So even if much of the CO2 is thermally excited rather than directly by photons, the energy loss upwards depends on density and frequency. Regarding your thought experiment. In the lower atmosphere about 2/3 of heat flow upwards is by convection and evaporation, and only 1/3 is by radiation. 
This proportion changes as you go upwards until eventually at the tropopause ALL energy loss upwards is by radiation. In that sense changing the H2O/CO2 mix may change these proportions and both the height of the tropopause AND by implication the surface temperature will change. The dry adiabatic lapse rate would remain the same although the environmental lapse rate would fall as there is now more water vapor. regards Clive 5. Randall Reviere says: Hi Clive, I think the statement “energy loss upwards depends on density and frequency” might well be right at the heart of the matter. Consider 2 end-point scenarios, one where frequency does not matter and the other where density does not matter. Also, to make this simple to think about (at least for me), consider that each of these end-point scenarios has an analogous simple electrical circuit, where the voltage corresponds to energy difference, the current to heat flux, and the resistance to specific GHG influence (which is a function of density and frequency). Using this analogy, in the one case, the electrical configuration is like a bundle of vertically stacked resistors (say each 1 meter long, one for each frequency and GHG) that are all tied together so there is no difference in potential at the end of each resistor, and at each elevation, the resistor with the lowest resistance of course carries the most current. Call this the parallel configuration. The corresponding ‘series’ configuration is similar except the only potentials that are equal are at the bottom and top of the stacks. Everywhere else the potentials are determined by the relative sizes of the resistors. Summarizing, one looks like a ‘series of parallels’ (“SOP”) and other is a ‘parallel set of series’ (“POS”) configuration. I put together a spreadsheet for a 2X2 resistor array in SOP and POS, in order to compare the total resistance calculated and see under what conditions SOP vs. POS makes any difference. Let column 1 be H2O and column 2 be CO2. Let row 1 be the upper atmosphere and row 2 be the lower atmosphere. For this analogy, let’s make the resistances dimensionless by dividing all of them by the ‘resistance’ of H2O in the lower atmosphere, whatever that is, so we can set the ‘resistances’ of upper atm H2O, and CO2 in relation to this value. I take the ‘conductance’ (the inverse of resistance) for lower atmosphere H2O to be relatively high, given that there is both a lot of water and a lot of IR bands where water is active. Let that have a value of 1. For the sake of this tiny model, set the upper atm conductance/resistance also to 1 (although it might be lower given nothing above to interfere with radiation away… but there is less present). As for lower atm CO2, the resistance value (inversely proportional to the heat current that flows through this path) must be much higher, given relatively little CO2 and fewer active bands. I set this resistance to 99. (in effect saying that in the lower atm 99% of the heat flow is H2O related, and 1% in CO2 related). In the upper atm, let’s say that 1/3 of the heat flows via CO2, so the resistance is a value of 2? Twice the upper atm H2O value. Now we can compare SOP and POS and also how sensitive the total resistance is to a changes in upper atm CO2 resistance. In this tiny model, a doubling of upper atm CO2 resistance in the SOP mode from 2 to 4 increases total resistance by 8%, a significant change given how water dominated the model as a whole is. In POS mode, the increase is significantly smaller… .03%. 
In fact, POS mode predicts virtually no sensitivity to changes in upper atm CO2 resistance, because once energy is ‘in the water channel’ it stays there all the way up. I think this could be (and probably has been) resolved with lab bench experimentation… it would be along the lines of a test chamber illuminated with each CO2 band at the relevant range of atm pressures, temperatures and CO2 (and H2O) concentrations… looking orthogonally into the test chamber we should be able to see how much of the CO2 band is the result of a quick absorb/emit of this illumination vs. something that looks much more like the full thermal equilibrium (a function only of the temp). I would bet that at higher relative pressures it’s very much a thermal pattern and only at really low pressures does the CO2 illumination band show up strongly. Just guessing though. Anyone out there aware of such lab data? If my relative ‘resistance’ numbers are nonsense, then the whole exercise is highly suspect. The sensitivity % could be much higher but probably is much lower (for SOP). Please comment if it looks too far off… Best regards, –Randall 6. Marko says: Why do you choose 100 m as the critical attenuation length for photon emission? As the atmosphere is clearly many km thick, photons with a 100 m attenuation length would most likely not escape the 100-meter environment. It is probably a bit dubious to choose any value, as clearly photons are emitted from a fair thickness of air with varying escape probability. However, for comparison, how would the results change if you take, say, 1 km or 3 km as the critical attenuation length? This probably would move more of the emission to the stratosphere. Second point: I think it is a bad idea to do any averaging of the spectra. At large altitudes, the emission/absorption lines are very slender and the peak-valley variation may have a large impact on the emission altitude. Something in the ballpark of 3 km might actually be a fairly good guess for the critical attenuation length, as such a change in altitude will cause an appreciable change in pressure and the associated narrowing of lines enhances the escape probability for radiation moving upwards. • Clive Best says: Marko, I am choosing a vertical grid with arbitrary 100m intervals. So I don’t think I am assuming anything about the attenuation length, which anyway varies dramatically with atmospheric pressure. The advantage of calculating from TOA downwards is that we know the attenuation length is many times larger than 100m high up. I am calculating the height for each wavelength where more photons are absorbed than are transmitted through. This is the height where for that wavelength the atmosphere becomes opaque for IR photons. This is what I define as the effective emission height. The trick is to imagine a flux of IR photons going down from space rather than going up from the surface. The result for the effective emission height will be the same. 7. Pingback: weltklima - Seite 288 8. Chic Bowdrie says: Clive, Better late than never coming across this excellent post and the next one, where I have another question as well. Fig. 8 shows a spectrum from NIMBUS which corresponds nicely to your calculated spectrum in Fig. 7. The y-axis is labeled radiance, which suggests greater emission occurs with 300ppm vs. 600ppm. In fact, Fig. 9 defines the change in radiance as 300ppm – 600ppm and the values are mostly positive. This seems counterintuitive unless fewer molecules at a lower, warmer altitude emit the greater amount of energy.
Can you straighten me out on this? Also, does this analysis account for the average absorption and emission which would occur during a normal 24 hour time frame? • Clive Best says: Sorry about the confusion. The difference in figure 9 is a sign change. In fact the radiance is reduced in going from 300 to 600 ppm. What I have plotted is the “radiative forcing”, which is the negative of that value! The whole AGW argument is based on increasing levels of CO2 reducing OLR. This creates a global energy imbalance causing the surface to warm to rebalance the reduction. This “forcing” is equal to the fall in OLR caused initially by increased CO2. Forcing = – reduction in OLR from top of atmosphere. 9. Chic Bowdrie says: This is of course the consensus view of climate scientists, but is it also that of physicists? I read the responses of Jack Barrett and Peter Dietze to criticisms of Heinz Hug’s “Climate Catastrophe – A Spectroscopic Artifact?” and wonder if the matter is settled. To your point, Barrett would argue “Certainly, at lower temperature [from higher altitude], the collision rate is reduced, but by no means as much as to offset the larger number of radiative molecules.” I am also interested in how transient changes in the lapse rate over a 24 hour period are figured into line-by-line calculations converting emission temperature to radiance, which appear to be based on a spectral snap-shot. If this is done, I missed it or haven’t got there yet along my path to understanding climate physics. Grateful for any advice on this. • Clive Best says: Of course none of the climate models take into account the diurnal changes in the lapse rate between night and day. They just average over that as far as I know. However night time radiative cooling in clear skies must be important and is especially affected by clouds. Clouds are the elephant in the room. They cool the earth during the day and warm the earth at night. The net average effect of clouds on earth, though, is a cooling of -22 W/m2. A reduction in global cloud cover of 1-2% would offset all of AGW. Regarding Barrett: as far as I see it, CO2 molecules are in thermal equilibrium with N2 and O2 molecules at any given height. Unlike N2/O2 they can also radiate “heat” energy. However they radiate due to collisions with other gases governed by the local temperature. 10. Martin A says: Hi Clive, Many thanks for this explanation of the GHE. The “black body surrounded by a shell of greenhouse gas” model was never any better, for me, than a plausibility argument for how the GHE works. So it’s good to find something that is both understandable and seems realistic. There is a small point I am not sure of. You say: The objective now of the calculation is to take each CO2 transition line in turn and then descend from space to find at which altitude the absorption of photons of that wave length within a 100m thick slice of the atmosphere becomes greater than the transmission of photons. We define this height as the transition between opaque and transparency. This is the height at which thermal photons within the CO2 absorption bands are free to escape to space – the effective radiation height. This clearly is near reality since your calculations reproduce the spectrum measured from space. However, the “100m thick slice of the atmosphere” seems arbitrary (eg why not a 176m thick slice?).
And although such a slice may itself be just about transparent, you might have a significant number of such slices above you, giving something not really all that transparent, through which photons don’t have a high probability of escaping to space. Would it be appropriate instead to work down to the level at which 50% of incoming photons have been absorbed? So that, if my understanding is right, you’ll have calculated the height above which a majority of outgoing photons escape for good? I hope my question makes sense. 11. chapprg1 says: Dr Best; Thank you for bringing this immense step toward sanity to the discussion. You say: “The objective now of the calculation is to take each CO2 transition line in turn and then descend from space to find at which altitude the absorption of photons of that wave length within a 100m thick slice of the atmosphere becomes greater than the transmission of photons.” My naive question is why is there such a large change in emission altitude for the side bands? I would expect the free path to be the same for all of the frequencies in this narrow band. The reason that their amplitudes are small being emitted to space is not because of the thick atmosphere above their emission altitude but their emitted amplitudes are small to begin with. What am I missing? Or can you point me to something which explains your calculation of fig. 5a. Thank you so much for your post. Ron Chappell 12. I don’t know if this will be of any help to you, but here are some absorption coefficient figures I’ve gathered. http://bartonlevenson.com/AbsorptionCoefficients3.html All lines and bands are in microns, all coefficients in m^2 kg^-1. For notes on translation (you probably know this stuff already): http://bartonlevenson.com/ConvertingAbsorptionCoefficients.html • Ronald Chappell says: Thanks so much for the very thorough and thoughtful reply. I’m overwhelmed with confusion. Some seem impossibly large. arationofreason 13. Just background: http://journals.aps.org/pr/abstract/10.1103/PhysRev.41.291 http://www.rmets.org/sites/default/files/qjcallender38.pdf Callender’s 1938 paper also appears in The Warming Papers, edited by David Archer and Ray Pierrehumbert. 14. Chris Kennedy says: Clive, What do you mean by tracking IR photons downward from space? The graphs that I’ve looked at show incoming from space as mostly visible light range and outgoing (upward) as IR. • Clive Best says: It is just a thought experiment that allows us to get the same result as doing a complicated radiative transport calculation up from the surface. There are no real IR photons coming from space except perhaps those from the Big Bang microwave background ! 15. Chris Kennedy says: Thanks Clive – I’m new to this specific field of study and find your discussions helpful. Do you know why CO2 is (has been) considered nearly saturated when the satellite IR spectroscopy graphs don’t show the near 667 wavenumber (or near 15 microns) as bottoming out? That means the IR detectors are still picking up a reduced amount but an amount nonetheless (40 mW for example in Fig 8). Is the consensus that it is all from upward retransmission from energized CO2? If so, intuitively that would seem a bit much. I understand why the peaks get wider so that’s not my issue. It’s the remaining 40 mW that still reaches the detectors that puzzles me. • Clive Best says: That is a very interesting question. The central central lines are already saturated way up into the stratosphere where temperature actually increases with altitude. 
This means that IR emissions there are actually higher than those from lines emitting near the top of the troposphere – hence the spike. As CO2 increases so does the emission height, increasing emissions from the central lines and producing a GH ‘cooling’ contribution. However the net effect when integrated over all lines is a reduction in outgoing IR and an induced warming effect. 16. Chris Kennedy says: The NIMBUS II data was from the early 70s and there is AIRS data from after 2002. The altitudes for both satellites seemed similar (over 400 miles) but AIRS measured tropospheric temps so I’m not sure what that means regarding its altitude. https://airs.jpl.nasa.gov/data/products Anyway the AIRS spectrum provided by Pierrehumbert looks almost identical to the earlier NIMBUS II: https://geosci.uchicago.edu/~rtp1/papers/PhysTodayRT2011.pdf I confess I still don’t grasp why the 667 dip stops at that level. To me, I interpret this as meaning more CO2 tomorrow could trap even more IR at 667. But on the other hand – why are the NIMBUS II and AIRS graphs so similar when more than 30 years had passed and CO2 levels increased from 325 ppm to 380 ppm? Without understanding this I won’t be able to appreciate what going to 600 ppm will actually do. • Clive Best says: I assume you mean the 667 central spike. This will strengthen as CO2 increases but it releases more energy to space, the opposite of trapping more energy! Basically the atmosphere radiates at different heights and wavelengths, but always roughly as a black body with temperature Tlocal. The central spike already radiates from CO2 molecules way up in the stratosphere. The stratosphere temperature increases with height and radiant energy goes as T^4, so more heat is radiated to space from this wavelength as CO2 levels rise. This increases cooling! You can already see that in the spectra when compared to BB spectra at various temperatures. The spike radiates at 245K whereas the side bands radiate at 220K. 17. richard donnellan says: A Black Body!!!! Hardly. Schoolboy error. Is the rest of your analysis worth reading? 18. Chris Kennedy says: Clive, I’ve done a little more research. I think the biggest reason the central spike at exactly 667 is a spike is because there is much less CO2 moving perfectly laterally compared to the amount of CO2 encountered with vertical velocity toward or away from the approaching IR (Doppler broadening). That would explain more absorption between 640-665 and 669-700 than the amount absorbed at the actual 667 wavenumber itself. So the result is that more IR at exactly 667 will make it all the way to the satellite detectors. I reread chapter 4 of David Archer’s Global Warming book. He discusses the Band Saturation effect. Even though he appears to be in the “CO2 is ruining the planet” camp, this section (if I understand correctly) is basically an admission that the levels of CO2 have hit their limit of absorbing any more at 667. He does warn that more CO2 will contribute to additional warming due to available CO2 to absorb near (+ or -) 667 and “broaden” the band even wider. I get that and in a sense don’t technically disagree with it, but the farther you get from 667 in either direction, the harder it is for IR to find a CO2 traveling at that velocity. So if that’s the reason for concern, I can’t see the world collapsing as CO2 moves toward 450 ppm some time in the future. However, I think a more likely reason for additional warming is something you mentioned in your post: DOUBLING CO2 and BASIC PHYSICS from Feb 4, 2010.
There you mention how increased CO2 concentrations could cause the saturation at lower altitudes in the atmosphere. So as CO2 concentrations continue to rise, a higher percentage absorbs increasingly closer to the surface. There may be something to that especially if lower altitude CO2 has a better chance of transferring energy to O2 and N2 while higher altitude CO2 presumably has more time between collisions and has increased chance to re-radiate the energy away as IR before it transfers to other gas molecules to increase kinetic energy. Has someone done the math on this? Could additional warming be simply from continually increasing the ratio of: transfer to kinetic vs radiate to space? Can you recommend any articles or books that explore this issue? 19. LOL@Klimate Katastrophe Kooks says: Can someone check my premise and my math here? I’m studying climate change from a particle physics and quantum mechanics perspective, and want to know if I’m on the right track. The TL;DR is that the first vibrational mode quantum state of the ground electronic mode quantum state of N2 has more energy than even the highest vibrational mode quantum state of the ground electronic mode quantum state of CO2, and thus vibrational mode quantum state energy preferentially flows from N2 to CO2. The only time CO2 will transfer vibrational mode quantum state energy to N2 is if the N2 is in its ground electronic mode and vibrational mode quantum states, and that’s not likely in the atmosphere for at least the vibrational .mode quantum state. =============== The average kinetic energy of CO2 molecules at prevalent atmospheric temperature (288 K) is given by: KE_avg = [1/2 mv^2] = 3/2 kT … which gives an average thermal energy of 0.03722663 eV and a mean CO2 molecular translational speed of 372.227941 m/s. This thermal energy is equivalent to the energy of a 33.3283159 micron photon. You’ll note the thermal energy is LESS THAN the energy necessary to excite a CO2 molecule’s vibrational mode quantum states. So one would simplistically assume that the opposite applies, that the vibrational mode quantum state energy of CO2 is greater than the translational energy of N2 or O2 molecules (which would be approximately the same as calculated above, due to the Equipartition Theorem) and therefore a photon-excited CO2 molecule will de-excite via a thermalizing collision with N2 or O2, thereby raising atmospheric temperature… except that assumes N2 and O2 are in their vibrational ground states, it neglects the energy in vibrational mode quantum states of N2 and O2. The wavenumber of any transition is related to its corresponding energy by the equation: 1 cm-1 = 11.9624 J mol-1 667.4 cm-1 = 667.4 * 11.9624 / 1000 = 7.98 kJ mol-1 The Boltzmann Factor at 288 K has the value exp(-7980 / 288R) = 0.03609 which means that only 3.6% of the CO2 molecules are in the lowest excited vibrational mode quantum state {ie: v21(1), bending mode}. These are the molecules that form the lower energy state for the next higher transitions which have an even lower population. The v2 vibrational (bending) mode quantum state of CO2 in its ground electronic state requires ~0.08279 eV or a ~14.98576 micron photon (per VR Molecules Pro molecular modeler). The first vibrational mode of N2 has quantum energy of 0.14634 eV, more than enough to activate CO2’s first vibrational mode quantum state upon collision. 
Thus, given that the Equipartition Theorem indicates that the thermal energy of both molecules is similar, during a collision the vibrational mode quantum state energy of a vibrationally-excited N2 molecule will flow to the CO2 molecule, not the other way around. This, of course, assumes that N2 in the atmosphere is vibrationally excited to at least its first vibrational mode quantum state. And a good percentage of it is… ———- https://www.osapublishing.org/DirectPDFAccess/5CCF1401-BEE0-71DE-4128232482B99888_303623/oe-22-23-27833.pdf?da=1&id=303623&seq=0&mobile=no Vibrationally Excited Molecules In Atmospheric Reactions “It follows from the solar ultraviolet intensities quoted by Watanabe and Hinteregger that the production of N2* through Eq. 21 will be of the order of 10^10 cm-2 sec-1. Most of the N2* will be in low vibrational levels.” {Comment: That’s 10,000,000,000 per square centimeter per second) ———- We can again use the Boltzmann Factor to determine the excitation population of N2. While the N2 molecule is IR-inactive due to no change in magnetic dipole, it is Raman-active: N-N stretching at 2744 cm-1 (3.64431 micron) 1 cm-1 = 11.9624 J mol-1 2744 cm-1 = 2744 * 11.9624 / 1000 = 32.825 kJ mol-1 The Boltzmann factor at 288 K has the value exp(-3282.5 / 288R) = 0.087738 which means that 8.77% of N2 molecules are in the N-N stretch excited state. When the molar mass of any gas is divided by the density of that gas at 1 atmosphere and a temperature of 288 K, the value 23.633 L/mol is obtained. So when looking at any 23.633 liter volume of the atmosphere, there will be one mol of N2 and one mol of CO2, when assuming that CO2 is a well-mixed gas. The mol of N2 in that 23.633 liter volume will contain 32.825 kJ of energy, whereas the mol of CO2 will contain 7.98 kJ of energy. {{{ 32.825 kJ / mol > 7.98 kJ / mol }}} Energy always flows from a higher-energy density to a lower-energy density regime. Given that CO2 constitutes 0.041% of the atmosphere (410 ppm), and N2 constitutes 78.08% of the atmosphere (780800 ppm), this means that 14.7969 ppm of CO2 is excited, whereas 68505.8304 ppm of N2 is excited. This is a ratio of 1 excited CO2 to 4629 excited N2. You’ll note this is 2.43 times higher than the total CO2:N2 ratio of 1:1904, and 167 times more excited N2 molecules than ALL CO2 molecules. As you can see, the number of excited N2 molecules swamps the number of excited CO2 molecules, and on a molar volume basis, N2 contains much more energy than CO2 at the same temperature and in the same volume of atmosphere. Hence, energy flows FROM N2 TO CO2. =============== 20. eddiebanner says: eddiebanner@outlook.com Hi Clive Thank you again for your kind response to my post “Carbon Dioxide Absorption Power and the Greenhouse Gas Theory”. It seems that I placed this in an inappropriate thread, and that this one would be much better, so I should like to re-post here if you would please allow it. I am trying to reconcile the ideas in my post with those in your excellent paper above, but I need some help here. Please would you let me know the value you, and the climate models, use for the energy of a 15 micron photon, so that I can compare it with the value I have used, which is 1.3252*10^-20 Joule. Global warming is certainly happening and much has been written about the Greenhouse Gas effect and it’s claimed warming of the Earth’s surface. 
The ideas have been based on the ability of molecules of carbon dioxide in the Earth’s atmosphere to absorb infrared photons of 15 micron wavelength, but very little, if anything, has been published about the power which can be handled by the atmospheric carbon dioxide. Nevertheless, GHG advocates claim a “radiative forcing” of about 2 Watts per m2 at the Earth’s surface. The following calculations show that this GHG theory cannot be correct. Consider a standard column of the Earth’s atmosphere, based upon an area of 1 square metre of the Earth’s surface. The number of molecules in this column (1) is 2.137*10^29 So at the current concentration of carbon dioxide, 400ppm, the number of molecules of carbon dioxide is (400*10^-6 )*(2.137*10^29 ) = approx 8.5*10^25 From the HITRAN database (2), the ability of the CO2 molecule to absorb a 15 micron photon is given by its absorption cross-section, which is 5*10^-22 m2 per molecule. (Note that this database gives the value in cm2 ). So, in an area of 1m2 the number of molecules required to absorb 1 photon is 1/(5*10^-22) ; that is 2*10^21 CO2 molecules per m2 But there are 8.5*10^25 molecules of CO2 in the column. So the number of photons which can be absorbed is (8.5*10^25) / (2*10^21) = 4.3*10^4 photons per m2 Now, the energy of a 15 micron photon (3) is 1.3252*10^-20 Joule So the energy absorbed by all the CO2 in the column = (1.3252*10^-20) * 4.3*10^4 Joule = 5.7*10^-16 Joule per m2 This process can be repeated many times per second because the excited CO2 molecule can release its energy by collision with any molecule in the atmosphere, ready to absorb another photon of the right energy. The mean free path in air at atmospheric pressure (760 torr) is about 0.1 micron, and the molecular velocity is 465 m.sec-1, and so the mean time between collisions is about 2*10^-10 second. So the process can be repeated about 5*10^9 times per second. Therefore, the maximum power which the carbon dioxide (at 400ppm) can handle is (5*10^9)*(5.7*10^-16) Joule per second per m2, that is approx. 3*10^-6 Watts.m-2 Whereas the Greenhouse Gas theory requires about 2 W.m-2 , which is about 700,000 times the power available. This seems to show that the Greenhouse Gas Theory is not valid. References (1) http://www.theweatherprediction.com/habyhints3/976/ (2) http://vpl.astro.washington.edu/spectra/co2pnnlimagesmicrons.htm (3) https://www.pveducation.org/pvcdrom/properties-of-sunlight/energy-of-photon • Clive Best says: Eddie, Yes, but I think you are forgetting how the convective/latent heat energy flow changes to compensate. This is impossible to calculate but works a bit like a pressure cooker. Heat will escape to space via IR radiation from the surface, top of clouds, H2O and CO2 through whatever is the most efficient. It certainly doesn’t all escape via CO2 IR to space. Only a small fraction in the 15micron band is affected by CO2. Any so called “trapped” heat by doubling CO2 is thermalised and escapes throughout the black-body spectrum. Just that fraction emitted by CO2 rises in height to colder levels. Clive • Aubrey Banner says: Clive Eddie 21. Aubrey Banner says: Clive Eddie 22. 
Robbert says: Clive, A remark on the model that you used to compute the reduction in outgoing radiation for an increase of CO2 from 300 ppm to 600 ppm: the emission heights from the side lines of the spectrum in your model lie well within the troposphere, whereas measurements from the NIMBUS-4 satellite suggest that they lie in or close to the tropopause (see for instance Fig 9). Outgoing IR spectra are shown from the Sahara, the Mediterranean and the Antarctic. In the CO2 window most of the side line energy lies around the 220 K Planck line, corresponding to an emission height in or close to the tropopause. Would these observations from the satellite not lead to a lower estimate of the reduction in outgoing radiation? It is further remarkable that, while the altitude of the tropopause varies strongly with latitude, all three satellite measurements indicate an emission height close to these altitudes. How can this be explained? Thanks for your efforts furthering understanding.
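The layer-by-layer bookkeeping described in the post and in Clive's replies above – descend from the top of the atmosphere in 100 m slices and stop at the first slice that absorbs more of the radiation at a given wavelength than it transmits – can be sketched in a few lines of Python. The sketch below is only an illustration of the idea, not the author's actual code: the temperature profile, scale height, surface number density and the single absorption cross-section are round illustrative numbers (the cross-section is the 15 micron band-centre value quoted in one of the comments above), not HITRAN line data.

```python
import math

# Round illustrative numbers (not HITRAN data, not the author's model)
T_SURFACE = 288.0        # K
LAPSE_RATE = 6.5e-3      # K/m, environmental lapse rate
TROPOPAUSE = 11_000.0    # m; treated as isothermal above this height
SCALE_HEIGHT = 8_000.0   # m, barometric scale height
N_SURFACE = 2.5e25       # air molecules per m^3 near the surface
DZ = 100.0               # m, the 100 m slice used in the post

def temperature(z):
    """Lapse-rate troposphere with a crude isothermal layer above.
    (A real profile warms again in the stratosphere, which is what
    produces the 667 cm^-1 spike discussed in the comments.)"""
    return T_SURFACE - LAPSE_RATE * min(z, TROPOPAUSE)

def co2_number_density(z, ppm):
    """CO2 molecules per m^3 from the barometric law."""
    return N_SURFACE * ppm * 1e-6 * math.exp(-z / SCALE_HEIGHT)

def emission_height(sigma, ppm, z_top=70_000.0):
    """Descend from 'space' in 100 m slices and return the first height
    at which a slice absorbs more of a downward test flux than it
    transmits, i.e. its optical depth sigma * n * dz exceeds ln(2)."""
    z = z_top
    while z > 0.0:
        tau_slice = sigma * co2_number_density(z, ppm) * DZ
        if tau_slice > math.log(2.0):
            return z
        z -= DZ
    return 0.0   # the line is optically thin all the way to the ground

for ppm in (300, 600):
    z_e = emission_height(sigma=5e-22, ppm=ppm)   # m^2 per molecule, band centre
    print(f"{ppm} ppm: emission height ~ {z_e/1000:.1f} km, "
          f"local temperature ~ {temperature(z_e):.0f} K")
```

With a weaker line (smaller sigma) the same loop returns a lower, warmer emission height, which is the point made repeatedly in the thread: the power radiated to space at each wavelength is set by the temperature at the height where that wavelength finally escapes.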
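Several of the comments above lean on the same few pieces of arithmetic: the energy of a 15 micron photon, the mean thermal energy 3/2 kT at 288 K, and the Boltzmann population of the 667.4 cm-1 bending level of CO2. These are easy to reproduce from standard physical constants; the short sketch below does only that, and takes no position on the conclusions the commenters draw from the numbers.

```python
import math

h  = 6.626e-34    # Planck constant, J s
c  = 2.998e8      # speed of light, m/s
kB = 1.381e-23    # Boltzmann constant, J/K
eV = 1.602e-19    # J per electron volt
T  = 288.0        # K, surface temperature used in the comments

# Energy of a 15 micron photon (compare ~1.325e-20 J quoted above)
E_photon = h * c / 15e-6
print(f"15 um photon   : {E_photon:.4e} J = {E_photon / eV:.4f} eV")

# Mean translational thermal energy 3/2 kT and its equivalent wavelength
E_thermal = 1.5 * kB * T
print(f"3/2 kT at 288 K: {E_thermal / eV:.4f} eV, "
      f"equivalent wavelength {h * c / E_thermal * 1e6:.1f} um")

# Boltzmann factor for the 667.4 cm^-1 CO2 bending level (~3.6% quoted above)
E_level = h * c * 667.4e2            # 667.4 cm^-1 expressed in J
print(f"exp(-E/kT) for 667.4 cm^-1: {math.exp(-E_level / (kB * T)):.3f}")
```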
2019-11-20 04:44:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6884317994117737, "perplexity": 1180.4121046544922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670448.67/warc/CC-MAIN-20191120033221-20191120061221-00443.warc.gz"}
http://cms.math.ca/cmb/kw/nilpotent
location:  Publications → journals Search results Search: All articles in the CMB digital archive with keyword nilpotent Expand all        Collapse all Results 1 - 8 of 8 1. CMB 2014 (vol 57 pp. 884) Xu, Yong; Zhang, Xinjian $m$-embedded Subgroups and $p$-nilpotency of Finite Groups Let $A$ be a subgroup of a finite group $G$ and $\Sigma : G_0\leq G_1\leq\cdots \leq G_n$ some subgroup series of $G$. Suppose that for each pair $(K,H)$ such that $K$ is a maximal subgroup of $H$ and $G_{i-1}\leq K \lt H\leq G_i$, for some $i$, either $A\cap H = A\cap K$ or $AH = AK$. Then $A$ is said to be $\Sigma$-embedded in $G$; $A$ is said to be $m$-embedded in $G$ if $G$ has a subnormal subgroup $T$ and a $\{1\leq G\}$-embedded subgroup $C$ in $G$ such that $G = AT$ and $T\cap A\leq C\leq A$. In this article, some sufficient conditions for a finite group $G$ to be $p$-nilpotent are given whenever all subgroups with order $p^{k}$ of a Sylow $p$-subgroup of $G$ are $m$-embedded for a given positive integer $k$. Keywords:finite group, $p$-nilpotent group, $m$-embedded subgroupCategories:20D10, 20D15 2. CMB 2013 (vol 57 pp. 125) Mlaiki, Nabil M. Camina Triples In this paper, we study Camina triples. Camina triples are a generalization of Camina pairs. Camina pairs were first introduced in 1978 by A .R. Camina. Camina's work was inspired by the study of Frobenius groups. We show that if $(G,N,M)$ is a Camina triple, then either $G/N$ is a $p$-group, or $M$ is abelian, or $M$ has a non-trivial nilpotent or Frobenius quotient. Keywords:Camina triples, Camina pairs, nilpotent groups, vanishing off subgroup, irreducible characters, solvable groupsCategory:20D15 3. CMB 2012 (vol 56 pp. 606) Mazorchuk, Volodymyr; Zhao, Kaiming Characterization of Simple Highest Weight Modules We prove that for simple complex finite dimensional Lie algebras, affine Kac-Moody Lie algebras, the Virasoro algebra and the Heisenberg-Virasoro algebra, simple highest weight modules are characterized by the property that all positive root elements act on these modules locally nilpotently. We also show that this is not the case for higher rank Virasoro and for Heisenberg algebras. Keywords:Lie algebra, highest weight module, triangular decomposition, locally nilpotent actionCategories:17B20, 17B65, 17B66, 17B68 4. CMB 2011 (vol 55 pp. 579) Ndogmo, J. C. Casimir Operators and Nilpotent Radicals It is shown that a Lie algebra having a nilpotent radical has a fundamental set of invariants consisting of Casimir operators. A different proof is given in the well known special case of an abelian radical. A result relating the number of invariants to the dimension of the Cartan subalgebra is also established. Keywords:nilpotent radical, Casimir operators, algebraic Lie algebras, Cartan subalgebras, number of invariantsCategories:16W25, 17B45, 16S30 5. CMB 2009 (vol 52 pp. 535) Daigle, Daniel; Kaliman, Shulim A Note on Locally Nilpotent Derivations\\ and Variables of $k[X,Y,Z]$ We strengthen certain results concerning actions of $(\Comp,+)$ on $\Comp^{3}$ and embeddings of $\Comp^{2}$ in $\Comp^{3}$, and show that these results are in fact valid over any field of characteristic zero. Keywords:locally nilpotent derivations, group actions, polynomial automorphisms, variable, affine spaceCategories:14R10, 14R20, 14R25, 13N15 6. CMB 2004 (vol 47 pp. 
343) Drensky, Vesselin; Hammoudi, Lakhdar Combinatorics of Words and Semigroup Algebras Which Are Sums of Locally Nilpotent Subalgebras We construct new examples of non-nil algebras with any number of generators, which are direct sums of two locally nilpotent subalgebras. Like all previously known examples, our examples are contracted semigroup algebras and the underlying semigroups are unions of locally nilpotent subsemigroups. In our constructions we make more transparent than in the past the close relationship between the considered problem and combinatorics of words. Keywords:locally nilpotent rings,, nil rings, locally nilpotent semigroups,, semigroup algebras, monomial algebras, infinite wordsCategories:16N40, 16S15, 20M05, 20M25, 68R15 7. CMB 2001 (vol 44 pp. 266) Cencelj, M.; Dranishnikov, A. N. Extension of Maps to Nilpotent Spaces We show that every compactum has cohomological dimension $1$ with respect to a finitely generated nilpotent group $G$ whenever it has cohomological dimension $1$ with respect to the abelianization of $G$. This is applied to the extension theory to obtain a cohomological dimension theory condition for a finite-dimensional compactum $X$ for extendability of every map from a closed subset of $X$ into a nilpotent $\CW$-complex $M$ with finitely generated homotopy groups over all of $X$. Keywords:cohomological dimension, extension of maps, nilpotent group, nilpotent spaceCategories:55M10, 55S36, 54C20, 54F45 8. CMB 1999 (vol 42 pp. 335) Kim, Goansu; Tang, C. Y. Cyclic Subgroup Separability of HNN-Extensions with Cyclic Associated Subgroups We derive a necessary and sufficient condition for HNN-extensions of cyclic subgroup separable groups with cyclic associated subgroups to be cyclic subgroup separable. Applying this, we explicitly characterize the residual finiteness and the cyclic subgroup separability of HNN-extensions of abelian groups with cyclic associated subgroups. We also consider these residual properties of HNN-extensions of nilpotent groups with cyclic associated subgroups. Keywords:HNN-extension, nilpotent groups, cyclic subgroup separable $(\pi_c)$, residually finiteCategories:20E26, 20E06, 20F10
2015-03-29 07:22:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8222512602806091, "perplexity": 889.6090524050778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298387.35/warc/CC-MAIN-20150323172138-00184-ip-10-168-14-71.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/123219/diophantine-special-problem
# Diophantine special problem This is another of my questions on Diophantine equations. Prove the following great and special problem. Let $D$ and $k$ be positive integers and $p$ be a prime number such that $\gcd(D, kp) = 1$. Prove that there is an absolute constant $C$ such that the Diophantine equation $x^2 + D = kp^n$ has at most $C$ solutions $(x, n)$. Also prove that $x^2 + 119 = 15\cdot2^n$ has only six solutions. - A little more information, please. How do you know there is an absolute constant $C$ such that $x^2+D=kp^n$ has at most $C$ solutions? How do you know $x^2+119=15\cdot2^n$ has only six solutions? And what is great and special about these problems? – Gerry Myerson Mar 23 '12 at 5:41 These two problems were proposed as open by Yann Bugeaud in the 2007 paper “Some open problems about Diophantine equations”. This equation is also known as the generalized Ramanujan–Nagell equation; there has been a lot of recent work on this problem, it has been confirmed that the number of solutions is finite, and there are reasonable estimates of the constant $C$ for some particular cases.
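A quick numerical check of the last claim (not a proof, which is the hard part) is easy to run. Searching $n$ up to an arbitrary bound with integer square-root tests finds exactly six solutions of $x^2+119=15\cdot2^n$, namely $(x,n)=(1,3),(11,4),(19,5),(29,6),(61,8),(701,15)$:

```python
from math import isqrt

def solutions(D=119, k=15, p=2, n_max=200):
    """Brute-force search for positive integer pairs (x, n) with x^2 + D = k * p^n."""
    found = []
    for n in range(1, n_max + 1):
        rhs = k * p**n - D
        if rhs <= 0:
            continue
        x = isqrt(rhs)
        if x * x == rhs:
            found.append((x, n))
    return found

print(solutions())  # [(1, 3), (11, 4), (19, 5), (29, 6), (61, 8), (701, 15)]
```

Of course the search only shows there are no further solutions with $n \le 200$; bounding $n$ in general is exactly what the finiteness results mentioned above provide.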
2015-11-30 19:01:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7221562266349792, "perplexity": 92.66779384491069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398462987.25/warc/CC-MAIN-20151124205422-00203-ip-10-71-132-137.ec2.internal.warc.gz"}
https://iris-ued.readthedocs.io/en/5.2.0/whatsnew.html
# What’s new¶ ## 5.2.0 (development)¶ • Official support for Linux. • Plug-ins installed via the GUI can now be used right away. No restarts required. • Plug-ins can now have the display_name property which will be displayed in the GUI. This is optional and backwards-compatible. • Siwick Research Group-specific plugins were removed. They can be found here: https://github.com/Siwick-Research-Group/iris-ued-plugins • Switched to Azure Pipelines for continuous integration builds; • Added cursor information (position and image value) for processed data view; • Fixed an issue where very large relative differences in datasets would crash the GUI displays; • Fixed an issue where time-series fit would not display properly in fractional change mode; ## 5.1.3¶ • Added logging support for the GUI component. Logs can be reached via the help menu • Added an update check. You can see whether an update is available via the help menu, as well as via the status bar. • Added the ability to view time-series dynamics in absolute units AND relative change. • Pinned dependency to scikit-ued, to prevent upgrade to scikit-ued 2.0 unless appropriate. • Pinned dependency to npstreams, to prevent upgrade to npstreams 2.0 unless appropriate. ## 5.1.2¶ • Fixed an issue where the QDarkStyle internal imports were absolute. ## 5.1.1¶ • Fixed an issue where data reduction would freeze when using more than one CPU; • Removed the auto-update mechanism. Update checks will run in the background only; • Fixed an issue where the in-progress indicator would freeze; • Moved tests outside of source repository; • Updated GUI stylesheet to QDarkStyle 2.6.6; ## 5.1.0¶ • Added explicit support for Python 3.7; • Usability tweaks, for example more visible mask controls; • Added the ability to create standalone executables via PyInstaller; • Added the ability to create Windows installers; ## 5.0.5.1¶ • Due to new forced image orientation, objects on screens were not properly registered (e.g. diffraction center finder). ## 5.0.5¶ • Added the ability to fit exponentials to time-series; • Added region-of-interest text bounds for easier time-series exploration • Enforced PyQtGraph to use row-major image orientation • Datasets are now opened in read-only mode unless absolutely necessary. This should make it safer to handler multiple instances of iris at the same time. ## 5.0.4¶ • Better plug-in handling and command-line interface. ## 5.0.3¶ The major change in this version is the ability to guess raw dataset formats using the iris.open_raw function. This allows the possibility to start the GUI and open a dataset at the same time. ## 5.0.2¶ The package now only has dependencies that can be installed through conda ## 5.0.1¶ This is a minor bug-fix release that also includes user interface niceties (e.g. link to online documentation) and user experience niceties (e.g. confirmation message if you forget pixel masks). ## 5.0.0¶ This new version includes a completely rewritten library and GUI front-end. Earlier datasets will need to be re-processed. New features: • Faster performance thanks to better data layout in HDF5; • Plug-in architecture for various raw data formats; • Faster performance thanks to npstreams package; • Easier to extend GUI skeleton; • Online documentation accessible from the GUI; • Continuous integration.
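As a minimal usage sketch of the format-guessing entry point mentioned in the 5.0.3 notes: the function name is taken from the release note itself, but the path below is a placeholder and the exact call signature and return type may differ between releases, so treat this as an assumption and consult the iris-ued documentation for the authoritative API.

```python
import iris

# Let iris guess the raw dataset format from the files on disk.
# The path is a placeholder; the concrete return type depends on which
# installed plug-in recognises the data.
raw = iris.open_raw("path/to/raw/experiment")
print(type(raw))
```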
2020-04-09 05:03:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18257839977741241, "perplexity": 11765.06686216845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371829677.89/warc/CC-MAIN-20200409024535-20200409055035-00475.warc.gz"}
http://www.math.titech.ac.jp/~kawahira/seminar.html
2013 / 11 / 14 (Thu) 16:30 -- 18:00; Room: Sci. Bldg. A428
Speaker: Yi-Chiuan Chen (Academia Sinica, Taiwan)
Title: Topological Horseshoe in Travelling Waves of Discretized KdV-Burgers-KS Type Equations
Abstract: Applying the concept of anti-integrable limit to space-time discretized KdV-Burgers-KS type equations, we show that there exist topological horseshoes in the phase space formed by the initial states of travelling wave solutions of the resulting coupled map lattices. In particular, the coupled map lattices display spatio-temporal chaos on the horseshoes.

2012 / 12 / 19 (Wed) 15:00 -- 16:30; Room: Sci. 1 Bldg. 309
Speaker: Yi-Chiuan Chen (Academia Sinica, Taiwan)
Title: A Note on Holomorphic Shadowing for H\'enon Maps
Abstract: In studying the complex H\'enon maps, Mummert defined an operator the fixed points of which give rise to bounded orbits. This enabled him to obtain an estimate of the solenoid locus. Instead of the contraction mapping theorem, in the talk, I shall present an implicit function theorem version of his result, with some generalisation.

2012 / 08 / 09 (Thu) 15:00 -- 17:00; Room: Sci. Bldg. A, A440
Speaker: Carlos Cabrera (UNAM, Mexico)
Title: Poincare extension of rational maps (2).

2012 / 07 / 26 (Thu) 15:00 -- 17:00; Room: Sci. Bldg. A, A440
Speaker: Carlos Cabrera (UNAM, Mexico)
Title: On Poincare extensions of rational maps.
Abstract: In this talk we show the existence of geometric Poincar\'e extensions for an open and dense set of rational maps. Another of our main results is the existence of an extension that applies to the semigroup of Blaschke maps, which is a homomorphism of semigroups.

2012 / 07 / 19 (Thu) 15:00 -- 17:00; Room: Sci. Bldg. A, A440
Speaker: Carlos Cabrera (UNAM, Mexico)
Title: On dynamical Teichmuller spaces
Abstract: Following ideas from a preprint of Peter Makienko, we investigate relations of dynamical Teichmuller spaces with dynamical objects. We also establish some connections with the theory of deformations of inverse limits and laminations in holomorphic dynamics. This is joint work with Peter Makienko.

2012 / 07 / 12 (Thu) 15:00 -- 17:00; Room: Sci. Bldg. A, A440
Speaker: Carlos Cabrera (UNAM, Mexico)
Title: On the topology of the inverse limit of a branched covering over a Riemann surface (4)

2012 / 07 / 9 (Mon) 15:00 -- 16:30; Room: Sci. 1 Bldg. 109
Speaker: Davoud Cheraghi (University of Warwick, UK)
Title: Trajectories of complex quadratic polynomials with an irrationally indifferent fixed point
Abstract: The local, semi-local, and global dynamics of the complex quadratic polynomials $P_\alpha(z):= e^{2\pi i \alpha}z+z^2: \mathbb{C}\to \mathbb{C}$, for irrational values of $\alpha$, have been extensively studied through various methods. The main source of difficulty is the interplay between the tangential movement created by the fixed point and the radial movement caused by the critical point. This naturally brings the arithmetic nature of $\alpha$ into play. Using a renormalization technique developed by H. Inou and M. Shishikura, we analyze this interaction, and in particular, describe the topological behavior of the orbit of typical points under these maps.

2012 / 07 / 5 (Thu) 15:00 -- 17:00; Room: Sci. Bldg. A, A440
Speaker: Carlos Cabrera (UNAM, Mexico)
Title: On the topology of the inverse limit of a branched covering over a Riemann surface (3)

2012 / 06 / 22 (Fri) 10:00 -- 12:00; Room: Sci. Bldg. A, A440
Speaker: Carlos Cabrera (UNAM, Mexico)
Title: On the topology of the inverse limit of a branched covering over a Riemann surface (2)

2012 / 06 / 14 (Thu) 15:00 -- 17:00; Room: Sci. Bldg. A, A440
Speaker: Carlos Cabrera (UNAM, Mexico)
Title: On the topology of the inverse limit of a branched covering over a Riemann surface (1)
Abstract: We introduce the Plaque Topology on the inverse limit of a branched covering map over a Riemann surface to study its dynamical properties. We also consider a Boolean Algebra to compute local topological invariants. With these tools we obtain a description of the points in the inverse limit.
2020-08-06 21:50:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6344293355941772, "perplexity": 2563.5684725482283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737039.58/warc/CC-MAIN-20200806210649-20200807000649-00491.warc.gz"}
https://www.youtobia.com/blog/pages/document-worth-reading-and-8220does-putting-y-3692541555
AI News, Document worth reading: &#8220;Does putting your emotions into words make you feel better? Measuring the minute-scale dynamics of emotions from onlinedata&#8221; Document worth reading: &#8220;Does putting your emotions into words make you feel better? Measuring the minute-scale dynamics of emotions from onlinedata&#8221; &#8216;Quantum Equilibrium-Disequilibrium&#8217;: Asset Price Dynamics, Symmetry Breaking, and Defaults as Dissipative Instantons• Effective Caching for the Secure Content Distribution in Information-Centric Networking• Self-Adaptive Systems in Organic Computing: Strategies for Self-Improvement• A Hybrid Recommender System for Patient-Doctor Matchmaking in Primary Care• Evolutionary optimisation of neural network models for fish collective behaviours in mixed groups of robots and zebrafish• Random forest prediction of Alzheimer&#8217;s disease using pairwise selection from time series data• VerIDeep: Verifying Integrity of Deep Neural Networks through Sensitive-Sample Fingerprinting• Temporal starvation in multi-channel CSMA networks: an analytical framework• Who Falls for Online Political Manipulation?• The frog model on trees with drift• Code-Mixed Sentiment Analysis Using Machine Learning and Neural Network Approaches• $α$-Approximation Density-based Clustering of Multi-valued Objects• On-Chip Optical Convolutional Neural Networks• The Elephant in the Room• Longest Increasing Subsequence under Persistent Comparison Errors• The Effectiveness of Multitask Learning for Phenotyping with Electronic Health Records Data• DeepMag: Source Specific Motion Magnification Using Gradient Ascent• Transport coefficients in multi-component fluids from equilibrium molecular dynamics• On Physical Layer Security over Fox&#8217;s $H$-Function Wiretap Fading Channels• Deep Learning for Single Image Super-Resolution: A Brief Review• Uncovering the Spread of Chagas Disease in Argentina and Mexico• Efficient human-like semantic representations via the Information Bottleneck principle• Sequence-Based OOK for Orthogonal Multiplexing of Wake-up Radio Signals and OFDM Waveforms• Error Forward-Propagation: Reusing Feedforward Connections to Propagate Errors in Deep Learning• Collective irregular dynamics in balanced networks of leaky integrate-and-fire neurons• A Panel Quantile Approach to Attrition Bias in Big Data: Evidence from a Randomized Experiment• A note on partial rejection sampling for the hard disks model in the plane• Blue Phase: Optimal Network Traffic Control for Legacy and Autonomous Vehicles• A survey of data transfer and storage techniques in prevalent cryptocurrencies and suggested improvements• Code-division multiplexed resistive pulse sensor networks for spatio-temporal detection of particles in microfluidic devices• Classes of graphs with e-positive chromatic symmetric function• Distinctiveness, complexity, and repeatability of online signature templates• End-to-end Active Object Tracking and Its Real-world Deployment via Reinforcement Learning• A note on the critical barrier for the survival of $α-$stable branching random walk with absorption• On the Convergence of AdaGrad with Momentum for Training Deep Neural Networks• Linear time algorithm to check the singularity of block graphs• Efficient Measurement on Programmable SwitchesUsing Probabilistic Recirculation• DeepWrinkles: Accurate and Realistic Clothing Modeling• (De)localization of Fermions in Correlated-Spin Background• Learning and Inference on Generative Adversarial Quantum Circuits• WonDerM: Skin Lesion 
Classification with Fine-tuned Neural Networks• Power Minimization Based Joint Task Scheduling and Resource Allocation in Downlink C-RAN• Stochastic $R_0$ Tensors to Stochastic Tensor Complementarity Problems• Hybrid approach for transliteration of Algerian arabizi: a primary study• Spin systems on Bethe lattices• On the optimal designs for the prediction of complex Ornstein-Uhlenbeck processes• The Moment-SOS hierarchy• Stability for Intersecting Families of Perfect Matchings• Weakly-Supervised Attention and Relation Learning for Facial Action Unit Detection• The Evolution of Sex Chromosomes through the Baldwin Effect• Cross-location wind speed forecasting for wind energy applications using machine learning based models• Deep Learning Based Speed Estimation for Constraining Strapdown Inertial Navigation on Smartphones• Pulse-laser Based Long-range Non-line-of-sight Ultraviolet Communication with Pulse Response Position Estimation• Construction of cospectral graphs• On the Complexity of Solving Subtraction Games• The Power of Cut-Based Parameters for Computing Edge Disjoint Paths• Extremal process of the zero-average Gaussian Free Field for $d\ge 3$• Model Approximation Using Cascade of Tree Decompositions• ChipNet: Real-Time LiDAR Processing for Drivable Region Segmentation on an FPGA• Making effective use of healthcare data using data-to-text technology• Band selection with Higher Order Multivariate Cumulants for small target detection in hyperspectral images• Optimizing error of high-dimensional statistical queries under differential privacy• On testing for high-dimensional white noise• Evaluation of the Spatial Consistency Feature in the 3GPP GSCM Channel Model• Atmospheric turbulence mitigation for sequences with moving objects using recursive image fusion• Dynamic all scores matrices for LCS score• Ektelo: A Framework for Defining Differentially-Private Computations• Existence of symmetric maximal noncrossing collections of $k$-element sets• Finding a Small Number of Colourful Components• Choosing the optimal multi-point iterative method for the Colebrook flow friction equation &#8212; Deep learning Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. 
Deep learning architectures such as deep neural networks, deep belief networks and recurrent neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design and board game programs, where they have produced results comparable to and in some cases superior to human experts.&#91;4&#93;&#91;5&#93;&#91;6&#93; Deep learning models are vaguely inspired by information processing and communication patterns in biological nervous systems yet have various differences from the structural and functional properties of biological brains, which make them incompatible with neuroscience evidences.&#91;7&#93;&#91;8&#93;&#91;9&#93; Most modern deep learning models are based on an artificial neural network, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.&#91;11&#93; No universally agreed upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth &gt; For supervised learning tasks, deep learning methods obviate feature engineering, by translating the data into compact intermediate representations akin to principal components, and derive layered structures that remove redundancy in representation. The universal approximation theorem concerns the capacity of feedforward neural networks with a single hidden layer of finite size to approximate continuous functions.&#91;15&#93;&#91;16&#93;&#91;17&#93;&#91;18&#93;&#91;19&#93; By 1991 such systems were used for recognizing isolated 2-D hand-written digits, while recognizing 3-D objects was done by matching 2-D images with a handcrafted 3-D object model. But while Neocognitron required a human programmer to hand-merge features, Cresceptron learned an open number of features in each layer without supervision, where each feature is represented by a convolution kernel. In 1994, André de Carvalho, together with Mike Fairhurst and David Bisset, published experimental results of a multi-layer boolean neural network, also known as a weightless neural network, composed of a 3-layers self-organising feature extraction neural network module (SOFT) followed by a multi-layer classification neural network module (GSN), which were independently trained. In 1995, Brendan Frey demonstrated that it was possible to train (over two days) a network containing six fully connected layers and several hundred hidden units using the wake-sleep algorithm, co-developed with Peter Dayan and Hinton.&#91;39&#93; Simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) were a popular choice in the 1990s and 2000s, because of ANNs' computational cost and a lack of understanding of how the brain wires its biological networks. 
These methods never outperformed non-uniform internal-handcrafting Gaussian mixture model/Hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively.[45] The principle of elevating 'raw' features over hand-crafted optimization was first explored successfully in the architecture of deep autoencoder on the 'raw' spectrogram or linear filter-bank features in the late 1990s.[48] Many aspects of speech recognition were taken over by a deep learning method called long short-term memory (LSTM), a recurrent neural network published by Hochreiter and Schmidhuber in 1997.[50] In 2006, papers by Geoffrey Hinton and collaborators showed how a many-layered feedforward neural network could be effectively pre-trained one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then fine-tuning it using supervised backpropagation.[58] The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun.[67] Early interest in applying deep learning to speech recognition was motivated by the limitations of deep generative models of speech, and the possibility that, given more capable hardware and large-scale data sets, deep neural nets (DNNs) might become practical. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than the then state-of-the-art Gaussian mixture model (GMM)/Hidden Markov Model (HMM) and also than more advanced generative model-based systems,[59][70] offering technical insights into how to integrate deep learning into the existing, highly efficient run-time speech decoding systems deployed by all major speech recognition systems.[10][72][73] In 2010, researchers extended deep learning from TIMIT to large vocabulary speech recognition, by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees.[75][76][77][72] In 2009, Nvidia was involved in what was called the “big bang” of deep learning, “as deep-learning neural networks were trained with Nvidia graphics processing units (GPUs).”[78] In 2014, Hochreiter's group used deep learning to detect off-target and toxic effects of environmental chemicals in nutrients, household products and drugs, and won the 'Tox21 Data Challenge' of NIH, FDA and NCATS.[87][88][89] Although CNNs trained by backpropagation had been around for decades, and GPU implementations of NNs for years, including CNNs, fast implementations of CNNs with max-pooling on GPUs in the style of Ciresan and colleagues were needed to progress on computer vision.[80][81][34][90][2] In November 2012, Ciresan et al.'s system also won the ICPR contest on analysis of large medical images for cancer detection, and in the following year also the MICCAI Grand Challenge on the same topic.[92] In 2013 and 2014, the error rate on the ImageNet task using deep learning was further reduced, following a similar trend in large-scale speech recognition. For example, in image recognition, deep learning systems might learn to identify images that contain cats by analyzing example images that have been manually labeled as 'cat' or 'no cat' and using the analytic results to identify cats in other images.
Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information. Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis. Despite this number being several orders of magnitude smaller than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, playing 'Go'[99]). The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network.[11] The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved.[115][116] Long short-term memory networks can learn tasks that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products, etc.) are based on deep learning.[10][122][123][124] DNNs have proven themselves capable, for example, of a) identifying the style period of a given painting, b) 'capturing' the style of a given painting and applying it in a visually pleasing manner to an arbitrary photograph, and c) generating striking imagery based on random visual input fields.[128][129] Word embeddings, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset. Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points must be considered and assimilated before a target segment can be created and used in ad serving by any ad server.[161][162] 'Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events'. Deep learning is closely related to a class of theories of brain development (specifically, neocortical development) proposed by cognitive neuroscientists in the early 1990s.[165][166][167][168] These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave of nerve growth factor) support the self-organization somewhat analogous to the neural networks utilized in deep learning models. Like the neocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment), and then passes its output (and possibly the original input), to other layers. Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchical generative models and deep belief networks, may be closer to biological reality.[172][173] Such techniques lack ways of representing causal relationships (...)
have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. Systems like Watson (...) use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.'[187] As an alternative to this emphasis on the limits of deep learning, one author speculated that it might be possible to train a machine vision stack to perform the sophisticated task of discriminating between 'old master' and amateur figure drawings, and hypothesized that such a sensitivity might represent the rudiments of a non-trivial machine empathy.[188] In further reference to the idea that artistic sensitivity might inhere within relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20-30 layers) neural networks attempting to discern within essentially random data the images on which they were trained.[190] Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules, and is a basic goal of both human language acquisition[196] and artificial intelligence. Such a manipulation is termed an "adversarial attack." In 2016 researchers used one ANN to doctor images in trial and error fashion, identify another's focal points and thereby generate images that deceived it. Another group showed that certain psychedelic spectacles could fool a facial recognition system into thinking ordinary people were celebrities, potentially allowing one person to impersonate another. ANNs can however be further trained to detect attempts at deception, potentially leading attackers and defenders into an arms race similar to the kind that already defines the malware defense industry. ANNs have been trained to defeat ANN-based anti-malware software by repeatedly attacking a defense with malware that was continually altered by a genetic algorithm until it tricked the anti-malware while retaining its ability to damage the target.[198]

Deep learning

In the last chapter we learned that deep neural networks are often much harder to train than shallow neural networks. We'll also look at the broader picture, briefly reviewing recent progress on using deep nets for image recognition, speech recognition, and other applications. We'll work through a detailed example - code and all - of using convolutional nets to solve the problem of classifying handwritten digits from the MNIST data set: As we go we'll explore many powerful techniques: convolutions, pooling, the use of GPUs to do far more training than we did with our shallow networks, the algorithmic expansion of our training data (to reduce overfitting), the use of the dropout technique (also to reduce overfitting), the use of ensembles of networks, and others. We conclude our discussion of image recognition with a survey of some of the spectacular recent progress using networks (particularly convolutional nets) to do image recognition. We'll briefly survey other models of neural networks, such as recurrent neural nets and long short-term memory units, and how such models can be applied to problems in speech recognition, natural language processing, and other areas.
And we'll speculate about the future of neural networks and deep learning, ranging from ideas like intention-driven user interfaces, to the role of deep learning in artificial intelligence. For the $28 \times 28$ pixel images we've been using, this means our network has $784$ ($= 28 \times 28$) input neurons. Our earlier networks work pretty well: we've obtained a classification accuracy better than 98 percent, using training and test data from the MNIST handwritten digit data set. But the seminal paper establishing the modern subject of convolutional networks was a 1998 paper, 'Gradient-based learning applied to document recognition', by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. LeCun has since made an interesting remark on the terminology for convolutional nets: 'The [biological] neural inspiration in models like convolutional nets is very tenuous. That's why I call them 'convolutional nets' not 'convolutional neural nets', and why we call the nodes 'units' and not 'neurons' '. Despite this remark, convolutional nets use many of the same ideas as the neural networks we've studied up to now: ideas such as backpropagation, gradient descent, regularization, non-linear activation functions, and so on. In a convolutional net, it'll help to think instead of the inputs as a $28 \times 28$ square of neurons, whose values correspond to the $28 \times 28$ pixel intensities we're using as inputs: To be more precise, each neuron in the first hidden layer will be connected to a small region of the input neurons, say, for example, a $5 \times 5$ region, corresponding to $25$ input pixels. So, for a particular hidden neuron, we might have connections that look like this: That region in the input image is called the local receptive field for the hidden neuron. To illustrate this concretely, let's start with a local receptive field in the top-left corner: Then we slide the local receptive field over by one pixel to the right (i.e., by one neuron), to connect to a second hidden neuron: Note that if we have a $28 \times 28$ input image, and $5 \times 5$ local receptive fields, then there will be $24 \times 24$ neurons in the hidden layer. This is because we can only move the local receptive field $23$ neurons across (or $23$ neurons down), before colliding with the right-hand side (or bottom) of the input image. In this chapter we'll mostly stick with stride length $1$, but it's worth knowing that people sometimes experiment with different stride lengths* *As was done in earlier chapters, if we're interested in trying different stride lengths then we can use validation data to pick out the stride length which gives the best performance. The same approach may also be used to choose the size of the local receptive field - there is, of course, nothing special about using a $5 \times 5$ local receptive field. In general, larger local receptive fields tend to be helpful when the input images are significantly larger than the $28 \times 28$ pixel MNIST images.. In other words, for the $j, k$th hidden neuron, the output is: \begin{eqnarray} \sigma\left(b + \sum_{l=0}^4 \sum_{m=0}^4 w_{l,m} a_{j+l, k+m} \right). \tag{125}\end{eqnarray} Informally, think of the feature detected by a hidden neuron as the kind of input pattern that will cause the neuron to activate: it might be an edge in the image, for instance, or maybe some other type of shape. To see why this makes sense, suppose the weights and bias are such that the hidden neuron can pick out, say, a vertical edge in a particular local receptive field.
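To make the shared weights and local receptive fields concrete, here is a small illustrative sketch in Python/NumPy (it is not the book's code) that computes a single $24 \times 24$ feature map from a $28 \times 28$ image, using one $5 \times 5$ set of shared weights and a shared bias in the spirit of Equation (125); the array contents are random placeholders. A real implementation would vectorize the double loop, but the slow version makes the local receptive field explicit.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feature_map(image, w, b):
    # image: 28x28 pixel intensities; w: 5x5 shared weights; b: shared bias.
    out = np.zeros((24, 24))                     # 28 - 5 + 1 = 24 positions each way
    for j in range(24):
        for k in range(24):
            region = image[j:j+5, k:k+5]         # the local receptive field
            out[j, k] = sigmoid(b + np.sum(w * region))
    return out

image = np.random.rand(28, 28)                   # stand-in for an MNIST digit
w = np.random.randn(5, 5)
b = 0.0
print(feature_map(image, w, b).shape)            # (24, 24)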
To put it in slightly more abstract terms, convolutional networks are well adapted to the translation invariance of images: move a picture of a cat (say) a little ways, and it's still an image of a cat* *In fact, for the MNIST digit classification problem we've been studying, the images are centered and size-normalized. One of the early convolutional networks, LeNet-5, used $6$ feature maps, each associated to a $5 \times 5$ local receptive field, to recognize MNIST digits. Let's take a quick peek at some of the features which are learned* *The feature maps illustrated come from the final convolutional network we train, see here. Each map is represented as a $5 \times 5$ block image, corresponding to the $5 \times 5$ weights in the local receptive field. By comparison, suppose we had a fully connected first layer, with $784 = 28 \times 28$ input neurons, and a relatively modest $30$ hidden neurons, as we used in many of the examples earlier in the book. That, in turn, will result in faster training for the convolutional model, and, ultimately, will help us build deep networks using convolutional layers. Incidentally, the name convolutional comes from the fact that the operation in Equation (125), $\sigma\left(b + \sum_{l=0}^4 \sum_{m=0}^4 w_{l,m} a_{j+l, k+m} \right)$, is sometimes known as a convolution. A little more precisely, people sometimes write that equation as $a^1 = \sigma(b + w * a^0)$, where $a^1$ denotes the set of output activations from one feature map, $a^0$ is the set of input activations, and $*$ is called a convolution operation. In particular, I'm using 'feature map' to mean not the function computed by the convolutional layer, but rather the activation of the hidden neurons output from the layer. In max-pooling, a pooling unit simply outputs the maximum activation in the $2 \times 2$ input region, as illustrated in the following diagram: Note that since we have $24 \times 24$ neurons output from the convolutional layer, after pooling we have $12 \times 12$ neurons. So if there were three feature maps, the combined convolutional and max-pooling layers would look like: Here, instead of taking the maximum activation of a $2 \times 2$ region of neurons, we take the square root of the sum of the squares of the activations in the $2 \times 2$ region. It's similar to the architecture we were just looking at, but has the addition of a layer of $10$ output neurons, corresponding to the $10$ possible values for MNIST digits ('0', '1', '2', etc): Problem: Backpropagation in a convolutional network. The core equations of backpropagation in a network with fully-connected layers are (BP1), $\delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j)$, through (BP4), $\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j$. Suppose we have a network containing a convolutional layer, a max-pooling layer, and a fully-connected output layer, as in the network discussed above.
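As a companion to the pooling description above, the following illustrative NumPy sketch (again, not the book's code) condenses a $24 \times 24$ feature map into $12 \times 12$ pooled outputs, using either max-pooling or the L2 pooling variant just mentioned.

import numpy as np

def max_pool_2x2(activations):
    # activations: a 24x24 feature map; returns the 12x12 max-pooled layer.
    pooled = np.zeros((12, 12))
    for j in range(12):
        for k in range(12):
            pooled[j, k] = np.max(activations[2*j:2*j+2, 2*k:2*k+2])
    return pooled

def l2_pool_2x2(activations):
    # L2 pooling: square root of the sum of the squares in each 2x2 region.
    pooled = np.zeros((12, 12))
    for j in range(12):
        for k in range(12):
            block = activations[2*j:2*j+2, 2*k:2*k+2]
            pooled[j, k] = np.sqrt(np.sum(block**2))
    return pooled

print(max_pool_2x2(np.random.rand(24, 24)).shape)    # (12, 12)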
The program we'll use to do this is called network3.py, and it's an improved version of the programs network.py and network2.py developed in earlier chapters* *Note also that network3.py incorporates ideas from the Theano library's documentation on convolutional neural nets (notably the implementation of LeNet-5), from Misha Denil's implementation of dropout, and from Chris Olah.. But now that we understand those details, for network3.py we're going to use a machine learning library known as Theano* *See Theano: A CPU and GPU Math Expression Compiler in Python, by James Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio (2010). The examples which follow were run using Theano 0.6* *As I release this chapter, the current version of Theano has changed to version 0.7. Note that the code in the script simply duplicates and parallels the discussion in this section. Note also that throughout the section I've explicitly specified the number of training epochs. In practice, it's worth using early stopping, that is, tracking accuracy on the validation set, and stopping training when we are confident the validation accuracy has stopped improving. Using the validation data to decide when to evaluate the test accuracy helps avoid overfitting to the test data (see this earlier discussion of the use of validation data). Your results may vary slightly, since the network's weights and biases are randomly initialized* *In fact, in this experiment I actually did three separate runs training a network with this architecture. This $97.80$ percent accuracy is close to the $98.04$ percent accuracy obtained back in Chapter 3, using a similar network architecture and learning hyper-parameters. Second, while the final layer in the earlier network used sigmoid activations and the cross-entropy cost function, the current network uses a softmax final layer, and the log-likelihood cost function. I haven't made this switch for any particularly deep reason - mostly, I've done it because softmax plus log-likelihood cost is more common in modern image classification networks. In this architecture, we can think of the convolutional and pooling layers as learning about local spatial structure in the input training image, while the later, fully-connected layer learns at a more abstract level, integrating global information from across the entire image. filter_shape=(20, 1, 5, 5), poolsize=(2, 2)), validation_data, test_data) Can we improve on the $98.78$ percent classification accuracy? filter_shape=(20, 1, 5, 5), poolsize=(2, 2)), filter_shape=(40, 20, 5, 5), poolsize=(2, 2)), validation_data, test_data) In fact, you can think of the second convolutional-pooling layer as having as input $12 \times 12$ 'images', whose 'pixels' represent the presence (or absence) of particular localized features in the original input image. The output from the previous layer involves $20$ separate feature maps, and so there are $20 \times 12 \times 12$ inputs to the second convolutional-pooling layer. In fact, we'll allow each neuron in this layer to learn from all $20 \times 5 \times 5$ input neurons in its local receptive field. More informally: the feature detectors in the second convolutional-pooling layer have access to all the features from the previous layer, but only within their particular local receptive field* *This issue would have arisen in the first layer if the input images were in color.
In that case we'd have 3 input features for each pixel, corresponding to red, green and blue channels in the input image. So we'd allow the feature detectors to have access to all color information, but only within a given local receptive field.. Problem Using the tanh activation function Several times earlier in the book I've mentioned arguments that the tanh function may be a better activation function than the sigmoid function. Try training the network with tanh activations in the convolutional and fully-connected layers* *Note that you can pass activation_fn=tanh as a parameter to the ConvPoolLayer and FullyConnectedLayer classes.. Try plotting the per-epoch validation accuracies for both tanh- and sigmoid-based networks, all the way out to $60$ epochs. If your results are similar to mine, you'll find the tanh networks train a little faster, but the final accuracies are very similar. Can you get a similar training speed with the sigmoid, perhaps by changing the learning rate, or doing some rescaling* *You may perhaps find inspiration in recalling that $\sigma(z) = (1+\tanh(z/2))/2$.? Try a half-dozen iterations on the learning hyper-parameters or network architecture, searching for ways that tanh may be superior to the sigmoid. Personally, I did not find much advantage in switching to tanh, although I haven't experimented exhaustively, and perhaps you may find a way. In any case, in a moment we will find an advantage in switching to the rectified linear activation function, and so we won't go any deeper into the use of tanh. Using rectified linear units: The network we've developed at this point is actually a variant of one of the networks used in the seminal 1998 paper* *'Gradient-based learning applied to document recognition', by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner (1998). filter_shape=(20, 1, 5, 5), poolsize=(2, 2), activation_fn=ReLU), filter_shape=(40, 20, 5, 5), poolsize=(2, 2), activation_fn=ReLU), However, across all my experiments I found that networks based on rectified linear units consistently outperformed networks based on sigmoid activation functions. The reason for that recent adoption is empirical: a few people tried rectified linear units, often on the basis of hunches or heuristic arguments* *A common justification is that $\max(0, z)$ doesn't saturate in the limit of large $z$, unlike sigmoid neurons, and this helps rectified linear units continue learning. A simple way of expanding the training data is to displace each training image by a single pixel, either up one pixel, down one pixel, left one pixel, or right one pixel. filter_shape=(20, 1, 5, 5), poolsize=(2, 2), activation_fn=ReLU), filter_shape=(40, 20, 5, 5), poolsize=(2, 2), activation_fn=ReLU), Just to remind you of the flavour of some of the results in that earlier discussion: in 2003 Simard, Steinkraus and Platt* *Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis, by Patrice Simard, Dave Steinkraus, and John Platt (2003). improved their MNIST performance to $99.6$ percent using a neural network otherwise very similar to ours, using two convolutional-pooling layers, followed by a hidden fully-connected layer with $100$ neurons. There were a few differences of detail in their architecture - they didn't have the advantage of using rectified linear units, for instance - but the key to their improved performance was expanding the training data. 
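Here is a minimal sketch of the one-pixel displacement trick for expanding the training data, written for a $28 \times 28$ NumPy array; it is illustrative only, not the expansion script used for the book's experiments.

import numpy as np

def shifted_copies(img):
    # img: a 28x28 array; returns four copies shifted by one pixel up, down,
    # left and right, padding the vacated row or column with zeros.
    copies = []
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        shifted = np.zeros_like(img)
        src_rows = slice(max(0, -dy), 28 - max(0, dy))
        dst_rows = slice(max(0, dy), 28 - max(0, -dy))
        src_cols = slice(max(0, -dx), 28 - max(0, dx))
        dst_cols = slice(max(0, dx), 28 - max(0, -dx))
        shifted[dst_rows, dst_cols] = img[src_rows, src_cols]
        copies.append(shifted)
    return copies

img = np.random.rand(28, 28)                 # stand-in for a training image
expanded = [img] + shifted_copies(img)       # five training images where there was one
print(len(expanded))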
filter_shape=(20, 1, 5, 5), poolsize=(2, 2), activation_fn=ReLU), filter_shape=(40, 20, 5, 5), poolsize=(2, 2), activation_fn=ReLU), Using this, we obtain an accuracy of $99.60$ percent, which is a substantial improvement over our earlier results, especially our main benchmark, the network with $100$ hidden neurons, where we achieved $99.37$ percent. In fact, I tried experiments with both $300$ and $1,000$ hidden neurons, and obtained (very slightly) better validation performance with $1,000$ hidden neurons. Why we only applied dropout to the fully-connected layers: If you look carefully at the code above, you'll notice that we applied dropout only to the fully-connected section of the network, not to the convolutional layers. But apart from that, they used few other tricks, including no convolutional layers: it was a plain, vanilla network, of the kind that, with enough patience, could have been trained in the 1980s (if the MNIST data set had existed), given enough computing power. In particular, we saw that the gradient tends to be quite unstable: as we move from the output layer to earlier layers the gradient tends to either vanish (the vanishing gradient problem) or explode (the exploding gradient problem). In particular, in our final experiments we trained for $40$ epochs using a data set $5$ times larger than the raw MNIST training data. I've occasionally heard people adopt a deeper-than-thou attitude, holding that if you're not keeping-up-with-the-Joneses in terms of number of hidden layers, then you're not really doing deep learning. To speed that process up you may find it helpful to revisit Chapter 3's discussion of how to choose a neural network's hyper-parameters, and perhaps also to look at some of the further reading suggested in that section. Here's the code (discussion below)* *Note added November 2016: several readers have noted that in the line initializing self.w, I set scale=np.sqrt(1.0/n_out), when the arguments of Chapter 3 suggest a better initialization may be scale=np.sqrt(1.0/n_in). np.random.normal( loc=0.0, scale=np.sqrt(1.0/n_out), size=(n_in, n_out)), dtype=theano.config.floatX), dtype=theano.config.floatX), I use the name inpt rather than input because input is a built-in function in Python, and messing with built-ins tends to cause unpredictable behavior and difficult-to-diagnose bugs. So self.inpt_dropout and self.output_dropout are used during training, while self.inpt and self.output are used for all other purposes, e.g., evaluating accuracy on the validation and test data. prev_layer, layer = self.layers[j-1], self.layers[j] prev_layer.output, prev_layer.output_dropout, self.mini_batch_size) Now, this isn't a Theano tutorial, and so we won't get too deeply into what it means that these are symbolic variables* *The Theano documentation provides a good introduction to Theano.
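Before looking at the internals of the SGD method below, here is a rough usage-level sketch of how a network like the ones above is assembled and trained with network3.py. It assumes network3.py (and Theano) are importable and that the class names, the load_data_shared helper, and the SGD signature match the book's code; the hyper-parameter values are only illustrative.

# Sketch only: not a verbatim listing from the book.
import network3
from network3 import Network, ConvPoolLayer, FullyConnectedLayer, SoftmaxLayer

training_data, validation_data, test_data = network3.load_data_shared()
mini_batch_size = 10
net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5),
                  poolsize=(2, 2)),
    FullyConnectedLayer(n_in=20*12*12, n_out=100),
    SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(training_data, 60, mini_batch_size, 0.1,
        validation_data, test_data)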
0.5*lmbda*l2_norm_squared/num_training_batches self.x: training_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size], self.y: training_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size] self.x: validation_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size], self.y: validation_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size] self.x: test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size], self.y: test_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size] self.x: test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size] iteration = num_training_batches*epoch+minibatch_index if iteration % 1000 == 0: print("Training mini-batch number {0}".format(iteration)) cost_ij = train_mb(minibatch_index) if (iteration+1) % num_training_batches == 0: validation_accuracy = np.mean( [validate_mb_accuracy(j) for j in xrange(num_validation_batches)]) print("Epoch {0}: validation accuracy {1:.2%}".format( epoch, validation_accuracy)) if validation_accuracy >= best_validation_accuracy: print("This is the best validation accuracy to date.") best_validation_accuracy = validation_accuracy best_iteration = iteration if test_data: test_accuracy = np.mean( [test_mb_accuracy(j) for j in xrange(num_test_batches)]) print('The corresponding test accuracy is {0:.2%}'.format( test_accuracy)) 0.5*lmbda*l2_norm_squared/num_training_batches In these lines we symbolically set up the regularized log-likelihood cost function, compute the corresponding derivatives in the gradient function, as well as the corresponding parameter updates. With all these things defined, the stage is set to define the train_mb function, a Theano symbolic function which uses the updates to update the Network parameters, given a mini-batch index. The remainder of the SGD method is self-explanatory - we simply iterate over the epochs, repeatedly training the network on mini-batches of training data, and computing the validation and test accuracies.
prev_layer, layer = self.layers[j-1], self.layers[j] prev_layer.output, prev_layer.output_dropout, self.mini_batch_size) 0.5*lmbda*l2_norm_squared/num_training_batches self.x: training_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size], self.y: training_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size] self.x: validation_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size], self.y: validation_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size] self.x: test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size], self.y: test_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size] self.x: test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size] iteration = num_training_batches*epoch+minibatch_index if iteration % 1000 == 0: print("Training mini-batch number {0}".format(iteration)) cost_ij = train_mb(minibatch_index) if (iteration+1) % num_training_batches == 0: validation_accuracy = np.mean( [validate_mb_accuracy(j) for j in xrange(num_validation_batches)]) print("Epoch {0}: validation accuracy {1:.2%}".format( epoch, validation_accuracy)) if validation_accuracy >= best_validation_accuracy: print("This is the best validation accuracy to date.") best_validation_accuracy = validation_accuracy best_iteration = iteration if test_data: test_accuracy = np.mean( [test_mb_accuracy(j) for j in xrange(num_test_batches)]) print('The corresponding test accuracy is {0:.2%}'.format( test_accuracy)) activation_fn=sigmoid): filter_shape is a tuple of length 4, whose entries are the number of filters, the number of input feature maps, the filter height, and the filter width; poolsize is a tuple of length 2, whose entries are the y and x pooling sizes. np.random.normal(loc=0, scale=np.sqrt(1.0/n_out), size=filter_shape), dtype=theano.config.floatX), np.random.normal(loc=0, scale=1.0, size=(filter_shape[0],)), dtype=theano.config.floatX), pooled_out + self.b.dimshuffle('x', 0, 'x', 'x')) np.random.normal( loc=0.0, scale=np.sqrt(1.0/n_out), size=(n_in, n_out)), dtype=theano.config.floatX), dtype=theano.config.floatX), Earlier in the book we discussed an automated way of selecting the number of epochs to train for, known as early stopping. Hint: After working on this problem for a while, you may find it useful to see the discussion at this link. Earlier in the chapter I described a technique for expanding the training data by applying (small) rotations, skewing, and translation. Note: Unless you have a tremendous amount of memory, it is not practical to explicitly generate the entire expanded data set. Show that rescaling all the weights in the network by a constant factor $c > 0$ simply rescales the outputs by a factor $c^{L-1}$, where $L$ is the number of layers. Still, considering the problem will help you better understand networks containing rectified linear units. Note: The word good in the second part of this makes the problem a research problem. In 1998, the year MNIST was introduced, it took weeks to train a state-of-the-art workstation to achieve accuracies substantially worse than those we can achieve using a GPU and less than an hour of training. With that said, the past few years have seen extraordinary improvements using deep nets to attack extremely difficult image recognition tasks. They will identify the years 2011 to 2015 (and probably a few years beyond) as a time of huge breakthroughs, driven by deep convolutional nets.
The 2012 LRMD paper: Let me start with a 2012 paper* *Building high-level features using large scale unsupervised learning, by Quoc Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg Corrado, Jeff Dean, and Andrew Ng (2012). Note that the detailed architecture of the network used in the paper differed in many details from the deep convolutional networks we've been studying. Details about ImageNet are available in the original ImageNet paper, ImageNet: a large-scale hierarchical image database, by Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei (2009).: If you're looking for a challenge, I encourage you to visit ImageNet's list of hand tools, which distinguishes between beading planes, block planes, chamfer planes, and about a dozen other types of plane, amongst other categories. The 2012 KSH paper: The work of LRMD was followed by a 2012 paper of Krizhevsky, Sutskever and Hinton (KSH)* *ImageNet classification with deep convolutional neural networks, by Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. By this top-$5$ criterion, KSH's deep convolutional network achieved an accuracy of $84.7$ percent, vastly better than the next-best contest entry, which achieved an accuracy of $73.8$ percent. The input layer contains $3 \times 224 \times 224$ neurons, representing the RGB values for a $224 \times 224$ image. The feature maps are split into two groups of $48$ each, with the first $48$ feature maps residing on one GPU, and the second $48$ feature maps residing on the other GPU. Their respectives parameters are: (3) $384$ feature maps, with $3 \times 3$ local receptive fields, and $256$ input channels; A Theano-based implementation has also been developed* *Theano-based large-scale visual recognition with multiple GPUs, by Weiguang Ding, Ruoyan Wang, Fei Mao, and Graham Taylor (2014)., with the code available here. As in 2012, it involved a training set of $1.2$ million images, in $1,000$ categories, and the figure of merit was whether the top $5$ predictions included the correct category. The winning team, based primarily at Google* *Going deeper with convolutions, by Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich (2014)., used a deep convolutional network with $22$ layers of neurons. GoogLeNet achieved a top-5 accuracy of $93.33$ percent, a giant improvement over the 2013 winner (Clarifai, with $88.3$ percent), and the 2012 winner (KSH, with $84.7$ percent). In 2014 a team of researchers wrote a survey paper about the ILSVRC competition* *ImageNet large scale visual recognition challenge, by Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. ...the task of labeling images with 5 out of 1000 categories quickly turned out to be extremely challenging, even for some friends in the lab who have been working on ILSVRC and its classes for a while. In the end I realized that to get anywhere competitively close to GoogLeNet, it was most efficient if I sat down and went through the painfully long training process and the subsequent careful annotation process myself... Some images are easily recognized, while some images (such as those of fine-grained breeds of dogs, birds, or monkeys) can require multiple minutes of concentrated effort. 
In other words, an expert human, working painstakingly, was with great effort able to narrowly beat the deep neural network. In fact, Karpathy reports that a second human expert, trained on a smaller sample of images, was only able to attain a $12.0$ percent top-5 error rate, significantly below GoogLeNet's performance. One encouraging practical set of results comes from a team at Google, who applied deep convolutional networks to the problem of recognizing street numbers in Google's Street View imagery* *Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks, by Ian J. And they go on to make the broader claim: 'We believe with this model we have solved [optical character recognition] for short sequences [of characters] for many applications.' For instance, a 2013 paper* *Intriguing properties of neural networks, by Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus (2013) showed that deep networks may suffer from what are effectively blind spots. The existence of the adversarial negatives appears to be in contradiction with the network’s ability to achieve high generalization performance. The explanation is that the set of adversarial negatives is of extremely low probability, and thus is never (or rarely) observed in the test set, yet it is dense (much like the rational numbers), and so it is found near virtually every test case. For example, one recent paper* *Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, by Anh Nguyen, Jason Yosinski, and Jeff Clune (2014). shows that given a trained network it's possible to generate images which look to a human like white noise, but which the network classifies as being in a known category with a very high degree of confidence. If you read the neural networks literature, you'll run into many ideas we haven't discussed: recurrent neural networks, Boltzmann machines, generative models, transfer learning, reinforcement learning, and so on, on and on $\ldots$ and on! One way RNNs are currently being used is to connect neural networks more closely to traditional ways of thinking about algorithms, ways of thinking based on concepts such as Turing machines and (conventional) programming languages. A 2014 paper developed an RNN which could take as input a character-by-character description of a (very, very simple!) Python program, and use that description to predict the output. For example, an approach based on deep nets has achieved outstanding results on large vocabulary continuous speech recognition. And another system based on deep nets has been deployed in Google's Android operating system (for related technical work, see Vincent Vanhoucke's 2012-2015 papers). Many other ideas used in feedforward nets, ranging from regularization techniques to convolutions to the activation and cost functions used, are also useful in recurrent nets. Deep belief nets, generative models, and Boltzmann machines: Modern interest in deep learning began in 2006, with papers explaining how to train a type of neural network known as a deep belief network (DBN)* *See A fast learning algorithm for deep belief nets, by Geoffrey Hinton, Simon Osindero, and Yee-Whye Teh (2006), as well as the related work in Reducing the dimensionality of data with neural networks, by Geoffrey Hinton and Ruslan Salakhutdinov (2006).. 
A generative model like a DBN can be used in a similar way, but it's also possible to specify the values of some of the feature neurons and then 'run the network backward', generating values for the input activations. And the ability to do unsupervised learning is extremely interesting both for fundamental scientific reasons, and - if it can be made to work well enough - for practical applications. Active areas of research include using neural networks to do natural language processing (see also this informative review paper), machine translation, as well as perhaps more surprising applications such as music informatics. In many cases, having read this book you should be able to begin following recent work, although (of course) you'll need to fill in gaps in presumed background knowledge. It combines deep convolutional networks with a technique known as reinforcement learning in order to learn to play video games well (see also this followup). The idea is to use the convolutional network to simplify the pixel data from the game screen, turning it into a simpler set of features, which can be used to decide which action to take: 'go left', 'go down', 'fire', and so on. What is particularly interesting is that a single network learned to play seven different classic video games pretty well, outperforming human experts on three of the games. But looking past the surface gloss, consider that this system is taking raw pixel data - it doesn't even know the game rules! Google CEO Larry Page once described the perfect search engine as understanding exactly what [your queries] mean and giving you back exactly what you want. In this vision, instead of responding to users' literal queries, search will use machine learning to take vague user input, discern precisely what was meant, and take action on the basis of those insights. Over the next few decades, thousands of companies will build products which use machine learning to make user interfaces that can tolerate imprecision, while discerning and acting on the user's true intent. Inspired user interface design is hard, and I expect many companies will take powerful machine learning technology and use it to build insipid user interfaces. Machine learning, data science, and the virtuous circle of innovation: Of course, machine learning isn't just being used to build intention-driven interfaces. But I do want to mention one consequence of this fashion that is not so often remarked: over the long run it's possible the biggest breakthrough in machine learning won't be any single conceptual breakthrough. If a company can invest 1 dollar in machine learning research and get 1 dollar and 10 cents back reasonably rapidly, then a lot of money will end up in machine learning research. So, for example, Conway's law suggests that the design of a Boeing 747 aircraft will mirror the extended organizational structure of Boeing and its contractors at the time the 747 was designed. If the application's dashboard is supposed to be integrated with some machine learning algorithm, the person building the dashboard better be talking to the company's machine learning expert. I won't define 'deep ideas' precisely, but loosely I mean the kind of idea which is the basis for a rich field of enquiry. 
The backpropagation algorithm and the germ theory of disease are both good examples. Think of things like the germ theory of disease, for instance, or the understanding of how antibodies work, or the understanding that the heart, lungs, veins and arteries form a complete cardiovascular system. Instead of a monolith, we have fields within fields within fields, a complex, recursive, self-referential social structure, whose organization mirrors the connections between our deepest insights. Deep learning is the latest super-special weapon I've heard used in such arguments* *Interestingly, often not by leading experts in deep learning, who have been quite restrained. And there is paper after paper leveraging the same basic set of ideas: using stochastic gradient descent (or a close variation) to optimize a cost function.

Build your First Deep Learning Neural Network Model using Keras in Python

I have chosen neural networks as today's topic because they are the most fascinating learning model in the world of data science, and newcomers to data science often think that neural networks are difficult, that understanding them requires knowledge of neurons, perceptrons and so on. There is nothing like that; I have been working with neural networks for quite a few months now and have realized that they are easy. The fact of the matter is that Keras is built on top of TensorFlow and Theano, so one of these two powerful libraries will be running in the back-end whenever you run a program in Keras. Deep understanding of NN (you can skip this if you don't want to learn in depth). Now you can see that country names are replaced by 0, 1 and 2, while male and female are replaced by 0 and 1. Dummy variables are a tricky concept if you read about them in depth, but don't worry; I have found a simple resource which will help you understand them. In machine learning, we always divide our data into training and testing parts, meaning that we train our model on the training data and then check the accuracy of the model on the testing data. Here we are using the rectifier (relu) function in our hidden layer and the sigmoid function in our output layer, as we want a binary result from the output layer; but if the number of categories in the output layer is more than 2, then use the softmax function. The first argument is the optimizer; this is nothing but the algorithm you want to use to find the optimal set of weights (note that in step 9 we just initialized the weights; now we are applying an algorithm which will optimize the weights, in turn making our neural network more powerful). Since our dependent variable is binary, we will have to use the logarithmic loss function called 'binary_crossentropy'; if our dependent variable has more than 2 categories in the output, then use 'categorical_crossentropy'.

Using neural nets to recognize handwritten digits

Simple intuitions about how we recognize shapes - 'a 9 has a loop at the top, and a vertical stroke in the bottom right' - turn out to be not so simple to express algorithmically. As a prototype it hits a sweet spot: it's challenging - it's no small feat to recognize handwritten digits - but it's not so difficult as to require an extremely complicated solution, or tremendous computational power. But along the way we'll develop many key ideas about neural networks, including two important types of artificial neuron (the perceptron and the sigmoid neuron), and the standard learning algorithm for neural networks, known as stochastic gradient descent.
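Returning for a moment to the Keras walkthrough above, here is a minimal sketch of the kind of model it describes: a binary classifier with rectifier hidden layers, a sigmoid output, and the binary_crossentropy loss. The layer sizes, the 11-column input, and the random stand-in data are assumptions for illustration, not the post's actual dataset.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X_train = np.random.rand(800, 11)             # stand-in for the scaled feature matrix
y_train = np.random.randint(0, 2, size=800)   # stand-in for the binary labels

model = Sequential()
model.add(Dense(6, activation='relu', input_dim=11))   # hidden layer with rectifier units
model.add(Dense(6, activation='relu'))
model.add(Dense(1, activation='sigmoid'))              # sigmoid output for a binary target
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=10, epochs=5)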
Today, it's more common to use other models of artificial neurons - in this book, and in much modern work on neural networks, the main neuron model used is one called the sigmoid neuron. A perceptron takes several binary inputs, $x_1, x_2, \ldots$, and produces a single binary output: In the example shown the perceptron has three inputs, $x_1, x_2, x_3$. The neuron's output, $0$ or $1$, is determined by whether the weighted sum $\sum_j w_j x_j$ is less than or greater than some threshold value. To put it in more precise algebraic terms: \begin{eqnarray} \mbox{output} & = & \left\{ \begin{array}{ll} 0 & \mbox{if } \sum_j w_j x_j \leq \mbox{ threshold} \\ 1 & \mbox{if } \sum_j w_j x_j > \mbox{ threshold} \end{array} \right. \tag{1}\end{eqnarray} And it should seem plausible that a complex network of perceptrons could make quite subtle decisions: In this network, the first column of perceptrons - what we'll call the first layer of perceptrons - is making three very simple decisions, by weighing the input evidence. The first change is to write $\sum_j w_j x_j$ as a dot product, $w \cdot x \equiv \sum_j w_j x_j$, where $w$ and $x$ are vectors whose components are the weights and inputs, respectively. Using the bias instead of the threshold, the perceptron rule can be rewritten: \begin{eqnarray} \mbox{output} = \left\{ \begin{array}{ll} 0 & \mbox{if } w\cdot x + b \leq 0 \\ 1 & \mbox{if } w\cdot x + b > 0 \end{array} \right. \tag{2}\end{eqnarray} This requires computing the bitwise sum, $x_1 \oplus x_2$, as well as a carry bit which is set to $1$ when both $x_1$ and $x_2$ are $1$, i.e., the carry bit is just the bitwise product $x_1 x_2$: To get an equivalent network of perceptrons we replace all the NAND gates by perceptrons with two inputs, each with weight $-2$, and an overall bias of $3$. Note that I've moved the perceptron corresponding to the bottom right NAND gate a little, just to make it easier to draw the arrows on the diagram: One notable aspect of this network of perceptrons is that the output from the leftmost perceptron is used twice as input to the bottommost perceptron. (If you don't find this obvious, you should stop and prove to yourself that this is equivalent.) With that change, the network looks as follows, with all unmarked weights equal to -2, all biases equal to 3, and a single weight of -4, as marked: Up to now I've been drawing inputs like $x_1$ and $x_2$ as variables floating to the left of the network of perceptrons. In fact, it's conventional to draw an extra layer of perceptrons - the input layer - to encode the inputs: This notation for input perceptrons, in which we have an output, but no inputs, is a shorthand. Then the weighted sum $\sum_j w_j x_j$ would always be zero, and so the perceptron would output $1$ if $b > 0$, and $0$ if $b \leq 0$. Instead of explicitly laying out a circuit of NAND and other gates, our neural networks can simply learn to solve problems, sometimes problems where it would be extremely difficult to directly design a conventional circuit. If it were true that a small change in a weight (or bias) causes only a small change in output, then we could use this fact to modify the weights and biases to get our network to behave more in the manner we want. In fact, a small change in the weights or bias of any single perceptron in the network can sometimes cause the output of that perceptron to completely flip, say from $0$ to $1$. We'll depict sigmoid neurons in the same way we depicted perceptrons: Just like a perceptron, the sigmoid neuron has inputs, $x_1, x_2, \ldots$.
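Before continuing with sigmoid neurons, a quick check of the NAND construction described above: the few lines of Python below implement a perceptron with two inputs of weight $-2$ and a bias of $3$, and print its truth table.

def perceptron(x, w, b):
    # Fires (outputs 1) when the weighted sum plus bias is positive.
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, perceptron((x1, x2), (-2, -2), 3))
# Prints 1 for every input pair except (1, 1), which gives 0: the NAND truth table.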
Instead, it's $\sigma(w \cdot x+b)$, where $\sigma$ is called the sigmoid function* *Incidentally, $\sigma$ is sometimes called the logistic function, and this new class of neurons called logistic neurons., and is defined by: \begin{eqnarray} \sigma(z) \equiv \frac{1}{1+e^{-z}}. \tag{3}\end{eqnarray} To put it all a little more explicitly, the output of a sigmoid neuron with inputs $x_1,x_2,\ldots$, weights $w_1,w_2,\ldots$, and bias $b$ is \begin{eqnarray} \frac{1}{1+\exp(-\sum_j w_j x_j-b)}. \tag{4}\end{eqnarray} In fact, there are many similarities between perceptrons and sigmoid neurons, and the algebraic form of the sigmoid function turns out to be more of a technical detail than a true barrier to understanding. (Figure: a plot of the sigmoid function against $z$.) (Figure: a plot of the step function against $z$.) If $\sigma$ had in fact been a step function, then the sigmoid neuron would be a perceptron, since the output would be $1$ or $0$ depending on whether $w\cdot x+b$ was positive or negative* *Actually, when $w \cdot x +b = 0$ the perceptron outputs $0$, while the step function outputs $1$. The smoothness of $\sigma$ means that small changes $\Delta w_j$ in the weights and $\Delta b$ in the bias will produce a small change $\Delta \mbox{output}$ in the output from the neuron. In fact, calculus tells us that $\Delta \mbox{output}$ is well approximated by \begin{eqnarray} \Delta \mbox{output} \approx \sum_j \frac{\partial \, \mbox{output}}{\partial w_j} \Delta w_j + \frac{\partial \, \mbox{output}}{\partial b} \Delta b, \tag{5}\end{eqnarray} where the sum is over all the weights, $w_j$, and $\partial \, \mbox{output} / \partial w_j$ and $\partial \, \mbox{output} /\partial b$ denote partial derivatives of the $\mbox{output}$ with respect to $w_j$ and $b$, respectively.
While the expression above looks complicated, with all the partial derivatives, it's actually saying something very simple (and which is very good news): $\Delta \mbox{output}$ is a linear function of the changes $\Delta w_j$ and $\Delta b$ in the weights and bias. If it's the shape of $\sigma$ which really matters, and not its exact form, then why use the particular form used for $\sigma$ in Equation (3), $\sigma(z) \equiv \frac{1}{1+e^{-z}}$? In fact, later in the book we will occasionally consider neurons where the output is $f(w \cdot x + b)$ for some other activation function $f(\cdot)$. The main thing that changes when we use a different activation function is that the particular values for the partial derivatives in Equation (5), $\Delta \mbox{output} \approx \sum_j \frac{\partial \, \mbox{output}}{\partial w_j} \Delta w_j + \frac{\partial \, \mbox{output}}{\partial b} \Delta b$, change. It turns out that when we compute those partial derivatives later, using $\sigma$ will simplify the algebra, simply because exponentials have lovely properties when differentiated. But in practice we can set up a convention to deal with this, for example, by deciding to interpret any output of at least $0.5$ as indicating a '9', and any output less than $0.5$ as indicating 'not a 9'. Exercises Sigmoid neurons simulating perceptrons, part I $\mbox{}$ Suppose we take all the weights and biases in a network of perceptrons, and multiply them by a positive constant, $c > 0$. Show that the behaviour of the network doesn't change. Sigmoid neurons simulating perceptrons, part II $\mbox{}$ Suppose we have the same setup as the last problem - a network of perceptrons. Suppose the weights and biases are such that $w \cdot x + b \neq 0$ for the input $x$ to any particular perceptron in the network. Now replace all the perceptrons in the network by sigmoid neurons, and multiply the weights and biases by a positive constant $c > 0$. Suppose we have the network: As mentioned earlier, the leftmost layer in this network is called the input layer, and the neurons within the layer are called input neurons. The term 'hidden' perhaps sounds a little mysterious - the first time I heard the term I thought it must have some deep philosophical or mathematical significance - but it really means nothing more than 'not an input or an output'. For example, the following four-layer network has two hidden layers: Somewhat confusingly, and for historical reasons, such multiple layer networks are sometimes called multilayer perceptrons or MLPs, despite being made up of sigmoid neurons, not perceptrons. If the image is a $64$ by $64$ greyscale image, then we'd have $4,096 = 64 \times 64$ input neurons, with the intensities scaled appropriately between $0$ and $1$. The output layer will contain just a single neuron, with output values of less than $0.5$ indicating 'input image is not a 9', and values greater than $0.5$ indicating 'input image is a 9'. A trial segmentation gets a high score if the individual digit classifier is confident of its classification in all segments, and a low score if the classifier is having a lot of trouble in one or more segments.
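The linearity claim in Equation (5) above is easy to verify numerically. The short sketch below, with made-up weights, inputs and bias, compares the actual change in a sigmoid neuron's output after a small change in one weight against the estimate from the corresponding partial derivative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([0.6, -0.4])      # made-up weights
x = np.array([1.0, 0.5])       # made-up inputs
b = 0.1                        # made-up bias

z = np.dot(w, x) + b
output = sigmoid(z)

delta_w0 = 0.01                                        # small change in the first weight
d_output_d_w0 = sigmoid(z) * (1 - sigmoid(z)) * x[0]   # partial derivative of output w.r.t. w[0]

new_output = sigmoid(np.dot(w + np.array([delta_w0, 0.0]), x) + b)
print(new_output - output)           # actual change in the output
print(d_output_d_w0 * delta_w0)      # linear estimate in the spirit of Equation (5)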
So instead of worrying about segmentation we'll concentrate on developing a neural network which can solve the more interesting and difficult problem, namely, recognizing individual handwritten digits. As discussed in the next section, our training data for the network will consist of many $28$ by $28$ pixel images of scanned handwritten digits, and so the input layer contains $784 = 28 \times 28$ neurons. The input pixels are greyscale, with a value of $0.0$ representing white, a value of $1.0$ representing black, and in between values representing gradually darkening shades of grey. A seemingly natural way of doing that is to use just $4$ output neurons, treating each neuron as taking on a binary value, depending on whether the neuron's output is closer to $0$ or to $1$. The ultimate justification is empirical: we can try out both network designs, and it turns out that, for this particular problem, the network with $10$ output neurons learns to recognize digits better than the network with $4$ output neurons. In a similar way, let's suppose for the sake of argument that the second, third, and fourth neurons in the hidden layer detect whether or not the following images are present: Of course, that's not the only sort of evidence we can use to conclude that the image was a $0$ - we could legitimately get a $0$ in many other ways (say, through translations of the above images, or slight distortions). Assume that the first $3$ layers of neurons are such that the correct output in the third layer (i.e., the old output layer) has activation at least $0.99$, and incorrect outputs have activation less than $0.01$. We'll use the MNIST data set, which contains tens of thousands of scanned images of handwritten digits, together with their correct classifications. To make this a good test of performance, the test data was taken from a different set of 250 people than the original training data (albeit still a group split between Census Bureau employees and high school students). For example, if a particular training image, $x$, depicts a $6$, then $y(x) = (0, 0, 0, 0, 0, 0, 1, 0, 0, 0)^T$ is the desired output from the network. We use the term cost function throughout this book, but you should note the other terminology, since it's often used in research papers and other discussions of neural networks. \begin{eqnarray} C(w,b) \equiv \frac{1}{2n} \sum_x \| y(x) - a\|^2. \tag{6}\end{eqnarray} Here, $w$ denotes the collection of all weights in the network, $b$ all the biases, $n$ is the total number of training inputs, $a$ is the vector of outputs from the network when $x$ is input, and the sum is over all training inputs, $x$. If we instead use a smooth cost function like the quadratic cost it turns out to be easy to figure out how to make small changes in the weights and biases so as to get an improvement in the cost. Even given that we want to use a smooth cost function, you may still wonder why we choose the quadratic function used in Equation (6), $C(w,b) \equiv \frac{1}{2n} \sum_x \| y(x) - a\|^2$? This is a well-posed problem, but it's got a lot of distracting structure as currently posed - the interpretation of $w$ and $b$ as weights and biases, the $\sigma$ function lurking in the background, the choice of network architecture, MNIST, and so on. And for neural networks we'll often want far more variables - the biggest neural networks have cost functions which depend on billions of weights and biases in an extremely complicated way.
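For concreteness, here is a short sketch of the quadratic cost of Equation (6) applied to a single made-up training example; the vectors are placeholders rather than MNIST data.

import numpy as np

def quadratic_cost(desired, actual):
    # C(w,b) = (1/2n) * sum over training inputs of ||y(x) - a||^2
    n = len(desired)
    return sum(np.linalg.norm(y - a)**2 for y, a in zip(desired, actual)) / (2.0 * n)

y = [np.eye(10)[6]]             # desired output for an image depicting a 6
a = [np.full(10, 0.1)]          # a poor network output
print(quadratic_cost(y, a))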
We could do this simulation simply by computing derivatives (and perhaps some second derivatives) of $C$ - those derivatives would tell us everything we need to know about the local 'shape' of the valley, and therefore how our ball should roll. So rather than get into all the messy details of physics, let's simply ask ourselves: if we were declared God for a day, and could make up our own laws of physics, dictating to the ball how it should roll, what law or laws of motion could we pick that would make it so the ball always rolled to the bottom of the valley? To make this question more precise, let's think about what happens when we move the ball a small amount $\Delta v_1$ in the $v_1$ direction, and a small amount $\Delta v_2$ in the $v_2$ direction. Calculus tells us that $C$ changes as follows: \begin{eqnarray} \Delta C \approx \frac{\partial C}{\partial v_1} \Delta v_1 + \frac{\partial C}{\partial v_2} \Delta v_2. \tag{7}\end{eqnarray} To figure out how to make such a choice it helps to define $\Delta v$ to be the vector of changes in $v$, $\Delta v \equiv (\Delta v_1, \Delta v_2)^T$, where $T$ is again the transpose operation, turning row vectors into column vectors. We denote the gradient vector by $\nabla C$, i.e.: \begin{eqnarray} \nabla C \equiv \left( \frac{\partial C}{\partial v_1}, \frac{\partial C}{\partial v_2} \right)^T. \tag{8}\end{eqnarray} In fact, it's perfectly fine to think of $\nabla C$ as a single mathematical object - the vector defined above - which happens to be written using two symbols. With these definitions, the expression (7) for $\Delta C$ can be rewritten as \begin{eqnarray} \Delta C \approx \nabla C \cdot \Delta v. \tag{9}\end{eqnarray} This equation helps explain why $\nabla C$ is called the gradient vector: $\nabla C$ relates changes in $v$ to changes in $C$, just as we'd expect something called a gradient to do. In particular, suppose we choose \begin{eqnarray} \Delta v = -\eta \nabla C, \tag{10}\end{eqnarray} where $\eta$ is a small, positive parameter (known as the learning rate). Then Equation (9) tells us that $\Delta C \approx -\eta \nabla C \cdot \nabla C = -\eta \| \nabla C \|^2$. Because $\| \nabla C \|^2 \geq 0$, this guarantees that $\Delta C \leq 0$, i.e., $C$ will always decrease, never increase, if we change $v$ according to the prescription in (10). We'll use Equation (10) to compute a value for $\Delta v$, then move the ball's position $v$ by that amount: \begin{eqnarray} v \rightarrow v' = v -\eta \nabla C. \tag{11}\end{eqnarray}
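To see the update rule (11) in action, here is a tiny gradient descent run on a made-up two-variable "valley", $C(v_1, v_2) = v_1^2 + 3 v_2^2$. The function, starting point, and learning rate are chosen only for illustration.

import numpy as np

def grad_C(v):
    # gradient of C(v1, v2) = v1^2 + 3*v2^2, i.e. (dC/dv1, dC/dv2)
    v1, v2 = v
    return np.array([2.0 * v1, 6.0 * v2])

eta = 0.1                        # learning rate
v = np.array([2.0, -1.5])        # arbitrary starting position of the "ball"
for step in range(50):
    v = v - eta * grad_C(v)      # Equation (11): v -> v' = v - eta * grad C

print(v)                         # very close to the minimum at (0, 0)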
To make gradient descent work correctly, we need to choose the learning rate $\eta$ to be small enough that Equation (9), $\Delta C \approx \nabla C \cdot \Delta v$, is a good approximation. In practical implementations, $\eta$ is often varied so that Equation (9) remains a good approximation. Then the change $\Delta C$ in $C$ produced by a small change $\Delta v = (\Delta v_1, \ldots, \Delta v_m)^T$ is \begin{eqnarray} \Delta C \approx \nabla C \cdot \Delta v, \tag{12}\end{eqnarray} where the gradient $\nabla C$ is the vector \begin{eqnarray} \nabla C \equiv \left(\frac{\partial C}{\partial v_1}, \ldots, \frac{\partial C}{\partial v_m}\right)^T. \tag{13}\end{eqnarray} Just as for the two variable case, we can choose \begin{eqnarray} \Delta v = -\eta \nabla C, \tag{14}\end{eqnarray} and we're guaranteed that our (approximate) expression (12) for $\Delta C$ will be negative. This gives us a way of following the gradient to a minimum, even when $C$ is a function of many variables, by repeatedly applying the update rule \begin{eqnarray} v \rightarrow v' = v-\eta \nabla C. \tag{15}\end{eqnarray} The rule doesn't always work - several things can go wrong and prevent gradient descent from finding the global minimum of $C$, a point we'll return to explore in later chapters. But, in practice gradient descent often works extremely well, and in neural networks we'll find that it's a powerful way of minimizing the cost function, and so helping the net learn. It can be proved that the choice of $\Delta v$ which minimizes $\nabla C \cdot \Delta v$ is $\Delta v = - \eta \nabla C$, where $\eta = \epsilon / \|\nabla C\|$ is determined by the size constraint $\|\Delta v\| = \epsilon$. Hint: If you're not already familiar with the Cauchy-Schwarz inequality, you may find it helpful to familiarize yourself with it. If there are a million such $v_j$ variables then we'd need to compute something like a trillion (i.e., a million squared) second partial derivatives* *Actually, more like half a trillion, since $\partial^2 C/ \partial v_j \partial v_k = \partial^2 C/ \partial v_k \partial v_j$. The idea is to use gradient descent to find the weights $w_k$ and biases $b_l$ which minimize the cost in Equation (6). In other words, our 'position' now has components $w_k$ and $b_l$, and the gradient vector $\nabla C$ has corresponding components $\partial C / \partial w_k$ and $\partial C / \partial b_l$. Writing out the gradient descent update rule in terms of components, we have \begin{eqnarray} w_k & \rightarrow & w_k' = w_k-\eta \frac{\partial C}{\partial w_k} \tag{16}\\ b_l & \rightarrow & b_l' = b_l-\eta \frac{\partial C}{\partial b_l}. \tag{17}\end{eqnarray} In practice, to compute the gradient $\nabla C$ we need to compute the gradients $\nabla C_x$ separately for each training input, $x$, and then average them, $\nabla C = \frac{1}{n} \sum_x \nabla C_x$. To make these ideas more precise, stochastic gradient descent works by randomly picking out a small number $m$ of randomly chosen training inputs.
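The steepest-descent claim just above (the Cauchy-Schwarz exercise) is easy to check numerically before continuing. The gradient vector below is invented purely for the check: among random moves of length $\epsilon$, none produces a more negative value of $\nabla C \cdot \Delta v$ than $\Delta v = -\eta \nabla C$ with $\eta = \epsilon / \|\nabla C\|$.

import numpy as np

rng = np.random.default_rng(0)
grad = np.array([0.5, -1.0, 2.0, 0.2])      # stand-in gradient vector, invented for the check
eps = 0.01

best = -eps * np.linalg.norm(grad)          # value attained by the steepest-descent move
for _ in range(10000):
    dv = rng.standard_normal(4)
    dv *= eps / np.linalg.norm(dv)          # random move of length eps
    assert grad @ dv >= best - 1e-12        # never beats the Cauchy-Schwarz bound

print("steepest-descent decrease:", best)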
Provided the sample size $m$ is large enough we expect that the average value of the $\nabla C_{X_j}$ will be roughly equal to the average over all $\nabla C_x$, that is, \begin{eqnarray} \frac{\sum_{j=1}^m \nabla C_{X_{j}}}{m} \approx \frac{\sum_x \nabla C_x}{n} = \nabla C, \tag{18}\end{eqnarray} where the second sum is over the entire set of training data. Swapping sides we get \begin{eqnarray} \nabla C \approx \frac{1}{m} \sum_{j=1}^m \nabla C_{X_{j}}, \tag{19}\end{eqnarray} confirming that we can estimate the overall gradient by computing gradients just for the randomly chosen mini-batch. Then stochastic gradient descent works by picking out a randomly chosen mini-batch of training inputs, and training with those, \begin{eqnarray} w_k & \rightarrow & w_k' = w_k-\frac{\eta}{m} \sum_j \frac{\partial C_{X_j}}{\partial w_k} \tag{20}\\ b_l & \rightarrow & b_l' = b_l-\frac{\eta}{m} \sum_j \frac{\partial C_{X_j}}{\partial b_l}, \tag{21}\end{eqnarray} where the sums are over all the training examples $X_j$ in the current mini-batch. And, in a similar way, the mini-batch update rules (20) and (21) sometimes omit the $\frac{1}{m}$ term out the front of the sums. We can think of stochastic gradient descent as being like political polling: it's much easier to sample a small mini-batch than it is to apply gradient descent to the full batch, just as carrying out a poll is easier than running a full election. For example, if we have a training set of size $n = 60,000$, as in MNIST, and choose a mini-batch size of (say) $m = 10$, this means we'll get a factor of $6,000$ speedup in estimating the gradient! Of course, the estimate won't be perfect - there will be statistical fluctuations - but it doesn't need to be perfect: all we really care about is moving in a general direction that will help decrease $C$, and that means we don't need an exact computation of the gradient. In practice, stochastic gradient descent is a commonly used and powerful technique for learning in neural networks, and it's the basis for most of the learning techniques we'll develop in this book. That is, given a training input, $x$, we update our weights and biases according to the rules $w_k \rightarrow w_k' = w_k - \eta \partial C_x / \partial w_k$ and $b_l \rightarrow b_l' = b_l - \eta \partial C_x / \partial b_l$. Name one advantage and one disadvantage of online learning, compared to stochastic gradient descent with a mini-batch size of, say, $20$. In neural networks the cost $C$ is, of course, a function of many variables - all the weights and biases - and so in some sense defines a surface in a very high-dimensional space. I won't go into more detail here, but if you're interested then you may enjoy reading this discussion of some of the techniques professional mathematicians use to think in high dimensions. We'll leave the test images as is, but split the 60,000-image MNIST training set into two parts: a set of 50,000 images, which we'll use to train our neural network, and a separate 10,000 image validation set.
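As a quick sanity check of the estimate in Equations (18) and (19), here is a sketch in which the per-example "gradients" are stand-in random vectors with a common mean rather than gradients of a real network. It only illustrates how a mini-batch average of $m = 10$ examples tracks the full average over $n = 60{,}000$ at a tiny fraction of the cost; all numbers are invented.

import numpy as np

rng = np.random.default_rng(1)
n, dim, m = 60000, 5, 10
true_dir = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
grads = true_dir + 0.5 * rng.standard_normal((n, dim))   # stand-in per-example gradients grad C_x

full = grads.mean(axis=0)                                 # grad C = (1/n) * sum_x grad C_x
batch = grads[rng.choice(n, size=m, replace=False)].mean(axis=0)   # mini-batch estimate, Eq. (19)

print("full-batch average :", np.round(full, 3))
print("mini-batch estimate:", np.round(batch, 3))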
We won't use the validation data in this chapter, but later in the book we'll find it useful in figuring out how to set certain hyper-parameters of the neural network - things like the learning rate, and so on, which aren't directly selected by our learning algorithm. When I refer to the 'MNIST training data' from now on, I'll be referring to our 50,000 image data set, not the original 60,000 image data set* *As noted earlier, the MNIST data set is based on two data sets collected by NIST, the United States' National Institute of Standards and Technology. The weights are initialized with the line

self.weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]

So, for example, if we want to create a Network object with 2 neurons in the first layer, 3 neurons in the second layer, and 1 neuron in the final layer, we'd do this with the code:

net = Network([2, 3, 1])

The biases and weights in the Network object are all initialized randomly, using the Numpy np.random.randn function to generate Gaussian distributions with mean $0$ and standard deviation $1$. Note that the Network initialization code assumes that the first layer of neurons is an input layer, and omits to set any biases for those neurons, since biases are only ever used in computing the outputs from later layers. The big advantage of using this ordering is that it means that the vector of activations of the third layer of neurons is: \begin{eqnarray} a' = \sigma(w a + b). \tag{22}\end{eqnarray} (This is called vectorizing the function $\sigma$.) It's easy to verify that Equation (22) gives the same result as our earlier rule, Equation (4), $\frac{1}{1+\exp(-\sum_j w_j x_j-b)}$, for computing the output of a sigmoid neuron. Write out Equation (22) in component form, and verify that it gives the same result as the rule (4) for computing the output of a sigmoid neuron. We then add a feedforward method to the Network class, which, given an input a for the network, returns the corresponding output* *It is assumed that the input a is an (n, 1) Numpy ndarray, not a (n,) vector. Although using an (n,) vector appears the more natural choice, using an (n, 1) ndarray makes it particularly easy to modify the code to feedforward multiple inputs at once, and that is sometimes convenient. All the method does is apply Equation (22) for each layer. The training loop builds the mini-batches and reports progress with lines such as:

mini_batches = [training_data[k:k+mini_batch_size] for k in xrange(0, n, mini_batch_size)]
self.update_mini_batch(mini_batch, eta)
print "Epoch {0}: {1} / {2}".format(j, self.evaluate(test_data), n_test)
print "Epoch {0} complete".format(j)

This is done by the code self.update_mini_batch(mini_batch, eta), which updates the network weights and biases according to a single iteration of gradient descent, using just the training data in mini_batch.
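Pulling the quoted pieces together, here is a condensed sketch of such a Network class in NumPy (Python 3). It is modeled on the fragments and descriptions above rather than being the original listing verbatim: biases only for layers after the first, Gaussian initialisation with np.random.randn, and a feedforward that applies Equation (22), $a' = \sigma(wa+b)$, layer by layer.

import numpy as np

class Network:
    def __init__(self, sizes):
        self.sizes = sizes
        # no biases for the input layer; Gaussian initialisation elsewhere
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

    def feedforward(self, a):
        # `a` is an (n, 1) column vector, as in the footnote above
        for b, w in zip(self.biases, self.weights):
            a = 1.0 / (1.0 + np.exp(-(np.dot(w, a) + b)))   # a' = sigma(w a + b), Equation (22)
        return a

net = Network([2, 3, 1])
print(net.feedforward(np.array([[0.5], [0.8]])))   # a single number strictly between 0 and 1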
Inside update_mini_batch we have lines such as:

delta_nabla_b, delta_nabla_w = self.backprop(x, y)
nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
... for w, nw in zip(self.weights, nabla_w)]
... for b, nb in zip(self.biases, nabla_b)]

Most of the work is done by the line

delta_nabla_b, delta_nabla_w = self.backprop(x, y)

The self.backprop method makes use of a few extra functions to help in computing the gradient, namely sigmoid_prime, which computes the derivative of the $\sigma$ function, and self.cost_derivative, which I won't describe here. Fragments of the backprop and evaluation code include:

activations = [x] # list to store all the activations, layer by layer
zs = [] # list to store all the z vectors, layer by layer
# l = 1 means the last layer of neurons, l = 2 is the second-last layer, and so on
delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
... for (x, y) in test_data]

Finally, we'll use stochastic gradient descent to learn from the MNIST training_data over 30 epochs, with a mini-batch size of 10, and a learning rate of $\eta = 3.0$. As was the case earlier, if you're running the code as you read along, you should be warned that it takes quite a while to execute (on my machine this experiment takes tens of seconds for each training epoch), so it's wise to continue reading in parallel while the code executes. At least in this case, using more hidden neurons helps us get better results* *Reader feedback indicates quite some variation in results for this experiment, and some training runs give results quite a bit worse. Using the techniques introduced in chapter 3 will greatly reduce the variation in performance across different training runs for our networks. (If making a change improves things, try doing more!) If we do that several times over, we'll end up with a learning rate of something like $\eta = 1.0$ (and perhaps fine tune to $3.0$), which is close to our earlier experiments.

Exercise: Try creating a network with just two layers - an input and an output layer, no hidden layer - with 784 and 10 neurons, respectively.

The data structures used to store the MNIST data are described in the documentation strings - it's straightforward stuff, tuples and lists of Numpy ndarray objects (think of them as vectors if you're not familiar with ndarrays):

"""mnist_loader
~~~~~~~~~~~~
A library to load the MNIST image data.

In some sense, the moral of both our results and those in more sophisticated papers is that for some problems: sophisticated algorithm $\leq$ simple learning algorithm + good training data. We could attack this problem the same way we attacked handwriting recognition - by using the pixels in the image as input to a neural network, with the output from the network a single neuron indicating either 'Yes, it's a face' or 'No, it's not a face'. The end result is a network which breaks down a very complicated question - does this image show a face or not - into very simple questions answerable at the level of single pixels.
It does this through a series of many layers, with early layers answering very simple and specific questions about the input image, and later layers building up a hierarchy of ever more complex and abstract concepts. Comparing a deep network to a shallow network is a bit like comparing a programming language with the ability to make function calls to a stripped down language with no ability to make such calls.

Neural Network Toolbox
Neural Network Toolbox™ provides algorithms, pretrained models, and apps to create, train, visualize, and simulate both shallow and deep neural networks. Deep learning networks include convolutional neural networks (ConvNets, CNNs), directed acyclic graph (DAG) network topologies, and autoencoders for image classification, regression, and feature learning. For small training sets, you can quickly apply deep learning by performing transfer learning with pretrained deep network models (including Inception-v3, ResNet-50, ResNet-101, GoogLeNet, AlexNet, VGG-16, and VGG-19) and models imported from TensorFlow™ Keras or Caffe.

But what *is* a Neural Network? | Deep learning, chapter 1
Lecture 6 | Training Neural Networks I - In Lecture 6 we discuss many practical issues for training modern neural networks. We discuss different activation functions, the importance of data ...
Deep Learning with Neural Networks and TensorFlow Introduction - Welcome to a new section in our Machine Learning Tutorial series: Deep Learning with Neural Networks and TensorFlow. The artificial neural network is a ...
A friendly introduction to Deep Learning and Neural Networks - A friendly introduction to neural networks and deep learning. This is a follow up to the Introduction to Machine Learning video.
How Deep Neural Networks Work - A gentle introduction to the principles behind neural networks, including backpropagation. Rated G for general audiences.
Neural Network Model - Deep Learning with Neural Networks and TensorFlow - Welcome to part three of Deep Learning with Neural Networks and TensorFlow, and part 45 of the Machine Learning tutorial series. In this tutorial, we're going to ...
What is a Neural Network - Ep. 2 (Deep Learning SIMPLIFIED) - With plenty of machine learning tools currently available, why would you ever choose an artificial neural network over all the rest? This clip and the next could ...
How to Make a Neural Network - Intro to Deep Learning #2 - How do we learn? In this video, I'll discuss our brain's biological neural network, then we'll talk about how an artificial neural network works. We'll create our own ...
Fine-tuning a Neural Network explained - In this video, we explain the concept of fine-tuning an artificial neural network. Fine-tuning is also known as "transfer learning." We also point to another resource ...
How to Predict Stock Prices Easily - Intro to Deep Learning #7 - We're going to predict the closing price of the S&P 500 using a special type of recurrent neural network called an LSTM network. I'll explain why we use ...
2020-09-26 08:04:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6935325860977173, "perplexity": 1645.9780964592233}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400238038.76/warc/CC-MAIN-20200926071311-20200926101311-00506.warc.gz"}
http://andrebertel.blogspot.com/2014/02/the-projection-of-energy-tachikata-and.html
## Saturday, 15 February 2014

### The projection of energy: tachikata and unsoku

Sokumen jodan uchi-uke doji ni sokumen gedan-barai. Something I rediscovered recently, but on a more in-depth level, was 'directing power' when moving in the various tachikata (stances); that is, how the weight is projected in techniques. In particular, this relates to the width and length of stances in direct relation to techniques: for example, movements 38-41 of Jion. In all four of these movements, if the zenkutsu-dachi (front stance) is even slightly too wide, one's energy will partially go to the side (as opposed to being fully projected forward). This is easier to feel, and correct, in the two jun-zuki but can subtly go under the radar, and is more challenging, when turning with the two uchi-uke. Quite simply, this is because of the "sideward energy" applied in the uchi-uke (going from the inside outward) and, furthermore, the use of hanmi (the half-facing position) and zenmi/shomen (the front-on/squared position) respectively. While all of this is plain, and very easy to understand in text, it requires diligent practice. Why? Because one must physically/subconsciously understand, and maximise, how their stances and movements optimise the various techniques of karate-do (especially in correlation with unsoku/leg movements). This starts from the straight line (choku-zuki, mae-geri, etc.) and runs a full course to the full circle (kaiten-waza/tenshin); subsequently, the added impetus/possibilities/combinations of raising and lowering the body are added to the equation. Taken as a whole, as Nakayama Shuseki-Shihan stated, karate-do masters all of the possible bodily movements for potential offense and defence. Subsequently, effective application of technique can easily come from this baseline approach in training. Last but not least, knowing is not enough in Karate-Do. Only by having "…the ability to express knowledge within one's physical technique" is knowledge useful for the karateka. Osu, André.

A snow covered view of Aso-San from my apartment. The volcanic steam, rising out of the crater, is hidden by clouds. © André Bertel. Aso-shi, Kumamoto-ken, Japan (2014).
2017-10-20 14:17:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8195506930351257, "perplexity": 6382.510507884462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824225.41/warc/CC-MAIN-20171020135519-20171020155519-00520.warc.gz"}
https://robotics.stackexchange.com/questions/17922/rotating-two-things-on-the-same-axis/17924
# Rotating two things on the same axis

I'm working on a project where I want two separate shafts to rotate about the same axis. Think of it like a clock where the hour and minute hands move independently. Also like a clock, I want the motors and gears tucked away in back. With clocks, they use a kind of tube that slides over a shaft. Then the shaft and the tube can both be driven from the back and rotate independently. Is there a name for this tube part? If I wanted it to be low-friction, is there a name for a part that includes bearings?

• maybe coaxial shafts or tube shaft ... "name for a part that includes bearings" - I don't see how this would have a specific name – jsotola Jan 4 '19 at 1:49
2021-07-26 17:06:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29690754413604736, "perplexity": 643.0218500482523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00068.warc.gz"}
http://www.jiskha.com/math/geometry/?page=4
Sunday August 28, 2016 # Homework Help: Math: Geometry ## Recent Homework Questions About Geometry Geometry Find area of a triangle with side lengths 15 15 and 8 Sunday, April 10, 2016 by Anonymous Geometry A rectanglular swimming pool has a length of 25 feet and a width of 12 feet. You want to ass a gravel walkway that is 4 feet wide all the way around the pool. What is the area of the gravel walkway Sunday, April 10, 2016 by Saniya Geometry given the parallelogram of the perimeter RATU with vertices R(1,3) S(-2,-2) T(4,0) U(7,4):round to the nearest hundredth Rec1: Rec2: triangle 1: triangle 2: triangle 3: triangle 4: Sunday, April 10, 2016 by Anthony Determine whether you can construct many, one, or no triangle(s) with the given description. a) a triangle with angle measures of 50°, 70°, and 100° no b) a triangle with one angle measure of 60° and one 4-centimeter side no c) a scalene triangle with a 3-... Sunday, April 10, 2016 by cade Geometry The local swimming pool is rectangular and the length is 5 feet more than twice the width If the area of the pool is 1375 square feet then what are the length and width of the pool Saturday, April 9, 2016 by Anonymous Geometry A rectangular patio is 8 feet longer than it isi wide The area of the patio is 84 square feet. What are the dimensions of the patio Saturday, April 9, 2016 by Anonymous geometry find the arc length of AB to the nearest tenth. AB equals 45 degrees. radius is 5 in. Friday, April 8, 2016 by Abigail Geometry Quadrilateral MNPQ is the same shape but different size than quadrilateral MNPQ.tell whether one figure is a dilation of the other or not. Explain your reasoning. Thursday, April 7, 2016 by Khan Geometry mollys rectangular backyard has 300 ft of fencing. one side is 40 ft find the area of the backyard Thursday, April 7, 2016 by Anonymous Geometry Quadrilateral MNPQ is the same shape but different size than quadrilateral mnpq Thursday, April 7, 2016 by Khan Geometry Triangle RST has angles 38 and 75.Triangle rst has angles 67 and 38.the side are proportional. Thursday, April 7, 2016 by Khan Geometry Regular pentagons A and B are similar. The apothem of Pentagon A equals the radius of Pentagon B. Compare the areas. The area of Pentagon A is equal to 1.49 times the area of Pentagon B. The area of Pentagon B is equal to 1.49 times the area of Pentagon A. The area of Pentagon... Thursday, April 7, 2016 by SkatingDJ Geometry A square pyramid has a surface area of 175 square inches. The square base has side lengths of 5 inches. Find the slant height of the pyramid. Wednesday, April 6, 2016 by Helen Geometry A rectangular grass area ub a park measures 50 yards by 100 yards. The city wishes to put a uniform sidewalk around the grass area which would increase the area by 459 yd ^2. What is the width of sidewalk Is it 9.18 feet Wednesday, April 6, 2016 by Anonymous Geometry Henry has a patio next to his house that is 15ft by 10ft. He wants to put a uniform flower bed around three sides of the patio. The area of the flower bed is 100 ft^2. What is the width of the flower bed? Wednesday, April 6, 2016 by Anonymous Geometry Roy has a patio that covers 434 ft ^2. He wants to increase his patio 196 ft ^2 by adding 4 ft to both the length and width. The length of the original patio is 3 ft more than twice the width. What is the length of the new patio Wednesday, April 6, 2016 by Anonymous Geometry A rectangular grass area ub a park measures 50 yards by 100 yards. 
The city wishes to put a uniform sidewalk around the grass area which would increase the area by 459 yd ^2. What is the width of sidewalk Wednesday, April 6, 2016 by Anonymous Geometry Find the area of a hexagon with the indicated apothem: 6 sqr root 3 108 sqr root 3 in.^2 432 sqr root 3 in.^2 96 sqr root 3 in.^2 216 sqr root 3 in.^2 Last question, don't know. Please help? Thanks Wednesday, April 6, 2016 by SkatingDJ Geometry If the width of rectangle ABCD is 8 cm and the length of diagonal line AC is 14, find the length of rectangle ABCD. A2 + b2 = c2. 64+ b2 =196. square root of b2 = square root of 132. B =2square of 33. L = ? Wednesday, April 6, 2016 by Laura Geometry a jewlery box in the shape of a rectangle prism has a volume of 90in.3 and a height of 2 1/2 in, What are possible demensions for the length and width of base Wednesday, April 6, 2016 by Emily Geometry The area of a triangle is 96. If the base of a triangle is tripled and the height is reduced to one-third its original length, what is the new area? 3 1/3 9 The area would not change I'm stumped:/ Please help? Thanks Wednesday, April 6, 2016 by SkatingDJ Geometry The expression (n+3) represents the measure of an exterior angle of a regular 18 gon. What is the value of n in the expression Tuesday, April 5, 2016 by Anonymous Geometry I have to find the area of a triangle. The base is 22 The height is 11 square root 3... for this, it was 19.052... I rounded that to just 19. Was this okay??? I then get 418, divided by 2, and got 209 as the area. Was it accurate that I rounded that number? Tuesday, April 5, 2016 by SkatingDJ Geometry The expression (a-4) represents the measure of an interior angle of a regular 20-gon. WHat is the value of a in the expression Tuesday, April 5, 2016 by Anonymous Geometry A rectangular grass area ub a park measures 50 yards by 100 yards. The city wishes to put a uniform sidewalk around the grass area which would increase the area by 459 yd ^2. What is the width of sidewalk Tuesday, April 5, 2016 by Anonymous Geometry mr james has a swimming pool in his backyard that is 10 ft by 15 ft he wants to put a concrete sidewalk with uniform width around his pool so that the pool and sidewalk cover a combined area in his backyard of 336 ft ^2. What is the width of the sidewalk? Tuesday, April 5, 2016 by Anonymous Geometry A farmer is estimating the surface area of his barn to find how much paint he needs to buy. One part of the barn is triangular as shown. The base of the triangle is 22 meters long Both angles on either side connecting the base to each leg is 30 degrees. (It looks like an ... Tuesday, April 5, 2016 by SkatingDJ Geometry A piece of art is in the shape of an equilateral triangle with sides of 21 in. What is the area of the piece of art to the nearest tenth? 311.8 in.^2 381.9 in.^2 155.9 in.^2 191.0 in.^2 Please help? Tuesday, April 5, 2016 by SkatingDJ geometry Line AB is parallel to line CD. What is the sum of the measure of angle K and the measure of angle Y? Tuesday, April 5, 2016 by Brittany geometry If the measures of the acute angles of a right triangle are 3x + 4 degrees and 4x + 2 degrees, what are the measures of all three angles of the right triangle? Monday, April 4, 2016 by luci Geometry One advantage of the prismoidal formula is that you can use it to A. determine volumes of figures that aren't prismoids. B. estimate the volume of solids that are combinations of other solids. C. calculate precise volumes of all prismoids. D. calculate both volume and ... 
Monday, April 4, 2016 by marceline math- geometry A dilation with a scale factor of 4 maps figure A onto A'. By what scale factor would another dilation map figure A' back to figure A? is it 1/4? Monday, April 4, 2016 by renee geometry given the points A (-4 ,3) B (12 ,11) and C (4,7) .point c divides AB into what ratio? Sunday, April 3, 2016 by roseanne Geometry Ralph wants to put a fence around his rectangular garden. His garden measures 53 feet by 55 feet.The garden has a path around it that is 3 feet wide. How much fencing material does Ralph need to enclose the garden and path? Saturday, April 2, 2016 by Boberto Geometry Find the length of a rectangle if the area is (2x^2|15x|18) inches^2 and the width is (x=6) inches. Saturday, April 2, 2016 by Anonymous Geometry An equilateral triangle with altitude length of 7 square root 3 centimeters. Find the area and perimeter A=49 square root 3 divided by 2 cm ^2 P=42cm Saturday, April 2, 2016 by Anonymous Geometry Find the perimter and area of a 45-45-90 degree triangle with hypotenuse length of 10 centimeters. P=20 square root 2 cm? A=50 cm ^2 Saturday, April 2, 2016 by Anonymous Geometry A wheel 2ft wide rolls a distance of .015 miles in .005 hours. What is the wheels speed in rpm's? Thursday, March 31, 2016 by Alan Geometry A segment with endpoints (3, -2) and (4, 2) is dilated to the image segment with endpoints (9, -6) and (12, 6). What is the scale factor for the dilation? 3 2 6 **my choice 9 **i feel its this one too, im stuck Thursday, March 31, 2016 by Jessica Geometry The lateral area of a cone is 736 pi cm^2. The radius is 46 cm. Find the slant height to the nearest tenth. A. 16 cm B. 21.1 cm C. 13.4 cm D. 21.6 cm step-by-step Thursday, March 31, 2016 by CarpeDiem Geometry (x)(x+10)=1200 Thursday, March 31, 2016 by Annie geometry and trig I have a regular octagon with a radius of 9. I have split the 8 pie slices into right angles--leaving angles 90, 67.5, and 22.5. I looked for the sine of 22.5 opp/hyp and get .38. Then the oppositeside is 6.84. I can't get the right answer, what am I doing wrong? Thanks Wednesday, March 30, 2016 by lulu Geometry The area of a rhombus is 31 feet squared one diagonal is 14 ft long. Find the length of the other diagonal Wednesday, March 30, 2016 by Anonymous Geometry The height of a trapezoid is 2 inches. If one base is 14 inches long and the area is 49 inches squared find the length of the other base. I know the Area of a trapezoid is A=1/2(b1+b2)h but Idk how to apply it when working backwards Wednesday, March 30, 2016 by Anonymous geometry construct a triangle PQR in which pq=5.5cm angle p=75° and pr=5cm? Wednesday, March 30, 2016 by Anonymous geometry construct a triangle PQR in which pq=5.5cm angle p=75° and pr=5cm? Wednesday, March 30, 2016 by Anonymous Geometry I am getting ,4,11,2 marks I need 82 To pass what should I doo and my final paper is over and it Was not good for me iam feeling depressed Wednesday, March 30, 2016 by Aradhana Geometry (check) Mario’s company makes unusually shaped imitation gemstones. One gemstone had 12 faces and 10 vertices. How many edges did the gemstone have? A. 23 edges B. 22 edges C. 25 edges D. 20 edges**** Tuesday, March 29, 2016 by CarpeDiem geometry for triangle ABC find the measure of AB given measure of angle A=55 degrees, measure of angle B=44 degrees, and b = 68 Tuesday, March 29, 2016 by Buddy Geometry Find the surface area of a conical grain storage tank that has a height of 46 meters and a diameter of 16 meters. 
Round the answer to the nearest square meter. A. 1375 m^2 B. 3151 m^2 C. 2548 m^2 How would I go about doing this? Tuesday, March 29, 2016 by CarpeDiem Geometry The ratio of the width to the length of a rectangle is 4:5, If the area of the rectangle is 500 square centimeters, what is the length of the rectangle? Monday, March 28, 2016 by JCAINE Geometry Is it true for a reflection on a plane... To reflect across the: xy-plane (x,y,z)→(x,y,−z) xz-plane (x,y,z)→(x,−y,z) yz-plane (x,y,z)→(−x,y,z)? Is that true for all planes or just a few particular ones? Monday, March 28, 2016 by Aaden Caldwell Geometry A plane is located at C on the diagram. There are two towers located at A and B. The distance between the towers is 7,600 feet, and the angles of elevation are given. a. Find BC, the distance from Tower 2 to the plane, to the nearest foot. b. Find CD, the height of the plane ... Saturday, March 26, 2016 by SkatingDJ geometry What is the ratio of the area of sector AOB to the area of sector COD? 1/3,1/4,1/9,3/8 Friday, March 25, 2016 by jo geometry 2004-06-01-04-00_files/i0320000.jpg Find the volume of the cone. Round your answer to the nearest hundredth. Friday, March 25, 2016 by allison Geometry There is a flagpole in the school parking lot. Which of the following is true about the angle of depression from the top of the flagpole to the parking lot, and the angle of elevation from the parking lot to the top of the flagpole? Choose two of the following. They are ... Thursday, March 24, 2016 by SkatingDJ geometry a regular pentagon has an apothem of 3.2 and an area of 37.2 cm. What is the length of one side of the pentagon? Thursday, March 24, 2016 by Steve Geometry Find the value of x. Round to the nearest tenth of a unit. A right triangle. The hypotenuse is 580 yards. The opposite leg (not the base) is x. The exterior degree on the top of the triangle is 27. 263.3 yd 295.5 yd 516.8 yd 1,277.6 yd Thursday, March 24, 2016 by SkatingDJ Geometry You want to draw an enlargement of a design that is painted on a 4 in. by 5 in. You will be drawing this design on a piece of paper that is 8 1/2 in. by 11 in. What are the dimensions of the largest complete enlargement you can make? A. 1 3/5 in by 10 5/8 in B. 1 3/8 in by 3/5... Thursday, March 24, 2016 by newbie Geometry What is the area of a parallelogram with base length 12 m and height 9 m? A = bh so... 12 x 9 = 180 A = 180m Right? Thursday, March 24, 2016 by UpSaNdDoWnS Geometry A parallelogram has sides 15 cm and 18 cm. The height corresponding to a 15-cm base is 9 cm. What is the height corresponding to an 18-cm base? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ step-by-step explanation please and thank you ^_^ Thursday, March 24, 2016 by UpSaNdDoWnS Geometry We know from historical records that the Great Pyramid of Giza was originally about 147 m tall. Assuming it was built with its faces at 52 degree inclines, what was the original length of one side of its base? Round your answer to the nearest meter. 91 m 115 m 181 m 230 m I&#... Wednesday, March 23, 2016 by SkatingDJ Geometry Describe in words the translation of X represented by the translation rule T < -7, -8 > (X). A. 7 units to the right and 8 units up B. 8 units to the left and 7 units up C. 7 units to the right and 8 units down D. 7 units to the left and 8 units down***** If I am wrong, ... Wednesday, March 23, 2016 by Carpe_Diem geometry In triangle FGH, g = 8ft, h=13ft, and m=F= 72degrees. 
Find m<G round to nearest tenth a- 26.2 b- 33.9 c- 72.1 d- 32.5 my answer is b Wednesday, March 23, 2016 by Steve geometry The area of a triangle is 6 square feet. The base is 3 times the height. What is the base of the triangle? Tuesday, March 22, 2016 by laura Math (Geometry!) What angle is complimentary to 32° a.−32° b.148° c.58° d.32° im pretty sure the answer is going to be 32 degrees if it is complimentary agnle Monday, March 21, 2016 by Sarah Lott Math (Geometry!) What is the definition of complimentary angles? a. Two angles that add to be 180° b. Two angles that have a difference of 90° c. Two angles add to be 90° d. Two angles that have a difference of 180° I think the answer is B. Monday, March 21, 2016 by Sarah Lott Geometry Use the Law of Sines to find the missing angle of the triangle. Find m∠B given that c = 83, a = 44, and m∠A = 31. A. 76.3° B. 15.8° C. 72.7° D. 164.2° Need help on how I would go about doing this...step-by-step. Thank you! Monday, March 21, 2016 by Carpe_Diem geometry In a square of compost 4 ft by 4 ft. Ms. Lee had 1000 worms. How many worms can she have if her square of compost has a side lenght that is 8 times longer? Sunday, March 20, 2016 by Anonymous geometry Flower seeds will be planted at points that lie on a circle that has a diameter of 8 feet. The point where any seed is planted must be 2 feet away from the seefs on either side of it. What is the maximum number of seeds that can be planted. Sunday, March 20, 2016 by nik Geo Hi I need some major help on surface area and volume. Do you think you can provide some examples for me to solve out? I don't get any of the formulas or how to get the correct answers. I'm struggling in Geometry this year I'm in K12 (homeschooled) and I have a D in... Friday, March 18, 2016 by Aaden Caldwell Geometry #1: A ball has a radius of 1 foot. What is the surface area of the ball? A. 2 pi ft^2 B. 4 pi ft^2 C. 6 pi ft^2 D. 8 pi ft^2 #2: What is the approximate surface area of the sphere if the sphere has a measurement of 27 cm? A. 247,000 cm^2 B. 9,160 cm^2 C. 36,600 cm^2 D. 339 cm^2 Friday, March 18, 2016 by William M Geometry If a/4 = b/7, what is the value of a/b? Can somebody guide me through a step by step please? I know it's cross multiplication, I just want to make sure I know what I'm doing since I'm a little confused with this one. Friday, March 18, 2016 by SkatingDJ Geometry 5 questions What is the image of point P (-2, 3, 5) after a reflection about the xy-plane? a. P' (-2, -3, 5) b. P' (2, 3, 5) c. P' (2, -3, -5) d. P' (-2, 3, -5) #2: What is the image point R (4, -1, -3) under the translation T (x, y, z) → T' (x - 2, y + 1, z - 4... Friday, March 18, 2016 by Geometry Help! Geometry a flag pole is 25 ft high. your line of sight is 5 ft from the ground. the angle of elevation is 23 degrees, how far away from the base of the flag pole are you? Friday, March 18, 2016 by Chloe Geometry (check) A model is made of a car. The car is 3 meters long and the model is 3 centimeters long. What is the ratio of the length of the car to the length of the model? A. 3 : 3 B. 1 : 100 C. 1 : 3 D. 100 : 1*** Deal or No Deal? Friday, March 18, 2016 by Carpe_Diem geometry Use the Law of Cosines to solve the problem. A ship travels due west for 83 miles. It then travels in a northwest direction for 111 miles and ends up 165 miles from its original position. To the nearest tenth of a degree, how many degrees north of west did it turn when it ... 
Friday, March 18, 2016 by Steve Geometry Which of the following is equivalent to e+f/a-d = b/a a(e+f) = b(a-d) a+f/a-d = b/a ae+f = ab-d b(e+f) = a(a-d) I don't know how to do this:/ Some guidance would be great:) Thursday, March 17, 2016 by SkatingDJ Geometry A model is made of a car. The car is 9 feet long and the model is 6 inches long. What is the ratio of the length of the car to the length of the model? Thursday, March 17, 2016 by Brianna Geometry Find the value of x. There is two lines, they are not parallel. In between (connected) those two lines is a shape. A trapezoid by the looks of it. It is wide at the bottom and thin at the top. There are equations outside of the shape, so external angles. Upper left: (x + 16) ... Wednesday, March 16, 2016 by SkatingDJ Geometry There are three points at the rim of the circle: (-4,1) (-2,-3) (5,-2). Where is the center of the lake? What is it's diameter if each unit of a coordinate plane represents 3/5 of a mile? Wednesday, March 16, 2016 by Peluche 🐻 Geometry A construction crew wants to hoist a heavy beam so that it is standing up straight. They tie a rope to the beam, secure the base, and pull the rope through a pulley to raise one end of the beam from the ground. When the beam makes an angle of 40 degrees with the ground, the ... Wednesday, March 16, 2016 by NORA Geometry Write the equation of circle O centered at origin that passes through (9,-2) Circle B with center (0,-2) that passes through (-6,0) >For circle B, is the radius 6 in this case? So equation would be x^2+(x+2)^2=36, correct? If this is the case, how would I solve for circle O? Wednesday, March 16, 2016 by Peluche geometry In a right triangle, the angles are 90, 66, and 24. The length of the base is 11. What is the length of the hypotenuse? Wednesday, March 16, 2016 by Steve geometry Find the missing value to the nearest hundredth. tan ___ = 73 a- 81.61 b- 64.61 c- 60.61 d- 89.22 My answer is D Wednesday, March 16, 2016 by Steve GEOMETRY/ALGEBRA A balloon, with a final volume of 40 cm3, took 4 min to fill. At what rate was the balloon filled? Choose exactly two answers that are correct. A. B. 40 cm3 = 4 min • r C. 40 cm3 • r = 4 min D. r – 4 h = 14 in. Wednesday, March 16, 2016 by DEE Geometry Honors Problem: A triangular lot has two sides of length 100m and 48m as indicated in the figure below. The length of the perpendicular from a corner of the lot to the 48m side is 96m. A fence is to be erected perpendicular to the 48m side so that the area of the lot is equally ... Tuesday, March 15, 2016 by Ryan geometry each face of the cabinet 413S is in the shape of a rectangle. What is the volume of the Model 413S in cubic feet? dimensions are 24 by 72 by 18 Tuesday, March 15, 2016 by Ligia Geometry A fish tank with a rectangular base has a volume of 4,320 cubic inches the length and width of the tank are 15 inches and 12 inches .find the height in inches of the tank Tuesday, March 15, 2016 by Sam Geometry #4 Which could be the scale factor of the following similar figures? (1 point) The first triangle's sides are 9,9,12 The second triangles sides are 6,6,8 A.) 2/3 B.) 3/4 C.) 1 D.) 3/2 E.) 4/3 My answers: b & c _______________________________ #5 The rectangles in the figure... Tuesday, March 15, 2016 by Patty Geometry The sum of the angle measures of a polygon w/ "s" sides is 2880. Find "s" Please help with step-by-step explanation, so I can fully understand it. Thank you. 
Tuesday, March 15, 2016 by Shae Geometry Use less than, equal to, or greater than to complete this statement: The sum of the measures of the exterior angles of a regular 9-gon, one at each vertex, is ____ the sum of the measures of the exterior angles of a regular 6-gon, one at each vertex. A. cannot tell B. less ... Tuesday, March 15, 2016 by Shae Geometry The sum of the angle measures of a polygon w/ "s" sides is 2880. Find "s" Please help with step-by-step explanation, so I can fully understand it. Thank you. (^_^) \_♥_/ Tuesday, March 15, 2016 by Shae Geometry Which answer is not a theorem or postulate used to prove triangles similarity? SAS SSA ~~my choice SSS AA Tuesday, March 15, 2016 by Jessica Geometry Two model trains begin at the toy train station and travel continuously on two circular tracks of equal length. The first train completes a circuit in 20 seconds and the second train in 15 seconds. If both trains leave the station at the same time, after how many seconds will ... Monday, March 14, 2016 by Ashley Math (geometry) Earth's diameter at the equator is about 7926 miles. A jet flies 550 miles per hour. How long would it take the jet to fly halfway around the equator? Monday, March 14, 2016 by Skylar(7th grade) Geometry Please explain how I am supposed to get this I really don't understand, but I think that I know the answer. Thanks! 1. A conveyor belt carries supplies from the first floor to the second floor which is 12 feet higher. The belt makes a 60 degree angle with the ground. How ... Monday, March 14, 2016 by Blair Geometry Two model trains begin at the toy train station and travel continuously on two circular tracks of equal length. The first train completes a circuit in 20 seconds and the second train in 15 seconds. If both trains, leave the station at the same time, after how many seconds will... Monday, March 14, 2016 by Ashley Geometry Triangle ABC and <1 congruent <2 Prove: AD/AB = De/BC Monday, March 14, 2016 by Susanne geometry Points $F$, $E$, and $D$ are on the sides $\overline{AB}$, $\overline{AC}$, and $\overline{BC}$, respectively, of right $\triangle ABC$ such that $AFDE$ is a square. If $AB = 12$ and $AC = 8$, then what is $AF$? Sunday, March 13, 2016 by nikita
2016-08-28 07:18:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4342879056930542, "perplexity": 873.6948376103709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982935857.56/warc/CC-MAIN-20160823200855-00025-ip-10-153-172-175.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/66212/how-to-convert-nested-loops-into-code-taking-advantage-of-parallel-computing
# How to “convert” nested loops into code taking advantage of parallel computing?

Let's say that an algorithm uses several nested loops to accomplish its task, for instance:

• An algorithm treating voxels on several frames, so there would be four dimensions: t (for time), x, y and z.
• An algorithm evaluating a function that takes n entries, searching for which entries there is a certain result. For instance, if we search f(a, b, c, d, e) = 2001 where each variable is tested between 0 and 100.

From what I understand, several APIs make you model the way your kernel is executed by the GPU with a 1-2-3d grid, but many of them don't provide n-dimensional grids for execution. So if I had to treat a simple 2D image of 1024x2048 pixels, I could have simply made my kernel execute itself over a grid of that size instead of having to nest loops, but since most grids seem to be limited to 3 dimensions, what to do when I need to execute work for more than 3 dimensions (like in my second example)? Should I put loops directly in my kernels? Thank you.

EDIT: Some pseudocode. Let's say that I want to test for which inputs a given function has a certain value; I could have some brute force approach like that:

for a in 1...100 {
    for b in 1...100 {
        for c in 1...100 {
            for d in 1...100 {
                if f(a,b,c,d) == 1984 {
                    equationSolutions.append((a,b,c,d))
                }
            }
        }
    }
}

Now with grids, a certain kernel can be executed over the grid (sorry, I may not use the right term, there is a much better explanation about grids here), so if you want to treat each pixel of an image of 2048x512 pixels, instead of doing:

for x in 0...imageWidth {
    for y in 0...imageHeight {
        treatPixel(x,y)
    }
}

you will define a 2D grid of size 2048x512 and then your kernel won't contain any loop; it will just get its position on the grid as an argument:

func myPixelFunction(positionInGrid) {
    treatPixel(positionInGrid.x, positionInGrid.y)
}

My question is that, since most grids are limited to 3 dimensions, what to do if you want to execute code that requires more than 3 dimensions (like my code trying to solve a function)? From what I understand, in cases like the ones I showed, grids make more sense than loops on GPUs since grids make it possible to parallelise the work, making several threads work in parallel on the same function/kernel, but while it's really easy to simply use the position in the grid when you are treating data that is representable by the grid, I don't understand how to write code using grids when I need to work on "additional dimensions".

EDIT #2: This SE question is similar to mine.

• Can you help me understand your question? What does "kernel" mean? Do you just mean the algorithm/function f? What do you mean by a 1-2-3d grid or by "provide n-dimensional grids for execution"? What do you mean by "tread"? What does it mean to execute a kernel over a grid, and how is that different from a nested loop? Are you talking about the difference between executing these iterations in parallel vs sequentially? What kind of answer do you expect? Are you looking for an algorithm, or for code? – D.W. Nov 19 '16 at 1:29
• @D.W. Kernel is for function executed by GPU. Grid is the way some GPU APIs (such as Metal or CUDA) define how your kernel is gonna be executed (nice explanation here developer.apple.com/reference/metal/mtlcomputecommandencoder). Wrong typo, I meant "thread" not "tread". For what it means to execute kernel over a grid see the link. About the nested loops vs grid, I'll update my answer to try to make it clearer.
– Trevör Nov 19 '16 at 13:19
• @D.W. Oops, sorry again, that wasn't "thread" that was "treat". – Trevör Nov 19 '16 at 13:30
• @D.W. I updated my question. – Trevör Nov 19 '16 at 13:30

Let's say you want to iterate over a,b,c,d each ranging from 0..99. There are two ways to do that.

## Approach #1: fewer nested loops

Set up a 100x100 grid, which will handle a,b. Handle c,d by explicit nested loops:

treatPixel(a,b):
    for c in 0..99:
        for d in 0..99:
            doSomething(a,b,c,d)

## Approach #2: bigger grid

Set up a 10,000x10,000 grid. Then decode the x-coordinate to a,b and the y-coordinate to c,d, like this:

treatPixel(x,y):
    a := floor(x/100); b := x mod 100
    c := floor(y/100); d := y mod 100
    doSomething(a,b,c,d)

## Which to choose?

Either one should work. Assuming the number of iterations greatly exceeds the number of compute elements in the GPU, the difference between the two probably comes down to different memory locality effects. You can try implementing both of them, benchmarking, and seeing if one is faster. If there is little or no memory access and doSomething() is compute-bound, they'll probably both run at about the same speed.

• Thank you ! Somebody once told me that mod is pretty expensive, do you think I should be concerned about that in my case ? – Trevör Nov 19 '16 at 16:10
• @TrevörAnneDenise, I don't know. It probably depends on how much work doSomething() does. Note also that you can replace b := x mod 100 with b := x - 100*a, if that helps (a multiply might be faster than a remainder/mod, depending on the architecture). The best way to know for sure is probably to benchmark it. – D.W. Nov 19 '16 at 16:23
• @TrevörAnneDenise If the iteration count is a power of two, the modulo operation can be replaced by a bitwise & (assuming your architecture is base 2) – Mr Tsjolder Nov 19 '16 at 16:29
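A small sketch of the index-decoding idea from Approach #2, written in plain Python rather than a real GPU kernel: a single flat index is unpacked into the four loop variables a, b, c, d, each in 0..99. All names and the stand-in function f are illustrative only; on a GPU, each flat index would be handled by its own thread.

def decode(flat_index):
    # unpack one flat index into four loop variables a, b, c, d in 0..99
    rest, d = divmod(flat_index, 100)
    rest, c = divmod(rest, 100)
    a, b = divmod(rest, 100)
    return a, b, c, d

def f(a, b, c, d):
    # stand-in for the question's function, just so the sketch runs
    return a * b + c - d

def solve_kernel(flat_index, target=1984):
    # what each GPU thread would do for its own flat_index
    a, b, c, d = decode(flat_index)
    return (a, b, c, d) if f(a, b, c, d) == target else None

print(decode(72345678))        # -> (72, 34, 56, 78)
print(solve_kernel(72345678))  # f(72,34,56,78) = 2426, so this particular index reports None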
2020-04-05 10:43:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3716161847114563, "perplexity": 1142.448412074055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371576284.74/warc/CC-MAIN-20200405084121-20200405114121-00334.warc.gz"}
http://mymathforum.com/number-theory/347119-t-sequences-explicit-enumeration-epsilon_0-a-5.html
My Math Forum T Sequences - Explicit Enumeration of $\epsilon_0$

Number Theory Math Forum

October 9th, 2019, 01:23 AM #41 Senior Member Joined: Oct 2009 Posts: 884 Thanks: 340

Quote: This is how I interpret his post. You essentially recursively build up a function as follows: Take $\mathcal{A}_0 = \{1,2,3\}$. Assume that $\mathcal{A}_n$ is defined, then we define according to rule 1,2,3: 1) $x\in \mathcal{B}_n$ if and only if there is some $a\in \mathcal{A}_n$ such that $a, a-1, a-2\in\mathcal{A}_n$ and such that $x=a-3$. 2) $x\in \mathcal{C}_n$ if and only if there is some $a\in \mathcal{A}_n$ such that $x = a+1$ 3) $x\in \mathcal{D}_n$ if and only if there is some $a\in \mathcal{A}_n$ such that $a, a+1, a+2\in \mathcal{A}_n$ and such that $x=a+\omega$. Then we define $$\mathcal{A}_{n+1} = \mathcal{A}_n\cup \mathcal{B}_n \cup \mathcal{C}_n\cup \mathcal{D}_n$$ Example: $$\mathcal{A}_0 = \{1,2,3\}$$ We have $$\mathcal{B}_0 = \{0\},~\mathcal{C}_0 = \{4\},~\mathcal{D}_0 = \{\omega\}$$ Thus $\mathcal{A}_1 = \{0,1,2,3,4,\omega\}$. Next, once we take for an ordinal $x$, the level of $x$ to be defined as the least $n$ such that $x\in \mathcal{A}_n$. There are finitely many ordinals of level $n$. We then order the ordinals on their levels. For example: Of level 0: 1,2,3 Of level 1: 0, 4, $\omega$ Final sequence: 1,2,3, 0, 4, $\omega$, (here come the ordinals of level 2),... I think this is what he WANTS to do since what he wrote makes little sense. But he writes things very differently and I think he makes several mistakes in his post. Last edited by skipjack; October 10th, 2019 at 01:31 PM.

October 9th, 2019, 05:57 AM #42 Senior Member Joined: Jun 2014 From: USA Posts: 620 Thanks: 52

Yes Micrm@ss, that is the basic approach to $T$ sequences and I assume now Maschke understands too. For any ordinal $\alpha$, there are $\alpha^3$ triplets that can be made from the elements of $\alpha$ (if allowing triplets where two or more elements of the triplet can be the same ordinal). This gives us an exact number of triplets to choose from for each $\alpha$ if comprising a $T$ sequence that inserts one and only one ordinal $\alpha$ into the sequence per rule in a fashion where each rule is of the form: “if the triplet (a,b,c) can be formed from some initial segment of a $T$ sequence defined using all rules less than rule $\alpha$, where a,b,c < $\alpha$, then $\alpha$ gets inserted into the sequence.” How about that, still with me? Last edited by AplanisTophet; October 9th, 2019 at 06:12 AM.
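To make the construction quoted in post #41 concrete, here is a small Python sketch of rules 1-3 and the level ordering. Ordinals below $\omega^2$ are represented as pairs (q, r) standing for $\omega \cdot q + r$; this representation and all names are my own, chosen only to illustrate the recursion, and rule 1 is applied only where the three predecessors actually exist.

def successors(A):                       # rule 2: a + 1
    return {(q, r + 1) for (q, r) in A}

def rule1(A):                            # a - 3 when a, a-1, a-2 are all present
    return {(q, r - 3) for (q, r) in A
            if r >= 3 and (q, r - 1) in A and (q, r - 2) in A}

def rule3(A):                            # a + omega when a, a+1, a+2 are all present
    return {(q + 1, 0) for (q, r) in A
            if (q, r + 1) in A and (q, r + 2) in A}

def build_levels(steps):
    A = {(0, 1), (0, 2), (0, 3)}         # A_0 = {1, 2, 3}
    level = {x: 0 for x in A}
    for n in range(1, steps + 1):
        new = (rule1(A) | successors(A) | rule3(A)) - A
        for x in new:
            level[x] = n                 # least n with x in A_n
        A |= new
    return level

def pretty(x):
    q, r = x
    if q == 0:
        return str(r)
    head = "w" if q == 1 else f"w*{q}"
    return head if r == 0 else f"{head}+{r}"

levels = build_levels(3)
ordering = sorted(levels, key=lambda x: (levels[x], x))   # order first by level, then by ordinal
print(", ".join(pretty(x) for x in ordering))
# The printed sequence begins 1, 2, 3, 0, 4, w, ... as in the quoted post.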
I naturally assumed you're just trying to irritate me because I happily play along, which is fun and all, because continually saying something is incoherent after it's been explained to you numerous times in numerous different ways is funny. We're on the same page then? I'm saying that I think you're capable. I'm also saying that if you truly couldn't begin to make sense of my enumeration of $\omega^2$, then that's my mistake for thinking you would be a helpful and well-meaning mathematician who could assist me by pinpointing breaks in my notation so as to try and define the $T$ sequence model in a nice way that makes sense to everyone. Let me know if I should treat you like a not-so-good (or at least not-so-willing) mathematician that is incapable of helping another not-so-good mathematician like me.

Quote: Originally Posted by Micrm@ss I don't understand your T sequence. I mean, I know what you want to do...
Here Micrm@ss does what you never do, which is say simply, "yeah, I get what you want to do." Sure, my notation needed a little help or whatever, but if Micrm@ss wanted to, he/she could have helped me make sense of it because Micrm@ss is clearly capable. It's not clear to me that you are capable, or if you are capable, that you are not intentionally playing stupid to toy with me. Do you not see that? I don't want to read another novel from you about how incoherent I am because they will never help. You need to say why it's incoherent to you. Ok, all that said, do you really want to do this 150 words at a time? I have fun either way so I'm happy to.

October 9th, 2019, 01:02 PM #44 Senior Member Joined: Jun 2014 From: USA Posts: 620 Thanks: 52
Quote: Originally Posted by Micrm@ss This is how I interpret his post. You essentially recursively build up a function as follows: Take $\mathcal{A}_0 = \{1,2,3\}$. Assume that $\mathcal{A}_n$ is defined, then we define according to rules 1, 2, 3: 1) $x\in \mathcal{B}_n$ if and only if there is some $a\in \mathcal{A}_n$ such that $a, a-1, a-2\in\mathcal{A}_n$ and such that $x=a-3$. 2) $x\in \mathcal{C}_n$ if and only if there is some $a\in \mathcal{A}_n$ such that $x = a+1$. 3) $x\in \mathcal{D}_n$ if and only if there is some $a\in \mathcal{A}_n$ such that $a, a+1, a+2\in \mathcal{A}_n$ and such that $x=a+\omega$. Then we define $$\mathcal{A}_{n+1} = \mathcal{A}_n\cup \mathcal{B}_n \cup \mathcal{C}_n\cup \mathcal{D}_n$$ Example: $$\mathcal{A}_0 = \{1,2,3\}$$ We have $$\mathcal{B}_0 = \{0\},~\mathcal{C}_0 = \{4\},~\mathcal{D}_0 = \{\omega\}$$ Thus $\mathcal{A}_1 = \{0,1,2,3,4,\omega\}$. Next, for an ordinal $x$, we take the level of $x$ to be the least $n$ such that $x\in \mathcal{A}_n$. There are finitely many ordinals of level $n$. We then order the ordinals on their levels. For example: Of level 0: 1,2,3 Of level 1: 0, 4, $\omega$ Final sequence: 1,2,3, 0, 4, $\omega$, (here come the ordinals of level 2),... I think this is what he WANTS to do since what he wrote makes little sense. But he writes things very differently and I think he makes several mistakes in his post.
Since you’ve demonstrated your ability to understand what I’m saying, perhaps you can try and write rule 4 the same way you did rules 1-3? I won’t ask you to rewrite any of the rules for me other than these, but the whole reason I use the notation that I do is because your approach won’t work if you don’t have an $a \in A_n$ to base your notation on like you do for rules 1-3.
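To make the level construction concrete, here is one more step of the recursion, worked out as a sketch from the three rules quoted above (ordering the ordinals within a level in increasing order, as in the level-1 example). Starting from $\mathcal{A}_1 = \{0,1,2,3,4,\omega\}$: rule 1 gives $\mathcal{B}_1 = \{0,1\}$ (taking $a=3$ and $a=4$; nothing new), rule 2 gives $\mathcal{C}_1 = \{1,2,3,4,5,\omega+1\}$ (new: $5$ and $\omega+1$), and rule 3 gives $\mathcal{D}_1 = \{\omega\}$ (since $a+\omega=\omega$ for every finite $a$; nothing new). Hence $\mathcal{A}_2 = \{0,1,2,3,4,5,\omega,\omega+1\}$, the ordinals of level 2 are $5$ and $\omega+1$, and the sequence so far reads $1,2,3,0,4,\omega,5,\omega+1,\dots$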
PS - Feel free to put the ellipses back in there too, or leave them off, as it’s still clearly the same either way. Last edited by AplanisTophet; October 9th, 2019 at 01:21 PM.

October 9th, 2019, 06:28 PM #45 Senior Member Joined: Aug 2012 Posts: 2,412 Thanks: 755
Quote: Originally Posted by AplanisTophet I naturally assumed you're just trying to irritate me ...
The chip on your shoulder makes it unpleasant to interact with you. You are factually mistaken in your assumption.

October 9th, 2019, 07:10 PM #46 Senior Member Joined: Jun 2014 From: USA Posts: 620 Thanks: 52
Quote: Originally Posted by Maschke The chip on your shoulder makes it unpleasant to interact with you. You are factually mistaken in your assumption.
Ok, but think of it this way. Pretend I'm a kid skateboarding at a local park and you, a grumpy old man (remember, I said we're just pretending), walk up to me and say, "hey, punk, your skateboarding... it's all wrong. It's just not the way you're supposed to do it." I would rightfully look at you, laugh, and go about my merry way. Let's say you also happen to be a grumpy old pro skateboarder and could give me some pointers if I had correctly interpreted you and not gone on my merry way. Well, that would be nice, but there is no way I could have known. You would have had to say something different, like "try picking your back foot up more" (or whatever, I don't actually skateboard). Well then I might have listened to you, or at least tried to do what you were saying. Now let's say you were a really good pro skateboarder who actually went to the park to help kids and made it a point to do so. Well, then we wouldn't be having this conversation.

October 10th, 2019, 12:46 PM #47 Member Joined: Oct 2018 From: USA Posts: 99 Thanks: 72 Math Focus: Algebraic Geometry
Quote: Originally Posted by AplanisTophet ... walk up to me ...
But the problem is that's not what happened; it was more you coming up to a group of skateboarders and saying "Hey, look at this kickflip I can do", and then doing some strange kickflip-esque thing. You then get annoyed when the group of skateboarders provides criticism of your kickflip.

October 10th, 2019, 01:11 PM #48 Senior Member Joined: Oct 2009 Posts: 884 Thanks: 340
Quote: Originally Posted by AplanisTophet Since you’ve demonstrated your ability to understand what I’m saying, perhaps you can try and write rule 4 the same way you did rules 1-3? I won’t ask you to rewrite any of the rules for me other than these, but the whole reason I use the notation that I do is because your approach won’t work if you don’t have an $a \in A_n$ to base your notation on like you do for rules 1-3. PS - Feel free to put the ellipses back in there too, or leave them off, as it’s still clearly the same either way.
Sorry no. You got to make a huge effort to learn the proper notations and proper way of communicating in mathematics. I don't care what excuses you have for not learning this, but I have absolutely zero interest in exploring your ideas until you learn the proper way of doing things.

October 10th, 2019, 04:27 PM #49 Senior Member Joined: Jun 2014 From: USA Posts: 620 Thanks: 52
Quote: Originally Posted by Micrm@ss Sorry no. You got to make a huge effort to learn the proper notations and proper way of communicating in mathematics. I don't care what excuses you have for not learning this, but I have absolutely zero interest in exploring your ideas until you learn the proper way of doing things.
Um, learning is what I'm trying to do here.
I just want you to take a crack at rule 4 because I would have defined rules 1-3 the same way as you if the later rules didn't require a different approach. It's not like I haven't made an effort, and it's not like I don't appreciate learning how to properly communicate in mathematics (I'm certified when it comes to accounting, the language of business...); I'm just confused. That's why I'm here. I completely understand if you have better things to do. But no, you will not be telling me I'm full of excuses for not trying to do the very thing I am clearly here trying to do: learn the proper notation for writing rules $\geq$ 4. Last edited by AplanisTophet; October 10th, 2019 at 04:30 PM.

October 10th, 2019, 10:17 PM #50 Senior Member Joined: Oct 2009 Posts: 884 Thanks: 340
Quote: Originally Posted by AplanisTophet Um, learning is what I'm trying to do here. I just want you to take a crack at rule 4 because I would have defined rules 1-3 the same way as you if the later rules didn't require a different approach. It's not like I haven't made an effort, and it's not like I don't appreciate learning how to properly communicate in mathematics (I'm certified when it comes to accounting, the language of business...); I'm just confused. That's why I'm here. I completely understand if you have better things to do. But no, you will not be telling me I'm full of excuses for not trying to do the very thing I am clearly here trying to do: learn the proper notation for writing rules $\geq$ 4.
You can't learn by asking random questions on a forum. I mean, you can, and it is useful, but only if you also do some outside reading. So I can recommend you some books that you can go through in order to learn proper notation and argumentation. If you go through these books I am definitely willing to help you through it and to check your work. But if you only "learn" through forum posts then I don't think it's going to work out.
2019-10-21 23:37:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8094651103019714, "perplexity": 773.0039139707884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987795253.70/warc/CC-MAIN-20191021221245-20191022004745-00121.warc.gz"}
http://www-old.newton.ac.uk/programmes/FRB/Kim.html
# $\alpha$-Gauss curvature flows with flat sides

Presenter: Lami Kim (Hokkaido University)
Co-authors: Ki-Ahm Lee (Seoul National University), Eunjai Rhee (Duksung Women's University)

### Abstract

In this paper, we study the deformation of 2-dimensional convex surfaces in $\mathbb{R}^{3}$ whose speed at a point on the surface is proportional to the $\alpha$-power of the positive part of the Gauss curvature. First, for $\frac{1}{2}<\alpha \leq 1$, we show that there is a smooth solution if the initial data is smooth and strictly convex, and that there is a viscosity solution with a $C^{1,1}$-estimate before the collapsing time if the initial surface is only convex. Moreover, we show that there is a waiting time effect, meaning that the flat spot of the convex surface will persist for a while. We also show that the interface between the flat side and the strictly convex side of the surface remains smooth on $0 < t < T_0$ under certain necessary regularity and non-degeneracy conditions on the initial data, where $T_0$ is the vanishing time of the flat side.
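For readers unfamiliar with the flow, the evolution equation being described can be sketched as follows (this is the standard way the $\alpha$-Gauss curvature flow with flat sides is usually written, with the convention that $\nu$ is the outward unit normal; the notation is not taken from the paper itself):

$$\frac{\partial X}{\partial t}(p,t) = -K_{+}^{\alpha}(p,t)\,\nu(p,t), \qquad K_{+} := \max\{K,\,0\},$$

where $X(\cdot,t)$ parametrizes the surface at time $t$ and $K$ is its Gauss curvature. On a flat side $K_{+}=0$, so that part of the surface does not move initially, which is what produces the waiting time effect mentioned in the abstract.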
2017-01-17 02:49:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8138068318367004, "perplexity": 564.7136666826088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00407-ip-10-171-10-70.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/103403/why-cook-levin-thorems-proof-can-mean-sats-np-hardness
# Why Cook-Levin theorem's proof can mean SAT's NP-Hardness

I'm studying the Cook-Levin theorem, but there is a problem I've run into. The Cook-Levin theorem shows that the computation of any NPTM can be encoded as a Boolean formula. For a given language $$A$$, an instance $$w$$, and an NPTM $$M$$ that decides $$A$$, I understand that if a Boolean formula $$φ$$ is true exactly when $$M$$ accepts $$w$$ and false when $$M$$ rejects $$w$$, then the decision problem $$w∈A?$$ is the same problem as $$φ=true?$$.

But I am confused about whether the Cook-Levin theorem actually certifies the existence of a reduction from every other NP problem to SAT. I can't see how to build an actual Karp reduction to SAT that isn't picky about which problem the reduction comes from, so I can't tell why SAT is NP-hard. Please help me understand why the conversion to a Boolean formula can mean SAT's NP-hardness...

Any language $$L$$ in NP is decided by some nondeterministic Turing machine $$M$$ that runs in polynomial time. By Cook–Levin, the question "Does $$M$$ accept input $$x$$?" can be decided by constructing a Boolean formula $$\varphi_{M,x}$$ that is satisfiable if, and only if, $$M$$ accepts $$x$$. Moreover, for the fixed machine $$M$$, the formula $$\varphi_{M,x}$$ can be constructed from $$x$$ in time polynomial in $$|x|$$, which is exactly what a Karp reduction requires. Hence $$L$$ reduces to SAT, and since $$L$$ was an arbitrary language in NP, SAT is NP-hard.
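For concreteness, here is a rough sketch of how $$\varphi_{M,x}$$ is usually assembled (the variable names are illustrative, not taken from the answer above). Suppose $$M$$ runs in time $$n^k$$ on inputs of length $$n$$. Introduce Boolean variables $$x_{t,i,s}$$ meaning "at step $$t$$, cell $$i$$ of the computation tableau contains symbol $$s$$", where the symbols also encode the head position and the current state. Then

$$\varphi_{M,x} = \varphi_{cell} \wedge \varphi_{start} \wedge \varphi_{move} \wedge \varphi_{accept},$$

where $$\varphi_{cell}$$ says every tableau cell holds exactly one symbol, $$\varphi_{start}$$ fixes the first row to the initial configuration of $$M$$ on input $$x$$, $$\varphi_{move}$$ says each row follows from the previous one by one of $$M$$'s (nondeterministic) transitions, and $$\varphi_{accept}$$ says some row contains the accepting state. The tableau has polynomially many cells, so $$\varphi_{M,x}$$ has size polynomial in $$|x|$$ and the map $$x \mapsto \varphi_{M,x}$$ is computable in polynomial time; that map is the Karp reduction from $$L$$ to SAT.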
2019-12-09 11:34:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7002307772636414, "perplexity": 734.1442377379193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540518627.72/warc/CC-MAIN-20191209093227-20191209121227-00374.warc.gz"}
https://codereview.stackexchange.com/questions/26344/writing-reading-data-structure-to-a-file-using-c
# Writing/reading data structure to a file using C++

I wrote some piece of code which reads and writes multiple data structures on a file using C++. I would be grateful to get your feedback on what you think about the code (it works, at least when I tested it). Thanks.

#include "stdafx.h"
#include <iostream>
#include <fstream>
#include <sstream>
#include <cstring>   // for strcpy

using namespace std;

// data structure to be written to a file
struct WebSites
{
    char SiteName[100];
    int Rank;
} s1, s2, s3, s4;

int _tmain(int argc, _TCHAR* argv[])
{
    strcpy(s1.SiteName, "www.ppp.com");
    s1.Rank = 0;
    strcpy(s2.SiteName, "www.rrr.com");
    s2.Rank = 111;
    strcpy(s3.SiteName, "www.code.com");
    s3.Rank = 123;
    strcpy(s4.SiteName, "www.yahoo.com");
    s4.Rank = 14;

    // write
    fstream binary_file("c:\\test.dat", ios::out | ios::binary | ios::app);
    binary_file.write(reinterpret_cast<char *>(&s1), sizeof(WebSites));
    binary_file.write(reinterpret_cast<char *>(&s2), sizeof(WebSites));
    binary_file.write(reinterpret_cast<char *>(&s3), sizeof(WebSites));
    binary_file.write(reinterpret_cast<char *>(&s4), sizeof(WebSites));
    binary_file.close();

    // read
    fstream binary_file2("c:\\test.dat", ios::binary | ios::in | ios::ate);
    int size = binary_file2.tellg();
    for (int i = 0; i < size / sizeof(WebSites); i++)
    {
        WebSites p_Data;
        binary_file2.seekg(i * sizeof(WebSites));
        binary_file2.read(reinterpret_cast<char *>(&p_Data), sizeof(WebSites));  // read back each record
        cout << p_Data.SiteName << endl;
        cout << "Rank: " << p_Data.Rank << endl;
    }
    binary_file2.close();
    return 0;
}

The trouble with writing binary blobs is that they lead to brittle storage. The stored objects have a tendency to break over time as the assumptions you make about the hardware no longer hold true (in this case that sizeof(int) is constant and the endianness of int will not change). It has become more standard therefore to use a method known as serialization. In this you convert the object to a format that is hardware agnostic (and usually human readable).

Note: Binary blobs have advantages. But you must weigh those against the brittleness. Therefore your first choice should be serialization (unless you have specific requirements that prevent this). Then look at binary blobs only after you have shown that serialization has too much overhead (unlikely for most situations, but it is a possibility).

In C++ you would do this using the operator<< and operator>>.
I would re-write your code as:

// (assumes the usual headers: <string>, <iostream>, <fstream>, <vector>, <iterator>, <algorithm>)
struct WebSites
{
    std::string siteName;
    int rank;

    WebSites()
        : siteName("")
        , rank(0)
    {}
    WebSites(std::string const& siteName, int rank)
        : siteName(siteName)
        , rank(rank)
    {}
    void swap(WebSites& other) throw()
    {
        std::swap(rank, other.rank);
        std::swap(siteName, other.siteName);
    }
};

std::ostream& operator<<(std::ostream& stream, WebSites const& data)
{
    stream << data.rank << " " << data.siteName.size() << ":" << data.siteName;
    return stream;
}

std::istream& operator>>(std::istream& stream, WebSites& data)
{
    WebSites tmp;
    std::size_t size;
    char sep;
    if (stream >> tmp.rank >> size >> sep && sep == ':')
    {
        tmp.siteName.resize(size);
        if (stream.read(&tmp.siteName[0], size))   // read exactly `size` characters of the name
        {
            data.swap(tmp);
        }
    }
    return stream;
}

Now you can write your code like this:

int _tmain(int argc, _TCHAR* argv[])
{
    WebSites s1("www.ppp.com", 0);
    WebSites s2("www.rrr.com", 111);
    WebSites s3("www.code.com", 123);
    WebSites s4("www.yahoo.com", 14);

    // write
    fstream binary_file("c:\\test.dat", ios::out | ios::binary | ios::app);
    binary_file << s1 << s2 << s3 << s4;
    binary_file.close();

    fstream binary_file2("c:\\test.dat", ios::binary | ios::in);   // no ios::ate, so reading starts at the beginning
    WebSites p_Data;
    while (binary_file2 >> p_Data)
    {
        cout << p_Data.siteName << endl;
        cout << "Rank: " << p_Data.rank << endl;
    }
    binary_file2.close();

    // Read data into a vector
    fstream binary_file3("c:\\test.dat", ios::binary | ios::in);
    std::vector<WebSites> v;
    std::copy(std::istream_iterator<WebSites>(binary_file3),
              std::istream_iterator<WebSites>(),
              std::back_inserter(v));
    binary_file3.close();
}

• thanks I will study your code a bit later though. – user1999360 May 20 '13 at 13:07
• Hey Loki, what you described is serialization right? Do you have some links which explain this concept further? – user1999360 Jun 17 '13 at 9:49
• Loki I asked a question related to your suggestion above: stackoverflow.com/questions/17277070/… -- your feedback would be very useful. thanks – user1999360 Jun 24 '13 at 14:23
• @user1999360: Yes this is serialization. – Martin York Jun 24 '13 at 22:24
• +1 for "Binary blobs have advantages. But you must weigh those against the brittleness." I have several situations where I absolutely need data in a particular per-byte format, but it's definitely not the norm for most programmers using basic (non-byte) types. – underscore_d Jan 7 '16 at 10:36
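As an aside on the text format this serialization produces: assuming a fresh c:\test.dat, writing the four sample records with the operator<< above would leave roughly the following file contents (a sketch; there is no separator between records, which is fine because operator>> reads the site name by its length prefix rather than by scanning for a delimiter):

0 11:www.ppp.com111 11:www.rrr.com123 12:www.code.com14 13:www.yahoo.com

Reading this back works because, after the 11 characters of "www.ppp.com" are consumed, the next characters "111" are parsed as the rank of the second record.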
2019-07-18 08:31:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.322064608335495, "perplexity": 13103.330022231474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525524.12/warc/CC-MAIN-20190718063305-20190718085305-00440.warc.gz"}