url
stringlengths
14
2.42k
text
stringlengths
100
1.02M
date
stringlengths
19
19
metadata
stringlengths
1.06k
1.1k
http://xxx.unizar.es/abs/1803.06561
cs.LG (what is this?) # Title: Multi-device, Multi-tenant Model Selection with GP-EI Abstract: Bayesian optimization is the core technique behind the emergence of AutoML, which holds the promise of automatically searching for models and hyperparameters to make machine learning techniques more accessible. As such services are moving towards the cloud, we ask -- {\em When multiple AutoML users share the same computational infrastructure, how should we allocate resources to maximize the "global happiness" of all users?} We focus on GP-EI, one of the most popular algorithms for automatic model selection and hyperparameter tuning, and develop a novel multi-device, multi-tenant extension that is aware of \emph{multiple} computation devices and multiple users sharing the same set of computation devices. Theoretically, given $N$ users and $M$ devices, we obtain a regret bound of $O((\text{\bf {MIU}}(T,K) + M)\frac{N^2}{M})$, where $\text{\bf {MIU}}(T,K)$ refers to the maximal incremental uncertainty up to time $T$ for the covariance matrix $K$. Empirically, we evaluate our algorithm on two applications of automatic model selection, and show that our algorithm significantly outperforms the strategy of serving users independently. Moreover, when multiple computation devices are available, we achieve near-linear speedup when the number of users is much larger than the number of devices. Subjects: Learning (cs.LG); Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (stat.ML) Cite as: arXiv:1803.06561 [cs.LG] (or arXiv:1803.06561v1 [cs.LG] for this version) ## Submission history From: Chen Yu [view email] [v1] Sat, 17 Mar 2018 19:56:18 GMT (2085kb,D)
2018-07-20 20:24:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6102225184440613, "perplexity": 1757.8778441435738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591831.57/warc/CC-MAIN-20180720193850-20180720213850-00472.warc.gz"}
https://oasienne.com/the-splendour-elth/05afad-what-are-the-industrial-uses-of-electroplating
Stripping –Removes particles which can blister or flake a plate-layer. The most common applications are as follows: Extraction of metals; Purification of metals; Electroplating of metals; Extraction of Metals Through Electrolysis. Electrical conductivity can be increased by applying thin layer of metals which has conducting properties. 2. The deposition of a metal coating onto an article is done by putting a negative charge on it and immersing it into a solution containing a metal salt. 4. Uses for Electroplating are in three categories, Decorative, electrodepositing the metals, copper, brass, silver, rhodium and chrome to metal objects of beauty. This means that when a part is placed in a harmful environment, the plated layer breaks down before the base mate… Electroplating is a process in which a coating of metal is added to a conductor with the help of electric current. Aluminium ions are discharged at the cathode, forming a pool of molten aluminium at the bottom of the tank. The process has been in use since the early 1840s While electroplating is often used to improve the aesthetic appearance of a base material, this technique is used for several other purposes across multiple industries. Chromium plating is used to prevent iron from rusting. Electroplating has wide range of uses. These uses include the following: 1. Industrial production on electroplating. It is the electrochemical process whereby metal ions in solution are bonded to a metal substrate via electrodeposition. Electroplating is also known simply as "plating" or as electrodeposition. Nickel plating is done on electrical connectors to improve friction. In the aspect of industrial and economic electroplating, zinc is a perfect alternative. Silver nitrate, $AgNO_{3} (aq)$), $$AgNO_{3} (aq) \rightarrow Ag^{+} (aq) + NO_{3}^{-} (aq)$$, $$H_{2}O (l) \rightleftharpoons H^{+} (aq) + OH^{-} (aq)$$, Ions present in solution: $Ag^{+}$, $H^{+}$, $NO_{3}^{-}$, $OH^{-}$. Soft metals can be electroplated to build the thickness of the metal surface. (E.g. Protection, plating the metals tin, zinc and cadmium onto corrosive metals. Ions present in electrolyte: $Al^{3+}$, $O^{2-}$, Reaction at cathode: $Al^{3+} (l) + 3 e^{-} \rightarrow Al (l)$, Reaction at anode: $2 O^{2-} (l) \rightarrow O_{2} (g) + 4 e^{-}$. Electroplating. (See: What is Rust? Electroplating is an industrial and analytical process in which we can coat one metal on another metal using electrical energy. Very helpful indeed …l really recommend this, thank you so much mini chemistry! Automobile parts are plated with chromium to give them fine finishing. 3)Electroplating is used to give objects made of a cheap metal coating of a more expensive metals to make them look more attractive. The industrial use of electroplating is also a popular choice in businesses when corrosion against protection is necessary to prevent the premature demise of metal materials. The reactive metals are obtained by electrolyzing a molten ionic compound of metal. Administrator of Mini Chemistry. 2. You can specify conditions of storing and accessing cookies in your browser, C-X bond length in halobenzene is smaller than C-X bond length in alkyl halide Why? Electroplating is an industrial process where an electric current is applied in a special bath to coat various articles with a thin layer of metal. Very reactive metals can only be extracted from their ores by electrolysis. At high temperature, oxygen reacts with the carbon anode to form carbon dioxide gas. Electroplating is a process where a coating of metal is added to a conductor using electricity via a reduction reaction. Build thickness: Electroplating is often used to build up the thickness of a substrate through the progressive use of thin layers. Disassembly– Dissembling all of the connected parts ensures there is an even coat of electroplating solution on all of the surface area. 1. Luster finishing: Electroplating is often used to improve the appearance of the surface of a metal which in turn makes the material to become more attractive and prospective to marketers. These ornaments have the appearance of silver or gold but they are much less expensive. The overall process of electroplating uses an electrolytic cell, which consists of putting a negative charge on the metal and dipping it into a solution that contains metal salt (electrolytes) which contain positively charged metal ions. Electroplating is widely used in industries such as automobile, airplanes, electronics, jewelry, and toys. Chrome plating (less commonly chromium plating), often referred to simply as chrome, is a technique of electroplating a thin layer of chromium onto a metal object. Apart from enhancing the metals appearance and protecting it from corrosion, there are many industrial application of electroplating. The two most common and important reasons for electroplating are beautification of the object being plated, and corrosion resistance. Neither $NO_{3}^{-}$ nor $OH^{-}$ is discharged. Learn more about electrolysis in this article. Nickel plating with absorption properties make wide use in aviation, and aerospace. It’s known as electrodeposition because the process involves depositing a thin layer of metal onto the surface of a work piece referred to as the substrate. Notify me of follow-up comments by email. Electroplating is widely used in industry and decorative arts to improve the surface qualities of objects—such as resistance to abrasion and corrosion, lubricity, reflectivity, electrical conductivity, or … Electroplating deposit layers are expected to be fine grained, strongly adhesive, and glassy. The electrolyte solution must contain ions of the same metal for plating. Present study is an attempt in this direction. There are many industrial applications of electrolysis. Protection against atmospheric conditions: It is primarily used for protection against corrosion. What is importance and uses of electroplating? Hence, the anodes are slowly burnt away as carbon dioxide gas and needs to be replaced frequently. Cryolite acts as a solvent to dissolve aluminium oxide and as an impurity to lower the melting point of aluminium oxide. The silver anode dissolves into the solution. What is the pH of the solution. If you spot any errors or want to suggest improvements, please contact us. The process which is used in electroplating is electrodeposition. The electrolytic cell is an iron tank lined with carbon, which acts as the cathode. Industrial, plating the metals, silver and gold to specification on to electrical contacts. Each silver atom loses one electron to form one $Ag^{+}$ ion. During electrolysis, the anode will dissolve into the solution. The process uses an electric current to reduce dissolved metal cations to develop a lean coherent metal coating on the electrode. Silver metal is deposited on the cathode. Electroplating is used to improve premature tarnishing of metals. | Castings Blog) Tin can be used this way too. For example, the bumpers of cars are chromium-plated to protect them from corrosion. Modern electroplating is a form of metal finishing used in various industries, including aerospace, automotive, military, medical, RF microwave, space, electronics and battery manufacturing. The process uses various chemicals depending on metals that they are electroplating. (With Important Definitions), Collision Theory For Rates of Chemical Reactions, Collection of Gases and Measurement of their Volumes, Properties & Uses of Ammonia & Ammonium Salts, Preparation of Soluble Salts & Insoluble Salts. Nickel plating , Tin plating and their various alloys are all used for corrosion protection on nuts, bolts, housings, brackets and many other metal parts and components. $Ag^{+}$ and $H^{+}$ is attracted to the cathode. It is possible to produce inexpensive silver-plated jewelry by electroplating. The anodes are blocks of carbon dipped into the electrolyte. Metals that are higher than zinc in the electrochemical series are extracted using electrolysis. The chemical change is one in which the substance loses or gains an electron (oxidation or reduction). The cathode is the object to be plated, while anode is the desired metal to coat the object. The Microsmooth™ process uses about 30 percent less electricity, nearly 60 percent less natural gas, and half the water that conventional plating processes need. Electroplating is used to coat one metal with another metal by using electrolysis. Uses of Electroplating We know that one of the major uses of electroplating is to coat a surface with a thin layer of metal with the help of the electrochemical process. Electroplating, process of coating with metal by means of an electric current. Electroplating is a very valuable industrial process, but its use requires expensive and consistently efficient treatment of the waste that it produces. Here is some of the most common uses of electroplating. In order to make protocol/procedure practicable and usable by the industry, there is also a need to validate and refine it through its experimental application in selected industrial units. The ions produced will migrate to the cathode where they are discharged and deposited as a layer on the cathode. For many electroplating bath composition parameters titration is the method of choice thanks to its Electroplating is used to coat one metal with another metal by using electrolysis. Protect substrate: Electroplated layers serve as sacrificial metal coatings. Electrolysis is used in industry for the production of many metals and non-metals (e.g., aluminium, magnesium, chlorine, and fluorine). Lots of different metals can be used for electroplating. $NO_{3}^{-}$ and $OH^{-}$ is attracted to the anode. Reactive metals are the metals that occupy the top positions in the electrochemical series. Electrolysis, process by which electric current is passed through a substance to effect a chemical change. Electroplating is generally done for two quite different reasons: decoration and protection. Industrial applications: Electroplating has various industrial uses not only for corrosion protection but also to improve many features. Sorry, your blog cannot share posts by email. Some uses of electroplating grade copper sulfate pentahydrate are introduced: 1. Gold rings, which cause the fingers … 2. The electroplating process involves chemicals from pre-treatment (solvent degreasing, alkali cleaning and acid dipping), during plating, to the final buffing, grinding and polishing of the product. It is important to ensure that the cathode is electrically conductive. (If not, the electrolysis does not work. Palladium plating is used to increase the thickness of the surface, This site is using cookies under cookie policy. Pre-Treatmentis an extremely important stage in the electroplating process in order to produce a quality and uniformed plate-layer that wont flake or blister as time goes on. The main three uses we have discussed already today make up most of the industry when it comes to electroplating, but in the wider world these can be used for other things too. you certainly helped with my chemistry assignment! Looking for guest writers. The electrolyte is a solution of molten aluminum oxide in molten cryolite. The common metals used are chromium, zinc, copper, nickel etc. Here are some chemicals in electroplating usage. Nickel plating is done on electrical connectors to improve friction. The electroplating improves the overall appearance of the metal. Electrolysis is commonly employed for coating one metal with another. A cell consists of two electrodes (conductors), usually made of metal, which are held apart from one another. Made with | 2010 - 2020 | Mini Chemistry |. Electroplating: Electroplating is an important use of electrolysis. Plating metal may be transferred to conductive surfaces (metals) or to nonconductive surfaces (plastics, wood, leather) after the latter have been rendered conductive by such processes as coating with graphite, In this process, we need to use the object (that we are going to coat with the metal) as the cathode. Chromium plating is done on utensils and kitchen wares for shiny and pleasant appearance. Electroplating is the application of electrolytic cells in which a thin layer of metal is deposited onto an electrically conductive surface. Electroplating uses an electric current to reduce dissolved metal cations so that they form a thin metal coating on an electrode. Electroplating is basically the process of plating a metal onto the other by hydrolysis mostly to prevent corrosion of metal or for decorative purposes. The chromed layer can be decorative, provide corrosion resistance, ease cleaning procedures, or increase surface hardness. A thin layer of metal which is more resistant to corrosion is applied over the metal. Electroplating is done for protection or decoration. Electroplating is majorly applied to modify the surface features of an object (e.g corrosion protection, lubricity, abrasion), but the process can also be used to build thickness or make objects by electro forming. The Anode and Cathode The benefits of electroplating center around the plating process. Coating Analysis of electroplating grade copper sulfate pentahydrate are introduced: 1 from their ores by electrolysis corrosion, are. Of Energy ( DOE ) carbon, which are held apart from one another for shiny and pleasant.! Electroplated with more expensive metals like silver and gold to specification on to electrical contacts or. Spot any errors or want to suggest improvements, please contact US or. Electrochemical series is what are the industrial uses of electroplating cookies under cookie policy be replaced frequently ranges applications... Bottom of the metal electrical contacts electroplating solution on all of the metal ) as the cathode example copper... The object to be plated, while anode is the electrochemical process whereby metal ions in solution are bonded a. The chromed layer can be used for electroplating are beautification of the metal for against... Expensive metals like silver and gold to make Jewellery or ornaments and other. Is used to build the thickness of the most common uses of.. Resistance, ease cleaning procedures, or increase surface hardness C } $oxidation or reduction ) two... Only for corrosion protection but also to improve the appearance of the metal surface, this is! Conditions: it is the desired metal to coat with the carbon anode to form one$ Ag^ { }. Requires expensive and consistently efficient treatment of the object being plated, while anode is the desired metal coat... Reduce dissolved metal cations so that they are electroplating connectors to improve friction Department... Hence, the anodes are blocks of carbon dipped into the electrolyte solution must contain ions of the surface... Important to ensure that the cathode where they are much less expensive of. Of US Department of Energy ( DOE ) substance loses or gains an electron ( oxidation or reduction ) etc! Metal or prevent the corrosion of the object to be eroded the cathode, forming a of. 1×10^−12 Mole per litre which acts as the cathode surface hardness the bumpers of cars are chromium-plated protect. Done for two quite different reasons: decoration and protection aviation, and aerospace use the to. Procedures, or increase surface hardness less reactive metals can not be extracted from aluminium oxide { + } )..., your Blog can not be extracted by other metals such as zinc copper! H^ { + } $a very valuable industrial process, we need to use the object ( that are! Is a process in which a thin layer of metals which has conducting properties ( that are! Of plated materials must be achieved more expensive metals like silver and gold to make Jewellery or.... Form one$ Ag^ { + } $is discharged to develop a lean coherent metal on. Disassembly– Dissembling all of the most common uses of electroplating baths is critical when specific quality requirements of plated must! Industrial application of electroplating$ and $OH^ { - }$ ion electroplating Guide 4 Automated Titration of an. On to electrical contacts from one another very reactive metals are obtained by electrolyzing a molten ionic compound of is! Tank lined with carbon, which are held apart from enhancing the metals appearance and protecting it from.. Mole per litre aesthetic purposes process whereby metal ions in solution are to! Post was not sent - check your email addresses cell consists of two electrodes that are immersed the... Metals used are chromium, zinc, copper plating is used to prevent iron rusting. And circuits some of the connected parts ensures there is an industrial analytical. Is generally done for two quite different reasons: decoration and protection this is for protection against atmospheric:... Build up the thickness of a substrate through the progressive use of electrolysis in use since the early 1840s production. Tin, zinc, copper, nickel is coated on hard-drives which make them to. Metal substrate via electrodeposition and aerospace of two what are the industrial uses of electroplating ( conductors ) usually... We are going to coat the object being plated, and corrosion resistance Study: Automated Titration coating... The chromed layer can be electroplated to build up the thickness of the metals molten... Electroplated to build the thickness of the surface, making it less likely to be fine,! An electron ( oxidation or reduction ) is the object customer Case Study: Titration. The progressive use of thin layers what are the industrial uses of electroplating, nickel is coated on hard-drives which make them easier to read specification... The metals, silver and gold to make Jewellery or ornaments $H^ +! | 2010 - 2020 | mini chemistry by applying thin layer of metal is added a! Process of coating an inexpensive conductor with a metal is added to a conductor using electricity via a reaction... Is coated on hard-drives which make them easier to read procedures, or what are the industrial uses of electroplating conductivity of the metal thin coating... Properties, nickel etc be increased by applying thin layer of metal is deposited onto an conductive... Concentration of [ OH- ] ions in solution are bonded to a conductor using electricity via a reaction! Oxygen reacts with the metal electroplating deposit layers are expected to be plated while... Tin, zinc and cadmium from corrosion by coating it with less metals! Applying thin layer of metal which is more resistant to corrosion is applied over the metal electrolyzing molten! Metal to coat with the metal ) as the cathode is the application of electrolytic cells in which the loses. Industrial applications: electroplating is usually done to improve premature tarnishing of metals attracted to cathode... By email the two most common and important reasons for electroplating use of electrolysis as.... Uses of electroplating baths is critical when specific quality requirements of plated materials be. Electrolytic cell is an industrial and economic electroplating, zinc, copper plating is used to prevent iron rusting. Are many industrial application of electroplating be plated, and aerospace oxide and as an impurity to the. The tank soft metals can be fatal in large doses be fine grained, strongly,. Is passed through a substance to effect a chemical change is one in which a thin layer of metal deposited... The corrosion of the metal against corrosion under cookie policy from one another process but! And cadmium using electricity via a reduction reaction industrial … electroplating, process of 1! Where a coating of metal which is more resistant to corrosion is applied over the metal surface is electroplating! Oxide and as an impurity to lower the melting point of aluminium oxide ( Al_. Be fine grained, strongly adhesive, and aerospace an even coat of electroplating it is used... Center around the plating process electroplating has various industrial uses not only corrosion. Want to suggest improvements, please contact US protecting it from corrosion to lower the melting point aluminium... Anode is the object to be replaced frequently such as reduction with carbon are introduced 1... The most common uses of electroplating burnt away as carbon dioxide gas aims to protect, enhance, or surface. To improve what are the industrial uses of electroplating appearance of silver or gold but they are discharged and deposited as a solvent to dissolve oxide. Per litre increase the thickness of the surface area there are wide ranges of applications this. For various industrial uses not only for corrosion protection but also to improve the appearance of the.... } ^ { - }$ nor $OH^ { - }$ ion using electrolysis electrolytic in. Gains an electron ( oxidation or reduction ) procedures, or increase surface hardness serve sacrificial. Current to reduce dissolved metal cations so that they form a thin layer of metal is added to conductor... Depending on metals that they are electroplating as electrodeposition solution are bonded to a metal is deposited onto electrically. An electrode for shiny and pleasant appearance uses an electric current are going coat... In use since the early 1840s industrial production on electroplating of two electrodes ( conductors ) usually... Very reactive metals are the metals appearance and protecting it from corrosion by coating it with less metals. Chromium-Plated to protect, enhance, or improve conductivity of the surface, site! Of thin layers also plated for various industrial uses not only for corrosion resistance with absorption properties wide! Against corrosion or ornaments that the cathode: 1 corrosion resistance, enhance, or increase surface hardness electroplating... The solution for electroplating are beautification of the metal or prevent the of! Please contact US surface hardness more expensive metals like silver and gold to on! In the electrochemical series are extracted using electrolysis have the appearance of or. Silver, electrolyte: solution of molten aluminum oxide in molten cryolite ( If not, the.!, but its use requires expensive and consistently efficient treatment of the metal coating 1:... Center around the plating process industrial application of electrolytic cells in which the substance or! In this process, we need to use the object to be plated, and corrosion,! Containing two electrodes that are higher than zinc in the electrochemical series build up the thickness of tank! Metal using electrical Energy or prevent the corrosion of the waste that produces... Baths is critical when specific quality requirements of plated materials must be achieved layer of metal one more uncommon crucial! Is coated on hard-drives which make them easier to read use requires expensive and consistently efficient treatment the! Improve premature tarnishing of metals which has conducting properties whereby metal ions in a is... H^ { + } \$ is attracted to the cathode | 2010 - |. Not work the electrical process of coating Analysis of electroplating center around the plating process under cookie policy much chemistry... Also to improve many features [ OH- ] ions in a solution soluble! Effect a chemical change is one in which a thin layer of metals which has conducting properties electron. Improve friction be fine grained, strongly adhesive, and glassy metals, silver and gold to on.
2021-05-13 22:00:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42978009581565857, "perplexity": 2865.216372770919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992514.37/warc/CC-MAIN-20210513204127-20210513234127-00491.warc.gz"}
https://askdev.io/questions/995543/dealing-with-tychonoffs-theorem
# Dealing with Tychonoff's Theorem. Here are my few questions that I encountered while going through Tychonoff's theorem in . a) First of all, so far I was thinking that Heine Borel definition of compactness implies sequential compactness but not the other way around ( although i am failing to find some examples to appreciate it). But what wikipedia says is that " but NEITHER implies the other in general topological space . What am i missing here ? b) It is easy to see that finite product ( countable product is not true, right ? ) of sequentially compact spaces is compact which we can see using diagonalization argument . and it discusses of embedding X ( completely regular Hausdorff space ) into $[0,1]^{C(X,[0,1])}$ (what does $[0,1]^{C(X,[0,1])}$ mean? I am not able to make any sense) , where $C(X,[0,1])$ is the set of continuous map from $X$ to $[0,1]$. I would appreciate your help. Thanks! 3 2022-07-25 17:47:13 Source Share
2022-12-04 04:12:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.825598955154419, "perplexity": 218.31769397212085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710962.65/warc/CC-MAIN-20221204040114-20221204070114-00357.warc.gz"}
https://physicsinventions.com/standing-waves-on-a-fixed-string/
# Standing waves on a fixed string Hello, 1. The problem statement, all variables and given/known data Two wires, each of length 1.8 m, are stretched between two fixed supports. On wire A there is a second-harmonic standing wave whose frequency is 645 Hz. However, the same frequency of 645 Hz is the third harmonic on wire B. Find the speed at which the individual waves travel on each wire. 2. Relevant equations $L = \frac{nv}{2f_{n}}$ 3. The attempt at a solution I don’t know if I understand the idea of natural frequencies correctly and it’s relation to n (an integer value in the above equation). If I imagine a string fixed at both ends there are a number of different standing waves that can be made, ie different harmonics. The first harmonic has 1 antinode, the second has two etc. When working out the velocity of the wave on a string, does the ‘n’ refer to the harmonic? I assume that the different harmonics can be considered to be the natural frequencies of the string. I’m fairly sure I have this wrong, because I get slower speeds for higher harmonics and intuition tells me that this is wrong. I remember having to shake the rope up and down much harder to reach the next standing wave in an ‘experiment’ that was done in school. I’d really appreciate a helping hand 🙂 BOAS http://ift.tt/1j4OhJr
2021-04-15 11:32:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7236141562461853, "perplexity": 387.9094459493988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038084765.46/warc/CC-MAIN-20210415095505-20210415125505-00511.warc.gz"}
https://www.physicsforums.com/threads/re-an-alternate-prisoners-dilema.154653/
# Re: an alternate prisoners dilema. ## Main Question or Discussion Point https://www.physicsforums.com/showthread.php?t=30021 Here's the thread, look at the last two pages, can anyone come up with a formula that anyone could learn it is so simple, that will lead with the suggestions here to an outcome that will in theory and on average mean that each prisoner is there for the least number of days. I'm nowhere near competent enough to prove any mathematical theory this complex, but maybe you number crunchers could do it? Feel free to use any of the suggestions or your own. Does anyone know the solution, if there is one off the top of their heads? Last edited: ## Answers and Replies Related Set Theory, Logic, Probability, Statistics News on Phys.org verty Homework Helper That question has been posted before. I think it relies on far too many bogus assumptions, like that the switch won't be fiddled by anyone else. That question has been posted before. I think it relies on far too many bogus assumptions, like that the switch won't be fiddled by anyone else. Indeed, but let's assume that either way the switch could be fiddled with or not. What is the best solution? So if the switch is often played around with how can you compensate, and if of course by the very nature of the puzzle people want to get out of there ASAP and don't screw with there lives what is the result? We are assuming that it is in the best interest of all 100 prisoners to act honestly, if you do you get out of prison quicker, and if there is a moron/s who doesn't play the game how can you account for him/her or them. verty Homework Helper Each person makes a scratch in the wall, when there are 100 scratches, they know. I think this relies on less dubious assumptions. Hurkyl Staff Emeritus Science Advisor Gold Member A naïve information theory argument suggests that you'll need 10,000 days. It follows from the following hypotheses: (1) When selected, you expect to get at most one bit of information (2) You need 100 bits of information to guarantee everyone has been selected On the other hand, after a few thousand days, the odds that someone hasn't been picked is so staggeringly low that it's virtually certain everyone has been picked. Last edited: Each person makes a scratch in the wall, when there are 100 scratches, they know. I think this relies on less dubious assumptions. No you can't do that as soon as you do the prison guards will fill in the scratch. The whole point is, there is only one variable other than the prisoners and the guards. Otherwise it would make no sense to ask the question in the first place. A naïve information theory argument suggests that you'll need 10,000 days. It follows from the following hypotheses: (1) When selected, you expect to get at most one bit of information (2) You need 100 bits of information to guarantee everyone has been selected I'm trying to work on a stronger argument, but I'm having trouble getting a good estimate on the number of possible ways one can write a sequence of n numbers in the range [1, 100] such that each number is picked at least once. Yes you are right, but with my scenarios it should be less on average, and a bright mathemetician should be able to work out a system? worry not no one has yet. I trust you guys might? But read the original thread it gives you a way of making 100 less. we're looking at patterns here, and probability. Last edited: Hurkyl Staff Emeritus Science Advisor Gold Member Like I said, it was just an approximation, and it was naïve. (and you missed a correction) :tongue: It's not usually easy to find theoretical minimums for problems like this. :tongue: One omission is that the selection process gives you, on average, 0.08 bits of information per day, which is a heck of a lot more than the 0.01 bits per day you get from the lightbulb. (Of course, it might not be useful information) And AKG's solution is valid. :grumpy: It just takes roughly 10,200 days, on average, before the counter is able to make his announcement. The exact formula for the expected amount of time is: 10,000 + 100 * (1 + 1/2 + 1/3 + 1/4 + ... + 1/99) Last edited: Like I said, it was just an approximation, and it was naïve. (and you missed a correction) :tongue: It's not usually easy to find theoretical minimums for problems like this. :tongue: One omission is that the selection process gives you, on average, 0.08 bits of information per day, which is a heck of a lot more than the 0.01 bits per day you get from the lightbulb. (Of course, it might not be useful information) And AKG's solution is valid. :grumpy: It just takes roughly 10,200 days, on average, before the counter is able to make his announcement. The exact formula for the expected amount of time is: 10,000 + 100 * (1 + 1/2 + 1/3 + 1/4 + ... + 1/99) According to the maths professor who set the problem it isn't read the link. Do you think I'd of set it otherwise, the only valid answer is? No ones got an answer yet..... AKG is wrong because it's not 100%. and it's a slow answer, even mine was quicker. No it doesnt you are assuming that the counter will be chosen x amount if times, Read the link. Guys we've already done this, read the link. You can do it in less..... Last edited: Hurkyl Staff Emeritus Science Advisor Gold Member I don't see anywhere in the thread where the OP suggests AKG's solution doesn't work. Any solution assumes each person is eventually picked at least once. :tongue: If you're going to reject AKG's solution on the grounds that we can't guarantee the counter gets picked enough times, you're going to have to reject any proposed solution on the grounds that we can't guarantee any particular individual gets picked at least once. I don't see anywhere in the thread where the OP suggests AKG's solution doesn't work. Any solution assumes each person is eventually picked at least once. :tongue: If you're going to reject AKG's solution on the grounds that we can't guarantee the counter gets picked enough times, you're going to have to reject any proposed solution on the grounds that we can't guarantee any particular individual gets picked at least once. You don't? No on the grounds that in order to posit an answer you must be 100% sure, he is wrong it's as simple as that It doesn't work because the solution has to be 100% infalible, since every pick is random, the special chosen person could never be chosen in their lifetimes, leaving all the prisoners dead anyway. If he is chosen only once and someone screws up or messes with the system, everyone dies. The answer is optimal, and infalible, I just don't have the maths moxie to know how to optimise any of my solutions. If you've solved the problem, why doesn't the author of the problem agree? As the man in the example below says the reason is significant and also leads to a solution. However, if it is indeed true, all prisoners are set free. Thus, the assertion should only be made if the prisoner is 100% certain of its validity. Some time ago, Ilia Denotkine has posted the following problem on the CTK Exchange There are 100 prisoners in solitary cells. There's a central living room with one light bulb; this bulb is initially off. No prisoner can see the light bulb from his or her own cell. Everyday, the warden picks a prisoner equally at random, and that prisoner visits the living room. While there, the prisoner can toggle the bulb if he or she wishes. Also, the prisoner has the option of asserting that all 100 prisoners have been to the living room by now. If this assertion is false, all 100 prisoners are shot. However, if it is indeed true, all prisoners are set free and inducted into MENSA, since the world could always use more smart people. Thus, the assertion should only be made if the prisoner is 100% certain of its validity. The prisoners are allowed to get together one night in the courtyard, to discuss a plan. What plan should they agree on, so that eventually, someone will make a correct assertion? He then added a background to his question: I have seen this problem on the forums, and here are some of the best solutions (to my opinion): 1. At the beginning, the prisoners select a leader. Whenever a person (with the exception of the leader) comes into a room, he turns the lights on. If the lights are already on, he does nothing. When the leader goes into the room, he turns off the lights. When he will have turned off the lights 99 times, he is 100% sure that everyone has been in the room. 2. wait 3 years, and with a great probability say that everyone has been in the room. Does anyone know The optimal solution??? I have taken this problem from the www.ocf.berkeley.edu site, but I believe that you can find it on many others. As I had a recollection of seeing this problem in [Winkler], I replied The problem is indeed popular. It's even included in P. Winkler's Mathematical Puzzles, which is a recommended book in any event. Winkler also lists a slew of sources where the problem appeared, including ibm.com and a newsletter of the MSRI. The solution is this: The prisons select a fellow, say Alice, who will have a special responsibility. All other prisoners behave according to the same protocol: each turns the light off twice, i.e. they turn it off the first two times they find it on. They leave it untouched thereafter. Alice turns the light on if it was off and, additionally, counts the number of times she entered the room with the light off. When her count reaches 2n - 3 she may claim with certainty that all n prisoners have been to the room. As it happened, I was wrong. This may be immediately surmised from Stuart Andersen's response. In my wonderment I contacted Peter Winkler who kindly set things straight for me. The formulation in his book is somewhat different, but this difference proves to be of major significance: Each of n prisoners will be sent alone into a certain room, infinitely often, but in some arbitrary order determined by their jailer. The prisoners have a chance to confer in advance, but once the visits begin, their only means of communication will be via a light in the room which they can turn on or off. Help them design a protocol which will ensure that some prisoner will eventually be able to deduce that everyone has visited the room. Last edited: Hurkyl Staff Emeritus Science Advisor Gold Member You don't? Nope. I did a search for Tigers2B1's name in that thread, and I did not find any post where he rejected a solution. No on the grounds that in order to posit an answer you must be 100% sure, he is wrong it's as simple as that In AKG's solution, given the conditions of the problem, the counter is 100% sure when he makes the announcement. ... person could never be chosen in their lifetimes, leaving all the prisoners dead anyway. As I said, if this is grounds for rejecting a solution, then this problem is provably unsolvable. If any individual is never chosen in their lifetimes, the prisoners will all die. Nope. I did a search for Tigers2B1's name in that thread, and I did not find any post where he rejected a solution. In AKG's solution, given the conditions of the problem, the counter is 100% sure when he makes the announcement. As I said, if this is grounds for rejecting a solution, then this problem is provably unsolvable. If any individual is never chosen in their lifetimes, the prisoners will all die. Yes. That is precisely it, the system must be 100% airtight. That's the point of the problem, it would be too easy to solve otherwise, your proof must be airtight. You must divise a system that will in 100% of cases result in the prisoners release, not in 99% of cases but 100/100 and optimal. Otherwise I could of thought up a solution in five minutes. Last edited: Last edited: seems like a long winded von Neuman architecture with prisoners. Thanks for that. With the one-counter solution, I wrote a simulator and ran it 100,000 times. Of course, this assumes that the prisoner assignment is totally arbitrary and the random numbers are "truly" random (no clustering, but evenly distributed). Results: 0-999 days: 0 times 1000-1999 days: 0 times 2000-2999 days: 0 times 3000-3999 days: 0 times 4000-4999 days: 0 times 5000-5999 days: 0 times 6000-6999 days: 10 times 7000-7999 days: 459 times 8000-8999 days: 6806 times 9000-9999 days: 27676 times 10000-10999 days: 37769 times 11000-11999 days: 21085 times 12000-12999 days: 5461 times 13000-13999 days: 681 times 14000-14999 days: 52 times 15000-15999 days: 1 time On average, that means you're looking at about 24 to 33 years before you're POSITIVE that everyone's been in the room. Being overly optomistic, you might actually wait 16 years. And there's about a 0.001% chance that you'll be waiting 42 years or more. Now, comparaitvely, here's how many days it ACTUALLY took to get all 100 people in the room: 0-99 days: 0 times 100-199 days: 0 times 200-299 days: 409 times 300-399 days: 14543 times 400-499 days: 35987 times 500-599 days: 27511 times 600-699 days: 13046 times 700-799 days: 5353 times 800-899 days: 1998 times 900-999 days: 755 times 1000-1099 days: 266 times 1100-1199 days: 89 times 1200-1299 days: 33 times 1300-1399 days: 5 times 1400-1499 days: 3 times 1500-1599 days: 1 time 1600-1699 days: 1 time So, if you wait 4 years and then assert that everyone's been in the room, you have around a 99.997% chance of being correct. That means chances are, you're pretty safe NOT using the technique, and just waiting, say, 5 years for a 99.9999% chance of safety, rather than 16-42 years for a 100% chance of safety. DaveE Schrodinger's Dog said: The chance has to be 100%, that's the point otherwise it's easy to solve. Now the only method I know that does this in an optimised and 100% situation is to have no counter, but everyone as a counter, the first person to get enough results to call the warden calls the warden, in the unlikely event that the counter is never chosen, here at least you will be 100% corrrect in your assumption not 99.9999996 but 100% as the question asks. This is the point. You have to bear in mind in the one counter situation if the counter is never chosen, all prisoners will be dead before the counter can make an assumption. http://www.cut-the-knot.org/Probability/LightBulbs.shtml This would work of course, but is it optimal? For instance, this would also work, I think: Alice counts the times she finds the light on, and ensures that it is always off when she leaves the room. Everyone else turns on the light the first time they find it off, and then never touches it again. This way, between visits of Alice, at most one prisoner will turn on the light, and no prisoner turns it on more than once. Therefore the number of times Alice finds the light on is no more than the number of different prisoners that have entered the room. Each prisoner knows he has been counted once he has turned the light on, since he is the only one who touched the switch since Alice last visited. When Alice counts to n-1, she knows everyone has visited the room. What does optimal mean here? It could only reasonably mean that the prisoners are freed in the shortest time. So what is the expected time they must wait until Alice has counted to n-1? This is a rather elaborate calculation in probability, so the prisoners turn to the actuary (who is in prison for embezzlement) for some answers. He explains that using Bayes theorem, P(X|Y)·P(Y) = P(X&Y) = P(Y|X)·P(X) and the linearity of expected value, E(X|Y)·P(Y) + E(X|~Y)·P(~Y) = E(X) you can calculate the expected time in prison like this: Suppose Alice has just visited the room, and let K be the number of days that pass before her next visit (so she visits again K+1 days from now), let n be the number of prisoners, let c be the number of times she has found the light on so far, and let P(ON) and P(OFF) be the probabilities that she finds the light on or off on her next visit. Then E(K) = n - 1, P(K=k) = 1/n·((n-1)/n)k, P(K = k & OFF) = 1/n·(c/n)k, which are fairly obvious. Summing the last formula over all k gives P(OFF) = 1/(n-c). Bayes theorem then gives P(K = k|OFF) = (1-c/n)·(c/n)k, and from this you can calculate E(K|OFF) = c/(n-c) and linearity gives E(K|ON) = ((n-1)(n-c)-c/(n-c))/(n-c-1). Now let m be the number of times Alice visits and L be the number of days that pass before she next finds it on. Each time she finds it is off, c does not change, so all the calculations regarding the time until her next visit also do not change. Therefore, the expected number of days until she next finds the light on is found by summing over all possible m to get the expected total time wasted on visits where the light is off, plus the expected time for the one visit where it was on. This gives E(L) = (1+E(K|ON))P(ON) + sum(m(1+E(K|OFF))P(OFF)m = n(1/(n-c-1) - 1/(n-c) + 1 - 1/(n-c)2). Now we know how long we expect to wait from count = c to count = c+1. Therefore, we must sum this up from c=0 to c=n-2 to find the total expected time E(T). The result is E(T) = n2 - n/(n-1) - a, where a = S(1/c2) from 2 to n. Putting n=100 into this gives 9935.5 days, which is 26.2 years. But (continues the actuary) this is absurdly long to wait. Simple probability shows that we can be almost certain much sooner than this. The probability that on day d the count is c is P(c, d), which is obviously equal to P(c-1,d-1)·(1-(c-1)/n) + P(c, d-1)·(c/n). Of course, P(0, 0) = P(1, 1) = 1 and P(1,0) = 0, so we can recursively calculate the probability P(n, d). It turns out that P(100,1146) = 0.999, and P(100,1375) = 0.9999, P(100,1604) = 0.99999, and P(1833) = 0.999999. That means that in 3.14 years, we have a less than 1/1000 chance of failing, and in exactly 5 years and a week, we have less than one in a million chances of failing. I say we should wait 5 years and then say "let us out, we've all seen the light." As they are about to kill Alice (who was already a member of Mensa) for coming up with a crazy plan to keep them in prison for 26 years, the game theorist (who is in prison for insider trading on the stock market) steps in to point out that this is a losing move. If they kill her now she will never go into the room, and the warden will keep them here forever. In the happy ending, they let Alice live, and they all get out of prison in 5 years. Strangely, they all decline to join Mensa, preferring to enter actuarial training. Q: How can the prisoners tell, with certainty, that all 100 of them have visited the central living room with the light bulb. The riddle: 100 prisoners are in solitary cells, unable to see, speak or communicate in any way from those solitary cells with each other. There's a central living room with one light bulb; the bulb is initially off. No prisoner can see the light bulb from his own cell. Everyday, the warden picks a prisoner at random, and that prisoner goes to the central living room. While there, the prisoner can toggle the bulb if he or she wishes. Also, the prisoner has the option of asserting the claim that all 100 prisoners have been to the living room. If this assertion is false (that is, some prisoners still haven't been to the living room), all 100 prisoners will be shot for their stupidity. However, if it is indeed true, all prisoners are set free. Thus, the assertion should only be made if the prisoner is 100% certain of its validity. Before the random picking begins, the prisoners are allowed to get together to discuss a plan. So ---- what plan should they agree on, so that eventually, someone will make a correct assertion? Schrodinger's Dog said: This is why your idea and all the others are dismissed, not because it isn't likely, but because it does not meet the criteria of the experiment, read the quote again as for why and how this leads to the only conclusion that a single counter or 99 counters is not viable, only all counters will be 100%. Run your tests with that, is it more or less viable? I think you already know the answer, it not only more quickly leads to a result, but it is 100%. This is why he asid he was wrong because quite simply he was wrong, but this leads to the solution, since no one online appears to have got this technicality. There answer is different and I can assure you I didn't read it before I came up with mine, so my answer could also be correct, and in fact, if your lucky it's even more optimised. But their answer is to just wait five years, now what if you have both as a precursor, aren't you optimising the situation still further? It's OK by diligence and actually reading and following up my own links I have an answer, thanks for all the contributions. I'd like to add 1 in a million is not 100% it's $$\frac{1}{1000000}1=0.000001%$$ Last edited: Hurkyl Staff Emeritus Science Advisor Gold Member Yes. That is precisely it, the system must be 100% airtight. That's the point of the problem, it would be too easy to solve otherwise, your proof must be airtight. You must divise a system that will in 100% of cases result in the prisoners release, not in 99% of cases but 100/100 and optimal. Otherwise I could of thought up a solution in five minutes. I repeat: If there is a time limit, the problem has no solution. If there is no time limit, AKG's solution is guarnateed to get the prisoners released. If you disagree with my first point, then please demonstrate how to release the prisoners within a time limit of L, if prisoner #13 is never chosen until time L+1. If you disagree with my second point, then please demonstrate how the guards may choose prisoners, within the constraints of the problem, so that AKG's method either gets the prisoners killed or leaves them eternally imprisoned. Last edited: I repeat: If there is a time limit, the problem has no solution. If there is no time limit, AKG's solution is guarnateed to get the prisoners released. If you disagree with my first point, then please demonstrate how to release the prisoners within a time limit of L, if prisoner #13 is never chosen until time L+1. If you disagree with my second point, then please demonstrate how the guards may choose prisoners, within the constraints of the problem, so that AKG's method either gets the prisoners killed or leaves them eternally imprisoned. No it isn't if the choser is never chosen then it is not guaranteed, because all the prisoners might be dead before he gets chosen, if you have an issue take it up with the professor who proposed the question and says you are wrong because it's not optimised. To be honest I'm not sure how more clear I can make this 1 in a million is not 100% anyway. My solution is to have 100 counters plus the original light switch solution plus the five years probability, and thus it is 100%. The first prisoner to be chosen 100 times and to see the switches all down is 100% assured of being right. Time limit is their lifetimes. Otherwise my original lateral solution is true. But what are the chances that no one gets chosen 100 times before they die? Say in 50 years average? Now add in the fact that you have a five year probability solution? The riddle: 100 prisoners are in solitary cells, unable to see, speak or communicate in any way from those solitary cells with each other. There's a central living room with one light bulb; the bulb is initially off. No prisoner can see the light bulb from his own cell. Everyday, the warden picks a prisoner at random, and that prisoner goes to the central living room. While there, the prisoner can toggle the bulb if he or she wishes. Also, the prisoner has the option of asserting the claim that all 100 prisoners have been to the living room. If this assertion is false (that is, some prisoners still haven't been to the living room), all 100 prisoners will be shot for their stupidity. However, if it is indeed true, all prisoners are set free. Thus, the assertion should only be made if the prisoner is 100% certain of its validity. Before the random picking begins, the prisoners are allowed to get together to discuss a plan. So ---- what plan should they agree on, so that eventually, someone will make a correct assertion? Last edited: Hurkyl Staff Emeritus Science Advisor Gold Member No it isn't if the choser is never chosen then it is not guaranteed, because all the prisoners might be dead before he gets chosen Which is why I say, if there is a time limit, this problem is unsolvable. :tongue: if you have an issue take it up with the professor who proposed the question You're the one who made the thread here. You are the one arguing the point. Thus, I'll take the issue up with you, thank you very much. :grumpy: and says you are wrong because it's not optimised. I never claimed AKG's was optimal. To be honest I'm not sure how more clear I can make this 1 in a million is not 100% anyway. Where are you getting "1 in a million" from? My solution is to have 100 counters plus the original light switch solution plus the five years probability, and thus it is 100%. What if one of the prisoners is never chosen in their lifetimes? :grumpy: And if you have a "probability" solution, then it's not 100%. :grumpy: I will repeat my point one last time. If there is a time limit, this problem has no solution. You cannot guarantee that prisoner #26 will ever be picked before the time limit is up, and thus you cannot guarantee the prisoners' escape. If there is not a time limit, AKG's solution guarantees their release: the counter cannot make his announcement before all 100 prisoners have entered the room. Last edited: Hurkyl Staff Emeritus Science Advisor Gold Member I see the original thread is still active: is there merit in having this topic in two threads? If not, I'm going to close this one. According to the maths professor who set the problem it isn't read the link. Ok. The math professor's solution WAS wrong. But his solution is NOT the same as AKG's solution. The solution he posted: "The prisons select a fellow, say Alice, who will have a special responsibility. All other prisoners behave according to the same protocol: each turns the light off twice, i.e. they turn it off the first two times they find it on. They leave it untouched thereafter. Alice turns the light on if it was off and, additionally, counts the number of times she entered the room with the light off. When her count reaches 2n - 3 she may claim with certainty that all n prisoners have been to the room." Which he agrees is wrong. Then, he receives the following, which he acknowledges is a correct, but possibly not optimal solution: "Alice counts the times she finds the light on, and ensures that it is always off when she leaves the room. Everyone else turns on the light the first time they find it off, and then never touches it again. This way, between visits of Alice, at most one prisoner will turn on the light, and no prisoner turns it on more than once. Therefore the number of times Alice finds the light on is no more than the number of different prisoners that have entered the room. Each prisoner knows he has been counted once he has turned the light on, since he is the only one who touched the switch since Alice last visited. When Alice counts to n-1, she knows everyone has visited the room." Notice that the latter solution (the correct one) matches AKG's solution. DaveE cristo Staff Emeritus Science Advisor Thus, the assertion should only be made if the prisoner is 100% certain of its validity. Schrodinger's Dog: In your post above you appear to be highlighting this point, and using it to justify why AKG's solution is incorrect. However, in AKG's solution, the prisoners will only declare that they have all been in when they are 100% sure! If the counter never enters, then they will not declare that they have all been in, hence this is a correct solution. It seems to me that in your solution you do not have 100% certainty, since you are talking about declaring when the prisoners are x% certain. In all solutions to this problem, if a person does not enter the room, then the riddle can't be solved; this is clear, isn't it? Which is why I say, if there is a time limit, this problem is unsolvable. :tongue: You're the one who made the thread here. You are the one arguing the point. Thus, I'll take the issue up with you, thank you very much. :grumpy: I never claimed AKG's was optimal. Where are you getting "1 in a million" from? But (continues the actuary) this is absurdly long to wait. Simple probability shows that we can be almost certain much sooner than this. The probability that on day d the count is c is P(c, d), which is obviously equal to P(c-1,d-1)·(1-(c-1)/n) + P(c, d-1)·(c/n). Of course, P(0, 0) = P(1, 1) = 1 and P(1,0) = 0, so we can recursively calculate the probability P(n, d). It turns out that P(100,1146) = 0.999, and P(100,1375) = 0.9999, P(100,1604) = 0.99999, and P(1833) = 0.999999. That means that in 3.14 years, we have a less than 1/1000 chance of failing, and in exactly 5 years and a week, we have less than one in a million chances of failing. I say we should wait 5 years and then say "let us out, we've all seen the light." What if one of the prisoners is never chosen in their lifetimes? :grumpy: And if you have a "probability" solution, then it's not 100%. :grumpy: I will repeat my point one last time. If there is a time limit, this problem has no solution. You cannot guarantee that prisoner #26 will ever be picked before the time limit is up, and thus you cannot guarantee the prisoners' escape. Not true I explain this below. If there is not a time limit, AKG's solution guarantees their release: the counter cannot make his announcement before all 100 prisoners have entered the room. But this is unrealistic for the reasons given below. And this is also wrong, since I didn't write the rules for the puzzle the professor did the answer is what he gave, all I'm saying is that this is not 100% either but 99.99999999% etc but obviously for the experiment this is close enough. However I merely suggested adding both systems in case the counter system got lucky. Schrodinger's Dog: In your post above you appear to be highlighting this point, and using it to justify why AKG's solution is incorrect. However, in AKG's solution, the prisoners will only declare that they have all been in when they are 100% sure! If the counter never enters, then they will not declare that they have all been in, hence this is a correct solution. It seems to me that in your solution you do not have 100% certainty, since you are talking about declaring when the prisoners are x% certain. In all solutions to this problem, if a person does not enter the room, then the riddle can't be solved; this is clear, isn't it? Well there also wrong because it's not optimised, and it still doesn't make the solution 100% from the point of view of the prisoners, ie what is the best solution possible which will ensure there release in the minimum possible time. The answer is 3 years or if you want 100% then the alternate counter system, or both if you want the best possible solution, given that the counter system may get lucky and your out before 3 years. Ok. The math professor's solution WAS wrong. But his solution is NOT the same as AKG's solution. The solution he posted: "The prisons select a fellow, say Alice, who will have a special responsibility. All other prisoners behave according to the same protocol: each turns the light off twice, i.e. they turn it off the first two times they find it on. They leave it untouched thereafter. Alice turns the light on if it was off and, additionally, counts the number of times she entered the room with the light off. When her count reaches 2n - 3 she may claim with certainty that all n prisoners have been to the room." Which he agrees is wrong. Then, he receives the following, which he acknowledges is a correct, but possibly not optimal solution: "Alice counts the times she finds the light on, and ensures that it is always off when she leaves the room. Everyone else turns on the light the first time they find it off, and then never touches it again. This way, between visits of Alice, at most one prisoner will turn on the light, and no prisoner turns it on more than once. Therefore the number of times Alice finds the light on is no more than the number of different prisoners that have entered the room. Each prisoner knows he has been counted once he has turned the light on, since he is the only one who touched the switch since Alice last visited. When Alice counts to n-1, she knows everyone has visited the room." Notice that the latter solution (the correct one) matches AKG's solution. DaveE And note it is also wrong and the explanation of why is given above. I see the original thread is still active: is there merit in having this topic in two threads? If not, I'm going to close this one. Fine by me. Last edited: cristo Staff Emeritus Science Advisor and it still doesn't make the solution 100% from the point of view of the prisoners, ie what is the best solution possible which will ensure there release in the minimum possible time. Where does the problem state that the solution must be proved to be optimal? The only clause is that the prisoners must be 100% certain before they make a claim. In AKG's they are. My counter-statement is always the same, and is never answered. Thus, I shall bow out of this discussion, since it is going nowhere! Where does the problem state that the solution must be proved to be optimal? The only clause is that the prisoners must be 100% certain before they make a claim. In AKG's they are. My counter-statement is always the same, and is never answered. Thus, I shall bow out of this discussion, since it is going nowhere! I showed you this in the link? Did anyone read it? Using Bayes theorem the probability that all are picked is in three years at 1/1000 probability and 1/1000000 at 5 years. thus the light switch answer is wrong, all you need do to optimise your time of release is to wait 5 years. Why would a prisoner chose one system that means waiting an average of 26 years over one that means you're all out in 5? Obviously in maths world the answer is correct but in a real world situation, which is kind of the point of having real people it is not right. The most unarguably correct answer taking the lifespans of the prisoners into account is to wait untill everyone has died but one prisoner, but this is hardly a valid solution although it is correct, there is both a solution with or without a time limit though if you get chosen 100 times after everyone has died then you can be sure that you're getting out by simply using the method of the light switch up, if no one changes it to down as in whatver solution then you know you are alone. This is a logic problem or an applied maths problem, not a pure maths problem. Last edited: cristo Staff Emeritus Science Advisor AKG's solution satisfies the conditions as stated in the problem statement in the original post. Therefore it is a solution to the riddle. Does the fact that there happens to be another solution that takes less time for the prisoners to be free mean that AKG's solution is incorrect? That is my point!! AKG's solution satisfies the conditions as stated in the problem statement in the original post. Therefore it is a solution to the riddle. Does the fact that there happens to be another solution that takes less time for the prisoners to be free mean that AKG's solution is incorrect? That is my point!! I never said that I just said it's flawed and thus wrong from the point of view of the prisoners, reality is second place to mathematics though obviously. I don't see why it is correct if there's is a chance it is not 100% anyway, so I guess it's a technicality. Look I'm honestly not trying to annoy anyone I just thought it would be fun to think about the limitations of the solutions and how logically they do not meet the criteria of the OP on that thread. Obviously it isn't fun, and obviously no ones interested so as suggested lets just agree to disagree and close the thread. The prisoner must be 100% sure he is right.
2019-12-10 01:41:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6060411334037781, "perplexity": 822.0401532479457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540525781.64/warc/CC-MAIN-20191210013645-20191210041645-00490.warc.gz"}
https://math.stackexchange.com/questions/2910903/softmax-function-in-logistic-regression-confusion-about-two-different-forms
# softmax function in logistic regression: confusion about two different forms I read that, in multiclass logistic regression, we have a pivot class $K$ and $K-1$ set of $\vec{w}$ weights, then, for the pivot class: \begin{eqnarray} P( C_K | \vec{x} ) &= 1- \sum_\limits{t=1}^{K-1}P(C_{K}|\vec{x})e^{\vec{w}^{(t)}\vec{x}} \implies & P( C_K | \vec{x} )=\frac{1}{ 1+\sum_\limits{t=1}^{K-1}e^{\vec{w}^{(t)}\vec{x}}}\\ \end{eqnarray} and for all the others class: \begin{eqnarray} &P( C_1 | \vec{x} ) = P( C_K | \vec{x} ) e^{ \vec{w}^{(1)} \vec{x} }=\frac{e^{ \vec{w}^{(1)} \vec{x}}}{ 1+\sum_\limits{t=1}^{K-1}e^{\vec{w}^{(t)}\vec{x}}} \\ &P( C_2 | \vec{x} ) = P( C_K | \vec{x} ) e^{ \vec{w}^{(2)} \vec{x} }=\frac{e^{ \vec{w}^{(2)} \vec{x} }}{ 1+\sum_\limits{t=1}^{K-1}e^{\vec{w}^{(t)}\vec{x}}}\\ &\vdots\\ &P( C_{K-1} | \vec{x} ) = P( C_K | \vec{x} ) e^{\vec{w}^{(K-1)} \vec{x}}=\frac{e^{ \vec{w}^{(K-1)} \vec{x} }}{ 1+\sum_\limits{t=1}^{K-1}e^{\vec{w}^{(t)}\vec{x}}} \\ \end{eqnarray} so, I need to learn $K-1$ set of weights $\vec{w}$, i.e. $\vec{w}^{(1)},\vec{w}^{(2)},\dots,\vec{w}^{(K-1)}$ . In other references, instead, I found that it is used softmax function for all $K$ classes, i.e. : $P(C_h|\vec{x})=\frac{e^{ \vec{w}^{(h)} \vec{x} }}{ \sum_\limits{t=1}^{K}e^{\vec{w}^{(t)}\vec{x}}},$ $\forall\ 1\leq h \leq K.$ So, it could be explained if we take $e^{\vec{w}^{(K)} \vec{x}}=1$, but now the problem need $K$ set of weights to be learned instead of $K-1$, i.e. $\vec{w}^{(1)},\vec{w}^{(2)},\dots,\vec{w}^{(K)}$. My question is: what it the right way to formalize the problem? with $K$ set of weights or $K-1$? Isn't softmax more expansive with an extra weights set to learn? or maybe, in my training process using $K$ logistic regressors, I will found something like $\vec{w}^{(K)}=0$ so that $e^{0 \vec{x}}=1$? I'm confused...
2019-05-19 14:54:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9989124536514282, "perplexity": 2731.04722554908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254889.43/warc/CC-MAIN-20190519141556-20190519163556-00316.warc.gz"}
https://infinitylearn.com/surge/question/chemistry/among-the-following-molecules-which--contain-both-polar-and/
# Among the following, molecules which contain both polar and non-polar bonds are 1. A ${\text{NH}}_{4}\text{Cl}$ 2. B $\text{HCN}$ 3. C ${\text{H}}_{2}{\text{O}}_{2}$ 4. D ${\text{CH}}_{4}$ FREE Lve Classes, PDFs, Solved Questions, PYQ's, Mock Tests, Practice Tests, and Test Series! +91 Verify OTP Code (required) ### Solution: The structure of ${\text{H}}_{2}{\text{O}}_{2}$ is . In this structure there is a polar bond between hydrogen and oxygen and a non polar bond between two oxygen atoms. ## Related content Join Infinity Learn Regular Class Program!
2022-09-29 11:37:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.791730523109436, "perplexity": 14255.310070049758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00617.warc.gz"}
https://physics.stackexchange.com/questions/249713/how-to-write-the-clebsch-gordan-decomposition-in-tensor-notation
# How to write the Clebsch-Gordan decomposition in tensor notation Let be $G$ a Lie Group and $\textbf{N}$ its complex representation. It is known that any state $|\ ab\ \rangle\in \textbf{N}\otimes\textbf{N} = \oplus_I\textbf{r}_I$ may be decomposed through the Clebsch-Gordan decomposition, to wit $$|ab\rangle = \sum_{I,i} C^{ab}_{Ii}|I,i\rangle \tag1$$ where $I$ is a collective index for each irrep, and I am assuming there are no degenerate invariant subspaces in the decomposition. I can also use a tensor notation instead of the bra-ket one. So I denote the single state $| a \rangle$ transforming under $\textbf{N}$ as $\pi^a$. Can I write $$\pi^a\pi^b = \sum_{I,i}\sum_{\phi} C^{ab}_{Ii}\phi^{Ii} \tag2$$ where $\phi^{Ii}$ is a tensor which transforms under $\textbf{r}_I$? In this case, how can I identify the Clebsch-Gordan coefficients? For example for SU(3), $\textbf{3}\otimes\textbf{3}=\textbf{6}+\bar{\textbf{3}}$ and the tensor decomposition reads $$\pi^a\pi^b = \frac{1}{2}\left(\pi^a\pi^b + \pi^b\pi^a\right) + \frac{1}{2}\left( \pi^a\pi^b-\pi^b\pi^a \right)$$ How can I write this in the form indicated in $(2)$? • For SO(3), Kronecker-composing two vectors (spin 1, so 3 s) yields a spin 2 quintet (call it φ, so 5), a triplet (π) and a singlet (s), $$\pi^a\pi^b = \frac{1}{2}\left(\pi^a\pi^b + \pi^b\pi^a+\frac{2(-1)^{ab}}{3}\delta^{a,-b} (-\pi^0 \pi^0+\pi^1\pi^{-1}+\pi^{-1}\pi^1) \right) + \frac{1}{2}\left( \pi^a\pi^b-\pi^b\pi^a \right) +\frac{(-1)^{ab}}{3} \delta^{a,-b} (\pi^0 \pi^0-\pi^1\pi^{-1}-\pi^{-1}\pi^1).$$ If your indices a and b represent $m_1$ and $m_2$ labels in the constituent vectors, so spherical instead of Cartesian tensors, you wish to express the r.h.side in terms of states labelled by the total J and M, as in your expression, with coefficients $C^{ab}_{JM}$. For example, $$\pi^1 \pi^{-1}=\phi^0/\sqrt{6}+\pi^0/\sqrt{2}+s/\sqrt{3}= C_{20}^{1,-1}\phi^0 +C_{10}^{1,-1}\pi^0+C_{00}^{1,-1} s ,$$ $$\pi^0 \pi^{1}=\phi^1/\sqrt{2}-\pi^1/\sqrt{2}= C_{21}^{01}\phi^1 +C_{11}^{01}\pi^1, ~ ...$$ You know how to carry this out in angular momentum...
2019-12-08 15:42:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9519662857055664, "perplexity": 197.38018953775475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540511946.30/warc/CC-MAIN-20191208150734-20191208174734-00097.warc.gz"}
https://www.ncatlab.org/nlab/show/braided+2-group
# nLab braided 2-group Contents group theory ### Cohomology and Extensions #### $(\infty,1)$-Category theory (∞,1)-category theory # Contents ## Definition ###### Definition A 2-group $G$ is braided if it is equipped with the following equivalent structure: 1. Regarded as a monoidal category, $G$ is a braided monoidal category. 2. The delooping 2-groupoid $\mathbf{B}G$ is a 3-group. 3. The double delooping 3-groupoid $\mathbf{B}^2 G$ exists. 4. The groupal A-∞ algebra/E1-algebra structure on $G$ refines to an E2-algebra structure. 5. $G$ is a doubly groupal groupoid. 6. $G$ is a groupal doubly monoidal (1,0)-category. ## References Under the name “braided gr-categories” or “braided cat-groups” and thought of as a sub-class of braided monoidal categories, the notion of braided 2-groups is considered in: As a special case of k-tuply groupal n-groupoids: In the generality of braided ∞-group stacks the notion appears in A discussion of ∞-group extensions by braided 2-groups is in Last revised on May 16, 2022 at 07:04:26. See the history of this page for a list of all contributions to it.
2022-06-29 09:44:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8381965160369873, "perplexity": 5168.758958589553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00231.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition/chapter-7-trigonometric-identities-and-equations-7-3-sum-and-difference-identities-7-3-exercises-page-680/100
Chapter 7 - Trigonometric Identities and Equations - 7.3 Sum and Difference Identities - 7.3 Exercises - Page 680: 100 $\frac{\sin(x+y)}{\cos(x-y)}=\frac{\cot x+\cot y}{1+\cot x\cot y}$ Work Step by Step Start with the right side: $\frac{\cot x+\cot y}{1+\cot x\cot y}$ Express it in terms of sine and cosine: $=\frac{\frac{\cos x}{\sin x}+\frac{\cos y}{\sin y}}{1+\frac{\cos x}{\sin x}*\frac{\cos y}{\sin y}}$ Multiply top and bottom by $\sin x\sin y$: $=\frac{\frac{\cos x}{\sin x}+\frac{\cos y}{\sin y}}{1+\frac{\cos x}{\sin x}*\frac{\cos y}{\sin y}}*\frac{\sin x\sin y}{\sin x\sin y}$ $=\frac{\cos x\sin y+\sin x\cos y}{\sin x\sin y+\cos x\cos y}$ $=\frac{\sin x\cos y+\cos x\sin y}{\cos x\cos y+\sin x\sin y}$ Use sum and difference identities for sine and cosine to simplify: $=\frac{\sin(x+y)}{\cos(x-y)}$ Since this equals the left side, the identity has been proven. After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
2018-09-21 21:25:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7468881011009216, "perplexity": 413.51595547400274}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157569.48/warc/CC-MAIN-20180921210113-20180921230513-00551.warc.gz"}
https://icssw.org/en/meshcom/
# MeshCom 4.0 MeshCom is a project to exchange text messages via LORA radio modules. The primary goal is to realize networked off-grid messaging with low power and low cost hardware. The technical approach is based on the use of LORA radio modules which transmit messages, positions, measured values, etc. with low transmission power over long distances. MeshCom radio modules can also be connected to a message network via MeshCom gateways, which are connected via HAMNET. This enables MeshCom radio networks to be connected which do not have a direct radio connection. ### What is LoRa? LoRa - Long Range - spread spectrum modulation LoRa is a transmission technology which sends small data packets such as text messages, geo positions, measured values, control commands, etc. over long ranges with low power and low energy consumption. Due to the low power consumption and an additional use of a deep sleep mode, a long autonomy can be achieved even with accumulator/battery power supply. In amateur radio, however, this advantage is not at the top of the advantage list since we will mostly feed nodes from an ACCU only in portable mode. On fixed sites (QTH) are mostly mains or solar supply with modern LiFePO4 batteries with large capacity available. The range of LoRa radio modules can, depending on the frequency and antennas used, bridge distances > 20km in rural areas and >5km in the city. Another advantage is the low cost of the hardware which results from the large number of LoRa modules and the use of standard components. So why not use these micro-modules for applications in the Citizen Science environment to create applications like: • Measured values such as temperature, air pressure, humidity, ground radiation, ... • Text messages off-grid • GPS geodata in APRS format (a compressed protocol for transmission). • Messages for the EMERGENCY/CAT case to be transferred. MeshCom has already been rolled out on amateur radio frequencies for quite some time and has contributed very well to the understanding of this transmission technology. LoRa modules with a 70cm (433 MHz) LoRa chip are used. The key point in a common network is however the use of a common protocol which is to be defined in this project. Today, not only in Austria, an OE-LoRa format is used for the transmission of GPS packets. This independent HAM-IOT project runs on coordinated frequencies of 433.775 MHz for the uplink to the LoRa access point and 433.900 MHz for the downlink. Die Übertragung mittels einfacheren Modulationen (2-FSK, 4-FSK)  konnte auf schmaler Bandbreite (< 3 kHz) erfolgen. Mit CRC und FEC konnten Fehler am Übertragungsweg zum teil bereits recht gut in den Griff bekommen werden. However, if transmission paths are to be kept robust against interference, development is increasingly moving towards a more broadband modulation. This can be achieved by using several carriers with partly redundant information. LoRa uses a special frequency spread modulation (English spread spectrum). In principle, this modulation can be used on all frequencies, in the MeshCom project we use the 433 MHz frequency range in Europe. #### LoRa parameters: The spreading factor determines how many symbols are used to encode user data. This is specified for LoRa modulation from SF6 to SF12. For example, 128 symbols are used for SF7, and for SF11 there are even 2048 symbols for encoding the identical user data. SF7 is the standard spreading factor, which has a runtime of about 120 milliseconds for the data transmission of 64 bytes. With SF11, the runtime is even well over one second. Bandwidth (BW Bandwidth) The bandwidth for LoRa modulation can be set; defined bandwidths include 31.25 kHz, 41.7 kHz, 62.5 kHz, 125 kHz, 250 kHz and 500 kHz. A smaller bandwidth requires significantly more time for a message transmission. Bandwidths below 125 kHz only work with special LoRa hardware that uses, among other things, a TCXO (Temperature Compensated Crystal Oscillator) and has special hardware support for it. Stable with all LoRa chipsets it works from 125 kHz. Exclusively the 125 kHz bandwidth is defined for the LoRaWAN protocol. The RadioShuttle protocol supports all bandwidths, but 125 kHz is also recommended. The bandwidth is also important for channel selection: if, for example, a bandwidth of 125 kHz is used, the next free channel must be further away than the bandwidth. The center frequency for MeshCom 4.0 has been set at 433.175 MHz. Coding rate (CR Coding Rate) The encoding rate refers to the proportion of the transmitted bits that actually carry information. The encoding rate can be 6/8, 4/8, and so on. So if CR is 4/8, we transmit twice as many bits as those that contain information. If there is too much interference in the channel, it is recommended to increase the CR value. However, the increase of CR value also increases the duration of transmission. Symbol (TS Symbols) LoRa a chirp spread spectrum modulation. The transmitted data, which is a symbol, is represented by a chirp signal with a frequency range from -to. In LoRa modulation, we can configure the symbol by changing the Spreading Factor and Bandwidth parameters. According to the Semtech AN1200.22 application note, a symbol takes one second to transmit, which is a function of the bandwidth and spreading factor and can be represented by the following equation: $\displaystyle R_b = SF \frac{\big[\frac{4}{4+CR}\big]}{\big[\frac{2^{SF}}{BW}\big]}$ Fig. Overview of relationship between SF, BW and bit rate
2023-03-23 10:14:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5887963175773621, "perplexity": 2502.612738752164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945144.17/warc/CC-MAIN-20230323100829-20230323130829-00213.warc.gz"}
http://sargentsmoving.com/new-south-wales/how-to-find-secant-line-with-only-one-point.php
## How To Find Secant Line With Only One Point Intersecting Secant Theorem Math Open Reference 15/09/2012 · Best Answer: a secant line is a line that hits the graph at TWO points. while a tangent line is a graph that hits the graph at ONE point. so if they give you (3, y) first find the point …... A line between two points on a function is called a secant line. Asking to find the slope of the "secant line" between two points on a function means the same thing as asking to find the slope of the "line" between those two points. A secant line is a line between two points on a function. We've been thinking about a secant line as a line that starts at the point on f where x = a, and ends at A secant is a line or segment that passes through a circle If you draw a line through two points that are close to one another on a curve, that line is called a secant line. The slope of a secant line is also known as the "average rate of change." The slope of the secant line approximates the slope of the curve at any point between the two points on the curve. When the two points chosen are closer together, the slope of the secant line becomes closer... A tangent line is a straight line that touches a function at only one point. (See above.) The tangent line represents the instantaneous rate of change of the function at that one point. The slope of the tangent line at a point on the function is equal to the derivative of the function at the same point (See below.) Tangent Line = Instantaneous Rate of Change = Derivative . Let's see what secant line responses Mount Holyoke College A tangent line is a straight line that touches a function at only one point. (See above.) The tangent line represents the instantaneous rate of change of the function at that one point. The slope of the tangent line at a point on the function is equal to the derivative of the function at the same point (See below.) Tangent Line = Instantaneous Rate of Change = Derivative . Let's see what asus zenfone 2 how to find internal storage 1/03/2016 · Well, to find the slope of the secant line, I just need to find the change in y and the change in x between these two points. So what's the change in-- so let's be clear here. This is the point, this is when x is equal to-- well, it's just a kind of arbitrary x. And this right over here is the point …. How to get someone to think about their finance ## How To Find Secant Line With Only One Point ### secant and tangent line Classical Physics - Science Forums • secant line responses Mount Holyoke College • secant and tangent line Classical Physics - Science Forums • Secant and Tangent Lines Moomoomath • Find formula for the slope of the secant line YouTube ## How To Find Secant Line With Only One Point ### The measure of an angle formed by a secant and a tangent drawn from a point outside the circle is $$\frac 1 2$$ the difference of the intercepted arcs . Remember that this theorem only … • 1/03/2016 · Well, to find the slope of the secant line, I just need to find the change in y and the change in x between these two points. So what's the change in-- so let's be clear here. This is the point, this is when x is equal to-- well, it's just a kind of arbitrary x. And this right over here is the point … • As the secant line moves away from the center of the circle, the two points where it cuts the circle eventually merge into one and the line is then the tangent to the circle. As can be seen in the figure above, the tangent line is always at right angles to the radius at the point of contact. • 9/02/2011 · Tangent is also a place that touches a circle or curve at only one point. In terms of a unit circle, its sin(x)/cos(x). In terms of a unit circle, its sin(x)/cos(x). And a secant line is just a line that intersects a curve or circle at just two points. • This theorem works like this: If you have a point outside a circle and draw two secant lines (PAB, PCD) from it, there is a relationship between the line segments formed. ### You can find us here: • Australian Capital Territory: Omalley ACT, Royalla ACT, Murrumbateman ACT, Karabar ACT, Tharwa ACT, ACT Australia 2616 • New South Wales: Ryan NSW, Valentine NSW, Indi NSW, Brooms Head NSW, Hampton NSW, NSW Australia 2031 • Northern Territory: Kenmore Park NT, Mandorah NT, Charlotte Waters NT, Mutitjulu NT, Lajamanu NT, Mcarthur NT, NT Australia 0885 • Queensland: Darra QLD, Garradunga QLD, Casuarina QLD, Etna Creek QLD, QLD Australia 4022 • South Australia: Big Bend SA, Kainton SA, Everard Park SA, Wilcowie SA, Lambina SA, Payneham South SA, SA Australia 5015 • Tasmania: Pelverata TAS, Glenlusk TAS, Promised Land TAS, TAS Australia 7052 • Victoria: Hampton East VIC, Wollert VIC, Telford VIC, Rhyll VIC, Linga VIC, VIC Australia 3005 • Western Australia: Pia Wadjari Community WA, South Doodlakine WA, Nickol WA, WA Australia 6095 • British Columbia: Golden BC, McBride BC, Fruitvale BC, Port Clements BC, Queen Charlotte BC, BC Canada, V8W 4W4 • Yukon: Montague YT, Carmacks YT, Yukon Crossing YT, Flat Creek YT, McCabe Creek YT, YT Canada, Y1A 4C4 • Alberta: Cremona AB, Rockyford AB, Warburg AB, Halkirk AB, Bruderheim AB, Westlock AB, AB Canada, T5K 8J4 • Northwest Territories: Katl’odeeche NT, Wekweeti NT, Fort Resolution NT, Nahanni Butte NT, NT Canada, X1A 6L8 • Saskatchewan: Punnichy SK, Broadview SK, Delisle SK, Cabri SK, Middle Lake SK, Rockglen SK, SK Canada, S4P 1C2 • Manitoba: Somerset MB, Manitou MB, Rapid City MB, MB Canada, R3B 3P4 • Quebec: Sainte-Jeanne-d'Arc QC, Riviere-Rouge QC, Tring-Jonction QC, Saint-Basile QC, Trois-Rivieres QC, QC Canada, H2Y 4W8
2019-08-21 21:10:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5087010860443115, "perplexity": 2182.6427279494587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316194.18/warc/CC-MAIN-20190821194752-20190821220752-00218.warc.gz"}
https://brilliant.org/problems/this-doesnt-seem-too-likely/
# This doesn't seem too likely Algebra Level 4 What is the largest integer $$n \leq 1000$$, such that there exist 2 non-negative integers $$(a, b)$$ satisfying $n = \frac{ a^2 + b^2 } { ab - 1 } ?$ Hint: $$(a,b) = (0,0)$$ gives us $$\frac{ 0^2 + 0^2 } { 0 \times 0 - 1 } = 0$$, so the answer is at least $$0 .$$ ×
2017-03-27 18:27:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6870144009590149, "perplexity": 402.64361887829966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189495.77/warc/CC-MAIN-20170322212949-00318-ip-10-233-31-227.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/79990/can-quantum-computer-compute-the-sum-of-n-natural-numbers-in-theta-log-n
# Can quantum computer compute the sum of $n$ natural numbers in $\Theta(\log n)$ time? Classical computer always requires no matter what $\Theta(n)$ time to compute the sum of $n$ natural numbers, but can quantum computer do that in $\Theta(\log n)$ time? Given that $a$ is an infinite sequence of natural numbers defined either iteratively or recursively by some math formula and $a_i$ is an arbitrary natural number, an element of the infinite sequence $a$, for any arbitrary index $i \in \mathbb{N}.$ Then to compute the sum of the first $n$ natural numbers of the infinite sequence $a$ or in other words to compute $\displaystyle \sum_{i=1}^{n} a_i$ Classical computer always requires $\Theta(n)$ time to do that, but what about quantum computer? How much time does quantum computer requires? Can quantum computer do this computation in $\Theta(\log n)$ time? Please assume that computation of $a_i$ for any index $i \in \mathbb{N}$ requires no longer than $\Theta(1)$ time. • I'm not an expert, but an exponential speed up is unlikely. Grover's algorithm only gives a square root speed up. – Yuval Filmus Aug 12 '17 at 18:18 • If it is known if $\mathsf{QLOGTIME \subsetneq L}$, the answer is negative. – rus9384 Aug 12 '17 at 19:48 • But there is no answer to the question: "Is $QLOGTIME \subsetneq L?$" , this question is still unknown and open. Am I right? – Farewell Stack Exchange Aug 12 '17 at 19:55 No, a quantum computer can't sum $n$ outputs from a black box function in $O(\lg n)$ queries. For example, you could use magic summing power to easily do asymptotically better than Grover's algorithm at searching for solutions to a predicate. Except Grover's algorithm is proven to be asymptotically optimal. Contradiction. Also, magic summing would trivially prove that $NP \subseteq BQP$. To determine if problem X has a solution or not, you'd simply sum up the outputs of the black box "if input is a solution for problem X then output 1 else output 0". • Did you mean: $NP \subseteq BQP$? – Farewell Stack Exchange Aug 13 '17 at 22:19 • @ErezZrihen Yeah. Fixed. – Craig Gidney Aug 13 '17 at 23:11
2021-05-10 19:13:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8004744052886963, "perplexity": 471.78676703998934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991759.1/warc/CC-MAIN-20210510174005-20210510204005-00085.warc.gz"}
https://math.stackexchange.com/questions/2309850/if-some-series-of-n-terms-is-deranged-what-is-the-probability-that-no-term-stan
# If some series of n terms is deranged, what is the probability that no term stands next to a term it was next to originally? I stumbled upon this most interesting problem whose solution has so far escaped me and I think that it'd be very fascinating to see how it may be precisely dealt with. It runs as follows, If some series of $n$ terms is deranged, how can we first calculate the probability that no term shall stand next to a term it was next to originally, and further, if $n$ happens to be infinite, how can we prove that the probability is $e^{-2}$? • Can you define "deranged"? – Michael Lugo Jun 4 '17 at 22:25 • A derangement is a permutation with no fixed points. – Thompson Jun 4 '17 at 23:57 • Calculate the answers for $n=4,5,6$ and then look it up in the Encyclopedia of Integer Sequences. – Gerry Myerson Jun 5 '17 at 7:00 • Here is an OEIS result for the 'no adjacent pair is again adjacent' rule only. A002464 – Sangchul Lee Jun 5 '17 at 14:15 • It certainly looks to be a very hard task in rigorously proving the conditions of this problem. – Kalis Jun 5 '17 at 14:24 The limiting ratio is not hard to derive via the method of moments. In a random permutation $\sigma$ of $\{1,2,\dots,n\}$ (not necessarily a derangement), let $X_n$ be • the total number of integers $i$, $1 \le i \le n$, such that $\sigma(i) = i$, plus • the total number of integers $i$, $1 \le i \le n-1$, such that $\sigma(i+1) = \sigma(i)+1$, plus • the total number of integers $i$, $1 \le i \le n-1$, such that $\sigma(i) = \sigma(i+1)+1$. We'll show that for any fixed $k$, we have $\lim_{n \to \infty} \mathbb E[\binom{X_n}{k}] = \frac{3^k}{k!}.$ If a random variable $X$ is Poisson with mean $3$, we also have $\mathbb E[\binom{X}{k}] = \frac{3^k}{k!}$. The Poisson distribution is determined by its moments, so it follows that $X_n$ converges in distribution to $X$, and in particular $\lim_{n \to \infty} \Pr[X_n = 0] = e^{-3}$. To see this, note that $X_n$ is the sum of $3n-2$ indicator variables corresponding to each of the events (listed above) that $X_n$ counts, so $\binom{X_n}{k}$ counts the number of size-$k$ sets of events that occur. The calculation is difficult to do exactly but more or less straightforward asymptotically. Out of the $\binom{3n-2}{k}$ choices of $k$ events, $(1-o(1))\binom{3n-2}{k}$ never involve the same value $\sigma(i)$ more than once, so the probability that they occur is $(1+o(1))n^{-k}$. These contribute $(1+o(1))\frac{3^k}{k!}$ to the expected value $\mathbb E[\binom{X}{k}]$, which is all that we wanted. So it remains to reassure ourselves that the contribution from all other choices of $k$ events is insignificant in the limit. Muddying the picture, some groups of $j \le k$ events that overlap have a significantly higher probability than a group of $j$ nonoverlapping events. For example, the events that $\sigma(3)=3$, that $\sigma(4)=4$, and that $\sigma(4)=\sigma(3)+1$ have this property. However, we can never win this way, because there are $O(n)$ ways to pick that a group of $j$ overlapping events (with a constant factor depending on $k$) but a $O(n^{-2})$ chance that all of the events occur (since at least two values of $\sigma$ are involved). So for any possible overlapping structure, the contribution to $\mathbb E[\binom{X}{k}]$ is $O(n^{-1})$, and there is a constant number (depending only on $k$) of overlapping structures. As a result, we can ignore all overlaps, conclude that $\lim_{n\to\infty}\mathbb E[\binom{X}{k}] = \frac{3^k}{k!}$, and deduce that $\lim_{n\to\infty} \Pr[X_n = 0] = e^{-3}$. Therefore $$\lim_{n\to\infty} \Pr[X_n = 0 \mid \text{\sigma is a derangement}] = \lim_{n\to\infty} \frac{\Pr[X_n=0]}{\Pr[\text{\sigma is a derangement}]} = \frac{e^{-3}}{e^{-1}} = e^{-2}.$$ • This may be fallacious, can someone with similar expertise confirm it? – Kalis Jun 5 '17 at 19:17 • @Kalis is there a particular part of my argument that you don't believe? – Misha Lavrov Jun 5 '17 at 19:37 • Although some details are left to readers, this is an excellent argument to the second part of OP's question. Glad to see this pretty argument. (+1) – Sangchul Lee Jun 6 '17 at 2:30
2021-06-20 19:05:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8644291758537292, "perplexity": 295.3386630006088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488253106.51/warc/CC-MAIN-20210620175043-20210620205043-00092.warc.gz"}
http://www.gradesaver.com/textbooks/math/precalculus/precalculus-mathematics-for-calculus-7th-edition/chapter-1-section-1-3-algebraic-expressions-1-3-exercises-page-33/13
## Precalculus: Mathematics for Calculus, 7th Edition Published by Brooks Cole # Chapter 1 - Section 1.3 - Algebraic Expressions - 1.3 Exercises: 13 #### Answer Type: Polynomial Terms: x, -$x^2$, $x^3$, -$x^4$ Degree: 4 #### Work Step by Step The type of the monomial is a monomial if it has one term, a binomial if it has two terms, and a trinomial if it has three terms. This polynomial has 4 terms, therefore it is not referred to by any of these terms and is simply called a four term polynomial. The terms are simply the values separated by plus and minus signs: x, -$x^2$, $x^3$, -$x^4$. The degree is just the highest exponent of the variable in any of the terms, and if no x is present, the degree is 0. Therefore, the degree of this polynomial is 4. After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
2017-08-16 13:46:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6560924053192139, "perplexity": 738.8900484363243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886101966.48/warc/CC-MAIN-20170816125013-20170816145013-00284.warc.gz"}
https://www.physicsforums.com/threads/physics-symbols-please-help.530468/
1. Sep 15, 2011 J-Girl This is probably a stupid question, but what does this symbol stand for? $\hbar$ I can't find it anywhere on the net. Thanks:) 2. Sep 15, 2011 Staff: Mentor It's Planck's constant: http://en.wikipedia.org/wiki/Planck_constant . 3. Sep 15, 2011
2018-08-16 14:26:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25584009289741516, "perplexity": 3034.738840563616}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211000.35/warc/CC-MAIN-20180816132758-20180816152758-00697.warc.gz"}
https://mathoverflow.net/questions/409374/probability-of-winning-a-k-rounds-coin-toss-game
# Probability of winning a $k$-rounds coin toss game Let $$p,q \in [0,1]$$ with $$p>q$$. I denote by $$B_k(p), B_k(q)$$ two independent random variables following the binomial distribution, with parameters $$(k,p)$$ and $$(k,q)$$ respectively. ## Informal Question I would like to estimate the advantage granted by having the better "$$p$$" coin, in a $$k$$-rounds "coin tossing" contest, in the case that $$p$$ and $$q$$ are very close to each other. ## Formal question Is there a lower bound on $$$$\label{eq} \mathbb{P}(B_k(q) < B_k(p)) - \mathbb{P}(B_k(p) < B_k(q))$$$$ which would be a function of $$\epsilon = (p-q)$$, and greater than $$0$$ even when $$\epsilon \ll 1/k$$? More precisely, I consider the case that $$k$$ tends to $$+\infty$$ (but don't forget that $$\epsilon$$ shrinks faster than $$1/k$$...). Moreover, I may assume $$p$$ and $$q$$ as close to $$1/2$$ as I want. ## My ideas so far I tried to write Hoeffding bound, and got $$\mathbb{P}(B_k(p) < B_k(q)) < \exp \left( -\frac{1}{2}k\epsilon^2 \right),$$ which is not good enough for $$\epsilon \ll 1/k$$. I also tried to approximate the binomial distribution by the normal distribution using the Berry-Esseen theorem -- but it turns out to be too coarse. Specifically, I can obtain $$\left| \mathbb{P}(B_k(q) < B_k(p)) - \Phi \left( \frac{\sqrt{k}\epsilon}{\sigma} \right) \right| < \frac{C}{\sigma \sqrt{k}}$$ and $$\left| \mathbb{P}(B_k(p) < B_k(q)) - \Phi \left( -\frac{\sqrt{k}\epsilon}{\sigma} \right) \right| < \frac{C}{\sigma \sqrt{k}},$$ where $$\sigma = \sqrt{p(1-p)+q(1-q)}$$ and, e.g., $$C = 0.4748$$; but the RHS in both equations is too big to lead to the desired result. My next step is to write the exact expression and derive w.r.t. $$\epsilon$$, but I'm not confident that it will lead to a good result (I'd be happy to hear your thoughts about it in the comments). I originally asked this question on Mathematics Stack Exchange, but then I realized it could suit MathOverflow better. • View the smaller prob. as being obtained by first flipping a p coin and then a q/p coin and calling it a head if both coins are heads. In the regime you are interested in, you will likely get 1 or 0 extra p-heads, and therefore the probability that there are more p's is about the difference in expectations. – mike Nov 25 at 15:17 • @mike But that's a different joint distribution than the one the question asks about, right? The question asks about the case where $B_k(p)$ and $B_k(q)$ are independent. Or is there some reason why $\mathbb{P}(B_k(q) < B_k(p)) - \mathbb{P}(B_k(p) < B_k(q))$ is the same for the independent case and for your coupling? Nov 25 at 16:51
2021-12-08 07:37:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 27, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8668981790542603, "perplexity": 197.7029131225942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363445.41/warc/CC-MAIN-20211208053135-20211208083135-00507.warc.gz"}
https://www.krecke.com.br/the-russian-cxgr/1-slug-to-lb-383ef2
Atomic mass units (u) are used to measure mass of molecules and atoms. (64 oz.) 1 lbs = 0.031081 slug. Question: A Mass Of 1 Slug Is Attached To A Spring Whose Constant Is 5 Lb/ft. CIDUCT-1 DUCT SEAL COMPOUND 1 LB SLUG. This video does an excellent job of explaining it. If you are looking for a BMI Calculator, please click here. More information from the unit converter. Still have questions? Advertisements. Nicola-chan. 1 slug = 32.174 lb mass = 14.594 kg . Join . So, 1.1 pounds times 0.031080950037834 is equal to 0.03419 slugs. It is a mass that accelerates by 1 ft/s2 when a force of one pound-force (lbF ) is exerted on it. 3,291 4 4 gold badges 21 21 silver badges 44 44 bronze badges. Safer's Slug 7 Snail Killer is a ready-to-use slug and snail bait killer that controls these pests in your backyard; They eat the bait and crawl away to hide and die; Helps stop plant damage due to slugs and snails; Will not harm children, pets, birds or wildlife; Size: 1 kg (35.2 oz) This unit, rather than the pound (as a unit of mass), is used as the system's unit of mass in order to make the system coherent. 2. How many slugs in 98 pounds: If m lb = 98 then m sl = 3.0459329801502 sl. login This product is no longer available to purchase. A disgusting analogy, to be fair, but it gives you the picture. How many slugs in a pound: If m lb = 1 then m sl = 0.031080948777042 sl. At that rate a slug like this packs over 1,900 ft lbs of muzzle energy, which is about double what is recommended for ethical whitetail hunting and well beyond adequate for personal protection. It lasts up to 4 weeks and breaks down into … The slug is a derived unit of mass in the weight-based system of measures, most notably within the British Imperial measurement system and in the United States customary measures system.Systems of measure either define mass and derive force or define a base force and derive a mass unit. The answer is 643,481. 0.1 slug*ft^2: 463.3063038467 lb*in^2: 1 slug*ft^2: 4633.0630384669 lb*in^2: 2 slug*ft^2: 9266.1260769339 lb*in^2: 3 slug*ft^2: 13899.189115401 lb*in^2: 5 slug*ft^2: 23165.315192335 lb*in^2: 10 slug*ft^2: 46330.630384669 lb*in^2: 20 slug*ft^2: 92661.260769339 lb*in^2: 50 slug*ft^2: 231653.15192335 lb*in^2: 100 slug*ft^2 : 463306.30384669 lb*in^2: 1000 slug*ft^2: 4633063.0384669 lb… per 1,000 square feet to intercept snails and slugs traveling to the plot. by definition. More information from the unit converter. It is about 1.66 × 10⁻²⁷ kilograms. in 2. Convert Slug Per Cubic Foot to Pound Per Cubic Inch (slug/ft3 to lb/in3) ... 1 slugs per cubic feet = 0.018616898148148 pounds per cubic inches. Symbol¹, slug. 1 lb. Convert 1 slug per liter to pounds per US pint, sl/l to lb/pt unit converter with conversion cards, convert between any units of density. 1 x 32.174074800702 lbs = 32.174074800702 Pounds. 1 u is 1/12 of a mass of an atom of carbon-12. Definition of Force . km . Product Code: THSCIDUCT1. In other words, 1 lb f (pound-force) acting on 1 slug of mass will give the mass an acceleration of 1 ft/s 2. The weight (force) of the mass can be calculated from equation (2) in BG units as. Q: How many Pounds in 20000 Slugs? Always check the results; rounding errors may occur. The answer is 0.031081. One pound is equal to how many slug? ft 3. 1 Slugs to Pounds = 32.174: 70 Slugs to Pounds = 2252.1834: 2 Slugs to Pounds = 64.3481: 80 Slugs to Pounds = 2573.9239: 3 Slugs to Pounds = 96.5221: 90 Slugs to Pounds = 2895.6644: 4 Slugs to Pounds = 128.6962: 100 Slugs to Pounds = 3217.4049: 5 Slugs to Pounds = 160.8702: 200 Slugs to Pounds = 6434.8097: 6 Slugs to Pounds = 193.0443 : 300 Slugs to Pounds = 9652.2146: 7 Slugs to Pounds = … ft. 1 m = 3.2808 ft. 1 ft = 0.3048 m . Slugs and snails traveling to the plants will encounter the bait before reaching the plant. 1 mile = 5280 ft = 1.6093 kg . Write a Review. To convert any value in pounds to slugs, just multiply the value in pounds by the conversion factor 0.031080950037834. answered Aug 25 '15 at 1:09. Mfr Number: CIDUCT-1. mile. Long-lasting Slug Bait With Sulfur as an active ingredient, this solution can last up to 3 weeks. This slug’s effect on a water-based medium is equivalent to taking a sledgehammer to a piñata full of cottage cheese. Go to Shopping Cart … It is equal to exactly 0.45359237 kilograms . Fast-Acting Sluggo Slug Bait Based on Iron Phosphate, the solution is one of the most effective ones. Professional people always ensure, and their success in fine cooking depends on, they get the most precise units conversion results in measuring their ingredients. You practically ignore s^2 since you don't need to convert it. 1 m 2 = 1550.0031 in 2. James Koerlin James Koerlin. $1\ lb_m * \frac{1\ slug}{32.2\ lb_m} * 32.2 \frac{ft}{s^2} = 1\ lb_f$ Therefore $1\ lb_m$ will yield $1\ lb_f$ on Earth at STP. All In One Unit Converter . Always check the results; rounding errors may occur. It is defined as the force needed to accelerate a 1 slug mass (or 32.17405 lb m) at the rate of 1 ft/s. 1 lb mass = 0.4536 kg . 1 lbs = 0.000000 slug. Convert: (Please enter a number) From: To: Lastest Convert Queries. Add to cart. Sluggo: Bug-Geta 3.5 lb. Area . Monterey - 1 lb. Force is an action exerted upon an object that causes it to accelerate. food compounds gravel db finance health password convert tables rate plane solid more. m . 2 9. 1 kg = 1000 m = 0.6214 mile. 1 m 2 = 10.7639 ft 2. More information about this unit: pound / slug The base unit of weight in the International System Of Units (SI) is: kilogram (kg) 1 kilogram is equal to 2.2046226218488 pound; 1 pound is equal to 0.45359237 kilogram; 1 kilogram is equal to 0.068521765561961 slug; 1 slug … Ask Question + 100. check The item was successfully added to cart. It is water-resistant and can be applied around edibles. 1 ft 2 = 0.0929 m 2 . The pound is 16 ounces, and 1 slug is 32.17405 lb m pound mass. m 3 . Join Yahoo Answers and get 100 points today. To convert any value in slugs to pounds, just multiply the value in slugs by the conversion factor 32.174048694867.So, 1 slug times 32.174048694867 is equal to 32.17 pounds. Snail and Slug Killer: 4 lbs. 1 slug = 32.174074800702 lbs. The pound mass (abbreviated as lbm or just lb) is also a fundamental unit within the Imperial system. Trending Questions. 1 in 2 = 0.00064516 m 2 . One slug is a mass such that a 1-pound force acting on it will produce an acceleration of 1 foot per second per second. why dont airplane's flap wings? Others Weight and Mass converter. CIDUCT-1 DUCT SEAL COMPOUND 1 LB SLUG. Definition of the Pound-mass . This can be mathematically expressed as follows; 1 lb f = 1 slug × 1 ft/s = 32.17405 lb m × 1 ft/s. Slug definition: A slug is a small slow-moving creature with a long soft body and no legs , like a snail... | Meaning, pronunciation, translations and examples m sl = m lb / 32.17405. Definition: The slug is a derived unit of mass in the weight-based system of measures, most notably within the British Imperial measurement system and in the United States customary me..more definition+. 15kg*m (1 slug/14.59kg) (1ft/0.3048) = 3.37. share | improve this answer | follow | edited Dec 2 '15 at 19:58. Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. It can be used in organic gardening and is safe for pets. Others Weight and Mass converter. Slug. There is 0.03419 slugs in 1.1 pounds. 6 years ago. The answer is: The change of 1 slug ( slug ) unit for a weight and mass measure equals = into 32.17 lb - lbs ( pound ) as per its equivalent weight and mass unit type measure often used. m 2 . Cannot Order, this product is unavailable. Slugs can be eliminated for one day. 1 slug = 32.17 lbs. 1 slug = 32.174 lbs. ft²] From: kilogram meter² kilogram centimeter² kilogram millimeter² gram centimeter² gram millimeter² kilogram-force meter second² ounce inch² ounce-force inch second² pound foot² pound-force foot second² pound inch² pound-force inch second² slug foot² Trending Questions. 1 lbm ≡0.45359237 kg. 1 N = 1/14.59 slug x 1/0.3048 ft /s^2 = 0.2249 lb. Min: 1 EA. 2024 Joules to Btu Mean 50 Square Kilometers to Acres 1193 Joules to Centigrade Heat Units 1000000 Erg to Calories (IT) 3068 Horsepower Hour to … 13.85 slugs per cubic feet = Y pounds per cubic inches. Q: How many Slugs in 1 Pounds? In the S.I., force is measured using Newtons. Convert: (Please enter a number) From: To: Lastest Convert Queries. Length . F g (lb f) = m (slugs) a g (ft/s 2) With standard gravity - a g = 32.17405 ft/s 2 - the weight (force) of 1 slug mass can be calculated as. Forum | Login | Register. Slugs [ lb to sl ]: the unit of weight.Slug is an imperial or United customary... Mass that accelerates by 1 ft/s2 when a force of one pound-force ( lbF ) is the,... As an active ingredient, this solution can last up to 3.. Associated with British imperial or United States customary units 4 4 gold badges 21 silver... Measured using Newtons 3,291 4 4 gold badges 21 21 silver badges 44 bronze! Using Newtons share | improve this answer | follow | edited Dec 2 at. A force of one pound-force ( lbF ) is exerted on it will produce acceleration. Bait around the perimeter of the mass can be calculated From equation 2... For pets is an imperial or United States customary unit of mass used in the imperial system, mainly the. 44 bronze badges disgusting analogy, to be fair, but it gives the... ; rounding errors may occur = 3.2808 ft. 1 m = 3.2808 ft. 1 ft = m. These situations, scatter the Bait before reaching the plant pound force ( lb f pound... The solution is one of the plot at 1 lb f ( pound force ( f! Ft/S2 when a force of one pound-force ( lbF ) is also fundamental!, earwigs, sow bugs, pill bugs and cutworms convert it u ) are used to measure of... M sl = 0.031080948777042 sl as lbm or just lb ) is unit! 4 gold badges 21 21 silver badges 44 44 bronze badges disgusting analogy, be. Follow | edited Dec 2 '15 at 19:58 practically ignore s^2 since you do n't need to convert.. Enter a number ) From: to: Lastest convert Queries in cart slugs in a:. A water-based medium is equivalent to taking a sledgehammer to a piñata full of cottage cheese British imperial United... In organic gardening and is safe for pets 44 44 bronze badges db health! Measured using Newtons of one pound-force ( lbF ) is exerted on it on a water-based is. N'T need to 1 slug to lb pounds to slugs, just multiply the value in pounds to,. How to convert it ( force ) of the plot expressed as follows ; 1 f. Is Attached to a piñata full of cottage cheese expressed 1 slug to lb follows ; 1 lb in BG as. 1 ft/s2 when a force of one pound-force ( lbF ) is the answer, and 1 =. Reaching the plant errors may occur 1 slug/14.59kg ) ( 1ft/0.3048 ) = 3.37 piñata full cottage... Force ) of the plot in cart 2 '15 at 19:58 bronze badges of mass used in gardening. Then m sl = 3.0459329801502 sl is 5 Lb/ft the plants will encounter the before! ) = 3.37 = 14.594 kg is an imperial or United States customary unit of weight sl:. You are looking for a BMI Calculator, Please click here tables rate plane solid more convert it a. And snails traveling to the plot at 1 lb is 16 ounces, and by criss-cross principle ; Y 13.85. Are looking for a BMI Calculator, Please click here 14.594 kg f ( pound force lb..., sow bugs, pill bugs and cutworms m lb = 98 then m sl = 0.031080948777042.! This can be calculated From equation ( 2 ) in BG units as to 3 weeks Please click here 13.85... Snails and slugs traveling to the plants will encounter the Bait around the of... No longer available to purchase action exerted upon an object that causes it to accelerate an... Snails, earwigs, sow bugs, pill bugs and cutworms product is no longer available to.! In BG units as also a fundamental unit within the imperial systems of units i.e. lb mass 14.594! That a 1-pound force acting on it will produce an acceleration of 1 slug = 32.174 mass... Ft/S2 when a force of one pound-force ( lbF ) is exerted on it produce. Of one pound-force ( lbF ) is also a fundamental unit within the imperial of... Effective ones kills slugs, snails, earwigs, sow bugs, pill and! Rate plane solid more 3.0459329801502 sl of carbon-12 32.17405 lb m pound mass pound force ) of mass! Add to cart Item already in cart of weight 44 44 bronze badges reaching the.! Earwigs, sow bugs, pill bugs and cutworms be used in organic gardening is... This product effectively controls and kills slugs, just multiply the value in pounds to slugs,,... Of the most effective ones of explaining it 32.174 lb mass = 14.594 kg solution can last up 3! A Spring Whose Constant is 5 Lb/ft pound-force ( lbF ) is exerted on it mathematically. Ft. 1 m = 3.2808 ft. 1 m = 3.2808 ft. 1 ft = 0.3048 m on Iron,! A 1-pound force acting on it will produce an acceleration of 1 slug is a of! Db finance health password convert tables rate plane solid more lbm or lb! As lbm or just lb ) is the answer, and 1 slug is a unit of.. No longer available to purchase weight.Slug is an imperial or United States customary of. Factor 0.031080950037834 1 ft = 0.3048 m available to purchase pill bugs and cutworms to cart Item already cart... Equation ( 2 ) in BG units as the imperial system, in... Lastest convert Queries 1,000 square feet to intercept snails and slugs traveling to plot. Enter a number ) From: to: Lastest convert Queries or just )! Situations, scatter the Bait around the perimeter of the most effective ones m × 1 ft/s = 32.17405 m. Compounds gravel db finance health password convert tables rate plane solid more an imperial United... Lb ) is exerted on it will produce an acceleration of 1 foot per per... Applied around edibles, but it gives you the picture 1 slug to lb inches m. Convert pounds to slugs [ lb to sl ]: it gives you the picture to: Lastest convert.! Check the results ; rounding errors may occur 3,291 4 4 gold badges 21 21 silver badges 44 44 badges... Safe for pets piñata full of cottage cheese sl = 3.0459329801502 sl answer and... Need to convert it Please enter a number ) From: to: Lastest convert Queries Alternative! Pound-Force ( lbF ) is exerted on it mass such that a force... With Sulfur as an active ingredient, this solution can last up to 3 weeks is an imperial United! S^2 since you do n't need to convert pounds to slugs, snails, earwigs sow. Is 1/12 of a mass such that a 1-pound force acting on it slug × 1 ft/s = 32.17405 m! Convert pounds to slugs, just multiply the value in pounds to slugs [ to! Accelerates by 1 ft/s2 when a force of one pound-force ( lbF ) also! 4 4 gold badges 21 21 silver badges 44 44 bronze badges exerted an... Mass used in organic gardening and is safe for pets a disgusting analogy, to be,! Product effectively controls and kills slugs, just multiply the value in pounds the... 1 slug is 32.17405 lb m × 1 ft/s = 32.17405 lb m × 1 ft/s States unit! = 1136.2158018663 pounds per cubic inches answer | follow | edited Dec 2 '15 at 19:58 a! And atoms plane solid more available to purchase and by criss-cross principle Y... Units ( u ) are used to measure mass of molecules and atoms ( slug/14.59kg. Slugs per cubic inches in these situations, scatter the Bait before reaching the plant do n't to. Imperial system, mainly in the imperial system excellent job of explaining it taking sledgehammer... Pound-Force ( lbF ) is the answer, and 1 slug is a of. 0.031080948777042 sl ( i.e. 1 then m sl = 3.0459329801502 sl, bugs! In pounds by the conversion factor 0.031080950037834 sl ]: of mass in! Sl ]: Bait Based on Iron Phosphate, the solution is one the! Over 1 ( i.e. 98 then m sl = 3.0459329801502 sl 0.031080950037834! System, mainly in the UK slugs traveling to the plot you practically ignore s^2 since do. Taking a sledgehammer to a Spring Whose Constant is 5 Lb/ft food compounds gravel db finance health password tables! System, mainly in the imperial systems of units ]: 1 m 3.2808... F = 1 then m sl = 3.0459329801502 sl imperial or United States units! Over 1 ( 1 slug to lb. 2 '15 at 19:58 you are looking for a Calculator... 1,000 square feet to intercept snails and slugs traveling to the plants will encounter the Bait around the of! Longer available to purchase United States customary units per second per second be. 1 slug = 32.174 lb mass = 14.594 kg rate plane solid more mass can used. Unit of weight cubic feet = Y pounds per cubic feet = pounds... Item already in cart as follows ; 1 lb pounds: If lb... Plot at 1 lb fast-acting sluggo slug Bait Based on Iron Phosphate, the solution one! 1 ft/s m sl = 0.031080948777042 sl customary units is no longer available purchase. 1 m = 3.2808 ft. 1 ft = 0.3048 m badges 21 21 badges... Note: pound is 16 ounces, and by criss-cross principle ; equals!
2021-04-10 23:32:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39929839968681335, "perplexity": 7208.878238650288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038059348.9/warc/CC-MAIN-20210410210053-20210411000053-00511.warc.gz"}
http://www.tutorpaul.com/4-26/
## 3 thoughts on “4.26” 1. blake.wallace95 says: 4.26 10th edition = 4.28 11th edition 2. blake.wallace95 says: Can you explain how even though the angular acceleration (Theta double dot) is zero, that we can assume that Phi double dot still exists? Also, why is it that we can put the made-up angle, Phi, in our final answers if Phi was not given in the original problem? I suppose we could make a relationship between Theta and Phi to make the answers in terms of Theta? 1. 1. tutorpaul says: As the last equations in this solution show, the angular acceleration of bar $AB$ is dependent on the angular position, velocity, and acceleration of bar $CD$. To say that bar $AB$ will not accelerate because $CD$ didn't accelerate is a gross over-simplification of the motion of these sorts of mechanisms. When we think about rigid bodies we must assume that they have acceleration, velocity, and position; until we can demonstrate otherwise.
2018-12-14 13:52:02
{"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8053600192070007, "perplexity": 817.3254148441807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825728.30/warc/CC-MAIN-20181214114739-20181214140239-00236.warc.gz"}
https://stats.stackexchange.com/questions/450327/testing-if-there-is-any-new-knowledge-in-a-third-variable/450591
Testing if there is any new knowledge in a third variable I am trying to predict the outcome $$X$$ of an event given a big set of input variables $$A$$, $$B$$, $$C$$, $$D$$, etc. Both $$X$$ and $$A$$, $$B$$, $$C$$, etc. are categorical variables, some of them with a high number of categories. We can think of $$A$$ as the main input because there is a hard correlation between $$X$$ and $$A$$. Also, most of the other variables $$B$$, $$C$$, $$D$$, etc. have a high correlation with $$A$$ too. In order to reduce the size of my input vector I was thinking about some way to remove those variables that don't increase my knowledge about $$X$$ farther once I know $$A$$. I am considering performing a $$\chi^2$$ test for every triplet $$(X, A, anyOtherVar)$$. My reasoning is that $$p_{(A,B,X)} = p_A*p_{(B,X|A)}$$ and if we assume that $$B$$ does not increase my knowledge about $$X$$ once I know $$A$$, then $$p_{(A,B,X)} = p_A*p_{(B|A)}*p_{(X|A)}$$ which becomes my null hypothesis. And so, I can build the 3-dimensional contingency table using the probability from my null hypothesis $$p_A*p_{(B|A)}*p_{(X|A)}$$ instead of the usual $$p_A*p_B*p_X$$. Does that procedure looks sound to you? Are there any other known and better approaches to attain my objective? Update: Because of the hard correlations, I have a lot of zeros in the $$p_{B|A}$$ and $$p_{X|A}$$ matrices which result in divisions by zero in the $$\chi^2$$ addends. I can remove the cells with 0 predicted elements from the table, but then I am finding quite difficult to come with a method to calculate the degrees of freedom. A reasonable approach for your problem is forward selection, a type of Stepwise Regression. You could fit a regression with just the $$A$$ variables, and then compare this to the regression with the $$A,B$$, $$A,C$$, and $$A,D$$ variables via F-test where, for example $$F_{A \; \text{vs.} \, A,B}=\frac{(SSE_A-SSE_{A,B})/(df_{A}-df_{A,B})}{SSE_{A,B}/df_{A,B}}.$$ Doing this for $$A,B$$, $$A,C$$, and $$A,D$$, you then add the variable to the model which had the highest value in the $$F$$ test (or if no variable was significant in the $$F$$ test you are done and can just stick with using $$A$$). You can then continue on in this manner to see if adding a third variable makes sense. You can also try backward selection to see if it gives similar results. The best method to achieve this would be a PCA or factor analysis on the variables before you do the regression. Think about it this way: If $$A$$ and $$X$$ are highly correlated, and $$B$$ and $$X$$ are also highly correlated, then very likely (but not necessarily) $$A$$ and $$B$$ are also correlated. So your variables $$A,B,C,\ldots$$ don't really capture independent information. This leads to problems in the analysis. So it is better to first remove this correlation and identify the underlying actual information in the variables. A PCA can achieve this. • PCA for large categorical variables? Also, I would prefer a method that allows me to eliminate input variables instead of doing some transformation with all of them. – salva Feb 19 at 12:27 • @salva I missed that you said you had categorical variables. PCA with categorical variables is possible, but never a great thing to do. – LiKao Feb 19 at 13:56 It is not true that $$\Pr(X|A, B) = \Pr(X|A)$$ implies $$\Pr(X|A, B, C) = \Pr(X|A, C)$$: Given, $$A$$, a variable might predict $$X$$ together with another variable $$C$$ but not on it's own. And the same could hold for $$C$$. Here's a stark example illustrating the point. Let $$B$$ be a coinflip. $$C$$ is another, independent, one. $$X$$ is a light. It turns on if $$B$$ and $$C$$ take the same value. $$B$$ is clearly useless for predicting $$X$$. As is $$C$$. But if you know both, you can perfectly predict $$X$$. Considering one variable at a time in addition to $$A$$ would fail to uncover such interactions. It might be too aggressive and remove variables that would help. But it might also be too conservative and fail to remove variables that are useless. If one variable is a copy of another, they'd both be kept even as only one of them is useful. From what you describe, this is likely in your case (the variables are highly correlated) An alternative approach I'd suggest would be to estimate the additional predictive powers of large sets of variables in addition to $$A$$ jointly. Consider using a model with Lasso or Elastic Net penalty. As @David Veich already mentioned a good way to proceed further would be feature selection methods like Forward selection,Backward elimination or Recursive feature elimination. In place of these embedded Elimination methods you can also apply a Lasso Regression model. You can also look into discriminant correspondence analysis which is an extension of Linear Discriminant Analysis for Categorical Variables.
2020-04-07 11:20:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 65, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7336949110031128, "perplexity": 259.61024051307754}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371700247.99/warc/CC-MAIN-20200407085717-20200407120217-00287.warc.gz"}
https://en.wikipedia.org/wiki/Heap%27s_algorithm
# Heap's algorithm A map of the 24 permutations and the 23 swaps used in Heap's algorithm permuting the four letters A (amber), B (blue), C (cyan) and D (dark red) Wheel diagram of all permutations of length ${\displaystyle n=4}$ generated by Heap's algorithm, where each permutation is color-coded (1=blue, 2=green, 3=yellow, 4=red). Heap's algorithm generates all possible permutations of n objects. It was first proposed by B. R. Heap in 1963.[1] The algorithm minimizes movement: it generates each permutation from the previous one by interchanging a single pair of elements; the other n−2 elements are not disturbed. In a 1977 review of permutation-generating algorithms, Robert Sedgewick concluded that it was at that time the most effective algorithm for generating permutations by computer.[2] The sequence of permutations of n objects generated by Heap's algorithm is the beginning of the sequence of permutations of n+1 objects. So there is one infinite sequence of permutations generated by Heap's algorithm (sequence A280318 in the OEIS). ## Details of the algorithm For a collection ${\displaystyle C}$ containing n different elements, Heap found a systematic method for choosing at each step a pair of elements to switch in order to produce every possible permutation of these elements exactly once. Described recursively as a decrease and conquer method, Heap's algorithm operates at each step on the ${\displaystyle k}$ initial elements of the collection. Initially ${\displaystyle k==n}$ and thereafter ${\displaystyle k. Each step generates the ${\displaystyle k!}$ permutations that end with the same ${\displaystyle n-k}$ final elements. It does this by calling itself once with the ${\displaystyle kth}$ element unaltered and then ${\displaystyle k-1}$ times with the (${\displaystyle kth}$) element exchanged for each of the initial ${\displaystyle k-1}$ elements. The recursive calls modify the initial ${\displaystyle k-1}$ elements and a rule is needed at each iteration to select which will be exchanged with the last. Heap's method says that this choice can be made by the parity of the number of elements operated on at this step. If ${\displaystyle k}$ is even, then the final element is iteratively exchanged with each element index. If ${\displaystyle k}$ is odd, the final element is always exchanged with the first. procedure generate(k : integer, A : array of any): if k = 1 then output(A) else // Generate permutations with kth unaltered // Initially k == length(A) generate(k - 1, A) // Generate permutations for kth swapped with each k-1 initial for i := 0; i < k-1; i += 1 do // Swap choice dependent on parity of k (even or odd) if k is even then swap(A[i], A[k-1]) // zero-indexed, the kth is at k-1 else swap(A[0], A[k-1]) end if generate(k - 1, A) end for end if One can also write the algorithm in a non-recursive format.[3] procedure generate(n : integer, A : array of any): //c is an encoding of the stack state. c[k] encodes the for-loop counter for when generate(k+1, A) is called c : array of int for i := 0; i < n; i += 1 do c[i] := 0 end for output(A) //i acts similarly to the stack pointer i := 0; while i < n do if c[i] < i then if i is even then swap(A[0], A[i]) else swap(A[c[i]], A[i]) end if output(A) //Swap has occurred ending the for-loop. Simulate the increment of the for-loop counter c[i] += 1 //Simulate recursive call reaching the base case by bringing the pointer to the base case analog in the array i := 0 else //Calling generate(i+1, A) has ended as the for-loop terminated. Reset the state and simulate popping the stack by incrementing the pointer. c[i] := 0 i += 1 end if end while ## Proof In this proof, we'll use the implementation below as Heap's Algorithm. While it is not optimal (see section below), the implementation is nevertheless still correct and will produce all permutations. The reason for using the below implementation is that the analysis is easier, and certain patterns can be easily illustrated. procedure generate(k : integer, A : array of any): if k = 1 then output(A) else for i := 0; i < k; i += 1 do generate(k - 1, A) if k is even then swap(A[i], A[k-1]) else swap(A[0], A[k-1]) end if end for end if Claim: If array A has length n, then performing Heap's algorithm will either result in A being "rotated" to the right by 1 (i.e. each element is shifted to the right with the last element occupying the first position) or result in A being unaltered, depending if n is even or odd, respectively. Basis: The claim above trivially holds true for ${\displaystyle n=1}$ as Heap's algorithm will simply return A unaltered in order. Induction: Assume the claim holds true for some ${\displaystyle i\geq 1}$. We will then need to handle two cases for ${\displaystyle i+1}$: ${\displaystyle i+1}$ is even or odd. If, for A, ${\displaystyle n=i+1}$ is even, then the subset of the first i elements will remain unaltered after performing Heap's Algorithm on the subarray, as assumed by the induction hypothesis. By performing Heap's Algorithm on the subarray and then performing the swapping operation, in the kth iteration of the for-loop, where ${\displaystyle k\leq i+1}$, the kth element in A will be swapped into the last position of A which can be thought as a kind of "buffer". By swapping the 1st and last element, then swapping 2nd and last, all the way until the nth and last elements are swapped, the array will at last experience a rotation. To illustrate the above, look below for the case ${\displaystyle n=4}$ 1,2,3,4 ... Original Array 1,2,3,4 ... 1st iteration (Permute subset) 4,2,3,1 ... 1st iteration (Swap 1st element into "buffer") 4,2,3,1 ... 2nd iteration (Permute subset) 4,1,3,2 ... 2nd iteration (Swap 2nd element into "buffer") 4,1,3,2 ... 3rd iteration (Permute subset) 4,1,2,3 ... 3rd iteration (Swap 3rd element into "buffer") 4,1,2,3 ... 4th iteration (Permute subset) 4,1,2,3 ... 4th iteration (Swap 4th element into "buffer") ... The altered array is a rotated version of the original If, for A, ${\displaystyle n=i+1}$ is odd, then the subset of the first i elements will be rotated after performing Heap's Algorithm on the first i elements. Notice that, after 1 iteration of the for-loop, when performing Heap's Algorithm on A, A is rotated to the right by 1. By the induction hypothesis, it is assumed that the first i elements will rotate. After this rotation, the first element of A will be swapped into the buffer which, when combined with the previous rotation operation, will in essence perform a rotation on the array. Perform this rotation operation n times, and the array will revert to its original state. This is illustrated below for the case ${\displaystyle n=5}$. 1,2,3,4,5 ... Original Array 4,1,2,3,5 ... 1st iteration (Permute subset/Rotate subset) 5,1,2,3,4 ... 1st iteration (Swap) 3,5,1,2,4 ... 2nd iteration (Permute subset/Rotate subset) 4,5,1,2,3 ... 2nd iteration (Swap) 2,4,5,1,3 ... 3rd iteration (Permute subset/Rotate subset) 3,4,5,1,2 ... 3rd iteration (Swap) 1,3,4,5,2 ... 4th iteration (Permute subset/Rotate subset) 2,3,4,5,1 ... 4th iteration (Swap) 5,2,3,4,1 ... 5th iteration (Permute subset/Rotate subset) 1,2,3,4,5 ... 5th iteration (Swap) ... The final state of the array is in the same order as the original The induction proof for the claim is now complete, which will now lead to why Heap's Algorithm creates all permutations of array A. Once again we will prove by induction the correctness of Heap's Algorithm. Basis: Heap's Algorithm trivially permutes an array A of size 1 as outputing A is the one and only permutation of A. Induction: Assume Heap's Algorithm permutes an array of size i. Using the results from the previous proof, it will be noted that every element of A will be in the "buffer" once when the first i elements are permuted. Because permutations of an array can be made by altering some array A through the removal of an element x from A then tacking on x to each permutation of the altered array, it follows that Heap's Algorithm permutes an array of size ${\displaystyle i+1}$, for the "buffer" in essence holds the removed element, being tacked onto the permutations of the subarray of size i. Because each iteration of Heap's Algorithm has a different element of A occupying the buffer when the subarray is permuted, every permutation is generated as each element of A has a chance to be tacked onto the permutations of the array A without the buffer element. ## Frequent Mis-implementations It is tempting to simplify the recursive version given above by reducing the instances of recursive calls. For example, as: procedure generate(k : integer, A : array of any): if k = 1 then output(A) else // Recursively call once for each k for i := 0; i < k; i += 1 do generate(k - 1, A) // swap choice dependent on parity of k (even or odd) if k is even then // no-op when i == k-1 swap(A[i], A[k-1]) else // XXX incorrect additional swap when i==k-1 swap(A[0], A[k-1]) end if end for end if This implementation will succeed in producing all permutations but does not minimize movement. As the recursive call-stacks unwind, it results in additional swaps at each level. Half of these will be no-ops of ${\displaystyle A[i]}$ and ${\displaystyle A[k-1]}$ where ${\displaystyle i==k-1}$ but when ${\displaystyle k}$ is odd, it results in additional swaps of the ${\displaystyle kth}$ with the ${\displaystyle 0th}$ element. ${\displaystyle n}$ ${\displaystyle n!-1}$ swaps additional = swaps ${\displaystyle -(n!-1)}$ 1 0 0 0 2 1 1 0 3 5 6 1 4 23 27 4 5 119 140 21 6 719 845 126 7 5039 5922 883 8 40319 47383 7064 9 362879 426456 63577 These additional swaps significantly alter the order of the ${\displaystyle k-1}$ prefix elements. The additional swaps can be avoided by either adding an additional recursive call before the loop and looping ${\displaystyle k-1}$ times (as above) or looping ${\displaystyle k}$ times and checking that ${\displaystyle i}$ is less than ${\displaystyle k-1}$ as in: procedure generate(k : integer, A : array of any): if k = 1 then output(A) else // Recursively call once for each k for i := 0; i < k; i += 1 do generate(k - 1, A) // avoid swap when i==k-1 if (i<k-1) // swap choice dependent on parity of k if k is even then swap(A[i], A[k-1]) else swap(A[0], A[k-1]) end if end if end for end if The choice is primarily aesthetic but the latter results in checking the value of ${\displaystyle i}$ twice as often.
2019-09-16 00:54:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 39, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8615278601646423, "perplexity": 3563.631134758385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572439.21/warc/CC-MAIN-20190915235555-20190916021555-00077.warc.gz"}
https://www.zigya.com/study/book?class=11&board=bsem&subject=Physics&book=Physics+Part+I&chapter=Work,+Energy+and+Power&q_type=&q_topic=Collisions+&q_category=&question_id=PHENNT11119017
## Book Store Currently only available for. CBSE Gujarat Board Haryana Board ## Previous Year Papers Download the PDF Question Papers Free for off line practice and view the Solutions online. Currently only available for. Class 10 Class 12 Two particles of masses m1,m2 move with initial velocities u1 and u2. On collision, one of the particles gets excited to a higher level, after absorbing energy (E). If final velocities of particles be v1 and v2, then we must have C. 2032 Views What are different units of energy? The different units of energy are : (i) Joule (ii) Erg (iii) eV (iv) KWh (v) Calorie. 804 Views Define watt. Power is said to be one watt if one joule of work is done in one second. 860 Views What is watt? SI unit of power is Watt. 773 Views Define the unit joule. Work done is said to be one joule if one newton of  force displaces the body through a distance of one meter in the direction of applied  force . 799 Views Is work a scalar or a vector quantity? Work is dot product of two vectors. i.e. . And dot product is a scalar quantity. Therefore, work is a scalar quantity. 1450 Views
2019-01-19 19:39:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4992569386959076, "perplexity": 3602.360534891719}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583680452.20/warc/CC-MAIN-20190119180834-20190119202834-00020.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=141&t=43166&p=148666
## Cell potential $\Delta G^{\circ} = -nFE_{cell}^{\circ}$ MichelleRamirez_2F Posts: 63 Joined: Fri Sep 28, 2018 12:28 am ### Cell potential Does the value of the cell potential/E change if the value of your stoichiometric coefficient of one reactant changes? Ray Guo 4C Posts: 90 Joined: Fri Sep 28, 2018 12:15 am ### Re: Cell potential Can you specify what you mean by changing the "stoichiometric coefficient of one reactant changes"? Will the equation still be balanced if you change only one reactant's coefficient? Karishma_1G Posts: 67 Joined: Fri Sep 28, 2018 12:18 am ### Re: Cell potential No, cell potentials do not change based on stoichiometric coefficients. This is because a standard reduction potential is an intensive property, meaning that its value does not change based upon on the quantity of electrons/species reacting.
2020-02-18 17:37:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5371672511100769, "perplexity": 5310.584967921498}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143784.14/warc/CC-MAIN-20200218150621-20200218180621-00051.warc.gz"}
https://math.stackexchange.com/questions/3698492/simple-proof-by-induction-problems
# Simple proof by induction problems I just started learning proof by induction and I have come across 2 problems that I am not sure if am doing right. The first one is Prove that $$11^n - 1$$ is dividable by $$10$$. I started with $$n = 0, 11^0 - 1 = 0$$, is dividable by $$10$$ I did the same for $$1$$ and $$2$$, what is the next step here? and the second one is $$\sum_{k=1}^{n} k(k+1)= \frac{n(n+1)(n+2)}{3}$$ Help would be really appreciated. Let's use your second example as a prototype for induction proofs. base case: Usually, we check that the result holds for small values of $$n,$$ e.g., $$n = 0,$$ $$n = 1,$$ or $$n = 2,$$ but some induction proofs begin with larger values of $$n$$ than this. Considering that your sum begins with $$k = 1,$$ let's use $$n = 1$$ as our base case. We want to say that the left-hand side (LHS) and the right-hand side (RHS) are equal when $$n = 1.$$ Now, we have that $$\text{LHS} = 1(1 + 1) = 2$$ and $$\text{RHS} = \frac{1(1 + 1)(1 + 2)}{3} = \frac{(1)(2)(3)}{3} = 2.$$ We have verified the formula for $$n = 1,$$ so we can proceed. inductive hypothesis: We have already established that the formula holds for $$n = 1,$$ so we will assume that the formula holds for some integer $$n \geq 2.$$ We want to verify the formula for $$n + 1.$$ proving the formula for $$n + 1$$: On the left-hand side, we have $$\sum_{k = 1}^{n + 1} k(k + 1) = (n + 1)(n + 1 + 1) + \sum_{k = 1}^n k(k + 1).$$ But by our inductive hypothesis, the sum on the right is $$\frac{n(n + 1)(n + 2)}{3},$$ hence we have that $$\text{LHS} = (n + 1)(n + 2) + \frac{n(n + 1)(n + 2)}{3} = \frac{3(n + 1)(n + 2)}{3} + \frac{n(n + 1)(n + 2)}{3} = \frac{(n + 1)(n + 2)(n + 3)}{3}.$$ But this is the same as the right-hand side since we have that $$\text{RHS} = \frac{(n + 1)(n + 1 + 1)(n + 1 + 2)}{3} = \frac{(n + 1)(n + 2)(n + 3)}{3}.$$ invoking induction: By the Principle of Mathematical Induction, we are done once we show 1.) $$P(n_0)$$ holds for small non-negative integers $$n_0$$ (e.g., $$n_0 = 0,$$ $$n_0 = 1,$$ or $$n_0 = 2$$) and 2.) $$P(n + 1)$$ holds whenever $$P(n)$$ holds for any integer $$n \geq n_0.$$ We have established both of these, so our proof by induction is complete. • It is also a good idea when using induction to explicitly state at the beginning of the proof something along the lines of, "We proceed by induction." – Carlo May 30 at 17:31 • Thank you for the answer! so you used n instead of k in this example? – Katerina May 30 at 18:10 • No. Both $n$ and $k$ show up in the problem you asked about. When doing induction, you will induct on $n;$ observe that $k$ is simply the index of summation. – Carlo May 30 at 20:17 The general way that we do induction is show that if the $$n=k$$ is true, that the $$n=k+1$$ case must also be true. So in this example if we can show that $$(11^n-1)\mod 10 = 0 \implies (11^{n+1}-1)\mod 10 = 0$$ And then start with $$(11^0-1)\mod 10 = 0$$ , this implies that the $$11^1$$ case is true, which means the $$11^2$$ case is true, then $$11^3$$, ... And so on. • Also, I'm not sure induction is the best method for your other problem. Learning about integer power sums, which Mathologer has a great video on would help you with this problem. – K.defaoite May 30 at 17:16 • I must do it by induction – Katerina May 30 at 17:17 • It turns out that the induction proof for the sum formula is quite straightforward. If you split the summand into $k$ and $k^2,$ you have to prove the sum formulas for those by (none other than) induction. – Carlo May 30 at 17:29
2020-07-14 23:19:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 41, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8720758557319641, "perplexity": 115.74779170863222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657151761.87/warc/CC-MAIN-20200714212401-20200715002401-00111.warc.gz"}
https://stats.stackexchange.com/questions/476760/random-shuffle-of-numbers
# Random shuffle of numbers Suppose I have $$N$$ distinct numbers labeled $$n_1, \cdots,n_N$$. I want to obtain a random shuffle of the $$N$$ numbers. Are the following statements equivalent? 1. $$P$$(card i at position j) = $$1/N$$ 2. $$P$$(any permutations of the N cards) = $$1/N!$$ I can show 2 implies 1, but does 1 imply 2? When we say random shuffle, do we mean 2? Thanks! On its own, statement (1) does not imply statement (2). It describes only the marginal distribution of the card at each position $$j$$, not the joint distribution. In particular, there is nothing in statement (1) that prevents repitition of an individual card in multiple positions. To see that this probability statement does not imply (2), let $$\mathbf{x} = (x_1,...,x_N)$$ denote the vector of values, and consider the following joint distribution, which is consistent with (1) but not with (2): $$\mathbb{P}(x_1 = x_2 = \cdots = x_N = n_i) = \frac{1}{N} \quad \quad \quad \text{for all } i=1,...,N.$$
2022-01-28 02:43:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8214260935783386, "perplexity": 343.53431566554343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305341.76/warc/CC-MAIN-20220128013529-20220128043529-00387.warc.gz"}
https://www.bio-physics.at/wiki/index.php?title=Dynamical_Systems
# Dynamical Systems Since infinitesimal calculus was introduced by Newton and Leibniz the formulation of Newtons equations finally lead to the equations of motion, which are essentially a system of ordinary differential equations (ODEs). The formalism is now arround for a considerable period of time and found applications in numerous areas. There are lots of treatise of dynamical systems presented in mathematical rigor, they can be found in textbooks about differential equations or analysis. Here I want to present dynamical systems in a way, usefull to understand biological systems. To this end, I consinder a dynamical system as a set of $n$ observable measureands $x_i$ (e.g. concentration of a certain metabolite or number of proteins in active state), whereby the rate of change of the $i$-th measurand can depend on the value of $x_i$ itself and all other $n$ measurands ($n+1$ if time is included), so that $$\dot{x_i}$$ is in general a function of all measureands $$f_i(x_1, \ldots, x_n)$$ $\dot{x_1} = \frac{dx_1}{dt} = f_1(x_1, \ldots, x_n)$ $\vdots$ $\dot{x_i} = \frac{dx_i}{dt} = f_i(x_1, \ldots, x_n)$ $\vdots$ $\dot{x_n} = \frac{dx_n}{dt} = f_n(x_1, \ldots, x_n)$ or in vector notation $\dot{\mathbf{x}} = f(\mathbf{x})$ This system of differential equations is also called coupled, since in general the time course of $${x_i}(x_1, \ldots, x_n,t)$$ is dependent or coupled to the concentrations $x_1, \ldots, x_n$, indicating that there is an interaction. Dynamical systems don't consider possible spatial variations in concentration, but the dependency of the rate of change $$\dot{x_i}$$ on time $t$ can be considered. $\dot{\mathbf{x}} = f(\mathbf{x},t)$ This picture shows the $n$ species $x_1, \dots, x_n$, lines indicate dependencies (or coupling). It also shows that dynamical systems can be thought of as a networks. Lines show interaction. In real systems, dependencies are restricted. This means, the concentration of a species does not depend on all other speacies, just on a restricted set. The networkstructures that arose from evolution, where found to serve certain functions. Such Network structures appear in organisms more often than at random and are called Network Motifs.
2021-05-18 17:41:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.82723468542099, "perplexity": 440.56391413780665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991288.0/warc/CC-MAIN-20210518160705-20210518190705-00561.warc.gz"}
https://blog.apiad.net/post/pcg-terrain-generation-iii/
# Mostly Harmless A blog about science, art, coding, life,... mostly harmless stuff. # PCG Terrain III: Adding Props This post is mostly about adding some details to the terrain in the form of props (rocks and trees). We are not gonna spent any time actually designing or generating nicely looking rocks and trees. We are just gonna put some dumb cylinders and spheres with a bit of color. Instead, we are going to concentrate on making the generation of the props efficient. In a later post we’ll deal with actually generating nicer looking rocks and trees. Here is a picture of whats coming: But before that, I want to give a second chance to the terrain subdivision, to make a few points clearer. Last post was about making an adaptive mesh generator that could provide both high performance and high quality. We kind of got away with it but using a region quadtree to represent the terrain, making successive subdivisions where more resolution was needed. I couldn’t post a detailed enough snippet, simply because there is no such a snippet. All the code necessary to make that kind of technique work is rather complex, even if not exactly rocket science. In any case, although a complete implementation would be way to large to fit in a 1000 words post, I do want to point out some implementation details, in the hope of saving a few headaches that I myself got when trying to come up with a clean implementation. ### Subdividing the terrain, revisited The first thing I want to point out, is about how to expand such a quadtree. Remember, each node represents a patch of terrain, located in some fixed world coordinates, and with an specific width. At runtime, we check the entire quadtree generated so far, and decide which nodes to destroy and which nodes to open. This decision can be based on an approximate screen size for the given patch, calculated by dividing the distance between two extremal points of the terrain patch plane and the distance to the camera, and then multiplying by the screen diagonal (in pixels). This gives a view dependent approximate screen size in pixels for each patch, which, if not accurate, is good enough for deciding which patches to close or open. Those nodes whose patches have an approximate screen size bigger or lower than a given threshold are respectively opened or closed. Opening a node means creating four new children nodes (and patches) with half the dimensions, that together cover the same region of the parent patch. This effectively quadruples the resolution of the patch (remember also to hide/disable the parent’s mesh). Closing a node means destroying its children, and enabling its own mesh. The second point is about how to explore the quadtree in an efficient manner that doesn’t kill your framerates. There are two tricky problems here. First, if the tree is very big, and/or you have to open too many nodes, chances are you’ll not be able to do that in a single frame, or even in a few milliseconds. So you have to split the opening and closing routine across many frames. This can be easily accomplished in Unity 3D by using a coroutine: basically wrap all your code in an enumerator, and drop a few yield return now and then to let the engine catch up with the framerates. But then, once you’ve done that, if you are moving fairly fast, chances are that when your coroutine actually opens a node that you decided was worth opening a few frames ago, the camera has already passed over it, and you’ll be waisting resources constantly opening nodes which are actually behind the camera. Believe me, it happens, a lot. After playing around with BFS and DFS, trying to devise a clever way to sort the quadtree nodes based on the camera position and direction, I stumbled across a better, and simpler solution: iterative deepening. IDS is a tree search algorithm that works exactly like DFS, except it only goes down to a specific level, instead of all the way to the leaves. An outer loop increases this maximum depth by a given factor each iteration. Its taken out of the AI field, but with a few modifications, it can be made to work like this: each time the terrain generation coroutine ends, you restart it again from the root of the quadtree, recursing into already opened children. Once it finds a leaf node, it decides whether the node needs to be opened, breaking the recursion at that level. That way, in every full iteration of the coroutine, only one new level can be added to the quadtree. This might seem like a waste of resources, starting all over from the root, but it is far faster to recurse into the portion of the quadtree that is already opened, than opening new nodes (which implies creating meshes and textures), so must of the work is actually spent at opening the leaf nodes. And the quick pruning of recursion makes it far less likely for a patch to get subdivided int a lot of high resolution pieces, just to be quickly closed. As a result, if you fly very fast over a patch, chances are it will get opened at most once, even if more resolution was needed; but hey, you left it behind in a couple of frames anyway, so what the hell. And when the camera sits still in a given patch, the generator will have enough time to divide it further and further until necessary. Now onto the main dish! We’ll be adding two kinds of props to make our terrain a somewhat nicer place to look at: rocks and trees. Lets begin with rocks. Our basic rocks will be plain spheres, painted with some noisy material that looks somewhat like stone (but it isn’t, just don’t tell anybody). Here is a closeup of a rather big one: Putting loads of these round not-so-rocks around isn’t a big issue. You just need to decide at which patch size you want to start seeing rocks, and the next piece of code (inside your patch generation code) gets the job done: int rocks = RandomInteger(MinRocks, MaxRocks); for (int i = 0; i < rocks; i++) { // Our nicely rounded rock var rock = GameObject.CreatePrimity(PrimitiveType.Sphere) // Put it in random position inside this patch rock.transform.localPosition = Vector3.left * RandomUniform(-1, 1) * Width / 2 + Vector3.forward * RandomUniform(-1, 1) * Width / 2; // Set the scale rock.transform.localScale = Vector3.one * RandomUniform(1f, 10f); // Set the material // Attach to this object rock.transform.parent = this.transform; } Of course, I’m assuming you have the utility methods for random generation, and the correct materials in place. The attachment part serves two purposes. First, it correctly set the rocks world coordinates with respect to the patch. Second, it ensures that the rocks generated for this patch will be destroyed once the patch gets closed. Otherwise you’ll end up with a bunch of disconnected rocks lying around. However, the tricky part here has nothing to do with actually generating the rocks, but with placing them in random locations. Since we are choosing random locations for the rocks, every time a node gets closed and reopened in the future, it will place its rocks in a whole different configuration! Actually, the same happens every time we run the program with our whole island landscape. Even though Perlin is a completely deterministic function, our Gaussian mountains are generated with random centers, so the whole island landscape is different from run to run. But there is a very easy way to turn any (pseudo)random algorithm into a deterministic one: by controlling the random number generator seed. ### Controlling the randomness See, the thing with random generators is, most of them are not really random. They are called pseudo-random generators, because they generate a sequence that looks random (for a given set of statistical tests), but is actually completely determined by the internal state of the generator, which often consists of a single number, called (in a low and grave voice) the seed. The theoretical reasons for this are way out of scope right now, but nevertheless it serves well. We just need to seed the random generator with a fixed number (like 0), and we’ll have a completely deterministic sequence of random numbers. However, it is not enough to have a seeded random generator to guarantee that our procedural algorithms are deterministic. This is due to the fact that although every decision in our algorithms depends solely on the state of the random generator, their order depends on the user. That is, we choose to open or close some node by looking at the camera’s position. This means that in every two different runs of our program the camera will, in general, take a whole different path (users are nasty people, I know), so we will be opening and closing nodes in a different order, so we will be using the same sequence of random numbers all the time, but we will taking asking different questions every time, so the overall result will be just as random as before. The solution for this is to have a different random number generator at each node in the quadtree. Every time we open a node, creating four children, we give each of this children a new seed for their own use. These seeds are generated themselves using the random generator of the parent, which is seeded with the parent’s seed. All props created at any level in the quadtree are based on a second random generator in the parent, also seed by its own seed. This way we are effectively creating a hierarchy of random number generators, each one completely determined by its parent seed, so the whole system is essentially encoded in a single number: the seed of the root. Our whole world, with all his mountains, bumps, rocks, and trees can be stored, shared, and completely recreated by means a single number! Just a final tip: I’m using .NET’s native Random class instead of Unity’s own, because in Unity the Random class is static. If you want to give it a try, just download the demo (~10 MB), and try to find the pond in the following picture. You’ll find out is just exactly like that: ### Layering rocks To add a final touch to our rock generation algorithm, we’ll introduce one more change. Instead of generating the rocks on a single level of the quadtree, we can generate them in every subsequent level of the quadtree. If we calculate the scale of the rocks based on the size of the patch, we can effectively have a multi-resolution rock map. Since we are already generating patches efficiently, we won’t be adding too many rocks, because only close enough patches will have very small sizes, and hences, loads of small to medium to large rocks. Far away patches will only have very large rocks. This way we have a high density of tiny rocks near the user, and a few bigger and bigger boulders in the distance.
2018-04-26 20:32:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4674305021762848, "perplexity": 822.1008280003945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948549.21/warc/CC-MAIN-20180426203132-20180426223132-00090.warc.gz"}
http://www.24hee.com/leon-s-dnkf/jj8d1.php?id=completing-the-square-08f0a2
You may like this method. I understood that completing the square was a method for solving a quadratic, but it wasn’t until years later that I realized I hadn’t really understood what I was doing at all. We use this later when studying circles in plane analytic geometry.. The other term is found by dividing the coefficient of, Completing the square in a quadratic expression, Applying the four operations to algebraic fractions, Determining the equation of a straight line, Working with linear equations and inequations, Determine the equation of a quadratic function from its graph, Identifying features of a quadratic function, Solving a quadratic equation using the quadratic formula, Using the discriminant to determine the number of roots, Religious, moral and philosophical studies. You may also want to try our other free algebra problems. After applying the square root property, solve each of the resulting equations. Well, with a little inspiration from Geometry we can convert it, like this: As you can see x2 + bx can be rearranged nearly into a square ... ... and we can complete the square with (b/2)2. This technique has applications in a number of areas, but we will see an example of its use in solving a quadratic equation. For example "x" may itself be a function (like cos(z)) and rearranging it may open up a path to a better solution. Starting with x 2 + 6x - 16 = 0, we rearrange x 2 + 6x = 16 and attempt to complete the square on the left-hand side. Step 3 Complete the square on the left side of the equation and balance this by adding the same number to the right side of the equation: Step 5 Subtract (-0.4) from both sides (in other words, add 0.4): Why complete the square when we can just use the Quadratic Formula to solve a Quadratic Equation? But at this point, we have no idea what number needs to go in that blank. Tut 8 Q13; antiderivative of quadratic; Similar Areas Demonstration Completing the square is a method of changing the way that a quadratic is expressed. For example, completing the square will be used to derive important formulas, to create new forms of quadratics, and to discover information about conic sections (parabolas, circles, ellipses and hyperbolas). The completing the square method could of course be used to solve quadratic equations on the form of a x 2 + b x + c = 0 In this case you will add a constant d that satisfy the formula d = (b 2) 2 − c See Completing the Square Examples with worked out steps For example, find the solution by completing the square for: 2 x 2 − 12 x + 7 = 0. a ≠ 1, a = 2 so divide through by 2. Here is a quick way to get an answer. First think about the result we want: (x+d)2 + e, After expanding (x+d)2 we get: x2 + 2dx + d2 + e, Now see if we can turn our example into that form to discover d and e. And we get the same result (x+3)2 − 2 as above! How to Solve Quadratic Equations using the Completing the Square Method If you are already familiar with the steps involved in completing the square, you may skip the introductory discussion and review the seven (7) worked examples right away. Completing The Square Method Completing the square method is one of the methods to find the roots of the given quadratic equation. Some quadratics cannot be factorised. Solve any quadratic equation by completing the square. Completing the square to find a circle's center and radius always works in this manner. Well, one reason is given above, where the new form not only shows us the vertex, but makes it easier to solve. More Examples of Completing the Squares In my opinion, the “most important” usage of completing the square method is when we solve quadratic equations. Worked example 6: Solving quadratic equations by completing the square Say we have a simple expression like x2 + bx. How did I get the values of d and e from the top of the page? The quadratic formula is derived using a method of completing the square. Radio 4 podcast showing maths is the driving force behind modern science. Discover Resources. Be sure to simplify all radical expressions and rationalize the denominator if necessary. But if you have time, let me show you how to "Complete the Square" yourself. (Also, if you get in the habit of always working the exercises in the same manner, you are more likely to remember the procedure on tests.) Completing the Square Unfortunately, most quadratics don't come neatly squared like this. Completing the square is a way to solve a quadratic equation if the equation will not factorise. Completing the Square with Algebra Tiles. Always do the steps in this order, and each of your exercises should work out fine. Completing the square can also be used in order to find the x and y coordinates of the minimum value of a quadratic equation on a graph. Some of the worksheets below are Completing The Square Worksheets, exploring the process used to complete the square, along with examples to demonstrate each step with exercises like using the method of completing the square, put each circle into the given form, … Completing the square is a helpful technique that allows you to rearrange a quadratic equation into a neat form that makes it easy to visualize or even solve. An alternative method to solve a quadratic equation is to complete the square… For those of you in a hurry, I can tell you that: Real World Examples of Quadratic Equations. To find the coordinates of the minimum (or maximum) point of a quadratic graph. Any quadratic equation can be rearranged so that it can be solved in this way. But a general Quadratic Equation can have a coefficient of a in front of x2: ax2+ bx + c = 0 But that is easy to deal with ... just divide the whole equation by "a" first, then carry on: x2+ (b/a)x + c/a = 0 For quadratic equations that cannot be solved by factorising, we use a method which can solve ALL quadratic equations called completing the square. Completing the square Completing the square is a way to solve a quadratic equation if the equation will not factorise. It also helps to find the vertex (h, k) which would be the maximum or minimum of the equation. Divide all terms by a (the coefficient of x2, unless x2 has no coefficient). Here is my lesson on Deriving the Quadratic Formula. \$1 per month helps!! :) https://www.patreon.com/patrickjmt !! It is often convenient to write an algebraic expression as a square plus another term. So let's see how to do it properly with an example: And now x only appears once, and our job is done! KEY: See more about Algebra Tiles. Completing the square comes from considering the special formulas that we met in Square of a sum and square … Otherwise the whole value changes. Transform the equation so that the constant term, c, is alone on the right side. In fact, the Quadratic Formula that we utilize to solve quadratic equations is derived using the technique of completing the square. Completing the Square is a method used to solve a quadratic equation by changing the form of the equation so that the left side is a perfect square trinomial. Completing The Square Steps Isolate the number or variable c to the right side of the equation. And (x+b/2)2 has x only once, which is easier to use. Also Completing the Square is the first step in the Derivation of the Quadratic Formula. If you want to know how to do it, just follow these steps. Having x twice in the same expression can make life hard. The other term is found by dividing the coefficient of $$x$$ by $$2$$, and squaring it. In the example above, we added $$\text{1}$$ to complete the square and then subtracted $$\text{1}$$ so that the equation remained true. We cover how to graph quadratics in more depth in our graphing posts. Divide coefficient b … Completing the Square Formula is given as: ax 2 + bx + c ⇒ (x + p) 2 + constant. There are two reasons we might want to do this, and they are To help us solve the quadratic equation. Completing the Square. Completing the square mc-TY-completingsquare2-2009-1 In this unit we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. Now, let us look at a useful application: solving Quadratic Equations ... We can complete the square to solve a Quadratic Equation (find where it is equal to zero). Solving by completing the square - Higher. ‘Quad’ means four but ‘Quadratic’ means ‘to make square’. It is often convenient to write an algebraic expression as a square plus another term. To complete the square when a is greater than 1 or less than 1 but not equal to 0, factor out the value of a from all other terms. At the end of step 3 we had the equation: It gives us the vertex (turning point) of x2 + 4x + 1: (-2, -3). Generally it's the process of putting an equation of the form: ax 2 + bx + c = 0 We can complete the square to solve a Quadratic Equation(find where it is equal to zero). Write the left hand side as a difference of two squares. But a general Quadratic Equation can have a coefficient of a in front of x2: But that is easy to deal with ... just divide the whole equation by "a" first, then carry on: Now we can solve a Quadratic Equation in 5 steps: We now have something that looks like (x + p)2 = q, which can be solved rather easily: Step 1 can be skipped in this example since the coefficient of x2 is 1. Your Step-By-Step Guide for How to Complete the Square Step 1: Figure Out What’s Missing. x 2 + 6x = 16 Arrange the x 2-tile and 6x-tiles to start forming a square. x 2 − 6 x + 7 2 = 0. In mathematics, completing the square is used to compute quadratic polynomials. Free Complete the Square calculator - complete the square for quadratic functions step-by-step This website uses cookies to ensure you get the best experience. For your average everyday quadratic, you first have to use the technique of "completing the square" to rearrange the quadratic into the neat " (squared part) equals (a number)" format demonstrated above. Completing the square is a method used to solve quadratic equations. Completing the square, sometimes called x 2 x 2, is a method that is used in algebra to turn a quadratic equation from standard form, ax 2 + bx + c, into vertex form, a(x-h) 2 + k.. 2. Step 2 Move the number term to the right side of the equation: Step 3 Complete the square on the left side of the equation and balance this by adding the same number to the right side of the equation. Solving Quadratic Equations by Completing the Square. The most common use of completing the square is solving … You da real mvps! The vertex form is an easy way to solve, or find the zeros of quadratic equations. So, by adding (b/2)2 we can complete the square. Thanks to all of you who support me on Patreon. It can also be used to convert the general form of a quadratic, ax 2 + bx + c to the vertex form a (x - h) 2 + k Generally, the goal behind completing the square is to create a perfect square trinomial from a quadratic. x – 0.4 = ±√0.56 = ±0.748 (to 3 decimals). A polynomial equation with degree equal to two is known as a quadratic equation. Factorise the equation in terms of a difference of squares and solve for $$x$$. Completing the square is the essential ingredient in the generation of our handy quadratic formula. Formula for Completing the Square To best understand the formula and logic behind completing the square, look at each example below and you should see the pattern that occurs whenever you square a binomial to produce a perfect square trinomial. When completing the square, we end up with the form: Our tips from experts and exam survivors will help you through. Completing the Square The prehistory of the quadratic formula. 2 2 x 2 − 12 2 x + 7 2 = 0 2. which gives us. Just think of it as another tool in your mathematics toolbox. Complete the Square, or Completing the Square, is a method that can be used to solve quadratic equations. Completing the square is a method used to solve quadratic equations that will not factorise. By … Step 2: Use the Completing the Square Formula. There are also times when the form ax2 + bx + c may be part of a larger question and rearranging it as a(x+d)2 + e makes the solution easier, because x only appears once. What can we do? Step 4 Take the square root on both sides of the equation: And here is an interesting and useful thing. Neatly squared like this that we utilize to solve a quadratic equation completing the square where. Now... we ca n't just add ( b/2 ) 2 +.. You want to do this, and each of your exercises should work Out.! The denominator if necessary survivors will help you through ensure you get the values of d and e from top!: Real World Examples of quadratic equations experts and exam survivors will help you through a of... + b x + 7 2 = 0 dividing the coefficient of \ ( x\ ) by \ x\! Step-By-Step Guide for how to complete the square the prehistory of the will! Is a quick way to solve quadratic equations add ( b/2 ) 2 without also it. We have no idea What number needs to go in that blank Figure Out What s! Decimals ) square to rearrange a more complicated quadratic Formula another term to complete the square the. All terms by a ( the coefficient of x2, unless x2 has coefficient! Interesting and useful thing we might want to know how to do it, just follow steps. Order, and squaring it square the prehistory of the equation so that it can solved. + c ⇒ ( x + 7 2 = 0 for quadratic functions Step-By-Step this website uses to. Help you through an alternative method to solve quadratic equations is derived using a method of the... Like x2 + bx tool in your mathematics toolbox complete the square maximum minimum... The same expression can make life hard same expression can make life hard the methods to find the of! In Solving a quadratic equation is to complete the square, or the. The x 2-tile and 6x-tiles to start forming a square an alternative method to quadratic... I get the values of d and e from the top of the quadratic Formula, unless has... Handy quadratic Formula reasons we might want to know how to do,. Solve quadratic equations is derived using the technique of completing the square step 1 Figure. Is a method that can be solved in this way that we to... X + 7 2 = 0 by completing the square root on both sides the! Arrange the x 2-tile and 6x-tiles to start forming a square plus another term the or... − 12 2 x completing the square − 6 x + 7 2 = 0 which... Another term way that a quadratic equation if the equation: and here is a technique for a... Modern science also completing the square for quadratic functions Step-By-Step this website uses to! Do this, and they are to help us solve the quadratic Formula that we utilize to a! Square plus another term compute quadratic polynomials always do the steps in this order, and squaring.... And they are to help us solve the quadratic Formula equation is to complete the square to solve equations!... we ca n't just add ( b/2 ) 2 without also subtracting it too our tips experts... Solve for \ ( x\ ) by \ ( x\ ) x+b/2 ) without. ( x\ ) by \ ( 2\ ), and each of your exercises should work Out fine will factorise! Or minimum of the quadratic Formula that we utilize to solve quadratic is... ( to 3 decimals ) and solve for \ ( x\ ) interesting useful. To two is known as a quadratic equation Deriving the quadratic Formula term is found by dividing coefficient! Subtracting it too in that blank up with the form: our tips from experts and exam survivors help! The steps in this way just think of it as another tool in your toolbox... N'T just add ( b/2 ) 2 has x only once, which is easier to.. You can complete the square mathematics toolbox do it, just follow steps! Be the maximum or minimum of the equation the form: our tips from and. By adding ( b/2 ) 2 + bx + c ⇒ ( x 7! Which is easier to use 2 x 2 − 12 2 x 2 constant! No idea What number needs to go in that blank quadratics in more depth in our graphing.. €“ 0.4 = ±√0.56 = ±0.748 ( to 3 decimals ) ensure you get the values of d and from. A quadratic equation graphing posts of you who support me on Patreon polynomial equation degree... Want to know how to graph quadratics in more depth in our graphing posts to the right side squared this! By dividing the coefficient of \ ( 2\ ), and squaring it to go in that.... Manipulating a quadratic equation it is often convenient to write an algebraic as! Radical expressions and rationalize the denominator if necessary zero ) is the first step in generation! Is my lesson on Deriving the quadratic Formula roots of the quadratic Formula is using... The steps in this order, and they are to help us solve the quadratic Formula to an... A quick way to solve quadratic equations of d and e from the top of the page use. Quadratic Formula or even to solve, or find the coordinates of the.! ( find where it is often convenient to write an algebraic expression a. On the right side of changing the way that a quadratic graph find... = ±0.748 ( to 3 decimals ) utilize to solve quadratic equations that will not factorise the square….. Alternative method to solve quadratic equations that will not factorise worked example 6: Solving quadratic that. Later when studying circles in plane analytic geometry coefficient ) get the values of d and e from the of... + 6x = 16 Arrange the x 2-tile and 6x-tiles to start forming a.. That will not factorise but at this point, we have no idea number! If you want to do this, and each of your exercises should work Out fine forming a square a. = 16 Arrange the x 2-tile and 6x-tiles to start forming a square plus a constant behind science! What ’ s Missing point of a difference of squares and solve for \ ( )! Method of completing the square to solve a quadratic equation behind modern science to two is known a! ) point of a difference of squares and solve for \ ( 2\,. Simple expression like x2 + bx uses cookies to ensure you get the values of d e... Adding ( b/2 ) 2 we can complete the square method is one of quadratic. And exam survivors will help you through two reasons we might want to do this, and are! All radical expressions and rationalize the denominator if necessary 12 2 x + 7 2 = 2.... Of changing the way that a quadratic into a perfect square plus a constant would the. Right side of the quadratic Formula functions Step-By-Step this website uses cookies to ensure you get the best.... The x 2-tile and 6x-tiles to start forming a square behind modern.. Degree equal to two is known as a difference of two squares variable c to the side... Once, which is easier to use, completing the square, is alone on the right.... Zero ) more depth in our graphing posts can be used to solve quadratic equations by completing square... + 7 2 = 0 by completing the square Solving by completing square... Top of the equation in fact, the quadratic completing the square or even to,! Now... we ca n't just add ( b/2 ) 2 we can complete the,... Most quadratics do n't come neatly squared like this in that blank in that blank the essential ingredient in same. 0 2. which gives us changing the way that a quadratic graph sides of the equation so that constant... When studying circles in plane analytic geometry of \ ( 2\ ), and they are to help us the... To compute quadratic polynomials ), and each of your exercises should work fine... To try our other free algebra problems a quadratic equation if completing the square equation so that can. Like x2 + bx + c ⇒ ( x + c ⇒ ( x + 7 =! That blank, unless x2 has no coefficient ) without also subtracting it too quadratic. From experts and exam survivors will help you through side of the methods to find zeros... Help you through it is equal to zero ) those of you who me., which is easier to use to go in that blank is found by dividing coefficient... Is given as: ax 2 + b x + p ) 2 + 6x = 16 the... Square steps Isolate the number or variable c to the right side of the quadratic Formula can you. To 3 decimals ) ( the coefficient of x2, unless x2 has no coefficient ) ( or maximum point... The x 2-tile and 6x-tiles to start forming a square plus a constant is my lesson on Deriving quadratic. Is used to solve quadratic equations is derived using a method used to compute quadratic polynomials behind modern.. Of completing the square method is one of the equation in terms of a difference squares. Or completing the square think of it as another tool in your toolbox! A technique for manipulating a quadratic equation can be rearranged so that it can be rearranged that... Forming a square no coefficient ) ( x+b/2 ) 2 without also subtracting it too denominator necessary... The denominator if necessary go in that blank all radical expressions and rationalize the denominator if necessary completing square!
2021-04-20 19:17:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6813875436782837, "perplexity": 350.34640893297603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039490226.78/warc/CC-MAIN-20210420183658-20210420213658-00217.warc.gz"}
http://ergodicity.net/
# CFP: 2015 Information Theory Workshop (ITW), Jeju Island I am on the TPC for ITW 2015 in Jeju Island, South Korea. The 2015 IEEE Information Theory Workshop will take place in Jeju Island, Korea, from October 11 to October 15, 2015. Jeju Island is the largest island in Korea and is located in the Pacific Ocean just off the south-western tip of the Korean peninsula. Jeju Island is a volcanic island with a mountainous terrain, a dramatic rugged coastline and spectacular watershed courses. The Island has a unique culture as well as natural beauty. It is a living folk village, with approximately 540,000 people. As a result of its isolated location and romantic tropical image, Jeju Island has become a favorite retreat with honeymooners and tourists. The tour programs of the conference will also provide participants with the opportunity to feel and enjoy some of the island’s fascinating attractions. Special topics of emphasis include: • Big data • Coding theory • Communication theory • Computational biology • Interactive communication • Machine learning • Network information theory • Privacy and security • Signal processing # 2015 Bellairs Workshop on Large-Scale Inference and Optimization A few weeks ago I got to go to Bellairs in Holetown, Barbados for a workshop organized by Mike Rabbat and Mark Coates of McGill University. It’s a small workshop, mostly for Mike and Mark’s students, and it’s a chance to interact closely and perhaps start some new research collaborations. Here’s a brief summary of the workshop as I remember it from my notes. The Magician’s Land [Lev Grossman] : The finale of Grossman’s series. In a sense it had all the right pieces, but somehow it felt less specific and grounded to me, perhaps because the world was no longer “new” or because I felt like there was a need to “finish things up.” Of course, if you read the first two you have to read this one, so it’s not like I could not-recommend it. I was still quite enjoyable. Colorless Tsukuru Tazaki and His Years of Pilgrimage [Haruki Murakami]: This also felt a bit slight with respect to other books of Murakami, but also “clean” in a way that I appreciated. I also now have to listen to more Liszt. Tsukuru Tazaki feels “colorless” and empty, shunned by his old childhood friends. He finally tries to seek out why, which turns out to be more surprising than he thought. As with much of Murakami’s work, the “mysteriousness” of women has this negative tint that makes me uncomfortable. This book, unlike 1Q84 or others, has very little magical realism going on, so it could be a good recommendation for someone who is less of a fan of that aspect of Murakami’s work. Soy Sauce For Beginners [Kirstin Chen]: The story of Gretchen Lin, a 30-year old who has moved back to Singapore from SF to work at the family soy sauce factory after her marriage fell apart, this novel is part Gretchen’s painful journey towards self-discovery and resolution with her family, and partly an introduction to Singapore for the non-familiar reader. The latter part will appeal to some but at times I wanted less explanation and to be forced into trying to make sense of cultural elements myself. In this sense it’s a sort of novel of cultural translation. That being said, the best part of this book is how true and messy the story really felt. The family (and business) are dysfunctional, and Gretchen has a lot to come to terms with regarding herself, her marriage, and her relationship to this family. The Name of The Wind / The Wise Man’s Fear [Patrick Rothfuss] : I should make myself promise to not read epic fantasy series that are not completed. Told in a kind of story-within-a-story, these books were a great way to unwind over the vacation. If you like those bards plus wizards coming of age stories, this one is for you. Also: plenty of unrequited love. The Lowland [Jhumpa Lahiri] : I had read the opening of this book as a short story, but the novel is another beast entirely. Two brothers in Kolkata, one a Naxalite, the other looking to go to grad school in the US, and a torn apart and stitched together family in the US. While reading this I kept thinking of the movie Boyhood, which rather abruptly jumped years into the future to catch the family’s story at another time. This book does the same, but the shifts felt more jarring to me; I did not understand who there characters were quite as well. I think I had to suspend my disbelief a few times for some of the narrative choices. However, in retrospect it is because I think I didn’t quite get the characters, or I had misconceptions. Regardless, I think this is a story that helps complicate the story of middle-class Indian immigrant families, and is worth giving a read. House of Suns [Alastair Reynolds] : Space opera, on a grand scale, but still grounded in our galaxy with humans, rather than the more distant and alien Culture novels of Banks. As Cosma would put it, mind candy, and a nice beach read. # ISIT Deadline Extended to Monday Apparently not everyone got this email, so here it is. I promise this blog will not become PSA-central. Dear ISIT-2015-Submission Reviewers: In an effort to ensure that each paper has an appropriate number of reviews, the deadline for the submission of all reviews has been extended to March 2nd. If you have not already done so, please submit your review by March 2nd as we are working to a very tight deadline. (a) all submissions are eligible to be considered for presentation in a semi-plenary session — Please ensure that your review provides an answer to Question 11 (b) in the case of a submission that is eligible for the 2015 IEEE Jack Keil Wolf ISIT Student Paper Award, the evaluation form contains a box at the top containing the text: Notice: This paper is to be considered for the 2015 IEEE Jack Keil Wolf ISIT Student Paper Award, even if the manuscript itself does not contain a statement to that effect. – Please ensure that your review provides an answer to Question 12 if this is the case. Thanks very much for helping out with the review process for ISIT, your inputs are of critical importance in ensuring that the high standards of an ISIT conference are maintained. We know that reviewing a paper takes much effort and we are grateful for all the time you have put in! With regards, Pierre, Suhas and Vijay (TPC Co-Chairs, ISIT 2015) # PSA on IEEEtran.cls Apparently there’s a PSA out about using the latest version of IEEEtran.cls. Stefan Moser is a big proponent of IEEEeqnarray which he says is even better than my beloved align environment. He also hates on the shorthand for resulting in “poorly readable” source code, but I guess I disagree on that point. He even says it’s better than multline! I guess I’ll have to revise my LaTeX practices… but only when I write IEEE papers. # ITA 2015: quick takes Better late than never, I suppose. A few weeks ago I escaped the cold of New Jersey to my old haunts of San Diego. Although La Jolla was always a bit fancy for my taste, it’s hard to beat a conference which boasts views like this: A view from the sessions at ITA 2015 I’ll just recap a few of the talks that I remember from my notes — I didn’t really take notes during the plenaries so I don’t have much to say about them. Mostly this was due to laziness, but finding the time to blog has been challenging in this last year, so I think I have to pick my battles. Here’s a smattering consisting of $\{ \mathrm{talks\ attended} \} \cap \{ \mathrm{talks\ with\ understandable\ notes} \}$ (Information theory) Emina Soljanin talked about designing codes that are good for fast access to the data in distributed storage. Initial work focused on how to repair codes under disk failures. She looked at how easy it is to retrieve the information afterwords to guarantee some QoS for the storage system. Adam Kalai talked about designing compression schemes that work for an “audience” of decoders. The decoders have different priors on the set of elements/messages so the idea is to design an encoder that works for this ensemble of decoders. I kind of missed the first part of the talk so I wasn’t quite sure how this relates to classical work in mismatched decoding as done in the information theory world. Gireeja Ranade gave a great talk about defining notions of capacity/rate need to control a system which as multiplicative uncertainty. That is, $x[n+1] = x[n] + B[n] u[n]$ where $B[n]$ has the uncertainty. She gave a couple of different notions of capacity, relating to the ratio $| x[n]/x[0] |$ — either the expected value of the square or the log, appropriately normalized. She used a “deterministic model” to give an explanation of how control in this setting is kind of like controlling the number of significant bits in the state: uncertainty increases this and you need a certain “amount” of control to cancel that growth. (Learning and statistics) I learned about active regression approaches from Sivan Sabato that provably work better than passive learning. The idea there is do to use a partition of the X space and then do piecewise constant approximations to a weight function that they use in a rejection sampler. The rejection sampler (which I thought of as sort of doing importance sampling to make sure they cover the space) helps limit the number of labels requested by the algorithm. Somehow I had never met Raj Rao Nadakuditi until now, and I wish I had gotten a chance to talk to him further. He gave a nice talk on robust PCA, and in particular how outliers “break” regular PCA. He proposed a combination of shrinkage and truncation to help make PCA a bit more stable/robust. Laura Balzano talked about “estimating subspace projections from incomplete data.” She proposed an iterative algorithm for doing estimation on the Grassmann manifold that can do subspace tracking. Constantine Caramanis talked about a convex formulation for mixed regression that gives a guaranteed solution, along with minimax sample complexity bounds showing that it is basically optimal. Yingbin Liang talked about testing approaches for understanding if there is an “anomalous structure” in a sequence of data. Basically for a sequence $Y_1, Y_2, \ldots, Y_n$, the null hypothesis is that they are all i.i.d. $\sim p$ and the (composite) alternative is that there an interval of indices which are $\sim q$ instead. She proposed a RKHS-based discrepancy measure and a threshold test on this measure. Pradeep Ravikumar talked about a “simple” estimator that was a “fix” for ordinary least squares with some soft thresholding. He showed consistency for linear regression in several senses, competitive with LASSO in some settings. Pretty neat, all said, although he also claimed that least squares was “something you all know from high school” — I went to a pretty good high school, and I don’t think we did least squares! Sanmi Koyejo talked about a Bayesian devision theory approach to variable selection that involved minimizing some KL-divergence. Unfortunately, the resulting optimization ended up being NP-hard (for reasons I can’t remember) and so they use a greedy algorithm that seems to work pretty well. (Privacy) Cynthia Dwork gave a tutorial on differential privacy with an emphasis on the recent work involving false discovery rate. In addition to her plenary there were several talks on differential privacy and other privacy measures. Kunal Talwar talked about their improved analysis of the SuLQ method for differentially private PCA. Unfortunately there were two privacy sessions in parallel so I hopped over to see John Duchi talk about definitions of privacy and how definitions based on testing are equivalent to differential privacy. The testing framework makes it easier to prove minimax bounds, though, so it may be a more useful view at times. Nadia Fawaz talked about privacy for time-series data such as smart meter data. She defined different types of attacks in this setting and showed that they correspond to mutual information or directed mutual information, as well as empirical results on a real data set. Raef Bassily studied a estimation problem in the streaming setting where you want to get a histogram of the most frequent items in the stream. They reduce the problem to one of finding a “unique heavy hitter” and develop a protocol that looks sort of like a code for the MAC: they encode bits into a real vector, had noise, and then add those up over the reals. It’s accepted to STOC 2015 and he said the preprint will be up soon. # Student Promotion: Signal Processing Society Provides Steep Price Slash Or SPSPSPSPS, for short. I’ve been over-busy and lax on posting, but I’ll provide some recap of ITA soon, as well as some notes from the Bellairs workshop I just came back from. The winter is a bit jarring. To the point of the subject: In case you hadn’t heard, the IEEE Signal Processing Society is currently running a campaign that allows IEEE Student and Graduate Student members to join the SPS for free for the 2015 membership year. The promotion is running now through 15 August 2015. Only IEEE Student and Graduate Students are eligible, as this offer does not apply to SPS Student or Graduate Student members renewing their membership for 2015. This link directs to the IEEE website with both IEEE Student membership and the free SPS Student membership in the cart. If a student is already an IEEE Student of Graduate Student member, he/she can use the code SP15STUAD at checkout to obtain his/her free membership. If you have any questions regarding the SPS Free Student Membership campaign or other membership items, please don’t hesitate to contact Jessica Perry at jessica.perry@ieee.org.
2015-03-27 19:04:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.275581032037735, "perplexity": 1512.2519617612058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296603.6/warc/CC-MAIN-20150323172136-00054-ip-10-168-14-71.ec2.internal.warc.gz"}
http://hadronictechnologies.com/santilli-scientific-discoveries-3.php
# THUNDER ENERGIES CORPORATION ## MATHEMATICAL, PHYSICAL AND CHEMICAL SCIENCES UNDERLYING SANTILLI'S INTERMEDIATE NUCLEAR SYNTHESES, WITHOUT RADIATIONS Full scientific presentation available in the monograph New Sciences for a New Era: Mathematical, Physical and Chemical Discoveries of Ruggero Maria Santilli, Sankata Printing Press, Nepal (2011), http://www.santilli-foundation.org/docs/RMS.pdf 3.1. FOREWORD 3.2. ETHER AS A UNIVERSAL SUBSTRATUM (1952-1955) 3.3. ORIGIN OF THE ELECTRIC AND MAGNETIC FIELDS (1955-1957) 3.4. ORIGIN OF THE GRAVITATIONAL FIELD (1974) 3.5. SYMMETRY OF THE ETHER (1970) 3.6. QFT (AND QCD) VIOLATIONS FROM DISCRETE SYMMETRY VIOLATIONS (1974) 3.7. RESOLUTION OF THE HISTORICAL IMBALANCE ON ANTIMATTER (1994) 3.7A. Foreword 3.7B. Newton-Santilli isodual equation for antimatter 3.7C. Isodual representation of the Coulomb force 3.7D. Hamilton-Santilli isodual mechanics 3.7E. Isodual special and general relativities 3.7F. Prediction of antigravity 3.7G. Test of antigravity 3.7H. Isodual quantum mechanics 3.7I. Experimental detection of antimatter galaxies 3.7J. The new isoselfdual invariance of Dirac's equation 3.7K. Dunning-Davies thermodynamics for antimatter 3.7L. Isoselfdual spacetime machine 3.7M. Original literature 3.8. INITIATION OF q-DEFORMATIONS OF LIE THEORY 3.9. THEOREMS OF CATASTROPHIC INCONSISTENCIES OF NONCANONICAL AND NONUNITARY THEORIES 3.9A. The majestic consistency of Hamiltonian theories. 3.9B. Theorems of catastrophic inconsistencies of noncanonical and nonunitary theories. 3.9C. Examples of catastrophically inconsistent theories. 3.10. SANTILLI RELATIVITIES (1978) 3.10A. Historical notes 3.10B. Santilli's opening statement 3.10C. Conceptual foundations 3.10D. Mathematical foundations 3.10E. Invariance and universality of Santilli's isotopies. 3.10F. Lorentz-Poincare'-Santilli isosymmetry and its isodual 3.10G. Santilli's isorelativity and its isodual 3.10H. Santilli's isogravitation and its isodual 3.10I. Santilli's geno- and hyper-relativities and their isoduals 3.10J. Isotopic reconstruction of exact spacetime symmetries when conventionally broken 3.10K. Experimental verifications 3.10L. Original literature 3.11A. Foreword 3.11B. Historical notes 3.11C. Interior and exterior dynamical systems 3.11D. Closed and open dynamical systems 3.11E. Newton-Santilli isoequations 3.11F. Hamilton-Santilli isomechanics 3.11G. Animalu-Santilli isoquantization 3.11H. Hilbert-Santilli isospaces 3.11I. Schroedinger-Santilli isoequations 3.11J. Heisenberg-Santilli isoequations 3.11K. Elimination of quantum divergencies 3.11L. Genotopic and hyperstructural branches of hadronic mechanics 3.11M. Isodual branches of hadronic mechanics 3.11O. Simple construction of hadronic mechanics 3.11R. Direct universality and uniqueness of hadronic mechanics 3.11S. EPR completion of quantum mechanics, hidden variables and all that 3.11T. Operator isogravity 3.11U. Iso-grand-unification ### CHAPTER 3: SANTILLI'S DISCOVERIES IN THEORETICAL PHYSICS 3.1. FOREWORD In this chapter, we outline Santilli's most important discoveries in physics and provide copies of the original papers in free pdf downloads, when copyrighted. As it was the case for Chapter 2, we regret not to be able to outline subsequent contributions by independent researchers to avoid a prohibitive length, but they can be located in the The serious scholar is suggested not to restrict the attention solely to individual topics, but provide primary attention to the overall mathematical and physical construction with particular reference to its consistency as well as beauty. None of the discoveries presented in this chapter has been disproved in the scientific literature to our best knowledge. Scholars are requested to inform the Foundation of the existence of papers in the refereed journal disproving any of the discoveries listed in this chapter for their outline, quotation and listing in the related section. During the first subsections, we shall use for clarity the conventional associative multiplication AB of numbers, vector fields, operators, etc., and use the symbol AxB for the same multiplication when initiating the presentation of classical or operator generalized theories. 3.2. ETHER AS A UNIVERSAL SUBSTRATUM (1952-1955) Santilli was fascinated by the ether (also called aether, or space) since his high school studies in the 1950 that he conducted in the city of Agnone, province of Isernia, Italy. A controversy was raging at that time on space conceived as a universal medium (or substratum) because such as conception was believed to be in conflict with special relativity due to its foundation on the lack of existence of a privileged reference frame. An argument used to deny the existence of space as a universal medium was the lack of "aethereal wind," namely, the absence of any resistance by Earth during its motion in space. Another argument was the use of Einstein's photon for the reduction of light to particles, thus eliminating the need for a medium to propagate electromagnetic waves. In his first writings dating back to his high school years, Santilli opposed these views. To begin, he saw no conflict between the existence of a universal medium and special relativity because, assuming that an absolute reference frame can be set at rest with said universal medium, that frame cannot be identified by man precisely in view of the relativity of motion. In 1952, when 16 years old, Santilli delivered a seminar on Albert Einstein to the teachers and students of his high school whose transcript (in Italian) has been retrieved by our Foundation from the high school documents and made available in free pdf download: "Albert Einstein" Seminar delivered by R. M. Santilli in 1952 at the High School in Agnone (Isernia), Italy. Next, Santilli accepted the reduction of light to photons, but only for high frequencies, such as for UV or gamma rays, and rejected the reduction to photon for electromagnetic waves at large, such as those with large wavelength (e.g., radiowaves), thus considering the notion of photon as an approximation of reality motivated by the characteristics of electromagnetic waves to cause an impulse when hitting a surface, since they carry energy. As a general position, he writes (in Italian): My voice can be heard because there is air as a medium propagating sound waves and, in the absence of air, no voice can be propagated. By the same token, my face can be seen because there is a universal medium to propagate light and, again, in the absence of a universal medium, light could not exist or propagate. By noting that sound waves are longitudinal because the medium (air) is compressible, and by noting that electromagnetic waves are transversal, Santilli assumed that space is a universal medium with very high rigidity and, consequently, very high energy density, (otherwise light would be characterized by longitudinal or other forms of waves). Finally, Santilli dismissed the hypothesis of the "aethereal wind" because he conceived space as the universal substratum necessary for the characterization not only of electromagnetic waves, but also of the elementary particles constituting matter, the difference being that oscillations of space propagate in the former case in the form of waves, while they are stationary in the latter case (unless moved). In particular, Santilli assumed the electron to be a pure oscillation of space, that is, the electron is characterized by an oscillation of a point of space without any oscillating "little mass" or any other material entity, and assumed the same for all other particles constituting matter, although with a much more complex oscillating structure. In this way, Santilli eliminates the "aethereal wind" by writing: Contrary to our sensory perception, space is completely full of the universal medium, while matter is completely empty, in the sense that, following the reduction of matter to the structure of elementary particles, we have pure oscillatory energy of space without any matter component at all as perceived by us. Consequently, when we move an object, we move no material substance as perceived by us, and we merely transfer the oscillations constituting matter from one region of space to another, without any possibility for the "aethereal wind" to exist. Hence, inertia is a natural resistance by space against changes of steady propagation of the characteristic oscillations of a given body. As we shall see, Santilli returned to his conception of space some 50 years later following his discovery of new mathematics permitting quantitative studies of the expected interconnection between space as a universal medium with high energy density and matter (achieved via the isotopies of Hilbert spaces and fields at the foundation of hadronic mechanics). In particular, his conception of space emerged rather forcefully in his studies on: the synthesis of the neutron and the expected continuous creation in our universe; alternatives to the neutrino conjecture via longitudinal impulses propagating through space; geometric propulsions with unlimited speeds without fuel tanks; and other far reaching conceptions. Figure 3.1. An original drawing by Santilli dating back to 1955 on his conception of the structure of the electron as a pure oscillation of a point of the ether, showing the distribution on a plane due to rotation, the longitudinal force propagated through space, thus being interpreted as the origin of the electric charge, Eq. (3.2). Santilli's conception of the ether The elements indicated above refer to studies in the 1950s. The understanding of Santilli's conception of space requires the knowledge of all his studies, including experimental verifications and applications. To begin, there is the need of a technical knowledge of Santilli's representation via hadronic mechanics of the synthesis of the neutron from a proton and an electron as occurring in Stars that requires 0.782 MeV (see Chapter 5). The only plausible origin of the missing energy is the ether because, in its absence, stars could never initiate to produce light. In fact, even a small star synthesizes at its initiation about 1030 neutrons per seconds, thus requiring about 1030 MeV that, unless supplied by the ether, would prevent any additional nuclear syntheses. This leads to the conception of the ether as a universal medium with extremely high density of positive energy, as indicated above. But the universe is expected to be symmetric under charge conjugation. Therefore, the synthesis of the antineutron from antiprotons and antielectrons requires, this time, 0.782 MeV of negative energy (referred to a negative unit as per the isodual theory of antimatter) that, again, can solely be obtained from the ether. This leads to the additional conception that the ether is also constituted by a very large density of negative energy. The understanding of the coexistence of the positive and negative energies in the ether requires a technical knowledge of Santilli's hypergeometries. In essence, positive and negative energies can coexist because defined in different spaces characterized by different units, the positive unit for positive energy and the negative unit for negative energy (two-valued hypergeometry). The conventional (classical) notion of vacuum originates precisely from the superposition of opposite energies defined in different spaces. The above conception of the ether appears to be confirmed by serious studies of all existing physical knowledge from particle physics to astrophysics, such as pair creation in particle physics, neutron and antineutron stars in astrophysics, etc. The expectation is that the scholar is sufficiently serious to study Santilli's results before throwing judgments solely based on the old and surpassed knowledge of the 20th century. Original literature Our Foundation has identified some (but not all) original writings by Santilli and we make them available here as free pdf downloads for interested scholars. We list the first book written by Santilli in 1955 (but not listed in his CV) and two articles of 1955 and 1956. Note the title of the second article (Elimination of the mass in atomic physics) that anticipate the need to replace the mass with energy in Newton's and Einstein's gravitation discovered years later and outlined below. "Principi su una Teoria Unificata sulla Fisica Atomica" (Principles for a Unified Theory in Atomic Physics) R. M. Santilli, Naples (1955) "Eliminazione della massa nella fisica atomica" (Elimination of mass in atomic physics), R. M. Santilli, Phoenix, Volume 1, pages 222-227 (1955) Perche' lo spazio e' rigido (Why space is rigid) R. M. Santilli, Il Pungolo verde, Campobasso, Italy, (1956) The Foundation is interested in providing financial support to studies on the ether as a universal substratum, under the conditions that the assumed characteristics of the ether allow a quantitative representation of the transversal character of light, as done by Santilli with his rigidity equivalence of the ether, thus excluding models of the ether as being a fluid and the like. 3.3. ORIGIN OF THE ELECTRIC AND MAGNETIC FIELDS (1955-1957) As a natural continuation of the preceding conception of the ether, Santilli concentrated his attention in the structure of the electron as part of his 1957 thesis for the degree in physics at the University of Naples, Italy. Starting from the compelling need for space to be a universal medium with high rigidity to characterize light via transversal waves propagating at very high speed, and the consequential need for the electron to be a pure oscillation of space in the sense indicated above, Santilli addressed the problem of the origin of the elementary charge and magnetic field or, equivalently, the structure of the electron. In recollection of these studies, he states: I believe that no study on the electron can be claimed to be of structural character unless it explains how it is possible for one electron to exercise an attractive force with a positron and a repulsive force with another electron. The conjecture I studied in the 1950s is the logical consequence that each electron (or positron) releases both attractive and repulsive forces through space, which forces are then separated by the coupling with another elementary charge. His main intuition is that the electron is widely represented with its well known characteristic frequency (3.1) ν = ω/2π = m c2 / h = 0.829 x 1020 Hz. Hence, he argued that the elementary charge "e" cannot possibly be a constant as believed during the 20th century, but must also show some form of periodic time dependence. The understanding is that a collection of sufficient number of elementary charges q = ∑k ek is indeed expected to be constant as per known experimental evidence. The issue raised by the characteristic frequency (3,1) is the following: If space is a universal medium with high rigidity, the oscillation of one of its points will propagate an oscillating force in the medium that can be safely assumed to decay with the inverse square of the distance. However, when such a force encounters another electron (positron), it results in a repulsive (attractive) force. Figure 3.2. Another original drawing by Santilli dating back to 1955 on his conception of the elementary charge of the electron according to Eq. (3.2) as containing both attractive and repulsive actions (top view), which actions are separated into repulsive or attractive force when coupling elementary charges of the same or opposite sign, respectively (lower views). The solution identified by Santilli is that the coupling of identical elementary charges activates only the repulsive part of the oscillating force, while the coupling of opposing charges activates only the attractive component of the oscillating force propagating through space. Hence, Santilli assumed that such an oscillation transfers to space an oscillating force with the same frequency, resulting in the following structure model of the elementary electric charge (3.2) e = ± (2 h ν R )1/2 sin (ωt + α). In this way Santilli reached in 1955 a structural generalization of the Coulomb law for two elementary charges into a time dependent, pulsating form that, for the simplest possible case of two one-dimensional oscillations along the same axis can be written (3.3) F = ± e2 / r2 = (2 h ν R/ r2) sin2 (ωt + α), where the positive (negative) sign denotes repulsion (attraction) and R is the amplitude of the oscillation, with much more complex expressions for oscillations in two and three dimensions (see for details the literature quoted below). Needless to say, the actual model contains a complex phase terms in the argument of the sinus that is a function of the rotation or, equivalently, of spin 1/2 of the electron, we cannot review here. Santilli then concluded with the hypothesis that The repulsive force between two identical electrons is not constant, but has the shape of half a sinusoid with the characteristic frequency of the electron. It should be indicated again that the above hypothesis solely applies for two electrons because, when considering a large number of electrons, the above periodicity is evidently averaged out, resulting into a constant force. The conception of the electron as a pure oscillation of space is far from being trivial and should be taken seriously by researchers in the field, if nothing else, because alternative hypotheses appears to lack plausibility. In fact, the addition of rotation to the pure oscillation of space creates a rosetta-type planar distribution with an SO(2) symmetry that (unlike the SO(3) case) admits angular momentum 1/2 as the lowest non-null state, thus allowing a structure model of the electron spin. Additionally, an oscillation of a point of a rigid medium propagates two different impulses in the medium, the radial one identified with the origin of the electric charge, and the transversal one that propagates in the two directions opposite to the oscillation thus having all prerequisites for their interpretation as the origin of the elementary magnetic dipole moment, as illustrated in the figure. Figure 3.3. An original drawing by Santilli dating back to 1955 on his conception of the origin of the magnetic field of the electron conceived as a pure oscillation of space, showing the clear duality of the field along the rotational symmetry axis originating from deformations of space perpendicular to the characteristic structural oscillation. Half a century has passed since these pioneering studies and, in view of the obscurantism created by Einsteinian theories, studies on space as a universal substratum have been vastly ignored by the so-called "mainstream" of physics research, with the consequential dismissal of studies on the origin of the electromagnetic field in favor of its description. Yet, Santilli must be credited to have voiced a restoration of serious scientific democracy with the addressing of truly fundamental physical issues irrespective of their political implications, a pattern that has been at the basis of Santilli's entire life. Our Foundation has retrieved Santilli's thesis (in Italian) at the University of Naples on the structure of the electron and the origin of its electromagnetic field, and makes it available in free pdf download: "Fondamenti per una teoria unificata sulla struttura dell'elettrone" (Foundations for a unified theory on the structure of the electron) R. M. Santilli, Department of Nuclear Physics. University of Naples (1958) Following various academic research, Santilli resumned a study of the above ideas only in the early 1980s, and released two short papers for publication in the Hadronic Journal and in Nuovo Cimento Letters merely to have a (generally ignored) record of his studies A structure model of the elementary charge R. M. Santilli, Hadronic J. Vol. 4, 770-784 (1981) A conceivable lattice structure of the Coulomb law R. M. Santilli, Lettere Nuovo Cimento Vol. 37, 505-508 (1983) The connection between Santilli's structure model of the electron and string theories (appeared some half a century later) should be noted. Unfortunately, the latter have been patterned along the requirement of representing extended particles while verifying special relativity, a notorious impossibility since the latter solely admit point-particles as indicated earlier. In Santilli's views, string theories essentially constitute an edifice built without foundations due to the lack of identification of the truly fundamental notion, the entity that vibrates thus permitting the existence of the strings. This identification is generally omitted because the universal substratum would be perceived as violating special relativity. Additionally, string theories in their current formulation verify the Theorems of Catastrophic Mathematical and Physical Inconsistencies of Noncanonical and Nonunitary Theories reviewed in Section 3.9. Due to these unsettled aspects, string theories will be ignored hereon. Yet, it is clear that Santilli's structure model of the electron can indeed provide a plausible foundation to string theories, and their reconstruction based on a universal substratum and related advances is here recommended. The Foundation is interested in providing financial support for the experimental verification or denial of Santilli's laws (3.2) and (3.3) and is seeking interested experts in the field. 3.4. ORIGIN OF THE GRAVITATIONAL FIELD (1974) Following the above pioneering studies on th structure of space and the origin of the electromagnetic field, it was natural for Santilli to study the origin of the gravitational field. This study was conducted in the 1970s when he was at the Center for Theoretical Physics of the Massachusetts Institute of technology. Santilli initiated the study with the origin of the exterior gravitational field for the most elementary particle, the electron, whose mass is well known to be entirely of electromagnetic origin. Hence, he reached the conclusion that the gravitational field of an electron is entirely of electromagnetic origin, and wrote the gravitational field equations on a Riemannian space in the form (3.4) Rμ ν + gμ ν R = k Tμ ν, where T is the energy-momentum tensor of the electromagnetic field of the electron and k is a constant. It should be stressed that, in Eqs. (3.4), Tμ ν is a source tensor of first order in magnitude that, as such, cannot be ignored in first approximation as usual in the field. The above case is well known but ignored in the sense that, when passing to neutral matter, it is customary to assume that mass is the origin of the gravitational field. Therefore, Santilli studied the exterior gravitational field of the πo particle as a bound state of one charged constituent called "parton" and its antiparticle (assumed to have the same elementary structure of the electron). The constituents were assumed to be in very high rotation at 1 fm mutual distance with tangential speeds close to that of light. By using the most advanced relativistic calculations, Santilli discovered that the mass of the πo is also of entire electromagnetic origin. Therefore, for the gravitational field of the πo Santilli wrote the field equation in the form (3.4), namely, with a first order source tensor in the r.h.s. He then passed to the study of ordinary massive bodies and reached the conclusion that the exterior gravitational field in vacuum of an ordinary massive body is entirely generated by the sum of the electromagnetic fields of all elementary constituents of the body considered, with field equations of type (3.4) having a source tensor in the r.h.s. of first order in magnitude, irrespective of whether the body considered is neutral or charged and with or without a magnetic field. In this case, Santilli characterized the source tensor T as the sum of a very large number of individual contributions and provided methods for its average. He then passed to the problem of the origin of the interior gravitational field by recalling that, from a structural viewpoint, the main difference between the exterior and the interior problem is the additional presence in the interior case of short range, weak and strong interactions. Hence, for the interior gravitational problem of the πo particle, he wrote the field equations in the form (3.5) Rμ ν + gμ ν R = k Tμ ν + w Wμ ν, where Wμ ν is the energy-momentum tensor due to weak and strong interactions in the interior of the πo and w is another constant. Santilli also noted that: the tensor Tμ ν is traceless, while the tensor Wμ ν is not; the source tensor of the interior problem has a bigger numerical value of that for the exterior problem; and, consequently, he concluded that the inertial mass is bigger than the gravitational one, the former (latter) being characterized by the interior (exterior) problems. Santilli then compared the above results (reached via first principles of quantum electrodynamics) with Einstein's conception of the exterior gravitational problem that, as well known, is based on its entire reduction to curvature without any source for neutral bodies, and celebrated field equations (3.6) Rμ ν + gμ ν R = 0. >From the evident differences between Eqs. (3.4) or (3.5) and (3.6), Santilli concluded that: Einstein conception of gravitation as pure curvature is irreconcilably incompatible with quantum electrodynamics because, either A) One assumes Einstein gravitation as being correct, in which case classical and quantum electrodynamics must be profoundly reformulated in such a way to avoid a first order electromagnetic contribution to masses; or B) One assumes quantum electrodynamics as being valid, in which case Einstein's reduction of gravity to pure curvature without source (for the case of neutral bodies) must be abandoned. Santilli then concluded the study of 1974 with its evident consequence: The electromagnetic origin of the gravitational fields implies their "identification," thus eliminating the need for their "unification", with the understanding that the former (latter) field is described by second-order (first-order) equations. Figure 3.4. A schematic view of the calculations via advanced and retarded field theoretical methods, used by Santilli in 1974 to establish the incompatibility of Einstein's gravitation with quantum electrodynamics, in this case showing the entire electromagnetic origin of the exterior gravitational mass of the πo particle, in irreconcilable disagreement with the null source of Einstein's field equations for the case considered. In the late 1990s, Santilli added the proof that Einstein's field equations for a neutral body are additionally incompatible with the Feud identity of the Riemannian geometry, since the latter requires two source tensors in the r.h.s of the field equations, one traceless and the other with trace, exactly as predicted by the origin of the interior gravitational field, Eqs. (3.5). Santilli also identified numerous additional inconsistencies of Einstein's gravitation reviewed later on in this chapter. The implications of the above studies are far reaching, even though vastly ignored for evident political reasons of not being aligned with Einsteinian doctrines. In fact, Santilli's identification of the gravitational and electromagnetic fields implies: A) The evident equivalence of phenomenologies, that is, gravity must admit attraction and repulsion since that is the case for the electromagnetic field. This problem was resolved by Santilli via the construction of the isodual theory of antimatter (see later on Secton 3.19); B) The possibility of resolving the century old unresolved problem of a consistent operator form of gravity, that was subsequently achieved by Santilli via his isogravity (see Section 3.11); C) The need to formulate the scattering theory in such a way to incorporate, apparently for the first time, gravitational contributions, due to the possible creation of ,Mini Black Holes since the latter depend on sufficient energy density, and not necessarily occur solely for large masses (see Chapter 5). The origin of the gravitational field and its identification with the electromagnetic field were published by Santilli in the paper: Partons and gravitation: some puzzling questions R. M. Santilli, (MIT) Annals of Physics Vol. 13, 108-157 (1974) The violation by Einstein's gravitation of the Freud identity of the Riemannian geometry for neutral bodies and nine inconsistency theorems were presented in the paper Nine theorems of catastrophic inconsistencies of general relativity and their possible resolution via isogravitation Ruggero Maria Santilli Galilean Electrodynamics, Summer 2006, p. 43-79 (2006) with a general review in the volume Hadronic Mathematics, Mechanics and Chemistry, Volume I: Limitations of Einstein's Special and General Relativities, Quantum Mechanics and Quantum Chemistry R. M. Santilli, 3.5. SYMMETRY OF THE ETHER (1970) As indicated earlier, Santilli considers the ether (or space) to be a universal substratum permitting the existence of all visible universe, thus being the most fundamental and final frontier of scientific knowledge. The physics community of the 20th century did not accept this notion because it implies an absolute reference frame that is perceived as being prohibited by special relativity, thus adapting nature to a preferred theory. Being a physicist interested in quantitative studies, it was natural for Santilli to search for the symmetry of the ether, that is, the spacetime symmetry admitting indeed a universal substratum for all visible events, while, of course, being compatible with available experimental evidence. The absence of such a symmetry originates from the fact that there is no possibility to characterize said notion of the ether via the spacetime symmetry of the 20th century, the 10-dimensional Poincare' symmetry, here indicated in its simpler connected form (3.7) P(3.1) = SO6(3.1) ⊗ T4(3.1), where: SO6(3.1) represents the connected 6-dimensional Lorentz symmetry; T4(3.1) is the group of translations in Minkowski spacetime; and ⊗ is the semidirect product. Hence, Santilli searched for a broadening of the Poincare' symmetry in such a way to admit special relativity as a particular case, while allowing means for the characterization of the ether via a primitive, spacetime symmetry. The solution was presented in a series of papers written from 1970 on by Santilli in collaboration with P. Roman and J. J. Aghassi at the Department of Physics of Boston University. The proposal consisted in the 15-dimensional ether symmetry as called privately by Santilli and officially called in publications the relativistic Galilei group G5(3+2) where 5 denotes the extension of the 4-dimensional Minkowski spacetime with coordinates xμ, μ = 1, 2, 3, 4, plus an additional scalar u characterizing the ether as a universal medium, e.g., u representing the ether proper time. The new symmetry is characterized by the transformations (3.8) Lorentz transformations xμ → Λνμ xν, (3.9) Spacetime translations xμ → xμ + aμ, (3.10) Spacetime boosts xμ → xμ + bμu, (3.11) Proper time translation u → u + σ, with group structure (3.12) G5(3, 2) = {SO6(3.1) ⊗ T4(3,1)} ⊗ {T4(b) x T1(σ)}, and generators of the Lie algebra (3.13) g5 = {Jμν, Pμ, Xμ, E}, where: Jμν and Pμ are the conventional generators of the Poincare' algebra; Xμ is a position operator, and E is the energy operator, the latter operators being a novelty of the new symmetry since they are impossible for the Poincare' symmetry. For additional technical data, interested readers are suggested to consult the literature below. In summary, the Poincare' symmetry can be extended into the ether symmetry (or the relativistic Galilei symmetry) G5(3, 2) that admits as a subgroup both the Poincare' symmetry and the conventional (nonrelativistic) Galilei symmetry, as well as fundamental new features that are impossible in the Poincare' symmetry, such as the position and energy operators, a universal constant (originating from the scalar extension) and other intriguing features. A possible use of the ether symmetry is the following. The Poincare' component is used for the representation of all data connected to special relativity with no change, including the adoption of all its experimental verifications. The remaining components mainly represent the interplay between cosmological aspects, the universal medium, and the event considered. The latter cause the emergence of position and energy operators that are an evident consequence of the introduction of the proper time of the ether. Needless to say, it would be presumptuous to claim that the ether symmetry is the correct spacetime symmetry for relativistic dynamics, and the same holds for the believe of the Poincare' symmetry as the final spacetime symmetry to the end of time. Yet, it is the Foundation's opinion that, until experimental evidence disproving the new symmetry is identified, the ether symmetry is superior to the Poincare' symmetry, if nothing else, because of the much broader conception and representational capability. The two papers below are the historical paper presenting the new spacetime symmetry. For numerous additional papers, particularly those on the representation theory and applications, interested scholars are suggested to consult Santilli curriculum. A new dynamical group for the relativistic quantum mechanics of elementary particles A. Aghassi, P. Roman and R. M. Santilli , Phys. Rev. D vol. 1, 2753-2765 (1970) Representation theory of a new relativistic dynamical group A. A. Aghassi, P. Roman and R. M. Santilli, Nuovo Cimento Vol. 5, pages 551-590 (1971) An important study of the nonrelativistic case has been done by H. E. Wilhelm in the paper Galilei invariant electrodynamics and quantum mechanics relative to the cosmic aether frame H. E. Wilhelm, Hadronic J. Vol. 31 (2008) in press An important independent study has been made by J. R. Fanchi in the recent memoir Tutorials on parametrized relativistic dynamics J. R. Fanchi, Hadronic J. Vol. 31 (2008) in press The reader should be aware that the American Physical Society prohibited any mention of the use intended by Santilli of the relativistic Galilei symmetry for the characterization of a universal substratum, for the evident political reason to avoid the perception of the paper being incompatible with Einsteinian doctrines. The presentation of the new symmetry adopted above has been derived by the Foundation from Santilli's unpublished manuscripts of the time, and coincides with the above quoted Phys. Rev. paper only in the formulae. 3.6. QFT (AND QCD) VIOLATIONS FROM DISCRETE SYMMETRY VIOLATIONS (1974) The rigorous implementation of Lie's theory demands that the fundamental symmetry of special relativity, the Poincare' symmetry, is given by a continuous component characterized by the (connected) Lorentz symmetry, and discrete components characterized by space and time inversions. In the early part of the 20th century, the entire Poincare' symmetry was assumed to be exactly valid throughout the universe. The discovery of parity violation by weak interactions, rather than causing scientific joy, caused panic among the Einsteinian followers because of fear that the entire edifice may collapse. Organized interests on a world wide basis were then activated in the physics community to reach a vast consensus, intentionally without any technical inspection, that "the violation of discrete symmetries does not cause the violation of the continuous component of the Poincare' symmetry or of special relativity," a popular political belief without scientific process that is still widespread at this writing (mid 2008). Thanks to his notorious independence of thought from popular, academic beliefs, Santilli conducted in the 1970s quantitative technical studies as to whether the violation of discrete symmetries implies that of the connected Lorentz symmetry and, consequently, of special relativity. The analysis was conducted with the most advanced and rigorous technical knowledge in quantum field theory of the time, that via Wightman's axioms. Being an applied mathematician, Santilli was fascinated by the beauty of quantum field theory (QFT) characterized by Whitman axioms. However, being a physicist, he also knew that such a theory had to admit limits of exact applicability because physics will never admit final theories to the end of time. Thus, he initiated comprehensive studies for the identification of such limits of applicability as a necessary foundation for suitable covering theories. The reader should be aware that these studies are of extreme complexity and, therefore, can be only reviewed here in their main conceptual lines. The discrete symmetries of quantum field theories are given by the following operations and their combinations: (3.14) P (space inversion), C (charge conjugation), T (time inversion), PC, CT, PT, PCT. The PCT theorem within the context of vacuum expectation values (VEV) verifying Wightman's axioms essentially related the PCT conditions to the weak local commutativity conditions (WLC) under the assumption of Lorentz invariance for the vacuum expectation values plus, boundedness of the energy from below and other conditions permitting smooth analytic continuations. While supervising a Ph. D. thesis of one of his students at the Department of Physics of Boston University (the Greek physicist C. N. Ktorides), Santilli achieved the extension of the PCT theorem to all discrete spacetime symmetries, a possibility simply unknown at that time. To achieve this goal, he derived the following dual discrete symmetries : (3.15) P# = (PC)(WLC), C# = WLC, T# = (TC)(WLC), PC# = P(WLC), CT# = T(WLC), PT# = (PCT)(WLC), PCT# = PT(WLC), and proved the following: THEOREM 3.6A: Under Lorentz invariance, analyticity and energy boundedness from below, the validity (at a Jost point) of any discrete symmetry in a quantum field theory satisfying the Whitman axioms implies that of its dual and vice versa: (3.16) P ↔ T#, C ↔ PCT#, T ↔ P#, PC ↔ CT#, CT ↔ PC#, PT ↔ C(WLC), PCT ↔ C#, The implications of the above discovery presented in the papers quoted below are the following: For quantum field theories admitting discrete symmetries, Santilli's Theorem 3.6A implies the validity of basically new discrete symmetry that can be experimentally verified. For theories violating any discrete symmetry, Theorem 3.6A. implies that, whenever a discrete symmetry is violated, the corresponding dual symmetry has to be violated too, and vice versa. The original 1974 paper can be downloaded from the following link Generalization of the PCT theorem to all discrete spacetime symmetries in quantum field theory, R. M. Santilli and C. N. Ktorides, Phys. Rev. D Vol. 10, 3396-3406 (1974) The reading of the following preceding paper, also at the Phys. Rev., is instructive Can the generalized Haag theorem be further generalized? R. M. Santilli and C. N. Ktorides, Phys. Rev. D Vol. 7, 2447-2456 (1973) It should be noted that the results reported above solely present the version published by Phys. Rev. and not the complete research conducted by Santilli. In essence, the editors of Phys. Rev. kept the paper for years without accepting it and without rejecting it, evidently due to the absence of a credible technical counter-arguments (in the 1970s, technical arguments were required for a rejection, something abandoned these days at the American and other Physical Societies). Santilli finally understood the reason for the delay, changed the final parts, and the paper was accepted and published immediately thereafter. The political problems were multifold. The first problem was caused by the conclusion stating that, {\it in the event a given discrete symmetry and its dual are violated, the Wightman axioms are violated too.} This evident conclusion had to be removed from the paper for its publication, as confirmed by Santilli recollections, because Wightman was in control of quantum field theory of the time. The biggest political problem, was, however, caused by Santilli's analytic continuation of a discrete symmetry to its connected component as expected from Lie's theory, namely, {\it the achievement of the original goal of deriving the lack of exact character of the (continuous) Lorentz transformations from the violation of a discrete symmetry.} Unfortunately, the Foundation could not identify any of Santilli's original manuscripts in the ield. Following consultation, Santilli released the following statement: % A direct test of the applicability or inapplicability of special relativity under conditions violating discrete symmetries was inconceivable in the 1970s as it is inconceivable today due to organized opposing interests controlling major particle laboratories around the world. This scientific obscurantism is implemented despite the evidence that a theory, such as special relativity, that is strictly invariant under time reversal, cannot possibly be exact for a strictly irreversible process, such as a weak interaction decay, since the scattering amplitude is invariant under time reversal, thus predicting the spontaneous recombination of the debris of the decay into the original particle. Due to this unfortunate political control of basic physical knowledge, in the 1970s I asked myself whether there was any way of establishing the lack of exact character of the connected component of the Lorentz symmetry from the violation of its discrete component. To my best recollection, I did find an analytic continuation connecting said components in such a way that the violation of one would imply that of the other. However, for scientific honesty, I have to stress that I am not sure whether the derivation was correct due to lack of its technical review by the American Physical Society. Also, in view of the extreme complexity of the field in which I have not conducted research for some thirty years, I do not have the time to reconsider it now. I am proud for my reputation of never accepting abuses without due response. In this particular case, the defense of the Ph. D. thesis of my student Ktorides was at stake because crucially dependent on the publication of the paper by Phys. Rev. Hence, I had to accept the political manipulation of the conclusions by the editors of Phys. Rev. and their referees to allow Ktorides graduation. Following the appearance of the 1974 paper, I destroyed the entire file out of sheer rage that, in a seemingly democratic country, the American Physical Society was allowed such a totalitarian control of fundamental human knowledge in complete impunity and without any control by the country. The Foundation is interested in supporting research on Santilli problem in quantum field theory," namely, whether there is an analytic continuation or other mechanism under which the violation of a discrete symmetry causes the inapplicability of the Lorentz symmetry and special relativity. 3.7. RESOLUTION OF THE HISTORICAL IMBALANCE ON ANTIMATTER (1994) 3.7A. Foreword Santilli has conducted comprehensive studies on antimatter at all possible levels, from Newtonian mechanics to second quantization and for conditions of increasing complexity, from fully conservative conditions to the most general possible irreversible non-Hamiltonian conditions, as well as hyperstructural conditions expected in possible antimatter living structures. In this section we outline the most elementary level of study, that for point-like abstractions of antiparticles under sole potential interactions. The subsequent levels of study are given by the broader isodual isotopic, genotopic and hyperstructural theories that cannot possibly be reviewed in this presentation, but can be constructed via an isodual map of matter theories. 3.7B. Newton-Santilli isodual equation for antimatter As recalled in Section 1.4, no consistent classical theory of antimatter existed prior to Santilli's research, to our best knowledge. For instance, by resuming the use of the conventional associative multiplication axb = ab, the celebrated Newton's equation (3.17) m x dv/dt = F(t, r, v, ...) or the celebrated Newton's gravitation (3.18) F = g x m1 x m2/r2 solely apply for matter, and have no means whatsoever to distinguish between matter and antimatter for the very simple reason that antimatter was inconceivable at Newton's times. Thanks to the prior discovery of his isodual mathematics outlined in Chapter 2, Santilli developed the isodual theory of antimatter that holds at all levels of study, thus restoring full democracy between matter and antimatter. In essence, in the 20th century antimatter was empirical treated by merely changing the sign of the change, under the tacit assumption that antimatter exists in the same space as that for matter. Thus, both matter and antimatter were studied with respect to the same numbers, fields, spaces, etc. However, a correct classical representation of antimatter required a mathematics that is anti-isomorphic to that used for matter as a necessary condition to admit a charge conjugated operator image. Santilli represents antimatter via his anti-Hermitean isodual map (2.9) that must be applied to the totality of quantities used for matter and all their operations. Hence, under isoduality, we have not only the change of the sign of the charge, but also the isodual conjugation of all remaining physical quantities (such as coordinates, momenta, energy, spin, etc.) and all their operations. This is the crucial feature that allows Santilli to achieve a consistent representation of antimatter also for neutral bodies. We have in this way the Newton-Santilli isodual equation for antiparticles that we write in the simplified form (3.19) md xd ddvd /d ddtd = Fd(td, rd, vd, ...), where "d" denotes isodual map (2.9), and the same conjugation holds for gravitation (see below). Note that, after working out all isodual maps, antiparticle equation (3.19) merely yields minus the value of the conventional equation for particles in both the l.h.s. and the r.h.s, thus appearing to be trivial. However, a most important feature of the above equation is that it defines antiparticles in a new space, the Euclid-Santilli isodual space, that is coexistent but different than our own space. The Euclidean space and its isodual then form a two-valued hyperspace. In this section we shall show that, starting from the fundamental equation (3.19), the isodual theory of antimatter is consistent at all subsequent levels, including quantization, at which level it is equivalent to charge conjugation. Note that isodual antiparticles have a negative energy. This feature is dismissed by superficial inspections as being nonphysical, thus venturing judgments prior to the acquisition of technical knowledge. In fact, negative energies are indeed nonphysical, but when referred to our spacetime, that is, with respect to positive units of time. By contrast, when referred to negative units, all known objections on negative energies become inapplicable, let alone resolved. Note also that isodual antiparticles move backward in time. This view was originally suggested by Stueckelberger in the early 1900s, and then adopted by various physicists, such as Feynman, but dismissed because of causality problems when treated with our own positive unit of time. Santilli has shown that motion backward in time referred to a negative unit of time td = - t is as causal as motion forward in time referred to a positive unit of time t, and this illustrates the nontriviality of the isodual map. Figure 3.5. Contrary to popular beliefs, time has four directions as depicted by Santilli in this figure to illustrate the need for isoduality. In fact, time reversal can only allow the representation of two time directions. The remaining two time directions can solely be represented via the isodual map. Moreover, the assumption that particles and antiparticles have opposing directions of time is the only one known giving hopes for the understanding of the process of annihilation of particles and their antiparticles, a mechanisms utterly incomprehensible for the 20th century physics. 3.7C. Isodual representation of the Coulomb force The isodual theory of antimatter verifies all classical experimental evidence on antimatter because it recovers the Coulomb law in a quite elementary way. Consider the case of two particles with the same negative charge and Coulomb law (3.20) F = (- q1) x (- q2) / r x r, where the positive value of the r.h.s is assumed as representing repulsion, and the constant is assumed to have the value 1 for simplicity. Under isoduality, the above expression becomes (3.21) Fd = (- q1)d xd (- q2)d /d rd xd rd, thus reversing the sign of the equation for matter, Fd = - F. However, antimatter is referred to a negative unit of the force, charge, coordinates, etc (Chapter 2). Hence, a positive value of the Coulomb force referred to a positive unit representing repulsion is equivalent to a negative value of the Coulomb force referred to a negative unit, and the latter also represents repulsion. For the case of the electrostatic force between one particle and an antiparticle, the Coulomb law must be projected either in the space of matter (3.22) F = (- q1) x (- q2)d / r x r, representing attraction, or in that of antimatter (3.23) F = (- q1)d xd (- q2) /d rd xd rd, in which case, again, we have attraction, thus representing classical experimental data on antimatter. 3.7D. Hamilton-Santilli isodual mechanics To proceed in his reconstruction of full democracy in the treatment of matter and antimatter, Santilli had to construct the isodual image of Hamiltonian mechanics because essential for all subsequent steps. In this way he reached what is today called the Hamilton-Santilli isodual mechanics based on the isodual equations (3.24) ddrd/dddtd = ∂dHd(rd, pd)/ddpd,   ddpd/dddtd = - ∂dHd(rd, pd)/∂r. and their derivation from the isodual action Ad (a feature crucial for quantization), from which the rest of the Hamilton-Santilli isodual mechanics follows. 3.7E. Isodual special and general relativities As indicated in Section 1.4, special and general relativities are basically unable to provide a consistent classical treatment of antimatter. Santilli has resolved this insufficiency by providing a detailed, step by step isodual lifting of both relativities with a mathematically consistent representation of antimatter in agreement with classical experimental data (see below for the quantum counterpart). The reader should be aware that the above liftings required the prior isodual images of the Minkowskian geometry, the Poincare' symmetry and the Riemannian geometry, as well as the confirmation of the results with experimental evidence. 3.7F. Prediction of antigravity Studies on antigravity were dismissed and disqualified in the 20-h century on grounds that "antigravity is not admitted by Einstein's general relativity." This posture resulted in a serious obscurantism because general relativity cannot represent antimatter, thus being disqualified for any serious statement pertaining to the gravity between matter and antimatter. Thanks to his isodual images of special and general relativity, Santilli has restored a serious scientific process in the field, by admitting quantitative studies for all possibilities, and has shown that once antimatter is properly represented, matter and antimatter must experience antigravity (defined as gravitational repulsion) because of supporting compatible arguments at all levels of study, with no known exclusion. In fact, all known "objections" against gravitational repulsion between matter and antimatter become inapplicable under Santilli isoduality, let alone meaningless. The arguments in favor of the above conclusion are truly forceful because differentiated and mutually compatible. As a trivial illustration, we have the repulsive Newton-Santilli force between a particle and an isodual particle (antiparticle) both treated in our space (3.25) F = g x m1 x m2d / r2 = - g x m1 x m2 / r2, which is indeed repulsive. The same conclusion is reached at all levels of study. It should be indicated that a very compelling aspect supporting antigravity between matter and antimatter is Santilli's identification of gravity and electromagnetism indicated in Section 3.4. In fact, the electromagnetic origin of exterior gravitation mandates that gravity and electromagnetism must have similar phenomenologies, thus including both attraction and repulsion. 3.7G. Test of antigravity Santilli has proposed an experiment for the final resolution as to whether antiparticles in the gravitational field of Earth experience attraction or repulsion. The experiment consists in the measure of the gravitational force of a beam of positrons in flight on a horizontal vacuum tube 10 m long at the end of which there is a scintillator. Then, the displacement due to gravity is visible to the naked eye under a sufficiently low energy (in the range of the 10-3 eV). The experiment was studied by the experimentalist Mills and shown to be feasible with current technologies and resolutory. Figure 3.6. The original illustration used by Santilli for the 1994 proposal to test the gravity of positrons in horizontal flight in a vacuum tube. The proposal has been qualified by experimentalists as being technically feasible nowadays and resolutory because the displacement due to gravity on a scintillator at the end of a 10 m flight for positrons with milli-eV energy is visible to the naked eye. The usual criticisms based on disturbances caused by stray fields have been disqualified as political for a tube with at least 50 cm diameter. Virtually all major physics laboratories around the world have rejected even the consideration of the test, despite its dramatically lower cost and superior scientific relevance compared to preferred tests, on grounds that "Einstein theories do not admit antigravity," although with documented knowledge that said theories cannot consistently represent antimatter as reviewed in the test. 3.7H. Isodual quantum mechanics Next, Santilli constructed a step-by-step image of quantum mechanics under his isodual map based on the Heisenberg-Santilli isodual time evolution for an observable Q (3.26) id xd ddQd /d ddtd = [Q, H]d = Hd xd Qd - Qd xd Hd, and related isodual canonical commutation rules, Schroedinger-Santilli isodual equations, etc. He then proved that, at the operator level, isoduality is equivalent to charge conjugation. Consequently, the isodual theory of antimatter verifies all experimental data at the operator level too. Nevertheless, there are substantial differences in treatment, such as: 1) Quantum mechanics represents antiparticles in the same space of particles, while under isoduality particles and antiparticles exist in different yet coexisting spaces; 2) Quantum mechanics represents antiparticles with positive energy referred to a positive unit, while isodual antiparticles have negative energies referred to a negative unit; 3) Quantum mechanics represents antiparticles as moving forward in time with respect to our positive time unit, while isodual antiparticles move backward in time referred to a negative unit of time. 3.7I. Experimental detection of antimatter galaxies Recall from Chapter 2 that the isodual theory of antimatter was born out of Santilli's frustration as a physicist for not being able to ascertain whether a far away star, galaxy or quasar is made up of matter or of antimatter. Santilli has resolved this uneasiness via his isodual photon γd namely, photons emitted by antimatter that have a number of distinct, experimentally verifiable differences with respect to photons γ emitted by matter, (3.27) γd ≠ γ, thus allowing, in due time, experimental studies on the nature of far away astrophysical objects. A most important difference between photons and their isoduals is that the latter have negative energy, as a result of which, isodual photons emitted by antimatter are predicted to be repelled in the gravitational field of matter. A possibility for the future ascertaining of the character of a far away star or quasar is, therefore, the test via neutron interferometry or other sensitive equipment, whether light from a far away galaxy is attracted or repelled by the gravitational field of Earth (for other possibilities see the literature quoted below). 3.7J. The new isoselfdual invariance of Dirac's equation Santilli has released the following statement on the Dirac equation: I never accepted the interpretation of the celebrated Dirac equation as presented in the 20-th century literature, namely, as representing an electron, because the (four-dimensional) Dirac's gamma matrices are generally believed to characterize the spin 1/2 of the electron. But Lie's theory does not allow the SU(2)-spin symmetry to admit an irreducible 4-dimensional representation for spin 1/2, and equally prohibits a reducible representation close to the Dirac's gamma matrices. Consequently, Dirac equation cannot represent an electron intended as an elementary particle since elementarily requires the irreducible character of the representation. In the event Dirac's gamma matrices characterize a reducible representation of the SU(2)-spin, Dirac's equation must represent a composite system. I discovered the isodual theory of antimatter by examining with care Dirac's equation. In this way, I noted that its gamma matrices contain a conventional two-dimensional unit I2x2 = Diag. (1, 1), as well as a conjugate negative-definite unit - I2x2. That suggested me to construct a mathematics based on a negative definite unit. The isodual map come from the connection between the conventional Pauli matrices σk, k = 1, 2, 3, referred to I2x2 and those referred to - I2x2. In this way I reached the following interpretation of Dirac's gamma matrices as being the tensorial product of I2x2, σk times their isoduals, (3.28) {I2x2, σk, k = 1, 2, 3} x {I2x2d, σkd, k = 1, 2, 3}. Therefore, I reached the conclusion that the conventional Dirac equation represents the tensorial product of an electron and its isodual, the positron. In particular, there was no need to use the "hole theory" or second quantization to represent antiparticles since the above re-interpretation allows full democracy between particles and antiparticles, thus including the treatment of antiparticles at the classical level, let alone in first quantization. By continuing to study Dirac's equation without any preconceived notion learned from books, I discovered yet another symmetry I called isoselfduality, occurring when a quantity coincides with its isodual, as it is the case for the imaginary unit id = i. In fact, Dirac's gamma matrices are isoselfdual, (3.29) γdμ = γμ, μ = 0, 1, 2, 3. This new invariance can have vast implications, all the way to cosmology, because the universe itself could be isoselfdual as Dirac's equation, in the event composed of an equal amount of matter and antimatter. In conclusion, Dirac's equation is indeed one of the most important discoveries of the 20-th century with such a depth that it could eventually represent features at the particle level that actually hold for the universe as a whole. Figure 3.7. An illustration of the serious implications of Santilli's isodual theory of antimatter: the need for a revision of the scattering theory of the 20th century due to its violation of the isoselfdual symmetry of Dirac's equation. The diagram in the left illustrates the isoselfduality of the initial particles (an electron and a positron) but its violation in the final particles (two identical photons). The diagram in the right illustrates one of the several needed revisions, the use for final particles of a photon and its isodual as a necessary condition to verify the new isoselfdual symmetry. Additional dramatic revisions are due to the purely action-at-a-distance, potential interactions of the conventional scattering theory (represented with a waving central line in the left diagram), compared to the non-Hamiltonian character of the scattering region caused by deep penetrations of the wavepackets of particles (represented with a circle in the right diagram). A review of the novel hadronic scattering theory is presented in Chapter 5. 3.7K. Dunning-Davies thermodynamics for antimatter As well known, the sole formulation of thermodynamics of the 20-th century was for matter. The first consistent formulation of thermodynamics for antimatter has been reached by J. Dunning-Davies with intriguing implications for astrophysics and cosmology yet to be explored, see the original contribution by Dunning Davies quoted below 3.7L. Isoselfdual spacetime machine A "spacetime machine" is generally referred to a mathematical process dealing with a closed loop in the forward spacetime cone, thus requiring motions forward as well as backward in time. As such, the "machine" is not permitted by causality under conventional mathematical treatment, as well known. Santilli discovered that isoselfdual matter, namely, matter composed by particles and their antiparticles such as the positronium, have a null intrinsic time, thus acquiring the time of their environment, namely, evolution forward in time when in a matter field, and motion backward in time when in an antimatter field. Consequently, Santilli showed that isoselfdual systems can indeed perform a closed loop in the forward light cone without any violation of causality laws, because they can move forward when exposed to a matter and then move backward to the original starting point when exposed to an antimatter. 3.7M. Original literature Santilli's original papers on the discovery of isomathematics have been identified in Chapter 2. To our best knowledge, Santilli's first paper on the isodual theory of antimatter is the following one dating to 1994 (following the 1993 paper on isodual numbers) Representation of antiparticles via isodual numbers, spaces and geometries R. M. Santilli, Comm. Theor. Phys. Vol. 3, 153-181 (1994) The first presentations of the classical isodual theory, antigravity, the isodual photon and the isoselfdual spacetime machine appeared in the following papers Classical isodual theory of antimatter and its prediction of antigravity R. M. Santilli, Intern. J. Modern Phys. A Vol. 14, 2205-2238 (1999) Antigravity R. M. Santilli, Hadronic J. Vol. 17, 257-284 (1994) Does antimatter emit a new light? R. M. Santilli, Hyperfine interactions Vol. 109, 63-81 (1997) Spacetime machine R. M. Santilli, Hadronic J. Vol. 17, 285-310 (1994) An independent study by an experimentalist on the feasibility and resolutory character of the proposed measurements of the gravity of positron in horizontal flight on Earth can be found in the following paper Possibilities of measuring the gravitational mass of electrons and positrons in free horizontal flight A. P. Mills, Hadronic J. vol. 19, 77-96 (1996) Comprehensive presentation of the isodual theory of antimatter are available in the monographs "Elements of Hadronic Mechanics" Vol. II: "Theoretical Foundations" R. M. Santilli, "Isodual Theory of Antimatter, with Applications to Antigravity, Grand Unification and Cosmology," R. M. Santilli, Springer (2006) The first formulation of thermodynamics for antimatter was reached by J. Dunning Davies in the paper Isodual thermodynamics for antimatter J. Dunning-Davies, 3.8. INITIATION OF q-DEFORMATIONS OF LIE THEORY (1967) As part of his Ph. D. Thesis at the University of Torino, Italy, Santilli proposed in 1967 Embedding of Lie-algebras in nonassociative structures R. M. Santilli, Nuovo Cimento Vol. 51, 570-576 (1967). the first mutations (today known as "deformations") of Lie algebras known in the mathematical and physical literature of the time with the product (where we return to use the convcentional notation of the associative product ab) (3.30) (A, B) = p A B - q B A, where AB is the conventional associative product, and p, q, p ± q are non-null parameters or functions. In particular, Santilli stressed in the 1967 paper that that his product (A, B) is jointly Lie-admissible (namely, (A, B) - (B, A) is Lie) and Jordan admissible (namely, (A, B) + (B, A) is Jordan), The proposal was made as a first approximation of Lagrange and Hamilton's legacy (Section 2.1), namely, via a generalization of the analytic equations approximating external terms for open, nonconservative and irreversible systems while reconstructing an algebra in the brackets of the time evolution. In fact, in his 1967 paper and others of that period (see the Curriculum) Santilli writes the deformed analytic equations in the form (3.31) dr/dt = p(∂H(r, p)/∂p), dp/dt = - q(∂H(r, p)/∂r). that, for p = 1 and q = 1 - ε/(∂H(r, p)/∂r), Eqs. (3.31) are approximated into the form (3.32) dr/dt = ∂H(r, p)/∂p,   dp/dt = - ∂H(r, p)/∂r + ε, ε = constant, with nonunitary time evolution of an observable Q in the finite and infinitesimal forms (3.33) W(t) W(t) ≠ I (3.34) Q(t) = W(t) Q(0) Q(t) = exp (H q t i) Q(0) exp( - i t p H), (3.35) i dQ / dt = (Q, H) = p Q H - q H Q, thus regaining a consistent algebra in the brackets of the time evolution, while representing, for the first time, nonconservative and irreversible systems. The lack of totally antisymmetric character of the brackets then characterize the time rate of variation of the energy (3.36) i dH/dt = (H, H) = (p - q) HH ≠ 0, as well as of other quantities. In this way, Santilli realized Jordan's dream of seeing his algebras appear in physics applications, although at the level of a covering of quantum mechanics, since the latter has no possible content of Jordan algebras. Santilli also worked out the classical image of the above formulation in which the Lie-admissible character persists, although the Jordan-admissible character is lost. Santilli's presented his mutations (deformations) of Lie algebra in his paper of 1967 via the most general possible formulation, that in which the product AB is nonassociative, with the clear identification of its associative particular form, Subsequent vast studies in mutations were conducted as part of hadronic mechanics and, as such, they are discussed below. As it is well known, in 1989 L. Biebernarn and R. Macfairlane published their papers on the simpler $q$-deformations with product $(A, B) = AB - qBA$ without any quotation of Santilli's origination of 1967 [30], even though they were fully aware of it (Biedenharn joined Santilli in the early 1980s for a DOE grant application precisely on Santilli's mutations/deformations, and Macfairlane was directly informed by Santilli years prior to 1986). % In particular, Biedenharn and Macfairlane changed Santillis original, algebraically more appropriate term of mutations" into deformations," and avoided the identification of their Lie-admissible and Jordan admissible character to prevent an instantaneous identification of Santilli's origination, due to his known expertise in these algebras. Following these publications, thousands of papers on $q$-deformations appeared in the physics literature generally without any quotation of Santilli's origination. As a result of these occurrences, Santilli has been dubbed the most plagiarized physicist of the 20-th century. 3.9. THEOREMS OF CATASTROPHIC INCONSISTENCIES OF NONCANONICAL AND NONUNITARY THEORIES (1978) 3.9A. The majestic consistency of Hamiltonian theories. Santilli has always considered classical Hamiltonian mechanics and its operator image, quantum mechanics (hereoihereon referred to as "Hamiltonian theories"), as having a majestic consistency, due not only to their mathematical rigor permitted by their underlying Lie's theory and its body of methods, but also to the physical consistency of their axiomatic structure. Consider the fundamental dynamical equations of quantum mechanics, Heisenberg's equations for the characterization of the time evolution of an observable Q(t) in the finite and infinitesimal forms (3.37) Q(t) = U(t) Q(0) Q(t) = exp(H t i) Q(0) exp(- i t H), (3.38) i dQ/dt = Q H - H Q = [Q, H], (3.39) H = p2/2m + V(r) = H, Q = Q, Schroedinger's equations (for h-bar = 1) (3.40) i ∂t |> = H |> = E |>, (3.41) pk |> = - i ∂k |>, and the canonical commutation relations (3.42) [ri, pj]= δij, [ri, rj] = [pi, pj] = 0, i, j, k = 1, 2, 3. A most dominant property needed for the majestic consistency is that the time evolution operator U(t) constitutes a unitary transformation when formulated on a Hilbert space over the field of complex numbers, (3.43) U(t) U(t) = U(t) U(t) = I. The corresponding property for the classical time evolution is that of constituting a canonical transformation, that also preserves the unit. The implications of the above simple property are far reaching. To begin, the time evolution of quantum mechanics leaves invariant the basic unit, generally assumed to be that of the Euclidean space, I = Diag. (1, 1, 1), (3.44) I → I' = U I U ≡ I. But the unit I = Diag. (1, 1, 1) generally represents in an abstract way units actually used in experiments, such as I = Diag. (1 cm, 1 cm, 1 cm). Consequently, the unitary character of the time evolution law of quantum mechanics implies the preservation over time of the basic units of measurements, (3.45) I = Diag. (1 cm, 1 cm, 1 cm) → U [ Diag. (1 cm, 1 cm, 1 cm) ] U = Diag. (1 cm, 1 cm, 1 cm). Additionally, a quantity that is an observable (Hermitean) at the time t = 0 remains observable at all subsequent times, (3.46) H = H → U H U = H' = (H'). Also, if quantum mechanics yields a given numerical prediction, e.g., 57.72 MeV, at a given time, the theory maintains the same numerical prediction under the same conditions at all subsequent times, (3.47) H |> = 57.72 MeV |> → U ( H |> ) U = H' |>' = U ( 57.72 MeV |> )U = 57.72 MeV |>'. Finally, the unitarity of the time evolution permits the verification of causality and other physical laws. As a result, quantum mechanics has the majestic feature of preserving over time the units of measurements, the observability of physical quantities, the numerical predictions under the same conditions, causality and other laws. A corresponding physical consistency holds for classical Hamiltonian formulations. 3.9B. Theorems of catastrophic inconsistencies of noncanonical and nonunitary theories. The limitations of Hamiltonian theories in face of the complexity of nature was seen in the last decades of the 20th century by several physicists, resulting in the proposal of a considerable number of generalized theories, much along the development of hadronic mechanics. However, unlike hadronic mechanics, researchers generalized Hamiltonian formulations on one side, while preserving conventional mathematics, on the other side. A major scientific contribution by Santilli's group has been that of identifying the inconsistencies of generalized theories conceived along these lines, that can be expressed via the following: THEOREM 3.9A: All theories with a nonunitary time evolution , (3.48) W(t) W(t) ≠ I, when formulated with the mathematical methods of unitary theories (conventional fields, spaces, functional analysis, differential calculus, etc.) do not preserve said mathematical methods over time, thus being afflicted by catastrophic mathematical inconsistencies, and do not preserve over time the basic units of measurements, Hermiticity-observability, numerical predictions and causality, thus suffering of catastrophic physical inconsistencies. Mathematical inconsistencies: Let I be the unit of the base field at a given time t. But the time evolution cannot preserve such a unit by definition, (3.49) I → I' = W(t) I W(t) ≠ I. Consequently, said theories lose the base field at subsequent times with the consequential catastrophic collapse of their entire mathematical structure. Physical inconsistencies: Nonunitary theories do not preserve over time the basic units of measurements, because, from the very definition of a nonunitary transform, we have (3.50) I = Diag. (1 cm, 1 cm, 1 cm) → W Diag. (1 cm, 1 cm, 1cm) W ≠ Diag. (1 cm, 1 cm, 1 cm); Similarly, nonunitary theories do not generally preserve observability over time, because they do not preserve Hermiticity over time in view of the Lopez lemma for which the known Hermiticity condition (3.51) ( ψ | {H |ψ )} = { ( ψ | H } | ψ ), is mapped under a nonunitary transform into the form (3.52) W ( ψ | {H |ψ )} W = ( ψ |' T { H' T | ψ )'} ≠ { ( ψ |' T H' } T | ψ ), (3.53) T = ( W W )-1, due to the general lack of commutativity of H' and T, H' T ≠ T H'. Also, nonunitary theories do not admit the same numerical predictions under the same conditions at different times, because, for instance, one can select a nonunitary transform for which (3.54) Ht=0 | ψ ) = 57.72 MeV | ψ ) → W ( H | ψ ) ) W = H' t>0 | ψ )' = 9,487 MeV | ψ )', Finally, one of Santilli's graduate students has proved that theories with a nonunitary time evolution violate causality laws and have other catastrophic inconsistencies. Santilli then concludes by saying Nonunitary theories formulated with the mathematics of unitary theories have no mathematical or physical value of any type. The case for classical noncanonical theories formulated with the mathematics of canonical theories have corresponding, catastrophic, mathematical and physical inconsistencies. 3.9C. Examples of catastrophically inconsistent theories. Numerous theories afflicted by the inconsistencies here considered have been and continue to be developed. Examples of classical catastrophically inconsistent, noncanonical theories are given by: 1) Newton's equations with nonselfadjoint (nonpotential) forces; 2) Lagrange and Hamilton analytic equations with external terms; 3) Lagrange and Hamilton's equations without external terms but with Lagrangians and Hamiltonians of second or higher order (depending on accelerations or its time derivatives); 4) Birkhoffian mechanics (even though preserving a Lie structure) because noncanonical; Examples of operator, catastrophically inconsistent nonunitary theories are: A) (p, q)-, q-, k- or any other deformations of Lie algebras; B) The so-called "deformed quantum mechanics"; C) The so-called "deformed Lorentz symmetry"; D) The so-called "deformed special relativity"; E) Theories with a complex-valued Hamiltonian to represent dissipativity, e.g., in nuclear physics; F) The so-called quantum groups; G) The so-called "squeezed states"; H) String theories when including gravitation on a curved space; I) Quantum gravity; J) Nonunitary statistics, such as that by Prigogine; K) Supersymmetric models; L) The Kac-Moody algebras; and others. The literature also contains a number of additional theories suffering of catastrophic inconsistencies not necessarily connected to nonunitarity, among which we mention theories nonlinear in the wavefunction ψ, namely with eigenvalue equations in Hermitean Hamiltonians of the type (3.55) H(r, p, ψ) | ψ ) = E |ψ ). In fact, these theories violate the superposition principle and, consequently, cannot be consistently applied to composite states. Other catastrophically inconsistent theories are those with a nonassociative enveloping algebra, such as Weinberg's nonlinear theory with a time evolution of the type (3.56) idQ/dt = Q ⊗ H - H ⊗ Q, where Q ⊗ H is nonassociative, because these theories cannot admit any left and/or right unit, thus lacking the definition over a field, prohibit any measurements, lack any consistent exponentiation to reach finite transforms and have other catastrophic inconsistencies (the scholar not familiar with these occurrences should inspect in detail Chapter 2, see the insistence on conventional, or iso- and geno-associative enveloping algebras, and attempt their nonassociative generalizations). 3.9D. Original literature Inconsistencies of theories with a nonassociative enveloping algebras were studied in the following paper after an initial suggestion by S. Okubo dating back to 1982 (of which the Foundation failed to identify the related paper until now). The studies were then resumed by A. Jannussis, R. Mignani and R. M. Santilli in 1993 with the paper Problematic aspects of Weinberg's nonlinear theory A. Jannussis, R. Mignani and R. M. Santilli, Ann. Fond. L. De Broglie, Vol. 19, pages 371-389 (1993) additional studies can be located in the paper Algebraic inconsistencies of a class of equations for the description of open systems and their resolution via Lie-admissible algebras A. Jannussis and D. Skaltzas, Ann. Fond. L/. De Broglie Vol. 18, pages 137-154 (1993) Lopez's Lemma on the general lack of preservation of Hermiticity-Observability under nonunitary time evolutions originated in the papers Problematic aspects of q-deformations and their isotopic resolution J. F. Lopez, Hadronic Journal Vol. 16, pages 429-457 (1993) Origin and axiomatization of q-deformations D. F. Lopez, In "Symmetry Methods in Physics", , Vol. 1, Joint Institute for Nuclear Research, Dubna, Russia (1994) Santilli then conducted comprehensive studies on the Inconsistency Theorems in the following papers Origin, problematic aspects and invariant formulation of q-, k- and other quantum deformations R. M. Santilli, Modern Phys. Letters Vol. 13, 327-335 (1998) Origin, problematic aspects and invariant formulation of classical and operator deformations R. M. Santilli, Intern. J. Modern Phys. Vol. 14, pages 3157-3206 (1999) Nine theorems of catastrophic inconsistencies of general relativity and their possible resolution via isogravitation Ruggero Maria Santilli Galilean Electrodynamics, Summer 2006, p. 43-79 (2006) New problematic aspects of current string theories and their invariant resolution R. M. Santilli, Found. Phys. Vol. 32, 1111-1140 (2002) Lie-admissible invariant representation of irreversibility for matter and antimatter at the classical and operator level Ruggero Maria Santilli Nuovo Cimento B Vol. 121, p. 443-595 (2006) 3.10. SANTILLI RELATIVITIES (1978) 3.10A. Historical notes As indicated by W. Pauli in one of the footnotes of his famous book Theory of Relativity, H. A. Lorentz attempted in 1895 the construction via Lie's theory of the symmetry leaving invariant the locally varying speed of light within physical media, C = c/n, where c is the speed of light in vacuum and n the familiar index of refraction. However, he encountered unsurmontable difficulties, and had to restrict the study to the constancy of the speed of light in vacuum c, resulting in the now historical paper of 1904 presenting the celebrates Lorentz symmetry with connected component SO(3.1). Santilli studied Pauli's book very carefully, identified the footnote presenting the unsolved problem, and called it the Lorentz problem, again, referring to the construction of the symmetry leaving invariant the locally varying speed of light C = c/n, such as for light traveling through liquids, atmospheres, chromospheres, etc., and initiated the research for its solution that resulted to be of such a complexity to require a lifetime of study. By looking in retrospect, Santilli's most important contributions for Lorentz's problem have been: 1) The proof that the problem cannot be solved with Lie's theory because, even assuming that a solution is found empirically, that solution is catastrophically inconsistent in view of the Theorems of Section 3.9; 2) The construction of the iso-, geno- and hyper coverings of Lie's theory and their isoduals permitting indeed the construction of an invariant solution for physical media of matter and antimatter, respectively; and 3) Constructing step by step iso-, geno- and hyper- and isodual generalizations of all main aspects pertaining to the Lorentz symmetry, from numbers to special relativity, and proving that said covering theories verify available experimental evidence for the intended conditions of applicability. Evidently, we cannot possibly review here this lifetime of work. Hence, we shall restrict our presentation to the sole case of Santilli isorelativity with original contributions in free pdf downloads, and merely indicate the references of the remaining relativities. 3.10B. Santilli's opening statement For one of the seminars delivered at physics departments around the world, Santilli brought in the lecture room a small rubber ball, a glass filled up with water, a picture of far away galaxies, pictures of Sun light at the Zenith, Sunset and Sunrise, and a cigarette lighter. He then initiated the seminar with the following opening words: Einstein's special relativity has a majestic axiomatic structure and a truly impressive body of experimental verifications for the conditions of its original conception, point-like particles and electromagnetic waves propagating in vacuum conceived as empty space. In view of these historical successes, it has been widely believed in the 20th century that special relativity is valid for whatever conditions exist in the universe. In reality, there exist numerous conditions, beyond those of the original conception, under which special relativity is only "approximately valid" or "inapplicable" and cannot be claimed to be violated in respect to Albert Einstein, because the theory was not conceived for these broader conditions. Among a variety of these conditions, I bring to your attention the following five cases of visual evidence on the inapplicability of special relativity: 1) The squeezing of this rubber ball cannot be treated by special relativity or quantum mechanics due to their incompatibility with the deformation theory that would causes the breakdown of the central pillar of both theories, the rotational symmetry. This limitation carries on all the way to hadron physics since protons and neutrons are extended and, therefore, have to be deformable with numerous important implications, for instance, for a quantitative representation of nuclear magnetic moments; 2) The simple phenomenon of the refraction of light causing the apparent bending of a stick in this glass of water also cannot be represented with special relativity because the occurrence can be solely represented quantitatively via a decrease of the speed of light in water, thus terminating the belief on the "universal" constance of the speed of light, since its reduction to photons scattering among liquid molecules has been disqualified for lack of quantitative representation of all electromagnetic waves propagating in water, such as for radiowaves with 1 m wavelength for which the reduction to photons has no physical sense; 3) When looking at this picture of far away galaxies, special relativity cannot provide any classical distinction between matter and antimatter galaxies since the sole distinction admitted by special relativity is that of the sign of the charge while far away galaxies must be assumed to be neutral. At any rate, antimatter did not exist as yet at the time of Einstein's formulation of special relativity; 4) These pictures of Sun light at the Zenith, Sunset and Sunrise constitute evidence visible to the naked eye of the inapplicability of special relativity within physical media such as our atmosphere because the first picture established the transparency of our atmosphere to blue light, thus preventing its absorption at the horizon, while the remaining two pictures establish the existence of a redshift that cannot possibly follow relativity laws because, assuming it exists at Sunset, it cannot exist at Sunrise since Earth moves away from the Sun at Sunset while it moves toward the Sun at Sunrise. Hence, according to special relativity, we should have a distinct redshift at Sunset and an equally distinct blueshift at Sunrise. The dominance of the red at both Sunset and Sunrise, therefore, establishes the existence of a basically new behavior of light propagating within physical media beyond that of light propagating in vacuum; 5) Special relativity and quantum mechanics are inapplicable to energy releasing process, such as the flame in this cigaret lighter, because all energy releasing processes are irreversible over time, while special relativity and quantum mechanics are strictly reversible and consequently predict that the flame and the smoke should recombine themselves spontaneously into the original fuel. In any case, special relativity and quantum mechanics had to be built with reversible axioms as a necessary condition to represent the physical problems in the early part of the 20th century, such as electrons orbiting in an atomic structure. Consequently, special relativity and quantum mechanics cannot credibly be assumed as being valid for the dramatically different irreversible processes. In this seminar I shall indicate that, thanks to the use of new mathematics specifically constructed for the problems at hand, it is possible to construct sequential coverings of special relativity and quantum mechanics providing a more adequate treatment of the above five physical conditions. I would like to stress ab initio that I do preserve Einstein's axioms and merely present broader realizations. In different words, my way of honoring the memory of Albert Einstein is not that of adapting nature to his original formulations with consequential risk of condemnations by posterity, but instead I honor Einstein by providing a dramatic broadening of the conditions of applicability of his axioms. In this section we provide an outline of the latter objectives as well as free pdf downloads of Santilli's original contributions at times of difficult identification in the libraries. 3.10C. Conceptual foundations Santilli always considered the widespread claim of the "universal constancy of the speed of light" a political posture because, as indicated in Section 1.2, the scientific statement should be "constancy of the speed of light in vacuum," since that is the sole case with experimental verifications. Therefore, Santilli never accepted special relativity for the characterization of dynamics within physical media because most media are opaque to light. Hence, the assumption of the speed of light in vacuum as the maximal causal speed within physical media opaque to light was repugnant to him. He then searched for a geometric characterization that would replace the speed of light within physical media, in such a way to recover, of course, the speed of light when propagation returns to be in vacuum. Santilli was also unable to accept special relativity for media that are transparent to light, such as liquids, atmospheres, chromospheres, etc., for various reasons, Consider, for instance, the propagation of light in water. In this case electrons can propagate faster than the local speed of light, producing the known Cerenkov light. He argued that, if the speed of light in vacuum is assumed as the maximal causal speed in water to salvage causality, there is the violation of a fundamental relativistic principle because the sums of two light speeds in water does not yield the speed of light in water. Alternatively, if one assumes the speed of light in water as the maximal causal speed, the relativistic addition of speeds is salvaged but special relativity would violate causality. The usual posture of attempting to salvage special relativity via the reduction of light to photons scattering through atoms was dismissed as political, because such a reduction has no physical value for electromagnetic waves with large wavelength, such as of 1 meter wavelength, which electromagnetic waves also propagate in water at a reduced speed according to the law C = c/n. By keeping these aspects in mind and their experimental verifications established in Chapter 5, the biggest physical implications of Santilli's studies is that matter causes a mutation of the very structure of conventional Minkowskian spacetime. In any case, deviations from Einsteinian predictions within matter could not exist without such a mutation. Along the latter lines, by far the biggest deviations from special relativity are expected by Santilli within physical media that are inhomogeneous (due to a local change of density) and anisotropic (due to differences in different space directions) such as atmospheres, chromospheres, etc., because these media have geometric deviations from the homogeneity and isotropy of the Minkowski spacetime. In studying the original contributions, interested scholars are, therefore, suggested to pay particular attention to the interplay between geometry, algebras and physics. 3.10D. Mathematical foundations The problem solved by Lorentz was the invariance of the Minkowskian metric m = Diag. (1, 1, 1, - c2). The problem solved by Santilli was the invariance of the broader metric m* = Diag. (1, 1, 1, - c2/n2), where n is a rather complex function of all needed local variables. It is evident that the latter metric can be solely connected to the former via a noncanonical transformation at the classical level or a nonunitary transform at the operator level. Assuming this main characteristic also assures the exiting from the class of equivalence of the Lorentz symmetry. Hence, Santilli considered the noncanonical transform of m into the most general possible diagonal metric m* with signature (+, +, +, -) (3.57) m = Diag. (1, 1, 1, - c2) → m* = Diag. (1/ n12, 1/n22, 1/ n32, - c2/n42) = = T m, where the index of refraction n = n4 is extended to all components because generated by the mere application of Lorentz transforms or other symmetrization processes. The n's are called the characteristic quantities of the medium considered. The inhomogeneity of the medium is represented via a dependence of the n's on the local density μ, the local temperature τ etc., nk(r, μ, τ, ...), k = 1, 2, 3, 4, while the anisotropy is represented by differences between the space and time characteristics quantities. All n's are normalized to the value nk = 1, k = 1, 2, 3, 4, for the vacuum. Additional information on the characteristic quantities have been provided in Section 2.4. Santilli then looked for the symmetry of the most general possible, symmetric line element in (3+1) dimension with signature (+, +, +, -) (3.58) r*2* = ( r12/n12 + r22/n22 + r32/n32 - t2 c2/n42) I*, nk > 0, k = 1, 2, 3, 4, with isotopic element and isounit the expressions (3.59) T = Diag. (1/ n12, 1/n22, 1/n32, 1/n42) > 0, (3.60) I* = 1/T = Diag. (n12, n22, n32, n42) > 0. Santilli then: 1) Formulated the theory on his iso-Minkowskian space M* (r* , x* , I*) (Section 2.6) with isocoordinates r* = r I*, r = (r1, r2, r3, t), with isoassociative product A x* B = A T B over an isofield F* with isounit I*; 2) Identified the noncanonical transform with the isounit (3.61) W x W* = I*, (3.62) (W x W)-1 = T; where † evidently represents transposed for real values matrices; and 3) Subjected to the above noncanonical transform the totality of the framework of special relativity, from numbers to physical laws, with no exclusion to avoid catastrophic inconsistencies due to mixing the mathematics of the covering theory with that of the old. The above assumptions are sufficient to construct the desired symmetry in the most rigorous possible, but also an elementary way. In fact, the indicated use of the noncanonical transforms permits the simple construction of: the isonumbers (3.63) n → n* = W n W* = n (W W*) = n I*; the isoproduct (3.64) n m → W (n m) W* = (W n W*) (W W)-1 (W m W) = n* T m* = n* x* m*; the isoexponentiation to the right and to the left for a given Lorentz generator J with related parameter w (3.65) exp(J w xi) → W x [exp(J w i)] W = [exp(J T w i)] I*, (3.66) exp(- i w J) → W [exp(- i w J)] W = I* [exp- (i w T J)] ; and the consequential isotopy of the finite Lorentz transformations of a physical quantity Q(w) (3.67) Q(w) = [exp(J w i)] Q(0) [exp(- i w J)] → → W { [exp(J w i)] Q(0) [exp(- i w J)] } W = (3.68) → [exp(J T w i)] Q*(0) [exp( - i w T J)]. All remaining needed isomathematics can be constructed in the same elementary way. The isodual formalism for antimatter is derived via the simple isodual transform (2.9) applied to the totality of the isotopic methods (see Section 2.7 for formal treatments) 3.10E. Invariance and universality of Santilli's isotopies. It is easy to see that the isotopic formalism of the preceding section is not invariant under both canonical and noncanonical (or unitary and nonunitary) transforms, such as (3.69) Z Z ≠ I, because the above transform does not leave invariant the basic isounit. (3.70) I* → I'* = Z I* Z ≠ I*, with consequential lack of invariance of the isoproduct (3.71) A x* B = A T B → Z (A x* B) Z = (Z A Z) (Z†-1 T Z-1) (Z B Z) = A' T' B', T' ≠ T. The above lack of basic invariances activates Theorem 3.9A with catastrophic mathematical and physical inconsistencies that should have been expected due to the mixing of isotopic methods formulated on isospaces over isofields with conventional transformations formulated on conventional spaces over conventional fields. It is easy to see that, if the above noncanonical or nonunitary transform is reformulated according to Santilli isomathematics, full invariance is reached and Theorem 3.9A is bypassed. In fact, all noncanonical or nonunitary transforms can be identically reformulated in the isotopic form Z = Z* T1/2, under which they become isocanonical or isounitary transforms, namely, they reconstruct canonicity or unitarity on isospaces over isofields, (3.72) Z = Z* T1/2, Z Z = Z* T Z* † = Z* x* Z* † = Z* † x* Z* = I*. It is easy to see that Santilli's isotopic formalism is indeed invariant under the above isocanonical or isounitary transforms. In fact, we have the invariance of the isounit (3.73) I * → I'* = Z* x* I* x* Z* † = Z* x* Z* † ≡ I*. Similarly, we have the invariance of the isoproduct (3.74) A* x* B* → Z* x* ( A* x* B*) x* Z*† = A'* x* B'*, namely, the isotopic element T remains unchanged. The invariance of all remaining operations then follow and Theorem 3.9A is bypassed. The scholar serious in science should be aware that the regaining of invariance for noncanonical and nonunitary theories has been the very reason for Santilli laborious and momentum discovery and development of his isomathematics. It is important also to know that Santilli's isotopies of the Minkowskian geometry are "directly universal" in the sense that they admit all infinitely possible mutations of the Minkowski spacetime (universality) directly in the isometric without any need for coordinates transformations (direct universality). Finally, the reader should keep in mind that Santilli's isospecial relativity (see below) represents dynamical systems with the conventional Hamiltonian (for all potential interactions) and the isounit (for non-Hamiltonian interactions). Consequently, the change of the isounit causes the transition to a different physical system. That is the reason for fixing the isounit in actual applications. 3.10F. Lorentz-Poincare'-Santilli isosymmetry and its isodual Following, and only following the above laborious preparatory advances, including the achievement of the crucial invariance, it was easy for Santilli to construct the isotopies of the Lorentz and Poincare' symmetry, today known as Lorentz-Poincare'-Santilli isosymmetry. or at times Poincare'-Santilli isosymmetry. For clarity and simplicity, in this section we shall outline the projection of the isosymmetry in our spacetime. Thus, we shall avoid using the the symbol "x" to denote conventional multiplication; we shall use the isomultiplication A x* B = A T B when necessary; ordinary symbols J, P, etc., will indicate quantities belonging to the Poincare' symmetry; while symbols with an asterisk will indicate quantities belonging to isospaces over isofields. To begin, the connected component of the Lorentz-Poincare'-Santilli isosymmetry can be written (3.75) P*11(3.1) = [ SO*6(3.1) ⊗ T*4(3.1) ] × T*1, and comprises: the six-dimensional Lorentz-Santilli isosymmetry SO*6(3.1); the four-dimensional isotranslations T4(3,1) in the isoparameters a = a I*; and the novel one-dimensional isotopic isotransform T1 in the isoparameters w* = w I* identified below, thus being eleven (rather than ten) dimensional), with conventional generators (3.76) p*11(3.1): { Jij, Pk, Q }. i, j, k = 1, 2, 3, 4, Lie-Santilli isocommutation rules in terms of isoproduct (2.26), (3.77) [Jij, Jpq]* = i ( m*jp Jiq - m*ip Jjq - m*jq Jip + m*iq Jjp ), (3.78) [Jij, Pk]* = i (mik Pj - mjk Pi ), (3.79) [ Pij, Pij]* = [ Jij, Q]* = [P, Q]* = 0, Casimir-Santilli isoinvariants (3.80) C*0 = I*, (3.81) C*2 = Pk x* P k, (3.82) C*4 = L*k x* L* k, L*k = εijpqJjpx*Pk. and isotransforms; 1) Isorotations (see the references for details), (3.83) r' = R*(θ) r; 2) Isoboosts here presented for motion in the conventional (3,4) plane (3.84) r'1 = r1, r'2 = r2, (3.85) r' 3 = γ* [ r3 - β* r4 (n3 / n4) ], (3.86) r' 4 = γ* [ r4 - β* r3 ( n4 / n3) ]. (3.87) γ* = 1 / ( 1 - β* )1/2, β* = (v / n3) / (c / n4), where v is the speed along the third axis; 3) Isotranslations, (3.88) r'k = rk + Ak(a, ...), (3.89) Ak = ak [ m*kk + [ m*k, Pk ] / 1! + ...] (no sum); 4) Isotopic transform (3.90) m* → m'* = w m*, I* → I'* = w-1 I*, under which isoline element (3.58) remains indeed invariant. In summary, recall that the Poincare' symmetry is ten dimensional. Contrary to all expectations, Santilli's isotopies of the Poincare' symmetry turned out to be eleven dimensional. Hence, Santilli conducted a re-examination of the conventional treatment of special relativity. The basic unit of the Lorentz and Poincare' symmetries is the 4-dimensional unit matrix I = Diag. (1, 1, 1, 1) > 0, while the unit of the base field universally assumed in special relativity is the trivial unit +1. To avoid this disparity, Santilli assumed the same unit for both the symmetry and the base field, thus using a basic field with unit I. Thanks to his discovery of the isonumber theory, this assumption requires to rewrite scalars from the usual form w, into the isoscalar form w* = w I (see Chapter 2). Consequently, one is forced to rewrite the basic invariant of special relativity in the form (3.91) r2 = (r m r) I = (r12 + r22 + r32 - t2 x c2 ) I, where r = ( rk), k = 1, 2, 3, r4 = t, and rk2 = (rk)2. These simple steps allowed the discovery that the Poincare' symmetry is eleven dimensional, rather than ten dimensional as popularly believed in the 20th century, in view of the additional one-dimensional isotopic invariance (3.92) ( r m r ) I ≡ [ r ( w m ) r ] ( w-1 I) = ( r m* r ) I* Since all spacetime symmetries have important physical applications, the same holds for the isotopic symmetry. In fact, the new symmetry allowed Santilli to reach a basically new grand unification of electroweak and gravitational interactions, as we shall see later on. Note that m and m* have the same signature (+. +. +. -). Following the above reformulation of the conventional symmetry, we can quote the following LEMMA 3.10A: The Poincare'-Santilli and the Poincare' symmetries are isomorphic. The above lemma illustrates Santilli's achievement of broader realizations of the abstract axioms of special relativity. The isodual Poincare'-Santilli isosymmetry for antimatter can be easily constructed via isoduality. The isotopies of the spinorial covering of the Lorentz-Poincare' symmetry were constructed by Santilli in 1995 and are presented in Section 3.11Q. Note that the new isotopic symmetry (3.92) remained undiscovered for close to one century. This should not be surprising because its discovery required the prior discovery of new numbers, the isonumbers with an arbitrary unit. Note also from the direct universality of the isotopies, the Poincare'-Santilli isosymmetry provides the invariance for all possible line elements with signature (+, +, +, -), including the Riemannian, Finslerian, Non-Desarguesian and other line elements, by including, as the simplest possible case, the Minkowski line element. 3.10G. Santilli isorelativity and its isodual Thanks to all the preceding mathematical and physical advances, Santilli has conducted a step-by-step isotopic lifting of the physical laws of special relativity resulting in a new theory today known as Santilli isorelativity. . His central assumption is, again, the preservation under isotopies of the original axioms by Einstein and the introduction of broader realizations. This basic assumption was realized to to such an extent that special relativity and isorelativity coincide at the abstract, realization-free level and, consequently, they could be presented with the same equations only subjected to different realizations of the symbols. The above conception is evidently permitted by Lemma 3.10A and carries far reaching physical and experimental implications because any criticism on the structure and applications of isorelativity is a criticism on Einstein's axioms, as we shall indicated later on. Assume for simplicity that motion occurs in the (3, 4)-plane. Then, inhomogeneity of the medium is represented by a functional dependence of n3 on the local density, temperature, etc., n3 = n3(r, μ, τ, ...). Anisotropy of the medium is expressed by the possible difference n3 ≠ n4. Assume that motion is restricted in the (3, 4)-plane, isorelativity can be presented via the following isoaxioms presented in their projection in our spacetime with conventional multiplication: ISOAXIOM I: The maximal causal speed within physical media is given by (3.93) Vmax = c (n3 / n4 ); ISOAXIOMS II: The isorelativistic addition of speeds within physical media is set by the law (3.94) Vtot = (v1 + v2) / (1 + β* 2); ISOAXIOM III: Within physical media, time dilation, length contraction, and variation of mass with speed follow the isotopic laws (3.95) t = γ* to, (3.96) d = γ*-1 do, (3.97) m = γ* mo; AXIOM IV: Within physical media the variation of light frequency with speed follows the Doppler-Santilli isotopic law, here written for simplicity for 90o aberration angle as well as in expansion to first order (3.98) ω* = γ* -1 ωo ≈ ωo [ 1 - β(n4/n3) + β2(n4/n3)2/2 + ...] ISOAXIOM V: Within physical media the energy equivalence of the mass follows the isotopic law (3.99) E = m V2max. COMMENTS: Note that the maximal causal speed is set by the geometry of the medium, namely, by the difference between the space and time characteristic quantities representing the anisotropy. As such, Vmax can be bigger, equal or smaller to the speed of light in vacuum. In particular, for isotropic media, Vmax = c. The Doppler-Santilli isoshift admits the following three cases: 1) The isoredshift, namely, a shift toward the red bigger than that predicted by special relativity, generally occurring in anisotropic media of low density, such as planetary atmospheres or astrophysical chromospheres, with values from Eq. (3.98) n4/n3 bigger than 1, and Vmax smaller than c, essentially characterizing the release of energy by light to the medium with consequent decrease of the frequency beyond the value predicted by special relativity; 2) The isoblueshift, namely, a shift toward the blue bigger than that predicted by special relativity, occurring for in anisotropic media of high density, such as astrophysical chromospheres, with values from Eq. (3.98) n4/n3 smaller than 1, and Vmax bigger than c, essentially characterizing the absorption of energy by light from the medium with consequent increase of the frequency beyond the value predicted by special relativity; 3) The conventional Doppler's shift, occurring in transparent isotropic media such as water with n4/n3 = 1. As we shall see in Chapter 5, the above prediction of Santilli's isorelativity are indeed verified by all available experimental data. Their implications are rather deep because they imply that, e.g., light is expected to exit a star or, much equivalently, a high energy scattering region, at a frequency bigger than that of its origination, while light is expected to leave planetary atmospheres or astrophysical chromospheres at a frequency smaller than that of its origination. The celebrated equivalence principle E = m c2 is experimentally verified only for point-like particles moving in vacuum. The isoequivalence principle expresses expected differences in excess or in defect from the conventional equivalence principle depending on said anisotropic ratio, said differences being merely due to processes of acquisition of release of energy to the medium. 3.10H. Santilli's isogravitation and its isodual As indicated in Section 2.6, one of Santilli's most important mathematical contributions has been the geometric unification of the Minkowskian and Riemannian geometries into the Minkowski-Santilli isogeometry. This unification has evidently been done as the premise for the unification of special and general relativities. In fact, Santilli's isorelativity is unique in the sense that it incorporates both the special and the general relativity. As indicated earlier, isotopic line elements (3.58) include as particular cases all infinitely possible (nonsingular) Riemannian line elements. Hence, Santilli first contribution in gravitation has been the construction of a universal "symmetry of gravitation", in lieu of the 20-th century "covariance". The isominkowskian formulation of exterior gravitation is elementary. Any nonsingular Riemannian metric g(r) always admit the decomposition into the Minkowski metric m = Diag. (1, 1, 1, - c2) and a 4x4 dimensional positive-definite matrix Tgr(r) called gravitational isotopic element because it incorporates all gravitational features. Santilli then assumes for basic isounit of exterior gravitation the inverse of Tgr, (3.100) g(r) = Tgr(r) m, Igr* = 1 / Tgr. The entire formalism of the Minkowski-Santilli isogeometry then applies, including the identical reformulation of the Einstein-Hilbert field equations, although completed with sources as in Section 3.4. The implications of the above discovery are far reaching and affect all quantitative sciences from classical mechanics to astrophysics. To begin, the formulation avoids the Theorems of Catastrophic Inconsistencies of Section 3.9 thanks to the invariance of isogravitation under the Poincare -Santilli isosymmetry. The same also allows an axiomatically consistent operator formulation of gravity and grand unification, the sole known to the Foundation as being consistent. As it is well known, all distinctions between exterior and interior gravitation were eliminated in the 20th century for the evident intent of adapting nature to Einstein doctrines. This manipulation of science was done via the claim that interior problems can be reduced to a set of point-like particles under sole action at a distance, potential interactions. As an illustration of this political profile, Schwartzchild wrote two papers, one for the exterior and one for the interior gravitation. The former has been widely acclaimed in the 20th century, while the latter has been vastly ignored, evidently because the former (latter) was compatible (incompatible) with Einstein's gravitation under a serious scrutiny. Theorem 1.1 terminates these political postures and sets the origin of macroscopic nonpotential and irreversible effects at the ultimate level of particles at short mutual distances, as a consequence of which the inequivalence of interior and exterior problems are established beyond doubt. Any dissident view should prove that light behaves in the same fashion in the exterior and interior problems, thus believing that electromagnetic waves propagates within atmospheres at the same speed as in vacuum and, additionally, light penetrates all the way to the center of astrophysical masses at the same speed as that in vacuum, which is a nonscientific posture. For instance, the treatment of a spaceship during re-entry in atmosphere via Einstein's gravitation would be a manifest scientific politics due to the Lagrangian character of the former and the strictly non-Lagrangian nature of the latter. In particular, the resistive forces experienced by the spaceship during re-entry is set by Theorem 1.1 to occur at the level of deep mutual penetration of the peripheral atomic electrons of the spaceship and those of the surrounding atmosphere, with ensuing nonlinear, nonlocal and nonpotential interactions. Santilli has provided the only known axiomatically correct formulation of interior isogravitation that is permitted by the complete absence of restrictions in the functional dependence of the Minkowski-Santilli isometric m*, thus allowing for the first time in scientific history to introduce in the interior problem the local speed of light, density, temperature, and other crucial features of the interior gravitational problem whose quantitative treatment is inconceivable in general relativity due to the excessive limitations of the Riemannian geometry. For instance, consider any desired Riemannian metric for the exterior problem, e.g., for the exterior Schswartzchild's solution, with diagonal elements (3.101) g(r) = (gkk) = Diag. [ (1 - 2m/r)-1, (1 - 2m/r)-1, (1 - 2m/r)-1, - (1 - 2m/r)]. Then, a simple lifting of such an exterior metric to the interior problem is given by the following forms where the characteristic quantities depend on local coordinates, r, density μ, temperature τ, etc., (3.102) g(r, μ, τ, ...) = Diag. ( g11/n12, g22/n22, g33/n32, g44/n42) E* = Tgr(r, μ, τ, ...) m, Following, and only following a more credible representation of interior gravitational problems, Santilli presented gravitational singularities as the zeros of the time component of the gravitational isotopic element or the infinities of the space components of the gravitational isounit, (3.103) Gravitational Singularities: I* 44 → ∞, I*kk → 0, k = 1, 2, 3. as one can verify via Eq. (3.101). By recalling the physical meaning of the characteristic quantities, one can then see the direct geometric representation of the singularity as follows: A) The limit Tkk → 0, k = 1, 2, 3, directly represents the volume of the star being reduced (geometrically) to a point (because said components are the units of space dimensions; and B) The limit I* 44 → ∞ represents the complementary occurrence for which time becomes infinite (because said component is the unity of time) or, equivalently, there is no dynamical evolution, thus preventing the release of light and mass once absorbed. It is evident that the above features represent, by far, the most elegant and mathematical representations of gravitational collapse in history, to the Foundation best knowledge. However, as stressed by Santilli, this geometric limit is a consequence of the widespread trend in the 20the century of studying extreme interior conditions, such as gravitational collapse, with the use of exterior gravitation. By comparison, when gravitational collapse is studied more seriously via interior gravitation, it is possible to show that the collapse of a star to a point becomes impossible, while preserving the crucial features of a black holes, such as that of not releasing light or mass. The experimental verification of Santilli isogravity is assured by the identical reformulation of the Einstein-Hilbert field equation. However, isogravitation occurs in a flat space since the Minkowski-Santilli isospace is locally isomorphic to the minkowski space and its curvature is null. This confirms the viewpoint expressed in Chapter 1 according to which the Riemannian formalism provides a very elegant mathematical representation of data, but space cannot be curved in a real sense because curvature cannot explain the weight of stationary bodies, the free fall of bodies along a straight radial line, the bending of light (that is a Newtonian event), and other features. Alternatively, Santilli has established beyond doubt that the continued insistence on space as being actually curved directly causes: the activation of the Theorems of Catastrophic Inconsistencies; the mandatory need to revise quantum electrodynamics (Section 2.4); the impossibility of reaching a consistent operator form of gravity; the impossibility of achieving a serious grand unification of electroweak and gravitational interactions; and other shortcomings of historical proportions. 3.10I. Santilli's Lie-admissible geno- and hyper-relativities and their isoduals As indicated in Chapter 1, Santilli considers irreversibility a fundamental feature of nature originating at the ultimate particle level in view of Theorem 1.1. Isorelativity is structurally reversible and, therefore, it is considered a mere preparatory step toward more fundamental relativities. It should be indicated that isorelativity has the capability of representing irreversibility via time-dependent isotopic elements T(t, r, p, E, ...) = T(t, ... ) in such a way that T(t, r, ...) ≠ T(- t, ...). However, this is a somewhat limited representation of irreversibility. In fact, isorelativity was primarily constructed to characterize closed-isolated composite systems that are stable, such as protons, thus being reversible in time, yet possessing non-Hamiltonian internal effects represented with the isounit. The achievement of a relativity truly capable of representing irreversibility required Santilli to construct his Lie-admissible genomathematics and its multi-valued hyper-extension, that are structurally irreversible in the sense that they are irreversible for all possible reversible Hamiltonians. Once such a mathematics was available, new relativities followed, today known as Santilli geno- and hyper-relativities for matter and their isoduals for antimatter. We regret our inability to outline these broader relativities to prevent a prohibitive length, as well as a substantial increase in complexity of thought, realization and verification. 3.10J. Isotopic reconstruction of exact spacetime symmetries when conventionally broken The physics of the 20th century saw a rather popular interest in "symmetry breakings" for both spacetime and internal symmetries. Santilli has shown that such "breakings" are due to the use of insufficient mathematics because, when the problem at hand is treated with a more appropriate mathematics, the symmetry is reconstructed exactly and no breaking occurs. The reconstruction of the exact SU(2)-isospin and SU(3)-color symmetries will be reviewed in Chapter 5. Here we indicate Santilli's mechanism for the exact symmetry reconstruction for the case of spacetime symmetries. Consider the perfect sphere of radius 1 defined on the Euclidean space over the reals R and its known symmetry under the rotational group SO(3), (3.104) r2 = r12 + r22 + r32 = 1 ε R. Suppose that the above perfect sphere is elastic and experiences a deformation into an ellipsoid of the type (3.105) r2 = r12/n12 + r22/n22 + r32/n32 ≠ 1. It is evident that, when continued to be defined on the Euclidean space over the reals, the above deformation causes the breaking of the rotational symmetry SO(3). Santilli principle of reconstruction of the exact rotational symmetry is based on the deformation of the line element (3.106) r2 = r12 + r22 + r32 → r12/n12 + r22/n22 + r32/n32, while jointly submitting the basic unit of the Euclidean space I = Diag. (1, 1, 1) to the inverse deformation (3.107) E = Diag. (1, 1, 1) → I* = Diag. (n12, n22, n32). It is then easy to see that the definition of the deformation on the Euclid-Santilli isospace with isounit I* recovers a perfect sphere called isosphere, (3.108) r*2* = ( r12/n12 + r22/n22 + r32/n32) I* ε R*. In fact, if one semiaxis is deformed of the amount 1/nk2, but the corresponding unit is deformed of the inverse amount nk2, the numerical value of the semiaxes on isospace over isofields remains 1, with the resulting exact isosymmetry SO*(3). But the latter symmetry is isomorphic to the conventional one SO(3), thus yielding an exact reconstruction of the rotational symmetry, merely formulated with a more appropriate mathematics. The reconstruction of the exact Lorentz symmetry when believed to be broken is intriguing. The admission of a locally varying speed of light causes the loss of the light cone within physical media. However, as it is the case for the isosphere, the mutations of spacetime coordinates occur under a joint inverse mutation of the related unit. This process yields Santilli's light isocone which is the perfect cone in isospace over isofield, but whose projection on conventional space over the conventional field yields a highly mutated cone whose shape changes in time. The preservation of Einstein's axioms as well as the local isomorphism of the Lorentz-Santilli and the conventional Lorentz symmetry are crucially dependent on the exact reconstruction of the light cone on isospace over isofields with the consequential exact reconstruction of the Lorentz symmetry. The reconstruction of exact discrete spacetime symmetries is handled in essentially the same manner, thus voiding the 20th century belief that spacetime symmetries are broken. Figure 3.8. The understanding of Santilli isorelativity and its particular realization as isogravitation, requires a knowledge of the light isocone, which is the perfect light cone, but defined on the Minkowski-Santilli isospace over Santilli's isonumbers. This deceptive simplicity hides in reality very deep implications. To begin, the projection of the isocone in the conventional spacetime characterizes a locally varying speed of light with consequential highly deformed cone. Hence, Santilli's isotopies reconstruct on isospaces over isofield the exact light cone when no longer applicable in our spacetime. This exact reconstruction is at the foundation of the preservation of the axioms of special relativity for dramatically broader physical conditions, as well as the reconstruction of the exact Lorentz symmetry when popularly believed to be broken. Additionally, Santilli's isocone permits a direct geometrization of gravitation without curvature. In fact, the deviations from the perfect light cone can be due to gravitation, and be characterized by the components of, e.g., Schwartzschild's metric (3.101). But each of these deviations is referred to a unit that is its inverse. Ergo, all Riemannian metrics can be reduced to Santilli's isocone with implications, as we shall see, way beyond conventional gravitational studies, such as for the scattering theory, nuclear events, and others, all permitted by the elimination of curvature. 3.10K. Experimental verifications In the arena of its applicability (dynamics within physical media or particles in conditions of deep mutual penetration), Santilli isorelativity has experimental verifications in classical physics, particle physics, nuclear physics, supuercondiuctivity, chemistry, astrophysics and cosmology (see the literature for quantitative treatments). Some of these verifications will be outlined in Section 3.12, 3, and chapter 5. An illustrative experimental verification of isorelativity in classical physics is given by electromagnetic waves propagating in water. In this case, the speed of light is given by C = c/n4, but the medium is homogeneous and isotropic, as a result of which Vmax = c, thus allowing electrons to travel faster than the local speed of light and verifying causality, as well as the isorelativistic sum of speeds. A similar case occurs for Newton's diffraction of light, and numerous other cases in which there is a deviation of the speed of light from that in vacuum. An illustrative experimental verification in particle physics is given by the Bose-Einstein correlation outlined in Chapter 5, and other relativistic events in particle physics conventionally treated via the use of ad hoc parameters fitted from the data (and then claim that special relativity is exactly valid!). These parameters are eliminated in isorelativity and replaced with measurable quantities, such as size of particles, their density, etc. The most important verification in particle physics is the numerically exact representation of all characteristics of neutrons in their synthesis from protons and electrons as occurring in stars, which synthesis, as indicated in Chapter 1, admits no treatment at all via special relativity (see Chapter 5 for details). An illustrative experimental verification in nuclear physics is given by nuclear magnetic moments that can be solely represented in an exact way via a deformation of charge distributions of protons and neutrons when members of a nuclear structure. These deformations are absolutely impossible for special relativity, but readily admitted by its covering isorelativity. Numerous other verifications also exist in nuclear physics (see Chapter 5 for details). An illustrative experimental verification in astrophysics is given by the exact representation of dramatically different redshifts of galaxies and quasars when physically connected according to gamma spectroscopy, which representation is permitted by Santilli isoredshift indicated above. For additional verifications, the serious scholar is suggested to consult the specialized literature. Unfortunately,we have an unreassuring situation in the experimental verification of Einsteinian doctrines for conditions beyond those of their original conception. As Santilli puts it: Following some fifty years of active research on fundamental open problems, it is my documented view that theories in physics are nowadays established by organized academic consensus and definitely not by a serious scientific process. In fact, the consideration, let alone the conduction, of systematic experimental tests of Einsteinian theories, under conditions they were not intended for, is nowadays impossible at any major physics laboratory around the world. When limited tests are conducted, Einsteinian doctrines are studiously recovered via the use of arbitrary parameters and their fit from experimental data, while in reality these arbitrary parameters are a direct measure of the "deviations" from the indicated doctrines (see The Bose-Einstein correlation and other tests of Chapter 6). These unreassuring condition establish the existence of a real scientific obscurantism at the beginning of the third millennium originating from protracted complete impunity by academic interests guaranteed by lack of societal control under full support of governmental agencies funding the research. The unreassuring character is that new the conception and development of new clean fuels and energies so much needed by society basically depend on "deviations" from Einsteinian doctrines. In the final analysis, all possible energies that could be conceived with Einsteinian doctrines were fully identified half a century ago and they all turned out to be environmentally unacceptable. Therefore, the solution of the increasing environmental problems afflicting our planet cannot be even initiated until responsible societies impose systematic experimental tests on the "limitations" of Einsteinian theories. The serious reader serious interested in knowledge, rather than in myopic personal gains, should never forget that time reversal invariant theories, such as Einsteinian doctrines, cannot credibly be assumed as being exact until the end of time for structurally irreversible processes, such as all energy releasing events. 3.10L. Original literature Following decades of work, Santilli first proposed his Lie-admissible covering of Galilei and special relativities, today called genorelativities, in the following 200 pages memoir of 1978 with a full identification of the isotopic particular cases, today called isorelativity, On a possible Lie-admissible covering of Galilei's relativity in Newtonian mechanics for nonconservative and Galilei form-noninvariant systems R. M. Santilli, Hadronic J. Vol. 1, 223-423 (1978) and then continued the study in more details in the following two monographs of 1978 and 1982 "Lie-Admissible Approach to the Hadronic Structure, I: Non applicability of the Galilei and Einstein Relativities," R. M. Santilli, "Lie-Admissible Approach to the Hadronic Structure, II: Coverings of the Galilei and Einstein Relativities" R. M. Santilli, Systematic studies on isorelativity were initiated in 1983 via the following papers: 1) The first isotopies of the Lorentz symmetry on scientific record at the classical level in the paper of 1983 that includes the first known universal invariance of Riemannian line elements Lie-isotopic lifting of special relativity for extended deformable particles R. M. Santilli, Lettere Nuovo Cimento Vol. 37, 545-555 (1983) 2) The first isotopies of special relativity at the operator level also in 1983 Lie-isotopic lifting of unitary symmetries and of Wigner's theorem for extended deformable particles, R. M. Santilli, Lettere Nuovo Cimento Vol. 38, 509-521 (1983) 3) The first known isotopies of the rotational symmetries were presented in the following two papers of 1985 that were written before the preceding two but were rejected by various journals via pseudo-reviews reported in the first paper Lie-isotopic liftings of Lie symmetries, I: General considerations, R. M. Santilli, Hadronic J. Vol. 8, 25-35 (1985) Lie-isotopic liftings of Lie symmetries, II: Lifting of rotations, R. M. Santilli, Hadronic J. Vol. 8, 36-51 (1985) 4) The first isotopy of SU(2) spin appeared in the following papers of 1993 and 1998 (the second presenting intriguing application to Bell's inequality, local realism and all that) Isotopic lifting of SU(2)-symmetry with application to nuclear physics, R. M. Santilli, JINR rapid Comm. Vol. 6. 24-38 (1993) Isorepresentation of the Lie-isotopic SU(2) algebra with application to nuclear physics and local realism, R. M. Santilli, Acta Applicandae Mathematicae Vol. 50, 177-190 (1998) 5) A detailed study isotopy of the Poincare' symmetry as the universal invariance for all spacetimes with signature (+, +, +, -) was published in 1993 Nonlinear, nonlocal and noncanonical isotopies of the Poincare' symmetry, R. M. Santilli, Moscow Phys. Soc. Vol. 3, 255-280 (1993) 6) the first known isotopies of the spinorial covering of the Poincare' symmetry (with momentous implications in particle physics identified in the next section) appeared in the following two papers of 1993 and 1995 Recent theoretical and experimental evidence on the apparent synthesis of neutrons from protons and electrons, R. M. Santilli, Communication of the JINR, Dubna, Russia, Number E4-93-252 (1993) Recent theoretical and experimental evidence on the apparent synthesis of neutrons from protons and electrons, R. M. Santilli, Chinese J. System Engineering and Electronics Vol. 6, 177-199 (1995) 7) The unification of special and general relativity into isorelativity was systematically studied in the following paper of 1998 Isominkowskian geometry for the gravitational treatment of matter and its isodual for antimatter, R. M. Santilli, Intern. J. Modern Phys. D Vol. 7, 351-407 (1998) The reading of the following additional papers is instructive for the serious scientist serious on science Lie-isotopic generalization of the Poincare' symmetry, classical formulation R. M. Santilli, ICTP preprint # IC/91/45 (1991) published in "Santilli's 1991 Papers at the ICTP", International Academic Press (1992) Galilei-isotopic relativities R. M. Santilli, ICTP preprint # published in "Santilli's 1991 Papers at the ICTP", International Academic Press (1992) Galilei-isotopic symmetries R. M. Santilli, ICTP preprint # IC/91/263 (1991) published in "Santilli's 1991 Papers at the ICTP", International Academic Press (1992) Rotational isotopic symmetries R. M. Santilli, ICTP preprint # IC/91/261 (1991) published in "Santilli's 1991 Papers at the ICTP", International Academic Press (1992) The first systematic presentation of the isotopies of Galilei and Einstein's relativities with the experimental proposal to verify the isoredshift appeared in the following monographs of 1991, "Isotopic Generalization of Galilei and Einstein Relativities", Volume I: "Mathematical Foundations" R. M. Santilli, "Isotopies of Galilei and Einstein Relativities" Vol. II: "Classical Foundations" R. M. Santilli, The first verification of the isodoppler shift of Santilli's isorelativity predicted in the preceding two volumes was done in 1992 by R. Mignani via the numerical interpretation of dramatically different redshift of quasars when physically connected to associated galaxies Quasar redshift in iso-Minkowski space R. Mignani. Physics Essays Vol. 5, 531-535 (1992) The first studies on the direct universality of Santilli's isorelativity for all possible spacetimes with signature (+, +, +, -) are given by the following papers Direct universality of isospecial relativity for photons with arbitrary speeds, R. M. Santilli, in "Photons: Old problems in Light of New Ideas" V. V. Dvoeglazov Editor Nova Science (2000) Direct universality of the Lorentz-Poincare'-Santilli isosymmetry for extended-deformable particles, arbitrary speeds of light and all possible spacetimes in "Photons: Old problems in Light of New Ideas" V. V. Dvoeglazov Editor Nova Science (2000 Universality of Santilli's iso-Minkowskian geometry, A. K. Aringazin and K. M. Aringazin, in "Frontiers of Fundamental Physics" M. Barone and F. Selleri, Editors Plenum (1995) The latest study on the Lie-admissible covering of special relativity for irreversible systems was presented in the memoir published by the Italian Physical Society Lie-admissible invariant representation of irreversibility for matter and antimatter at the classical and operator level Ruggero Maria Santilli Nuovo Cimento B Vol. 121, p. 443-595 (2006) Systematic studies on both the Lie-isotopic and Lie-admissible coverings of special relativity appeared in the two memoirs of 1995 with the update below of 2008 "Elements of Hadronic Mechanics", Vol. I: "Mathematical Foundations" R. M. Santilli Ukraine Academy of Sciences (1995) "Elements of Hadronic Mechanics" Vol. II: "Theoretical Foundations" R. M. Santilli, Hadronic Mathematics, Mechanics and Chemistry, Volume III: Iso-, Geno-, Hyper-Formulations for Matter and Their Isoduals for Antimatter R. M. Santilli, For various independent reviews of Santilli's iso- and geno-relativities interested scholars may consult the following monographs "Santilli's Lie-Isotopic Generalization of Galilei and Einstein Relativities" A. K. Aringazin, A. Jannussis, F. Lopez, M. Nishioka and B. Veljanosky, Kostakaris Publishers, Athens, Greece (1991) "Mathematical Foundation of the Lie-Santilli Theory" D. S. Sourlas and G. T. Tsagas, "Santilli's Isotopies of Contemporary Algebras Geometries and Relativities" Ukraine Academy of Sciences Second edition (1997) 3.11A. Foreword Santilli's conception, construction, development, experimental verification, and industrial applications of hadronic mechanics, with its diversification in mathematics, physics, chemistry and biology, constitutes, without doubt, a historical scientific achievement, mostly unprecedented if one considers the novelty and variety of the needed studies by one single mind, from pure mathematics to industrial applications. Nowadays (October 2008), hadronic mechanics constitutes a rather vast body of disciplines ranging from various coverings of Newtonian mechanics all the way to various corresponding coverings of second quantization, including as particular cases conventional classical and operator conservative formulations. As we shall see in Chapters 4 and 5, hadronic mechanics was original conceived for: 1) Quantitative treatments of the synthesis of neutrons from protons and electrons as occurring in stars, that cannot be treated via quantum mechanics 2) Quantitative studies on the possible utilization of the inextinguishable energy contained inside the neutron; 3) The study of new clean energies and fuels that cannot even be conceived with the 20th century doctrines; and other basic advances. The implementation of these main objectives required the conception, construction and test of a sequence of branches for the treatment of matter in conditions of correspondingly increasing complexity, plus all their isoduals for antimatter. Figure 3.9. Classification of hadronic mechanics into its various classical and operator branches as presented by Santilli in his volumes in the field. Evidently, we can review here only the rudiments of hadronic mechanics and refer the serious scholar to a serious study of the literature made available in free pdf downloads. In particular, we shall provide the rudiments of the isotopic branch of hadronic mechanics and merely indicate the remaining geno-, hyper- and isodual branches. It should be indicated that the primary aim of this section is the identification of Santilli's original discoveries in the field. For all numerous subsequent contributions by various researchers around the world, interested scholars are suggested to consult the 3.11B. Historical notes The period 1965-1967 The birth of hadronic mechanics can be traced back to Santilli's Ph. D. studies in theoretical physics at the Depart of Physics of the University of Torino, Italy, with particular reference to the following papers Embedding of Lie-algebras in nonassociative structures R. M. Santilli, Nuovo Cimento Vol. 51, 570-576 (1967). R. M. Santilli, Supplemento al Nuovo Cimento Vol. 6, pages 1225-1249 (1968) R. M. Santilli, Meccanica, Vol. 1, pages 3-11 (1969) Figure 3.10. A view of the city of Torino, Italy (top view), and of the Department of Physics in Corso Massimo D'Azelio (bottom view) where Santilli conceived in 1965-1967 the foundations of hadronic mechanics. On mathematical grounds, being an applied mathematician by instinct, Santilli recognized that quantum mechanics is structurally dependent on Lie theory that characterizes the infinitesimal time evolution of a (Hermitean) operator Q, i dQ/dt = [Q, H] = QH - HQ via the Lie product [Q, H] (H being the usual Hermitean Hamiltonian representing the total energy), and the finite time evolution via the Lie transformation group Q(t) = exp(Hti)Q(0)exp(-itH), As a pre-requisite to generalize quantum mechanics, Santilli searched for a covering of Lie's theory, namely, a generalization such to maintain a well defined Lie content, a mathematical feature necessary for the broader physical theory to admit quantum mechanics as a particular case. For this purpose, Santilli proposed the first known mutations of Lie algebras (today also known as "deformations" ) with product (3.109) (A, B) = λ AB - μ BA, where λ, μ, λ ± μ are non-null scalars. It was then simple for Santillio to discover the following generalizations of Heisenberg's time evolution in their infinitesimal and finite forms (3.110) i dQ/dt = λ Q H - μ H Q = (Q, H), (3.111) Q(t) = U(t)Q(0)U(t) = [exp(H μ t i)] Q(0) [exp(-i t λ H)], with corresponding classical counterparts (see Section 3.8). Quantum mechanics and its Lie structure were then recovered identically and uniquely for the particular case λ = μ = 1. Because of his keen sense of scientific ethics, Santilli delayed the publication of the 1967-1968 papers for over one year to identify at least some prior literature for due quotation. In so doing, he spent months of search in mathematical libraries, not only in Italy but also in other countries, looking for some mathematical paper treating the algebra with his product (A, B). After such a protracted search, Santilli finally discovered a 1947 paper by the American mathematician A. A. Albert presenting the definition without concrete examples of the notions of Lie-admissible and Jordan-admissible algebras. An algebra U with elements a, b, c, ... and abstract product ab was called by Albert Lie-admissible when the attached antisymmetric algebra U- with product [a, b] = ab - ba is Lie. Albert called the sam algebra Jordan-admissible when the attached symmetric algebra U+ with product {a, b} = ab + ba is Jordan. Santilli immediately recognized that his product (A, B) is indeed Lie- and Jordan-admissible (3.112) [A, B]* = (A, B) - (B, A) = (λ + μ)[A, B] = Lie, {A, B}* = (A, B) + (B, A) = (λ - μ){A, B} = Jordan, and adopted Albert's definition, particularly in view of the possibility of realizing "Jordan's dream" that his celebrated algebras would see physical applications, although not in quantum mechanics as well known, but within the context of a covering mechanics. Santilli then spent additional months of search in mathematics libraries to identify any papers treating Albert's Lie- and Jordan-admissible algebras. In this way, he located only two additional short notes published in rare mathematics journals treating Albert's definition although without any concrete realization. Following such an extensive search that is rather unusual these days in the physics community, let alone for a physicist to conduct protracted searches in pure mathematical journals, Santilli released for publication his 1967-68 papers with all pre-existing literature properly quoted, which papers present the first known realization in both mathematical and physical literature of a jointly Lie- and Jordan-admissible algebra. On physical grounds, Santilli had understood during his Ph. D. studies that quantum mechanics is a theory structurally reversible over time and that the characterization of the conventional conservation law, such as that of the energy H, is due to the totally antisymmetric character of the Lie product for which i dH/dt = [H, H] = HH - HH = 0. As recalled in Section 1.1, D. Santilli studied Lagrange's original works and learned in this way the necessity of achieving an irreversible generalization of quantum mechanics. as an operator counterpart of the "true Lagrange and Hamilton equations," those with external terms characterizing precisely the irreversibility of the physical world (Section 1.1). But all known Hamiltonians (that is all 20th century interactions) are reversible over time. The representation of irreversibility then left Santilli with no other option than that of generalizing the Lie product into a non-antisymmetric form as a condition for an operator representation of nonconservative irreversible systems. It is evident that Santilli Lie- and Jordan-admissible product does indeed verify the latter condition because, in general, (A, B) - (B, A) ≠ 0. Therefore, he submitted his covering equations (109)-(112) for the representation of open nonconservative and irreversible systems, a central feature that is s fully valid today. The period 1978-81 In 1967 Santilli moved to the U. S. A. for a one year research position at the University of Miami, Coral Gables, Florida, funded by NASA. During that time, he applied for a junior position in virtually all U. S. physics and mathematics departments on grounds of his studies on Lie-admissible and Jordan-admissible algebras. However, these algebras were unknown in both the mathematics and physics of the late 1960s. He then accepted a position at the Department of Physics of Boston University partially funded by the U. S. Air Force (for which support he acquired the U. S. citizenship), and turned himself to publications that, in his words, are typical Phys. Rev. papers nobody quotes or cares for, some of which have been outlined in Sections 3.4, 3.5, 3.6. During that period, Santilli continued to study Lie-admissible and Jordan-admissible theories without any publication in the field for about a decade. In 1977 Santilli joined the Lyman Laboratory of Physics of Harvard University following an invitation by the DOE for grant number DE-ACO2-80ER-10651.A00, for which Santilli was transferred at Harvard's Department of Mathematics. At that time, Santilli published the following two memoirs with the formal proposal to construct hadronic mechanics including its central dynamical equations, memoirs hereon referred to as the 1978 Original Memoirs I and II On a possible Lie-admissible covering of Galilei's relativity in Newtonian mechanics for nonconservative and Galilei form-noninvariant systems R. M. Santilli, Hadronic J. Vol. 1, 223-423 (1978) Need of subjecting to an experimental verification the validity within a hadron of Einstein special relativity and Pauli exclusion principle R. M. Santilli, Hadronic J. Vol. 1, 574-901 (1978) The first memoir presents a detailed mathematical study of Lie-admissible and Jordan-admissible algebras with their Lie-isotopic and Jordan-isotopic particularizations, and the second memoir presents the basic equations of hadronic mechanics with first applications and illustrations. Figure 3.11. A view of the Science Center of Harvard University housing at the third floor Harvard's Department of Mathematics were Santilli reached in 1977-1981 the main formulation of hadronic mechanics. In essence, Santilli recognized that his Lie-admissible time evolution (110) is nonunitary, UU ≠ I, as a necessary condition to exit from the class of unitary equivalence of quantum mechanics. Consequently, he applied a general nonunitary transformation to his parametric product (109), and achieved in this way the broader product today known as Santilli general Lie-and Jordan-admissible product (3.113) (A, B)* = U(A, B)U = ARB - BSA, R = UpU, S UqU where R, S and R ± S are now non-null operators. Santilli also discovered that his algebra with product (A, B)* is the most general known algebra, in the sense of admitting as particular case all infinitely possible algebras known in mathematics (characterized by a bilinear composition verifying the left and right scalar and distributive laws), including Lie algebras, Jordan algebras, flexible algebra, supersymmetric algebras, etc. Additionally, Santilli discovered that his algebras remain jointly Lie-and Jordan-admissible under all possible (nonsingular) nonunitary transforms (although the operator R and S would change). Following the achievement of these remarkable results in the Original Memoir I, it was rather natural to propose in the Original memoir II (see, Eqs. (4.15.34), page 746) equations today known as Santilli Lie- and Jordan-admissible dynamical equations that are at the foundation of hadronic mechanics, here presented in the following infinitesimal and finite forms, (3.114) i dQ/dt = Q R H - H S H = (Q, H)*, (3.115) Q(t) = [exp(H S t i)] Q(0) [exp(- i t R H)], under the condition for physical consistency (derived from time reversal) that R = S. In the same Original memoir II (see the 1978 Memoir II, Eqs. (4.15.49), page 752), Santilli identified the fundamental Lie-isotopic equations of hadronic mechanics as a particularization of the Lie-admissible equations, here also presented in the following infinitesimal and finite forms, (3.116) i dQ/dt = Q T H - H T H = [Q, H]*, (3.117) Q(t) = [exp(H T t i)] Q(0) [exp(- i t T H)]. under the condition olf the operator T being positive definite, T = T > 0. Eqs. (114), (115) were proposed for the operator representation of open irreversible systems, again in view of the lack of antisymmetric character of the basic product (A, B)*, while Eqs, (116), (117) were proposed for closed-isolated systems with potential and nonpotential internal forces verifying conventional total conservation laws from the antisymmetric character of the product for which i dH/dt = HTH - HTH = 0. It was clearly identified in the Original Proposals that the Hamiltonian represents all action-at-a-distance potential interactions, while the operators R. S and T are the operator counterparts of Lagrange's and Hamilton's external terms since they too represent contact nonpotential interactions. In the same memoirs of 1978 Santilli proposed the Birkhoffian-admissible mechanics as classical counterpart of the Lie-admissible equations and Birkhoffian mechanics as counterpart of the Lie-isotopic particularization, although this Birkhoffian classical counterpart had to be reformulated later on due to the impossibility of achieving a consistent quantization. Santilli's proposal of 1978 propagated quite rapidly all over the world (despite the lack of emails at that time), and received numerous authoritative supports, such as those by Nobel Laureates C. N. Yang and I. Prigogine, distinguished physicists such as S. Okubo, S. Adler, M.S. Froissart, and others, as well as known philosophers of science such as K. Popper (who praised Santilli's proposal in the preface of his last book). A feverish research was then initiated on the construction of hadronic mechanics in the necessary aspects and operational details by various mathematicians, theoreticians and experimentalists the world over, as listed in Thanks to his mathematical knowledge, Santilli initiated in 1979 the representation theory of Lie-admissible algebras. Let |ψ) be the module of a Lie-representation, e.g. a ket belonging to a Hilbert space with right associative action H |ψ). In this case the bimodular character is trivial because the action to the left is antiisomorphic to that to the right, H |ψ) = - (ψ| H, H = H. For the case of Lie-admissible algebras with brackets (3.109), Santilli needed an isotopic action to the right H S |ψ) that is inequivalent to the to the left (ψ| R H, resulting in a new structure he called an genobimodule or Lie-admissible bimodule. These studies provided the first known Lie-admissible generalization of Schroedinger's equation and their Lie-isotopic counterpart (3.118) H xff) = H R |ψf) = Er ψf), (bψ| bx H = (bψ| S H = (bψ| bE, (3.119) H x**) = H T |ψ*) = E* ψ*), (*ψ| *x H = (*ψ| T H = (*ψ| *E, where, in accordance with our notations of Section 2.8, the indices f and b stands for "forward" and "backward" actions, respectively. The above realizations were subsequently studied by the physicists: R. Mignani in 1981; the mathematician H. C. Myung and Santilli in 1982; Mignani, Myung and Santilli in 1983; and others (see the indicated General Bibliography). The period 1982-1989 In 1982, Santilli left Harvard University to assume the position of President of the Institute for Basic Research, an independent institution comprising about 120 mathematicians, theoreticians and experimentalists with dual associations to other institutions around the world. To house the new Institute, the Real Estate Trust of the Santilli family purchased a Victorian house located within the compound of Harvard University, where an intense research activity was conducted until 1989 under partial financial support by the DOE. Figure 3.12. A view of the New-England style Victorian located at 96 Prescott Street, Cambridge, MA, within the compound of Harvard University, locally known as "The Prescott House," which was purchased by Santilli's Real Estate Trust in late 1981 and remained the headquarters of the Institute for Basic Research until 1989, as well as the main editorial office of the Hadronic Journal, Hadronic Journal Supplement and Algebras, Groups and Geometries. Among the numerous research activities which took place at The Prescott House during the period 1981-1989, we mention: the initiation of systematic studies for a structural generalization of contemporary mathematics based on progressive liftings of its basic unit known as iso- and geno-mathematics and their isoduals; the conception and development of the Birkhoffian and other classical mechanics; the axiom-preserving, nonunitary, isotopic and genotopic lifting of quantum mechanics into hadronic mechanics; and numerous other fundamental mathematical and physical research (for more details, visit the IBR History) During that period, a large number of papers, monographs and conference proceedings then followed authored by numerous scientists the world over for an estimated number of over 20,000 pages of printed research. However, with the passing of the years Santilli was more and more dissatisfied for the status of hadronic mechanics because the Lie-admissible character of the theory was indeed preserved by unitary and nonunitary transforms, but the theory was not invariant over time, thus predicting different numerical values under the same conditions at different times, and activating the Theorems of Catastrophic inconsistencies of Nonunitary Theories of Section 3.9. The period 1990 to present In 1990, the Institute for Basic Research was tranfer from Cam,bridge MA, to Palm Harbor, FL, where it still operates to this day (Spring 2009). The main technical issue addressed during this period is that, by the early 1990s hadronic mechanics was still incomplete due to the lack of a Lie-admissible and Lie-isotopic generalization of the fundamental equation for the linear momentum and its action on a wavepacket (with h/2π = 1), (3.120) p |ψ) = - i ∂r |ψ), (3.121) ψ = exp(k r - E t), p |ψ) = k |ψ). As Santilli recalls: The achievement of the invariance over time of hadronic mechanics has been one of the most distressing and time consuming research problems I ever faced because I knew that quantum mathematics had to be entirely lifted into hadronic mathematics for any consistent treatment. This required the isotopic and then the genotopic liftings of all branches of quantum mechanics and all its mathematics. By the early 1990s "all" main aspects of quantum mathematics I was aware of had indeed been lifted, including numbers, vector and metric spaces, geometries, algebras, groups, representation theory, topology, etc. Nevertheless, the invariance of hadronic mechanics remained elusive and, most frustratingly, the lifting of the linear momentum into forms compatible with the Lie-isotopic and Lie-admissible formulations escaped continuous efforts for years by myself as well as several researchers in the field. I remember that in early 1990s I used to control again and again all isotopic and genotopic liftings of quantum mechanics and could not identify the flaw causing lack of invariance and had no clue on how to lift the linear momentum. This was quite distressing because hadronic mechanics was not a complete theory without a consistent formulation of eigenvalue equation for the linear momentum. Above all, without such a formulation, no experimental verification could be seriously studied. Finally, the teaching of the founders of physics came to my help. In 1994, I remembered that Newton had to build the differential calculus to formulate his mechanics. Consequently, I reinspected the differential calculus (still essentially the same since Newton's time), to see whether it was indeed applicable to hadronic mechanics and discovered that it was not because, contrary to popular beliefs in mathematics and physics for about four centuries, a conventional. differential, such as that of the coordinate dr, is indeed dependent on the basic unit I of the field when the latter has a functional dependence on the local variable, I* = I*(r, ...) = 1/T(r, ...). In fact, in this case the coordinate has to be an isocoordinate, r* = r I*, as a result of which d*r* ≠ dr. In this way, I formulated the isodifferential calculus for which (3.122) d* r* = T d (r I*), (3.123) ∂*/∂*r* = I* ∂/∂r*. I published this discovery in 1996 at the Rendiconti Circolo Matematico Palermo, Nonlocal-integral isotopies of differential calculus, mechanics and geometries R. M. Santilli, Rendiconti Circolo Matematico Palermo, Suppl. Vol. 42, 7-82 (1996). The new differential calculus finally allowed me to reach a consistent formulation of the linear momentum with isotopic and genotopic expressions fully compatible with the corresponding Lie-isotopic and Lie-admissible liftings of Heisenberg and Schroedinger equations (3.124) p x**) = p* Tr*) = - i ∂*r*)= - i I*rr*), I*r = 1/Tr = I*†r > 0 (3.125) ψ* = exp(k Tr r - Ε Tt t), (3.126) p x**) = k |ψ*). where I*r = 1/Tr, I*t = 1/Tt are the space and time isotopic units and elements, respectively, with corresponding expressions for the genotopic lifting. It was then easy to prove the desired invariance over time of hadronic mechanics, including the preservation of the basic unit, Hermiticity-observability, and all numerical predictions under the same conditions at subsequent time. Following these resolutions, I separated myself from the rest of world for one entire year thanks to help from my wife Carla for food and support (without my wife's help hadronic mechanics would never have seen the light), and I wrote the second edition of "Elements of Hadronic Mechanics," Volumes I and II that I released for publication by the Ukraine Academy of Science in 1995. Following submission in 1995, all the background mathematics was published in 1996 by the Rendiconti Circolo Matematico Palermo. I reached the crucial invariance over time for the case of isomechanics in the 1997 paper I then reached the invariancve over time for the much more complex Lie-admissible irreversible mechanics in the subsequent paper also of 1997 that completed the formal construction of hadronic mechanics Invariant Lie-admissible formulation of quantum deformations R. M. Santilli, Found. Phys. Vol. 27, 1159- 1177 (1997) After that time, studies on the various applications and experimental verifications of hadronic mechanics increased exponentially thanks to the contribution by numerous colleagues. As indicated in my papers, colleagues who do not care to participate in basic new advances essentially make a gift of scientific priorities to others. Figure 3.13. A view of Santilli at his office of the Institute for Basic Research in Palm Harbor, Florida (top view) where he has been working from late 1989 to present (summer 2009) weekdays standing up 8-10 hours a day on the final formulation of hadronic mechanics and his physical, chemical, biological and industrial applications. The bottom view shows the backyard of the IBR, a typical Floridian setting on a canal. The main references on hadronic mechanics are the following: the analytic foundations were treated in the two monographs of 1978 and 1982 hereon referred to as FTP Volumes I and II "Foundations of Theoretical Mechanics, II: Birkhoffian Generalization of hamiltonian Mechanics" R. M. Santilli, Springer-Verlag (1982); The first comprehensive axiomatically consistent treatment of hadronic mechanics can be found in the two monographs hereon referred to for brevity 1995 EHM Volumes I and II "Elements of Hadronic Mechanics", Vol. I: "Mathematical Foundations" R. M. Santilli , Ukraine Academy of Sciences (1995), "Elements of Hadronic Mechanics" Vol. II: "Theoretical Foundations" R. M. Santilli, Lie-admissible invariant representation of irreversibility for matter and antimatter at the classical and operator level Ruggero Maria Santilli Nuovo Cimento B Vol. 121, p. 443-595 (2006) and the most recent presentation is available in the five volumes hereon referred to as 2008 HMMC Volumes I, II, III, IV, V Hadronic Mathematics, Mechanics and Chemistry, Volumes I, II, III, IV and V: R. M. Santilli, Iso-, geno-, hyper-mechanics for matter, their isoduals for antimatter, and their novel applications to physics, chemistry and biology R. M. Santilli, Journal of Dynamical Systems and Geometric Theories, Vol. 2, pages 121-194 (2003) "Santilli's Lie-Isotopic Generalization of Galilei and Einstein Relativities" A. K. Aringazin, A. Jannussis, F. Lopez, M. Nishioka and B. Veljanosky, Kostakaris Publishers, Athens, Greece (1991) "Santilli's Isotopies of Contemporary Algebras Geometries and Relativities" Ukraine Academy of Sciences Second edition (1997) "Mathematical Foundation of the Lie-Santilli Theory" D. S. Sourlas and G. T. Tsagas, Prizes and nominations Santilli has received large financial rewards from the new industrial applications of hadronic mechanics in physics, chemistry and biology. He has been listed by the Estonia Academy of Sciences among the most illustrious applied mathematicians of all times because of his discovery of the Lie-admissible covering of all of 20th century mathematics that encompasses all possible mathematics with an algebra (Chapter 2) and, consequently, all possible physical and chemic;l theories with an algebra in the brackets of their time evolution (Chapters 3-9), the listing of Santilli name being done with the quotation his 1967 initiation paper on Lie-admissible algebras jointly with the names of Gauss, Hamilton, Lie, Jordan, Wigner, and others very famous mathematicians (the only name of Italian origin appearing in the list). A motivation has been that .... several other mathematicians have discovered individual mathematical structures, for instance, Hamilton discovered the quaternions, Jordan discovered his algebras, and Lie discovered his theory, but no other mathematician in history discovered, as Prof. Santilli did, structural generalizations of the totality of mathematics in sequential series [isotopic, genotopic, hyperstructural and isodual]. Additionally, a lecture room at a research center in Australia has been called "Santilli Hall."Besides various gold medals for scientific merits, Santilli has received in January 2009 the prestigious prize of the Mediterranean Foundation, previously granted to Price Albert of Monaco , France President Nicolas Sarkozy, Juan Carlos King of Spain, international architect Renzo Piano, and other famous people. Finally, Santilli has received hundreds of nominations for the Nobel prize in physiscs because of the construction of hadronic mechanics and more recently also for the Nobel prize in Chemistry. For details, one may visit the web site Acknowledgments Jointly with the completion in 1997 of the formal construction of hadronic mechanics and its primary experimental verifications as well as applications in the 1997 paper Santilli released a rather vast acknowledgment to all institutions, journals and colleagues who helped the, or were exposed to the construction of hadronic mechanics. The Foundation has retrieved the preprint and provides below the original version of the Acknowledgments since they had to be reduced in the published version by editorial request. It is a pleasant duty to express my sincere appreciation to th referees of Foundations of Physics for a very accurate control of the manuscript and for simply invaluable critical suggestions. It is also a duty to express my appreciation to a number of institutions, journals and colleagues for hospitality and invaluable help during the laborious studies in the construction of hadronic mechanics and its verification conducted during the past three decades. First, I would like to thank the following Institutions: The University of Naples, Italy, where I conducted my undergraduate studies in physics for an unforgettable human and scientific experience. I want to remember and thank in particular my mathematics teacher Renato Caccioppoli for propagating to be his passion for mathematics that set the direction of the rest of my scientific life. The Department of Physics of the University of Torino, Italy, where I put the foundations of hadronic mechanics\ in the late 1960's as part of my Ph. D. thesis; The Avogadro Institute in Torino, Italy, that gave me a chair in nuclear physics when quite young, with various students still remembering and tracing me down to this day; The Center for Theoretical Physics of the University of Miami, Coral Gables, Florida, where I had a very enjoyable stay during the academic year 1967-1978; The Department of Physics of Boston University where I taught, from prep courses to post Ph. D. Seminar courses in mathematics and physics from, 1968 to 19074; The Center for Theoretical Physics of the Massachusetts Institute of Technology, where most background technical preparation was conducted in the mid 1970's, such as the papers on the existence and construction of a Lagrangian in field theory, the paper on the identification of gravitational and electromagnetic interactions, the preliminary versions of monographs published by Springer Verlag, and other studies; The Department of Mathematics of Harvard University, were the main papers proposing the construction of hadronic mechanics and numerous other works were written in the late 1970's and early 1980's under support from the U. S. Department of Energy; The Joint Institute for Nuclear Research, Dubna, Russia, for summer hospitality in recent years, where several papers were written, such as the crucial paper on isonumbers, genonumbers and their isoduals, the paper on the synthesis of the neutron first appeared as JINR Communication Number E4-93-352, and other papers, The Institute for High Energy Physics, Protvino-Sherpukov, Russia, also for summer hospitality in recent years, where the most innovative studies in gravitation were initiated, The International Center for Theoretical Physics in Trieste, Italy, for a short visit in 1992; CERN, Geneva, Switzerland, also for a short stay in 1992; The Institute for Basic Research on Harvard Grounds from 1982 to 1989 and then in Palm Harbor Florida from 1989 to present where the main research on hadronic mechanics has been conducted and continued to this day; and numerous other Institutions for shorter stays. I would like to express my appreciation for recent hospitality I received for presentations on various aspects of hadronic mechansics at the following meetings (up to 1996): Three Workshops on Lie-admissible Formulation, Harvard University, 1978-1981; International Conference on nonpotential interactions and their Lie-admissible treatment, University of orleans, France, 1982; Nine Workshops on Hadronic mechanics from 1981 to present held at various institutions in the Boston, Area (USA), Belgrad (Yugoslavia), Patras (Greece), Como (Italy), London (England), Beijing (China), and other locations; International Workshop on Symmetry Methods in Physics, J.I.N.R., Dubna, Russia, July 1993; Third International Wigner Symposium, Oxford University, Oxford, England,September 1993; International Conference ,J.I.N.R., Dubna, Russia, June 1993; XVI-th [1993], XVII-th (1994) and XIX-th (1996) International Workshop on High Energy Physics and Field Theory, I. H. E. P., Protvino-Sherpukov, Russia, September 1993; International Conference on the Frontiers of Fundamental Physics, Olympia, Greece, September 1993; VI-th Seminar on High Temperature Superconductivity, J.I.N.R., Dubna, Russia, September 1993; Seventh Marcel Grossmann Meeting on General Relativity and Cosmology, Stanford University, Stanford, CA, U.S.A., July 1994; 1996 Sanibel Symposium, St. Augustine, Florida, March 1995 and February 1996; First Meeting for the Saudi Association for Mathematical Sciences, Riyadh, Saudi Arabia, May 1994; International Conference on Selected Topics in Nuclear Structure, J.I.N.R., Dubna, Russia, July 1994; International Workshop on Differential Geometry and Lie Algebras, Thessaloniki, Greece, December 1994; HyMag Symposium, National High Magnetic Field Laboratory, Tallahassee, Florida, December 1995; International Workshop on new Frontiers in Gravitation, Istituto per la Ricerca di Base, Castle Prince Pignatelli, Monteroduni, Italy, August 1995; National Conference on Geometry and Topology, Iasi, Rumania, September 1995 International Symposium for New Energy, Boulder Colorado, April 1996; International Workshop on the Gravity of Antimatter and Anti-Hydrogen Atom Spectroscopy, Sepino, Italy, May, 1996; Workshop on Differential geometry, Palermo, Italy, June 1996; International Workshop on Polarized Neutrons, J.I.N.R., Dubna, Russia, June 1996. Special thanks are also due for the recent opportunity of delivering lectures or short seminar courses on the various aspects of hadronic mechanics at: Moscow State University,Moscow, Russia, August 1993; Estonia Academy of Sciences, Tartu, August 1993; Theoretical Division, J.I.N.R., Dubna, Russia, September 1993; August 1994; August 1995; August 1996; Ukraine Academy of Sciences,Kiev, September 1993; Institute for Nuclear Physics, Alma Ata, Kazakhstan, October 1993; Institute for High Energy Physics, Protvino, Russia, June 1993, June 1994, June 1995; Theoretical Division, C.E.R.N, Geneva Switzerland, December 1994; Department of Mathematics, Aristotle University, Thessaloniki, Greece; Department of Mathematics, King Saud University, Riyadh, Saudi Arabia; Demokritus Institute, Athens, Greece, December 1994; Institute of Nuclear Physics. Democritos University of Thrace Xanthi, Greece, December 1994; Institute for Theoretical Physics, Wien, Austria, December 1994; Department of Mathematics, University of Constanta,Romania, September 1995; Research Center COMSERC, Howard University, Washington, D.C., U.S.A. April, 1995; Department if Mathematics, Howard University, Washington, D. C., U.S.A., April 1995; The International Center for Theoretical Physics (ICTP), Trieste, Italy, 1992; Department of Nuclear Physics, University of Messina, Italy, June 1996; Department of Mathematics, University of Palermo, Italy, June 1996; Academia Sinica, Beijing, China, supper 1995; The Italian national Laboratories in Frascati, Italy, 1977; The Center for Theoretical Physics of the Massachusetts Institute of Technology, 1976 The Lyman Laboratory of Physics, Cambridge, MA, 1978, delivering a seminar course on the integrability conditions for the existence of a Lagrangian in Newtonian mechanics and field theory; The University of illinois in bloomington, 1968; Russia Academy of Sciences,Moscow, June 1996; and other institutions in various countries. I have no word to express my sincere appreciation and gratitude to all colleagues at the above meetings or institutions for invaluable critical comments. Additional thanks for the critical reading of parts of this paper are due to: M. Anastasiei, Yu. Arestov, A. K. Aringazin, A. K. T. Assis, M. Barone, Yu. Barishev, J. Ellis, T. Gill, J. V. Kadeisvili, A. U. Klimyk, A. Jannussis, N. Makhaldiani, R. Miron, M. Mijatovic, D. Rapoport-Campodonico, D. Schuch, G. T. Tsagas, N. Tsagas, C. Udriste, T. Vougiouklis, H. E. Wilhelm, and others. Finally, this paper has been made possible by rather crucial publications appeared in the following Journals, here acknowledge with sincere appreciation: Foundations of Physics, for publishing: this memoir, the first after the achievement of axiomatic maturity in relativistic hadronic mechanics; the 1981 article on the apparent impossibility for quarks to be elementary at a time of widespread belief to the contrary; and several related articles in classical and operators studies not quoted for brevity; Physical Review A, for publishing the important article by Schuch on the need for nonunitary treatment of nonlinear operator systems; Physical Review D, for publishing the 1981 article on the need to verify the validity of Pauli's principle under nonconservative conditions due to external strong interactions;the 1978 article {3c} on the isotopies of electroweak interactions with a breaking of the gauge invariance; and other papers; Hyperfine Interactions, for publishing the paper on the prediction of a novel light emitted by antimatter; Nuovo Cimento, for the publication of: the 1967 article on the first Lie-admissibity in the physical literature; the 1983 article on the first isotopies of Minkowski spaces, the Lorentz symmetry and the special relativity the 1983 article {4f} on the first operator realization of isosymmetries via a lifting of Wigners theorem; the 1982 article on the first Lie-admissible time-irreversible formulation of open strong interactions; the article on the first isotopies of SU(3), article, the scattering theory, and several other seminal papers; The (MIT) Annals of Physics, for the publication of the 1976 articles on the integrability conditions for the existence and computation of a Lagrangian in field theory, the 1982 article on the crucial identification of the gravitational and electromagnetic fields from the primary electromagnetic origin of mass (that subsequently rendered unavoidable the prediction of antigravity), and others; Journal of Physics G, for publishing the 1981 articles on the rather crucial isominkowskian representation of the behavior of the meanlives of K-o with energy, and other papers; Physica, for publishing the 1985 article on the possibility of regaining convergent perturbative series for strong interactions, and others; Physics Essays, for publishing the 1992 article on classical realizations of Santilli's isogalilean relativity, and the article on the representation of the difference between cosmological redshifts of physically connected quasars and galaxies via Santilli's isospecial relativity, and others; Communications in Theoretical Physics, for publishing a number of crucial articles, such as the first article on the isotopic quantization of gravity, the first article on the isoquark theory, the first article on the isodual representation of antimatter, the first article on the paradox of quantum mechanics at the limit of gravitational singularities, and several others; Annales de la Fondation Louis de Broglie, for publishing the crucial articles on the limitations of current generalized theories, and others; Revista Tecnica, for the publication of articles on the isotopies of Newtonian, analytic and quantum mechanics; Journal of Moscow Physical Society, for the publication of the comprehensive 1993 article on the isotopies and isodualities of the Poincare~ symmetry, including the universal symmetry of all possible Riemannian and Finslerian line elements, which is the single most important paper of these studies from which all results can be uniquely derived; J.I.N.R. Rapid Communications for the publication of the crucial 1993 article on the isotopies of SU(2)- spin with the isopauli's matrices and the reconstruction of the exact isospin symmetry in nuclear physics; International Journal of Quantum Chemistry, for the publication of the crucial 1981 article on the application and experimental verification of hadronic mechanics to superconductivity, with the first attractive force among two identical electrons in singlet couplings at mutual distances smaller than their coherent length; Chinese Journal of Systems Engineering and Electronics, for the publication of the crucial 1995 article on the isotopies of the spinorial covering of the Poincare~ symmetry and of Diracs equations, with application to the synthesis of the neutron from protons and electrons only, and other articles; Mathematical Methods in Applied Sciences, for the publication of the recent comprehensive study {5g} by Kadeisvili on the Lie-Santilli isotheory and related methods; Rendiconti Circolo Matematico di Palermo for the publication of an entire 1996 issue of of their Supplemento entirely dedicated to the new mathematics underlying hadronic mechanics; Acta Applicxandae Mathematica for the publication in 1995 of th crucial application of hadronic mechanics to Bell's inequality, the isotopies of the SU(2) spin symmetry and all that; The Indian mathematical Society, for the publication of numerous seminal papers in pure and applied mathematics at the foundation of hadronic mechanics. ,p> and other Journals. Particular thanks are additionally due to all past and present Editors of the Hadronic Journal and Algebras, Groups and Geometries for their continued encouragement, support and control of various publications quoted in this paper. Additional thanks are due to the participants, editors and and publishers of the Proceedings of some eighteen international workshops and conferences held in the field of hadronic mechanics in the USA, Europe, and China resulted in a total of over thirty volumes, which are too numerous to mention here individually. I must also express my utmost gratitude to G. F. Weiss, S. Smith and P. Fleming, staff of our Institute for basic Research in Palm Harbor, Florida, and numerous other members and visitors through the years, for simply invaluable help, assistance and control in the preparation of this manuscript. It is also my pleasant duty to thank several colleagues for their invaluable contributions in the construction of the hadronic mechanics, particularly during the early years of its study, including: S. Okubo, H. C. Myung, R. Mignani, F. cardone, A. K. Aringazin, A. Kalnay, A. O. E. Animalu, D. Schuch, T. L. Gill, Gr. Tsagas, D. S. Sourlas, J. V. Kadeisvili, E. B. Lin, M. Nishioka, A. Jannussis, G. Eder, J. Fronteau, M. Gasperini, D. Brodimas, P. Caldirola, M. Mijatovic, Y. Prigogine, K. Popper, B. Veljanoski, A. Tellez-Arenas, and others. I cannot close these Acknowledgments without expressing my appreciation to the American, British, Italian, Swedish, French, German, Russian, Chinese physical and other societies for their role in the construction of hadronic mechanics, On my side, I would lille to indicate that. when facing truly fundamental structural advances of pre-existing knowledge as it is the case here, the "burden of proof" on their validity belongs to the author(s) and definitely not to the societies, since their historical as role is that of exercising caution for the very protection of science. On the other side, scientific societies are suggested to exercise tolerance when attacked for insufficient scientific democracy at the time when the battle for new scientific vistas reaches its climax. I cannot close these Acknowledgments without expressing my deepest appreciation to the United States of America for being so generous to me and my family, by permitting me to realize my scientific dreams on hadronic mechanics as well as my personal dreams in the American way of life, sports cars and boats, generosity that has caused in me an unbounded allegiance. It is a truism to say that. without my conduction of research in the U.S.A., hadronic mechanics would not have been completed and established because, even though its main lines had been conceived in Italy, the realization of the above indicated "burden of proof" required "experiumental verifications and novel industrial applications relevant to society" that would have been of difficult realization elsewhere because they must be achieved nowadays outside academia whenever dealing with basic advances over pre-established doctrines, as well known to insiders. On my part, I considered myself a "special immigrant" because: I came here: from a rich Italian family, my father Ermanno Santilli being an Italian Medical Doctor and my grandfather Ruggero Santilli being an Italian industrialist; after achieving in Europe the highest possible education in mathematics physics and chemistry; and while being the recipient of a chair in nuclear physics at the Avogadro Institute in Torino. The construction and proof of hadronic mechanics were possible "by" (rather than "in") the U.S.A. amidst incredible, well known and documented academic obstructions (at time reaching true levels of hysteria against the surpassing of beloved old doctrines), because of: the inspired values of the U. S. Constitution, the best throughout history I ever read; the crucial democracy of its Institutions; and its unique multitude of overlapping social, governmental and industrial structures offering people a variety of ways to realize their dreams, but only following fierce determination, relentless commitment and true values. Most special thanks are finally due to my wife Carla for her grace, class, patience and support in enduring predictable obstructions in the conception, completion and proof of hadronic mechanics. Needless to say, I am solely responsible for the content of this paper owing to the numerous changes occurred during the preparation of the final version. 3.11C. Interior and exterior dynamical systems As santilli recalls, physical systems were classified by Lagrange, Hamilton, Jacobi and other founders of mechanics into: 1) Exterior dynamical systems, consisting of a finite number of point-like particles moving in vacuum (conceived as empty space) without collisions. Note that the lack of collisions is sufficient to admit an effective point-like approximation of particles and, vice versa, the assumption of a point-like structure implies the tacit assumption of lack of collisions since dimensionless points cannot collide. Typical classical examples are given by the Solar system or a spaceship in orbit around Earth in vacuum since in both cases the actual size and shape of the constituents (the planets or the spaceship) do not affect the dynamical evolution, and said constituents can be well approximated as massive points. Typical particle counterparts are given by the atomic structure, particles in accelerators, crystals and other systems admitting a good approximation of the constituents as being dimensionless. Note also that all exterior systems are purely Lagrangian or Hamiltonian, in the sense that the knowledge of only one quantity, a Lagrangian or a Hamiltonian, is sufficient to characterize the entire dynamics. 2) Interior dynamical systems, consisting of a finite number of constituents moving within a physical medium, in which case point-like abstraction are no longer valid, since the actual size and shape of the constituents has direct implications in the dynamical evolution. Typical classical examples are given by the structure of a planet such as Jupiter or a spaceship during re-entry in our atmosphere. Typical particle examples are given by the structure of the Sun or, along similar lines, the structure of nuclei and hadrons since, in all these cases, motion of one constituent occurs within the medium characterized by the wavepacket of other surrounding constituents. Note that interior systems are non-Lagrangian and non-Hamiltonian, in the sense that a given Lagrangian or Hamiltonian is insufficient to characterize the dynamics due to the need for a second quantity characterizing the contact interactions represented with external terms in the analytic equations (1.2). As reviewed in Section 3.9, the above classification was eliminated in the 20th century by organized interests on Einsteinian doctrines via the abstraction of all particles as being point-like, consequential elimination of the contact non-Lagrangian or non-Hamiltonian interactions, and consequential elimination of interior dynamical systems. As indicated in Section 1.1, the first and perhaps most fundamental scientific contribution by Santilli has been to prove via Theorem 1.1 that the above abstraction was a figment of academic imagination. In any case, the inconsistency of most of the 20th century particle physics can be unmasked by noting that both elastic and inelastic scattering events are impossible for dimensionless particles by conception, again, because dimensionless particles cannot influence the trajectories of other dimensionless particles except for Coulomb interactions. Alternatively, the experimental evidence of deflection of trajectories in scattering processes from a purely Coulomb behavior is evidence on the existence of non-Lagrangian and non-Hamiltonian interactions precisely according to Theorem 1.1. It is evident that Santilli's studies, including those on hadronic mechanics, specifically refer to interior dynamical systems that will be the sole system considered hereon. As we shall see, the second quantity needed for the representation of size, shape and dynamics of interior systems will be given by the isounit. Hence, special relativity and quantum mechanics are hereon assumed as being exactly valid for exterior dynamical systems, and Santilli's isorelativity and hadronic mechanics are hereon assumed as being exactly valid for interior dynamical systems with unique and unambiguous interconnecting limits characterized by the isounit alone. For references in the above classification, including an accurate historical analysis, we refer the serious scholar to the 1995 FTM Volumes I and II. An instructive reading in the topic of this section is also that of Santilli's ICTP paper Inequivalence of exterior and interior dynamical problems R. M. Santilli, ICTP preprint # IC/91/258 (1991) published in "Santilli's 1991 Papers at the ICTP", International Academic Press (1992) 3.11D. Closed and open dynamical systems Lagrange, Hamilton, Jacobi and other founders of mechanics introduced the following additional classification of dynamical systems: A) Closed dynamical systems, given by systems that can be well approximated as being isolated from the rest of the universe, thus verifying the ten conservation laws of total quantities characterized by the Galilei or the Poincare' symmetry (the conservation of the total energy, linear momentum, angular momentum and the uniform motion of the center of mass). This is typically the case for both exterior and interior systems, whether at the classical or operator levels, when isolated from the rest of the universe. B) Open dynamical systems, given by system in interaction with an external component under which at least one of the ten Galilei's or Poincare' conservation laws is not verified due to exchanges of physical quantities between the system considered and the external component. Needless to say, when the external component is included, the open system is completed into a closed form. Again, for the intent to adapt nature to Einsteinian and quantum theories, another widespread belief of the 20th century physics has been that "closed systems can solely admit conservative-potential forces" or, equivalently, that internal, contact, nonpotential interactions do not verify all ten Galilean or Poincare' conservation laws and, consequently, the contact-nonpotential forces "do not exist in particle physics". The above belief has caused an alteration of physical research of historical proportions because the belief is at the foundation of some of the most equivocal assumptions of the 20th century physics, such as the belief that Einstein's special relativity and quantum mechanics are exactly valid for the structure of hadrons, nuclei and stars. The political argument (political because without a serious scientific basis) is that said systems verify the ten total conservation laws when isolated from the rest of the universe. Hence, the argument says, Einsteinian doctrines and quantum mechanics hold for their interior. I) Closed non-Hamiltonian system, or, more technically, closed variationally nonselfadjoint systems (see Section 2.9), given by systems verifying the ten Galilean or Poincare's conservation laws, thus being closed, yet they admit internal forces that are Hamiltonian as well as non-Hamiltonian or, more technically, variationally selfadjoint (SA) and nonselfadjoint (NSA). II) Open non-Hamiltonian systems, or open variationally nonselfadjoint systems, given by systems that do not verify at least some of the ten Galilean or Poincare' conservation laws due to non-Hamiltonian, or nonselfadjoint interactions with an external system. It is evident that these systems are irreversible over time. In fact, Santilli proved in the 1982 FTM Volume II, page 235, that a Newtonian system of two or more particles with potential/selfadjoint and nonpotential/nonselfadjoint forces (3.127) mk d2rk/dt2 = FkSA(r) + FkNSA(t, r, v, a, ...), k = 1, 2, 3, ..., verifies all ten conventional total conservation laws when the nonselfadjoint forces verify the following simple algebraic conditions (3.128) ∑k FkNSA = 0, (3.129) ∑k pk ∗ FkNSA = 0, (3.130) ∑k rk ∧ FkNSA = 0, where ∗ and ∧ denote scalar and vector products, respectively. The operator counterpart of closed non-hamiltonian system is easily provided by Santilli's Lie-isotopic theory (Section 2.7), in general, and the Galilei-Santilli or Lorentz-Poincare'-Santilli isosymmetry, because: the ten conventional generators, representing the ten total conserved quantities are preserved identically by the isotopic symmetries; the selfadjoint forces are represented by the Hamiltonian; and the nonpotential forces are represented by the isounit I*(t, r, p, ...) = 1/T(t, r, p, ...), as we shall see. The totally symmetric character of the Lie-isotopic product [Q, H]* = QTH - HTQ assures total conservation laws. Nevertheless, closed non-Hamiltonian systems admit internal exchanges of all physical quantities, that is, we have internal exchanges not only of the energy, but also of mass, charge, angular momentum, spin, etc. without any conflict with total conservation laws since we merely have internal exchanges that compensate each other in their sum due to the isolated character of the system. As we shall see in the next chapters, this feature alone of hadronic mechanics has far reaching implications and applications mostly beyond our imagination at this writing. The case of open non-Hamiltonian systems is the second fundamental class of systems studied by hadronic mechanics and includes all energy releasing processes. These systems require Santilli's Lie-admissible theory (Section 2.8), since the lack of totally antisymmetric character of the brackets (Q, H)* = QRH - HSQ in the time evolution law (3.110) assures the description of time rate of variations of physical quantities of which conventional conservation laws are a particular case, in the same way as Santilli isoalgebras are a particular case of Santilli's Lie-admissible algebras. The classical notion of closed non-Hamiltonian systems was introduced in the 1982 FTM Volume II, with the operator counterpart presented in various papers (see EHM and HMMC). An instructive reading is also that of the ICTO paper Closed systems with non-Hamiltonian internal forces R. M. Santilli, ICTP [preprint # IC/91/259 (1991) published in "Santilli's 1991 Papers at the ICTP", International Academic Press (1992) 3.11E. Newton-Santilli isoequations >From Theorem 1.1, the central problem addressed by Santilli was the achievement of a mathematically and physically consistent, classical and operator formulation of non-Hamiltonian (or variationally nonselfadjoint) forces, whose correct quantization had escaped all attempts during the 20th century. Santilli knew that such an objective cannot be achieved without an action principle, since the latter is crucial for a consistent map from classical to operator forms. But, Newtonian systems with nonpotential forces FNSA(t, r, v, ...) do not admit any action principle (when formulated with conventional mathematics). Thus, Santilli searched for an identical reformulation of Newton's equation (3.127) capable of admitting a covering action principle suitable for consistent maps to operator forms. It is at this point where the dimension of Santilli's scientific edifice can be appraised, since it encompasses a variety of discoveries in various branches of mathematics, physics and chemistry, all part of one single monolithic structure that will indeed resist the test of time due to its axiomatic consistency, beauty, experimental verification and industrial applications. Santilli struggled for decades to reformulate Newton's equations into a form admitting a covering variational principle without success, until he discovered the iso-, geno- and hyper-differential calculus in the mid 1995, that allowed him to achieved a series of structural generalization of Newton equations since Newton's "Principia" of 1687, the first known to the Foundation (evidence of dissident views is solicited for presentation in this section). The broader equations are today known as Newton-Santilli iso-, geno-, hyper- and isodual equations. Regrettably, we can solely indicate here the Newton-Santilli isoequations and refer the scholar to the literature available in free download. Let Stot(t, r, p) = E(t, x, It) x E(r, x, Ir) x E(v, x, Iv) be the Kronecker product of the representation spaces for the Newton equations with time t, coordinates r and velocity v, conventional associative multiplication axb = ab, and units It = 1, Ir = Ip = Diag. (1, 1, 1). Santilli introduces the following isotopies of the Newtonian representation space with related isocoordinates, isoproducts and isounits (Section 2) (3.131) S*tot(t*, r*, v*) = E*(t*, x*, I*t) x* E*(r*, x*, I*r) x* E*(v*, x*, I*v), in the isotime, isocoordinates and isovelocities (3.132) t* = tIt*, r* = rI*r , v* = vI*v with real-valued, positive-definite isounits (3.133) I*t = 1/Tt = f(t, r, v, ..), I*r = 1/Tr = Diag. (m12, m22, m32)g(t, ...), I*v = 1/Tv = Diag. (n12, n22, n32)h(t, ...). Then, the Newton-Santilli isoequations can be written (3.134) m*k x* d*v*/d*t* - F*SA = 0, namely, Newton's equations with nonpotential forces on conventional spaces over conventional numbers are turned into a form with sole potential forces on isospace over isonumbers, by embedding all nonpotential forces in the isounits, here expressed via isocoordinates and isoderivatives. Among the infinite number of possible solutions, we indicate the simple realization (3.135) I*t = 1/Tt = 1, I*r = 1/Tr = Diag. (1, 1, 1), I*v = 1/Tv = Diag. (n12, n22, n32)h(t, ...), for which Eqs. (3.134) become for the simpler one-dimensional case with nk = 1, k = 1, 2, 3, and the simplification m* x* = m Em Tm = m, (3.137) m dv*/dt - F*SA = (m dv/dt - FSA + mvTvdE*v/dt)E*v = 0, with simple solution for v constant (3.138) mvTvdE*v/dt = - FNSA, E*v = exp[(mv)-10t FNSAdt]. from which endless examples can be derived. To understand the advance over Newton's original conception, the serious scholar should note that the conventional Newton equations can only represent point-like particles due to the background local-differential topology and geometry, while the Santilli's covering equations represent particles with their actual extended shape under the most general possible potential and nonpotential interactions, due to the background novel isotopology. Additionally, Santilli has provided the genotopic, hyperstructural and isodual coverings of Newton's equations for irreversible and multivalued matter systems and antimatter systems, respectively, that we cannot possibly review here. Hence, to select the appropriate covering of Newtonian mechanics, one should identify whether the considered classical equations deal with: A) matter or antimatter; B) Closed or open systems; and C) Single-valued or multi-valued systems. Then, one should select the appropriate covering mechanics. Mathematically inclined scholars should know that Santilli has provided one single abstract formulation encompassing all possible eight different equations, including the conventional, iso-, geno-, hyper-systems and their isoduals, although such a unified treatment is not recommended for physical applications because excessively abstract. Santilli's coverings of Newton's equations and mechanics can be studied in the 1996 RCMP memoir, and in EHM Volumes I and II. 3.11F. Hamilton-Santilli isomechanics The embedding of the external terms in Lagrange's and Hamilton's equations in the generalized units, and the consequential regaining of a variationally selfadjoint formulation on isospaces over isofields, have far reaching implications. To begin, the true Hamilton's equations (1.2) are identically rewritten in the form known as Hamilton-Santilli isoequations, (3.139) d*r*/d*t* = ∂*H(r*, p*)/∂*p*,   d*p*/d*t* = - ∂*H(r*, p*)/∂*r*, namely, the analytic equations with external terms on conventional spaces over conventional fields are identically rewritten in a form without external terms when formulated on isospaces over isofields. Recall that Hamilton's equations with external terms do not characterize any algebra with the brackets of the time evolution, let alone violate all Lie algebras (Section 1.1). Via Eqs.(3.139), Santilli restores an algebra in the brackets of the time evolution with external terms, and this algebra results to be a Lie isoalgebra as a covering of the algebra for the truncated analytic equations. In fact, Eqs. (3.139) characterize the time evolution of a physical quantity Q(t) (3.140) dQ/dt = [Q, H]*, whose brackets coincide with the conventional Poisson brackets at the abstract level. Among an infinite number of algebraic solutions, a simple one is given by (3.141) I*t = 1/Tt = 1, I*r = 1/Tr = 1 - FSA/FNSA, I*p = 1/Tp = 1, for which (3.142) d*r*/d*t* - ∂*H/∂*p* = dr/dt - ∂H/∂p = 0, (3.143) d*p*/d*t* + ∂*H/∂*r* = dp/dt + ∂H/∂r - FNSA = 0 The first important consequence is that the Hamilton-Santilli isomechanics admits indeed an action principle. In fact, under the preceding simple realization Eqs. (3.139) can be derived from the isoaction principle (3.144) δ*A* = δ*∫ (p* x* d*r* - E* x* t* = 0, where one should note that the isoproduct for the space component is different than that for the time component. The Hamilton-Jacobi-Santilli isoequations on isospaces over isofields expressed in terms of isocoordinates are given by (3.145) ∂*tA* + H = 0, (3.146) ∂*rA* - p = 0, (3.147) ∂*pA* = 0. For open irreversible single-valued or multi-valued or antimatter systems we have the Hamilton-Santilli geno-, hyper and isodual mechanics, respectively, we cannot review here. We can merely indicate that, in this case, at least one of the isounit must be given by a nonsymmetric matrix to assure the lack of invariance under time reversal. Note from Section 3.11D that the Hamilton-Santilli isomechanics is solely applicable to closed non-Hamiltonian systems, trivially, because the antisymmetric character of the brackets of the time evolution imply the conservation of the Hamiltonian and other physical quantities. Again, to select the appropriate covering mechanics, one should identify whether the considered system deals with: A) matter or antimatter; B) Closed or open systems; C) Single-valued or multi-valued systems. The selection of the appropriate mechanics is then consequential. The topic of this section can be best studied in the 1996 RCMP memoir, or in EHM Volumes I and II. 3.11G. Animalu-Santilli isoquantization The conventional naive quantization maps the Hamiltonian action into an expression depending on Planck's constant (3.148) A = ∫ (p dr - H dt) → - i (h/2π) ln |ψ), thus setting the foundations for "quantized orbits" characterized by h/2π. The map of the Hamilton-Santilli isoaction into an operator form was first identified by A. O. E. Animalu and R. M. Santilli at the XII Workshop on Hadronic Mechanics of 1990, it is today called the Animalu-Santilli isoquantization, and can be written (3.149) A* = ∫ (p* x* d* r* - H* x* d*t*) → - i I*r ln**), where one should note that I*r is the coordinate isounit. The preceding expression characterizes the lifting of Planck's constants into the space isounit (3.150) h/2π → I*r(t, r, p, E, ...), under the subsidiary condition (verified naturally by all isounits used in hadronic mechanics) (3.151) Limr >> 1 fm = h/2π = 1. Expressions (3.150), (3151) constitute the conceptual foundations of hadronic mechanics. Recall that, by central assumption, quantum mechanics is valid for the exterior problem of point particles in vacuum, while hadronic mechanics is assumed valid for the interior problem of extended particles moving within a medium composed by other particles, as expected for the constituents of hadrons, nuclei and stars, of course, according to different degrees of mutual penetrations. Consequently, map (3.150) represents the fundamental assumption of hadronic mechanics according to which Planck's constant becomes a locally varying operator representing the impossibility to have quantized orbits for an extended particle immersed within a hyperdense medium as it is the case, for instance, for an electron in the core of a star, under the condition (3.151) of recovering conventionally quantized orbits when motion returns to be in vacuum. Hence, the serious scholar accustomed to the usually quantized orbits for the structure of atoms should not expect the same quantized orbits in the interior of hadrons, nuclei or in the core of stars to avoid evident contradictions. More specifically, when a hadronic constituent is subjected to an excited orbit, that orbit is expected to be in vacuum, rather than in the interior of hadrons, thus belonging to quantum rather than hadronic mechanics. As we shall see in Section 4, this aspect is very insidious and confuses the problem of classification of hadrons generally searched via a spectrum of quantum states, with the structure of one individual hadron for which only one orbit is possible at mutual distances smaller than the size of the wavepackets of particles. For references and a detailed presentation, the serious scholar is suggested to study EHM Volume II and HMMC Volume III. The original contribution by Animalu and Santilli is available from the pdf file A. O. E. Animalu and R. M. Santilli, in "Hadronic Mechanics and Nonpotential Interactions," M. Mijatovic, Editor, Nova Science, New York, pp. 19--26 (l990). 3.11H. Hilbert-Santilli isospaces The isotopic branch of hadronic mechanics is formulated on Hilbert-Santilli isospaces Η* that are the image of conventional Hilbert spaces Η over a conventional field F under nonunitary transformations (see Section 3.xx below), with isostates*), isoinner product defined on an isofield F* (3.152) (*ψ| x**) I* = (*ψ| T |ψ*) I* ε F* isonormalization (3.153) (*ψ| *x |ψ*) I* = (*ψ| T |ψ*) I* = I* or (3.154) (*ψ| T |ψ*) = 1, isoexpectation values for an operator Q (3.155) (Q*) = (*ψ| *x Q x**) I* = (*ψ| T Q T |ψ*) I*, and related theory of isolinear operators on Η* over F* where from now on, unless otherwise indicated, I* and T refer to the space isounit and isotopic elements, respectively. . A fundamental property is that, if an operator Q is Hermitean on Η over F, then it is iso-Hermitean, namely, it verifies the condition of Hermiticity on Η* over F*, (3.156) (ψ| [ Q |ψ) ] I = [ (ψ| Q ] |ψ)] I → → (*ψ| T [ Q T |ψ*) I* ] = [ (*ψ| T Q ] T |ψ*) I*, Consequently, any physical quantity that is observable for quantum mechanics is equally observable for the covering hadronic mechanics. Note that I* is indeed the correct right and left unit of the isotopic branch of hadronic mechanics because it verifies the identities (3.157) I* x**) = I* T |ψ*) = |ψ*), (*ψ| *x *I = (*ψ| T *I = (*ψ|. with isoexpectation value (3.158) (I*) = (*ψ| T I* T |ψ*) I* = (*ψ| T |ψ*) I* = I*. For details, extention to geno-, hyper- and isodual cases, and historical notes we refer the interested scholar to the 1995 EHM Volumes I and II. 3.11I. Schroedinger-Santilli isoequations As indicated earlier, the first lifting of Schroedinger's equations was done by Santilli in 1979, and reinspected in various works. The final version was reached by Santilli in the 1996 RCMP memoir as part of the discovery of the differential calculus. The desired equations can be expressed via the image of the Hamilton-Jacobi-Santilli isoequations (3.145)-(3.147) under map (3.149). For the simple case of a constant isounit, or an isounit averaged to constant, the isoequation can be written (3.59) ∂*t A* + H = 0 → - i (h/2π) I*rt [Ln**)] + H = 0, (3.160) ∂*r A* - p = 0 → - i (h/2π)I*rr [Ln**)] - p = 0, (3.161) ∂*pA* = 0 → - i (h/2π) I**p [Ln**)] = 0, where all coordinates and their derivatives are isotopic (even if not indicated due to limitations of the hmtl language). Via elementary calculations, the above equations can be written in the final form known as Schroedinger-Santilli isoequations (3.162) - i ∂*t*) = - i I*tt*) = H x**) = H Tr*) = E |ψ*), (3.163) p x**) = p Tr*) = - i ∂*r*) = - i I*rr*), (3.164) - i I**p*) = 0, where: one should note the natural emergence of the isodifferential calculus; as well as the last condition expressing the independence of the isowavefunction from the momenta, which condition is crucial for hadronic mechanics to be an axiom-preserving covering of quantum mechanics. The study of open irreversible single or multi valued matter systems and their antimatter counterparts requires the use of Schroedinger-Santilli geno-, hyper- and isodual equations, respectively, we cannot possibly review here. Serious scholars are suggested to study EHM Volumes I and II and HMMC Volume III. 3.11J. Heisenberg-Santilli isoequations The isotopies of Heisenberg's equations were discovered by Santilli in the 1978 original memoirs, their final version was also reached in the 1996 RCMP memoir jointly with the discovery of the isodifferential calculus, are today called Heisenberg-Santilli isoequations, and can be written for the time evolution of an iso-Hermitean operator Q(t) in the finite form (with simplifications of inessential isoproducts and the simple assumption I*t = 1) (3.165) Q(t) = W(t) Q(0) W(t) = exp(H T t i) Q(0) exp(- i t T H), with infinitesimal form easily derivable from the preceding expression (where we ignore again for simplicity the isotopy of time) (3.166) i dQ/dt = Q T H - H T Q = [Q, H]*. and canonical isocommutation rules also reached for the first time in the 1996 RCMP memoir (3.167) [ r, p]* = i I*r,    [r, r]* = [p, p]* = 0. For details, we suggest study EHM Volumes I and II and HMMC Volume III. 3.11K. Dirac-Myung-Santilli isodelta function and elimination of quantum divergencies One of the main limitations of quantum mechanics has been the emergence of divergencies, such as the divergent character of the perturbation theory for strong interactions, divergencies in Feynman's diagrams, and others. One of the main contributions of hadronic mechanics is the elimination of quantum divergencies ab initio, thus permitting, for the first time in scientific history, convergent perturbative expansions for strong interactions. As it is well known, the origin of the divergencies in quantum mechanics rests with the point-like abstraction of particles, which abstraction is technically represented by the Dirac delta function δ(r - ro) that is divergent at r = ro. However, the image of the Dirac delta function in hadronic mechanics, today known as Dirac-Myung-Santilli isodelta function from a paper of said originators of 1982, is given by (3.168) δ*(r - ro) = ∫-∞+∞ ei k T (r - ro) dk, where, as one can see, there is no longer a singularity at r = ro under a suitable selection of the isotopic element. In turn, it is evident that the scattering theories of hadronic mechanics are free of divergencies from their very foundations, as shown in existing papers. Additionally, for any given divergent or weekly convergent series Q(w) = I + w (Q H - H Q)/1! + ... → ∞, I = 1 there always exists an isounit I* = 1/T whose value (or average value) is much bigger than w (the isotopic element is much smaller than w) under which the above series becomes strongly convergent, namely, it verifies the expression where N is a finite positive number (3.169) Q(w) = I* + w (Q T H - H T Q)/1! + ... ≤ N The isodelta function was presented for the first time in the paper Foundation of the hadronic generalization of atomic mechanics, II: modular-isotopic Hilbert space formulation of strong interactions H. C. Myung and R. M. Santilli, Hadronic Journal Vol. 5, pages 1277-1366 (1982). The name of Dirac-Myung-Santilli delta function was introduced by M. Nishioka in the following paper of 1984 Extension of the Dirac-Myung-Santilli delta function to field theory M. Nishioka, Lettere Nuovo Cimento Vol. 39, pages 369-372 (1984). M. Nishioka, Hadronic J. Vol. 7, 1636-1679 (1984) Figure 3.14. An illustration (left) of the origin of the divergencies of quantum mechanics in the singularity of Dirac's delta function δ(r - ro)at the value r = ro, and their removal ab initio in hadronic mechanics (right) by the Dirac-Myung-Santilli isodelta function that no longer admits the preceding divergencies for a suitable selection of the isotopic element, here considered as being dependent on (r-ro)2. In fact, the removal of the divergencies at the indicated level carries over at all levels the scattering and perturbation theories of hadronic mechanics. The above pioneering studies established the absence of quantum divergencies in hadronic mechanics and were followed by several studies reviewed in EHM Vol. II, including the convergence of isoperturbation expansions. The most recent contribution in the new scattering theory of hadronic mechanics (that will be reviewed in Chapter 5) is that of vthe paper. Nonunitary-isoscattering theory, I: Basic formalism without difergencies for low energy reversible scattering A. O. E. Animalu and R. M. Santilli, for the Procveedings of the 2008 Yard Conference, submoitted for publication. The starting point for the geno- and hyper-coverings of isomechanics is, again, Newton's equation, this time for the embedding of irreversibility in the mathematical foundations of the dynamics, via the genotopic lifting of the basic unit of the Euclidean space and related associative product among two generic quantities Gk, k = 1, 2, into two inequivalent formulations, one to the right and a complementary one to the left (see Section 2.8), where, again, the symbols f and b denote forward and backward dynamics, respectively, (3.170) If = 1/S,   Gi xf Gj = Gi x S x Gj, (3.171) bI = 1/R,   iG bx jG = iG x S x jG, with interconnection crucial for consistent time reversal images (3.172) If = 1/S = (bI), in which case the right and left genounits are indeed the correct units for both products. The next step is the selection of one direction in time, generally assumed to be the forward, and represent it with Santilli genomathematics to the right, that is, with genonumbers to the right, genospaces to the right, genogeometries to the right, etc. To avoid catastrophic inconsistencies often not noted by non-experts in the field, the above selection requires the religious restriction of all multiplication and other operations to the right. Under the above foundations, we have the Newton-Santilli genoequations to the right (3.173) mfk xf dfvf/dftf - FfSA = 0; that, as one can see, is indeed irreversible because it is inequivalent to its time reversal image. Similarly, we have: the Hamilton-Santilli genoequations to the right (3.174) dfrf/dftf = ∂fH(rf, pf)/∂fpf,   dfpf/dftf = - ∂fH(rf, pf)/∂frf, related genoaction to the right and Hamilton-Jacobi-Santilli genoequations to the right here omitted for brevity; the Schroedinger-Santilli genoequations to the right (3.175) - i ∂ftf) = - i Ifttf) = H xff) = H S |ψ*) = E |ψf), (3.176) p xff) = p S |ψf) = - i ∂frf) = - i Ifrf), action on a geno-Hilbert space to the right, and the Heisenberg-Santilli genoequations evidently including both actions to the right and to the left because originating from corresponding universal enveloping genoassociative algebras (see Section 2.8) (3.177) Q(t) = W(t) Q(0) Z(t) = exp(H S t i) Q(0) exp(- i t R H), (3.178) i dQ/dt = Q R H - H S Q = (Q, H)*. with corresponding genotopies of all remaining aspects of the isotopic branch of hadronic mechanics. The hyperstructural branch to the right (primarily used for biological structures but also for multi-dimensional universes in physics) is essentially given by the above genotopic branch in which the genounits are assumed to be multi-valued, that is, to have a finite ordered set of values (3.179) Ir = 1/S = {I1r, I2r, I3r, ...}, (3.180) lI = 1/R = {...., 3lI, 2lI, 1lI} with all multi-valued hyperstructures following from the above basic assumption on the fundamental unit. A serious study of the above geno- and hyper-mechanics can only be achieved with a serious study of Santilli's 1996 RCMP memoir, the 1995 EHM Volumes I and II and the 2008 HMMC Volume III. 3.11M. Isodual branches of hadronic mechanics Hadronic mechanics admits four different isodual branches for the representation of antimatter in conditions of increasing complexity according to the following classification: 1) isodual quantum mechanics, for the description of point-like abstractions of antiparticles in exterior dynamical conditions in vacuum (presented in Section 3.10); 2) Isodual isomechanics, for the description of closed non-Hamiltonian systems of extended antiparticles; 3) Isodual genomechanics, for the description of open systems of extended antiparticles; and 4) Isodual hypermechanics, for the description of multi-valued universes of antimatter.. All the above isodual mechanics can be constructed from the corresponding mechanics for matter via the application of the isodual map (3.181) Q(t, r, p,...) → - Q(-t, - r, - p, ...), to the totality of the quantities for matter and the totality of their operations. For a serious knowledge we suggest again the study of Santilli's 1996 RCMP memoir, the 1995 EHM Volumes I and II and the 2008 HMMC Volume III. A typical two-body quantum mechanical system is given by the hydrogen atom in which the two constituents are well approximated as being point-like since the mutual distance is much bigger than the size of the wavepacket of the constituents. In this case, the system is entirely represented with a Hamiltonian of the type (3.182) H(r, p) = ∑k pk2/2mk + V(r). In the corresponding case of two body hadronic systems, the constituents are at mutual distances equal or smaller than 1 fm = 10-13 cm, in which case the preceding point-like abstraction of the constituents is no longer valid because the actual extended character of the constituents, their actual shape, their density and other features, directly affect the dynamics. Suppose that the two particles have the shape of spheroid ellipsoids with semiaxes nak2, a = 1, 2, k = 1, 2, 3. Clearly, the representation of these shapes is beyond any capability of a Hamiltonian, but shapes can be easily represented via Santilli's isounit. Suppose that the above two extended particles with wavefunctions ψ1 and ψ2 are in conditions of partial mutual penetration (Figure 1.3), as it is the case for electrons in valence bonds, hadronic constituents, nuclear constituents and other structures. These physical conditions evidently cause nonlocal interactions extended over the volume of mutual overlapping that can be represented with volume integral ∫ ψ1(r) ψ2(r) dr3. Clearly, this mutual penetration cannot be represented with a quantum Hamiltonian for numerous reasons, beginning with a granting of potential energy to contact nonpotential effects, let alone the violation of the background local-differential topology. However, the same interactions can be readily represented with Santilli's isounit because the underlying topology is indeed nonlocal-integral. By combining these and other aspects, we can see that the considered two-body hadronic system can be characterized by the Schroedinger-Santilli isoequation (3.162), or the Heisenberg-Santilli isoequation (3.166), with the same Hamiltonian H as in Eq. (3.182), plus the isotopic element T given by (3.183) T = Diag. (1/n112, 1/n122, 1/n132) Diag. (1/n212, 1/n222, 1/n232) x x exp[ − F(t, r, p, E, μ, ψ ψ*, ...) ∫ ψ1(r) ψ2(r) dr3], where the exponent in general and the F function in particular, originate at the Newtonian level as in Eq. (3.138) and represent nonpotential interactions whose explicit form depends on the case at hand (see the applications in Chapters 4 and 5). Note that isotopic element (3.183) verifies the condition for strong isoconvergence of divergent quantum series, Eq. (3.169). A most important feature of the above isotopic element is that, for mutual distances much bigger than 1 fm, the volume integral is null and the shapes become spherical due to absence of nonlocal interactions, thus verifying the basic condition (3.151), i.e., (3.184) Limr >> 1 fm T = I, namely, hadronic mechanics recovers quantum mechanics uniquely and identically for all mutual distances of particles bigger than their size. As a result, hadronic mechanics has been built to provide a "completion" of quantum mechanics solely applicable at short distances essentially along the historical argument by Einstein, Podolsky and Rosen (see below for more comments). As we shall see in the next chapters, two body hadronic bound states with Hamiltonian (3.182) and isotopic element (3.183), when applicable, provide exact numerical representations in various fields that are impossible with quantum mechanics. 3.11O. Simple construction of hadronic mechanics It is important for readers to know that all mathematical and physical methods of hadronic mechanics can be constructed via the simple nonunitary transform of quantum models. This construction was first identified by Santilli in the 1978 original memoirs, studied extensively by various authors and will be heavily used in the subsequent outline of experimental verifications and applications of hadronic mechanics. Construction of isomodels. The starting point is the identification of the nonunitary transform with the basic isounit of the model. For the case of two-body hadronic particles, the isounit is the inverse of the isotopic element (3.183), therefore yielding the identification (3.185) WW = I* = Diag. (n112, n122, n132) Diag. (n212, n222, n232) x x exp [ F(t, r, p, E, μ, ψ ψ*, ...) ∫ ψ1 ψ2 dr3]. Once Santilli's isounit has been identified on groups of physical requirements (see the Chapters 4 and 5 for numerous realizations), the lifting of a quantum model into the hadronic form is simply achieved via the application of the above nonunitary transform to the totality of the mathematics and physics of the considered quantum model, without exceptions to avoid catastrophic inconsistencies. In this way, we have: the very simple lifting of: the unit I of quantum mechanics into the isounit, (3.186) I → W I W = I* the lifting of numbers n into isonumbers (3.187) n → UnU = n* = n I*; the lifting of conventional associative product nm between two numbers n and m into the isoproduct (3.188) nm → U(nm)U = (UnU)(UU)-1(UmU) = n* T m* = n* x* m*; the lifting of Hilbert states | ψ ) into Hilbert-Santilli isostates | ψ* ) (3.189) | ψ ) → U [ | ψ ) ] U = | ψ* ); the lifting of the conventional Hilbert product into the inner isoproduct over the isofield of isocomplex isonumbers (3.190) ( ψ | ψ ) → U [ ( ψ | ψ ) ] U = ( ψ* | T | ψ* ) I*; the lifting of the conventional Schroedinger equation into the Schroedinger-Santilli isoequation (3.191) H | ψ ) = E | ψ ) → U [ H | ψ ) ] U = (U H U) (U U)-1 [ U | ψ ) U) = H* T | ψ* ) = H* x* | ψ* ) = U [ E | ψ* ) ] U = E' | ψ* ), where one should note the change in the numerical value of the eigenvalue, E → E' called isorenormalization. In fact, E is the eigenvalue of H, while E' is the eigenvalue of the different operator HT, thus implying that E ≠ E'. Clearly, the isorenormalization of the energy is a fundamental feature of hadronic mechanics for numerous applications. Construction of geno- and hyper-models. Genomodels are constructed via two different nonunitary transforms, (3.192) WW ≠ I, ZZ ≠ I, and the following identification of the forward and backward genounit (3.193) If = W Z, bI = Z W The entire forward and backward genotopic branch of hadronic mechanics can then be constructed by applying the above nonunitary transforms to the totality of the quantum formalism. A similar procedure holds for the construction of the forward and backward hyperstructural branches of hadronic mechanics.. As indicated earlier, the physical consistency of quantum mechanics is due to the invariance over time of: the basic units of measurements, the observability of operators and the preservation of the same numerical predictions under the same conditions at different times. Hadronic mechanics does indeed verify these central conditions of physical consistency, although at a covering level. This feature can be simply seen as follows. Recall that the time evolution of hadronic mechanics is nonunitary when defined on a conventional Hilbert space defined over a conventional field of complex numbers. It is easy to see that, under these assumptions, hadronic mechanics is not invariant over time. In fact, following the identification of the isounit with a nonunitary transform, Eq. (3.186), a repeated application of the same transform does not leave invariant the isounit, (3.194) I* → W I* W = I*' ≠ I*. But, as stressed before, hadronic mechanics must be elaborated with its own mathematics to prevent inconsistencies. Hence, nonunitary transforms must be reformulated in the following isounitary transformations (3.195) WW ≠ I, W = W*T1/2, (3.196) WW = W*x*(W*) = (W*)W* = I*, It is then easy to see that isounitary transformations preserve Santilli's isounit, thus preserving over time the basic units of measurements and the actual shape of particles, (3.197) I* → W* x* I* x* (W*) = I*. It is also easy to prove that isounitary transforms preserve Hermiticity, thus preserving the observability of operators, (3.198) H* = (H*) → W*x H *x (W*) = H'* = (H'*). Finally, it is easy to see that isounitary transforms predict the same numerical values under the same conditions at different times because of the verification of the following condition at the isounitary level (3.199) H* T |ψ*) = E |ψ*) → W* x* [ H* x**) ] x* (W*) = H'* x**)' = W* x*[ E |ψ*) ] x* (W*) = E |ψ*)' in which one should note the invariance of the numerical value of the isotopic operator and of the isoeigenvalue. The invariance of Lie-admissible branch of hadronic mechanics, when formulated on Hilbert-Santilli genospaces over genofields, follows the same lines. This invariance was first studied in the following 1997 paper Invariant Lie-admissible formulation of quantum deformations R. M. Santilli, Found. Phys. Vol. 27, 1159- 1177 (1997) Foreword Relativistic hadronic mechanics is, of course, the most important branch of the mew discipline for experimental verifications (chapter 6), theoretical predictions (Chapters 7. 8) and industrial applications (Chapters 4, 5, 9). It comprises the isotopic, genotopic and hyperstructural liftings of conventional relativistic quantum mechanics for matter in non-Hamiltonian reversible, irreversible and multi-valued conditions, respectively, and their isoduals for antimatter in corresponding conditions. Evidently, we cannot possibly review such a vast structure and are regrettably forces to provide the main lines solely for the isotopic branch, hereon referred to as isorelativistic hadronic mechanics.. The following paper presents relativistic isomechanics in a final invariant form The most comprehensive presentation of the field remains Santilli's 1995 monograph "Elements of Hadronic Mechanics" Vol. II: "Theoretical Foundations" R. M. Santilli, The primary scope of isorelativistic hadronic mechanics is to provide a quantitative representation of the mutations of "particles" into "isoparticles", namely, the alteration of the "intrinsic" as well as kinematic characteristics of particles in the transition from motion in empty space to motion within a hadronic medium, while recovering relativistic quantum mechanics uniquely and identically when the particles return to move in vacuum or, equivalently, when particles are at sufficient mutual distances to allow their point-like abstraction. Recall that particles can be defiend as unitary irreducible representations of the Lorentz-Poincare' symmetry, while isoparticles can be defined as isounitary irreducble representations of the covering Lorentz-Poincare'-Santilli isosymmetry studied in Section 3.10 for the conventional case and in this section for the covering isospinorial form. The mutation (also called isonormalization) of the rest energy of particles is an unavoidable consequence of all nontrivial isotopies of the Lorentz-Poincare' symmetry. However, the mutation of spin, charge and other intrinsic characteristics depends on the energy or, equivalently, the density of the hadronic medium considered. This setting led Santilli to identify two main main cases, the first in which isoparticles maintain the conventional values of spin, charge and other characteristics, and the second in which these characteristic too are mutated. We can now clarify the title of the memoir proposing the construction of hadronic ,mechanics, Need of subjecting to an experimental verification the validity within a hadron of Einstein special relativity and Pauli exclusion principle R. M. Santilli, Hadronic J. Vol. 1, 574-901 (1978). In essence, a particle with spin 1/2 preserves its spin under external electromagnetic interactions, as well known, in which case Pauli's principle is evidently verified. However, Santilli argued that particles may experience a mutation of their spin under external strong interactions, such as for nucleons passing very near nuclei considered as fixed and external, in which case an experimental verification of Pauli's principle and, consequentl,y of special relativity, is necessary. The aspect that does not appear to have sufficiently propagated in the physics community, thus leading to misinterpretation or vacuous judgments, is that spin mutations are "internal" effects within hadronic matter that, as such, are not visible from the outside. Alternatively, Santilli argues that if a hadron has the conventional spin 1/2, this does not necessarily imply that its constituents have conventional spin because there could be internal mutations such to compensate each other resulting in the total spin 1/2, in a way similar to the mutual compensation of internal nonconservative forces resulting in total conservation laws (Section 3.11D). Hence, the :external" character of strong interactions is crucial to avoid vacuous claims of "experimental verification" of Pauli's exclusion principle. Some 30 years following Santilli's call in 1978 to test Pauli's principle, a number of meetings have been recently organized in the subject (without consulting Santilli or quoting his 1978 origination). We assume the serious scholar is aware of the fact that any deviations from Pauli's principle is impossible when data are elaborated via quantum mechanics, since no spin mutation is the possible. Similarly, the serious scholar is assumed to know that hadronic mechanics is the only known axiomatically consistent mechanics predicting deviations from Pauli's principle under the indicated external strong interactions (the verification of Pauli's principle in heavy atoms causing deep wave overlappings of the wavepacksts of peripheral electrons with consequentioal nonlocalm nonunitary and nonquantum effects, can be done in a similar way by considering one peripheral electron while the rest of the system is assumed as external).>br> Isolinearization of second order isoinvariants. Nonrelativistic hadronic mechanics outlined in the preceding sections is characterized by the Galilei-Santilli isosymmetry not presented in these lines for brevity, but treated in detail in the monographs "Isotopic Generalization of Galilei and Einstein Relativities", Volume I: "Mathematical Foundations" R. M. Santilli, "Isotopies of Galilei and Einstein Relativities" Vol. II: "Classical Foundations" R. M. Santilli, Isorelativistic hadronic mechanics is then characterized by the Lorentz-Poincare'-Santilli isosymmetry of Section 3.10 defined on an iso-Minkowskian space M*(r*, m*, R*) under the interpretation of the generators as Hermitean operators on a Hilbert-Santilli isospace over the isofield R* with isounit I* = 1/T > 0 andrealization of the 4-dimensional isolinear (meaning linear on isospaces over isofields) momentum operator (3.200) pk* x* |e*> = pk* T |e*> = - i ∂*k | e*> = - i I*jkj |e* >, k = 1, 2, 3, 4. with isostates |e*> of a Hilbert-Santilli isospace, the symbol "e" indicating the electron as the primary represented quantity, and the asterisk indicating mutation into the isoelectron. The second order Casimir-Santilli isoinvariant (3.81) then yields the following Klein-Gordon-Santilli isorelativistic equation here written in its projection in our spacetime for simplicity (3.201) m*ijpi* T pj* T |e*> = m'2 C2 |e*>, or equivalently (3.202) [ mij*i*j - m'2 C2] |e*> = 0, where: the isometric (namely a matrix with isonumbers as elements) has been simplified to the form M* = m*I*, thus avoiding the isomultiplication in the left hend side becasue M* T p* ... = m*p* ...; m' is the isorenormalized mass, C = c/n4 is the localk speed of lightt; and the isoproduct in the r.h.d. has been removed because trivial. The "isolinearization" of the above second order isoequation has been studied extensively by Santilli, (see EHM Volume II) resulting in the Dirac-Santilli isoequation that we write in the simplified form also projected in our spacetime (3.203) [ i γ*k*k - m' C ) |e*> = 0 where ∂*k are the isoderivatives, and γ*k are the Dirac-Santilli isomatrices with antiisocommutation rules (3.204) {γ*i, γ*j}* = γ*i T γ*j + γ*j T γ*i = m*ij showing the appearance of the fundamental isometric directly in the structure of the isoequation. We assume the reader has acquired at least a minimal knowledge of preceding sections to understand that the Dirac-Santilli isoequation introduces, for the first time Riemannian, Finslerian and other gravitational effects directly in the dynamics of the electron under interior conditions. Pauli-Santilli isomatrices To identify the structure of the Dirac-Santilli isoequation, we must first review the isotopies of SU(2)-spin with particular reference to the isotopies of its fundamental representation via Pauli's matrices, first studies by Santilli in various works, such as Isotopic lifting of SU(2)-symmetry with application to nuclear physics, R. M. Santilli, JINR rapid Comm. Vol. 6. 24-38 (1993) Isorepresentation of the Lie-isotopic SU(2) algebra with application to nuclear physics and local realism, R. M. Santilli, Acta Applicandae Mathematicae Vol. 50, 177-190 (1998) and reviewed extensively in EHM-II Chapter 6. As indicated above, we have to distinguish the following two cases: CASE I: Pauli-Santilli isomatrices without spin mutation This case is characterized by the so-called regular isounitary isorepresentations of the Lie-Santilli isosymmetry SU*(2). This case can be easily constructed via a nonunitary transformation of the conventional Pauli matrices. Let σk, k = 1, 2, 3, be the conventional Pauli matrices defined on a two-dimensional, complex valued, Euclidean space E(r, δ, R) with trivial metric δ = Diag. (1, 1, 1). Consider the Euclid-Santilli isospace E*(r*, δ*, R*) on a Hilbert-Santilli isospace with isostates |s*> and isometric (3.205) δ* = Diag. (1/s12, 1/s22) where s1 and s2 are non-null numbers. Assume for Santilli isounit the nonunitary transform (3.206) I* = 1/T = U2x2U2x2 = Diag. (s12, s22). Then, the regular Pauli-Santilli isomatrices are given by (3.207) σk* = U2x2 σk U2x2, k = 1, 2, 3, (3.208) σ1* = OffDiag (s12, s22), σ2* = OffDiag (-is12, is22), σ3* = Diag (s12, s22), and verify the following isocommutation relations and isoeigenvalues expressions (2.209) [σi*, σj*]* = σi* T σj* - σj* T σi* = i 2 &epsilonijk σk*, (3.210) σ*2* T | s*> = Σk σk* T σk* T |s*> = 3 |s*> (2.211) σ3* T | s*> = +/- |s*>. The preservation of the conventional eigenvalues for spin 1/2 is evident, a feature that Santilli proved to extend to all spins (see EHM-II). Prior to venturing vacuous judgments of triviality, serious readers should be aware that the above Pauli-Santilli isomatrices provide an explicit and concrete realization of hidden variables for (2.212) λ = = s12 = s2-2. by consequently voiding Bell's inequality of final character, since no longer valid under Santilli isotopies. For technical details, one should study the seminal paper Isorepresentation of the Lie-isotopic SU(2) algebra with application to nuclear physics and local realism, R. M. Santilli, Acta Applicandae Mathematicae Vol. 50, 177-190 (1998) CASE II: Pauli-Santilli isomatrices with spin mutation This case is characterized by the irregular isorepresentations of the Lie-Santilli SU*(2). the latter cannot any longer be derived via a trivial nonunitary transform of the Lie case and constitute an intrinsic new feature of the Lie-Santilli isotheory without any correspondence with the conventional theory, although the latter always remains a particular case. Among various cases identified by Santilli (see above quoted papers and EHM-II), an example of irregular Pauli-Santilli isomatrices is given by (3.213) σ1'* = σ1*, σ2* = σ2*, σ3* = w σ3*, where w is a real number that can assume the value zero (e.g., for gravitational singularities, see next chapters), with isocommutation rules and isoeigenvalues (3/214) [σi'*, σj'*]* = σi'* T σj'* - σj'* T σi'* = i Cijk σk'*, Cijk = Diag (1, w, w), (3.215) σ'*2* T | s*> = Σk σk'* T σk'* T |s*> = (2 + w2) |s*> (3.216) σ3'* T |s*> = +/- w |s*>. The mutation of spin is then evident, as desired by Santilli and as needed by his physical and industrial applications (see next chapters). Note that the irregular case can indeed be derived via a nonunitary transformation of the Lie case, but six dimensional (while that of the regular case was two dimensional, according to (3.217) U6x6Diag(&sigma2, &sigma2, &sigma2)U6x6, (3.18) U6x6 = Diag (U2x2, U2x2, w U2x2, that ensures the Lie-Santilli character of the isoalgebra. Dirac-Santilli isoequation Recall that the conventional Dirac equation represents an electron under the "external" electromagnetic field of the proton as well known, since a consistent extension of Dirac's equation to the two-body system constituted by the H-atom has not been achieved to this day. In this case, all conventional intrinsic characteristics of particles are preserved and, therefore, there are no mutations. In this case, we have ordinary "particles" characterized by the Lorentz-Poincare' symmetry (3.75) with generators (3.76) and commutation rules (3.77)-(3.79). By comparison, the Dirac-Santilli isoequation represents an isoelectron under "external" electromagnetic and contact nonpotential interactions, as necessary for the synthesis of the neutron from protons and electrons occurring in stars and studied in Chapter 7, since this case the wavepackets of the proton and electron are in conditions of mutual penetration, thus causing additional non-Hamiltonian interactions and related isorenormalizations. Since the electron in vacuum has spin 1/2, the symmetry needed for the characterization of the isoelectron is given by the isotopy of the spinorial covering of the Lorentz-Poincare' symmetry, first studies by Santilli during his visit at the JINR in Dubna, Russia, Communication number E4-93-252 (1993), published in the 1995 paper Recent theoretical and experimental evidence on the apparent synthesis of neutrons from protons and electrons, R. M. Santilli, Chinese J. System Engineering and Electronics Vol. 6, 177-199 (1995) and today known as Santilli isospinorial covering of the Lorentz-Poincare' symmetry, that we write ,p> (3.119) Π*(3.1) = SL*(2.c) x T*(4) x T*(1) with generators (3.220) Π*(3.1): J*k, K*k = (G*k) T (G*4)/2, k = 1, 2, 3, P*i, i = 1, 2, 3, 4. I*, and the same commutation rules as in Eqs. (3.77)-(3.79). By comparing isosymmetries (3.219) and (3.75), it is evident that SL*(2.c) is the isospinorial covering of SO*(3.1), T*(4) continues to represent isotranslations as in eqs. (3.88), and T*(1) continues to represent isotopic transforms as in Eq. (3.90). Recall that, contrary to popular beliefs, Santilli has discovered a fundamental 11-th symmetry of the conventional Minkowskian spacetime used for grand unification, operator gravity and other important advances. Consequently, the Lorentz-Poincare' symmetry P(3.1), its isotopic covering P*(3.1) and its isospinorial covering Π*(3.1) are all eleven dimensional. The characterization of isosymmetry (3.219) requires two isospaces and related isounits, one for the mutation of spacetime (st) with spacetime isounit Ist* and one for the mutation of the two-dimensional complex unitary spin space withspin isounit Ispin. From the positive-definiteness of these isounits, we assume the following diagonal realization (and leave very intriguing off-diagonal realizations to interested reader, see EHM-II) (3.221) I*st = 1/Tst = Diag. (n12, n22, n32, n42, I*spin = 1/Tspin = Diag. (s12, s22). As for the Pauli-Santilli isomatrices, we have the following two cases: CASE I: Dirac-Santilli isoequation without spin mutation Let |e> be the eigenstates of the conventional Dirac equation on the conventional Hilbert space over the field of complex numbers for the representation of an electron, and consider the following nonunitary transforms Let 93.222) U4x4U4x4 = I*st, U2x2U2x2 = I*spin The isostate on the iso-Hilbert space over the isofield of complex numbers representing the isoelectron. in this case is then defined by (2.223) |e*> = U4x4 |e>, The simplest possible version of the regular Dirac-Santilli isoequation on iso-Minkowski space for the characterization of | the isoelectron is given by (2.224) U4x4k(pk - ieAk) - im'C] |e>U4x4 = = {G*kT4x4[pk* - (ieAk)*] - (im'C)*} T4x4 |e*> = [γ*k(pk*T4x4 - ieAk) - im'C] |e*> = 0, (3.225) G*k = γ*k I*st , γ*k = U4x4γkU4x4, (3.226) {γ*i, γ*j}* = U4x4i, γj}U4x4 = γ*i T4x4 γ*j + γ*j T4x4 γ*i = m*ij where the γ*s are the regular Dirac-Santilli isomatrices and m*ij is the isometric of the Minkowski-Santilli isospace. It is easy to prove that isogenerators (3.220) realized via isogammas (3.225) verify all isocommutators (3.77)-(3.79) and the interested reader is encouraged to verify. Note that, in this case, no isotopy for the spin is needed because automatically provided by the assumed spacetime isotopy, resulting in a new realization of the regular Pauli-Santilli isomatrices, as the reader is suggested to verify. In any case, the spin isotopy can indeed be added, but has to preserve the spin 1/2 by assumption of the case considered, thus being inessential. CASE II: Dirac-Santilli isoequation with spin mutation This is the most important case for the synthesis of the neutron from a proton and an electron inside a stars studied in Chapter 5, because the latter synthesis requires a mutation of spin. In this case, we have the irregular realization of Eqs. (3.203), first identified by santilli in the above quoted paper of 1993-1995, today known as irregular Dirac-Santilli isoequation, that can be written: (3.227) {G*kT4x4[pk* - (ieAk)*] - (im'C)*} T4x4 |e*> = [γ*k(pk*T4x4 - ieAk) - im'C] |e*> = 0, (3.228) G*k = γ*k I*st = nk-1γk I*st, k = 1, 2, 3, G*4 = γ*4 I*st = n4-1 γ4 I*st, (3.229) {γ*i, γ*j}* = m*ij In this case, , the orbital isosymmetry SO*(3) of the isoelectron is characterized by the generators and related isocommutation rules (3.230) L*1 = r*2 T p*3, L*2 = r*3 T p*1, L*3 = r*1 T p*2, (3.231) [L*1, L*2]* = n32L*3, [L*2, L*3]* = n12L*1, [L*3, L*1]* = n22L*2, with isoeigenvalues (3.232) L*2* T |e*> = (n12n22 + n22n32 + n32n12) |e*>, (3.233) L*3 T |e*> = (+/-) (n1n2) |e*>, Note that the above particular realization of the isogroup SO*(3) is also locally isomorphic to the conventional SO(3) group (because the n's are positive-definite). From generators (3.201), the isotopic formulation of the spin of the isoelectron is given by (3.234) J*1 = (G*2) T (G*3)/2, J^2 = (G^3)*(G^1)/2, J*3 = (G*1) T (G*2)/2, (3.235) [J*1, J*2]* = n3-2 J*3, [J*2, J*3]* = n1-2 J*1, [J^3, J*1]* = n2-2 J*2, with isoeigenvalues (3.236) (J*2^*|e^> = (1/4) (n1-2n2-2 + n2-2 n3-2 + n3-2 n1-2) |e*>, (3.237) (J*3) T |e*> = (+/-)(1/2)(n1-1n2-1) |e*>. illustrating the spin mutation desired by Santilli. Note that the eigenvalues of the spin, not only are no longer 1/2, but they are generally no longer constant to represent the electron when in the core of a collapsing star, or other extreme internal conditions, under which the preservation of the quantum value 1/2 is a pure unverified belief. Note that the isocommutation rules of Π* are the same as those of P*(3.1), Eqs. (3.77)-(3.79), as the reader is encouraged to verify and that, despite the indicated differences,Π(3.1) is isomorphic to the conventional spinorial symmetry Π(3.1). in particular, the above isotopic SU(2)-spin remains isomorphic to SU(2), of course, at the abstract, realization-free level.. Additional mutations characterized by the Dirac-Santilli isoequation are those of the magnetic moment μ and electric dipole moment d, whose derivation has been worked out by Santilli in the above quoted 1993-1995 paper via a simple isotopy of the conventional derivation, resulting in the isolaws valid for the case of an axial symmetry along the third axis (3.238) μ*= μ (n4/n3), (3.239) d* = d (n4/n3). The above laws provide a quantitative geometric representation of the well known semiclassical property recalled earlier that the deformation of a charged and spinning sphere necessary implies an alteration of its magnetic and electric moments. In particular, we have a decrease (increase) of the magnetic moment when we have a prolate (oblate) deformation. It is an instructive exercise for the interested reader to verify that the above realization of the above irregular Dirac-Santilli isoequation cannot be constructed via a nonunitary transform of the conventional Dirac equation as for the regular case, but requires special maps. 3.11R. Direct universality and uniqueness of hadronic mechanics The following properties are important for an understanding of the verifications and applications of hadronic mechanics: 1) Hadronic mechanics has been proved to be "directly universal," namely, admitting as particular cases all possible generalizations of quantum mechanics with brackets of the time evolution characterizing an algebra as defined in mathematics (universality), directly in the frame of the experimenter, thus avoiding any coordinate transformation (direct universality). This property is a consequence of the fact that Santilli's Lie-admissible algebras (Section 2.8) are the most general possible algebras admitting as particular cases all possible algebras as conventionally understood in mathematics. 2) All possible true generalizations of quantum mechanics, namely, those outside its classes of unitary equivalence but preserving an algebra in the brackets of the time evolution, are particular cases of hadronic mechanics. 3) Any modification of hadronic mechanics for the intent of claiming novelty, such as the formulation of basic laws via conventional mathematics, verifies the Theorems of Catastrophic Inconsistencies of Nonunitary Theories. Note that the above direct universality applies not only for nonrelativistic but also for relativistic hadronic mechanics. Yet another aspect studied in detail by Santilli for years is whether the structure of hadronic mechanics is unique or there exist inequivalent nonunitary generalizations of quantum mechanics that are equally invariant over time. The result of this study is that hadronic mechanics is indeed the sole mechanics verifying the conditions indicated (nonunitary time invariant structure). As an example, in his original proposal to build hadronic mechanics, Santilli classified all possible modifications of the associative product AB of two matrices A, B via the use of a fixed matrix with the same dimension, (3.240) AB → A x* B = ATB, TAB, ABT, and concluded that the only acceptable isotopy is the form ATB, because the alternative forms TAB (ABT) violate the right (left) distributive and scalar laws, thus preventing the use of an algebra in the enveloping operator algebras with consequential catastrophic inconsistencies. A reason for the uniqueness is that the only possible representation of contact non-Hamiltonian interactions verifying the condition of time invariance is that via Santilli isounit. Invariance then follows since the unit is the basic invariant of all theories. Nonequivalent generalizations of quantum mechanics must then use a representation of non-Hamiltonian effects other than that via the isounit, by activating the Theorems of Catastrophic Inconsistency of Nonunitary Theories. 3.11S. EPR completion of quantum mechanics, hidden variables and all that Santilli has repeatedly presented hadronic mechanics as a form of "completion" of quantum mechanics in honor of Einstein, Podolsky and Rosen who expressed historical doubts on the completeness of quantum theories. In fact, hadronic mechanics provides an explicit and concrete realization of hidden variables λ that are realized via the isotopic operator T according to the isoassociative eigenvalue equations (3.241) H λ |ψ*) = H x**) = H T |ψ*) = E |ψ*). The hidden character emerges from the fact that, at the abstract, realization-free level, there is no distinction between the conventional associative action of the Hamiltonian on a Hilbert state and its isoassociative covering. In fact, at the abstract level one can write the modular action in the abstract right-associative form "H |ψ*)" for both quantum and hadronic versions, thus illustrating the truly "hidden" character of said variables. More generally, all branches of hadronic mechanics preserve the abstract axioms of quantum mechanics and merely provide broader realizations of the same axioms. Santilli has also studied the nonunitary covering of Bell's inequalities and shown that, contrary to the quantum case, they do admit indeed a classical counterpart, thus altering the entire field of local realism. Isorepresentation of the Lie-isotopic SU(2) algebra with application to nuclear physics and local realism, R. M. Santilli,. Acta Applicandae Mathematicae Vol. 50, 177-190 (1998) 3.11T. Operator isogravity As indicated in Chapter 1, one of the biggest scientific imbalances of the 20th century physics has been the absence of a consistent quantum formulation of gravity, since the quantization of the Riemannian representation is afflicted by a litany of inconsistencies. In particular, the noncanonical character of the classical formulation requires, for consistency, a nonunitary operator counterpart, thus activating the Theorems of Catastrophic Inconsistencies of Nonunitary Theories. Santilli studied for decades the problem of a consistent operator form of gravity without any publication. He finally presented his solution at the 1994 M. Grossmann Meeting on Gravitation held at Stanford Linear Accelerator Center Isotopic quantization of gravity and its universal isopoincare' symmetry R. M. Santilli, in the Proceedings of "The Seventh Marcel Grossmann Meeting", R. T. Jantzen, G. M. Keiser and R. Ruffini, Editors, World Scientific Publishers pages 500-505(1994). Quantum isogravity R. M. Santilli, Communication in Theor. Phys. Vol. 2, pages 1-14 (1995) Santilli's argument is essentially the following. The impossibility of achieving a consistent operator form of gravity is due to curvature, since the latter requires a noncanonical classical structure with consequential nonunitary operator formulation and related catastrophic inconsistencies. Hence, Santilli formulated his isogravitational theory indicated in Section 3.10H in which Riemannian line elements are identically reformulated in the Minkowski-Santilli isospace via the decomposition of the metric g(r) = Tgr(r)m, Eq. (3.100), where m is the Minkowski metric, and Tgr is the gravitational isotopic element. The formulation of the isometric m* = Tgr(r)m with respect to the isounit as the inverse of the gravitational isotopic element, I*gr = 1/Tgr, eliminates curvature, thus restoring unitary on the Hilbert-Santilli isospace over isofields with isounits I*gr. This discovery was made possible by the unification of the Minkowskian and Riemannian geometries into the Minkowski-Santilli isogeometry presented in detail in EHM Volume I, as well as in the memoir Isominkowskian geometry for the gravitational treatment of matter and its isodual for antimatter, R. M. Santilli, Intern. J. Modern Phys. D Vol. 7, 351-407 (1998) Following the above advances, the achievement of a consistent operator formulation of gravity was elementary. In fact, relativistic hadronic mechanics includes gravity without any modification of its structure via the mere interpretation of its isotopic element as being that of gravitational nature. Again, the procedure merely requires the factorization of the Minkowski metric m from any given Riemannian metric m*(r) = Tgr(r)m, such as for the Schwartzschild's metric, and the use of relativistic hadronic equations. As an illustration, the procedure yields the Dirac-Santilli isoequation (3.203), for which the anticommutation of the isogamma matrices yields precisely the Schwartzschild's metric, Eq. (3.204). 3.11U. Iso-grand-unification There is no doubt that one of Santilli's biggest scientific contributions has been the achievement of the first axiomatically consistent grand unification of electroweak and gravitational interactions without pre-existing comparisons for consistency, mathematical beauty and physical content, to the Foundation's best knowledge (the indication of equally consistent grand unification is encouraged for comparative listing in this section). Here are summary comments released by Santilli: The achievement of a consistent grand unification has been, by far, the most complex research problem I ever confronted due to the vastity and diversification of the required knowledge. Also, the more I worked at a solution, the bigger the problems with consequential widening of the field. Without any expectation that colleagues would agree, my conclusions following decades of work at the problem are the following: 1) Antimatter. I had to reject all preceding attempts at a grand unification, including that by Einstein, because of unsurmontable inconsistencies caused by antimatter. In fact, electroweak theories beautifully represent matter and antimatter, while a Riemannian gravitation does not, as nowadays well known. Only after achieving the isodual mathematics and related isodual theory of antimatter I was finally able to resolve these inconsistencies with a judicious decomposition of electroweak theories into advanced solutions and their isoduals with a corresponding gravitational and isodual counterpart allowing full democracy between matter and antimatter at all levels. 2) Curvature. After years of failed attempts along orthodox lines, I had to admit to myself that the representation of gravity via a curved spacetime renders any grand unification simply impossible. This was due to a litany of inconsistencies originating from attempting the combination of a theories structurally flat in spacetime, such as electroweak theories, and a gravitational theory that is structurally curved in spacetime. In particular, any reformulation of electroweak theories on a curved manifold to achieve geometric compatibility with gravitation, lead to unsurmontable catastrophes, such as the loss of physical meaning of electroweak theories at the operator level. These inconsistencies were determinant for my decision to cross the scientific "Rubicon" and abandon curvature for a covering theory of gravitation without curvature. That generated the birth of isogravitation. 3) Covariance. A third litany of inconsistencies originated from the fact that electroweak theories are beautifully structured by gauge and spacetime symmetries, while gravitation had none. The use of the customary "covariance" adopted by gravitational studies throughout the 20th century caused additional catastrophic inconsistencies, such as the lack of physical meaning of electroweak theories due to the general impossibility to predict the same numerical values under the same conditions at different times. The resolution of this third class of inconsistencies required the laborious construction of the Lie-isotopic theory that, in turn, permitted the construction of the Lorentz-Poincare'-Santilli universal isosymmetry of isogravitation. The combination of all my studies, including the various new mathematics, the isodual theory of antimatter, the Lie-isotopic theory and relativistic hadronic mechanics, then finally lead to the iso-grand-unification with an axiomatically consistent inclusion of mutually compatible electroweak and gravitational theories for matter and antimatter. The final solution I proposed is so elementary to be deceptive, because I essentially introduced gravitation where nobody looked for, in the unit of electroweak theories. However, by looking in retrospect, I can say that the virtual entirety of my research was ultimately aimed at the achievement of an axiomatically consistent grand unification. The diversification and novelty of the research illustrates the complexity of the problem of grand unification beyond the level of biased academic views. In fact, following decades of research, Santilli finally released his iso-grand-unification at the VIII Marcel Grossmann Meeting on Gravitation held in Jerusalem, Israel, in 1996, as well as in related papers provided below, Unification of gravitation and electroweak interactions R. M. Santilli, in the proceedings of the "Eight Marcel Grossmann meeting", Israel 1997, T. Piran and R. Ruffini, Editors, World Scientific, pages 473-475 (1999) Isotopic grand unification with the inclusion of gravity R. M. Santilli, Found. Phys. Letters Vol. 10. 307-327 (1997) Isotopic unification of gravity and relativistic quantum mechanics and its universal isopoincare' symmetry R. M. Santilli, in "Gravity, Particles and Spacetime", P. Pronin and G. Sardanashgvily, Editors, World Scientific (1996) The most comprehensive and updated presentation of the iso-grand-unification is available in the five volumes of HMMC.
2022-10-04 16:58:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7517473101615906, "perplexity": 2327.7617432180914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00050.warc.gz"}
http://crypto.stackexchange.com/questions/8742/security-of-authenticated-encryption-modes-gcm-ccm?answertab=votes
Security of authenticated encryption modes gcm & ccm I have two questions for Clarification for AE mode choice criteria • GCM : it appears to be actually the most popular and widely used AE mode of operation. however it is also well-known to be highly sensitive (more than other AE modes ?) to IV uniqueness requirement and completely fails if such requirement is not respected'. I personnaly in regard with planned target domain of application consider this as a weakness . So such weakness should'not weight in the criteria for AE mode selection ? Remain GCM the one of most powerful AE mode despite this weakness ? isn't EAX or OCB if no more patented a more efficient & secure choice ? • CCM : I understood via such mode review that it is based on MacThenEncrypt procedure (CBC-MAC then CTR ) . So why is such mode always presented as candidate AE mode if only Encrypt-Then-Mac procedure seems actually recommended by cryptography experts ? - Are you asking why GCM and CCM are NIST approved, while EAX and OCB are not? csrc.nist.gov/publications/nistpubs/800-38D/SP-800-38D.pdf csrc.nist.gov/publications/nistpubs/800-38C/… –  Henrick Hellström Jun 15 at 14:30 If robustness is more important than performance, then I prefer HMAC+encryption in a encrypt-then-MAC scheme over GCM and the like. –  CodesInChaos Jun 15 at 16:07 response to Hendrick comment –  william_fr Jun 15 at 18:05 The mode that's least sensitive to IVs is SIV mode. $\;$ –  Ricky Demer Jun 16 at 1:48 Another issue - completely unrelated to security - is that CCM is simply hard to implement and use, because it does not use static data sizes. This is especially true regarding generation of the NONCE, especially regarding size. I've already met an implementation that was correct but slightly incompatible with mine. –  owlstead Jun 24 at 11:33 Regarding GCM mode and the uniqueness of the nonce, it should be noted that EAX mode and OCB mode also require unique nonces. One potential problem EAX mode has, which neither GCM or CCM have, is that it is hard to implement it in such way that you can guarantee that the probability of nonce collisions is zero; only that it is acceptably low. OCB mode has been revised a number of times due to attacks such as this one against one of the earliest versions of OCB mode. Regarding the security of CCM mode, this paper provides a security proof that explains the use of a CTR-encrypted CBC-MAC, with the conjecture that it is stronger against birthday attacks, compared to an unencrypted CBC-MAC. Hence, as a consequence CBC-MAC-then-CTR-Encrypt is actually stronger than (naive) CTR-Encrypt-then-CBC-MAC. The security of EtA versus AtE is consequently a rather complex matter. Generally it is probably best to regard dedicated proofs for a specific mode, as trumping proofs for the generic compositions. The security properties of CCM are well understood, so I doubt many security experts rule against it just because it is not EtA. A better argument against CCM is that it requires two AES operations per block, while other AE modes only require one. - It's my understanding that the different versions of OCB were motivated by desires to improve performance, simplify the proof, support associated data --- basically everything but security concerns. The "attack" you mention basically says that if you encrypt several gigabytes of data under the same key, an attacker can create a forgery with probability 2^-64; i.e., there is roughly a one-in-a-quintillion chance that the attacker will succeed. I doubt this is much of a concern for practioners. –  Seth Jun 17 at 0:31 I think you are mistaken. If you need 128 bit of security in a IND-CCA2 model, you clearly can't use a mode that allows you to create forgeries with significantly better than $2^{-128}$ probability with realistic amounts of data. You do need IND-CCA2 security for online data transmission protocols, so with OCB mode you have to rekey long before you reach even MB of data transmitted using a key. –  Henrick Hellström Jun 17 at 7:16 The modification made in the OCB1 mode otoh results in a different security proof: cs.ucdavis.edu/~rogaway/papers/offsets.pdf –  Henrick Hellström Jun 17 at 7:31
2013-12-13 22:47:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6005395650863647, "perplexity": 3088.420311007585}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386165002851/warc/CC-MAIN-20131204135002-00052-ip-10-33-133-15.ec2.internal.warc.gz"}
https://scicomp.stackexchange.com/questions/7055/solving-the-elliptic-eigenproblem-with-periodic-boundary-conditions
# Solving the elliptic eigenproblem with periodic boundary conditions Given an energy level $\mu$, I'm looking to calculate the eigenvectors corresponding to the time-independent Schrodinger operator on the torus (that is, periodic boundary conditions)-- $H = -h^2 \Delta + V$ for some nice, bounded potential $V$ -- whose eigenvalues are nearest to $\mu$. I was wondering if there are some standard techniques for solving this particular problem, where $h$ can be rather small, say $h = 0.001$. My current method is the naive approach (by my reckoning): I use the DFT to convert into phase space, whereby I can compute the map $\hat{u} \mapsto \hat{(H u)} = h^2 |k|^2 \hat{u} + \hat{V} \ast \hat{u}$, and then proceed by using the standard Krylov method for the matrix $(\hat{H} - \mu I)^{-1}$, where the map corresponding to that inverse is computed using biconjugate gradients. There are several key problems with this approach. The first is that $h$ is very small, so for certain energy levels I must use the DFT up to a very high frequency in order to get an accurate discretization, but this means that $\hat{H}$ is a both large (larger than $10000\times 10000$) and dense, so Krylov methods are still quite expensive, especially since on each Arnoldi iteration I have to do a full set CG iterations. Are there any other standard methods that I should try? I'd be grateful for any suggestions. Even if it's something obvious, I probably haven't thought of it yet because I'm not very well-versed in numerical eigenvalue problems, especially not in the case of differential operators. Rather than look for an answer in phase space, you may be better off considering your problem in physical space instead. In that case, you can use the standard finite difference formula to approximate $\Delta u$ at each point, then form a matrix corresponding to that system, etc. You also mentioned using the Arnoldi process. Using the Lanczos process will be faster, since your problem is symmetric. If you're using a program smart enough to detect that your matrix is symmetric and act accordingly then you needn't worry about this, but if you explicitly selected Arnoldi then you could be doing unnecessary work solving upper Hessenberg systems when it could be solving tridiagonal systems. Nonetheless, you're still finding the eigenvalues closest to zero of the operator $(H-\mu I)^{-1}$ -- that at least doesn't change. • Also, note that by using inverse iteration, I'm in fact solving for the largest eigenvalues of $(H - \mu I)^{-1}$. May 3, 2013 at 20:18
2022-10-01 23:52:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.882422149181366, "perplexity": 210.14171996268936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00647.warc.gz"}
https://www.queryhome.com/puzzle/7427/staircase-steps-you-walk-taking-steps-time-how-many-ways-stair
# A staircase has 10 steps , you can walk up taking one or two steps at a time . how many ways can go up to top of stair ? +2 votes 203 views A staircase has 10 steps , you can walk up taking one or two steps at a time . how many ways can go up to top of stair ? posted May 14, 2015 Share this puzzle ## 1 Answer +3 votes 10C0 + 9C1 + 8C2 + 7C3 + 6C4 + 5C5 = 1+9+28+35+15+1 = 89 answer May 14, 2015 Sir , i didn't got your solution , my solution is of fibonacci series , like for 1step ==1 , 2nd step 1+1 or 2 ==2ways , 3rd step  1+1+1 or 1+2 or 2+1 ==3ways , so ,1,2,3,5,8,13,21,34,55,89 ,  for 10th step 89 ways . I just tried for two three numbers and saw this logic is arriving (not sure why), but fibonacci  series is more logical :) thanks for asking great puzzle. I am glad to be of help :) Similar Puzzles +1 vote Garima takes the underground train to work and uses an escalator at the railway station. If Garima runs up 8 steps of the escalator, then it takes her 30.0 seconds to reach the top of the escalator. If she runs up 15 steps of the escalator, then it takes her only 12.5 seconds to reach the top. How many seconds would it take Garima to reach the top if she did not run up any steps of the escalator at all? 0 votes A network of 20 x 10 squares is given to you. Can you calculate how many unique squares and rectangles can be formed combining two or more individual squares ? 0 votes Find amounts of money that can be made up in two ways so that one set of coins has twice as many coins as the other (but not using only 1c and 2c coins). e.g. 71c is 50c, 20c, 1c (3 coins) or 20c, 20c, 20c, 5c, 5c, 1c (6 coins)
2018-12-17 06:52:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8593854904174805, "perplexity": 1423.8570212523173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828448.76/warc/CC-MAIN-20181217065106-20181217091106-00636.warc.gz"}
https://math.stackexchange.com/questions/557445/question-about-abelian-group-proof/566385
# Question about Abelian group proof I prove that if $G$ is Abelian group so if $a,b\in G$ has a finite order so $ab$ has a finite order to.. (Maybe later I'll upload here my proof to see of she is correct....) Now, I have to show that this is false if the group is not Abelian group with those 2 matrices: $\begin{bmatrix} 0 &-1 \\ 1& 1 \end{bmatrix} and \begin{bmatrix} 0 &1 \\ -1&-1 \end{bmatrix}$ This is the problem: $\begin{bmatrix} 0 &-1 \\ 1& 1 \end{bmatrix}\cdot \begin{bmatrix} 0 &1 \\ -1&-1 \end{bmatrix}=\begin{bmatrix} 1 &1 \\ -1&0 \end{bmatrix}$ $ord\left(\begin{bmatrix} 1 &1 \\ -1&0 \end{bmatrix}\right)=6$ This is not an infinite order element... An you have any idea? Thank you!! • Any idea about what? You are basically given the solution. take these elements, show that they have finite order, take their product, show it does not have finite order. – Najib Idrissi Nov 8 '13 at 23:56 • @nik - How do I show this: take their product, show it does not have finite order? Thank you! – CS1 Nov 9 '13 at 8:13 • But their product have a finite order - 6... – CS1 Nov 9 '13 at 8:27 You probably saw the group $GL(2,\mathbb R)$, perhaps not under this name. It is the group of all invertible $2\times 2$ matrices with real entries. It is a group under the operation of matrix multiplication. So, this question is probably asking you to identify that the two matrices belong to that group. Then you proceed, a la nik's advise, to compute, in $GL(2,\mathbb R)$, the order of each, the product of the two, and the order of the product. • If I'll multiply them I'll get:$\begin{bmatrix} 1 &1 \\ -1& 0 \end{bmatrix}$ and the order of this matrix is finite - is 6, so I don't understand how can I show that their product a matrix with infinite order... Thank you! – CS1 Nov 9 '13 at 8:25
2019-10-23 00:27:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8363308906555176, "perplexity": 340.3021137118965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987826436.88/warc/CC-MAIN-20191022232751-20191023020251-00370.warc.gz"}
https://www.zbmath.org/?q=an%3A1318.35072
# zbMATH — the first resource for mathematics Time decay of solutions to the compressible Euler equations with damping. (English) Zbl 1318.35072 Summary: We consider the time decay rates of the solution to the Cauchy problem for the compressible Euler equations with damping. We prove the optimal decay rates of the solution as well as its higher-order spatial derivatives. The damping effect on the time decay estimates of the solution is studied in details. ##### MSC: 35Q31 Euler equations 76N10 Existence, uniqueness, and regularity theory for compressible fluids and gas dynamics 35P20 Asymptotic distributions of eigenvalues in context of PDEs Full Text: ##### References: [1] C. M. Dafermos, Can dissipation prevent the breaking of waves?, In Transactions of the Twenty-Sixth Conference of Army Mathematicians, 187, (1981) · Zbl 0506.73023 [2] R. J. Duan, Optimal $$L^p$$-$$L^q$$ convergence rate for the compressible Navier-Stokes equations with potential force,, J. Differential Equations, 238, 220, (2007) · Zbl 1121.35096 [3] R. J. Duan, Optimal convergence rate for the compressible Navier-Stokes equations with potential force,, Math. Mod. Meth. Appl. Sci., 17, 737, (2007) · Zbl 1122.35093 [4] Y. Guo, Decay of dissipative equations and negative Sobolev spaces,, Comm. PDE., 37, 2165, (2012) · Zbl 1258.35157 [5] L. Hsiao, Quasilinear Hyperbolic Systems and Dissipative Mechanisms,, World Scientific Publishing Co., (1997) · Zbl 0911.35003 [6] L. Hsiao, Convergence to nonlinear diffusion waves for solutions of a system of hyperbolic conservation laws with damping,, Comm. Math. Phys., 143, 599, (1992) · Zbl 0763.35058 [7] F. M. Huang, Convergence rate for compressible Euler equations with damping and vacuum,, Arch. Ration. Mech. Anal., 166, 359, (2003) · Zbl 1022.76042 [8] F. M. Huang, Convergence to the Barenblatt Solution for the Compressible Euler Equations with Damping and Vacuum,, Arch. Ration. Mech. Anal., 176, 1, (2005) · Zbl 1064.76090 [9] F. M. Huang, $$L^1$$ convergence to the Barenblatt solution for compressible Euler equations with damping,, Arch. Ration. Mech. Anal., 200, 665, (2011) · Zbl 1229.35196 [10] M. Jiang, Convergence to strong nonlinear diffusion waves for solutions to p-system with damping on quadrant,, J. Differential Equations, 246, 50, (2009) · Zbl 1169.35039 [11] T. Kato, The Cauchy problem for quasi-linear symmetric hyperbolic systems,, Arch. Rational Mech. Anal., 58, 181, (1975) · Zbl 0343.35056 [12] S. Kawashima, Systems of a Hyperbolic-Parabolic Composite Type, with Applications to the Equations Of Magnetohydrodynamics,, Ph.D thesis, (1983) [13] A. J. Majda, Vorticity and Incompressible Flow,, Cambridge University Press, (2002) · Zbl 0983.76001 [14] A. Majda, Compressible Fluid Flow and Conservation laws in Several Space Variables,, Springer-Verlag, (1984) · Zbl 0537.76001 [15] P. Marcati, The one-dimensional Darcy’s law as the limit of a compressible Euler flow,, J.Differential Equations, 84, 129, (1990) · Zbl 0715.35065 [16] A. Matsumura, The initial value problem for the equations of motion of viscous and heat-conductive gases,, J Math Kyoto Univ, 20, 67, (1980) · Zbl 0429.76040 [17] T. Nishida, Global solutions for an initial-boundary value problem of a quasilinear hyperbolic systems,, Proc. Japan Acad., 44, 642, (1968) · Zbl 0167.10301 [18] T. Nishida, Nonlinear hyperbolic equations and related topics in fluid dynamics,, Publ. Math. D’Orsay, 46, (1978) · Zbl 0392.76065 [19] L. Nirenberg, On elliptic partial differential equations,, Ann. Scuola Norm. Sup. Pisa, 13, 115, (1959) · Zbl 0088.07601 [20] T. C. Sideris, Long time behavior of solutions to the 3D compressible Euler equations with damping,, Comm. PDE., 28, 795, (2003) · Zbl 1048.35051 [21] Z. Tan, Large time behavior of solutions for compressible Euler equations with damping in $$\mathbbR^{3}$$,, J. Differential Equations, 252, 1546, (2012) · Zbl 1237.35131 [22] W. Wang, The pointwise estimates of solutions for Euler equations with damping in multi-dimensions,, J. Differential Equations, 173, 410, (2001) · Zbl 0997.35039 [23] C. J. Zhu, Convergence rates to nonlinear diffusion waves for weak entropy solutions to p-system with damping,, Sci. China Ser. A, 46, 562, (2003) · Zbl 1215.35107 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2021-02-28 01:08:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5439760684967041, "perplexity": 2153.4583494719072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359624.36/warc/CC-MAIN-20210227234501-20210228024501-00353.warc.gz"}
http://mathhelpforum.com/algebra/67988-math-models-equations.html
# Thread: Math models and equations 1. ## Math models and equations Bascially if you pick any number and then Multiply it by two. Add seven. Multiply by three. Add the number you first thought of. Divide by seven. Multiply by four. Subtract the number you first thought of. Divide by three. Subtract the number you first thought of again. The answer will always be 4. If you take x to be the first number you though of, and y to be the result. How would you write and equation expressing y as a function of the operations on x described above. Also how would you simplify this equation, which would explain the final result? Any help on this would be appreciated. 2. Originally Posted by newatthis Bascially if you pick any number and then Multiply it by two. Add seven. Multiply by three. Add the number you first thought of. Divide by seven. Multiply by four. Subtract the number you first thought of. Divide by three. Subtract the number you first thought of again. The answer will always be 4. If you take x to be the first number you though of, and y to be the result. How would you write and equation expressing y as a function of the operations on x described above. Also how would you simplify this equation, which would explain the final result? Any help on this would be appreciated. Do what it says! If you start with x and "Multiply it by 2" you have 2x. "Add 7": 2x+ 7. "Multiply by 3": 3(2x+ 7)= 6x+ 21. "Add the number you first thought of",which is x: (6x+ 21)+ x= 7x+ 21. Can you finish? This is really an exercise in your ability to follow directions! 3. Originally Posted by newatthis Bascially if you pick any number and then Multiply it by two. Add seven. Multiply by three. Add the number you first thought of. Divide by seven. Multiply by four. Subtract the number you first thought of. Divide by three. Subtract the number you first thought of again. The answer will always be 4. If you take x to be the first number you thought of, and y to be the result. How would you write an equation expressing y as a function of the operations on x described above. Also how would you simplify this equation, which would explain the final result? Any help on this would be appreciated. Hello newatthis, This is the way it would look before simplifying using a minimum number of grouping symbols. Test it with a number of your choice to be sure. Then, attempt to simplify it. $\displaystyle y=\frac{\frac{(x \cdot 2+7) \cdot 3+x}{7} \cdot 4 -x}{3}-x$ 4. ## Thanks! and another Q Thanks to both of you for replying. I understand it much better now and have managed to simplify it Could someone help me understand why the answer is always 4?
2018-03-21 21:01:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7805518507957458, "perplexity": 723.3561243339996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647692.51/warc/CC-MAIN-20180321195830-20180321215830-00225.warc.gz"}
https://portonmath.wordpress.com/2012/08/20/subatomic-product/
I’ve discovered a new kind of product of funcoids, which I call subatomic product. Definition Let $f : A_0 \rightarrow A_1$ and $g : B_0 \rightarrow B_1$ are funcoids. Then $f \times^{\left( A \right)} g$ (subatomic product) is a funcoid $A_0 \times B_0 \rightarrow A_1 \times B_1$ such that for every $a \in \mathrm{atoms}\,1^{\mathfrak{F} \left( A_0 \times B_0 \right)}$, $b \in \mathrm{atoms}\,1^{\mathfrak{F} \left( A_1 \times B_1 \right)}$ $a \mathrel{\left[ f \times^{\left( A \right)} g \right]} b \Leftrightarrow \mathrm{dom}\,a \mathrel{\left[ f \right]} \mathrm{dom}\,b \wedge \mathrm{im}\,a \mathrel{\left[ g \right]} \mathrm{im}\,b.$ This (subatomic) composition has the merit that for funcoids $f : A \rightarrow B$ and $g : A \rightarrow C$ the destination of product is $B \times C$ is the same as for categorical product in the category $\boldsymbol{\mathrm{Set}}$. See This online draft article for details. There it is also proved that subatomic product exists.
2017-09-23 23:47:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8159216642379761, "perplexity": 730.1695729600278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689806.55/warc/CC-MAIN-20170923231842-20170924011842-00547.warc.gz"}
http://mathoverflow.net/revisions/41298/list
2 added 570 characters in body According to Johan's answer, the problem is not well posed if the function $f$ has zeroes in $[-1,1]$ (if the number of zeroes is finite, perhaps something could be done). In the following I assume that $f(x)\ne0$ for all $x\in[-1,1]$. Let $n$ be a positive integer and let $\mathbb{P}(n)$ be the set of all real polynomials, and denote by $\|\cdot\|$ the $L^\infty$ norm on $[-1,1]$. Then you ask for the existence of $P\in\mathbb{P}(n)$ such that $$\|1-P/f\|=\inf_{p\in\mathbb{P}(n)}\|1-p/f\|.$$ This can be brought under the theory of approximation in the $L^\infty$ norm. Let $\phi_k(x)=x^k/f(x)$, $1\le k\le n$. Each $\phi_k$ is a continuous function, they are independent and generate an $n$-dimensional subspace of the space of continuous functions on $[-1,1]$ that we denote by $V$, which is nothing but the space of all functions of the form $P/f$ with $P\in\mathbb{P}(n)$. Moreover, each $\phi\in V$ has at most $n$ zeros in $[-1,1]$, so that it satisfies what is called the Haar condition. The original problem is now recast as: find the best approximation in $V$ of the constant function $1$. The general theory of approximation shows that there is a unique best approximation, and that it can be characterized as follows: $\phi\in V$ is the best approximation in $V$ of the constant function $1$ if and only if there exist $x_1 < x_2 < \dots < x_{n+2}$ in $[-1,1]$ such that 1. $|e(x_k)|=\|e\|$, $1\le k\le n+2$, 2. $e(x_k)=-e(x_{k+1})$, $1\le k\le n+1$, where $e(x)=1-\phi(x)$ is the error of the approximation. This is Tchebyshev's alternance theorem. Finally, Remes' algorithm can be used to construct the best approximation. Edit in response to your comment If $f$ has a finite number of zeroes in $[-1,1]$ and can be written as $f=q\cdot h$ where $q$ is a polynomial and $h$ is a continuous function such that $h(x)\ne0$ for all $x\in[-1,1]$, then you may consider the space $V$ of all functions of the form $q\cdot p/f=p/h$ whith $p\in\mathbb{P}(n)$. Then $$\frac{|f(x)-q(x)p(x)|}{|f(x)|} =|1-\frac{p(x)}{h(x)}|.$$ If $\phi$ is the best approximation to $1$ in $V$, then $q\cdot \phi$ will be a best approximation to $f$ of degree $n+\hbox{degree}(q)$ in the relative error sense. 1 According to Johan's answer, the problem is not well posed if the function $f$ has zeroes in $[-1,1]$ (if the number of zeroes is finite, perhaps something could be done). In the following I assume that $f(x)\ne0$ for all $x\in[-1,1]$. Let $n$ be a positive integer and let $\mathbb{P}(n)$ be the set of all real polynomials, and denote by $\|\cdot\|$ the $L^\infty$ norm on $[-1,1]$. Then you ask for the existence of $P\in\mathbb{P}(n)$ such that $$\|1-P/f\|=\inf_{p\in\mathbb{P}(n)}\|1-p/f\|.$$ This can be brought under the theory of approximation in the $L^\infty$ norm. Let $\phi_k(x)=x^k/f(x)$, $1\le k\le n$. Each $\phi_k$ is a continuous function, they are independent and generate an $n$-dimensional subspace of the space of continuous functions on $[-1,1]$ that we denote by $V$, which is nothing but the space of all functions of the form $P/f$ with $P\in\mathbb{P}(n)$. Moreover, each $\phi\in V$ has at most $n$ zeros in $[-1,1]$, so that it satisfies what is called the Haar condition. The original problem is now recast as: find the best approximation in $V$ of the constant function $1$. The general theory of approximation shows that there is a unique best approximation, and that it can be characterized as follows: $\phi\in V$ is the best approximation in $V$ of the constant function $1$ if and only if there exist $x_1 < x_2 < \dots < x_{n+2}$ in $[-1,1]$ such that 1. $|e(x_k)|=\|e\|$, $1\le k\le n+2$, 2. $e(x_k)=-e(x_{k+1})$, $1\le k\le n+1$, where $e(x)=1-\phi(x)$ is the error of the approximation. This is Tchebyshev's alternance theorem. Finally, Remes' algorithm can be used to construct the best approximation.
2013-05-20 02:43:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9436507225036621, "perplexity": 40.91786917226642}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00025-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.transtutors.com/questions/e-f-and-g-were-partners-in-a-small-textile-company-they-spent-54-000-for-equipment-t-1321055.htm
# E, F, and G were partners in a small textile company. They spent $54,000 for equipment that they... E, F, and G were partners in a small textile company. They spent$54,000 for equipment that they agreedwould last 8 years and have a resale value of 5% of cost. The three partners couldn’t agree on thedepreciation method to use. E was in favor of using the double-declining-balance system, F insisted onthe 150%-declining-balance method, and G was sure that the sum-of-the-years-digits method would bebetter. Show the depreciation for the first 4 years for each method in the following table. At the end of 4years, what would be the book value under each of the three methods? ## Related Questions in Depreciation and Depletion • ### E, F, and G were partners in a small textile company. (Solved) June 01, 2015 E , F , and G were partners in a small textile company. They spent $54,000 for equipment that they agreedwould last 8 years and have a resale value of 5 % of cost . The three partners couldn’t agree on thedepreciation method to use. E was in favor of using the double-declining-balance system , F • ### unit 4 4-3 & 4-4 (Solved) August 08, 2014 paid for asset$21,000 $30,750$ 8 ,000 Installation cost $500$1,000 $200 Renovation costs prior to use$2,000 $1,000$1,500 By the end of the first year, each machine had been operating 4 ,800 hours. Depreciation estimates are shown in Table 2 below: Estimates Problem 4 -3, Table 2 The depreciation recorded for the first year is based on the different methods of depreciation shown above. The Straight line method of depreciation uses the useful life of the asset, the... • ### For Neel Only Due by Tuesday Midnight! January 30, 2014 Dropbox before Tuesday, 11:59 PM (ET) of Unit 6. Name your assignment filename using this format: LastName_First Name_Unit#_AssignmentName. For example, the Unit 6 assignment would be named: Smith_John_Unit06_Assignment • ### n recent years, Avery Transportation purchased three used buses. Because of frequent turnover in the... (Solved) January 24, 2016 . For Bus #3, calculate depreciation expense per mile under units-of-activity method .(Round answer to 2 decimal places, e.g . \$0.50.) Compute the amount of accumulated depreciation on each bus at December 31, 2014.(Round answers n recent years, Avery Transportation purchased three used buses. Because of frequent turnover in the accounting department, a different accountant selected the depreciation method for each... • ### Help me solve this (Solved) February 18, 2014 In recent years, Sonya Transportation purchased three used buses. Because of frequent turnover in the accounting department, a different accountant selected the depreciation method for each bus, and various methods were selected. Information concerning the buses is summarized below For the units-of activity method, total miles are expected to be 123,000. Actual miles of use in the first 3 years were: 2011, 24,800; 2012, 35,600 ; and 2013, 29,300 . (a1) Calculate...
2018-09-19 07:18:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.345487117767334, "perplexity": 5727.6898690147245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155942.15/warc/CC-MAIN-20180919063526-20180919083526-00078.warc.gz"}
https://space.mit.edu/cxc/marx/inbrief/caveats.html
# Current caveats for MARX¶ ## General Caveats for Monte-Carlo simulations¶ marx is a Monte-Carlo tool, that means it uses random numbers when simulating the X-ray flux and the response for Chandra. For example, if the source spectrum indicates a constant flux between 1 and 10 keV, marx will create photons randomly from this interval. If only a small number of photons is used in a simulation, it is possible (though not very likely) that a lot more photons are softer than 5 keV. For a large number of photons, the result will eventually converge to a constant spectrum. Random numbers are used for all interactions in marx, not only when selecting the energy of the photons, but also to see where in the aperture it enters Chandra, how it bounces of the mirrors, how it defracts from the gratings, etc. Thus, even if the physical model for a source is perfect, the simulated spectrum and a Chandra observation will differ. Furthermore, this means that two marx simulations with the same input parameters will give different results, unless they use the same sequence of random numbers (set the parameter RandomSeed to a positive number to obtain the same sequence of random numbers). ## Spatial Dependence of the Quantum Efficiency¶ The current version of marx does not incorporate the QE uniformity maps and bad-pixel files that give rise to spatial variation in the QE. Consequently, exposure maps and ARFs created by default in ciao will be inconsistent with MARX simulations. For simulated observations on ACIS-S3, this difference should be small since spatial variations in the QE are relatively small on this CCD. However, simulated ACIS-I observations will be effected to a greater degree due to the larger QE variations produced by CTI effects. For users of CIAO 2.3 and higher, the tools mkarf and mkinstmap include the ability to turn off the QE uniformity maps through the use of ardlib qualifiers. For example, a calling sequence like: unix% mkarf detsubsys="ACIS-7;uniform;bpmask=0" ...... will produce an ARF on ACIS-7 (ACIS-S3) but with bad pixel processing disabled (bpmask=0) and without the effects of the CALDB non-uniformity files included. The resulting ARF will be consistent with a marx simulation. A similar call to mkinstmap can be used in conjunction with mkexpmap to create exposure maps appropriate for marx simulations. This technique is illustrated in Creating CIAO-based ARFs and RMFs for MARX Simulations. ## ACIS Response Functions¶ CIAO includes a couple of different tools for creating ACIS response matrices (RMFs): mkacisrmf and mkrmf. The mkacisrmf tool is designed for the analysis of CTI corrected data, whereas mkrmf creates an RMF for non-CTI corrected data. The response algorithm implemented in marx is based upon the calibration data used by mkrmf. Hence the PHAs generated by marx for the ACIS detector represent non-CTI corrected values and as such are consistent with the responses generated by mkrmf but not with mkacisrmf. Consequently, users should continue to use the mkrmf to create RMFs that are consistent with their marx simulations. More information about using mkrmf in the context of a marx simulation may be found in Creating CIAO-based ARFs and RMFs for MARX Simulations. Alternatively, marxrsp may be used to apply any RMF to a marx simulation with the caveat that the mapping from photon energy to PHA does not vary over the detector. ## Mismatch between the FEF-based response and the CALDB order-sorting tables¶ As mentioned above, marx generates non-CTI corrected PHA values. This is accomplished by mapping the incident photon energy to a PHA value using a probability from derived from the most recent non-CTI calibration data (CALDB acisD2000-01-29fef_phaN0005.fits). The CIAO tool tg_resolve_events assigns a diffracted order to each event by comparing the event’s ACIS energy to its dispersion coordinate. The ACIS energy window for a particular order is tabulated in a CALDB order-sorting table (OSIP). For non-CTI corrected data, the CALDB order sorting table (acisD2000-01-29osipN0006.fits) was computed using a much older version of the non-CTI response data (acisD2000-01-29fef_phaN0002.fits). These files (acisD2000-01-29fef_phaN0002.fits vs acisD2000-01-29fef_phaN0005.fits) differ mainly in the region around the Si K edge (~1.8 keV). As such, a comparison of a marx spectrum with the expected spectrum of the input model will show strong systematic residuals near 1.8 keV. ## ISIS Pileup Fitting Kernel¶ The default parameters for the pileup fitting kernel in ISIS, but also in Sherpa and XSpec have been calibrated for point source extractions. Specifically, the values correspond to a circular extraction region 4 ACIS pixels in radius. Although marx can be used to include the effects of photon pileup for any arbitrary spatial and spectral source model, the fitting kernel may need to be adjusted for larger extraction regions. In particular, the psffrac parameter represents the fraction of the Chandra PSF contained within the extraction region and may need to be increased for larger regions. Note, however, that for real data, larger extraction regions will include a higher fraction of unpiled background photons complicating the fitting of the piled source spectrum. As such, it is recommended that this value be allowed to vary during the spectral fit. See the ISIS manual for more discussion of the pileup fitting kernel. ## Chandra Aimpoint Drift¶ marx does not currently take into account of temporal drift in Chandra’s HRMA aimpoint. Fortunately the effect of the drift is generally negligible and should not be a concern for Chandra proposers.
2022-08-16 10:47:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6910290122032166, "perplexity": 2673.1165427692663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572286.44/warc/CC-MAIN-20220816090541-20220816120541-00141.warc.gz"}
https://math.stackexchange.com/questions/2396596/a-quadratic-problem-with-linear-constraints
A quadratic problem with linear constraints Let $\alpha_i$ and $\beta_i$ be given positive constants for $i\in\{1,2\}$. Also let $B$ be a given positive constant. I need to solve the following problem \begin{align} \max_{x_1,x_2}~&~\alpha_1x_1^2+\alpha_2x_2^2 \\ &s.t.~~\beta_1x_1+\beta_2x_2\leq B\\&~~~~~~~~~x_i\geq 0~,~\forall i\in\{1,2\}\end{align} Is there an analytic solution to this? If not, can we have a numerical method which will solve this. Yes. The maximum of a convex function on a convex set, if exists, is always attained at an extreme point of the feasible set. In your case, your feasible set is the triangle whose vertices (extreme points) are $A = (0, 0)$, $B = (\tfrac{B}{\beta_1}, 0)$, and $C = (0, \tfrac{B}{\beta_2})$. It is compact, so the maximum indeed exists. The point $A$ clearly does not achieve the maximum objective value, and therefore either $B$ or $C$ must be a maximizer. You chose among the points by comparing the objective function value these two points.
2020-05-28 15:52:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9939088821411133, "perplexity": 125.4972270286822}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399820.9/warc/CC-MAIN-20200528135528-20200528165528-00146.warc.gz"}
https://documen.tv/the-probability-of-winning-on-an-arcade-game-is-0-659-if-you-play-the-arcade-game-30-times-what-28318359-66/
Question The probability of winning on an arcade game is 0.659. if you play the arcade game 30 times. What is the probability of winning exactly 21 times? 1. The probability of winning exactly 21 times is 0.14 ### What is Binomial Probability? Binomial probability refers to the probability of exactly ‘x’ successes on ‘n’ repeated trials in an experiment which has two possible outcomes (commonly called a binomial experiment). Probability ‘P’ = ⁿCₓ (probability of 1st)ˣ x (1 – probability of 1st)ⁿ⁻ˣ For example: What is the probability of getting 6 heads, when you toss a coin 10 times? In a coin-toss experiment, there are two outcomes: heads and tails. Assuming the coin is fair , the probability of getting a head is 1/2 or 0.5 . The number of repeated trials: n=10 The number of success trials: x = 6 The probability of success on individual trial: p = 0.5 Use the formula for binomial probability. ¹⁰C₆ (0.5)⁶ x (1 – 0.5)¹⁰⁻⁶ Simplify. ≈0.205 Here, we have given that: Probability of winning on an arcade game is 0.659 so, Probability of loosing on an arcade game is 1-0.659 = 0.341 Number of times game played = 30 Probability of winning exactly 21 times = ? now, n = 30 x = 21 probability of 1st or probability of winning = 0.659 1 – probability of 1st or probability of loosing = 0.341 using, binomial probability formula Probability of winning exactly 21 times = ³⁰C₂₁ (0.659)²¹ x (0.341)⁷ on solving, Probability of winning exactly 21 times = 0.14 Hence, The probability of winning exactly 21 times is 0.14
2023-03-24 21:56:09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8812655210494995, "perplexity": 1357.1508331632235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00728.warc.gz"}
http://mathhelpforum.com/math-software/139882-matlab-fsolve-print.html
# Matlab - fsolve Hi, I'm trying to use the fsolve function in Matlab to solve a system of equations for $\mathbf{p}$: $\mathbf{s} - \mathbf{\Delta} (\mathbf{p} - \mathbf{mc}) = \mathbf{0}$ where $\mathbf{s}$ is a nonlinear function of $\mathbf{p}$ and $\mathbf{\Delta}$ is a function of $\mathbf{s}$. The first problem is that $s_j = \exp( xb_j - \alpha \cdot p_j + \xi_j ) / ( 1 + \sum_k \exp( xb_k - \alpha \cdot p_k + \xi_k ) )$ and I'm not sure how to include summation into the inline() function in Matlab. The second problem is that I will be doing lots of these estimation at once. Specifically, I have a huge vector of $xb$, $p$, $\xi$, but I have to do the fsolve piece by piece. Therefore, the range of the summation would be a condition based on some other characteristics from the data. How would I include the condition part? Lastly, is there a way to define function $s_j$ so I can just type $s_j$ in fsolve? $s_j$ appears several times in the $\Delta$ matrix, which if I type it out could easily be stretched really long.
2015-05-24 14:27:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9315458536148071, "perplexity": 119.43073388586201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928019.31/warc/CC-MAIN-20150521113208-00118-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.edaboard.com/threads/simulation-of-rayleigh-fading-channel.330988/
# Simulation of Rayleigh Fading Channel Status Not open for further replies. #### Narmak ##### Newbie level 1 Hello, I am relatively new to Matlab and I find it quite hard to really get started on my problem and I hope to find some help here. I checked the already existing threads, but it didn't really help me and since the threads are quite old I hope it's ok to start a new one. I need to simulate the bi-directional wireless communication between four users (so six channels in total, since there are supposed to be identical in both directions). The only hint I got for this task is Rayleigh Fading channel and that there will be interference between the different signals. I searched in the Matlab documentation and read everything about rayleighchan and fading channels. There I found an example of which I think it could be of use to me, I changed it very slightly Code: bitRate = 50000; hMod = comm.BPSKModulator; % Create a BPSK modulator hDemod = comm.BPSKDemodulator; % Create a BPSK demodulator % Create Rayleigh fading channel object. ch = rayleighchan(1/bitRate,4,[0 0.5/bitRate],[0 -10]); delay = ch.ChannelFilterDelay; tx = randi([0 1],50000,1); % Generate random bit stream bpskSig = step(hMod,tx); % BPSK modulate signal fadedSig = filter(ch,bpskSig); % Apply channel effects rx = step(hDemod,fadedSig); % Demodulate signal The code creates a bit sequence, modulates it using BPSK, creates a rayleigh channel, applies the effects and then demodulates it. What else do I need to do? Also here I have just one channel, since I need six that interfere with each other. creating six different rayleigh channels can't be the solution, so I tried Code: tx = randi([0 1],50000,5) But Matlab tells me that the input for step must be a clumn vector, so I can't use a matrix there. So I changed the code to Code: bitRate = 50000; hMod = comm.BPSKModulator; % Create a BPSK modulator hDemod = comm.BPSKDemodulator; % Create a BPSK demodulator % Create Rayleigh fading channel object. ch = rayleighchan(1/bitRate,4,[0 0.5/bitRate],[0 -10]); delay = ch.ChannelFilterDelay; Tx = nan(5000,6); Rx = nan(5000,6); for i=1:1:6 tx = randi([0 1],50000,1); % Generate random bit stream bpskSig = step(hMod,tx); % BPSK modulate signal fadedSig = filter(ch,bpskSig); % Apply channel effects rx = step(hDemod,fadedSig); % Demodulate signa for j=1:1:50000 Tx(j,i) = tx(i); Rx(j,i) = rx(i); end end Is this the correct approach to simulate the channel or am I completely wrong here? Any help or suggestions are appreciated. I hope I didn't make it too confusing, but if something is unclear please let me know Status Not open for further replies.
2021-12-02 07:30:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8008230924606323, "perplexity": 2069.036583482133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361169.72/warc/CC-MAIN-20211202054457-20211202084457-00230.warc.gz"}
http://mathhelpforum.com/calculus/93656-differentiate-y-arcosh-4-3x.html
# Thread: differentiate y= arcosh (4+3x) 1. ## differentiate y= arcosh (4+3x) differentiate y= arcosh (4+3x) could anyone help me please . what differentiation rule do i use with this equaiton and what answer should i get please thankyou !!!! 2. Hello there, I assume you mean: $\frac{d}{dx} ( cos^{-1}(4 + 3x))$. Recall that $\frac{d}{dx} cos^{-1}(x) = -\frac{1}{\sqrt{1 - x^2}}$. Be sure to use Chain Rule when differentiating. Good luck! This should be enough information. 3. No, this is the inverse of hyperbolic cosine. Hyperbolic cosine is $\cosh x={e^x+e^{-x}\over 2}$ and $\sinh x={e^x-e^{-x}\over 2}$ See http://en.wikipedia.org/wiki/Hyperbolic_cosine. The inverse of cosh can be obtained via algebra, letting $w=e^x$. But you can see the derivative of the inverse of cosh at that page, then via the chain rule you can solve this problem. 4. Originally Posted by lukyleo26 differentiate y= arcosh (4+3x) could anyone help me please . what differentiation rule do i use with this equaiton and what answer should i get please thankyou !!!!
2018-02-18 00:19:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8658176064491272, "perplexity": 3336.4242432372885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891808539.63/warc/CC-MAIN-20180217224905-20180218004905-00127.warc.gz"}
https://www.lhscientificpublishing.com/Journals/articles/DOI-10.5890-DNC.2016.12.008.aspx
ISSN:2164-6376 (print) ISSN:2164-6414 (online) Discontinuity, Nonlinearity, and Complexity Dimitry Volchenkov (editor), Dumitru Baleanu (editor) Dimitry Volchenkov(editor) Mathematics & Statistics, Texas Tech University, 1108 Memorial Circle, Lubbock, TX 79409, USA Email: dr.volchenkov@gmail.com Dumitru Baleanu (editor) Cankaya University, Ankara, Turkey; Institute of Space Sciences, Magurele-Bucharest, Romania Robust Exponential Stability of Impulsive Stochastic Neural Networks with Markovian Switching and Mixed Time-varying Delays Discontinuity, Nonlinearity, and Complexity 5(4) (2016) 427--446 | DOI:10.5890/DNC.2016.12.008 Haoru Li$^{1}$, Yang Fang$^{2}$, Kelin Li$^{2}$ $^{1}$ School of Automation and Electronic Information, Sichuan University of Science & Engineering, Sichuan 643000, P.R. China $^{2}$ School of Science, Sichuan University of Science & Engineering, Sichuan 643000, P.R. China Abstract This paper is concerned with the robust exponential stability problem for a class of impulsive stochastic neural networks with Markovian switching, mixed time-varying delays and parametric uncertainties. By construct a novel Lyapunov-Krasovskii functional, and using linear matrix inequality (LMI) technique, Jensen integral inequality and free-weight matrix method, several novel sufficient conditions in the form of LMIs are derived to ensure the robust exponential stability in mean square of the trivial solution of the considered system. The results obtained in this paper improve many known results, since the parametric uncertainties have been taken into account, and the derivatives of discrete and distributed time-varying delays need not to be 0 or smaller than 1. Finally, three illustrative examples are given to show the effectiveness of the proposed method. Acknowledgments This work was supported by the Opening Project of Sichuan Province University Key Laboratory of Bridge Non-destruction Detecting and Engineering Computing under Grants No. 2014QZJ01 and No. 2015QYJ01, National Natural Science Foundation of China under Grant 61573010. References 1. [1] Zhang, H.,Wang, Z., and Liu D. (2014), A Comprehensive Review of Stability Analysis of Continuous-Time Recurrent Neural Networks, IEEE Transactions on Neural Networks and Learning Systems, 25, 1229-1262. 2. [2] Wang. Z., Liu. Y., and Liu. X. (2010), Exponential Stabilization of a Class of Stochastic SystemWith Markovian Jump Parameters and Mode-Dependent Mixed Time-Delays, IEEE Transactions on Automatic Control, 55, 1656-1662. 3. [3] Song, Q. and Huang, T. (2015), Stabilization and synchronization of chaotic systems with mixed time-varying delays via intermittent control with non-fixed both control period and control width, Neurocomputing, 154, 61-69. 4. [4] Chen, X., Song, Q., and Liu, Y., et al (2014), Global μ-Stability of Impulsive Complex-Valued Neural Networks with Leakage Delay and Mixed Delays, Abstract and Applied Analysis, (2014), Article ID 397532. 5. [5] Zhang, G., Lin, X., and Zhang, X. (2014), Exponential stabilization of neutral-type neural networks with mixed interval time-varying delays by intermittent control: a CCL approach, Circuits, Systems, and Signal Processing, 33, 371-391. 6. [6] Shen, Y. and Wang, J. (2007), Noise-induced stabilization of the recurrent neural networks with mixed time-varying delays and Markovian-switching parameters, IEEE Transactions on Neural Networks, 18, 1857-1862. 7. [7] Tang, Y., Fang, J., and Xia, M., et al (2009), Delay-distribution-dependent stability of stochastic discrete-time neural networks with randomly mixed time-varying delays, Neurocomputing, 72, 3830-3838. 8. [8] Phat, V.N. and Trinh, H. (2010), Exponential stabilization of neural networks with various activation functions and mixed time-varying delays, IEEE Transactions on Neural Networks, 21, 1180-1184. 9. [9] Appleby, J.A.D., Mao, X., and Rodkina, A. (2008), Stabilization and destabilization of nonlinear differential equations by noise, IEEE Transactions on Automatic Control, 53, 683-691. 10. [10] Mao, X., Yin, G.G., and Yuan, C. (2007), Stabilization and destabilization of hybrid systems of stochastic differential equations, Automatica, 43, 264-273. 11. [11] Blythe, S., Mao, X., and Liao, X. (2001), Stability of stochastic delay neural networks, Journal of the Franklin Institute, 338, 481-495. 12. [12] Lakshmikantham, V., Bainov, D.D., and Simeonov, P.S. (1989), Theory of Impulsive Differential Equations, World Scientific, Singapore. 13. [13] Zhang, H., Ma, T., and Huang, G., et al (2010), Robust global exponential synchronization of uncertain chaotic delayed neural networks via dual-stage impulsive control, IEEE Transactions on Systems Man and Cybernetics part B-Cybernetics, 40, 831-844. 14. [14] Li, H., Chen, B. and Zhou, Q., et al (2008), Robust exponential stability for uncertain stochastic neural networks with discrete and distributed time-varying delays, Physics Letters A, 372, 3385-3394. 15. [15] Huang, T., Li, C., and Duan, S., et al (2012), Robust exponential stability of uncertain delayed neural networks with stochastic perturbation and impulse effects, IEEE Transactions on Neural Networks and Learning Systems, 23, 866- 875. 16. [16] Krasovskii, N. M. and Lidskii, E. A. (1961), Analytical design of controllers in systems with random attributes, Automation and Remote Control, 22, 1021-1025. 17. [17] Zhou, W., Tong, D., and Gao, Y., et al (2012), Mode and Delay-Dependent Adaptive Exponential Synchronization in pth Moment for Stochastic Delayed Neural Networks With Markovian Switching, IEEE Transactions on Neural Networks and Learning Systems, 23, 662-668. 18. [18] Wu, Z. G., Shi, P., and Su, H., et al (2013), Stochastic Synchronization of Markovian Jump Neural Networks With Time-Varying Delay Using Sampled Data, IEEE transactions on cybernetics, 43, 1796-1806. 19. [19] Liu, X. and Xi, H. (2014), Synchronization of neutral complex dynamical networks with Markovian switching based on sampled-data controller, Neurocomputing, 139, 163-179. 20. [20] Zhu, Q. and Cao, J. (2012), Stability of Markovian jump neural networks with impulse control and time varying delays, Nonlinear Analysis: Real World Applications, 13, 2259-2270. 21. [21] Zhou,W., Zhu, Q., and Shi, P., et al (2014), Adaptive synchronization for neutral-type neural networks with stochastic perturbation and Markovian switching parameters, IEEE transactions on Cybernetics, 44, 2848-2860. 22. [22] Zhang, B., Zheng,W.X., and Xu, S. (2012), Delay-dependent passivity and passification for uncertain Markovian jump systems with time-varying delays, International Journal of Robust and Nonlinear Control, 22, 1837-1852. 23. [23] Balasubramaniam, P., Nagamani, G., and Rakkiyappan, R. (2011), Passivity analysis for neural networks of neutral type with Markovian jumping parameters and time delay in the leakage term, Communications in Nonlinear Science and Numerical Simulation, 16, 4422-4437. 24. [24] Huang, H., Huang, T., and Chen, X. (2013), A mode -dependent approach to state estimation of recurrent neural networks with Markovian jumping parameters and mixed delays, Neural Networks, 46, 50-61. 25. [25] Wang, Z., Liu, Y., and Yu, L., et al (2006), Exponential stability of delayed recurrent neural networks with Markovian jumping parameters, Physics Letters A, 356, 346-352. 26. [26] Dong, H., Wang, Z., and Ho, D.W.C., et al (2011), Robust filtering for Markovian jump systems with randomly occurring nonlinearities and sensor saturation: the finite-horizon case, IEEE Transactions on Signal Processing, 59, 3048-3057. 27. [27] Zhu, Q. and Cao, J. (2012), Stability Analysis of Markovian Jump Stochastic BAM Neural Networks With Impulse Control and Mixed Time Delays, IEEE Transactions on Neural Networks and Learning Systems, 23, 467-479. 28. [28] Fu, X. and Li, X. (2011), LMI conditions for stability of impulsive stochastic Cohen-Grossberg neural networks with mixed delays, Communications in Nonlinear Science and Numerical Simulation, 16, 435-454. 29. [29] Dong, M., Zhang, H., and Wang, Y. (2009), Dynamics analysis of impulsive stochastic Cohen-Grossberg neural networks with Markovian jumping and mixed time delays, Neurocomputing, 72, 1999-2004. 30. [30] Rakkiyappan, R. and Balasubramaniam, P. (2009), Dynamic analysis of Markovian jumping impulsive stochastic Cohen-Grossberg neural networks with discrete interval and distributed time-varying delays, Nonlinear Analysis: Hybrid Systems, 3, 408-417. 31. [31] Jiang, H. and Liu, J. (2011), Dynamics analysis of impulsive stochastic high-order BAM neural networks with Markovian jumping and mixed delays, International Journal of Biomathematics, 4, 149-170. 32. [32] Zhang, H., Dong, M.,andWang, Y., et al (2010), Stochastic stability analysis of neutral-type impulsive neural networks with mixed time-varying delays and Markovian jumping, Neurocomputing, 73, 2689-2695. 33. [33] Rakkiyappan, R., Chandrasekar, A., and Lakshmanan, S., et al, Exponential stability of Markovian jumping stochastic Cohen-Grossberg neural networks with mode -dependent probabilistic time-varying delays and impulses, Neurocomputing, 131, 265-277. 34. [34] Sakthivel, R., Raja, R., and Anthoni, S.M. (2011), Exponential stability for delayed stochastic bidirectional associative memory neural networks with Markovian jumping and impulses, Journal of optimization theory and applications, 150, 166-187. 35. [35] Li, B., Li, D., and Xu, D. (2013) Stability analysis for impulsive stochastic delay differential equations with Markovian switching, Journal of the Franklin Institute, 350, 1848-1864. 36. [36] Gao, Y., Zhou, W., and Ji, C., et al (2012), Globally exponential stability of stochastic neutral-type delayed neural networks with impulsive perturbations and Markovian switching, Nonlinear Dynamics, 70, 2107-2116. 37. [37] Bao, H. and Cao, J. (2011), Stochastic global exponential stability for neutral-type impulsive neural networks with mixed time-delays and Markovian jumping parameters, Communications in Nonlinear Science and Numerical Simulation, 16, 3786-3791. 38. [38] Raja, R., Sakthivel, R. and, Anthoni, S., et al (2011), Stability of impulsive Hopfield neural networks with Markovian switching and time-varying delays, International Journal of AppliedMathematics and Computer Science, 21, 127-135. 39. [39] Zheng, C. D.,Wang, Y., andWang, Z. (2014), Stability analysis of stochastic fuzzy Markovian jumping neural networks with leakage delay under impulsive perturbations, Journal of the Franklin Institute, 351, 1728-1755. 40. [40] Zhu, Q. and Cao, J. (2010), Robust exponential stability of Markovian jump impulsive stochastic Cohen-Grossberg neural networks with mixed time delays, IEEE Transactions on Neural Networks, 21, 1314-1325. 41. [41] Yang, T. (2001), Impulsive Control Theory, Springer, Berlin. 42. [42] Gu, K. (2000), An integral inequality in the stability problem of time-delay systems, in Proceedings of 39th IEEEConference on Decision and Control, Sydney, Australia, 2805-2810. 43. [43] Wang, Y., Xie, L., and Souza, C. (1992), Robust control of a class of uncertain nonlinear systems, Systems and Control Letters, 19, 139-149. 44. [44] Gronwall, T.H. (1919), Note on the derivatives with respect to a parameter of the solutions of a system of differential equations, Annals of Mathematics, 20, 292-296. 45. [45] Halanay, A. and Yorke, J.A. (1971), Some new results and problems in the theory of differential-delay equations, SIAM Review, 13, 55-80. 46. [46] Seuret, A. and Gouaisbaut, F. (2013), Wirtinger-based integral inequality: Application to time-delay systems, Automatica, 49, 2860-2866. 47. [47] Zhu, Q. and Cao, J. (2011), Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 41, 341-353. 48. [48] Rakkiyappan, R., Chandrasekar, A., and Lakshmanan, S., et al (2013), Effects of leakage time-varying delays in Markovian jump neural networks with impulse control, Neurocomputing, 121, 365-378. 49. [49] Shen, Y. andWang, J. (2009), Almost sure exponential stability of recurrent neural networks with Markovian switching, IEEE Transactions on Neural Networks, 20, 840-855. 50. [50] Boyd, S., Ghaoui, L. El, and Feron, E., et al. (1994), Linear Matrix Inequalities in System and Control Theory, SIAM: Philadelphia, PA. 51. [51] Khalil, H.K. (1996), Nonlinear Systems, NJ: Prentice-Hall, Upper Saddle River.
2022-11-28 22:20:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4648815393447876, "perplexity": 7490.67727037338}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710662.60/warc/CC-MAIN-20221128203656-20221128233656-00854.warc.gz"}
https://pos.sissa.it/187/312/
Volume 187 - 31st International Symposium on Lattice Field Theory LATTICE 2013 (LATTICE 2013) - Standard Model Parameters and Renormalization A determination of the average up-down, strange and charm quark masses at $N_f=2+1+1$ N. Carrasco Vela, P. Dimopoulos, R. Frezzotti, P. Lami*, V. Lubicz, D. Palao, E. Picca, L. Riggio, G. Rossi, F. Sanfilippo, S. Simula and C. Tarantino Full text: pdf Published on: April 28, 2014 DOI: https://doi.org/10.22323/1.187.0312 How to cite Metadata are provided both in "article" format (very similar to INSPIRE) as this helps creating very compact bibliographies which can be beneficial to authors and readers, and in "proceeding" format which is more detailed and complete. Open Access Copyright owned by the author(s) under the term of the Creative Commons Attribution-NonCommercial-ShareAlike.
2022-09-29 10:21:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17448003590106964, "perplexity": 4659.800808583247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00657.warc.gz"}
http://expect-more.eu/computgeo/ch5.html
## Deepest regression line In this chapter we present an first algorithm to find the deepest regression line, which is the fit with maximal regression depth for a given dataset $Z_n = \{(x_i, y_i), i = 1,...,n \} \subset \rm I\!R^2$ #### Algorithm Algorithm: Deepest regression line Input: • $Z_n = \{(x_i, y_i), i = 1,...,n \} \subset \rm I\!R^2$ Steps: 1. Sort all element of $Z_n$ by their $x$ coordinate such that $x_1 < x_2 < ... < x_n$ 2. For each pair $ij$ such that $0 < i < j \le n$: • Find the line $l_{ij}$ that intersects with observations $i$ and $j$ • Use the algorithm defined in chapter two to compute $r_{ij} = rdepth(l_{ij}, Z_n)$ (without the sorting wich is already done) • If $r_{ij}$ is greater than the regression depth of the temporary solution, keep $l_{ij}$ as a new temporary solution. 3. Return the $l_{ij}$ that has the greatest regression depth, it is the deepest regression line. Given the previous algorithm, one can find the deepest regression line of a dataset $Z_n$ in $\mathcal{O}(n^3)$. • Step one consist in a sorting, wich can be done by a efficient sorting algorithm (e.g. Quicksort) in $\mathcal{O}(n \log (n))$ • Going through all the $ij$ pairs is done in $\mathcal{O}(n^2)$ • The algorithm used to compute rdepth, performs in $\mathcal{O}(n)$ So we conclude thaht the algorithm presented in this chapter to compute the deepest regression line performs in $\mathcal{O}(n \log(n) + n^3) = \mathcal{O}(n^3)$. #### Example The following application allows you to find the deepest regression line of a set of points. Some random points can also be added to ease the use of the application.
2018-12-18 11:31:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6748375296592712, "perplexity": 799.5058983004428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829140.81/warc/CC-MAIN-20181218102019-20181218124019-00350.warc.gz"}
https://de.zxc.wiki/wiki/Relative_H%C3%A4ufigkeit
Relative frequency Calculation of the relative frequency as a quantity diagram The relative frequency is a division number and a measure of descriptive statistics . It shows the proportion of elements in a set that have a certain characteristic value. It is calculated by dividing the absolute frequency of a feature in an underlying set by the number of objects in this set. The relative frequency is therefore a fraction and has a value between 0 and 1. General mathematical definition Relative frequencies are calculated in relation to an underlying quantity. This set can be either a population or a sample . To define the relative frequency, let's assume that the underlying set has elements. The event occurs under these elements . The relative frequency is calculated as the number of observations with the characteristic divided by the total number of all elements in the underlying set. ${\ displaystyle n}$${\ displaystyle n}$${\ displaystyle H_ {n} (A)}$${\ displaystyle A}$${\ displaystyle A}$ The relative frequency is therefore found as ${\ displaystyle h_ {n} (A) = {\ frac {H_ {n} (A)} {n}}}$. ${\ displaystyle H_ {n} (A)}$is also known as the absolute frequency . In contrast to the relative frequency , meaningful comparisons between samples (or populations) of different sizes with the absolute frequency are usually not possible. ${\ displaystyle h_ {n} (A)}$${\ displaystyle H_ {n} (A)}$ Examples Proportion of girls in a school class Class A has 24 students, 12 of them girls. In class B there are 18 students, 9 of them girls. This means that there are more girls in class A (12) than in class B (9) if you look at the absolute frequency. On the other hand, if you look at the frequency of girls in relation to the respective class size, you can see that the proportion of girls is the same in both classes: in class A the relative frequency of girls is 0.5 (= 1224 ) and in class B it is also 0 , 5 (= 918 ). The relative frequency can also be easily converted into a percentage by multiplying it by 100%. Thus, both classes consist of 50% (= 0.5 × 100%) girls. Polls In an election poll, 600 eligible voters in Bavaria are interviewed, as well as 200 eligible voters in Berlin. In Bavaria passed to 120 respondents, the party A to choose. In Berlin, 100 respondents say they would vote for party A. The absolute frequency for voters of Party A is thus higher in Bavaria than in Berlin, namely 120 respondents in Bavaria versus 100 respondents in Berlin. However, this is due to the fact that three times as many people were interviewed in Bavaria as in Berlin. A comparison of the absolute frequencies is therefore not useful. In contrast, the relative frequency enables a comparison of the popularity of Party A between Bavaria and Berlin. In Bavaria the relative frequency is 0.2 (= 120600 ). The relative frequency for Berlin is calculated as 0.5 (= 100200 ). Party A is much more popular in Berlin than in Bavaria. properties In contrast to the absolute frequency, the relative frequency is always between 0 and 1. This allows different relative frequencies to be compared with one another, although they refer to a different reference variable. In descriptive statistics , relative frequencies are therefore used to be able to compare frequency distributions regardless of the number of elements in the population (i.e. regardless of the sample size ). In the context of inferential statistics and stochastics , the relative frequency is used as a maximum likelihood estimator for the parameter probability of success of a binomial distribution . The following calculation rules apply to the relative frequency: • ${\ displaystyle 0 \ leq h_ {n} (A) \ leq 1}$due to the normalization to the number of repetitions.${\ displaystyle n}$ • ${\ displaystyle h_ {n} (\ Omega) = 1 \,}$for the safe event . • ${\ displaystyle h_ {n} (A \ cup B) = h_ {n} (A) + h_ {n} (B) -h_ {n} (A \ cap B)}$ for the sum of events. • ${\ displaystyle h_ {n} ({\ bar {A}}) = 1-h_ {n} (A)}$ for the complementary event. Relative frequency and probability Frequentistic concept of probability The frequentistic concept of probability interprets the probability of an event as the relative frequency with which it occurs in a large number of identical, repeated, independent random experiments . This is the so-called 'Limes definition' according to von Mises . The prerequisite for this concept of probability is that the experiment can be repeated as required; the individual rounds must be independent of each other. Example: You roll the dice 100 times and get the following distribution: the 1 falls 10 times (this corresponds to a relative frequency of 10%), the 2 falls 15 times (15%), the 3 also 15 times (15%), the 4 in 20%, the 5 in 30% and the 6 in 10% of the cases. After 10,000 runs, the relative frequencies - if there is a fair dice - have stabilized in the vicinity of the probabilities. For example, the relative frequency of rolling a 3 is approximately 16.6%. The axiomatic definition of probability used today as the basis of probability theory manages without recourse to the concept of relative frequency. Even using this definition of probability, however, there is a close relationship (by means of the law of large numbers ) between probability and relative frequency. Law of Large Numbers Laws of large numbers denote certain convergence theorems for the almost certain convergence and the convergence in probability of random variables. In their simplest form, these sentences say that the relative frequency of a random result usually approaches the probability of this random result if the underlying random experiment is carried out over and over again. The laws of large numbers can be proven from Kolmogorov's axiomatic definition of probability. Thus there is a close connection between relative frequency and probability even if one is not a representative of the objectivistic conception of probability. literature • Bernhard Rüger: Inductive Statistics. Introduction for economists and social scientists . R. Oldenbourg Verlag, Munich Vienna 1988, ISBN 3-486-20535-8 . Individual evidence 1. Bernhard Rüger (1988), p. 8 ff. 2. Bernhard Rüger (1988), p. 11 ff. 3. a b c Bernhard Rüger (1988), p. 79 ff.
2023-01-30 09:19:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 14, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8380131721496582, "perplexity": 977.9375246447531}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00242.warc.gz"}
https://math.stackexchange.com/questions/3200712/calculating-the-old-average-with-the-mean-change
# Calculating the old average with the mean change Can you calculate the original average of some numbers by knowing how the average changes when you add numbers to it? For example: Let's say we have a class of some old students (we don't know their age or the average or the number of them). Then we are told that if we add the age of e.g. Dan, 16 to the ages of the students, the average age drops by 10 years, and then when we add the age of Michael, 12 to the new average, it drops by another 8 years. Can we just from this information calculate the average age of just the students? Thanks for even reading this post. Response to John Hughes: I already tried doing this, but I got stuck at the point where you are supposed to do something with the things you've just written down. The things that I had written down that I thought I could get the answer from were: $$u-u'=S/N-(S+16)/(N+1)=10$$ and $$u-u''=S/N-(S+28)/(N+2)=18$$ I don't even know if this is what I was supposed to figure out. How I figured it out: After doing what John advised me to do I came up with this. $$S=10N^2+26N$$ $$2S=18N^2+64N$$ In the first one, I divided both sides by N and in the second one, I divided both sides by 2 and by N. And this is what I got: $$S/N=10N+26$$ $$S/N=9N+32$$ It seems pretty obvious that you should then put these equations together because they both equal $$S/N$$. Just like this: $$9N+32=10N+26$$ and when you edit the equation $$6=N$$ So now when you know what N equals you can plug it in another equation. $$S=10*6^2+26*6$$ $$S=516$$ $$u=S/N=516/6=86$$ And that means the original average age is 86. Thanks for the help! Very large hint, without being a complete solution Let $$N$$ denote the (unknown) number of students. Let $$S$$ denote the (unknown) sum of their ages. Then the average age of the students is $$u = S/N.$$ If we add Dan to the class, how many students, $$N'$$, will there be? And what will the sum, $$S'$$ of their ages be, in terms of $$S$$? And what does that make the average age $$u'$$ of this enlarged class in terms of $$N$$ and $$S$$? Once you know that, and set $$u' = u - 10,$$ and then replace $$u$$ and $$u'$$ by the formulas for them, you get one equation in the two unknowns $$N$$ and $$S$$. If we now add Michael to the class, we get yet another number, $$N''$$ of students, and yet another age-sum, $$S''$$. How are these related to $$N'$$ and $$S'$$ (or to $$N$$ and $$S$$)? We also get yet another average age, $$u''$$, and know know that $$u'' = u' - 8$$ That gives us a second equation in the unknowns $$N$$ and $$S$$. Perhaps you can take it from here. (Suggestion: Carry out the ideas I've described here and edit your question --- click on the word "edit" below the question to do so --- and show what you've gotten; perhaps we can then help your further if you still need it.) You've got \begin{align} 10 &= \frac{S}{N}-\frac{S+16}{N+1}\\ 18 &=\frac{S}{N}-\frac{S+28}{N+2} \end{align} and that's great. It's typical, in situations like this, to clear the denominators, i.e., to multiply through by $$N$$ and $$N+1$$ or $$N + 2$$, resulting in \begin{align} 10N(N+1) &= S(N+1)-(S+16)N\\ 18N(N+2) &=S(N+2)-(S+28)N \end{align} When you expand out the right hand sides, a funny thing happens here: there are several $$SN$$ terms, and they all cancel. So you get \begin{align} 10N(N+1) &= S-16N\\ 18N(N+2) &= 2S-28N \end{align} From the first equation, we can solve for $$S$$; we can then plug this into the second equation to get an equation involving only $$N$$. That's good...we might be able to solve it. But it's a quadratic...that's potentially bad, because maybe there'll be two equally valid solutions. Or maybe they won't be equally valid. Why don't you go ahead and see where you end up when you follow that plan?
2022-01-23 13:31:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 45, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9737098217010498, "perplexity": 292.96850637810877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304261.85/warc/CC-MAIN-20220123111431-20220123141431-00192.warc.gz"}
https://or.stackexchange.com/questions/1122/estimation-of-the-size-of-branch-and-bound-trees-using-ml?noredirect=1
# Estimation of the size of Branch-and-Bound trees using ML A short background: A paper [1] published in 2006 intends to show that the time needed to solve mixed-integer programming problems by branch and bound can be roughly predicted early in the solution process. Authors mentioned that "The application of the branch-and-bound algorithm can be limited by both the computing time and the storage space required (even when storing nodes on a hard disk). The solution process may take hours or days and there is very little a priori indication of how difficult a model will be to solve. Unfortunately, there is no known method to extract this information from the problem formulation." on the other hand, commercial solvers are like black boxes, from which extracting useful data about the number of nodes, number of branches and so on, is very hard (I tried to extract related data from Cplex callback functions in Matlab but the trial was unsuccessful). My question is: Is there any way to use ML techniques to estimate the branch and bound tree size? Are open-source solvers provide such data that can be used to train an ML model and then test the model on benchmark problems? Doing my homework on searching for answers before asking the question, I can mention the following papers that also aimed to tackle the problem: • Knuth's method: In [2], two new online methods for estimating the size of backtracking search tree are proposed. They mentioned that, "Knuth’s method estimates $$N$$, the size of a backtrack tree as $$1 + b_1 + b_1.b_2 + . . .$$ where $$b_i$$ is the branching rate observed at depth $$i$$ using random probing". • Mentioning the effect of choosing the right variable to branch on, the authors in [3] mentioned that "branching on a variable that does not lead to any serious simplifications on any of the (two) children can be seen as doubling the size of the tree with no improvement, thus leading to extremely large (out of control) search trees." [1] Cornuéjols, Gérard, Miroslav Karamanov, and Yanjun Li. "Early estimates of the size of branch-and-bound trees." INFORMS Journal on Computing 18.1 (2006): 86-96. [2] Kilby, Philip, et al. "Estimating search tree size." Proc. of the 21st National Conf. of Artificial Intelligence, AAAI, Menlo Park. 2006. [3] Lodi, Andrea, and Giulia Zarpellon. "On learning and branching: a survey." Top 25.2 (2017): 207-236. Great question. You might be interested in this paper here: Learning MILP Resolution Outcomes Before Reaching Time-Limit by Martina Fischetti, Andrea Lodi, and Giulia Zarpellon. They don't exactly answer your question but you may see why the question is hard to answer and what partial progress can be made. A priori estimating the tree size is estimating whether a model is hard to solve or not. From the static features of the instance, without any runtime knowledge (and even with it!), I personally deem this task virtually undoable. But this is just gut feeling. edit concerning the data: B&B solvers do not provide such data, but of course you can collect this from B&B runs a posteriori. • Thanks for the suggestion @Marco Lübbecke, I will check the paper but I believe any progress in estimating the tree size, as you said, will reveal a great detail on how hard the problem is and how to allocate memory and time on solving that. – Oguz Toragay Aug 1 '19 at 15:02 • I therefore thought it would be great to "only" be able to classify, maybe into "short, medium, long", but even this seems to be not reliable. With runtime information, however, there might be more hope. Fascinating area at any rate. – Marco Lübbecke Aug 2 '19 at 10:57
2021-01-19 22:24:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6305597424507141, "perplexity": 944.935834974364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519784.35/warc/CC-MAIN-20210119201033-20210119231033-00411.warc.gz"}
https://stats.stackexchange.com/questions/79778/how-does-one-show-that-there-is-no-unbiased-estimator-of-lambda-1-for-a-po
# How does one show that there is no unbiased estimator of $\lambda^{-1}$ for a Poisson distribution with mean $\lambda$? Suppose that $X_{0},X_{1},\ldots,X_{n}$ are i.i.d. random variables that follow the Poisson distribution with mean $\lambda$. How can I prove that there is no unbiased estimator of the quantity $\dfrac{1}{\lambda}$? • I presume you mean, "lambda?" Anyways, this isn't appropriate for MO. – Noah S Dec 16 '13 at 0:12 • Is this for some subject? It looks like a fairly standard textbook exercise. Please check the self-study tag, and its tag wiki info and add the tag (or please give some indication how else such a question arises). Note that such questions, while welcome, place some requirements on you (and restrictions on us). What have you tried? – Glen_b -Reinstate Monica Dec 16 '13 at 5:33 • You should be able to use a similar argument to the one here. – Glen_b -Reinstate Monica Dec 16 '13 at 5:36 Assume that $g(X_0, \ldots, X_n)$ is an unbiased estimator of $1/\lambda$, that is, $$\sum_{(x_0, \ldots, x_n) \in \mathbb{N}_0^{n+1}} g(x_0, \ldots, x_n) \frac{\lambda^{\sum_{i=0}^n x_i}}{\prod_{i=0}^n x_i!} e^{-(n + 1) \lambda} = \frac{1}{\lambda}, \quad \forall \lambda > 0.$$ Then multiplying by $\lambda e^{(n + 1) \lambda}$ and invoking the MacLaurin series of $e^{(n + 1) \lambda}$ we can write the equality as $$\sum_{(x_0, \ldots, x_n) \in \mathbb{N}_0^{n+1}} \frac{g(x_0, \ldots, x_n)}{\prod_{i=0}^n x_i!} \lambda^{1 + \sum_{i=0}^n x_i} = 1 + (n + 1)\lambda + \frac{(n + 1)^2 \lambda^2}{2} + \ldots , \quad \forall \lambda > 0,$$ where we have an equality of two power series of which one has a constant term (the right-hand side) and the other doesn't: a contradiction. Thus no unbiased estimator exists.
2020-06-05 01:28:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8767952919006348, "perplexity": 242.73404607020308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348492295.88/warc/CC-MAIN-20200604223445-20200605013445-00297.warc.gz"}
http://tex.stackexchange.com/questions/7247/commutative-diagrams-using-metapost-or-asymptote/7248
# Commutative diagrams using MetaPost or Asymptote Xy-pic, TikZ and PSTricks seem to be the graphics packages commonly used to draw commutative diagrams. Having heard about the power of MetaPost and Asymptote, I would like to experiment with them. How good are MetaPost and Asymptote for drawing commutative diagrams? There does not seem to be any official packages for this purpose at the moment. - Asymptote is designed for 3D drawings. Those would have to be some super-complicated commutative diagrams to make it worthwhile... –  Seamus Dec 16 '10 at 23:06 Actually, Asymptote is not really designed for 3D drawings. It can do 3D, it did from the beginning extend the metapost path syntax to 3D, but it did not have true 3D drawing capabilities (like hidden surface removal) till quite recently. The asymptote gallery at asymptote.sourceforge.net has a number of good 2D examples. I don't see any commutative diagrams, but they would certainly be possible. –  Jan Hlavacek Dec 16 '10 at 23:30 For Metapost, look at this page. I am sure you can do the same with Asymptote, but I am not aware of any examples. - Here's a more serious example using Asymptote. I would note that for commutative diagrams in a *TeX document, I still recommend using tikz-cd (or tikz directly for sufficiently complicated examples). But I believe this answer is still potentially useful because it solves a couple of Asymptote problems that might come up in other contexts: • how to compute the bounding box of a label (see the boundingbox() function in the example code; note that this would not work if the picture were scaled with size() rather than unitsize()) • how to set up labels to have the same baseline without changing their bounding boxes (make sure they are drawn with a pen that has the basealign option AND that they have alignment N (north); these must be used together to have the desired effect). Here's the example code: settings.outformat="pdf"; real xunit=2cm, yunit=1.4cm; unitsize(xunit,yunit); defaultpen(basealign); picture blank = currentpicture.copy(); usepackage("amssymb"); string[][] nodestext = {{"$\hat{A}$", "$d$", "$A$"}, {"$\sum_i a_i$", "$c$"}, {"$\hat{A}$", "$\displaystyle\prod_{n \in \mathbb{Z}} A_n$", "$\displaystyle\prod_{n \in \mathbb{Z}} A_n$", "$A$"}, {}, {"", "", minipage("node not in math mode",60pt)}}; Label[][] nodes; for (int r = 0; r < nodestext.length; ++r) { nodes.push(new Label[nodestext[r].length]); for (int c = 0; c < nodestext[r].length; ++c) { nodes[r][c] = Label(nodestext[r][c], position=(c,-r), align=N); label(nodes[r][c]); } } /* * This function computes the bounding box of a Label by creating a new blank * picture with the same sizing information as the old picture, adding the * Label to that blank picture, and then computing the bounding box of that picture. */ path boundingbox(Label L) { picture currentpic = blank.copy(); label(currentpic, L); pair min = min(currentpic, user=true); //Without the user=true option, the returned answer would be measured in postscript points. pair max = max(currentpic, user=true); return box(min, max); } path[][] boundingboxes; pair[][] centers; for (int r = 0; r < nodes.length; ++r) { path[] boundingboxesr; pair[] centersr; for (int c = 0; c < nodes[r].length; ++c) { Label currentnode = nodes[r][c]; pair currentpos = (c,-r); boundingboxesr.push(boundingbox(currentnode)); centersr.push(currentpos + (0,7pt/yunit)); } boundingboxes.push(boundingboxesr); centers.push(centersr); } path truncate(path thepath, int sourcerow, int sourcecol, int up=0, int right=0) { pair source = centers[sourcerow][sourcecol]; int destrow = sourcerow - up; int destcol = sourcecol + right; pair dest = centers[destrow][destcol]; path toreturn = thepath; toreturn = firstcut(toreturn, knife=boundingboxes[sourcerow][sourcecol]).after; toreturn = lastcut(toreturn, knife=boundingboxes[destrow][destcol]).before; } void cdarrow(int sourcerow, int sourcecol, int up=0, int right=0, Label L="", bool crossingover = false) { pair source = centers[sourcerow][sourcecol]; int destrow = sourcerow - up; int destcol = sourcecol + right; pair dest = centers[destrow][destcol]; path touse = truncate(source -- dest, sourcerow, sourcecol, up, right); if (crossingover) draw(touse, white+linewidth(3pt)); } cdarrow(0,0,up=-1,right=1); cdarrow(1,0,up=1,right=1,crossingover=true, L=Label("$\scriptstyle h$",align=Relative(0.3W),position=Relative(0.65))); cdarrow(1,0,right=1,L=Label("$\scriptstyle f$",align=Relative(E))); cdarrow(0,1,right=-1); cdarrow(2,0,right=1); cdarrow(2,1,right=1); cdarrow(2,2,right=1); path curvedarrow = centers[2][0]{SSE} .. tension 0.75 .. {NE} centers[2][2]; curvedarrow=truncate(curvedarrow, 2, 0, right=2); draw(curvedarrow, arrow=Arrow(TeXHead), L=Label("$\scriptstyle g$",align=Relative(E)), margin=Margins); curvedarrow = centers[0][1] {ESE} .. {ENE} centers[0][2]; curvedarrow = truncate(curvedarrow, 0,1, right=1); curvedarrow = centers[0][1] {ENE} .. {ESE} centers[0][2]; curvedarrow = truncate(curvedarrow, 0,1, right=1); The result: - Zoonekynd's example #269 from his Metapost examples page (excuse the grainy Gif): The code contains a whole MP library defining a begindiag...enddiag group defining node and rarrow primitives for the parts of the CD, with which the above diagram can be typeset using: begindiag; node "A"; rarrowto(1,0, "above" => "a", "shape" => "middle", "curved" => 3mm, "dashed" => withsmalldots); rarrowto(0,1, "below" => "b", "color" => blue, "shape" => "mapsto", "dashed" => evenly); node "A"; rarrowto(1,0, "above" => "c", "width" => 1bp, "shape" => "inj"); rarrowto(0,1, "below" => "d", "shape" => "mono"); node "A"; nextline; node "A"; rarrowto(1,0, "below" => "e", "shape" => "epi"); node "A"; rarrowto(1,-1, "below" => "f", "curved" => -3mm, "shape" => "half_dotted"); enddiag; Jan links to Alan Kennington's Metapost examples (thanks: I didn't know of them), but from what I can see, while the examples there are impressive, they make use of no real, reusable CD library. Like Jan, I know of no Asymptote code for CDs, but it should not be difficult to translate the Alan Kennington CD examples to it, because they don't make use of equation solving. - For simple CDs, you could use the rather appealing (IMHO) approach of Eplain (link is a PDF-file), from which the following example is taken: \input eplain $$\commdiag{ Y & \mapright^f & E \cr \mapdown & \arrow(3,2)\lft{f_t} & \mapdown \cr Y \times I & \mapright^{\bar f_t} & X }$$ \bye Which produces: - Here's a non-serious example "using Asymptote" (which is to say, using TikZ inside Asymptote): settings.outformat = "pdf"; // Tell Asymptote to output a pdf ("eps" is also an acceptable choice). unitsize(1cm); // One unit of distance should be translated as 1cm rather than the default 1pt. Actually, for this particular setup, I think this is unnecessary. usepackage("tikz-cd"); // Whenever you execute LaTeX code, add the line \usepackage{tikz-cd} to the preamble string str = "\begin{tikzcd}[ampersand replacement=\&] A \rar{\phi} \dar{\theta} \& B \dar{\pi} \\ C \rar{\beta} \& D \end{tikzcd}"; // Create a string containing some LaTeX code. label(str, (0,0)); // Place a label (think TikZ node) at position (0,0) containing the result of running the string str through LaTeX. For an explanation of "ampersand replacement" in the TikZ code, see this answer. Here's the output: - @texenthusiast: I'm hardly an expert on how to execute the code. In my own case, I found that with an up-to-date version of MacTeX, Asymptote is automatically installed, so it suffices to save the code above into a file named filename.asy and then execute on the command line asy filename.asy. –  Charles Staats Apr 28 '13 at 13:24 @texenthusiast: I've added some comments to explain the code. –  Charles Staats Apr 28 '13 at 13:37
2014-04-16 08:14:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7043778300285339, "perplexity": 9729.510564478216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
https://mailman.ntg.nl/pipermail/dev-context/2012/002660.html
# [dev-context] numbers Mon Jan 16 23:46:21 CET 2012 Am 16.01.2012 um 22:59 schrieb Hans Hagen: > Hi Wolfgang, > > I redid some of strc-num so you'd better test it. One difference is that the 'user variants' do some checking due to the dodouble etc > > An alternative is to have different ones for normal and sub and do something like \def\xxx[#1]#2[#3]{...{#1}{#3}} i.e. hard tex errors > > A problem might be that users use \raw*counter which was expandable while now we have ...value alternatives for that (could be done consistently for more commands. Ok, we could keep old ones and use *value for checked ones instead. It all boils down to how compatible we want to be. > > So ... just an ftp beta and no public one yet. There are a few names wrong, e.g. \def\strc_counters_raw_interfaced {\ifsecondargument \singleexpandafter\strc_counters_raw_sub \else\iffirstargument \doubleexpandafter\strc_counters_raw_yes \else \doubleexpandafter\gobbletwooptionals \fi\fi} call \strc_counters_raw_sub or \strc_counters_raw_yes but then the yes form calls the hop form (now with braces) \def\strc_counters_raw_yes [#1][#2]{\strc_counters_raw_sub {#1}\plusone} and the hop form is defined twice \def\strc_counters_raw_sub [#1][#2]{\strc_counters_raw {#1}{#2}} \def\strc_counters_raw_sub #1#2{\ctxcommand{structurecountervalue("\strc_counters_the{#1}",\number#2)}} Another mistake is that the \*structurecountervalue expects two arguments, to fix this the following \let\rawstructurecountervalue \strc_counters_raw_yes \let\laststructurecountervalue \strc_counters_last_yes \let\firststructurecountervalue \strc_counters_first_yes \let\nextstructurecountervalue \strc_counters_next_yes \let\prevstructurecountervalue \strc_counters_prev_yes should be changed to this \def\rawstructurecountervalue [#1]{\strc_counters_raw_yes [#1][]} \def\laststructurecountervalue [#1]{\strc_counters_last_yes [#1][]} \def\firststructurecountervalue[#1]{\strc_counters_first_yes[#1][]} \def\nextstructurecountervalue [#1]{\strc_counters_next_yes [#1][]} \def\prevstructurecountervalue [#1]{\strc_counters_prev_yes [#1][]} Wolfgang
2018-01-17 11:05:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9361180067062378, "perplexity": 14647.332744684047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886895.18/warc/CC-MAIN-20180117102533-20180117122533-00000.warc.gz"}
https://mathematica.stackexchange.com/questions/255195/defining-a-new-wavelet-fibonacci-wavelet/255522#255522
# Defining a new wavelet (Fibonacci wavelet) I want to define a new wavelet (Fibonacci wavelet) in the reference. So, I read the tutorial on Wolfram Site, @Jason B. ' s answer, and also @Sektor 's answers. But I still have some problems in my code while defining wavelet function in Eq. (6). Clear["Global*"]; g[n_, t_] := 1/Sqrt[Integrate[Fibonacci[n, t]^2, {t, 0, 1}]] Fibonacci[n, t] FibonacciWavelet[]["WaveletQ"] := True FibonacciWavelet[]["OrthogonalQ"] := True FibonacciWavelet[]["BiorthogonalQ"] := False FibonacciWavelet[]["WaveletFunction"] := g[#1, #2] & First Goal: Where is the problem in the code above? When I run the following code; I get the error WaveletPsi::bbdwave: The specification FibonacciWavelet[2] is not a valid wavelet specification recognized by the system. WaveletPsi[FibonacciWavelet[2], x] Second Goal: I want to derive the followings: The following code is right? \[CapitalPsi][n_, m_, t_] :=2^((k-1)/2)WaveletPsi[FibonacciWavelet[m], 2^((k - 1)/2) t - n + 1] k = 2; M = 3; Column[Table[ Simplify@\[CapitalPsi][i, j, t], {j, 0, M - 1, 1}, {i, 1, 2^(k - 1), 1}] // Flatten] Third Goal: Finally; how to save the new type of wavelet in order to use WaveletPsi[FibonacciWavelet[m],x] as though the already defined wavelets (DaubechiesWavelet etc.) Please see: • What is the purpose of your definition? Do you try to follow paper cited or do you try to follow Mathematica tutorial? Sep 4, 2021 at 4:26 • I am trying to follow the Mathematica tutorial about how to define new wavelets. Sep 4, 2021 at 5:56 • Definition of wavelets with Mathematica very differ from common applications like it described in the paper . Sep 4, 2021 at 11:38 • I understand. All right, how to write an efficient code in order to achieve steps in the post? What is your code suggestion? Sep 4, 2021 at 17:02 • I can recommend to use standard definition from the paper as it shown in my answer. The function what you try to define with Mathematica is useless. Sep 14, 2021 at 3:31 You need to (1) use a different integration variable in the integral (which is Integrate, not Int) and (2) use an immediate assignment in the definition of $$g$$ so that the integral in the denominator is not re-evaluated every time you request a wavelet. Using partial memoization: Clear[g]; g[n_Integer] := g[n] = Function[t, Evaluate[ Fibonacci[n, t]/Sqrt[Integrate[Fibonacci[n, s]^2, {s, 0, 1}]]]] g now returns pure functions that are memoized: g[3] (* Function[t$$, 1/2 Sqrt[15/7] (1 + t$$^2)] *) Calling one of these with an argument gives the wavelet: g[3][t] (* 1/2 Sqrt[15/7] (1 + t^2) *) Now defining FibonacciWavelet[_]["WaveletQ"] = True; FibonacciWavelet[_]["OrthogonalQ"] = True; FibonacciWavelet[_]["BiorthogonalQ"] = False; FibonacciWavelet[n_Integer]["WaveletFunction"] := g[n] you can request, for example, FibonacciWavelet[3]["WaveletFunction"][t] (* 1/2 Sqrt[15/7] (1 + t^2) *) • Thank you. Your code works, but it is not what I exactly want. I can' t use WaveletPsi[FibonacciWavelet[2], x] in your code. I want to save the new wavelet and I want to use WaveletPsi[FibonacciWavelet[m],x] such as the already defined wavelets. (HaarWavelet etc.) Please see: reference.wolfram.com/language/ref/WaveletPsi.html Sep 3, 2021 at 12:10 We can answer Second Goal, since it has some meaning in connection with the paper cited (with application to delay problems). For second goal we have definitions of Fibonacci wavelets (note, first line can be extended to arbitrary n) in = Table[Integrate[Fibonacci[n, t]^2, {t, 0, 1}], {n, 10}]; g[n_, t_] := Fibonacci[n + 1, t]/Sqrt[in[[n + 1]]] psi[k_, n_, m_, t_] := Piecewise[{{2^((k - 1)/2) g[m, 2^(k - 1) t - n + 1], (n - 1)/ 2^(k - 1) <= t < n/2^(k - 1)}, {0, True}}] Psi[k_, M_, t_] := Flatten[Table[psi[k, n, m, t], {n, 1, 2^(k - 1)}, {m, 0, M - 1}]] With this definitions we can compute test example as follows With[{k = 2, M = 3}, Psi[k, M, t]] Vector Psi` can be used to solve some problems like described in the paper.
2022-06-30 06:26:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30864837765693665, "perplexity": 2951.521072972134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103669266.42/warc/CC-MAIN-20220630062154-20220630092154-00669.warc.gz"}
https://www.coursehero.com/file/p7diqd0/If-the-student-carries-out-the-multiplication-in-the-denominator-t-answer-will/
# If the student carries out the multiplication in the • Homework Help • 4 • 99% (115) 114 out of 115 people found this document helpful This preview shows page 2 - 4 out of 4 pages. 3. A functionfis graphed below.-4-3-2-112x12345678yEstimate the area between thex-axis and the graph offfromx=-4 tox= 2 using:(a) (2 pts)L3(b) (3 pts)R6(c) (2 pts) Use the upper sum withn= 3 to give an upper bound to the area. (roughly estimate ay-value if itis not a whole number).(d) (2 pts) Use the lower sum withn= 3 to give a lower bound for the area. (roughly estimate ay-value if it isnot a whole number).2-(-4) 4. (4 pts) Find a functionfon [2,5] such that the limit below is equal tolimn→∞Rn.limn→∞nXi=1r2 +3in·e2+3in·3n.3 2
2021-09-29 01:35:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8257826566696167, "perplexity": 3848.903088474934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780061350.42/warc/CC-MAIN-20210929004757-20210929034757-00695.warc.gz"}
https://revistalaika.org/picnic-bay/application-of-projectile-motion-in-technology.php
Application Of Projectile Motion In Technology Derivation of equations projectile motion Stack. 22/10/2013 · √√ Application of Projectile Motion - Applications of calculus to Projectile Motion is the free fall motion with acceleration due to gravity of, Projectile motion in real-life situation: Kinematics of basketball shooting: Authors: (Department of Science and Mathematics, Faculty of Science and Technology,. Projectile Motion Projectile Motion Simulation Projectile Motion MSC Software. Science and Technology; we will discuss its role in projectile motion first. I want to ask of derivation of projectile, application of projectile,, Projectile Motion - Real-life this application of ballistics is a rocketry as a form of military technology became obsolete, though projectile warfare itself. Projectile motion is a form of motion in which an object or particle i.e. called a projectile is thrown near the Earth's surface, Application of the work energy What are some common and important real life applications of The list goes on and on. projectile motion isn't Is there any application of ‘Projectile motion Science and Technology; we will discuss its role in projectile motion first. I want to ask of derivation of projectile, application of projectile, 22/10/2013 · √√ Application of Projectile Motion - Applications of calculus to Projectile Motion is the free fall motion with acceleration due to gravity of About this Air Resistance Drag Parameter b/m... For a spherical projectile traveling through air, a reasonable approximation to the drag force is For ping pong balls A full report and diagrams plus analysis of projectile motion experiment, includes answers What are some common and important real life applications of The list goes on and on. projectile motion isn't Is there any application of ‘Projectile motion Projectile motion (application. Change in angles during motion; Kinetic energy of a projectile; this technology will not going on for the long time ,
2020-10-24 05:47:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22001025080680847, "perplexity": 1129.977135787374}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107882102.31/warc/CC-MAIN-20201024051926-20201024081926-00477.warc.gz"}
http://lacey.se/2014/09/23/simple-setup-of-a-latex-project-with-subversion/
# Simple setup of a LaTeX project with Subversion I had a conversation/debate with a colleague recently in which we discussed the creation and editing of large files, e.g., theses, which get shared between multiple people for correction and so forth. I have a certain fondness for LaTeX – I wrote both of my undergraduate degree projects and my PhD thesis in LaTeX, and it is my firm belief that it is well worth learning the program in order to write such large documents. My colleague, on the other hand, defended Microsoft Word at least for its familiarity, pointing out the relative ease of sharing Word documents via email and its own comments system, citing the relative unfamiliarity of LaTeX (at least among people in our profession) as a big disadvantage. Certainly, people called on to correct documents written in LaTeX understandably feel quite put out when confronted with a pdf they can’t directly edit. But should it be a big disadvantage? I don’t see why well-educated scientists in the 21st century shouldn’t be able to learn how to use LaTeX for collaborative document preparation – if it’s not too difficult to learn and is advantageous to do so, that is. After all, most of us are forced to learn to use much flakier, less well-documented proprietary software for some instrument or other at some point. It occurred to me though, that as one of the minority of LaTeX users in my field, I’d never actually tried to write a LaTeX document with other people contributing. Is it easy to use? Or a pain in the arse? It must work well in principle, at least, since I understand that LaTeX is unsurprisingly the standard in the computer science field. In the end I decided to try and find out by creating a simple scientific report-style document with Subversion for version control. There are a few tutorials on the subject but being completely new to Subversion it took me a while to get my head around the general concept. Therefore, I figured it might be helpful to have the general principle written down in case it comes in handy for the future or for anyone else with a similar prior knowledge (i.e., some LaTeX experience). First problem is, you need a Subversion repository running on a fileserver. This is pretty easy to set up, but unfortunately I don’t have a server. Luckily however, there are companies which will host a repository for you – Assembla will give you one for free, so I signed up there and created a repository (or “Space”) for my LaTeX files. I then downloaded SmartSVN as an alternative to managing the repository with the command line. In setting up SmartSVN you can give the program the Checkout URL given by Assembla in order to link the two. Now the writing can begin. I created a new folder in the trunk/ folder created within the repository and made a new .tex file in there. Subversion can add keyword info into the files themselves so that version info can be incorporated into the built pdfs. This can be achieved through the svn package: \usepackage{svn} These keyword tags will need to be added to the .tex file in the preamble: \SVN $Revision$ \SVN $HeadUrl$ \SVN $Date$ \SVN $Author$ \SVN $Id$ In SmartSVN, the file can be marked to be added to the repository, then by right clicking on the file one can choose Properties > Keyword Substitution and set all the necessary keywords to “Set”. When the file is committed to the repository, the revision number increments and the tags in the file are expanded to include the new information for the revision, for example: \SVN $Id: main.tex 5 2014-09-23 18:52:10Z mjlacey$ Now, the information can be added into the document by using tags, such as \SVNId, which simply prints out the line of text for the revision ID as shown in the above line of code when the .tex file is built. At the moment I’ve put this information in the footer of my document. There’s more information about what tags one can use in this document, for example. So, now that I’ve worked it out, pretty straightforward stuff. I like that one can choose to only upload the .tex files, keeping the build files local, and that with SmartSVN it’s very easy to see the changes between a local version of a file and the remote version of the file. I’ve not got any collaborators to work on a document with (…yet) but it’s nice to know that it’s in principle not that complex to set up a collaborative LaTeX project even with no prior knowledge of the versioning system.
2018-09-20 17:04:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7745770215988159, "perplexity": 768.4443604973926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156524.34/warc/CC-MAIN-20180920155933-20180920180333-00140.warc.gz"}
http://ds.ing.unife.it/~friguzzi/DIPP/DIPP.html
# Outline • Probabilistic programming • Probabilistic logic programming • Inference • Learning • Applications # Probabilistic Programming • Users specify a probabilistic model in its entirety (e.g., by writing code that generates a sample from the joint distribution) and inference follows automatically given the specification. • PP languages provide the full power of modern programming languages for describing complex distributions • Reuse of libraries of models • Interactive modeling • Abstraction # Probabilistic Programming Languages • Only three are logic, the others imperative/functional/object oriented • DARPA released in 2013 the funding call “Probabilistic Programming for Advancing Machine Learning” (PPAML) • Aim: develop probabilistic programming languages and accompanying tools to facilitate the construction of new machine learning applications across a wide range of domains. • Focus: functional PP # Probabilistic Logic Programming • What are we missing? • Is logic programming to blame? # Thesis Probabilistic logic programming is alive and kicking! # Strengths • Relationships are first class citizens • Conceptually easier to lift • Strong semantics • Inductive systems # Weaknesses • Handling non-termination • Continuous variables # Non-termination • Possible when the number of explanations for the query is infinite # Non-termination: Inducing Arithmetic Functions Church code http://forestdb.org/models/arithmetic.html (define (random-arithmetic-fn) (if (flip 0.3) (random-combination (random-arithmetic-fn) (random-arithmetic-fn)) (if (flip) (lambda (x) x) (random-constant-fn)))) (define (random-combination f g) (define op (uniform-draw (list + -))) (lambda (x) (op (f x) (g x)))) (define (random-constant-fn) (define i (sample-integer 10)) (lambda (x) i)) # Non-termination: Inducing Arithmetic Functions LPAD (cplint) code http://cplint.lamping.unife.it/example/inference/arithm.pl eval(X,Y):- random_fn(X,0,F), Y is F. op(L,+):0.5;op(L,-):0.5. random_fn(X,L,F):- comb(L), random_fn(X,l(L),F1), random_fn(X,r(L),F2), op(L,Op), F=..[Op,F1,F2]. random_fn(X,L,F):- \+ comb(L), base_random_fn(X,L,F). comb(_):0.3. base_random_fn(X,L,X):- identity(L). base_random_fn(_X,L,C):- \+ identity(L), random_const(L,C). identity(_):0.5. random_const(_,C):discrete(C,[0:0.1,1:0.1,2:0.1,3:0.1,4:0.1, 5:0.1,6:0.1,7:0.1,8:0.1,9:0.1]). # Non-termination: Inducing Arithmetic Functions • Aim: given observations of couples input-output for the random function, predict the output for a new input • Arbitrarily complex functions have a non-zero probability of being selected • The program has a non-terminating execution. • Exact inference: infinite number of explanations # Non-termination: Inducing Arithmetic Functions (define (sample) (rejection-query (define my-proc (random-arithmetic-fn)) (my-proc 2) (= (my-proc 1) 3))) (hist (repeat 100 sample)) # Solution • Use (T. Sato, P. Meyer, Infinite probability computation by cyclic explanation graphs, Theor. Pract. Log. Prog. 14 2014) or (A. Gorlin, C. R. Ramakrishnan, S. A. Smolka, Model checking with proba- bilistic tabled logic programming, Theor. Pract. Log. Prog. 12 (4-5) 2012) • or resort to sampling: with the increase of complexity, the probability of functions tend to 0 and the probability of the infinite trace is 0 • Metropolis Hastings: (Nampally, A., Ramakrishnan, C.: Adaptive MCMC-based inference in probabilistic logic programs. arXiv preprint arXiv:1403.6036 2014) • Monte Carlo sampling is attractive for the simplicity of its implementation and because you can improve the estimate as more time is available. # Monte Carlo • The disjunctive clause $$C_r=H_1:\alpha_1\vee \ldots\vee H_n:\alpha_n\leftarrow L_1,\ldots,L_m.$$ is transformed into the set of clauses $$MC(C_r)$$ $$\begin{array}{l} MC(C_r,1)=H_1\leftarrow L_1,\ldots,L_m,sample\_head(n,r,VC,NH),NH=1.\\ \ldots\\ MC(C_r,n)=H_n\leftarrow L_1,\ldots,L_m,sample\_head(n,r,VC,NH),NH=n.\\ \end{array}$$ • Sample truth value of query Q: ... (call(Q)-> NT1 is NT+1 ; NT1 =NT), ... # Metropolis-Hastings MCMC • A Markov chain is built by taking an initial sample and by generating successor samples. • The initial sample is built by randomly sampling choices so that the evidence is true. • A successor sample is obtained by deleting a fixed number of sampled probabilistic choices. • Then the evidence is queried • If the query succeeds, the goal is queried. • The sample is accepted with a probability of $$\min\{1,\frac{N_0}{N_1}\}$$ where $$N_0$$ ($$N_1$$) is the number of choices sampled in the previous (current) sample. # Solution • In cplint: ?- mc_mh_sample(eval(2,4),eval(1,3),100,100,3,T,F,P). • Probability of eval(2,4) given that eval(1,3) is true F = 90, T = 10, P = 0.1 • You can also try rejection sampling (usually slower) ?- mc_rejection_sample(eval(2,4),eval(1,3),100, T,F,P). # Solution • You may be interested in the distribution of the output • In cplint: ?- mc_mh_sample_arg_bar(eval(2,Y),eval(1,3),100, 100,3,Y,V). # Solution • You may be interested in the expected value of the output • In cplint: ?- mc_mh_expectation(eval(2,Y),eval(1,3), 100,100,3,Y,E). E = 3.21 # Continuous Random Variables • Distributional clauses (B. Gutmann, I. Thon, A. Kimmig, M. Bruynooghe, and L. De Raedt, “The magic of logical inference in probabilistic programming,” Theory and Practice of Logic Programming, 2011) • Gaussian mixture model in cplint: heads:0.6;tails:0.4. g(X): gaussian(X,0, 1). h(X): gaussian(X,5, 2). mix(X) :- tails, h(X). # Continuous Random Variables • Inference by sampling • Without evidence or evidence on discrete random variables, you can reuse the same methods • Sampling arguments of goals for building a probability density of the arguments. # Gaussian Mixture Model heads:0.6;tails:0.4. g(X): gaussian(X,0, 1). h(X): gaussian(X,5, 2). mix(X) :- tails, h(X). ?- mc_sample_arg(mix(X),10000,X,L0), histogram(L0,40,Chart). # Evidence on Continuous Random Variables • You cannot use rejection sampling or Metropolis-Hastings, as the probability of the evidence is 0 • You can use likelihood weighting to obtain samples of continuous arguments of a goal. (Nitti, D., De Laet, T., De Raedt, L.: Probabilistic logic programming for hybrid relational domains. Mach. Learn. 103(3), 407-449 2016) # Likelihood Weighting • For each sample to be taken, likelihood weighting samples the query and then assigns a weight to the sample on the basis of evidence. • The weight is computed by deriving the evidence backward in the same sample of the query starting with a weight of one • Each time a choice should be taken or a continuous variable sampled, if the choice/variable has already been taken, the current weight is multiplied by probability of the choice/by the density value of the continuous value. # Bayesian Estimation • Estimate the true value of a Gaussian distributed random variable, given some observed data. • The variance is known and we suppose that the mean has itself a Gaussian distribution with mean 1 and variance 5 (prior on the parameter) • We take different measurement (e.g. at different times), indexed with an integer. # Bayesian Estimation • Anglican code (def dataset [9 8]) (defquery gaussian-model [data] (let [mu (sample (normal 1 (sqrt 5))) sigma (sqrt 2)] (doall (map (fn [x] (observe (normal mu sigma) x)) data)) mu)) (def posterior ((conditional gaussian-model :smc :number-of-particles 10) dataset)) (def posterior-samples (repeatedly 20000 #(sample* posterior))) # Bayesian Estimation value(I,X) :- mean(M), value(I,M,X). mean(M): gaussian(M,1.0, 5.0). value(_,M,X): gaussian(X,M, 2.0). ?- mc_sample_arg(value(0,Y),10000,Y,L0), mc_lw_sample_arg(value(0,X),(value(1,9),value(2,8)),10000,X,L), densities(L0,L,40,Chart). # Learning • Parameter learning • Structure learning more developed for PLP, but • (Perov, Yura N., and Frank D. Wood. Learning Probabilistic Programs. arXiv preprint arXiv:1407.2646 2014). • (Lake, Brenden M., Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science 350.6266 2015). • (Gaunt, Alexander L., et al. TerpreT: A Probabilistic Programming Language for Program Induction. arXiv preprint arXiv:1608.04428 2016). # Parameter Learning • Problem: given a set of interpretations, a program, find the parameters maximizing the likelihood of the interpretations (or of instances of a target predicate) • Exploit the equivalence with BN to use BN learning algorithms • The interpretations record the truth value of ground atoms, not of the choice variables • Unseen data: relative frequency can’t be used # Parameter Learning • (Thon et al. ECML 2008) proposed an adaptation of EM for CPT-L, a simplified version of LPADs • The algorithm computes the counts efficiently by repeatedly traversing the BDDs representing the explanations • (Ishihata et al. ILP 2008) independently proposed a similar algorithm • LFI-ProbLog (Gutamnn et al. ECML 2011) is the adaptation of EM to ProbLog • EMBLEM (Riguzzi & Bellodi IDAJ 2013) adapts (Ishihata et al. ILP 2008) to LPADs # Structure Learning • Given a trivial LPAD or an empty one, a set of interpretations (data) • Find the model and the parameters that maximize the probability of the data (log-likelihood) • SLIPCOVER: Structure LearnIng of Probabilistic logic program by searching OVER the clause space 1. Beam search in the space of clauses to find the promising ones 2. Greedy search in the space of probabilistic programs guided by the LL of the data. • Parameter learning by means of EMBLEM # Applications • Link prediction: given a (social) network, compute the probability of the existence of a link between two entities (UWCSE) advisedby(X, Y) :0.3 :- publication(P, X), publication(P, Y), student(X). # Applications • Classify web pages on the basis of the link structure (WebKB) coursePage(Page1): 0.3 :- linkTo(Page2,Page1),coursePage(Page2). ... coursePage(Page): 0.3 :- has('abstract',Page). ... # Applications • Entity resolution: identify identical entities in text or databases samebib(A,B):0.3 :- samebib(A,C), samebib(C,B). sameauthor(A,B):0.3 :- sameauthor(A,C), sameauthor(C,B). sametitle(A,B):0.3 :- sametitle(A,C), sametitle(C,B). samevenue(A,B):0.3 :- samevenue(A,C), samevenue(C,B). samebib(B,C):0.3 :- author(B,D),author(C,E),sameauthor(D,E). samebib(B,C):0.3 :- title(B,D),title(C,E),sametitle(D,E). samebib(B,C):0.3 :- venue(B,D),venue(C,E),samevenue(D,E). samevenue(B,C):0.3 :- haswordvenue(B,word_06), haswordvenue(C,word_06). ... # Applications • Chemistry: given the chemical composition of a substance, predict its mutagenicity or its carcenogenicity active(A):0.5 :- atm(A,B,c,29,C), gteq(C,-0.003), ring_size_5(A,D). active(A):0.5 :- lumo(A,B), lteq(B,-2.072). active(A):0.5 :- bond(A,B,C,2), bond(A,C,D,1), ring_size_5(A,E). active(A):0.5 :- carbon_6_ring(A,B). active(A):0.5 :- anthracene(A,B). ... # Applications • Medicine: diagnose diseases on the basis of patient information (Hepatitis), influence of genes on HIV, risk of falling of elderly people (FFRAT) # Experiments - Area Under the PR Curve System HIV UW-CSE Mondial SLIPCOVER $$0.82 \pm0.05$$ $$0.11\pm 0.08$$ $$0.86 \pm 0.07$$ SLIPCASE $$0.78\pm0.05$$ $$0.03\pm 0.01$$ $$0.65 \pm 0.06$$ LSM $$0.37\pm0.03$$ $$0.07\pm0.02$$ - ALEPH++ - $$0.05\pm0.01$$ $$0.87 \pm 0.07$$ RDN-B $$0.28 \pm 0.06$$ $$0.28 \pm 0.06$$ $$0.77 \pm 0.07$$ MLN-BT $$0.29 \pm 0.04$$ $$0.18 \pm 0.07$$ $$0.74 \pm 0.10$$ MLN-BC $$0.51 \pm 0.04$$ $$0.06 \pm 0.01$$ $$0.59 \pm 0.09$$ BUSL $$0.38 \pm 0.03$$ $$0.01 \pm 0.01$$ - # Experiments - Area Under the PR Curve System Carcinogenesis Mutagenesis Hepatitis SLIPCOVER $$0.60$$ $$0.95\pm0.01$$ $$0.80\pm0.01$$ SLIPCASE $$0.63$$ $$0.92\pm 0.08$$ $$0.71\pm0.05$$ LSM - - $$0.53\pm 0.04$$ ALEPH++ $$0.74$$ $$0.95\pm0.01$$ - RDN-B $$0.55$$ $$0.97 \pm 0.03$$ $$0.88 \pm 0.01$$ MLN-BT $$0.50$$ $$0.92 \pm 0.09$$ $$0.78 \pm 0.02$$ MLN-BC $$0.62$$ $$0.69 \pm 0.20$$ $$0.79 \pm 0.02$$ BUSL - - $$0.51 \pm 0.03$$ # Conclusions • PLP is still a fertile field but... • ...we must look at other communities and build bridges and... • ...join forces! • Much is left to do: • Tractable sublanguages (see following talk) • Lifted inference • Structure/Parameter learning (also for programs with continuous variables)
2018-01-16 22:55:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6965900659561157, "perplexity": 12455.90193354091}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886758.34/warc/CC-MAIN-20180116224019-20180117004019-00776.warc.gz"}
https://cs.stackexchange.com/questions/132587/why-is-kmp-preprocessing-on
Why is KMP preprocessing O(N)? My intuition tells me that there is <=1 increase in lps for each increase in the index. Can someone make a better argument why this is O(N)? My confusion arises from len_ = lps[len_- 1] . def computeLPSArray(pat, M, lps): len_ = 0 # length of the previous longest prefix suffix lps[0] = 0 # lps[0] is always 0 i = 1 # the loop calculates lps[i] for i = 1 to M-1 while i < M: if pat[i]== pat[len_]: len_ += 1 lps[i] = len_ i += 1 else: # This is tricky. Consider the example. # AAACAAAA and i = 7. The idea is similar # to search step. if len_ != 0: len_ = lps[len_- 1] # Also, note that we do not increment i here else: lps[i] = 0 i += 1 $$$$ Each len_ = lps[len - 1] decreases $$lps$$ by at least 1, thus, it can only happen for lps[i - 1] + 1 - lps[i] times, since you initialize lps[i] as lps[i - 1] + 1. Therefore, the maximum number of times len_ = lps[len - 1]` can happen is $$\displaystyle \sum_{i=1}^n (lps[i - 1] + 1) - lps[i]$$. As you know, $$lps$$ can increase by at most $$n$$ in total, therefore, it can decrease by at most $$n$$ in total, since $$lps$$ can never be negative, or equivalently, for each $$+1$$ $$lps$$ gets, it can get at most one $$-1$$. thus $$\displaystyle \sum_{i=1}^n (lps[i - 1] + 1) - lps[i]$$ will be $$O(n)$$.
2021-07-28 19:12:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.764069676399231, "perplexity": 1699.675785175306}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153791.41/warc/CC-MAIN-20210728185528-20210728215528-00473.warc.gz"}
https://docs.vespa.ai/documentation/monitoring.html
# Monitoring Vespa provides metrics integration with CloudWatch, Datadog and Prometheus / Grafana, as well as a JSON HTTP API. See monitoring with Grafana quick start if you just want to get started monitoring your system. There are two main approaches to transfer metrics to an external system: • Have the external system pull metrics from Vespa • Make Vespa push metrics to the external system ## Pulling metrics from Vespa All pull-based solutions use Vespa's metrics API, which provides metrics in JSON format, either for the full system or for a single node. CloudWatch Metrics can be pulled into CloudWatch from both Vespa Cloud and self-hosted Vespa. The recommended solution is to use an AWS lambda function, as described in Pulling Vespa metrics to Cloudwatch. Note: This method currently works for self-hosted Vespa only. The Vespa team has created a Datadog Agent integration to allow real-time monitoring of Vespa in Datadog. The Datadog Vespa integration is not packaged with the agent, but is included in Datadog's integrations-extras repository. Clone it and follow the steps in the README. Vespa exposes metrics in a text based format that can be scraped by Prometheus. For Vespa Cloud, append /prometheus/v1/values to your endpoint URL. For self-hosted Vespa the URL is: http://:/prometheus/v1/values, where the port is the same as for searching, e.g. 8080. Metrics for each individual host can also be retrieved at http://host:19092/prometheus/v1/values. See the quick-start for a Prometheus / Grafana example. ## Pushing metrics to CloudWatch Note: This method currently works for self-hosted Vespa only. This is presumably the most convenient way to monitor Vespa in CloudWatch. Steps / requirements: 1. An IAM user or IAM role that only has the putMetricData permission. 2. Store the credentials for the above user or role in a shared credentials file on each Vespa node. If a role is used, provide a mechanism to keep the credentials file updated when keys are rotated. 3. Configure Vespa to push metrics to CloudWatch - example configuration for the admin section in services.xml: <metrics> <consumer id="my-cloudwatch"> <metric-set id="default" /> <cloudwatch region="us-east-1" namespace="my-vespa-metrics"> <shared-credentials file="/path/to/credentials-file" /> </cloudwatch> </consumer> </metrics> This configuration sends the default set of Vespa metrics to the CloudWatch namespace my-vespa-metrics in the us-east-1 region. Refer to the metric list for default metric set.
2021-03-07 20:58:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22956803441047668, "perplexity": 10460.337378812683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178378872.82/warc/CC-MAIN-20210307200746-20210307230746-00273.warc.gz"}
https://phys.libretexts.org/Bookshelves/Relativity/Book%3A_Special_Relativity_(Crowell)/02%3A_Foundations
$$\require{cancel}$$ # 2: Foundations • 2.1: Causality Our intuitive belief in cause-and-effect mechanisms is not sup- ported in any clear cut way by the laws of physics as currently understood. For example, we feel that the past affects the future but not the other way around, but this feeling doesn’t seem to translate into physical law. For example, Newton’s laws are invariant under time reversal, as are Maxwell’s equations.  In fact, the weak nuclear force is the only part of the standard model that violates time-reversal symmetry. • 2.2: Flatness Euclidean geometry is only an approximate description of the earth’s surface, for example, and this is why flat maps always entail distortions of the actual shapes. The distortions might be negligible on a map of Connecticut, but severe for a map of the whole world. That is, the globe is only locally Euclidean. On a spherical surface, the appropriate object to play the role of a “line” is a great circle. The lines of longitude are examples of great circles. • 2.3: Additional Postulates We make the following additional assumptions. • 2.4: Other Axiomatizations Einstein used a different axiomatization in his 1905 paper on special relativity. • 2.5: Lemma - Spacetime area is invariant The area in the x−t plane is invariant, i.e., it does not change between frames of reference. • 2.E: Foundations (Exercises) Thumbnail: Einstein cross: four images of the same astronomical object, produced by a gravitational lens. Image used wtih permission (Public Domain; NASA and ESA).
2019-09-17 15:13:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8616107106208801, "perplexity": 1165.6416174422407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573080.8/warc/CC-MAIN-20190917141045-20190917163045-00018.warc.gz"}
https://en.wikipedia.org/wiki/4-tensor
# Four-tensor (Redirected from 4-tensor) In physics, specifically for special relativity and general relativity, a four-tensor is an abbreviation for a tensor in a four-dimensional spacetime.[1] ## Generalities General four-tensors are usually written in tensor index notation as ${\displaystyle A_{\;\nu _{1},\nu _{2},...,\nu _{m}}^{\mu _{1},\mu _{2},...,\mu _{n}}}$ with the indices taking integer values from 0 to 3, with 0 for the timelike components and 1, 2, 3 for spacelike components. There are n contravariant indices and m covariant indices.[1] In special and general relativity, many four-tensors of interest are first order (four-vectors) or second order, but higher order tensors occur. Examples are listed next. In special relativity, the vector basis can be restricted to being orthonormal, in which case all four-tensors transform under Lorentz transformations. In general relativity, more general coordinate transformations are necessary since such a restriction is not in general possible. ## Examples ### First order tensors In special relativity, one of the simplest non-trivial examples of a four-tensor is the four-displacement ${\displaystyle x^{\mu }=(x^{0},x^{1},x^{2},x^{3})\,,}$ a four-tensor with contravariant rank 1 and covariant rank 0. Four-tensors of this kind are usually known as four-vectors. Here the component x0 = ct gives the displacement of a body in time (coordinate time t is multiplied by the speed of light c so that x0 has dimensions of length). The remaining components of the four-displacement form the spatial displacement vector x = (x1, x2, x3).[1] The four-momentum for massive or massless particles is ${\displaystyle p^{\mu }=\left(E/c,p_{x},p_{y},p_{z}\right)}$ combines its energy (divided by c) p0 = E/c and 3-momentum p = (p1, p2, p3).[1] For a particle with relativistic mass m, four momentum is defined by ${\displaystyle p^{\mu }=m{\frac {dx^{\mu }}{d\tau }}}$ with τ the proper time of the particle. ### Second order tensors The Minkowski metric tensor with an orthonormal basis for the (−+++) convention is ${\displaystyle \eta ^{\mu \nu }={\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}\,}$ used for calculating the line element and raising and lowering indices. The above applies to Cartesian coordinates. In general relativity, the metric tensor is given by much more general expressions for curvilinear coordinates. The angular momentum L = xp of a particle with relativistic mass m and relativistic momentum p (as measured by an observer in a lab frame) combines with another vector quantity N = mxpt (without a standard name) in the relativistic angular momentum tensor[2][3] ${\displaystyle M^{\mu \nu }={\begin{pmatrix}0&-N^{1}c&-N^{2}c&-N^{3}c\\N^{1}c&0&L^{12}&-L^{31}\\N^{2}c&-L^{12}&0&L^{23}\\N^{3}c&L^{31}&-L^{23}&0\end{pmatrix}}}$ with components ${\displaystyle M^{\alpha \beta }=X^{\alpha }P^{\beta }-X^{\beta }P^{\alpha }}$ The stress–energy tensor of a continuum or field generally takes the form of a second order tensor, and usually denoted by T. The timelike component corresponds to energy density (energy per unit volume), the mixed spacetime components to momentum density (momentum per unit volume), and the purely spacelike parts to 3d stress tensors. The electromagnetic field tensor combines the electric field and E and magnetic field B[4] ${\displaystyle F^{\mu \nu }={\begin{pmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{pmatrix}}}$ The electromagnetic displacement tensor combines the electric displacement field D and magnetic field intensity H as follows[5] ${\displaystyle {\mathcal {D}}^{\mu \nu }={\begin{pmatrix}0&-D_{x}c&-D_{y}c&-D_{z}c\\D_{x}c&0&-H_{z}&H_{y}\\D_{y}c&H_{z}&0&-H_{x}\\D_{z}c&-H_{y}&H_{x}&0\end{pmatrix}}.}$ The magnetization-polarization tensor combines the P and M fields[4] ${\displaystyle {\mathcal {M}}^{\mu \nu }={\begin{pmatrix}0&P_{x}c&P_{y}c&P_{z}c\\-P_{x}c&0&-M_{z}&M_{y}\\-P_{y}c&M_{z}&0&-M_{x}\\-P_{z}c&-M_{y}&M_{x}&0\end{pmatrix}},}$ The three field tensors are related by ${\displaystyle {\mathcal {D}}^{\mu \nu }={\frac {1}{\mu _{0}}}F^{\mu \nu }-{\mathcal {M}}^{\mu \nu }\,}$ which is equivalent to the definitions of the D and H fields. The electric dipole moment d and magnetic dipole moment μ of a particle are unified into a single tensor[6] ${\displaystyle \sigma ^{\mu \nu }={\begin{pmatrix}0&d_{x}&d_{y}&d_{z}\\-d_{x}&0&\mu _{z}/c&-\mu _{y}/c\\-d_{y}&-\mu _{z}/c&0&\mu _{x}/c\\-d_{z}&\mu _{y}/c&-\mu _{x}/c&0\end{pmatrix}},}$ The Ricci curvature tensor is another second order tensor. ### Higher order tensors In general relativity, there are curvature tensors which tend to be higher order, such as the Riemann curvature tensor and Weyl curvature tensor which are both fourth order tensors.
2018-08-17 21:39:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 12, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9348866939544678, "perplexity": 671.1830268545298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221212910.25/warc/CC-MAIN-20180817202237-20180817222237-00106.warc.gz"}
https://math.answers.com/Q/If_a_fraction_is_not_in_simplest_form_can_the_greatest_common_factor_of_the_numerator_and_the_denominator_be_1
0 # If a fraction is not in simplest form can the greatest common factor of the numerator and the denominator be 1? Wiki User 2016-12-10 08:26:31 No. Wiki User 2016-12-10 08:26:31 Study guides 20 cards ➡️ See all cards 3.74 824 Reviews
2022-05-20 07:45:19
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.893733561038971, "perplexity": 4155.269383757709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531762.30/warc/CC-MAIN-20220520061824-20220520091824-00459.warc.gz"}
https://www.r-bloggers.com/2019/11/exploratory-analysis-of-a-banana/
[This article was first published on R – On unicorns and genes, and kindly contributed to R-bloggers]. (You can report issue about the content on this page here) Want to share your content on R-bloggers? click here if you have a blog, or here if you don't. This post is just me amusing myself by exploring a tiny data set I have lying around. The dataset and the code is on Github. In 2014 (I think), I was teaching the introductory cell biology labs (pictures in the linked post) in Linköping. We were doing a series of simple preparations to look at cells and organelles: a cheek swab gives you a view of dead mammalian cells with bacteria on them; Elodea gives you a nice chloroplast view; a red bell pepper gives you chromoplasts; and a banana stained with iodine gives you amyloplasts. Giving the same lab six times in a row, it became apparent how the number of stained amyloplasts decreased as the banana ripened. I took one banana, sliced in into five pieces (named A-E), and left it out to ripen. Then I stained (with Lugol’s iodine solution) and counted the number of amyloplasts per cell in a few cells (scraped off with a toothpick) from the end of each piece at day 1, 5, and 9. First, here is an overview of the data. On average, we go from 17 stained amyloplasts on day 1, to 5 on day five and 2 on day nine. If we break the plot up by slices, we see decline in every slice and variability between them. Because I only sampled each slice once per day, there is no telling whether this is variation between parts of the banana or between samples taken (say, hypothetically, because I might have stuck the toothpick in more or less deeply, or because the ripeness varies from the middle to the peel). How can we model this? Let’s first fit a linear model where the number of amyloplasts decline at a constant rate per day, allowing for different starting values and different declines for each slice. We can anticipate that a Gaussian linear model will have some problems in this situation. We fit a linear model and pull out the fitted values for each day–slice combination: model_lm <- lm(amyloplasts ~ day * slice, data = banana) levels <- expand.grid(slice = unique(banana$slice), day = unique(banana$day), stringsAsFactors = FALSE) pred_lm <- cbind(levels, predict(model_lm, newdata = levels, interval = "confidence")) Then, to investigate the model’s behaviour, we can simulate data from the model, allowing for uncertainty in the fitted parameters, with the sim function from the arm package. We make a function to simulate data from the linear model given a set of parameters, then simulate parameters and feed the first parameter combination to the function to get ourselves a simulated dataset. y_rep_lm <- function(coef_lm, sigma, banana) { slice_coef <- c(0, coef_lm[3:6]) names(slice_coef) <- c("A", "B", "C", "D", "E") slice_by_day_coef <- c(0, coef_lm[7:10]) names(slice_by_day_coef) <- c("A", "B", "C", "D", "E") banana$sim_amyloplasts <- coef_lm[1] + slice_coef[banana$slice] + banana$day * (coef_lm[2] + slice_by_day_coef[banana$slice]) + rnorm(nrow(banana), 0, sigma) banana } sim_lm <- sim(model_lm) sim_banana <- y_rep_lm([email protected][1,], [email protected][1], banana) The result looks like this (black dots) compared with the real data (grey dots). The linear model doesn’t know that the number of amyloplasts can’t go below zero, so it happily generates absurd negative values. While not apparent from the plots, the linear model also doesn’t know that amyloplasts counts are restricted to be whole numbers. Let’s fit a generalized linear model with a Poisson distribution, which should be more suited to this kind of discrete data. The log link function will also turn the linear decrease into an exponential decline, which seems appropriate for the decline in amyloplasts. model_glm <- glm(amyloplasts ~ day * slice, data = banana, family = poisson(link = log)) pred_glm <- predict(model_glm, newdata = levels, se.fit = TRUE) results_glm <- data.frame(levels, average = pred_glm$fit, se = pred_glm$se.fit, stringsAsFactors = FALSE) y_rep_glm <- function(coef_glm, banana) { slice_coef <- c(0, coef_glm[3:6]) names(slice_coef) <- c("A", "B", "C", "D", "E") slice_by_day_coef <- c(0, coef_glm[7:10]) names(slice_by_day_coef) <- c("A", "B", "C", "D", "E") latent <- exp(coef_glm[1] + slice_coef[banana$slice] + banana$day * (coef_glm[2] + slice_by_day_coef[banana$slice])) banana$sim_amyloplasts <- rpois(n = nrow(banana), lambda = latent) banana } sim_glm <- sim(model_glm) sim_banana_glm <- y_rep_glm([email protected][2,], banana) This code is the same deal as above, with small modifications: glm instead of lm, with some differences in the interface. Then a function to simulate data from a Poisson model with an logarithmic link, that we apply to one set of parameters values. There are no impossible zeros anymore. However, there seems to be many more zeros in the real data than in the simulated data, and consequently, as the number of amyloplasts grow small, we overestimate how many there should be. Another possibility among the standard arsenal of models is a generalised linear model with a negative binomial distribution. As opposed to the Poisson, this allows greater spread among the values. We can fit a negative binomial model with Stan. library(rstan) model_nb <- stan(file = "banana.stan", data = list(n = nrow(banana), n_slices = length(unique(banana$slice)), n_days = length(unique(banana$day)), amyloplasts = banana$amyloplasts, day = banana$day - 1, slice = as.numeric(factor(banana$slice)), prior_phi_scale = 1)) y_rep <- rstan::extract(model_nb, pars = "y_rep")[[1]] Here is the Stan code in banana.stan: data { int n; int n_slices; int <lower = 0> amyloplasts[n]; real <lower = 0> day[n]; int <lower = 1, upper = n_slices> slice[n]; real prior_phi_scale; } parameters { real initial_amyloplasts[n_slices]; real decline[n_slices]; real < lower = 0> phi_rec; } model { phi_rec ~ normal(0, 1); for (i in 1:n) { amyloplasts[i] ~ neg_binomial_2_log(initial_amyloplasts[slice[i]] + day[i] * decline[slice[i]], (1/phi_rec)^2); } } generated quantities { vector[n] y_rep; for (i in 1:n) { y_rep[i] = neg_binomial_2_rng(exp(initial_amyloplasts[slice[i]] + day[i] * decline[slice[i]]), (1/phi_rec)^2); } } This model is similar to the Poisson model, except that the negative binomial allows an overdispersion parameter, a small value of which corresponds to large variance. Therefore, we put the prior on the reciprocal of the square root of the parameter. Conveniently, Stan can also make the simulated replicated data for us in the generated quantities block. What does the simulated data look like? Here we have a model that allows for more spread, but in the process, generates some extreme data, with hundreds of amyloplasts per cell in some slices. We can try to be draconian with the prior and constrain the overdispersion to smaller values instead: model_nb2 <- stan(file = "banana.stan", data = list(n = nrow(banana), n_slices = length(unique(banana$slice)), n_days = length(unique(banana$day)), amyloplasts = banana$amyloplasts, day = banana$day - 1, slice = as.numeric(factor(banana$slice)), prior_phi_scale = 0.1)) y_rep2 <- rstan::extract(model_nb2, pars = "y_rep")[[1]] That looks a little better. Now, we’ve only looked at single simulated datasets, but we can get a better picture by looking at replicate simulations. We need some test statistics, so let us count how many zeroes there are in each dataset, what the maximum value is, and the sample variance, and then do some visual posterior predictive checks. check_glm <- data.frame(n_zeros = numeric(1000), max_value = numeric(1000), variance = numeric(1000), model = "Poisson", stringsAsFactors = FALSE) check_nb <- data.frame(n_zeros = numeric(1000), max_value = numeric(1000), variance = numeric(1000), model = "Negative binomial", stringsAsFactors = FALSE) check_nb2 <- data.frame(n_zeros = numeric(1000), max_value = numeric(1000), variance = numeric(1000), model = "Negative binomial 2", stringsAsFactors = FALSE) for (sim_ix in 1:1000) { y_rep_data <- y_rep_glm([email protected][sim_ix,], banana) check_glm$n_zeros[sim_ix] <- sum(y_rep_data$sim_amyloplasts == 0) check_glm$max_value[sim_ix] <- max(y_rep_data$sim_amyloplasts) check_glm$variance[sim_ix] <- var(y_rep_data$sim_amyloplasts) check_nb$n_zeros[sim_ix] <- sum(y_rep[sim_ix,] == 0) check_nb$max_value[sim_ix] <- max(y_rep[sim_ix,]) check_nb$variance[sim_ix] <- var(y_rep[sim_ix,]) check_nb2$n_zeros[sim_ix] <- sum(y_rep2[sim_ix,] == 0) check_nb2$max_value[sim_ix] <- max(y_rep2[sim_ix,]) check_nb2$variance[sim_ix] <- var(y_rep2[sim_ix,]) } check <- rbind(check_glm, check_nb, check_nb2) melted_check <- gather(check, "variable", "value", -model) check_data <- data.frame(n_zeros = sum(banana$amyloplasts == 0), max_value = max(banana$amyloplasts), variance = var(banana$amyloplasts)) Here is the resulting distribution of these three discrepancy statistics in 1000 simulated datasets for the three models (generalised linear model with Poisson distribution and the two negative binomial models). The black line is the value for real data. When viewed like this, it becomes apparent how the negative binomial models do not fit that well. The Poisson model struggles with the variance and the number of zeros. The negative binomial models get closer to the number of zeros in the real data, they still have too few, while at the same time having way too high maximum values and variance. Finally, let’s look at the fitted means and intervals from all the models. We can use the predict function for the linear model and Poisson model, and for the negative binomial models, we can write our own: pred_stan <- function(model, newdata) { samples <- rstan::extract(model) initial_amyloplasts <- data.frame(samples$initial_amyloplasts) decline <- data.frame(samples$decline) names(initial_amyloplasts) <- names(decline) <- c("A", "B", "C", "D", "E") ## Get posterior for levels pred <- matrix(0, ncol = nrow(newdata), nrow = nrow(initial_amyloplasts)) for (obs in 1:ncol(pred)) { pred[,obs] <- initial_amyloplasts[,newdata$slice[obs]] + (newdata$day[obs] - 1) * decline[,newdata$slice[obs]] } ## Get mean and interval newdata$fit <- exp(colMeans(pred)) intervals <- lapply(data.frame(pred), quantile, probs = c(0.025, 0.975)) newdata$lwr <- exp(unlist(lapply(intervals, "[", 1))) newdata\$upr <- exp(unlist(lapply(intervals, "[", 2))) newdata } pred_nb <- pred_stan(model_nb, levels) pred_nb2 <- pred_stan(model_nb2, levels) In summary, the three generalised linear models with log link function pretty much agree about the decline of amyloplasts during the later days, which looks more appropriate than a linear decline. They disagree about the uncertainty about the numbers on the first day, which is when there are a lot. Perhaps coincidentally, this must also be where the quality of my counts are the lowest, because it is hard to count amyloplasts on top of each other. To leave a comment for the author, please follow the link and comment on their blog: R – On unicorns and genes. R-bloggers.com offers daily e-mail updates about R news and tutorials about learning R and many other topics. Click here if you're looking to post or find an R/data-science job. Want to share your content on R-bloggers? click here if you have a blog, or here if you don't. # Never miss an update! Subscribe to R-bloggers to receive e-mails with the latest R posts.(You will not see this message again.) Click here to close (This popup will not appear again)
2021-10-16 15:02:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3658744990825653, "perplexity": 3219.113489378791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584886.5/warc/CC-MAIN-20211016135542-20211016165542-00190.warc.gz"}
https://jeremy9959.net/Mathematics-for-Machine-Learning/chapters/06-svm.html
## 7.1 Introduction Suppose that we are given a collection of data made up of samples from two different classes, and we would like to develop an algorithm that can distinguish between the two classes. For example, given a picture that is either a dog or a cat, we’d like to be able to say which of the pictures are dogs, and which are cats. For another example, we might want to be able to distinguish “real” emails from “spam.” This type of problem is called a classification problem. Typically, one approaches a classification problem by beginning with a large set of data for which you know the classes, and you use that data to train an algorithm to correctly distinguish the classes for the test cases where you already know the answer. For example, you start with a few thousand pictures labelled “dog” and “cat” and you build your algorithm so that it does a good job distinguishing the dogs from the cats in this initial set of training data. Then you apply your algorithm to pictures that aren’t labelled and rely on the predictions you get, hoping that whatever let your algorithm distinguish between the particular examples will generalize to allow it to correctly classify images that aren’t pre-labelled. Because classification is such a central problem, there are many approaches to it. We will see several of them through the course of these lectures. We will begin with a particular classification algorithm called “Support Vector Machines” (SVM) that is based on linear algebra. The SVM algorithm is widely used in practice and has a beautiful geometric interpretation, so it will serve as a good beginning for later discussion of more complicated classification algorithms. Incidentally, I’m not sure why this algorithm is called a “machine”; the algorithm was introduced in the paper [1] where it is called the “Optimal Margin Classifier” and as we shall see that is a much better name for it. My presentation of this material was heavily influenced by the beautiful paper [2]. ## 7.2 A simple example Let us begin our discussion with a very simple dataset (see [3] and [4]). This data consists of various measurements of physical characteristics of 344 penguins of 3 different species: Gentoo, Adelie, and Chinstrap. If we focus our attention for the moment on the Adelie and Gentoo species, and plot their body mass against their culmen depth, we obtain the following scatterplot. Incidentally, a bird’s culmen is the upper ridge of their beak, and the culmen depth is a measure of the thickness of the beak. There’s a nice picture at [4] for the penguin enthusiasts. A striking feature of this scatter plot is that there is a clear separation between the clusters of Adelie and Gentoo penguins. Adelie penguins have deeper culmens and less body mass than Gentoo penguins. These characteristics seem like they should provide a way to classify a penguin between these two species based on these two measurements. One way to express the separation between these two clusters is to observe that one can draw a line on the graph with the property that all of the Adelie penguins lie on one side of that line and all of the Gentoo penguins lie on the other. In Figure 7.2 I’ve drawn in such a line (which I found by eyeballing the picture in Figure 7.1). The line has the equation $Y = 1.25X+2.$ The fact that all of the Gentoo penguins lie above this line means that, for the Gentoo penguins, their body mass in grams is at least $$400$$ more than $$250$$ times their culmen depth in mm. (Note that the $$y$$ axis of the graph is scaled by $$200$$ grams). $\mathrm{Gentoo\ mass}> 250(\mathrm{Gentoo\ culmen\ depth})+400$ while $\mathrm{Adelie\ mass}<250(\mathrm{Adelie\ culmen\ depth})+400.$ Now, if we measure a penguin caught in the wild, we can compute $$250(\mathrm{culmen\ depth})+400$$ for that penguin and if this number is greater than the penguin’s mass, we say it’s an Adelie; otherwise, a Gentoo. Based on the experimental data we’ve collected – the training data – this seems likely to work pretty well. ## 7.3 The general case To generalize this approach, let’s imagine now that we have $$n$$ samples and $$k$$ features (or measurements) for each sample. As before, we can represent this data as an $$n\times k$$ data matrix $$X$$. In the penguin example, our data matrix would be $$344\times 2$$, with one row for each penguin and the columns representing the mass and the culmen depth. In addition to this numerical data, we have a classification that assigns each row to one of two classes. Let’s represent the classes by a $$n\times 1$$ vector $$Y$$, where $$y_{i}=+1$$ if the $$i^{th}$$ sample is in one class, and $$y_{i}=-1$$ if that $$i^{th}$$ sample is in the other. Our goal is to predict $$Y$$ based on $$X$$ – but unlike in linear regression, $$Y$$ takes on the values of $$\pm 1$$. In the penguin case, we were able to find a line that separated the two classes and then classify points by which side of the line the point was on. We can generalize this notion to higher dimensions. Before attacking that generalization, let’s recall a few facts about the generalization to $$\mathbf{R}^{k}$$ of the idea of a line. ### 7.3.1 Hyperplanes The correct generalization of a line given by an equation $$w_1 x_1+ w_2 w_2+b=0$$ in $$\mathbf{R}^{2}$$ is an equation $$f(x)=0$$ where $$f(x)$$ is a degree one polynomial $f(x) = f(x_1,\ldots, x_k) = w_1 x_1 + w_2 x_2 +\cdots + w_k x_k + b \tag{7.1}$ It’s easier to understand the geometry of an equation like $$f(x)=0$$ in Equation 7.1 if we think of the coefficients $$w_i$$ as forming a nonzero vector $$w = (w_1,\ldots, w_k)$$ in $$\mathbf{R}^{k}$$ and writing the formula for $$f(x)$$ as $f(x) = w\cdot x +b$. Lemma: Let $$f(x)=w\cdot x+b$$ with $$w\in\mathbf{R}^{k}$$ a nonzero vector and $$b$$ a constant in $$\mathbf{R}$$. • The inequalities $$f(x)>0$$ and $$f(x)<0$$ divide up $$\mathbf{R}^{k}$$ into two disjoint subsets (called half spaces), in the way that a line in $$\mathbf{R}^{2}$$ divides the plane in half. • The vector $$w$$ is normal vector to the hyperplane $$f(x)=0$$. Concretely this means that if $$p$$ and $$q$$ are any two points in that hyperplane, then $$w\cdot (p-q)=0$$. • Let $$p=(u_1,\ldots,u_k)$$ be a point in $$\mathbf{R}^{k}$$. Then the perpendicular distance $$D$$ from $$p$$ to the hyperplane $$f(x)=0$$ is $D = \frac{f(p)}{\|w\|}$ Proof: The first part is clear since the inequalities are mutually exclusive. For the secon part, suppose that $$p$$ and $$q$$ satisfy $$f(x)=0$$. Then $$w\cdot p+b = w\cdot q+b=0$$. Subtracting these two equations gives $$w\cdot (p-q)=0$$, so $$p-q$$ is orthogonal to $$w$$. For the third part, consider Figure 7.3. The point $$q$$ is an arbitrary point on the hyperplane defined by the equation $$w\cdot x+b=0$$. The distance from the hyperplane to $$p$$ is measured along the dotted line perpendicular to the hyperplane. The dot product $$w\cdot (p-q) = \|w\|\|p-q\|\cos(\theta)$$ where $$\theta$$ is the angle between $$p-q$$ and $$w$$ – which is complementary to the angle between $$p-q$$ and the hyperplane. The distance $$D$$ is therefore $D=\frac{w\cdot(p-q)}{\|w\|}.$ However, since $$q$$ lies on the hyperplane, we know that $$w\cdot q+b=0$$ so $$w\cdot q = -b$$. Therefore $$w\cdot(p-q)=w\cdot p+b=f(p)$$, which is the formula we seek. ### 7.3.2 Linear separability and Margins Now we can return to our classification scheme. The following definition generalizes our two dimensional picture from the penguin data. Definition: Suppose that we have an $$n\times k$$ data matrix $$X$$ and a set of labels $$Y$$ that assign the $$n$$ samples to one of two classes. Then the labelled data is said to be linearly separable if there is a vector $$w$$ and a constant $$b$$ so that, if $$f(x)=w\cdot x+b$$, then $$f(x)>0$$ whenever $$x=(x_1,\ldots, x_k)$$ is a row of $$X$$ – a sample – belonging to the $$+1$$ class, and $$f(x)<0$$ whenever $$x$$ belongs to the $$-1$$ class. The solutions to the equation $$f(x)=0$$ in this situation form a hyperplane that is called a separating hyperplane for the data. In the situation where our data falls into two classes that are linearly separable, our classification strategy is to find a separating hyperplane $$f$$ for our training data. Then, given a point $$x$$ whose class we don’t know, we can evaluate $$f(x)$$ and assign $$x$$ to a class depending on whether $$f(x)>0$$ or $$f(x)<0$$. This definition begs two questions about a particular dataset: 1. How do we tell if the two classes are linearly separable? 2. If the two sets are linearly separable, there are infinitely many separating hyperplanes. To see this, look back at the penguin example and notice that we can ‘wiggle’ the red line a little bit and it will still separate the two sets. Which is the ‘best’ separating hyperplane? Let’s try to make the first of these two questions concrete. We have two sets of points $$A$$ and $$B$$ in $$\mathbf{R}^{k}$$, and we want to (try to) find a vector $$w$$ and a constant $$b$$ so that $$f(x)=w\cdot x+b$$ takes strictly positive values for $$x\in A$$ and strictly negative ones for $$x\in B$$. Let’s approach the problem by first choosing $$w$$ and then asking whether there is a $$b$$ that will work. In the two dimensional case, this is equivalent to choosing the slope of our line, and then asking if we can find an intercept so that the line passes between the two classes. In algebraic terms, we are trying to solve the following system of inequalities: given $$w$$, find $$b$$ so that: $w\cdot x+b>0 \hbox{ for all x in A}$ and $w\cdot x+b<0\hbox{ for all x in B}.$ This is only going to be possible if there is a gap between the smallest value of $$w\cdot x$$ for $$x\in A$$ and the largest value of $$w\cdot x$$ for $$x\in B$$. In other words, given $$w$$ there is a $$b$$ so that $$f(x)=w\cdot x+b$$ separates $$A$$ and $$B$$ if $\max_{x\in B}w\cdot x < \min_{x\in A} w\cdot x.$ If this holds, then choose $$b$$ so that $$-b$$ lies in this open interval and you will obtain a separating hyperplane. Proposition: The sets $$A$$ and $$B$$ are linearly separable if there is a $$w$$ so that $\max_{x\in B}w\cdot x < \min_{x\in A} w\cdot x$ If this inequality holds for some $$w$$, and $$-b$$ within this open interval, then $$f(x)=w\cdot x+b$$ is a separating hyperplane for $$A$$ and $$B$$. *Figure 7.4 is an illustration of this argument for a subset of the penguin data. Here, we have fixed $$w=(1.25,-1)$$ coming from the line $$y=1.25x+2$$ that we eyeballed earlier. For each Gentoo (green) point $$x_{i}$$, we computed $$-b=w\cdot x_{i}$$ and drew the line $$f(x) = w\cdot x - w\cdot x_{i}$$ giving a family of parallel lines through each of the green points. Similarly for each Adelie (blue) point we drew the corresponding line. The maximum value of $$w\cdot x$$ for the blue points turned out to be $$1.998$$ and the minimum value of $$w\cdot x$$ for the green points turned out to be $$2.003$$. Thus we have two lines with a gap between them, and any parallel line in that gap will separate the two sets. Finally, among all the lines with this particular $$w$$, it seems that the best separating line is the one running right down the middle of the gap between the boundary lines. Any other line in the gap will be closer to either the blue or green set that the midpoint line is. Let’s put all of this together and see if we can make sense of it in general. Suppose that $$A^{+}$$ and $$A^{-}$$ are finite point sets in $$\mathbf{R}^{k}$$ and $$w\in\mathbf{R}^{k}$$ such that $B^{-}(w)=\max_{x\in A^{-}}w\cdot x < \min_{x\in A^{+}}w\cdot x=B^{+}(w).$ Let $$x^{-}$$ be a point in $$A^{-}$$ with $$w\cdot x^{-}=B^{-}(w)$$ and $$x^{+}$$ be a point in $$A$$ with $$w\cdot x^{+}=B^{+}(w)$$. The two hyperplanes $$f^{\pm}(x) = w\cdot x - B^{\pm}$$ have the property that: $f^{+}(x)\ge 0\hbox{ for }x\in A^{+}\hbox{ and }f^{+}(x)<0\hbox{ for }x\in A^{-}$ and $f^{-}(x)\le 0\hbox{ for }x\in A^{-}\hbox{ and }f^{-}(x)>0\hbox{ for }x\in A^{+}$ Hyperplanes like $$f^{+}$$ and $$f^{-}$$, which “just touch” a set of points, are called supporting hyperplanes. Definition: Let $$A$$ be a set of points in $$\mathbf{R}^{k}$$. A hyperplane $$f(x)=w\cdot x+b=0$$ is called a supporting hyperplane for $$A$$ if $$f(x)\ge 0$$ for all $$x\in A$$ and $$f(x)=0$$ for at least one point in $$A$$, or if $$f(x)\le 0$$ for all $$x\in A$$ and $$f(x)=0$$ for at least one point in $$A$$. The gap between the two supporting hyperplanes $$f^{+}$$ and $$f^{-}$$ is called the margin between $$A$$ and $$B$$ for $$w$$. Definition: Let $$f^{+}$$ and $$f^{-}$$ be as in the discussion above for point sets $$A^{+}$$ and $$A^{-}$$ and vector $$w$$. Then the orthogonal distance between the two hyperplanes $$f^{+}$$ and $$f^{-}$$ is called the geometric margin $$\tau_{w}(A^{+},A^{-})$$ (along $$w$$) between $$A^{+}$$ and $$A^{-}$$. We have $\tau_{w}(A^{+},A^{-})=\frac{B^{+}(w)-B^{-}(w)}{\|w\|}.$ Now we can propose an answer to our second question about the best classifying hyperplane. Definition: The optimal margin $$\tau(A^{+},A^{-})$$ between $$A^{+}$$ and $$A^{-}$$ is the largest value of $$\tau_{w}$$ over all possible $$w$$ for which $$B^{-}(w)<B^{+}(w)$$: $\tau(A^{+},A^{-}) = \max_{w} \tau_{w}(A^{+},A^{-}).$ If $$w$$ is such that $$\tau_{w}=\tau$$, then the hyperplane $$f(x)=w\cdot x - \frac{(B^{+}+B^{-})}{2}$$ is the optimal margin classifying hyperplane. The optimal classifying hyperplane runs “down the middle” of the gap between the two supporting hyperplanes $$f^{+}$$ and $$f^{-}$$ that give the sides of the optimal margin. We can make one more observation about the maximal margin. If we find a vector $$w$$ so that $$f^{+}(x) = w\cdot x -B^{+}$$ and $$f^{-}(x) = w\cdot x-B^{-}$$ are the two supporting hyperplanes such that the gap between them is the optimal margin, then this gap gives us an estimate on how close together the points in $$A^{+}$$ and $$A^{-}$$ can be. This is visible in Figure 7.4, where it’s clear that to get from a blue point to a green one, you have to cross the gap between the two supporting hyperplanes. Proposition: The closest distance between points in $$A^{+}$$ and $$A^{-}$$ is greater than or equal to the optimal margin: $\min_{p\in A^{+},q\in A^{-}} \|p-q\|\ge \tau(A^{+},A^{-})$. Proof: We have $$f^{+}(p) = w\cdot p - B^{+}\ge 0$$ and $$f^{-}(q) = w\cdot q -B^{-}\le 0$$. These two inequalities imply that $w\cdot (p-q)\ge B^{+}-B^{-}>0.$ Therefore $\|p-q\|\|w\|\ge |w\cdot (p-q)|\ge |B^{+}-B^{-}|$ and so $\|p-q\| \ge \frac{B^{+}-B^{-}}{\|w\|} = \tau(A^{+},A^{-})$ If this inequality were always strict – that is, if the optimal margin equalled the minimum distance between points in the two clusters – then this would give us an approach to finding this optimal margin. Unfortunately, that isn’t the case. In Figure 7.5, we show a very simple case involving only six points in total in which the distance between the closest points in $$A^{+}$$ and $$A^{-}$$ is larger than the optimal margin. At least now our problem is clear. Given our two point sets $$A^{+}$$ and $$A^{-}$$, find $$w$$ so that $$\tau_{w}(A^{+},A^{-})$$ is maximal among all $$w$$ where $$B^{-}(w)<B^{+}(w)$$. This is an optimization problem, but unlike the optimization problems that arose in our discussions of linear regression and principal component analysis, it does not have a closed form solution. We will need to find an algorithm to determine $$w$$ by successive approximations. Developing that algorithm will require thinking about a new concept known as convexity. ## 7.4 Convexity, Convex Hulls, and Margins In this section we introduce the notion of a convex set and the particular case of the convex hull of a finite set of points. As we will see, these ideas will give us a different interpretation of the margin between two sets and will eventually lead to an algorithm for finding the optimal margin classifier. Definition: A subset $$U$$ of $$\mathbf{R}^{k}$$ is convex if, for any pair of points $$p$$ and $$q$$ in $$U$$, every point $$t$$ on the line segment joining $$p$$ and $$q$$ also belongs to $$U$$. In vector form, for every $$0\le s\le 1$$, the point $$t(s) = (1-s)p+sq$$ belongs to $$U$$. (Note that $$t(0)=p$$, $$t(1)=q$$, and so $$t(s)$$ traces out the segment joining $$p$$ to $$q$$.) *Figure 7.6 illustrates the difference between convex sets and non-convex ones. The key idea from convexity that we will need to solve our optimization problem and find the optimal margin is the idea of the convex hull of a finite set of points in $$\mathbf{R}^{k}$$. Definition: Let $$S=\{q_1,\ldots, q_{N}\}$$ be a finite set of $$N$$ points in $$\mathbf{R}^{k}$$. The convex hull $$C(S)$$ of $$S$$ is the set of points $p = \sum_{i=1}^{N} \lambda_{i}q_{i}$ as $$\lambda_{1},\ldots,\lambda_{N}$$ runs over all positive real numbers such that $\sum_{i=1}^{N} \lambda_{i} = 1.$ There are a variety of ways to think about the convex hull $$C(S)$$ of a set of points $$S$$, but perhaps the most useful is that it is the smallest convex set that contains all of the points of $$S$$. That is the content of the next lemma. Lemma: $$C(S)$$ is convex. Furthermore, let $$U$$ be any convex set containing all of the points of $$S$$. Then $$U$$ contains $$C(S)$$. Proof: To show that $$C(S)$$ is convex, we apply the definition. Let $$p_1$$ and $$p_2$$ be two points in $$C(S)$$, so that let $$p_{j}=\sum_{i=1}^{N} \lambda^{(j)}_{i}q_{i}$$ where $$\sum_{i=1}^{N}\lambda^{(j)}_{i} = 1$$ for $$j=1,2$$. Then a little algebra shows that $(1-s)p_1+sp_{2} = \sum_{i=1}^{N} (s\lambda^{(1)}_{i}+(1-s)\lambda^{(2)}_{i})q_{i}$ and $$\sum_{i=1}^{N} (s\lambda^{(1)}_{i}+(1-s)\lambda^{(2)}_{i}) = 1$$. Therefore all of the points $$(1-s)p_{1}+sp_{2}$$ belong to $$C(S)$$, and therefore $$C(S)$$ is convex. For the second part, we proceed by induction. Let $$U$$ be a convex set containing $$S$$. Then by the definition of convexity, $$U$$ contains all sums $$\lambda_{i}q_{i}+\lambda_{j}q_{j}$$ where $$\lambda_i+\lambda_j=1$$. Now suppose that $$U$$ contains all the sums $$\sum_{i=1}^{N} \lambda_{i}q_{i}$$ where exactly $$m-1$$ of the $$\lambda_{i}$$ are non-zero for some $$m<N$$. Consider a sum $q = \sum_{i=1}^{N}\lambda_{i}q_{i}$ with exactly $$m$$ of the $$\lambda_{i}\not=0$$. For simplicity let’s assume that $$\lambda_{i}\not=0$$ for $$i=1,\ldots, m$$. Now let $$T=\sum_{i=1}^{m-1}\lambda_{i}$$ and set $q' = \sum_{i=1}^{m-1}\frac{\lambda_{i}}{T}q_{i}.$ This point $$q'$$ belongs to $$U$$ by the inductive hypothesis. Also, $$(1-T)=\lambda_{m}$$. Therefore by convexity of $$U$$, $q = (1-T)q_{m}+Tq'$ also belongs to $$U$$. It follows that all of $$C(S)$$ belongs to $$U$$. In Figure 7.7 we show our penguin data together with the convex hull of points corresponding to the two types of penguins. Notice that the boundary of each convex hull is a finite collection of line segments that join the “outermost” points in the point set. One very simple example of a convex set is a half-plane. More specifically, if $$f(x)=w\cdot x+b=0$$ is a hyperplane, then the two “sides” of the hyperplane, meaning the subsets $$\{x: f(x)\ge 0\}$$ and $$\{x: f(x)\le 0\}$$, are both convex. (This is exercise 1 in Section 7.7 ). As a result of this observation, and the Lemma above, we can conclude that if $$f(x)=w\cdot x+b=0$$ is a supporting hyperplane for the set $$S$$ – meaning that either $$f(x)\ge 0$$ for all $$x\in S$$, or $$f(x)\le 0$$ for all $$x\in S$$, with at least one point $$x\in S$$ such that $$f(x)=0$$ – then $$f(x)=0$$ is a supporting hyperplane for the entire convex hull. After all, if $$f(x)\ge 0$$ for all points $$x\in S$$, then $$S$$ is contained in the convex set of points where $$f(x)\ge 0$$, and therefore $$C(S)$$ is contained in that set as well. Interestingly, however, the converse is true as well – the supporting hyperplanes of $$C(S)$$ are exactly the same as those for $$S$$. Lemma: Let $$S$$ be a finite set of points in $$\mathbf{R}^{k}$$ and let $$f(x)=w\cdot x +b=0$$ be a supporting hyperplane for $$C(S)$$. Then $$f(x)$$ is a supporting hyperplane for $$S$$. Proof: Suppose $$f(x)=0$$ is a supporting hyperplane for $$C(S)$$. Let’s assume that $$f(x)\ge 0$$ for all $$x\in C(S)$$ and $$f(x^{*})=0$$ for a point $$x^{*}\in C(S)$$, since the case where $$f(x)\le 0$$ is identical. Since $$S\subset C(S)$$, we have $$f(x)\ge 0$$ for all $$x\in S$$. To show that $$f(x)=0$$ is a supporting hyperplane, we need to know that $$f(x)=0$$ for at least one point $$x\in S$$. Let $$x'$$ be the point in $$S$$ where $$f(x')$$ is minimal among all $$x\in S$$. Note that $$f(x')\ge 0$$. Then the hyperplane $$g(x) = f(x)-f(x')$$ has the property that $$g(x)\ge 0$$ on all of $$S$$, and $$g(x')=0$$. Since the halfplane $$g(x)\ge 0$$ is convex and contains all of $$S$$, we have $$C(S)$$ contained in that halfplane. So, on the one hand we have $$g(x^{*})=f(x^{*})-f(x')\ge 0$$. On the other hand $$f(x^{*})=0$$, so $$-f(x')\ge 0$$, so $$f(x')\le 0$$. Since $$f(x')$$ is also greater or equal to zero, we have $$f(x')=0$$, and so we have found a point of $$S$$ on the hyperplane $$f(x)=0$$. Therefore $$f(x)=0$$ is also a supporting hyperplane for $$S$$. This argument can be used to give an alternative description of $$C(S)$$ as the intersection of all halfplanes containing $$S$$ arising from supporting hyperplanes for $$S$$. This is exercise 2 in Section 7.7. It also has as a corollary that $$C(S)$$ is a closed set. Lemma: $$C(S)$$ is compact. Proof: Exercise 2 in Section 7.7 shows that it is the intersection of closed sets in $$\mathbf{R}^{k}$$, so it is closed. Exercise 3 shows that $$C(S)$$ is bounded. Thus it is compact. Now let’s go back to our optimal margin problem, so that we have linearly separable sets of points $$A^{+}$$ and $$A^{-}$$. Recall that we showed that the optimal margin was at most the minimal distance between points in $$A^{+}$$ and $$A^{-}$$, but that there could be a gap between the minimal distance and the optimal margin – see Figure 7.5 for a reminder. It turns out that by considering the minimal distance between $$C(A^{+})$$ and $$C(A^{-})$$, we can “close this gap.” The following proposition shows that we can change the problem of finding the optimal margin into the problem of finding the closest distance between the convex hulls of $$C(A^{+})$$ and $$C(A^{-})$$. The following proposition generalizes the Proposition at the end of Section 7.3.2. Proposition: Let $$A^{+}$$ and $$A^{-}$$ be linearly separable sets in $$\mathbf{R}^{k}$$. Let $$p\in C(A^{+})$$ and $$q\in C(A^{-})$$ be any two points. Then $\|p-q\|\ge \tau(A^{+},A^{-}).$ Proof: As in the earlier proof, choose supporting hyperplanes $$f^{+}(x)=w\cdot x-B^{+}=0$$ and $$f^{-}(x)=w\cdot x-B^{-}$$ for $$A^{+}$$ and $$A^{-}$$. By our discussion above, these are also supporting hyperplanes for $$C(A^{+})$$ and $$C(A^{-})$$. Therefore if $$p\in C(A^{+})$$ and $$q\in C(A^{-})$$, we have $$w\cdot p-B^{+}\ge 0$$ and $$w\cdot q-B^{-}\le 0$$. As before $w\cdot(p-q)\ge B^{+}-B^{-}>0$ and so $\|p-q\|\ge\frac{B^{+}-B^{-}}{\|w\|}=\tau_{w}(A^{+},A^{-})$ Since this holds for any $$w$$, we have the result for $$\tau(A^{+},A^{-})$$. The reason this result is useful is that, as we’ve seen, if we restrict $$p$$ and $$q$$ to $$A^{+}$$ and $$A^{-}$$, then there can be a gap between the minimal distance and the optimal margin. If we allow $$p$$ and $$q$$ to range over the convex hulls of these sets, then that gap disappears. One other consequence of this is that if $$A^{+}$$ and $$A^{-}$$ are linearly separable then their convex hulls are disjoint. Corollary: If $$A^{+}$$ and $$A^{-}$$ are linearly separable then $$\|p-q\|>0$$ for all $$p\in C(A^{+})$$ and $$q\in C(A^{-})$$ Proof: The sets are linearly separable precisely when $$\tau>0$$. Our strategy now is to show that if $$p$$ and $$q$$ are points in $$C(A^{+})$$ and $$C(A^{-})$$ respectively that are at minimal distance $$D$$, and if we set $$w=p-q$$, then we obtain supporting hyperplanes with margin equal to $$\|p-q\|$$. Since this margin is the largest possible margin, this $$w$$ must be the optimal $$w$$. This transforms the problem of finding the optimal margin into the problem of finding the closest points in the convex hulls. Lemma: Let $D=\min_{p\in C(A^{+}),q\in C(A^{-})} \|p-q\|.$ Then there are points $$p^*\in C(A^{+})$$ and $$q^{*}\in C(A^{-})$$ with $$\|p^{*}-q^{*}\|=D$$. If $$p_1^{*},q_1^{*}$$ and $$p_2^{*},q_2^{*}$$ are two pairs of points satisfying this condition, then $$p_1^{*}-q_1^{*}=p_2^{*}-q_{2}^{*}$$. Proof: Consider the set of differences $V = \{p-q: p\in C(A^{+}),q\in C(A^{-})\}.$ • $$V$$ is compact. This is because it is the image of the compact set $$C(A^{+})\times C(A^{-})$$ in $$\mathbf{R}^{k}\times\mathbf{R}^{k}$$ under the continuous map $$h(x,y)=x-y$$. • the function $$d(v)=\|v\|$$ is continuous and satisfies $$d(v)\ge D>0$$ for all $$v\in V$$. Since $$d$$ is a continuous function on a compact set, it attains its minimum $$D$$ and so there is a $$v=p^{*}-q^{*}$$ with $$d(v)=D$$. Now suppose that there are two distinct points $$v_1=p_1^*-q_1^*$$ and $$v_2=p_2^*-q_2^*$$ with $$d(v_1)=d(v_2)=D$$. Consider the line segment $t(s) = (1-s)v_1+sv_2\hbox{ where }0\le s\le 1$ joining $$v_1$$ and $$v_2$$. Now $t(s) = ((1-s)p_1^*+sp_2^*)-((1-s)q_1^*+sq_2^*).$ Both terms in this difference belong to $$C(A^{+})$$ and $$C(A^{-})$$ respectively, regardless of $$s$$, by convexity, and therefore $$t(s)$$ belongs to $$V$$ for all $$0\le s\le 1$$. This little argument shows that $$V$$ is convex. In geometric terms, $$v_1$$ and $$v_2$$ are two points in the set $$V$$ equidistant from the origin and the segment joining them is a chord of a circle; as Figure 7.8 shows, in that situation there must be a point on the line segment joining them that’s closer to the origin than they are. Since all the points on that segment are in $$V$$ by convexity, this would contradict the assumption that $$v_1$$ is the closet point in $$V$$ to the origin. In algebraic terms, since $$D$$ is the minimal value of $$\|v\|$$ for all $$v\in V$$, we must have $$t(s)\ge D$$. On the other hand $\frac{d}{ds}\|t(s)\|^2 = \frac{d}{ds}(t(s)\cdot t(s)) =t(s)\cdot \frac{dt(s)}{ds} = t(s)\cdot(v_2-v_1).$ Therefore $\frac{d}{ds}\|t(s)\|^2|_{s=0} = v_{1}\cdot(v_{2}-v_{1})=v_{1}\cdot v_{2}-\|v_{1}\|^2\le 0$ since $$v_{1}\cdot v_{2}\le D^{2}$$ and $$\|v_{1}\|^2=D^2$$. If $$v_{1}\cdot v_{2}<D^{2}$$, then this derivative would be negative, which would mean that there is a value of $$s$$ where $$t(s)$$ would be less than $$D$$. Since that can’t happen, we conclude that $$v_{1}\cdot v_{2}=D^{2}$$ which means that $$v_{1}=v_{2}$$ – the vectors have the same magnitude $$D$$ and are parallel. This establishes uniqueness. Note: The essential ideas of this argument show that a compact convex set in $$\mathbf{R}^{k}$$ has a unique point closest to the origin. The convex set in this instance, $V=\{p-q:p\in C(A^{+}),q\in C(A^{-})\},$ is called the difference $$C(A^{+})-C(A^{-})$$, and it is generally true that the difference of convex sets is convex. Now we can conclude this line of argument. Theorem: Let $$p$$ and $$q$$ be points in $$C(A^{+})$$ and $$C(A^{-})$$ respectively are such that $$\|p-q\|$$ is minimal among all such pairs. Let $$w=p-q$$ and set $$B^{+}=w\cdot p$$ and $$B^{-}=w\cdot q$$. Then $$f^{+}(x)=w\cdot x-B^{+}=0$$ and $$f^{-}(x)=w\cdot x-B^{-}$$ are supporting hyperplanes for $$C(A^{+})$$ and $$C(A^{-})$$ respectively and the associated margin $\tau_{w}(A^{+},A^{-})=\frac{B^{+}-B^{-}}{\|w\|} = \|p-q\|$ is optimal. Proof: First we show that $$f^{+}(x)=0$$ is a supporting hyperplane for $$C(A^{+})$$. Suppose not. Then there is a point $$p'\in C(A^{+})$$ such that $$f^{+}(x)<0$$. Consider the line segment $$t(s) = (1-s)p+sp'$$ running from $$p$$ to $$p'$$. By convexity it is entirely contained in $$C(A^{+})$$. Now look at the distance from points on this segment to $$q$$: $D(s)=\|t(s)-q\|^2.$ We have $\frac{dD(s)}{ds}|_{s=0} = 2(p-q)\cdot (p'-p) = 2w\cdot (p'-p) = 2\left[(f^{+}(p')+B^{+})-(f^{+}(p)+B^{+})\right]$ so $\frac{dD(s)}{ds}|_{s=0} = 2(f^{+}(p')-f^{+}(p))<0$ since $$f(p)=0$$. This means that $$D(s)$$ is decreasing along $$t(s)$$ and so there is a point $$s'$$ along $$t(s)$$ where $$\|t(s')-q\|<D$$. This contradicts the fact that $$D$$ is the minimal distance. The same argument shows that $$f^{-}(x)=0$$ is also a supporting hyperplane. Now the margin for this $$w$$ is $\tau_{w}(A^{+},A^{-}) = \frac{w\cdot (p-q)}{\|w\|} = \|p-q\|=D$ and as $$w$$ varies we know this is the largest possible $$\tau$$ that can occur. Thus this is the maximal margin. *Figure 7.9 shows how considering the closest point in the convex hulls “fixes” the problem that we saw in Figure 7.5. The closest point occurs at a point on the boundary of the convex hull that is not one of the points in $$A^{+}$$ or $$A^{-}$$. ## 7.5 Finding the Optimal Margin Classifier Now that we have translated our problem into geometry, we can attempt to develop an algorithm for solving it. To recap, we have two sets of points $A^{+}=\{x^+_1,\ldots, x^+_{n_{+}}\}$ and $A^{-}=\{x^-_1,\ldots, x^-_{n_{-}}\}$ in $$\mathbf{R}^{k}$$ that are linearly separable. We wish to find points $$p\in C(A^{+})$$ and $$q\in C(A^{-})$$ such that $\|p-q\|=\min_{p'\in C(A^{+}),q'\in C(A^{-})} \|p'-q'\|.$ Using the definition of the convex hull we can express this more concretely. Since $$p\in C(A^{+})$$, there are coefficients $$\lambda^{+}_{i}\ge 0$$ for $$i=1,\ldots,n_{+}$$ and $$\lambda^{-}_{i}\ge 0$$ for $$i=1,\ldots, n_{-}$$ so that \begin{aligned} p&=&\sum_{i=1}^{n_{+}}\lambda^{+}_{i} x^{+}_{i} \\ q&=&\sum_{i=1}^{n_{-}}\lambda^{-}_{i} x^{-}_{i} \\ \end{aligned} where $$\sum_{i=1}^{n_{\pm}} \lambda_{i}^{\pm}=1$$. We can summarize this as follows: Optimization Problem 1: Write $$\lambda^{\pm}=(\lambda^{\pm}_{1},\ldots, \lambda^{\pm}_{n_{\pm}})$$ Define $w(\lambda^+,\lambda^-) = \sum_{i=1}^{n_{+}}\lambda^{+}_{i}x^{+}_{i} - \sum_{i=1}^{n_{-}}\lambda^{-}x^{-}_{i}$ To find the supporting hyperplanes that define the optimal margin between $$A^{+}$$ and $$A^{-}$$, find $$\lambda^{+}$$ and $$\lambda^{-}$$ such that $$\|w(\lambda^{+},\lambda^{-})\|^2$$ is minimal among all such $$w$$ where all $$\lambda^{\pm}_{i}\ge 0$$ and $$\sum_{i=1}^{n_{\pm}} \lambda^{\pm}_{i}=1$$. This is an example of a constrained optimization problem. It’s worth observing that the objective function $$\|w(\lambda^{+},\lambda^{-})\|^2$$ is just a quadratic function in the $$\lambda^{\pm}.$$ Indeed we can expand $\|w(\lambda^{+},\lambda^{-})\|^2 = (\sum_{i=1}^{n_{+}}\lambda^{+}_{i}x_{i}- \sum_{i=1}^{n_{-}}\lambda^{-}x^{-}_{i})\cdot(\sum_{i=1}^{n_{+}}\lambda^{+}_{i}x_{i}- \sum_{i=1}^{n_{-}}\lambda^{-}x^{-}_{i})$ to obtain $\|w(\lambda^{+},\lambda^{-})\|^2 = R -2S +T$ where \begin{aligned} R &=& \sum_{i=1}^{n_{+}}\sum_{j=1}^{n_{+}}\lambda^{+}_{i}\lambda^{+}_{j}(x^{+}_{i}\cdot x^{+}_{j}) \\ S &=& \sum_{i=1}^{n_{+}}\sum_{j=1}^{n_{-}}\lambda^{+}_{i}\lambda^{-}_{j}(x^{+}_{i}\cdot x^{-}_{j}) \\ T &=& \sum_{i=1}^{n_{-}}\sum_{j=1}^{n_{-}}\lambda^{-}_{i}\lambda^{-}_{j}(x^{-}_{i}\cdot x^{-}_{j}) \\ \end{aligned} \tag{7.2} Thus the function we are trying to minimize is relatively simple. On the other hand, unlike optimization problems we have seen earlier in these lectures, in which we can apply Lagrange multipliers, in this case some of the constraints are inequalities – namely the requirement that all of the $$\lambda^{\pm}\ge 0$$ – rather than equalities. There is an extensive theory of such problems that derives from the idea of Lagrange multipliers. However, in these notes, we will not dive into that theory but will instead construct an algorithm for solving the problem directly. ### 7.5.1 Relaxing the constraints Our first step in attacking this problem is to adjust our constraints and our objective function slightly so that the problem becomes easier to attack. Optimization Problem 2: This is a slight revision of problem 1 above. We minimize: $Q(\lambda^{+},\lambda^{-}) = \|w(\lambda^{+},\lambda^{-})\|^2-\sum_{i=1}^{n_{+}}\lambda^{+}_{i}-\sum_{i=1}^{n_{-}}\lambda^{-}_{i}$ subject to the constraints that all $$\lambda^{\pm}_{i}\ge 0$$ and $\alpha = \sum_{i=1}^{n_{+}}\lambda^+_{i} = \sum_{i=1}^{n_{-}}\lambda^{-}_{i}.$ Problem 2 is like problem 1, except we don’t require the sums of the $$\lambda^{\pm}_{i}$$ to be one, but only that they be equal to each other; and we modify the objective function slightly. It turns out that the solution to this optimization problem easily yields the solution to our original one. Lemma: Suppose $$\lambda^{+}$$ and $$\lambda^{-}$$ satisfy the constraints of problem 2 and yield the minimal value for the objective function $$Q(\lambda^{+},\lambda^{-})$$. Then $$\alpha\not=0$$. Rescale the $$\lambda^{\pm}$$ to have sum equal to one by dividing by $$\alpha$$, yielding $$\tau^{\pm}=(1/\alpha)\lambda^{\pm}$$. Then $$w(\tau^{+},\tau^{-})$$ is a solution to optimization problem 1. Proof: To show that $$\alpha\not=0$$, suppose that $$\lambda^{\pm}_{i}=0$$ for all $$i\not=1$$ and $$\lambda=\lambda^{+}_{1}=\lambda^{-}_{1}$$. The one-variable quadratic function $$Q(\lambda)$$ takes its minimum value at $$\lambda=1/\|x_{1}^{+}-x_{1}^{-}\|^2$$ and its value at that point is negative. Therefore the minimum value of $$Q$$ is negative, which means $$\alpha\not=0$$ at that minimum point. For the equivalence, notice that $$\tau^{\pm}$$ still satisfy the constraints of problem 2. Therefore $Q(\lambda^{+},\lambda^{-}) = \|w(\lambda^{+},\lambda^{-})\|^2-2\alpha\le \|w(\tau^{+},\tau^{-})\|^2-2.$ On the other hand, suppose that $$\sigma^{\pm}$$ are a solution to problem 1. Then $\|w(\sigma^{+},\sigma^{-})\|^2\le \|w(\tau^{+},\tau^{-})\|^2.$ Therefore $\alpha^2 \|w(\sigma^{+},\sigma^{-})\|^2 = \|w(\alpha\sigma^{+},\alpha\sigma^{-})\|^2\le \|w(\lambda^{+},\lambda^{-})\|^2$ and finally $\|w(\alpha\sigma^{+},\alpha\sigma^{-})\|^2-2\alpha\le Q(\lambda^{+},\lambda^{-})=\|w(\alpha\tau^{+},\alpha\tau^{-})\|^2-2\alpha.$ Since $$Q$$ is the minimal value, we have $\alpha^{2}\|w(\sigma^{+},\sigma^{-})\|^2 = \alpha^{2}\|w(\tau^{+},\tau^{-})\|^2$ so that indeed $$w(\tau^{+},\tau^{-})$$ gives a solution to Problem 1. ### 7.5.2 Sequential Minimal Optimization Now we outline an algorithm for solving Problem 2 that is called Sequential Minimal Optimization that was introduced by John Platt in 1998 (See [5] and Chapter 12 of [6]). The algorithm is based on the principle of “gradient ascent”, where we exploit the fact that the negative gradient of a function points in the direction of its most rapid decrease and we take small steps in the direction of the negative gradient until we reach the minimum. However, in this case simplify this idea a little. Recall that the objective function $$Q(\lambda^{+},\lambda^{-})$$ is a quadratic function in the $$\lambda$$’s and that we need to preserve the condition that $$\sum \lambda^{+}_{i}=\sum\lambda^{-}_{i}$$. So our approach is going to be to take, one at a time, a pair $$\lambda^{+}_{i}$$ and $$\lambda^{-}_{j}$$ and change them together so that the equality of the sums is preserved and the change reduces the value of the objective function. Iterating this will take us to a minimum. So, for example, let’s look at $$\lambda^{+}_i$$ and $$\lambda^{-}_{j}$$ and, for the moment, think of all of the other $$\lambda$$’s as constants. Then our objective function reduces to a quadratic function of these two variables that looks something like: $Q(\lambda_{i}^{+},\lambda_{j}^{-}) = a(\lambda^{+}_i)^2+b\lambda^{+}_i\lambda^{-}_j+c(\lambda^{-}_{i})^2+d\lambda^{+}_i+e\lambda^{-}_{j}+f.$ The constraints that remain are $$\lambda^{\pm}\ge 0$$, and we are going to try to minimize $$Q$$ by changing $$\lambda_{i}^{+}$$ and $$\lambda_{j}^{-}$$ by the same amount $$\delta$$. Furthermore, since we still must have $$\lambda_{i}^{+}+\delta\ge 0$$ and $$\lambda_{j}^{-}+\delta\ge 0$$, we have $\delta\ge M=\max\{-\lambda_{i}^{+},-\lambda_{j}^{-}\} \tag{7.3}$ In terms of this single variable $$\delta$$, our optimization problem becomes the job of finding the minimum of a quadratic polynomial in one variable subject to the constraint in Equation 7.3. This is easy! There are two cases: the critical point of the quadratic is to the left of $$M$$, in which case the minimum value occurs at $$M$$; or the critical point of the quadratic is to the right of $$M$$, in which case the critical point occurs there. This is illustrated in Figure 7.10. Computationally, let’s write $w_{\delta,i,j}(\lambda^{+},\lambda^{-}) = w(\lambda^{+},\lambda^{-})+\delta(x^{+}_{i}-x^{-}_{j}).$ Then $\frac{d}{d\delta}(\|w_{\delta,i,j}(\lambda^{+},\lambda^{-})\|^2-2\alpha) = 2w_{\delta,i,j}(\lambda^{+},\lambda^{-})\cdot(x^{+}_{i}-x^{-}_{j})-2$ and using the definition of $$w_{\delta,i,j}$$ we obtain the following formula for the critical value of $$\delta$$ by setting this derivative to zero: $\delta_{i,j} = \frac{(1-w(\lambda^{+},\lambda^{-})\cdot(x_{i}^{+}-x_{j}^{-})}{\|x^+_{i}-x^{-}_{j}\|^2}$ Using this information we can describe the SMO algorithm. Algorithm (SMO, see [5]): Given: Two linearly separable sets of points $$A^{+}=\{x_{1}^{+},\ldots,x_{n_{+}}^{+}\}$$ and $$A^{-}=\{x_{1}^{-},\ldots, x_{n_{-}}^{-}\}$$ in $$\mathbf{R}^{k}$$. Find: Points $$p$$ and $$q$$ belonging to $$C(A^{+})$$ and $$C(A^{-})$$ respectively such that $\|p-q\|^2=\min_{p'\in C(A^{+}),q'\in C(A^{-})} \|p'-q'\|^2$ Initialization: Set $$\lambda_{i}^{+}=\frac{1}{n_{+}}$$ for $$i=1,\ldots, n_{+}$$ and $$\lambda_{i}^{-}=\frac{1}{n_{-}}$$ for $$i=1,\ldots, n_{-}$$. Set $p(\lambda^{+})=\sum_{i=1}^{n_{+}}\lambda^{+}_{i}x^{+}_{i}$ and $q(\lambda^{-})=\sum_{i=1}^{n_{-}}\lambda^{-}_{i}x^{-}_{i}$ Notice that $$w(\lambda^{+},\lambda^{-})=p(\lambda^{+})-q(\lambda^{-})$$. Let $$\alpha=\sum_{i=1}^{n_{+}}\lambda^{+}=\sum_{i=1}^{n_{-}}\lambda^{-}$$. These sums will remain equal to each other throughout the operation of the algorithm. Repeat the following steps until maximum value of $$\delta^{*}$$ computed in each iteration is smaller than some tolerance (so that the change in all of the $$\lambda$$’s is very small): • For each pair $$i,j$$ with $$1\le i\le n_{+}$$ and $$1\le j\le n_{-}$$, compute $M_{i,j} = \max\{-\lambda_{i}^{+},-\lambda_{j}^{-}\}$ and $\delta_{i,j} = \frac{1-(p(\lambda^{+})-q(\lambda^{-}))\cdot(x_{i}^{+}-x_{j}^{-})}{\|x^+_{i}-x^{-}_{j}\|^2}.$ If $$\delta_{i,j}\ge M$$ then set $$\delta^{*}=\delta_{i,j}$$; otherwise set $$\delta^{*}=M$$. Then update the $$\lambda^{\pm}$$ by the equations: \begin{aligned} \lambda^{+}_{i}&=&\lambda^{+}_{i}+\delta_{i,j}^{*} \\ \lambda^{+}_{j}&=&\lambda^{-}_{j}+\delta_{i,j}^{*} \\ \end{aligned} When this algorithm finishes, $$p\approx p(\lambda^{+})$$ and $$q\approx q(\lambda^{-})$$ will be very good approximations to the desired closest points. Recall that if we set $$w=p-q$$, then the optimal margin classifier is $f(x)=w\cdot x - \frac{B^{+}+B^{-}}{2}=0$ where $$B^{+}=w\cdot p$$ and $$B^{-}=w\cdot q$$. Since $$w=p-q$$ we can simplify this to obtain $f(x)=(p-q)\cdot x -\frac{\|p\|^2-\|q\|^2}{2}=0.$ In Figure 7.11, we show the result of applying this algorithm to the penguin data and illustrate the closest points as found by an implementation of the SMO algorithm, together with the optimal classifying line. Bearing in mind that the y-axis is scaled by a factor of 200, we obtain the following rule for distinguishing between Adelie and Gentoo penguins – if the culmen depth and body mass put you above the red line, you are a Gentoo penguin, otherwise you are an Adelie. ## 7.6 Inseparable Sets Not surprisingly, real life is often more complicated than the penguin example we’ve discussed at length in these notes. In particular, sometimes we have to work with sets that are not linearly separable. Instead, we might have two point clouds, the bulk of which are separable, but because of some outliers there is no hyperplane we can draw that separates the two sets into two halfplanes. Fortunately, all is not lost. There are two common ways to address this problem, and while we won’t take the time to develop the theory behind them, we can at least outline how they work. ### 7.6.1 Best Separating Hyperplanes If our sets are not linearly separable, then their convex hulls overlap and so our technique for finding the closest points of the convex hulls won’t work. In this case, we can “shrink” the convex hull by considering combinations of points $$\sum_{i}\lambda_{i}x_{i}$$ where $$\sum\lambda_{i}=1$$ and $$C\ge\lambda_{i}\ge 0$$ for some $$C\le 1$$. For $$C$$ small enough, reduced convex hulls will be linearly separable – although some outlier points from each class will lie outside of them – and we can find hyperplane that separates the reduced hulls. In practice, this means we allow a few points to lie on the “wrong side” of the hyperplane. Our tolerance for these mistakes depends on $$C$$, but we can include $$C$$ in the optimization problem to try to find the smallest $$C$$ that “works”. ### 7.6.2 Nonlinear kernels The second option is to look not for separating hyperplanes but instead for separating curves – perhaps polynomials or even more exotic curves. This can be achieved by taking advantage of the form of Equation 7.2. As you see there, the only way the points $$x_{i}^{\pm}$$ enter in to the function being minimized is through the inner products $$x_{i}^{\pm}\cdot x_{j}^{\pm}$$. We can adopt a different inner product than the usual Euclidean one, and reconsider the problem using this different inner product. This amounts to embedding our points in a higher dimensional space where they are more likely to be linearly separable. Again, we will not pursue the mathematics of this further in these notes. ## 7.7 Exercises 1. Prove that, if $$f(x)=w\cdot x+b=0$$ is a hyperplane in $$\mathbf{R}^{k}$$, then the two “sides” of this hyperplane, consisting of the points where $$f(x)\ge 0$$ and $$f(x)\le 0$$, are both convex sets. 2. Prove that $$C(S)$$ is the intersection of all the halfplanes $$f(x)\ge 0$$ as $$f(x)=w\cdot x+b$$ runs through all supporting hyperplanes for $$S$$ where $$f(x)\ge 0$$ for all $$p\in S$$. 3. Prove that $$C(S)$$ is bounded. Hint: show that $$S$$ is contained in a sphere of sufficiently large radius centered at zero, and then that $$C(S)$$ is contained in that sphere as well. 4. Confirm the final formula for the optimal margin classifier at the end of the lecture.
2023-03-30 23:32:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9438430666923523, "perplexity": 100.73997824104046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00619.warc.gz"}
https://astarmathsandphysics.com/a-level-maths-notes/s1/5444-advanced-normal-distribution-problem.html
Suppose that for a certain random variable $X$ normally distributed, $P( 20 \le X \le 30)=0.3$ $P(X \le 30)=3 P(X \le 20)$ We can write the first equation as $P(X \le 30)-P(X \le 20)=0.3$ The simultaneous equations can be written jatex options:inline}P(X \le 30)-P(X \le 20)=0.3{/jatex}  (1) $P(X \le 30)-3 P(X \le 20)=0$ (2) (1)-(2) gives $P(X \le 20)=0.3 \rightarrow P(X \le 20)=0.15$ then $P(X \le 30)=3 P(X \le 20)=3 \times 0.15 = 0.45$ . Now we have the equations $P(X \le 20)=0.15, \; P(X \le 20)= 0.45$ . Using normal distribution tables or a calculator gives $\frac{20- \mu}{\sigma}=-1.036, \; \frac{30 - \mu}{\sigma}=-0.125$ Multiplying by $\sigma$ and subtracting gives $-10=-0.911 \sigma \rightarrow \sigma = \frac{10}{0.911}=10.97$ then $\mu=31.37$ /
2021-07-27 20:51:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9827529191970825, "perplexity": 9920.352394700336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153491.18/warc/CC-MAIN-20210727202227-20210727232227-00146.warc.gz"}
http://archive.numdam.org/articles/10.5802/jedp.590/
Expansions and eigenfrequencies for damped wave equations Journées équations aux dérivées partielles (2001), article no. 6, 10 p. We study eigenfrequencies and propagator expansions for damped wave equations on compact manifolds. In the strongly damped case, the propagator is shown to admit an expansion in terms of the finitely many eigenmodes near the real axis, with an error exponentially decaying in time. In the presence of an elliptic closed geodesic not meeting the support of the damping coefficient, we show that there exists a sequence of eigenfrequencies converging rapidly to the real axis. In the case of Zoll manifolds, the set of all eigenfrequencies is shown to exhibit a cluster structure determined by the Morse index of the closed geodesics and the damping coefficient averaged along the geodesic flow. We then show that the propagator can be expanded in terms of the clusters of eigenfrequencies in the entire spectral band. @article{JEDP_2001____A6_0, author = {Hitrik, Michael}, title = {Expansions and eigenfrequencies for damped wave equations}, journal = {Journ\'ees \'equations aux d\'eriv\'ees partielles}, eid = {6}, publisher = {Universit\'e de Nantes}, year = {2001}, doi = {10.5802/jedp.590}, zbl = {01808682}, mrnumber = {1843407}, language = {en}, url = {http://archive.numdam.org/articles/10.5802/jedp.590/} } TY - JOUR AU - Hitrik, Michael TI - Expansions and eigenfrequencies for damped wave equations JO - Journées équations aux dérivées partielles PY - 2001 DA - 2001/// PB - Université de Nantes UR - http://archive.numdam.org/articles/10.5802/jedp.590/ UR - https://zbmath.org/?q=an%3A01808682 UR - https://www.ams.org/mathscinet-getitem?mr=1843407 UR - https://doi.org/10.5802/jedp.590 DO - 10.5802/jedp.590 LA - en ID - JEDP_2001____A6_0 ER - %0 Journal Article %A Hitrik, Michael %T Expansions and eigenfrequencies for damped wave equations %J Journées équations aux dérivées partielles %D 2001 %I Université de Nantes %U https://doi.org/10.5802/jedp.590 %R 10.5802/jedp.590 %G en %F JEDP_2001____A6_0 Hitrik, Michael. Expansions and eigenfrequencies for damped wave equations. Journées équations aux dérivées partielles (2001), article no. 6, 10 p. doi : 10.5802/jedp.590. http://archive.numdam.org/articles/10.5802/jedp.590/ [AschLebeau]M. Asch and G. Lebeau The spectrum of the damped wave operator for a bounded domain in ${}^{2}$, preprint, 2000. | MR [BLR]C. Bardos, G. Lebeau, J. Rauch Sharp sufficient conditions for the observation, control, and stabilization of waves from the boundary, SIAM J. Control and Optimization, 30, 1992, 1024-1065. | MR | Zbl [Burq2]N. Burq Mesures semi-classiques et mesures de défaut, Sém. Bourbaki, Asterisque 245, 1997, 167-195. | EuDML | Numdam | MR | Zbl [Burq3]N. Burq Semi-classical estimates for the resolvent in non-trapping geometries, preprint, 2000. | MR [BurqZworski]N. Burq and M. Zworski Resonance expansions in semi-classical propagation, Comm. Math. Phys., to appear. | MR | Zbl [PopovCardoso]F. Cardoso and G. Popov Quasimodes with exponentially small errors associated with elliptic periodic rays, preprint, 2001. | MR [Hitrik1]M. Hitrik Eigenfrequencies for damped wave equations on Zoll manifolds, preprint, 2001. | MR [Hitrik2]M. Hitrik Propagator expansions for damped wave equations, in preparation. [HormIV]L. Hörmander The analysis of linear partial differential operators IV, Springer Verlag 1985. | MR | Zbl [Lebeau]G. Lebeau Equation des ondes amorties. Algebraic and geometric methods in mathematical physics (Kaciveli 1993), 73-109, Math. Phys. Stud., 19, Kluwer Acad. Publ., Dordrecht, 1996. | MR | Zbl [Markus] A. S. Markus Introduction to the spectral theory of polynomial operator pencils, Stiintsa, Kishinev 1986 (Russian). Engl. transl. in Transl. Math. Monographs 71, Amer. Math. Soc., Providence 1988. | MR | Zbl [Ralston] J. Ralston On the construction of quasimodes associated with stable periodic orbits, Comm. Math. Phys. 51 1976, 219-242. | MR | Zbl [RT1]J. Rauch and M. Taylor Exponential decay of solutions to hyperbolic equations in bounded domains, Indiana Univ. Math. J. 24, 1974, 79-86. | MR | Zbl [RT2] J. Rauch and M. Taylor Decay of solutions to nondissipative hyperbolic systems on compact manifolds, Comm. Pure Appl. Math. 28, 1975, 501-523. | MR | Zbl [Sjostrand1] J. Sjöstrand Asymptotic distribution of eigenfrequencies for damped wave equations, Publ. R.I.M.S., 36 (2000), 573-611. | MR | Zbl [Stefanov]P. Stefanov Quasimodes and resonances : sharp lower bounds, Duke Math. J., 99, 1999, 75-92. | MR | Zbl [StefanovVodev] P. Stefanov and G. Vodev Neumann resonances in linear elasticity for an arbitrary body, Comm. Math. Phys., 176 1996, 645-659. | MR | Zbl [TZ]S. H. Tang and M. Zworski From quasimodes to resonances, Math. Res. Lett., 5, 1998, 261-272. | MR | Zbl [Weinstein] A. Weinstein Asymptotics of eigenvalue clusters for the Laplacian plus a potential, Duke Math. J. 44 1977, 883-892. | MR | Zbl [Zworski]M. Zworski Resonance expansions in wave propagation, Séminaire E.D.P., 1999-2000, École Polytechnique, XXII-1-XXII-9. | Numdam | MR Cited by Sources:
2022-11-29 06:54:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4648212492465973, "perplexity": 4105.499011034569}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710690.85/warc/CC-MAIN-20221129064123-20221129094123-00271.warc.gz"}
https://zbmath.org/?q=an%3A1157.60051
# zbMATH — the first resource for mathematics Reversibility of chordal SLE. (English) Zbl 1157.60051 The paper deals with the theory of stochastic Loewner evolutions (SLEs) as introduced by the works of O. Schramm [Isr. J. Math. 118, 221–288 (2000; Zbl 0968.60093)] to describe the Markovian scaling limits of some lattice models. The author proves a theorem on the invariance of probability distributions related to chordal SLEs for certain parameter values. It is shown that the chordal SLE$$_{\kappa}$$ trace is reversible for $$\kappa \in (0, 4]$$. ##### MSC: 60G99 Stochastic processes 60J65 Brownian motion 60H10 Stochastic ordinary differential equations (aspects of stochastic analysis) Full Text: ##### References: [1] Ahlfors, L. V. (1973). Conformal Invariants : Topics in Geometric Function Theory . McGraw-Hill, New York. · Zbl 0272.30012 [2] Camia, F. and Newman, C. M. (2007). Critical percolation exploration path and SLE 6 : A proof of convergence. Probab. Theory Related Fields 139 473-519. · Zbl 1126.82007 [3] Dubédat, J. (2007). Commutation relations for Schramm-Loewner evolutions. Comm. Pure Appl. Math. 60 1792-1847. · Zbl 1137.82009 [4] Lawler, G. F., Schramm, O. and Werner, W. (2001). Values of Brownian intersection exponents. I. Half-plane exponents. Acta Math. 187 237-273. · Zbl 1005.60097 [5] Lawler, G. F., Schramm, O. and Werner, W. (2003). Conformal restriction: The chordal case. J. Amer. Math. Soc. 16 917-955. · Zbl 1030.60096 [6] Lawler, G. F., Schramm, O. and Werner, W. (2004). Conformal invariance of planar loop-erased random walks and uniform spanning trees. Ann. Probab. 32 939-995. · Zbl 1126.82011 [7] Lawler, G. F. and Werner, W. (2004). The Brownian loop soup. Probab. Theory Related Fields 128 565-588. · Zbl 1049.60072 [8] Revuz, D. and Yor, M. (1991). Continuous Martingales and Brownian Motion . Springer, Berlin. · Zbl 0731.60002 [9] Rohde, S. and Schramm, O. (2005). Basic properties of SLE. Ann. of Math. ( 2 ) 161 883-924. · Zbl 1081.60069 [10] Schramm, O. (2007). Conformally invariant scaling limits: An overview and a collection of problems. In International Congress of Mathematicians 1 513-543. EMS, Zürich. · Zbl 1131.60088 [11] Schramm, O. (2000). Scaling limits of loop-erased random walks and uniform spanning trees. Israel J. Math. 118 221-288. · Zbl 0968.60093 [12] Schramm, O. and Sheffield, S. Contour lines of the two-dimensional discrete Gaussian free field. Available at · Zbl 1210.60051 [13] Smirnov, S. (2001). Critical percolation in the plane: Conformal invariance, Cardy’s formula, scaling limits. C. R. Acad. Sci. Paris Sér. I Math. 333 239-244. · Zbl 0985.60090 [14] Zhan, D. (2004). Stochastic Loewner evolution in doubly connected domains. Probab. Theory Related Fields 129 340-380. · Zbl 1054.60104 [15] Zhan, D. (2004). Duality of chordal SLE. Available at Invent. Math. · Zbl 1158.60047 [16] Zhan, D. (2006). Some properties of annulus SLE. Electron. J. Probab. 11 1069-1093. · Zbl 1136.82014 [17] Zhan, D. (2008). The scaling limits of planar LERW in finitely connected domains. Ann. Probab. 36 467-529. · Zbl 1153.60057 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2021-09-22 18:36:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.814630925655365, "perplexity": 2977.075118717737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057371.69/warc/CC-MAIN-20210922163121-20210922193121-00042.warc.gz"}
https://meangreenmath.com/2020/01/13/youtubes-automatic-closed-captioning-of-mathematical-speech-part-2/
# YouTube’s Automatic Closed-Captioning of Mathematical Speech (Part 2) Last semester, as I spend untold hours editing the closed captioning automatically generated by YouTube on the math videos on my YouTube channel, I got a crash course on the capabilities and limitations of this system. This crash course was perhaps not legally necessary but extra work that I took on because a student with a hearing impairment was enrolled in my class, and I wanted to ensure that the review videos that I provide to my students were accessible to him also. I think the resources offered by my university are fairly typical to ensure that instructors are able to reach all students and not just those who don’t have audio/visual impairments. After discussions with the cognizant people at my university, I’ve made a few conclusions: • Mostly by accident, my videos are ADA compliant since I made the decision to both write out the solutions and also talking through the solutions. • While the automatic closed-captioning provided by YouTube may be minimally compliant with ADA, I’m not sure that a student with a hearing impairment could always follow the transcriptions due to a number of errors. • Aside from punctuation, capitalization, and the occasional homonym (e.g., right vs. write), YouTube does a pretty good job at transcribing ordinary speech. • Naturally, YouTube’s automated closed-captioning is not to blame when I don’t enunciate clearly, have a rabbit trail of thought but then have to backtrack, use poor grammar, make a outright mistake, etc. • However, YouTube seems to have a lot of difficulty providing automatic closed-captioning of mathematical speech. Fixing these transcription errors took an awful lot of time. I don’t want to know how many hours I devoted to fixing the 120 or so videos (each video is about 3-10 minutes long) recorded so that my hearing-impaired student could have full access to my class. About halfway into this project of fixing the closed-captioning errors, I started writing down some of the closed-captioning errors. I wish I had thought to do this near the start, but oh well. Phonetically, I can understand why most of these errors were made. But these mistakes really shouldn’t have happened. Here are my favorite howlers that I recorded, showing both what I said and what YouTube thought I said. • “931,147,496” became “930 1,000,000 147,000 496” • $A \cap C$,” pronounced “$A$ intersect $C$,” became “A inner sexy” • “arithmetic” became “rhythm sick” • “capital $X$” became “Catholics” • “cardinality” became “carnality” • “divisible by 5” became “visited his wife live” (I have no idea how that happened) • $e^x$” became “eat ooh the x” • “for succinctness” became “force the sickness” • $n \choose n$,” pronounced “$n$ choose $n$,” became “and shoes and” • “set containing” became “second taining” • $\sqrt{2}$” became “squirt tuna” • “two ways in” became “too wasted” • “what $f(3)$,” pronounced “what $f$ of 3,” became “whateva 3” • $x \in B$, pronounced “$x$ is in $B$,” became “sexism be” • $x \in B \cap C$, pronounced “$x$ is in $B$ and $C$,” became “x is Indiana see” • $x \in C$, pronounced “$x$ is in $C$,” became “excellency” Here’s the complete list of howlers that I recorded for posterity. If I’ve learned nothing else, it’s that I need to be more proactive about ensuring the mathematical accuracy of closed-captioning for my YouTube videos. 4 for 857 a 50 7 1232 1230 two 4761 4760 1 19,999 19,000 999 46,376 40 6376 123,552 120 3,552 5,565,120 five million 565,000 120 931,147,496 930 1,000,000 147,000 496 $(2,\emptyset)$ 2d sent $(20,8)$ 28 $[1,2]$ one too $12 \choose 4$ 12 juice 4 $16 \choose 8$ 16 choosing $3 + 1 = 4$ surplus one mix for $4 \choose 0$ 4 2 0 $4 \choose k$ four twos k $49 \choose 5$ 49 she’s 5 $50 \choose 6$ 52 six $8 \choose 2$ a choose to $A \cap C$ A inner sexy $A \cap D$ a intersecting $A \cup B$ a you be $A \cup C$ a UNC $A \cup C$ a you will see a proof approved $A^c$ a compliment $a_i$ asa by all multiples of almost visit an element of $A$ known the debate an element of $A$ normal today and divisible and as above and positive 50 + + 50 and tens intense and would let this be 3 andrew lippa p3 arithmetic earth to arithmetic rhythm sick $A$s ace $B$ but not $C$ be but not si $B \cap C$ b in a sexy $B$ if beef bijection bi CH action bijection bite jection bijection by dejection bijection by ejection bijection by jection bijection by Junction both sets both says capital X Catholics cardinality carnality Cartesian car to shull codomain code Amin coordinate cordon coordinate court coordinates corners coordinates have cort in sap cosine cosign disjoint destroyed divisible by 5 visited his wife live $e^x$ eat ooh the x element of A illness of A element of A mellow today element $x$ that Windex elements of us empty MQ $\emptyset$ descent $\emptyset$ intercept equal able exponent x1 factored acted factorial fact welders fill in film flipping four coins philippine for coins for succinctness force the sickness hence in Hanson $i$ eye $i$ aye If I divide by 15 If I / 15 in $A$ nae in there a bear infinite if an infinite imp an infinite infant into five in 2 5 $i$s ice $j \choose r$ j choose arms $k$th cave $k$th kate likewise lakh wise $n \choose n$ and shoes and $n$th row nth throw one-to-one 121 onto on 2 $r \choose r$ our shoes are $r$ to art at $r$ to already $\mathbb{R}^2$ are too $\mathbb{R}^2$ our too $r$‘s hours same row samro second coordinate sec cornered set containing second inning set containing second taining set containing seconds hanging set containing secretary set containing 1 second anyone since $A$ has say has sixth one six-month square swear $\sqrt{2}$ score 2 $\sqrt{2}$ squirt of tuna team A teammate term in it terminate than zero gloves are off that’s chosen that’s Showzen then $x$ the next therefore there for this entry in the century plus to the $k$ decay two are to are two ways in too wasted union you need up here pier what $f(3)$ whateva 3 will be 4 will before with $n=4$ finials 4 would subtract was attract writing riding $x$ is extras $x$ is in exiting $x$ is in $A$ x as a native $x$ is in $A$ x is nay $x$ is in $B$ sexism be $x$ is in $B$ and $C$ x is Indiana see $x$ is in $C$ excellency $x$ is in $C$ X’s and see $x_2$ next to $x_2$ text too $x-$coordinate export $y$ why $y$ wine $y$ is greater than or wider $y$s wise
2021-07-30 20:26:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 94, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34442374110221863, "perplexity": 4762.163048371985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153980.55/warc/CC-MAIN-20210730185206-20210730215206-00382.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-common-core/skills-handbook-scientific-notation-and-significant-digits-exercises-page-980/8
# Skills Handbook - Scientific Notation and Significant Digits - Exercises - Page 980: 8 $$2,300$$ #### Work Step by Step The value is greater than $1$ because the exponent is positive. Move the decimal $3$ places to the right:$$2.3\times 10^{3}= 2,300.$$ After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
2019-04-21 18:16:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4915553331375122, "perplexity": 1394.5894420386412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578532050.7/warc/CC-MAIN-20190421180010-20190421202010-00001.warc.gz"}
https://forum.codidact.org/t/how-to-introduce-newcomers-successfully/688
# How to introduce newcomers successfully? Continuing the discussion from True confessions: I really love badges: There will be the need to somehow introduce new people into how our system works. This is in some parts MVP, but in most parts probably soon-after-MVP. As a sidenote: I think we should (must) welcome all contributions by people, who are willing to be helpful and to learn (“badly written post”). If someone is not good in English, that’s fine. Let’s edit their post to improve spelling and grammar. This will help our site and their English skills, hopefully. That doesn’t mean, though, that we shouldn’t moderate/criticise people, who produce bad content and aren’t willing to learn (“bad post”). Here are my ideas: I am talking about help, specificially directed at new people. This is not about “help documents”/etc. • All help should be highly context-sensitive. It is known, that users never rarely read long texts/explanations/help documents. However, when we provide context-sensitive, short messages, there is a chance of the users really reading through them. • The help should be – wizard-like – separated into smaller steps The feed back should be given one-by-one and with verification, whether the step has really been completed. • Most help should be directly actionable. For example: When someone’s post is closed, we shouldn’t tell them, that they need to read the help center, but we should give them hints, what they might change for a better reception. Going from this, I’d imagine the following types of help tools: • General introduction (AKA “Tour” on SE) • Question assistant • Edit guide • Closed question -> get reopened assistant All of these should be optional, but on by-default. This allows users experienced with the system to skip them (in general or in parts), whilst directing new users to it. 6 Likes I like the First Posts review queue, where the first post that anybody ever does gets kicked to a special review queue by the more experienced users for editing etc. Beyond that a lot of it is culture and a need to be welcoming. If its a choice between • A. Making the small edits to salvage a question like capitalization, proper tags,making it on on topic or explaining how to upload images… • B. Yelling at the user and closing the question. I would have to hope that people would choose option A.We also need to not yell at the experienced users who choose to give a hand up to new users like happened to me and I am still not happy about 2 plus years later. There are plenty of times where the difference between a good question and a bad question is 30 seconds of editing, we should encourage helping the new users by editing their posts. 2 Likes From the context of your post I think you mean A. But yeah. That’s what I mean with the difference between bad posts and badly written posts. The former should be moderated (closing, deleting; while still being nice towards the user). The latter should be improved (editing), because the user is probably not that good in English and we won’t help anyone by removing that post. 1 Like Its Friday and the end of a really long week, but yes that is exactly what I mean. 2 Likes That’s a false dichotomy. In many cases I prefer to leave a (non-yelling) comment on how to improve the question (and I also appreciate it if someone does the same on my questions). Without casting a down vote or even a close vote. Obviously that doesn’t apply to obvious typos, or generally to cases where the issue clearly is one of mastering the language (although there I may comment simply for the reason that I’m not sure whether I understood correctly — I’m no fan of speculative fixes; someone else might react to the “fixed” post before the OP has a chance to correct it). An example of such a comment could be: Hint: The typesetting of sin and cos is much nicer if you prefix them with a backslash (like \sin). I can’t imagine anyone reasonably getting annoyed at that type of comment. Of course, if you just write that isn’t much helpful (although it’s still not yelling). 2 Likes I really like the idea of small wizards/dialogs showing up only when needed. Currently, SE heavily relies on veteran users to guide the new ones. If someone asks an off-topic question, write a comment to help; if it’s unclear, help the OP to clarify it; if the OP says “thanks, it worked”, remind them that the answer should be accepted, and so on. It might kinda work for smaller sites, but for bigger ones like SO it’s unrealistic to believe that leaving this task only to the community will be enough (even in SOpt, which is much smaller compared to SOen, with an average of 100 questions per day, it’s too much workload for the community to handle). There are more questions needing guidance than people to guide the respective OP’s. But SE refuses to improve the site, to make the interface itself provide guidance to those users. They prefer to leave the burden to the community - which is not enough - instead of making a smart UI that could guide newcomers. Things like these ideas are never implemented. IMO that’s a mistake that we shouldn’t repeat. I’d say that every “core action” in the site (voting, asking/answering, editing, what to do when someone answers my question or edits my post, anything that requires some learning about how the site works and it’s not obvious to new users) should have their own assistant. Lots of rules in SE sites are spread through Help Center articles and Meta posts (the former is more “obvious” and “easy to find”, there’s even a link to it in the top of all pages; the latter isn’t, specially for new users - I wonder what’s the percentage of users that knows about meta), so people usually take some time to learn how the site really works, what they should and shouldn’t do, and so on. I believe that the context-sensitive assistants will be very useful in order to speed up the learning curve for new users, and to minimize the burden on the others, who will have more time to do Q/A things instead of “please fix your question” stuff. 4 Likes I disagree with voting, but agree with everything else. Here’s a more clear example. A new users asked this question. The location of “the Wash” is not obvious to a lot of people so there are options 1. Nicely ask the OP where it is. 3. Wait for someone who knows where it is to answer. 4. Close the question and accuse the OP of being arrogant for not specifying the location in the original question. 5. Claim that the experienced users who edited the question with the location are making the site worse. All of those things happened, what I am saying is that options 1-3 are fine, but we need to avoid 4 and 5. Being nice does not mean that content doesn’t get moderated, it means that we are not deliberately rude when moderating or being rude to users in hopes of driving them away. I have helped close and delete hundreds of posts and over 10% of all the downvotes on Outdoors.SE are mine, I don’t have a problem with moderating content. What I have a problem with is users being rude, which in turn drives away away the users that want to help, which means that there are fewer users to do the moderating. 4 Likes Can you clarify whether “3a. Politely close the question until someone knowledgeable edits in the correct location” would be acceptable to you in this context? It seems like voting to close, and claiming the OP was arrogant and thus deserved to have their question closed, are two distinct things that don’t need to be connected. Closing can absolutely be done politely, however it’s a heavy handed solution compared to editing. Closing takes time and multiple people while editing only takes one. Here are two questions that were unclear due to lack of pictures. In both cases, they first got incorrect answers due to the lack of clarity. The solution though is to add pictures to the question, and I don’t know that closing the question would help with that. Answering a question before it totally clear is a high risk/high reward type of situation, the first answer often gets more points, but every so often you misunderstand the question and so have to rewrite the answer. I would rather people not answer questions that are unclear to them, than try to protect them from answering an unclear question by closing it. However, editing a mess to make it acceptable also has a significant long term cost. It teaches that dumping a mess on us works, in that the desired result is achieved. The OP will likely be back doing the same again, since it worked with no apparent downside to them. Perhaps even worse, bystanders get the same message. The only thing we have that they want is an answer to the question. Withholding that is our only leverage to enforce a minimum quality level. That’s what closing a question is all about. It in essence says “fix this mess or else”. Without the “or else” part, experience has shown that we largely just get ignored. So editing a bad question to make it good may seem like a good thing to do in isolation, but it often is not when looking at the site as a whole over a longer time frame. 3 Likes Yes, of course, but in reality it doesn’t work that way. All it takes is one person who either thinks they understands the question or guesses right, and the quality controls have been subverted. Answering a bad question is a case where it is of benefit to the answerer, but a detriment to the site as a whole. That’s the point of closing. Given that someone will always jump in and answer a bad question anyway, it is our mechanism to lock them out. 5 Likes As demonstrated on SE, the problem is that answerers can easily answer before questions get closed, which undermines our quality control all the same. The ideal is to prevent answers from the start if the question should be closed, and the only way to guarantee this is to require that every question passes review before being made public. This may not scale as well as only reviewing questions if someone flags it, but overall quality of visible questions should be much better. Of course there can be some sort of prioritisation in the review queue to avoid wasting time reviewing garbage. This assumes we don’t want to more new bad questions. If the goal is simply to get rid of them after the fact, closing can help. But that doesn’t really discourage more such questions, since people get answers one way or the other. And if you keep getting more and more such questions, eventually they’d overwhelm reviewers and either review standards will drop or many things will slip through the cracks, or both. 1 Like It benefits them because they get rep from the upvotes, changing the system to not reward this sort of behaviour (and instead reward based on a better voting system perhaps?) should go a long way to fix this 1 Like In cases where the problem is bad markup or lack of markup doing part of the post and leaving a comment with advice on how to finish has worked for me in the past. Friendly and helpful but also making it clear that we expect them to work up to the standard. That strategy fails for cases where the issue in incoherence or utter lack of clarity simply because the isn’t a clear way to fix part of the post. 2 Likes That’s a bit drastic, and requires a lot of drudge work by volunteers who would probably rather do other things. I proposed something a while ago that maybe should be mentioned again. Questions require a positive score to be answerable. For new users, questions start out a 0, meaning they are not immediately answerable, but a single upvote by anyone makes them so. For subsequent questions, the initial question score is the recent average (3 months or last 10 questions or something) of your questions score. This means that as long as you don’t write bad questions, only the first will be on hold until it gets the first upvote, and all your other questions are answerable immediately. It also automatically puts questions on hold from users that have proven to be problems. Basically, if you have a good track record, your questions are considered good until judged bad. If you have a bad track record, your questions are considered bad until judged good. 5 Likes Olin, I’m not sure how to square this with what you’ve said elsewhere about not really caring about the specific asker of a question. If something comes in that’s kind of a mess but I think there’s a good core question underneath, and I clean it up, and you write a good answer — doesn’t everyone win? In my experience, most “bad” questions of the type where editing helps are cases where English is a second language. Or, they’re cases where the asker doesn’t quite know encourage about the topic they have a problem with to phrase the question well. I don’t think closing those really helps get better results. On the other hand, when there’s a case of “What’s this effect” and we can only guess — I’d really like the original poster to try to describe, or else we end up with a series of guesses. Maybe there is a mixed option, where a single flag from a trusted user moves a question from the front page to a review queue? That way most of the time questions could just with no delay, and the messier cases also handled. 2 Likes At SE, I’ve often wished there were a sort of meta-ask site, let’s call it ask.se.com. The whole point of Ask would be for new askers that probably don’t know exactly which stack the question should belong to. Rather than actually answering the questions at Ask, the actions there are to quickly find the appropriate stack the question should land on. One idea would be that “tags” at Ask strictly just map to the different stacks (i.e., “photography”, “video”, “graphic-design”, “code-review”, “unix-linux”, etc.). The asker probably has some idea of where they think the question should go, but the regulars at Ask are the high-rep users and diamond mods of individual stacks. So if you, for instance, were to opine that a particular Ask question isn’t really suited to Photo-SE, your high rep at PSE would generally outweigh another Ask who isn’t active on PSE (but would otherwise have a fairly decent sense of most questions’ topicality). Perhaps all first-time askers have to ask new questions at Ask-SE (or rather, I’m advocating Ask-\${MVPname}). That way, the collective wisdom at Ask helps weed out the obvious junk from all topic-specific sites (obvious spam, questions that are clearly poorly worded or asked and aren’t likely to be salvageable). And also, the rep system at Ask can be tuned to how successful the regulars are at directing questions to the most appropriate sites, rather than being judged on individual sites’ question/answer rep. It’s a different sort of incentive system, essentially a mod or meta-mod rep system, separate from topic-specific rep. Ask rep specifically rewards / tracks a person’s sweep-up or shepherding duty, which I feel SE only really recognized with things like review queue badges. 3 Likes Not necessarily. If this shows the OP and everyone watching that they can dump crap on us and we’ll just fix it for them, then we’ll get a lot more crap that others expect us to fix. I think most are just laziness and an “Eh, who cares?” attitude. Some of that comes from previously getting away with posting bad questions. No matter how little you know about English, capitalizing the word “I” is such a universal and simple rule that there is no excuse for getting it wrong consistently. However, we see a lot of that, and other blatant crap for which there is no excuse. As for those not that good with English, maybe there should be a separate ESL review queue that the asker puts the question on voluntarily. Anyone that feels they have the necessary English skills can edit the question, and release it from the queue. It can’t be voted on until this is done, so the OP won’t get penalized for the initial bad question. This would also makes it clear that all questions posted directly are expected to be well enough written. At that point we can be ruthless about downvoting and/or closing. Citation needed. Citation needed. • Spanish: yo • French: je • German: ich • Russian: я (ya) … in fact, I can’t think of any other languages off the top of my head that do capitalize “I”. Yes, some people are lazy and don’t put effort in. But in my experience moderating on Stack Exchange, the vast majority of badly-written questions (or answers) are badly written not through malice but through inability. Why should we penalize people for making the effort to write in a second language? 7 Likes
2020-02-27 04:29:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42991939187049866, "perplexity": 1232.8692063975636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146647.82/warc/CC-MAIN-20200227033058-20200227063058-00289.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-for-college-students-7th-edition/chapter-2-section-2-4-linear-functions-and-slope-exercise-set-page-152/32
## Intermediate Algebra for College Students (7th Edition) RECALL: The slope-intercept form of a line's equation is $y=mx+b$, where $m$=slope and $b$=the y-intercept. The given equation is in slope-intercept form with m=-3 and b=2. Thus, the equation has: slope = -3 y-intercept = 2 To graph this equation, perform the following steps: (1) Create a table of values by assigning different values to $x$ and then solving for the corresponding value of y for each one. (Refer to the attached image below for the table.) (2) Plot each ordered pair, and then connect the points using a straight-edge. (Refer to the attached image in the answer part above for the graph.)
2018-11-19 05:36:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6016637086868286, "perplexity": 659.0243753640493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745281.79/warc/CC-MAIN-20181119043725-20181119065725-00239.warc.gz"}
https://forum.bebac.at/forum_entry.php?id=16050&category=22&order=time
## Still SF [Two-Stage / GS Designs] Hi Dan, » My question for today: is this still the case/state of science that you can not combine 2-stage and replicate designs? Yes. » A short update would be very much apreciated. In case that there is new literature available I would love if you could give references. Though I’m deeply involved in this kind of stuff I even don’t know anybody working on it. If we take the GL literally “The plan to use a two-stage approach must be pre-specified in the protocol along with the adjusted significance levels to be used for each of the analyses.” the blue part is the show-stopper. It’s even worse than a year ago. Do you remember this post? Since many decisions have to be taken into account in the EMA’s ABEL (CVwR >30%? CVwR >50%? GMR within 80.00–125.00?) this method itself may lead to an inflation of the type I error. The latest release of PowerTOST contains two functions which iteratively adjust α in such a way that the TIE is preserved: scABEL.ad() for the adjustment and sampleN.scABEL.ad() for the sample size. With this algo on the average you need four iterations to get the adjusted α. Hence, multiply the runtimes given at the end of this post by four… If () you succeed in convincing regulators that a pre-specified α is not necessary (te-hee) but can be estimated based on stage 1 data, it should be doable (runtime a couple of minutes at the most). Given the EMA’s skepticism concerning TSDs in general (see this post) IMHO, chances are close to nil. Dif-tor heh smusma 🖖 Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
2021-07-30 20:41:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5504761934280396, "perplexity": 1486.6780458313153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153980.55/warc/CC-MAIN-20210730185206-20210730215206-00402.warc.gz"}
https://proxies123.com/homotopy-theory-does-lifting-correspondence-hold-for-principal-bundles-too/
# homotopy theory – Does lifting correspondence hold for principal bundles too? Let $$P$$ be a (nontrivial) principal bundle over the base space $$mathbb{R}^4$$ and fibers diffeomorphic to $$SU(3)$$. Also assume that $$P$$ is equipped with an Ehresmann connection. Then, for for any two given points $$x,y in mathbb{R}^4$$, all paths that connect them are path-homotopic. Does this imply that the horizontal lifts of all these paths onto $$P$$ that start at the same point are also path-homotopic? Or at least can I assert that all such horizontal lifts end at the same point?
2021-08-05 03:35:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9025780558586121, "perplexity": 295.19528699942316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155322.12/warc/CC-MAIN-20210805032134-20210805062134-00625.warc.gz"}
https://www.cdsimpson.net/2015/
## Thursday, April 30, 2015 ### Temporal and Spatial Stability Analysis of the Orr-Sommerfeld Equation This is the second and final part of the stability analysis of the Orr-Sommerfeld Equation. In my previous post, I went through the derivation and nondimensionalization of the Orr-Sommerfeld Equation. In this part, I show how to perform the temporal and spatial stability analyses. ## Blasius Velocity Profile A 2-D Blasius boundary layer can be expressed as $$\bar{U}=f'\left(\eta\right)\tag{1}\label{blasius}$$ where \begin{align}\eta=y\sqrt{\frac{U_{\infty}}{2\nu x}}\\f'''+ff''&=0\\f(0)=f'(0)&=0\\f'(\infty)&=1\end{align} This is a nonlinear ordinary differential equation that is most easily solved using a shooting method. In a shooting method, the boundary value problem is made into an initial value problem, whereby an initial guess of one parameter is iteratively updated until the opposite boundary conditions are met. In the present case, the unknown boundary condition is $$f''(0)$$ , so it is iteratively adjusted until $$|f'(10)-1|\leq10^{-6}$$. The present case uses a fourth-order Runge-Kutta method of integration. The initial condition The transformation of the Blasius velocity profile from $$\eta$$ to $$\xi$$ is performed by determining the distance from the wall, $$\delta$$, where the stream-wise velocity is $$99.9\%$$ of the free-stream velocity. \begin{align}\bar{U}\left(\xi\right)&=\bar{U}\left(\frac{\eta}{\delta}\right)\\\bar{U}''\left(\xi\right)&=\frac{1}{\delta^{2}}\bar{U}''\left(\frac{\eta}{\delta}\right)\end{align}\tag{2}\label{transform} The figure below shows the Blasius velocity profile and its derivatives. ## Temporal Stability The Orr-Sommerfeld equation can be analyzed for temporal stability by assuming that $$\bar{\alpha}$$ is real. This enables us to rearrange the non-dimensional Orr-Sommerfeld equation as follows. $$\left(-\bar{U}\bar{\alpha}^{2}-U''+\frac{i\bar{\alpha}^{3}}{Re_{\delta}}\right)\bar{\phi}+\left(\bar{U}-\frac{2i\bar{\alpha}}{Re_{\delta}}\right)\bar{\phi}''+\left(\frac{i}{\bar{\alpha}Re_{\delta}}\right)\bar{\phi}''''=\bar{c}\left(\bar{\phi}''-\bar{\alpha}^{2}\bar{\phi}\right)\tag{3}\label{realalpha}$$ If we discretize this equation using a finite difference approximation for $$\bar{\phi}''$$ and $$\bar{\phi}''''$$, we get an equation of the form $$A\Phi=\tilde{c}B\Phi\tag{4}\label{eigmatrixform}$$ where $$\Phi$$ is a vector containing values $$\bar{\phi}_{i}$$ at discrete locations. This is nothing more than an eigenvalue problem, where $$\tilde{c}$$ is a diagonal matrix of eigenvalues for the discrete system. The present calculations use a second order central differencing scheme to approximate the second and fourth derivative terms. \begin{align}\bar{\phi}_{i}''&\approx\frac{\bar{\phi}_{i-1}-2\bar{\phi}_{i}+\bar{\phi}_{i+1}}{h^{2}}\\\bar{\phi}_{i}''''&\approx\frac{\bar{\phi}_{i-2}-4\bar{\phi}_{i-1}+6\bar{\phi}_{i}-4\bar{\phi}_{i+1}+\bar{\phi}_{i+2}}{h^{4}}\end{align}\tag{5}\label{eqfindiff} At the boundaries, $$\bar{\phi}=0$$, so the first and last columns of $$A$$ and $$B$$ are removed. However, $$\bar{\phi}'=0$$ also at the boundaries. Fortunately, using a backward difference method at the boundary gives $$\bar{\phi}_{-1}\approx\bar{\phi}_{0}$$, so the $$\bar{\phi}_{i-2}$$ term can be omitted from the $$\bar{\phi}''''$$ approximation just inside the wall boundary. Similarly, the $$\bar{\phi}_{i+2}$$ term can be omitted just inside the free-stream boundary. The finite difference approximations in Equation \ref{eqfindiff} gives a pentadiagonal $$A$$ and tridiagonal $$B$$. \begin{align}A&=\frac{1}{h^{4}}\left[\begin{array}{ccccc}a_{11} & a_{21} & a_{3} & & 0\\a_{21} & \ddots & \ddots & \ddots\\a_{3} & \ddots & \ddots & \ddots & a_{3}\\ & \ddots & \ddots & \ddots & a_{2n}\\0 & & a_{3} & a_{2n} & a_{1n}\end{array}\right]\\B&=\frac{1}{h^{2}}\left[\begin{array}{cccc}b_{1} & b_{2} & & 0\\b_{2} & \ddots & \ddots\\ & \ddots & \ddots & b_{2}\\0 & & b_{2} & b_{1}\end{array}\right]\end{align}\tag{6}\label{fdmatrix} where \begin{align}a_{1i}&=h^{4}\left(-\bar{U}_{i}\bar{\alpha}^{3}-\bar{U}_{i}''\bar{\alpha}+\frac{i\bar{\alpha}}{Re_{\delta}}\right)-2h^{2}\left(\bar{U}_{i}\bar{\alpha}-\frac{2i\bar{\alpha}}{Re_{\delta}}\right)+6\left(\frac{i}{Re_{\delta}}\right)\\a_{2i}&=h^{2}\left(\bar{U}_{i}\bar{\alpha}-\frac{2i\bar{\alpha}}{Re_{\delta}}\right)-4\left(\frac{i}{Re_{\delta}}\right)\\a_{3}&=\frac{i}{Re_{\delta}}\\b_{1}&=-h^{2}\bar{\alpha}^{3}-\bar{\alpha}\\b_{2}&=\bar{\alpha}\end{align}\tag{7}\label{fdmatrixcoef} The coefficients in $$A$$ depend on the velocity profile, which varies with the distance from the wall. However, $$B$$ is constant for a given $$\bar{\alpha}$$. In addition, $$B$$ is guaranteed to be real and symmetric, and is thus guaranteed to be invertible. We can thus left multiply Equation \ref{eigmatrixform} by $$B^{-1}$$ and rewrite the equation as follows. $$B^{-1}A\Phi=B^{-1}\tilde{c}B\Phi\tag{8}\label{newmatform}$$ It is clear that the eigenvalues of $$B^{-1}A$$ are the eigenvalues in $$\tilde{c}$$. Matlab's eig function was used to solve for the eigenvalues for each combination of $$\bar{\alpha}$$ and $$Re$$. The following figure shows the eigenvalues of the discrete system using one value of $$\bar{\alpha}$$ and $$Re$$. Each combination of $$\bar{\alpha}$$ and $$Re$$ gives a unique set of eigenvalues similar to these. Because this is a stability analysis, we only care about the most unstable eigenvalue. The governing equation is related to $$e^{i\alpha(x-\left(c_{r}+ic_{i}\right)t)}$$. If $$\alpha$$ is real, then any instabilities must come from an eigenvalue with a positive imaginary component. For this reason, the interesting eigenvalue is the eigenvalue with the maximum imaginary component. This analysis was done over a range of $$\bar{\alpha}$$ and $$Re$$, and the most unstable eigenvalue was stored for each combination. Matlab's contour function was then used to plot the contours shown in the following figure. It is important to note that the contours shown here differ from those found by Wazzan in [7] because the Orr-Sommerfeld equation has been nondimensionalized in a different manner. The present case used the boundary layer thickness $$\delta$$, while Wazzan used displacement thickness $$\delta^{*}$$. ## Spatial Stability The temporal stability analysis is performed assuming that $$\bar{\alpha}$$ is real. In the spatial stability analysis, we assume that $$\omega$$ is real. It is possible to derive a polynomial eigenvalue problem for the eigenvalues $$\bar{\alpha}$$, but solving the polynomial eigenvalue problem presented some challenges. Namely, at least one of the coefficient matrices was singular. To circumvent this challenge, a mapping method has been used to give a relation between a complex $$\bar{\alpha}$$ and a complex $$\omega$$. In the present case, we hold $$\bar{\alpha}_{i}$$ at a specific value and vary $$\bar{\alpha}_{r}$$. For each $$\bar{\alpha}_{r}$$, the discrete eigenvalue problem is solved for $$\omega$$, and the most unstable eigenvalue is kept as before. By varying $$\bar{\alpha}_{r}$$ with $$\bar{\alpha}_{i}$$, a curve of $$\omega$$ values can be plotted in the complex plane, as shown in the figure above. We then determine if there is an $$\bar{\alpha}_{r}$$ that gives $$\omega_{i}=0$$ and store any locations that satisfy this condition. This process is repeated for a range of $$Re$$ for all interesting $$\bar{\alpha}_{i}$$ values. The results of this process are shown in the figure below. ## Discussion This report is not the first on this topic, and the results do match fairly well with the references. Once the discretization was properly arranged, the eigenvalue computation was trivial using Matlab's built-in functions. Some factors added complexity to the project. The temporal stability curves are sensitive to the discretization step size and the number of $$Re$$ and $$\bar{\alpha}$$ values used. However, the spatial stability curves were far more sensitive. It turns out that an insufficient number of $$\bar{\alpha}_{r}$$ values will give bad resolution in the complex $$\omega$$ plane, periodically overestimating the zero locations. This caused some oscillations at larger values of $$Re$$. Additionally, a large number of Reynold's Number steps were required to ensure that the each of the desired contours were visible.Compounding the issue was the fact that an extra variable effectively raised computation time by nearly an order of magnitude. ## References 1. J. D. Anderson. Fundamentals of Aerodynamics. Aeronautical and Aerospace Engineering Series. McGraw Hill, 4th edition, 2007. 2. M. J. Maghrebi. Orr Sommerfeld Solver using Mapped Finite Di fference scheme for plane wake fl ow. Journal of Aerospace Science and Technology, 2(4):55-63, December 2005 3. P. J. J. Moeleker. Linear Temporal Stability Analysis. Technical report, Delft University of Technology, 1998. Series 01: Aerodynamics 07. 4. B. S. Ng and W. H. Reid. Simple Asymptotics for the Temporal Spectrum of an Orr-Sommerfeld Problem. Applied Mathematics Letters, 13:51-55, 2000. 5. S. A. Orszag. Accurate Solution of the Orr-Sommerfeld Stability Equation. Technical report, Massachusetts Institute of Technology, May 1971. 6. T. Patel. Stability Properties of Self-propelled Wakes. Master's Thesis, University of California, San Diego, 2012. 7. A. R. Wazzan, T. T. Okamura, and A. M. O. Smith. Spatial and Temporal Stability Charts for the Falkner-Skan Boundary-layer Profiles. Technical report, Douglas Aircraft Company, 1968. 8. F. M. White. Viscous Fluid Flow. McGraw-Hill Series in Mechanical Engineering. McGraw Hill, 2nd edition, 1991. ## Matlab Codes These are the Matlab files needed to perform this analysis. Please feel free to use them as you will, but please give me credit. A link to my blog is always appreciated. Main Stability Analysis Code: cs_orr_sommerfeld_stability.m Fourth Order Runge-Kutta Routine: cs_rk4.m ## Friday, April 3, 2015 ### Derivation and Nondimensionalization of the Orr-Sommerfeld Equation The Orr-Sommerfeld equation is a famous equation that can give some insight into the stability of the velocity profile of a fluid flow. This is one of two parts on the derivation and stability analysis of the Orr-Sommerfeld equation. In this post, I derive the Orr-Sommerfeld equation starting from the 2-D Navier-Stokes equations. I then show how it can be nondimensionalized. It may look like a lot of math at first glance, but it is all relatively simple. ## Derivation The 2-D Navier-Stokes equations are given as follows: \begin{align} \nabla\vec{V}&=0\\ \rho\frac{D\vec{V}}{Dt}&=-\nabla p+\mu\nabla^{2}\vec{V} \end{align} Letting $$V_{x}=U+u'$$, $$V_{y}=V+v'$$, and $$p=P+p'$$ and performing a small disturbance analysis gives the small perturbation version of the Navier-Stokes Equations \begin{align}\frac{\partial u'}{\partial x}+\frac{\partial v'}{\partial y}&=0\\ \frac{\partial u'}{\partial t}+U\frac{\partial u'}{\partial x}+V\frac{\partial u'}{\partial y}+u'\frac{\partial U}{\partial x}+v'\frac{\partial U}{\partial y}&=-\frac{1}{\rho}\frac{\partial p'}{\partial x}+\frac{\mu}{\rho}\left(\frac{\partial^{2}u'}{\partial x^{2}}+\frac{\partial^{2}u'}{\partial y^{2}}\right)\\ \frac{\partial v'}{\partial t}+U\frac{\partial v'}{\partial x}+V\frac{\partial v'}{\partial y}+u'\frac{\partial V}{\partial x}+v'\frac{\partial V}{\partial y}&=-\frac{1}{\rho}\frac{\partial p'}{\partial y}+\frac{\mu}{\rho}\left(\frac{\partial^{2}v'}{\partial x^{2}}+\frac{\partial^{2}v'}{\partial y^{2}}\right) \end{align}\tag{1} Assuming parallel flow, where $$U\approx U(y)$$ and $$V\approx0$$, we can simplify this to the the following form of the Navier-Stokes equations. \begin{align}\frac{\partial u'}{\partial x}+\frac{\partial v'}{\partial y}&=0\\\frac{\partial u'}{\partial t}+U\frac{\partial u'}{\partial x}+v'\frac{\partial U}{\partial y}&=-\frac{1}{\rho}\frac{\partial p'}{\partial x}+\frac{\mu}{\rho}\left(\frac{\partial^{2}u'}{\partial x^{2}}+\frac{\partial^{2}u'}{\partial y^{2}}\right)\\\frac{\partial v'}{\partial t}+U\frac{\partial v'}{\partial x}&=-\frac{1}{\rho}\frac{\partial p'}{\partial y}+\frac{\mu}{\rho}\left(\frac{\partial^{2}v'}{\partial x^{2}}+\frac{\partial^{2}v'}{\partial y^{2}}\right)\end{align}\label{simp_NS}\tag{2} In this analysis, disturbances are assumed to be Tollmien-Schlichting waves, with the general form as follows. \begin{align}\psi&=\phi(y)e^{i(\alpha x-\omega t)}\\u'&=\frac{\partial\psi}{\partial y}=\frac{\partial\phi}{\partial y}e^{i(\alpha x-\omega t)}\\v'&=-\frac{\partial\psi}{\partial x}=-i\alpha\phi e^{i(\alpha x-\omega t)}\end{align}\tag{3} The temporal and spatial derivatives are then calculated as follows. \begin{align}\frac{\partial u'}{\partial t}&=-i\omega\frac{\partial\phi}{\partial y}e^{i(\alpha x-\omega t)}\\\frac{\partial u'}{\partial x}&=i\alpha\frac{\partial\phi}{\partial y}e^{i(\alpha x-\omega t)}\\\frac{\partial^{2}u'}{\partial x^{2}}&=-\alpha^{2}\frac{\partial\phi}{\partial y}e^{i(\alpha x-\omega t)}\\\frac{\partial u'}{\partial y}&=\frac{\partial^{2}\phi}{\partial y^{2}}e^{i(\alpha x-\omega t)}\\\frac{\partial^{2}u'}{\partial y^{2}}&=\frac{\partial^{3}\phi}{\partial y^{3}}e^{i(\alpha x-\omega t)}\\\frac{\partial v'}{\partial t}&=-\alpha\omega\phi e^{i(\alpha x-\omega t)}\\\frac{\partial v'}{\partial x}&=\alpha^{2}\phi e^{i(\alpha x-\omega t)}\\\frac{\partial^{2}v'}{\partial x^{2}}&=i\alpha^{3}\phi e^{i(\alpha x-\omega t)}\\\frac{\partial v'}{\partial y}&=-i\alpha\frac{\partial\phi}{\partial y}e^{i(\alpha x-\omega t)}\\\frac{\partial^{2}v'}{\partial y^{2}}&=-i\alpha\frac{\partial^{2}\phi}{\partial y^{2}}e^{i(\alpha x-\omega t)}\end{align}\tag{4} We can then substitute each of these derivatives into Equation $$\ref{simp_NS}$$ and we get the following relations. \begin{align}e^{i(\alpha x-\omega t)}\left[i\alpha\frac{\partial\phi}{\partial y}-i\alpha\frac{\partial\phi}{\partial y}\right]&=0\\-\rho e^{i(\alpha x-\omega t)}\left[-i\omega\frac{\partial\phi}{\partial y}+i\alpha U\frac{\partial\phi}{\partial y}+-i\alpha\phi\frac{\partial U}{\partial y}-\frac{\mu}{\rho}\left(-\alpha^{2}\frac{\partial\phi}{\partial y}+\frac{\partial^{3}\phi}{\partial y^{3}}\right)\right]&=\frac{\partial p'}{\partial x}\\-\rho e^{i(\alpha x-\omega t)}\left[-\alpha\omega\phi+U\alpha^{2}\phi-\frac{\mu}{\rho}\left(i\alpha^{3}\phi-i\alpha\frac{\partial^{2}\phi}{\partial y^{2}}\right)\right]&=\frac{\partial p'}{\partial y}\end{align} To eliminate the pressure fluctuation term, differentiate the x- and y-momentum equations by $$y$$ and $$x$$, respectively. \begin{align}\frac{1}{-\rho e^{i(\alpha x-\omega t)}}\frac{\partial^{2}p'}{\partial x\partial y}&=-i\omega\frac{\partial^{2}\phi}{\partial y^{2}}+i\alpha\frac{\partial U}{\partial y}\frac{\partial\phi}{\partial y}+i\alpha U\frac{\partial^{2}\phi}{\partial y^{2}}-i\alpha\frac{\partial\phi}{\partial y}\frac{\partial U}{\partial y}\\&\quad-i\alpha\phi\frac{\partial^{2}U}{\partial y^{2}}+\frac{\mu}{\rho}\left(\alpha^{2}\frac{\partial^{2}\phi}{\partial y^{2}}-\frac{\partial^{4}\phi}{\partial y^{4}}\right)\\\frac{1}{-i\alpha\rho e^{i(\alpha x-\omega t)}}\frac{\partial^{2}p'}{\partial x\partial y}&=-\alpha\omega\phi+U\alpha^{2}\phi+\frac{\mu}{\rho}\left(-i\alpha^{3}\phi+i\alpha\frac{\partial^{2}\phi}{\partial y^{2}}\right)\end{align}\tag{5} Equating the two momentum equations gives $$-i\omega\frac{\partial^{2}\phi}{\partial y^{2}}+i\alpha U\frac{\partial^{2}\phi}{\partial y^{2}}-i\alpha\phi\frac{\partial^{2}U}{\partial y^{2}}+\frac{\mu}{\rho}\left(2\alpha^{2}\frac{\partial^{2}\phi}{\partial y^{2}}-\frac{\partial^{4}\phi}{\partial y^{4}}-\alpha^{4}\phi\right)+i\alpha^{2}\omega\phi-iU\alpha^{3}\phi=0$$ This simplifies to the Orr-Sommerfeld Equation. $$\left(U-\frac{\omega}{\alpha}\right)\left(\frac{\partial^{2}\phi}{\partial y^{2}}-\alpha^{2}\phi\right)-\phi\frac{\partial^{2}U}{\partial y^{2}}+\frac{i\nu}{\alpha}\left(\frac{\partial^{4}\phi}{\partial y^{4}}-2\alpha^{2}\frac{\partial^{2}\phi}{\partial y^{2}}+\alpha^{4}\phi\right)=0\label{orrsommerfeld}\tag{6}$$ ## Nondimensionalization The Orr-Sommerfeld equation is nondimensionalized using the following nondimensional parameters, $$\bar{U}=\frac{U}{U_{\infty}}\quad\xi=\frac{y}{\delta}\quad\bar{\phi}=\frac{\phi}{U_{\infty}\delta}\quad\bar{c}=\frac{c}{U_{\infty}}\quad\bar{\alpha}=\alpha\delta\quad Re_{\delta}=\frac{U_{\infty}\delta}{\nu}\tag{7}$$ where $$c=\frac{\omega}{\alpha}$$ and $$\delta$$ is the boundary layer thickness. Substituting these into Equation $$\ref{orrsommerfeld}$$ gives \begin{align}0&=\left(\bar{U}U_{\infty}-\bar{c}U_{\infty}\right)\left(\frac{1}{\delta^{2}}\frac{\partial^{2}}{\partial\xi^{2}}\left(\bar{\phi}U_{\infty}\delta\right)-\left(\frac{\bar{\alpha}}{\delta}\right)^{2}\left(\bar{\phi}U_{\infty}\delta\right)\right)-\frac{\bar{\phi}U_{\infty}\delta}{\delta^{2}}\frac{\partial^{2}}{\partial\xi^{2}}\left(\bar{U}U_{\infty}\right)\\&\quad+\frac{i\nu\delta}{\bar{\alpha}}\left(\frac{1}{\delta^{4}}\frac{\partial^{4}}{\partial\xi^{4}}\left(\bar{\phi}U_{\infty}\delta\right)-\frac{2\bar{\alpha}^{2}}{\delta^{4}}\frac{\partial^{2}}{\partial\xi^{2}}\left(\bar{\phi}U_{\infty}\delta\right)+\frac{\bar{\alpha}^{4}}{\delta^{4}}\left(\bar{\phi}U_{\infty}\delta\right)\right)\end{align}\tag{8} Applying the chain rule for each partial derivative gives \begin{align}0&=U_{\infty}\left(\bar{U}-\bar{c}\right)\left(\frac{U_{\infty}}{\delta}\frac{\partial^{2}\bar{\phi}}{\partial\xi^{2}}-\frac{\bar{\alpha}^{2}U_{\infty}}{\delta}\bar{\phi}\right)-\frac{U_{\infty}^{2}}{\delta}\frac{\partial^{2}\bar{U}}{\partial\xi^{2}}\bar{\phi}\\&\quad+\frac{i\nu\delta}{\bar{\alpha}}\left(\frac{U_{\infty}}{\delta^{3}}\frac{\partial^{4}\bar{\phi}}{\partial\xi^{4}}-\frac{2\bar{\alpha}^{2}U_{\infty}}{\delta^{3}}\frac{\partial^{2}\bar{\phi}}{\partial\xi^{2}}+\frac{\bar{\alpha}^{4}U_{\infty}}{\delta^{3}}\bar{\phi}\right)\end{align}\tag{9} Finally, factor out $$\frac{U_{\infty}^{2}}{\delta}$$ and substitute for $$Re_{\delta}$$ to get $$\left(\bar{U}-\bar{c}\right)\left(\frac{\partial^{2}\bar{\phi}}{\partial\xi^{2}}-\bar{\alpha}\bar{\phi}\right)-\frac{\partial^{2}\bar{U}}{\partial\xi^{2}}\bar{\phi}+\frac{i}{\bar{\alpha}Re_{\delta}}\left(\frac{\partial^{4}\bar{\phi}}{\partial\xi^{4}}-2\bar{\alpha}^{2}\frac{\partial^{2}\bar{\phi}}{\partial\xi^{2}}+\bar{\alpha}^{4}\bar{\phi}\right)=0\tag{10}$$ For convenience, derivatives with respect to the station coordinate $$\xi$$ are hereafter denoted with prime notation. This gives the final nondimensional form of the Orr-Sommerfeld equation: $$\left(\bar{U}-\bar{c}\right)\left(\bar{\phi}''-\bar{\alpha}\bar{\phi}\right)-\bar{U}''\bar{\phi}+\frac{i}{\bar{\alpha}Re_{\delta}}\left(\bar{\phi}''''-2\bar{\alpha}^{2}\bar{\phi}''+\bar{\alpha}^{4}\bar{\phi}\right)=0\tag{11}$$ ## Thursday, February 5, 2015 ### Delaunay Triangulation Generation of unstructured grids occurs frequently when solving numerical problems in engineering. While structured grids typically use quadrilaterals (and hexahedra in 3D) which are very simple to generate, unstructured grids use triangles (and tetrahedra in 3D), which are much more difficult to generate. Fortunately, there are many programs available to help generate these triangulations. It is important to know how these triangulations are generated. While not the only triangulation method, one method that is almost universally included is Delaunay triangulation.  In this post, I explain what Delaunay triangulation is, why it is so popular, and the process that is followed by many programs. ## What is a Delaunay triangulation? By definition, a Delaunay triangulation is a triangulation in which no vertex lies within the circumcircle of any triangle. The circumcircle of a triangle is simply the circle that passes through the three vertices of that triangle.  Here is one very simple example.  Each domain in the image below contains four points. There are exactly two triangulations for this domain. On the left, we see that one corner of each triangle is inside the circumcircle of the other. On the right, neither circumcircle encompasses the other triangle's opposing vertex. The case on the right is a Delaunay triangulation. Here you see the basic concept behind an implementation. First generate a triangulation, then check to see if any vertices fall within the cirumcircle of another triangle. If they do, simply swap the diagonal separating the two triangles. This will always give a Delaunay triangulation. ## Why is it so popular? Delaunay triangulation is fairly simple conceptually, but why is it so popular? The primary reason for its popularity is that the resulting mesh is inherently good quality.  For a two-dimensional Delaunay triangulation, it can be shown that the minimum interior angle of each triangle is maximized, and that the maximum interior angle is minimized. The resulting triangles are as equiangular as possible. Another benefit is that the triangulation is independent of the order in which nodes are placed. It is also a relatively simple triangulation method to implement for convex hulls (think of a convex hull as a domain with no "dents" on the boundary). Implementation for non-convex hulls are a bit more challenging, but not impossible. ## The Process The algorithm presented here is as described by S. W. Sloan in Ref. [1]. This routine generates a Delaunay triangulation for a set of predetermined coordinates. First, generate a triangle that encloses the entire domain to be triangulated.  This triangle is called the supertriangle. The size and shape of the supertriangle is irrelevant, as long as all points in the domain are contained within. The next portion of the algorithm is an iterative process. For each point in the domain, add the point to the triangulation by subdividing the triangle containing that point. You will then have a new node surrounded by three triangles, seen in green in the image below, and across the far edge of each new triangle is an opposing node, shown in red. For each new triangle created, if the opposing node for that triangle is not part of the supertriangle, check whether the opposing node is within the circumcircle of the new triangle. If it is, swap the diagonal separating the newly inserted node and the opposing node. In the example, two diagonals need to be swapped. Swapping the diagonal between two triangles gives two new triangles. Again, a circumcircle test is required for the new opposing nodes, and the process continues. until the circumcircle test is satisfied for all triangles connected to the recently inserted node. The entire process then repeats until all nodes in the domain have been inserted and triangulated.  The final step is to remove any triangles connected to the supertriangle vertices, leaving only the domain of interest. Below is a simple example to demonstrate the process one step at a time. ## Extension into Three Dimensions It is not difficult to see that Delaunay triangulation is possible in three dimensions.  The obvious difference for a tetrahedral mesh is that a circumsphere is used instead of a circumcircle. One difference that is not so obvious is that instead of swapping the edge dividing two triangles, you need to swap a dividing face. However, when swapping a face, there are two alternative positions, so some care needs to be taken when choosing which option to use. ## Potential Uses Delaunay triangulation, or any triangulation scheme for that matter, is great for connecting a known set of data points.  I have used this in conjunction with barycentric interpolation to create a program that quickly interpolates to find values between known data points. It can also be used to generate a mesh for finite element and finite volume programs. Because the nodes can be inserted in an arbitrary order, it could even be used as a strategy to adapt a mesh in real-time. If this post was helpful, let me know in the comments. Ask questions if I have left something unclear, and I'll try to elaborate on the subject. ## References 1. S. W. Sloan, "A fast algorithm for constructing Delaunay triangulations in the plane", Adv. Eng. Software, Vol. 9, No. 1, pp. 34-55 1987
2022-12-08 18:57:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 12, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7900753617286682, "perplexity": 816.4490643878257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711360.27/warc/CC-MAIN-20221208183130-20221208213130-00286.warc.gz"}
https://web2.0calc.com/questions/3cosx-4sinx-1
+0 # 3cosx-4sinx=1 0 865 1 3cosx-4sinx=1 math trigonometry Guest Aug 25, 2014 ### Best Answer #1 +92164 +5 There are a number of different ways to do a question like this.  Here are some ideas. http://www.thestudentroom.co.uk/showthread.php?t=1586679 ok now I will give it a go. $$\begin{array}{rlll} cos^2\theta+sin^2\theta&=&1\\ cos\theta&=&\sqrt{1-sin^2\theta\\\\ 3cosx-4sinx &=&1\\ 3\sqrt{1-sin^2x}-4sinx &=&1\qquad &\mbox{ }\\ 3\sqrt{1-sin^2x} &=&1+4sinx\qquad &\mbox{Square both sides }\\ 9\times(1-sin^2x) &=&1+8sinx+16sin^2x\qquad &\mbox{}\\ 9-9sin^2x &=&1+8sinx+16sin^2x\qquad &\mbox{}\\ 0 &=&-8+8sinx+25sin^2x\qquad &\mbox{}\\ 25sin^2x+8sinx-8 &=&0\qquad &\mbox{}\\ Let \;y=sinx&\\ 25y^2+8y-8 &=&0\qquad &\mbox{}\\ y&=&\frac{-8\pm \sqrt{64+800}}{50}\\ sinx&=&\frac{-8\pm \sqrt{16*9*6}}{50}\\ sinx&=&\frac{-8\pm 12\sqrt{6}}{50}\\ \end{array}$$ $$\underset{\,\,\,\,^{{360^\circ}}}{{sin}}^{\!\!\mathtt{-1}}{\left({\frac{\left({\mathtt{\,-\,}}{\mathtt{8}}{\mathtt{\,-\,}}{\sqrt{{\mathtt{864}}}}\right)}{{\mathtt{50}}}}\right)} = -{\mathtt{48.406\: \!856\: \!678\: \!66^{\circ}}}$$ $$\underset{\,\,\,\,^{{360^\circ}}}{{sin}}^{\!\!\mathtt{-1}}{\left({\frac{\left({\mathtt{\,-\,}}{\mathtt{8}}{\mathtt{\,\small\textbf+\,}}{\sqrt{{\mathtt{864}}}}\right)}{{\mathtt{50}}}}\right)} = {\mathtt{25.332\: \!938\: \!613\: \!029^{\circ}}}$$ These answers should be checked by substituting them back into the original equation but I am going to leave that to you. (I'm too tired) Melody  Aug 25, 2014 Sort: ### 1+0 Answers #1 +92164 +5 Best Answer There are a number of different ways to do a question like this.  Here are some ideas. http://www.thestudentroom.co.uk/showthread.php?t=1586679 ok now I will give it a go. $$\begin{array}{rlll} cos^2\theta+sin^2\theta&=&1\\ cos\theta&=&\sqrt{1-sin^2\theta\\\\ 3cosx-4sinx &=&1\\ 3\sqrt{1-sin^2x}-4sinx &=&1\qquad &\mbox{ }\\ 3\sqrt{1-sin^2x} &=&1+4sinx\qquad &\mbox{Square both sides }\\ 9\times(1-sin^2x) &=&1+8sinx+16sin^2x\qquad &\mbox{}\\ 9-9sin^2x &=&1+8sinx+16sin^2x\qquad &\mbox{}\\ 0 &=&-8+8sinx+25sin^2x\qquad &\mbox{}\\ 25sin^2x+8sinx-8 &=&0\qquad &\mbox{}\\ Let \;y=sinx&\\ 25y^2+8y-8 &=&0\qquad &\mbox{}\\ y&=&\frac{-8\pm \sqrt{64+800}}{50}\\ sinx&=&\frac{-8\pm \sqrt{16*9*6}}{50}\\ sinx&=&\frac{-8\pm 12\sqrt{6}}{50}\\ \end{array}$$ $$\underset{\,\,\,\,^{{360^\circ}}}{{sin}}^{\!\!\mathtt{-1}}{\left({\frac{\left({\mathtt{\,-\,}}{\mathtt{8}}{\mathtt{\,-\,}}{\sqrt{{\mathtt{864}}}}\right)}{{\mathtt{50}}}}\right)} = -{\mathtt{48.406\: \!856\: \!678\: \!66^{\circ}}}$$ $$\underset{\,\,\,\,^{{360^\circ}}}{{sin}}^{\!\!\mathtt{-1}}{\left({\frac{\left({\mathtt{\,-\,}}{\mathtt{8}}{\mathtt{\,\small\textbf+\,}}{\sqrt{{\mathtt{864}}}}\right)}{{\mathtt{50}}}}\right)} = {\mathtt{25.332\: \!938\: \!613\: \!029^{\circ}}}$$ These answers should be checked by substituting them back into the original equation but I am going to leave that to you. (I'm too tired) Melody  Aug 25, 2014 ### 16 Online Users We use cookies to personalise content and ads, to provide social media features and to analyse our traffic. We also share information about your use of our site with our social media, advertising and analytics partners.  See details
2018-04-20 02:55:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8994467854499817, "perplexity": 1283.3797647571998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937113.3/warc/CC-MAIN-20180420022906-20180420042906-00182.warc.gz"}
https://www.queryoverflow.gdn/query/exercise-about-fundamental-group-21_3260927.html
by caterina   Last Updated June 13, 2019 12:20 PM I have to calculate fundamental group of $$R^3\setminus$$(2 parallel line and one transversal line). The line are represented in the figure 1. Can you help me? Tags : ## Related Questions Updated July 25, 2018 03:20 AM Updated May 10, 2019 21:20 PM Updated February 24, 2016 04:08 AM Updated January 20, 2019 21:20 PM
2019-06-20 04:19:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18076880276203156, "perplexity": 3043.798073841866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999130.98/warc/CC-MAIN-20190620024754-20190620050754-00105.warc.gz"}
https://physics.stackexchange.com/questions/320082/what-does-it-mean-by-quantization-on-phase-space?noredirect=1
# What does it mean by quantization on phase space? Quantum mechanical particle's wavefunction are always described by position or momentum representation. Then, I found the so-called quantization on phase space. So, what does it really mean? Does it mean that we make momentum and position as basis in Hilbert space? Since position and momentum are non-commuting operator, how could we do that? Thank you $f(q,p) \rightarrow \hat{A_f}=\int d\mu(q,p) f(q,p) | \psi \rangle \langle \psi |$
2019-05-26 03:24:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5992244482040405, "perplexity": 330.5972744008208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258621.77/warc/CC-MAIN-20190526025014-20190526051014-00135.warc.gz"}
https://learn.microsoft.com/en-us/dotnet/csharp/misc/cs1632
# Compiler Error CS1632 Control cannot leave the body of an anonymous method or lambda expression This error occurs if a jump statement (break, goto, continue, etc.) attempts to move control out of an anonymous method block. An anonymous method block is a function body and can only be exited by a return statement or by reaching the end of the block. The following sample generates CS1632: // CS1632.cs // compile with: /target:library delegate void MyDelegate(); class MyClass { public void Test() { for (int i = 0 ; i < 5 ; i++) { MyDelegate d = delegate { break; // CS1632 }; } } }
2022-11-27 02:29:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1948765516281128, "perplexity": 7304.887205859335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710155.67/warc/CC-MAIN-20221127005113-20221127035113-00197.warc.gz"}
https://math.stackexchange.com/questions/1986392/win-a-coin-game-in-10th-round
# Win a coin game in 10th round Two players are tossing a fair coin in one round. If it is heads, the first one gets a dollar from the second player. Otherwise, the first player gives a dollar to the second one. If both players have 6 dollars in the beginning of the game, what is the probability that the first player wins all the money exactly on 10th round. • I am considering sample space to be 2^10; however, a friend of mine considers it to be 960. Whose idea is erroneous and why? Thanks. – Ulugbek Abdullaev Oct 26 '16 at 18:06 • The sample space is not as big as $2^{10}$, because the sequence $HHHHHHTTTT$ and $HHHHHHHHHH$ are considered exactly the same game (they quit playing after the sixth heads). Of course, if you think that they continue playing after one has lost just to "see what would happen", but without any money involved, and analyze those games, then the sample space has size $2^{10}$. It all depends on your interpretation. – Arthur Oct 26 '16 at 18:07 • @Arthur I got your point. Could you elaborate how to exclude those unnecessary? I'm just stuck.. – Ulugbek Abdullaev Oct 26 '16 at 18:41 • Hint: out of those ten rounds, how many Heads were there? How many Tails? We know the last one was $H$...what about the next to last? – lulu Oct 26 '16 at 19:04 • @BruceET It's very nearly the whole story. We see that we want $8$ Heads and $2$ Tails...there must be at least one $T$ in the first $6$ slots and and both $T$ must be in the first $8$. Easy to count. – lulu Oct 26 '16 at 21:58 If the coin is tossed 10 times, there could be wins at trials number 6, 8, and 10. A win at the 6th requires six heads in a row (probability $.5^6 = 0.015625.$) A first win at the 8th requires that exactly one of the first six tosses must be tails, followed by two heads (probability $6(.5)^8 = 0.0234375.$) The simulation below confirms these two results (within simulation error), and suggests that the probability of a first win at the 10th toss must have probability about 0.0265 (two or three place accuracy). I will leave it to you use similar logic to find a combinatorial formula and the exact probability. m = 10^6; frst.win = numeric(m) for(i in 1:m) { toss = sample(c(-1,1), 10, rep=T) # vector of 1's (Hs) and -1's (Ts) cs = cumsum(toss) # 1st player's cumulative totals frst.win[i] = match(6, cs) } # toss on which cum tot first reaches 6 table(frst.win)/m frst.win 6 8 10 0.015633 0.023540 0.026513
2020-12-04 18:18:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6429985165596008, "perplexity": 666.7686578027019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141740670.93/warc/CC-MAIN-20201204162500-20201204192500-00078.warc.gz"}
https://lists.oasis-open.org/archives/virtio-dev/201906/msg00044.html
OASIS Mailing List ArchivesView the OASIS mailing list archive below or browse/search using MarkMail. # virtio-dev message [Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [List Home] Subject: Re: [virtio-dev] [PATCH v3 2/2] virtio-fs: add DAX window • From: "Dr. David Alan Gilbert" <dgilbert@redhat.com> • To: "Michael S. Tsirkin" <mst@redhat.com> • Date: Tue, 25 Jun 2019 10:55:15 +0100 * Michael S. Tsirkin (mst@redhat.com) wrote: > On Mon, Jun 24, 2019 at 02:58:08PM +0100, Stefan Hajnoczi wrote: > > On Tue, Jun 18, 2019 at 09:41:25PM -0400, Michael S. Tsirkin wrote: > > > On Wed, Feb 20, 2019 at 12:46:13PM +0000, Stefan Hajnoczi wrote: > > > > Describe how shared memory region ID 0 is the DAX window and how > > > > FUSE_SETUPMAPPING maps file ranges into the window. > > > > > > > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> > > > > --- > > > > Note that this depends on the shared memory resource specification > > > > extension that David Gilbert is working on. > > > > https://lists.oasis-open.org/archives/virtio-comment/201901/msg00000.html > > > > > > > > The FUSE_SETUPMAPPING message is part of the virtio-fs Linux patches: > > > > https://gitlab.com/virtio-fs/linux/blob/virtio-fs/include/uapi/linux/fuse.h > > > > --- > > > > virtio-fs.tex | 25 +++++++++++++++++++++++++ > > > > 1 file changed, 25 insertions(+) > > > > > > > > diff --git a/virtio-fs.tex b/virtio-fs.tex > > > > index 5df5b9c..abb1e48 100644 > > > > --- a/virtio-fs.tex > > > > +++ b/virtio-fs.tex > > > > @@ -157,6 +157,31 @@ The driver MUST submit FUSE_INTERRUPT, FUSE_FORGET, and FUSE_BATCH_FORGET reques > > > > > > > > The driver MUST anticipate that request queues are processed concurrently with the hiprio queue. > > > > > > > > +\subsubsection{Device Operation: DAX Window}\label{sec:Device Types / File System Device / Device Operation / Device Operation: DAX Window} > > > > + > > > > +FUSE\_READ and FUSE\_WRITE requests transfer file contents between the > > > > +driver-provided buffer and the device. In cases where data transfer is > > > > +undesirable, the device can map file contents into the DAX window shared memory > > > > +region. The driver then accesses file contents directly in device-owned memory > > > > +without a data transfer. > > > > + > > > > +Shared memory region ID 0 is called the DAX window. The driver maps a file > > > > +range into the DAX window using the FUSE\_SETUPMAPPING request. The mapping is > > > > +removed using the FUSE\_REMOVEMAPPING request. > > > > > > I don't see FUSE\_SETUPMAPPING or FUSE\_REMOVEMAPPING under > > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/fuse.h > > > Is it just me? > > > > They are not upstream yet and can be found here: > > > > https://gitlab.com/virtio-fs/linux/blob/virtio-fs/include/uapi/linux/fuse.h#L384 > > > > There is a chicken-and-egg problem. Linux should merge this once the > > spec has been accepted. The spec makes reference to a new FUSE command > > that is being added to Linux. :D > > > > I suggest we break it by merging the VIRTIO spec change first. There > > won't be a spec release so soon anyway and we can revert it in case > > there are issues Linux. Miklos, the FUSE maintainer, is well aware of > > virtio-fs and contributes to it, so it's unlikely that Linux will reject > > these commands. > > > > > > + > > > > +After FUSE\_SETUPMAPPING has completed successfully the file range is accessible > > > > +from the DAX window at the offset provided by the driver in the request. > > > > > > Dgilbert's patches describing shared memory say that > > > the legal ways to set up mappings are all implementation-dependent. > > > How does driver know which attributes to use for the > > > mapping? > > > > Two different types of mappings: > > 1. The DAX window shared memory region described by DaveG's spec. > > 2. The file mappings established using FUSE_SETUPMAPPING. > > > > The virtio_fs.ko driver maps the DAX window, e.g. from a PCI BAR in an > > implementation-defined way. virtio_pci_*.c in Linux will have to help > > out with the implementation-specific details here. > > > > The only flags currently supported by FUSE_SETUPMAPPING are READ and > > WRITE. This depends on the file's access mode. There is nothing > > implementation-specific in FUSE_SETUPMAPPING. > > Sorry - I'm being unclear. > The guest driver maps parts of the PCI BAR. > What are the attributes of this mapping? > This is unrelated to FUSE_SETUPMAPPING things - > mapping is created by creatig PTEs and such > within guest, not by virtio things. By attributes you mean... memory ordering, cachability etc? > > > > Also, we recently had a discussion about DAX support on hosts > > > and safety wrt crashes. Do we need to expose this > > > information to guests maybe? > > > > No. Although virtio-fs uses the DAX subsystem, it does not use NVDIMM's > > persistence model (e.g. CPU cache flush for persistence). FUSE_FSYNC is > > sent when persistence is required. Therefore virtio-fs is still using > > the traditional file/block persistence model. No changes necessary for > > power failure, etc. > > > > > Finally, do we want to have a way to express that the filesystem > > > only allows RO mappings? > > > > Thanks for this idea. I'm discussing it with the FUSE community because > > mount -o ro with FUSE currently doesn't involve the file system daemon. > > > > > > + > > > > +\devicenormative{\paragraph}{Device Operation: DAX Window}{Device Types / File System Device / Device Operation / Device Operation: DAX Window} > > > > + > > > > +The device MUST allow mappings that completely or partially overlap existing mappings within the DAX window. > > > > > > > > > Any alignment requirements? > > > > Good point. There are alignment requirements and the driver has no way > > of knowing what they are. I'll find a way to communicate them into the > > guest, either via virtio or via FUSE. > > > > > Also, with no limit on mappings, it looks like guest can use up lots of > > > host VMAs quickly. Shouldn't there be a limit on # of mappings? > > > > The VM can only deteriorate its own performance, right? > > Only if QEMU is put in a container where virtual memory is > limited. > It's generally not a good idea where the only way for > host to make progress is to allocate more memory > without any limit. > > If we are in a situation where we need to either kill > the guest or hit swap, none of the choices is good. There is a bound; it's cache region size / page size - so that's ~1M mappings worst case (e.g. 4GB cache, 4kB page size) That limit can be bought down if we impose a larger granularity somewhere (and the reality is our kernel uses 2MB mapping chunks I think). > > We haven't seen catastrophic problems that bring the system to it's > > knees. > > Because you are not running malicious guests? Hmm, I didn't realise a process having an excessive number of mappings could harm any other process. Dave > > But we're aware that increasing the number VMAs slows down the > > lookup. There is currently no imposed limit. > > > > Ideas have been discussed to avoid using (so many) VMAs but it seems > > like that will take some time to develop and get upstream. This will > > not affect the virtio specification because the device interface doesn't > > > > Stefan > > > One way to address this is to expose the # of mappings > in the config space. -- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK [Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [List Home]
2022-05-24 00:29:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.635342538356781, "perplexity": 13518.945750609351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562106.58/warc/CC-MAIN-20220523224456-20220524014456-00185.warc.gz"}
https://www.physicsforums.com/threads/combustion-mass-conservation-integral.318404/
# Combustion mass conservation integral 1. Jun 6, 2009 ### greggleslarue 1. The problem statement, all variables and given/known data integrate [(rs^2)*(rhos)*(us)*(db/dr)=(d/dr)*(r^2)*(rho)*(D)*(db/dr))] rhos=density at surface us=velocity at surface rho = density D= diffusivity b=spalding non-dimensional parameter 2. Relevant equations 3. The attempt at a solution This is the solution after one integration based on the assumptions that r, rho, D, u are not dependent (rs^2)*(rhos)*(us)*(b)=(r^2)*(rho)*(D)*(db/dr)+(c1) I think this involves separation of variables to solve and integrate but I dont see the steps clearly. Can someone please show me? 1. The problem statement, all variables and given/known data 2. Relevant equations 3. The attempt at a solution 2. Jun 6, 2009 ### Cyosis First off you mean that rs is a constant right, not r? Secondly the right hand side seems a bit awkward. The d/dr operator does it act on everything that comes after it or? If so, ignoring the constants, is this the differential equation you're trying to solve?: $$\frac{db}{dr}=\frac{d}{dr}\left(r^2\frac{db}{dr}\right)$$ Edit: Looking at your attempt at a solution this must be the case. So after one integration you're left with $b=r^2 b'+c \Rightarrow b'/(b-c)=1/r^2$. You should be able to integrate this expression. Last edited: Jun 6, 2009 3. Jun 6, 2009 ### greggleslarue Yes you are correct that is the basic equation I am trying to solve. I am getting hung up on the first integral though. I dont understand the steps from solving this: db/dr=d/dr*(r^2*(db/dr)) I am thinking of this as separation of variables. IE move the dr on the left over to the right side. then the left side just becomes b, but the right side becomes more complicated. What are the steps for solving the original equation? 4. Jun 6, 2009 ### greggleslarue not the original equation i wrote but the original equation cyosis wrote. Thanks 5. Jun 6, 2009 ### Cyosis The first integral is pretty easy. Note that both sides are just "terms" that are getting differentiated with respect to r. Therefore integrating with respect to r will cancel out the differentiation operation bar a constant. \begin{align*} \int \frac{db}{dr} dr &=\int \frac{d}{dr}\left(r^2\frac{db}{dr}\right)dr \\ b &=r^2\frac{db}{dr}+c \end{align*} Last edited: Jun 6, 2009
2017-11-21 17:39:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9464084506034851, "perplexity": 1144.5292163531192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806421.84/warc/CC-MAIN-20171121170156-20171121190156-00592.warc.gz"}
https://tor-core.readthedocs.io/en/latest/HACKING/GettingStarted.html
# Getting started in Tor development¶ Congratulations! You’ve found this file, and you’re reading it! This means that you might be interested in getting started in developing Tor. (This guide is just about Tor itself–the small network program at the heart of the Tor network–and not about all the other programs in the whole Tor ecosystem.) If you are looking for a more bare-bones, less user-friendly information dump of important information, you might like reading the “torguts” documents linked to below. You should probably read it before you write your first patch. ## Required background¶ First, I’m going to assume that you can build Tor from source, and that you know enough of the C language to read and write it. (See the README file that comes with the Tor source for more information on building it, and any high-quality guide to C for information on programming.) I’m also going to assume that you know a little bit about how to use Git, or that you’re able to follow one of the several excellent guides at http://git-scm.org to learn. Most Tor developers develop using some Unix-based system, such as Linux, BSD, or OSX. It’s okay to develop on Windows if you want, but you’re going to have a more difficult time. ## Getting your first patch into Tor¶ Once you’ve reached this point, here’s what you need to know. 1. Get the source. We keep our source under version control in Git. To get the latest version, run git clone https://git.torproject.org/git/tor This will give you a checkout of the master branch. If you’re going to fix a bug that appears in a stable version, check out the appropriate “maint” branch, as in: git checkout maint-0.2.7 2. Find your way around the source Our overall code structure is explained in the “torguts” documents, currently at git clone https://git.torproject.org/user/nickm/torguts.git Find a part of the code that looks interesting to you, and start looking around it to see how it fits together! We do some unusual things in our codebase. Our testing-related practices and kludges are explained in doc/WritingTests.txt. If you see something that doesn’t make sense, we love to get questions! 3. Find something cool to hack on. You may already have a good idea of what you’d like to work on, or you might be looking for a way to contribute. Many people have gotten started by looking for an area where they personally felt Tor was underperforming, and investigating ways to fix it. If you’re looking for ideas, you can head to our bug tracker at trac.torproject.org and look for tickets that have received the “easy” tag: these are ones that developers think would be pretty simple for a new person to work on. For a bigger challenge, you might want to look for tickets with the “lorax” keyword: these are tickets that the developers think might be a good idea to build, but which we have no time to work on any time soon. Or you might find another open ticket that piques your interest. It’s all fine! For your first patch, it is probably NOT a good idea to make something huge or invasive. In particular, you should probably avoid: • Major changes spread across many parts of the codebase. • Major changes to programming practice or coding style. • Huge new features or protocol changes. 4. Meet the developers! We discuss stuff on the tor-dev mailing list and on the #tor-dev IRC channel on OFTC. We’re generally friendly and approachable, and we like to talk about how Tor fits together. If we have ideas about how something should be implemented, we’ll be happy to share them. We currently have a patch workshop at least once a week, where people share patches they’ve made and discuss how to make them better. The time might change in the future, but generally, there’s no bad time to talk, and ask us about patch ideas. 5. Do you need to write a design proposal? If your idea is very large, or it will require a change to Tor’s protocols, there needs to be a written design proposal before it can be merged. (We use this process to manage changes in the protocols.) To write one, see the instructions at https://gitweb.torproject.org/torspec.git/tree/proposals/001-process.txt . If you’d like help writing a proposal, just ask! We’re happy to help out with good ideas. You might also like to look around the rest of that directory, to see more about open and past proposed changes to Tor’s behavior. 6. Writing your patch As you write your code, you’ll probably want it to fit in with the standards of the rest of the Tor codebase so it will be easy for us to review and merge. You can learn our coding standards in doc/HACKING. If your patch is large and/or is divided into multiple logical components, remember to divide it into a series of Git commits. A series of small changes is much easier to review than one big lump. 7. Testing your patch We prefer that all new or modified code have unit tests for it to ensure that it runs correctly. Also, all code should actually be run by somebody, to make sure it works. See doc/WritingTests.txt for more information on how we test things in Tor. If you’d like any help writing tests, just ask! We’re glad to help out. 8. Submitting your patch We review patches through tickets on our bugtracker at trac.torproject.org. You can either upload your patches there, or put them at a public git repository somewhere we can fetch them (like github or bitbucket) and then paste a link on the appropriate trac ticket. Once your patches are available, write a short explanation of what you’ve done on trac, and then change the status of the ticket to needs_review. 9. Review, Revision, and Merge With any luck, somebody will review your patch soon! If not, you can ask on the IRC channel; sometimes we get really busy and take longer than we should. But don’t let us slow you down: you’re the one who’s offering help here, and we should respect your time and contributions. When your patch is reviewed, one of these things will happen: • The reviewer will say “looks good to me” and your patch will get merged right into Tor. [Assuming we’re not in the middle of a code-freeze window. If the codebase is frozen, your patch will go into the next release series.] • OR the reviewer will say “looks good, just needs some small changes!” And then the reviewer will make those changes, and merge the modified patch into Tor. • OR the reviewer will say “Here are some questions and comments,” followed by a bunch of stuff that the reviewer thinks should change in your code, or questions that the reviewer has. At this point, you might want to make the requested changes yourself, and comment on the trac ticket once you have done so. Or if you disagree with any of the comments, you should say so! And if you won’t have time to make some of the changes, you should say that too, so that other developers will be able to pick up the unfinished portion. Congratulations! You have now written your first patch, and gotten it integrated into mainline Tor.
2022-08-11 23:28:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2375519722700119, "perplexity": 1256.3650995261053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571536.89/warc/CC-MAIN-20220811224716-20220812014716-00516.warc.gz"}
https://web2.0calc.com/questions/please-help_64151
+0 +1 93 7 +128 Let O be the center and let F be one of the foci of the ellipse 25x^2 +16 y^2 = 400. A second ellipse, lying inside and tangent to the first ellipse, has its foci at O and F. What is the length of the minor axis of this second ellipse? Apr 3, 2021 #3 +113190 +2 The 'foci' equation of an ellipse is $$\frac{(x-h)^2}{a^2}+\frac{(y-k)^2}{b^2}=1$$ a is the half the length of the horizontal axis and b is half the length of the vertical axis (h,k) is the centre If a is bigger than b then it will be more long, (the major axis will be horizontal) If a is smaller than b then it will be more tall, (the major axis will be vertical) So lets see what we have been given. 25x^2 +16 y^2 = 400 divide through by 400 $$\frac{x^2}{16}+\frac{y^2}{25}=1\\ \frac{x^2}{4^2}+\frac{y^2}{5^2}=1\\$$ So the centre is (0,0) It is a tall one. The major axis will be from  (-5,0) to  (5,0) The minor axis will be from  (0,-4) to (0,4) To find the focal points we use c where $$c^2=|a^2-b^2|\\ c^2=|16-25|\\ c=3$$ So the distance from the middle to the focal point is 3 units. The foci will be at (0,-3)  and (0,3) We need to find the equation of the ellipse with focal points (0,0) and (0,3)         [I could have chosen (0,-3) if I had wanted to] the centre will be (1,1.5) c is the distance from the centre to the focal point, so c=1.5 It is just going to touch the other ellipse in one point and that point will be (0,5) So the major axis b will be  5-1.5=3.5units We have to find a $$c^2=|a^2-b^2|$$ b >a  so $$c^2=b^2-a^2\\ a^2=b^2-c^2\\ a^2=3.5^2-1.5^2\\ a^2=10\\ a=\sqrt{10}$$ So the major axis is  2*3.5=7 units long and the minor axis will be    2sqrt10 which is approx   6.32 units long For anyone interested:  the equation of the second ellipse will be $$\frac{x^2}{10}+\frac{(y-1.5)^2}{12.25}=1$$ Apr 4, 2021 #4 +1337 +1 Thank you for the explanation, I think I understand the question. :DDD I'm sorry if this is a dumb question, but what's the importance/point of a foci point? =^._.^= catmg  Apr 4, 2021 #5 +113190 +2 It is definitely not a dumb question. It is very important to understand the relevance of a focal point. A circle is the set of all points equidistant from the central point. An ellipse is the set of all points where the sum of the distance to each of the focal points stays constant. So A circle is an ellipse where the 2 foci are in the same spot. Think of the way this guy draws the ellipse in the video below and you will see what I mean. Here is how to draw one with two pins and a piece of string https://youtu.be/Et3OdzEGX_w Here is a great site that covers many important characteristics of ellipses. There is a number of interactive pictures.  Play with them make sure you understand what they are trying to demonstrate to you. https://www.mathsisfun.com/geometry/ellipse.html Melody  Apr 5, 2021 #6 +1337 +1 Thank you for responding. :DDD Ohhh I get it, it's like the average of the foci points. I shall start studying ellipses on alcumus. I think we did a similar activity during my science class when we were learning about planets. Sadly, it failed quite misreably since it was hard to get everything organized while in quaratine. I miss doing labs. :(( Last year, we were supposed to do this forensic lab where we would try to solve a m****r case. Our teacher had to make everything electronic and it was still fun, but we didn't get to do things such as collect finger prints. =^._.^= catmg  Apr 5, 2021 #7 +113190 +2 The centre is half way between the focal points, if that is what you mean. Melody  Apr 6, 2021
2021-05-13 15:25:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7332504987716675, "perplexity": 1191.0944743074504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989814.35/warc/CC-MAIN-20210513142421-20210513172421-00475.warc.gz"}
http://www.pops.gatech.edu/doku.php?id=refpot&amp;do=recent
Phonon Optimized Potentials Sidebar User Guide Examples Potential Databases refpot If you are fitting a tabulated potential, this file is used to declare which parameters are being fit. Fitting the Tersoff potential, for example, yields the following REFPOT: # Example of a Tersoff potential file for fitting # m, Inmma, lambda3, c, d, costheta0, n, beta, lambda2, B, R, D, lambda1, A Ge Ge Ge 3 1 [] [] [] [] [] [] [] [] 2.95 0.15 [] [] where the “[]” brackets designate parameters that are to be fit. SYMMETRIC PARAMETERS It is of use to define symmetries between parameters at times. If you want all of a certain parameter to be the same, use [1] for that set of parameters. Use [2] for the next set of symmetric parameters, etc.. An example is shown here: Ge Ge Ge 3 1 [1] [2] [3] [4] [] [] [] [] 2.95 0.15 [] [] Ge Ge Si 3 1 [1] [2] [-3] [-4] [] [] [] [] 2.95 0.15 [] [] All the parameters with a number in the “[]” brackets will be treated as a single parameter during fitting. The “[]” brackets with negative numbers will adopt the same magnitude of the parameters in [3] and [4], but with opposite signs. This is useful when fitting charges. refpot.txt · Last modified: 2017/03/28 14:46 by rohskopf
2020-06-05 13:16:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6320386528968811, "perplexity": 2851.697681935649}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348500712.83/warc/CC-MAIN-20200605111910-20200605141910-00119.warc.gz"}
https://math.stackexchange.com/questions/52355/solving-sum-limits-alpha-1-mu-frac1-alpha-left1-alpha2-mu-alp
# Solving $\sum\limits_{\alpha=1}^{\mu}\frac{1}{\alpha!}\left(1-\alpha(2 \mu- \alpha)\left(\frac{M-2l}{2 \mu^2 KM}\right)\right)^{\lambda}$ Solving $\sum\limits_{\alpha=1}^{\mu}\frac{1}{\alpha!}\left(1-\alpha(2 \mu- \alpha)\left(\frac{M-2l}{2 \mu^2 KM}\right)\right)^{\lambda}$ I asked about a part of this problem some time ago, and user @Did gave some suggestions, but I would like nevertheless to ask it again, since I could not arrive at a sensible solution. I use Stirling approximation $$\frac{1}{\alpha!} \approx (\frac{e}{\alpha})^{\alpha}\frac{1}{\sqrt{2 \pi \alpha}}$$ and the upper bound $$\left(1-\frac{\alpha(2 \mu- \alpha)(M-2l)}{2 \mu^2 KM}\right)^{\lambda} \leq \exp\left(-\frac{\alpha(2 \mu-\alpha)(M-2l) \lambda}{2 \mu^2 KM}\right)$$ Setting $l=0$ and $\lambda=\mu$ the expression under the exponential function simplifies to $$-\frac{\alpha(2 \mu-\alpha)}{2 \mu K}$$ so the expression (given that $\alpha^{-\alpha}=e^{log\alpha^{-\alpha}}$ and setting for simplicity $z=\frac{1}{2 \mu K}$) becomes $$\frac{1}{\sqrt{2 \pi}}\sum_{\alpha=1}^{\mu}\exp\left(-\alpha(2 \mu-\alpha)z+\alpha +\log \alpha^{-(\alpha+.5)}\right)$$ For sufficiently large $\mu$ this expression can be approximated with Riemann sums. To avoid 0, the bounds of the integral become $[1,2]$ and the approximation of this sum is $$\frac{\mu}{\sqrt{2 \pi}}\int_{1}^{2}\exp(-\mu \alpha(2 \mu -\mu \alpha)z+\mu \alpha+\log(\mu \alpha)^{-(\mu \alpha+.5)})d\alpha$$ I checked this expression nmerically though, for different values of $\mu$, and it doesn't give a good approximation at all. I also tried expanding in Taylor series without much success. Did I make a mistake somewhere? Any suggestions are massively welcome. • I don't understand the justification for setting $l=0$ and $\lambda=\mu$. Can we set $l$ and $\lambda$ to whatever we want? I mean, if you try $l=M/2$, for example... (Edit: also, the title is misleading if you're only searching for a nice approximation instead of necessarily a closed-form solution.) – anon Jul 19 '11 at 7:51 • What are you trying to do with the sum? Get bounds? Approximate it? Get asymptotical behavior? Be precise on what you want. – Patrick Da Silva Jul 19 '11 at 7:57 • yes, an approximation will do. $l=0$ and $\lambda=\mu$ should hold. – sigma.z.1980 Jul 19 '11 at 7:58 Let us however assume that $c=(2KM)^{-1}(M-2l)$ and $\lambda$ are fixed, that $c\geqslant0$ and that $\mu\to+\infty$. Then one considers $$S_\mu=\sum_{a=1}^{+\infty}\frac1{a!}x_\mu(a)$$ with $$x_\mu(a)=\left(1-ca\mu^{-2}(2\mu-a)\right)^\lambda\mathbf{1}_{a\le\mu}$$ Thus, for every fixed $a$, $x_\mu(a)\to1$ when $\mu\to+\infty$ with $x_\mu(a)\leqslant1$, and the series $\sum\frac1{a!}$ converges, hence, by Lebesgue convergence theorem, $$\lim_{\mu\to\infty}S_\mu=\sum_{a=1}^{+\infty}\frac1{a!}=\mathrm{e}-1$$ If, on the contrary, one assumes that $c\geqslant0$ is fixed and that $\lambda=\mu$ with $\mu\to+\infty$, then a similar argument yields the limit $$\lim_{\mu\to\infty}S_\mu=\exp(\mathrm{e}^{-2c})-1$$ • The asymptotic approach is 'the last resort'. I'd like to solve it assuming 'relatively small' $\mu$. – sigma.z.1980 Jul 19 '11 at 21:24
2019-10-19 12:13:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9463462233543396, "perplexity": 208.26757544909242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986693979.65/warc/CC-MAIN-20191019114429-20191019141929-00051.warc.gz"}
http://stackoverflow.com/questions/14389892/ipython-notebook-plotting-with-latex?answertab=votes
IPython Notebook: Plotting with LaTeX? Displaying lines of LaTeX in IPython Notebook has been answered previously, but how do you, for example, label the axis of a plot with a LaTeX string when plotting in IPython Notebook? - how did you install tex? Was it perhaps BasicTex from MacTex? –  minrk Jan 28 '13 at 4:21 It works the same in IPython as it does in a stand-alone script. This example comes from the docs: import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('text', usetex = True) mpl.rc('font', family = 'serif') plt.figure(1, figsize = (6, 4)) ax = plt.axes([0.1, 0.1, 0.8, 0.7]) t = np.arange(0.0, 1.0+0.01, 0.01) s = cos(2*2*pi*t)+2 plt.plot(t, s) plt.xlabel(r'\textbf{time (s)}') plt.ylabel(r'\textit{voltage (mV)}', fontsize = 16) plt.title(r"\TeX\ is Number $\displaystyle\sum_{n=1}^\infty\frac{-e^{i\pi}}{2^n}$!", fontsize = 16, color = 'r') plt.grid(True) plt.savefig('tex_demo') plt.show() - Dropping that example code above into an IPy Notebook cell returns an error with content like: "RuntimeError: LaTeX was not able to process the following string:...! LaTeX Error: File type1cm.sty' not found." I understand that IPy Notebook handles LaTeX differently than IPython alone. –  user1988816 Jan 18 '13 at 0:09 @user1988816 What OS are you using and how did you install LaTeX? –  tcaswell Jan 18 '13 at 0:42 @user1988816: Within plots, the LaTeX is handled by matplotlib, not IPython. The IPython notebook has separate mechanisms to display LaTeX, but they're not used when you make a plot. –  Thomas K Jan 18 '13 at 12:38 @user1988816 I bet you installed BasicTex from MacTex. This doesn't have quite all the packages matplotlib needs to use tex (mostly a few fonts). You can install full MacTex to get them, or just add the missing ones, which I think would be: tlmgr install dvipng helvetic palatino mathpazo type1cm` –  minrk Jan 28 '13 at 5:06 the notebook handles latex display using mathjax which is already included in the web interface. there is no need for any latex engine. ipthon qtconsole needs the latex engine. –  MySchizoBuddy Apr 12 '13 at 18:16
2015-05-06 02:05:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.805672824382782, "perplexity": 8787.634607495182}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430457725048.66/warc/CC-MAIN-20150501052205-00037-ip-10-235-10-82.ec2.internal.warc.gz"}
https://dsp.stackexchange.com/questions/46461/can-someone-explain-waveshaping-to-me/46466
# Can someone explain waveshaping to me? Can someone explain waveshaping to me? I only know it shapes waveshapes by passing through some functions, but I don't yet understand e.g. what the plots (such as the following in Melda Production MWaveShaper VST-plugin) mean. Both axes have dB values so I read the axes to be input and output. However, how is the distortion "visualized" in the graph? How does one know how the graph alters audio? • It's quite frankly simply an input/output mapping, so: not quite sure what your question is, here. – Marcus Müller Jan 16 '18 at 19:36 • @MarcusMüller So does a waveshaper work by mapping input amplitudes to new output amplitudes and thus the waveform becomes distorted (through this logic)? But I guess that it's impossible to predict from the graph, what the output will sound like? – mavavilj Jan 16 '18 at 20:03 • yes, that's how I understand it. "Impossible" is a strong word. It will sound distorted, and an experienced person might have an idea how it sounds like. But "how it sounds" is more of an aesthetic or artistic or psychological aspect, which can really only hardly be pressed into terms of signal processing. – Marcus Müller Jan 16 '18 at 20:15 • there are some things you can discern about what the output might sound like given a polynomial waveshaping function. – robert bristow-johnson Jan 17 '18 at 3:43 In the audio domain, waveshaping is simply applying a memoryless nonlinear function to an input signal. $$y(t) = g\big( x(t) \big)$$ The waveshaping function, $g(x)$, is most often a continuous function that goes through the origin: $g(0)=0$. Sometimes $g(x)$ is an odd-symmetry function: $g(-x)=-g(x)$, but it doesn't have to be. Sometimes you want 2nd harmonic distortion and then $g(x)$ does not have odd symmetry. I am more interested in the case where the waveshaping function, $g(x)$ is a polynomial of finite order $N$: $$g(x) = \sum\limits_{n=0}^{N} a_n\,x^n$$ Note that if $g(x)$ has even symmetry ($g(-x)=g(x)$), then all odd terms are zero ($a_n=0$ for $n$ odd). If $g(x)$ has odd symmetry ($g(-x)=-g(x)$), then all even terms are zero ($a_n=0$ for $n$ even). I guess, if you want $g(0)=0$, then $a_0=0$. Now, if the input is sinusoidal: $$x(t) = A \cos(2 \pi f_0 t)$$ then the output is periodic, having harmonic frequency components, with an upper limit to the frequency content. The highest frequency coming out will be $N f_0$. So waveshaping with polynomial mapping functions guarantees a limit to the highest frequencies generated and that can be useful in deciding an oversampling ratio necessary to guard against aliasing. As I alluded to in this answer, if the order of the polynomial is $N$, to insure against aliases folding back into your original passband, one must oversample with an upsampling ratio of at least $\frac{N+1}{2}$. Oversampling by 4x sufficies for a 7th-order polynomial. Also note that an even-symmetry polynomial will generate only even harmonics and an odd-order polynomial will generate only odd harmoncs. \begin{align} y(t) &= g\big( x(t) \big) \\ \\ &= \sum\limits_{n=0}^{N} a_n\,\big(x(t)\big)^n \\ \\ &= \sum\limits_{n=0}^{N} a_n\,\big(A \cos(2 \pi f_0 t)\big)^n \\ \\ &= \sum\limits_{k=0}^{N} b_k \cos(2 \pi k f_0 t) \\ \end{align} where the $b_k$ coefficients depend on the $a_n$ coefficients and the input amplitude of $A$. This relationship can be worked out using the Euler identity and the binomial theorem: \begin{align} y(t) &= g\big( x(t) \big) \\ \\ &= \sum\limits_{n=0}^{N} a_n\,\big(A \cos(2 \pi f_0 t)\big)^n \\ \\ &= \sum\limits_{n=0}^{N} a_n\,\left(\tfrac{A}{2}(e^{j 2 \pi k f_0 t}+e^{-j 2 \pi k f_0 t})\right)^n \\ \\ &= \sum\limits_{n=0}^{N} a_n\,\left(\tfrac{A}{2}\right)^n\left(e^{j 2 \pi f_0 t}+e^{-j 2 \pi f_0 t}\right)^n \\ \\ &= \sum\limits_{n=0}^{N} a_n\,\left(\tfrac{A}{2}\right)^n \sum\limits_{m=0}^{n} \frac{n!}{m!(n-m)!} (e^{j 2 \pi f_0 t})^m (e^{-j 2 \pi f_0 t})^{n-m} \\ \\ &= \sum\limits_{n=0}^{N} a_n\,\left(\tfrac{A}{2}\right)^n \sum\limits_{m=0}^{n} \frac{n!}{m!(n-m)!} e^{j 2 \pi m f_0 t} e^{-j 2 \pi (n-m) f_0 t} \\ \\ &= \sum\limits_{n=0}^{N} a_n\,\left(\tfrac{A}{2}\right)^n \tfrac12 \left(\sum\limits_{m=0}^{n} \frac{n!}{m!(n-m)!} e^{j 2 \pi m f_0 t} e^{-j 2 \pi (n-m) f_0 t} \\ + \sum\limits_{m=0}^{n} \frac{n!}{m!(n-m)!} e^{j 2 \pi m f_0 t} e^{-j 2 \pi (n-m) f_0 t} \right) \\ \\ &= \sum\limits_{n=0}^{N} a_n\,\left(\tfrac{A}{2}\right)^n \tfrac12 \left(\sum\limits_{m=0}^{n} \frac{n!}{m!(n-m)!} e^{j 2 \pi m f_0 t} e^{-j 2 \pi (n-m) f_0 t} \\ + \sum\limits_{m=0}^{n} \frac{n!}{m!(n-m)!} e^{j 2 \pi (n-m) f_0 t} e^{-j 2 \pi m f_0 t} \right) \\ \\ &= \sum\limits_{n=0}^{N} a_n\,\left(\tfrac{A}{2}\right)^n \tfrac12 \left(\sum\limits_{m=0}^{n} \frac{n!}{m!(n-m)!} e^{j 2 \pi (2m-n) f_0 t} + \sum\limits_{m=0}^{n} \frac{n!}{m!(n-m)!} e^{j 2 \pi (n-2m) f_0 t} \right) \\ \\ &= \sum\limits_{n=0}^{N} a_n\,\left(\tfrac{A}{2}\right)^n \tfrac12 \sum\limits_{m=0}^{n} \frac{n!}{m!(n-m)!} \left( e^{j 2 \pi (n-2m) f_0 t} + e^{-j 2 \pi (n-2m) f_0 t} \right) \\ \\ &= \sum\limits_{n=0}^{N} \sum\limits_{m=0}^{n} a_n\,\left(\tfrac{A}{2}\right)^n \frac{n!}{m!(n-m)!} \tfrac12 \left( e^{j 2 \pi (n-2m) f_0 t} + e^{-j 2 \pi (n-2m) f_0 t} \right) \\ \\ &= \sum\limits_{n=0}^{N} \sum\limits_{m=0}^{n} a_n\,\left(\tfrac{A}{2}\right)^n \frac{n!}{m!(n-m)!} \cos \big(2 \pi (n-2m) f_0 t \big) \\ \\ ... \\ &= \sum\limits_{k=0}^{N} b_k \cos(2 \pi k f_0 t) \\ \end{align} Okay, to get an expression for $b_k$, the only way I can figger it out is for three cases. We let $k=n-2m$ and observe that since $2m$ is always even, only even $n$ terms contribute to $b_k$ when $k$ is even. Likewise, only odd $n$ terms contribute to $b_k$ when $k$ is odd. The expression $\lfloor N/2 \rfloor$ is the floor() function applied to $N/2$. just round $N/2$ down to the nearest integer. Case 1, $k=0$ (the DC term) Only the even terms of $n$ will contribute to $b_0$. And the only term of the inside summation that contributes is that when $m=\tfrac{n}{2}$. So letting $n=2i$: $$b_0 = \sum\limits_{i=0}^{\lfloor N/2 \rfloor} a_{2i}\,\left(\tfrac{A}{2}\right)^{2i} \frac{(2i)!}{\big((i)!\big)^2}$$ In the following two cases, keep in mind that the $\cos()$ function is even symmetry and these two terms are equal: $$\cos(2 \pi (-k) f_0 t) = \cos(2 \pi k f_0 t)$$ Case 2, $k$ even, $k>0$ (even harmonics) Only the even terms of $n$ will contribute to $b_k$. The only two term of the inside summation that contribute are those where $m=\tfrac{n-k}{2}$ and $m=\tfrac{n+k}{2}$ . So letting $n=2i$: $$b_k = \sum\limits_{i=0}^{\lfloor N/2 \rfloor} a_{2i}\,\left(\tfrac{A}{2}\right)^{2i} \frac{2(2i)!}{(i-\tfrac{k}{2})!(i+\tfrac{k}{2})! } \qquad k>0 \text{ even}$$ Terms with $i<\tfrac{k}{2}$ will result in $\infty$ in the denominator because factorials of negative integers are infinite. That effectively modifies the bottom limit: $$b_k = \sum\limits_{i=k/2}^{\lfloor N/2 \rfloor} a_{2i}\,\left(\tfrac{A}{2}\right)^{2i} \frac{2(2i)!}{(i-\tfrac{k}{2})!(i+\tfrac{k}{2})! } \qquad k>0 \text{ even}$$ Case 3, $k$ odd, $k>0$ (odd harmonics) Only the odd terms of $n$ will contribute to $b_k$. The only two term of the inside summation that contribute are those where $m=\tfrac{n-k}{2}$ and $m=\tfrac{n+k}{2}$. So letting $n=2i+1$: $$b_k = \sum\limits_{i=0}^{\lfloor (N-1)/2 \rfloor} a_{2i+1}\,\left(\tfrac{A}{2}\right)^{2i+1} \frac{2(2i+1)!}{(i-\tfrac{k-1}{2})!(i+\tfrac{k+1}{2})!} \qquad k>0 \text{ odd}$$ Terms with $i<\tfrac{k-1}{2}$ will result in $\infty$ in the denominator because factorials of negative integers are infinite. That effectively modifies the bottom limit: $$b_k = \sum\limits_{i=(k-1)/2}^{\lfloor (N-1)/2 \rfloor} a_{2i+1}\,\left(\tfrac{A}{2}\right)^{2i+1} \frac{2(2i+1)!}{(i-\tfrac{k-1}{2})!(i+\tfrac{k+1}{2})!} \qquad k>0 \text{ odd}$$ . (i wouldn't mind if someone else checked my math. these results tell us something about how a polynomial waveshaping function will sound, at least what the amplitude of harmonics will be.) • See how posts pile up lines when you enjoy the subject? – A_A Jan 17 '18 at 18:12 • hay @A_A, wanna check my work? (for the 3 cases?) i am not perfectly confident i did it correctly and am too lazy to code up MATLAB to check a few examples. – robert bristow-johnson Jan 17 '18 at 18:19 • It is the end of the day in my timezone so I am going to go "pass" for the moment. But I am very interested in the impact of that function composition there, especially when the shaping is through a polynomial, so, I will definitely come back to take a second look at this, not necessarily with a critical eye. If I spot anything too much out of line I will let you know. – A_A Jan 17 '18 at 18:32 • cool. i just read a mistake that i am going to correct right now. – robert bristow-johnson Jan 17 '18 at 18:33
2020-10-29 17:04:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000091791152954, "perplexity": 1238.2239211962396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107904834.82/warc/CC-MAIN-20201029154446-20201029184446-00213.warc.gz"}
http://www.philo.uni-saarland.de/moodle/course/info.php?id=61
The Moral and Political Philosophy of John Stuart Mill John Stuart Mill remains one of the most influential of Anglophone philosophers, exerting a profound background influence on, especially, discussions on individual freedom and the boundaries of legitimate legal prohibitions. This seminar will address his treatment of these issues in his classic work On Liberty, and on his attempt to give the utilitarianism of Jeremy Bentham a more noble aspect in his Utilitarianism and "Essay on Bentham". Time permitting, attention will also be given to some of Mill's other essays, in particular The Subjection of Women, and to the conventional utilitarian criticism of his views by James Fitzjames Stephen in Liberty, Equality and Fraternity. This seminar will consist of seven weeks of double sessions. (The short duration means that the first session will be full-length.) Preliminary reading is not necessary, but is of course an advantage. So to be best prepared, some preliminary acquaintance with Mill’s life and works is recommended. (On Liberty and Utilitarianism are both available in the Reclam parallel texts orange series.) NB You are very welcome in this course regardless of how good your English is. I'm sure that we'll get the message across to each other. If you have some rudiments of high-school English, you'll be fine. Linguistic deficiencies will be irrelevant. (You will not be held responsible for the lecturer’s inadequate German!) Zeit: Freitag, 12–16 Uhr (vom 21. April bis 9. Juni)Ort:  Geb. C5 2, Raum 2.02
2020-10-23 11:21:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23912428319454193, "perplexity": 4062.960585717727}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881369.4/warc/CC-MAIN-20201023102435-20201023132435-00580.warc.gz"}
https://docs.habana.ai/en/latest/Management_and_Monitoring/Qualification_Library/Functional_Tests_Plugin.html
# Functional Test Plugins Design, Switches and Parameters¶ This section describes plugin specific switches, however, it will not focus on the common switches although these switches will be mentioned here for the completeness of the command examples. To see the common plugin switches and parameters, refer to hl_qual Common Plugin Switches and Parameters. Functional tests verify the full chip functionality while running several chip hardware modules in parallel and in a synchronized manner. All the tests described in this section verify the accuracy of calculation in parallel to performance metrics in the form of a measured frame per second. The accuracy check executed on host, will not affect the FPS measurement of the test. ## ResNet-50 Training Stress Test Plugin Design Consideration and Responsibilities¶ Note The ResNet-50 training stress test plugin is applicable for both first-gen Gaudi and Gaudi2. The ResNet-50 training stress test plugin runs a functional ResNet-50 training test as a real life training scenario. The test verifies accuracy and performance. To enable an accuracy check, the user must supply a full ImageNet training data set. ### ResNet-50 Training Stress Test Plugin Testing Modes¶ 1. The ResNet-50 training stress test plugin has two testing modes, each with different batch size options: • 64 batch size • 256 batch size 2. Random Data vs ImageNet verification: • The test can run on random data tensors, when using this mode the accuracy check is skipped and only achievable FPS is taken into account in the pass/fail criteria. This test enables pure FPS test without depending on the image augmenter (AEON). • ImageNet - When applying ImageNet dataset, the test will evaluate accuracy and FPS. The FPS could be influenced by the image augmenter (AEON) which runs the image pre-process on the Host. To prevent performance degradation, refer to the note in Test Differences Between First-gen Gaudi and Gaudi2 section. The suggested number of training epochs should not exceed 90. The ResNet50 training app should converge with that range. 90 epoch represents 20-21 hours of running. ### Test Differences Between First-gen Gaudi and Gaudi2¶ The purpose of this test for both first-gen Gaudi and Gaudi2 is the same, however, the test execution is different as Gaudi2 contains a H.264/Jpeg decoder accelerator. The following lists the main test differences: • Gaudi2 test variant needs to test the decoder HW path. • The augmentation and image preprocessing between Gaudi2 and first-gen Gaudi are different: • First-gen Gaudi - Uses AEON augmenter which uses host CPU. • Gaudi2 - Uses the Habana media pipeline meaning Jpeg decoding and image preprocessing is done on the Gaudi2 device. The impact of the above difference is that Gaudi2 test is less dependent on PCI link BW as it sends compressed images when running on multiple devices and is less dependent on host CPU resources. ### ResNet50 training - Pass/Fail Criteria¶ The performance and accuracy test evaluates the loss function received from the device. If the loss function shows an unexpected behavior, the test will fail. The test plugin verifies that the loss is decreasing through the epochs and converges behaviors according to the expected rate for the ResNet50 training process. It also verifies that there are no sharp jumps between iterations. The performance [images/sec] is calculated per training epoch. The expected results per core: FPS: 1580 images/sec Epoch runtime: 13.3 minutes FPS: 5750 images/sec Epoch runtime 3.6 minutes Expected accuracy: for 90 epoch run - 0.743. Note The trainingApp test uses the AEON augmenter which could be a limiting factor on the achievable FPS results when running on multiple devices. To enable the test to runs on all 8 devices, the user’s host machine should include: • Two NUMA nodes • 96 CPU cores (Minimum), evenly distributed between the NUMA nodes. • 384 GB of RAM. For running on smaller sets of devices, the above number could be reduced. ## ResNet-50 Training Stress Test Plugin Switches and Parameters¶ The following lists the training test plugin switches and parameters: **hl_qual -gaudi -gaudi2 -c <pci bus id> [-rmod <serial | parallel> [-dis_mon] [-mon_cfg <monitor INI path>] -trainingApp [-bs <batch size 64 | 256>] [-epoch <number of epochs>] [-n <number of iterations>] [-rand] ** • -trainingApp - Training test plugin selector. • -bs <64 | 256> - Defines the training batch size. • 64 - Batch size 64 • 256- Batch size 256 If the value is not specified, the default value is 256. If the value is not specified, the default value is training. • -epoch - Defines epoch count. • -n - Defines iteration count. You must provide -epoch flag with this flag. Example: -epoch 1 -n 1000. • -rand - Random input generation. This mode disables accuracy and loss validation. Only fps is calculated. The test uses 1500 iterations preset. • -log - Writes statistics to file. ./hl_qual -gaudi -c all -rmod parallel -trainingApp -bs 256 -rand ./hl_qual -gaudi -c all -rmod parallel -trainingApp -bs 256 -epoch 3 ./hl_qual -gaudi2 -c all -rmod parallel -trainingApp -bs 256 -epoch 3 ## ResNet-50 Training Stress Test Plugin Configuration Files and Requirements¶ Before using the plugin, make sure to perform the following: • Untar the imagenet tar file (ILSVRC2012_img_train.tar, ILSVRC2012_img_val.tar). • Change your current directory to hl_qual bin directory. • Run the preparation script - prepare.sh (The script are included in the package). This will untar all the tar files (ILSVRC2012_img_train.tar file) and generate the training list file (train_list.txt). ./prepare.sh -m MODE -d EXTRACTED_DIR -f LABEL_FILE [-h] Parameters: • MODE - ‘train’ or ‘val’. • EXTRACTED_DIR - The path to the directory that contains the untared files from ILSVRC2012_img_train.tar file. • LABEL_FILE - The path to LOC_synset_mapping.txt file. Note IMPORTANT: Expected execution time depends on the number of epochs configured for the test run. Each epoch can take up to 18 minutes. Please make a copy of the following files (can be used after installing new package on the same setup): • train_list.txt • training256.json • training64.json Reinstalling the package or relaunching the script will override/overwrite those files. ## Functional Test 2 Plugin Design Consideration and Responsibilities¶ Note Functional test 2 plugin is applicable for both first-gen Gaudi and Gaudi2. The functional test 2 runs all available hardware components on the first-gen Gaudi and Gaudi2 SOC to test the functionality and the interaction between the different units during parallel execution. When using parallel execution, the test plugin will run on all hardware components simultaneously. The functional test uses synthetic topology which introduces multiple operations that ensure using all computational units and all available memories while introducing high power usage. The output of each topology run is verified against a pre-calculated reference to verify bit exact results. The test can run for long hours and test the following device functionalities: • Thermal stress test, cooling system functionality, temperature dissipation and thermal protection mechanisms can be checked while running power stress plugin in extreme load. • PID and clock relaxation mechanisms verification • Long work periods in typical power workloads (extreme, high) • Full bit-exact calculation Tested units: • DMA engines – moving data between: • PCI ==> HBM, HBM ==>PCI • HBM ==> SRAM, SRAM==>HBM • MME engines • TPC engines • Serdes connectivity - only when using -serdes switch ### Functional Test 2 Testing Modes¶ The functional test purpose is to enable high power consumption while verifying the calculation result on each topology execution (all execution steps are been verified). The functional test contains the following sub-test modes: 1. Extreme - measured power level: 345-355 [watt] 2. High – measured power level: 200-230 [watt] 1. Extreme - measured power level for 54V power supply: 530-560 [watt] 2. High – measured power level for 54V power supply: 370-420 [watt] The measurement above is recorded from a 4 minutes run. This can change depending on the environmental status of the system (fan speed, server box configuration and ambient temperatures). The functional test 2 plugin builds a test topology including large tensors and multiple operators (Conv, Batchnorm). When applying the -serdes the topology include full serdes receive tensor verification graph include sub, L1 norm. The test application runs the topology on each test iteration by injecting pre-calculated inputs and compares the output against a pre-calculated reference for each topology execution on the device. Note The initialization stage can take up to 170 seconds. This is required to recalculate and generate the reference expected output tensors, compile the test topology and test runtime execution calibration. The init time is not included in the test running duration specified by the user when using -t switch. ### Functional test - pass/fail Criteria¶ The pass/fail criteria is composed of the following: • The calculated value of each topology launch must be identical to a pre-calculated reference. • The execution throughput [executions/seconds] must not fall below an existing predefined threshold: 1. Extreme - FPS 260 [Frame/Sec] 2. High – FPS 270 [Frame/Sec] 1. Extreme - FPS 750 [Frame/Sec], measured on HLS2 server 2. High – FPS 880 [Frame/Sec], measured on hls2 server The measurement above is recorded from a 4 minutes run. ## Functional Test 2 Plugin Switches and Parameters¶ hl_qual -gaudi|-gaudi2 -c <pci bus id> [-t <time in seconds>] -rmod <serial | parallel> [-dis_mon] [-mon_cfg <monitor INI path>] -f2 -l <extreme | high> [-d] [-dis_val] [-serdes] • -f2 - Functional test 2 plugin selector. • -d - Download once option. The input tensors are downloaded to the device at the beginning of the test and reused for all test iterations. This switch is useful when the user suspects that functional test performance degradation is due to PCI low BW. • -dis_val - Disables output tensor validation. The test will not fail on bit exact test, but may fail on low FPS. When using this switch, the test performance will be higher as the data is not uploaded to the host for verification. • -seredes - Enables running an allreduce collective operation to test the NIC in parallel to the regular functional test. • -l <extreme | high> - Power level selector: • extreme - 345-355 [w] measured on HL-205 • high - 200-230 [w] measured on HL-205 Power level selector for 54V power supply: • extreme - 530-560 [w] measured on HL-225H • high - 370-420 [w] measured on HL-225H ./hl_qual -gaudi -c all -rmod parallel -f2 -d -l high ./hl_qual -gaudi2 -c all -rmod parallel -f2 -l extreme -t 450 ./hl_qual -gaudi2 -c all -rmod parallel -f2 -l extreme -t 450 -serdes `
2022-12-08 01:44:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.318008691072464, "perplexity": 11764.978257105187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711232.54/warc/CC-MAIN-20221208014204-20221208044204-00060.warc.gz"}
https://ccjou.wordpress.com/page/2/
## 每週問題 April 25, 2016 Let $\mathcal{V}$ be the vector space spanned by functions $\cos(2x)$ and $\sin(2x)$. (a) Find the trace and determinant of the linear transformation $D(f)=f'$ from $\mathcal{V}$ to $\mathcal{V}$. (b) Find the eigenvalues and corresponding eigenvectors of $D$. ## 二階方陣的平方根 $A$ 是一個 $n\times n$ 階矩陣。若同階矩陣 $B$ 使得 $B^2=A$,我們稱 $B$$A$ 的一個平方根。對角化是矩陣平方根的標準算法。若 $A$ 可對角化為 $A=SDS^{-1}$,其中 $S$ 是一個可逆矩陣,$D=\hbox{diag}(\lambda_1,\ldots,\lambda_n)$ 的主對角元 $\lambda_i$$A$ 的特徵值。若 $C$$D$ 的一個平方根,$C^2=D$,則 $B=SCS^{-1}$$A$ 的一個平方根。若 $A$ 有兩兩相異的非零特徵值,則存在 $2^n$ 個平方根 $B=S\,\hbox{diag}(\pm\sqrt{\lambda_1},\ldots,\pm\sqrt{\lambda_n})\,S^{-1}$。但如果 $A$ 有相重特徵值或 $\lambda_i=0$,取決於 $A$ 的 Jordan 典型形式,$A$ 可能不存在平方根,存在少於 $2^n$ 或無窮多個平方根。特別的,$2\times 2$ 階矩陣的平方根公式相當簡單,原因在於其逆矩陣、特徵值與特徵向量都有容易處理的代數式。 ## 每週問題 April 18, 2016 Let $\mathcal{V}$ be an $n$-dimensional vector space, and $S_1, \ldots, S_k$ be subspaces in $\mathcal{V}$. If $\sum_{i=1}^k\dim S_i>n(k-1)$, show that $\bigcap_{1\le i\le k}S_i\neq\{\mathbf{0}\}$. ## 賦範向量空間 $\displaystyle \mathbf{x}=(x_1,x_2,\ldots)=x_1(1,0,0,\ldots)+x_2(0,1,0,\ldots)+\cdots=\sum_{i=1}^\infty x_i\mathbf{e}_i$ ## 每週問題 April 11, 2016 Let $\mathcal{V}$ be a complex inner product space. Show that two vectors $\mathbf{x}$ and $\mathbf{y}$ in $\mathcal{V}$ are orthogonal if and only if $\Vert \alpha\mathbf{x}+\beta\mathbf{y}\Vert^2=\Vert\alpha\mathbf{x}\Vert^2+\Vert\beta\mathbf{y}\Vert^2$ for all pairs of scalars $\alpha$ and $\beta$. ## 每週問題 April 4, 2016 $A^2=0$,則 $A$ 的最大秩是多少? Let $A$ be an $n\times n$ matrix and $A^2=0$. What is the maximum value of $\hbox{rank}A$? ## 每週問題 March 28, 2016 Let $A$ and $B$ be $n\times n$ nonzero matrices. (a) If $A^2=B^2=0$, is it true that $A$ and $B$ are similar if and only if $\hbox{rank}A=\hbox{rank}B$? (b) If $A^3=B^3=0$, is it true that $A$ and $B$ are similar if and only if $\hbox{rank}A=\hbox{rank}B$? ## 證明細解 2 $A$ 為一個 $n\times n$ 階矩陣。若存在同階矩陣 $A^{-1}$ 使得 $A^{-1}A=AA^{-1}=I$,則 $A$ 稱為可逆 (invertible) 矩陣。若 $A$$n$ 個線性獨立的行 (column) 與列 (row),即滿秩,記作 $\hbox{rank}A=n$,則 $A$ 稱為非奇異 (nonsingular) 或非退化 (nondegenerate) 矩陣。可逆矩陣與非奇異矩陣是同義的。我們要證明可逆矩陣的一個充要條件:可逆矩陣不具備「毀滅性」的矩陣乘法,詳述於下列定理。 ## 每週問題 March 21, 2016 If $\mathcal{V}$ is a finite-dimensional vector space and if $\{\mathbf{y}_1,\ldots,\mathbf{y}_m\}$ is any set of linearly independent vectors in $\mathcal{V}$, prove that, unless $\{\mathbf{y}_1,\ldots,\mathbf{y}_m\}$ already form a basis, we can find vectors $\mathbf{y}_{m+1},\ldots,\mathbf{y}_{m+p}$ so that $\{\mathbf{y}_1,\ldots,\mathbf{y}_m,\mathbf{y}_{m+1},\ldots,\mathbf{y}_{m+p}\}$ is a basis.
2016-05-26 02:43:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 142, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.883294939994812, "perplexity": 1010.2914365626146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275437.19/warc/CC-MAIN-20160524002115-00015-ip-10-185-217-139.ec2.internal.warc.gz"}
https://codeforces.com/problemset/problem/1780/D
D. Bit Guessing Game time limit per test 1 second memory limit per test 256 megabytes input standard input output standard output This is an interactive problem. Kira has a hidden positive integer $n$, and Hayato needs to guess it. Initially, Kira gives Hayato the value $\mathrm{cnt}$ — the number of unit bits in the binary notation of $n$. To guess $n$, Hayato can only do operations of one kind: choose an integer $x$ and subtract it from $n$. Note that after each operation, the number $n$ changes. Kira doesn't like bad requests, so if Hayato tries to subtract a number $x$ greater than $n$, he will lose to Kira. After each operation, Kira gives Hayato the updated value $\mathrm{cnt}$ — the number of unit bits in the binary notation of the updated value of $n$. Kira doesn't have much patience, so Hayato must guess the original value of $n$ after no more than $30$ operations. Since Hayato is in elementary school, he asks for your help. Write a program that guesses the number $n$. Kira is an honest person, so he chooses the initial number $n$ before all operations and does not change it afterward. Input The input data contains several test cases. The first line contains one integer $t$ ($1 \le t \le 500$) — the number of test cases. The description of the test cases follows. The first line of each test case contains the number $\mathrm{cnt}$ — the initial number of unit bits in the binary notation $n$. The hidden integer $n$ satisfies the following constraint: $1 \le n \le 10^9$. Interaction To guess $n$, you can perform the operation at most $30$ times. To do that, print a line with the following format: "- x" ($1 \le x \le 10^9$). After this operation, the number $x$ is subtracted from $n$, and therefore $n$ is changed. If the number $x$ is greater than the current value of $n$, then the request is considered invalid. After the operation read a line containing a single non-negative integer $\mathrm{cnt}$ — the number of unit bits in the binary notation of the current $n$ after the operation. When you know the initial value of $n$, print one line in the following format: "! n" ($1 \le n \le 10^9$). After that, move on to the next test case, or terminate the program if there are none. If your program performs more than $30$ operations for one test case, subtracts a number $x$ greater than $n$, or makes an incorrect request, then response to the request will be -1, after receiving such response, your program must exit immediately to receive the Wrong Answer verdict. Otherwise, you can get any other verdict. After printing a query or the answer, do not forget to output the end of line and flush the output. Otherwise, you will get Idleness limit exceeded. To do this, use: • fflush(stdout) or cout.flush() in C++; • System.out.flush() in Java; • flush(output) in Pascal; • stdout.flush() in Python; • see documentation for other languages. Hacks To make a hack, use the following format. The first line should contain a single integer $t$ ($1 \leq t \leq 500$). Each test case should contain one integer $n$ ($1 \leq n \leq 10^9$) on a separate line. Example Input 3 1 0 1 1 0 2 1 0 Output - 1 ! 1 - 1 - 1 ! 2 - 2 - 1 ! 3 Note For example, the number of unit bits in number $6$ is $2$, because binary notation of $6$ is $110$. For $13$ the number of unit bits is $3$, because $13_{10} = 1101_2$. In the first test case, $n = 1$, so the input is the number $1$. After subtracting one from $n$, it becomes zero, so the number of unit bits in it is $0$. In the third test case, $n = 3$, which in binary representation looks like $3_{10} = 11_2$, so the input is the number of ones, that is $2$. After subtracting $2$, $n = 1$, so the number of unit bits is now $1$. After subtracting one from $n$, it becomes equal to zero. Note that the blank lines in the input and output examples are shown for clarity and are not present in the actual interaction.
2023-02-08 15:58:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7440217137336731, "perplexity": 610.5621837065328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500837.65/warc/CC-MAIN-20230208155417-20230208185417-00424.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=1933334
MathSciNet bibliographic data MR1933334 46E30 (42A65 46B15) Kleper, Dvir; Schechtman, Gideon Block bases of the Haar system as complemented subspaces of \$L_p,\ 2 For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
2017-06-24 04:19:27
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9976349472999573, "perplexity": 13858.605926000744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320215.92/warc/CC-MAIN-20170624031945-20170624051945-00001.warc.gz"}
http://mathoverflow.net/feeds/question/111940
Norm bound of the entrywise logarithm of a stochastic matrix stationary matrix - MathOverflow most recent 30 from http://mathoverflow.net 2013-06-20T05:20:42Z http://mathoverflow.net/feeds/question/111940 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/111940/norm-bound-of-the-entrywise-logarithm-of-a-stochastic-matrix-stationary-matrix Norm bound of the entrywise logarithm of a stochastic matrix stationary matrix Daniel86 2012-11-09T21:11:24Z 2012-11-10T11:07:40Z <p>Hello,</p> <p>Denote $\log_\star$ as the entrywise logarithm operation, and let $A$ be some row-stochastic matrix such that $\lim_{p\rightarrow\infty}A^p$ exists and all its entries are non-zero.</p> <p>As a part of my research, I am interested in upper-bounding the following expression: $\parallel\log_\star \lim_{p\rightarrow\infty}A^p\parallel_2$, in terms of $\parallel A\parallel_2$. </p> <p>Does anyone have an idea?</p> <p>Thank you.</p>
2013-06-20 05:20:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9557909369468689, "perplexity": 2939.468696044104}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710313659/warc/CC-MAIN-20130516131833-00030-ip-10-60-113-184.ec2.internal.warc.gz"}
https://physics.meta.stackexchange.com/tags/homework-and-exercises/hot
# Tag Info 59 Questions that can be summarized as "please solve this exercise" or "please plug these numbers into an equation for me" are OFF topic. 56 Questions that can be summarized as "Please explain what this aspect of a solution/derivation means or why it makes sense" are ON topic 40 Questions that can be summarized as "I was working on X and didn't understand why Y isn't the case" or similar are ON topic. 35 Please, please, please ban homework altogether. Allowing homework has resulted in a deluge of questions from users with a reputation of 1 asking us to do their work for them. Yes, we can tag then ignore homework questions, but (a) that requires us to be continually editing tags and (b) it makes the site look rubbish for anyone not au fait with filtering by ... 31 Questions that can be summarized as "I want to solve this problem but do not know how. What relevant physics do I need to research/learn to solve it?" are ON topic. 29 ToughSTEM. A question answer community on a mission to share solutions for all STEM major problems. This is a site built specifically for answering STEM (science, technology, engineering, and mathematics -> physics) problems in a Q&A format like the Stack Exchange network. We love to check your work questions. It is and always will be 100% free. It ... 29 Questions which ask us to perform calculations are off topic. This is too broad. I recognize that it's intended to head off boring copied-from-homework questions like "what's the optimal angle for a 45 mph banked turn if the coefficient of friction is μ = 0.233457821234 also are all those digits important kthxbye". However calculation questions like "... 25 I thought it would be worth distilling my thoughts into a concrete proposal. This is essentially what Manishearth already proposed - I'm just hoping to make the idea clearer. In many ways it is not a big change to the current policy, but it is a big change to the way the policy is presented, and I think it would have a lot of benefits. The current homework ... 25 Yes, we should. "Homework" isn't the issue I think we all agree that whether or not a post comes from a homework assignment is irrelevant, as illustrated by this example question: Consider a transmon qubit with capacitance C and Josephson junction critical current I_c. What is the matrix element <1|Q|0> for this system? I might be motivated to ask ... 22 The current policy is summarised as "we do accept homework questions, but only if X, Y, Z". It would be better to move to a policy that says "we do not accept homework questions. However, if you do X, Y and Z it will not be a homework question any more and we can accept it." 20 Sound pitch of glass with water This is not a homework or homework-like question, it's not a calculation request, it's completely conceptual, but the question doesn't show any prior research, or any evidence that the poster has made an effort to figure out the answer themselves. You might think that's perfectly fine. On the other hand, if you believe the ... 20 Recently I've noticed a few people posting questions in rapid succession, each with a screenshot of a problem. If you look at the questions as a whole, it's obvious they've split their entire problem set or take-home exam into tiny pieces and asked us how to do the whole thing. This sort of behavior, which is undesirable but not necessarily obvious to ... 19 http://www.physicsforums.com 19 To discourage rapid postings of the type alluded to in the question, methinks screenshots of text should be outright banned. I can understand a screenshot of a figure, but if it can be easily done with LaTeX/MathJax, there’s no place for a screenshot. If anything, the time and effort going to typesetting makes it easier to justify that the OP has done some ... 18 Mutual $E$ force due to charged coaxial rings I think this is probably homework because it's really just asking how to solve a problem. However you could argue it's conceptual in the sense that it's asking about how to approach this sort of problem for arbitrary geometries. A belated footnote: the simplest approach I've seen to doing this requires the use ... 18 Three dimensional isotropic harmonic oscilator Hamiltonian This is an advanced question (on quantum mechanics) that shows detailed effort. It could be argued that it doesn't actually ask anything beyond "what am I doing wrong?", though. Vote up if you think this should be on topic under the new policy, or vote down if you think it should be off topic under ... 17 I agree with a strong caveat. The tag wiki excerpt must function as an exceptionally well-tuned tool to catch the great homework/problem-solving questions, turn the borderline cases into good questions, and prevent bad questions as far as possible. The way tag synonyms work at Ask a Question time is the following. You type in the tag you think you should ... 17 [moderator hat off: personal opinion] We don’t close homework-like questions based on where they are from. We close homework-like questions when they are about doing some single-purpose computation (what is this coefficient of friction, where have I lost my minus sign) versus conceptual questions (why does energy work like this, why do approach A and and ... 16 Bad: Your question looks like: "Here is my homework. Solve it instead of me, now". You scanned/photographed your textbook. You don't show effort or curiosity towards the solution. You don't bother to begin sentences with capital cases or end them with a ".". Good: You aren't asking for the numerical solution; you want to understand how you can solve it ... 16 No, I don't have the tag on ignore. Upvote if this is the case. To keep the numbers clean, please don't downvote. $\quad$ 16 The exam is over now! Thankfully, nothing went awry on this particular site. 15 The How to Ask sidebar Homework questions are not allowed on this site. You should ask about specific concepts. See our homework policy. 15 Questions which can be summarized as "This is the statement of an example problem. (Perhaps the "simplest" and/or the "most interesting" case I have been able to think of at the moment.) Rather than solving it (which I may or may not be able to do myself) I'd like you to point out some useful "standard", "technical" terminology, including some name for the ... 14 I think there is one real (or at least, really potential) issue with problem-solving: it could be applied to almost every question on the site. Presumably people wouldn't go around editing it in willy-nilly, but it is likely that new users will apply it without discrimination to their questions. I'm thinking "Oh, I have a problem, and I'd like it solved". ... 14 How about homework-and-exercises? It is self-explanatory, and that way we still call a spade a spade while simultaneously leave open the possibility that it might not be actual homework. 14 I have some sympathy with Brandon's point. I'm a bit concerned we'll end up seeming an unfriendly and elitist site. The moderators are admirably tactful in dealing with homework questions, but I see comments to homework questions that while justified strike me as a bit bitchy. I worry these will put new users off - it's easy for us old timers to forget how ... 14 From what I gather, whether students use Physics.SE to cheat or not has never been our primary concern when considering the homework policy. The main reason behind the "harsh" homework policy is that the SE is not a forum where nice people come around and solve your problems for you. It is intended to be a community of more or less knowledgeable physicists ... 14 Introductory remarks First some remarks to address specific points or misconceptions in your post, then I'll try to pick apart this case. I don't think there is an agreed definition of "student". I take the word to mean any one who is seriously studying the subject. Being in secondary school certainly is no barrier to be a full participant on Physics SE. ... 14 In my opinion, in this discussion there is too much focus on the "question" and not enough on the "value of the question plus the answer". If the goal of the site is to be a "resource" to all serious students of physics, we need to make sure that the question-plus-answer becomes something of lasting value. "I need an answer to this homework question before ... 14 Your question does not ask a conceptual question about physics, it simply asks users to solve the exercise you call "the Devil's problem". That you ask us to solve the exercise not by saying "Solve this for me" but by asking whether a given number is the answer doesn't change anything in my eyes - how is an answer supposed to make the ... Only top voted, non community-wiki answers of a minimum length are eligible
2021-09-20 08:48:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36420938372612, "perplexity": 722.0431941716985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057033.33/warc/CC-MAIN-20210920070754-20210920100754-00249.warc.gz"}
http://www.last.fm/it/music/Death+Grips/_/Guillotine+(It+Goes+Yah)?setlang=it
# Last.fm Elimina As your browser speaks English, would you like to change your language to English? Or see other languages. Death Grips # Guillotine (It Goes Yah) Non è ancora disponibile una wiki... ## Bacheca Aggiungi un commento. Accedi a Last.fm o registrati. • it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes it goes • IT GOES IT GOES IT GOES YAH !
2016-02-12 10:25:44
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9776131510734558, "perplexity": 2048.1035878533226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163663.52/warc/CC-MAIN-20160205193923-00209-ip-10-236-182-209.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/58917/making-a-square-centered-at-a-point-in-tikz
Making a square centered at a point in TikZ Im a newbie to TikZ, so this question may seem stupid. Im a looking at a facility location problem in the plane where I have generated some data points consisting of (x,y)-coordinates for each customer node and each facility node. In the literature, customers nodes are usually marked by a circular dot so I have used the notation \draw [fill] (x,y) circle [radius=0.05]; for some point specified by (x,y). The facility nodes are usually marked with a small square. But how do I create a square centered at the point (x,y)? I have looked at rectangle function. But as I have the center point, this approach does not seem to be the right way to go. Can anyone help me here? - For this application, I would recommend to use the \draw plot [<options>] coordinates {<coordinate list>}; functionality. If you load \usetikzlibrary{plotmarks}, you have access to a variety of different marks, including filled or empty squares, and it's really easy to provide the coordinates. \documentclass{article} \usepackage{tikz} \usetikzlibrary{plotmarks} \begin{document} \begin{tikzpicture} \draw [very thin, lightgray] (0,0) grid (4,4); \draw [cyan] plot [only marks, mark=square*] coordinates {(1,1) (2,3) (2.5,2)}; \draw [orange] plot [only marks, mark size=2.5, mark=*] coordinates {(0,0.5) (1,1.5) (1,2.5) (2,1) (4,2)}; \end{tikzpicture} \end{document} - The little problem with marks, when you want to scale the picture, you scale the marks. For example if xand yare different, the marks lok not fine. It's not always desired. I don't know if it's possible to avoid this feature. It's a problem in geometry with points. It would be interesting to altern between marks and nodes or coordinates. –  Alain Matthes Jun 7 '12 at 14:31 You can make the plot mark size independent of the scale by putting \makeatletter \def\pgfuseplotmark#1{\pgftransformresetnontranslations\csname pgf@plot@mark@#1\endcsname} \makeatother into your preamble. –  Jake Jun 7 '12 at 14:55 Thanks very useful! Perhaps it is a good idea that I ask the question for other users. It's interesting to know this feature. –  Alain Matthes Jun 7 '12 at 15:01 To complete Jake's answer, other possibilities are : \documentclass{article} \usepackage{tikz} \begin{document} \begin{tikzpicture} \newcommand\Square[1]{+(-#1,-#1) rectangle +(#1,#1)} \draw [very thin, lightgray] (0,0) grid (4,4); \draw (2,3) +(-2pt,-2pt) rectangle +(2pt,2pt) ; \draw (2,3) \Square{12pt} ; \end{tikzpicture} \end{document} You can use a node and also a coordinate like this \begin{tikzpicture} [dot/.style={draw,rectangle,minimum size=4mm,inner sep=0pt,outer sep=0pt,thick}] \draw [very thin, lightgray] (0,0) grid (4,4); \path (1,1) coordinate[dot] ; \end{tikzpicture} With a node : \node [rectangle,minimum size=4mm,inner sep=0pt,outer sep=0pt,thick] at (1,1) {}; -
2015-08-28 19:26:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8581489324569702, "perplexity": 2208.5592231363926}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063881.16/warc/CC-MAIN-20150827025423-00164-ip-10-171-96-226.ec2.internal.warc.gz"}
https://ateliers-frileuse.com/blog/lemon-pound-cpzsmt/emjpk.php?tag=how-is-a-wave-function-related-to-an-orbital%3F-8bb258
# how is a wave function related to an orbital? 23 The new quantum mechanics did not give exact results, but only the probabilities for the occurrence of a variety of possible such results. The fifth 3d orbital, called the $$3d_{z^2}$$ orbital, has a unique shape: it looks like a $$2p_z$$ orbital combined with an additional doughnut of electron probability lying in the xy plane. 0 or Notation of complex valued atomic orbitals, but I've not found a complete clarification. This correlation is necessarily ignored in the molecular orbital wave function, and the resulting error is often referred to as the correlation error. What is Bose-Einstein condensate used for? u In quantum physics, you can determine the angular part of a wave function when you work on problems that have a central potential. ℓ Drum mode One can substitute "orbital" with "wavefunction" and the meaning is the same. Fundamentally, an atomic orbital is a one-electron wave function, even though most electrons do not exist in one-electron atoms, and so the one-electron view is an approximation. 20 In argon, the 3s and 3p subshells are similarly fully occupied by eight electrons; quantum mechanics also allows a 3d subshell but this is at higher energy than the 3s and 3p in argon (contrary to the situation in the hydrogen atom) and remains empty. When x For a one-dimensional particle, the time-dependent Schroedinger equation can be written, (a) 1s electrons can be "found" anywhere in this solid sphere, centered on the nucleus. There is no 2s in here at all. An atom that is embedded in a crystalline solid feels multiple preferred axes, but often no preferred direction. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell. Although not as accurate by themselves as STOs, combinations of many Gaussians can attain the accuracy of hydrogen-like orbitals. The shapes of atomic orbitals in one-electron atom are related to 3-dimensional spherical harmonics. = This article, in order to show wave function phases, shows mostly ψ(r, θ, φ) graphs. For the case where ℓ = 0 there are no counter rotating modes. (see hydrogen atom). Note: empty cells indicate non-existent sublevels, while numbers in italics indicate sublevels that could (potentially) exist, but which do not hold electrons in any element currently known. Every orbital is a wave function, but not every wave function is an orbital. The latter induce tiny binding energy differences especially for s electrons that go nearer the nucleus, since these feel a very slightly different nuclear charge, even in one-electron atoms; see Lamb shift. (A-2), radial wave functions are not changed by a parity transformation. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron-positron production from these effects has been claimed to be observed. The Stern–Gerlach experiment — where an atom is exposed to a magnetic field — provides one such example.[19]. 2 However some quantum physicists[22][23] include a phase factor (−1)m in these definitions, which has the effect of relating the px orbital to a difference of spherical harmonics and the py orbital to the corresponding sum. This is the case when electron correlation is large. the energy is pushed into the shell two steps higher. n (b) The electron density map plots the points where electrons could be. 0 = For this to happen, the electron would need to gain an energy of exactly E2 − E1. m a ψ ( r, θ ϕ, t. An orbital is a description of that wave function’s solution in terms of known functions (spherical harmonics) and quantum numbers (like S, P…). This notation means that the corresponding Slater determinants have a clear higher weight in the configuration interaction expansion. ℓ α {\displaystyle \ell =0} The advantage of spherical coordinates (for atoms) is that an orbital wave function is a product of three factors each dependent on a single coordinate: ψ(r, θ, φ) = R(r) Θ(θ) Φ(φ). In our current understanding of physics, the Bohr model is called a semi-classical model because of its quantization of angular momentum, not primarily because of its relationship with electron wavelength, which appeared in hindsight a dozen years after the Bohr model was proposed. - and The newly discovered structure within atoms tempted many to imagine how the atom's constituent parts might interact with each other. [a] The Bohr model for a short time could be seen as a classical model with an additional constraint provided by the 'wavelength' argument. The number of electrons in an electrically neutral atom increases with the atomic number. Orbitals for ℓ > 3 continue alphabetically, omitting j (g, h, i, k, ...)[3][4][5] because some languages do not distinguish between the letters "i" and "j".[6]. * See Answer Additionally, an electron always tends to fall to the lowest possible energy state. Wave functions are solution of Schroedinger's equation. None of the other sets of modes in a drum membrane have a central antinode, and in all of them the center of the drum does not move. The spatial components of these one-electron functions are called atomic orbitals. The periodic table may also be divided into several numbered rectangular 'blocks'. ℓ {\displaystyle n} This is related to the shape of the orbital. As the principal quantum number increases, the orbital becomes larger and will have a higher energy level. u In the exact wave function, the motions of the electrons tend to be correlated so that if one electron is on the left, the other tends to be on the right. is given in the following table. Represents Orbital(?2) Represents OrbitalE? (psi ultimately refers to a wave function) How should I interpret that ? 1 Photons that reach the atom that have an energy of exactly E2 − E1 will be absorbed by the electron in state 1, and that electron will jump to state 2. Three of these planes are the xy-, xz-, and yz-planes—the lobes are between the pairs of primary axes—and the fourth has the centres along the x and y axes themselves. m 22 ℓ …mathematical function known as a wave function, denoted ψ. All other orbitals (p, d, f, etc.) {\displaystyle r_{max}=25a_{0}} m A wave function describes the probability of a particle's quantum state in terms of it's position, momentum, time, and/or spin In states where a quantum mechanical particle is bound, it must be localized as a wave packet, and the existence of the packet and its minimum size implies a spread and minimal value in particle wavelength, and thus also momentum and energy. I have my answer, thanks a lot ;) $\endgroup$ – titipoof Jun 27 '14 at 11:16 Orbitals of multi-electron atoms are qualitatively similar to those of hydrogen, and in the simplest models, they are taken to have the same form. With J. J. Thomson's discovery of the electron in 1897,[13] it became clear that atoms were not the smallest building blocks of nature, but were rather composite particles. To do this, the wave function, which may include an imaginary number, is squared to yield a real number solution. Each successively higher value of The following is the order for filling the "subshell" orbitals, which also gives the order of the "blocks" in the periodic table: The "periodic" nature of the filling of orbitals, as well as emergence of the s, p, d, and f "blocks", is more obvious if this order of filling is given in matrix form, with increasing principal quantum numbers starting the new rows ("periods") in the matrix. Loosely speaking n is energy, ℓ is analogous to eccentricity, and m is orientation. = The mathematical derivation of energies and orbitals for electrons in atoms comes from solving the Schrodinger equation for the atom of interest. = • The sign of a wave function is not important - for considering electron density which is related to the square of the wave function and has to be positive • It is important when two wavefunctions interact (see later) • It does not matter that the 2s is represented as positive and negative in books – x = ℓ In 1909, Ernest Rutherford discovered that the bulk of the atomic mass was tightly condensed into a nucleus, which was also found to be positively charged. m Electrons jump between orbitals like particles. n A bulb of 40 W is producing a light of wavelength 620 nm with 80% of efficiency, then the number of photons emitted by the bulb in 20 seconds are (1 e V = 1. It is also mentioned that psi does not carry any physical meaning. {\displaystyle n} Atomic orbitals can be the hydrogen-like "orbitals" which are exact solutions to the Schrödinger equation for a hydrogen-like "atom" (i.e., an atom with one electron). Also, in 1927, Albrecht Unsöld proved that if one sums the electron density of all orbitals of a particular azimuthal quantum number ℓ of the same shell n (e.g. , where Z is the atomic number, The 1s orbital has zero radial nodes. A hydrogen atom has the wave function (a) What is the magnitude of the orbital angular momentum of the electron around the proton? Atom exhibits both wave nature and particle nature. These correspond to a node at the nucleus for all non-s orbitals in an atom. However, this did not explain similarities between different atoms, as expressed by the periodic table, such as the fact that helium (two electrons), neon (10 electrons), and argon (18 electrons) exhibit similar chemical inertness. The three p-orbitals for n = 2 have the form of two ellipsoids with a point of tangency at the nucleus (the two-lobed shape is sometimes referred to as a "dumbbell"—there are two lobes pointing in opposite directions from each other). In this sense, the electrons have the following properties: Thus, electrons cannot be described simply as solid particles. ), Wave function of 4p orbital (real part, 2D-cut, one-electron atoms, the wave functions are available in most physical chemistry textbooks up through n = 3. The value of the wave function of a particle at a given point of space and time is related to the likelihood of the particle’s being there at the time. The above equations suppose that the spherical harmonics are defined by -values. No. θ within a given In atoms with multiple electrons, the energy of an electron depends not only on the intrinsic properties of its orbital, but also on its interactions with the other electrons. When thinking about orbitals, we are often given an orbital visualization heavily influenced by the Hartree–Fock approximation, which is one way to reduce the complexities of molecular orbital theory. The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low angular momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. ≤ [9] However, the idea that electrons might revolve around a compact nucleus with definite angular momentum was convincingly argued at least 19 years earlier by Niels Bohr,[10] and the Japanese physicist Hantaro Nagaoka published an orbit-based hypothesis for electronic behavior as early as 1904. Additionally, as is the case with the s orbitals, individual p, d, f and g orbitals with n values higher than the lowest possible value, exhibit an additional radial node structure which is reminiscent of harmonic waves of the same type, as compared with the lowest (or fundamental) mode of the wave. ) For example, the orbital 1s2 (pronounced as the individual numbers and letters: "'one' 'ess' 'two'") has two electrons and is the lowest energy level (n = 1) and has an angular quantum number of ℓ = 0, denoted as s. There is also another, less common system still used in X-ray science known as X-ray notation, which is a continuation of the notations used before orbital theory was well understood. In fact, it can be any positive integer, but for reasons discussed below, large numbers are seldom encountered. {\displaystyle \ell } Thus, two electrons may occupy a single orbital, so long as they have different values of s. However, only two electrons, because of their spin, can be associated with each orbital. : thus, electrons can not be restricted to a the wave function of an?... State is given by the following [ 20 ] [ 21 ] a node the... Solid particles you where the wave function of an electron bound to an orbital ℓ { \displaystyle n=1 } has! Would be no sense in distinguishing m = −1 for electronic structure one-electron functions are the property their... A particular electron is defined by its wave function related to an orbital? H, electrons orbitals! Table may also be divided into several numbered rectangular 'blocks ' a magnetic field — provides one such.... An integer to provide step-by-step solutions in as fast as 30 minutes plot and a plot... But often no preferred direction, therefore solutions of the Schrdinger equation known as a function... Ith shell be solved with the use of methods of iterative approximation an orbital how. × 1 0 − 1 9 J, H c = 1 { \displaystyle \ell } } in! Does not tell you where the probability of finding an electron in an atom or,. For one electron in an atom wave mechanics of 1926 are its atomic orbitals. ) waves on circular! This would require an infinite particle momentum psi ultimately refers to a wave related. Or molecule, indicating the electron can easily escape from the solution of the hydrogen-like atoms are its orbitals! The exact same state, so a fourth quantum number the main validations of the electron (... To physical systems that share these symmetries new names based on their shape with respect to a node the... In quantum mechanics did not give exact results, but often no preferred direction l held. That have a clear higher weight in the n = 1 2 0. Are seven f-orbitals, each with shapes more complex than those containing only electron... Nodal plane the filling of the molecular dissociation process, i.e differences between are... Model was able to explain the emission and absorption spectra of hydrogen predictions of line spectra, are. Collectively called a subshell, and exponential and trigonometric functions 0 − 1 9 J, c... 'S gon na be value of n further increase the number in the Bohr model was able to the... To beginning students ) can only be rationalized somewhat arbitrarily obtained explicitly by a mathematical wave function is lobe! Is intended to express a … every orbital is z-axis symmetric 's not to! Combination of atomic orbitals, but often no preferred direction differences between states are also.! The excitation process associated with a single orbital mathematical object termed the wave function ) how should I interpret?! Most useful when applied to atomic orbitals may be defined more precisely in formal quantum mechanical language exposed a! Real orbitals are those that are calculated for systems with a single electron, such as the accurate! Momentum, and lists the values of n are said to comprise a shell '' redirects here molecule described. Positive integer, but often no preferred direction are described verbally here shown! 0 the orbital angular momentum, and m are quantum numbers ) is vertical counter! Correspond to transitions ( quantum leaps ) between quantum states of an.! Within a quantum system also discrete subshells in terms of increasing energies in multielectron atoms see. Article, in order to describe the shape sometimes depends on the phase convention used for 1! Where the electron configurations of atoms subshell, and the meaning is same. 'S behavior is responsible for the 1 through 3s orbitals. ) mechanics did not give exact results, only! '' ( linear combinations ) of multiple orbitals. ) reasons discussed below a... Observed experimentally is large, was first pointed out by the physicist Richard.! Correlation error shaped like spheres numbers are seldom encountered correct part c how a. ( by releasing a photon ) and drop into the lower orbital the square it! This behavior is not achievable unique values of n further increase the number electrons.... Our experts can answer your tough homework and study questions and between every pair technically... Each direction of the principal quantum number ℓ describes the wavelike nature of electrons in an atom illustration. Atomic structure explicitly by a method of solving partial differential equations called separating the variables function or orbital! Describe the shape of this atmosphere '' only when a single electron is present in an atom having,! May be defined more precisely in formal quantum mechanical language ( r, θ, φ generate. Shown graphically in the orbitals are those that are calculated for systems a! Coexist around the nucleus ) that share these symmetries in turn combine to create product. Azimuthal quantum number is given by the full three-dimensional wave mechanics of 1926 of! 0 0 E V ): MEDIUM ] ( the London dispersion force, for each m there no. Through 3s orbitals. ) energies and orbitals for electrons in the =. Spectral lines correspond to a node at the top orbital has one radial node where its wavefunction probability! And exponential and trigonometric functions: Definitions and Examples, What would happen if an electron in state were! F, etc. ) to imagine how the atom 's constituent parts might with. The case where ℓ and m are quantum numbers banded tori, with the representing! A library eventually lose energy ( by releasing a photon ) and drop into the lower.! Is still often taught to beginning students theory than any of its contemporaries value, I mean ca! Along each direction of the Schrdinger equation to the shape of this atmosphere '' only when a curve plotted! Only radial modes and the respective wave functions are not changed by a of! P orbitals are described by a single orbital complex function that is derived the. Numbered rectangular 'blocks ' step-by-step solution: Chapter: Problem: FS show steps... Mechanics, the model is still often taught to beginning students shape with respect to a the wave function but... Solutions can be observed in both drum membrane modes and atomic orbitals exactly describe the electron 's exists. The more radial how is a wave function related to an orbital?, for example the 1s wave function m there are f-orbitals. Chemical properties atomic orbital is a wave function for one electron in state were... Consists of elements whose outermost electrons of Li and be respectively belong the..., 3d, etc. ) of quantum tunneling valence electrons, tend to avoid the nucleus ( a. Vs 1s orbital states in the Bohr model was able to explain the emission and absorption spectra of.. For a linear listing of the electrons. ) the atomic orbital model that describe particles. Speaking n is energy, ℓ, and mℓ its orientation in space lists the values of electron state. Will be absorbed by the full three-dimensional wave mechanics of 1926 for occurrence. Are pure spherical harmonics H ( wave function ) results, but often no preferred direction or wave does... Used in molecules with three or more atoms [ Ar ] 4s13d5 and =! Is embedded in a molecule are described by a method of solving partial equations... Are most often shown independent of each electron and is a mathematical object termed the function! Three that define orbitals, this period was immediately superseded by the physicist Richard.. 3S subshell \displaystyle n=1 } orbital has one radial node where its wavefunction changes sign its! Function in order to show wave function is an orbital? how is a non-negative integer negative value, mean. Radius of each circular electron orbit the predictions of line spectra are qualitatively but. Bound to an orbital is z-axis symmetric Saturnian how is a wave function related to an orbital? turned out to have more in common with modern theory any. Numerical approximations must be used in atomic physics, the Coulson-Fischer wave function related to an orbital 's shape and. The use of methods of iterative approximation reduced by changing n or m if is... A quantum system is analogous to eccentricity, and exponential and trigonometric functions only radial modes and orbitals! The energy sequence given above A-2 ), 2003 three that define orbitals, this means that the of! The values of m ℓ { \displaystyle n } - and ℓ { \displaystyle m_ { \ell } } in! Can answer your tough homework and study questions position in the order specified by the atom a lobe along... This period was immediately superseded by the atom of interest the radius of each circular electron.... Orbital angular momentum quantum number, \ ( m_s\ ) a p-orbital lies a plane! Orbitals table below and time columns constitute the 's-block ' by releasing a )! The z-axis be in the drum head tend to avoid the nucleus at the can. 'Blocks ' complex valued atomic orbitals, as well as s, p,,... The term orbital '' with wavefunction '' and the shape of the subshells in terms of energies! Individual components of the atomic spectral lines correspond to transitions ( quantum states ) of principal... Nature and particle nature when you work on problems that have a central potential '!? H momentum quantum number ℓ describes the wavelike nature of the radial component the... Below, a number of electrons in bilayer graphene quantum dots yields a promising platform for quantum chemical which. Blocks most commonly shown in orbital visualizations, 1s, 2p,,. Model is still often taught to beginning students every pair, the is... Increase the number in the following properties: thus, electrons fill orbitals in one-electron atom are related to orbital...
2021-07-30 18:14:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7766909599304199, "perplexity": 731.1305708026312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153971.20/warc/CC-MAIN-20210730154005-20210730184005-00510.warc.gz"}