url: stringlengths, 14 to 2.42k
text: stringlengths, 100 to 1.02M
date: stringlengths, 19 to 19
metadata: stringlengths, 1.06k to 1.1k
http://mathhelpforum.com/differential-geometry/190949-show-weak-concavity-function.html
## Show (weak) concavity of a function

I have a real-valued function $f(\boldsymbol{x}) = f(x_1,\dotsc,x_n)$ of $n$ variables, defined on the non-negative orthant $\forall i \colon x_i \geq 0$, which fulfills the following properties:

• $f(\boldsymbol{x}) \geq 0$
• The function vanishes on the axes only, i.e., $f(\dotsc,0,\dotsc) = 0$ and $\boldsymbol{x} \neq \boldsymbol{0} \Rightarrow f(\boldsymbol{x}) > 0$
• $f$ is continuous and twice differentiable
• $f$ is marginally concave in every $x_i$, that is, $\frac{\partial^2 f}{\partial x_i^2} < 0$, and marginally bounded in $x_i$, that is, $f(\dotsc,x_i,\dotsc)$ is bounded in $x_i$ if all $x_j$ with $j \neq i$ are held fixed
• For any $\alpha \geq 0$, one has $f(\alpha \boldsymbol{x}) = \alpha f(\boldsymbol{x})$. Note that one consequence of this is that the concavity can only be weak!

A direct computation of the Hessian does not seem possible. So my question is: can you prove (weak) concavity of this function $f$ knowing only the above properties? (Note that I might have listed more properties than are actually needed.)

Here is an exemplary plot of such a function (image not reproduced here): you can see very clearly that it is concave, and considering the above properties, it seems like it can't be otherwise.

Thanks for any hints!

Jens
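As a quick illustration (not part of the original post), here is a small numerical spot-check using one concrete function of my own choosing that satisfies all of the listed properties for $n=2$, namely $f(x_1,x_2) = \frac{x_1 x_2}{x_1 + x_2}$ (extended by $0$ at the origin). It proves nothing, but it shows how the concavity inequality can be probed directly.

```python
import numpy as np

# Candidate function satisfying the listed properties (n = 2):
#   f(x1, x2) = x1*x2 / (x1 + x2)   for x1 + x2 > 0
# It is >= 0 on the orthant, vanishes exactly on the axes, is smooth away
# from the origin, has d^2 f / dxi^2 = -2*xj^2/(x1+x2)^3 < 0, is bounded in
# x1 for fixed x2 (limit x2), and is homogeneous of degree 1.
def f(x):
    return x[0] * x[1] / (x[0] + x[1])

rng = np.random.default_rng(0)
worst = np.inf
for _ in range(100_000):
    x = rng.uniform(0.01, 10.0, 2)
    y = rng.uniform(0.01, 10.0, 2)
    lam = rng.uniform()
    # Concavity: f(lam*x + (1-lam)*y) >= lam*f(x) + (1-lam)*f(y)
    gap = f(lam * x + (1 - lam) * y) - (lam * f(x) + (1 - lam) * f(y))
    worst = min(worst, gap)

# The smallest gap found stays nonnegative up to rounding (~ -1e-15 at worst)
print("smallest concavity gap found:", worst)
```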
2016-12-09 20:12:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9551514387130737, "perplexity": 291.35488753567523}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542765.40/warc/CC-MAIN-20161202170902-00080-ip-10-31-129-80.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/292152/galilean-transformation-in-non-relativistic-quantum-mechanics
Galilean transformation in non-relativistic quantum mechanics

I'm reading Weinberg's Lectures on Quantum Mechanics, and in chapter 3 he discusses invariance under Galilean transformations in the general context of non-relativistic quantum mechanics. Being a symmetry of nature (if we forget about relativity), a Galilean boost (a particular case of a Galilean transformation) should be represented by a linear unitary operator $U(\mathbf{v})$ which is taken to satisfy: $$U^{-1}(\mathbf{v}) \mathbf{X}_H (t) U(\mathbf{v}) = \mathbf{X}_H (t) + \mathbf{v} t$$ where $\mathbf{X}_H (t)$ is the Heisenberg picture operator for the position of any given particle. This is basically what we understand by a Galilean boost. Now, I have a few questions:

1. Taking the time derivative of the previous expression and using $\dot{\mathbf{X}}_H(t) = i \left[H,\mathbf{X}_H(t)\right]/ \hbar$, we can conclude that, at $t=0$ (in this case Schrödinger and Heisenberg picture operators coincide): $$i\left[U^{-1}(\mathbf{v}) H U(\mathbf{v}), \mathbf{X}\right] = i \left[H, \mathbf{X}\right] + \hbar \mathbf{v}$$ From here Weinberg says that necessarily $U^{-1}(\mathbf{v}) H U(\mathbf{v}) = H + \mathbf{P}\cdot\mathbf{v}$. I can prove that if this is the case (where $\mathbf{P}$ is the total momentum operator defined as the generator of spatial translations), the previous commutation relation holds, but couldn't we have a different form of the transformed Hamiltonian that still satisfies the previous commutation relation?

2. In any case, it is clear that, although it is a symmetry of nature, the transformation $U(\mathbf{v})$ doesn't commute with the Hamiltonian. This is also the case for its generator $\mathbf{K}$, where $U(\mathbf{v}) = \exp(-i \mathbf{v} \cdot \mathbf{K})$. This is an exception to the general rule that the generators of symmetries commute with $H$, and Weinberg argues that the reason is that $\mathbf{K}$ is associated with a symmetry which depends explicitly on time, as the effect on $\mathbf{X}_H (t)$ shows. The question is then: shouldn't $U(\mathbf{v})$ be somehow time dependent, so that we can't simply take the time derivative in the first expression I wrote by considering that it only acts on $\mathbf{X}_H (t)$? This would mean that the whole derivation in 1 is fallacious...

3. And now the question that is bothering me the most. Following the argument presented in the previous two points, I want to understand what $U(\mathbf{v})$ really does (this should solve question 2, showing that it has no time dependence). For simplicity, I will consider a one-particle system. My physical intuition tells me that, if $\Phi_{\mathbf{x},t}$ is an eigenstate of $\mathbf{X}_H(t)$ with eigenvalue $\mathbf{x}$, then we should have: $$U(\mathbf{v}) \Phi_{\mathbf{x},t} = \Phi_{\mathbf{x}+\mathbf{v}t,t}$$ But this leads me to some contradictions. First of all, at $t=0$ this equation means that $U(\mathbf{v}) \Phi_{\mathbf{x}} = \Phi_{\mathbf{x}}$, and since the $\{\Phi_{\mathbf{x}}\}$ form a complete set of orthonormal states, the operator $U(\mathbf{v})$ should be the identity (which is a disaster, since obviously this operator must act non-trivially on, e.g., $\mathbf{X}_H (t)$). We could argue that there might be phases (depending on $\mathbf{x}$) in the previous equation, so I will show another problem I have found.
Let $\{\Psi_{\mathbf{p},t}\}$ be a complete orthonormal set of momentum eigenstates at time $t$, so that we have the usual inner product: $$(\Psi_{\mathbf{p},t},\Phi_{\mathbf{x},t}) = (2\pi \hbar)^{-3/2} \exp(-i \mathbf{p}\cdot\mathbf{x}/\hbar)$$ This equation is certainly true at $t=0$, and I assume it is also valid at time $t$ because it follows from properties of translations at a fixed time. Again using what a Galilean boost should be, I assume $U(\mathbf{v}) \Psi_{\mathbf{p},t} = \Psi_{\mathbf{p} + m \mathbf{v},t}$. Then, going to momentum space, we conclude: $$U(\mathbf{v}) \Phi_{\mathbf{x},t} = (2\pi \hbar)^{-3/2} \int d^3 \mathbf{p} \exp(-i \mathbf{p}\cdot\mathbf{x}/\hbar) \Psi_{\mathbf{p} + m \mathbf{v},t} = \exp(i m \mathbf{v}\cdot\mathbf{x}/\hbar) \Phi_{\mathbf{x},t}$$ which contradicts our original idea $U(\mathbf{v}) \Phi_{\mathbf{x},t} = \Phi_{\mathbf{x}+\mathbf{v}t,t}$. So I am really lost here... Any help, especially with question 3?

• Before anything else, you may want to see if your questions are answered by the very nice approach to Galilei transformations in Fonda & Ghirardi's "Symmetry Principles in Quantum Physics", Sec. 2.5, pgs. 83-89: scribd.com/doc/30539019/… – udrv Nov 11 '16 at 19:02
• @udrv Very helpful reference! I found there the full answer to my question, and I will definitely keep an eye on it for further clarification on symmetries in quantum mechanics, since it seems to be a pretty complete book! – Alex V. Nov 14 '16 at 11:10
• Welcome and good luck. – udrv Nov 14 '16 at 23:25

First, about symmetries of the theory. Let's work in the Schroedinger picture, where the only dependence of operators on time is explicit. That $K_j(t)$ generates a symmetry of the Hamiltonian $H$ means that the transformed wavefunction satisfies the same Schroedinger equation (I'll take $\hbar=1$), $$i\partial_t\Big(e^{iK_j(t)v_j}\Psi_t\Big) = H\Big(e^{iK_j(t)v_j}\Psi_t\Big)$$ From that we can derive $$i\partial_t K_j(t)=[H,K_j(t)],$$ which generalizes the usual condition of a vanishing commutator for time-independent operators. It is equivalent to the vanishing of $\frac{d}{dt}K_j(t)$ in the Heisenberg picture.

Now the answer to your third question is that your physical intuition is not correct. The reason is that the Galilean boost changes both coordinate AND momentum, $$e^{iK_j(t)v_j}x_k e^{-iK_j(t)v_j}=x_k+v_j t\delta_{jk},\quad e^{iK_j(t)v_j}p_k e^{-iK_j(t)v_j}=p_k+m v_j \delta_{jk}$$ whereas a simple shift of coordinates changes coordinates but not momentum, because it is generated by the momentum operator, which commutes with itself, $$e^{ip_jv_j t}x_ke^{-ip_jv_j t}=x_k+v_j t\delta_{jk},\quad e^{ip_jv_j t}p_ke^{-ip_jv_j t}=p_k$$

For a non-relativistic particle the Galilean boost generator can be written in the form $$K_j(t)=tp_j+mx_j$$ which can be obtained as the limit $c\rightarrow\infty$ of the Lorentzian boost. It can easily be checked that it satisfies the symmetry generator condition for the Hamiltonian $H=\frac{\vec{p}^2}{2m}$. You can also check that it transforms $x$ and $p$ correctly.

Now, how does it transform the wavefunction in the coordinate representation? $$e^{iK_j(t)v_j}\psi_t(x)=e^{imx_jv_j+tv_j\partial_j}\psi_t(x)$$ Use the Baker-Campbell-Hausdorff formula to rewrite it in the form $$e^{i\frac{mv_j^2}{2}t}e^{imx_jv_j}e^{tv_j\partial_j}\psi_t(x)=e^{i\frac{mv_j^2}{2}t}e^{imx_jv_j}\psi_t(x+vt)$$ For $t=0$ we reproduce your result $$\psi_0(x)\mapsto e^{imx_jv_j}\psi_0(x)$$
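As a side note (not part of the original answer), the Baker-Campbell-Hausdorff step above only uses the special case $e^{A+B} = e^{-[A,B]/2}\,e^{A}e^{B}$, valid when $[A,B]$ commutes with both $A$ and $B$. Here $A = i m v_j x_j$ and $B = t v_j \partial_j$ give the c-number commutator $[A,B] = -i m v^2 t$, so $e^{-[A,B]/2} = e^{i m v^2 t/2}$ is exactly the prefactor that appears above. The identity itself can be sanity-checked numerically with the 3×3 Heisenberg-algebra matrices, whose generator commutator is central (the matrices and coefficients below are my own choice):

```python
import numpy as np
from scipy.linalg import expm

# Heisenberg-algebra generators: [X, Y] = Z, with Z central.
X = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=complex)
Y = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Z = X @ Y - Y @ X                      # = E_13, commutes with X and Y

a, b = 0.7 + 0.2j, -1.3 + 0.5j          # arbitrary coefficients
A, B = a * X, b * Y
C = A @ B - B @ A                       # [A, B] = a*b*Z, central

lhs = expm(A + B)
rhs = expm(-C / 2) @ expm(A) @ expm(B)  # e^{A+B} = e^{-[A,B]/2} e^A e^B
print(np.max(np.abs(lhs - rhs)))        # ~1e-16, i.e. the identity holds
```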
2019-12-12 07:48:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 8, "x-ck12": 0, "texerror": 0, "math_score": 0.9258507490158081, "perplexity": 142.25068381908594}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540542644.69/warc/CC-MAIN-20191212074623-20191212102623-00141.warc.gz"}
https://scipost.org/SciPostPhysProc.1.010
## Theory Status - Puzzles in B Meson Decays and LFU?

Svjetlana Fajfer

SciPost Phys. Proc. 1, 010 (2019) · published 18 February 2019

### Proceedings event

The 15th International Workshop on Tau Lepton Physics

### Abstract

Currently the B meson puzzles motivate many studies of New Physics, due to the observed deviations from the Standard Model predictions. There are two B meson puzzles, R_D(∗) and R_K(∗). The first one denotes the deviations, in decays driven by the charged current, of the ratio of the decay widths for B → D(∗)τν and B → D(∗)μν, while the second one is related to the ratio of the decay widths for the B → K(∗)μ+μ− and B → K(∗)e+e− transitions. Also, the measured muon anomalous magnetic moment differs from the SM prediction. Usually, an effective Lagrangian approach containing New Physics effects is used to analyse R_D(∗) and R_K(∗). Among many models of New Physics, various leptoquark models have been suggested to resolve both B meson anomalies. If New Physics is confirmed in B decays, a number of processes at low and high energies should confirm its presence.
2021-12-03 15:30:12
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8390605449676514, "perplexity": 2693.6600972535325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362891.54/warc/CC-MAIN-20211203151849-20211203181849-00533.warc.gz"}
http://hogwartspapers.com/dche0whf/93b998-what-are-the-advantages-of-control-charts
Statistical process control, or SPC, is used to determine the conformance of a manufacturing process to product or service specifications, and it relies on control charts to detect products or services that are defective. Six Sigma project teams use control charts to analyze data for special causes, and to understand the amount of variation in a process due to common cause variation. This data will be a key input into your process improvement plan. Because control limits are calculated from process data, they are independent of customer expectations or specification limits. Although these statistical tools have widespread applications in service and manufacturing environments, they do come with some disadvantages, and care must be taken to use the right type of chart to accurately depict the numbers. Today, we explore the top benefits of control charts on the manufacturing shop floor. Learn more about control charts in our Introduction to Control Charts.

Shewhart Control Charts
• Advantages: – Useful when the process is out of control – Identify causes that result in large shifts – The plotted patterns are useful in diagnosis
• Disadvantages: – Each point uses only the information from the last sample provided – Insensitive to small process shifts (i.e. in the order of 1.5 sigma or less)

Cusum Charts Compared with Shewhart Charts: although cusum charts and Shewhart charts are both used to detect shifts in the process mean, there are important differences in the two methods. Individuals charts are the most commonly used, but many types of control charts are available, and it is best to use the specific chart type designed for use with the type of data you have. Example: a control chart for the range of air void content measurements in each lot.

BENEFITS OF USING CONTROL CHARTS: Following are the benefits of control charts: 1. Ability to manage complex information. 2. Precise documentation of the process.

Let us see all the advantages and disadvantages of Gantt charts: 5 advantages of using Gantt charts for project managers. Although extremely useful for many purposes, organizational charts are not for everybody. Advantages of Biological Control: Biological control is a very specific strategy; the vast majority of the time, whatever predator is introduced will only control the population of the pest it is meant to target, making it a green alternative to chemical or mechanical control methods.
From operators and engineers to managers and executives, control charts offer a variety of information for all the key stakeholders involved in the creation of a manufactured product. Control charts are used for monitoring the outputs of a particular process, making them important for process improvement and system optimization. The advantage of a control chart is that it makes it easier to see trends or outliers than if you glance at a row of numbers, and a control chart indicates when something may be wrong, so that corrective action can be taken. You can also predict the range of possible future results. The advantage of using variables control charts is to stabilize a process so that management can predict process performance into the future. What are the disadvantages of SPC? I will mention only one attribute chart, because I think it is important to flexible film packaging.

A Presentation on Control Charts, presented by Salvi Mandar and Chaughule Pradip. Index: • What is a control chart? • Types of control charts • What do these charts do? • Its advantages and purposes • How to plot a certain kind of chart • Case study for a particular product.

I am quite a fan of diagrams and charts. At times it is easier to interpret things graphically than to explain them in words; this is why flow charts are used. Following are some of the advantages of flow charts: they are beneficial for precise documentation of the different processes and operations going on in an organization. The project team has collected a series of issues and ranked them by frequency.

Gantt charts have both advantages and disadvantages. Not everything can be shown in a single chart: the user has to scroll in order to see tasks in different time frames, and has to scroll down if there is a larger number of sub-tasks. Span of control is the number of subordinates that report to a manager; a smaller span of control will create an organizational chart that is narrower.
Control charts, also known as Shewhart charts (after Walter A. Shewhart) or process-behavior charts, are a statistical process control tool used to determine if a manufacturing or business process is in a state of control; it is more appropriate to say that control charts are the graphical device for Statistical Process Monitoring (SPM). Invented by Walter A. Shewhart while he was working for Bell Labs in the '20s, control charts have been used in a variety of industries as part of a process improvement methodology. Shewhart understood that, no matter how well a process was designed, there will always be variation within that process, and the effect can become negative if the variation keeps you from meeting deadlines or quotas. Control charts are a key tool for Six Sigma DMAIC projects and for process management, and they provide operational insight for critical stakeholders.

One of the most important things in using control charts is that they not only show when the process is out of control but also show when the process is in control and only normal variations are taking place. One of the important advantages of using control charts in managing a production operation is that the control chart tells you when to take corrective action on the process being controlled. This means that we have a guide that tells us when we should not be taking corrective action as well as a guide to tell us when we should take corrective action. One of the advantages of SPC is the ability to use it for analysis through control charts: visual diagrams that track shop floor processes and detect issues, variances, and defects in real time. Real-time SPC helps reduce the margin of error. Using a control chart shows the effects of alterations to your process and helps you correct any errors in real time.

Advantages of Control Charts: various advantages of control charts for variables are as follows: (1) Control charts warn in time; if the required rectification is done well in time, the scrap and percentage rejection can be reduced. (2) Thus ensures the product quality level. Control charts for attribute data are for counting, or conversion of counts to proportions or percentages, or the presence or absence of characteristics. Applications of control charts: learn about the different types, such as c-charts and p-charts, and how to know which one fits your data. Each point on a Shewhart chart is based on information for a single subgroup sample or measurement; in the example range chart, each point represents the highest measurement minus the lowest measurement for the given lot. Payment times fluctuate randomly around the centerline but within the control limits. We find that there are several situations in which CUSUM control charts have an economic advantage over $$\bar X$$ charts; we compare the economic performance of CUSUM and $$\bar X$$ charts for a wide range of cost and system parameters in a large experiment using examples from the literature.

What are the advantages and disadvantages of control charts for attributes over those for variables? Discuss the significance of an appropriate sample size for a proportion-nonconforming chart. Control Chart Excel Template | How to Plot a Control Chart in Excel | Download Template: Hi, reader, today we will guide you on how to plot a control chart in Excel with an example.

Charts offer an excellent way of quickly organizing and communicating a large amount of information. We think in pictures; therefore, if we can see complex ideas as a picture, this will help our understanding. A Gantt chart (e.g., in Mavenlink, Wrike, Smartsheet, AceProject) organises your thoughts and creates a picture of complexity.
Control charts (charts used for analyzing repetitive processes), by Harold P. Roth. Abstract: CPAs can increase the quality of their services, lower costs, and raise profits by using control charts to monitor accounting and auditing processes. Control charts are graphic representations of information collected from processes over time. All control charts usually consist of a center line and an upper and lower control limit. With this information, stakeholders can make the right decision about how to implement process improvements, whether that involves addressing the process itself or dealing with external factors that affect process performance.

Gantt charts are efficient tools for project managers, schedulers, planning engineers, project coordinators, team members and anyone who wants to control the performance of a project. They are useful tools for all industries, such as construction, engineering, military, manufacturing, infrastructure, mining and IT. With the help of Gantt charts, you can easily scan critical project information. The main disadvantages of a Gantt chart are that it becomes large and complex for big projects and that it needs to be updated if changes take place.
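As a concrete illustration of the control-limit idea mentioned above (a center line plus upper and lower control limits), here is a minimal, self-contained sketch of the individuals / moving-range (I-MR) chart calculation; the data are made up, and 2.66 and 3.267 are the standard I-MR chart constants for subgroups of size two.

```python
import numpy as np

# Hypothetical process measurements (e.g. daily payment times, in days)
x = np.array([12.1, 11.8, 12.5, 12.0, 13.1, 12.2, 11.9, 12.7, 12.4, 12.3,
              11.7, 12.9, 12.6, 12.0, 12.8])

# Moving ranges between consecutive points
mr = np.abs(np.diff(x))

centre = x.mean()                 # centre line of the individuals chart
mr_bar = mr.mean()                # average moving range

# Standard I-MR chart factors (subgroup size 2): E2 = 2.66, D4 = 3.267
ucl_x = centre + 2.66 * mr_bar    # upper control limit for individuals
lcl_x = centre - 2.66 * mr_bar    # lower control limit for individuals
ucl_mr = 3.267 * mr_bar           # upper control limit for the moving range

print(f"individuals chart: CL={centre:.2f}, UCL={ucl_x:.2f}, LCL={lcl_x:.2f}")
print(f"moving-range chart: MR-bar={mr_bar:.2f}, UCL={ucl_mr:.2f}")

# Points outside the limits signal special-cause variation
out_of_control = np.where((x > ucl_x) | (x < lcl_x))[0]
print("out-of-control points:", out_of_control)
```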
2021-01-21 15:08:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18879404664039612, "perplexity": 2342.486286300106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524858.74/warc/CC-MAIN-20210121132407-20210121162407-00200.warc.gz"}
https://mca2021.dm.uba.ar/en/tools/view-abstract?code=3032
## View abstract

### Session S09 - Number Theory in the Americas

Thursday, July 22, 16:30 ~ 17:00 UTC-3

## Hecke characters and some diophantine equations

### Ariel Pacetti

#### U. N. Cordova, Argentina

In this talk we will study solutions to the equation $x^2 + dy^6 = z^p$. We will explain how to attach a ${\mathbb Q}$-curve over a quadratic field to a putative solution, and how to extend the representation to a rational one of a concrete level and Nebentypus (corresponding to a classical modular form via Serre's conjectures). The way to explicitly obtain the level and nebentypus is via the construction of a Hecke character with some desired properties. After computing the respective space of classical modular forms, some classical elimination techniques allow us to prove that no non-trivial solution exists for some values of $d$ and all $p$ sufficiently large.
2023-02-05 16:45:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32579928636550903, "perplexity": 4390.70343392483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500273.30/warc/CC-MAIN-20230205161658-20230205191658-00594.warc.gz"}
https://www.gktoday.in/question/a-and-b-can-do-a-piece-of-work-in-12-days-b-and-c
A and B can do a piece of work in 12 days, B and C in 8 days, and C and A in 6 days. How long would B take to do the same work alone? [A] 24 days [B] 32 days [C] 40 days [D] 48 days
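One way to work this out: writing the daily work rates as 1/A, 1/B and 1/C, the statements give 1/A + 1/B = 1/12, 1/B + 1/C = 1/8 and 1/C + 1/A = 1/6. Adding all three equations, 2(1/A + 1/B + 1/C) = 1/12 + 1/8 + 1/6 = 3/8, so 1/A + 1/B + 1/C = 3/16. Subtracting 1/C + 1/A = 1/6 leaves 1/B = 3/16 - 1/6 = 1/48, so B alone would take 48 days, i.e. option [D].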
2018-05-23 03:38:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35813963413238525, "perplexity": 317.62534338110237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865411.56/warc/CC-MAIN-20180523024534-20180523044534-00386.warc.gz"}
https://www.cfd-online.com/Forums/main/125407-about-hershel-bulkley-model-non-newtonian-fluids-print.html
CFD Online Discussion Forums (https://www.cfd-online.com/Forums/) - Main CFD Forum (https://www.cfd-online.com/Forums/main/)

tiger0004, October 24, 2013, 23:47
About the Herschel-Bulkley model of non-Newtonian fluids

Hi, everyone! Lately I have been using the Herschel-Bulkley model to model the behaviour of a non-Newtonian fluid. The model states that if the stress in the fluid is less than the yield stress tau0, then the fluid acts as if rigid (pseudoplastic). The mathematical description of the HB model is tau = tau0 + k*gamma^n, in which tau is the stress, tau0 is the yield stress, gamma is the shear rate, and k and n are fluid constants.

Here is the question. In some flow field, if I don't know beforehand in which area the shear stress would be smaller than tau0 (the plug area), how can I find it out? I mean, according to the HB equation, because gamma, tau and tau0 point in the same direction, no matter what value gamma is set to, the absolute value of tau is always no less than that of tau0 (tau, tau0 and gamma always have the same sign). For example, if k=1, n=1 and gamma>0, then tau(positive) = tau0(positive) + gamma(positive), so tau > tau0; if gamma<0, then tau(negative) = tau0(negative) + gamma(negative), so abs(tau) > abs(tau0). In both cases, there would be no plug area in the fluid. Can anyone help me? Thanks!

triple_r, October 25, 2013, 16:43
As you said, when the shear stress is higher than the yield stress, then and only then will the fluid experience shear deformation. In other words, if tau > tau0, then gamma > 0. Now, the equation shows exactly the same behavior: if you have a gamma that is non-zero, then you must have had a shear stress that was higher than the yield stress. In order to see the plug flow region, you must write the equations in terms of shear stress, and not strain. For an example, you can take a look at some of the solutions for a Bingham plastic fluid in most non-Newtonian fluid flow text books.

By the way, in the constitutive equation that you have, neither tau nor gamma can be negative. They are "magnitudes" of tensors. Usually they are defined as tau = sqrt((1/2) tau:tau) and gamma = sqrt((1/2) gamma:gamma), where the left-hand-side symbols are scalars (and these are the ones that appear in your constitutive equation) and the right-hand-side symbols are tensors; ":" means the double-inner product of tensors (multiply corresponding components with each other, and then sum all of the products). Hope this helps.

tiger0004, October 25, 2013, 22:43
Quote: Originally Posted by triple_r (Post 459047)
Thank you, triple_r!
I know sometimes my fundamental concepts are ambiguous. I referred to some text books and scientific papers, and found out that usually the shear rate is calculated first and then the shear stress is calculated with the constitutive equation. In some simple cases, like in a tube or between two plates, the shear stress can be obtained once the pressure distribution is known. Is this always the way to find the shear stress without the shear rate?
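To make the "where is the plug region" point concrete: in a steady, fully developed pipe flow the shear-stress distribution follows from the momentum balance alone, tau(r) = (dp/dx)*r/2, independent of the constitutive law, so the plug is simply the region where tau(r) < tau0. Below is a minimal sketch (my own, with made-up parameter values, not from the thread) for a Herschel-Bulkley fluid:

```python
import numpy as np

# Made-up parameters for a Herschel-Bulkley fluid in steady pipe flow
tau0 = 5.0       # yield stress [Pa]
k    = 1.2       # consistency index [Pa s^n]
n    = 0.8       # flow behaviour index
R    = 0.05      # pipe radius [m]
dpdx = 400.0     # pressure gradient magnitude, -dp/dx [Pa/m]

r   = np.linspace(0.0, R, 501)
dr  = r[1] - r[0]
tau = 0.5 * dpdx * r                  # shear stress from the momentum balance
r_plug = 2.0 * tau0 / dpdx            # plug radius, where tau(r_plug) = tau0

# HB law inverted for the shear rate; zero below the yield stress (the plug)
gamma = (np.maximum(tau - tau0, 0.0) / k) ** (1.0 / n)

# Velocity profile u(r) = integral_r^R gamma(s) ds  (du/dr = -gamma, u(R) = 0),
# with a crude rectangle rule; u comes out constant inside the plug.
u = np.flip(np.cumsum(np.flip(gamma))) * dr

print(f"plug radius: {r_plug:.4f} m  (plug region is r < r_plug, where tau < tau0)")
print(f"plug (centreline) velocity: {u[0]:.4f} m/s")
```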
2017-01-22 18:32:34
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9006432890892029, "perplexity": 720.7159784512544}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00294-ip-10-171-10-70.ec2.internal.warc.gz"}
http://raftulcuinitiativa.provobis.ro/ebooks/category/algebraic-geometry/
# Lectures on Algebraic Geometry I: Sheaves, Cohomology of

Format: Hardcover Language: English Format: PDF / Kindle / ePub Size: 10.33 MB

Let and be points of order 2 and consider third point of intersection of ℓ(, ) and. Nash’s claim means that Newton’s laws of mechanics would have no sense in the bent torus on an infinite amount of points, kind of like general relativity no longer makes sense at singularities better known as black holes! But because polynomials are so ubiquitous in mathematics, algebraic geometry has always stood at the crossroads of many different fields. We prove a theorem of Kakeya type for the intersection of subsets of n-space over a finite field with k-planes.

# Gorenstein Dimensions (Lecture Notes in Mathematics)

Format: Paperback Language: English Format: PDF / Kindle / ePub Size: 10.10 MB

P → 0.. then (dα)P becomes identified with the map defined earlier. When they do intersect properly. p2). fn ))d which proves the formula.. By using computer calculations of the E2 page of the MASS, we are able to compute the 2-complete stable homotopy groups of the motivic sphere spectrum $\pi_{n,m}$ for $n \leq 12$ over finite fields of odd characteristic. Write Z = V (p) with p a prime ideal in k[U]. We seek the same, or similar, for knots.) The first ingredient for an “Algebraic Knot Theory” exists in many ways and forms; these are the many types and theories of “knot invariants”.

# K3 Surfaces and Their Moduli (Progress in Mathematics)

Format: Hardcover Language: English Format: PDF / Kindle / ePub Size: 12.11 MB

The development of the Mayer-Vietoris sequence in homology is shoddy. On the other hand, geometry regarded as a study of the topology of manifolds is also ubiquitous, for instance: Differential equations of mathematical physics, such as Maxwell's equations, are efficiently expressed in a coordinate-independent way using the language of manifold theory. Among other precious items they preserved are some results and the general approach of Pythagoras (c. 580–c. 500 bce) and his followers.

# Mixed Hodge Structures and Singularities (Cambridge Tracts

Format: Hardcover Language: English Format: PDF / Kindle / ePub Size: 14.91 MB

What can one say about the local structure of Our main interest is to understand the most complicated links. The picture began to change around 1955, with the advent of the Yang–Mills equations, which showed that particle physics could be treated by the same kind of geometry as Maxwell’s theory, but with quantum mechanics playing a dominant role. Let U be an open subset of Ui. .. k[ X0. . i This obviously defines a sheaf O of k-algebras on Pn. p171). there is an automorphism ϕa: C → C. is D( X0 )..

# Basic College Mathematics

Format: Print Length Language: English Format: PDF / Kindle / ePub Size: 8.94 MB

Loci of abelian differentials with prescribed type of zeros form a natural stratification. How to reach Galatasaray University: Galatasaray University is 24 km away from the İSTANBUL ATATÜRK AIRPORT. The motion is specified as follows: starting at a point on the inner wall of the cylinder, choose at random a direction and let the particle move with constant speed until it hits another point of the cylinder. Example 3: Let A be any subset of a discrete topological space X, show that the derived set A' = $\phi$

# Solid geometry

Format: Hardcover Language: English Format: PDF / Kindle / ePub Size: 5.17 MB

Show that the Picard group for ℙ1 is the group ℤ under addition.5.5.. 379 Definition 6.) 1 and a homogeneous Exercise 6. The book gives, in a simple way, the essentials of synthetic projective geometry. Algebraic geometers regularly use the variable for a complex number. ℂ −3 + 4 1 1 2+ 1-6:complexplane −3 − 2 Figure 5. 2010.. BN ⊂ mB + b ⇒ BN = mBN + BN ∩ b. which is an integral domain. which contradicts the assumption that Z = V (b) is nonempty.. 1 in (*) to zero.

# Algebraic-Geometric Codes (Mathematics and its Applications)

Format: Hardcover Language: English Format: PDF / Kindle / ePub Size: 13.84 MB

V( − 1) ∩V( 2 2 − 2 − 1)) = = 2 (. we have ∑ (. ) = 2 + 2 − 4 2. In the approach taken here, two geometric figures are defined to be congruent if there is a sequence of rigid motions that carries one onto the other. This volume includes articles exploring geometric arrangements, polytopes, packing, covering, discrete convexity, geometric algorithms and their complexity, and the combinatorial complexity of geometric objects, particularly in low dimension.

# Proceedings of the International Conference on Complex

Format: Paperback Language: English Format: PDF / Kindle / ePub Size: 13.20 MB

In addition, there are a swimming pool, Turkish Bath and Sauna (steam room), and these are free. This Togliatti surface is an algebraic surface of degree five. It is a field which involves a lot of both algebra and geometry, although for a beginner, it may initially seem to be more “algebra heavy”. NP and related questions called Geometric Complexity Theory (GCT). Spontaneous magnetization in the two-dimensional Ising model. One goal is to use this geometric perspective to explain the structure of the algebraic K-theory of group rings over the sphere spectrum, which in turn explains much of what we know about automorphisms of high-dimensional manifolds.

# Singularities of Differentiable Maps, Volume 2: Monodromy

Format: Paperback Language: English Format: PDF / Kindle / ePub Size: 14.80 MB

Then an ideal in [ 1 .. .6. prove that the radical. 1. = 0( 1.. 0 ⋅⋅⋅ ⋅⋅⋅. . We will explain all needed notions from Differential Geometry and Partial Differential Equations, but knowledge of these subjects at an introductory level is required for this course. Course grades will be based on these problems (and any other participation); solving at least half of them will be considered a perfect score.

# Selected Papers (Springer Collected Works in Mathematics)

Format: Paperback Language: English Format: PDF / Kindle / ePub Size: 10.06 MB

Note that ab ⊂ a ∩ b. am bn ). b = 0 ⇒ a = 0.. k will be a field (not necessarily algebraically). Now in its ninth year, Binghamton University's Graduate Conference in Algebra and Topology is organized by and for graduate students working in the fields of algebra and topology. For more complicated manifolds, it's quite normal to need more coordinate patches. When π is the fundamental group of a surface group S, the mapping class group acts with a complicated and mysterious dynamics.
2017-05-24 02:12:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7090016603469849, "perplexity": 1429.0431262643278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607731.0/warc/CC-MAIN-20170524020456-20170524040456-00423.warc.gz"}
https://gharpedia.com/centre-line-method-of-estimation/
## Centre to Centre Line Method of Estimation: All you Need to Know

Estimation is a predictive process used to work out the quantity and cost of the resources required to complete a project within its time limit. An estimate is prepared in various stages, and preparing one requires the basic drawings, i.e. plan, elevation and sections. Every construction project has a unique set of variables, so before you start preparing an estimate you should have adequate knowledge of the drawings.

There are different cost estimation methods for calculating the quantity of various items like earthwork, foundation, concrete and brickwork (plinth and superstructure), but mainly the two methods below are used:

• Long wall and short wall method
• Centre to centre line method

Here we discuss the centre to centre line method.

## Centre to Centre Line Method

The centre to centre line method is one of the methods for preparing an estimate. In this method you first calculate the centre line length of the wall, and then multiply it by the breadth and depth of the wall to find the quantity. The centre to centre line method is suitable for rectangular, circular or polygonal (hexagonal, octagonal) buildings having no internal or cross walls (a cross wall is an interior dividing wall of a building).

The centre to centre line method is quick, but it requires special attention and consideration at the junctions or meeting points of partition or cross walls. For each junction, half the breadth of the respective item should be deducted from the total centre line length. Accuracy is very important here, especially when the quantities are used for preparing bills rather than rough estimates.

In the case of a building having different types of walls, for example outer (main) walls of type "X" and inner cross walls of type "Y", all X type walls shall be taken together first, and then all Y type walls shall be taken together separately. In such cases no deduction is required for the X type walls, but when the Y type walls are taken, for each junction half the breadth of the Y type wall shall be deducted from the total centre line length of those walls.

For more explanation: in the figure above, when we find the centre line length of the wall, the portions A and B at the junction (shown by hatch lines) are counted twice, so the quantity comes out in excess by these portions, and these excesses shall be deducted. So when we find the centre line length, the A and B portions (twice the half breadth) shall be deducted for an accurate centre line length.

Note: At the corners of the building where two walls meet, no addition or subtraction is required.

Here we give you three alternative methods for finding out the centre line.

### 01. First Method for Finding Out Centre to Centre Line

Here we give you a figure for a single room building. In this figure the centre line is indicated with a red dotted line.

Total centre to centre length of walls = 3.7 + 9.7 + 3.7 + 9.7 = 26.80 m

### 02. Second Method for Finding Out Centre to Centre Line

For finding out the centre line length here, we find the external perimeter of the wall as shown in the figure (the perimeter is the continuous line forming the boundary of a closed geometrical figure).

Total external perimeter length = (2 x 10) + (2 x 4) = 28 m

For an accurate measurement, adjust the length at each corner: we need to deduct twice the half breadth at each of the 4 corners.

So, Total centre line length = External perimeter – (4 x 2 x (half of wall breadth))
Total centre line length = 28 – (4 x 2 x (15/100)) = 28 – 1.2 = 26.80 m

Therefore the centre line length of this building is 26.80 m.

### 03. Third Method for Finding Out Centre to Centre Line

Here we find the internal perimeter of the building. You can see the inner dimensions in the image; the centre line is indicated in red colour.

Total internal perimeter length = (2 x 9.4) + (2 x 3.4) = 25.60 m

For an accurate measurement, adjust the length at each corner: we need to add twice the half breadth at each of the 4 corners.

Total centre line length = Internal perimeter + (4 x 2 x (half of wall breadth))
Total centre line length = 25.60 + (4 x 2 x (15/100)) = 25.60 + 1.2 = 26.80 m

Total centre line length of the wall is 26.80 m.

So, these are the three alternative methods for finding out the centre line length; you can choose any one for calculating the centre line of the building.

After finding out the centre line length, we can find out the quantity of the various items, i.e. earthwork, concrete, brickwork in the foundation and brickwork in the superstructure.

In conclusion, using the above centre to centre line method you can prepare an estimate. When using this method, you should know where addition or subtraction is required, because it affects the quantity. So this is another simple method for finding out the quantity of various construction items. As per your judgement, you can choose any method for finding the quantity and cost of the various construction items.
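Below is a minimal Python sketch of methods 02 and 03 for the worked example above (10 m x 4 m external size, 0.30 m thick walls, 4 corners); the variable names are mine, not from the article.

```python
# Centre line length from either the external or the internal perimeter.
# Assumed example values: 10 m x 4 m external size, 0.30 m wall thickness, 4 corners.
wall_breadth = 0.30
half_breadth = wall_breadth / 2
corners = 4

external_perimeter = 2 * (10 + 4)      # 28.0 m
internal_perimeter = 2 * (9.4 + 3.4)   # 25.6 m

# Method 02: deduct twice the half breadth at every corner.
centre_line_from_external = external_perimeter - corners * 2 * half_breadth

# Method 03: add twice the half breadth at every corner.
centre_line_from_internal = internal_perimeter + corners * 2 * half_breadth

print(centre_line_from_external, centre_line_from_internal)   # both ≈ 26.8 m
```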
2019-03-23 23:37:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22255301475524902, "perplexity": 1754.746209657378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203093.63/warc/CC-MAIN-20190323221914-20190324003914-00284.warc.gz"}
http://mathhelpforum.com/calculus/24202-finding-values-x.html
# Thread: Finding Values of x 1. ## Finding Values of x 1)For what values of x is the graph of y=e^-x^2 concave down? I am not sure how to go about doing this problem, so please, any advice or steps on how to complete the problem will be much appreciated. -M 2. Originally Posted by blurain 1)For what values of x is the graph of y=e^-x^2 concave down? I am not sure how to go about doing this problem, so please, any advice or steps on how to complete the problem will be much appreciated. -M Find the second derivative and set it equal to zero. Solve for all values of x and test in between those values to see if you get a negative or positive number. Negative results mean it is concave down within that sub-domain. 3. I'm not sure how to find the derivative of y=e^-x^2? 4. Originally Posted by blurain I'm not sure how to find the derivative of y=e^-x^2? Chain Rule.. $f(x)=e^{g(x)}$ $f'(x)=g'(x)e^{g(x)}$
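To complete the thread's hint, here is a small sketch (using SymPy, which the thread itself does not mention) that carries out the chain-rule computation and the concavity test; analytically, y'' = (4x² − 2)e^(−x²) < 0 exactly when −1/√2 < x < 1/√2.

```python
import sympy as sp

x = sp.symbols('x', real=True)
y = sp.exp(-x**2)

ypp = sp.diff(y, x, 2)   # second derivative via the chain/product rule
print(sp.factor(ypp))    # a positive factor exp(-x**2) times (4*x**2 - 2), up to grouping

# exp(-x**2) > 0 for all x, so the sign of y'' is the sign of 4*x**2 - 2:
# y'' < 0  <=>  x**2 < 1/2  <=>  -1/sqrt(2) < x < 1/sqrt(2)
print(sp.solveset(ypp < 0, x, domain=sp.S.Reals))   # open interval (-sqrt(2)/2, sqrt(2)/2)
```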
2013-12-12 01:45:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6205012202262878, "perplexity": 246.00068642315307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164333763/warc/CC-MAIN-20131204133853-00045-ip-10-33-133-15.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/361687/how-does-lasso-regularization-select-the-less-important-features
# How does lasso regularization select the “less important” features?

I'm just starting in machine learning and I can't figure out how the lasso method finds which features are redundant so that it shrinks their coefficients to zero.

• Possible duplicate of Why L1 norm for sparse models – Karolis Koncevičius Aug 10 '18 at 18:07
• You're mistaken - lasso doesn't find the redundant features - it just shrinks features to zero somewhat arbitrarily - you need to use another method like cross validation or scoring to determine which features are best – Xavier Bourret Sicotte Aug 25 '18 at 9:59
2020-05-31 01:54:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5514096617698669, "perplexity": 927.5349680552863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410535.45/warc/CC-MAIN-20200530231809-20200531021809-00008.warc.gz"}
https://puzzling.stackexchange.com/questions/59606/find-the-value-of-bigstar-puzzle-8-inequality
# Find the value of $\bigstar$: Puzzle 8 - Inequality This puzzle replaces all numbers with other symbols. Your job, as the title suggests, is to find what value fits in the place of $\bigstar$. To get the basic idea down, I recommend you solve Puzzle 1 first. All symbols abide to the following rules: 1. Each numerical symbol represents integers and only integers. This means fractions and irrational numbers like $\sqrt2$ are not allowed. However, negative numbers and zero are allowed. 2. Each symbol represents a unique number. This means that for any two symbols $\alpha$ and $\beta$ which are in the same puzzle, $\alpha\neq\beta$. 3. The following equations are satisfied (this is the heart of the puzzle): $$\text{I. }a\times a=a \\ \space \\ \text{II. }a<b<b^a+c^a+c^a \\ \space \\ \text{III. }a-b<c<a \\ \space \\ \text{IV. }c\times d+d\times d=b\times b\times d - e\times e \\ \space \\ \text{V. }a-c-e<d \\ \space \\ \text{VI. }d-c-e\times b=\bigstar$$ ## What is a Solution? A solution is a value for $\bigstar$, such that, for the group of symbols in the puzzle $S_1$ there exists a one-to-one function $f:S_1\to\Bbb Z$ which, after replacing all provided symbols using these functions, satisfies all given equations. ## What is a Correct Answer? An answer is considered correct if you can prove that a certain value for $\bigstar$ is a solution. This can be done easily by getting a function from every symbol in the puzzle to the correct values (that is, find an example for $f:S_1\to\Bbb Z$). A complete answer is a correct answer which also proves that the solution is the only solution. In other words, there is no other possible value for $\bigstar$. ## How is an Answer Accepted? After the puzzle is asked, a one day grace period will be given, in which no answer will be accepted. After that day passes, the complete answer which makes the least assumptions will be accepted. If no complete answer will appear within the grace period, the first complete answer that appears after the grace period will be accepted. # Good luck! Side Note: to get $\bigstar$ use $\bigstar$, and to get $\text^$ use $\text^$ Previous puzzles in the series: Initial pack: #1 #2 #3 #4 #5 #6 #7 Next Puzzle ## 1 Answer We claim that: $\bigstar=9$ First note that: $a\times a=a\implies a=0\text{ or }1$ Then: If $a=1$, from $\text{II}$ we have $b<b+c+c\implies c>0$ But: We also have $c<1$ from $\text{III}$, but $c$ is integral and therefore can't be between $0$ and $1$ So: $a\neq1\implies a=0$ From $\text{II}$: $0<b<b^a+c^a+c^a=1+1+1=3$, so $b=1\text{ or }2$ However: $a-b<c<a$, so $b$ must be at least $2$ since all three values are integral So: $b=2$ and we deduce that from $\text{III}$, $-2<c<0\implies c=-1$ since $c$ is integral Now, substituting into $\text{IV}$ and $\text{V}$: $-d+d^2=4d-e^2$ and $1-e<d$ We know that: Squares are non-negative, and since $a=0$, $e\neq0$, so $e^2\geq1$ which allows us to deduce: $-d+d^2<4d\implies d^2-5d<0$ But we can factorise this to: $d(d-5)<0$ Which implies: $d\in[1,4]$, though since $b=2$ we know $d\neq2$ Case bashing: $d=1\implies-1+1=4-e^2\implies e=\pm2$, but since $b=2$, $e=-2$, but this doesn't satisfy $1-e<d\iff3<1$ $d=3\implies-3+9=12-e^2\implies e=\pm\sqrt{6}$, which is not integral in either case $d=4\implies-4+16=16-e^2\implies e=\pm2$, but since $b=2$, $e=-2$ (note that this satisfies $1-e<d\iff 3<4$ So: $\bigstar=4-(-1)-(-2)\times2=4+1+4=9$ • Why does $a\times a$ must be 1 or 0? 
– NODO55 Jan 21 '18 at 16:09 • @NODO55 since $a\times a=a\implies a^2-a=0\implies a(a-1)=0\implies a=0\text{ or }a-1=0\implies a=0\text{ or }1$ – boboquack Jan 21 '18 at 21:19
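As a sanity check on the answer above (my own addition, not part of the original post), a brute-force search over a small range of integer values for the symbols finds no value of $\bigstar$ other than $9$:

```python
from itertools import product

def check(a, b, c, d, e):
    # Rule 2: all symbols must take distinct values.
    if len({a, b, c, d, e}) != 5:
        return False
    # I.  a * a = a  (this forces a to be 0 or 1, so the powers below are well defined)
    if a * a != a:
        return False
    # II. a < b < b^a + c^a + c^a
    if not (a < b < b**a + 2 * (c**a)):
        return False
    # III. a - b < c < a
    if not (a - b < c < a):
        return False
    # IV. c*d + d*d = b*b*d - e*e
    if c * d + d * d != b * b * d - e * e:
        return False
    # V.  a - c - e < d
    return a - c - e < d

stars = set()
for a, b, c, d, e in product(range(-6, 7), repeat=5):
    if check(a, b, c, d, e):
        stars.add(d - c - e * b)   # VI. star = d - c - e*b

print(stars)   # expected: {9}
```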
2021-07-25 14:47:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8941326141357422, "perplexity": 503.5033675588181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151699.95/warc/CC-MAIN-20210725143345-20210725173345-00113.warc.gz"}
https://escholarship.org/uc/item/9277n5wj
Strong solutions to the 3D primitive equations with only horizontal dissipation: near $H^1$ initial data ## Strong solutions to the 3D primitive equations with only horizontal dissipation: near $H^1$ initial data In this paper, we consider the initial-boundary value problem of the three-dimensional primitive equations for oceanic and atmospheric dynamics with only horizontal viscosity and horizontal diffusivity. We establish the local, in time, well-posedness of strong solutions, for any initial data $(v_0, T_0)\in H^1$, by using the local, in space, type energy estimate. We also establish the global well-posedness of strong solutions for this system, with any initial data $(v_0, T_0)\in H^1\cap L^\infty$, such that $\partial_zv_0\in L^m$, for some $m\in(2,\infty)$, by using the logarithmic type anisotropic Sobolev inequality and a logarithmic type Gronwall inequality. This paper improves the previous results obtained in [Cao, C.; Li, J.; Titi, E.S.: Global well-posedness of the 3D primitive equations with only horizontal viscosity and diffusivity, Comm. Pure Appl.Math., Vol. 69 (2016), 1492-1531.], where the initial data $(v_0, T_0)$ was assumed to have $H^2$ regularity.
2023-02-06 17:30:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8068470358848572, "perplexity": 607.7934882710244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500356.92/warc/CC-MAIN-20230206145603-20230206175603-00170.warc.gz"}
https://cdsweb.cern.ch/collection/EP%20Preprints?ln=pl
# EP Preprints Ostatnio dodane: 2020-07-28 23:43 Cross-section measurements of top-quark pair production in association with a hard photon at 13 TeV with the ATLAS detector / Zoch, Knut 25 years after the top quark's discovery, the Large Hadron Collider at CERN produces proton-proton collision data on unprecedented scales at unprecedented energies – and has heralded an era of top-quark precision measurements [...] arXiv:2007.14701 ; CERN-THESIS-2020-081 ; II.Physik-UniG\"o-Diss-2020/01 II.Physik-UniGö-Diss-2020/01. - 171 p. Full text - Fulltext 2020-07-24 16:17 First branching fraction measurement of the suppressed decay $\Xi_c^0\to \pi^-\Lambda_c^+$ / LHCb Collaboration The $\Xi_c^0$ baryon is unstable and usually decays into charmless final states by the $c \to s u\overline{d}$ transition. [...] arXiv:2007.12096 ; CERN-EP-2020-129 ; LHCb-PAPER-2020-016 ; LHCB-PAPER-2020-016. - 2020. Fulltext - Related data file(s) - Fulltext 2020-07-23 11:27 First observation of the decay $\Lambda_b^0 \to \eta_c(1S) p K^-$ / LHCb Collaboration The decay $\Lambda_b^0 \to \eta_c(1S) p K^-$ is observed for the first time using a data sample of proton-proton collisions, corresponding to an integrated luminosity of 5.5 $fb^{-1}$, collected with the LHCb experiment at a centre-of-mass energy of 13 TeV. [...] arXiv:2007.11292 ; LHCb-PAPER-2020-012 ; CERN-EP-2020-124 ; LHCB-PAPER-2020-012. - 2020. Fulltext - Related data file(s) - Supplementary information - Fulltext 2020-07-14 12:16 Observation of enhanced double parton scattering in proton-lead collisions at $\sqrt{s_\mathrm{NN}}=8.16$ TeV / LHCb Collaboration A study of prompt charm-hadron pair production in proton-lead collisions at $\sqrt{s_\mathrm{NN}}= 8.16$ TeV is performed using data corresponding to an integrated luminosity of about 30 nb${}^{-1}$, collected with the LHCb experiment. [...] arXiv:2007.06945 ; LHCb-PAPER-2020-010 ; CERN-EP-2020-119 ; LHCB-PAPER-2020-010. - 2020. - 26 p. Fulltext - Related data file(s) - Supplementary information - Fulltext 2020-07-14 12:00 Technical Note N°92 : Calculation of Eddy curent effects in the vacuum chambers of the BOOSTER Synchrotron quadrupole and bending magnets / Asner, A MPS-INT-MA-67-11 ; CERN-MPS-INT-MA-67-11. - 1967. - 17 p. Full text 2020-07-09 14:01 First observation of the decay $B^0 \rightarrow D^0 \overline{D}{}^0 K^+ \pi^-$ / LHCb Collaboration The first observation of the decay $B^0 \rightarrow D^0 \overline{D}{}^0 K^+ \pi^-$ is reported using proton-proton collision data corresponding to an integrated luminosity of 4.7 $\mathrm{fb}^{-1}$ collected by the LHCb experiment in 2011, 2012 and 2016. [...] arXiv:2007.04280 ; LHCb-PAPER-2020-015 ; CERN-EP-2020-112 ; LHCB-PAPER-2020-015. - 2020. - 23 p, 23 p. Fulltext - Related data file(s) - Fulltext 2020-07-06 14:43 Searches for low-mass dimuon resonances / LHCb Collaboration Searches are performed for low-mass dimuon resonances, $X$, produced in proton-proton collisions at a center-of-mass energy of 13 TeV, using a data sample corresponding to an integrated luminosity of 5.1 fb$^{-1}$ and collected with the LHCb detector. [...] arXiv:2007.03923 ; LHCb-PAPER-2020-013 ; CERN-EP-2020-114 ; LHCB-PAPER-2020-013. - 2020. - 27 p, 27 p. 
Fulltext - Related data file(s) - Supplementary information - Fulltext 2020-07-01 16:57 Observation of structure in the $J/\psi$-pair mass spectrum / LHCb Collaboration Using proton-proton collision data at centre-of-mass energies of $\sqrt{s} = 7$, $8$ and $13\mathrm{\,TeV}$ recorded by the LHCb experiment at the Large Hadron Collider, corresponding to an integrated luminosity of $9\mathrm{\,fb}^{-1}$, the invariant mass spectrum of $J/\psi$ pairs is studied. [...] arXiv:2006.16957 ; CERN-EP-2020-115 ; LHCb-PAPER-2020-011 ; LHCB-PAPER-2020-011. - 2020. - 29 p. PRESSRELEASE - Related data file(s) - Fulltext - Fulltext 2020-06-09 16:19 Search for $CP$ violation in $\Xi_c^+\rightarrow pK^-\pi^+$ decays using model-independent techniques / LHCb Collaboration A first search for $CP$ violation in the Cabibbo-suppressed $\Xi_c^+\rightarrow pK^-\pi^+$ decay is performed using both a binned and an unbinned model-independent technique in the Dalitz plot. [...] arXiv:2006.03145 ; CERN-EP-2020-069 ; LHCb-PAPER-2019-026 ; LHCB-PAPER-2019-026. - 2020. - 26 p. Related data file(s) - Fulltext 2020-06-04 11:31 Longitudinal space-charge forces at transition / Sørenssen, A MPS-INT-MU-EP-66-1 ; CERN-MPS-INT-MU-EP-66-1. - 1966. - 48 p. Full text
2020-08-05 17:27:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9915183782577515, "perplexity": 6186.331987316154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735963.64/warc/CC-MAIN-20200805153603-20200805183603-00492.warc.gz"}
https://bison.inl.gov/Documentation/source/kernels/HydrogenDiffusion.aspx
# Hydrogen Diffusion in the Cladding

Calculates the diffusion of hydrogen in solid solution due to Fick's law and the Soret effect.

## Description

Hydrogen in solid solution in zirconium will precipitate to form zirconium hydrides as the temperature of the sample is decreased. If the sample is then re-heated, dissolution will begin at a higher temperature than was required for precipitation. This hysteresis effect is due to a volumetric strain caused by mismatch of the density of the hydrides and the surrounding alloy. Thus, there are two terminal solid solubility (TSS) curves, denoted $TSS_P$ for precipitation and $TSS_D$ for dissolution. Bison uses the Arrhenius fits from McMinn et al. (2000) for $TSS_P$ and $TSS_D$ (1).

Hydrogen in solid solution in zirconium diffuses under the influence of mass and temperature gradients by Fick's Law and the Soret effect, respectively. In standard form, the mass flux is

$\mathbf{J} = -D \nabla C_{ss} - \frac{D\, C_{ss}\, Q^*}{R T^2} \nabla T \qquad (2)$

where $C_{ss}$ is the concentration of hydrogen in solid solution, $D$ is the mass diffusivity, and $Q^*$ is the heat of transport for hydrogen in zirconium. Bison uses values of $D$ and $Q^*$ from Kammenzind et al. (1996) (3).

Note that since the stoichiometry of the hydride phase is fixed, there is little or no driving force for diffusion of hydrogen in the hydrides. In addition, the diffusivity of hydrogen in hydrides has been measured to be at least 3 times smaller than the diffusivity of hydrogen in zirconium. Thus, we do not account for diffusion of hydrogen in the hydride phase in Bison.

## Example Input Syntax

[./hdiffusion] # diffusion of hydrogen by OC
  type = HydrogenDiffusion
  variable = Css
  temp = temp
[../]

(test/tests/hydrogen/hydrogen.i)

## Input Parameters

• variableThe name of the variable that this Kernel operates on C++ Type:NonlinearVariableName Description:The name of the variable that this Kernel operates on
• tempCoupled Temperature C++ Type:std::vector Description:Coupled Temperature

### Required Parameters

• soret_scale1Soret effect scaling factor, 1 is default. Default:1 C++ Type:double Description:Soret effect scaling factor, 1 is default.
• soret1Soret effect switch, 1 is on (default). Default:1 C++ Type:int Description:Soret effect switch, 1 is on (default).
• blockThe list of block ids (SubdomainID) that this object will be applied C++ Type:std::vector Description:The list of block ids (SubdomainID) that this object will be applied

### Optional Parameters

• enableTrueSet the enabled status of the MooseObject. Default:True C++ Type:bool Description:Set the enabled status of the MooseObject.
• save_inThe name of auxiliary variables to save this Kernel's residual contributions to. Everything about that variable must match everything about this variable (the type, what blocks it's on, etc.) C++ Type:std::vector Description:The name of auxiliary variables to save this Kernel's residual contributions to. Everything about that variable must match everything about this variable (the type, what blocks it's on, etc.)
• use_displaced_meshFalseWhether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block the undisplaced mesh will still be used. Default:False C++ Type:bool Description:Whether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block the undisplaced mesh will still be used.
• control_tagsAdds user-defined labels for accessing object parameters via control logic.
C++ Type:std::vector Description:Adds user-defined labels for accessing object parameters via control logic. • seed0The seed for the master random number generator Default:0 C++ Type:unsigned int Description:The seed for the master random number generator • diag_save_inThe name of auxiliary variables to save this Kernel's diagonal Jacobian contributions to. Everything about that variable must match everything about this variable (the type, what blocks it's on, etc.) C++ Type:std::vector Description:The name of auxiliary variables to save this Kernel's diagonal Jacobian contributions to. Everything about that variable must match everything about this variable (the type, what blocks it's on, etc.) • implicitTrueDetermines whether this object is calculated using an implicit or explicit form Default:True C++ Type:bool Description:Determines whether this object is calculated using an implicit or explicit form • vector_tagsnontimeThe tag for the vectors this Kernel should fill Default:nontime C++ Type:MultiMooseEnum Description:The tag for the vectors this Kernel should fill • extra_vector_tagsThe extra tags for the vectors this Kernel should fill C++ Type:std::vector Description:The extra tags for the vectors this Kernel should fill • matrix_tagssystemThe tag for the matrices this Kernel should fill Default:system C++ Type:MultiMooseEnum Description:The tag for the matrices this Kernel should fill • extra_matrix_tagsThe extra tags for the matrices this Kernel should fill C++ Type:std::vector Description:The extra tags for the matrices this Kernel should fill
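For orientation only, here is a small numeric sketch of the flux expression quoted in the Description above; every number below is a made-up placeholder, not the McMinn or Kammenzind fitted values that Bison actually uses.

```python
# Illustrative evaluation of J = -D*dC/dx - (D*C*Qstar/(R*T**2))*dT/dx  (Fick + Soret terms).
# All values are hypothetical placeholders chosen only to show the structure of the expression.
D     = 1.0e-10   # m^2/s, hydrogen diffusivity (placeholder)
Qstar = 2.5e4     # J/mol, heat of transport (placeholder)
R     = 8.314     # J/(mol*K), gas constant

C_ss  = 80.0      # local hydrogen concentration in solid solution (placeholder units)
T     = 600.0     # K
dC_dx = -50.0     # concentration gradient (placeholder)
dT_dx = 1.0e3     # K/m, temperature gradient (placeholder)

fick_term  = -D * dC_dx
soret_term = -(D * C_ss * Qstar / (R * T**2)) * dT_dx
print(fick_term + soret_term)
```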
2020-11-26 22:20:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2302519828081131, "perplexity": 4396.279810697193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188947.19/warc/CC-MAIN-20201126200910-20201126230910-00336.warc.gz"}
https://mvi.readthedocs.io/en/v3/content/runPrograms.html
# 4. Running the programs

The MVI software package uses three main programs. All programs in the package can be executed within Windows or Linux environments. They can be run by typing the program name followed by a control file in the command prompt (Windows) or terminal (Linux). They can be executed directly on the command line or in a shell script or batch file. When a program is executed without any arguments, it will print the usage to screen.

## 4.4. Execution on a single computer

The command format and control/input file format on a single machine are described below. Within the command prompt or terminal, any of the programs can be called using:

program arg$_1$ [arg$_2$ $\cdots$ arg$_i$]

where:

• program: the name of the executable
• arg$_i$: a command line argument, which can be the name of a required or optional file.

Typing as the control file serves as a help function and returns an example input file. Some executables do not require control files and should be followed by multiple arguments instead. This will be discussed in more detail later in this section. Optional command line arguments are specified by brackets: [ ]

Each control file contains a formatted list of arguments, parameters and filenames in a combination and sequence specific to the executable which requires this control file. Different control file formats will be explained further in the document for each executable.

## 4.5. Execution on a local network or commodity cluster

The MVI program library's main programs are currently not set up to use the Message Passing Interface (MPI).
2019-06-26 17:50:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2548857033252716, "perplexity": 2456.150156403809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000414.26/warc/CC-MAIN-20190626174622-20190626200622-00084.warc.gz"}
http://doc.sccode.org/Classes/BufGrainI.html
Classes (extension) | BufGrainI : JoshGrain : UGen : AbstractFunction : ObjectExtension

Granular synthesis with sound sampled in a buffer and user supplied envelopes

Class Methods

BufGrainI.ar(trigger: 0, dur: 1, sndbuf, rate: 1, pos: 0, envbuf1, envbuf2, ifac: 0.5, interp: 2, mul: 1, add: 0)

Arguments:

trigger - A kr or ar trigger to start a new grain. If ar, grains after the start of the synth are sample accurate. The following args are polled at grain creation time.
dur - Size of the grain.
sndbuf - The buffer holding an audio signal.
rate - The playback rate of the sampled sound.
pos - The playback position for the grain to start with (0 is beginning, 1 is end of file).
envbuf1 - A buffer with a stored signal to be used for the envelope of the grain.
envbuf2 - A buffer with a stored signal to be used for the envelope of the grain.
ifac - An interpolation factor; interpolates between the two envelopes where 0 is all envbuf1, and 1 is all envbuf2.
interp - The kind of interpolation for the sampled sound in the grain (1 - none, 2 - linear, 4 - cubic).
mul
add

Examples

s.boot;

(
SynthDef(\buf_grain_test, { arg gate = 1, sndbuf, amp = 1, envbuf1, envbuf2;
    // 10 Hz grain trigger, 0.2 s grains, mouse-controlled rate and envelope mix
    Out.ar(0,
        BufGrainI.ar(Impulse.kr(10), 0.2, sndbuf, MouseX.kr(0.5, 8),
            PinkNoise.kr.range(0, 1), envbuf1, envbuf2, MouseY.kr(0, 1), 2,
            EnvGen.kr(
                Env([0, 1, 0], [1, 1], \sin, 1),
                gate, levelScale: amp, doneAction: 2)
        )
    )
}).add;
)

// assumes x holds the number of a previously allocated buffer
s.sendMsg(\b_free, x);
2018-01-16 23:37:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3690566122531891, "perplexity": 14049.204808440309}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886758.34/warc/CC-MAIN-20180116224019-20180117004019-00246.warc.gz"}
http://hotmath.com/hotmath_help/topics/statistical-questions.html
# Statistical Questions A statistical question is one for which you don't expect to get a single answer. Instead, you expect to get a variety of different answers, and you are interested in the distribution and tendency of those answers. For example, "How tall are you?" is not a statistical question. But "How tall are the students in your school?" is a statistical question. Central Tendency When the answers to a statistical question are numerical data, we can ask about the central tendency of that data. For example, we might want to know, roughly, "how tall are most people in your school?" However, this is not a precise question. There are ways of measuring of central tendency using a single number: you can find the mean (or average), the median, or the mode of the data. Variation Suppose you know that the average height of a student in a school is $5$ feet. You may still be interested in the spread of the data. Is everyone nearly $5$ feet tall, say within $2$ inches? Or are there also very short and very tall people in the school? The simplest measure of the variation is the range, calculated by subtracting the lowest value from the highest. More complex measures of variation include the interquartile range and the standard deviation or variance. Below is a graph of two data sets, showing the number of students in two schools that got a particular score on a $10$ -problem quiz. Both data sets have the same range ($10$), the same mean (a little more than $6$), and the same total number of students (you can't tell this easily from the graph, but it's true.) However, the heights of the blue blocks vary considerably compared to that of red blocks.
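As a concrete illustration of these measures (my own sketch with made-up quiz scores, not the data behind the graph described above):

```python
# Central tendency and variation for two hypothetical sets of quiz scores.
from statistics import mean, median, mode, pstdev

scores_a = [6, 6, 7, 6, 5, 7, 6, 6, 7, 5]     # tightly clustered around 6
scores_b = [0, 10, 2, 9, 6, 10, 1, 8, 7, 10]  # similar center, far more spread

for name, scores in [("A", scores_a), ("B", scores_b)]:
    print(name,
          "mean:", round(mean(scores), 2),
          "median:", median(scores),
          "mode:", mode(scores),
          "range:", max(scores) - min(scores),
          "std dev:", round(pstdev(scores), 2))
```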
2016-05-26 22:15:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7683548927307129, "perplexity": 406.1754578866557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276305.39/warc/CC-MAIN-20160524002116-00151-ip-10-185-217-139.ec2.internal.warc.gz"}
https://www.transtutors.com/questions/mcdermott-technologies-is-evaluating-a-new-project-that-requires-800-000-in-new-equ-71276.htm
# McDermott Technologies is evaluating a new project that requires $800,000 in new equipment...

McDermott Technologies is evaluating a new project that requires $800,000 in new equipment. McDermott estimates that the new project will generate $900,000 in annual sales at the end of each of the next four years and that operating costs (excluding depreciation) will equal $400,000. Suppose the firm depreciates the equipment using the straight-line method over four years and the firm's tax rate is 40%. If the project's WACC is 9.8%, what is the present value of the project's OCFs?
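A minimal sketch of the standard textbook calculation (after-tax operating income plus the straight-line depreciation tax shield, discounted at the WACC); this is my own working, not an official answer key, and rounding conventions may differ.

```python
# OCF = (Sales - Costs) * (1 - T) + Depreciation * T, repeated for 4 years, discounted at WACC.
equipment = 800_000
sales, costs = 900_000, 400_000
tax, wacc, years = 0.40, 0.098, 4

depreciation = equipment / years                        # 200,000 per year (straight-line)
ocf = (sales - costs) * (1 - tax) + depreciation * tax  # 380,000 per year

pv_ocf = sum(ocf / (1 + wacc) ** t for t in range(1, years + 1))
print(round(ocf, 2), round(pv_ocf, 2))   # roughly 380,000 and about 1,209,800
```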
2021-05-07 01:11:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32391589879989624, "perplexity": 8538.01018369808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.18/warc/CC-MAIN-20210506235514-20210507025514-00080.warc.gz"}
http://research.physics.illinois.edu/cta/movies/BHBH_Disk_Magnetized/Initial_Configuration.html
Initial Configuration Gaseous Disk The gaseous disk obeys a polytropic equation of state, $P=K\rho^\Gamma_0$, at $t=0$. It is evolved according to the ideal gas law $P=(\Gamma-1)\rho_0\epsilon$, where P is the pressure, K is the polytropic gas constant, ε is the internal specific energy, Γ is the adiabatic index, and ρ0 the rest-mass density. We chose Γ = 4/3, appropriate for radiation pressure-dominated, thermal disks, which are typically optically thick. The initial disk satisfies the density profile of a disk that would be in equilibrium about a single black hole with the same mass as the binary. We seed the initial disk with a small, purely poloidal B-field. Note that the goal of our work is to assess how the final relaxed state of the disk depends on the binary mass ratio. The initial data, which governs the early evolution, has no physical significance. Black Hole Binary The initial binary is in a quasiequilibrium circular orbit, and has a frequency of MΩ ~ 0.028. The matter and gravitational field variables are determined by using the conformal-thin-sandwich (CTS) formalism. BH equilibrium boundary conditions are imposed on the BH horizon. The mass ratio of the binary varies from 1:1 to 1:10. The metric evolution is treated under the approximation that the inspiral time scale due to GW emission is long compared to both the binary orbital period and the viscous time scale of the disk ("pre-decoupling epoch"). Hence, we can neglect the inspiral for multiple binary orbits. The CTS initial data we adopt possess a helical Killing vector, which implies that the gravitational fields are stationary in a frame corotating with the binary. As a result, we can perform the metric evolution in the center-of-mass frame of the binary by simply rotating the initial gravitational fields. This technique simplifies our computations substantially. Cooling Law To model "rapid" radiative cooling we set the cooling time scale equal to 10% of the local, Keplerian time scale $\tau_{cool}(r)/M = 0.1P_{Kep}(r)/M=0.1 \cdot 2\pi(r/M)^{3/2}$, where r is the cylindrical radial coordinate measured from the center of mass of the binary.
2018-11-22 10:47:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.880705714225769, "perplexity": 790.3300007629085}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746205.96/warc/CC-MAIN-20181122101520-20181122123520-00202.warc.gz"}
http://ejercicios-fyq.com/?Relation-between-frequency-and-wavelength-0001
# Relation between frequency and wavelength 0001

A spring is producing waves of frequency 4 Hz and wavelength 0.5 m. What is its velocity? What will be its wavelength if the frequency is doubled? What will be its wavelength if the frequency is reduced to half?

## SOLUTION

The wave's velocity is $v = \lambda f = 0.5\,\text{m} \times 4\,\text{Hz} = 2\,\text{m/s}$. If the velocity is constant, when the frequency is doubled the wavelength must be halved (0.25 m). In the same way, if the frequency is halved, the wavelength has to double (1 m).
2016-12-05 02:25:34
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9114211201667786, "perplexity": 4166.097201347864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541518.17/warc/CC-MAIN-20161202170901-00057-ip-10-31-129-80.ec2.internal.warc.gz"}
http://koreascience.or.kr/article/JAKO200412910520105.page
# An Introduction to the ${\lambda}$-Calculus (${\lambda}$-연산 소개)

• 정계섭 (덕성여자대학교 불어불문학과; Duksung Women's University, Department of French Language and Literature)
• Published : 2004.11.01

#### Abstract

The lambda calculus is a mathematical formalism in which functions can be formed, combined and used for computation that is defined by rewriting rules. With the development of computer science, many programming languages have been based on the lambda calculus (LISP, CAML, MIRANDA), which provides simple and clear views of computation. Furthermore, thanks to the "Curry-Howard correspondence", it is possible to establish a correspondence between proofs and computer programs. The purpose of this article is to make available, for didactic purposes, a subject matter that is not well known to the general public. The impact of the lambda calculus in logic and computer science still remains an area of further investigation.
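A tiny illustration (my own, not from the paper) of computation by function application alone, in the spirit of the lambda calculus: Church numerals and addition encoded as Python lambdas.

```python
# Church numerals: a number n is the function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting how many times f gets applied."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)))   # 5
```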
2020-07-16 14:45:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4181612432003021, "perplexity": 757.1260562483611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657169226.65/warc/CC-MAIN-20200716122414-20200716152414-00329.warc.gz"}
https://ideas.repec.org/a/cup/etheor/v31y2015i02p294-336_00.html
# Specification Tests For Lattice Processes

## Author

Listed:
• Hidalgo, Javier
• Seo, Myung Hwan

## Abstract

We consider an omnibus test for the correct specification of the dynamics of a sequence $\left\{ {x\left( t \right)} \right\}_{t \in Z^d }$ in a lattice. As it happens with causal models and d = 1, its asymptotic distribution is not pivotal and depends on the estimator of the unknown parameters of the model under the null hypothesis. One first main goal of the paper is to provide a transformation to obtain an asymptotic distribution that is free of nuisance parameters. Secondly, we propose a bootstrap analog of the transformation and show its validity. Thirdly, we discuss the results when $\left\{ {x\left( t \right)} \right\}_{t \in Z^d }$ are the errors of a parametric regression model. As a by-product, we also discuss the asymptotic normality of the least squares estimator of the parameters of the regression model under very mild conditions. Finally, we present a small Monte Carlo experiment to shed some light on the finite sample behavior of our test.

## Suggested Citation

• Hidalgo, Javier & Seo, Myung Hwan, 2015. "Specification Tests For Lattice Processes," Econometric Theory, Cambridge University Press, vol. 31(2), pages 294-336, April.
• Handle: RePEc:cup:etheor:v:31:y:2015:i:02:p:294-336_00

File URL: https://www.cambridge.org/core/product/identifier/S0266466614000310/type/journal_article
File Function: link to article abstract page

### JEL classification:

• C21 - Mathematical and Quantitative Methods - - Single Equation Models; Single Variables - - - Cross-Sectional Models; Spatial Models; Treatment Effect Models
• C23 - Mathematical and Quantitative Methods - - Single Equation Models; Single Variables - - - Models with Panel Data; Spatio-temporal Models
2020-09-19 07:14:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4849775433540344, "perplexity": 5345.433476448023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400190270.10/warc/CC-MAIN-20200919044311-20200919074311-00003.warc.gz"}
http://www.self.gutenberg.org/articles/eng/Vandermonde_polynomial
# Vandermonde polynomial

In algebra, the Vandermonde polynomial of an ordered set of n variables $X_1,\dots, X_n$, named after Alexandre-Théophile Vandermonde, is the polynomial: $$V_n = \prod_{1 \le i < j \le n} (X_j - X_i).$$ (Some sources use the opposite order $(X_i-X_j)$, which changes the sign $\binom{n}{2}$ times: thus in some dimensions the two formulae agree in sign, while in others they have opposite signs.) It is also called the Vandermonde determinant, as it is the determinant of the Vandermonde matrix. The value depends on the order of the terms: it is an alternating polynomial, not a symmetric polynomial.

## Alternating

The defining property of the Vandermonde polynomial is that it is alternating in the entries, meaning that permuting the $X_i$ by an odd permutation changes the sign, while permuting them by an even permutation does not change the value of the polynomial – in fact, it is the basic alternating polynomial, as will be made precise below. It thus depends on the order, and is zero if two entries are equal – this follows from the formula, but is also a consequence of being alternating: if two variables are equal, then switching them leaves the value unchanged (the two variables are identical) while, being an odd permutation, it also flips the sign, yielding $V_n = -V_n$ and thus $V_n = 0$ (assuming the characteristic is not 2, otherwise being alternating is equivalent to being symmetric). Conversely, the Vandermonde polynomial is a factor of every alternating polynomial: as shown above, an alternating polynomial vanishes if any two variables are equal, and thus must have $(X_i - X_j)$ as a factor for all $i \neq j$.

### Alternating polynomials

Thus, the Vandermonde polynomial (together with the symmetric polynomials) generates the alternating polynomials.

## Discriminant

Its square is widely called the discriminant, though some sources call the Vandermonde polynomial itself the discriminant. The discriminant (the square of the Vandermonde polynomial: $\Delta=V_n^2$) does not depend on the order of terms, as $(-1)^2=1$, and is thus an invariant of the unordered set of points. If one adjoins the Vandermonde polynomial to the ring of symmetric polynomials in n variables $\Lambda_n$, one obtains the quadratic extension $\Lambda_n[V_n]/\langle V_n^2-\Delta\rangle$, which is the ring of alternating polynomials.

### Characteristic classes

In characteristic classes, the Vandermonde polynomial corresponds to the Euler class, and its square (the discriminant) corresponds to the top Pontryagin class. This is formalized in the splitting principle, which connects characteristic classes to polynomials. In the language of stable homotopy theory, the Vandermonde polynomial (and alternating polynomials generally) is an unstable phenomenon, which corresponds to the fact that the Euler class is an unstable characteristic class. That is, the ring of symmetric polynomials in n variables can be obtained from the ring of symmetric polynomials in arbitrarily many variables by evaluating all variables above $X_n$ to zero: symmetric polynomials are thus stable or compatibly defined.
However, this is not the case for the Vandermonde polynomial or alternating polynomials: the Vandermonde polynomial in n variables is not obtained from the Vandermonde polynomial in $n+1$ variables by setting $X_{n+1}=0$.

## Vandermonde polynomial of a polynomial

Given a polynomial, the Vandermonde polynomial of its roots is defined over the splitting field; for a non-monic polynomial with leading coefficient a and roots $r_1,\dots,r_n$, one may define the Vandermonde polynomial as $a^{n-1}\prod_{1 \le i < j \le n}(r_j - r_i)$ (multiplying by a power of the leading coefficient) so that its square agrees with the discriminant.

## Generalizations

Over arbitrary rings, one instead uses a different polynomial to generate the alternating polynomials – see (Romagny, 2005).

### Weyl character formula (a vast generalization)

The Vandermonde polynomial can be considered a special case of the Weyl character formula, specifically the Weyl denominator formula (the case of the trivial representation) of the special unitary group $SU(n)$.

• Capelli polynomial

## References

• The fundamental theorem of alternating functions, by Matthieu Romagny, September 15, 2005
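As a quick numerical sanity check on the definition and the alternating property described above, here is a small SymPy sketch (purely illustrative; the variable names and the choice n = 3 are arbitrary):

import sympy as sp
from itertools import combinations

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = [x1, x2, x3]

# Vandermonde polynomial V_3: product of (x_j - x_i) over all pairs i < j
V = sp.Mul(*[xs[j] - xs[i] for i, j in combinations(range(3), 2)])

# Alternating: swapping two variables flips the sign, so V + V_swapped expands to 0
V_swapped = V.subs({x1: x2, x2: x1}, simultaneous=True)
assert sp.expand(V + V_swapped) == 0

# Its square (the discriminant) is symmetric: unchanged by the same swap
D = sp.expand(V**2)
assert sp.expand(D.subs({x1: x2, x2: x1}, simultaneous=True) - D) == 0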
2020-09-21 03:43:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 16, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7856219410896301, "perplexity": 1033.205141430262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198887.3/warc/CC-MAIN-20200921014923-20200921044923-00139.warc.gz"}
https://eoghan.speleo.dev/posts/2019/mounting-mac-drives.html
# Mounting Macintosh Drives

Date: 2019-04-08 Last Updated: 2019-04-08

So this March I decided it was time to give my MacBook Pro a new lease of life. It still had the original 500GB HDD that it had come with. I had not updated my version of OS X since Yosemite in 2015, and that is because I used Apple's Aperture for processing my photographs and Yosemite was the last version of OS X to support Aperture. This of course had the knock-on effect of making it harder and harder to use Homebrew as time wore on. So in a bout of frustration I went out and decided to buy an SSD, the idea being that I could swap my HDD out, do a fresh install of macOS as it is now, and if worst came to worst and I wanted Aperture back then I could wipe the SSD and clone the old HDD to it.

The install of Mojave took a scenic route via Mountain Lion (yes, you read that correctly), but the fun came when I tried to load my old HDD as an external drive. Strangely, my MacBook couldn't mount the drive. I happened to be at work while I was trying to do this, and my colleague suggested putting it into his Ubuntu workstation to see if it showed up. One apt-get install hfsprogs later and we could see the system and the partitions that my Macs (personal & work) say don't exist. Later we tried to mount it using an Arch workstation and the SATA to USB adapter I had borrowed for the task, and it couldn't be mounted. We then decided that we should clone the drive before messing with it any further; to do that we pulled out my old Debian workstation and put an Arch live USB in it, and when we saw that the three partitions were showing up there we decided to try and mount the drive. We had some constraints from the live USB, and needed to do some arithmetic and sector counting in order to be able to mount the drive. It was a Superuser article (linked below) that enabled us to figure out what the hell was going on and how to mount the drive on Arch.

We needed to calculate N, the size of the volume to be mounted, in bytes:

$N = \mathrm{logicalSectorSize} \times \mathrm{sizeOfTheVolume}$

We do that by getting the logical sector size from fdisk and the size of the volume (in sectors) from testdisk.

### Flow

• fdisk -l /dev/sda: get the logical sector size
• testdisk /dev/sda
• Select EFI GPT
• Select Analyse
• Then Quick Search. Note down the size in sectors of sda2.
• mount -t hfsplus -o ro,sizelimit=N /dev/sda2 /Volumes

Links: Homebrew, Superuser article
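For concreteness, the arithmetic from the flow above can be scripted; the sector size and sector count below are made-up placeholders, so substitute the values that fdisk and testdisk actually report for your drive:

# Illustrative values only -- read the real ones from fdisk -l and testdisk.
LOGICAL_SECTOR_SIZE=512            # bytes per logical sector, from: fdisk -l /dev/sda
VOLUME_SIZE_SECTORS=975093952      # size in sectors of sda2, from testdisk's Quick Search

# N = logicalSectorSize x sizeOfTheVolume, i.e. the volume size in bytes
N=$((LOGICAL_SECTOR_SIZE * VOLUME_SIZE_SECTORS))
echo "sizelimit=$N"

# Mount read-only, limiting the mapped size to the volume itself
sudo mount -t hfsplus -o ro,sizelimit=$N /dev/sda2 /Volumes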
2023-03-21 23:34:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23728007078170776, "perplexity": 2559.315623584609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943747.51/warc/CC-MAIN-20230321225117-20230322015117-00743.warc.gz"}
http://nrich.maths.org/9335
Functions and Graphs Stage 3 - Short Problems This is part of our collection of Short Problems. You may also be interested in our longer problems on Functions and Graphs - Stage 3. Circumference and Diameter KS 3 Short Challenge Level: Weekly Problem 15 - 2008 Which of these graphs could be the graph showing the circumference of a circle in terms of its diameter ? Paper Weight KS 3 Short Challenge Level: Weekly Problem 46 - 2008 How could you use this graph to work out the weight of a single sheet of paper? Squarely in the Grid KS 3 Short Challenge Level: How many squares can you draw on this lattice? Straight Line Spin KS 3 Short Challenge Level: Weekly Problem 40 - 2014 Can you draw the graph of $y=x$ after it has been rotated $90$ degrees clockwise about $(1,1)$? Spiral Snail KS 3 Short Challenge Level: A snail slithers around on a coordinate grid. At what position does he finish? Bucket of Water KS 3 Short Challenge Level: Weekly Problem 2 - 2006 Lisa's bucket weighs 21 kg when full of water. After she pours out half the water it weighs 12 kg. What is the weight of the empty bucket? Graph Area KS 3 Short Challenge Level: Can you find the area between this graph and the x-axis, between x=3 and x=7? Graphical Triangle KS 3 Short Challenge Level: Weekly Problem 35 - 2007 What is the area of the triangle formed by these three lines? Lattice Points on a Line KS 3 Short Challenge Level: How many lattice points are there in the first quadrant that lie on the line 3x + 4y = 59 ?
2017-04-30 11:01:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21160036325454712, "perplexity": 1644.9643795749623}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125074.20/warc/CC-MAIN-20170423031205-00236-ip-10-145-167-34.ec2.internal.warc.gz"}
https://new.rosettacommons.org/docs/latest/scripting_documentation/PyRosetta/PyRosetta-Toolkit-GUI
Documentation for the PyRosetta Toolkit GUI

Note: this application is deprecated. The PI for this application is Roland Dunbrack <Roland.Dunbrack@fccc.edu>

## Code and Demo

The PyRosetta Toolkit has unfortunately not been ported to PyRosetta-4 and is only distributed and compatible with PyRosetta-3. The application is available in PyRosetta-3 in app/pyrosetta_toolkit

PyRosetta Setup:
Optional PyMOL Integration Setup:
Optional SCWRL Integration Setup:
• Obtain SCWRL from: http://dunbrack.fccc.edu/scwrl4/
• Copy the exe/binary into the pyrosetta/apps/pyrosetta_toolkit/SCWRL directory in the respective OS directory.

Run the program:
• Use pyrosetta_toolkit after running SetPyRosettaEnvironment.sh (creates an alias).

## References

Jared Adolf-Bryfogle and Roland Dunbrack (2013) "The PyRosetta Toolkit: A Graphical User Interface for the Rosetta Software Suite." PLoS ONE, RosettaCON2012 special collection

## Purpose

This GUI is used to set up Rosetta-specific filetypes, do interactive modeling using Rosetta as its base, run Rosetta protocols, and analyze decoy results.

## Limitations

This application does not run all Rosetta applications. It does, however, run the most common ones. Although multiprocessing is implemented, full production runs of many Rosetta applications usually require a cluster. Always check the relevant Rosetta documentation on RosettaCommons. Options for all protocols can be added through the Option System Manager or through its protocol-specific UI. Symmetry is not supported at this time.

Load a pose through this menu first, either from a file or directly from the RCSB. If you have PyMOL set up, it will send the pose to PyMOL after it loads into Rosetta. Loading a pose will open up the PDB Cleaning window by default to prep the PDB for import into Rosetta.

PDB Cleaning Functions:
• renaming waters
• removing waters
• removing hetatm
• renaming DNA residues
• renaming MD atoms
• changing occupancies to 1.0
• renumbering pose from 1 + accounting for insertions
• checking that all residues are able to be read by Rosetta
• Current Imports: GUI Session, Param PathList, Loop File
• Current Exports: GUI Session, Param PathList, Loop File, basic Resfile, basic Blueprint, SCWRL seq file, FASTA (pose + regions)

### Main Window

The main window is used to run common protocols, some analysis, set up decoy output, and finally set a region. The default region is the whole protein. To specify loops for loop modeling, regions for minimization or residues for repacking, etc., add a region:
• Chain: can be added by just specifying the chain
• N-terminus: Leave the 'start' entry box blank, or enter any string; specify both chain and end residue number
• C-terminus: Leave the 'end' entry box blank, or enter any string; specify both chain and start residue number
• Residue: Have start + end be the same residue.

Sequence:
• A sequence of the pose or region can be found in the lower part of the screen. Placing your cursor at the left part of a residue will show its residue number, chain, and Rosetta resnum. This can be colored in PyMOL and used to select residues in the Per-Residue Control Window described below.

Output Options:
• The default output directory is the directory of the current pose. This and other output options for decoys can be set in the upper right and lower right areas of the main window.
• Multiple processors are used only for protocol runs of >1 decoy, as well as some functions from the PDBList menu.
Protocols:
• A few common minimization protocols can be run through the main window in addition to the protocols menu (Repack, Minimize, Relax, SCWRL).
• Option System Manager: Set up Rosetta command-line options using the options system manager.
• Configure Score Function: The default scorefunction is score12prime. The scorefunction set here is used by all protocols. See the protocols section for more information. Please use this window to change the scorefunction or the weights of any terms. More weight sets can be given in this window by selecting options->show ALL scorefunctions.
• Set General Protocol Options: A number of rounds can be set for all protocols. By default, if more than one round is set, the Boltzmann criterion is applied to the pose after each round using the current scorefunction. This is especially useful for packing or design. Here, you can control the temperature, the behavior of what happens each round, and whether the lowest-energy pose is recovered at the end of the run.

PyMOL Integration through the PyMOL Mover. A pose can be shown in PyMOL at any time. Regions can be colored, and the PyMOL Observer can be enabled, which updates PyMOL upon structural and energetic changes to the current pose.
• This is a simple window that can be used while using PyMOL. It is small so that you can incorporate it into your modeling workflow.
• For energies, blue is low energy and red is high energy. The spectrum cannot be changed at this time, but this will be added in the future.
• Labels for energies can be useful, although slow. Use the PyMOL commands to remove them once sent.

The Ligand/NCAA/PTM Manager
• Many non-canonicals are off by default for Rosetta, and this window will allow you to enable them and also design residues into them.
• Energy function optimization is also given here. Since the default electrostatic score term in the default scorefunction is statistical in nature, it will not work for non-canonicals. At the very least, changing this to the Coulombic score term is a more physically realistic representation of the system.
• The mm_std scorefunction (Douglas Renfrew, Eun Jung Choi, Brian Kuhlman, 'Using Noncanonical Amino Acids in Computational Protein-Peptide Interface Design' (2011) PLoS One) is currently undergoing broad tests for proteins and other systems, but seems to work very well for NCAA design and rotamers.
• The orbitals scorefunction, developed by Steven Combs et al. (Meiler Lab), is also a great alternative, which is residue independent. Tests are ongoing for all of these scorefunction changes.

Design Toolbox
• This is a window which will allow you to build a resfile for the loaded pose. You can do individual residues or a region of the pose. See the help menu or paper for information on what the values are in this window.
• The leftmost box is for residue categories. This is for easy selection and design. Use the etc category for typical Rosetta resfile options (NATRO, NATAA, ALLAA). Clicking one of the categories will populate the right box. Here you actually choose the residue types you want to open by double-clicking them. Selecting these will limit the design to only those chosen. Residues you have chosen then go into the lowermost box. Changing the residue number you have selected will change what goes in this box so you can edit your current design. Double-clicking a residue in the box will remove it.
• Click the Write Resfile button to open a save-as dialog.
Enable constraints

Insert data into B factor Column
• This is mainly an analysis utility, but it can be useful for data visualization, especially for publications.
• Data should be in a text or csv file, delimited by a space, tab, colon, or comma.
• Data should have at least 3 columns: pdb resnum, chain, data to insert, optional atom name.
• The UI for this utility function will allow you to choose the columns for each required piece as well as the delimiter. It will use your currently loaded pose and output a new pose with the data in the B-factor column. Load the pose into PyMOL or your favorite visualizer to see the data.

Per Residue Control and Analysis
• This is a great interactive modeling window. Here you can do many modeling tasks, including mutating individual residues, adding variants such as phosphorylations/acetylations, repacking rotamers, changing dihedrals, etc. You can also analyze rotamer probabilities and individual residue energies.

Analysis Movers:
• Interface Analyzer
• Loop Analyzer
• Packing Analyzer
• VIP

• A loaded pose is required for all protocols. Options not given in the main UI or their protocol-specific UI can be set using the options system manager. Please refer to the RosettaCommons manual for all protocols, as well as the references listed there and within the GUI.
• The default scorefunction is score12prime. Ligands/NCAA/PTM may require modifications to the scorefunction. Please use the advanced menu to make these modifications. Centroid-based protocols and other specific protocols require different scorefunctions. They are given below, and you will be prompted in the GUI if these are not set.

Currently Implemented:

High/Low Resolution Docking
• Will prompt for docking partners using the same notation as the docking application.
• ScoreFunction: Use the interchain_cen scorefunction for low-resolution and docking for high-resolution.

High/Low Resolution Loop Modeling (KIC/CCD)
• Will prompt for fragset (CCD) / extend options.
• Default cutpoint is around the middle of the loop.
• ScoreFunction: Use centroid + the score4L patch for low-resolution.

Fixedbb Design (UI)
• Will prompt for a resfile.

Floppytail (UI)

Grafting (UI)
• Graft a region from one protein into another.
• Many options are given. All are described in the UI. Choosing a graft method will give a description of that method.
• It is assumed that the loaded pose is the one you will graft into. You can pick the donor pose from the UI. They can both be the same PDB file.
• If the region is not already deleted, it will be deleted upon grafting.

FastRelax
ClassicRelax
PackRotamers (Rosetta)
PackRotamers (SCWRL)

Minimizer
• The Rosetta default minimizer is dfpmin. A different one can be set through the options system.
• The Relax application uses dfpmin_armijo_nonmonotone. Visit Minimization Overview for more information.
• The Rosetta default tolerance is 0.000001. Use the options system to set a different value by setting -run:min_tolerance.

• ROSIE
• Fragments
• Backrub
• Interface Alanine Scan
• DNA Interface Scan
• Scaffold Select

A PDBList is simply a list of full paths to PDB files, one on each line. It is used by the GUI to analyze decoys. Methods for making and using a PDBList reside here.

Main Functions:
• Lowest Energy - get lowest energy poses from a PDBList
• Energy vs RMSD - Calculate Energy vs RMSD + output a file that can be graphed in any graphing application.
• Output FASTA - Output FASTA with the sequence for each pose or only of the given region
• Design Statistics - Use FASTA to calculate design stats from a Rosetta design run. Outputs text files and an R plot. Needs a reference pose loaded. Does not work with insertions and deletions during the design yet.

## Tips

• Set default (command-line) options through the options system manager window
• Set the default scorefunction through the scorefunction window - used by all protocols.
• Advanced users: To add personal windows and functions to the GUI, see the developer HTML in the pyrosetta_toolkit/documentation directory.
• Please visit http://bugs.rosettacommons.org for any Toolkit-specific bugs

### Bashrc Setup

This is so you do not need to source SetPyRosettaEnvironment.sh each time you want to use the Toolkit. Takes a few minutes, but worth it. For more information on what a .bashrc file is: http://superuser.com/questions/49289/what-is-bashrc-file To do this, add a line to your ~/.bashrc that sources SetPyRosettaEnvironment.sh (a minimal sketch is given at the end of this page).

## Changes since last release

This will be the first release
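A minimal sketch of the Bashrc Setup step mentioned above; the install path below is a placeholder, so point it at wherever your PyRosetta copy actually lives:

# Append a line to ~/.bashrc so every new shell sources the PyRosetta environment
# (which in turn creates the pyrosetta_toolkit alias).
echo 'source /path/to/PyRosetta/SetPyRosettaEnvironment.sh' >> ~/.bashrc

# Reload the current shell to pick the change up immediately
source ~/.bashrc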
2021-10-28 14:13:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3156060576438904, "perplexity": 6408.909727103788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588341.58/warc/CC-MAIN-20211028131628-20211028161628-00604.warc.gz"}
https://www.keep-current.com/perceiver-general-perception-with-iterative-attention
Written by Michael (Mike) Erlihson, PhD. This review is part of a series of reviews in Machine & Deep Learning that are originally published in Hebrew, aiming to make them accessible in plain language under the name #DeepNightLearners.

Good night friends, today we are again in our section DeepNightLearners with a review of a Deep Learning article. Today I've chosen to review the article Perceiver: General Perception with Iterative Attention.

### Reviewer Corner:

Reading recommendation: a must (!!!) if you're into Transformers. For others - very recommended (the idea behind it is very cool!).
Clarity of writing: Medium plus
Required math and ML/DL knowledge level to understand the article: Basic familiarity with the Transformer architecture and with computational complexity.
Practical applications: Transformers with low complexity which can process long series of data (image patches, video frames, long text, etc.).

### Article Details:

Code link: available here, here, and here
Published on: 04/03/2021 on Arxiv
Presented at: Unknown

### Article Domains:

• Transformers with low complexity and low storage

### Mathematical Tools, Concepts and Marks:

• Transformer architecture basics

### Introduction:

The Transformer is a neural network architecture designed to process serial data. It was first introduced in 2017 in an article titled Attention is All You Need. Since then, Transformers have taken over the NLP world and become its default architecture. Through pre-training, Transformers are used to build meaningful data representations (embeddings), which in turn can be calibrated (fine-tuned) for a variety of downstream tasks. Recently, Transformers have also started invading the computer vision field. Among the articles which used Transformers for different computer vision tasks are An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, TransGan, Pretrained Image Transformer, and DETR. Lately, Transformers are also used for video processing (Knowledge Fusion Transformers for Video Action Classification). Usually in computer vision, the Transformer's inputs are image patches. Yet, there are several challenges that prevent a wider usage of Transformers in the visual domain:

• The inherent local dependencies that exist in images. Convolutional networks have traditionally been the stars of almost every computer vision task. They calculate lower layers' features based on nearby pixels and consequently make use of the local dependencies (connections) which exist inside images. Transformers, however, don't allow such local representations since the data representation is built through simultaneously analyzing all of the data connections. This difficulty can be overcome with a sophisticated weight initialization mechanism. Some used convolution layers as a first stage to build patch representations before feeding them into the Transformer.

• Squared (quadratic) computational complexity w.r.t. the transformer input. The Transformer builds its data representation by analyzing the connections between all the input parts using a mechanism called "Self-Attention" (SA) - the heart of the Transformer. In SA, a calculation for an input of length M has a complexity of O(M²). This becomes problematic in computation time and space for high-resolution images due to the large number of patches. In the past two years, several articles have suggested computationally cheaper variants of the Transformer, such as Reformer, Linformer, and Performer.
However (and AFAIK), these versions have not even reached the classic Transformer's performance on downstream tasks.

### The Article in Essence:

The squared complexity of the Transformer originates in the Self-Attention (SA) mechanism. It arises from the multiplication, whose result will be denoted L, of the matrices Q=Q'X and K=K'X, where Q', K' are the Query and Key matrices, and X is the transformer input. The matrices Q and K are both of size MxD, where M is the input length and D is the data representation dimension (embedding size). It is then easy to see where the O(M²) complexity of SA comes from. Remember that the output of SA is calculated as LV, where V=V'X and V' is the Value matrix. In contrast with most of the other articles, which offer computationally cheap alternatives for the Transformer using different approximations of the SA mechanism, this article proposes a different approach to attack this problem. They suggest learning (!) the Q matrix instead of calculating it from the input. Doing so, Q can be significantly smaller than the input length M, and the complexity of multiplying Q by K is no longer quadratic but linear - O(MN), where N is the number of rows of Q.

### The basic idea:

The article proposes calculating Q as Q'A, where A is a learned matrix called a latent array. The K and V matrices are calculated as in the original SA mechanism. Then, instead of calculating the Self-Attention expression for the input X, the article calculates Cross-Attention between the input X and the latent array A. The latent array A's length is significantly smaller than the input size. Hence, the squared complexity is avoided.

Note: The Cross-Attention mechanism already appears in the original encoder-decoder Transformer, where it is used to calculate the connectivity between the encoder's output and the decoder's intermediate representations on tasks such as automatic translation or text summarization.

### Detailed Explanation:

In Cross-Attention (CA), the matrices K and V are built similarly to SA by multiplying the input with the learned V' and K', respectively. Since the squared complexity (w.r.t. input size) limitation is removed, we can use longer series than the standard transformer. For example, when the Transformer input is a high-resolution image, it is commonly divided into 16x16 pixel patches due to its complexity limitation. Using a variable-sized latent array A overcomes this limitation and enables us to use longer input series. The article suggests flattening the input, turning it into a byte-array, before multiplying it with the Key and Value matrices. If the input is an image, each item in the byte array contains the pixel value. The Perceiver input can also be a long audio or video series. Furthermore, the article argues that the Perceiver input can be a combination of audio and video together, which was impossible in previous Transformer versions because they required architecture adjustments according to the input types. The Perceiver architecture is input agnostic. Impressive!

### The Perceiver architecture - detailed

Now that we have understood the basic principles of the Perceiver architecture, we can dive into the details. After calculating the Cross-Attention between the latent array and the input, the CA output is fed into a classic Transformer. In the article, they name it the Latent Transformer (LTr). The output size of the CA does not depend on the input size but on the latent array size, which is set according to the available computation resources.
Since the latent array size is usually much smaller than the original input size, passing it through the LTr has a reasonable complexity. The LTr architecture is similar to the GPT-2 architecture and is formed from the original Transformer decoder. The LTr output is fed again into the CA mechanism, together with the original input (the same K and V matrices are reused). The CA output is then fed to an additional LTr. By repeating this CA + LTr block, one can build a deep and robust architecture for constructing strong input representations. The "LTr"s can share identical weights, have different weights in each layer, or somehow combine the two (e.g., three sets of weights used all over). Think of the Perceiver as a multi-layer neural net where every layer is composed of a CA and an LTr.

### Intuition Corner:

The latent array can be seen as a set of learned questions about the input. An example of a potential query could be: measuring the relationship between a patch p at the center of an image and all the other patches inside a larger patch that contains p (in the first CA layer). The latent array in deeper layers of the Perceiver depends on the values calculated in previous layers, and similarly to convolutional networks, these arrays try to re-evaluate the semantic features of the image. The Perceiver also resembles an RNN where each layer receives the whole input.

### Positional encoding:

Self-Attention and Cross-Attention are agnostic to the order of the input items. The representation would remain the same even after permuting the input series. Clearly, there are cases where the item order is important, such as natural language, images, video, audio, etc. The positional information of the series items is added to the CA and SA through positional encoding (PE). PE encodes the relative position of each item in the input series. In the article, they use a Fourier-transform-based PE for the CA mechanism, similar to the ones used in BERT. In contrast, they use a learned PE for the LTr's SA mechanism. The topic of positional encoding is extensively discussed in the article. The authors have introduced several fascinating changes and tried to give an intuition for why they improve performance.

### Achievements:

The article compares the Perceiver's representations (embeddings) to several other self-supervised training methods (combined with a linear layer for classification) and to supervised SOTA methods on different domains: images, video, audio, audio + video, point clouds. In all these domains, the Perceiver performed better than the other unsupervised methods they checked, including transformer-based ones. Some of these methods were built for specific domain data by using the inherent characteristics of the data (such as ResNet in the image domain). Yet, the Perceiver scored slightly worse than the supervised methods when those methods used these inherent characteristics.

### P.S.

This remarkably interesting article suggests a cool method to overcome the Transformer's squared complexity. The suggested architecture is agnostic to the input structure and can be used as it is to construct data representations in different domains.

Also, check out Yam Peleg's Keras implementation here: #deepnightlearners

This post was written by Michael (Mike) Erlihson, Ph.D. Michael works in the cybersecurity company Salt Security as a principal data scientist.
Michael researches and works in the deep learning field while lecturing and making scientific material more accessible to the public audience.
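As a rough, self-contained illustration of the core trick discussed in the review (queries taken from a small learned latent array, keys and values from the full input), here is a minimal NumPy sketch of a single cross-attention step. All shapes are arbitrary and the "learned" matrices are random stand-ins, so this is not the Perceiver's actual configuration:

import numpy as np

rng = np.random.default_rng(0)

M, N, D = 10_000, 256, 64         # input length, latent array length (N << M), embedding dim

X = rng.standard_normal((M, D))   # flattened byte-array input (e.g. pixels)
A = rng.standard_normal((N, D))   # latent array (learned in the real model)

# random stand-ins for the learned projections Q', K', V'
Wq, Wk, Wv = (rng.standard_normal((D, D)) for _ in range(3))

Q = A @ Wq                        # (N, D): queries come from the latent array
K = X @ Wk                        # (M, D)
V = X @ Wv                        # (M, D)

# cross-attention scores cost O(M*N) instead of self-attention's O(M^2)
scores = (Q @ K.T) / np.sqrt(D)                      # (N, M)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the input axis
out = weights @ V                                    # (N, D): sized by the latent array, not the input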
2022-05-27 21:18:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31861868500709534, "perplexity": 1994.585764679626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663006341.98/warc/CC-MAIN-20220527205437-20220527235437-00162.warc.gz"}
https://www.cockroachlabs.com/blog/what-is-data-partitioning-and-how-to-do-it-right/
What is data partitioning, and how to do it right Most software projects start their lives linked to a simple, single-instance database. Something like Postgres or MySQL. But as companies grow, their data needs grow, too. Sooner or later, a single instance just isn’t going to cut it. That’s where data partitioning comes in. What is database partitioning? Database partitioning (also called data partitioning) refers to breaking the data in an application’s database into separate pieces, or partitions. These partitions can then be stored, accessed, and managed separately. Partitioning your data can help make your application more scalable and performant, but it can also introduce significant complexities and challenges. In this article, we’ll take a look at the advantages and disadvantages of data partitioning, and look at some strategies for partitioning more effectively. Partitioning can have many advantages, but these are some of the most common reasons that developers and architects choose to partition their data: 1. Improve scalability 2. Improve availability 3. Improve performance Data partitioning can improve scalability because running a database on a single piece of hardware is inherently limited. While it is possible to improve the capability of a single database server by upgrading its components — this is called vertical scaling — this approach has diminishing returns in terms of performance and inherent limitations in terms of networking (i.e., users located somewhere geographically far from the database will experience more latency). It also tends to be more expensive. However, if data is partitioned, then the database can be scaled horizontally, meaning that additional servers can be added. This is often a more economical way to keep up with growing demand, and it also allows for the possibility of locating different partitions in different geographic areas, ensuring that users across the globe can enjoy a low-latency application experience. Data partitioning can improve availability because running a database on a single piece of hardware means your database has a single point of failure. If the database server goes down, your entire database — and by extension, your application — is offline. In contrast, spreading the data across multiple partitions allows each partition to be stored on a separate server. The same data can also be replicated onto multiple servers, allowing the entire database to remain available to your application (and its users) even if a server goes offline. Data partitioning can improve performance in a variety of different ways depending on how you choose to deploy and configure your partitions. One common way that partitioning improves performance is by reducing contention — in other words, by spreading the load of user requests across multiple servers so that no single piece of hardware is being asked to do too much at once. Or another example: you might choose to partition your data in different geographic regions based on user location so that the data that users access most frequently is located somewhere close to them. This would reduce the amount of latency they experience when using your application. There are other potential advantages to data partitioning, but which specific advantages you might anticipate from partitioning your data will depend on the type of partitioning you choose, as well as the configuration options you select, the type of database you’re using, and more. Let’s dig into some of those details further. 
Types of database partitioning

Although there are other types of database partitioning, generally speaking, when people discuss partitioning their data, they're referring to either vertical or horizontal partitioning. In practice, partitioning is rarely quite as simple as splitting up a single table, but for illustration purposes here, we'll use the example of a database with a single table to demonstrate the differences between these two types of partitioning. Here's our example table:

id | username | city | balance
1 | theo | london | 213
2 | kee | portland | 75444
3 | julian | new york | 645
4 | jasper | boston | 342
... | ... | ... | ...
9998 | syd | shanghai | 5145
9999 | nigel | santiago | 4350
10000 | marichka | london | 873
10001 | luke | new york | 2091

Vertical partitioning

Vertical partitioning is when the table is split by columns, with different columns stored on different partitions. In vertical partitioning, we might split the table above up into two partitions, one with the id, username, and city columns, and one with the id and balance columns, like so.

Partition 1

id | username | city
1 | theo | london
2 | kee | portland
3 | julian | new york
4 | jasper | boston
... | ... | ...
9998 | syd | shanghai
9999 | nigel | santiago
10000 | marichka | london
10001 | luke | new york

Partition 2

id | balance
1 | 213
2 | 75444
3 | 645
4 | 342
... | ...
9998 | 5145
9999 | 4350
10000 | 873
10001 | 2091

Generally speaking, the reason to partition the data vertically is that the data on the different partitions is used differently, and it thus makes sense to store it on different machines. Here, for example, it might be the case that the balance column is updated very frequently, whereas the username and city columns are relatively static. In that case, it could make sense to partition the data vertically and locate Partition 2 on a high-performance, high-throughput server, while the slower-moving Partition 1 data could be stored on less performant machines with little impact on the user's application experience.

Horizontal partitioning and sharding

Horizontal partitioning is when the table is split by rows, with different ranges of rows stored on different partitions. To horizontally partition our example table, we might place the first 500 rows on the first partition and the rest of the rows on the second, like so:

Partition 1

id | username | city | balance
1 | theo | london | 213
2 | kee | portland | 75444
3 | julian | new york | 645
4 | jasper | boston | 342
... | ... | ... | ...
500 | alfonso | mexico city | 435435

Partition 2

id | username | city | balance
501 | tim | l.a. | 24235
... | ... | ... | ...
9998 | syd | shanghai | 5145
9999 | nigel | santiago | 4350
10000 | marichka | london | 873
10001 | luke | new york | 2091

Horizontal partitioning is typically chosen to improve performance and scalability. When running a database on a single machine, it can sometimes make sense to partition tables to (for example) improve the performance of specific, frequently used queries against that data. Often, however, horizontal partitioning splits tables across multiple servers for the purposes of increasing scalability. This is called sharding.

Sharding

Sharding is a common approach employed by companies that need to scale a relational database. Vertical scaling — upgrading the hardware on the database server — can only go so far. At a certain point, adding additional machines becomes necessary. But splitting the database load between multiple servers means splitting the data itself between servers. Generally, this is accomplished by splitting the table into ranges of rows as illustrated above, and then spreading those ranges, called shards, across the different servers. Since the load of requests can be spread across different shards depending on the data being queried, sharding the database can improve overall performance.
As new data is added, new shards can be created — although this often involves significant manual work — to keep up with the increasing size of the workload coming from the application. While both vertical and horizontal partitioning, and particularly sharding, can offer some real advantages, they also bring significant costs. Let’s dive into the challenges of partitioning data in the real world, and look at some ways to address these challenges. Data partitioning in the real world: Costs, challenges, and strategies We’ve talked about the advantages of data partitioning, and there are many! However, breaking your data into parts and spreading it across different servers does add complexity, and that complexity comes with significant additional costs. To understand how, let’s take a look at one of the most common real-world examples of data partitioning: sharding a SQL database to keep up with a growing userbase. Sharding a legacy SQL database Let’s imagine a pretty common scenario: a growing company that has a single-instance Postgres database linked to its production application, as well as a couple of backup copies that aren’t typically accessed by the application, but can be used to ensure no data is lost in the event that the primary database goes down. As this company grows, it will eventually bump up against the inherent performance limitations of this kind of configuration. A single server, even a powerful one, is simply not powerful enough to handle the load of transactions coming in from their fast-growing userbase. Engineers at the company foresee this problem, and begin making plans to shard the database before the load becomes heavy enough that users notice decreased performance. This decision alone may necessitate additional hires if the company doesn’t already have skilled, senior SRE and DevOps resources, but even with the right players already on staff, designing and testing the new system will require a significant amount of time. For example, imagine that the team comes up with an initial plan to scale up by splitting the database into three active shards, each with two passive replica shards for backup. Before they can actually execute this plan, they also need to spend time designing a few more things: 1. An approach to splitting the data that will result in the workload being spread relatively evenly among the three active shards so that no single shard receives a disproportionate amount of queries from the application and becomes overloaded 2. Application code that routes queries from the application to the correct shard (keeping in mind that this will change over time as you add or remove shards) 3. Support code for all other systems that interact with the database (e.g., if a data pipeline or changefeed sends updates to an analytics database or other application services, that will likely need to be adapted to work with the new sharded design while minimizing the possibility of consistency issues) 4. Approaches to handling once-simple procedures that become complex when multiple shards are involved, such as reading a range of data that spans more than one shard, executing a transaction that updates rows on two different shards, etc. 5. Approaches to updating both the database software and the database schema Designing these systems, writing the code, testing the changes, and migrating the workload onto the new, sharded system can take significant amounts of time and money. Maintaining all of the code required also requires significant engineering time and expertise. 
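To make the second item in the list above concrete, here is a hypothetical sketch of the kind of routing shim an application team ends up owning; the shard names, ID ranges, and connection strings are all invented for illustration:

# Hypothetical shard map and router (not from any real deployment).
SHARDS = {
    "shard_1": {"dsn": "postgres://db-1.internal/app", "id_range": range(1, 5_000_000)},
    "shard_2": {"dsn": "postgres://db-2.internal/app", "id_range": range(5_000_000, 10_000_000)},
    "shard_3": {"dsn": "postgres://db-3.internal/app", "id_range": range(10_000_000, 15_000_000)},
}

def shard_for_user(user_id: int) -> str:
    """Return the connection string of the shard holding this user's row."""
    for shard in SHARDS.values():
        if user_id in shard["id_range"]:
            return shard["dsn"]
    raise LookupError(f"no shard configured for user_id={user_id}")

# Every query path in the application has to go through something like this,
# and the map above has to be edited (and data rebalanced) whenever shards change.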
Your engineering team will have to write and maintain a lot of code for doing things you would expect the database to do on its own in a single-instance deployment. Schema and database software updates, which are inevitable from time to time, can also add engineering hours, and executing them often means taking the database (and by extension the application) entirely offline for at least a brief period. Along with those relatively immediate costs, the complexity of managing a sharded database will scale up as the company scales. As shards or regions are added, significant portions of the system will need to be reworked — the application code governing query routing logic, for example, will need to take the new shards into account. The manual work involved here is significant, and it will be required each and every time the system is scaled up. For example, here's a reference architecture for a real-money gaming application with a sharded database and operations in three separate regions. This kind of architecture creates a better application experience for users, but building and maintaining it is complex and expensive, and grows proportionally more complex and expensive as the company grows. If a new shard is added to California, the California shard routing logic will need to be reworked manually. When the company opens business in a new state, the geographic routing logic will require attention. Often, the data itself will need to be redistributed each time new shards are added, to reduce the chances of having particular servers handling disproportionate percentages of the overall workload as it increases. Needless to say, as the system scales up and the complexity scales up, so do the costs. Given that a single SRE can cost well over $200,000 per year, the actual cost of running a system like this can quickly explode well beyond the apparent "sticker price" of cloud/hardware costs. Once, there was little alternative to this approach. These costs were simply the price of entry for companies that needed transactional databases to be performant at scale. Now, however, a new class of databases exists that can automate a lot of the complexity of sharding, providing developers with the familiar benefits of a single-instance relational database even at global scale, and eliminating much of the manual work that can make sharded systems so costly.

Distributed SQL databases and "invisible" sharding

A distributed SQL database is a next-generation SQL database that has been designed with data partitioning in mind. Distributed SQL databases partition and distribute data automatically, offering the automated elastic scalability and resilience of NoSQL databases without compromising on the ACID transactional guarantees that are required for many OLTP workloads. How distributed SQL databases work is beyond the scope of this article, but from an architect or developer perspective, a distributed SQL database such as CockroachDB can be treated almost exactly like a single-instance Postgres database. Unlike with sharding, there's no need to write application logic to handle data routing for partitions or specific geographies. The database itself handles all of that routing automatically. Challenges like maintaining consistency across database nodes are also handled automatically. And in the case of CockroachDB specifically, schema changes and software updates can be executed online, without downtime.
Compare the two diagrams below (traditional sharding on the left, CockroachDB on the right) and it's easy to see how using a distributed SQL database can save significant amounts of both time and money for your team. And where adding regions or shards results in significant additional manual work in a sharded approach (left), adding nodes in CockroachDB is as simple as clicking a button — and even that can be easily automated. Adding regions, too, is as easy as a single line of SQL: ALTER database foo ADD region "europe-west1"; Charlie Custer Charlie is a former teacher, tech journalist, and filmmaker who's now combined those three professions into writing and making videos about databases and application development (and occasionally messing with NLP and Python to create weird things in his spare time).
2023-03-31 16:01:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2186720371246338, "perplexity": 1786.908409503435}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00321.warc.gz"}
http://physics.stackexchange.com/questions/51174/spectral-radiance-unit-conversion/51176
# Spectral radiance unit conversion [closed]

I have spectral radiance data in SRUs (spectral radiance units), as a function of wavelength: $$a = \mu W\, cm^{-2}\, sr^{-1}\, nm^{-1}$$ However, I am working with software which requires my data in the form: $$b = mW\, cm^{-2}\, sr^{-1}\, \mu m^{-1}$$ Are these two units equivalent?

Well, $1\textrm{ mW}=1000\, \mu \textrm{W}$ and $1\,\mu\textrm{m}=1000\textrm{ nm}$, so that $$1\textrm{ mW}\,\textrm{cm}^{-2}\,\textrm{sr}^{-1}\,\mu\textrm{m}^{-1}= 1000\,\mu\textrm{W}\,\textrm{cm}^{-2}\,\textrm{sr}^{-1}\left(1000\textrm{ nm}\right)^{-1}= 1 \,\mu\textrm{W}\,\textrm{cm}^{-2}\,\textrm{sr}^{-1}\,\textrm{nm}^{-1}$$ and both units are exactly equivalent.
2016-05-03 10:59:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.792639434337616, "perplexity": 1430.3715289618362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121423.81/warc/CC-MAIN-20160428161521-00095-ip-10-239-7-51.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/390067/thmtools-footnotes-in-theorem-names-and-notes
# thmtools: Footnotes in theorem names and notes I am using the thmtools package as a frontend to amsthm to typeset various types of theorems. Sometimes I feel the need to put a footnote either directly at the theorem name or inside the theorem note. To be precise, when this is my document: \documentclass{scrartcl} \usepackage{amsthm} \usepackage{thmtools} \declaretheorem[name=Theorem]{mytheorem} \begin{document} \begin{mytheorem}[Some reference] This statement is true. \end{mytheorem} \end{document} I'd like to put footnotes at the positions indicated here: For position 1, the solution given here for amsthm is working fine with thmtools when changing \thetheorem to \themytheorem. For position 2 there is no problem when just using amsthm without thmtools: something like \begin{mytheorem}[Some reference\footnotemark] followed by a \footnotetext{...} nearby just works. However, when using thmtools this results in the following error: ! Argument of \@begintheorem has an extra }. <inserted text> \par Of course, after carefully phrasing a question and preparing a MWE, I figured out how to achieve the desired result. This seems to be the common problem of a fragile command in a moving argument and using \protect fixes that. So the following works: \documentclass{scrartcl} \usepackage{amsthm} \usepackage{thmtools} \declaretheorem[name=Theorem]{mytheorem} \begin{document} \begin{mytheorem}[Some reference\protect\footnotemark] \footnotetext{This is a footnote} This statement is true. \end{mytheorem} \end{document}
2022-06-25 05:40:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8227068781852722, "perplexity": 3504.755663823889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034170.1/warc/CC-MAIN-20220625034751-20220625064751-00596.warc.gz"}
https://docs.scipy.org/doc/scipy-1.2.1/reference/generated/scipy.signal.spectrogram.html
# scipy.signal.spectrogram¶ scipy.signal.spectrogram(x, fs=1.0, window=('tukey', 0.25), nperseg=None, noverlap=None, nfft=None, detrend='constant', return_onesided=True, scaling='density', axis=-1, mode='psd')[source] Compute a spectrogram with consecutive Fourier transforms. Spectrograms can be used as a way of visualizing the change of a nonstationary signal’s frequency content over time. Parameters: x : array_like Time series of measurement values fs : float, optional Sampling frequency of the x time series. Defaults to 1.0. window : str or tuple or array_like, optional Desired window to use. If window is a string or tuple, it is passed to get_window to generate the window values, which are DFT-even by default. See get_window for a list of windows and required parameters. If window is array_like it will be used directly as the window and its length must be nperseg. Defaults to a Tukey window with shape parameter of 0.25. nperseg : int, optional Length of each segment. Defaults to None, but if window is str or tuple, is set to 256, and if window is array_like, is set to the length of the window. noverlap : int, optional Number of points to overlap between segments. If None, noverlap = nperseg // 8. Defaults to None. nfft : int, optional Length of the FFT used, if a zero padded FFT is desired. If None, the FFT length is nperseg. Defaults to None. detrend : str or function or False, optional Specifies how to detrend each segment. If detrend is a string, it is passed as the type argument to the detrend function. If it is a function, it takes a segment and returns a detrended segment. If detrend is False, no detrending is done. Defaults to ‘constant’. return_onesided : bool, optional If True, return a one-sided spectrum for real data. If False return a two-sided spectrum. Note that for complex data, a two-sided spectrum is always returned. scaling : { ‘density’, ‘spectrum’ }, optional Selects between computing the power spectral density (‘density’) where Sxx has units of V**2/Hz and computing the power spectrum (‘spectrum’) where Sxx has units of V**2, if x is measured in V and fs is measured in Hz. Defaults to ‘density’. axis : int, optional Axis along which the spectrogram is computed; the default is over the last axis (i.e. axis=-1). mode : str, optional Defines what kind of return values are expected. Options are [‘psd’, ‘complex’, ‘magnitude’, ‘angle’, ‘phase’]. ‘complex’ is equivalent to the output of stft with no padding or boundary extension. ‘magnitude’ returns the absolute magnitude of the STFT. ‘angle’ and ‘phase’ return the complex angle of the STFT, with and without unwrapping, respectively. f : ndarray Array of sample frequencies. t : ndarray Array of segment times. Sxx : ndarray Spectrogram of x. By default, the last axis of Sxx corresponds to the segment times. periodogram Simple, optionally modified periodogram lombscargle Lomb-Scargle periodogram for unevenly sampled data welch Power spectral density by Welch’s method. csd Cross spectral density by Welch’s method. Notes An appropriate amount of overlap will depend on the choice of window and on your requirements. In contrast to welch’s method, where the entire data stream is averaged over, one may wish to use a smaller overlap (or perhaps none at all) when computing a spectrogram, to maintain some statistical independence between individual segments. It is for this reason that the default window is a Tukey window with 1/8th of a window’s length overlap at each end. New in version 0.16.0. 
References [1] Oppenheim, Alan V., Ronald W. Schafer, John R. Buck “Discrete-Time Signal Processing”, Prentice Hall, 1999. Examples >>> from scipy import signal >>> import matplotlib.pyplot as plt >>> import numpy as np Generate a test signal, a 2 Vrms sine wave whose frequency is slowly modulated around 3kHz, corrupted by white noise of exponentially decreasing magnitude sampled at 10 kHz. >>> fs = 10e3 >>> N = 1e5 >>> amp = 2 * np.sqrt(2) >>> noise_power = 0.01 * fs / 2 >>> time = np.arange(N) / float(fs) >>> mod = 500*np.cos(2*np.pi*0.25*time) >>> carrier = amp * np.sin(2*np.pi*3e3*time + mod) >>> noise = np.random.normal(scale=np.sqrt(noise_power), size=time.shape) >>> noise *= np.exp(-time/5) >>> x = carrier + noise Compute and plot the spectrogram. >>> f, t, Sxx = signal.spectrogram(x, fs) >>> plt.pcolormesh(t, f, Sxx) >>> plt.ylabel('Frequency [Hz]') >>> plt.xlabel('Time [sec]') >>> plt.show() Note, if using output that is not one sided, then use the following: >>> f, t, Sxx = signal.spectrogram(x, fs, return_onesided=False) >>> plt.pcolormesh(t, np.fft.fftshift(f), np.fft.fftshift(Sxx, axes=0)) >>> plt.ylabel('Frequency [Hz]') >>> plt.xlabel('Time [sec]') >>> plt.show()
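One extra illustration that is not in the SciPy documentation itself: since Sxx typically spans several orders of magnitude, it is common to plot the spectrogram on a decibel scale. A hedged sketch reusing the one-sided f, t, Sxx arrays from the first example above (the small offset only guards against taking the log of zero): >>> Sxx_dB = 10 * np.log10(Sxx + 1e-12) >>> plt.pcolormesh(t, f, Sxx_dB) >>> plt.colorbar(label='Power [dB]') >>> plt.ylabel('Frequency [Hz]') >>> plt.xlabel('Time [sec]') >>> plt.show()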
2021-05-08 04:01:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1773175746202469, "perplexity": 7181.007983708216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988837.67/warc/CC-MAIN-20210508031423-20210508061423-00463.warc.gz"}
https://robotics.stackexchange.com/questions/7641/mapping-formats-for-small-autonomous-robots
# Mapping formats for small autonomous robots I have some robot software I'm working on (Java on Android) which needs to store a pre-designed map of a playing field to be able to navigate around. The field's not got any fancy 3d structure, the map can be 2d. I've been trying to find a good format to store the maps in. I've looked into SVGs and DXFs, but neither one is really designed for the purpose. Is there any file format specifically designed for small, geometric, robotics-oriented maps? The field I'd be modelling is this one: • How about some XML-file? Define nodes for geometric primitives and model the obstacles with them. Jul 11 '15 at 17:18 • You could have a look at GDAL (gdal.org), which has a comprehensive list of raster and vector-based standard map formats (gdal.org/ogr_formats.html). It has Java bindings, but I don't know if it can be built for Android. Jul 27 '15 at 17:22
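The thread never settles on a format, but following the first commenter's XML suggestion, a minimal hand-rolled map might look like the sketch below. Every element and attribute name here is invented for illustration (there is no standard behind them), with coordinates in, say, centimetres:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<field width="366" height="366" units="cm">
  <!-- fixed obstacles described as geometric primitives -->
  <rect   id="ramp" x="120" y="0"   w="60" h="60"/>
  <line   id="wall" x1="0"  y1="183" x2="90" y2="183"/>
  <circle id="goal" cx="300" cy="300" r="15"/>
</field>
```

Android ships with an XML pull parser in the framework, so a small format like this can be read on-device without extra dependencies.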
2021-12-01 13:42:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34767866134643555, "perplexity": 2041.6711424136047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360803.0/warc/CC-MAIN-20211201113241-20211201143241-00219.warc.gz"}
https://www.physicsforums.com/threads/notation-question-about-min-and-max-functions.563341/
Notation question about min and max functions 1. Dec 28, 2011 stevenb I have a general question about the notation of min and max functions. I'm wondering if there is an accepted notation, used by mathematicians, for the min and max functions, other than the usual min(x,y) and max(x,y) that I typically see. To give similar examples for what I'm looking for. absolute value: abs(x) is notated as |x| round to lower integer: floor(x) is notated as $$\lfloor x \rfloor$$ round to upper integer: ceil(x) is notated as $$\lceil x \rceil$$ Lately, I've been using min and max often and it would be nice to have a shorthand notation. I could make up my own, but I don't want to use a notation that isn't an accepted standard. So far, my searching indicates that I'm stuck with min and max, but I figured this forum would be a good place to see if anybody is aware of any obscure notation that would be acceptable to mathematicians. 2. Dec 28, 2011 micromass If x and y are real numbers, then the maximum is often denoted as $x\vee y$. The minimum is $x\wedge y$. In fact, this notation holds in situations more general than real numbers, but then they represent the supremum and the infimum. The supremum of an arbitrary set (if it exists) can be denoted as $\bigvee X$. 3. Dec 28, 2011 stevenb Thank you very much. That is interesting. There is some potential for those symbols to have other meanings, but for the context I'm working in, I think this can work well. I'm wondering if it is proper for me to use this notation to denote functions, and if so, the proper way to write it. For example if I have f(t)=max(0, g(t)), I'm defining the function f(t) to be equal to g(t) as long as g(t) >=0, and zero otherwise. This is more compact of a notation than a case definition. I'm tempted to write the following. $$f(t)=0\bigvee g(t)$$ Is this OK, or is it abusing the notation since g(t) is a real function and not a real number? 4. Dec 28, 2011 micromass Both 0 and g(t) are real numbers, so there is no problem in writing $0\vee g(t)$. Writing $0\vee g$ on the other hand is a little abuse of notation. But it should be clear if you know what you are doing. 5. Dec 28, 2011 Devil Doc Just out of curiosity, what level algebra is this symbolism used for? This is the first time I've seen it. I am familiar with max and min, just not in this notation. Thanks. 6. Dec 28, 2011 micromass It is quite standard notation. It is the notation of lattice theory. It is often used in real analysis books and beyond. 7. Dec 28, 2011 stevenb Excellent. This will work very well for me then. So, looking at a few on-line references on lattice theory, I see that notation is used extensively, as you said. I even notice some nice looking theorems that might prove useful for me in what I'm doing. Thank you very much for this helpful information. 8. Jan 3, 2012 Devil Doc Thanks for the reply and information, micromass. I do appreciate it.
2018-10-22 10:06:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8231321573257446, "perplexity": 479.57631327669765}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515029.82/warc/CC-MAIN-20181022092330-20181022113830-00008.warc.gz"}
https://math.stackexchange.com/questions/2516633/checking-for-differentiability-partial-derivatives-of-a-multivariable-function
# Checking for differentiability,partial derivatives of a multivariable function Let $f : \mathbb{R}^2 \to \mathbb{R}$ be defined as $$f(x,y)=\begin{cases} (x^2+y^2)\cos \frac{1}{\sqrt {x^2+y^2}}, & \text{for (x,y) \ne (0,0)} \\ 0, & \text{for (x,y) = (0,0)} \end{cases}$$ then check whether its differentiable and also whether its partial derivatives ie $f_x,f_y$ are continuous at $(0,0)$. I dont know how to check the differentiability of a multivariable function as I am just beginning to learn it. For continuity of partial derivative I just checked for $f_x$ as function is symmetric in$y$and $x$. So $f_x$ turns out to be $$f_x(x,y) = 2x\cos \left(\frac {1}{\sqrt {x^2+y^2}}\right)+\frac {x}{\sqrt {x^2+y^2}}\sin \left(\frac {1}{\sqrt{x^2+y^2}}\right)$$ which is definitely not $0$ as $(x,y)\to (0,0)$. Same can be stated for $f_y$. But how to proceed with the first part? Thanks! • You should compute $f_x,f_y$ at $(0,0)$ by definition. – user223391 Nov 12 '17 at 13:58 • They are $0$ . Then what to do? – Archis Welankar Nov 12 '17 at 14:01 we have $$\frac{\partial f(0,0)}{\partial y}=\lim_{h\to 0}h\cos\left(\frac{1}{h}\right)=0$$ since $|h\cos\left(\frac{1}{h}\right)|\le |h|$ which tends to $0$ As stated in the answer by @MeeSeongIm, the partial derivatives at $(0,0)$ are $\partial_x f(0,0) = 0$ and $\partial_y f(0,0) = 0$. The only candidate for the differential at $(0,0)$ is: $$Df(0,0) = \begin{bmatrix} \partial_x f(0,0) & \partial_y f(0,0)\end{bmatrix} = \begin{bmatrix} 0 & 0\end{bmatrix}$$ which is simply the zero-functional. We have: \begin{align}\lim_{(h_1,h_2) \to (0,0)}\frac{\left|f(h_1,h_2) - f(0,0) - Df(0,0)(h_1,h_2)\right|}{\|(h_1,h_2)\|} &= \lim_{(h_1,h_2) \to (0,0)}\frac{h_1^2+h_2^2}{\sqrt{h_1^2+h_2^2}}\cdot \left|\cos\frac1{\sqrt{h_1^2+h_2^2}}\right|\\ &= \lim_{(h_1,h_2) \to (0,0)}\sqrt{h_1^2+h_2^2}\cdot \underbrace{\left|\cos\frac1{\sqrt{h_1^2+h_2^2}}\right|}_{\le 1}\\ &= 0 \end{align} Hence, $f$ is differetiable at $(0,0)$ with the differential being $0$, even though the partial derivatives are not continuous at $(0,0)$. We have \begin{align*} f_x(0,0) &= \lim_{h\rightarrow 0} \frac{f(0+h,0)-f(0,0)}{h} \\ &= \lim_{h\rightarrow 0} \frac{h^2 \cos \left(\frac{1}{h} \right)-0}{h} \\ &= \lim_{h\rightarrow 0} h \cos \left(\frac{1}{h}\right). \\ \end{align*} So $$|f_x(0,0)| = \left| \lim_{h\rightarrow 0} h \cos \left(\frac{1}{h}\right) \right| \leq \lim_{h\rightarrow 0} h =0.$$ So $f_x(0,0) =0$. Similarly $f_y(0,0)=0$. To show that $f_x$ is continuous at $(0,0)$, we need to show that $$\lim_{(x,y)\rightarrow (0,0)} f_x(x,y) = f_x(0,0).$$ However, consider the path $y=0$. Then \begin{align*} \lim_{(x,y)\rightarrow (0,0)} \frac{x }{\sqrt{x ^2+y^2}} \sin \left(\frac{1}{\sqrt{x^2+y^2}}\right) &+ 2 x \cos \left(\frac{1}{\sqrt{x^2+y^2}}\right) \\ &= \lim_{x\rightarrow 0} \frac{x }{|x|}\sin \left(\frac{1}{|x|}\right) + 2 x \cos \left(\frac{1}{|x|}\right) \\ &= \lim_{x\rightarrow 0} \text{sgn}(x) \sin \left(\frac{1}{|x|}\right) + 2 x \cos \left(\frac{1}{|x|}\right), \\ \end{align*} where $$\text{sgn}(x)= \begin{cases} \: \:\: 1 &\mbox{ if } x > 0 \\ -1 &\mbox{ if } x < 0. \\ \end{cases}$$ Since $\sin\left( \frac{1}{|x|}\right)$ rapidly oscillates between $-1$ and $1$ as $x\rightarrow 0$ and it is not multiplied by any function $g(x)$ such that $g(x)\sin\left( \frac{1}{|x|}\right)\rightarrow 0$ as $x\rightarrow 0$, $f_x$ is not continuous at $(0,0)$. We have a similar argument for the continuity of $f_y$ at $(0,0)$. 
• This is only one part of the question – user223391 Nov 12 '17 at 14:06 • @ZacharySelk I just saw that. Thanks. Will edit. – Mee Seong Im Nov 12 '17 at 14:08
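As a numerical aside that is not part of the original thread: evaluating the formula for $f_x$ along the path $y=0$ makes the oscillation visible that blocks continuity at the origin, even though $f_x(0,0)=0$ and $f$ is differentiable there. A small sketch:

```python
import numpy as np

def f_x(x, y):
    """Partial derivative of f with respect to x, valid away from the origin."""
    r = np.sqrt(x**2 + y**2)
    return 2*x*np.cos(1/r) + (x/r)*np.sin(1/r)

# Along y = 0 the second term is sgn(x)*sin(1/|x|): it keeps oscillating in [-1, 1]
for x in [1e-2, 1e-3, 1e-4, 1e-5]:
    print(x, f_x(x, 0.0))
```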
2021-06-12 17:31:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000098943710327, "perplexity": 244.5678105642747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586239.2/warc/CC-MAIN-20210612162957-20210612192957-00485.warc.gz"}
https://eprint.iacr.org/2020/119
## Cryptology ePrint Archive: Report 2020/119 Hardness of LWE on General Entropic Distributions Zvika Brakerski and Nico Döttling Abstract: The hardness of the Learning with Errors (LWE) problem is by now a cornerstone of the cryptographic landscape. In many of its applications the so-called “LWE secret” is not sampled uniformly, but comes from a distribution with some min-entropy. This variant, known as “Entropic LWE”, has been studied in a number of works, starting with Goldwasser et al. (ICS 2010). However, so far it was only known how to prove the hardness of Entropic LWE for secret distributions supported inside a ball of small radius. In this work we resolve the hardness of Entropic LWE with arbitrarily long secrets, in the following sense. We show an entropy bound that guarantees the security of arbitrary Entropic LWE. This bound is higher than what is required in the ball-bounded setting, but we show that this is essentially tight. Tightness is shown unconditionally for highly-composite moduli, and using black-box impossibility for arbitrary moduli. Technically, we show that the entropic hardness of LWE relies on a simple-to-describe lossiness property of the distribution of secrets itself. This is simply the probability of recovering a random sample $s$ from this distribution, given $s+e$, where $e$ is Gaussian noise (i.e. the quality of the distribution of secrets as an error-correcting code for Gaussian noise). We hope that this characterization will make it easier to derive Entropic LWE results in the future. We also use our techniques to show new results for the ball-bounded setting, essentially showing that under a strong enough assumption even polylogarithmic entropy suffices. Category / Keywords: foundations / Learning with Errors, Entropic LWE Original Publication (with major differences): IACR-EUROCRYPT-2020
2020-04-07 21:05:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9259167909622192, "perplexity": 1136.2684734113507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371805747.72/warc/CC-MAIN-20200407183818-20200407214318-00466.warc.gz"}
https://physics.stackexchange.com/questions/189600/why-and-how-does-negative-velocity-exist/189603
# Why and how does negative velocity exist? Why and how does negative velocity exist? I have read on the internet about negative velocity but I still don't understand how it can even exist since time is positive and so is length. By doing some math I came to the conclusion it can't and should not exist and yet there are so many papers and videos trying to explain it. • Comments are not for extended discussion; this conversation has been moved to chat. – David Z Jun 22 '15 at 12:58 • Velocity is a vector. Speed is its magnitude. • Position is a vector. Length (or distance) is its magnitude. A vector points in a direction in space. A negative vector (or more precisely "the negative of a vector") simply points the opposite way. If I drive from home to work (defining my positive direction), then my velocity is positive if I go to work, but negative when I go home from work. It is all about direction seen from how I defined my positive axis. Consider an example where I end up further back than where I started. I must have had negative net velocity to end up going backwards (I end at a negative position). But only because backwards and forwards are clearly defined as the negative and positive directions, respectively, before I start. So, does negative velocity exist? Well, since it is just a matter of words that describe the event, then yes. Negative velocity just means velocity in the opposite direction than what would be positive. • Similarly, time can be positive (tomorrow) and negative (yesterday). – Jon Custer Jun 15 '15 at 13:44 • @AndrejSlavejkov Velocity is a vector. Vectors can't be negative, strictly speaking. But for a 1D vector (which can only be pointing two ways) you can call one way positive or forwards, and the other way negative or backwards. – immibis Jun 15 '15 at 14:43 • @AndrejSlavejkov you are confusing speed and velocity. Speed is the absolute distance (positive) divided by absolute time (positive). Velocity is relative distance (end - start) divided by time. So if you move towards the negative axis, "end" is smaller than "start" and the number you compute is negative. If you like, as a vector you write velocity = speed * $\vec{\mathrm{direction}}$ and the sign of direction can be anything... – Floris Jun 15 '15 at 15:01 • You're missing what others are saying. When talking about velocity, we're not talking about scalar numbers. We're talking about vectors. If we choose a coordinate system and place your house at the point (0,0,0) and your workplace at (1,0,0), and you travel from your house to your workplace at 1 unit per second, then your velocity is (1,0,0). If you travel from your workplace to your house at 1 unit per second, your velocity is (-1,0,0). Your conception of velocity seems to be the magnitude of velocity. Both $v_1=(1,0,0)$ and $v_2=(-1,0,0)$ have a magnitude of +1, but $v_1 = -v_2$. – Shufflepants Jun 15 '15 at 15:09 • @AndrejSlavejkov - I am glad if my comments helped, but all I did was to reiterate what Steeven said. If this helped you, you might consider accepting his answer (little check mark). It will give him some deserved reputation. – Floris Jun 15 '15 at 15:15 From the math point of view, you cannot have “negative velocity” in itself, only “negative velocity in a given direction”. The velocity is a 3-dimension vector, there is no such thing as a positive or negative 3D vector. 
However, if you consider the velocity in direction $\mathrm{x}$, where $\hat{\mathbf{e}}_{\mathrm{x}}$ is some unit vector giving a reference direction (say, "West"), then the velocity “in direction $\mathrm{x}$” is simply the scalar product of the velocity and $\hat{\mathbf{e}}_{\mathrm{x}}$. This quantity is a real number and can be negative. If it is negative, it is equal to $-1 \times \text{(velocity in direction -x)}$: compute the velocity in the opposite direction, and reverse the sign. • This is IMO a better answer than the accepted one: there is no such thing as a negative vector, there's only vectors with direction that happens in some basis to have negative coefficients. – leftaroundabout Jun 16 '15 at 17:01 I think one of the main reasons that you have velocity is to isolate a particular direction of movement from your forward speed. If you travel North north east, you can extract the speed at which you move eastwards by calculating your eastwards velocity (possibly 1/3 of your speed travelling NNE). Negative velocities probably arrived as a consequence of the fact that when measuring a velocity, you have to define a direction. Negative and positive is arbitrary. If I defined north as positive, south would be negative. If I defined south as positive, north would be negative. The signage merely serves to provide direction for the velocity vector relative to some defined positive direction. All directions are arbitrary, and you can create any coordinate system for your events as long as everything is consistent with eachother. We need this convention to explain position as a function of time, for example. If velocity was fixed to be positive, or similarly, if it were scalar, mechanics would have some issues, since it would mean an object could never decelerate let alone go backwards. I will only consider one dimensional motion(motion along a single axis).The main objective of terms like position and velocity is to describe the motion of an object easily. We define velocity to be the rate of change of position .By convention we choose a fixed point (along the axis of motion) and call it origin and define an object's position on that line based on the distance from this point. Again general convention is that distances measured towards right are positive and those measured towards left are negative (you can use them reversed if you want).You can easily see that in defining measurement of position this way we have covered all the points on the axis.In these conventions a position 2m means 2m right of origin and a position -6m means 6m left of origin. Now you would have heard that the sign of velocity gives direction, but first of all directions are again just references made by us .By trial and error we can make out the physical meaning of + or - sign . (By convention you call the direction in which the negative numbers increase negative direction(i.e left in this case) and the direction in which positive number increase the positive direction (i.e right in this case)). Example:- You see an object first at 2m and then at -3m after 5s . You say that it has moved towards left or in the negative direction . Now by the definition of velocity you calculate change in position (-3-2)m = -5m (analysing you can see that the "-" came out automatically as a result of our convention) and change in time = 5s. Dividing you get velocity as (-1m/s) or you can say that the object was moving towards left at 1m/s .In this way you can figure out what the negative sign means . 
It simply means that the motion is towards the negative direction, or towards the left. (As you can see, our definitions and conventions help us describe completely the motion of an object, both the rate at which it moves and where it moves.) In the computer language FORTH the turtle can go anywhere in the plane, going always 'ahead' X units and turning left or right by a Y angle units. The turtle ignores the notion of negative and yet it moves, i.e. in any referential the space coordinates vary in time, and it has a velocity. The Question and several of the Answers do not know the difference between the positive nature of any amount of a physical quantity and the representation in a referential that was constructed by convention and ease of use. Compare the Polar referential, where there are no negatives, and the usual Cartesian one. For example: the length of something, the distance from here to there, is always positive. A vector is a pair of magnitude and direction, and the length of it is positive by definition. When we represent that an object moves from the position 0 to -X and then back to 0, we cannot say that it moved 0 units (-4+4=0) of length. In fact it moved twice the length 4, i.e. 8 units of length. If it took 2 seconds in that motion (see velocity versus speed notion), then we cannot add the two vectors and say the .. is 0. That link provides a distinction between two different notions of speed in the textbooks and the concept of velocity. def 1 : $s=\frac{\text{distance}}{\Delta t}$ (it depends on the path) def 2 : $s=\left|\frac{\Delta\vec{r}}{\Delta t}\right|$
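A small numerical illustration of the scalar-product description of "velocity in a direction" given in the answers above (this snippet is mine, and the numbers are made up):

```python
import numpy as np

v = np.array([-3.0, 4.0, 0.0])    # a velocity vector, in m/s
e_x = np.array([1.0, 0.0, 0.0])   # unit vector fixing the reference direction

print(np.dot(v, e_x))        # -3.0: velocity in direction x is negative (motion against +x)
print(np.dot(v, -e_x))       #  3.0: equal to -1 times the value above
print(np.linalg.norm(v))     #  5.0: the speed, which is never negative
```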
2019-04-26 00:19:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.691449761390686, "perplexity": 497.0336916411057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578743307.87/warc/CC-MAIN-20190425233736-20190426015736-00162.warc.gz"}
http://wiki.chemprime.chemeddl.org/index.php/Detection_of_Performance-Enhancing_Drugs_(%22Doping%22)_in_Sports
# Detection of Performance-Enhancing Drugs ("Doping") in Sports "Doping" has been part of sports for a long time. Disqualification of cyclist Floyd Landis from the Tour de France in 2008[1], doping trials in US baseball, and positive tests of Olympic athletes are included in a long list of doping cases in cycling or list of doping cases in sports. Banner at 2006 Tour de France[2] Now, with more sophisticated laboratories developing drugs that are virtually undetectable because they are also naturally occurring, you may wonder how synthetic versions of the naturally occurring drugs can be detected. They each have the same formula and molecular structure! The new analytical techniques were developed by Don Catlin, who was named Sportsman of the Year for 2002 by the Chicago Tribune[3] among many awards for doping control [4][5]. ## GC/C/IRMS The technique Catlin developed is Gas Chromatography/Combustion/Carbon Isotope Ratio Mass Spectrometry (GC/C/IRMS). Drugs (or drug metabolites) from a urine sample are combusted (burned), and the resulting carbon dioxide (CO2) is analyzed by mass spectrometry to determine the ratio of the stable isotopes of carbon in the CO2 (and also the drug). Carbon has two naturally occurring stable isotopes, $^{13}_{6}$C and $^{12}_{6}$C (and several radioactive ones that do not concern us here):
Isotope | Symbol | Protons | Neutrons | % on Earth | Isotopic Mass
Carbon-12 | $^{12}_{6}$C | 6 | 6 | 98.93% | 12.0000000
Carbon-13 | $^{13}_{6}$C | 6 | 7 | 1.07% | 13.0033548
If the drug is synthetic, it is likely that the starting materials were of plant origin, and it will be enriched in $^{12}_{6}$C and have a different "isotopic signature" than the natural steroid. Various biological, physical, and chemical processes change (or "fractionate") the stable isotope ratio, changing the typical isotopic abundances in the Table. What Catlin typically finds is that δ13C values for urine samples obtained from 43 healthy males were -23.8‰, and non-doped athletes’ urine samples were similar. But the δ13C value in a "doped" athlete's sample was -32.6‰, suggesting that epitestosterone was administered. The difference is great enough to detect synthetic testosterone in the presence of the natural steroid[6][7]. ## "δ13C" Values The analysis is based on "δ13C" values, which represent very small differences in isotopic ratios. Delta values are measured in parts per thousand (or "per mil", ‰). The more negative the value, the more of the lighter isotope is present, so positive values represent material with more of the heavy isotope. We saw above that carbon is normally 98.93% $^{12}_{6}$C and 1.07% $^{13}_{6}$C, but the range is about 98.85 - 99.02% $^{12}_{6}$C. The δ13C scale assigns a value of 0 to a standard with a very high $^{13}_{6}$C/$^{12}_{6}$C ratio, so values are more negative for sources with less 13C (like synthetic steroids). The details of the delta value calculations are not important, but in case you're interested, the formula is $\delta ^{13}C = \Biggl( \frac{\bigl( \frac{^{13}C}{^{12}C} \bigr)_{sample} -\bigl( \frac{^{13}C}{^{12}C} \bigr)_{standard}} {\bigl( \frac{^{13}C}{^{12}C} \bigr)_{standard}} \Biggr) * 1000\ ^{o}\!/\!_{oo}$ But is there a "normal" isotopic abundance ratio for an element? If the abundance of oxygen isotopes can vary by ~20‰ (2%), how can we have a single "atomic weight" for the element? ## The "Normal" Isotopic Ratio: Atomic Weights All atoms of a given element do not necessarily have identical masses.
But all elements combine in definite mass ratios, so they behave as if they had just one kind of atom. In order to solve this dilemma, we define the atomic weight as the weighted average mass of all naturally occurring (occasionally radioactive) isotopes of the element. A weighted average is defined as Atomic Weight = $\left(\tfrac{\%\text{ abundance isotope 1}}{100}\right)\times \left(\text{mass of isotope 1}\right) + \left(\tfrac{\%\text{ abundance isotope 2}}{100}\right)\times \left(\text{mass of isotope 2}\right) + \ldots$ Similar terms would be added for all the isotopes. Since the abundances change from place to place, IUPAC has established "normal" abundances which are most likely to be encountered in the laboratory. This important document that reports these values can be found at the IUPAC site. The abundances are also usually listed on the Table of the Nuclides which lists all isotopes for all elements. Surprisingly, a good number of elements have isotopic abundances that vary quite widely, so that atomic weights based on them have only 3 or 4 digit precision. The atomic weight calculation is analogous to the method used to calculate grade point averages in most colleges: GPA = $\left(\tfrac{\text{Credit Hours Course 1}}{\text{total credit hours}}\right)\times \left(\text{Grade in Course 1}\right) + \left(\tfrac{\text{Credit Hours Course 2}}{\text{total credit hours}}\right)\times \left(\text{Grade in Course 2}\right) + \ldots$ ## Example: The Atomic Weight of Carbon Calculate the atomic weight of an average naturally occurring sample of carbon, given the typical abundances and masses of the isotopes in the table above. Solution $\frac{98.93}{100.00} \times 12.000 + \frac{1.07}{100.00} \times 13.0034 = 12.011$ The exact isotopic mass of $^{12}_{6}$C may be surprising. It is assigned the value 12.0000000 as a standard for the atomic weight scale. Other masses are determined by mass spectrometers calibrated with this arbitrary standard. ## Don Catlin and the Isotope Signatures of Steroids The chemical properties of the synthetic testosterone administered to athletes and of normal testosterone are virtually identical. The only difference is that a bigger proportion of the carbon atoms in synthetic testosterone are $^{12}_{6}$C. We don't distinguish the two in any way when we write chemical equations. Detection of natural steroids posed a difficult problem for the World Anti-Doping Agency (WADA). The UCLA Olympic Analytical Laboratory, founded by Catlin in 1982, tested Olympic, professional and collegiate athletes[8]. In the 1990s, the lab was first to offer the carbon isotope ratio test, and first to detect blood booster EPO (erythropoietin) in 2002, and has developed tests for several other designer steroids. ## Example 2 The same calculation can be extended to more than two isotopes. Naturally occurring lead is found to consist of four isotopes: 1.40% $^{204}_{82}$Pb whose isotopic weight is 203.973. 24.10% $^{206}_{82}$Pb whose isotopic weight is 205.974. 22.10% $^{207}_{82}$Pb whose isotopic weight is 206.976. 52.40% $^{208}_{82}$Pb whose isotopic weight is 207.977. Calculate the atomic weight of an average naturally occurring sample of lead.
Atomic Weight = $\left(\tfrac{1.40}{100}\right)\times 203.973 + \left(\tfrac{24.10}{100}\right)\times 205.974 + \left(\tfrac{22.10}{100}\right)\times 206.976 + \left(\tfrac{52.40}{100}\right)\times 207.977$ = 207.22 ## Defining the Mole The SI definition of the mole also depends on the isotope $^{12}_{6}$C and can now be stated. One mole is defined as the amount of substance of a system which contains as many elementary entities as there are atoms in exactly 0.012 kg of $^{12}_{6}$C. The elementary entities may be atoms, molecules, ions, electrons, or other microscopic particles. This official definition of the mole makes possible a more accurate determination of the Avogadro constant than was reported earlier. The currently accepted value is $N_A = 6.02214179 \times 10^{23}\ \text{mol}^{-1}$. This is accurate to 0.00000001 percent and contains five more significant figures than $6.022 \times 10^{23}\ \text{mol}^{-1}$, the number used to define the mole previously. It is very seldom, however, that more than four significant digits are needed in the Avogadro constant. The value $6.022 \times 10^{23}\ \text{mol}^{-1}$ will certainly suffice for most calculations needed. ## References 1. http://en.wikipedia.org/wiki/Tour_de_france#Doping 2. http://en.wikipedia.org/wiki/Tour_de_france#Doping 3. http://en.wikipedia.org/wiki/Don_Catlin 4. http://www.selectscience.com/product-news/agilent-technologies/agilent-technologies-presents-2008-manfred-donike-award-for-sports-doping-control-excellence-to-german-researcher/?artID=14791 5. http://www.laboratorytalk.com/news/agi/agi473.html 6. Rodrigo Aguilera, Caroline K. Hatton and Don H. Catlin, "Detection of Epitestosterone Doping by Isotope Ratio Mass Spectrometry", Clinical Chemistry, 48: 629-636, 2002 7. Rodrigue Aguilera, Michel Becchi, Hervé Casabianca, Caroline K. Hatton, Don H. Catlin, Borislav Starcevic, Harrison G. Pope Jr. Improved method of detection of testosterone abuse by gas chromatography/combustion/isotope ratio mass spectrometry analysis of urinary steroids Journal of Mass Spectrometry, Volume 31 Issue 2, Pages 169 - 176. 8. http://en.wikipedia.org/wiki/Use_of_performance-enhancing_drugs_in_sport
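The weighted-average recipe above is easy to script. A short sketch (not part of the original page) that reproduces the Example 2 result from the lead isotope data listed above:

```python
# (percent abundance, isotopic mass) for 204Pb, 206Pb, 207Pb and 208Pb, as given in Example 2
lead_isotopes = [(1.40, 203.973), (24.10, 205.974), (22.10, 206.976), (52.40, 207.977)]

atomic_weight = sum(pct / 100.0 * mass for pct, mass in lead_isotopes)
print(round(atomic_weight, 2))   # 207.22
```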
2014-12-25 07:10:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5988285541534424, "perplexity": 3360.813827854247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447546544.57/warc/CC-MAIN-20141224185906-00024-ip-10-231-17-201.ec2.internal.warc.gz"}
https://hrnjica.net/2018/08/20/liner-regression-with-cntk-and-c/
# Linear Regression with CNTK and C# CNTK is Microsoft’s deep learning tool for training very large and complex neural network models. However, you can use CNTK for various other purposes. In some of the previous posts we have seen how to use CNTK to perform matrix multiplication, in order to calculate descriptive statistics parameters on a data set. In this blog post we are going to implement a simple linear regression model, LR. The model contains only one neuron. The model also contains a bias parameter, so in total the linear regression has only two parameters: w and b. The image below shows the LR model: The reason why we use CNTK to solve such a simple task is very straightforward. Learning on simple models like this one, we can see how the CNTK library works, and see some of the not-so-trivial actions in CNTK. The model shown above can easily be extended to a logistic regression model by adding an activation function. Besides linear regression, which represents the neural network configuration without an activation function, Logistic Regression is the simplest neural network configuration which includes an activation function. The following image shows the logistic regression model: In case you want to see more info about how to create Logistic Regression with CNTK, you can see this official demo example. Now that we have made some introduction to the neural network models, we can start by defining the data set. Assume we have a simple data set which represents the simple linear function $y=2x+1$. The generated data set is shown in the following table: We already know that the linear regression parameters for the presented data set are $b_0=1$ and $b_1=2$, so we want to engage the CNTK library in order to get those values, or at least parameter values which are very close to them. The whole task of developing the LR model with CNTK can be described in several steps: Step 1: Create a C# Console application in Visual Studio, change the current architecture to x64, and add the latest “CNTK.GPU“ NuGet package to the solution. The following image shows those actions performed in Visual Studio. Step 2: Start writing code by adding two variables: X – feature, and label Y. Once the variables are defined, start defining the training data set by creating a batch. The following code snippet shows how to create variables and a batch, as well as how to start writing CNTK-based C# code. First we need to add some using statements, and define the device where computation will happen. Usually, we define the CPU, or the GPU in case the machine contains an NVIDIA-compatible graphics card. So the demo starts with the following code snippet: using System; using System.Linq; using System.Collections.Generic; using CNTK; namespace LR_CNTK_Demo { class Program { static void Main(string[] args) { //Step 1: Create some Demo helpers Console.Title = "Linear Regression with CNTK!"; Console.WriteLine("#### Linear Regression with CNTK!
####"); Console.WriteLine(""); //define device var device = DeviceDescriptor.UseDefaultDevice(); Now define two variables, and the data set presented in the previous table: //Step 2: define values, and variables Variable x = Variable.InputVariable(new int[] { 1 }, DataType.Float, "input"); Variable y = Variable.InputVariable(new int[] { 1 }, DataType.Float, "output"); //Step 2: define training data set from table above var xValues = Value.CreateBatch(new NDShape(1, 1), new float[] { 1f, 2f, 3f, 4f, 5f }, device); var yValues = Value.CreateBatch(new NDShape(1, 1), new float[] { 3f, 5f, 7f, 9f, 11f }, device); Step 3: Create the linear regression network model, by passing the input variable and the device for computation. As we already discussed, the model consists of one neuron and one bias parameter. The following method implements the LR network model: private static Function createLRModel(Variable x, DeviceDescriptor device) { //initializer for parameters var initV = CNTKLib.GlorotUniformInitializer(1.0, 1, 0, 1); //bias var b = new Parameter(new NDShape(1, 1), DataType.Float, initV, device, "b"); //weights var W = new Parameter(new NDShape(2, 1), DataType.Float, initV, device, "w"); //matrix product var Wx = CNTKLib.Times(W, x, "wx"); //layer var l = CNTKLib.Plus(b, Wx, "wx_b"); return l; } First, we create the initializer, which will initialize the startup values of the network parameters. Then we define the bias and weight parameters, join them in the form of the linear model “$wx+b$”, and return the result as a Function type. The createLRModel function is called in the main method. Once the model is created, we can examine it, and prove there are only two parameters in the model. The following code creates the Linear Regression model, and prints the model parameters: //Step 3: create linear regression model var lr = createLRModel(x, device); //Network model contains only two parameters b and w, so we query //the model in order to get parameter values var paramValues = lr.Inputs.Where(z => z.IsParameter).ToList(); var totalParameters = paramValues.Sum(c => c.Shape.TotalSize); Console.WriteLine($"LRM has {totalParameters} params, {paramValues[0].Name} and {paramValues[1].Name}."); In the previous code, we have seen how to extract the parameters from the model. Once we have the parameters, we can change their values, or just print them for further analysis. Step 4: Create the Trainer, which will be used to train the network parameters w and b. The following code snippet shows the implementation of the Trainer method. public Trainer createTrainer(Function network, Variable target) { //learning rate var lrate = 0.082; var lr = new TrainingParameterScheduleDouble(lrate); //network parameters var zParams = new ParameterVector(network.Parameters().ToList()); //create loss and eval Function loss = CNTKLib.SquaredError(network, target); Function eval = CNTKLib.SquaredError(network, target); //learners var llr = new List<Learner>(); var msgd = Learner.SGDLearner(network.Parameters(), lr); llr.Add(msgd); //trainer var trainer = Trainer.CreateTrainer(network, loss, eval, llr); // return trainer; } First we define the learning rate, the main neural network training parameter. Then we create the Loss and Evaluation functions. With those parameters we can create the SGD learner. Once the SGD learner object is instantiated, the trainer is created by calling the CreateTrainer static CNTK method, and it is returned from the function.
The method createTrainer is called in the main method: //Step 4: create trainer var trainer = createTrainer(lr, y); Step 5: Training process: Once the variables, data set, network model and trainer are defined, the training process can be started. //Step 5: training for (int i = 1; i <= 200; i++) { var d = new Dictionary<Variable, Value>(); d.Add(x, xValues); d.Add(y, yValues); trainer.TrainMinibatch(d, true, device); var loss = trainer.PreviousMinibatchLossAverage(); var eval = trainer.PreviousMinibatchEvaluationAverage(); if (i % 20 == 0) Console.WriteLine($"It={i}, Loss={loss}, Eval={eval}"); if (i == 200) { //print weights var b0_name = paramValues[0].Name; var b0 = new Value(paramValues[0].GetValue()).GetDenseData<float>(paramValues[0]); var b1_name = paramValues[1].Name; var b1 = new Value(paramValues[1].GetValue()).GetDenseData<float>(paramValues[1]); Console.WriteLine($" "); Console.WriteLine($"Training process finished with the following regression parameters:"); Console.WriteLine($"b={b0[0][0]}, w={b1[0][0]}"); Console.WriteLine($" "); } } } As can be seen, in just 200 iterations the regression parameters take almost exactly the values we expected: $b_0=0.995$ and $w=2.005$. Since the training process is different from classic regression parameter determination, we cannot get exact values. In order to estimate the regression parameters, the neural network uses an iterative method called Stochastic Gradient Descent, SGD. On the other hand, classic regression uses regression analysis procedures, minimizing the least square error and solving a system of equations where the unknowns are b and w. Once we implement all the code above, we can start the LR demo by pressing F5. A similar output window should be shown: Hope this blog post can provide enough information to start with CNTK C# and Machine Learning. Source code for this blog post can be downloaded here.
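As a quick cross-check outside the article's C# code (this snippet is mine, not the author's): the closed-form least-squares fit for the five points in the table gives exactly w = 2 and b = 1, which is the solution the SGD training converges towards.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])

w, b = np.polyfit(x, y, 1)   # least-squares straight-line fit: slope, then intercept
print(w, b)                  # approximately 2.0 and 1.0
```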
2021-09-18 07:57:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3271322250366211, "perplexity": 3280.3761397996027}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056348.59/warc/CC-MAIN-20210918062845-20210918092845-00574.warc.gz"}
https://www.kalakadu.com/2019/08/chapter-1-relations-and-functions-ncert_74.html
## Question 1:Determine whether or not each of the definition of given below gives a binary operation. In the event that * is not a binary operation, give justification for this.(i) On Z+, define * by a * b = a − b(ii) On Z+, define * by a * b = ab(iii) On R, define * by a * b = ab2(iv) On Z+, define * by a * b = |a − b|(v) On Z+, define * by a * b = a (i) On Z+, * is defined by * b = a − b. It is not a binary operation as the image of (1, 2) under * is 1 * 2 = 1 − 2 = −1 ∉ Z+. (ii) On Z+, * is defined by a * b = ab. It is seen that for each ab ∈ Z+, there is a unique element ab in Z+. This means that * carries each pair (ab) to a unique element * b ab in Z+. Therefore, * is a binary operation. (iii) On R, * is defined by a * b = ab2. It is seen that for each ab ∈ R, there is a unique element ab2 in R. This means that * carries each pair (ab) to a unique element * b abin R. Therefore, * is a binary operation. (iv) On Z+, * is defined by * b = |a − b|. It is seen that for each ab ∈ Z+, there is a unique element |a − b| in Z+ This means that * carries each pair (ab) to a unique element * b |a − b| in Z+. Therefore, * is a binary operation. (v) On Z+, * is defined by a * b = a. * carries each pair (ab) to a unique element * b a in Z+. Therefore, * is a binary operation. ## Question 2:For each binary operation * defined below, determine whether * is commutative or associative.(i) On Z, define a * b = a − b(ii) On Q, define a * b = ab + 1(iii) On Q, define a * b (iv) On Z+, define a * b = 2ab(v) On Z+, define a * b = ab(vi) On R − {−1}, define (i) On Z, * is defined by a * b = a − b. It can be observed that 1 * 2 = 1 − 2 = 1 and 2 * 1 = 2 − 1 = 1. ∴1 * 2 ≠ 2 * 1; where 1, 2 ∈ Z Hence, the operation * is not commutative. Also we have: (1 * 2) * 3 = (1 − 2) * 3 = −1 * 3 = −1 − 3 = −4 1 * (2 * 3) = 1 * (2 − 3) = 1 * −1 = 1 − (−1) = 2 ∴(1 * 2) * 3 ≠ 1 * (2 * 3) ; where 1, 2, 3 ∈ Z Hence, the operation * is not associative. (ii) On Q, * is defined by * b = ab + 1. It is known that: ab = ba &mnForE; a, b ∈ Q ⇒ ab + 1 = ba + 1 &mnForE; a, b ∈ Q ⇒ * b = * b &mnForE; a, b ∈ Q Therefore, the operation * is commutative. It can be observed that: (1 * 2) * 3 = (1 × 2 + 1) * 3 = 3 * 3 = 3 × 3 + 1 = 10 1 * (2 * 3) = 1 * (2 × 3 + 1) = 1 * 7 = 1 × 7 + 1 = 8 ∴(1 * 2) * 3 ≠ 1 * (2 * 3) ; where 1, 2, 3 ∈ Q Therefore, the operation * is not associative. (iii) On Q, * is defined by * b It is known that: ab = ba &mnForE; a, b ∈ Q ⇒ &mnForE; a, b ∈ Q ⇒ * b = * a &mnForE; a, b ∈ Q Therefore, the operation * is commutative. For all a, b, c ∈ Q, we have: Therefore, the operation * is associative. (iv) On Z+, * is defined by * b = 2ab. It is known that: ab = ba &mnForE; a, b ∈ Z+ ⇒ 2ab = 2ba &mnForE; a, b ∈ Z+ ⇒ * b = * a &mnForE; a, b ∈ Z+ Therefore, the operation * is commutative. It can be observed that: ∴(1 * 2) * 3 ≠ 1 * (2 * 3) ; where 1, 2, 3 ∈ Z+ Therefore, the operation * is not associative. (v) On Z+, * is defined by * b = ab. It can be observed that: and ∴ 1 * 2 ≠ 2 * 1 ; where 1, 2 ∈ Z+ Therefore, the operation * is not commutative. It can also be observed that: ∴(2 * 3) * 4 ≠ 2 * (3 * 4) ; where 2, 3, 4 ∈ Z+ Therefore, the operation * is not associative. (vi) On R, * − {−1} is defined by It can be observed that and ∴1 * 2 ≠ 2 * 1 ; where 1, 2 ∈ − {−1} Therefore, the operation * is not commutative. It can also be observed that: ∴ (1 * 2) * 3 ≠ 1 * (2 * 3) ; where 1, 2, 3 ∈ − {−1} Therefore, the operation * is not associative. 
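A small brute-force sketch (mine, not part of the NCERT solution) that spot-checks the commutativity and associativity claims of Question 2, parts (i) and (ii), on a few sample values:

```python
# operations from Question 2: (i) a*b = a - b on Z, (ii) a*b = ab + 1 on Q
ops = {
    "a*b = a-b":  lambda a, b: a - b,
    "a*b = ab+1": lambda a, b: a * b + 1,
}

sample = [1, 2, 3]
for name, op in ops.items():
    comm  = all(op(a, b) == op(b, a) for a in sample for b in sample)
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a in sample for b in sample for c in sample)
    print(name, "| commutative:", comm, "| associative:", assoc)
# prints False/False for a-b and True/False for ab+1, matching the worked solution
```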
## Question 3:Consider the binary operation ∨ on the set {1, 2, 3, 4, 5} defined by a ∨b = min {a, b}. Write the operation table of the operation∨. The binary operation ∨ on the set {1, 2, 3, 4, 5} is defined as  b = min {ab} &mnForE; ab ∈ {1, 2, 3, 4, 5}. Thus, the operation table for the given operation ∨ can be given as: ∨ 1 2 3 4 5 1 1 1 1 1 1 2 1 2 2 2 2 3 1 2 3 3 3 4 1 2 3 4 4 5 1 2 3 4 5 ## Question 4:Consider a binary operation * on the set {1, 2, 3, 4, 5} given by the following multiplication table. (i) Compute (2 * 3) * 4 and 2 * (3 * 4)(ii) Is * commutative?(iii) Compute (2 * 3) * (4 * 5).(Hint: use the following table) * 1 2 3 4 5 1 1 1 1 1 1 2 1 2 1 2 1 3 1 1 3 1 1 4 1 2 1 4 1 5 1 1 1 1 5 (i) (2 * 3) * 4 = 1 * 4 = 1 2 * (3 * 4) = 2 * 1 = 1 (ii) For every a, b ∈{1, 2, 3, 4, 5}, we have * b = b * a. Therefore, the operation * is commutative. (iii) (2 * 3) = 1 and (4 * 5) = 1 ∴(2 * 3) * (4 * 5) = 1 * 1 = 1 ## Question 5:Let*′ be the binary operation on the set {1, 2, 3, 4, 5} defined by a *′ b = H.C.F. of a and b. Is the operation *′ same as the operation * defined in Exercise 4 above? Justify your answer. The binary operation *′ on the set {1, 2, 3 4, 5} is defined as *′ b = H.C.F of a and b. The operation table for the operation *′ can be given as: *′ 1 2 3 4 5 1 1 1 1 1 1 2 1 2 1 2 1 3 1 1 3 1 1 4 1 2 1 4 1 5 1 1 1 1 5 We observe that the operation tables for the operations * and *′ are the same. Thus, the operation *′ is same as the operation*. ## Question 6:Let * be the binary operation on N given by a * b = L.C.M. of a and b. Find(i) 5 * 7, 20 * 16 (ii) Is * commutative?(iii) Is * associative? (iv) Find the identity of * in N(v) Which elements of N are invertible for the operation *? The binary operation * on N is defined as * b = L.C.M. of a and b. (i) 5 * 7 = L.C.M. of 5 and 7 = 35 20 * 16 = L.C.M of 20 and 16 = 80 (ii) It is known that: L.C.M of a and b = L.C.M of b and a &mnForE; a, b ∈ N a * b = * a Thus, the operation * is commutative. (iii) For a, b∈ N, we have: (* b) * c = (L.C.M of a and b) * c = LCM of ab, and c a * (b * c) = a * (LCM of b and c) = L.C.M of ab, and c ∴(* b) * c = a * (* c Thus, the operation * is associative. (iv) It is known that: L.C.M. of a and 1 = a = L.C.M. 1 and a &mnForE; a ∈ N ⇒ a * 1 = a = 1 * a &mnForE; a ∈ N Thus, 1 is the identity of * in N. (v) An element a in N is invertible with respect to the operation * if there exists an element b in N, such that * b = e = b * a. Here, e = 1 This means that: L.C.M of a and b = 1 = L.C.M of b and a This case is possible only when a and b are equal to 1. Thus, 1 is the only invertible element of N with respect to the operation *. ## Question 13:Consider a binary operation * on N defined as a * b = a3 + b3. Choose the correct answer.(A) Is * both associative and commutative?(B) Is * commutative but not associative?(C) Is * associative but not commutative?(D) Is * neither commutative nor associative? #### On N, the operation * is defined as a * b = a3 + b3. For, a, b, ∈ N, we have: a * b = a3 + b3 = b3 + a3 = b * a[Addition is commutative in N] Therefore, the operation * is commutative. It can be observed that: ∴(1 * 2) * 3 ≠ 1 * (2 * 3) ; where 1, 2, 3 ∈ N Therefore, the operation * is not associative. Hence, the operation * is commutative, but not associative. Thus, the correct answer is B. PDF FILE TO YOUR EMAIL IMMEDIATELY PURCHASE NOTES & PAPER SOLUTION. @ Rs. 
2021-10-26 15:46:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.535224974155426, "perplexity": 731.5508176583843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587908.20/warc/CC-MAIN-20211026134839-20211026164839-00612.warc.gz"}
https://psebsolutions.com/pseb-9th-class-science-solutions-chapter-8/
# PSEB 9th Class Science Solutions Chapter 8 Motion

Punjab State Board PSEB 9th Class Science Book Solutions Chapter 8 Motion Textbook Exercise Questions and Answers.

## PSEB Solutions for Class 9 Science Chapter 8 Motion

PSEB 9th Class Science Guide Motion Textbook Questions and Answers

Question 1. An athlete completes one round of a circular track of diameter 200 m in 40 s. What will be the distance covered and the displacement at the end of 2 min 20 s?
Solution:
Diameter of circular track (d) = 200 m
Radius of circular track (r) = $$\frac{d}{2}$$ = $$\frac{200 m}{2}$$ = 100 m
Length of circular track (circumference) = 2πr = 2 × $$\frac{22}{7}$$ × 100 = $$\frac{4400}{7}$$ m
Time taken to complete 1 round (t) = 40 s
Total time = 2 minutes 20 seconds = (2 × 60 + 20) seconds = (120 + 20) seconds = 140 s
Distance covered in 40 s = $$\frac{4400}{7}$$ m (the circumference of one complete round)
Distance covered in 1 s = $$\frac{4400}{7 \times 40}$$ m
Distance covered in 140 s = $$\frac{4400}{7 \times 40}$$ × 140 = 2200 m
An athlete starting from A and going in the clockwise direction returns to point A in 3 rounds and reaches point B in 3.5 rounds.
∴ Displacement in 3.5 rounds = AB = shortest distance between the initial and final positions = 200 m from A to B.

Question 2. Joseph jogs from end A to the other end B of a straight 300 m road in 2 minutes 30 seconds and then turns around and jogs 100 m back to point C in another 1 minute. What are Joseph's average speeds and velocities in jogging (a) from A to B (b) from A to C?
Solution:
(a) Length between end point A and end point B (AB) = 300 m
Time taken (t) = 2 min 30 s = (2 × 60 + 30) s = (120 + 30) s = 150 s
Average speed = Average velocity = $$\frac{300 m}{150 s}$$ = 2 ms-1
(b) Length from end A to end B + length on return from B to point C = AB + BC = 300 m + 100 m = 400 m
Total time = 2 min 30 s + 1 min = 3 min 30 s = (3 × 60 + 30) s = (180 + 30) s = 210 s
Average speed = $$\frac{400 m}{210 s}$$ ≈ 1.90 ms-1
Displacement from A to C = AB − BC = 300 m − 100 m = 200 m
Average velocity = $$\frac{200 m}{210 s}$$ ≈ 0.95 ms-1

Question 3. Abdul, while driving to school, computes the average speed for his trip to be 20 km h-1. On his return trip along the same route, there is less traffic and the average speed is 40 km h-1. What is the average speed for Abdul's trip?
Solution:
Let the one-way distance be d. Time taken to go to school = $$\frac{d}{20}$$ h and time taken to return = $$\frac{d}{40}$$ h.
Average speed = $$\frac{2d}{\frac{d}{20} + \frac{d}{40}}$$ = $$\frac{2 \times 20 \times 40}{20 + 40}$$ ≈ 26.67 km h-1

Question 4. A motorboat starting from rest on a lake accelerates in a straight line at a constant rate of 3.0 m s-2 for 8.0 s. How far does the boat travel during this time?
Solution:
Here, initial velocity of motorboat (u) = 0 [starting from rest]
Acceleration (a) = 3.0 m s-2
Time (t) = 8.0 s
Distance covered by the motorboat (S) = ?
We know, S = ut + $$\frac{1}{2}$$at² = 0 × 8 + $$\frac{1}{2}$$ × 3 × (8)² = 0 + $$\frac{1}{2}$$ × 3 × 8 × 8
∴ S = 96 m
In other words, the motorboat covers a distance (S) = 96 m.

Question 5. A driver of a car travelling at 52 km h-1 applies the brakes and accelerates uniformly in the opposite direction. The car stops in 5 s. Another driver going at 3 km h-1 applies his brakes slowly and stops in 10 s. On the same graph paper plot the speed versus time graph for the two cars. Which of the two cars travelled farther after the brakes were applied?
Solution:
In the figure, AB and CD represent the velocity-time graphs of the two cars, which have initial speeds of 52 kmh-1 and 30 kmh-1 respectively. The distance covered by each car is the area under its speed-time graph. Comparing the two areas, after applying brakes the second car would cover more distance than the first car.

Question 6. Fig shows the distance-time graphs of three objects A, B and C. Study the graph and answer the following questions:
(a) Which of the three is travelling the fastest?
(b) Are all three ever at the same point on the road?
(c) How far has C travelled when B passes A?
(d) How far has B travelled by the time it passes C?
Solution:
(a) Velocity of A = slope of PN = $$\frac{10-6}{1.1-0}$$ = $$\frac{40}{11}$$ ≈ 3.63 kmh-1
Because the slope of the graph of object B is the greatest of the three, B is moving the fastest.
(b) Since the three graphs do not intersect at a single common point, the three objects are never at the same point on the road.
(c) When object B passes A at point E (at 1.4 hr), object C is at F, i.e. about 9.3 km away from the origin O.
(d) B passes C at G after covering about 8 km.

Question 7. A ball is gently dropped from a height of 20 m. If its velocity increases uniformly at the rate of 10 m s-2, with what velocity will it strike the ground? After what time will it strike the ground?
Solution:
u = 0 ms-1, S = 20 m, a = 10 ms-2, υ = ?, t = ?
Using υ² – u² = 2aS
υ² – (0)² = 2 × 10 × 20
υ² = 400
∴ υ = $$\sqrt{400}$$ = $$\sqrt{20 \times 20}$$ = 20 m s-1
Now υ = u + at
20 = 0 + 10 × t
or t = $$\frac{20}{10}$$
∴ t = 2 s

Question 8. Speed-time graph for a car is shown in the fig.
(a) Find how far the car travelled in the first 4 s. Shade the area on the graph that represents the distance travelled by the car during this period.
(b) Which part of the graph represents uniform motion of the car?
Solution:
(a) 5 small squares on the x-axis = 2 s
3 small squares on the y-axis = 2 ms-1
Area of 15 small squares = 2 s × 2 ms-1 = 4 m
∴ Area of 1 small square = $$\frac{4}{15}$$ m
Area under the speed-time graph from 0 to 4 s = 57 complete small squares + $$\frac{1}{2}$$ × 6 small squares = (57 + 3) small squares = 60 small squares
Distance covered by the car in 4 s = 60 × $$\frac{4}{15}$$ m = 16 m
(b) After 6 s the car has uniform motion.

Question 9. State which of the following situations are possible and give an example for each of these.
(a) an object with a constant acceleration but with zero velocity.
(b) an object moving in a certain direction with an acceleration in the perpendicular direction.
(a) Yes, this situation is possible. Example: when an object is projected upwards, its velocity at the maximum height is zero although the acceleration on it is 9.8 ms-2, i.e. equal to g.
(b) Yes, this situation is possible. For an object projected at an angle, at the maximum height the velocity is in the horizontal direction while the acceleration (due to gravity) is perpendicular to the direction of motion, as shown in the figure.

Question 10. An artificial satellite is moving in a circular orbit of radius 42,250 km. Calculate its speed if it takes 24 hours to revolve around the earth.
Solution:
Radius of circular path of artificial satellite (r) = 42,250 km
Angle subtended at the centre of the earth in one revolution (θ) = 2π radian
Time taken by the satellite to complete 1 revolution (t) = 24 hrs = 24 × 3600 s = 86400 s
Speed of satellite (υ) = $$\frac{2 \pi r}{t}$$ = $$\frac{2 \times 3.14 \times 42250}{86400}$$ km s-1 ≈ 3.07 km s-1, i.e. about 11,055 km h-1

Science Guide for Class 9 PSEB Motion InText Questions and Answers

Question 1. An object has moved through a distance. Can it have zero displacement? If yes, support your answer with an example.
Yes, a body can have zero displacement if, while moving, it finally occupies a position coinciding with its initial position.
Example: Suppose a body starting its motion from an initial position O covers some distance and reaches a position A, 60 km away. If this body then returns to its initial position O, its displacement is zero.
But distance covered by the body = OA + AO = 60 km + 60 km = 120 km

Question 2. A farmer moves along the boundary of a square field of side 10 m in 40 s.
What will be the magnitude of the displacement of the farmer at the end of 2 minutes 20 seconds?
Solution:
Total distance round the boundary of the field once (its perimeter) = AB + BC + CD + DA = 10 m + 10 m + 10 m + 10 m = 40 m
Time taken to go round the field once = 40 s
Total time taken = 2 minutes 20 seconds = (2 × 60 + 20) seconds = (120 + 20) seconds = 140 seconds
Time taken by the farmer to complete 3 rounds of the field = 3 × 40 s = 120 s
Time left after completing 3 rounds of the field = (140 − 120) s = 20 s
∴ Distance covered by the farmer in 40 s = 40 m
∴ Distance covered in 1 s = 1 m
Distance that would be covered in 20 s = 20 m
In other words, the farmer starting from point A and going along the boundary of the field, after completing 3 rounds in 2 min 20 s, would reach the diagonally opposite corner C.
∴ Displacement = AC (the shortest distance between the initial and final positions) = $$\sqrt{(10)^2 + (10)^2}$$ = 10$$\sqrt{2}$$ m ≈ 14.14 m

Question 3. Which of the following is true for displacement?
(a) It cannot be zero
(b) Its magnitude is greater than the distance travelled by the object.
(c) Its magnitude is less than or equal to the distance travelled by the object.
(c) Its magnitude is less than or equal to the distance travelled by the object.

Question 4. Distinguish between speed and velocity.
Distinction between Speed and Velocity:

| | Speed | Velocity |
|---|---|---|
| 1. | It is defined as the rate of change of position of a body, i.e. the distance covered by a body per unit time. | It is defined as the rate of change of displacement of a body, i.e. it is the speed in a particular direction. |
| 2. | It is a scalar quantity and can be completely represented by its magnitude only. | It is a vector quantity; to represent it completely requires both magnitude and direction. |
| 3. | Speed of an object is always positive. | Velocity of an object can be both positive and negative. |

Question 5. Under what condition(s) is the magnitude of the average velocity of an object equal to its average speed?
We know, Average speed = Total distance travelled / Total time taken and Average velocity = Displacement / Total time.
When a body travels in a straight line in the same direction, the total distance covered and the displacement are equal in magnitude. In this case the average speed and average velocity are equal.

Question 6. What does the odometer of an automobile measure?
The odometer of an automobile measures the distance covered by it.

Question 7. What does the path of an object look like when it is in uniform motion?
When an object is in uniform motion, it moves along a straight line. But an object can also move with uniform motion along a circular path.

Question 8. During an experiment, a signal from a spaceship reached the ground station in five minutes. What was the distance of the spaceship from the ground station? The signal travels at the speed of light, that is 3 × 10⁸ ms-1.
Solution:
Time taken by the signal to reach the ground station from the spaceship (t) = 5 min = 5 × 60 s = 300 s
Speed of signal (υ) = speed of light = 3 × 10⁸ ms-1
Distance of the spaceship from the ground station (s) = ?
Distance of spaceship from ground (s) = speed of signal (υ) × time (t) = 3 × 10⁸ × 300 = 9 × 10¹⁰ m

Question 9. When will you say a body is in:
1. uniform acceleration?
2. non-uniform acceleration?
1. Uniform Acceleration. When a body travels in a straight line and its velocity changes by equal amounts in equal intervals of time, it is said to travel with uniform acceleration.
2. Non-Uniform Acceleration.
When the velocity of a body changes by unequal amounts in equal intervals of time, the body is said to travel with non-uniform acceleration.

Question 10. A bus decreases its speed from 80 km h-1 to 60 km h-1 in 5 s. Find the acceleration of the bus.
Solution:
Initial velocity (u) = 80 km h-1 = 80 × $$\frac{5}{18}$$ ms-1 ≈ 22.22 ms-1
Final velocity (υ) = 60 km h-1 = 60 × $$\frac{5}{18}$$ ms-1 ≈ 16.67 ms-1
Time (t) = 5 s
Acceleration (a) = $$\frac{υ - u}{t}$$ = $$\frac{16.67 - 22.22}{5}$$ ≈ -1.11 ms-2
Hence, the bus has negative acceleration (retardation).

Question 11. A train starting from a railway station and moving with uniform acceleration attains a speed of 40 km h-1 in 10 minutes. Find its acceleration.
Solution:
Initial velocity (u) = 0, final velocity (υ) = 40 km h-1 = 40 × $$\frac{5}{18}$$ ms-1 ≈ 11.11 ms-1, time (t) = 10 min = 600 s
Acceleration (a) = $$\frac{υ - u}{t}$$ = $$\frac{11.11 - 0}{600}$$ ≈ 0.0185 ms-2

Question 12. What is the nature of the distance-time graphs (x – t) for uniform and non-uniform motion of an object?
When a body covers equal distances in equal intervals of time, it is said to travel with uniform motion. In this situation, the distance covered by the body is directly proportional to the time taken. Therefore, the distance-time (x – t) graph for uniform motion is a straight line.
The distance-time (x – t) graph for non-uniform motion may be a curve of any shape, because the body travels unequal distances in equal intervals of time.

Question 13. What can you say about the motion of an object whose distance-time graph is a straight line parallel to the time axis?
The object whose distance-time (x – t) graph is a straight line parallel to the time axis is at rest with respect to the surroundings.

Question 14. What can you say about the motion of an object if its speed-time graph is a straight line parallel to the time axis?
The object whose speed-time (υ – t) graph is a straight line parallel to the time axis is in motion with uniform speed.

Question 15. What is the quantity which is measured by the area occupied below the velocity-time graph?
The area occupied below the velocity-time graph measures the displacement of the body.

Question 16. A bus starting from rest moves with a uniform acceleration of 0.1 ms-2 for two minutes. Find (a) the speed acquired (b) the distance travelled.
Solution:
(a) Initial speed of the bus (u) = 0 (starting from rest)
Acceleration of the bus (a) = 0.1 m s-2
Time taken (t) = 2 minutes = 2 × 60 s = 120 s
Final speed of the bus (υ) = ?
Distance travelled by the bus (S) = ?
We know, υ = u + at
υ = 0 + 0.1 × 120
υ = 12 ms-1
(b) Again, using S = ut + $$\frac{1}{2}$$at²
S = 0 × 120 + $$\frac{1}{2}$$ × 0.1 × (120)²
= 0 + $$\frac{1}{2}$$ × 0.1 × 120 × 120
= 720 m

Question 17. A train is travelling at a speed of 90 km h-1. Brakes are applied so as to produce a uniform acceleration of -0.5 ms-2. Find how far the train will move before it is brought to rest.
Solution:
Initial speed of train (u) = 90 km h-1 = 90 × $$\frac{5}{18}$$ m s-1 = 25 ms-1
Uniform acceleration (a) = -0.5 m s-2
Final speed of the train (υ) = 0
Distance moved by the train (S) = ?
We know, υ² – u² = 2aS
(0)² – (25)² = 2 × (-0.5) × S
-625 = -1 × S
∴ S = 625 m

Question 18. A trolley, while going down an inclined plane, has an acceleration of 2 cm s-2. What will be its velocity 3 s after the start?
Solution:
Here, initial velocity of trolley (u) = 0 [∵ starting from rest]
Acceleration (a) = 2 cm s-2
Time (t) = 3 s
Final velocity of trolley (υ) = ?
We know, υ = u + at
υ = 0 + 2 × 3
∴ Final velocity of trolley (υ) = 6 cm s-1

Question 19. A racing car has a uniform acceleration of 4 ms-2. What distance will it cover in 10 s after the start?
Solution:
Acceleration of racing car (a) = 4 ms-2
Initial velocity of racing car (u) = 0
Time (t) = 10 s
Distance covered by the car (S) = ?
We know, S = ut + $$\frac{1}{2}$$at²
S = 0 × 10 + $$\frac{1}{2}$$ × 4 × (10)²
S = 0 + 2 × 10 × 10
∴ Distance covered by racing car (S) = 200 m

Question 20. A stone is thrown in a vertically upward direction with a velocity of 5 m s-1. If the acceleration of the stone during its motion is 10 m s-2 in the downward direction, what will be the height attained by the stone and how much time will it take to reach there?
Solution:
Here, initial velocity (u) = 5 m s-1
Acceleration (a) = -10 ms-2 [∵ it moves upward against gravity]
Final velocity of stone (υ) = 0 [at the highest point it is momentarily at rest]
Height attained (S = h) = ?
Time taken (t) = ?
We know, υ = u + at
0 = 5 + (-10) × t
10 × t = 5
or t = $$\frac{5}{10}$$
∴ Time taken (t) = 0.5 s
Again, using υ² – u² = 2aS
(0)² – (5)² = 2 × (-10) × h
-25 = -20 × h
or h = $$\frac{-25}{-20}$$ = $$\frac{5}{4}$$
∴ Height attained (h) = 1.25 m
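As a quick cross-check (not part of the textbook solutions), a few of the uniform-acceleration answers above can be verified numerically with the same equations of motion. A minimal Python sketch:

```python
# Numerical cross-check using v = u + a*t, s = u*t + 0.5*a*t**2 and v**2 - u**2 = 2*a*s.

def distance(u, a, t):
    return u * t + 0.5 * a * t * t

# Exercise Question 4: motorboat, u = 0, a = 3 m/s^2, t = 8 s
print(distance(0.0, 3.0, 8.0))        # 96.0 m

# InText Question 17: train at 90 km/h braking at 0.5 m/s^2
u = 90 * 1000 / 3600                  # 25 m/s
print(u ** 2 / (2 * 0.5))             # 625.0 m stopping distance

# InText Question 20: stone thrown up at 5 m/s with a = -10 m/s^2
u, a = 5.0, -10.0
t_top = -u / a                        # 0.5 s to the highest point
print(t_top, distance(u, a, t_top))   # 0.5 s and 1.25 m
```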
2022-12-03 20:31:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5075303316116333, "perplexity": 1129.9356723835665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710936.10/warc/CC-MAIN-20221203175958-20221203205958-00371.warc.gz"}
https://www.eng-tips.com/viewthread.cfm?qid=476171
# Any rule on oven energy loss.

## Any rule on oven energy loss.

(OP)
I have a walk-in oven that uses X dollars of gas in standby at 300F. Is there any rule-of-thumb as to what would be expected if the standby was changed to 400F? It seems they cycle considerably faster at 400F.

Keith Cress
kcress - http://www.flaminsystems.com

### RE: Any rule on oven energy loss.

Hi Keith, rule of thumb in what sense? Heat loss is roughly proportional to the temperature difference, so increasing the temperature difference with ambient by ~43% means ~43% more heat loss.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

### RE: Any rule on oven energy loss.

I agree, the losses should increase by at least that much. Increases in convective and radiative heat losses will be greater, but conductive heat loss through the walls probably dominates.

### RE: Any rule on oven energy loss.

Steady state heat transfer is based on the ratio of absolute temperatures. So 300F vs 400F is about 12% higher, so I would expect about 12% more power required. At higher temps you would need to also factor in radiation loss, but not at these temps.

= = = = = = = = = = = = = = = = = = = =
P.E. Metallurgy, consulting work welcomed

### RE: Any rule on oven energy loss.

Conductive heat loss is based on differential temperature, no? So assuming the outside is at say 70F, then the difference is 330/230 = 43% more, so with a bit of extra convection etc., say 50% more heat loss?

Remember - More details = better answers
Also: If you get a response it's polite to respond to it.

### RE: Any rule on oven energy loss.

Convective and conductive heat transfers are proportional to temperature differences with the environment, so I picked 70F; that makes the new temperature delta 330F vs. 230F, hence ~43% more.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

### RE: Any rule on oven energy loss.

(OP)
Just what I was looking for, and I'd say that even feels right against what I was seeing in the heat cycling: it seemed like about 50% more cycles per time period.

I was visiting a bakery and was surprised to see 7 ovens idling, some at 300F, some at 400F. I asked around trying to discern when they were actually going to run product thru them. It was 1am. The general consensus was, "The baker will be in at 6AM". Shortly after, the president stumbled by and I mentioned, "You have 7 ovens idling here for 5 hours, maybe 4.5 hours if instead they were turned on 30 minutes before the baker shows up." I asked, "Do you know how much it costs to idle one of these ovens per hour?" Response: "I have no idea, but that would be really nice to know, but I have no idea how to figure that out."
So using a stopwatch I figured out how much gas one that was idling at 300F used. Then used the gas tariff tier tables and came up with the cost of idling at 300F. 1/3 Therm or $0.43/hour. A lot less than I expected cost wise, (if not planet-wise). I want to give them the cost but wanted to provide an approximate difference between 300F and 400F too. Thanks everyone. Keith Cress kcress - http://www.flaminsystems.com ### RE: Any rule on oven energy loss. Natural gas, at this instant in history, is extremely cheap, mainly because it's a waste byproduct of various forms of oil production. Nevertheless, that number seems low to me, but, maybe that's because they're idling. Something like this supposedly runs 110,000 BTU/(hr? the datasheet seems to be in error) The corresponding electric oven claims 18 kW power consumption, but electric would be more efficient in this case. Assuming, say, 50% of max heat for 5 hours and 7 ovens gives me 19 therms TTFN (ta ta for now) I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm ### RE: Any rule on oven energy loss. (OP) Thanks IR. These ovens are larger and look like this: You wheel a six foot tall 4 foot square rack in that the oven lifts off the floor and rotates the entire time. They're 275,000BTU/hr which I'm sure you realize means the burner power not the oven hourly consumption rate. They run 14.3 cycles per hour and each cycle runs 31.5 seconds I get 451 seconds of burn per hour. (That would be 275,000btu/hr) 100,000BTUs = 1 Therm Here 1 Therm, in the quantity they use, costs$1.26 Keith Cress kcress - http://www.flaminsystems.com ### RE: Any rule on oven energy loss. We had what seemed like an acre of fluorescent lighting on a timer, set to turn off at 6pm, but some people worked late in the office. I had a small lamp in my cube, so no problem, but one guy's office was simply sectioned off from the open area and had no other lighting. I calculated he was burning a dollar or two a day by coming out and flicking the timer over-ride switch. So I made a suggestion that some offices (naming no names) might get a floor lamp. Instead, the VP of the division tried to get a new $5000 lighting control solution for a different facility and, to get even more credit, that's what he directed a bunch of summer interns, (who were never allowed to have contact with any engineers) to investigate. Plus he named the intern program after himself. While a great hoopla was made of the interns working on his self-named tiny company, they eventually discovered the other facility already had a timer lighting control installed and had just never bothered to use it. Big savings there. I wonder how much the interns cost. So it remained that once a year the lighting controller that plunged me, and that guy who got an office, into darkness an hour early for two weeks because the facilities manager left an hour early and cared not one whit about the start of daylight saving time, and then on every remaining day, that guy would get to walk the 100 feet to the override and 100 feet back to burn a dollar a day (for him, workaholic that he was) so about$300 a year, and never got that $150 floor lamp. I was there about 25 years, so nearly$8,000, not including the summer heat load extraction for the A/C. ### RE: Any rule on oven energy loss. Typical, but keep burning those KWs. 
Lighting is not the only area where customers could reduce consumption. It is also possible for compressed air, AC, heat, etc. The problem is that management just can't see the problem. And I see it in our offices also.

### RE: Any rule on oven energy loss.

Heat loss from an oven will also involve convection. Convection requires the determination of the coefficient of convective heat transfer on the inside and outside surfaces of the oven. For more information about the convective heat transfer coefficient you should consult engineering handbooks and heat transfer textbooks.
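For what it's worth, the idle-cost arithmetic and the rough temperature scaling discussed in this thread fit in a few lines of Python. This is only a back-of-envelope sketch: it assumes a 70F ambient, treats the standby loss as proportional to the oven-to-ambient temperature difference, and ignores the extra radiation and convection the earlier replies mention.

```python
# Back-of-envelope version of the numbers in this thread.

burner_btu_per_hr = 275_000            # burner rating quoted above
burn_seconds_per_hr = 14.3 * 31.5      # ~450 s of burner-on time per hour at 300 F standby
therm_btu = 100_000
price_per_therm = 1.26                 # dollars per therm, from the tariff quoted above

therms_per_hr_300 = burner_btu_per_hr * (burn_seconds_per_hr / 3600) / therm_btu
print(round(therms_per_hr_300, 2), round(therms_per_hr_300 * price_per_therm, 2))
# ~0.34 therm/h and ~$0.43/h idling at 300 F

ambient = 70
scale_400 = (400 - ambient) / (300 - ambient)    # ~1.43, i.e. roughly 43% more loss
print(round(scale_400, 2), round(therms_per_hr_300 * scale_400 * price_per_therm, 2))
# ~1.43 and ~$0.62/h estimated for a 400 F standby
```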
2022-07-06 17:06:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28197672963142395, "perplexity": 5214.380798857416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104675818.94/warc/CC-MAIN-20220706151618-20220706181618-00719.warc.gz"}
https://de.zxc.wiki/wiki/Algorithmus_von_Gilmore
# Gilmore's algorithm

The Gilmore algorithm (also called the Gilmore procedure) is based on Herbrand's theorem and provides a semi-decision procedure for testing predicate logic formulas for unsatisfiability. The following holds:

$$\operatorname{\textit{Gilmore}}(E(F)) = \begin{cases} \textit{halt}, & \text{if } F \text{ is unsatisfiable} \\ \textit{undef}, & \text{if } F \text{ is satisfiable} \end{cases}$$

Let the countable set $E(F) = \{A_1, A_2, \ldots\}$ be the Herbrand expansion of F; it serves as the input of the algorithm.

Pseudocode:

• $k := 1$
• As long as $\bigwedge_{i=1}^{k} A_i$ is (propositionally) satisfiable, set $k := k + 1$
• Stop. (Output: unsatisfiable)

One can see that this is only a semi-decision procedure, since it terminates in finite time only if it establishes that F is unsatisfiable.
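A minimal executable sketch of the procedure (not from the original article) is given below in Python. It assumes the Herbrand expansion is supplied as an iterator of ground instances already converted to propositional CNF, and it uses a brute-force satisfiability test; a realistic implementation would use DPLL or resolution instead.

```python
from itertools import product

def satisfiable(clauses):
    """Brute-force propositional SAT test.  A clause is a set of literals,
    and a literal is a pair (atom, polarity)."""
    clauses = [frozenset(c) for c in clauses]
    atoms = sorted({a for c in clauses for (a, _) in c})
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(any(v[a] == pol for (a, pol) in c) for c in clauses):
            return True
    return False

def gilmore(ground_instances):
    """Enumerate A_1, A_2, ... from E(F) (given as CNF ground instances) and stop
    as soon as their conjunction is propositionally unsatisfiable.
    Diverges if F is satisfiable (semi-decision procedure)."""
    acc = []
    for k, inst in enumerate(ground_instances, start=1):
        acc.extend(inst)
        if not satisfiable(acc):
            return k          # halt: F is unsatisfiable, after k ground instances

# Toy example: F = ∀x (P(x) ∧ ¬P(f(x))) over the Herbrand terms a, f(a), ...
# A_1 = P(a) ∧ ¬P(f(a)),  A_2 = P(f(a)) ∧ ¬P(f(f(a)))
E = [[{("P(a)", True)}, {("P(f(a))", False)}],
     [{("P(f(a))", True)}, {("P(f(f(a)))", False)}]]
print(gilmore(E))   # 2: unsatisfiable after two ground instances
```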
2021-04-18 21:15:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8796947002410889, "perplexity": 2485.474396934246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038860318.63/warc/CC-MAIN-20210418194009-20210418224009-00125.warc.gz"}
http://mathhelpforum.com/advanced-algebra/148475-find-exact-solution-system.html
# Math Help - Find exact solution to system?

1. ## Find exact solution to system?

I am working my homework out and ran into a snag. Be advised that this is only the third day of class, so advanced concepts have not yet been learned. Using just the basics I am trying to do this problem. See attached image file. I did the first step; now on the second step I'm not sure what to do. I just need concepts mostly. Any help would be appreciated.
http://viewmorepics.myspace.com/inde...ageID=70399303
Click link to see image file. Thanks.

2. If I put it in REF (row echelon form), on the bottom row I have 0 0 0 1 | a(b+3)/(a+b), which corresponds to w = a(b+3)/(a+b). So can I say the system has a unique solution when w = a(b+3)/(a+b)? I'm so confused.

3. As long as $a + b \ne 0$, that is the unique solution. Now, go back to before you divided by a + b. What happens if a + b = 0?

4. ## Thanks

Appreciate the help. I finally figured it all out for that problem. Phew! Dividing by potential 0's was definitely a bad move.
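Since the actual system is only available behind the image link, here is a hypothetical 2x2 system, chosen only because it reproduces the same expression a(b+3)/(a+b), worked through with SymPy. It illustrates the point made in reply 3: the solution is unique exactly when the pivot a + b is nonzero.

```python
# Hypothetical system (not the one from the thread):  a*x - y = 0,  b*x + y = b + 3.
import sympy as sp

a, b, x, y = sp.symbols('a b x y')
sol = sp.solve([a*x - y, b*x + y - (b + 3)], [x, y], dict=True)
print(sol)   # [{x: (b + 3)/(a + b), y: a*(b + 3)/(a + b)}]

# The pivot that gets divided by is a + b; if a = -b the coefficient matrix is
# singular, so the system is then either inconsistent or has infinitely many solutions.
M = sp.Matrix([[a, -1], [b, 1]])
print(M.det())   # a + b
```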
2014-09-20 04:08:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5507153272628784, "perplexity": 894.274706179249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657132646.40/warc/CC-MAIN-20140914011212-00042-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://labs.tib.eu/arxiv/?author=Baojiu%20Li
• ### Type Ia supernovae, standardisable candles, and gravity(1710.07018) Oct. 29, 2018 gr-qc, astro-ph.CO Type Ia supernovae (SNIe) are generally accepted to act as standardisable candles, and their use in cosmology led to the first confirmation of the as yet unexplained accelerated cosmic expansion. Many of the theoretical models to explain the cosmic acceleration assume modifications to Einsteinian General Relativity which accelerate the expansion, but the question of whether such modifications also affect the ability of SNIe to be standardisable candles has rarely been addressed. This paper is an attempt to answer this question. For this we adopt a semi-analytical model to calculate SNIe light curves in non-standard gravity. We use this model to show that the average rescaled intrinsic peak luminosity -- a quantity that is assumed to be constant with redshift in standard analyses of Type Ia supernova (SNIa) cosmology data -- depends on the strength of gravity in the supernova's local environment because the latter determines the Chandrasekhar mass -- the mass of the SNIa's white dwarf progenitor right before the explosion. This means that SNIe are no longer standardisable candles in scenarios where the strength of gravity evolves over time, and therefore the cosmology implied by the existing SNIa data will be different when analysed in the context of such models. As an example, we show that the observational SNIa cosmology data can be fitted with both a model where $(\Omega_{\rm M}, \Omega_{\Lambda})=(0.62, 0.38)$ and Newton's constant $G$ varies as $G(z)=G_0(1+z)^{-1/4}$ and the standard model where $(\Omega_{\rm M}, \Omega_{\Lambda})=(0.3, 0.7)$ and $G$ is constant, when the Universe is assumed to be flat. • ### Constraining the time variation of Newton's constant $G$ with gravitational-wave standard sirens and supernovae(1804.03066) April 9, 2018 astro-ph.CO The intrinsic peak luminosity of Type Ia supernovae (SNe Ia) depends on the value of Newton's gravitational constant $G$, through the Chandrasekhar mass $M_{\rm Ch}\propto G^{-3/2}$. If the luminosity distance $d_{\rm L}$ can be independently determined, the SNe Ia can be treated as a tracker to constrain the possible time variation of $G$ in different redshift ranges. The gravitational-wave (GW) standard sirens, caused by the coalescence of binary neutron stars, provide a model-independent way to measure the distance of GW events, which can be used to determine the luminosity distances of SNe Ia by interpolation, provided the GW and SN Ia samples have similar redshift ranges. We demonstrate that combining the GW observations of third-generation detectors with SN Ia data provides a powerful and model-independent way to measure $G$ in a wide redshift range, which can constrain the ratio $G/G_0$, where $G$ and $G_0$ are respectively the values in the redshift ranges $z>0.1$ and $z<0.1$, at the level of $1.5\%$. • ### A new smooth-$k$ space filter approach to calculate halo abundances(1801.02547) April 3, 2018 astro-ph.CO We propose a new filter, a smooth-$k$ space filter, to use in the Press-Schechter approach to model the dark matter halo mass function which overcomes shortcomings of other filters. We test this against the mass function measured in N-body simulations. We find that the commonly used sharp-$k$ filter fails to reproduce the behaviour of the halo mass function at low masses measured from simulations of models with a sharp truncation in the linear power spectrum. 
We show that the predictions with our new filter agree with the simulation results over a wider range of halo masses for both damped and undamped power spectra than is the case with the sharp-$k$ and real-space top-hat filters. • ### Weak lensing by voids in weak lensing maps(1803.08717) March 23, 2018 astro-ph.CO Cosmic voids are an important probe of large-scale structure that allows us to constrain cosmological parameters and test cosmological models. We present a new paradigm for void studies: void detection in weak lensing convergence maps. This approach identifies objects that relate directly to our theoretical understanding of voids as underdensities in the total matter field and presents several advantages compared to the customary method of finding voids in the galaxy distribution. We exemplify this approach by identifying voids using the weak lensing peaks as tracers of the large-scale structure. We find self-similarity in the void abundance across a range of peak signal-to-noise selection thresholds. The voids obtained via this approach give a tangential shear signal up to $\sim50$ times larger than voids identified in the galaxy distribution. • ### The Santiago-Harvard-Edinburgh-Durham void comparison I: SHEDding light on chameleon gravity tests(1710.01730) Feb. 9, 2018 astro-ph.CO We present a systematic comparison of several existing and new void finding algorithms, focusing on their potential power to test a particular class of modified gravity models - chameleon $f(R)$ gravity. These models deviate from standard General Relativity (GR) more strongly in low-density regions and thus voids are a promising venue to test them. We use Halo Occupation Distribution (HOD) prescriptions to populate haloes with galaxies, and tune the HOD parameters such that the galaxy two-point correlation functions are the same in both f(R) and GR models. We identify both 3D voids as well as 2D underdensities in the plane-of-the-sky to find the same void abundance and void galaxy number density profiles across all models, which suggests that they do not contain much information beyond galaxy clustering. However, the underlying void dark matter density profiles are significantly different, with f(R) voids being more underdense than GR ones, which leads to f(R) voids having a larger tangential shear signal than their GR analogues. We investigate the potential of each void finder to test f(R) models with near-future lensing surveys such as EUCLID and LSST. The 2D voids have the largest power to probe f(R) gravity, with a LSST analysis of tunnel (which is a new type of 2D underdensity introduced here) lensing distinguishing at 80 and 11$\sigma$ (statistical error) f(R) models with $|f_{R0}|=10^{-5}$ and $10^{-6}$ from GR. • ### A general framework to test gravity using galaxy clusters I: Modelling the dynamical mass of haloes in $f(R)$ gravity(1802.02165) Feb. 6, 2018 astro-ph.CO We propose a new framework for testing gravity using cluster observations, which aims to provide an unbiased constraint on modified gravity models from Sunyaev Zel'dovich (SZ) and X-ray cluster counts and the cluster gas fraction, among other possible observables. Focusing on a popular $f(R)$ model of gravity, we propose a novel procedure to recalibrate mass scaling relations from $\Lambda$CDM to $f(R)$ gravity for SZ and X-ray cluster observables. 
We find that the complicated modified gravity effects can be simply modelled as a dependence on a combination of the background scalar field and redshift, $f_R(z)/(1+z)$, regardless of the $f(R)$ model parameter. By employing a large suite of N-body simulations, we demonstrate that a theoretically derived tanh fitting formula is in excellent agreement with the dynamical mass enhancement of dark matter haloes for a large range of background field parameters and redshifts. Our framework is sufficiently flexible to allow for tests of other models and inclusion of further observables. The one-parameter description of the dynamical mass enhancement can have important implications on the theoretical modelling of observables and on practical tests of gravity. • ### Testing modified gravity using a marked correlation function(1801.08975) Jan. 30, 2018 astro-ph.CO In theories of modified gravity with the chameleon screening mechanism, the strength of the fifth force depends on environment. This induces an environment dependence of structure formation, which differs from $\Lambda$CDM. We show that these differences can be captured by the marked correlation function. With the galaxy correlation functions and number densities calibrated to match between $f(R)$ and $\Lambda$CDM models in simulations, we show that the marked correlation functions from using either the local density or halo mass as the marks encode extra information, which can be used to test these theories. We discuss possible applications of these statistics in observations. • ### Marked clustering statistics in $f(R)$ gravity cosmologies(1801.08880) Jan. 26, 2018 astro-ph.CO We analyse the two-point and marked correlation functions of haloes and galaxies in three variants of the chameleon $f(R)$ gravity model using N-body simulations, and compare to a fiducial $\Lambda$CDM model based on general relativity (GR). Using a halo occupation distribution prescription (HOD) we populate dark matter haloes with galaxies, where the HOD parameters have been tuned such that the galaxy number densities and the real-space galaxy two-point correlation functions in the modified gravity models match those in GR to within $1\sim3\%$. We test the idea that since the behaviour of gravity is dependent on environment, marked correlation functions may display a measurable difference between the models. For this we test marks based on the density field and the Newtonian gravitational potential. We find that the galaxy marked correlation function shows significant differences measured in different models on scales smaller than $r\lesssim 20~h^{-1}$ Mpc. Guided by simulations to identify a suitable mark, this approach could be used as a new probe of the accelerated expansion of the Universe. • ### New method for initial density reconstruction(1709.06350) Dec. 14, 2017 astro-ph.CO A theoretically interesting and practically important question in cosmology is the reconstruction of the initial density distribution provided a late-time density field. This is a long-standing question with a revived interest recently, especially in the context of optimally extracting the baryonic acoustic oscillation (BAO) signals from observed galaxy distributions. We present a new efficient method to carry out this reconstruction, which is based on numerical solutions to the nonlinear partial differential equation that governs the mapping between the initial Lagrangian and final Eulerian coordinates of particles in evolved density fields. 
This is motivated by numerical simulations of the quartic Galileon gravity model, which has similar equations that can be solved effectively by multigrid Gauss-Seidel relaxation. The method is based on mass conservation, and does not assume any specific cosmological model. Our test shows that it has a performance comparable to that of state-of-the-art algorithms which were very recently put forward in the literature, with the reconstructed density field over $\sim80\%$ ($50\%$) correlated with the initial condition at $k\lesssim0.6h/{\rm Mpc}$ ($1.0h/{\rm Mpc}$). With an example, we demonstrate that this method can significantly improve the accuracy of BAO reconstruction. • ### Galaxy-galaxy weak gravitational lensing in $f(R)$ gravity(1710.07291) Oct. 19, 2017 astro-ph.CO We present an analysis of galaxy-galaxy weak gravitational lensing (GGL) in chameleon $f(R)$ gravity - a leading candidate of non-standard gravity models. For the analysis we have created mock galaxy catalogues based on dark matter haloes from two sets of numerical simulations, using a halo occupation distribution (HOD) prescription which allows a redshift dependence of galaxy number density. To make a fairer comparison between the $f(R)$ and $\Lambda$CDM models, their HOD parameters are tuned so that the galaxy two-point correlation functions in real space (and therefore the projected two-point correlation functions) match. While the $f(R)$ model predicts an enhancement of the convergence power spectrum by up to $\sim30\%$ compared to the standard $\Lambda$CDM model with the same parameters, the maximum enhancement of GGL is only half as large and less than 5\% on separations above $\sim1$-$2h^{-1}$Mpc, because the latter is a cross correlation of shear (or matter, which is more strongly affected by modified gravity) and galaxy (which is weakly affected given the good match between galaxy auto correlations in the two models) fields. We also study the possibility of reconstructing the matter power spectrum by combination of GGL and galaxy clustering in $f(R)$ gravity. We find that the galaxy-matter cross correlation coefficient remains at unity down to $\sim2$-$3h^{-1}$Mpc at relevant redshifts even in $f(R)$ gravity, indicating joint analysis of GGL and galaxy clustering can be a powerful probe of matter density fluctuations in chameleon gravity. The scale dependence of the model differences in their predictions of GGL can potentially allow to break the degeneracy between $f(R)$ gravity and other cosmological parameters such as $\Omega_m$ and $\sigma_8$. • ### Equivalence of cosmological observables in conformally related scalar tensor theories(1709.07087) Sept. 20, 2017 gr-qc Scalar tensor theories can be expressed in different frames, such as the commonly-used Einstein and Jordan frames, and it is generally accepted that cosmological observables are the same in these frames. We revisit this by making a detailed side-by-side comparison of the quantities and equations in two conformally related frames, from the actions and fully covariant field equations to the linearised equations in both real and Fourier spaces. This confirms that the field and conservation equations are equivalent in the two frames, in the sense that we can always re-express equations in one frame using relevant transformations of variables to derive the corresponding equations in the other. 
We show, with both analytical derivation and a numerical example, that the line-of-sight integration to calculate CMB temperature anisotropies can be done using either Einstein frame or Jordan frame quantities, and the results are identical, provided the correct redshift is used in the Einstein frame ($1+z\neq1/a$). • ### The Effect of Thermal Velocities on Structure Formation in N-body Simulations of Warm Dark Matter(1706.07837) June 23, 2017 astro-ph.CO We investigate the role of thermal velocities in N-body simulations of structure formation in warm dark matter models. Starting from the commonly used approach of adding thermal velocities, randomly selected from a Fermi-Dirac distribution, to the gravitationally-induced (peculiar) velocities of the simulation particles, we compare the matter and velocity power spectra measured from CDM and WDM simulations with and without thermal velocities. This prescription for adding thermal velocities results in deviations in the velocity field in the initial conditions away from the linear theory predictions, which affects the evolution of structure at later times. We show that this is entirely due to numerical noise. For a warm candidate with mass $3.3$ keV, the matter and velocity power spectra measured from simulations with thermal velocities starting at $z=199$ deviate from the linear prediction at $k \gtrsim10$ $h/$Mpc, with an enhancement of the matter power spectrum $\sim \mathcal{O}(10)$ and of the velocity power spectrum $\sim \mathcal{O}(10^2)$ at wavenumbers $k \sim 64$ $h/$Mpc with respect to the case without thermal velocities. At late times, these effects tend to be less pronounced. Indeed, at $z=0$ the deviations do not exceed $6\%$ (in the velocity spectrum) and $1\%$ (in the matter spectrum) for scales $10 <k< 64$ $h/$Mpc. Increasing the resolution of the N-body simulations shifts these deviations to higher wavenumbers. The noise introduces more spurious structures in WDM simulations with thermal velocities and modifies the radial density profiles of dark matter haloes. We find that spurious haloes start to appear in simulations which include thermal velocities at a mass that is $\sim$3 times larger than in simulations without thermal velocities. • ### Cluster abundance in chameleon $f(R)$ gravity I: toward an accurate halo mass function prediction(1607.08788) April 11, 2017 astro-ph.CO We refine the mass and environment dependent spherical collapse model of chameleon $f(R)$ gravity by calibrating a phenomenological correction inspired by the parameterized post-Friedmann framework against high-resolution $N$-body simulations. We employ our method to predict the corresponding modified halo mass function, and provide fitting formulas to calculate the fractional enhancement of the $f(R)$ halo abundance with respect to that of General Relativity (GR) within a precision of $\lesssim 5\%$ from the results obtained in the simulations. Similar accuracy can be achieved for the full $f(R)$ mass function on the condition that the modeling of the reference GR abundance of halos is accurate at the percent level. We use our fits to forecast constraints on the additional scalar degree of freedom of the theory, finding that upper bounds competitive with current Solar System tests are within reach of cluster number count analyses from ongoing and upcoming surveys at much larger scales. 
Importantly, the flexibility of our method allows also for this to be applied to other scalar-tensor theories characterized by a mass and environment dependent spherical collapse. • ### Speeding up $N$-body simulations of modified gravity: Chameleon screening models(1611.09375) March 12, 2017 astro-ph.CO We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied $f(R)$ gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of $f(R)$ simulations. For example, a test simulation with $512^3$ particles in a box of size $512 \, \mathrm{Mpc}/h$ is now 5 times faster than before, while a Millennium-resolution simulation for $f(R)$ gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity. • ### The imprint of $f(R)$ gravity on weak gravitational lensing II : Information content in cosmic shear statistics(1610.03600) Jan. 5, 2017 astro-ph.CO We investigate the information content of various cosmic shear statistics on the theory of gravity. Focusing on the Hu-Sawicki-type $f(R)$ model, we perform a set of ray-tracing simulations and measure the convergence bispectrum, peak counts and Minkowski functionals. We first show that while the convergence power spectrum does have sensitivity to the current value of extra scalar degree of freedom $|f_{\rm R0}|$, it is largely compensated by a change in the present density amplitude parameter $\sigma_{8}$ and the matter density parameter $\Omega_{\rm m0}$. With accurate covariance matrices obtained from 1000 lensing simulations, we then examine the constraining power of the three additional statistics. We find that these probes are indeed helpful to break the parameter degeneracy, which can not be resolved from the power spectrum alone. We show that especially the peak counts and Minkowski functionals have the potential to rigorously (marginally) detect the signature of modified gravity with the parameter $|f_{\rm R0}|$ as small as $10^{-5}$ ($10^{-6}$) if we can properly model them on small ($\sim 1\, \mathrm{arcmin}$) scale in a future survey with a sky coverage of 1,500 squared degrees. We also show that the signal level is similar among the additional three statistics and all of them provide complementary information to the power spectrum. These findings indicate the importance of combining multiple probes beyond the standard power spectrum analysis to detect possible modifications to General Relativity. 
• DESI (Dark Energy Spectroscopic Instrument) is a Stage IV ground-based dark energy experiment that will study baryon acoustic oscillations (BAO) and the growth of structure through redshift-space distortions with a wide-area galaxy and quasar redshift survey. To trace the underlying dark matter distribution, spectroscopic targets will be selected in four classes from imaging data. We will measure luminous red galaxies up to $z=1.0$. To probe the Universe out to even higher redshift, DESI will target bright [O II] emission line galaxies up to $z=1.7$. Quasars will be targeted both as direct tracers of the underlying dark matter distribution and, at higher redshifts ($2.1 < z < 3.5$), for the Ly-$\alpha$ forest absorption features in their spectra, which will be used to trace the distribution of neutral hydrogen. When moonlight prevents efficient observations of the faint targets of the baseline survey, DESI will conduct a magnitude-limited Bright Galaxy Survey comprising approximately 10 million galaxies with a median $z\approx 0.2$. In total, more than 30 million galaxy and quasar redshifts will be obtained to measure the BAO feature and determine the matter power spectrum, including redshift space distortions. • DESI (Dark Energy Spectropic Instrument) is a Stage IV ground-based dark energy experiment that will study baryon acoustic oscillations and the growth of structure through redshift-space distortions with a wide-area galaxy and quasar redshift survey. The DESI instrument is a robotically-actuated, fiber-fed spectrograph capable of taking up to 5,000 simultaneous spectra over a wavelength range from 360 nm to 980 nm. The fibers feed ten three-arm spectrographs with resolution $R= \lambda/\Delta\lambda$ between 2000 and 5500, depending on wavelength. The DESI instrument will be used to conduct a five-year survey designed to cover 14,000 deg$^2$. This powerful instrument will be installed at prime focus on the 4-m Mayall telescope in Kitt Peak, Arizona, along with a new optical corrector, which will provide a three-degree diameter field of view. The DESI collaboration will also deliver a spectroscopic pipeline and data management system to reduce and archive all data for eventual public use. • ### The Distribution of Dark and Luminous Matter in the Unique Galaxy Cluster Merger Abell 2146(1609.06734) Sept. 21, 2016 astro-ph.CO Abell 2146 ($z$ = 0.232) consists of two galaxy clusters undergoing a major merger. The system was discovered in previous work, where two large shock fronts were detected using the $\textit{Chandra X-ray Observatory}$, consistent with a merger close to the plane of the sky, caught soon after first core passage. A weak gravitational lensing analysis of the total gravitating mass in the system, using the distorted shapes of distant galaxies seen with ACS-WFC on $\textit{Hubble Space Telescope}$, is presented. The highest peak in the reconstruction of the projected mass is centred on the Brightest Cluster Galaxy (BCG) in Abell 2146-A. The mass associated with Abell 2146-B is more extended. Bootstrapped noise mass reconstructions show the mass peak in Abell 2146-A to be consistently centred on the BCG. Previous work showed that BCG-A appears to lag behind an X-ray cool core; although the peak of the mass reconstruction is centred on the BCG, it is also consistent with the X-ray peak given the resolution of the weak lensing mass map. 
The best-fit mass model with two components centred on the BCGs yields $M_{200} = 1.1^{+0.3}_{-0.4}\times10^{15}M_{\odot}$ and $3^{+1}_{-2}\times10^{14}M_{\odot}$ for Abell 2146-A and Abell 2146-B respectively, assuming a mass concentration parameter of $c=3.5$ for each cluster. From the weak lensing analysis, Abell 2146-A is the primary halo component, and the origin of the apparent discrepancy with the X-ray analysis where Abell 2146-B is the primary halo is being assessed using simulations of the merger. • ### Constraining $f(R)$ Gravity Theory Using Weak Lensing Peak Statistics from the Canada-France-Hawaii-Telescope Lensing Survey(1607.00184) Aug. 3, 2016 astro-ph.CO In this Letter, we report the observational constraints on the Hu-Sawicki $f(R)$ theory derived from weak lensing peak abundances, which are closely related to the mass function of massive halos. In comparison with studies using optical or x-ray clusters of galaxies, weak lensing peak analyses have the advantages of not relying on mass-baryonic observable calibrations. With observations from the Canada-France-Hawaii-Telescope Lensing Survey, our peak analyses give rise to a tight constraint on the model parameter $|f_{R0}|$ for $n=1$. The $95\%$ CL limit is $\log_{10}|f_{R0}| < -4.82$ given WMAP9 priors on $(\Omega_{\rm m}, A_{\rm s})$. With Planck15 priors, the corresponding result is $\log_{10}|f_{R0}| < -5.16$. • ### Probing Theories of Gravity with Phase Space-Inferred Potentials of Galaxy Clusters(1603.00056) Feb. 29, 2016 gr-qc, astro-ph.CO Modified theories of gravity provide us with a unique opportunity to generate innovative tests of gravity. In Chameleon f(R) gravity, the gravitational potential differs from the weak-field limit of general relativity (GR) in a mass dependent way. We develop a probe of gravity which compares high mass clusters, where Chameleon effects are weak, to low mass clusters, where the effects can be strong. We utilize the escape velocity edges in the radius/velocity phase space to infer the gravitational potential profiles on scales of 0.3-1 virial radii. We show that the escape edges of low mass clusters are enhanced compared to GR, where the magnitude of the difference depends on the background field value |fR0|. We validate our probe using N-body simulations and simulated light cone galaxy data. For a DESI (Dark Energy Spectroscopic Instrument) Bright Galaxy Sample, including observational systematics, projection effects, and cosmic variance, our test can differentiate between GR and Chameleon f(R) gravity models, |fR0| = 4e-6 (2e-6) at $>5\sigma$ ($>2\sigma$), more than an order of magnitude better than current cluster-scale constraints. • ### Speeding up N-body simulations of modified gravity: Vainshtein screening models(1511.08200) Nov. 25, 2015 astro-ph.CO We introduce and demonstrate the power of a method to speed up current iterative techniques for N-body modified gravity simulations. Our method is based on the observation that the accuracy of the final result is not compromised if the calculation of the fifth force becomes less accurate, but substantially faster, in high-density regions where it is weak due to screening. We focus on the nDGP model which employs Vainshtein screening, and test our method by running AMR simulations in which the solutions on the finer levels of the mesh (high density) are not obtained iteratively, but instead interpolated from coarser levels.
We show that the impact this has on the matter power spectrum is below $1\%$ for $k < 5h/{\rm Mpc}$ at $z = 0$, and even smaller at higher redshift. The impact on halo properties is also small ($\lesssim 3\%$ for abundance, profiles, mass; and $\lesssim 0.05\%$ for positions and velocities). The method can boost the performance of modified gravity simulations by more than a factor of 10, which allows them to be pushed to resolution levels that were previously hard to achieve. • ### f(R) gravity on non-linear scales: The post-Friedmann expansion and the vector potential(1503.07204) Nov. 5, 2015 gr-qc, astro-ph.CO Many modified gravity theories are under consideration in cosmology as the source of the accelerated expansion of the universe and linear perturbation theory, valid on the largest scales, has been examined in many of these models. However, smaller non-linear scales offer a richer phenomenology with which to constrain modified gravity theories. Here, we consider the Hu-Sawicki form of $f(R)$ gravity and apply the post-Friedmann approach to derive the leading order equations for non-linear scales, i.e. the equations valid in the Newtonian-like regime. We reproduce the standard equations for the scalar field, gravitational slip and the modified Poisson equation in a coherent framework. In addition, we derive the equation for the leading order correction to the Newtonian regime, the vector potential. We measure this vector potential from $f(R)$ N-body simulations at redshift zero and one, for two values of the $f_{R_0}$ parameter. We find that the vector potential at redshift zero in $f(R)$ gravity can be close to 50\% larger than in GR on small scales for $|f_{R_0}|=1.289\times10^{-5}$, although this is less for larger scales, earlier times and smaller values of the $f_{R_0}$ parameter. Similarly to in GR, the small amplitude of this vector potential suggests that the Newtonian approximation is highly accurate for $f(R)$ gravity, and also that the non-linear cosmological behaviour of $f(R)$ gravity can be completely described by just the scalar potentials and the $f(R)$ field. • ### Modified Gravity N-body Code Comparison Project(1506.06384) Sept. 29, 2015 gr-qc, astro-ph.CO Self-consistent ${\it N}$-body simulations of modified gravity models are a key ingredient to obtain rigorous constraints on deviations from General Relativity using large-scale structure observations. This paper provides the first detailed comparison of the results of different ${\it N}$-body codes for the $f(R)$, DGP, and Symmetron models, starting from the same initial conditions. We find that the fractional deviation of the matter power spectrum from $\Lambda$CDM agrees to better than $1\%$ up to $k \sim 5-10~h/{\rm Mpc}$ between the different codes. These codes are thus able to meet the stringent accuracy requirements of upcoming observational surveys. All codes are also in good agreement in their results for the velocity divergence power spectrum, halo abundances and halo profiles. We also test the quasi-static limit, which is employed in most modified gravity ${\it N}$-body codes, for the Symmetron model for which the most significant non-static effects among the models considered are expected. We conclude that this limit is a very good approximation for all of the observables considered here. • ### Distinguishing general relativity and $f(R)$ gravity with the gravitational lensing Minkowski functionals(1410.2734) Sept. 
18, 2015 astro-ph.CO We explore the Minkowski functionals of weak lensing convergence map to distinguish between $f(R)$ gravity and the general relativity (GR). The mock weak lensing convergence maps are constructed with a set of high-resolution simulations assuming different gravity models. It is shown that the lensing MFs of $f(R)$ gravity can be considerably different from that of GR because of the environmentally dependent enhancement of structure formation. We also investigate the effect of lensing noise on our results, and find that it is likely to distinguish F5, F6 and GR gravity models with a galaxy survey of $\sim3000$ degree$^2$ and with a background source number density of $n_g=30~{\rm arcmin}^{-2}$, comparable to an upcoming survey dark energy survey (DES). We also find that the $f(R)$ signal can be partially degenerate with the effect of changing cosmology, but combined use of other observations, such as the cosmic microwave background (CMB) data, can help break this degeneracy. • ### Weak lensing by voids in modified lensing potentials(1505.05809) Sept. 13, 2015 astro-ph.CO We study lensing by voids in Cubic Galileon and Nonlocal gravity cosmologies, which are examples of theories of gravity that modify the lensing potential. We find voids in the dark matter and halo density fields of N-body simulations and compute their lensing signal analytically from the void density profiles, which we show are well fit by a simple analytical formula. In the Cubic Galileon model, the modifications to gravity inside voids are not screened and they approximately double the size of the lensing effects compared to GR. The difference is largely determined by the direct effects of the fifth force on lensing and less so by the modified density profiles. For this model, we also discuss the subtle impact on the force and lensing calculations caused by the screening effects of haloes that exist in and around voids. In the Nonlocal model, the impact of the modified density profiles and the direct modifications to lensing are comparable, but they boost the lensing signal by only $\approx 10\%$, compared with that of GR. Overall, our results suggest that lensing by voids is a promising tool to test models of gravity that modify lensing.
2020-07-16 14:51:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6298971772193909, "perplexity": 1001.194673718851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657169226.65/warc/CC-MAIN-20200716122414-20200716152414-00406.warc.gz"}
https://sahandsaba.com/combinatorial-generation-using-coroutines-in-python.html
# Combinatorial Generation Using Coroutines With Examples in Python ## 1   Introduction Why is it that, for me, combinatorics arouses feelings of pure pleasure, yet, for many others it evokes feelings of pure panic? - Don Knuth, The Art of Computer Programming, Vol. 4 The goal of combinatorial generation (or searching as Knuth calls it) is to exhaustively produce a set of combinatorial objects, one at a time, often subject to some constraints, and often in a certain required order. Both [KNUTH-4A] and [RUSKEY] provide excellent introductions to the subject of combinatorial generation. Combinatorial generation problems encompass a wide range of problems, from relatively simple (e.g. generating all subsets or all permutations) to rather complex (e.g. generating all ideals of a poset in Gray order). Algorithms for combinatorial generation are often divided into iterative and recursive categories. Iterative algorithms have traditionally been considered superior in performance due to the overhead of repetitive function calls in recursive algorithms. Arguably, this advantage is less noticeable when recursion is used properly (no redundant subtrees in the recursion tree) and modern compilers are used. Recursive algorithms, on the other hand, often have the advantage of being easier to read and understand. These two types of algorithms can be further considered as ways of approaching a combinatorial generation problem. That is, there are a few problem-solving strategies that work naturally with each type of algorithm. For example, with recursion, the main strategy involves reducing the problem to a subproblem. Similarly, with iterative algorithms the strategy of finding the next object in lexicographic order is quite commonly used and is rather powerful. Approaches that use the algebraic or arithmetic properties of the objects generated are also often used in iterative algorithms. We will see some examples of all of these in this article. Coroutines, which can be seen as a generalization of functions, can encompass both recursive and iterative algorithms. As such, they provide an ideal mechanism for combinatorial generation. In fact, one of the most popular coroutine use patterns in modern programming languages is the generator pattern, which we will discuss in next section. As the name suggests, generators provide the perfect mechanism for implementing combinatorial generation algorithms, recursive or iterative. In addition, since coroutines are a generalization of functions, we can exploit their generality to come up with combinatorial generation algorithms that are arguably somewhere between recursive and iterative. These algorithms introduce a new strategy for approaching combinatorial generation, which can be taken as a third approach, in addition to recursive and iterative approaches. This article is intended to provide an introduction to combinatorial generation using coroutines. Most of the discussion in this article will be through examples. Performance is discussed in a few of the examples as well. The main ideas presented here are either directly taken from those in [KR], or inspired by them. Most of the article is written with an intermediate or advanced programmer with a modest level of familiarity with combinatorics and combinatorial generation as the audience in mind, though the last few examples involve combinatorial objects beyond the basics. 
Examples are all in Python, and all the source code included here is available at https://github.com/sahands/coroutine-generation for any readers who wish to experiment with the code interactively. You can also see the Prezi slides for this project here.

A Note on Python 2 vs. Python 3

All the code in this article is written to be compatible with Python 2.5 to 3.3. However, I make the general assumption that Python 3 is in use, and as such make no effort to write code that would be more efficient in Python 2. For example, I use range instead of xrange, since in Python 3 xrange is removed and range returns an iterator instead of a list. However, in favour of compatibility with Python 2, no Python 3 specific feature (e.g. yield from) is used.

## 2   Coroutines and Their Implementation in Python

### 2.1   Basic Definition

As mentioned in the introduction, coroutines are a generalization of functions. Assume A is a function that calls B. In terms of the flow of execution, this involves A pausing its execution and passing the flow to B. As such, A can then be seen to be in a "paused" state until B finishes and returns execution back to the caller, A in this case. Coroutines generalize functions by allowing for any coroutine to pause its execution and yield a result at any point, and for any other coroutine to pass the execution to any other paused coroutine to continue. To achieve this, coroutines need to remember their state so they can continue exactly where they left off when resumed. The coroutine's "state" here refers to the values of local variables, as well as where in the coroutine's code the execution was paused. In other words, coroutines are functions that allow for multiple entry points, that can yield multiple times, and resume their execution when called again. On top of that, coroutines can transfer execution to any other coroutine instead of just the coroutine that called them. Functions, being special cases of coroutines, have a single entry point, can only yield once, and can only transfer execution back to the caller coroutine.

### 2.2   Python Generators

In Python, generators, which are basic coroutines with a few restrictions, were introduced in [PEP-255]. The syntax for defining coroutines in Python is very similar to that of functions, with the main difference being that instead of return the keyword yield is used to pause the execution and return the execution to the caller. The syntax for using generators is rather different from functions though, and is in fact closer to how classes are treated in Python: calling a generator function returns a newly created "generator object", which is an instance of the coroutine independent of other instances. To call the generator, the next built-in function is used, and the generator object is passed to next as the parameter. Here is a very simple example demonstrating how a very simple function can be implemented as a coroutine using a generator in Python:

```python
def add_func(a, b):
    return a + b

def add_coroutine(a, b):
    yield a + b

# Usage:
x = add_func(1, 2)
print(x)

# Equivalent to:
adder = add_coroutine(1, 2)
x = next(adder)
print(x)
# Further calls such as the following to adder will result in a StopIteration
# being raised.
# next(adder)
```

Of course, the above example is meant to contrast the syntactic differences of generators and functions. The particular use of a coroutine demonstrated above is of course completely unnecessary.
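To make the StopIteration behaviour mentioned in the comment concrete, here is a tiny illustrative snippet (my own addition, not from the article's repository) that calls the exhausted adder generator one more time and catches the exception:

```python
# Illustrative only: observing the StopIteration raised by an exhausted generator.
adder = add_coroutine(1, 2)
print(next(adder))  # Prints 3 -- the single yield runs
try:
    next(adder)     # The generator has nothing left to yield
except StopIteration:
    print('adder is exhausted')
```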
Let us look at a somewhat more interesting example, taken, with minor modification, from [PEP-255]:

```python
def fib():
    a, b = 0, 1
    while True:
        yield b
        a, b = b, a + b

# Usage:
f = fib()  # Create a new "instance" of the generator coroutine
print(next(f))  # Prints 1
print(next(f))  # Prints 1
print(next(f))  # Prints 2
print(next(f))  # Prints 3
print(next(f))  # Prints 5
```

Here we have a generator that yields the numbers in the Fibonacci sequence ad infinitum. Each call to the generator slides the a and b variables ahead in the sequence, and then execution is paused and b is yielded.

### 2.3   Recursive Generators

Before continuing, let us look at a simple example of a recursive algorithm implemented using coroutines as well. In this example, we create a very minimalistic binary tree and then print its post-order traversal. Notice how generators can be recursive, and how they implement the iterator interface, which allows them to be used inside for loops and generator expressions.

```python
from collections import defaultdict

def postorder(tree):
    if not tree:
        return
    for x in postorder(tree['left']):
        yield x
    for x in postorder(tree['right']):
        yield x
    yield tree['value']

# Usage:
tree = lambda: defaultdict(tree)
# Let's build a simple tree representing (1 + 3) * (4 - 2)
T = tree()
T['value'] = '*'
T['left']['value'] = '+'
T['left']['left']['value'] = '1'
T['left']['right']['value'] = '3'
T['right']['value'] = '-'
T['right']['left']['value'] = '4'
T['right']['right']['value'] = '2'
postfix = ' '.join(str(x) for x in postorder(T))
print(postfix)  # Prints 1 3 + 4 2 - *
```

In Python 3, with [PEP-380], the above can be made even simpler by using the yield from statement:

```python
def postorder(tree):
    if not tree:
        return
    yield from postorder(tree['left'])
    yield from postorder(tree['right'])
    yield tree['value']
```

However, the shorter and nicer Python 3 syntax will not be used for the rest of the article to keep the code Python 2 compatible.

### 2.4   PEP 342 and the Enhanced yield Keyword

Python generators were further generalized to allow for more flexible coroutines in [PEP-342]. Prior to the enhancements in [PEP-342], Python's generators were coroutines that could not accept new parameters after the initial parameters were passed to the coroutine. With [PEP-342]'s send method, a coroutine's execution can resume with further data passed to it as well. This is implemented by allowing the yield keyword to be used not just as a statement but also as an expression, the evaluation of which results in the coroutine pausing until a value is passed to it via send, which will be the value that the yield expression evaluates to. In this article, we will only need to use the generator pattern, and will only use yield as a statement, meaning the send method will not be used.

### 2.5   Clarification Regarding Terminology

It is important to mention that in some Python literature the word "coroutine" has come to mean specifically coroutines that use yield as an expression and hence require the use of send to operate. See [BEAZLEY] for example (which, by the way, is an excellent introduction to coroutines and their uses in IO operations, parsing, and more). I believe this is somewhat inaccurate, since coroutines are a general concept, and functions, and generators with next or send or both, all fall under coroutines. (That is, on an abstract level, the set of coroutines contains the set of generators and functions, and more.)
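For readers who have not seen send in action, here is a minimal sketch (my own illustrative example, not code from the article) of the kind of send-based coroutine that [BEAZLEY] focuses on; the value passed to send becomes the value of the yield expression:

```python
def running_total():
    # Illustrative send-based coroutine: each value sent in is added to a total,
    # and the current total is yielded back to the caller.
    total = 0
    while True:
        x = yield total  # Pauses here; x becomes whatever is passed via send()
        total += x

acc = running_total()
next(acc)            # "Prime" the coroutine by running it to the first yield
print(acc.send(10))  # Prints 10
print(acc.send(5))   # Prints 15
```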
In this article, I use the word "coroutine" in its generality, as defined in the first paragraph of this section, in accordance with how Knuth defines the word in [KNUTH-1]. I also will more or less use it interchangeably with the word "generator", since we will only use coroutines that are generators in this article. I will refer the readers interested in the enhanced yield keyword and its use to [BEAZLEY].

### 2.6   A Final Note on Coroutines in Python

Before we move on, it is important to note that even with [PEP-342], Python's generators do not implement coroutines in full generality. To quote [PY-1]:

All of this makes generator functions quite similar to coroutines; they yield multiple times, they have more than one entry point and their execution can be suspended. The only difference is that a generator function cannot control where should the execution continue after it yields; the control is always transferred to the generator's caller.

So unlike the way Knuth defines and uses coroutines, Python's generators are not completely symmetric; an executing generator object is still coupled to the caller, which creates asymmetry. However, this limitation will not be an issue for our purposes here.

## 3   Motivating Example: Multi-Radix Numbers

We start our exploration of coroutine-based combinatorial generation with a simple example: multi-radix numbers. The goal here is to provide a short and simple example of the common approaches to solving combinatorial generation problems, and then introduce the coroutine-based approach so as to emphasize the differences and advantages of each approach. The first approach will be based on arithmetical properties of the objects we are generating, the second will be a recursive solution based on a reduction to a subproblem, the third will be an iterative approach based on explicitly finding the next lexicographic item, and finally, the fourth approach will be the coroutine-based one.

### 3.1   Problem Definition

Our goal in this section will be to produce the set of multi-radix numbers in lexicographic (dictionary) order given a multi-radix base $M$. More specifically, given a list $M$ of positive numbers, produce all lists $a$ of the same length as $M$ such that $0 \le a[i] < M[i]$, in lexicographic order. Here is an example:

```python
>>> M = [3, 2, 4]
>>> for a in multiradix(M):
...     print(a)
...
[0, 0, 0]
[0, 0, 1]
[0, 0, 2]
[0, 0, 3]
[0, 1, 0]
[0, 1, 1]
[0, 1, 2]
[0, 1, 3]
[1, 0, 0]
[1, 0, 1]
[1, 0, 2]
[1, 0, 3]
[1, 1, 0]
[1, 1, 1]
[1, 1, 2]
[1, 1, 3]
[2, 0, 0]
[2, 0, 1]
[2, 0, 2]
[2, 0, 3]
[2, 1, 0]
[2, 1, 1]
[2, 1, 2]
[2, 1, 3]
```

In other words, the combinatorial set of objects being generated is the Cartesian product $\prod_{i=0}^{n-1} \{0, 1, \ldots, m_i - 1\}$ where $M = [m_0, \ldots, m_{n-1}]$. So those of you familiar with Python's itertools module might already have thought of a quick solution to the problem:

```python
from itertools import product

def multiradix_product(M):
    return product(*(range(x) for x in M))
```

This, of course, is not an algorithm as much as it is delegating the task! Nonetheless, it is a good start and we will use it as a base-line for performance comparisons of the rest of the algorithms. We will also briefly look at how Python's itertools.product function is implemented internally after we discuss our algorithms.

### 3.2   An Algorithm Based on Arithmetic

To start with our first solution, let's observe that with $M = [2] * n$, the problem is reduced to counting in binary:

```python
>>> M = [2, 2, 2]
>>> for a in multiradix(M):
...     print(a)
...
[0, 0, 0]
[0, 0, 1]
[0, 1, 0]
[0, 1, 1]
[1, 0, 0]
[1, 0, 1]
[1, 1, 0]
[1, 1, 1]
```

This observation leads to the following iterative solution: simply start from zero and count to $(\prod m_i) - 1$, and convert the numbers to the multi-radix base given by $M$, similar to how we convert numbers to binary. This results in the following code.

```python
from operator import mul
from functools import reduce

def to_multiradix(x, M, a):
    # Write the integer x into a, using the multi-radix base given by M.
    n = len(M)
    for i in range(1, n + 1):
        x, a[-i] = divmod(x, M[-i])
    return a

def multiradix_counting(M):
    n = len(M)
    a = [0] * n
    last = reduce(mul, M, 1)
    for x in range(last):
        yield to_multiradix(x, M, a)
```

We can classify this algorithm as an iterative algorithm that relies on the arithmetical properties of the objects we are generating. Because of this, it does not have a very combinatorial feel to it. It also happens to be quite slow, especially in Python, since every number in $a$ is recalculated each time, and multiple divisions have to happen per generated object.

### 3.3   A Recursive Algorithm Based on Reduction to Subproblems

The next approach is the recursive one. To use recursion, we need to reduce the problem to a subproblem. Say $M$ has $n$ items in it, so we are producing multi-radix numbers with $n$ digits. Let $M' = [M[0], M[1], \ldots, M[n-2]]$. That is, $M'$ is the first $n-1$ elements of $M$. Then if we have a list of multi-radix numbers for $M'$ in lexicographic order, we can extend that list to a list of lexicographic numbers for $M$ by appending each of $0$ to $M[n-1] - 1$ to each element of the list. This approach leads to the following recursive code:

```python
def multiradix(M, n, a, i):
    if i < 0:
        yield a
    else:
        for __ in multiradix(M, n, a, i - 1):
            # Extend each multi-radix number of length i with all possible
            # 0 <= x < M[i] to get a multi-radix number of length i + 1.
            for x in range(M[i]):
                a[i] = x
                yield a

def multiradix_recursive(M):
    n = len(M)
    a = [0] * n
    return multiradix(M, n, a, n - 1)
```

Quite simple and elegant, and as we will see, quite fast as well.

### 3.4   An Iterative Algorithm

Now, let's look at the iterative approach. Since our goal is to go from one given multi-radix number to the next in lexicographic order, we can start scanning from right to left until we find an index in $a$ that we can increment, do the incrementation, and then set everything to the right of that index to $0$. For example, if our multi-radix number system is simply given by $M = [10] * 4$, so we simply have decimal numbers of $4$ digits, and our current $a$ is $0399$, then scanning from right to left tells us that $3$ is the first number that can be incremented, so we increment $3$ getting $0499$ and then set everything to the right of $4$ to $0$ getting $0400$, which is the next number in lexicographic order. We can also just set numbers that can not be incremented to zero as we do the scanning for the first number to increment, which will save us from having two loops. This approach results in the following code:

```python
def multiradix_iterative(M):
    n = len(M)
    a = [0] * n
    while True:
        yield a
        # Find right-most index k such that a[k] < M[k] - 1 by scanning from
        # right to left, and setting everything to zero on the way.
        k = n - 1
        while a[k] == M[k] - 1:
            a[k] = 0
            k -= 1
            if k < 0:
                # Last lexicographic item
                return
        a[k] += 1
```

### 3.5   A Coroutine-Based Algorithm

Finally, let's look at the coroutine-based algorithm. The basic idea here is very similar to the previous iterative algorithm, but the execution is very different. To explain this algorithm, I will borrow Knuth's style of explaining his coroutine-based algorithms in [KR]. Picture a line of $n + 1$ friendly trolls.
Each troll, with the exception of the first troll, holds a number in his hand. The trolls will behave in the following manner. When a troll is poked, if the number in his hand is strictly less than $m_i - 1$ (meaning the number can be increased) he simply increments the number and yells out "done". If the number in his hand is equal to $m_i - 1$ then he changes the number to $0$ and then pokes the previous troll without yelling anything. The first troll in line is special; whenever poked, he simply yells out "last" without doing anything else. We will call the last troll in line (corresponding to index $n - 1$) the lead troll. The algorithm will start with all trolls holding the number $0$ in their hands. Each time we need the next item generated, we poke the lead troll. If we hear "done" then we know we have a new item. If we hear "last" then we know that we are at the end of the generation task. In the implementation of the above idea, each troll becomes a coroutine. Yelling out "done" will be yielding True and yelling out "last" will be yielding False. Troll number $-1$ is a special nobody coroutine that simply yields False repeatedly:

```python
def nobody():
    while True:
        yield False
```

The rest of the trolls are instances of the troll coroutine in the code given below. Each troll creates the troll previous to it in line, until we get to troll number $0$, which creates a nobody coroutine as its previous troll.

```python
from nobody import nobody

def troll(M, a, i):
    previous = troll(M, a, i - 1) if i > 0 else nobody()
    while True:
        if a[i] == M[i] - 1:
            a[i] = 0
            yield next(previous)  # Poke the previous troll
        else:
            a[i] += 1
            yield True

def multiradix_coroutine(M):
    n = len(M)
    a = [0] * n
    lead = troll(M, a, n - 1)
    yield a
    while next(lead):
        yield a
```

### 3.6   Discussion

In the previous sections we saw four algorithms that solve the problem of generating multi-radix numbers in lexicographic order. The four algorithms were

• multiradix_counting: an iterative algorithm based on arithmetic,
• multiradix_recursive: a recursive algorithm that reduced the problem to a subproblem,
• multiradix_iterative: an iterative algorithm that explicitly produced the next item in lexicographic order,
• multiradix_coroutine: the coroutine-based algorithm described above.

We also saw how to solve the problem using Python's built-in itertools.product function. The latter was implemented as multiradix_product. Let's look at a simple performance comparison of the five by having them generate all multi-radix numbers with $M = [10] * 7$, in other words, the digits of all 7-digit numbers in base ten. The result is shown below.

```
Testing multiradix_product:
Function test_generator took 0.472 seconds to run.
Testing multiradix_counting:
Function test_generator took 26.281 seconds to run.
Testing multiradix_recursive:
Function test_generator took 1.721 seconds to run.
Testing multiradix_iterative:
Function test_generator took 3.687 seconds to run.
Testing multiradix_coroutine:
Function test_generator took 4.726 seconds to run.
```

So, to rank them in order of efficiency based on this simple test: the method based on arithmetic is the slowest by a large margin. This makes sense, given that we are dealing with base ten numbers, not a power of two, which computers are much better at dealing with. On top of that, Python is notoriously slow at numeric calculations. And the fastest, of course, is using the built-in itertools.product method, which is not surprising in the least because it is implemented in C. However, it is interesting to find out which, if any, of the above algorithms is used to implement Python's itertools.product function. For this, let's have a look at Python's source code, file itertoolsmodule.c (see [PY-2]).
The relevant section is inside the product_next function:

```c
/* Update the pool indices right-to-left.  Only advance to the
   next pool when the previous one rolls-over */
for (i=npools-1 ; i >= 0 ; i--) {
    pool = PyTuple_GET_ITEM(pools, i);
    indices[i]++;
    if (indices[i] == PyTuple_GET_SIZE(pool)) {
        /* Roll-over and advance to next pool */
        indices[i] = 0;
        elem = PyTuple_GET_ITEM(pool, 0);
        Py_INCREF(elem);
        oldelem = PyTuple_GET_ITEM(result, i);
        PyTuple_SET_ITEM(result, i, elem);
        Py_DECREF(oldelem);
    } else {
        /* No rollover. Just increment and stop here. */
        elem = PyTuple_GET_ITEM(pool, indices[i]);
        Py_INCREF(elem);
        oldelem = PyTuple_GET_ITEM(result, i);
        PyTuple_SET_ITEM(result, i, elem);
        Py_DECREF(oldelem);
        break;
    }
}
```

As can be seen, the indices are updated right to left, rolling over to zero until an index that can be incremented is found, which is essentially our iterative algorithm.

Our coroutine-based algorithm lags behind all the other ones in terms of performance except for the arithmetic one. This is not surprising given the overhead of calling coroutines in Python. However, the coroutine-based approach will allow us to solve certain problems in very interesting ways, as we will see. One last thing to note before moving on is that the coroutines given above can continue to be called even after False is yielded. In this case, doing so will result in the list being generated again from scratch, since all the numbers will have been set to zero by the time we get to nobody, and other than that all the coroutines are ready to run again. As we will see, this is an interesting property of the coroutine-based algorithms, and all of them will behave in this manner. That being said, in most of them, unlike this particular example, the order in which the list is generated is reversed each time False is yielded.

## 4   Binary Reflected Gray Codes

Now, let's consider the case of binary reflected Gray codes and see if we can apply our coroutine-based approach to this problem.

### 4.1   Problem Definition

For a full introduction and discussion of binary Gray codes, refer to either [KNUTH-4A] or [RUSKEY]. A binary Gray code is a listing of all binary strings of length $n$ such that every two subsequent strings differ in exactly one index. The binary reflected Gray code (BGRC) is one such code. It is given by recursively generating the BGRC for $n - 1$, then prepending a zero to all strings, and a one to all the strings in reverse order. A very naive recursive implementation in Python, which requires the whole code to be kept in memory, is given below, as a more precise definition.

```python
def gray(n):
    if n > 0:
        g = gray(n - 1)
        gr = reversed(g)
        return (['0' + a for a in g] + ['1' + a for a in gr])
    else:
        return ['']
```

And example output:

```python
>>> for a in gray(3):
...     print(a)
...
000
001
011
010
110
111
101
100
```

### 4.2   A Coroutine-Based Algorithm

The first example in [KR] is precisely BGRC, although it is presented as the ideals of the totally disconnected poset with $n$ vertices. To continue with the trolls of the last section, again we have a line of $n+1$ trolls, with the first troll in line being the special troll that simply yells out "last" when poked. This time, however, each troll is simply holding a light in his hand, which is either on or off. The trolls are also now either asleep or awake. If a sleeping troll is poked, he simply wakes up and pokes the previous troll. When an awake troll is poked, he just switches the light (from on to off, or off to on) and yells "done". It is relatively easy to see that the index of the first awake troll, starting from the right, follows the ruler sequence ($1, 2, 1, 3, 1, 2, 1, 4, 1, \ldots$).
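As a quick illustration (a small snippet of my own, not from the article's code), one can check that the position of the bit that changes between consecutive strings of the naive recursive gray(n) above follows this same ruler sequence when positions are counted from the right:

```python
# Illustrative only: positions (1-indexed from the right) at which consecutive
# BGRC strings differ.  For n = 3 this is the ruler sequence 1, 2, 1, 3, 1, 2, 1.
def flip_positions(n):
    code = gray(n)  # the naive recursive implementation from Section 4.1
    return [n - next(i for i in range(n) if a[i] != b[i])
            for a, b in zip(code, code[1:])]

print(flip_positions(3))  # Prints [1, 2, 1, 3, 1, 2, 1]
```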
Once this is established, the fact that the algorithm produces the BGRC can be shown immediately. I encourage you to convince yourself, mentally or using a pen and paper, that the above does indeed work as explained. As for the implementation using coroutines, the awake or asleep state of each troll is simply a matter of which instruction the coroutine will resume from when called next. We will not need a variable to keep track of it. This results in the code for this algorithm being deceptively simple. The "lead" coroutine will again be the last one, and we start the list with the all zeros list. Putting it all together we have the following code.

```python
from nobody import nobody

def troll(a, i):
    previous = troll(a, i - 1) if i > 0 else nobody()
    while True:
        a[i] = 1 - a[i]
        yield True
        yield next(previous)

def setup(n):
    a = [0] * n
    lead_coroutine = troll(a, n - 1)
    return a, lead_coroutine

def gray(n):
    a, lead = setup(n)
    yield a
    while next(lead):
        yield a
```

With this algorithm, when False is yielded the first time, a will be set to the all ones string. As such, if we run the algorithm a second time, we get the BGRC in reverse order. This can be repeated ad infinitum.

## 5   Steinhaus-Johnson-Trotter Permutation Generation

### 5.1   Problem Definition

SJT is an algorithm for generating all permutations in Gray order. Here, Gray order means that the "distance" between two subsequent permutations is one, where a distance of one means that they differ from each other by one swap of adjacent elements, also called a transposition. The basic idea of the algorithm is recursive. Given a list of permutations of $n-1$, we can produce a list of permutations of $n$ by inserting $n$ into each permutation of $n-1$, first by starting at the very right end and moving to the left, and then moving to the right, and so on. Here is an example output for $n=3$.

```
123
132
312
321
231
213
```

Here, given the permutations $12$ and $21$ for $n=2$, SJT inserts $3$ at the end of $12$ and then moves it to the left until it cannot move any further, then moves on to the next permutation of $n=2$, which is $21$, and inserts $3$ at the left end, and then moves it to the right until it can no longer move. A simple recursive implementation of this algorithm is given below.

```python
def permutations(n):
    if n:
        r = list(range(n - 1, -1, -1))  # Insertion positions, right to left first
        for pi in permutations(n - 1):
            for i in r:
                yield pi[:i] + [n] + pi[i:]
            r.reverse()
    else:
        yield []
```

### 5.2   A Coroutine-Based Algorithm

Now let's implement SJT using our coroutine-based approach. Picture our troll friends again, standing in a line, and as they did before, each troll is assigned a number that they will remember, between $0$ and $n$. This time, however, they no longer hold a number in their hands. Instead, the numbers are laid out in a row on a table, starting in increasing order: $1, 2, 3, \ldots, n$. Each troll also keeps track of his "direction", which is either left or right. All trolls start with direction left at the beginning. Number $0$ is not on the table since troll number $0$ is again the special troll that just yells "last" when poked. When poked, trolls will walk up to the table and find their number in the row. They will then look at the number next to their number based on their current direction (which they meticulously remember!). If the next number is larger than theirs, or there is no next number, meaning their number is the last or first in the row (depending on direction), then they just poke the previous troll in line, and switch their direction to be opposite of what it used to be.
Otherwise, they move their number, changing its place with the number next to it that it was compared to. In this case, they simply yell out "done". As before, I encourage you to convince yourself that the above does in fact produce all permutations in SJT order. For the coroutine implementation, we follow the above algorithm quite closely, but add a few things for simplicity. First, we pad both sides of our permutation with the number $n + 1$, which is greater than all numbers in the permutation. These two numbers will never move and their purpose is to simplify the code, since we now never have to worry about invalid indices, since we will always hit a "fence before falling off the cliff". This way we can just check to see if our number is greater than the next number before doing a swap. The end result is the following code.

```python
from time import sleep

def nobody():
    while True:
        yield False

def setup(n):
    # Example: for n = 4, pi starts as [0, 1, 2, 3, 4, 0]
    # The zeros act as "fixed barriers", never moving
    pi = list(range(n + 1)) + [0]
    # The inverse permutation starts as the identity as well. It does not need
    # the fixed barriers since their inverses will never be looked up.
    inv = pi[:-1]

    def troll(i):
        """
        The goal of troll[i] is to move i in the direction d until it hits a
        "barrier", defined as an element smaller than it.
        """
        neighbour = troll(i + 1) if i < n else nobody()
        d = 1
        while True:
            # j is the element next to i in pi, in direction d
            j = pi[inv[i] + d]
            if i < j:
                # Swap i and j
                pi[inv[i]], pi[inv[j]] = j, i
                inv[i], inv[j] = inv[j], inv[i]
                yield True
            else:
                # Change direction and poke
                d = -d
                yield next(neighbour)

    # The lead coroutine will be the coroutine in charge of moving 1
    return pi, troll(1)

def permutations(n):
    pi, lead = setup(n)
    yield pi[1:-1]
    while next(lead):
        yield pi[1:-1]

def main():
    s = set()
    n = 4
    pi, lead = setup(n)
    while True:
        print(pi[1:-1])
        s.add(tuple(pi[1:-1]))
        if not next(lead):
            print('----', len(s), '----')
            sleep(1)
            s.clear()

if __name__ == '__main__':
    main()
```

### 5.3   Discussion

First, let's have a look at the performance of the two implementations by having them generate all $10!$ (about $3.6$ million) permutations of $n=10$, and compare the running times:

```
Testing coroutine-based algorithm:
Function test_generator took 3.944 seconds to run.
Testing recursive algorithm:
Function test_generator took 4.374 seconds to run.
```

This time the coroutine-based implementation is slightly faster than the recursive one. One last thing to note about this particular example is that similar to our BGRC example, if run through again, the coroutines will generate the permutations in reverse order. For example, with $n=3$ we get:

```
123
132
312
321
231
213
-------
213
231
321
312
132
123
-------
123
132
312
321
231
213
-------
...
```

## 6   Ideals of a Poset Consisting of Several Chains

### 6.1   Problem Definition

Now let's consider another example taken from [KR]. In this example, the goal is to generate all ideals of a poset consisting of several chains, in Gray order. In simpler terms, we are to generate all binary strings $a$ of length $n$ such that, given some set $E = \{e_0, e_1, \ldots, e_{m-1}\}$ with $0 = e_0 < e_1 < \ldots < e_{m-1} \le n$, we have $a[k-1] \le a[k]$ for $k \not\in E$. This is the same thing as requiring that $a[e_i] \le a[e_i + 1] \le \ldots \le a[e_{i+1} - 1]$ for $1 \le i < m$. We can see right away that BGRC is a special case of this one, with $E = \{0, 1, 2, 3, \ldots, n\}$, which reduces the above to a vacuous condition that is satisfied by any binary string. Here is an example of the desired code for $E = \{0, 2, 3\}$ and $n=6$.
There are $3\cdot 2\cdot 4 = 24$ strings in the code total.

```
000000
000001
000011
000111
001111
001011
001001
001000
011000
011001
011011
011111
010111
010011
010001
010000
110000
110001
110011
110111
111111
111011
111001
111000
```

### 6.2   A Coroutine-Based Algorithm

In this example again, we will have our friendly trolls, with lights in their hands. They are, however, no longer in a neat and tidy straight line. Instead, each troll is next to potentially two other trolls, whom he can poke if needed. Let's call numbers that are at the bottom of a chain "lead" numbers. These are precisely the numbers in $E$. Trolls with lead numbers will have access to the previous lead coroutine, and all trolls will have a reference to the troll with the number above them. If there is no number above or to the left of a number, then the corresponding trolls will be the special nobody trolls that always yell out "last". For example, in the above diagram troll[0].above = troll[1] and troll[2].prev_lead = troll[0]. On the other hand, troll[5].above = troll[5].prev_lead = nobody(). Similar to the BGRC case, our trolls will be sleeping or awake. The rules for whom to poke and when to turn the light on and off are a bit more complicated, however. This time, the troll's behaviour depends not only on whether he is asleep or awake, but also on whether his light is on or off. Instead of explaining it here, I will let the code do the explanation with some added comments.

```python
from nobody import nobody

def troll(E, n, i, a):
    above = nobody()
    if i + 1 not in (E + [n + 1]):
        above = troll(E, n, i + 1, a)
    prev_lead = nobody()
    if i in E and i != 0:
        prev_lead = troll(E, n, E[E.index(i) - 1], a)
    while True:
        # Awake and light off - a[i] = 0
        while next(above):
            yield True
        a[i] = 1
        yield True
        # Asleep and light on - a[i] = 1
        yield next(prev_lead)
        # Awake and light on - a[i] = 1
        a[i] = 0
        yield True
        # Asleep and light off - a[i] = 0
        while next(above):
            yield True
        yield next(prev_lead)

def setup(E, n):
    a = [0] * n
    lead_troll = troll(E, n - 1, E[-1], a)
    return a, lead_troll
```

The basic idea is to set bits to one starting from the top of the last chain, and once all the bits in the last chain are set to one, call the coroutine for the previous lead to go to the next string given by the previous chains, and then start setting bits to zero starting from the bottom of the chain. Because of this, the algorithm is a bit similar to our SJT algorithm as well. This is our most complicated example so far, so I highly recommend you spend the time needed to make sure you understand it fully.

## 7   Conclusion

We looked at a variety of combinatorial generation algorithms implemented using coroutines. With the examples provided, I hope to have at least created some intrigue regarding the use of coroutines in solving combinatorial generation problems. It is my belief that with each style of attacking a combinatorial generation problem comes a "mode" of thinking. With recursive algorithms, the mode of thinking involves finding ways to reduce the problem to a subproblem; that is, if we have the solution to a smaller instance of the problem, how can we extend it to a solution for the larger instance? With the iterative approach, the mode of thinking either involves imitating what a recursive algorithm does in an iterative way, or it involves finding ways of going explicitly from one object to the next in the desired order. With both of these, the mode of thinking is somewhat "global". What I mean by this is that we are standing outside, looking at the whole list or object, and writing code that deals with the whole list or one object at a time.
With coroutines, the mode of thinking becomes more "local". We are no longer looking at the whole list or even a single object, but a single bit or number in a single object. This mode of thinking involves finding rules by which the coroutines representing the bits or numbers in the objects we are generating need to behave and interact with each other so as to produce the desired end result. I believe that this mode of thinking, apart from being interesting and novel in and of itself, can be applied to a variety of problems. It is also quite possible that this mode of thinking might be transferable to other areas, for example parallel processing and multi-tasking, which are the areas coroutines have typically been used in. For those of you interested in learning more, [KR] continues generalizing the BGRC and chain poset algorithms that we saw here, with the final algorithm generating the ideals of any given poset. The source code repository for this article has a few more examples in Python, including one for generating ideals of the zig-zag poset in zigzag.py.
2019-03-20 19:57:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 86, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5797932147979736, "perplexity": 960.4169798530363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202450.86/warc/CC-MAIN-20190320190324-20190320212052-00064.warc.gz"}
http://www.birthphotographyimagecompetition.com/heat-loss-bicwx/mining-massive-datasets-stanford-answers-f0bffa
# mining massive datasets stanford answers having done andrew ng's ml course, this course acts a perfect supplement and covers a lot of practical aspects of implementing the algorithms when applied to massive data sets. scribed as follows: for all itemss, computeru,s= Σx∈userscos-sim(x,u)∗Rxsand recommend ... MINING SOCIAL-NETWORK GRAPHS Exercise 10.8.3: Consider the running example of a social network, last shown in Fig. j=1Rij∗(R Compute pu. number of iterations. Use MathJax to format equations. Please sign in or register to post comments. 1/29/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 27 ¦ ¦ ( ; ) ( ; ) j N i x ij j N i x ij xj xi s s r r s ij… similarity of items i and j r xj…rating of user u on item j N(i;x)… set items rated by x similar to i ∑n Mining of Massive Datasets , by Jure Leskovec @jure, Anand Rajaraman @anand_raj, and Jeff Ullman. 2: Ch. Explain. Euclidean normalized idf. Answer to from Mining of Massive Datasets Jure Leskovec Stanford Univ. qi:=qi+η∗(εiu∗pu− 2 ∗λ∗qi). structures (See Figure 2 ) (e.g. Compute the eigenvalue decomposition of MTM (Use scipy.linalg.eigh function in Information for Stanford Faculty The Stanford Center for Professional Development works with Stanford faculty to extend their teaching and research to a global audience through online and in-person learning opportunities. 10.23. Gradiance (no late periods allowed): GHW 1: Due on … Your answer should show how you derived the expressions (even for the item-item case, weighting in the query: 1. Mining Massive Data Sets. I think this book can be especially suitable for those who: 1. withP⋆being a diagonal matrix whose coefficients are defined byPii⋆=Pii− 1 / 2. 2. Exercise 3.2.3 : What is the largest number of k-shingles a document of n bytes can have? The course CS345A, titled “Web Mining,” was designed as an advanced graduate course, although it has become accessible and interesting to advanced undergraduates. Python). When Jure Leskovec joined the Stanford … Note: The entries along the diagonal ofΣ(part (e)) are referred to as singular values 10.23. ⋆SOLUTION: For the user-user collaborative filtering recommendation,we have that: Similarly, for the item-item collaborative filtering recommendation, we have that: In this question you will apply these methods to a real dataset. given user watched a given show over a 3 month period. Thus,Suis given Week 1: MapReduce Link Analysis -- PageRank Week 2: Locality-Sensitive Hashing -- Basics + Applications Distance Measures Nearest Neighbors Frequent Itemsets Week 3: Data Stream Mining Analysis of Large Graphs Week 4: Recommender Systems Dimensionality Reduction Week 5: Clustering Computational Advertising Week 6: Support-Vector Machines Decision Trees MapReduce Algorithms Week 7: More About Link Analysis -- Topic-specific PageRank, Link Spam. Hint: For the item-item case,Γ =RQ− 1 / 2 RTRQ− 1 / 2. Make sure your graph has ay-axis so function of the number of iterationsi=1..20 forc1.txtand also forc2.txt. pTu) c1.txtand c2.txt. distance metric being used is Euclidean distance? final answer should describe operations on matrix level, notspecific terms of matrices. Press, but by arrangement with the publisher, you can download a free copy Here. If you are not a Stanford student, you can still take CS246 as well as CS224W or earn a Stanford Mining Massive Datasets graduate certificate by completing a sequence of four Stanford Computer Science courses… I was able to find the solutions to most of the chapters here. 
I've been taking a course in data mining/machine learning and we have been using the free textbook from the Stanford university courses described here. [TLDR] TLDR: need information on a solutions manual for the data mining textbook ("Mining of Massive Data Sets - Solutions Manual?"). I was able to find the solutions to most of the chapters here; I used the Google webcache feature to save the page in case it gets deleted in the future. For example, a recent lecture talked about how the BFR algorithm [1] for finding … Nonetheless, do try to solve the questions on your own first (the discussion forums are really helpful!). I'd define "massive" data as anything where n^2 is too big, where "too big" is bigger than either my RAM or my patience. The datasets grow to meet the computing available to them: the things gathering the data themselves become more powerful, and so more of that data makes it downstream.

CS 246: Mining Massive Data Sets. The availability of massive datasets is revolutionizing science and industry. The course discusses data mining and machine learning algorithms for analyzing very large amounts of data; the emphasis is on MapReduce as a tool for creating parallel algorithms that can process very large amounts of data. The course is based on the text Mining of Massive Datasets by Jure Leskovec, Anand Rajaraman, and Jeff Ullman, who by coincidence are also the instructors for the course. The book grew out of material developed by Anand Rajaraman and Jeff Ullman for a one-quarter course at Stanford; it focuses on practical algorithms that have been used to solve key problems in data mining. It is published by Cambridge University Press, but by arrangement with the publisher you can download a free copy here; the first edition was published by Cambridge University Press and you get a 20% discount by buying it … The previous version of the course is CS345A ("Web Mining"), which also included a course project; it was designed as an advanced graduate course, although it has become accessible and interesting to advanced undergraduates. As the textbook of the Stanford online course of the same title, this book is an assortment of heuristics and algorithms from data mining to some big data applications of today.

Mining Massive Datasets, Stanford online course (mmds.lagunita.stanford.edu), next session: Oct 11 - Dec 13, 2016. Instructor: Jure Leskovec, associate professor of CS at Stanford. His research focuses on mining and modeling large social and information networks, their evolution, and the diffusion of information and influence over them. Welcome to the self-paced version of Mining of Massive Datasets (StanfordOnline: CSX0002); you must be enrolled in the course to see course content. Answers to many frequently asked questions for learners prior to the Lagunita retirement were available on our FAQ page. There is also a Graduate Certificate in Mining Massive Datasets at Stanford University, an online program where students can take courses around their schedules and work towards completing their degree; with the Mining Massive Data Sets graduate certificate, you will master efficient, powerful techniques and algorithms for extracting information from large datasets such as the web and social-network graphs.

Course review: ★★★★★ I took one of the courses (Mining Massive Data Sets). It was challenging and rewarding at the same time. You should think about work-study balance, as it's very time consuming (15+ hours).

Recent changes to the book: a revised discussion of the relationship between data mining, machine learning, and statistics in Section 1.1; Ch. 2: Spark and TensorFlow added to Section 2.4 on workflow systems; Ch. 3: a more efficient method for minhashing in Section 3.3, and more about locality-sensitive hashing. Errata (reported by Ed Knorr, 3/5/12): Section 1.1.5, p. 4, l. 13, "orignal" should be "original"; Section 1.4, p. 16, 3 lines above Sect. 1.5.

Homework (Winter 2016 / Winter 2017): HW0 (Hadoop tutorial, to help you set up Hadoop) due on 1/12 at 11:59pm; HW1 due on 1/21 at 11:59pm; HW2 due on 2/04 at 11:59pm; HW3 due on 2/18 at 11:59pm; HW4 due on 3/03 at 11:59pm. Submission Templates: [pdf | tex | docx]. Solutions: [PDF][Code]. Sample final exams: 2011 final exam with solutions; 2013 final exam with solutions. Handouts: Ch2: Large-Scale File Systems and Map-Reduce; linear algebra review document (courtesy CS 229). Course topics: high-dimensional data (locality-sensitive hashing, clustering, dimensionality reduction), graph data (PageRank, SimRank, community detection, spam detection), and infinite data (streams). In many data mining situations we do not know the entire data set in advance; stream management is important when the input rate is controlled externally, e.g. Google queries or Twitter and Facebook status updates.

Problem notes (collaborative filtering): Consider a user-item bipartite graph where each edge between user U and item I indicates that user U likes item I. Assume we have m users and n items, so the ratings matrix R is m×n, with R_{i,j} = 1 if user i likes item j and R_{i,j} = 0 otherwise. The file user-shows.txt contains this ratings matrix, where each row corresponds to a user and each column corresponds to a TV show; R_{ij} = 1 if user i watched show j over a period of three months, for 9985 users and 563 popular TV shows. Define the non-normalized user similarity matrix T = R R^T (multiplication of R and transposed R); since R_{ij} is 0 or 1, T_{ii} = Σ_j R_{ij} (R^T)_{ji} = Σ_j R_{ij}^2 = degree(user i), i.e. the number of items that user i likes. Let P (m×m) be the diagonal matrix whose i-th diagonal element is the degree of user node i, and Q (n×n) the diagonal matrix whose i-th diagonal element is the degree of item node i, i.e. the number of users that liked item i. With P⋆ defined by P⋆_{ii} = P_{ii}^{-1/2}, the user-user similarity matrix is given by S_u = P⋆ R R^T P⋆. The relation "user u likes item i" can be put backward into "item i is liked by user u", which is equivalent to switching users and items, i.e. transposing the matrix R; so (R, Q, S_I) and (R^T, P, S_u) play similar roles and S_I can be expressed in terms of Q and R. The recommendation method using user-user collaborative filtering for user u recommends the k items for which r_{u,s} is the largest; the method using item-item collaborative filtering can be described as follows: for all items s, compute r_{u,s} = Σ_{x∈items} R_{u,x} ∗ cos-sim(x, s) and recommend the k items for which r_{u,s} is the largest. Define the recommendation matrix Γ (m×n) such that Γ(i, j) = r_{i,j}; find Γ for both the item-item and user-user collaborative filtering approaches, in terms of R, P and Q (describe operations on the matrix level, not specific terms of the matrices). SOLUTION notes: in the user-item bipartite graph T_{ii} equals the degree of user i; precision decreases both for user-user and item-item as k increases; there is no significant advantage to any of the methods (open question, no single right answer).

Problem notes (k-means in Spark): Run k-means on data.txt with the initial centroids located in one of the two text files c1.txt and c2.txt (use the dataset from q4/data within the bundle for this problem). [5 pts] Using the Euclidean distance (refer to Equation 1) as the distance measure, compute the cost function φ(i) (refer to Equation 2) for every iteration i; [5 pts] using the Manhattan distance metric (refer to Equation 3), compute the cost function ψ(i) (refer to Equation 4) for every iteration i. Hint: you do not need to write a separate Spark job to compute φ(i); you should be able to calculate costs while partitioning points into clusters. Note that, for your first iteration, you'll be computing the cost function using the initial centroids. Generate a graph where you plot the cost function as a function of the number of iterations i = 1..20 for c1.txt and also for c2.txt; you may use a single plot or two different plots, whichever you think best answers the theoretical questions we're asking you about. [5 pts] What is the percentage change in cost after 10 iterations of the k-means algorithm when the cluster centroids are initialized using c1.txt vs. c2.txt? (To be clear, the percentage refers to (cost[0] - cost[10]) / cost[0].) Is random initialization of k-means using c1.txt better than initialization using c2.txt in terms of cost φ(i)? In terms of cost ψ(i)? Explain your reasoning.

Problem notes (SVD and principal component analysis): Compute the eigenvalues and eigenvectors of M^T M (use the scipy.linalg.eigh function in Python; if you run into a memory error when doing large matrix operations, please make sure you are using 64-bit Python instead of 32-bit, which has a 4GB memory limit). The function returns two parameters: a list of eigenvalues (let us call this list Evals) and a matrix whose columns correspond to the eigenvectors of the respective eigenvalues (let us call this matrix Evecs). Sort the list Evals in descending order such that the largest eigenvalue appears first, and re-arrange the columns of Evecs such that the eigenvector corresponding to the largest eigenvalue appears in the first column. What are the values of Evals and Evecs after the sorting and re-arranging process? What is the relationship (if any) between the eigenvalues of M^T M and the singular values of M? Note the correspondence between V produced by SVD and the matrix of eigenvectors Evecs. Since M M^T = (UΣV^T)(UΣV^T)^T = (UΣV^T)(VΣ^T U^T) = UΣ^2 U^T, the non-zero eigenvalues of M M^T are the diagonal entries of Σ^2; the eigenvalues of M^T M are captured by the diagonal elements in Λ (part (d)), and the corresponding values in part (e) are referred to as the singular values of M. Based on the experiment and your derivations in part (c) and part (d), do you see any … Plot of E vs. … (Lecture note, 2/2/2015, Jure Leskovec, Stanford C246: Mining Massive Datasets: x is an eigenvector of M with the corresponding eigenvalue λ if Mx = λx.)

Problem notes (latent factors via stochastic gradient descent): Give (i) the equation for ε_{iu} and the update equations in the Stochastic Gradient Descent algorithm [3(a)]; (ii) the value of η (only one plot with your chosen η is required) [3(b)]; (iii) please upload all the code to Gradescope [3(b)]. Update the equations: in each update, we update q_i using p_u and p_u using q_i; compute the new values for q_i and p_u using the old values, and then update the vectors q_i and p_u. Computing E in pieces during an iteration is incorrect since P and Q are still being updated; you should compute E at the end of a full iteration of training. Note: please use native Python (Spark not required) to solve this problem.

Other exercises: Weighting in the query: 1. the weight of a term is 1 if present in the query, 0 otherwise; 2. Euclidean normalized idf. Solution 1: normalize the raw tf-idf weights computed in Ex. … Collaborative filtering lecture (1/29/2013, Jure Leskovec, Stanford C246: Mining Massive Datasets): s_ij is the similarity of items i and j, r_xj is the rating of user x on item j, and N(i;x) is the set of items rated by x similar to i. Mining social-network graphs, Exercise 10.8.3: consider the running example of a social network, last shown in Fig. …, and reason about node degrees, paths between nodes, etc. Can someone answer this question, from an exercise in the book (Mining of Massive Datasets, Chapter 3: Finding Similar Itemsets): what is the largest number of k-shingles a document of n bytes can have?

Related resources: Learning Stanford MiningMassiveDatasets in Coursera (lhyqie/MiningMassiveDatasets); an iPython Notebook for the homework assignments in the Coursera class Mining Massive Datasets offered in conjunction with Stanford University and taught by Jure Leskovec, Anand Rajaraman and Jeff Ullman; and Mining-Massive-Datasets, a repository with the list of solutions for Stanford's Mining Massive Datasets (the implementations for the solutions are in R; refer to this repository if you used it to help with your assignments).
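The eigen-decomposition step described above can be sketched in a few lines of NumPy/SciPy (a minimal illustration, not the course's starter code; the matrix M below is a made-up toy example):

```python
# Minimal sketch: eigen-decomposition of M^T M and the sort/re-arrange step,
# plus the relation between those eigenvalues and the singular values of M.
import numpy as np
from scipy.linalg import eigh

M = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])  # toy example

evals, evecs = eigh(M.T @ M)           # eigh returns eigenvalues in ascending order
order = np.argsort(evals)[::-1]        # indices that sort eigenvalues descending
evals = evals[order]                   # largest eigenvalue first
evecs = evecs[:, order]                # re-arrange columns to match

# Singular values of M are the square roots of the eigenvalues of M^T M.
singular_values = np.sqrt(np.clip(evals, 0.0, None))
print(evals, singular_values)
print(np.linalg.svd(M, compute_uv=False))  # should match singular_values
```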
2021-07-25 13:26:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4284014105796814, "perplexity": 4158.649595870312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151672.96/warc/CC-MAIN-20210725111913-20210725141913-00328.warc.gz"}
http://mathhelpforum.com/algebra/34607-simultaneous-equation-help-please.html
1. ## Simultaneous Equation Help Please! y=2x+3 x²+y²=2 I get x²+(2x+3)²=2 then, x²+(2x+3)(2x+3)=2 then, x²+4x²+12x+9=2 Then I'm stuck. I realise it's a quadratic equation (4x²+12x+9) but I don't know where to go from here. Help greatly appreciated x 2. Hello, (4x²+12x+9) this is equal to: (2x)²+2*3*2x+3² If a=2x and b=3 You have a²+2ab+b², which is (a+b)² 3. Move everything to one side: $5x^{2} + 12x + 7 = 0$ Factor and see what you get 4. thanks
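For completeness, the hint in post 3 works out as follows (a worked check, using the substitution from post 1): $5x^{2} + 12x + 7 = (5x + 7)(x + 1) = 0$, so $x = -1$ or $x = -\tfrac{7}{5}$. Substituting back into $y = 2x + 3$ gives the intersection points $(-1,\ 1)$ and $(-\tfrac{7}{5},\ \tfrac{1}{5})$; both satisfy $x^{2} + y^{2} = 2$.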
2013-12-06 07:12:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5495392680168152, "perplexity": 7813.919013181457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163049948/warc/CC-MAIN-20131204131729-00027-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/how-to-remember-when-to-add-and-when-to-multiply-exponents.751271/
# How to remember when to add and when to multiply exponents? 1. ### Tyrion101 149 I've always had trouble remembering things that are similar, but not the same, like sometimes you add exponents of an expression, is there something I can use to remember this? 2. ### jbunniii 3,347 Fixed the binary representations. My first attempt omitted some zeros. It might help to think about the special case of powers of 2. For example, ##(2^2)(2^3) = (4)(8) = 32##, which equals ##2^{2+3}##, not ##2^{2 \times 3}##. One way to remember this is to consider the binary representation: $$2^2 = 0000100, 2^3 = 0001000$$ Multiplication by 2 is equivalent to shifting the representation to the left by one "bit", which adds 1 to the exponent. Multiplication by ##2^3## is the same as multiplying by 2 three times, or equivalently, shifting to the left three times, or adding three to the exponent: $$2^2 \times 2^3 = 0100000 = 2^5$$ Last edited: Apr 29, 2014 ### Staff: Mentor Correction: There are too few zeros in the binary representations above, as well as the one later on. ##2^1 = 2 = 000010_2## This means 1 * 2^1 + 0 * 1. ##2^2 = 4 = 000100_2## This means 1 * 2^2 + 0 * 2^1 + 0 * 1. ##2^3 = 8 = 001000_2## This means 1 * 2^3 + 0 * 2^2 + 0 * 2^1 + 0 * 1. ##2^2 \times 2^3 = 100000_2 = 2^5 = 32## 4. ### jbunniii 3,347 Oops, yes, I left out a couple of zeros! Sorry for the confusion. I'll edit my previous post to fix it. ### Staff: Mentor Tyrion, it might be helpful to better understand what exponents mean. Exponents represent repeated multiplication, at least for positive integer exponents, so a^2 means ##a \cdot a## and a^3 means ##a \cdot a \cdot a##. This means we could write (a^2)(a^3) as ##(a \cdot a)(a \cdot a \cdot a)##. We can regroup these factors (associative property of multiplication) as ##(a \cdot a \cdot a \cdot a \cdot a)##, or a^5, since there are 5 factors of a. When you multiply a power of a variable by a power of the same variable, the exponents add. If we had (a^3)^2, that means (a^3)(a^3). If you expand each of the two factors as above, you'll see that there are 6 factors of a, so (a^3)^2 = a^6. When you raise a power of a variable to a power, the exponents multiply. 6. ### symbolipoint 3,039 Mark44's post means the most. Understand the rules of exponents, so you do not need to remember instructions about what to do with the exponents. You should reach the ability to know what to do just by seeing an expression with its exponents. You should also still be able to analyze what you see to enable easier work of simplifications. ### Staff: Mentor When in doubt, work it out like Mark44 did. After doing that enough times, you'll internalize the rules and you'll be able to write down the answer immediately without working out the intermediate steps. 8. ### Redbelly98 12,017 Staff Emeritus I think either working an example using simple numbers (jbunni's 1st suggestion in Post #2) OR working it out with symbols (Mark44, post #3) works best if you are having trouble memorizing the rules. Or like with most things: practice, practice, practice. Just my opinion: the suggestion of using binary representations may not be very helpful to somebody who is having some struggles or trying to wrap their head around exponent manipulation rules. But applying the same logic to powers of ten may work better: 10^1 x 10^2 = 10 x 100 = 1,000 = 10^3 = 10^{1+2}
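A quick numerical spot-check of the two rules discussed in the thread (a small Python sketch, not from the original posts):

```python
# Same base multiplied: exponents add; a power raised to a power: exponents multiply.
for a in (2, 3, 10):
    for m in (2, 3, 5):
        for n in (1, 4):
            assert a**m * a**n == a**(m + n)   # product rule
            assert (a**m)**n == a**(m * n)     # power-of-a-power rule
print("both exponent rules hold for the sampled cases")
```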
2015-03-01 06:54:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9876726865768433, "perplexity": 945.3495192002348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462232.5/warc/CC-MAIN-20150226074102-00270-ip-10-28-5-156.ec2.internal.warc.gz"}
https://crad.ict.ac.cn/EN/abstract/abstract2714.shtml
ISSN 1000-1239 CN 11-1777/TP ### A Neighborhood Rough Sets-Based Co-Training Model for Classification Zhang Wei1,2,3, Miao Duoqian1,3, Gao Can4, Yue Xiaodong5 1. 1(School of Electronics and Information, Tongji University, Shanghai 201804);2(School of Computer Science and Technology, Shanghai University of Electric Power, Shanghai 200090);3(Key Laboratory of Embedded System and Service Computing (Tongji University), Ministry of Education, Shanghai 201804);4(Zoomlion Heavy Industry Science and Technology Development Co., Ltd., Changsha 410013);5(School of Computer Engineering and Science, Shanghai University, Shanghai 200444) • Online: 2014-08-15 Abstract: Pawlak's rough set theory, as a supervised learning model, is only applicable to discrete data. However, it is often the case that practical data sets are continuous and involve both few labeled and abundant unlabeled data, which is outside the realm of Pawlak's rough set theory. In this paper, a neighborhood rough sets based co-training model for classification is proposed, which can deal with continuous data and utilize the unlabeled and labeled data to achieve better performance than a classifier learned only from few labeled data. Firstly, a heuristic algorithm based on neighborhood mutual information is put forward to compute the reduct of partially labeled continuous data. Then two diverse reducts are generated. The model employs the two reducts to train two base classifiers on the labeled data, and makes the two base classifiers teach each other on the unlabeled data to boost their performance iteratively. The experimental results on selected UCI datasets show that the proposed model is more effective in dealing with partially labeled continuous data than some representative ones in terms of learning accuracy. CLC Number:
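For readers who want a concrete picture of the "teach each other" loop, here is a generic co-training sketch. It is the standard co-training pattern, not the paper's neighborhood rough set algorithm: the two feature subsets stand in for the two reducts, the base classifiers are ordinary decision trees, and all names and thresholds below are placeholders.

```python
# Generic co-training loop (illustrative sketch only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def co_train(X_lab, y_lab, X_unlab, reduct_a, reduct_b, rounds=5, per_round=10):
    # Each classifier sees its own feature subset ("reduct") of the labeled data.
    Xa, ya = X_lab[:, reduct_a], y_lab.copy()
    Xb, yb = X_lab[:, reduct_b], y_lab.copy()
    pool = X_unlab.copy()
    clf_a, clf_b = DecisionTreeClassifier(), DecisionTreeClassifier()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        clf_a.fit(Xa, ya)
        clf_b.fit(Xb, yb)
        # Each classifier picks the unlabeled points it is most confident about
        # and hands them, with its predicted labels, to the other classifier.
        conf_a = clf_a.predict_proba(pool[:, reduct_a]).max(axis=1)
        conf_b = clf_b.predict_proba(pool[:, reduct_b]).max(axis=1)
        pick_a = np.argsort(conf_a)[-per_round:]
        pick_b = np.argsort(conf_b)[-per_round:]
        Xb = np.vstack([Xb, pool[pick_a][:, reduct_b]])
        yb = np.concatenate([yb, clf_a.predict(pool[pick_a][:, reduct_a])])
        Xa = np.vstack([Xa, pool[pick_b][:, reduct_a]])
        ya = np.concatenate([ya, clf_b.predict(pool[pick_b][:, reduct_b])])
        pool = np.delete(pool, np.union1d(pick_a, pick_b), axis=0)
    return clf_a, clf_b
```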
2022-08-17 01:38:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3003394603729248, "perplexity": 2431.6419071553923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.78/warc/CC-MAIN-20220817001643-20220817031643-00569.warc.gz"}
https://opals.geo.tuwien.ac.at/html/stable/preCalcFootprint.html
Python script preCalcFootprint (python.preCalcFootprint)
# Aim of script
Calculates the axes of the laser footprint on the target using a trigonometric approach. The aim of preCalcFootprint is to calculate the laser footprint area.
# General description
Using either existing normal and beam vectors (odm-attributes) or getting them on-the-fly, the ellipse can be calculated either as points with attributes (area, semimajor axis, ...) or as a polygon approximating the shape. Output can either be an odm or a shape file.
The intersection area of a laser beam with the tangent plane of a target surface is commonly referred to as laser footprint. Its size depends on the range (sensor-to-target distance), the beam divergence (i.e. opening angle of the laser beam) and the incidence angle between the beam and the surface. The latter is calculated based on the laser beam direction and the surface normal vector. Output can be either points or polygons, and either opals Datamanager (odm) or ESRI Shapefile (.shp). Different attributes can be selected for storing by using the {-a, -attribute} parameter. In case of angles of incidence close to 90°, resulting ellipses will not be exact. To skip those automatically, the {-m, -max_incidence} parameter can be used.
## Basic algorithm
The script uses a trigonometric approach to calculate the semimajor and semiminor axes of the theoretical elliptic footprint of the laser beam. In the next step, coordinates for points on this ellipse are calculated, with the ellipse in standard form and the points spaced apart {-pointspacing} degrees (eccentric anomaly). These points are then transformed into their position on the tangent plane.
# Parameter description
-inFile: input DataManager file (*.odm). Type: PathArgument. Remark: Mandatory.
-outFile: output file (.odm/.shp). Type: PathArgument. Remark: Optional. Description: Polygon output file. If an extension (.odm/.shp) is provided, the respective format will be used. Default format is .shp. If no outFile is given, results will be written back to the -inFile odm.
-filter: dataManager filter for pre-selection of points (not implemented yet). Type: String. Remark: Optional.
-limits: format: "xmin xmax ymin ymax". Type: String. Remark: Optional.
-beamDivergence: divergence of the laser beam in [mrad]. Type: Floating-point number. Remark: Mandatory.
-trjFile: path to the file containing the trajectory (if BeamVector arguments are not set). Type: PathArgument. Remark: Optional.
-trjFormat: format description to be used for parsing the input trajectory; this will be passed to the opalsImport module. Type: String. Remark: Optional.
-pointSpacing: spacing between points for the resulting ellipse polygon in [deg] (only if 'geometry polygon' is selected). Type: Floating-point number. Remark: Optional, default: 10.
-attributes: specify which features (multiple options allowed) should be saved to the output file. Type: String. Remark: Optional, default: ['area']. To save multiple features use this option multiple times, e.g. -attr area -attr axes. Possible values:
area: saves the area of the footprint as attribute [in sq.m.]
semimajor: saves the semimajor axis [in m]
semiminor: saves the semiminor axis [in m]
axes: saves both axes [in m]
incidence: saves the angle of incidence [in rad]
pointid: saves the id of the originating point
beamvector: saves the X/Y/Z components of the beam vector
normalvector: saves the X/Y/Z components of the normal vector
-geometry: output geometry type, point or polygon; polygons approximate the ellipse of the laser footprint. Type: String. Remark: Optional, default: 'polygon'. Possible values: point, polygon.
-maxIncidence: maximum incidence angle for footprint calculation [deg]. Type: Floating-point number. Remark: Optional, default: 90.
# Examples
The data used in the following examples can be found in the $OPALS_ROOT/demo/ directory. As a prerequisite for the subsequent examples, please import the data using the following commands:
opalsImport -inFile strip11.laz -trjFile TrjStrips_utm33.txt -tFormat trajectory.xml -storeBeamInfo BeamVector -filter "region[529570 5338660 529610 5338720]"
opalsNormals -inFile strip11.odm -searchMode d3 -neigh 8 -searchR 1.5
## Example 1
Calculate ellipse area and axes, store them in a polygon shapefile. Beam and normal vectors are assumed to exist as attributes.
opals preCalcFootprint -i strip11.odm -a area -a axes -o polygon_footprints.shp -beamdivergence 0.25
## Example 2
Calculate beam vectors "on-the-fly", and save the result in an odm as points (with attributes).
opals preCalcFootprint -i strip11.odm -a area -a incidence -o footprint.odm -trjFile TrjStrips_utm33.txt -trjFormat trajectory.xml -g point -beamdivergence 0.25
## Example 3
Write polygon information back to the original odm. Beam vectors are, again, assumed. The polygons will have the originating point id and the three components of the beam vector as attributes.
opals preCalcFootprint -i strip11.odm -a pointid -a beamvector -beamdivergence 0.25
# References
Cross-references from the OPALS documentation: opalsImport is the executable file of Module Import; opalsNormals is the executable file of Module Normals. Parameter notes: tFormat: file format of the trajectory file (opalsImport); storeBeamInfo: defines beam information that is attached during import (opalsImport); BeamVector: beam vector in the world coordinate system; filter: string to be parsed in construction of DM::IFilter (various modules); searchMode: dimension of nearest neighbor search (opalsNormals); d3: search based on full 3D coordinates (x, y and z); point: pixel (center) represents a point, mainly used for DTM grids; area: calculate only the area.
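As a rough illustration of the geometry involved (a back-of-the-envelope approximation, not the OPALS implementation or its exact trigonometric formulas), the footprint axes can be estimated from range, beam divergence and incidence angle like this:

```python
# Rough footprint-axes approximation (illustrative only): the beam cross-section
# has radius ~ R*tan(gamma/2) and is stretched by 1/cos(theta) on the inclined
# tangent plane. The approximation breaks down for incidence angles near 90 deg.
import math

def footprint_axes(range_m, beam_divergence_mrad, incidence_deg):
    gamma = beam_divergence_mrad * 1e-3          # mrad -> rad (full opening angle)
    theta = math.radians(incidence_deg)
    semiminor = range_m * math.tan(gamma / 2.0)
    semimajor = semiminor / math.cos(theta)      # projection onto inclined plane
    area = math.pi * semimajor * semiminor
    return semimajor, semiminor, area

# e.g. 500 m range, 0.25 mrad divergence, 20 deg incidence
print(footprint_axes(500.0, 0.25, 20.0))
```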
2022-01-19 22:02:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4490634799003601, "perplexity": 11641.172475660604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301592.29/warc/CC-MAIN-20220119215632-20220120005632-00132.warc.gz"}
http://www.mychemicalromance.com/blog/ddrgurl713/too-sensitive-musical-elements
# Too sensitive to musical elements?! Sometimes I dislike being so sensitive to musical elements such as tempo. Specifically when the arrows don't match the tempo of the song in DDR games!! Other times I really enjoy it. For example, when I notice that the arrows are going faster than the tempo of a song in a DDR game, I can go into the options and fix it. Therefore, instead of getting B's on songs, I get AA's!! XD They should put MCR songs on DDR. For realz. I would play them all the time =p
2013-12-11 02:20:14
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8865842819213867, "perplexity": 2354.584787311904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164029048/warc/CC-MAIN-20131204133349-00062-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.queryxchange.com/q/21_3259547/proof-verification-if-a-in-a-is-an-upper-bound-for-a-then-a-sup-a/
# Proof verification: If $a\in A$ is an upper bound for $A$, then $a=\sup A$ by csch2, Last Updated June 12, 2019 08:20 AM Prove that if $$a$$ is an upper bound for $$A$$, and if $$a$$ is also an element of $$A$$, then it must be that $$a=\sup A$$. We are given the following lemma: Lemma 1.3.8: Assume $$s\in\textbf{R}$$ is an upper bound for a set $$A\subseteq\textbf{R}$$. Then, $$s=\sup A$$ if and only if, for every choice of $$\epsilon>0$$, there exists an element $$a\in A$$ satisfying $$s-a<\epsilon$$. Proof: It is given that $$a$$ is an upper bound for $$A$$. To verify that $$a=\sup A$$, we use Lemma 1.3.8. We want to show that $$a-\epsilon<a_0$$ for some $$a_0\in A$$, for every $$\epsilon>0$$. Since $$a\in A$$, let $$a_0=a$$. Then we get that $$a-\epsilon<a$$, which holds for all $$\epsilon>0$$. Therefore, by Lemma 1.3.8, we have that $$a=\sup A$$.
2019-09-16 22:12:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9626641869544983, "perplexity": 70.43862543478131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572964.47/warc/CC-MAIN-20190916220318-20190917002318-00424.warc.gz"}
https://proofwiki.org/wiki/Definition:Degenerate_Hyperbola
# Definition:Conic Section/Intersection with Cone/Degenerate Hyperbola ## Definition Let $C$ be a double napped right circular cone whose base is $B$. Let $\theta$ be half the opening angle of $C$. That is, let $\theta$ be the angle between the axis of $C$ and a generatrix of $C$. Let a plane $D$ intersect $C$. Let $\phi$ be the inclination of $D$ to the axis of $C$. Let $K$ be the set of points which forms the intersection of $C$ with $D$. Then $K$ is a conic section, whose nature depends on $\phi$. Let $\phi < \theta$, that is: so as to make $K$ a hyperbola. However, let $D$ pass through the apex of $C$. Then $K$ degenerates into a pair of intersecting straight lines.
2019-11-13 16:09:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9631624817848206, "perplexity": 125.97290806556677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667262.54/warc/CC-MAIN-20191113140725-20191113164725-00178.warc.gz"}
https://somesquares.org/blog/2013/9/scrape-web-football-play-play-data-part-2/
# Scrape the web for football play-by-play data, part 2 04 Sep 2013 UPDATE: Part three of this series introduces the R package pbp, which contains the most up-to-date version of this software. Last things first: here's an extremely quick look at the distribution of rushing gains by Wisconsin's running backs in that game, based on the script we're developing in this series: This is part two in a series. It will make more sense if you begin with part one, or at least part 1.5. #### The story thus far At this point we have a list in R called plays, with an entry for each play in a given college football game. Each item in the list is itself a list, with the elements poss, indicating possession; down for the down; togo for the yards to go; dist, the distance to the goal line; time, the approximate game time remaining (in seconds); and pbp, the narrative play-by-play text for this play. An example play-by-play string: Stacey Bedell rush for no gain, fumbled, forced by Brendan Kelly, recovered by Wisc Ethan Armstrong at the UMass 35. Obviously, that's information we want to be able to analyze, but the computer is dumb and can't understand a simple, non-grammatical sentence like that one. Once again, we turn to regular expressions. We'll divide all possible football plays into a few types and compare the play-by-play for each play to a regex for each type of play. When the play matches a type, we can extract the roles that are relevant to that play type (e.g. pass plays have a passer and a receiver but rush plays only have a ball-carrier). I've chosen to break plays into these categories (each bullet point will get its own regular expression): ##### Special teams: • kickoff • punt • extra point (PAT) • field goal ##### Scrimmage plays: • rush • pass • interception ##### Results: • fumble • penalty • touchdown • first down ##### Other: • timeout In each case, we're going to use the utility function regex from the earlier post to extract named groups matching the play's roles. Note that college football scores a sack as a rush, which is silly. But negative rush plays are not uncommon, so in order to reclassify sacks as pass plays we need to figure out who the quarterbacks are and then call any quarterback run for negative yardage a sack. For this purpose, I've chosen to call any player who throws at least two passes in a game a quarterback. Some scorers record tacklers but most don't. I haven't bothered trying to catch tacklers here. Here's the code. It should work if appended to the code from part 1.5: Of course, this data is just for one game. For more detailed analysis, we'll need to create a database of plays from several games. Stay tuned.
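The post's own R code is not included above; purely to illustrate the named-group idea (one regular expression per play type, with groups for the roles), here is a small sketch in Python with made-up patterns:

```python
# Illustration only: classify a play-by-play string by matching it against one
# regex per play type and pulling out the role names via named groups.
import re

PLAY_TYPES = {
    "rush": re.compile(r"^(?P<rusher>[A-Z][\w.'-]+(?: [A-Z][\w.'-]+)*) rush for (?P<gain>no gain|-?\d+ yards?)"),
    "pass": re.compile(r"^(?P<passer>[A-Z][\w.'-]+(?: [A-Z][\w.'-]+)*) pass complete to (?P<receiver>[A-Z][\w.'-]+(?: [A-Z][\w.'-]+)*)"),
}

def classify(pbp_text):
    for play_type, pattern in PLAY_TYPES.items():
        m = pattern.search(pbp_text)
        if m:
            return play_type, m.groupdict()
    return "other", {}

print(classify("Stacey Bedell rush for no gain, fumbled, forced by Brendan Kelly"))
# -> ('rush', {'rusher': 'Stacey Bedell', 'gain': 'no gain'})
```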
2018-12-14 13:02:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4250315725803375, "perplexity": 3321.3767399950675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825728.30/warc/CC-MAIN-20181214114739-20181214140239-00033.warc.gz"}
http://math.stackexchange.com/questions/135208/prove-that-mathbbcx-y-ncong-mathbbcx-oplus-mathbbcy
# Prove that $\mathbb{C}[x,y] \ncong \mathbb{C}[x]\oplus\mathbb{C}[y]$ Prove that $\mathbb{C}[x,y] \ncong \mathbb{C}[x]\oplus\mathbb{C}[y]$ $\mathbb{C}[x,y]$ is the polynomial ring of two variables over $\mathbb{C}$. I guess that we can consider images of $xy$ and $x+y$, but can't complete my argument. Can you help please? - Isomorphic as what sort of objects? – Chris Eagle Apr 22 '12 at 10:54 If you mean isomorphic as rings you can show that one of the rings has non-trivial zero divisors whereas the other one is an integral domain. – marlu Apr 22 '12 at 10:58 @ChrisEagle as rings. – Sergey Filkin Apr 22 '12 at 11:09 As rings they cannot be isomorphic. The left hand side is an integral domain because: $\Bbb{C}$ is an integral domain, hence so is the polynomial ring $\Bbb{C}[x]$ over it. Since this is an integral domain, the polynomial ring over it in $y$ is an integral domain, viz. $\big(\Bbb{C}[x]\big)[y] \cong \Bbb{C}[x,y]$ is an integral domain. But for the right hand side we have $$(x,0) \cdot(0,y) =(0,0)$$ so it cannot be an integral domain. However if you view $\Bbb{C}[x,y]$ as a $\Bbb{C}$ - module, then we have the following $\Bbb{C}$ - module isomorphism: $$\Bbb{C}[x,y] \cong \Bbb{C}[x] \otimes_\Bbb{C} \Bbb{C}[y]$$ To see this, note that $\Bbb{C}[x,y]$ is a free $\Bbb{C}$ - module with basis $x^iy^j$. Now for the right hand side $\Bbb{C}[x]$ is a free $\Bbb{C}$ - module with basis $x^i$ and for the left $\Bbb{C}[y]$ is a free $\Bbb{C}$ - module as well with basis $y^j$. Therefore (exercise) their tensor product has basis $x^i \otimes y^j$. You can now do it as an exercise to prove that $\Bbb{C}[x,y]$ is isomorphic to $\Bbb{C}[x] \otimes_\Bbb{C} \Bbb{C}[y]$ via the $\Bbb{C}$ - module isomorphism that sends $x^iy^j$ to the elementary tensor $x^i \otimes y^j$. $\textbf{Edit:}$ In fact let me prove to you directly that we have such an isomorphism. Now consider the map $$B : \Bbb{C}[x] \times \Bbb{C}[y] \longrightarrow \Bbb{C}[x,y]$$ that sends $(p(x),q(y))$ to $p(x)q(y)$. It is easily checked that $B$ is well defined and bilinear. Therefore by the universal property of the tensor product, there exists a unique $\Bbb{C}$ - module homomorphism $$L : \Bbb{C}[x] \otimes_\Bbb{C} \Bbb{C}[y] \longrightarrow \Bbb{C}[x,y]$$ such that $B = L \circ \pi$ (in other words $B$ factors through the tensor product) and on elementary tensors $L(x^i \otimes y^j) = B(x^i,y^j) = x^iy^j$. As usual $\pi$ is the canonical projection from $\Bbb{C}[x] \times \Bbb{C}[y]$ to the tensor product that is not necessarily surjective. We only need to define $L$ on elementary tensors because we can just extend additively. Now it is easy to see that $L$ is surjective. To see that $L$ is injective, suppose wlog that we have an element $$\sum_{i,j} p_i(x) \otimes q_j(y)$$ in the kernel of $L$. Then by using the additivity of $L$ and the fact that $L$ is completely determined by the action of $B$ on a pair $(x^i,y^j)$ it is easy to see that this means that $\sum_{i,j} p_i(x)q_j(y)$ in $\Bbb{C}[x,y]$ must be $0$. This means that, fixing an $\bar{i}$ and $\bar{j}$, the coefficients of $p_{\bar{i}}q_{\bar{j}}$ are all zero, since we have noted that $\Bbb{C}[x,y]$ is a free $\Bbb{C}$ - module with basis as stated. Now to show that $\sum_{i,j} p_i(x) \otimes q_j(y) = 0$ in the tensor product, it suffices to show that, fixing some $\bar{i}$ and $\bar{j}$, we have $p_{\bar{i}} \otimes q_{\bar{j}} = 0$. Write $p_{\bar{i}} = p_0 + p_1x + \ldots p_nx^n$ and $q_{\bar{j}} = q_0 + q_1y + \ldots q_my^m$.
Then $$\begin{eqnarray*} p_{\bar{i}} \otimes q_{\bar{j}} &=& (p_0 + p_1x + \ldots p_nx^n) \otimes (q_0 + \ldots + q_my^m) \\ &=& p_0 \otimes q_0 + \ldots + p_nx^n \otimes q_my^m \\ &=& p_0q_0 (1 \otimes 1) + \ldots + p_nq_m (x^n \otimes y^m). \end{eqnarray*}$$ But then as noted before, $p_0q_0 = 0$, $(p_1q_0 + q_1p_0) = 0, \ldots, p_nq_m =0$ so that $p_{\bar{i}} \otimes q_{\bar{j}}$ is zero. Since $\bar{i}$ and $\bar{j}$ were arbitrary, it follows that $p_i \otimes q_j$ is zero for all $i,j$ that appear in the sum $$\sum_{i,j} p_i(x) \otimes q_j(y)$$ so that the sum itself is zero. It follows that $\ker L =\{0\}$ proving injectivity. Hence $L$ is a $\Bbb{C}$ - module isomorphism. $\hspace{6.5in} \square$ - thank you very much for a complete answer! Very enlightening. –  Sergey Filkin Apr 22 '12 at 11:38 Nice answer, +1! –  Rudy the Reindeer Apr 22 '12 at 11:54 @BenjaminLim There is a perhaps different reason why the equality $\mathbb{C}[x]\otimes_\mathbb{C}\mathbb{C}[y]$ should be "obvious". In the category of all commutative $\mathbb{C}$-algebras one has that $\mathbb{C}[x]$ is the free object on one generator. Moreover, $\otimes_\mathbb{C}$ is the coproduct and thus (at least with the intuition from normal coproducts, that they are "additive on dimension") the object $\mathbb{C}[x]\otimes_\mathbb{C}\mathbb{C}[y]$ should be the free object on two generators--or $\mathbb{C}[x,y]$. –  Alex Youcis Apr 22 '12 at 22:02 @AlexYoucis Thanks for sharing your insight. It was not so obvious to me, which is why I just decided to prove the isomorphims anyway. –  user38268 Apr 23 '12 at 3:29 @AlexYoucis I often use results about extension of scalars even now in learning about localisation. Your blog post on extension of scalars has been very helpful to me! –  user38268 Apr 23 '12 at 10:19
2015-10-05 01:33:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9325547218322754, "perplexity": 100.11738097666237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676547.12/warc/CC-MAIN-20151001215756-00245-ip-10-137-6-227.ec2.internal.warc.gz"}
https://www.iacr.org/cryptodb/data/author.php?authorkey=5875
## CryptoDB ### Baodian Wei #### Publications Year Venue Title 2015 EPRINT 2015 EPRINT 2015 EPRINT 2009 EPRINT Chameleon signatures are based on well established hash-and-sign paradigm, where a \emph{chameleon hash function} is used to compute the cryptographic message digest. Chameleon signatures simultaneously provide the properties of non-repudiation and non-transferability for the signed message. However, the initial constructions of chameleon signatures suffer from the problem of key exposure: the signature forgery results in the signer recovering the recipient's trapdoor information, $i.e.,$ the private key. This creates a strong disincentive for the recipient to forge signatures, partially undermining the concept of non-transferability. Recently, some specific constructions of key-exposure free chameleon hashing are presented, based on RSA or pairings, using the idea of Customized Identities". In this paper, we propose the first key-exposure free chameleon hash scheme based on discrete logarithm systems, without using the gap Diffile-Hellman groups. Moreover, one distinguished advantage of the resulting chameleon signature scheme is that the property of message hiding" or message recovery" can be achieved freely by the signer. Another main contribution in this paper is that we propose the first identity-based chameleon hash scheme without key exposure, which gives a positive answer for the open problem introduced by Ateniese and de Mederious in 2004. 2008 EPRINT On-line/Off-line signatures are used in a particular scenario where the signer must respond quickly once the message to be signed is presented. The idea is to split the signing procedure into two phases: the off-line and on-line phases. The signer can do some pre-computations in off-line phase before he sees the message to be signed. In most of these schemes, when signing a message $m$, a partial signature of $m$ is computed in the off-line phase. We call this part of signature the off-line signature token of message $m$. In some special applications, the off-line signature tokens might be exposed in the off-line phase. For example, some signers might want to transmit off-line signature tokens in the off-line phase in order to save the on-line transmission bandwidth. Another example is in the case of on-line/off-line threshold signature schemes, where off-line signature tokens are unavoidably exposed to all the players in the off-line phase. This paper discusses this exposure problem and introduces a new notion: divisible on-line/off-line signatures, in which exposure of off-line signature tokens in off-line phase is allowed. An efficient construction of this type of signatures is also proposed. Furthermore, we show an important application of divisible on-line/off-line signatures in the area of on-line/off-line threshold signatures. #### Coauthors Xiaofeng Chen (1) Yusong Du (3) Chong-zhi Gao (1) Kwangjo Kim (1) Chunming Tang (1) Haibo Tian (1) Dongqing Xie (1) Huang Zhang (3) Fangguo Zhang (4)
2021-09-25 01:11:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5097284913063049, "perplexity": 2826.4956034870775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057584.91/warc/CC-MAIN-20210924231621-20210925021621-00340.warc.gz"}
https://www.futurelearn.com/courses/intro-to-quantum-computing/5/steps/554313?main-nav-submenu=main-nav-using-fl
This content is taken from the Keio University's online course, Understanding Quantum Computers.

# Linear Systems Algorithms

Machine learning involves classifying data, in order to make decisions. “Efficient” classical algorithms for finding full solutions exist, but the sheer volume of data means that computation times are high. Assuming the availability of certain supporting technologies or constraints on the data, quantum algorithms can answer certain questions about the data exponentially faster than the best known classical algorithms.

## Matrix Operations

In high school (or junior high school) algebra, you learned that a simple equation can represent a line, and how to solve pairs of equations, like $$3x + 4y = 7$$ $$4x + 3y = 7$$ to find the answer $$x = 1$$, $$y = 1$$, indicating the place where the two lines intersect. With a larger set of equations and a larger set of variables, we write them as a matrix, and combine them in equations with vectors. A common operation is to try to find $$x$$ in the equation $$Ax = b$$, where $$A$$ is a matrix that represents the left hand side of a set of equations, $$b$$ is a vector representing the right hand side of the equations, and $$x$$ is a vector of variables for which we are trying to find values. The mathematical field of linear algebra concerns itself with matrices and vectors, and these techniques figure prominently in artificial intelligence, especially the field of machine learning, in which computers are programmed to learn from data.

## Machine learning

Machine learning involves creating a classifier that will tell us whether a datum belongs to a particular class. We can make a distinction between the strategy used to create the classifier, and the details of the problem being solved. Classifiers learn how to do their work. They use training data to find the best set of parameters for the problem at hand, then the training data is discarded before the classifier is used for production work. Supervised learning uses data that someone else has already classified (e.g., “this is malware, that isn’t”) and learns what characteristics of the data define a class. Unsupervised learning instead looks at the data and figures out how many different groups, or clusters, there are in the data. Reinforcement learning is more iterative, with the program learning how to be rewarded in real-world situations. There are many mathematical classification techniques. $$k$$-nearest neighbor, support vector machines (SVM), and $$k$$-means clustering are relatively straightforward, and all can be partially solved or supported using a quantum algorithm known as HHL.

## The HHL algorithm

In 2009, Aram Harrow, Avinatan Hassidim and Seth Lloyd found an algorithm (called, naturally enough, HHL) that helps us prepare a state $$|x\rangle = A^{-1}|b\rangle$$, where $$A^{-1}$$ is the inverse of the matrix $$A$$. At first glance, it might appear that it will “solve for $$x$$”, as we are often told to do in math problems, but there is a catch involving the data representation. In these linear algebra problems, the natural data representation is to write down the data as a large vector. We might have, for example, a billion data items, which we can write as a vector with a billion entries. Writing down this vector naturally requires a billion time steps.
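As a concrete, purely classical illustration of the $$Ax = b$$ formulation above, here is the two-equation example solved with Python and numpy (numpy is an assumed dependency here, not something the course specifies):

```python
import numpy as np

# The pair of equations 3x + 4y = 7 and 4x + 3y = 7, written as A x = b.
A = np.array([[3.0, 4.0],
              [4.0, 3.0]])
b = np.array([7.0, 7.0])

x = np.linalg.solve(A, b)  # direct dense solve, roughly O(n^3) work
print(x)                   # [1. 1.]  the intersection point x = 1, y = 1
```

For a handful of variables this is instant; at a billion entries even writing $$b$$ down becomes the bottleneck, which is the gap the quantum approach described next tries to exploit.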
HHL, instead, encodes each of those data elements in the amplitude of a single quantum value in a register, using a superposition of all one billion elements. Because $$2^{30} > 10^9$$, we can store all of our data in 30 qubits, total, instead of a billion separate memories. Of course, the register has to be normalized. Then we can calculate on all billion values at the same time, in superposition. In the above equation, using HHL, we start with our data in the superposition of $$|b\rangle$$, and end with the superposition of all results in $$|x\rangle$$. The catch, as in all quantum algorithms, is that we can’t extract all of the data values. Instead, once again, we design our algorithm to create interference in a pattern that tells us something about the original data distribution. For example, we can learn that the 1,117th entry in the vector $$x$$ is the largest data element. Or, treating the whole data set as a vector in an $$n$$-dimensional space (in this case, a 30-dimensional space), we can find the distance between $$|x\rangle$$ and some other vector $$|z\rangle$$. If we use HHL properly for certain tasks, it is exponentially faster than the best known classical algorithm for the same task. However, there are caveats to achieving that performance, and researchers have not yet been able to rule out that efficient classical algorithms exist for the same tasks when all of the constraints are taken into account.

## QRAM

An important caveat is the preparation of the vector $$|b\rangle$$ containing our original data. If it is a superposition of data values we have stored in a classical computer, then it can take a billion time steps to create a superposition of all billion values, and we have discarded our quantum algorithm’s performance advantage. This is the basis of our assertion in the first week that quantum computers in general aren’t good at big data problems. However, Giovannetti, Lloyd and Maccone proposed a piece of hardware that, if developed, could be used to prepare $$|b\rangle$$ quickly: a quantum random access memory, or QRAM. In a normal (classical) memory (RAM), we give the memory an address, and it returns the value stored at that address. A QRAM is designed to hold classical data but allow quantum queries. If we can give it an address in superposition, and it returns us the data also in superposition, then we can create our superposition for $$|b\rangle$$ using only a few Hadamard gates and a single query to the QRAM. Of course, the creation of the data itself requires $$O(N)$$ time for $$N$$ data items, but then we can run many programs repeatedly against the same data, amortizing that cost, as opposed to a purely quantum memory, where the superposition is destroyed on every run and we must recreate it.

## Other algorithms

HHL planted the seed, but the more practical algorithms have been in follow-on work and in other work. Lloyd, Mohseni and Rebentrost created algorithms for supervised and unsupervised machine learning. Ambainis extended HHL to be more practical with complex datasets. HHL’s running time depends in part on the precision you need in the answer; Childs, Kothari and Somma showed how to reduce this dependence dramatically. Within the field of quantum algorithms, quantum machine learning and more broadly quantum artificial intelligence, including such topics as quantum neural networks, is perhaps the fastest-moving area.
Given the importance of the problems being attacked, if the dependence on large data sets can be resolved, quantum AI has the potential to drive the broad adoption of quantum computers.
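To make the qubit-count arithmetic above concrete, here is a small classical sketch of amplitude encoding: the number of qubits needed is just the ceiling of $$\log_2 N$$, and the vector must be normalized before its entries can serve as amplitudes (numpy is again an assumed dependency):

```python
import math
import numpy as np

data = np.random.rand(10**6)                # stand-in for a large classical data set
n_qubits = math.ceil(math.log2(len(data)))  # 20 qubits are enough for 10^6 values
amplitudes = data / np.linalg.norm(data)    # |b> must be a unit vector

print(n_qubits)
print(float(np.sum(amplitudes**2)))         # ~1.0, as required of quantum amplitudes
```

The same arithmetic gives 30 qubits for a billion values, matching the figure quoted in the text; the hard part, as the QRAM discussion explains, is loading those amplitudes efficiently in the first place.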
2020-07-12 17:26:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4872983992099762, "perplexity": 486.0541483973181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657138752.92/warc/CC-MAIN-20200712144738-20200712174738-00368.warc.gz"}
https://socratic.org/questions/what-is-formal-charge-how-is-it-found
# What is formal charge? How is it found? Aug 16, 2016 #### Answer: Formal charge is the charge an atom would carry if the bonding electrons were shared equally between the bonded atoms. It is found as (valence electrons of the neutral atom) - (non-bonding electrons) - (bonding electrons)/2.
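A quick worked version of that counting rule (the molecules below are just illustrative examples):

```python
def formal_charge(valence, nonbonding, bonding):
    # valence electrons - non-bonding electrons - half of the bonding electrons
    return valence - nonbonding - bonding // 2

# Nitrogen in the ammonium ion, NH4+: 5 valence electrons, no lone pairs,
# four N-H single bonds = 8 bonding electrons  ->  5 - 0 - 4 = +1
print(formal_charge(valence=5, nonbonding=0, bonding=8))   # 1

# Oxygen in water: 6 valence electrons, two lone pairs = 4 non-bonding electrons,
# two O-H single bonds = 4 bonding electrons  ->  6 - 4 - 2 = 0
print(formal_charge(valence=6, nonbonding=4, bonding=4))   # 0
```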
2019-08-25 05:16:44
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8620205521583557, "perplexity": 3424.8666592329873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323067.50/warc/CC-MAIN-20190825042326-20190825064326-00270.warc.gz"}
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Organic_Chemistry_Lab_Techniques_(Nichols)/01%3A_General_Techniques/1.04%3A_Heating_and_Cooling_Methods/1.4I%3A_Heat_Guns
# 1.4I: Heat Guns

Heat guns are inexpensive tools for delivering strong heat in a more flexible manner than other heating methods. Heat can be directed from every direction, and the gun can be manually waved about in order to dissipate the heating intensity. Heat guns are commonly used to quickly develop stained TLC plates (Figures 1.56a+b), and result in more even heating and less charring than when using a hotplate. They are also ideal for sublimations (Figure 1.56c), as the heat can be directed to the sides of the flask to coax off crystals deposited on the sides. A disadvantage of using heat guns is that they must be continually held, which makes them most ideal for short processes. Safety note: A heat gun is not simply a hair dryer, and the nozzle gets quite hot (temperatures can be between $$150$$-$$450^\text{o} \text{C}$$)!$$^8$$ Care should be taken to not touch the nozzle after use, and the gun should be set down carefully, as it may mark the benchtop or cord.

$$^8$$As reported in the Fisher Scientific catalog.

## Contributor

• Lisa Nichols (Butte Community College). Organic Chemistry Laboratory Techniques is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Complete text is available online.
2022-01-26 08:08:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1769471913576126, "perplexity": 3173.351192815078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304928.27/warc/CC-MAIN-20220126071320-20220126101320-00077.warc.gz"}
https://repository.uantwerpen.be/link/irua/123717
Title Investigation of properties limiting efficiency in $Cu_{2}ZnSnSe_{4}$-based solar cells
Author Brammertz, Guy Oueslati, Souhaib Buffière, Marie Bekaert, Jonas et al.
Faculty/Department Faculty of Sciences. Physics
Research group Condensed Matter Theory
Publication type article
Publication 2015
Subject Physics Engineering sciences. Technology
Source (journal) IEEE journal of photovoltaics
Volume/pages 5(2015):2, p. 649-655
ISSN 2156-3381
ISI 000353524800026
Carrier E
Target language English (eng)
Full text (Publishers DOI)
Affiliation University of Antwerp
Abstract We have investigated different nonidealities in Cu2ZnSnSe4/CdS/ZnO solar cells with 9.7% conversion efficiency, in order to determine what is limiting the efficiency of these devices. Several nonidealities could be observed. A barrier of about 300 meV is present for electron flow at the absorber/buffer heterojunction leading to a strong crossover behavior between dark and illuminated current-voltage curves. In addition, a barrier of about 130 meV is present at the Mo/absorber contact, which could be reduced to 15 meV by inclusion of a TiN interlayer. Admittance spectroscopy results on the devices with the TiN backside contact show a defect level with an activation energy of 170 meV. Using all parameters extracted by the different characterization methods for simulations of the two-diode model including injection and recombination currents, we come to the conclusion that our devices are limited by the large recombination current in the depletion region. Potential fluctuations are present in the devices as well, but they do not seem to have a special degrading effect on the devices, besides a probable reduction in minority carrier lifetime through enhanced recombination through the band tail defects.
Full text (open access) https://repository.uantwerpen.be/docman/irua/49217b/9558.pdf
E-info http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000353524800026&DestLinkType=RelatedRecords&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000353524800026&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000353524800026&DestLinkType=CitingArticles&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848
Handle
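For readers who have not met the two-diode model the abstract refers to, a minimal sketch of its standard form follows; the parameter values are placeholders for illustration, not the ones extracted in the paper, and series and shunt resistances are omitted:

```python
import numpy as np

# Standard two-diode J-V model: an injection diode (ideality factor 1), a
# depletion-region recombination diode (ideality factor 2), and a photocurrent.
KT_OVER_Q = 0.02585  # thermal voltage at ~300 K, in volts

def two_diode_current(V, J01, J02, Jph):
    injection = J01 * (np.exp(V / KT_OVER_Q) - 1.0)
    recombination = J02 * (np.exp(V / (2.0 * KT_OVER_Q)) - 1.0)
    return injection + recombination - Jph

# Placeholder parameters in A/cm^2, purely illustrative.
V = np.linspace(0.0, 0.5, 6)
print(two_diode_current(V, J01=1e-12, J02=1e-7, Jph=0.03))
```

In such a model, a device that is "limited by the large recombination current in the depletion region" is one where the J02 term dominates around the operating voltage.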
2016-10-22 07:19:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2247171849012375, "perplexity": 4149.437516135829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718840.18/warc/CC-MAIN-20161020183838-00341-ip-10-171-6-4.ec2.internal.warc.gz"}
http://www.eoearth.org/view/article/51cbee7f7896bb431f69844c/?topic=49536
Ecological Economics

# Natural resource quality

## Energy and natural resource availability

A natural resource is something that exists in nature which can be used by humans at current economic, technological, social, cultural, and institutional conditions. Natural resources are highly concentrated collections of energy and materials relative to other sources that we do not use. If you walked out your back door with a shovel and started digging a hole, after a few meters you would hit bedrock—the Earth’s crust. If you chipped off a chunk of that rock, you would find tiny amounts of economically useful elements such as copper, lead, and phosphorus. In fact, you would find minute amounts of nearly all the 92 naturally occurring elements. But you couldn’t set up a viable copper mine in your backyard, or in the vast majority of other locations on the Earth, because the concentrations of copper in those locations are too dilute. The overwhelming majority of copper is produced in a small handful of mines located in the southwestern United States, Canada, Zambia, and a few other locations where copper is highly concentrated.

Table 1

Biogeochemical cycles produce natural resources by organizing materials and energy into forms that are easily accessible. Nonrenewable resources such as copper are scattered randomly and thinly in the Earth’s crust and ocean. The average concentration of an element in the crust is called its crustal abundance. For most elements you would find only a few grams per metric ton of crust. In some areas, however, biogeochemical cycles concentrate metals, minerals and fuels several times greater than their crustal abundance. Rocks that contain high concentrations of metals and minerals are called ores. A kilogram of copper ore has 10 to 100 times more copper than the average rock (see Table 1).

Table 2

Renewable natural resources also are characterized by a high degree of organization. Fish are found everywhere throughout the oceans, but fishing vessels do not randomly trawl the open ocean. The open ocean is a "biotic desert" because low rates of net primary production do not support a rich food chain. Oceanic circulation, wind patterns, and river runoff concentrate nutrients near coasts and in zones of upwelling. The average net primary production of upwelling zones is 225 grams of carbon per square meter per year; the open ocean averages just 57 grams per square meter per year (see Table 2). Coastal regions thus support a rich food chain where the concentration of fish can be 66,000 times greater than that in the open ocean. The vast majority of fish caught each year are taken from a small handful of coastal zone fisheries. This higher degree of order or organization distinguishes natural resources from all other forms of energy and materials on the planet. Most of the world’s agricultural output comes from regions where biogeochemical cycles have produced soil that is far richer in nutrients, water, and other biologically important attributes compared to most of the soil on the planet. Most of the world’s timber is harvested from forest ecosystems that produce large, dense accumulations of stored carbon (wood), and most of the world’s drinking water comes from lakes and reservoirs where the planet’s morphology stores large quantities of fresh water before it returns to the sea.
## Best first principle

Figure 1

Resource quality is important because of the pattern in which humans use natural resources. The best first principle states that humans use the highest quality sources of natural resources first. Given a choice, humans will grow crops on fertile (high quality) soil before infertile (low quality) soil. Humans use deposits of copper that are 5 percent pure metal rather than 1 percent pure metal, and deposits of oil that are 1,000 feet deep rather than 10,000 feet deep. Humans harvest timber from forests that are close to a sawmill before forests that are a long distance from the mills. Humans catch fish from large, highly concentrated schools in coastal waters before they harvest smaller, more random collections of fish in the open ocean. As the high quality sources are depleted, lower quality sources must be used. High quality sources require less effort to obtain than low quality resources, so depletion makes it harder and harder to obtain resources.

Figure 2

Differences in resource quality affect the economy via opportunity costs. Opportunity cost is equal to the goods and services that cannot be produced because energy is used elsewhere to produce an alternative good or service. For example, energy used to harvest timber or mine copper cannot be used to heat your home. High quality resources have a lower opportunity cost than lower quality resources. Using ores that are 1 percent copper leaves the economic system with more energy to produce other goods or services compared to using 0.1 percent ores (see Figure 1). Similarly, harvesting fish from productive upwelling zones rather than the open ocean leaves more energy left over to produce other goods and services.

Figure 3

The United States Geological Survey (USGS) characterized all the geologic provinces in the world according to their petroleum volumes (Klett et al., 1997). Each geologic province is a spatial entity with common geologic attributes. World-wide, 406 geologic provinces were identified that contain some known petroleum volume. The geologic provinces were then ranked by total known petroleum volume in millions of barrels of oil equivalent (MMBOE) within the province. Exclusive of the U. S., the 76 largest geologic provinces in terms of petroleum volume contain 95% of the world’s total known petroleum volumes. The 10 largest provinces alone, just 2.5 percent of the total, contain more than half of the planet’s petroleum (see Figure 2).

Figure 4

The distribution of oil in the United States shows the same distribution pattern, where large fields that account for less than one percent of all fields contain more than 40 percent of all the oil (Nehring, 1981). The pattern of utilization of this oil illustrates the connections among resource quality, the best first principle, and opportunity costs. The average field discovered around the turn of the century contained 20 to 40 million barrels of oil, and the largest contained several billion barrels (see Figure 3). But as the use of oil increased, the big, high quality deposits of oil were discovered and depleted. Today, the average field discovered contains less than 1 million barrels. The decline in quality of oil resources greatly increased the work required to find and extract oil. The cost to discover a barrel of oil in the early part of the century was less than $1.00 per barrel; today it is more than $15.00 per barrel (see Figure 4).

Figure 5

Many fisheries exhibit a similar pattern of development.
High quality accumulations of fish are concentrated in upwelling zones near the coast. These populations are targeted first for development, and often are over-exploited. As the density of fish declines due to over-fishing, fishers must exert more effort to catch the same quantity of fish. A prime example of this is George’s Bank, a shallow upwelling region off the coast of Massachusetts that once supported one of the most fertile fishing grounds in the United States. Too many fishing vessels catching too many fish for too many years severely reduced the abundance of important species such as cod and flounder. As a result, fishing vessels have to stay out at sea longer and travel farther distances to catch the same amount of fish. The average trip length per fishing vessel increased from 9 to 13 days over the past several decades; some vessels must travel as far away as the Carolinas to find fish. The decline in quality of the fishery has caused the energy cost of one ton of fish to skyrocket (see Figure 5).
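To put rough numbers on the ore-grade comparison made earlier (1 percent versus 0.1 percent copper ore), a back-of-the-envelope mass balance is enough; the calculation below is purely illustrative:

```python
def tonnes_of_rock_per_tonne_of_copper(ore_grade):
    # ore_grade is a mass fraction, e.g. 0.01 for a 1 percent copper ore
    return 1.0 / ore_grade

print(tonnes_of_rock_per_tonne_of_copper(0.01))    # 100.0  tonnes of ore at 1 percent
print(tonnes_of_rock_per_tonne_of_copper(0.001))   # 1000.0 tonnes of ore at 0.1 percent
```

Dropping from 1 percent to 0.1 percent ore means mining and processing roughly ten times as much rock, and spending correspondingly more energy, per tonne of copper, which is the sense in which lower quality deposits carry a higher opportunity cost.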
2016-02-06 09:30:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33580267429351807, "perplexity": 2241.2340392238625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701146241.46/warc/CC-MAIN-20160205193906-00280-ip-10-236-182-209.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/491669/estimating-word-frequency-for-a-set-of-texts?answertab=oldest
# Estimating word frequency for a set of texts

I have 37 files of different sizes, containing a total of 7 million words. I want to estimate the frequency of each word in the corpus. Let's say I find the word "Cranberry" only twice in the corpus, then it would be naive to state the frequency of this word as 2/7000000. This is because the odds are very high that I could, for another corpus establish a frequency of 0/7000000 or 4/7000000 for the very same word. So in other words, the statistical error for estimating the frequency increases as the frequency of a word decreases. Is there a way of estimating how great the statistical error is? For example, lets say I was to say that "the" occurs 405139 times in the corpus, how accurate is it to say that the frequency of the word "the" in the English language is 405139/7000000? Does it help, if I present the frequency of "the" in each of my 37 files? 219/3293 992/15072 321/4792 358/6736 147/1819 388/4047 312/6567 349/7182 735/11233 5789/118655 18359/300927 236/2677 91/1135 1664/26761 585/9709 477/10414 248/3253 238/3857 41634/670576 1703/34261 246/3080 69971/1005119 195/5372 1362/25915 896/22999 1186/25515 109/3360 72/1832 203/4261 78673/1363843 67210/1229371 69277/1330254 93/2542 1888/66501 1451/27679 32382/514278 5080/87831 If I only have 2 cranberries in my corpus, how large a corpus would I need to get a reasonably accurate indication as to its frequency?

- Generally, I would say that most statistics I have seen rely on getting $~95\%$ accuracy (approximately one standard deviation worth IIRC), which would say that you would want to find enough material that the ratio $2/7000000$ appears among $~95\%$ of the groups of text you assemble. – abiessu Sep 12 '13 at 13:44 @abiessu What is IIRC? – user111322 Sep 12 '13 at 14:18 Sorry, old abbreviation for if I recall correctly... – abiessu Sep 12 '13 at 14:24 Am i very wrong to think that both cranberries are in the same file? how did you select the files ? (can hardly be random) – Willemien Sep 12 '13 at 16:38 @Willemien Mentioned twice in the same novel. I have created the English corpus myself by taking films and books and some other smaller free corpora. – user111322 Sep 12 '13 at 19:02

Let $p$ be the proportion of words in written English everywhere that are "cranberry". You are trying to estimate $p$ based on a sample of size 7000000, and you want to know something about the accuracy of your estimate. This is standard statistics stuff. For example, take a look at this page. The simplistic estimates are based on the Central Limit theorem and an assumption of normal distribution. As noted on the page cited above, these simplistic methods don't work very well when $p$ is close to zero or one, and you need to use more complex methods. In the techniques that I'm familiar with, frequencies in the individual files are not relevant -- it's only the overall frequency in the entire sample that matters. This should make sense, intuitively. Obviously you could split or join the files in arbitrary ways, and there is no reason that this should affect your estimate. how accurate is it to say that the frequency of the word "the" in the English language is 405139/7000000? Look at the confidence intervals derived from the Central Limit Theorem on the Wikipedia page I cited. Does it help, if I present the frequency of "the" in each of my 37 files? No. Not as far as I know. See above.
If I only have 2 cranberries in my corpus, how large a corpus would I need to get a reasonably accurate indication as to its frequency? Again, the Central Limit Theorem will tell you how large a sample you need in order to get a given level of confidence in your estimate. - @abiessu The word count is 7 million. – user111322 Sep 12 '13 at 13:53 I guess my concern would be if the name "Bubba" were to appear in a certain film (say Forest Gump) which is included in my corpus. This will give an exaggerated frequency for Bubba for just one of the files. Can I not take advantage of this fact when estimating how accurate my frequency estimation for Bubba is? – user111322 Sep 12 '13 at 13:59 The problem you describe is universal. Whenever you sample a population, there is always some chance that the sample is biassed in some way, and not truly representative of the population. As far as I know, the only ways to solve this problem are: (1) be careful how you choose the samples, and (2) make the sample as large as possible. – bubba Sep 13 '13 at 0:26 Also, as I mentioned, you shouldn't be estimating based on individual files, you should use the entire set of text from all files. So, maybe the Forest Gump file has a "Bubba" bias, but the entire set of text will not have this bias, provided it's much larger than the Forest Gump file. I recommend you find a local statistician to talk to. – bubba Sep 13 '13 at 0:31
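To make the answer's pointer to confidence intervals concrete, here is a small sketch of the normal-approximation (Wald) interval applied to the counts from the question; plain Python is enough for the sketch, though a statistics library would offer the better intervals the answer alludes to for rare words:

```python
import math

def wald_ci(successes, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Common word: the approximation works well.
print(wald_ci(405139, 7_000_000))   # about 0.0579 +/- 0.00017

# Rare word: with only 2 occurrences this interval is unreliable (the
# "p close to zero" caveat above); use an exact binomial or Poisson-based
# interval instead.
print(wald_ci(2, 7_000_000))
```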
2016-02-14 08:36:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7127191424369812, "perplexity": 321.8651151419672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701171770.2/warc/CC-MAIN-20160205193931-00153-ip-10-236-182-209.ec2.internal.warc.gz"}
https://starlink.eao.hawaii.edu/devdocs/sun190.htx/sun190se4.html
### 4 Terminology

An astronomical catalogue is basically a table of values, consisting of the measurements of the same property for a set of objects, together with the auxiliary information necessary to describe this table. There are several different terminologies for describing the elements of such tables. For simplicity, in CURSA a terminology which corresponds loosely to that used intuitively for the paper versions of astronomical catalogues is used:

- row: the values for all the properties associated with some particular object,
- column: the value of a single property for all the objects in a catalogue,
- field: the value of a single property for a single object (that is, the intersection of a row and a column).

Some of the other terminologies are shown for comparison in Table 1 [2].

| CURSA | Fortran | Relational Database |
| --- | --- | --- |
| table | file | relation |
| row | record | tuple |
| column | field | attribute |
| field | data item, field | component |
| format | format | schema |
| number of columns | number of fields | arity, degree |
| number of rows | number of records | cardinality |

Table 1: Alternative terminologies for the components of tables

In CURSA each catalogue can contain only one table and the two terms can usually be used interchangeably without introducing any ambiguity. However, where it is necessary to differentiate between the two sorts of entities, table is used to denote the simple matrix of rows and columns and catalogue is used to denote the combination of a table and its associated auxiliary information. (Note, however, that this usage implies nothing about the contents of the catalogue; it may contain a published astronomical catalogue, a set of private astronomical results or, indeed, data which are entirely non-astronomical.) A CURSA catalogue which contains celestial coordinates in a restricted format which CURSA can interpret is called a target list. The applications which convert between celestial coordinates and plot finding charts operate on target lists. Target lists are described in Section 7.

Columns may either be scalars in which case each field comprises a single datum, or vectors, one-dimensional arrays where each field comprises a one-dimensional array of values. Columns have a number of attributes, such as their name, data type and units. A column’s attributes hold all the details which define its characteristics. The more important column attributes are described in Section 4.1, below.

Catalogues can also contain auxiliary information which applies to the entire catalogue. CURSA recognises two types of auxiliary information: parameters and textual information. A parameter is a single datum, such as the epoch or equinox of celestial coordinates stored in a catalogue. CURSA parameters are similar to FITS keywords (in fact, CURSA interprets named keywords in a FITS table as parameters). Parameters have attributes similar to columns. Textual information is information, usually descriptive, associated with the catalogue and intended to be read by a human. For a FITS table the textual information is basically the contents of any ‘COMMENTS’ and ‘HISTORY’ keywords [3]. In the jargon of relational database systems auxiliary information is often called metadata. In the context of CURSA the metadata for a catalogue comprises the details of the columns (name, data type, units, etc.), the parameters and the textual information.

#### 4.1 Column attributes

In order to use CURSA you do not need to know the details of all the attributes of a column, but there are a few which you will probably encounter.
These attributes are listed in Table 2 and are described briefly below.

| Attribute | Comments |
| --- | --- |
| NAME | Name of the column |
| DTYPE | Data type |
| DIMS | Dimensionality: scalar or vector |
| SIZE | Size (number of elements) of a vector |
| UNITS | Units of the column |
| EXFMT | External display format |
| COMM | Comments describing the column |

Table 2: Attributes of columns

NAME
The name of the column. The rules for column names are as follows.
• The name must be unique within the totality of parameters and columns for the catalogue. This condition is necessary in order that a component (parameter or column) may be identified unambiguously when its name is used in an expression (see Appendix A).
• A name may comprise up to fifteen characters. This value is chosen for consistency with HDS and is adequate for FITS tables.
• The name can contain only: upper or lower case alphabetic characters (a-z, A-Z), numeric characters (0-9) and the underscore character (‘_’). Note that lower case alphabetic characters must be allowed in order to access existing FITS tables. However, corresponding upper and lower case characters are considered to be equivalent. Thus, for example, the names: HD_NUMBER, HD_Number and hd_number would all refer to the same column.
• The first character must be a letter.

DTYPE
The data type of values held in the column. CURSA supports the standard data types of Fortran 77 (apart from the COMPLEX data types) and also signed one and two byte INTEGERs.

DIMS
The dimensionality of the column: scalar or a vector.

SIZE
If the column is a vector this attribute contains the number of elements in the vector. If the column is a scalar it is set to one.

UNITS
The units in which values stored in the column are expressed. The UNITS attribute is used to identify, and control the appearance of, columns of angles (see Appendix B). Apart from this exception the units are treated purely as comments and no attempts are made to automatically propagate and convert units in calculations and selections.

EXFMT
The format used to represent a field extracted from a column for external display by xcatview (see section 11) or catview (see section 12). The external format specifier should be a valid Fortran 77 format specifier for the data type of the column.

COMM
Explanatory comments describing the column.

[2] This table is adapted from Database Systems in Science and Engineering by J.R. Rumble and F.J. Smith[24], p158.
[3] This statement is something of an over-simplification. See Appendix C for a complete description of the way that FITS headers are interpreted as textual information.
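The naming rules listed under NAME above are simple enough to check mechanically; here is a small sketch of such a check (the function name and the use of a regular expression are my own choices, not part of CURSA):

```python
import re

# Up to fifteen characters, letters/digits/underscore only, first character a letter.
_NAME_RE = re.compile(r'^[A-Za-z][A-Za-z0-9_]{0,14}$')

def is_valid_column_name(name, existing_names):
    """Check a candidate CURSA column or parameter name against the stated rules."""
    if not _NAME_RE.match(name):
        return False
    # Corresponding upper and lower case names are considered equivalent.
    return name.lower() not in {n.lower() for n in existing_names}

print(is_valid_column_name("HD_Number", []))              # True
print(is_valid_column_name("hd_number", ["HD_NUMBER"]))   # False: same name, different case
print(is_valid_column_name("2MASS_ID", []))               # False: must start with a letter
```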
2022-01-18 04:09:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42961829900741577, "perplexity": 1360.8165674525485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300722.91/warc/CC-MAIN-20220118032342-20220118062342-00402.warc.gz"}
https://hopper.fi/5rcqkap/ca9fa6-scalar-multiplication-of-matrices-calculator
A scalar is just a number, like 3, −5 or 0.4; scalar quantities such as time, temperature and distance have magnitude but no direction. (We say "scalar" instead of "number" so people don't know what we're talking about and think we are really smart.) Multiplying a matrix by a number is called scalar multiplication: every entry of the matrix is multiplied by that number. For example, if A = [5 1; −3 0], then 3A = 3 × [5 1; −3 0] = [15 3; −9 0]. Scalar multiplication has many practical applications, and it obeys simple laws: the result has the same dimensions as the original matrix, it is distributive with respect to matrix addition, and (kA)B = A(kB) = k(AB) for any number k. Over a more general ring, left and right multiplication by a scalar are defined separately; when the underlying ring is commutative (for example, the real or complex number field), the two multiplications coincide. Equivalently, if B = A * s for a scalar s, each element of A is multiplied by s and stored in the corresponding element of B; a typical programming exercise is to write a program that accepts a 4x4 matrix and a scalar value and performs the scalar multiplication.

A matrix is a rectangular array of real or complex numbers arranged in rows and columns. To add or subtract two matrices they must be of the same order, and the sum is formed by adding or subtracting corresponding entries; addition is commutative (A + B = B + A). Matrix multiplication is quite another story: it is not commutative in general, and a matrix A(m × n) can only be multiplied by B(n × p), so the inner dimensions must agree. The product of two 2 × 2 matrices A = [a11 a12; a21 a22] and B = [b11 b12; b21 b22] is AB = [a11b11 + a12b21, a11b12 + a12b22; a21b11 + a22b21, a21b12 + a22b22], which involves eight scalar multiplications and some scalar additions. When a chain of matrices is multiplied, the order can be chosen freely (with three matrices there are already two possible orders), and the choice affects the total work: four matrices M1, M2, M3, M4 with dimensions p x q, q x r, r x s and s x t, multiplied as ((M1 x M2) x (M3 x M4)), cost pqr + rst + prt scalar multiplications in total. The matrix chain problem is therefore not actually to perform the multiplications, but merely to decide in which order to perform them.

In software, the same operations appear in several forms. In Excel, MMULT(array1, array2) returns the matrix product, where array1 and array2 are the matrices to be multiplied. In Python, a standard way of doing scalar multiplication is with numpy: convert the list of elements into an array (a "vector") and multiply it by the scalar, e.g. 1.2738 * array; use np.multiply() for an element-wise product and np.matmul() for the matrix product of two arrays. In C, a two-dimensional array is stored in row-major order, so all the elements of a given row sit contiguously in memory. Quaternions are sometimes preferred to rotation matrices because they are more compact, more numerically stable, and more efficient, although quaternion multiplication is noncommutative (the cross product anti-commutes). On-line matrix calculators support these operations directly (typically matrix addition, subtraction, transposition, multiplication, scalar multiplication, inverse and determinant) for matrices up to 10x10 or even 20x20, often with complex-number entries: you select the matrix size, type the matrix entries, press "=" and a detailed step-by-step solution is shown. Step-by-step guides also exist for entering matrices and performing matrix addition and scalar multiplication on the TI-83 family of calculators.
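Since numpy comes up repeatedly above, here is a minimal, self-contained version of the operations being described; the particular matrices are chosen only for illustration:

```python
import numpy as np

A = np.array([[5, 1],
              [-3, 0]])

# Scalar multiplication: every entry is multiplied by the number.
print(3 * A)              # [[15  3] [-9  0]]

B = np.array([[1, 2],
              [3, 4]])

# Element-wise product and matrix product are different operations.
print(np.multiply(A, B))  # element-wise: [[ 5  2] [-9  0]]
print(np.matmul(A, B))    # matrix product: [[ 8 14] [-3 -6]]
```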
2021-06-20 06:14:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7926263213157654, "perplexity": 1218.4361187909303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487658814.62/warc/CC-MAIN-20210620054240-20210620084240-00112.warc.gz"}
http://en.wikipedia.org/wiki/Talk:G%c3%b6del's_incompleteness_theorems/Archive_3
# Talk:Gödel's incompleteness theorems/Archive 3 Because of their length, the previous discussions on this page have been archived. If further archiving is needed, see Wikipedia:How to archive a talk page. Previous discussions: ## Re: Gödel's Incompleteness Theorems - A Brief Introduction Hey, User:Paul_August, I am curious what fault you found with the article at http://www.mind-crafts.com/godels_incompleteness_theorems.html that made you remove the link to it I put in External Links. I have to confess that IMHO it's the best non-technical introduction to Gödel's theorems on the Internet. 66.217.176.211 16:55, 16 November 2006 (UTC) Paul, actually, I took a brief look and (at least at that level of scrutiny) it does look pretty good. We don't want to encourage link farming but this one really might be worthwhile. --Trovatore 03:52, 27 November 2006 (UTC) OK, I restored the link - since no arguments were given against it. Will be happy to listen and discuss if anybody comes up with cogent args. 66.217.179.50 22:45, 3 December 2006 (UTC) ## Removal of Godel's statement of theorem I removed Godel's original statement of the theorem, but it's been reverted back. Godel's statement: To every ω-consistent recursive class κ of formulae there correspond recursive class signs r, such that neither v Gen r nor Neg(v Gen r) belongs to Flg(κ) (where v is the free variable of r). is only meaningful if you explain Godel's notation (e.g., class signs, Gen, Neg, and Flg), which we don't really need to do in this article. Perhaps for historical interest, we can have all this in its own section of the article, but it's not useful where it stands now and will just mystify most people. If there are no objections, I'll remove it again soon. -SpuriousQ 09:55, 27 November 2006 (UTC) Obviously, I disagree or I would not have reverted you. You took out a section header which is inappropriate. And the formal statement of the theorem may be of interest to some people, such as those who have read the paper. If you think that it is too advanced a topic, the appropriate course is to move it to the end of the article and add more explanatory material. JRSpriggs 10:29, 27 November 2006 (UTC) Thanks for the explanation, JRSpriggs. The section header was removed because it was introducing the deleted content and didn't make sense without it. As for the content itself, I don't think it's a matter of the topic being "too advanced"; it's that it's unreasonable for us to expect our audience to have read Godel's paper (with the German abbreviations intact, at that). As stated right now, I feel it does more harm than good, especially given the recent remarks about the article's accessibility. Like you suggest, a possible solution would be to move it to the end with more explanation, which I would be happy with and may attempt when I get a chance. -SpuriousQ 16:12, 27 November 2006 (UTC) With respect to JR, I tend to agree with Spurious here. Gödel's proof was brilliant; his notation, like almost all version-1.0 notation, was less than ideal from the point of view of long-after-the-fact exposition. And even if the notation were nicer, we still should not want to get into the details of a specific coding scheme early in the article, as it's kind of beside the point. 
--Trovatore 16:27, 27 November 2006 (UTC) ## The bizarre proof sketch How can you say that P(x) can be proved if x is the Gödel number of a provable statement explicitly explaining in the previous paragraph that the Gödel numbers encode single-parameter formulas, which are not statements and, therefore, cannot be proved or disproved? In other words, the concept of "provability" is inappropriate to F when we rewrite P(x) as P(G(F)). --Javalenok 02:17, 6 January 2007 (UTC) A given Gödel number might encode a single-parameter formula, or any other formula, or a statement. I've made an edit that I think might clarify a bit; does it address the problem you see? If not, can you be more specific about what text you see the problem in? Thanks in advance! —RuakhTALK 06:49, 6 January 2007 (UTC) At first, reading this article and the Gödel number, I have got the affirmation that a given Gödel number encodes a formula. The mapping one-to-one at that. Therefore, it seems that you are wrong when telling that a number (a means one in english) encodes a form or another form. It encodes only one form but not another. Secondly, since x is a natural number, a form generates an infinite number of statements. That is, infinitely many statements F(x) conform to a form F. Taking into account that the notion of proof does not apply to the forms, we are precluded from defining the decider P(G(F)). The decider must be provided with a statement; i.e., both the number of F and x. --Javalenok 13:15, 6 January 2007 (UTC) Sorry, you misunderstood me. I didn't mean that a Gödel number might simultaneously encode all of those things; I meant that a given Gödel number could be a Gödel number that encodes a single-parameter formula, or a Gödel number that encodes a formula of some other type, or a Gödel number that encodes a statement. Also, the article does not use the term decider and does not make reference to P(G(F)), so I don't know what you're referring to. Can you quote a specific sentence from the article so I can see what you're objecting to? RuakhTALK 19:41, 6 January 2007 (UTC) To Javalenok: Please do not remove the messages of other editors (unless they are vandalism, libel, obscenity, etc.). Also do not remove your own prior messages if they provide context to another user's messages. If you like, you may strike-out the parts of your messages with which you no longer agree using <s> and </s> (having this effect and ). Consequently, I have restored the messages by Ruakh and yours which provide context. JRSpriggs 07:35, 9 January 2007 (UTC) Sorry my fussiness, I have overlooked in the beginning that statements can also have Gödel numbers. Before this discussion will be deleted, please explain why do you substitute the Godel number x by formula F(G(F)) in P(x)? The formula is not a number. The grammatically correct statement would be P(G(F(G(F)))). Furthermore, the substitution of variable x in P('x') by the number of statement p turns the form into statement and its number changes accordingly. But the resulting statement is p, so its number 1)must be fixed; and 2) known in advance!!! Where do we get the number from? --Javalenok 20:33, 11 January 2007 (UTC) Also, I suggest to incorporate the the '#' notation for 'G()' as in [1] so that P(#F(#F)) looks better than P(G(F(G(F)))). --Javalenok 16:10, 8 January 2007 (UTC) In addition, let me note that the formalism p = SU(#SU) corresponds to the sentence 'p is equal to "this statement is not provable"' rather than the 'desired' "p cannot be proved". 
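The substitution step under discussion here, forming F(#F) from a one-parameter form F and its own Gödel number, can be made concrete with a toy sketch. In the fragment below the "Gödel numbering" is just a byte-packing of the formula's text and Provable is an uninterpreted name; both are illustrative stand-ins, not Gödel's actual coding, but they show how SU(#SU) ends up referring to its own number:

```python
# Toy illustration only: a crude numbering of formulas-as-strings and a purely
# symbolic "Provable".  Neither is Gödel's actual scheme; any injective coding works.

def godel_number(formula: str) -> int:
    """Encode a formula (here just a string) as a single natural number."""
    return int.from_bytes(formula.encode("utf-8"), "big")

def subst(form: str, n: int) -> str:
    """Apply a one-parameter form F(x) to the number n, giving the sentence F(n)."""
    return form.replace("x", str(n))

# The one-parameter form SU(x): "the form coded by x, applied to its own code, is unprovable".
SU = "not Provable(subst(x, x))"

# Diagonalization: apply SU to its own Gödel number.  The result is a sentence
# that talks about the number of the very sentence it is.
p = subst(SU, godel_number(SU))
print(p)
```

Printing p gives a sentence of the shape "not Provable(subst(N, N))", and the formula with number N applied to N is the sentence p itself, which is exactly the self-reference being puzzled over above.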
Perhaps, the difference between the two sentences of the liar paradox does not really matter. Nevertheless, the necessity for the redundant 'p' confuses me. --Javalenok 23:02, 8 January 2007 (UTC) ## What is the need for self-unprovable formulas? Let me give a shorter sketch. I see no need to define SU. The statement p equivalent to "p cannot be proven" can be easily formalized as p = ~P(p). Now, if p is provable true then the argument p would be not provable (a contradiction). Alternatively, if p is false (that is, not provable true) then P(p) would be true meaning the argument p is provable true (again contradiction). --Javalenok 13:44, 10 January 2007 (UTC) You said "...if p is false (that is, not provable true)...". Falsity is the same as not BEING true. That is different from not PROVABLE. This distinction is essential to understanding Gödel's theorem. There is no inconsistency in the existence of a sentence p which is simultaneously true and not provable. Gödel never proved p, i.e. that p cannot be proved. What he proved was ω-con→p, i.e. that IF arithmetic is ω-consistent, THEN p cannot be proved. JRSpriggs 06:39, 11 January 2007 (UTC) OK, maybe I should say: IF p = false THEN P(p) = true HENCE p is provable true HENCE contradiction (p and ~p) = true. The purpose of the incompleteness theorem is to show that a consistent theory cannot be complete. I'm showing that it is possible to do without introducing the self-unprovable forms. Any objections? --Javalenok 08:47, 11 January 2007 (UTC) Your first step, going from Neg(p) to P(p), is changing context, which is logically invalid. You are going from an arithmetic statement to a meta-arithmetic statement. Although Gödel showed that some meta-arithmetic statements can be pushed down into arithmetic, it does NOT go the other way (as you would have it). Also you say "I'm showing that it is possible to do without introducing the self-unprovable forms.". I do not see what you are getting at here. You are still using p in your argument, and p is the "self-unprovable form" to which you object, is it not? JRSpriggs 08:36, 12 January 2007 (UTC) Look, SU(#F) is defined as ~P(#F(#F)), so SU(#SU) = ~P(#SU(#SU)). Given SU(#SU) = p, we can rewrite the equality as p = ~P(#p). This equality states self-unprovability explicitly and, furthermore, eliminates the needless self-unprovable form SU(#F). The Neg(p) is ~~P(#p). But you deny the logical conclusion ~~P(#p) => P(#p). Instead, you prefer to go to the English language. But stating that something self-unprovable can be proved is outright nonsense. The [2] dares to state: "Suppose that ~(~P(p)) is a theorem. Hence P(p) is a theorem." --Javalenok 23:32, 13 January 2007 (UTC) You have completely misunderstood my last message. There is no meaningful difference between "P(p)" and "~~P(p)"; and I did not say otherwise. The vital distinction which you are ignoring is between statements in English about arithmetic and statements in the language of arithmetic which might be regarded as (in some sense) encoding the English statements. Although the encoding is chosen with the intention of making them equivalent, they are not (known to be) PROVABLY equivalent either from the axioms of arithmetic or from meta-arithmetic considerations. JRSpriggs 09:24, 17 January 2007 (UTC) We split out the philosophical ruminations into a separate article and make *this* article as clear and concise a description of the theorem(s) and their context in mathematics and metamathematics (no easy feat, I'm sure).
If nothing else this article is becoming very long and I think it'd benefit. That said, I confess I'm not sure how this would be done -- do we need to have a discussion/vote? should one be bold and just start the other article? Thoughts? Zero sharp 21:31, 10 January 2007 (UTC) Let's talk about it first. The ruminations here are quite short, which seems like the ideal situation to me. Are you saying that you would like to expand them? If you have time and would like to improve this article, you might start by going through and changing the tone so that it is more encyclopedic, without the "we" statements. CMummert · talk 12:22, 11 January 2007 (UTC) Well, the Minds and machines section may be better fit in its own article, or merged over to Mechanism (philosophy). Actually, there are a bunch of disparate treatments of this scattered over Wikipedia, e.g., here, in Mechanism (philosophy)#Anthropic_mechanism, John Lucas (philosopher), and Minds, Machines and Gödel. We should pick a primary place for the content to go and the rest can link to it. -SpuriousQ 12:27, 12 January 2007 (UTC) I am all in favor of reducing redundancy. I am fearful about a separate article because I am not a philosopher. I am afraid it might grow into a giant mess of speculation, OR, and non-notable opinions, and it would be hard to fix. But I wouldn't revert the move if it happened. You might leave a one-paragraph summary here, and point to the main article with the {{main}} template. CMummert · talk 14:23, 12 January 2007 (UTC) I think we need to distinguish between two sorts of philosophical issue. At least something ought to be said in this article about implications for philosopy of mathematics -- particularly, the difficulties that Gödel's results have presented for the formalist and logicist programs. That's part of the historical background in which the results were announced. The stuff about philosophy of mind, on the other hand, should not be here in my opinion, with the exception of a link somewhere and a very brief summary. This is a mathematics article. --Trovatore 20:59, 12 January 2007 (UTC) OK, I merged the "Minds and Machines" section over to Mechanism (philosophy). -SpuriousQ 01:14, 17 January 2007 (UTC) ## Gödel sentence is an objection to AI. Is it true? Moved to Talk:Gödel's incompleteness theorems/Arguments. This question is not an "argument over the validity of the incompeteness theorem". Why was it moved there? --Javalenok 20:36, 11 January 2007 (UTC) It's not directly related to improving the article. --Trovatore 21:24, 11 January 2007 (UTC) ## Is the incompleteness theorem complete??? Moved to Talk:Gödel's incompleteness theorems/Arguments. ## "Publication" cat There's been a bit of reverting going on over the category Category:Important publication in mathematics. My take: first of all, the cat itself should be renamed (categories of this sort take plural names). Second, no, this article doesn't belong in that category, though a category an article named after Gödel's famous paper, "On Formally Undecidable Propositions...", would belong in the category. This article is not (primarily) about that paper, and therefore doesn't. --Trovatore 20:43, 17 January 2007 (UTC) I strongly agree that the category should be pluralized, and weakly agree that it isn't suitable here. —RuakhTALK 22:44, 17 January 2007 (UTC) The category tag seems to be added and removed over and over again. I think Trovatore's contention is that this article is about the theorems, not the publication. I agree. 
How about I create a short article on the actual publication, which is certainly notable because it introduced these theorems. Then that page can get added to the category, and everyone wins. CMummert · talk 18:55, 24 January 2007 (UTC) Yes, please! And if the article is out of copyright, we can put its full text on Wikisource and both articles can link to it. :-D —RuakhTALK 19:17, 24 January 2007 (UTC) The article itself was originally published in German in 1931; I have no idea what its copyright status is. The later English translations are almost certainly still under copyright in the United States. CMummert · talk 19:27, 24 January 2007 (UTC) Done: On Formally Undecidable Propositions of Principia Mathematica and Related Systems. CMummert · talk 02:02, 25 January 2007 (UTC) Nice work, Carl. I might quibble on the link to the problematic article Gödel numbering, which I think should probably be deleted or redirected somewhere. But it looks very good, and you finished it quickly! --Trovatore 02:51, 25 January 2007 (UTC) My general philosophy is that a link should be provided if it theoretically helps the linking article. Otherwise, when the destination article is improved, how will anyone know which articles should link to it? I have the same belief about red links. CMummert · talk 03:29, 25 January 2007 (UTC) Well, that makes sense, if there ought to be an article called "Gödel numbering". Should there? I'm not convinced. A redirect for sure, but an article? And the argument doesn't strike me as being quite as strong when talking about redirects. --Trovatore 03:52, 25 January 2007 (UTC) The concept of Godel numbering the sentences of a first-order theory deserves some article. When the current article on Godel numbering is fixed, at least "what links here" will be accurate. The person making the redirect, or a bot in some cases, should resolve the link in the original article to bypass the redirect. My thought about the Godel numbering article is that it should be an article, but it should just give a catalog of the various senses of the phrase, possibly with links to larger articles on each of these senses. CMummert · talk 04:12, 25 January 2007 (UTC) ## Recent anon edits While I generally feel that anyone wanting to make such extensive revisions to a high-profile article ought to log in, I think the recent spate of edits by 132.181.160.42 have generally been of high quality. However I can't agree with a couple of the changes to the intro. Explaining myself here. The anon wrote: These theorems show that no consistent formal system can completely describe Peano arithmetic (addition and multiplication over the natural numbers), and that no system capable of representing that arithmetic can prove its own consistency. This statement is insufficiently precise to be either right or wrong, but it has reasonable readings that are just wrong. Peano arithmetic, for example, is a syntactic rather than semantic construct (as opposed to the structure of the naturals, which is semantic); it's certainly possible for a consistent formal system to completely describe the syntactic manipulations of Peano arithmetic. Moreover there needs to be something said about what sort of formal system (say, that its axioms are computable), otherwise no such conclusion is possible. The anon also says: The theorems also have extensive implications for philosophy and cognitive science. 
A great deal of bad philosophy and cognitive science has been based on the theorems (so say I, a hardcore metaphysical libertarian and substance dualist who likes the conclusion of the Lucas arguments, just not the arguments themselves). In general any claimed philosophical implications outside of mathematics should be viewed with suspicion; it's not ruled out a priori that no such implications can be drawn, but I have never seen one that worked. I think there's a good bit of possibly accurate stuff that's suggested by the theorems, by analogy as it were, and these may even be a reasonable heuristic guide to truth. But the word "implication" is too strong. These are theorems of mathematics, and we should keep the focus firmly there. --Trovatore 07:34, 26 January 2007 (UTC) I agree that Peano arithmetic is a formal system that adequately formalizes Peano arithmetic. I added a caveat to the lead - the well known decidable formal systems for geometry and the real numbers are simple, but not "trivial". CMummert · talk 13:36, 26 January 2007 (UTC) The link to "Askanas, Malgosia, 2006, "Gödel Incompleteness Theorems - A Brief Introduction." A lucid and enjoyable exposition of both theorems in the spirit of Gödel's original proof. "is not a bad site for explaining this theorem--well above the high school student's level though--but its "lucid and enjoyable" presents a value judgement which is not at all consistent with the purpose of the article and Wikipedia. It is a commercial site selling services in math education. It should be retained, in my view, it does a fairly good job of explaining the concepts, but the evaluation needs to be reconsidered. Malangthon 03:31, 27 January 2007 (UTC) I removed the annotation. You could (and, I would suggest, should) have moved it here along with your comment. In cases where a phrase is clearly not neutral POV, seems to advertise for a commercial service, or both, there is no reason to leave the statement while inquiring about it. CMummert · talk 03:54, 27 January 2007 (UTC) ## is the incompleteness theorem complete? This section discussed the underlying mathematics issues rather than the article. It has been moved to the arguments page and can be found here. CMummert · talk 02:16, 29 January 2007 (UTC) ## Could we shorten the article.....considerably? There are previously complaints about this not being accesible to laymen to understand the main points of Godel's Incompletness theorems. Perhaps majorly shortening the article would help? Just throwing out a suggestion. Fephisto 22:45, 4 March 2007 (UTC) Shortening it, by itself, will not make it more accessible. Moreover, I would not be in favor of removing material just because it requires a technical background to understand it. Ideally, that material should appear later in the article, with the more understandable material up front, but that's a matter of reorganizing, not shortening. However it is true that the article is a bit long. It's difficult to maintain focus in articles this long, because people edit in the middle without considering the overall flow. I think the sections on the first and second theorems should be split out to their own separate articles; that would help a bit. That's one of those things I may get around to one day if no one else does it first. --Trovatore 00:59, 5 March 2007 (UTC) ## A non-mathematical analogy? Is the incompleteness theorem *very roughly* analogous to a logical aporia? 
An aporia is an irresolvable internal contradiction or logical disjunction in a text, argument, or theory. For example, if a Cretan declares all Cretans to be liars, there is an irresolvable contradition between the subject and the predicate. Another example would be: "Language cannot communicate truth", which, if true, is inherently untrue, and therefore true, and so on to infinity. I am asking purely out of curiosity and am not suggesting this have any bearing on the article. Thanks. —The preceding unsigned comment was added by Ulrich kinbote (talkcontribs) 07:58, 5 March 2007 (UTC) Yes, there's a rough analogy there; that's a good way to put it (except the term I knew for it was "antinomy"; is there a distinction between "aporia" and "antinomy"?). Unfortunately a lot of people try to take it as too strict an analogy, and a small but noticeable percentage of these decide that they've "solved the problem" and rant on about some "subtle error" that they think Gödel made. --Trovatore 08:10, 5 March 2007 (UTC) The problem is that while these theorems are given press coverage, they are intrinsically very difficult to understand. There is no "royal road", no analogy, no easy way to understand them. The layman will NEVER understand them. Only a person who has spent years studying mathematical logic has any hope of understanding them. However, they do not cause the crisis that some popularizers try to say that they cause. They just show that mathematics is infinitely ramified, i.e. there is always more to learn about it. JRSpriggs 10:34, 5 March 2007 (UTC) Well, that might be a little overstating the case. An undergrad math major in a one-semester course in math logic should be able to get the basic idea pretty well. He might not bother to work through all the gory details of arithmetization of syntax, but those aren't very important anyway. As to the "crisis" -- I think it was a crisis for the foundational schools that were fashionable at the time, such as Hilbert-style formalism. I think the theorems were the main cause of the latter-20th-century rehabilitation of realism/Platonism to a position of respect, not because they made the problems of realism any lighter, but because they heaped heavy difficulties on realism's leading competitors (while leaving realism itself more or less untouched). --Trovatore 07:07, 6 March 2007 (UTC) Godel, Escher, Bach gives a completely accessible (i.e. nontechnical) but basically rigorous description and proof of the theorem. It goes off into all kinds of discursions as it threads through the entire book, but it's entertaining reading throughout. I'd consider it readable by any interested and math-inclined high school student. I read it before I knew anything about logic and it made the intro-to-logic class I took later (the usual undergraduate one that ends with the incompleteness theorem) very natural and easy. Raymond Smullyan's What is the Name of this Book? develops the theorem through a series of knights and knaves puzzles but that one's a bit more contorted. Sooner or later I hope we have a wikibook about logic (right now there are a few very tentative beginnings at one). —Preceding unsigned comment added by 207.241.238.233 (talk) 05:33, 25 September 2007 (UTC) ## Request for untruth Can I convince people here to remove the word "true" and "truths" from the statements of the incompleteness theorems? There are two reasons for this. 
One is that these theorems are formal results about formal systems, and have nothing to do with truth except in certain philosophical interpretations. The interpretations should be in their own section, and not be mixed in with the formal statements of the theorems. The other reason is pedagogical. Despite having a pretty strong mathematical background, I was confused for years about the exact nature of Godel's theorems, precisely because I didn't understand what it meant to be "true but unprovable". It took a very long time for me to realize that the theorems are simply about provability of a sentence and its negation, and once I realized that I immediately understood them. In retrospect I feel that I was misled by nearly every description of the theorems into thinking they were more mysterious than they are, and wasted a lot of time in confusion as a result. I would state the first incompleteness theorem something like this: Every [formal system meeting certain criteria] contains a statement which either cannot be proven or disproven from the axioms (making the system incomplete) or which can be proven and disproven from the axioms (making the system inconsistent). where the bracketed portion is either a placeholder for the actual criteria, or else those exact words with the definition occurring out of line. Probably the latter is better. I think that the phrasing "every system is inconsistent or incomplete" is much clearer than the phrasing "every consistent system is incomplete". The latter suggests that consistency is just another technical requirement like recursive enumerability, which it certainly is not! -- BenRG 17:25, 23 March 2007 (UTC) I would support a change like that, provided that the important interpretation of Godel sentences as "true but unprovable" is discussed somewhere else. I have thought for a while that this use of "truth" so early in the article is above the level of the intended reader (they will misunderstand what it means), and the footnote is too terse to help them understand the subtlety of the concept. Several other people watch this page, so I have no plans to rush any changes here. CMummert · talk 17:53, 23 March 2007 (UTC) I'm not really in favor of such a change. Saying that T does not prove GT is precisely the same as saying that GT is true, and that's what we need to get across. It's possible to doubt that GT has a determinate truth value, but then you are also required to doubt that the proposition "T does not prove GT" has a determinate truth value. Or at least it's not obvious how you'll avoid it. --Trovatore 18:40, 23 March 2007 (UTC) They are not the same, because you cannot say that GT asserts its own unprovability unless you interpret it as a statement about natural numbers. This is not a mere technicality, because it follows from the completeness theorem that if T is provably consistent in some system T', then there exist models of T in T' under which GT does not hold (i.e. the interpretation of ~GT is a theorem of T'). I'm not sure I got that exactly right, but I assure you that there is no way to talk about the truth of GT without making interpretational assumptions that can't be formally justified. -- BenRG 20:19, 23 March 2007 (UTC) What I said was that if you doubt the existence of a determinate truth value for GT, you also have to doubt it for the proposition "T does not prove GT". The latter statement asserts the nonexistence of a finite string of sentences starting with the axioms following the rules of inference yada yada yada. 
If "natural number" (sempliciter, not relativized to any model) is problematic, then so is "finite string of sentences". If you don't think so, then please explain your semantics for interpreting "T does not prove GT", and explain why you don't have to relativize *those* semantics to a model. --Trovatore 20:41, 23 March 2007 (UTC) Natural numbers are not problematic, just not unique. Let's assume Platonism, since presumably non-Platonists will have no objection to my original proposal. Without getting bogged down in issues of philosophy of language and levels of English quotation and whatnot, "T does not prove GT" is a statement about the Platonic realm, saying yada yada yada. But even to a Platonist, GT is not such a statement. T is a formal system by assumption, and so GT is a string of symbols by definition. Unlike real propositions and theorems, the so-called propositions and theorems of a formal system are just strings of symbols, and a Platonist is not obliged to interpret them in any particular way. The only important thing is that the interpretation be consistent, i.e. that all of the axioms and rules of inference be valid under the interpretation. The Platonic version of the result about T' is: "if T is provably consistent then there exists a consistent interpretation of T under which GT is false". The quoted statement is true if the completeness theorem is true, which I think any Platonist would accept. Different consistent interpretations of T give you different theorems, all of which are true. In one interpretation GT, though not a theorem of T, happens to encode a true statement about natural numbers that's equivalent to an assertion of GT's unprovability in T. In other interpretations GT might encode other true statements. In at least one interpretation GT encodes a false statement, but ~GT, which is also not a theorem of T, encodes a true statement. The point is that GT isn't a statement about natural numbers, and never was. The only reason people talk about natural numbers in the context of Gödel's proof is that it's a convenient informal way of describing the Gödelization process. All that's really necessary for the proof is a certain formal structure within the system, which can be provided by natural numbers but also by other things that aren't the same as natural numbers. They all suffice to construct the Gödelization of the system, but they differ in whether it's a true-but-unprovable statement or a false-but-undisprovable statement. -- BenRG 22:10, 23 March 2007 (UTC) Let's stick to the naturals, please; it'll save us a lot of confusion. Sure, the objects of discourse of T may not literally be natural numbers (e.g., they might be sets, if T is for example ZFC). But the relevant objects of discourse of T are the ones that play the role of the natural numbers in the assumed relative interpretation of PA into T. These are the only ones for which we need provide an interpretation in order to interpret GT. So they might as well be the natural numbers. Yes, if T is consistent then there is a model of T in which GT is false. But that model will (necessarily) have nonstandard natural numbers. Interpreted in the standard natural numbers (which are unique, up to a unique isomorphism) GT is true. Again, this is no more problematic than the claim that the statment "T does not prove GT" is true. --Trovatore 22:28, 23 March 2007 (UTC) I disagree that sticking to the naturals saves confusion; I think it creates confusion. As I said before, it certainly did for me. 
Despite your use of "again", this is the first time you've said that GT interpreted in a particular way is true, rather than implying that GT itself is true. Personally I interpret the (first) incompleteness theorem in a completely different way. In light of the completeness theorem, it seems clear that it's not about the inability of axiomatic systems to capture truth. They can prove exactly what they ought to be able to prove (everything that's true in all models). What it's really about is the inability of axiomatic systems to fix the objects of discourse. To me this is a much more interesting result. It's a reasonable interpretation of what Gödel actually proved, but not a reasonable interpretation of the pre-interpreted version that implies that GT is really true. Of course I'm not saying that this objects-of-discourse thing should be in the theorem statement; it should be in a later section, after a theorem statement accurate enough to accommodate all interpretations. If this doesn't change your mind then we may be at an impasse. I hope a few other people will weigh in on this. -- BenRG 14:28, 24 March 2007 (UTC) You don't actually need the incompleteness theorems to show that (first-order) axiomatic systems can't "fix the objects of discourse". Löwenheim–Skolem is sufficient for that (and more directly on-point). But it does seem that we're not arguing about what I thought we were; it may be more a difference of terminology. The way I think of it, GT, taken literally, is directly a statement about the natural numbers. It's written in the language of arithmetic. If T is not in the language of arithmetic, then the sentence that T literally fails to prove or disprove is not GT itself, but what we might call Translate(GT), where the Translate() function is the thing assumed to exist by the hypothesis that says PA is relatively interpretable in T. What GT asserts is not "T cannot prove me" but rather "T cannot prove my translation". Then I ordinarily suppress mention of translations from casual descriptions, because they really are a technical detail, and speak of T proving or not proving sentences of arithmetic, when literally speaking I really mean proving or not proving their translations. Using this terminology, where GT is directly and literally a statement about the naturals, do you agree that if T is consistent, then GT is true? --Trovatore 18:15, 24 March 2007 (UTC) There is a second translation: from the statement GT, which is just some Pi^0_1 sentence, to a statement at the meta level about provability. We recognize, at the meta level, that any actual proof could be coded as a standard natural number. It is because of this second translation that I think it would be worthwhile to have a whole paragraph in the article rather than just a footnote. This translation is already lightly covered one or two places, including the proof, but not as explicitly as it could be. CMummert · talk 19:24, 24 March 2007 (UTC) Carl, of course you're right about this other level of translation, but it doesn't seem to be in dispute at the moment, especially in relation to the use of the word "true". I seem to have understood wrongly, at first, what BenRG's actual complaint was; it doesn't seem to be about theories of truth, but more about whether the Gödel sentence should be thought of as arithmetical. (For some reason BenRG didn't complain about the word "arithmetical" in the statement he was referring to.) 
Ben, do you see where I'm coming from on that now, and do you agree that the arithmetical version of GT is properly described as "true" if T is consistent? --Trovatore 01:52, 25 March 2007 (UTC) So the big problem with using the word "true" in this context is that it is POV. The definition of "true" requires a footnoted definition which is as long (longer perhaps) than the statement of the theorem itself. If "true" is being used in a *particular sense* then its not truly NPOV and certainly will confuse the average reader. Its perfectly valid to have a philosophical position that uses the concept of "truth" but does not accept that the Godel sentence is true. There are, after all, models in which it is both satisfied and unsatisfied (true and false if you like). Really the problem here is the belief -- very widely held I admit -- that there is an "intended model" in which we can say that it is true. That's not nearly as cut and dried a philosophical position as many would like. I'd suggest dropping the "true" bit of the definition of the theorem itself. I accept that it was a position which Godel found attractive and is implicit in his approach, but the focus of his theorem was the formal undecidability not the semantics of his sentence. For the first theorem, surely a reasonable approach would be to state it like that (as a theorem about undecidable sentences) and then remark that most mathematicians take an approach where they can say "its true but not provable". The key thing is that wikipedia shouldn't try to adopt a particular position on the point. Francis Davey 00:15, 7 July 2007 (UTC) The notion of the truth of the Gödel sentence (well, Rosser sentence, maybe) of T is no more problematic than the notion of the consistency of T, which is one of the hypotheses. As you say, there are models that fail to satisfy GT, but those models also hold that T is inconsistent, meaning that, from the perspective of those models, we're not even in the situation where the theorem applies. So it's seriously misleading to quibble on the concept of truth for GT, while not mentioning that the hypotheses of the theorem have exactly the same issue. --Trovatore 00:45, 7 July 2007 (UTC) I'm not sure that's quite right. There are *consistent* models of PA (or whatever system) in which the Godel sentence is false, but the incompleteness theorem is still valid (and its hypotheses hold). Its true that such models may deny Con(T), but they are consistent nonetheless. All that is required to prove the first theorem is consistency (i.e. that falsity is not provable) -- which is a property of the theory not of a particular model. The Godel sentence is true in some of those models but not all, the theorem is true in all models because its a theorem. Maybe I am misunderstanding what you are saying, but there seems not problem to me in questioning whether the Godel sentence is "true" but still being entirely happy with the theorem (which says the sentence is not provable). Francis Davey 10:34, 7 July 2007 (UTC) You can be entirely happy with the theorem, as a formal theorem, even if you doubt the determinate truth value of GT, yes. But in that case you can't really interpret the conclusion as "GT can't be proved", at least not in full generality. That's because, while GT is a claim about an arbitrary natural number, the statement that GT can't be proved is a claim about an arbitrary proof. And if arbitrary natural numbers are problematic, then arbitrary proofs are equally so. 
To put it another way, yes, there are models in which GT is false, but in those models, GT can be proved. If you say, as you do, that consistency of a theory is a property of a theory rather than the model, then you must have some notion of what an arbitrary proof is "really", without reference to any model. But then you can use that notion to understand what an arbitrary natural number is, "really" (for example, "a natural number is the length of some proof or initial segment of a proof"). And that's all you need to give a determinate truth value to GT. --Trovatore 18:18, 7 July 2007 (UTC) I don't think you are saying more than that in order to say many meaningful things about PA (or other applicable theory) as a theory we often need stronger (or different) assumptions in a meta theory to do it. Eg, we cannot prove the consistency of PA within PA. G_T cannot be said to be "provable" in non-standard models of T in which G_T is false, since "provable in a model of the theory" isn't a meaningful concept. What I think you are saying is that there are models of the meta theory of proofs in which G_T is provable. That seems to me to say little more than the obvious, namely that there is no firm ground on which we can build a formal explanation of what a natural number is, the best we can do is push the problem elsewhere. One might not wish to do that -- and my point is that is a perfectly valid philosophical standpoint. There is nothing *logical* that forces one to accept that the standard model is somehow the "right" or God-given model, although there might be metaphysical reasons for choosing it. You can certainly give a determinate truth value to G_T, but really that's not very interesting. The fact that something is valid in some models and not in others is really very common. You can pick a model in which it is true, but that doesn't mean it *is* true does it? Francis Davey 23:27, 7 July 2007 (UTC) "Provable in a model" is indeed a meaningful concept. It means that the model has an element that, according to the model, is a proof of the statement. The model may, of course, be wrong about that -- that is, the statement may be provable in the model, but not really provable. Just as GT may be false in the model, but not really false. What's a perfectly valid philosophical standpoint is that there may not be a "standard model" of PA. But then, the questions about what's provable and not become just as problematic as the question about what's true about the (standard) naturals. What's not a valid philosophical viewpoint is to say, "yes, there's a standard model of PA, but I'm not going to use it to give a truth value to GT" -- that's not a viewpoint, it's a linguistic quibble. The objects of discourse of GT are the natural numbers, so saying that GT is true is exactly the same as saying it's true in the standard model. That's not philosophy, just understanding ordinary mathematical language. --Trovatore 06:43, 8 July 2007 (UTC) Quite so. I thought that was clear from what I said. Maybe we are talking at cross-purposes. Francis Davey 10:53, 8 July 2007 (UTC) Could be. I'm not sure exactly what you are saying, but it appears clear that you want to apply restrictions to the claim that "GT is true" that you are not willing to apply to the claim "T does not prove GT", and you haven't explained just what your semantics for the latter claim are. As regards your remarks about "models of the meta theory of proofs" -- note that we don't have to look very far for them. 
Every model of T is already a model of the metatheory. That's the whole point of the requirement that PA be relatively interpretable in T. And, any model of T in which GT fails, is a model of the claim that T proves GT. --Trovatore 00:36, 9 July 2007 (UTC) It is possible to understand Godel's theorem without referring to the truth of the statement, but almost every standard presentation does discuss the truth of the Godel statement. What the article needs is slightly more information than a footnote about this, not less. The article ought to explain how the truth of the sentence can either be interpreted disquotationally or interpreted as truth in the standard model (which is very similar). Suppressing the fact that the Godel sentence is widely considered to be true would impose an uncommon POV on the article. — Carl (CBM · talk) 13:25, 7 July 2007 (UTC) I'm not proposing (and I don't think anyone is - though I may be wrong about that) that the fact that many consider that it is useful to explain the Godel sentence as true but unprovable be suppressed. Clearly that would be wrong. It is a point of view. But the fact that it is a point of view means that it ought not to be stated uncritically. Francis Davey 23:27, 7 July 2007 (UTC) I see; I misunderstood. I would accept moving the discussion of truth to a separate section and adding a line to the lede to summarize that section. That's contingent on the new section being acceptably written, of course, but I would try to help. Whether others are as willing as I am remains to be seen - the interpretation of the statement as true but unprovable is extremely common, because it does have a strong justification. — Carl (CBM · talk) 00:15, 8 July 2007 (UTC) I am against that plan. I frankly think that Francis has still not quite understood the issue, probably because the sources he's read didn't understand it either. Failing to state this plainly is likely to result in further confusion. We should have a section dealing with other interpretations, and mention that the "truth" interpretation becomes problematic if you don't believe in the totality of the natural numbers -- but that the "provability" interpretation also becomes problematic in that case. I would note another point here: Strictly speaking, the claim that the Gödel sentence of a consistent theory is neither provable nor refutable from that theory, is not even true. The sentence is certainly not provable, but it may be refutable, if the theory is ω-inconsistent. So to follow Francis's suggestion, we either have to explain ω-consistency earlier than is really desirable, or we have to use Rosser sentences -- which would make the explanation not strictly about Gödel's theorem, but rather Rosser's strengthening of it. --Trovatore 07:03, 8 July 2007 (UTC) Actually I think that touches on something of importance, namely that Godel assumed omega-consistency to prove the theorem and (as you say) it is only after Rosser that we have the theorem requiring only consistency. Nevertheless the normal citation of the theorem is in the Rosser version (i.e. after strengthening). That is normal for mathematical usage (the eponymist of a theorem retaining their origination even if the theorem is "improved") though another usage might be the Godel-Rosser theorem.
By the way, I quite accept that, in historical terms, the remark about the sentence being "true" is correct, since that is the interpretation Godel himself gave to it, though it is presented as a corollary rather than as the main theorem which is merely about undecidability. Francis Davey 10:53, 8 July 2007 (UTC) ## A question about computable enumerability The following is stated in the article: Some technical hypotheses have been omitted here; the most important is the stipulation that the theory be computably enumerable. That is, for Gödel's theorem to be applicable, it must be possible in principle to write a computer program that correctly decides whether statements are axioms of the theory, and a second computer program that, if allowed to run forever, would output (one at a time) a list containing all the theorems of the theory and no other statements. If there is "computer program that, if allowed to run forever, would output (one at a time) a list containing all the theorems of the theory and no other statements" then doesn't that contradict Godel's theorem about the theory being incomplete? I don't claim to understand this subject very well but this is my intuition as a computer science student. I think the above should be changed to "a list containing all the theorems of the theory and their negations". Please explain if you think I am wrong. Panayk 18:27, 8 April 2007 (UTC) It outputs the list of theorems, i.e., the list of sentences provable from the axioms. This cannot include any particular sentence and its negation, for then the axioms would be inconsistent. There will be some sentences, such as the Godel sentence, for which neither the sentence nor its negation is a theorem, and so neither one will be included in the list. CMummert · talk 18:34, 8 April 2007 (UTC) Ah thanks, I misunderstood what we mean be theorem (i.e. I thought a theorem is a true sentence that is not an axiom). Just to make sure: is the reason that such a program (that lists the provable sentences) can exist, that we don't require it to terminate? Because otherwise we could use it to "look-up" a sentence (and its negation) and find out if it's decidable, something that Turing (I think) precluded as mentioned in the article. I hope I am making sense. Panayk 21:52, 8 April 2007 (UTC) Yes, it doesn't decide whether a given sentence is a theorem, it just tests all possible proofs and outputs their conclusions. CMummert · talk 23:01, 8 April 2007 (UTC) So while the theorems can be enumerated recursively, one CANNOT recursively enumerate them in a predefined order, e.g. by the number of characters in the formula (and lexicographically for each length). That would make the theorems a recursive set (rather than just a recursively enumerable set) which is impossible unless number theory is inconsistent. JRSpriggs 09:59, 9 April 2007 (UTC) Thanks. You know, even though I was first introduced to Godel's work some years ago, I never quite understood it until now. Part of the blame is on one of the articles I first read, one that tried to reconcile the classical notion of truth with Godel's notion of "truth == provability". For example the author was saying that the Godel sentence was true but unprovable. The author then went on to claim that we (humans) are "special" in that we can "see" the truth of the Godel sentence. It now occurs to me that this is absurd. In the context of the incompleteness theorems provability is synonymous with truth: "true == provable" and "false == the negation is provable". 
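The enumeration CMummert describes just above (test all possible proofs and output their conclusions) can be sketched in a few lines. The helpers decode and check_proof are hypothetical placeholders for a derivation decoder and a proof verifier; only the shape of the loop matters here:

```python
from itertools import count

def enumerate_theorems(axioms, check_proof, decode):
    """Run forever, yielding each theorem as soon as some proof of it is found.

    Hypothetical helpers: decode(n) turns the integer n into a candidate
    derivation (every finite string appears for some n), and
    check_proof(axioms, d) returns the conclusion of d if d is a valid
    derivation from the axioms, else None.
    """
    seen = set()
    for n in count():                      # walk through all candidate derivations
        derivation = decode(n)
        conclusion = check_proof(axioms, derivation)
        if conclusion is not None and conclusion not in seen:
            seen.add(conclusion)
            yield conclusion               # theorems appear one at a time, never all at once
```

Nothing in the loop ever rejects a sentence: a non-theorem simply never shows up. That is why the set of theorems is recursively enumerable without being recursive, which is JRSpriggs's point about the impossibility of enumerating the theorems in a predefined order.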
Nothing else, period. Panayk 21:10, 22 April 2007 (UTC) No, if that's the lesson you took away from it, I'm afraid you got it wrong. Prior to Gödel it was perhaps reasonable to believe that truth equaled provability; afterwards it was not, and referring to "truth==provability" as "Gödel's notion" is simply ahistorical. As to whether humans are special are not, that's beyond the scope of a mathematical theorem. Gödel himself believed that humans are special, but to the best of my knowledge, never claimed to have proved it. --Trovatore 01:28, 23 April 2007 (UTC) I meant, that as I now understand it, the incompleteness theorem doesn't deal with truth but only with provability. The notion of truth doesn't exist in the context of the theorem, and you don't need it to make sense of it. A sentence is "true" if you can prove it, "false" if you can prove its negation, and undecidable in all other cases (e.g. the Gödel sentence, which we have nothing to say about). That's how "true" and "false" is used in the following sketch of the proof: http://www.ncsu.edu/felder-public/kenny/papers/godel.html. It's only a formal thing, the author may have as well used the words "red" and "black". That's what I meant by "truth==provability", that in the context of the theorem we don't care about (or define) the notion of truth, only of provability. The theorem has nothing to say about the "truth" of the Gödel sentence or whether humans can see it. (Not to say that such questions are not valid given an external definition of "truth"). Am I on the right track? Panayk 14:50, 23 April 2007 (UTC) Hmm, better, but still some quibbles. You're right, the proof of the theorem deals mainly with whether sentences are provable, not whether they're true. But it's an extremely bad idea to use the words "true" and "false" here in place of "provable" and "refutable" -- you'd do much, much better using "red" and "black" than "true" and "false". Part of the reason that this is so is that the proof of the theorem does show that the Gödel sentence of a consistent theory is in fact true. To understand that, you need to have separate notions of "true" and "provable". --Trovatore 19:05, 23 April 2007 (UTC) After looking at the paper you linked to, I think I see why you are confused. The people who follow Hofstadter's proof tend to use proofs like this, with the same poor terminology. Sometimes when he says "statement about natural numbers" he means a statement with no quantifiers whatsoever. For formulas with no quantifiers, provability in PA is equivalent to truth. But for statements with even one quantifier, provability and truth do not coincide. The paper you linked to is particularly vague about this point. You would be better served by finding any undergrad level mathematical logic book rather than anything based on Hofstadter's proof. CMummert · talk 20:08, 23 April 2007 (UTC) You are probably right. Would you care to recommend one (or more)? Most of the material I have read was either too popularized or only dealt with the subject very briefly as an introduction to something else. I am trying to get the whole picture, but that's hard to do when I have only my intuition to figure out what exactly 'truth', 'theory' etc. mean. I'd also be interested to know how exactly this relates to computer science and computability theory. Panayk 23:34, 23 April 2007 (UTC) One respected undergrad logic book is: Herbert Enderton (2002) A mathematical introduction to logic, 2nd ed. 
An equally good book, at a slightly higher level but lower price, is Computability and Logic by Boolos, Burgess, and Jeffrey, 4th edition 2002. CMummert · talk 23:52, 23 April 2007 (UTC) Thanks. Panayk 09:35, 24 April 2007 (UTC) ## An example of a statement which is true but not provable? It seems the article should give one or two examples of a central aspect of Gödel's theory, that there are statements in math which are true but unprovable. Referring to these two sentences: In fact, there are infinitely many statements in the theory that share with the Gödel sentence the property of being true but not provable from the theory. "Elementary arithmetic" consists merely of addition and multiplication over the natural numbers. Is "elementary arithmetic" an example? It's not clear that it is (it should say "for example"). Secondly, how is this an example? Is it actually unprovable that two numbers added together create a sum? This doesn't seem like a good example for a layman, as the reason this is unprovable isn't easy to grasp. Can someone give a better example, something that's not a basic axiom? Scott Teresi 15:16, 8 July 2007 (UTC) Elementary arithmetic has to be part of the theory, so any theorem of elementary arithmetic is provable in the theory. I rearranged the paragraph some. Given one true but unprovable statement S, statements like "S and 0 = 0" and "S or 1 = 2" are also true and unprovable, so there are trivially infinitely many other true but unprovable sentences. — Carl (CBM · talk) 17:44, 8 July 2007 (UTC) Can you give some examples of S? I remember things a lot better when I know a few good examples of something. Scott Teresi 20:34, 9 July 2007 (UTC) The proof of Godel's theorem gives a construction of such an S, although if written out explicitly it would be very long. The sentence essentially says "There is no natural number that encodes a proof of this sentence". The difficult thing is making the self reference work out correctly. There are other known examples of unprovable sentences. For example the Paris-Harrington theorem is true but not provable from Peano arithmetic, and many people would say that although the continuum hypothesis is not provable or disprovable from ZFC it is either true or false in the "real world". — Carl (CBM · talk) 20:51, 9 July 2007 (UTC) ## Second theorem How does Gödel's second incompleteness theorem follow from his first? I know and understand the proof of his first, except for what exactly is needed for a theory to state the provability of a given theorem. I'm willing to trust that adding axioms and computable axiom schemas won't make the first theorem cease to apply, although I don't understand why. I also figured out a simple way to prove his first theorem from his second, although I doubt that's useful. — Daniel 19:13, 12 July 2007 (UTC) The second theorem doesn't follow "from the first" directly; the proof of the second relies on the fact that a proof of the first incompleteness theorem can be carried out inside any sufficiently strong theory of arithmetic. The first incompleteness theorem, as an approximation, gives a sentence that says "I am not provable", and shows that this sentence is true but not provable. The second incompleteness theorem, as an even coarser approximation, gives "The sentence 'I am not provable' is not provable" as a sentence that is true but not provable. This latter sentence, essentially, asserts the consistency of the theory, because a theory is consistent if and only if it has at least one unprovable sentence. 
— Carl (CBM · talk) 22:03, 13 July 2007 (UTC) The latter sentence (which, if I am not mistaken, is essentially identical to the former) does assert the consistency of the theory, but I don't see why it would be the only one. Wouldn't "The sentence <insert sentence> is not provable" work with any sentence? Does "The sentence '0=1' is not provable" imply "The sentence 'I am not provable' is not provable"? By the way, the best translation of the sentence into something like English I can think of is "', when preceded by its own quotation, cannot be proven using <insert system used for proofs in given theory, not just the name>', when preceded by its own quotation, cannot be proven using <same as last insertion>" How come they never use something like that? Is it too complex for the layman? Is it just more than they want to know? — Daniel 01:23, 16 July 2007 (UTC) That's a very close translation that is very useful in the context of quines. I don't know why it isn't the "textbook standard" version, but it may be because it is so much less succinct than the other translations, to the point that the main idea is easy to miss. The "latter sentence" in my post above is not at all identical to the former, and this difference is the difference between the two incompleteness theorems. The latter one has an extra level of indirection that the first one doesn't. Your other question gets exactly to the point of the theorems. You're right that "The sentence <insert sentence> is not provable in theory T" asserts the consistency of theory T for any inserted sentence. But it is trivially correct for any disprovable sentence of T if T is consistent; I think that accounts for your example of "0=1". One key innovation in Godel's work was showing that proofs can be encoded in very weak fragments of arithmetic so that arithmetical sentences can be used to assert the provability of other arithmetical sentences. The most surprising result is that there are unprovable sentences that are true in the standard model, which means they aren't disprovable either. Thus any consistent effectively axiomatizable theory of arithmetic is incomplete, which was not known before Godel's work. You don't get that directly if you use sentences like 0=1. This contrasts with Hilbert's proof that there are complete effectively axiomatizable theories of Euclidean geometry. — Carl (CBM · talk) 02:42, 16 July 2007 (UTC) It doesn't matter that "The sentence <disprovable sentence> is not provable in theory T" is trivially correct when T is consistent. The point of the second theorem is that that sentence can't be proven no matter what sentence is inserted in T. How did he prove this? — Daniel 16:20, 18 July 2007 (UTC) ## Ademh: Your post "Paradox in Godel's incompleteness theorem that invalidates it" is now at Talk:Gödel's incompleteness theorems/Arguments retitled Wvbailey 00:18, 10 October 2007 (UTC) Moved to the arguments page here. — Carl (CBM · talk) 15:16, 9 October 2007 (UTC) ## Sub-section(s)-in-progress re criticisms? misunderstandings? of Godel's incompleteness theorems Would it make sense, though, to have a little section on the page re this very issue? That these arguments have been discredited since Finsler (or some such)? Bill Wvbailey 16:03, 9 October 2007 (UTC) It would be worth having something in the article about persistent misunderstandings. I thought there was such a section, but apparently there isn't.
There are some difficulties writing such a section: • We want to limit it to only very notable objections (not a free-for-all of complaints) • We need to source it well, so it's clear that it is not original research or opinion of Wikipedia editors. Franzen's book will be helpful for that. But I think it could be done. — Carl (CBM · talk) 16:15, 9 October 2007 (UTC) Would you recommend that we create a "working article" (suggested title?) or do the work here? (As you know, I create a lot of clutter while I'm researching and "collecting"). Bill Wvbailey 16:38, 9 October 2007 (UTC) If you want to collect quotes and sources for later use, I'd recommend a subpage of your user page. I think it's parallel to the way some people make large numbers of index cards when doing research at a library, so that they can use those cards later to piece together a paper. If you haven't looked at Franzen's book, that would be a good resource, as would: "The Reception of Godel's Incompleteness Theorems", John W. Dawson, Jr., PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, Vol. 1984, Volume Two: Symposia and Invited Papers. (1984), pp. 253-271. [3] That paper has specific comments about Finsler and Godel's communications, and about other criticism of Godel's work shortly after it was published. — Carl (CBM · talk) 17:37, 9 October 2007 (UTC) Voilá: User:Wvbailey/Criticism of Godel's incompleteness theorems‎. All are invited to participate in adding research -- e.g. references, quotes (with citations, please) etc. Please add the raw stuff to the talk page. If anything good comes of these labors, I'll write something up there, transport it here for comment, and we can emend before putting into article. Bill Wvbailey 19:10, 9 October 2007 (UTC) Hmm -- the only thing you have there so far is a brief note about Finsler, who (if I've understood correctly) didn't have a criticism of Gödel's arguments; he just thought he (Finsler) should have gotten some of the credit. Do I have that right? That seems to be another matter altogether. Admittedly the source apparently treats this together with criticisms of the arguments themselves, but I think at least the proposed article needs a different title if this is to be included in the subject matter. --Trovatore 04:18, 10 October 2007 (UTC) You do have it mostly correct about Finsler, he claimed priority, and Godel gently tried to explain to him the error of his ways. But there is an implied criticism -- or perhaps disappointment -- also evident in Finsler and Post re the notion of "absolute". In a draft letter (ca summer 1970, composed but never sent) to Yossef Balas Godel writes the following (this is all in English, by now he was loosing his German a bit -- I believe this because my grandmother lost all of hers) such as "he (Finsler) omits exactly the main point which makes a proof possible, namely restriction to (some well defined formal system... he had the nonsensical aim of proving formal undecidabliity in an absolute sense. This leads to the nonsensical definition... the flagrant inconsistency ... if Finsler had confined himself to some well defined formal sysem S, his proof ... could be made correct and applicable to any formal system. I myself did not know his paper when I wrote mine, and other mathematicians or logicians probably disregarded it because it contains the obvious nonsense just mentioned." 
Re the last sentence above Godel also used, then crossed out the words "This proof" [which incidentally was not known to me when I wrote my paper] is therefore worthless and the result claimed, namely the existence of "absolutely" undecidable propostions is very likely worng." (page 10 of volume V? IV? the two volumes are stolen from the library and I had to take photographs of what I could get off the web) RE this business of "undecidablility in an absolute sense" I found something in Post 1941 (unpublished until Davis 1965) re Post's desire for the same. See in particular p. 340 in The Undecidable, Footnote 1.) This is an interesting "criticism", in my mind, i.e. they're saying it's a puny proof, clever but puny because it fails to address "absolute undecidability". Fair enough criticism, I should say. You got to my sub-article-in-the-making too quickly. I'm probably going to move (most of) the stuff to an article on Finsler. But Finsler is the poster-child -- the shining explar -- of those who misunderstand "the point" re a tightly-system "system" with an "adequate" amount of arithmetic. So I'm pursuing an angle there, just exploring, with an intention of forking off an article re Finsler. I have to peddle my little bike up to the campus (thereby avoiding a parking fine) to access JSTOR for Carl's article (or pay \$14), and I've got Franzen and another book, but will have to read them more carefully than my first quick pass at them and scan through Dawson too. Bill Wvbailey 17:26, 10 October 2007 (UTC) Well, I don't know if we need two new articles -- from your description it seems as though the Finsler material fits with the rest of it. It's just the title that needs work. Maybe scholarly reaction to Gödel's incompleteness theorems or some such. --Trovatore 17:59, 10 October 2007 (UTC) My intention would be to add one or two short sub-sections (not separate sub-articles ... sorry .... mea culpa: I've changed the title of this section) to the main article. I suspect that scholarly criticism is very limited, probably to the "disappointment" of Post and Finsler alluded to above. (Altho our friend Ademh seems to have found a published critic, too.) It's misunderstandings, that seem to fall into two camps: (1) nutcase and (ii) honest-intentions but misunderstanding. Excepting Dawson's commentary re Finsler (p. 407) I have no firm evidence yet, but my hunch is the misunderstandings are around Godel's "true ≢ provable". A clear understanding of this takes away about 95% of the scariness of his proofs, at least it did for me. Now the proofs seem almost obvious: "...However (and this is the decisive point) it follows from the in this model it would have to be shown that truth and demonstrability are equivalent [i.e. Dem(p)≡ p]. But the decisive point is now to apply the correct solution of the semantic paradoxes i.e., the fact that the concept of "truth" of the propositions of a language cannot be expressed in the same language, while provability (being an arithmetical relation can. Hence true ≢ provable.)" (strikeouts in original (1st one is shown in a footnote); unmailed ca 1970 letter from Gödel to Yossef Balas in Dawson's Collected Works vol. 4? ) Bill Wvbailey 14:21, 11 October 2007 (UTC) ## New Proof I added a short modern proof, that is essentially identical to Godel's, except using the modern intuition about computers. It seems a shame to not use modern computer science, which really makes the whole thing trivial. 
The interesting part of Godel's actual proof consists of the peculiar encoding he does, which uses powers of primes to encode the data. The main lemma in his paper, though, is that any primitive recursive function (aka computer program with for loops) can be represented as an arithmetic function with plus and times. This is the "embedding of a computer" alluded to in the new section.Likebox 23:28, 10 October 2007 (UTC) I like the general thrust of it, but I fear it may be a little too glib -- someone reading it casually may underestimate the difficulties. I'm also fairly suspicious of the claim that an argument this short (even black-boxing the arithmetization stuff) can dispense with ω-consistency; I really think you need something like the Rosser trick for this. A test case would be to try it out on a consistent theory that proves its own inconsistency. I have a suspicion that you may have hidden an appeal to Σ1 correctness somewhere in the argument (which is stronger than consistency). --Trovatore 20:21, 11 October 2007 (UTC) There are no difficulties. It is a complete proof. Likebox 21:31, 11 October 2007 (UTC) Here's the problem (well, a problem; I haven't had time to think through all the stuff about "printing code into a variable" and it's not obvious that such phrases have a unique reading). You say: If the axiom system proves that DEDUCE doesn't halt, it is inconsistent. If the axiom system does not prove that DEDUCE doesn't halt, it is incomplete. But what you haven't explained is how to deal with the following possibility: What if the axiom system proves that DEDUCE does halt, when in actual fact DEDUCE does not halt? Note that this does not imply that the axiom system is inconsistent, just that it proves a false Σ1 statement. And it is not obvious in this case how to conclude that the axiom system is incomplete. As I said, I don't think you'll be able to avoid doing something along the lines of the Rosser trick. --Trovatore 21:52, 11 October 2007 (UTC) I see your point. There is an omega-consistency assumption. Your sigma one statement is the same as omega consistency. I thought what I was doing was the Rosser trick. I'll try to Rosserize the argument, and I'll delete the false claim until I do.216.7.9.34 00:43, 12 October 2007 (UTC) I put an argument for incompleteness in. Thanks for catching that, Trovatore. 216.7.9.34 04:11, 12 October 2007 (UTC) The printing of code into a variable is a standard thing. To encode it into the axiom system is the non-unique step. Just to be clear, to embed a computer into an axiom system means the following: find some encoding of the memory of the computer inside the objects described by the axiom system, which requires an arbitrarily large integer, and a single statement of the form "for each memory contents M there exists a unique M' = f(M)" where f is a primitive recursive function which takes one processor step in time. The processor could be as idiotic as a Turing machine or a cellular automaton, so long as there is some published path from that to a random-access machine and a modern architecture. Then the first-order statements about M are statements about the memory of the computer, which are capable of expressing any precise English statement about the behavior of the machine. For example, to say the program doesn't stop would be "There does not exist an n such that $G_n(M_0) = G_{n+1}(M_0)$" (assuming that the encoding of a stopped program doesn't do anything).
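As a concrete illustration of the kind of encoding being described here (machine memory packed into a single integer via prime powers, as spelled out just below), the following is a minimal sketch in Python. The helper names are invented for the example, and a real development would of course have to be carried out inside the axiom system rather than in a programming language; the point is only that the bookkeeping is visibly mechanical.

```python
def nth_prime(n):
    """Return the n-th prime (1-indexed) by trial division -- slow, but visibly mechanical."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

def encode(memory):
    """Pack register contents [a1, ..., ak] into the single integer 2^a1 * 3^a2 * ... * p_k^ak."""
    code = 1
    for i, a in enumerate(memory, start=1):
        code *= nth_prime(i) ** a
    return code

def decode(code, k):
    """Recover the first k exponents (register contents) from a packed integer."""
    memory = []
    for i in range(1, k + 1):
        p, a = nth_prime(i), 0
        while code % p == 0:
            code, a = code // p, a + 1
        memory.append(a)
    return memory

assert decode(encode([3, 0, 2]), 3) == [3, 0, 2]   # 2^3 * 3^0 * 5^2 = 200
```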
To encode that a program doesn't print something, write "there does not exist an n such that $\mathrm{Screen}(G_n(M)) \neq \mathrm{empty}$", where "Screen" is a primitive recursive function which projects the memory of the computer to the sub-memory which is identified as the contents of the screen. Blah blah blah obvious, blah blah blah obvious. The Godel encoding works just fine: Write an integer as a product of prime powers $2^{a_1} 3^{a_2} 5^{a_3} \cdots p_n^{a_n}$ and note that finding the n-th prime number is primitive recursive. Then identify $a_1, \dots, a_n$ as the state of your computer. —Preceding unsigned comment added by 216.7.9.34 (talk) 16:44, 12 October 2007 (UTC) ## Proposed Revisions What was reverted by Trovatore.Likebox 21:47, 11 October 2007 (UTC) What I had in mind was, why don't you create a subpage in your user space, and develop it there, soliciting comment? Then it can be copied over (or merged) once there's some consensus for it. That's what I did when I revised definable real number (still a horribly problematic page that should probably be deleted, but it's less bad than it was). --Trovatore 21:55, 11 October 2007 (UTC) Ok--- here are the proposed revisionsLikebox 01:02, 13 October 2007 (UTC) Good idea, I never did that "user page" thing before. I will do that if this bid for consensus fails. Let me try to say what the edits I made were, because I think they are pretty pedestrian. I didn't remove any material except for what I thought were two inaccuracies. 1. The statement that Cohen and Godel didn't use incompleteness to prove independence of AC and CH, which might be true as a textual statement but I can't logically make sense out of it, because it's hard to say whether you are "using" a proved result. Cohen uses a countable transitive model axiom and without mention notes that the countable model axiom is unprovable, and that's the incompleteness theorem for ZFC. 2. I got rid of the "and complete" from the statement that "godel's theorem shows that an axiom system which is consistent and complete cannot prove its own consistency", which is not just overcautious, it is a slip up. If it is complete, it must either prove its own consistency or its own inconsistency. It should read "Godel's theorem shows that a (sufficiently powerful) axiom system which is consistent is incomplete". The rest of the edits were just organizational: I reorganized the "undecidable" page so that it was sectioned up. I split it into "undecidable decision problems" and "theorems independent from axiom systems". I then sectioned up the axiom systems section into "independent from ZF(C)" and "independent from PA", and left the Chaitin and Hofstadter comments untouched. I tried to start each section with "result 0", which is the most primitive undecidability result. In "decision problems" result 0 is the Halting problem. Results 1 and 2 were obvious equivalents, Kleene's separation lemma, and another one made up at random just to show readers that this stuff is considered well understood today. Result 3 was the word problem for groups, where I added a comment on Turing's paper on monoids, which started the whole thing. Result 4 is the cellular automata prediction problem, which has the Conway Game of Life result. In "ZF Independent", independent theorem 0 is "large cardinals", independent theorems 1 and 2 are CH and AC. I added independent theorem 3, which is Solovay's measurable R universe, and 4 was one of Shelah's mentioned already.
In "PA Independent", independent theorem 0 is Gentzen's down-counting from epsilon0, which should be mentioned because it's the grandfather of everything. indep. 1 is Goodstein, which is a finitary rewrite of 0, indep. 2. is Paris Harrington, which is also equivalent to consis(PA), and indep 3. is this tree theorem mentioned on the current page which I don't know anything about. Last thing I did was add a comment on Hawking's claim that Godel prevents you from knowing the laws of physics. This is not accurate, because undecidability does not prevent you from knowing the rules to the game of life. That's like knowing the axioms. It just prevents you from predicting whats going to happen arbitrarily far into the future.Likebox 05:30, 12 October 2007 (UTC) that needs a source. 1Z 08:03, 12 October 2007 (UTC) The source in this case is Dr. Willard Obvious and his colleage Dr. Margaret Self-Evident, in their little-known but influential paper "Self Evident and Obvious Comment on Godel's Incompleteness Theorem". Sadly, their pioneering contributions to science and mathematics are seldom acknowledged.Likebox 17:28, 12 October 2007 (UTC) Since I mostly just reorganized and edited, I figure the next person could prune away what they didn't like. I don't think I made a single claim that is in controversy. Is there consensus???Likebox 05:30, 12 October 2007 (UTC) Here are the proposed revisionsLikebox 01:02, 13 October 2007 (UTC) The upper link has an extra space in it, whereas this lower link proposed revisions does not. I added some comments/emendations to the article at the upper link proposed revisions, but you won't see them in the lower link. Bill —Preceding unsigned comment added by Wvbailey (talkcontribs) 21:35, 14 October 2007 (UTC) ### comment on new text G.I.T (1931) is not "about computers" (1941...) 1Z 00:07, 15 October 2007 (UTC) Actually, it is. Because by the middle 1960's Gödel had come to accept Turing's work as the sine qua non, going so far as to renounce his own recursion theory in favor of Turing-Post's version of "formal system": Godel's 1931 is still not about computersm you are talking about later developments.1Z 11:04, 15 October 2007 (UTC) I don't disagree. An article could "split" -- the historical Godel 1931 plus a modern explanation using a formal system "M" devised from an axiomatization (or at specificication of) an abstract-machine model, not a number-theoretic model such as P. I.e. rather than hewing too strictly to the arcane symbolism and difficult presentation of Godel, an explanation/example might use a more "modern" approach using "abstract computation machines". I know of no such approach. In fact I'd like to see one. Even Godel knew the paper was "dense"; he had intended to write a part II but became incapacitated and abandoned the attempt. His later 1934 is somewhat simpler, too, but still recursion-based. Bill Wvbailey 16:58, 15 October 2007 (UTC) The following note in "Postscriptum", dated June 3, 1964, appears in Davis 1965:71-73. Here is where he utterly blows off recursion theory (Church 1936 and his own §9 General recursive functions) in favor of Turing 1936 and Post 1936: "As for previous equivalent deifintions of computability, which, however, are much less suitable for our purpose, see A. Church, Am. J.. math, vol. 58(19360 pp. 356-358 (this anthology, [Davis 1965] pp. 100-102). One of those definitions is given in §9 of these lectures." 
A skeptic might think he was sick of mind, but he repeats his assertion in a differnt way to van Heijenoort two years later (after working for months with van H on a new and better translation of his 1931): "In consequence of later advances, in particular of the fact that due to A. M. Turing's work69 a precise and unquestionably adequate definition of the general notion of formal system70 can now be given, a completely general version of Theorems VI and XI is now possible.... "69 see Turing 1937, p. 249 "70 In my opinion the term "formal system" or "formalism" should never be used for anything but this notion.... I suggested certain transfinite generalizations of fomalisms, but these are something radically different from formal systems in the proper sense of the term, whose characteristic property is that reasoning in them, in principle, can be completely replaced by mechanical devices." (Gödel 1963, note at end of Gödel 1931 as appears in van Heijenoort 1967, 3rd printing 1972:616) So yes, if you take the man at his word, he says the "proper" way to construct a formal system is with mechanical devices. Period. Bill Wvbailey 01:37, 15 October 2007 (UTC) Fine, then explain that in the article, don't just baldly state it. 1Z 11:04, 15 October 2007 (UTC) It is not only Godel's result that is about computers, but the later recursion results of Post, Friedberg, Muchnik, the minimal degree arguments, and most modern recursion arguments about the structure of the Turing degrees are about computer programs. All these arguments are greatly clarified by writing them in a reasonable modern syntax. I did that for a few of the early arguments, but it would be nice to do it for all of them, becuase this stuff is very important and IMO obscure for no reason. One problem is that the recursion theorists write their computer programs in their own computer language, circa 1954, which is incredibly inexpressive. It does not have a "print your own code" primitive, it does not have a reasonable way to express self-modifying code, and any sophisticated data structure has to be described in set theory, which is equivalent to coding in LISP, which is unpopular for a reason. This is why I think it would be nice to take any opportunity to rewrite the recursion theory algorithms in a modern C-like syntax, which I think makes them so clear that they can be understood by anyone who has ever programmed a computer. Just my 2c.Likebox 06:28, 15 October 2007 (UTC) If I understand this discussion properly (use of computer-like models to explain the G.I.T.) I see the difficulties as follows: • RE no O.R.: As wikipedians we're supposed to be using established sources. (I personally try to stick to print media that presumably has gone through a vetting process (I'm leery of self-published articles on the web, for instance). I try to restrict myself to what I can locate in an academic library). • But then there's the problem of examples -- when does an example become a form of O.R. That's a tough one. • RE computer programs vs recursion: I don't understand this either. But I have some theories: • (i) Set theory has been axiomized for at least 100 years (Zermelo 1908: Party-time!). The logic etc derived from it is pretty solid (assuming we accept the axioms). Ditto for the Peano axioms. On the other hand, abstract computational machine-models are NOT axiomatized ... as for why, see History of the Church-Turing thesis. • (ii) Because recursion theory is (kind of, more or less) axiomatized and is closer to (an extension of??) 
set theory and the Peano axioms, it provides a measure for abstract computation-machine models -- thus we have the Church-Turing thesis, not just the Turing thesis or just the Church thesis. • RE self-modifying programs: For your purpose the abstract computational machine model of choice is the Random Access Stored Program machine (RASP model). But the real trick behind all this is the availability of indirect addressing in your instruction-set. There's no O.R. here: as applied to abstract computational models, the notion has been in the literature since 1961. The first abstract-computation theorist to use indirection, as far as I've been able to trace it is Melzak 1961 An Informal Arithmetical Approach To Computability and Computation, Canadian Mathematical Bulletin, vol. 4. no. 3, September 1961. European researchers in the 1950's may have described this first (See Shepherdson and Sturgis 1963, JACM 10:217-255). By the early 1970's you see indirection actively-used by Hartmanis 1971 and Cook and Reckhow 1973. The nut is this: "The indirect instructions are necessary in order for a fixed program to access an unbounded number of registers as the inputs vary" (Cook and Reckhow 1973) And, if you don't have indirect addressing available (as is the case with the simple counter machines and random access machines (RAMs)) designing a universal program U is a bitch; you need to (in essence) Godelize the variables to effect indirect addressing. At least so far I haven't figured out a way to do it otherwise. With indirect addressing in your machine's TABLE of instructions you can write a universal machine U in about an hour. • RE choice of abstract computational machine model: It seems that every abstract-machine "code monkey" has his pet model. Because they have indirect addressing etc., for your purpose I would recommend a look at J. Hartmanis 1971 Computational Complexity of Random Access Stored Program Machines, Mathematical Systems theory 5, 3 (1971) pp. 232-245. Also see Stephen A. Cook and Robert A. Reckhow 1973 Time-Bounded Random Access Machines Journal of Computational System Science 7(1973) 354-375. • RE computer programs: like the abstract-model theorists, every "computer-code monkey" has his pet "code" or form of "pseudocode". To avoid O.R. and every code monkey who comes along hacking your work and putting in their own version, I suggest (if it's possible) to write examples using an established abstract model that you can find in the literature perhaps augmenting it a bit, if necessary. If necessary exhibit "subroutines" (e.g. "multiply", "divide", whatever) and then use these thereafter. (Pick-and-choose the best from Minsky, Lambek, Melzak is pretty hard to use, Shepherdson-Sturgis, Cook and Reckhow, Hartmanis, Schonhage; just make sure you've defined your stuff up front). • For an excellent reference about this abstract-machine approach, with 141 references, see Peter van Emde Boas pp.3-66 "Machine Models and Simulations" in Jan van Leeuwen, ed. 1990, Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, The MIT Press/Elsevier, ISBN 0-444-88071-2 (Volume A). Bill Wvbailey 16:58, 15 October 2007 (UTC) Thanks for the comments, and the references! I get lost in the CS literature. I agree that the model is a Random Access Machine. That's everybody's modern computer, and if you don't have a stored program and pointers (indirect reference, self-modification) it becomes a nightmare to do anything nontrivial. 
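To make the indirect-addressing point above concrete, here is a toy sketch in Python of a register-machine step function with one indirect instruction. The instruction names are invented for the example; it is only meant to illustrate why a fixed program needs indirection to reach an unbounded number of registers, not to match any of the cited machine models exactly.

```python
def step(instr, R):
    """Execute one instruction of a toy random-access machine; R is the register file (a dict)."""
    op, arg = instr
    if op == "INC":         # INC r    : R[r] += 1   (the register is named in the program text)
        R[arg] = R.get(arg, 0) + 1
    elif op == "INC_AT":    # INC_AT r : R[R[r]] += 1   (indirect: the register is named by a register)
        target = R.get(arg, 0)
        R[target] = R.get(target, 0) + 1
    return R

# The same one-instruction program touches whichever register the input names.
# Without INC_AT, a fixed, finite program can only ever mention finitely many registers.
for n in (3, 17, 1000000):
    print(n, step(("INC_AT", 0), {0: n})[n])   # prints 1 for every n
```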
That's the serious problem with recursion theorists computer language. No OR means no new ideas. It means -- don't put up your own crackpot theories, ideas, or syntheses of ideas. It does not mean the presentation can't be clear, concise, and original. As a matter of fact, to avoid plagiarism, it better be original! If you copy text, the guy you're ripping off is going to sue wikipedia. As for "code monkeys" coming in, that's me. If you feel that the style is confusing, by all means, find a clearer pseudocode. If you like recursion theory pseudocode, please write a complete proof in recursion theory. You could finish the recursion Godel proof that someone else started. But I notice that the page has been up for a long time and nobody else did that. Why not? It's because to write up the recursion theory code, you need to write a quine in first order logic. That's what Godel did. It's tricky to write a quine in any language, but this is especially annoying. You have to duplicate a variable to write a quine. It's tricky enough that I notice a guy above said something like "No layman will ever understand Godel's theorem", which might be true for any non-LISP programmer. But actually, it's probably more true of students studying recursion theory than of any lay LISP programmer.Likebox 18:12, 15 October 2007 (UTC) I'm against having anything here that purports to be a full proof of either of the theorems. That belongs maybe at Wikibooks. I'm not against a very high-level description giving an idea of the proof, and I kind of like the computer-based approach as a way of making the arithmetization of syntax and the Kleene recursion theorem appear plausible without getting into detail. But I think Likebox's effort is far too chatty; it needs to be tightened and, ideally, sourced. Also it should be unified with the existing proof sketch (we certainly don't need two proof sketches). --Trovatore 01:35, 16 October 2007 (UTC) Trovatore, I think you are usually right on the money, but this time, I have to disagree. I didn't give a proof sketch. I didn't give a plausibility argument. I gave a full 100% rigorous proof. There are no holes. There are no gaps. Because it focuses on computers, not logic, it only black-boxes quining, not arithmetization. There are two tedious part in conventional proofs, one is arithmetization, the second is the quine. The quine is usually produced by using either Godel's "two x's" thing or by Kleene's fixed point stuff. Neither argument is reasonable nowadays, because quining is a simple operation in C or ALGOL or FORTRAN, or any other not-even-modern language. Quining is completely trivial in assembly language, because you can read your own code. If you use a computer, there is absolutely no explicit need to even talk about any arithmetization of syntax. You can just use any programming language to manipulate explicit strings in TeX. This is arithmetization as we do it every day on wikipedia. I think it's ridiculous in this day and age to talk about arithmetization as if it's a conceptual hurdle. Everybody knows ASCII, everybody knows UNICODE, and everybody knows TeX. The only conceptual hurdle that in Godel's proof is the quining. If you know how to quine, there is no hurdle.Likebox 05:15, 16 October 2007 (UTC) What I'm saying is, even if you're right, a proof does not belong here. The broad ideas of the proof, yes. But not the proof itself. In general your style is too expository, not sufficiently encyclopedic. The expository stuff belongs somewhere else, like Wikibooks. 
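For what it's worth, the "arithmetization is just ASCII" point made above can be put in a couple of lines of Python. This gives a Gödel numbering of formulas-as-strings, though of course not the representability-inside-the-theory results that the actual hypotheses of the theorem require; the function names are invented for the illustration.

```python
def to_number(formula):
    """Read a formula (any string) as a single natural number: base-256 over its UTF-8 bytes."""
    return int.from_bytes(formula.encode("utf-8"), "big")

def to_formula(number):
    """Invert to_number, recovering the string from the number."""
    return number.to_bytes((number.bit_length() + 7) // 8, "big").decode("utf-8")

g = to_number("0=1 -> 0=1")
print(g)               # one (large) natural number coding the formula
print(to_formula(g))   # the formula back again
```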
In any case, we do need to talk about arithmetization to some extent, because the hypotheses of the theorem say that PA is relatively interpretable in T, not that the basic notions of computation are relatively interpretable in T. --Trovatore 05:46, 16 October 2007 (UTC) I see your point. I will respond on your talk page. Likebox 05:48, 16 October 2007 (UTC) I reread Godel's paper last night to see what's going on. It's true--- he does use PA. But he does also assume without any hesitation that primitive recursion includes formal logic, and the "explicit" construction in his paper is far from explicit. He just lists a bunch of subroutines which if he could write as primitive recursive then he can complete the proof. The main routine, in section 2.5 is not done at all, it is only poorly sketched. That's because primitive recursion is a crappy programming language. This is not meant to denigrate the great work. It is obvious he can do it. But it is only obvious that he can do it because mathematical intuition at the time included the idea that primitive recursion is just a computation without unbounded loops. I think that was due to Kronecker. Maybe Godel. I don't know. If I understand you, it was Herbrand's suggestion to Godel to introduce the unbounded operator. He did so in his 1934 lectures to the IAS. He called it his є-operator. As far as I know, Godel was the first, to really peg "primitive recursion", where he did so in his 1931. Another interesting note: his arithmetization method changed quite a bit between his 1931 and 1934. If you have a cc of Davis's The Undecidable that's where you can get a cc of his 1934. BillWvbailey 14:33, 17 October 2007 (UTC) So while the letter of Godel's proof is that you can embed PA, the spirit of Godel's proof is that you can express an arbitrary statement about primitive recursive functions, about computer programs with for-loops, the ones in his proof. I think the spirit is easy to preserve. You can say, the axiom systems must describe enormous integers, it must be have enough operations to define at least one Turing complete function F(n) and its iterations F(F....(F(n))))), and it must be able to express any statement of the form (for all n) property(n) where property is some specific primitive recursive function which extracts the halting state and output of the program. That PA has these properties is obvious to anyone who knows PA.Likebox 16:46, 16 October 2007 (UTC) A very important point of the historical context is the impact on the logicist project, which, having failed to reduce set theory to logic (via the refutation of Frege's system by the Russell paradox) still hoped to reduce arithmetic to logic. The Goedel theorems are widely held to have shown that this is impossible (though I have to confess I never completely followed that argument -- which is not to say I think arithmetic can be reduced to logic; I don't). So we really can't get away from talking about arithmetic, I think. Note also that it's a lot easier to see that a foundationally relevant theory encodes PA, than that it encodes the C programming language. --Trovatore 17:40, 16 October 2007 (UTC) Hmm. You're right about the history. The history is important. If I understood correctly, the idea you're referring to was that while set theory would not have a discretist logical foundation, maybe arithmetic can be given a firm discrete foundation. 
The Godel theorem shows that there is no fixed computer program which can generate all arithmetical truths, so there is no discrete foundation for arithmetic. That's a big blow indeed. But the reason they called it "arithmetic" was to express the fact that the objects are discrete, and that an arbitrary reasonably finite computation can be performed. To distinguish it from "set theory" where you can talk about analysis and geometry. Saying PA back then is the equivalent of saying "computer" today. So I think that the dichotemy can be clearly expressed this way "While the Russel antimonies showed that set theory is not just logic on the universe of all sets, but requires a strictly ramified Cantorian theory of infinite ordinals, many hoped that arithmetic, which does not refer to unimaginably infinite collections like the set of all everywhere discontinuous functions on R, would have an associated computer program which produced all the true statements." Regarding your second point, I think that to a C-programmer, the idea that you can encode the C-programming language in a random-access machine is much more obvious than any logic stuff, especially considering that gcc is almost a proof by construction. Also to any machine-level programmer, the idea that you can embed a random-access machine in PA is even more obvious. I mean, PA can speak about any ordered list of integers and any finite operation on them! The only subtlety is to find an addressing scheme which can refer to an unbounded memory, but you can cook up a scheme in 5 minutes. For me personally, I didn't study logic until way after I learned to program a computer, and I think that's true of a lot of newcomers to the field.Likebox 20:36, 16 October 2007 (UTC) So first of all logicism, as I understand it (which isn't necessarily terribly well) is not particularly about discreteness. The logicist idea is something like this: Mathematics (or at least arithmetic) has no content external to pure logic, understood as "the science of making correct inferences". That is, once you get the definitions right, you should be able to do mathematics with no assumptions, no intuitive revelations, and no empirical discoveries, just following a pure chain of deduction from the definitions. That is, the truths of mathematics are analytic truths, "true by virtue of their meaning", like the statement "all bachelors are unmarried" (Kant's example from the WP article). Whether this notion is even well-defined from the point of view of contemporary mathematical logic is not so clear; it seems to assume as given a notion of correct deduction that is not as clear as it might be. But what does seem clear is that there is no obvious reference to discreteness or computation. --Trovatore 04:06, 17 October 2007 (UTC) This was my second question on your talk page: whether "arithmetic" confines you to "discrete symbols" (i.e. natural numbers). Yes, if we are to take Kleene's definition: "Arithmetic or number theory may be defined as the branch of mathematics which deals with the natural numbers and other (categorically defined) enumerable systems of objects such as the integers or the rational numbers. A particular such system (or the theory of it) may be called an arithmetic. The treatment is usually abstract.... the objects are usually treated as individuals [arithmetic in the narrower sense has to do with + and *, in the wider sense it is the full-blown number theory]. 
He says that the cardinal numbers of arithmetic are aleph0 or finite, whereas analysis deals with the real numbers and other systems of objects having the the cardinal 2^aleph0 (or higher) (p. 29) "Logicism" per Kleene: Russell and Whitehead: "...mathematics is a branch of logic. The mathematical notions are to be defined in terms of the logical notions. The theorems of mathematics are to be proved as theorems of logic." (Kleene 1952:43)). Logicism died because the antinomies killed it -- and the theory of types could not resuscitate it. Specifically: some of the definitions are impredicative (cf p. 44), the axiom of reducibility was suspect, even by Russell himself. "Thus neither Whitehead and Russell nor Ramsey succeeded in attaining the logicistic goal constructively." (p. 45). Weyl 1946 blows it off, saying if you [are silly enough to] accept logicism then why not just accept set theory, which is simpler. "Formalism" per Kleene: "formalism or axiomatic school" (Hilbert)). Formalism: "Classical mathematics shall be formulated as a formal axiomatic theory, and this theory shall be proved to be consistent, i.e. free from contradiction" (p. 53). So formalism pulls into itself more than just "logic" ... but what exactly? What additional formalization did Godel bring to the party; What Godel 1931 brought to the party was the formation rule (concatentation) for the terms and from the terms the formulas, and then the substitution rule that allows you to stuff numbers (as "parameters" or numbers "passed back" from formulas) into variables and his notion of primitive recursion. It's unclear what Godel brought to the party. Apparently before 1931 the notion of "formal system" was highly-developed by Hilbert and Bernays, including "recursion" (witness Ackermann 1928's non-recursive function). Godel 1931 cites "the formal systems constructed recently by D. Hilbert and his co-workers so far has these have been published to the present. Hilbert 1922, 1922a, 1927, Bernays 1923, von Neumann 1927, Ackermann 1924. (NOTE: these are different in my two translations!). By 1934 Hilbert and Bernays had published their Grundlagen der Mathematik. Godel was very clear about the formality of his "system" in his 1934: "For the considerations which follow, the meaning of the symbols is immaterial, and it is desirable that they be forgotten." (p. 53). Kleene 1952 "cleans" this up in his formal system (in his Chapter IV A Formal System (axiomatic number theory)). As does Godel, Kleene 1952 cites works of Hilbert and Ackermann 1928, Hilbert and Bernays 1934, 1939, Gentszen 1934-5, Bernays 1936 and "less immediate sources." "First, the formal system itself must be described and investigated, by finitary methods [his definition: "No infinite class may be regarded as a completed whole. Proofs of existence shall give, at least implicitly, a method for constructing the object which is being proved to exist." p. 63] and without making use of an interpretation of the system." (italics added, p. 69) I.e. it's all discrete symbols and syntax. No semantics whatever, just discrete-symbol manipulation. "Total abstraction from the meaning" (Kleene citing Hilbert, p. 61). There's an interesting discussion of "systems of objects" in Kleene p. 24ff. In particular, he notes that the axioms are defined from outside the system S, and "it is only from outside the axiomatic theory (i.e. in some other theory) that one can investigate whether one, or more than one, or not abstract system S satisfies the axioms." 
This discussion in Kleene is not trivial and takes quite a few pages -- p. 43-65 -- hence cannot be easily summarized (by me). There's a lot about on what basis axioms are to be accepted, i.e. about hypotheses concerning the "actual world" (p. 46) versus containing just "ideal elements" or both (cf p. 55), about the logicist notion of "number" versus the intuitionist and formalist notions, etc. Bill Wvbailey 19:19, 17 October 2007 (UTC) ## recursive enumerability Let A be a set of axioms and $U=\{u_1,u_2,\dots\}$ be the set of sentences undecidable over A. Question: is U recursively enumerable? I.e. is there an effective algorithm for writing down all the undecidable statements of number theory, set theory, etc.? Does it depend on the particular axiom set? If the answer is known maybe it can be added to the article. Obviously you can write down infinitely many undecidable statements by iterating Gödel's diagonalization scheme but I don't know if you can get them all by such methods. Thanks. I assume that by "u is decidable over A" you mean that either u or its negation is provable from A. If I knew that u was either provable or disprovable, then by searching all proofs I could determine which of these two options holds; but if a particular u is neither provable nor disprovable then the search would go on forever. If the set of undecidable sentences were also recursively enumerable, then I could enumerate that at the same time, preventing my search from going on forever. So if the set of undecidable sentences were enumerable, I could decide, given a sentence, whether it is provable, disprovable, or undecidable, which means that the set of provable sentences would be recursive. But for almost every first-order theory the set of provable sentences is not recursive, but only recursively enumerable. Tracing back, this means that the set of undecidable sentences must not be r.e. — Carl (CBM · talk) 12:28, 30 October 2007 (UTC) Ah yes, of course. Thanks. —Preceding unsigned comment added by 75.62.4.229 (talk) 16:11, 30 October 2007 (UTC) ## Modern Proof Recently User:CeilingCrash removed the "Modern Proof" section, without (as far as I can tell) discussion on the talk page. He included a detailed comment for why he removed it, but I think the edit should first be discussed on the talk page, to make sure everyone agrees that the proof is in fact bad before removing it. --DFRussia 11:54, 1 November 2007 (UTC) I agree with removing that section, and I removed it again. That section didn't actually provide a proof - it instead gave a vague argument about "computer programs". It was alluding to a correct proof using the diagonal lemma, but such a proof would need to be written using the correct terminology (representability of computable functions, etc.) rather than vague statements such as "and that it has enough operations (addition and multiplication are sufficient) to describe the working of a computer." The supposed proof of the second incompleteness theorem was "To prove the second incompleteness theorem, note that the consistency of the axioms proves that DEDUCE does not halt, so the axiom system cannot prove its own consistency", which completely ignores the actual issue there, which is the arithmetization of proof. This is not ignored. The arithmetization is sidestepped by using a modern computer, where arithmetization is automatic and a part of day-to-day life.
It's ASCII and unicode.Likebox 21:53, 1 November 2007 (UTC) I don't know whether we need a proof via the diagonal lemma in this article; we can discuss that. But if we are going to claim something is a proof it needs to actually prove something. — Carl (CBM · talk) 13:20, 1 November 2007 (UTC) The proof is complete and correct, as others have verified. Please check the previous discussions first.Likebox 21:53, 1 November 2007 (UTC) Ceilingcrash is correct -- both Turing and Church used the notion of arithmetization in their proofs. It is this plus Godel's use of a fully-specified proof system plus the "shift from "true" to "provable"" (to quote van Heijenoort) that, e.g., distinguished Godel's proof from Finsler's unacceptable proof (1926). Whereas these key notions are probably understandable by most lay readers who've come to this article, where and exactly how the diagonal argument enters is absolutely obscure ... At least for me ... I know it's embedded in the "first" theorem somewhere, but I haven't been able to trace the exact steps because of Godel's difficult presentation: it's discussed in Rosser (1939?) but I haven't gotten that far. There's no doubt that Turing's argument is easier to follow for computer-science folks: his machine busily creates diagonal numbers from its "decider algorithm" working on numbers-as-programs until it fails to do so when its "decider" hits its own number to test. Yes, I would like to see more about the incompleteness theorems' diagonalization in this article. Bill Wvbailey 17:56, 1 November 2007 (UTC) DFRussia was right, i shd have brought the issue up here first. The crux of my argument is that the C-T came some years after G, and uses G's most critical innovations. By using the C-T as a Lemma, we have smuggled in G, which of course makes G easy to then prove, but seems (to me) to mis-attribute the idea and is tail-chasing. About diagonalization, I agree w/ WvB that the diagonalization trick is a quite vivid way to get incompleteness. (An ancestor of the technique can be seen in Cantor, to prove certain elements are not contained in a lower order of Infinity.) I don't think Godel expressed his argument this way; he finds the 'smoking gun' by beefing up the formal system with some elements of a programming language: string manipulation operators like substitute, concatenate, mapping of godel-number to symbol, etc. Once he has this, he has theorems which can build theorems, and make assertions about their provability. Finally, he builds a theorem (G) which denies its own provability. An interesting exercise gives a gist of how this axiomatic "introspection" can be achieved: In any programming language, write a program that prints out precisely its own source code (without resorting to tricks like reading the source from disk or memory.) Hint: you have to be clever about string substitutions, and it is permissible to use ASCII codes to generate characters. Anyone who does this will readily understand how Godel made G. I too would like to see if someone has recast Godel's argument using the diagonalization trick, which is so vivid and comprehensible. updated CeilingCrash 18:42, 1 November 2007 (UTC) I have edited the proof sketch to emphasize the diagonalization more, and to explicitly use the diagonal lemma. I think breaking that out of the sketch here is good, since it allows the reader to ignore how that formula is constructed for a while when reading this proof, and later investigate the proof of the diagonal lemma.
Please let me know what you think; I'm done for this afternoon. — Carl (CBM · talk) 19:09, 1 November 2007 (UTC) Hats off to CBM! This is very nice indeed. Godel's creation of G is constructive, which is nice, because we get to see exactly what it is. But it is subtle and very difficult to follow. The diagonal argument proves existence - which is all we really need for incompleteness - and is much easier to follow. Well done and thank you! (I added Hilbert's 10th problem to the examples just now.) CeilingCrash 19:53, 1 November 2007 (UTC) Hopefully I didn't offend anyone with my original roll-back. I am glad there is now an active discussion. You guys seem to have made a lot of progress on the article over the last hours. Well done --DFRussia 20:58, 1 November 2007 (UTC) (deindent) Hello, I reinserted the modern proof, and I would like to address the objections raised. First of all, the argument is self-contained. It only uses the "Church-Turing" thesis to argue that formal logic is a computer program, which is not the general Church-Turing thesis, but a special obvious case. The argument is simplified by black-boxing the process of quining, which is an exercise in the modern CS curriculum. The argument is otherwise complete. It is designed to be both correct and comprehensible to anyone who knows how to program a computer. It is in my opinion ridiculous to leave out such a simple proof, which is essentially equivalent to Godel and Rosser but much, much easier to understand.Likebox 21:38, 1 November 2007 (UTC) In response to the previous comments--- the persons who wrote them probably don't like this proof, because they have their own favorite proof. This is fine, there is no reason not to have multiple proofs. But there should be no dispute about the correctness of the argument. I included it precisely because it is the shortest, clearest proof.Likebox 21:53, 1 November 2007 (UTC) Please don't be offended, Likebox, but we have consensus with myself, Wvbailey, and (implied anyway) CBM, who went to the trouble of retrieving the baby from the bathwater (the diagonal lemma) and reworking the Sketch Argument. Our objections to the Modern Proof are not stylistic; they are substantive and listed in this discussion. Also, absent a source for it, it is Original Research. Long story short, Church and Turing based their work on Godel, not the other way around. I will roll back the inclusion re consensus; I hope not to spark an edit war. Can you point us to a source for this version of the proof?CeilingCrash 22:44, 1 November 2007 (UTC) (added) On second reading, it is a very elegant way to get at GIT, and it would be nice to retain this exposition in some form. It's not clear to me that we should call it a modern *proof*, since certain notions, such as quining and the fact that a program can always print itself, would require expanding - and this expansion would lead us right back to Godel (or an imitation thereof.) I think if we retitle this section as something to the effect of "GIT as seen from an algorithmic point of view", and note that some of these notions - so pedestrian today - owe their genesis to Godel, it would be a nice section to keep. Basically, if we don't call it a proof i'm cool. Others?CeilingCrash 23:11, 1 November 2007 (UTC) The source is cited, although this discussion expands a little. The objections are not so substantive. They are right about the history, of course. But this is 2007 after all, not 1931. I am not at all offended, just puzzled. Why do you folks not like it?
I really want people to understand Godel's theorem, and I am worried that current expositions are too bogged down in stupid details like arithmetization. This makes it very difficult for nonspecialists. I think every human being should understand Godel's theorem completely.Likebox 23:27, 1 November 2007 (UTC) (Likebox and I got into an edit conflict.) I must confess to having judged this section too quickly. It is stunningly clear and beautiful, and on closer inspection I cannot substantively object to any part of it. (Upon first reading I thought CT was assumed in its entirety.) I wd be happy to see this text included, with the proviso that certain of these notions originated with godel (however commonplace and obvious to us today.) My apologies to Likebox. I would like to see certain bits expanded upon via link, for example to quining and to self-generating programs, i'd be happy to add these links to the text. Oh - a few years ago I wrote a self-printing program in C and showed a printout to my coworker. He asked, "is that the source or the output?" At that moment, I achieved enlightenment :-b CeilingCrash 23:39, 1 November 2007 (UTC) Thanks CeilingCrash. I incorporated a sentence that hopefully adresses your concerns.Likebox 23:44, 1 November 2007 (UTC) My objection (not a strong one) is that a demonstration via machine computation smears two historically-separable events -- (i) a consistency proof re "arithmetic" versus (ii) an undecidability proof re an answer to the Entscheidungsproblem. Here's my understanding of what happened back then (ca 1930's): Godel demonstrated (via his "1st" theorem VI) that undecidable "objects" existed, but he had not produced a specific "effectively-computable object" inside his formal system for our observation and amusement. (He was working with "proof systems", not "calculations/computations"). Thus Godel put to rest the Hilbert question of consistency, but the Hilbert question around the Entscheidungsproblem had to wait until the notion of "effectively calculable" was defined enough to proceed. Church (1934) and Turing (1936-7), had to do two things: (1) define precisely the intuitive notion of "effectively calculable", and then (2) within a formal system and using their notions of "effectively-calculable", produce an "object" that would be, indeed, "undecidable". Church defined a notion of "algorithm" (lambda-calculus computations) and produced an "algorithmic object" that was "undecidable", but he worried about his proof (and soon emended it)). Turing, on the other hand, produced an indisputably "effectively-computable object" (a machine and an undecidable computation to be done by said machine) that even Godel accepted (in the 1960's) as the sine qua non. The consensus is that Godel could have easily gone the extra last step, defined "effectively computable" (for his purposes), and put an end to the Entscheidungsproblem once and for all, but he was just bored with the whole thing and went on to other stuff... What we don't want to do is mix up the reader with respect to these various "Hilbert problems". When introduced into Godel's proof, machine computations add this other dimension ("effectively calculable"). At least that's my interpretation of the crux of the problem. It's not that a great machine-based demo can't offered, it's that such a demo may cause confusion. If anything such stuff should go into Turing's proof. Bill Wvbailey 00:55, 2 November 2007 (UTC) I agree with your sentiment, although not with the conclusion. 
It is a major historical fib to claim that undecidability preceded incompleteness. But it is a shame to not use undecidability to prove incompleteness, because it makes the proof short and comprehensible. The section made no pretense of being historical, it was just a proof of the theorem using modern notions of computability that everyone recognizes. Perhaps a sentence pointing out that the section is ahistorical would be enough. Likebox 20:09, 5 November 2007 (UTC) "Everyone" meaning computer programmers, right? I agree that readability by nonmathematicians is a worthy goal. But I have a difficult time calling it a proof, because it is so vague and handwavy about important details (like what language the programs are coded in, how the desired program transformations are coded, and whether that language is capable of addressing unbounded amounts of memory). —David Eppstein 20:14, 5 November 2007 (UTC) ## "Modern proof" This section is full of things that are either incorrect, completely vague, use the wrong or nonstandard language, or absurd. Examples: "Assuming that a computer exists, and that formal logic can be represented as a computer program" Of course a computer exists! But Godel's theorem is not about physical computers. Moreover, it makes no sense to say "represent formal logic as a computer program". You might not be familiar with the completeness theorem. Godel's completeness theorem is an explicit algorithm to write down all deductions following from a given set of axioms. The algorithm is explicit, and can be written as a computer program. The only "nonphysical" thing about the computer required is the fact that it has infinite memory. That's the only idealization to go from a regular ordinary computer to a Turing machine.Likebox 20:34, 5 November 2007 (UTC) The Quine lemma The actual name of the needed result is the recursion theorem. Quines are just one application of the recursion theorem. The result below does not actually follow from quines, but from an application of the recursion theorem. In short, this is just wrong. No it isn't. The recursion theorem is not the quine lemma, it is a confusing piece of obviousness required to prove that quines exist in recursion theory language. In computer science language there is no need for a proof, because you can easily write an explicit quine. The whole point of this proof is to avoid the recursion theorem.Likebox 20:34, 5 November 2007 (UTC) The Halting lemma: There does not exist a computer program PREDICT(P) There is no such thing as the "Halting lemma". This is completely nonstandard terminology. There is a result relating to the halting problem, which is usually phrased as "the halting problem is unsolvable." If you notice, the proof does not use the actual halting lemma. This is just presented as motivation. proof: Write DEDUCE to deduce all consequences of the axiom system. How is it supposed to do this? Most of an actual proof is spent achieving this! It does this by the completeness theorem. Godel did not prove this in any explicit way, and neither do any modern treatments. This proof is no better and no worse in that regard. Godel just listed a bunch of different algorithms which could be used to make a deduction program, and noted that they are all primitive recursive.
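For orientation, here is roughly the shape of the DEDUCE being argued about, as a hedged Python sketch. The two helpers are invented stand-ins for exactly the contested steps: print_own_source for the quining, and enumerate_theorems for the brute-force proof search that the completeness theorem (or, in Godel's setting, a primitive recursive proof predicate) is supposed to supply. Nothing here settles the dispute; it only shows where those steps sit in the argument.

```python
def print_own_source():
    # Cheap stand-in for the quining step: read this file back from disk.
    # A genuine quine (no file tricks) is what the exercise above asks for.
    return open(__file__).read()

def enumerate_theorems(axioms):
    # Stand-in for a real proof enumerator: a real one would dovetail through every finite
    # derivation from the axioms and yield each theorem it proves.  This stub proves nothing,
    # forever, so calling deduce() below would simply never return.
    while True:
        yield None

def deduce(axioms):
    R = print_own_source()                                  # the program's own text
    target = "The program " + repr(R) + " does not halt"    # the sentence DEDUCE watches for
    for theorem in enumerate_theorems(axioms):
        if theorem == target:
            return    # DEDUCE halts exactly when the axioms prove that it does not halt
```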
Godel stopped writing down algorithms after a certain point, because it was too tedious, and he just noted "the rest of this requires writing a primitive recursive function to do this and this and this", where "this and this and this" is the rest of the algorithm of the completeness theorem. This proof just says flat out "There is an algorithm for deducing consequences of axioms", and does not bother to write any code. The difficulty in Godel's original proof is in the arithmetization, which is required because he did not know what a computer is, and in the quining, which he avoided by a trick.Likebox 20:34, 5 November 2007 (UTC) Let DEDUCE print its own code into the variable R, But DEDUCE is already defined - this is a second, different program. DEDUCE first prints its code into R, then deduces all consequences of the axioms looking for the theorem R does not halt. It halts if it finds this theorem.Likebox 20:34, 5 November 2007 (UTC) If the axiom system proves that DEDUCE doesn't halt, it is inconsistent. If the system is consistent, DEDUCE doesn't halt and the axioms cannot prove it. Why? This is another thing that requires explicit justification but is presented with none. The moment that the axioms prove that R does not halt, DEDUCE halts. Since the axioms can also calculate the finite-time state of DEDUCE, they can follow the operations of the computer program, this means that the axioms also prove that DEDUCE halts. That's a contradiction. So the axioms cannot prove that DEDUCE does not halt.Likebox 20:34, 5 November 2007 (UTC) The assumption is that the axioms can follow the finite time operation of any computer program. This is an assumption in Godel's paper, where he asks that the value of any primitive recursive function can be calculated correctly by the axiom system. In other words, for any primitive recursive function f, the theorem F(n) = the number F(n) is a finite time deduction of the axiom system. This is true in PA, so he just assumes that PA is embedded in the system. The point of PA is only that the value of any primitive recursive function will eventually be calculated at some point in time as a theorem.Likebox 20:34, 5 November 2007 (UTC) If the axiom system is consistent, it cannot prove either ROSSER eventually prints something nor the negation ROSSER does not print anything. So whatever its conclusions about DEDUCE, the axiom system is incomplete. This, again, is just a restatement of what is supposed to be proved. No it isn't. It's a proof. ROSSER prints its code into R, and looks for theorems about its behavior. If it finds a theorem that says "You do this" it does "not this". If it finds a theorem that says "You do not do this" it does "this". This is exactly ROSSER's argument for avoiding the omega-consistency issues, translated into modern language.Likebox 20:34, 5 November 2007 (UTC) To prove the second incompleteness theorem, note that the consistency of the axioms proves that DEDUCE does not halt, so the axiom system cannot prove its own consistency This "noting" does not prove the second incompleteness theorem, which requires a great deal of effort to prove. It only requires a great deal of effort if you do not understand it. If you understand it, it is obvious. There do not exist theorems which "require a great deal of effort to prove". Either you get it or you don't.Likebox 20:34, 5 November 2007 (UTC) R. 
Maimon "The Computational Theory of Biological Function I", arXiv:q-bio/0503028 The reference is an unpublished paper in the arxiv, not any sort of respected logic text. Respectibility is a political decision. it requires a vote.Likebox 20:34, 5 November 2007 (UTC) In short, this "modern proof" could only be salvaged by completely rewriting it. But there is already a proof sketch in the article, and if this one were fixed there would be two proof sketches. In any case, the current state of this section is embarrassing;it claims to be a proof but is just handwaving. Removing the section is the right thing to do. — Carl (CBM · talk) 01:44, 2 November 2007 (UTC) Agree that certain terminology needs tweaking, especially "assume a computer exists." Is there a subtle error in it? I think we shd keep the standard Sketch, but from a pedagogical viewpoint this I think this perspective is valuable provided there are no serious gaps (as CBM claims) and, hopefully, it can be sourced to dead trees. If it is correct, I would be suprised if it's not published. To address one of the alleged gaps, can't DEDUCE proceed by brute force, attempting all rules of inference against all axioms and theorems ? I'd like to see where we could go with some small terminology changes and gap-filling. It may indeed be very easy to understand GIT from a modern perspective. Much of what Godel had to invent, from Godel-numbering to his string manipulation functions, as well as the conceptual decoupling of symbols from meaning, we get for free if we start with the idea of a computing machine. What is admittedly historical heresy may be pedagogical gold. Seems worth a shot, anyway. CeilingCrash 02:08, 2 November 2007 (UTC) We don't need two proof sketches, though. We could add in that a consequence of the incompleteness theorem is that, when everything is formalized properly, a consequence of the incompleteness theorem is that there are Turing machine programs that cannot be proven to halt and cannot be proven not to halt (such as the program that searches for a Godel number of a proof of the Godel sentence p from the proof sketch). The proof sketch in the article and the "modern proof" are not actually very different; the recursion theorem used in the "modern proof" is just the computability analogue of the recursion theorem. There are indeed large gaps in the exposition of the "modern proof". This is evidenced by the fact that he other proof sketch is just a sketch, while this is claimed to be a "proof", but this is much shorter. The reason that the proof of Godel's theorem is usually found difficult is because it is difficult. It doesn't serve the reader to claim the "modern proof" is a proof when in fact it doesn't even use existing terminology correctly. — Carl (CBM · talk) 02:12, 2 November 2007 (UTC) If we repair the terminology, and avoid referring to this section as a 'proof' (which Like has already changed), would that be acceptable? I see it as a perspective, a pedagogical tool. If I had to explain GIT to a modern human, for instance, I think i would do it this way. CeilingCrash 02:43, 2 November 2007 (UTC) If you would like to rewrite it somewhere in a sandbox, I'd be glad to comment on it. The fact that incompleteness of axiom systems is related to recursive unsolvability of problems is well known, and a section on that would certainly be worthwhile, especially if it didn't try to prove the theorem but just showed how the two are related. 
In the end, any actual proof has to deal with Godel numbering, arithmetization, etc. At a bare minimum, the terminology would need to be made standard, and the vagueness would need to be reduced. — Carl (CBM · talk) 02:50, 2 November 2007 (UTC)

Sounds good to me, I'll start a sandbox in my user:Talk soon and post the link here ... CeilingCrash 02:58, 2 November 2007 (UTC)

I have already rewritten it here. — Carl (CBM · talk) 03:00, 2 November 2007 (UTC)

### Rewrite

(outdent) I think I have a dead-tree source for this, though I can't access the whole article (not a member): A Proof of Godel's Theorem in Terms of Computer Programs, Arthur Charlesworth, Mathematics Magazine, Vol. 54, No. 3 (May, 1981), pp. 109-121. http://links.jstor.org/sici?sici=0025-570X%28198105%2954%3A3%3C109%3AAPOGTI%3E2.0.CO%3B2-1&size=LARGE&origin=JSTOR-enlargePage CeilingCrash 03:14, 2 November 2007 (UTC)

I'll email you that article; the proof is well known in any case, so there is a source somewhere. I rewrote the section to be more explicit about what's going on. It now gives a sort of proof sketch of how to use the undecidability of the halting problem to prove Godel's theorem. It currently glosses over ω-consistency, which could be fixed. I hope other people will add links and smooth the text some. — Carl (CBM · talk) 03:24, 2 November 2007 (UTC)

I am quite familiar with the completeness theorem. It does not show that recursive functions are representable in arithmetic - that proof of representability is the first third of the proof of the incompleteness theorem. There is simply no way to prove the incompleteness theorem without some sort of arithmetization result. And Godel's original proof did explicitly develop a formula for Bew... The idea that the recursion theorem can be avoided by a reference to some sort of "quine lemma" is interesting, but there is no result well-known as the quine lemma, and the result needed here is not just the existence of a quine but the existence of a fixed point for a certain recursive operator. Only the recursion theorem is going to give you that. Statements such as "It only requires a great deal of effort if you do not understand it. If you understand it, it is obvious." seem to indicate some unfamiliarity with the idea of a mathematical proof. The idea is to justify statements to a level where they are convincing to the intended reader. "Either you get it or you don't." is not the way things are proved in mathematics. Can you say what you disliked about the rewritten version? I tried to keep the informal tone while not having all the terminological errors and vague claims of the "modern proof". — Carl (CBM · talk) 21:14, 5 November 2007 (UTC)

The person who wrote this has never written a quine. If you actually spend the time to write a quine, then it will become clear to you that you can turn any program into a quine by adding a quining subroutine. The subroutine will print out all the code before the subroutine, then print itself out, then print out all the code after the subroutine, and it is obvious to anyone who has written a quine that this will produce a printout of the whole code. Manipulating the printout can then simulate the code, and turning the printout into an arithmetic object is automatic, since the printout is already an enormous ASCII integer. This result does not require a recursion theorem or a fixed point theorem. It only requires about half an hour's effort to sit down and do it.
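For readers who have not tried the exercise, here is one way it can go: a minimal Python quine, followed by the variant that stores the program's own text in a variable instead of printing it, which is the step the programs above call "print your code into R". Neither snippet is from any editor's comment; both are just illustrations.

```python
# The two lines below, saved as a file by themselves, print exactly their own
# source when run (this comment is not part of the quine and is not reproduced).
s = 's = %r\nprint(s %% s)'
print(s % s)

# The same trick stores the program's own text in a variable instead of
# printing it -- the "print your own code into the variable R" step:
t = 't = %r\nR = t %% t'
R = t % t  # R now equals the two-line program "t = ...; R = t % t" as a string
```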
It is assumed in the proof that the reader has already done this at some point in their life, otherwise the proof will be incomprehensible, since the central event in the proof is the printing out of your own source-code. Once you understand this, and this is not completely trivial, the rest of the proof is trivial. The whole game of Godel and Turing is really just a game of self-reference, and the self-reference is best stated in a modern programming language where it is self-evident. Likebox 20:57, 7 November 2007 (UTC)

Dude. That result doesn't require a fixed point theorem: it is a fixed point theorem. Face it, when you write about this sort of stuff, you, Likebox, are acting as a recursion theorist. You're just doing so using nonstandard language. But here at Wikipedia, our task isn't to invent or popularize new language to describe existing results: it is to explain as clearly as possible those results using the standard language already in use. Anything else would be original research. —David Eppstein 21:39, 7 November 2007 (UTC)

It is not original research. It is original exposition. Those are two different things. Learn to distinguish between them. Likebox 22:13, 7 November 2007 (UTC)

## another question

Does the second incompleteness theorem apply as well to second-order logic? In particular, can the classical Peano axioms (i.e. with full second-order semantics for the set quantifier in the induction axiom) prove their own consistency? Thanks. —Preceding unsigned comment added by 75.62.4.229 (talk) 10:01, 2 November 2007 (UTC)

There are two possible answers, based on what you want "incompleteness" to mean in the absence of the completeness theorem, which does not hold for second-order logic. The following are true; pick the one you want.

• Peano arithmetic in second-order logic has exactly one model. Thus it is semantically complete - every sentence in its language is either a second-order logical consequence of the second-order PA axioms, or the negation of a second-order logical consequence of second-order PA. In this sense, there is no incompleteness theorem for second-order PA.

• There is no complete, effective proof system for second-order logic. Given an effective proof system, the proof of Godel's incompleteness theorem gives an example of a true sentence of second-order PA that is not provable in that system. So in this sense the incompleteness theorem continues to apply. Unlike the first-order case though, this isn't a special property of theories of arithmetic; no second-order theory that has an infinite model can be given an effective proof system that proves all the logical consequences of the theory. — Carl (CBM · talk) 12:09, 2 November 2007 (UTC)

Let me expand on Carl's statement that "There is no complete, effective proof system for second-order logic." Contrary to the situation in first-order logic, the set of consequences of a finite set of axioms may not be recursively enumerable in second-order logic. Worse, the set of consequences may not even be definable in the language of second-order logic. In which case, the "formula" stating that the second-order theory is consistent is not only unprovable, it does not even exist in the language. JRSpriggs 18:45, 2 November 2007 (UTC)

## Not a proof

IS TOO A PROOF! Likebox 23:40, 5 November 2007 (UTC)

I see that Likebox has reinserted the "modern proof".
I insist that this is not a proof, and cannot be a proof until notions such as what programming language is being used and how it is encoded are nailed down, which would make it not dissimilar in length and content to the standard proof. At best this is a proof sketch. It is also phrased in such a way as to make it sound like original research, although I am sure one can find printed sources for similar expositions. I would like to request that the section either be removed again (my preference), or that it be called a proof sketch, the word "modern" removed, sources provided, and it be incorporated more smoothly into the existing "proof sketch" section rather than trying to stand alone as a separate section. —David Eppstein 21:12, 5 November 2007 (UTC) Of course there is no exact line between a "proof" and a "proof sketch" (or more precisely, if one attempts to draw such a line, one will find that virtually all published proofs are "sketches"), so let's not waste effort arguing on whether it's a proof or a sketch. I think we can all agree that a valid proof/proof sketch along these lines is possible (whether Likebox's specifically is such a one, I have not checked), and I agree with Likebox that such an approach more easily connects with many people's intuitions, replacing the tedious proofs of the recursion theorem and arithmetization with such things as correctness of compilers -- things that readers haven't personally checked at a level that could be called "proof", but which they're more willing to believe on the basis of experience. However I agree with David that the proof needs to be made less personal and idiosyncratic, and that it should be integrated with the other proof. --Trovatore 21:25, 5 November 2007 (UTC) (e/c) Please note that the "Relationship with undecidable problems" section is the same idea, just rewritten to use more standard terminology. It presents the same proof outline (although it is does a better job of alluding to the fact that any undecidable, r.e. set can be used). It is intended to be a sort of compromise, including the ideas from the flawed "modern proof" section without the expositional problems I described at length above. — Carl (CBM · talk) 21:27, 5 November 2007 (UTC) I see that there is controversy, and I would like to comment. I respect all sides here, and I am not trying to be a bully. I just wanted to explain. First I did come up with this proof, but I am not the first person to come up with it. It is ideosyncratic because I don't want to use language which a layperson cannot follow. That is the crucial part, and replacing the discussion with a discussion which uses "standard terminology" is just code for making it obscure. Please do not make a clear discussion obscure. Using standard terminology has another insidious effect in that it protects those who benefit from a monopoly on knowledge. This authority-reinforcing tendency is natural, but in my opinion, unfortunate. As for technical precision, this proof is exactly as rigorous, no more and no less, as Godel's original exposition. I took it largely from Godel's paper, except replacing arithmetization with a modern obvious arithmetization (ASCII and TeX), and replacing statements about deductions with statements about computer programs that do the deducing. The proof is identical, but is much easier to follow, because modern readers know what a computer is. There really is no difference in the content in any way. 
Perhaps the reason it strikes people as implausible is because they think "it can't possibly be that simple!" Sorry folks, it is.Likebox 22:52, 5 November 2007 (UTC) The difficulty is not whether computers can do things - the halting problem is easy to prove because it is just about what computers can do. Godel's theorem requires proving that some universal model of computation can be faithfully represented within arithmetic. In his original paper Godel explicitly demonstrated this. Now people may be more willing to take it on faith, but any actual proof must still address arithmetization. After Smullyan's influence, the most common "modern proof" is the one using the diagonal lemma. — Carl (CBM · talk) 23:13, 5 November 2007 (UTC) Look, you're absolutely right. I didn't prove that arithmetic embeds a computer, and Godel did show that arithmetic embeds a "computer" (meaning a computer running the algorithm of the completeness theorem and only the algorithm of the completeness theorem) in a somewhat more explicit way. But if arithmetic doesn't embed a computer, it's a pretty sorry system! If that's true, then you can't make statements about the behavior of computer programs in arithmetic. That's why I stated the theorem about axiom systems which can describe computation, not as a theorem about Peano arithmetic. To prove that Peano arithmetic embeds a computer, you can use Godel's system--- write an integer as a list of prime powers: $n = 2^{n_0} 3^{n_1} 5^{n_2} 7^{n_3}$ and note that some first order sentence extracts each $n_i$ from n, since the property of being the n-th prime is primitive recursive. If you want to embed a turing machine, let $n_0$ be the state of the head, let $n_1$ be the position of the head, and let $n_2,n_3 ...$ be the content of the tape. Then you need to construct a first order sentence which says that the update of the quadruple $(n_0, n_1,n_{n_1})$ will be $(n_0',n_1 \pm 1,0\;\mathrm{or}\;1)$ with a long list of options to describe different outcomes depending on the state of the head. Do I need to go translate that explicitly into first order sentences? It would require a lot of fiddling around with pairing functions and prime-extraction functions. What's the point? If this couldn't be done we would just throw away Peano arithmetic.Likebox 23:38, 5 November 2007 (UTC) That's completely backwards - Peano arithmetic is justified as a theory of the natural numbers. The relationship between arithmetic and computability is not a motivation in the mere definition of PA, and PA would still be of interest if it didn't represent computable functions. The entire point of a rigorous proof of the incompleteness theorem is that it has to show that a sufficiently strong theory of arithmetic can represent something - either some basic computability or some basic proof theory. I copyedited, but didn't remove, the stuff you added. Please work towards a compromise version, rather than merely reverting. Remember that (1) we don't need to prove everything here and (2) the general consensus in this article is not to dwell on the Rosser extension of the theorem, since all true theories of arithmetic are ω-consistent anyway. Try to focus on making the section match the rest of the article. There is a lot of work left to do, but I will leave it alone for tonight. — Carl (CBM · talk) —Preceding comment was added at 23:48, 5 November 2007 (UTC) Likebox, please quit reverting, and get the "political" chip off your shoulder. None of us wants to hide the Goedel theorems from the common man. 
But an encyclopedia is not a teaching tool, it's a reference work. You need to make your additions consistent with that, or convince us that a departure from that principle is justified in this case. --Trovatore 23:56, 5 November 2007 (UTC) That's your point of view. The effect of this point of view is exactly to hide the Godel theorems from the common man.Likebox 01:06, 6 November 2007 (UTC) ### Spelling nit Please, everyone, quit writing "Godel", even in the talk pages; it's a flat misspelling. You don't have to go to the effort of putting in the ö character -- just write "Goedel", which is correct. The umlaut is not a diacritic in the ordinary sense, but a marker for the missing e. --Trovatore 00:00, 6 November 2007 (UTC) ## " language that should have been retired decades ago" Moved from User talk:CBM Look, I sympathize with your position, but do not edit a proof you think is invalid. Think about it for a few days first. Your edits make it imprecise and vague, by reverting to very, very unfortunate recursion theory language, language that should have been retired decades ago. I worked hard to make it correct and precise.Likebox 23:52, 5 November 2007 (UTC) It isn't our place to decide what terminology should have been retired when, and a consequence of WP:NOR is that our articles use standard terminology. In the case of Recursion theory means terminology such as "represent all computable functions" and "recursively enumerable set". If you want to change the world, or make people use new terminology, Wikipedia is not the place for you. In the meantime, please try to work towards a consensus version. I have no objection to the idea of a sketch of a computational proof, but several people including me have expressed reservations about the current wording. — Carl (CBM · talk) 23:58, 5 November 2007 (UTC) I am not asking that the recursion language be scrapped. I am asking that you include a reasonable proof in computational langauge in addition to a proof in recursion language. This might have the effect of causing people to abandon recursion theoretic language in the future, but that's not for me or Wikipedia to decide. But there are both people who believe in recursion language, and people who do not. It is ridiculous to assume that recursion language is "preferred" because many people who do research in the field are forced to use it. You must represent all points of view in proportion to their number.Likebox 00:24, 6 November 2007 (UTC) Also, I don't really care what the language is, so long as the computer programs are spelled out precisely, and their properties are manifest. The edits that you have done have made the proof vague.Likebox 00:26, 6 November 2007 (UTC) From my perspective, there is no significant difference between computer science terminology and recursion theoretic terminology. If you believe there is, I encourage you to present some references to computer science texts that use your preferred terminology, so that the other people here can see how the terms are defined in those texts. — Carl (CBM · talk) 00:40, 6 November 2007 (UTC) That is not true. You can easily write self modifying code in computer science language, you can't do it easily in recursion language. The computer science texts are "The C programming language" by Kernighan and Ritchie, and the countless papers in the 1970's on Random Access Machines. The existence of pointers and indirect adressing are crucial, and are hard to mimic in recursion theory. 
The insights of the 1970's need to be recognized, although they have yet to change any of the recursion theory texts. That doesn't matter. You can keep all your favorite language. Just don't delete someone elses favorite language.Likebox 01:05, 6 November 2007 (UTC) Kernighan and Ritchie is not a text on theoretical computability, it's about the C programming language... There are lots of computer science texts on computability theory. In my experience, they use virtually the same terminology as mathematics texts on computability theory. If you can provide some counterexamples, I will be glad to look through them. As I said, Wikipedia is not the place to right the wrongs of the world. If you can't justify your additions in terms of the existing texts and standard expositions of the topic, it will end of removed. It's up to you to give it more justification than just "I like it". — Carl (CBM · talk) 01:09, 6 November 2007 (UTC) Can someone please clarify what is meant by 'computer science language'? I am aware that there are two names used for the field 'recursion theory' -- that, and 'computability theory', the latter being the more modern and by some preferred. Do you mean a particular programming language (like C, Perl, Python, ADA, whatever...) Thanks. Zero sharp 01:13, 6 November 2007 (UTC) Only Likebox can say for sure what he or she means by "computer science language", but it refers to terminology, not a programming language. Based on edits here and to Halting problem (also see my talk page), it seems Likebox dislikes the standard terminology of recursion theory, preferring to use terms like "computer program" instead "computable function", etc. — Carl (CBM · talk) 04:47, 6 November 2007 (UTC) The distinction between "computer science language" and "recursion theory language" is a distinction between the way in which you describe an algorithm. A recursion theoretic description does not have enough operations which allow you to print your own code in any simple way, nor does it have an easy way to state things like "print Hello World". This is an important terminology distinction, which makes the proof I wrote clear and comprehensible, and makes the proof that is presented in the current version an unacceptably vague sketch. The current text, while it follows the outline of my proof, does not do it very well. The new text does not prove the Rosser version of the incompleteness theorem at all, and requires an omega-consistency assumption which is not acceptable in any way. The current text does not give a rigorous proof of anything at all, because it does not specify the computer programs precisely enough. It is just a bad sketch. The discussion I gave is a complete and correct proof, as rigorous as any other presentation. To replace a correct proof with a vague sketch is insane. Finally, this proof has the gall to pretend that Kleene et al. and alls y'all actually understood the relation between halting and Godel's theorem before you read the proof that I wrote. While, in hindsight, after a few weeks of bickering, it is clear to all you folks that quining and the fixed-point theorem are identical, that Godel and Halting are identical, none of you knew exactly how before I told you. If your going to use my insights, use my language and cite the paper I cited. The only reason this is not original research is because some dude wrote a paper in 1981, and because of the reference provided. 
To pretend that this proof is somehow implicit in the standard presentations is a credit-stealing lie. It was precisely because the standard presentations are so god-awful that I had to go to great lengths to come up with this one. So I revert to the old terminology, and I ask that you hold some sort of vote among neutral parties. If you are going to prove the theorem, the proof I gave is the shortest, the clearest, and comes straight from Erdos's book. It describes the computer programs completely and precisely, so that any programmer can actually write them. It proves the strong version of the theorem, and it is logically equivalent to Godel's original proof. But the language and presentation are not due to Godel or Kleene. They are due to that dude in 1981 and to a lesser extent to the author of the more recent preprint. Certainly not to any recursion theorist.Likebox 06:22, 6 November 2007 (UTC) Ok I know I'm late to the party here, but Likebox, can you please cite (and my apologies if you have already done so elsewhere but I"m having a REALLY hard time following this discussion) 'Erdos' book' and "that dude in 1981" Zero sharp 06:40, 6 November 2007 (UTC) and, for the record, can you explain why this is not Original Research? Thanks. Zero sharp 06:42, 6 November 2007 (UTC) It's not original research because somebody published it already in 1981. That's this guy, from the discussion above: A Proof of Godel's Theorem in Terms of Computer Programs, Arthur Charlesworth, Mathematics Magazine, Vol. 54, No. 3 (May, 1981), pp. 109-121. http://links.jstor.org/sici?sici=0025-570X%28198105%2954%3A3%3C109%3AAPOGTI%3E2.0.CO%3B2-1&size=LARGE&origin=JSTOR-enlargePage The logic of the discussion follows that paper, but the language follows the reference cited. Erdos's "book" is God's book of proofs, the shortest proof of every theorem. It's not a real book, but a mathematician's description of the most elegant proof of a theorem. I am pretty sure that the proof presented here is the book proof of the Godel's theorem.Likebox 06:47, 6 November 2007 (UTC) I put the reference to Charlesworth in. I would have put it in earlier had I known about it.Likebox 06:53, 6 November 2007 (UTC) One thing I agree with Likebox about is that this proof concept is not original research (although the citation Likebox gives isn't ideal). It is very well known that Godel's theorem can be proven from the existence of an r.e. undecidable problem. Such a proof does have the advantage of highlighting the role of effectiveness in the incompleteness theorem. But the language Likebox is proposing is very nonstandard - for example, his "quine lemma" is not a lemma about quines but actually Kleene's recursion theorem. I have no objection to the idea of a computational proof in the article, but it needs to use standard terminology. The text also fails to read like an encyclopedia article, fails to integrate into the rest of the article, and obscures what is going on by dwelling on specific programs too much. I keep editing it to resolve those issues. — Carl (CBM · talk) 12:32, 6 November 2007 (UTC) I once saw Bob Solovay present a similar proof on a blackboard in about 15 seconds, so it's not as if logicians have never heard of Turing machines. Maybe this situation is best resolved (parties willing) if Likebox were to write up a computer-based proof as a wikibook module, which the encyclopedia article could link to. 
Wikibooks have more room for exposition than encyclopedia articles do, wikipedia should link to wikibooks in preference to traditional books when it can, and wikibooks.org definitely needs more coverage of logic-related subjects. Another thought: spin off a new article called Godel incompleteness theorems: Modern proofs or some such, and link it back to this article. Then everyone can discuss it on that page. The fact that there is a published article (I've read it now) should alleviate the O.R. concerns. [Note: that is a pretty good article. Its lead-in commentary follows to some degree Rosser 1939 "An Informal Exposition..." (found in The Undecidable) and Gödel 1934 §6 "Conditions that a formal system must satisfy..." (p. 61ff in The Undecidable) ]. Another thought if everyone howls about the first suggestion: create a page in your "user space" as follows User:Likebox/Gödel modern proof and move your proof there until its ready to be moved over to the new sub-article. (If you want, I will create the page for you). Others can contribute on its talk page. I want to look at it as if it were an article, not as part of your talk page or in history. Bill Wvbailey 15:16, 7 November 2007 (UTC) We don't need another spinoff article/POV fork/whatever it would be. We already have one proof sketch spinoff page; Wikipedia is an encyclopedia, not a compendium of numerous expositions of the same proof. Someone else suggested wikibooks as a place Likebox might be able to publish his proof. There is no difficulty here with original research; the proof is well known. The issue is that Likebox eschews interest in working towards compromise [4]. The proof Likebox is presenting is exactly the proof from Kleene 1943, just rewritten to avoid terminology that Likebox rails against [5]. Even if Likebox were right that the entire computer science and math logic community uses poor terminology for basic computability theory, we're not here to cleanse the world of bad terminology. It's ironic that Likebox motivates his recursion-theoretic proof by saying 'there is not a single shred of insight to be gained from learning any recursion theory'. — Carl (CBM · talk) 15:55, 7 November 2007 (UTC) I hear you. I think a "semi-formal" (i.e. a really good-quality) "non-recursion-theory" version (i.e. "computational-machine"-based) is not going to be trivial, especially if it does not invoke undecidability of the "halting problem" as a premise (a cop-out IMO). I've deconstructed (and compressed into bullet-points) part of the Charlesworth paper, part of Rosser 1939 and part of Gödel 1934 §6 at User:wvbailey/Explication of Godel's incompleteness theorems -- it's just my holding/working place, I have no further intentions for it at this time. But the deconstructions are quite interesting and may be useful. Bill Wvbailey 17:23, 7 November 2007 (UTC) Although likebox will no longer touch this encyclopedia page with a ten foot pole, I would like to comment that I came up with my proof indepedently many years before I actually studied any logic or any recursion theory. I did read "Godel Escher Bach" though when I was in high school, so it wasn't a research project, just an exposition project. Knowing this nifty proof made me curious to see how the professionals do it, so some years later I went and audited a whole bunch of recursion theory classes, and went to logic seminars for two years so I could follow their version of the thing. It was a painful and frustrating experience. 
All their results were trivial, and expressed in godawful obsolete language that disguised the triviality! But at least the recursion theorists understand it, or so I thought. But what I discovered, to my horror, was that with the exception of a very few very senior faculty members, none of the students and precious few of the postdocs or junior faculty actually followed the proof of even a result as elementary as Godel's theorem. This made me very sad, and I didn't know what to do.

The way I discovered this was by giving my proof of the theorem to a graduate student, who understood it. He then asked the (incompetent) junior faculty running the recursion theory class "Can you prove that it is undecidable whether a program runs in exponential time or polynomial time on its input?" The junior faculty fiddled around with the "s-m-n theorem" and the "recursion theorem" and the "fixed point theorem" for a long, long time, and could not do it. This is unbelievable! It's a trivial thing to prove.

The proof is thusly: Suppose PREDICT(R) predicts whether R runs in polynomial time or exponential time on its input. Let SPITE(N) print its code into the variable R, calculate PREDICT(R), all the while ignoring N; then if PREDICT says "exponential", SPITE runs a polynomial algorithm on N, and if PREDICT says "polynomial", SPITE runs an exponential time algorithm on N.

Why couldn't junior faculty do it? Because the Kleene fixed point theorem puts the source code to SPITE on the god-damn input-stream of the Turing machine. It's then hard to say how long the input is. You need to conceptually separate the input part which is the source-code from the input part which is the input, and junior faculty got confused. This sorry story is still repeating itself in every recursion theory department in the country and in the world.

So here's wikipedia. That's what wikipedia is for--- for disseminating knowledge that's well known. As far as a proper proof, stop kidding yourself. The proof I wrote and that is linked is more rigorous than anything else in the world, and everybody knows it. Likebox 19:02, 7 November 2007 (UTC)

Thank you for putting your work at User:Likebox/Gödel modern proof so I can look at it. Bill Wvbailey 21:55, 7 November 2007 (UTC)

Cool, I'll get a look at it there too. The critical point - and I missed it the first time through myself - is that L's proof does not assume CT or Halting. Rather, it proves it so quickly you might miss it. SPITE() is designed to defy a test of halting, H. It feeds itself (using the 'quining' key to self-reference) into H, and does the opposite of what H predicts. Spiteful, that function. Get Russell a hanky. Conceptually, to my way of thinking, the proof is as simple as this:

a) A statement (S-expression in LISP, piece of code, whatever) can output its own source (quining). Try it in any language, it's fun and non-trivial. This gives the key to self-reference. Rather than printing, the output string could be assigned to a variable named Myself. This is counter-intuitive as hell but within the grasp of any student eager to experiment. The point is source code can mention 'Myself' or 'This source code' in a very real and executable way.

b) We can now defeat any proposed Halting test for source code. The function Spite() deliberately breaks the H-test, by using the self-reference trick in a) to feed itself into H, peek at the result, and either hang or crash - whichever makes a liar out of H.
c) It is then a reasonably short trip from Halting to Incompleteness (see main article.) Whatever one's view of Likebox, the simplicity and clarity of this exposition is stunning. Given this outline, any 'ordinary' mathematician could fill in the gaps to reconstruct a proper proof without, I believe, any notable insight. Any undergraduate, with an affinity for such matters, can grasp the key insights and defend the main thesis. Amazing. The Theory of General Relativity was once hopelessly inaccessible even to many professional scientists, as was Galois Theory. They are not so today. Over time the conceptual landscape changes and new paths are sought to the peaks. Entirely new hills rise and new perspectives emerge from which formerly obfuscated landmarks can be clearly seen. This process is to the greater good. Any one who doubts this is encouraged to try to extract F=MA from Newton's Principia. I believe if the reader carefully steps thru L's proof, and ignores trivial matters of terminology (they are easily remedied), s/he will see that it is a) self-contained and b) free of significant gaps. Erdos used to say, when a proof has gotten to the crux od the matter with maximum simplicity, that it was in God's book. If there is a God and He has a book, this proof is in it. updated CeilingCrash 06:55, 8 November 2007 (UTC) Nobody disagrees that a proof of the first incompleteness theorem can be obtained using computability; this proof is in the article at this moment. The problem is that Likebox's actual exposition is unsuitable for the encyclopedia, as numerous people have pointed out. I'm sure some compromise wording can be found. — Carl (CBM · talk) 13:51, 8 November 2007 (UTC) When people of good faith compromise with narrow-minded people with a language agenda, the result is an incomprehensible piece of junk. One must defend good exposition from such attacks.Likebox 18:18, 8 November 2007 (UTC) ### Reference - Kleene 1943 Since the question of original research came up, I looked up a reference for this proof. It dates back at least to Kleene, 1943, "Recursive predicates and quantifiers". Theorem VIII states: "There is no complete formal deductive theory for the predicate $(x)\bar{T}_1(a,a,x)$." As the T is Kleene's T predicate, and the bar indicates negation, this is exactly a statement about the halting problem. Kleene explicitly says: "This is the famous theorem of Gödel on formally undecidable propositions, in a generalized form." This paper is available in the book The Undecidable as well. — Carl (CBM · talk) 12:49, 6 November 2007 (UTC) Dude, this argument was never about content. It was about language.Likebox 19:15, 7 November 2007 (UTC) ## Created Archive 03 The talk page was getting very long, so I have moved the discussion prior to the current active one (about the new 'proof') to Archive03 and added a link in the Archive infobox. Zero sharp 16:17, 6 November 2007 (UTC) ## Semantic proofs We now have three proof sketches in the article: the main sketch of the syntactic proof, the section on computability, and the section on Boolos' proof. I pulled Boolos' article today, and it does seem worth mentioning briefly. I think we should rearrange things so that all the proofs are in one section. Within that, the first subsection should concern arithmetization, then the syntactical proof sketch, then a section on semantic proofs including the computability proof and Boolos' proof. 
Boolos points out in his paper that Goedel gives another semantic proof in the intro to his paper on the incompleteness theorems. The current organization is less than ideal. — Carl (CBM · talk) 16:42, 8 November 2007 (UTC) ## Seeking a peace accord Let me apologize for the length of this. I wrote this for several reasons, not the least of which is that I was first to delete Likebox's proof (then reversed my view), and feel like i sparked off a contentious disagreement which led to a temporary edit-block and some ill feeling. Also, I have spent roughly equal time in both schools of thought (my dad was a logician), and appreciate both the austere beauty of formal mathematics and the abstract dynamism of the computational view. The conflict between the two schools is visible on any university campus, but in the end it is illusory as both schemas are ultimately bound by an isomorphism. I have a mixed background in math and comp sci, and I think the discord here is just a result of different perspective. First of all, let me say that L's proof is by no means a formal, mathematical proof (nor a sketch of one) satisfactory in the sense of mathematical rigor. The proof offends the formalist sensibilities, and seems a strangely worded reiteration of points already made elsewhere. But let me back up a bit. We all know that any real, published proof is not a formal proof but an abbreviation. An actual formal proof of even the simplest theorem would be tedious and impossibly long (in Principia, Russell famously takes 106 pages to get to 2+2=4.) Except for proofs in Euclidean geometry, mathematicians opted long ago for proofs which are really roadmaps to proofs. They serve a social function; whereby a researcher announces to his peers how to do a formal proof. A good proof contains only the non-obvious bits, and omits everything obvious. This is by no means laziness but necessary, else our journals would be filled with obvious explications and one would have to scan 100's of pages of text to get at the crucial ideas. But obviousness is relative to the audience (even among mathematicians, this changes from generation to generation.) Don't get me wrong - Godel's proof, historically, belongs to the mathematicians and the formalist, mathematical perspective should get top billing. But other audiences are worthy of our attention as well. Godel's Theorems and CT were written prior to the first real computer program (a credit to their author's inventiveness.) Many young people today have grown up writing computer programs (I wrote my first at 12, which may be been unusual in 1978, it certainly is not so now.) With mathematics, we require our young to spend much time learning to do arithmetic and algebra. It is only through this direct doing that we can internalize these ideas, and develop intuition to guide us in with new problems. It is this training that defines our sense of the 'obvious'. A person who has been writing programs for many years develops a similar intuitive sense about computing machines, what is possible, what is easy, what is hard, what is obvious. The lambda calculus and turing machine are their native ground. So, to a programmer certain things seem obvious which are not so to a pure mathematician (and vice-versa, i'm sure.) We can see this from the discussion. For instance, at one point an editor objects to Likebox's claim that, basically, 'quining' (the term is due to Hoefstadter) and variations thereof are easy and self-evident. 
The editor objects - correctly - that Likebox is is really pulling in the recursion theorem. Likebox responds - also correctly - that a person who has actually written quines instinctively knows this to be true and can write such a program in a few minutes. Both Likebox and the other editor know how to make this 'quine'. They simply disagree on whether the reader can be expected to or, even if the reader senses it, does he demand proof anyway. They disagree on the reader's expectations. It comes down to a question of what is obvious to whom, and this is mostly a function of background. The complete, unexpurgated proof will never be written. What comprises a meaningful proof-map is dependent on the intended audience's background and demand for rigor. The mathematicians certainly come first and foremost, but I think it's OK to provide proof-maps for different audiences in wikipedia. In fact I think it is a very good idea to do so, provided we make clear we are shifting gears in a major way. For a programmer, several things (which are monumental and visionary unto themselves) are going to be obvious for the simple reason he has already worked with them. The arithmetization of a symbol system is automatic; symbols are routinely coded into integers, and often there are multiple layers to such a mapping. The ability to create 'quines' and variations thereof are going to be evident (after the reader works a few problems on it.) Even the connection from Halting to Incompleteness - while not obvious - is well within the range of mere mortals to fill in. In short, a reader with lots of training in programming is already 1/2 way to intuitively grasping godel's proof. To provide this reader the proof-map for the rest of the trip seems to me a valuable contribution to an article on godel's proof, and doesn't detract from the more formal and historical proof-map. It's not that programmers have some superior abstract perspective. It is precisely due to mathematicians like Turing and Von Neumann that these computational abstractions were refined, simplified and made real such that any kid can (and does) play with a computer, that certain of these abstractions are now common ground to a new generation. And what could be better than that? So I advocate we tighten up the wording in L's proof, maintaining the current structure but substituting or clarifying certain language - while granting L a certain license to opt for plain explanations rather than terms familiar to a mathematician. Also we should give some kind of forward about whom the section is intended for, and that it is not in competition with the traditional treatment. "Yes, I know the garden very well. I have worked in it all my life. It is a good garden and a healthy one ... there is plenty of room in it for new trees and new flowers of all kinds ... I am a very ... serious gardener." - Chance, Being There CeilingCrash 17:13, 8 November 2007 (UTC) I don't see even a theoretical benefit to the use of quines and specific programs. It obscures the general phenomenon, which is the relation between provable membership in a set vs. the recursive enumerability of the set. (This is still ignoring the nonexistence of the "quine lemma" etc.) Could you comment on the current wording, explaining what you think is unclear about it? It's exactly the same idea, just expressed more reasonably. Sure, there is nothing unclear about the present wording on its own terms; it's just a matter of terminology. 
By "theoretical benefit" i take it you mean 'hypothetical benefit', as we're not claiming an improvement to theory. I'll be more specific in a little while, but the mathematical perspective comes from a higher level of abstraction, where we speak of 'provable membership in a set' and 'recursive enumerability of the set'. The programmer doesn't think in terms of sets, but rather algorithms and their outputs. The programmer would not say a set is recursively enumerable, rather he would say "any of these will be eventually produced by this program." To give a simpler example, a mathematician defines a prime number as having no (non trivial) divisors. A programmer sees it as the output of the sieve of erastothones, or perhaps "what's left out when you multiply everything by everything else." It's a dynamic view. To be sure, the static view is the more formally satisfying because the dynamic view can be seen as shorthand for what really is a static sequence of steps. (Set theory came first. One wonders, however, how things might have gone differently had Church continued with the original intent of Lambda Calculus, which was to ground mathematics on functional, rather than set-theoretic terms.) From the dynamic view - which very admittedly smuggles in a great number of results formally belonging to recursion theory and other areas - the key ideas are "you can make a quine" and "no algorithm can decide haltingness." These are totally non obvious. All the rest is by no means formally trivial, it is just of little surprise to the reader coming from this view. CeilingCrash 17:57, 8 November 2007 (UTC) CeilingCrash has said many wise and correct things. But he assumes that the reason I am trying to incorporate CS language in the recursion theory pages is because I am unfamiliar with the methods of mathematicians or do not appreciate their "austere beauty". That is not true. He also assumes that the reason that some recursion theorists erase the proof is because they honestly think that the language I am using is confusing. That is also probably not true. It would take a truly illiterate person to not understand that the proof that I gave is correct. Even if the person thought it was a sketch, there is nothing preventing them from presenting another proof alongside. But they insist on either deleting the proof or replacing it with language which makes it incomprehensible to a coder. The real reason is that the people who enter the recursion theory world are forced, by a systematically constructed power structure, to adopt a certain language for their algorithms. This language is inferior in every way to any modern computer language. But if you do not adopt it, you are not politically allowed to call your algorithm a proof, no matter how precise and no matter how clear. You could even write out the code explicitly, and it is not a proof. You could run the code on a computer and have it work, and it is not a proof. It is only a proof if you use the recursion theoretic language to the letter, with all the $\phi_e$ and $\phi_{\phi_e}$. If it ain't got the $\phi$s it ain't no proof. To understand why this inferior language dominates, one must look to the academic structures in the 1950s when the modern recursion theory departements were constructed. The people who created these departments wanted a divide between their field and that of theoretical computer science, with Knuth and Minsky on one side, and Nerode and Woodin on the other side. 
They wanted to establish their field as legitimate pure mathematics, not as a branch of applied mathematics. To facilitate this artifical division, the recursion theorists constructed a wall of terminology to prevent "code-monkeys" from claiming algorithmic proofs for recursion theoretic theorems. This means that the recursion theoretic world will not accept as legitimate any algorithm which is not written on a Turing machine in Turingese. In order to get a paper accepted in a recursion journal, even to this day, one must write the algorithm so that it is not readable by a person who codes. This is not an accident. It is there by design, and this motivation was explicitly stated by the founders of the respective fields. It was to prevent pseudo-code from infecting mathematics journals. This division, I am certain, cannot survive in the open marketplace of ideas provided by the internet. But to help along its demise, I try to provide clear proofs in modern computational language of all the classic theorems of recursion theory. The proofs are so much simpler in the natural language of computer programs, that I think recursion theory language will be about as popular in the future as latin.Likebox 19:24, 8 November 2007 (UTC) It remains the case that Wikipedia is not a forum for correcting historical injustices, real or perceived. That should be done elsewhere. Perhaps you could publish a paper somewhere about the faults of recursion theory and its notation. — Carl (CBM · talk) 19:34, 8 November 2007 (UTC) At least you have finally admitted your position. Now we have a real conversation.Likebox 19:46, 8 November 2007 (UTC) I have been quite frank about my position. "Wikipedia is not the place to right the wrongs of the world." [6] — Carl (CBM · talk) 20:04, 8 November 2007 (UTC) (deindent) Now that we understand the issues, I would like to admit a little white lie. I never expected the proof to stay on this page. Nor did I expect the quine proof of the Halting theorem to stay on the Halting problem page. In the first few weeks, I had about a 75% confidence level that it would be deleted by somebody who has a recursion theory PhD, and that this person would attack the proof as illegitimate on language grounds. It took a lot longer than I expected, mostly because it takes a while for the political sides to emerge. But now that the battle is fought, I would like to ask a more profound question. What is the purpose of Wikipedia, really? Why can anyone edit any page, even a person who does not have a PhD, like myself? Why is an expert opinion, rendered by a tenured professor of equal value to the opinion of a lay-reader? The reason, I believe, is the oft-stated principle of radical inclusiveness. It is to make sure that ideas to not stay hidden, even if there are well established power structures that strive to make sure that they stay hidden. This is the reason for my additions, and this is why I defend them from the academic powers that line up to attack them. This is also the reason that I am optimistic about Wikipedia. Because even when the additions are not incorporated, the resulting debate exposes the power structure, and, over the long term, will right the historical wrongs. It has already happened for Ernst Stueckelberg, and it is happening for Wolfgang Pauli, whose genetic ideas were decades ahead of their time. Historical wrongs were put right long before the internet existed. Look to the history of reconstruction in the U.S., or the history of the conquest of the Americas. 
But the internet makes the process much faster. I am pretty sure that anyone who has been following this debate will abandon the recursion theoretic proofs. Not everyone. But enough to make a difference. So I remain optimistic, whatever happens here. For the interested reader, here's the only real proof of Godel's theorem you're going to find on Wikipedia:User:Likebox/Gödel modern proof. Likebox 20:08, 8 November 2007 (UTC) I have a merge in the works to suggest of Carl's and Likebox's exposition (the key difference being only that Likebox's version sketches Halting (and points to a construction which defies a given halt-detector H), collapses some of Carl's finer points for brevity, and inserts some narrative explication of my own. I may not be making sense right now as i'm very tired :) In any case, I hope to arrive at a hybrid acceptable to consensus that is helpful to computer-oriented folk. I am quite sure Likebox will scream bloody murder, but my goal is really to improve the article for the theory-impaired but algorithm savvy. Once editors here take another pass at it, i am sure we can pin it down. In any case, one hell of an interesting discussion and I, for one, have learned immensely. By the way, I had come here by way of the Lambda Calculus article; the introduction to that article had been, er ... plagiarized so had to be deleted. This left a headless article, so i rapidly put together an intro. I am not an expert in this area, so would appreciate some extra eyes on it Lambda_calculus Thanx —Preceding unsigned comment added by 24.128.99.107 (talk) 04:46, 9 November 2007 (UTC) Oh, and to respond briefly to Likebox, I don't think he is acting out of a lack of understanding of formal math. I believe Likebox is of the view that godel's proof is attainable in a considerably more direct (thus conceptually superior) way, and that his proof is as complete as any of Godel's. My personal and currently indefensible view is that he right in this, that Godel can be reformulated more clearly from newer ground. However, I am not prepared to enter that debate, haven't thought carefully enuf about it, and in any case that issue is beyond the scope of a mere tertiary source as wikipedia. The reasons I claim for inclusion here are pedagogical, in that the train of reasoning is tractable to coder people. So I'm sticking to that. 24.128.99.107 05:03, 9 November 2007 (UTC) Note that one of the issues iwth Likebox's proof was brevity; if you remove the unneeded restatement of the incompleteness theorem, the resulting "proof" was far too short, ignoring many important details. But I'll be glad to look at rewrites and we can find some compromise language. I still haven't gotten any actual feedback about the language that is in the article right now, which is the same proof that Likebox wanted, just written up better. — Carl (CBM · talk) 13:23, 9 November 2007 (UTC) Oh my god! It's too clear! Look, just leave it alone, and put your sketch in another section. As I said, it leaves out no details at all, and is more rigorous than anything else in the world.Likebox 19:17, 9 November 2007 (UTC) There is no need to have large numbers of different proof sketches in the article. There are already three of them! And none of them (including yours) are self-contained and complete. Mine is self-contained, complete, more rigorous, and shorter than all of the others.Likebox 23:01, 9 November 2007 (UTC) As for unclear parts of your proof, you might begin with the phrase "axioms will follow its operation". 
Axioms, like all other formulas, are immobile. Could you explain what parts of the current wording you dislike, if you are amenable to compromise, or refrain from commenting here if you have no desire to compromise? Wikipedia is not the place to push your POV about how proofs should be written. — Carl (CBM · talk) 19:26, 9 November 2007 (UTC) What part of the current wording I dislike: first there is no computer program. You don't describe a computer program, so you can't have a proof. Second there is no mention of the Rosser version. That is unacceptable when the Rosser proof is a minor tweak, and the Rosser version is just as easy. Third, it is written obscurely, and cannot be read and understood by a layperson or an undergraduate. Taken together, these objections are fatal. "Axioms will follow its operation" is explained later in the sentence. It is no more unclear than "Axioms will describe a standard model of computation" in your rewrite. I have no problem with compromise, so long as the compromiser will read the proof, understand it, and think about it before rewriting it, preserving the ideas and the explicit exactness of the discussion. I think I have already gotten it to the point of no-further improvement possible, but I might be wrong.Likebox 23:01, 9 November 2007 (UTC) Wait a minute - the quote you give ("Axioms will describe a standard model of computation") is not in the article at all, and it has not been edited since your comment. If you want to give feedback about the article, it would help if you only quote things that are actually there. I agree that the current version could use a little more detail about arithmetization; this is one of the places where your original writeup is particularly weak. Another place where both are weak is on the role of ω-consistency, because of a lack of detail about the T predicate. The article as it stands doesn't spend any time on the Rosser version; if you would like the article to cover that, it will need to do it in more than just one section, but the previous consensus was not to cover it. The claim it is written obscurely is likely just your opinion that all standard terminology is obscure. Which particular part do you feel is obscure? It is not necessary to give actual code to have a precise proof, as was established by Rogers' book in the 1960s. — Carl (CBM · talk) 02:58, 10 November 2007 (UTC) Obscurity is a matter of---can a person who is not familiar with your field read this and understand what is going on? This requires a taste-test of the presentation. I have taste-tested my presentation on lay-people, and they understand it.Likebox 06:45, 11 November 2007 (UTC) (deindent) I did not delete anybody else's text, so I am not imposing a view. I just added a different style of proof, which is short and to the point. I tried to be clear, but maybe I wasn't perfect. I tried to be accessible, maybe I fell short. I don't know. The issue is the wording of the section about the relationship of the theorem with computability theory. For details, see above. — Carl (CBM · talk) 19:30, 9 November 2007 (UTC) also see User:Likebox/Gödel modern proof for the alternate exposition. The main difference is that I used explicit spelled-out computer programs.Likebox 23:03, 9 November 2007 (UTC) I think, i am pretty sure, i have some tenuous handle on both party's perspectives. The improvements I see for the current section are as follows. Again, i am speaking in terms of pedagogy, not precision. 
When I say "language X is better" I only mean "language X will be more groovy to programmers.") I put some bolding for skim readers, not for emphasis. The undecidability of haltingness is invoked from CT. This result is rather shocking to programmers. Likebox does not invoke CT, but gets at un-halting with 'quines' and shows how to construct a counterexample. The programmer is no longer shocked, and given this part of proof, can actually go off to construct a counter-example generator to defy any proposed-halting-detector. He understands. To a mathematician, it seems bizarre because 'quining' sneaks in vast areas of recursion theory, all sorts of other stuff seem left out, it's 'hand-waving' in all the wrong spots. To the coder, who is at heart a constructionist who understands in terms of process, it shows what to do, and that's his mode of understanding. He doesn't want an existence proof of 'quines' (there is one), he wants to know how to make them, which is easy to show. All the rest he sees as details that can be looked up. We should give more references to these 'waved-over' details, which i shall try to do. The undecidability of halting is really the heart of the matter, to me. Computational people find intuitive that the Undecidability of Halting implies Incompleteness of Number Theory. They know that a computer is governed by mathematical laws. They think, "you can do mathematical proofs as to whether a certain program halts. If there is a hole in 'haltingness', there is a hole in math." Hand waving, absolutely. That is how these people wave tho. In short, I think we should expand the part on Halting, compress the part on Halting->Godel, shoot up the middle in terms of style between Likebox's more casual style and CBM's more formal, and whatever i forgot to list just now. Let me take a crack at this, as i have assertions to back up. I am quite sure i wont make the final exposition, but probably a way-point most of which is discarded. But lemme try. I'd urge all editors (myself included) to not get too attached to early drafts. We are (i think) aiming for an effective pedagogy on this directed at coders. To say "we don't think coders will groove on X" does impune the quality of X, and each exposition is beautiful within it's own domain, and we are very much guessing as to what coders groove on. Oh, and the title of the section, i think shd change. I really have to log off else my partner is going to eject me into the night in a very real and formally decidable way ! Let me take a crack tomorrow (my version will steal heavily from both.) CeilingCrash 01:05, 10 November 2007 (UTC) I disagree that there is any established consensus for a proof aimed primarily at "coders". I'm not even sure what that means. This is, first and foremost, an article about mathematics. — Carl (CBM · talk) 02:50, 10 November 2007 (UTC) The incompleteness theorem is not a PhD level recursion theory subject as Likebox seems to imply. It's a basic theorem from mathematical logic, taught in any self-respecting introductory undergraduate course in that subject. I think it's best if the presentation stays in standard mathematical terminology instead of computer terminology, though the article should (and does) link to related results in computability theory. It's true that some computer-inspired proofs (like the one Likebox cited) have been published, so Likebox's proof is not OR. 
It's also true that logic has been redone from the ground up many times (see sequent calculus, categorical logic, type theory, or ludics for yet more examples), so there are many ways to look at the subject. But my impression is that most introductory logic textbooks use the traditional approach (predicate calculus, FOL, etc), as is natural because the incompleteness theorem is presented along with other basic theorems (completeness, compactness, Lowenheim-Skolem, some model theory, maybe some set theory) that don't fit so well into the computer presentation. See, for example, "A Mathematical Introduction to Logic" by Herbert Enderton. The class I took in college used that book, and it covers the topics listed above and should be pretty readable to anyone who made their way through Gödel Escher Bach. Basically I agree with CBM that Likebox's proof is better off in an expositional wikibook that wikipedia could link to. I don't understand Likebox's insistence on putting the computer proof into the encyclopedia article (especially the main article about the theorem) and I think devoting much space to its presentation goes against the "undue weight" principle of following what the majority of sources (in this case logic textbooks) do, with a shorter mention of less common approaches. I actually think the article's "proof sketch" section should be shortened a lot further, since it links to another article with a longer proof sketch (perhaps the computer proof could be presented there somehow). It's enough for the main article to explain informally what the theorem says, its significance, its historical context, etc, plus a brief summary of the proof's main ideas. The article is currently too cluttered and should be streamlined in general. —Preceding unsigned comment added by 75.62.4.229 (talk) 05:05, 10 November 2007 (UTC) I am skeptical that the comments above are sincere. Since this is a political discussion, perhaps we should limit it to users who do not hide behind a veil of anonymity?Likebox 21:36, 10 November 2007 (UTC) Actually, on closer reading, maybe they are sincere! Likebox has already reformulated all of mathematics to his own satisfaction from the ground up only using computers and not referencing uncountably infinite sets except as rhetorical devices. It doesn't take as much work as you think. It was essentially done by Godel (closely following Hilbert), when he constructed L. L is defined as the class of all objects predicatively constructed by computer programs deducing from the axioms and running for an ordinal number of steps. It forms a model of Zermelo Frenkel set theory, all of usual mathematics. This seems like a strange idea, but when the ordinals are countable, the computer programs just have lots of while loops which run a countable number of steps. If you take V=L for sets, define proper classes corresponding to what are normally called the "true" power sets, declare that all infinite power classes are Lebesgue measurable, and take as an axiom either that all well-orderable collections are measure zero if you feel conciliatory, or that all ordinals are countable if you are feeling incendiary, you get a good-enough model of mathematics that certainly includes everything any non-logician ever does, including most small large cardinals. It just does not have any truly uncountable sets. These are all proper classes. Likebox does not care if you agree or disagree with his formulation of mathematics, because Likebox did it only to satisfy himself that it can be done. 
But he is happy to contribute his decade old proof of Godel's theorem, because it is not original. The only arguably new thing about it is using quining to sidestep Kleene, and that has almost certainly been thought of before.Likebox 22:54, 10 November 2007 (UTC) The comments are sincere, though it's quite possible that they're incorrect and/or not well-informed, since I'm not an expert on this stuff (that's why I said maybe an expert can correct me). But if you're such an expert, why do you have such trouble with the usual textbook presentation? It worked ok for me (I had a one-semester class using the book that I mentioned). Also, I'm not the anonymous one here: I'm disclosing my personal IP address instead of making up a meaningless nickname like you did (not that there's anything wrong with that). Dude, I'm not an expert in any way. I just have my opinion. I'm not trying to impose it on other people. I just told you my way of doing math.Likebox 00:46, 11 November 2007 (UTC) Also, as I tried to make clear, everyone has a problem with the usual textbook presentations. They are incredibly hard to decipher, even for the so-called experts. To see, try to get them to prove that it is undecidable whether a program runs in polynomial or exponential time (without using Rice's theorem).Likebox 00:48, 11 November 2007 (UTC) But I think trying to jam an unorthodox proof into the article against other editors' judgment (some of whom are experts) counts as trying to impose your views on other people. Otherwise there wouldn't be an RFC. And I don't see what the textbook presentation of the incompleteness theorem has to do with the running time of programs--this isn't a complexity theory article and it's irrelevant that complexity theorists working on complexity topics tend to use different tools than what logicians use on logic topics. Finally I personally thought the incompleteness theorem presentation in Enderton's book was fine. It's a popular book and I think most users (teachers and students) are happy with it. So I can't go along with the claim that "everyone" has a problem with the usual textbook presentations. I can go with "there exists a user with such a problem" which is entirely different, but even if everyone has a problem: as CBM says, Wikipedia is not the place to right the wrongs of the world. Maybe your best bet is to write your own textbook and get it published. —Preceding unsigned comment added by 75.62.4.229 (talk) 06:24, 11 November 2007 (UTC) (deindent) The theorem I was stating about running time is most definitely not a complexity theorem, but a recursion theoretic one, even though it superficially sounds like a complexity theoretic theorem. The theorem is about undecidability of running time, but it really is a version of Rice's theorem, that it is undecidable whether a computer program's output has any definite property. For example, it is undecidable whether a program outputs 1 or 0, and it is undecidable whether a program's output depends on its input. All these things are undecidable for the simple reason that you can SPITE them, but the method in the textbooks is made murky by mixing the input stream with the variable where you print the code. Not to mention unreadable by a nonspecialist. The proof of Godel's theorem is a single sentence: No computer program P can output all theorems of arithmetic, because then a program which runs P looking for statements about its own behavior can easily and explicitly make a liar out of P. 
If we are not allowed to say this on this page, there is something wrong. Likebox 06:45, 11 November 2007 (UTC)

As mentioned I'm ok with having a computer proof sketch in the proof sketch article, though the one currently in your user page needs some reorganization and style adjustment (e.g. a theory proof shouldn't describe things in terms of techno-artifacts such as cd-roms). CBM and others have made further suggestions that seem good. The computer proof should get less space than the traditional proof by the undue weight principle. I don't think it should go in the main article.

As I said, I will keep the proof on User:Likebox/Gödel modern proof, and not touch this page at all. Likebox 19:07, 12 November 2007 (UTC)

moved Adler's response to next section for continuity. Wvbailey 21:28, 14 November 2007 (UTC)

## Response to RFC

I came to this page in reaction to an RFC. I've spent an hour or so reading through this discussion, and can't claim to understand every point. Some years ago I could have claimed to understand at least one proof of Gödel's incompleteness theorems but it would take me longer than I'm prepared to devote to this to regain that understanding. What I think I do still understand well is what Wikipedia is supposed to be. I would simply urge all participants in this debate to consider whether their view of what should be in this article complies with policies on original research, reliable resources, not being a text book and verifiability, the last of which opens with: "The threshold for inclusion in Wikipedia is verifiability, not truth". Phil Bridger 21:58, 12 November 2007 (UTC)

Nobody is arguing about verifiability of correctness. The key issue is whether it is appropriate to use explicit computer programs in a proof, and whether such language is simpler and clearer. Likebox 01:23, 13 November 2007 (UTC)

My point is that the decision on that has to be made on the basis of Wikipedia policies. If the use of an explicit computer program in a proof can be sourced to a verifiable reliable source then let's have it in the article. Otherwise leave it out. Wikipedia is not the place to argue philosophical issues about what constitutes a valid proof. Phil Bridger 10:10, 13 November 2007 (UTC)

The use of explicit programs has a long history, and this debate is a mirror of an old and longstanding debate in academia. I have left the editing of the article to others, and placed a suggested compromise on the page User:Likebox/Gödel modern proof. Wikipedia is a place where certain questions need to be resolved. What constitutes a valid recursion theory proof is one of those questions. It is not a peripheral issue--- it determines whether simple proofs can be presented on a whole host of other pages, or whether proofs are going to be cut-and-paste from textbook presentations or absent altogether. Textbook proofs are reworked by secondary authors, and they are, as a rule, the worst proofs in the literature. They are verbose, confused, and inaccurate. It would be a shame if the policy of "no original research" was expanded to "no simplified exposition", because then it will be hard to find anyone to write any proofs of any nontrivial result. In its day, Britannica included up-to-date discussion of current research, and was not a mirror of the contemporary textbooks. This is part of the reason it was so valuable. Issues of accuracy are easier to address on Wikipedia, since there are thousands of referees for any discussion.
Issues of clarity, however, are more insidious, because they have no unique right answer.Likebox 19:41, 13 November 2007 (UTC) I'm sorry but Wikipedia is not the place to resolve the issue of what constitutes a valid proof. As you say this is a long-standing debate in academia, and that is where the debate needs to be, not in an encyclopedia. Phil Bridger 20:10, 13 November 2007 (UTC) How are you going to write the encyclopedia? It won't get written. If there is more than one way to do something accepted in academia, there should be several presentations on your encyclopedia.Likebox 23:02, 13 November 2007 (UTC) There is not actually a longstanding debate in recursion theory about what is a valid proof. In particular, since the 1960s it has been very common to avoid writing recursion equations or pseudocode in proofs, and the style developed by Rogers is used by virtually all publications in recursion theory today. I don't understand the claim that textbook proofs are verbose, confused and inaccurate; they are typically much improved from the original proofs in research papers, which are often myopic, focused solely on obtaining a particular result. — Carl (CBM · talk) 20:36, 13 November 2007 (UTC) You have already made your position clear. Many people believe that you are wrong on this matter, but there is no need for everyone to agree. Write textbook stuff, and let others write review paper stuff. Criticize on notability and verifiability, not on language or style.Likebox 23:05, 13 November 2007 (UTC) It's perfectly reasonable for editors to be concerned about the way articles are worded. I see no need to have duplicate expositions in the article, and I don't see much support so far in the RFC for duplication. You continue to claim that there is more than one way this is done in academia - could you present a math or computer science text that you are claiming as a model for your sort of exposition? Could you present any evidence that there is a long-running dispute on this issue in academia? — Carl (CBM · talk) 23:16, 13 November 2007 (UTC) Nope, I am not going to do any of that. I nevertheless maintain my position.Likebox 02:19, 14 November 2007 (UTC) Sorry, the reason I do not do this is because I do not believe in the authority of experts.Likebox 02:33, 14 November 2007 (UTC) There is no duplication. You do not prove Rosser, I prove Rosser. Your presentation is imprecise, mine is precise. There is no contest. Your presentation is inferior in every way. That does not mean it should go. That means it should stay, but not overwrite mine.Likebox 02:29, 14 November 2007 (UTC) I'm sorry, I know we are supposed to comment on content, not on editors, but how much longer are we going to suffer the ravings of this LUNATIC, who has stated repeatedly that he is _not_ interested in consensus, _not_ interested in compromise and who has nothing (it seems) to contribute but bloviating ramblings about how 'his' proof (oh, but it's not OR) is THE BEST PROOF IN THE WORLD EVAR? Is there no recourse for an editor being so C L E A R L Y disruptive? —Preceding unsigned comment added by 71.139.31.151 (talk) 07:46, 14 November 2007 (UTC) An exposition like User:Likebox/Gödel modern proof is not adequate for an encyclopedia article on a mathematical theorem. It does have the advantage of connecting several topics that are generally considered cool on slashdot.org. 
It has the disadvantage of being vague to the point that it would be almost impossible to point out an error in the proof without actually disproving the incompleteness theorem. Phrased as an outline of an existing, peer-reviewed proof this would be OK. Unfortunately, exposing something clearly so that laypeople can understand it is not the same thing as conjuring up an illusion of comprehension. Unfortunately, it seems that only mathematicians understand the difference, after many years of having the mistakes in their own proofs pointed out to them by tutors. The problem is aggravated by the fact that this article has a relatively high profile. In fact, in the past it has been a target of crackpot edits. Too many people have wasted too much time on this discussion. --Hans Adler 20:48, 14 November 2007 (UTC)

spacers added Wvbailey 21:32, 14 November 2007 (UTC)

### section break

Thanks to Hans Adler for belling the cat. I added a cite of the Charlesworth article to the proof sketch section. Can we say we are done now? —Preceding unsigned comment added by 75.62.4.229 (talk) 15:09, 15 November 2007 (UTC)

The 'proof sketch' section is not meant to be a trivia section with every published proof included. The section on Boolos' proof already needed to be merged elsewhere. I'll work on that later today. — Carl (CBM · talk) 15:15, 15 November 2007 (UTC)

The proof Charlesworth gives on p. 118 is quite close to the text here (and it's worth noting that Charlesworth does not use quines, but uses a proof of the undecidability of the halting problem similar to the one currently in that article). So I moved the reference up to the section on computability. It's a good reference for an undergrad, and spends a great deal of time on arithmetization. — Carl (CBM · talk) 15:36, 15 November 2007 (UTC)

Sounds good to me. 75.62.4.229

The cat is alive and well. He will only speak more once someone who has not already spoken agrees with him. Likebox (talk) 06:43, 27 November 2007 (UTC)

DNFTT —Preceding unsigned comment added by 71.139.26.69 (talk) 06:56, 27 November 2007 (UTC)

Of course I'm not a troll! I believe sincerely in what I write. Do not attack others personally because you have a different position. I am shutting up until someone else (preferably more than one person) is willing to agree with me in public, because while I think that I am right, I also am feeling very lonely on this page. Likebox (talk) 20:25, 27 November 2007 (UTC)

So you're open to 'consensus' and 'compromise' as long as people agree with you. That's obviously some bold, new meaning of those words I'm not familiar with. You claim here you believe what you write, yet above you say: "never expected the proof to stay on this page. Nor did I expect the quine proof of the Halting theorem to stay on the Halting problem page. In the first few weeks, I had about a 75% confidence level that it would be deleted by somebody who has a recursion theory PhD, and that this person would attack the proof as illegitimate on language grounds. It took a lot longer than I expected, mostly because it takes a while for the political sides to emerge. But now that the battle is fought, I would like to ask a more profound question. What is the purpose of Wikipedia, really? Why can anyone edit any page, even a person who does not have a PhD, like myself? Why is an expert opinion, rendered by a tenured professor of equal value to the opinion of a lay-reader?" If _ever_ there was a case of WP:POINT, this is it.
Whether or not you have anything valid to contribute (and I am most pointedly _not_ weighing in on this either way) it is so HOWLINGLY obvious to me that you have some kind of axe to grind with the Super Secret Cabal of Recursion Theorists, and that you insist, despite numerous, repeated attempts to counsel you otherwise, that WP is the place for it. It isn't. Either participate in the discussion calmly, constructively and NON-DISRUPTIVELY or someone will have to refer you to the appropriate venues for mediation and probably (it is hoped) sanction. —Preceding unsigned comment added by 67.118.103.210 (talk) 22:28, 27 November 2007 (UTC) You're right that I don't give a rats ass about consensus or possible sanctions. But I am sincere in what I write. I think it would be super-duper-great if a proof of GIT can appear in Wiki. It can't? Too bad. I did my best. I didn't expect the proof to stay, not because I was making any sort of *point*, but because it is simple, correct, well-understood and clear, and despite all that, it's not in the books. This means that recursion theorists are politically supressing this type of argument. It's not conspiracy, it's more like a blend of elitism and incompetence. While I think I am ethically obliged to present the proof and defend it to a degree (otherwise people might think that there is a mathematical problem with it), I do not feel ethically obliged to come to a compromise. You folks sort it out, I'll be outie.Likebox (talk) 02:25, 28 November 2007 (UTC) Likebox, I nearly replied to the anonymous poster who referred to DNFTT, and to your reply. Here is why I didn't. 1. He was obviously right in that the best strategy in this situation was to treat you in the same way that one treats trolls. 2. You were obviously right in that you are not a troll. 3. These two points seemed so bleedingly obvious to me, that I hoped everybody would refrain from replying, because it just wasn't necessary. As you seem to have accepted, grudgingly, we have sorted out what to do with your proof, and I appreciate your commitment to wait for a day when you think your chances are better. I sympathise with you. Once one has become obsessed with something like that, just doing something else and not thinking about it is the best thing one can do, but it is also very hard. In case you find it impossible (but only then), I would suggest that you spend part of the time improving your presentation of the proof (orthography, fix the obvious mistakes like explicitly assuming that there are quines and then using the stronger implicit assumption that there is a quine which can do a certain thing, ...) and then looking for an alternative way to make your proof known. Since you probably have a background in computer science rather than mathematics it may also help you to remind yourself that many papers in theoretical computer science and theoretical physics would never have got their mathematical parts (as published) through peer review in a decent mathematical journal. So what you are trying to do can be seen as an intercultural project like trying to defend the whales in a Japanese Wikipedia article on whale meat. I would love to be proved wrong in my example, but I think there is no chance. You may find it hard to make your peace with the fact that that is how Wikipedia is supposed to work. I know that I do. --Hans Adler (talk) 09:54, 28 November 2007 (UTC) You are right in all your points, except for two: 1. 
I do not have any background whatsoever in anything, not in math nor in CS, and I do not believe in peer review. 2. The obvious mistake that you mention is not a mistake at all. It is trivial to turn any program into a quine. Likebox (talk) 22:36, 28 November 2007 (UTC)

## translation in van Heijenoort book

This article and the separate article On Formally Undecidable Propositions of Principia Mathematica and Related Systems both say that the translation in van Heijenoort's book was done by van Heijenoort himself, but I think that is wrong. Could someone who has the book please check? On p. 595, V.H. says "The translation of the paper is by the editor,...". — Carl (CBM · talk) 22:17, 15 November 2007 (UTC)

Hmm, thanks. I must have been thinking of one of the other translations in the book, probably the Begriffsschrift. —Preceding unsigned comment added by 75.62.4.229 (talk) 22:26, 15 November 2007 (UTC)
https://mathematica.stackexchange.com/tags/index/hot
# Tag Info 45 I think Leonid's answer deserves to be expanded upon. Most other languages are not symbolic, and thus the "variable name" is not something one needs to keep track of --- ultimately the interpreted or compiled code is keeping track of pointers or something. In contrast, in Mathematica the Head of an expression is arbitrary. This is somewhat along the lines ... 34 General usage Here is what I think Using strings and subsequently ToString - ToExpression just to generate variable names is pretty much unacceptable, or at the very least should be the last thing you try. I don't know of a single case where this couldn't be replaced with a better solution Using subscripts is also pretty bad and should be avoided, except ... 21 My answer is based on a modification of a binary heap. Basically the construction looks something like this. We start with a binary tree: Notice that if we label the nodes breadth-first, the labels have an interesting property. Each parent node $n$ has two children, $2n$ and $2n+1$. This also works in reverse: the parent of node $n$ is node $\left\... 17 Preface Below, you will find two different solutions. For understanding the problem itself, the first, iterative solution is better suited since it gives insight in how the solution can be found without directly executing the instructions given as input. Iterative Solution Detailed explanation To explain the idea behind this approach let us work with a ... 16 Many index-specific operations can be implemented via MapIndexed with a level specificaton. Your Power example can be written as: MapIndexed[#1^(#2[[1]]*#2[[2]]) &, test2D, {2}] If you want better readability of indices you can define an auxiliary function: myPower[x_, {n1_, n2_}] := x^(n1 n2); MapIndexed[myPower, test2D, {2}] Some index-specific ... 14 I think the best way to understand this behavior is with this example, In[99]:= x = {a, b, c, d, e}; In[101]:= Length@x Out[101]= 5 In[102]:= x[[3]] = Nothing; In[103]:= Length[x] Out[103]= 4 In[104]:= Block[{Nothing}, Length[x]] Out[104]= 5 When you say x[[3]] = Nothing, you are not deleting an element from x. x is still a 5-element list. But x is ... 13 I have a tree-based method that has the right asymptotics but a very high coefficient. The upshot being, it will not compete with other methods until we get past 10^6 or so in list size. With considerable work that tree structure could be flattened so that Compile might be brought into play. The basic tree layout is {left subtree, node, right subtree} where ... 12 Some years ago, a friend of mine was in the supermarket with his son, small kid, who asked him to buy some candy. After some resistance, my friend agreed, but told him that should be just one. In the cashier, his son had two candy, and my friend said: What is this? haven't we agreed One? And the answer (very smart) was: Yes! Here it is. Zero and One... (... 12 I don't know if this is useful to you but it seems a little cleaner than your own code: asc = <|"z" -> 11, "x" -> 22, "b" -> 33, "a" -> 44|>; keySpan[k_Span][asc_Association] := asc[[k /. First /@ PositionIndex@Keys@asc]] asc // keySpan["x" ;; "a"] asc // keySpan["z" ;; "a" ;; 2] asc // keySpan["b" ;;] <|"x" -> 22, "b" -> 33,... 11 MapIndexed: MapIndexed[#2[[1]] + # &, {a, b, c, d}] {1 + a, 2 + b, 3 + c, 4 + d} Also Range[Length @ #] + # & @ {a,b,c,d} {1 + a, 2 + b, 3 + c, 4 + d} 11 The variable i is a dummy one. 
The evaluated expression: Sum[f[i], {i, 1, 10}] f[1] + f[2] + f[3] + f[4] + f[5] + f[6] + f[7] + f[8] + f[9] + f[10] contains no explicit variable f[i], hence, the result is 0. Try to first Inactivate the sum, and only then to calculate the derivative: expr1 = D[Inactivate[Sum[f[i], {i, 1, 100}], Sum], f[i]] The result is ... 10 The example JSON string: json = ExportString[<| "Names" -> <| "Sister" -> "Nina", "Brothers" -> {<|"Older" -> "John", "Younger" -> "Jake"|>}, "somethingElse" -> "answer" |>, "DOB" -> { <|"Nina" -> 2001, "location" -> "Miami"|>, <|"John" -> 2017, "location" ->... 10 If you put your cursor on the Position command and press F1 for help, you will see the following under Properties and Relations: "Use Extract to extract parts based on results from Position." There is also an example. For your case: p = Position[{a, b, c, d, e, f}, c] Extract[list,p] where list is the list you want to extract from. 10 My goal is to add an index to all elements of a list in the form {"a", "b", "c", ... }, so it becomes {"N1 a", "N2 b", "N3 c" ... } may be seq={"a","b","c","d"}; MapIndexed["N"<>ToString[First@#2]<>" "<>#1&,seq] gives {"N1 a", "N2 b", "N3 c", "N4 d"} 10 We don't need to avoid Table in my view. In cases that Table is more straightforward, just use Table. If speed is concerned, Compile it. Here is an example: Can I generate a "piecewise" list from a list in a fast and elegant way? Nevertheless, your 2 examples (especially 2nd one) don't belong to the cases that Table is more straightforward, at ... 9 strs = {"first block of text with random content", "different block of text", "1 2 3 4"}; Nearest[(StringPadRight[#, 50] & /@ strs), "content random with"] This is a deep and complex question apparently: Mathematica has a menagerie of built in goodies to assemble your own variant. EditDistance DamerauLevenshteinDistance ... 9 As of Mathematica 11: filenames = Table[CreateFile[], 3]; content = {"first block of text with random content", "different block of text", "1 2 3 4"}; MapThread[Put, {content, filenames}]; index = CreateSearchIndex[filenames]; Perform searches using TextSearch: Snippet /@ Normal@TextSearch[index, "block"] In order to rank search results, score them ... 9 Following the comments I am encouraging the use of brackets rather than subscripts or superscripts. Here is an example where a function may take a variable with a subscript or a variable without a subscript. The function will use pattern recognition to sort out how to behave. First we define the function. Note I start with a ClearAll[f] so that previous ... 7 Prompted by comments conversation with Mr. Wizard, a routine I use: findMultiPosXX[list_, find_, allowBits_: False, skipCands_: True] := Module[{f = DeleteDuplicates[find], o, l, oo, bitmax = 20, cands, dims}, If[allowBits && Length@f <= bitmax, With[{r = If[Length@(dims = Dimensions@list) == 1, Range@Length@list, Array[... 7 Using DownValues enables you to format the display in the subscripted form without using Notation and Symbolize (Format[#[n_]] := Subscript[#, n]) & /@ {x, σ, a}; kvar[k_] := Through[{x, σ, a}[k]] kvar[3] kvar[n] If you will never use a symbolic index then you can restrict the argument of kvar to Integer as you did originally. 7 What are the requirements for well behaved variables? Functions are not variables, although in most cases, the kernel treats undefined variables and functions identically. Sometimes it doesn't. After all, there are places in mathematics where the difference between a number and a function is important. 
One extreme and undocumented example is Dt[], the ... 7 func1[a_, b_] := a + b; func2[a_, b_] := 1 + a + b; func3[a_, b_] := a + 2 b; n = 10; sa = SparseArray[{Band[{1, 1}] -> (func1[0, #] & /@ Range[n]), Band[{1, 2}] -> (func2[0, #] & /@ Range[n - 1]), Band[{2, 1}] -> (func3[0, #] & /@ Range[n - 1])}, {n, n}]; sa // MatrixForm or sa = Quiet@SparseArray[{Band[{1, 1}] -> (... 7 Try Pick[foo, mask] (* {a, c}*) 6 Module[{p = Range[Length@#]}, Reap@Fold[(Sow[#[[#2]]]; Drop[#, {#2}]) &, p, #]] &@{1, 1, 2, 1, 1} (* {{}, {{1, 2, 4, 3, 5}}} *) 6 This seems rather be a error of determining the type of i, j and k than Sum itself. When you introduce your iterator variables in a Module and make clear that they are of type integer, it compiles fine for me: fc = Compile[{{x, _Real, 2}}, Module[{i = 0, j = 0, k = 0}, Sum[x[[i, k]]*x[[j, k]], {k, 5}, {i, 12}, {j, i + 1, 12}]] ] Btw, I want to note ... 6 You could use Evaluate[list] = ConstantArray[0, Length[list]] or MapThread[Set, {list, ConstantArray[0, Length[list]]}] to Set each indexed variable inside of list to 0. If the indexes for the variables inside list follow a known condition, one can use for example p[i_ /; i < 3, j_ /; j < 5] = 0 or memorization p[i_ /; i < 3, j_ /; j < 5] ... 6 An alternative to subscripts as indices... Instead of: {Subscript[x, 1], Subscript[x, 2]} Let's use: x[1], x[2] And this can be generalized to: Subscript[x, i, j] --> x[i,j] This will uniquely identify any number of variables to any dimension. 6 From version 10.4 onward, we can define keySpan like this: keySpan[k1_, k2_] := Replace[<|___, s:PatternSequence[k1 -> _, ___, k2 -> _], ___|> :> <|s|>] so that:$a = <| "z" -> 1, "x" -> 2, "b" -> 3, "a" -> 4 |>; \$a // keySpan["x", "a"] (* <|"x" -> 2, "b" -> 3, "a" -> 4|> *) We can make this more ... 5 You can use this (where t is your dataset): Ordering[#, -1] & /@ Transpose[t] which produces {{1}, {1}, {2}, {4}, {2}, {2}} Incidentally, the list of positions you gave in your question is wrong (the 4th element should be {4,4}, and the 6th element should be {2,6}). The above method omits the first coordinates in your expected output, since they ... 5 This is indeed somewhat confusing when you are new to Mathematica. In Mathematica, == stands for mathematical equality. Thus a == 0 does not evaluate to either True or to False until a is replaced by a numerical value. a is considered to be a variable that may or may not be zero. A pattern like x_ /; condition will only match if condition is explicitly ... Only top voted, non community-wiki answers of a minimum length are eligible
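The breadth-first labelling quoted in the binary-heap excerpt above is language-agnostic: if node $n$ has children $2n$ and $2n+1$, the parent formula (cut off in the excerpt) can only be $\lfloor n/2 \rfloor$. A minimal sketch of that index arithmetic in Python, purely as an illustration and not part of the original Mathematica answers:

```python
# Breadth-first (1-indexed) labelling of a complete binary tree:
# node n has children 2n and 2n+1, so integer division recovers the parent.

def children(n):
    return 2 * n, 2 * n + 1

def parent(n):
    return n // 2

# sanity check on the first few labels
for n in range(2, 32):
    assert n in children(parent(n))

print([parent(n) for n in range(2, 8)])   # [1, 1, 2, 2, 3, 3]
```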
http://gpaux.github.io/Mediana/FAQ_Sample_Data_Set.html
## Summary

The Mediana R package uses Monte-Carlo simulations to generate patient outcomes in a clinical trial. The simulation parameters are specified in a data model. In addition, as explained below, a data model can also be created by resampling from an existing data set, e.g., a clinical trial database. The following case study illustrates how to create a data model by resampling from an existing data set.

We will consider a database of three Phase II clinical trials. This database contains data on the primary endpoint to be used in an upcoming Phase III trial for the experimental treatment. Furthermore, a large database with control data collected in several previously conducted trials with other investigational treatments is available. The sponsor wishes to estimate statistical power in the Phase III trial by sampling from these existing databases. For simplicity, we will consider a Phase III clinical trial with two arms and a normally distributed endpoint (a Phase III clinical trial in patients with pulmonary arterial hypertension where the primary endpoint is the change in the six-minute walk distance).

## Data Model

The data model will be constructed by sampling from several pre-existing datasets. For the purposes of illustration, we will generate these datasets. Beginning with the database containing treatment data, we consider the outcome data collected from three Phase II clinical trials with 75, 75 and 50 patients, respectively. The observed means and SDs of the primary endpoint in each trial will be used to generate the treatment database as shown below. For the control database, consider a database set up by pooling outcome data from several development programs with the same indications (there are 3000 patients in this database). The control database will be generated using an approach similar to the one utilized above.

The data model in this case study will be constructed by sampling data from these two databases. Several sample size scenarios will be evaluated to compute power in the Phase III trial, from 40 patients per treatment arm to 70, with a step of 10 patients. In order to sample from the treatment and control data sets, a new outcome distribution function needs to be implemented. The key idea behind this function is simply to enable sampling outcome data from the two data sets. To create a custom outcome distribution function, please refer to this page. This function requires two parameters in addition to the sample size per arm: the name of the existing data set and a boolean variable indicating whether the sampling will be done with or without replacement. The custom function, named SamplingDist, is presented below.

The first block in the SamplingDist function is used to get the function's parameters, i.e., the data set's name and the boolean indicator. The second block focuses on sampling n values from the dataset with an option to sample with or without replacement (based on replace). Lastly, the third block creates an object that will be used in a simulation report and will not be discussed here. For the purpose of illustration, we will consider a sampling scheme with replacement. The outcome parameters for each trial arm and the data model can be defined as follows. The outcome distribution specified in the OutcomeDist object is SamplingDist. After the data model has been set up, the analysis model and the evaluation model can be defined using a standard approach.
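The R source of the SamplingDist function itself does not appear in this extract. Purely as an illustration of the idea it describes (drawing n outcome values from an existing data set, with or without replacement), here is a sketch in Python rather than R; the function and parameter names are placeholders, not the Mediana API:

```python
import numpy as np

def sampling_outcome(parameter, rng=None):
    """Resample n outcome values from an existing data set.

    Mirrors the first two blocks described in the text: read the parameters
    (sample size, data set, replacement flag), then draw the sample. The
    report object built in the third block is omitted here.
    """
    rng = rng or np.random.default_rng()
    n = parameter["n"]                       # sample size per arm
    data = np.asarray(parameter["dataset"])  # pooled historical outcomes
    replace = parameter["replace"]           # True = sample with replacement
    return rng.choice(data, size=n, replace=replace)
```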
## Analysis model

The analysis model defines a single significance test that will be carried out in the Phase III trial (treatment versus placebo). The treatment effect will be assessed using a one-sided two-sample t-test.

## Evaluation model

The data and analysis models specified above define the Clinical Scenarios that will be examined in the Phase III trial. In general, clinical scenarios are evaluated using success criteria based on the trial's clinical objectives. Regular power, also known as marginal power, will be computed in this trial. This success criterion is specified in the evaluation model.

## Perform Clinical Scenario Evaluation

After the clinical scenarios (data and analysis models) and evaluation model have been defined, the user is ready to evaluate the success criteria specified in the evaluation model by invoking the CSE function. The simulation parameters need to be defined in a SimParameters object, and the CSE call specifies the individual components of Clinical Scenario Evaluation in this case study as well as the simulation parameters.
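The CSE call and its output are not reproduced in this extract. As a rough sketch of what the evaluation computes (marginal power of a one-sided two-sample t-test under resampling), the following Python loop mimics the Monte Carlo idea; the historical means, SDs, seed and alpha level are invented placeholders, not the actual Phase II data or Mediana defaults:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12345)          # placeholder seed

# Placeholder historical databases; the real ones would hold the observed
# Phase II treatment outcomes and the pooled control outcomes (3000 patients).
treatment_db = np.concatenate([rng.normal(40, 70, 75),
                               rng.normal(38, 68, 75),
                               rng.normal(42, 72, 50)])
control_db = rng.normal(5, 70, 3000)

def marginal_power(n_per_arm, n_sims=10_000, alpha=0.025):
    hits = 0
    for _ in range(n_sims):
        trt = rng.choice(treatment_db, size=n_per_arm, replace=True)
        ctl = rng.choice(control_db, size=n_per_arm, replace=True)
        t, p_two = stats.ttest_ind(trt, ctl, equal_var=True)
        p_one = p_two / 2 if t > 0 else 1 - p_two / 2   # one-sided p-value
        hits += p_one < alpha
    return hits / n_sims

for n in (40, 50, 60, 70):                  # the sample-size scenarios above
    print(n, marginal_power(n))
```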
https://physics.stackexchange.com/questions/156026/does-bells-inequality-hold-for-different-particles
# Does Bell's inequality hold for different particles?

Having read Bell's theorem and its proofs, it seems the theorem builds upon the assumption that we measure identical particles. Usually these thought experiments involve giving A and B an identical scratchcard. If they scratch the same box they will find the same symbol. If they scratch a different box, then we can calculate the minimum coincidence rate and thus set up Bell's inequality. But does this inequality hold if you give A and B a different scratchcard (i.e., measuring different particles)?

EDIT: It seems the question is a bit misunderstood. Okay, let me clarify. Let's imagine an extreme case. I'm an evil scratchcard dealer and deal cards to A and B. A and B don't know anything about the cards. They may think they are identical or entangled, but they won't find out anything until they measure them anyway. And I don't know which boxes they will scratch on which card. I have full control of the variables I give to A and B, and there are no constraints other than that I must conserve the quantities at least statistically, while reserving the right that each card in a particular pair can be completely different and that there is no computable dependency between them. Can I deal cards in such a way that I can fool A and B, so that they find that their measurements violate the inequality, just because I dealt the cards to them in a clever pattern, without entanglement or anything else? Or does Bell's theorem explicitly prove that such shenanigans are not possible at all?

• Are you asking if Bell's theorem would say non-identical particles should obey Bell's inequality if the laws of physics are local realistic? If so, see WillO's answer. Or are you asking if non-identical particles can, like identical particles, be put into entangled states where they will be measured to violate Bell's inequality, thus demonstrating that local realism cannot explain their behavior? If so, see lionelbrits' answer. – Hypnosifl Jan 1 '15 at 14:53
• @Hypnosifl See edit. – Calmarius Jan 1 '15 at 15:31

Bell's inequality is a theorem about classical random variables. It can be applied to observations of identical particles, or to observations of different particles, or to observations of the stock market and the weather. (The point is that because the theorem never mentions particles in the first place, the types of the particles to which it's applied cannot be relevant.)

You could imagine a neutral, spinless particle at rest decaying into an electron and positron, in which case the final state is of the form $$\left|\Psi\right\rangle = \int\!dx_1\,dx_2\,\; \left( \psi_{\uparrow\downarrow}(x_1,x_2) \left|x_1\right\rangle \left|x_2\right\rangle \frac{1}{\sqrt{2}}\left|\uparrow\downarrow\right\rangle + \psi_{\downarrow\uparrow}(x_1,x_2) \left|x_1\right\rangle \left|x_2\right\rangle \frac{1}{\sqrt{2}}\left|\downarrow\uparrow\right\rangle \right).$$ This is just a tensor product of the position and spin degrees of freedom, where I have absorbed all "coefficients" into $\psi_{\uparrow\downarrow}(x_1,x_2)$, etc.
I have set the coefficients $\psi_{\uparrow\uparrow}$ and $\psi_{\downarrow\downarrow}$ to zero, because by conservation of angular momentum and spin, one can show that the state can be written as follows: $$\left|\Psi\right\rangle = \int\!dx_1\,dx_2\,\; \psi(x_1,x_2) \left|x_1\right\rangle \left|x_2\right\rangle \frac{1}{\sqrt{2}}\left(\left|\uparrow\downarrow\right\rangle -\left|\downarrow\uparrow\right\rangle \right)$$ This is not the same as the EPR or Bell state, but my point is to illustrate the fact that whether the particles are distinguishable or indistinguishable is not important for entanglement. To make it a Bell state, imagine Alice and Bob trying to measure one or the other particles using a Stern-Gerlach apparatus. Alice's particle detector will register $0$ if particle 1 is in state $\uparrow$, while Bob's particle detector will register $0$ if particle 2 is in state $\downarrow$, etc. This is just a matter of notation; a mental unitary transformation, if you will. Then we can write the above state as $$\left|\Psi\right\rangle = \int\!dx_1\,dx_2\,\; \psi(x_1,x_2) \left|x_1\right\rangle \left|x_2\right\rangle \frac{1}{\sqrt{2}}\left(\left|00\right\rangle -\left|11\right\rangle \right).$$ Note that Alice can only measure the first component of spin, while Bob can only measure the second. This is because the particles are assumed to be spatially separated. But there is only one state, and therefore only one "scratch card", as you put it. Edit: Since you modified your question to clarify, you won't be able to deal the cards in any way and violate the inequalities, unless your cards can somehow talk to one another. • Hi Lionel did you know "the Tangent Bundle" seems not to be working (at least I saw errors that seem to be coming from the server side). – Selene Routley Jan 3 '15 at 12:35 • @WetSavannaAnimalakaRodVance, thanks, between family and work I haven't had time to fix it. I think an automatic upgrade broke something. – lionelbrits Jan 3 '15 at 16:13 I assume you're referring to the scratch lotto card example I posted earlier--I posted it in two different answers, but this one goes into more detail about various possible loopholes (if you haven't read that answer you might find it helpful). The short answer to your question is that indeed, it will be impossible for the "evil scratchcard dealer" to assign hidden fruits to the cards in such a way that the two experimenters (Alice and Bob) will always find the same fruit when they scratch the same box on their cards, but will have a probability of less than 1/3 of finding the same fruit when they scratch different boxes, assuming that they have an equal probability of picking either of the three boxes and that their choices are uncorrelated with each other, and that the dealer has no way of predicting what choice they will make on each trial when assigning the hidden fruits. There are a few loopholes though, like if the dealer can predict their choices in advance and correlate the hidden fruits to them, or if the fruits aren't determined in advance, and the experimenters scratch the boxes at different times in such a way that the choice made by the first experimenter of which box to scratch can be communicated as a signal to the second card before the other experimenter scratches it (either implying faster-than-light communication, or just that the distance/time between the two events of the experimenters scratching their cards was less than or equal to the speed of light). 
This paper has what seems to be a nice rigorous derivation of a Bell inequality, which makes explicit a series of assumptions 1-9 that must be met in order to be certain that a local realist theory must predict the inequality is obeyed. These assumptions are stated in a somewhat abstract way, but I can translate what they would mean for the scratch lotto example: 1. "Perfect correlation": In the lotto example, this means that whenever the experimenters both choose to scratch the same box on their respective cards, they are guaranteed to get the same fruit. 2. "Separability": When the two experimenters scratch their respective cards, these are distinct events (so we can talk about the probability of one event given the other, or the probability of either one given some other event). 3. "Locality 1": Neither of these two events is "causally relevant" for the other--so this is ruling out the loophole I mentioned where the fruits aren't just predetermined, and the first experimenter's choice of what box to scratch can affect the probability the other experimenter will see a given fruit (see the example with the touchscreen devices in my other answer). 4. "Principle of Common Cause": If two events are correlated, and this correlation can't be explained by a direct causal influence from one to the other (see assumption 3) or by "event identity" (see assumption 2), there must be some "common cause" that explains the correlation. So in the lotto card example, if the experimenters always see the same fruit when they choose the same box, then given 2 and 3 the fruits they each get must have been predetermined by something in their common past, in this case the dealer choosing what hidden fruits to print under each box on a given pair of cards. 5. "Exactly one of exactly two possible outcomes": An assumption about probabilities that in the example would boil down to the fact that whenever the experimenter scratches a given box, the probability they will see a cherry plus the probability they will see a lemon must add up to 1, and the probability the experimenter will see a cherry and a lemon at once is 0. 6. "Locality 2": An assumption that says that if there's a common cause as in assumption 4, and if knowledge of which boxes both experimenters scratched on a trial and knowledge of the common cause on that trial (i.e. what hidden fruits the dealer put behind each box) is sufficient to determine that a cherry will be seen by one of the experimenters, say Alice, then knowledge of what box was scratched by Bob should actually be irrelevant--just knowing the common cause and what box Alice scratched should be sufficient to determine that Alice will see a cherry. 7. "No outcome without measurement": This one is basically just saying that the only way to learn what fruit is behind a given box on a card is for an experimenter to scratch that box. 8. "Locality 3": An assumption almost identical to #6, except saying that if knowledge of which boxes both experimenters scratched on a trial and knowledge of the common cause on that trial (i.e. what hidden fruits the dealer put behind each box) is sufficient to determine that one of the experimenters (say Alice) will not see a cherry, then knowledge of what box was scratched by Bob should be irrelevant--just knowing the common cause and what box Alice scratched should be sufficient to determine that Alice will see a not see a cherry. 9. 
"No conspiracy": This assumption says the common cause events--the dealer's choice of what hidden fruits are behind each box on a given trial--are not statistically correlated with the experimenters' choice of which box to scratch at a later time in the same trial. In other words, the dealer can't have foreknowledge of what Alice and Bob will do and choose what series of hidden fruits to assign based on that knowledge. See the paragraph with the bolded title "Violation of freedom" in my previous answer that I linked to at the top for a bit more on this. So if all these assumptions hold, there's no possible way the probabilities of different combinations of measurement outcomes could fail to satisfy Bell's inequality, as proved formally in the paper. Thus if we set up an experiment where the Bell inequality is consistently violated, at least one of these premises must be violated too. • Do actual experiments show perfect correlation? – Calmarius Jan 1 '15 at 18:45 • The equations of QM would predict perfect correlation for identical measurements on entangled pairs, but in practice I think sources of entangled pairs will also produce some non-entangled individual particles of the same type so experimenters have to rely on methods like timing of detection to decide which pairs of detection-events were measurements of an entangled pair, and these methods probably aren't 100% error-free. There are however more complicated Bell inequalities that don't depend on perfect correlation, like the CHSH inequality. – Hypnosifl Jan 1 '15 at 19:25
https://math.stackexchange.com/questions/3679550/ring-which-isnt-isomorphic-to-any-subring-of-endv-for-any-vector-space-v
Ring which isn't isomorphic to any subring of End(V) for any vector space V Problem: Prove that the ring $$\mathcal{R}=\prod_{n\geq1}\mathbb{Z}_n$$ is not isomorphic to any subring of $$\mathrm{End}(V)$$ for any vector space $$V$$. I think there's something to do with non-commutativity. We know that $$\mathrm{End}(V)$$ is not commutative. But $$\mathcal{R}$$ is definitely commutative and of characteristic $$0$$. Also I think $$V$$ can't be a vector space of finite dimension. • In general, a ring $R$ is not isomorphic to a subring of the endomorphism ring of any vector space over any field if and only if either $R$ has a composite characteristic, or $R$ has characteristic zero with a non-torsion-free additive group. – Geoffrey Trang May 18 at 19:44 Since $$\mathcal{R}$$ contains an isomorphic copy of $$\mathbb{Z}$$, the ground field of the vector space $$V$$ must contain an isomorphic copy of $$\mathbb{Q}$$ and hence the ground field $$k$$ of the vector space $$V$$ must be of characteristic $$0$$. Now let $$\varphi$$ be an isomorphism between $$\mathcal{R}$$ and some subring of $$\mathrm{End}(V)$$. Let $$r=(0,1,0,0,0\ldots)$$. Then $$2r=r+r=(0,2,0,0,0\ldots)=(0,0,0,0,0\ldots)=0_{\mathcal{R}}$$. Since $$\varphi$$ is an isomorphism, $$2\varphi(r)=\varphi(2r)=\varphi(0)=O_V=2O_V$$. Therefore, $$\varphi(r)=O_V$$. But $$r\neq0_{\mathcal{R}}$$. Hence a contradiction! $$V$$ is a $$k$$-vector space. Wlog $$k$$ is either $$\Bbb{Q}$$ or $$\Bbb{F}_p$$. If it is $$\Bbb{F}_p$$ then $$\forall f\in\mathrm{End}(V)$$, $$pf=0$$. $$R$$ contains $$\Bbb{Z}$$ (send $$a\in \Bbb{Z}$$ to $$(a,a,\ldots) \in R=\prod \Bbb{Z}/n\Bbb{Z}$$) so $$R\subset \mathrm{End}(V)$$ implies that $$k=\Bbb{Q}$$. But then $$\forall f\in\mathrm{End}(V),\forall a\in \Bbb{Z}\setminus\{0\}, f=0\iff af=0$$, which isn't satisfied by $$R$$. • Indeed, the ring structure is unnecessary. The direct product cannot be an additive subgroup. – runway44 May 17 at 19:10 For any field $$k$$ of characteristic zero and any $$k$$-vector space $$V$$, $$V$$ must be a torsion-free abelian group. Now, suppose that $$\mathcal{R}$$ is isomorphic to a subring of $$\mathrm{End}(V)$$ for some $$k$$-vector space $$V$$. Then, since $$\mathrm{End}(V)$$ is a $$k$$-vector space, it must be a torsion-free abelian group, hence so must $$\mathcal{R}$$. But $$\mathcal{R}$$ is clearly not torsion-free ($$(0,1,0,0,0,...)$$ is an element of order $$2$$), a contradiction. If $$k$$ has a nonzero characteristic $$p$$, then for any nonzero $$k$$-vector space $$V$$, the ring $$\mathrm{End}(V)$$ must also have characteristic $$p$$, and so must all of its (unital) subrings. In particular, it cannot have a subring isomorphic to $$\mathcal{R}$$, which has characteristic zero. And of course, if $$V=0$$, then $$\mathrm{End}(V)=0$$ is the zero ring, which obviously has itself as its only subring, so $$\mathcal{R}$$ still cannot be isomorphic to a subring of $$\mathrm{End}(V)$$. The above proof applies more generally for any ring of characteristic zero with a non-torsion-free additive group. Rings with composite characteristics also cannot be isomorphic to subrings of endomorphism rings of vector spaces, of course, but that is it (for a ring $$R$$, let $$V$$ be the $$\mathbb{Q}$$-vector space $$R \otimes_{\mathbb{Z}} \mathbb{Q}$$ if $$R$$ has a torsion-free additive group, or the $$\mathbb{F}_p$$-vector space $$R$$ if $$R$$ has characteristic $$p$$, a prime number).
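One step the first answer leaves implicit is why $$2\varphi(r)=0$$ forces $$\varphi(r)=0$$: $$\mathrm{End}(V)$$ is itself a vector space over the characteristic-zero field $$k \supseteq \mathbb{Q}$$, hence torsion-free as an abelian group. Spelled out (an addition here, not part of the original answers): $$\varphi(r) \;=\; \tfrac{1}{2}\,\bigl(2\,\varphi(r)\bigr) \;=\; \tfrac{1}{2}\cdot 0 \;=\; 0,$$ which, together with $$r\neq 0_{\mathcal{R}}$$, contradicts the injectivity of $$\varphi$$.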
https://quant.stackexchange.com/tags/black-scholes/hot
# Tag Info

The best way is to start with definitions (instantaneous and their finite difference versions) of Greeks. For a currency pair $(FOR,DOM)$ with FX rate $S$, the number of $[DOM]$ (domestic, numeraire, right-side) units needed to buy one $[FOR]$ (foreign, asset, left-side) unit, let $V(S)$ be an option's price in $[DOM]$ units. Note that the unit of $S$ is: ...

For simplicity, let's say that your time 0.003 equals 1 day, and your second pillar (probably 0.083 instead of 0.00833) equals 1 week. What you do: Approximate the short rate with the 1-day interest rate. What they do: Employ additional information about the shape of the yield curve at the short end, i.e. extrapolating from the first two available ...

Everything (warning: I have not checked 3rd order Greeks) that is not delta is in terms of ccy2 in the standard Garman-Kohlhagen model. Gamma is not in CCY1 by default either (some vendors like Bloomberg display it like that to be consistent with Delta). Why? Let's start by not looking at FX but equity to help build intuition. The actual price of an option is ...

For an option with delayed cash settlement, expiry time $T$ and settlement time $T_p (\geq T)$, paying $(S_T-K)^+$ at $T_p$, the present value of this payment at $T$ is:
$$ E_T\left[\beta_T \beta_{T_p}^{-1} (S_T-K)^+ \right] = P(T,T_p)(S_T-K)^+,$$
with $\beta_t = \exp \left(\int_0^t r_u du \right)$, $r$ the risk-free interest rate, and $P$ the associated zero-coupon ...

Just to be clear, we are talking about an option that pays $\max(0,S_1-K)$ paid at time $t=2$. Then the only difference between this and a standard option is the extra discounting from $t=1$ to $t=2$. So the price $P$ must satisfy
$$ P=BS/(1+r)$$
where $BS$ is the regular Black-Scholes price and $r$ is the forward risk-free rate from $t=1$ to $t=2$. The ...

Consider the stock price process (geometric Brownian motion):
$$S_t=S_0\exp((\mu-0.5\sigma^2)t+\sigma W_t) \tag{1}$$
where $W_t$ is a Wiener process and $\mu$ is a drift - or average return. If you are not familiar with the Wiener process you can see this equation as:
$$S_t=S_0\exp((\mu-0.5\sigma^2)t+\sigma \sqrt t Z) \tag{2}$$
where $Z$ is a standard normal random ...

Given your code, the following will yield what you are after:

    t,S = generateGBM(TIME_HORIZON/365, DRIFT, ANNUALIZED_VOL, INITIAL_PRICE, 1/365/24/3)

As all inputs are annualized, you must also think in units of year fractions: the time horizon is 30 days over 365 days, and the time step size, being 20 minutes, is one year over 365 * 24 * 3 (there are three ...

The shape you observe is really only due to spot being higher for ITM calls & OTM puts. The plots are definitely correct. You can quickly check by using the standard closed-form volga, which is $$vega*\frac{d_1 d_2}{\sigma}$$. Changing the risk-free rate and dividends just shifts the whole graph slightly left or right respectively. Volga is $vega*\frac{d_1 d_2}{\sigma}$ ...
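The two GBM excerpts above translate directly into a few lines of code. Below is a minimal R sketch of the idea (my own illustration; generateGBM is the asker's function, whose definition is not shown in the excerpt, and the drift, volatility and spot values here are arbitrary placeholders). It applies Eq. (2) over each time increment, with a 20-minute step and a 30-day horizon, everything expressed in year fractions:

    ## Minimal sketch of a GBM path with annualized inputs (placeholder parameter values).
    simulate_gbm <- function(horizon, mu, sigma, S0, dt) {
      t <- seq(0, horizon, by = dt)
      n <- length(t) - 1
      Z <- rnorm(n)                       # independent standard normal increments
      # exact update per step: S_{t+dt} = S_t * exp((mu - 0.5*sigma^2)*dt + sigma*sqrt(dt)*Z)
      S <- S0 * c(1, exp(cumsum((mu - 0.5 * sigma^2) * dt + sigma * sqrt(dt) * Z)))
      list(t = t, S = S)
    }

    # 30-day horizon with 20-minute steps, i.e. dt = 1/365/24/3 of a year
    path <- simulate_gbm(horizon = 30/365, mu = 0.05, sigma = 0.20, S0 = 100, dt = 1/365/24/3)
    plot(path$t, path$S, type = "l", xlab = "time (years)", ylab = "S")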
https://meridian.allenpress.com/radiation-research/article-abstract/145/6/730/40180/Chromosome-Aberrations-in-Human-Fibroblasts?redirectedFrom=fulltext
The relative biological effectiveness (RBE) of neutrons for many biological end points varies with neutron energy. To test the hypothesis that the RBE of neutrons varies with respect to their energy for chromosome aberrations in a cell system that does not face interphase death, we studied the yield of chromosome aberrations induced by monoenergetic neutrons in normal human fibroblasts at the first mitosis postirradiation. Monoenergetic neutrons at 0.22, 0.34, 0.43, 1, 5.9 and 13.6 MeV were generated at the Accelerator Facility of the Center for Radiological Research, Columbia University, and were used to irradiate plateau-phase fibroblasts at low absorbed doses from 0.3 to 1.2 Gy at a low dose rate. The reference low-LET, low-dose-rate radiation was $^{137}$Cs γ rays (0.66 MeV). A linear dose response (Y = αD) for chromosome aberrations was obtained for all monoenergetic neutrons and for the γ rays. The yield of chromosome aberrations per unit dose was high at low neutron energies (0.22, 0.34 and 0.43 MeV) with a gradual decline with the increase in neutron energy. Maximum RBE (${\rm RBE}_{\rm M}$) values varied for the different types of chromosome aberrations. The highest RBE (24.3) for 0.22 and 0.43 MeV neutrons was observed for intrachromosomal deletions, a category of chromosomal change common in solid tumors. Even for the 13.6 MeV neutrons the ${\rm RBE}_{\rm M}$ (11.1) exceeded 10. These results show that the RBE of neutrons varies with neutron energy and that RBEs are dissimilar between different types of asymmetric chromosome aberrations, and suggest that the radiation weighting factors applicable to low-energy neutrons need firmer delineation. This latter may best be attained with neutrons of well-defined energies. This would enable integrations of appropriate quality factors with measured radiation fields, such as those in the high-altitude Earth atmosphere. The introduction of commercial flights at high altitude could result in many more individuals being exposed to neutrons than occurs in terrestrial workers, emphasizing the necessity for better-defined estimates of risk.
https://www.thejournal.club/c/paper/55086/
#### Lower bounds for testing digraph connectivity with one-pass streaming algorithms

##### Glencora Borradaile, Claire Mathieu, Theresa Migler

In this note, we show that three graph properties - strong connectivity, acyclicity, and reachability from a vertex $s$ to all vertices - each require a working memory of $\Omega(\epsilon m)$ on a graph with $m$ edges to be determined correctly with probability greater than $(1+\epsilon)/2$.
https://socratic.org/questions/if-jacob-ran-24-kilometers-how-many-miles-did-jacob-run
# If Jacob ran 24 kilometers, how many miles did Jacob run?

Apr 6, 2018

#### Answer:

$14.9$ miles

#### Explanation:

$1 \, km = 0.621$ miles, so $24 \, km = 0.621 \times 24 = 14.9$ miles.

Apr 6, 2018

#### Answer:

About 15 miles

#### Explanation:

According to this website (courtesy of thecalculatorsite.com), the best way to convert from km to miles is: take your kilometer value and halve it, then take a quarter of your half and add it to your half - now you have it in miles. So in this case:

1. Half of 24 is 12.
2. A quarter of 12 is 3.
3. Hence 12 + 3 = 15.

So 24 km equals 15 miles (more or less - it's about 14.9 miles). If you want to convert from miles to km, my personal method, which I suggest you use, is simply to add 60% of your value in miles to itself to get it to km. In this case: $\left(15\right) \times \left(1.6\right) = 24$, which corresponds perfectly.
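For completeness, the two rules of thumb can be written down next to the exact factor quoted in the first answer (a small illustrative R sketch, not part of either original answer):

    ## The rules of thumb from the answers, next to the exact factor quoted above.
    km_to_miles_factor <- function(km) 0.621 * km             # 1 km = 0.621 miles
    km_to_miles_halves <- function(km) km / 2 + (km / 2) / 4  # halve, then add a quarter of the half
    km_to_miles_factor(24)    # 14.904
    km_to_miles_halves(24)    # 15

    miles_to_km_rule <- function(mi) mi * 1.6                 # add 60% of the miles value to itself
    miles_to_km_rule(15)      # 24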
https://stats.stackexchange.com/questions/139528/when-do-coefficients-estimated-by-logistic-and-logit-linear-regression-differ
# When do coefficients estimated by logistic and logit-linear regression differ?

When modelling continuous proportions (e.g. proportional vegetation cover at survey quadrats, or proportion of time engaged in an activity), logistic regression is considered inappropriate (e.g. Warton & Hui (2011) The arcsine is asinine: the analysis of proportions in ecology). Rather, OLS regression after logit-transforming the proportions, or perhaps beta regression, are more appropriate.

Under what conditions do the coefficient estimates of logit-linear regression and logistic regression differ when using R's lm and glm?

Take the following simulated dataset, where we can assume that p are our raw data (i.e. continuous proportions, rather than representing ${n_{successes}\over n_{trials}}$):

    set.seed(1)
    x <- rnorm(1000)
    a <- runif(1)
    b <- runif(1)
    logit.p <- a + b*x + rnorm(1000, 0, 0.2)
    p <- plogis(logit.p)
    plot(p ~ x, ylim=c(0, 1))

Fitting a logit-linear model, we obtain:

    summary(lm(logit.p ~ x))
    ##
    ## Call:
    ## lm(formula = logit.p ~ x)
    ##
    ## Residuals:
    ##      Min       1Q   Median       3Q      Max
    ## -0.64702 -0.13747 -0.00345  0.15077  0.73148
    ##
    ## Coefficients:
    ##             Estimate Std. Error t value Pr(>|t|)
    ## (Intercept) 0.868148   0.006579   131.9   <2e-16 ***
    ## x           0.967129   0.006360   152.1   <2e-16 ***
    ## ---
    ## Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
    ##
    ## Residual standard error: 0.208 on 998 degrees of freedom
    ## Multiple R-squared:  0.9586, Adjusted R-squared:  0.9586
    ## F-statistic: 2.312e+04 on 1 and 998 DF,  p-value: < 2.2e-16

Logistic regression yields:

    summary(glm(p ~ x, family=binomial))
    ##
    ## Call:
    ## glm(formula = p ~ x, family = binomial)
    ##
    ## Deviance Residuals:
    ##      Min        1Q    Median        3Q       Max
    ## -0.32099  -0.05475   0.00066   0.05948   0.36307
    ##
    ## Coefficients:
    ##             Estimate Std. Error z value Pr(>|z|)
    ## (Intercept)  0.86242    0.07684   11.22   <2e-16 ***
    ## x            0.96128    0.08395   11.45   <2e-16 ***
    ## ---
    ## Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
    ##
    ## (Dispersion parameter for binomial family taken to be 1)
    ##
    ##     Null deviance: 176.1082  on 999  degrees of freedom
    ## Residual deviance:   7.9899  on 998  degrees of freedom
    ## AIC: 701.71
    ##
    ## Number of Fisher Scoring iterations: 5
    ##
    ## Warning message:
    ## In eval(expr, envir, enclos) : non-integer #successes in a binomial glm!

Will the logistic regression coefficient estimates always be unbiased with respect to the logit-linear model's estimates?

• Note a theoretical distinction: with a binomial model applied to proportions you assume that trials behind each proportion are independent, that is, behind proportion 0.1 there "were", say, 10 independent trials yielding one success. For a linear model, 0.1 is simply a value, some arbitrary measure. Mar 7 '15 at 10:13

• I am somewhat doubtful in how far it even makes sense to apply a binomial model to proportions in the way done by the OP. After all, family=binomial implies that the dependent variable represents binomial counts -- not proportions. And how would glm know that 0.1 is like "one out of ten" and not "ten out of hundred"? While the proportion itself does not differ, this has major implications for how the standard error is computed. Mar 7 '15 at 10:24

• @Wolfgang - I realise (and mention in my post) that it's inappropriate to model continuous proportions of this sort with logistic regression. I was interested more in if/when/how the point estimates of coefficients differ. Mar 7 '15 at 10:42

• @Wolfgang, you are right, but it depends on the implementation.
Some programs will allow you to input proportions as the DV and 1s in place of the bases, while the dataset is weighted by the real bases. So it looks as if you analyze proportions, not counts. Mar 7 '15 at 11:10

• @ttnphns Similarly, in R one can enter proportions as the DV and supply a vector containing the numbers of trials to the weights arg (though this isn't what I was attempting in my post, where I have intentionally analysed the data incorrectly). Mar 7 '15 at 11:15

Perhaps this can be answered in the "reverse" fashion - i.e., when are they the same? Now, the IRLS algorithm used in logistic regression provides some insight here. At convergence you can express the model coefficients as:
$$\hat {\beta}_{logistic}=\left (X^TWX\right)^{-1} X^TWz$$
where $W$ is a diagonal weight matrix with ith term $W_{ii}=n_ip_i (1-p_i)$ and $z$ is a pseudo response whose ith element is $z_i=x_i^T\hat {\beta}_{logistic} +\frac {y_i -n_ip_i}{n_ip_i (1-p_i)}$.

Note that $var (z_i -x_i^T\hat {\beta})=W_{ii}^{-1}$, which makes logistic regression seem very similar to weighted least squares on a "logit type" of quantity. Note that all the relationships are implicit in logistic regression (e.g. $z$ depends on $\beta$ which depends on $z$). So I would suggest that the difference is mostly in using weighted least squares (logistic) vs unweighted least squares (OLS on logits). If you weighted the logits $\log (y)-\log (n-y)$ by $y (1-y/n)$ (where $y$ is the number of "events" and $n$ the number of "trials") in the lm() call, you would get more similar results.

• Impressive. Could you please show your last sentence by R code using the given simulated data? Thanks! Mar 11 '15 at 5:36

Please don't hesitate to point it out if I am wrong. First, I have to say that in the second fit you call glm in a wrong way! To fit a logistic regression by glm, the response should be a (binary) categorical variable, but you use p, a numeric variable! I have to say the warning is just too gentle to let users know their mistake...

And, as you might expect, you get similar estimates of coefficients from the two fits just by COINCIDENCE. If you replace logit.p <- a + b*x + rnorm(1000, 0, 0.2) with logit.p <- a + b*x + rnorm(1000, 0, 0.7), i.e., changing the standard deviation of the error term from 0.2 to 0.7, then the results of the two fits will be greatly different, although the second fit (glm) is still meaningless...

Logistic regression is used for (binary) classification, so you should have a categorical response, as is stated above. For example, the observations of the response should be a series of "success" or "failure", rather than a series of "probability (frequency)" as in your data. For a given categorical data set, you can calculate only one overall frequency for "response=success" or "response=failure", rather than a series. In the data you generate, there is no categorical variable at all, so it is impossible to apply logistic regression. Now you can see that, although they have a similar appearance, logit-linear regression (as you call it) is just an ordinary linear REGRESSION problem (i.e., the response is a numeric variable) using a transformed response (just like a square or square-root transformation), and logistic regression is a CLASSIFICATION problem (i.e., the response is a categorical variable; don't get confused by the word "regression" in "logistic regression").
Typically, linear regression is fitted through Ordinary Least Squares (OLS), which minimizes the squared loss for the regression problem; logistic regression is fitted through Maximum Likelihood Estimation (MLE), which minimizes the log-loss for the classification problem. Here is a reference on loss functions: Loss Function, Deva Ramanan. In the first example, you regard p as the response, and fit an ordinary linear regression model through OLS; in the second example, you tell R that you are fitting a logistic regression model by family=binomial, so R fits the model by MLE.

As you can see, in the first model you get a t-test and an F-test, which are classical outputs of an OLS fit for linear regression. In the second model, the significance test of the coefficient is based on z instead of t, which is the classical output of an MLE fit of logistic regression.

• Nice question (+1) and nice answer (+1). I learned something new. Mar 7 '15 at 9:21

• I would agree. However, "logistic regression is a CLASSIFICATION problem" might be misinterpreted in the sense that it is worthwhile only as long as it classifies well. That would be wrong to think, because a model that is "optimal" theoretically and in how it models probabilities may sometimes classify worse than a not-so-good model. Mar 7 '15 at 10:20

• @ttnphns Thanks for your comment! I think it is a convention to call it a classification problem if the response is categorical. Whether the model performs well or not is important, but maybe doesn't affect the naming. Mar 7 '15 at 10:31

• Thanks @JellicleCat - I'm aware that proportion data of this type are not suited to logistic regression, but was curious about the circumstances under which coefficient estimates would differ from those of OLS with logit-transformed proportions. Thanks for your example - it's clear that with increased variance, coefficient estimates diverge. Mar 7 '15 at 10:52
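Tying the two answers together, here is a rough R check (my own sketch, not code from either answer). It reruns the question's simulation with the error SD raised from 0.2 to 0.7 and compares plain OLS on the logits, the weighted OLS suggested in the first answer (treating each observation as a single "trial" is my assumption, under which the weight $y(1-y/n)$ reduces to $p(1-p)$), and the glm fit:

    set.seed(1)
    x <- rnorm(1000)
    a <- runif(1); b <- runif(1)
    logit.p <- a + b*x + rnorm(1000, 0, 0.7)       # error SD raised from 0.2 to 0.7
    p <- plogis(logit.p)

    coef(lm(qlogis(p) ~ x))                        # unweighted OLS on the logits
    coef(lm(qlogis(p) ~ x, weights = p*(1 - p)))   # weighted OLS, the modification suggested above
    coef(glm(p ~ x, family = binomial))            # logistic "fit" of the proportions (with a warning)

With the larger error SD the OLS and glm estimates are no longer close, which is exactly the divergence the second answer describes; the first answer's claim is that the weighted least-squares fit should land nearer the glm coefficients than the unweighted one.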
https://asmedigitalcollection.asme.org/mechanismsrobotics/article/14/4/041009/1140690/Geometry-and-Kinematics-of-Cylindrical-Waterbomb
## Abstract Folded surfaces of origami tessellations have attracted much attention because they often exhibit nontrivial behaviors. It is known that cylindrical folded surfaces of waterbomb tessellation called waterbomb tube can transform into peculiar wave-like surfaces, but the theoretical reason why wave-like surfaces arise has been unclear. In this paper, we provide a kinematic model of waterbomb tube by parameterizing the geometry of a module of waterbomb tessellation and derive a recurrence relation between the modules. Through the visualization of the configurations of waterbomb tubes under the proposed kinematic model, we classify solutions into three classes: cylinder solution, wave-like solution, and finite solution. Through the stability and bifurcation analyses of the dynamical system, we investigate how the behavior of waterbomb tube changes when the crease pattern is changed. Furthermore, we prove the existence of a wave-like solution around one of the cylinder solutions. ## 1 Introduction Origami tessellations are origami obtained by tiling translational copies of a modular crease pattern. Origami tessellations can be folded from flat sheets of paper and transform into various shapes. Even though origami tessellations are corrugated polyhedral surfaces, they sometimes approximate smooth surfaces of various curvatures, including synclastic and anticlastic surfaces [1]. Such macroscopic surfaces exhibit nontrivial behaviors we cannot expect from the periodicity of its crease pattern and recently attracted much attention of scientists and engineers. For example, Schenk and Guest [2] showed that Miura-ori forms anticlastic surfaces while egg-box pattern forms synclastic surfaces through numerical modeling using bar-and-hinge model. Nasser et al. [3,4] analyzed the approximated surface of Miura-ori and egg-box pattern based on the knowledge of differential geometry. In this paper, we focus on folded surfaces of waterbomb tessellation, specifically, cylindrical folded surfaces called waterbomb tube. Waterbomb tube is known as origami work “Namako” [5] or more commonly, the name “Magic Ball.” Based on the property of waterbomb tube, various engineering applications are attempted, such as origami-stent graft [6], earthworm-like robot [7], soft gripper [8], and robot wheel [9] using the property that the radius of the tube can vary. The dynamics of the stent graft and the wheel inspired by waterbomb tube has been analyzed [10,11]. Also, the kinematics and structural deformation of waterbomb tube has been studied [1214]. One of the most interesting phenomena reported on waterbomb tube is that it can form wave-like surfaces [15,16] like in Fig. 1. This is the unique phenomenon not yet observed in the folded surfaces of other tessellations such as Miura-ori and egg-box pattern; however, the theoretical reason why wave-like surfaces arise has been unclear. Fig. 1 Fig. 1 Close modal Here, our objective is to know “why” waterbomb tube produces wave-like surfaces, i.e., to clarify the mathematics behind the behavior. In this paper, we model waterbomb tube as the sequence of modules and focus on its recurrence behavior instead of simultaneously solving the network of 6R spherical linkages as in [12,13,15,17]. We first provide the kinematic model of each module of waterbomb tube and the relation with adjacent modules to obtain the recurrence relation dominating the folded states of modules based on the rigid origami model with symmetry assumption (Sec. 2). 
Then, through computation and visualization of the folded states of waterbomb tube using the recurrence relation, we observe that the solutions fall into three types: cylinder solution, wave-like solution, and finite solution (Sec. 3). Furthermore, by applying the stability and bifurcation analyses of the dynamical system of waterbomb, we illustrate how the behavior of the system, i.e., whether waterbomb tube can be wave-like surface or not, changes when the crease-pattern parameters are changed (Sec. 4). Finally, we prove the existence of wave-like solution by applying theorems of discrete dynamical systems (Sec. 5). ## 2 Model ### 2.1 Definition. First, we introduce parameters of the crease pattern of waterbomb tessellation. We can obtain the crease pattern of waterbomb tessellation by tiling translational copies of a unit module. The entire pattern is controlled by four parameters (Fig. 2): a unit module shown in Fig. 2, left, is controlled by two parameters α, β (α ∈ (0, 90°), β ∈ (0, 180° − α)), and the repeating number of modules in column and row directions is given by two integers N and M ($N∈Z>0,M∈Z>0$). Fig. 2 Fig. 2 Close modal We consider a waterbomb tube as the rigid folded state of the crease pattern of waterbomb tessellation, such that the left and right sides in Fig. 2 are connected to form a cylindrical form. In our model, we assume N-fold symmetry about an axis and mirror-symmetry about N planes passing through the axis as in the existing research [13] (see Fig. 3). However, unlike the previous research [13], we do not assume mirror symmetry with respect to a plane perpendicular to the axis. Here, the behavior of waterbomb tube is governed by three parameters (α, β, N), because M specifies the finite subset interval of infinitely continuing waterbomb tube. Fig. 3 Fig. 3 Close modal ### 2.2 Kinematics of Module. First, we consider the degrees of freedom (DOF) of each module. Because the kinematic DOF of n-valent vertex in general is n − 3 [18,19], the DOF of waterbomb module (without symmetry) is 3. With the assumed mirror symmetry, the DOF drops by one more degree. Therefore, the module has 6 − 3 − 1 = 2 DOF. Here, we focus on the module with correct mountain and valley (MV) assignment and pop-up pop-down assignment, where we assume that the center vertex of the module is popped-down, i.e., the sum of fold angles (positive for valley and negative for mountain) are positive, and other vertices are popped-down. The definition of pop-up and pop-down follows that of [20], and it is known that rigid origami cannot continuously transform from popped-up and popped-down state without being completely flat. To represent the folded states of modules, we consider the isosceles trapezoid formed by two pairs of mirror reflected vertices of the module (Figure 4, left). The parallel bottom and top edges are aligned in the hoop direction, while the hypotenuses are along the longitudinal direction. The length of hypotenuses of the trapezoid is fixed at 2cosα, while the half lengths of bottom and top edges, denoted by x and y, change in (0, sinα) by folding the module. For each given pair of x and y in (0, sinα), there exists eight different states of a module (see Fig. 5). However, only one of the folded state has the correct MV and pop-up/pop-down assignment for the following reason, thus we may safely use x and y as the parameters to uniquely represent the folded state of the module. Fig. 4 Fig. 4 Close modal Fig. 5 Fig. 
To show the uniqueness, first notice that the top and bottom modules illustrated in the same column in Fig. 5 are the mirror reflection of each other through the plane containing the isosceles trapezoid, so the top states are popped-down and the bottom states are popped-up. Therefore, the folded state needs to be one of the four top states. Within the top row we compare the first and second left configurations. Vertex P′ is the reflection of vertex P through the plane defined by vertices O, A, and B, so crease OP is mountain and OP′ is valley; thus vertex P (on the counterclockwise side of cycle OAB) is selected as the correct side. In a similar manner, vertex Q is chosen over Q′. Thus, the top-left configuration with P and Q is chosen. Finally, we can show that the top-left configuration indeed has the correct MV-assignment also for the other creases; this can be verified by checking that P, D, and C are on the counterclockwise side of OAB and that P is on the symmetry plane between C and D. Therefore, only the top-left configuration has the correct MV-assignment.

### 2.3 Kinematics of Entire Tube.

Next, we consider representing the folded states of an entire waterbomb tube. First, we take out the mth hoop of the waterbomb tube. Because of rotational symmetry, the configuration of the mth hoop can be represented by using the parameters xm, ym for each congruent module and the common dihedral angle between isosceles trapezoids 2ρm (Fig. 4, right). In order for the mth hoop to be a closed cylinder, we have the following constraint (see Appendix A for the derivation):
$$\cos\rho_m=\sqrt{\frac{-(x_m-y_m)^2\sec^2\alpha+4\sin^2\frac{\pi}{N}}{4-(x_m-y_m)^2\sec^2\alpha}} \qquad (1)$$
Because 2ρm can be derived as a function of xm and ym, the configuration of each hoop can be uniquely represented by the parameters xm, ym. However, note that not all values of (xm, ym) represent valid states, as cos ρm may become a nonreal number for certain pairs of values. Therefore, we can represent folded states of waterbomb tube by the following sequences (Fig. 4, right):
$$(x_m)_{m=0,1,\ldots,M-1},\quad (y_m)_{m=0,1,\ldots,M-1} \qquad (2)$$
In order that two consecutive hoops are compatible, there are constraints between xm, ym and xm+1, ym+1 that need to be satisfied. The constraints result in the significant property of waterbomb tube that the state of the mth hoop uniquely determines that of the (m + 1)th hoop. This is confirmed by the fact that any vertex of the (m + 1)th modules can be solved by sequentially applying three-sphere intersections three times (see Fig. 6). In each step of a three-sphere intersection, a position-unknown vertex X incident to three position-known vertices V1, V2, V3 is fixed by constructing three spheres S1, S2, S3 of radii XV1, XV2, XV3 centered at V1, V2, V3, respectively, and taking their intersection. In each step, there are at most two candidates for X, denoted by X and X′. They are the mirror reflection of each other through the plane defined by V1, V2, and V3. Thus, the MV-assignments of creases XV3 and X′V3 are the inversions of each other, and we choose the one consistent with the MV-assignments shown in Fig. 2 (for more details, see Appendix B). Therefore, the (m + 1)th module with consistent MV-assignment is uniquely determined from the mth module. Note that the existence of a solution is not guaranteed because the intersection of three spheres may be empty.
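Eq. (1) is easy to probe numerically. The following R sketch (my own, not the authors' code; the square root follows the Appendix A derivation as reconstructed above) reports cos ρm for a module state (x, y) and returns NA when the hoop cannot close, i.e., when the right-hand side is not a real number in [0, 1]:

    ## Hoop-closure check based on Eq. (1); illustrative sketch only.
    cos_rho <- function(x, y, alpha, N) {
      num   <- 4 * sin(pi / N)^2 - (x - y)^2 / cos(alpha)^2
      den   <- 4 - (x - y)^2 / cos(alpha)^2
      ratio <- num / den
      if (!is.finite(ratio) || ratio < 0 || ratio > 1) return(NA_real_)  # no valid closed hoop
      sqrt(ratio)
    }

    # Example with the crease-pattern parameters of Fig. 7: alpha = 45 deg, N = 24
    cos_rho(0.40, 0.45, alpha = pi / 4, N = 24)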
When the solution exists, the relationship between the mth hoop and the (m + 1)th hoop can be formulated by the following recurrence relation:
$$\begin{cases}x_{m+1}=f(x_m,y_m;\alpha,\beta,N)\\ y_{m+1}=g(x_m,y_m;\alpha,\beta,N)\end{cases} \qquad (3)$$
where f and g are complicated nonlinear functions whose parameters are α, β, and N, but which can be derived analytically (see Appendix B). Consequently, waterbomb tube can be interpreted as a 2-DOF mechanism for an arbitrary M, whose entire shape is determined if initial values x0, y0 (0 < x0, y0 < sin α) are given.

### 2.4 Kinematics Interpreted As Dynamical Systems.

Hereafter, we clarify the mathematics behind wave-like surfaces by interpreting the recurrence relation (3) as a discrete dynamical system. Recurrence relation (3) is categorized as a nonlinear discrete dynamical system in two dimensions, as f and g are nonlinear functions of the two variables xm, ym. Also, the system (3) is an autonomous system because the functions f, g are independent of the index m. An important property of the system is reversibility. Because the module of waterbomb tessellation is point symmetric about the central vertex (see Fig. 2, left), the following equations hold $(m=1,\ldots,M-1)$:
$$\begin{cases}x_{m-1}=g(y_m,x_m)\\ y_{m-1}=f(y_m,x_m)\end{cases} \qquad (4)$$
From the above formula, if we define the map $F:(x,y)\mapsto(f(x,y),g(x,y))$ and the linear involution $G:(x,y)\mapsto(y,x)$, the following holds:
$$G=F\circ G\circ F \qquad (5)$$
A system for which, as in Eq. (5), there exists an involution reversing the time direction is called a reversible dynamical system [21], and the involution G is the symmetry of the system.

## 3 Visualization of Configuration

To understand the 2-DOF kinematics of waterbomb tube, we computed the sequences (2) under different pairs of initial values (x0, y0) and visualized their configurations as sequences of points in the x, y-plane, i.e., the phase space. The sequence $\{(x_m,y_m)=F^m(x_0,y_0)\mid m\in\mathbb{Z}\}$ is called the orbit of (x0, y0), and the plot of it is called a phase diagram. Here, we fix the crease-pattern parameters (α, β, N, M) = (45°, 34.6154°, 24, 100) and show the phase diagram under this parameter in Fig. 7. Under this parameter, we can observe all possible types of solutions, which we explain below.

### 3.1 Three Types of Solutions.

It can be observed that the characteristics of the solution change depending on the given initial values: the solutions can be classified into three types; a cylinder solution, where the tube forms a constant-radius cylinder, a wave-like solution, where the corresponding waterbomb tube forms a wave-like surface, and a finite solution, where the system fails to obtain a solution after some iterations. Note that the behavior of the system (3) can change under different crease-pattern parameters, as we discuss in Sec. 4 (i.e., we cannot observe some types of solutions under different parameters). However, we can still classify solutions under different parameters into these three types.

#### 3.1 Cylinder Solution.

The top and bottom waterbomb tubes in Fig. 7, left, form constant-radius cylinders that correspond to the two fixed points shown in black on the phase space. In this type of solution the folded states of the modules composing waterbomb tube are fixed (i.e., $(x_0,y_0)=(x_1,y_1)=\ldots=(x_{M-1},y_{M-1})$), and the corresponding isosceles trapezoids become rectangles. Hence, if (w, w) (0 < w < sin α) represents this uniform state, this type of solution can be represented or defined by the following equation:
$$f(w,w)=g(w,w)=w \qquad (6)$$
Hereafter, we call (w, w) satisfying Eq. (6) a cylinder solution of the crease pattern under parameters (α, β, N, M). Remarkably, cylinder solutions defined by Eq.
(6) are called symmetric fixed points of the system (3) which is invariant under the map F and G. Cylinder solutions shown in black in Fig. 7 are numerically calculated by solving Eq. (6) under crease-pattern parameter (α, β, N, M) = (45°, 34.6154°, 24, 100), which two solutions exist and each have different radius. #### 3.1 Wave-Like Solution. The second case corresponds to third to fifth from the top of waterbomb tube shown in Fig. 7, left. The wave-like folded state corresponds to the sequence of points on the phase space moving in a clockwise direction. We call such solution a wave-like solution. In a wave-like solution, the point continues rotating around the same closed curve without divergence or convergence. Note that the points along the closed curve does not coincide, thus the folded state is not periodic. However, the overall folded shape seems to approximate a smooth periodic wave-like surface. These sequences and corresponding cycles are nested around one of two cylinder solutions (smaller one). In addition, these plots of nested closed curves seemed to be symmetric about the graph of ym = xm. The symmetry of the orbits is the result of the reversibility of the system. Generally, the orbit of reversible systems initiated from x0 = (x0, y0), that is defined as the set ${x=Fm(x0)|m∈Z}$, is invariant under the symmetry G when the orbit has a point which is invariant under G [21]. For this reason, the orbits shown in Fig. 7, which initial terms x0 are invariant under G, are actually symmetric about the graph of ym = xm. #### 3.1 Finite Solution. The second waterbomb tube from the top in Fig. 7, left, corresponds to the third type, where its points are plotted just a little outside of the above-mentioned concentric plots. The reason plots stop at the middle is that, at some index m, (xm, ym) deviate from the region (0, sin α) × (0, sinα), that is, there is no state of modules corresponding to parameter (xm, ym). In other words, finite solution appears in the case that the intersection of three spheres become empty at some step, when computing vertices of modules as shown in Fig. 6. Specifically, in the sequence corresponding to second waterbomb tube in Fig. 7, the numerical value of ninth term (x8, y8) is (0.689573, 0.724059), which y8 is greater than sin α = sin 45° ≈ 0.707107. So, these solution terminates at some point, and only a finite portion of the paper can be folded along this solution. ### 3.2 Kinematics. From this visualization, we can observe a single disk region in the phase space as the set of initial values yielding solutions for any m (the gray region in Fig. 7). The region is the union of all wave-like solutions and the smaller cylinder solution. As the region is the configuration space of the mechanism with m → ∞, there exists a 2-DOF rigid folding motion. The motion can be represented by the “amplitude” and “phase” of wave-like surfaces. As the initial configuration gets closer to the cylinder solution, the amplitude of wave shapes gets smaller. We can change the phase by rotating the initial configuration along the closed curve (Supporting Movie).1 In the example state shown in Fig. 7, each initial value is taken along x0 = y0, so the left-most module forms the “valley” of the wave. ### 3.3 Outline of Subsequent Analysis. From Fig. 7, we found that the system behaves differently around the different cylinder solutions. In Sec. 
4, we perform a stability analysis to classify the symmetric fixed points, i.e., cylinder solutions, by the behavior of the system around them. We also investigate how the cylinder solutions emerge and disappear and how the stability changes when the crease-pattern parameters are changed. We conjecture that quasi-periodic solutions exist around a neutrally stable cylinder solution, which explains the nested closed solutions representing wave-like solutions. In Sec. 5, we give a proof for the conjecture for a particular crease pattern. ## 4 Stability and Bifurcation Analysis In this section, we investigate how the behavior of the system around cylinder solutions changes when the crease-pattern parameters are changed. For this, we numerically analyze the stability of the cylinder solutions and visualize the changes of the stability using the bifurcation diagram. Figure 8 shows the concept of the analysis in this section. From the bifurcation diagram, we found that there are some critical values of crease-pattern parameters where the stability or the behavior of the system changes. Note that the result of this section is obtained numerically (not symbolically) using Mathematica based on the explicit form of system (3). ### 4.1 Stability Analysis. First, we introduce the concept of stability and classify cylinder solutions. In order to know the stability of the cylinder solutions, we perform the linear stability analysis. The linear stability analysis is the stability analysis on the following linearized system at the cylinder solutions w = (w, w): $xm+1−w≈DF(w)(xm−w)$ (7) where DF(w) is the Jacobian matrix of the system at the cylinder solutions defined by $DF(x,y)=[∂xf(x,y)∂yf(x,y)∂xg(x,y)∂yg(x,y)]$ (8) The linearized behavior of the nonlinear system around the fixed point can be characterized by using the eigenvalues of the Jacobian matrix at that point. In a typical case of the system (3), we found that the eigenvalues of the Jacobian matrix at the symmetric fixed point (w, w) can be classified into two types. In the first type, the eigenvalues are complex conjugate whose magnitude equals to 1 at w. Such a point is called elliptic fixed point, where the linear system (7) has concentric closed elliptical orbits around the origin. The linear stability around elliptic fixed point is neutrally stable, meaning that the orbit around the fixed point keeps some distance from that point called center. In the second type, the eigenvalues are real, and one absolute value is less than 1 and the other is greater than 1. Such a point is called a hyperbolic fixed point. The linear stability at a hyperbolic fixed point is unstable, and the fixed point is called a saddle, i.e., meaning that it has both stable and unstable directions.2 In Fig. 8, the top-left figure shows the two different cylinder configurations with larger and smaller radius under (α, β, N) = (45°, 40°, 8) represented by the numerically calculated cylinder solutions (w1, w1) and (w2, w2), respectively. The linear stability of (w1, w1) and (w2, w2) are unstable saddle and neutrally stable center, respectively. Fig. 8 Fig. 8 Close modal A fixed point classified as an unstable saddle point in the linearized system (7) is also an unstable saddle point in the original nonlinear system (see Sec. 5). However, a fixed point classified as a neutrally stable fixed point in the linearized system is not necessarily neutrally stable in the original system. 
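In practice, the classification just described is a small eigenvalue computation on the 2 × 2 Jacobian. The following R sketch (my own, with made-up example matrices rather than actual waterbomb Jacobians) shows the test:

    ## Classify a fixed point of a 2-D map from its Jacobian, per the criteria above.
    classify_fixed_point <- function(J, tol = 1e-6) {
      ev <- eigen(J)$values
      if (all(abs(Im(ev)) > tol) && all(abs(Mod(ev) - 1) < tol)) {
        "elliptic fixed point (center, neutrally stable in the linearized system)"
      } else if (all(abs(Im(ev)) < tol) && any(Mod(ev) > 1) && any(Mod(ev) < 1)) {
        "hyperbolic fixed point (saddle, unstable)"
      } else {
        "neither elliptic nor a saddle"
      }
    }

    # A rotation-like matrix: complex conjugate eigenvalues of modulus 1 -> elliptic
    classify_fixed_point(matrix(c(0.8, -0.6, 0.6, 0.8), nrow = 2))
    # Real eigenvalues with one |lambda| > 1 and one |lambda| < 1 -> saddle
    classify_fixed_point(diag(c(4.7, 0.21)))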
Therefore, the linear stability analysis alone does not guarantee the existence of nested closed solutions. Nevertheless, from the phase diagram in Fig. 8, we found that there are nested closed plots around (w2, w2). This leads us to conjecture that if the linearized system (7) has a neutrally stable symmetric fixed point, it is also a neutrally stable symmetric fixed point in the original system, around which there are quasi-periodic solutions. We show the conjecture is correct for a particular crease pattern in Sec. 5. ### 4.2 Bifurcation Analysis. Next, we analyze how the linear stability of cylinder solutions changes when β changes under fixed α and N. For the visualization of the result, we use a bifurcation diagram. The bottom of Fig. 8 shows the bifurcation diagram for α = 45°, N = 8. To create bifurcation diagram for given α and N, we move the remaining parameter β in the range (0, 180° − α), and compute the cylinder solutions (w, w) of the crease pattern by solving Eq. (6) at each value. Here, we transform parameter w to $Θ≡arcsin(w/(sinα))∈(0,90∘)$, the half of the dihedral angle formed by the two isosceles triangles (Fig. 8), so that the plot range is independent of α. Then, we perform the linear stability analysis and plot the cylinder solutions in β–Θ planes by solid and dotted line if it is neutrally stable and unstable, respectively. In Fig. 8, the vertical line β = 40° intersects with the bifurcation diagram at two different points, (40°, Θ1) on the dotted line and (40°, Θ2) on the solid line, shown in red and blue, respectively. The red and the blue points correspond to the unstable cylinder with a larger radius and the neutrally stable cylinder with a smaller radius, respectively. ### 4.3 Result. Now, we describe how the linear stability of the cylinder solutions depends on the parameters using the bifurcation diagram. First, we observe the bifurcation diagram for fixed N = 8. By varying α, we found the critical value α* ≈ 60.4° at which the appearance of the bifurcation diagram changes. So, we use the representative cases α = 45° < α* and α = 65° > α* for further observing the bifurcation diagram for these cases. We found some critical values of β where the number of cylinder solutions or their linear stability change, i.e., whether waterbomb tube can be wave-like surface changes. We also explain the differences between the two cases. Finally, we show that we can apply the observations of the case of N = 8 for the other N. Here, we fix N = 8. Under fixed N, we can create the 3D bifurcation diagram by varying the value α in range (0, 90°) and arranging the 2D bifurcation diagrams for each α (right in Fig. 9). From the 3D diagram, we found the critical value α* ≈ 60.4° where the behavior of the bifurcation diagram changes. The behaviors of the bifurcation diagram for α < α* are represented by (α, N) = (45°, N = 8) (blue bifurcation diagram on the top-left in Fig. 9), while the bifurcation diagram for α > α* is represented by (α, N) = (65°, N = 8) (red bifurcation diagram on the bottom-left in Fig. 9). Fig. 9 Fig. 9 Close modal First, we describe the behavior of the representative case of (α, N) = (45°, 8) described in the blue bifurcation diagram on the top-left in Fig. 9. As we change the remaining variable β, there are some critical value of β denoted by β*1, β*2, where the linear stability of a cylinder solution changes. 
As we increase β from 0, at β = β*1 ≈ 35.8°, the graph of y = f(w, w) and y = w touches and the fixed point that is saddle and center generated (saddle-node bifurcation). The branch of the fixed point that is saddle extends from the bottom of the graph at β = 45°. Then, at β = β*2 ≈ 46.1°, saddle-node bifurcation occurs again where the center and the saddle coincide and disappear. Thus, the system (3) has no cylinder solutions under β ∈ (0, β*1), two cylinder solutions (center and saddle) under β ∈ (β*1, 45°), three cylinder solutions (center and two saddle) under β ∈ (45°, β*2), and only one cylinder solution (saddle) under β ∈ (β*2, 135°). Here, Fig. 10 shows the part of the bifurcation diagram and the phase diagrams for β = 35°, 40°, 45.5°, and 50° belonging to (0, β*1), (β*1, 45°), (45°, β*2), and (β*2, 135°), respectively. Under β = 40° ∈ (β*1, 45°) and β = 45.5° ∈ (45°, β2°), there are nested cyclic plots around the center, which suggests the existence of a wave-like configurations. On the other hand, there are no cyclic plots in the other two diagrams, i.e., all solutions are finite solutions; therefore, waterbomb tube under $α=45∘,N=8,β∈(0,β1*)⋃(β2*,135∘)$ cannot be wave-like surfaces. Thus, whether waterbomb tube can be wave-like surface or not changes at the bifurcation values. Fig. 10 Fig. 10 Close modal Next, for the case of (α, N) = (65°, 8), the red diagram in Fig. 9 shows that there are three critical values for β: β*1, β*2, and β*. As we increase β from 0, saddle-node bifurcation occurs at β = β*1 ≈ 27.8° in the same way as the α = 45°; however, as β is further increased, the linear stability changes at β = β* ≈ 52.3°, which is not observed in the case of α < α*. Furthermore, the branch of the center fixed point extends from the bottom of the graph at β = 65°. Finally, at β = β*2 ≈ 65.5°, saddle-node bifurcation occurs again where the saddle and the center coincide and disappear. From the above, in the case of α = 65°, the system (3) has zero, two (center and saddle), two (two saddles), three (two saddles and center), and one (saddle) cylinder solutions when β ∈ (0, β*1), (β*1, β*), (β*, 65°), (65°, β*2), and (β*2, 115°), respectively. Figure 11 shows the part of the bifurcation diagram and the phase diagrams for β = 25°, 40°, 49°, 57°, 65.1°, and 70° belonging to (0, β*1), (β*1, β*), (β*1, β*), (β*, 65°), (65°, β*2), and (β*2, 115°), respectively. In the top-right, middle-left, and bottom-left phase diagrams, there are the plots of wave-like solutions around the center; therefore, waterbomb tube has wave-like configurations. However, the wave-like configurations placed in each diagram self-intersect at some parts in the same way as the configuration in the bottom on the left side of Fig. 11, or this is caused by too small radius as for β = 65.1°, which means these wave-like configurations are not realizable. As for the middle-left diagram, we found the unstable nonsymmetric fixed points (x, y) ≈ (0.698488, 0.375731), (x, y) ≈ (0.375731, 0.698488) satisfying f(x, y) = g(x, y) = (x, y), which is not observed in the case of α = 45°, but the configurations of the module and the tube corresponding to the nonsymmetric fixed points are self-intersecting complicatedly as shown in Fig. 11. The other three diagrams, top-left, middle-right, and bottom-right on the right side of Fig. 11, having no wave-like solutions, which suggests that waterbomb tube cannot be wave-like surfaces if $β∈(0,β1*)⋃(β*,65∘)⋃(β2*,115∘)$. 
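Phase diagrams like those in Figs. 10 and 11 come from iterating the recurrence (3) from many initial values and stopping when the state leaves the admissible region. A schematic R sketch of that loop is below (my own illustration: the maps f and g are placeholders that would have to be implemented, e.g., from the Appendix B construction):

    ## Schematic orbit computation for the recurrence (3); f and g must be supplied.
    iterate_orbit <- function(x0, y0, f, g, alpha, beta, N, M = 100) {
      xs <- numeric(M); ys <- numeric(M)
      xs[1] <- x0; ys[1] <- y0
      for (m in 2:M) {
        xn <- f(xs[m - 1], ys[m - 1], alpha, beta, N)
        yn <- g(xs[m - 1], ys[m - 1], alpha, beta, N)
        # a "finite solution": the next hoop does not exist, so stop the iteration
        if (is.na(xn) || is.na(yn) || xn <= 0 || yn <= 0 ||
            xn >= sin(alpha) || yn >= sin(alpha)) {
          return(list(x = xs[1:(m - 1)], y = ys[1:(m - 1)], finite = TRUE))
        }
        xs[m] <- xn; ys[m] <- yn
      }
      list(x = xs, y = ys, finite = FALSE)
    }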
Hence, bifurcation values are closely related to the possible forms of waterbomb tubes as in the case of α = 45°. Fig. 11 Fig. 11 Close modal For cases other than N = 8, the top, middle, and bottom figure in Fig. 12 shows the 3D bifurcation diagrams for N = 6, N = 30, and N = 100, respectively. Comparing 3D diagrams under the different values of N in Figs. 9 and 12, the shape of the diagrams and the critical values α* depends on the values of N. However, the pattern formed by the dotted and solid lines is the same for the four different values of N; therefore, we can apply the features described for N = 8, i.e, the existence of critical values such as α*, β*1, β2*, β* and their relation with the behavior of waterbomb tube for N = 6, 30, and 100. Fig. 12 Fig. 12 Close modal ## 5 Proof of Wave-Like Solution In this section, we claim that there are nested wave-like solution around the neutrally stable cylinder solution and they are quasi-periodic orbits of the system; and we give the proof of existence for fixed crease-pattern parameters (α, β, N) = (45°, 45°, 6). We also show the system behaves differently around the saddle cylinder solution. For the proof of quasi-periodic orbits, we use the Kolmogorov–Arnold–Moser (KAM) theorem that guarantees the equivalence of the behaviors in the nonlinear and linearized systems under some conditions as described later. Figure 13 visualizes the solutions under (α, β, N) = (45°, 45°, 6). The function f under this crease pattern is given as follows: $f(x,y)=14(y2−1)((x−y)2−2)×(x(3((2y2−1)(x2+y2−1)+1)+2y(2y2−1)(2(x−y)2−1))−y(2y((2y2−1)(2(x−y)2−1)−2(2(x−y)2−1)(x2+y2−1))+3((2y2−1)(x2+y2−1)−3)+33y2)+2((2y2−1)(2(x−y)2−1)−(2(x−y)2−1)(x2+y2−1))+(−3)x2y)$ (9) We derived the symbolic forms of cylinder solutions, Jacobian matrix, and its eigenvalues for linear stability analysis by using Mathematica. We obtain two cylinder solutions analytically as the following by solving Eq. (6) for w (0 < w < sin 45°): $w=1410−33±−1+43$ (10) Hereafter, let w1 be the solution with a larger radius, and w2 be another one with a smaller radius. Fig. 13 Fig. 13 Close modal Next, we derive the Jacobian matrix DF(x, y) at cylinder solutions (x, y) = (wi, wi) (i = 1, 2). Note that ∂xg(wi, wi) and ∂yg(wi, wi) can be computed from symbolic forms of ∂xf(wi, wi), ∂yf(wi, wi) using chain rule: $∂g∂x(wi,wi)=−∂f∂y(wi,wi)$ (11) $∂g∂y(wi,wi)=1−(∂f∂y(wi,wi))2∂f∂x(wi,wi)$ (12) The elements of the Jacobian matrices DF(wi, wi) (i = 1, 2) are computed as $(DF(w1,w1))1,1=26243+3843+609+47(DF(w1,w1))1,2=18(−153−51123−8845+28)(DF(w1,w1))2,1=18(153+51123−8845−28)(DF(w1,w1))2,2=1104(5753+403884323−69922045−688)$ (13) and $(DF(w2,w2))1,1=18(−33+963−165+8)(DF(w2,w2))1,2=18(−153+51123−8845+28)$ (14) $(DF(w2,w2))2,1=18(153−51123−8845−28)(DF(w2,w2))2,2=1104(5753−403884323−69922045−688)$ (14) Finally, we consider eigenvalues of the derived Jacobian matrices (13), (14) and the linear stability of corresponding cylinder solutions. Table 1 shows the numerical values of the eigenvalues and the linear stability of the cylinder solutions. The eigenvalues for both cylinder solutions are given as the roots of the following polynomial: $13w8+292w7+161868w6−1083460w5+1704206w4−1083460w3+161868w2+292w+13$ (15) The eigenvalues for larger (x, y) = (w1, w1) are real, the absolute values of which are greater than 1 and smaller than 1, respectively; therefore, the type of cylinder solution (w1, w1) as the fixed point of the system is saddle. 
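The numbers quoted here (and summarized in Table 1 below) can be spot-checked in a few lines of R. This is my own verification sketch, and it assumes Eqs. (10) and (15) were transcribed faithfully above:

    ## Cylinder solutions from Eq. (10): w = (1/4)*sqrt(10 - 3*sqrt(3) +/- sqrt(-1 + 4*sqrt(3)))
    w1 <- sqrt(10 - 3*sqrt(3) + sqrt(-1 + 4*sqrt(3))) / 4   # larger radius, ~0.673
    w2 <- sqrt(10 - 3*sqrt(3) - sqrt(-1 + 4*sqrt(3))) / 4   # smaller radius, ~0.385
    c(w1 = w1, w2 = w2, upper = sin(pi/4))                  # both lie in (0, sin 45 deg)

    ## Moduli of the roots of the degree-8 polynomial (15), coefficients in increasing degree
    coefs <- c(13, 292, 161868, -1083460, 1704206, -1083460, 161868, 292, 13)
    sort(round(Mod(polyroot(coefs)), 3))
    # among them: the real pair ~4.69 and ~0.212 (saddle) and complex pairs of modulus 1 (center)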
Hence, both the linear and nonlinear stabilities of (w1, w1) are unstable, so there are no cyclic solutions around (w1, w1).

Table 1 Eigenvalues and stability at the two cylinder solutions

| Cylinder solution | Eigenvalues | Type | Stability |
| --- | --- | --- | --- |
| $w_1=\tfrac{1}{4}\sqrt{10-3\sqrt{3}+\sqrt{-1+4\sqrt{3}}}$ | $\lvert\lambda_1\rvert=4.69\ldots>1$, $\lvert\lambda_2\rvert=0.212\ldots<1$ $(\lambda_1,\lambda_2\in\mathbb{R})$ | Saddle | Unstable |
| $w_2=\tfrac{1}{4}\sqrt{10-3\sqrt{3}-\sqrt{-1+4\sqrt{3}}}$ | $\lvert\lambda\rvert=\lvert 0.855\ldots\pm i\,0.517\ldots\rvert=1$ $(\lambda\in\mathbb{C})$ | Center | Neutrally stable |

On the other hand, at the smaller cylinder solution (x, y) = (w2, w2) with a smaller radius, DF(w2, w2) has complex conjugate eigenvalues whose magnitudes are exactly 1. Hence, the type of the cylinder solution (w2, w2) is center and the linear stability of (w2, w2) is neutrally stable. Nevertheless, because (w2, w2) is an elliptic fixed point, it is not yet guaranteed that the original nonlinear system (3) has nested closed orbits around (w2, w2), as we mentioned in Sec. 4.1.

Now, to prove the existence of the nested closed symmetric quasi-periodic orbits around (w2, w2) in the original system, we use one derived form of the KAM theorem. The KAM theorem guarantees the existence of such orbits, called KAM curves, around a fixed point of a reversible dynamical system if the fixed point is an elliptic symmetric fixed point and nonresonant, that is, its eigenvalues are not roots of unity [21,22]. We already know that the system is reversible and that (w2, w2) is an elliptic symmetric fixed point. Also, (w2, w2) is nonresonant for the following reason. The eigenvalues of DF(w2, w2) are roots of the irreducible polynomial (15). Some coefficients of the minimal polynomial, obtained as the monic form of (15), are not integers (e.g., the coefficient of the seventh-order term is 292/13), so the eigenvalues of DF(w2, w2) are not algebraic integers, which implies that they are not roots of unity. Thus, we can apply the KAM theorem to the system at (w2, w2), and the existence of wave-like solutions around that point is proved.

## 6 Conclusion

In this paper, we revealed a part of the mathematical structure behind wave-like surfaces of waterbomb tube. First, we have described the kinematic model representing the configuration of each module of waterbomb tube and its entire shape to derive the recurrence relation of two variables dominating its kinematics. Then, based on the observation of the plots on the phase space corresponding to the folded states, we classified solutions into three types to identify the wave-like solutions surrounding one of two cylinder solutions. We applied the knowledge of discrete dynamical systems to the recurrence relation, and investigated how the stability of the symmetric fixed point of the system, i.e., the behavior around cylinder solutions, changes when we change the crease-pattern parameters. Then, we observed that around the neutrally stable cylinder solution there exist nested wave-like solutions, and found critical values where the behavior of the system changes. Finally, we gave the proof of the existence of quasi-periodic solutions, i.e., wave-like solutions, around the neutrally stable cylinder solution. Note that we gave proof of the existence of quasi-periodic solutions only for fixed crease-pattern parameters; characterizing the existence of such solutions is a future work of this paper.
Also, the physical interpretation, i.e., the relation between the mathematical properties of solutions and the mechanical properties are still left to be investigated. More generally, our proposed approach based on the geometry of a module and applying the knowledge of dynamical system to the recurrence relation can be useful to explain behaviors of various origami tessellations. The analysis of tessellations consisting of modules with higher or lower DOF is of our interest. ## Footnotes 2 Here, stable or unstable refers to the stability of the dynamical systems as m increases. It does not refer to mechanical stability. ## Acknowledgment This work is supported by JST PRESTO (Grant No. JPMJPR1927). ### Appendix A: Derivation of Equation Imposing Cylinder Constraint Here, we derive Eq. (6) imposing cylinder constraint. Assuming that values of crease-pattern parameters are given, we represent some distance of vertices or angle using (α, β, N, M) based on the preservation of edge length. We derive some quantities for considering the cylinder constraint. First, edge length l1, l2 in Fig. 14 can be written in the following form: $l1=sinαsin(α+β),l2=cosα−sinαtan(α+β)$ Next, we consider the dihedral angle ρm. The notation of vertices is defined as Fig. 15. Because of the N-fold symmetry of waterbomb tube, vertices $Am,01,Am,11,…,Am,N−11$ form regular N-sided polygon. Based on this, we derive Eq. (6). Let θm,n, Mm,n, Pm,n, Qm,n be the base angle $∠Am,n1Bm,n1Bm,n2=∠Am,n2Bm,n1Bm,n1$, a midpoint of $Am,n1Bm,n1$, a foot of perpendicular from point Mm,n to edge $Am,n1Am,n2$, and a foot of perpendicular from point Mm,n to edge $Bm,n1Bm,n2$, respectively. Here, following equations hold: $cosθm,n=xm−ym2cosα|Mm,nPm,n|=|Mm,nQm,n|=xmsinθm,n$ Because $∠Mm,n+1Am,n1Mm,n=π(N−2)/N$, by using cosine theorem in $△Mm,n+1Am,n1Mm,n$, $|Mm,nMm,n+1|=2xm2−2xm2cosπ(N−2)N$ Thus, by using cosine theorem in $△Mm,n+1Pm,nMm,n$, $cos2ρm=cos∠Mm,n+1Pm,nMm,n=2|Mm,nPm,n|2−|Mm,nMm,n+1|22|Mm,nPm,n|2$ By using half angle formula, we can get the following equation equivalent to Eq. (6): $cosρm=−(xm−ym)2sec2α+4sin2πN4−(xm−ym)2sec2α$ Fig. 14 Fig. 14 Close modal Fig. 15 Fig. 15 Close modal ### Appendix B: Derivation of Function f Here, we derive the function f and g assuming that values of crease-pattern parameters and the state of modules belonging to mth hoop, i.e., xm and ym, are given. First, we represent some quantities such as distance between vertices or angles by crease-pattern parameters and xm and ym. Next, we derive the function f and g by introducing the coordinate system and representing xm+1 and ym+1 by the coordinates of some vertices of waterbomb tube. #### Some Necessary Quantities First, we represent some quantities using (α, β, N, M) and (xm, ym) based on the preservation of edge length. The notation of vertices is defined in Fig. 16, and for simplicity, we omit subscripts of xm and ym. Fig. 16 Fig. 16 Close modal Let θ1, θ2 be the base angle $∠B1A1A2=∠B2B1A1$, $∠A2B2B1=∠A1A2B2$, respectively (0 < θi < π). 
Their cosines and sines can be represented as follows:
$$\cos\theta_1=\frac{x-y}{|A^1A^2|}=\frac{x-y}{2\cos\alpha},\qquad \sin\theta_1=\sqrt{1-\cos^2\theta_1} \quad\text{(A1)}$$
$$\cos\theta_2=\frac{y-x}{|A^1A^2|}=\frac{y-x}{2\cos\alpha},\qquad \sin\theta_2=\sqrt{1-\cos^2\theta_2} \quad\text{(A2)}$$

Next, we calculate h := |OO′|, where O′ corresponds to the circumcenter of the isosceles trapezoid A¹B¹B²A², because point O is equidistant from Aⁱ, Bⁱ (i = 1, 2). Therefore, if we denote the length of the diagonals of A¹B¹B²A² and the radius of the circumcircle of A¹B¹B²A² by d and r, respectively, then d, r, h can be expressed as follows:
$$d=|A^2B^1|=\sqrt{|B^1A^1|^2+|A^1A^2|^2-2|B^1A^1||A^1A^2|\cos\theta_1}=\sqrt{2xy+\cos 2\alpha}$$
$$r=|A^1O'|=\frac{|A^2B^1|}{2\sin\angle B^1A^1A^2}=\frac{d}{2\sin\theta_1}$$
$$h=|OO'|=\sqrt{|A^1O|^2-|A^1O'|^2}=\sqrt{1-r^2}$$

On the other hand, if $\psi_i=\angle OM_iC_i$, $\zeta_i=\angle OM_iO'$, $\xi_i=\psi_i-\zeta_i$ $(|\xi_i|=\angle R_iM_iC_i)$ (i = 1, 2), the following equations hold:
$$\cos\psi_i=\frac{|M_iC_i|^2+|OM_i|^2-|C_iO|^2}{2|M_iC_i||OM_i|}=\begin{cases}\dfrac{(l_1^2-x^2)+(1-x^2)-l_2^2}{2\sqrt{l_1^2-x^2}\sqrt{1-x^2}} & (i=1)\\[2ex]\dfrac{(l_1^2-y^2)+(1-y^2)-l_2^2}{2\sqrt{l_1^2-y^2}\sqrt{1-y^2}} & (i=2)\end{cases}$$
$$\sin\psi_i=\sqrt{1-\cos^2\psi_i}$$
$$\sin\zeta_i=\frac{|OO'|}{|OM_i|}=\begin{cases}\dfrac{h}{\sqrt{1-x^2}} & (i=1)\\[2ex]\dfrac{h}{\sqrt{1-y^2}} & (i=2)\end{cases},\qquad \cos\zeta_i=\sqrt{1-\sin^2\zeta_i}$$
$$\cos\xi_i=\cos\psi_i\cos\zeta_i+\sin\psi_i\sin\zeta_i,\qquad \sin\xi_i=\sin\psi_i\cos\zeta_i-\cos\psi_i\sin\zeta_i \quad\text{(A3)}$$

#### Derivation of xm+1 and ym+1

Next, we derive xm+1 and ym+1, i.e., the functions f and g. Here, we take the coordinate system shown in Fig. 17 and represent the coordinates of each vertex as functions of xm and ym. Then, we obtain xm+1 and ym+1, i.e., the functions f and g, by using the coordinates of the vertices. In Figs. 17 and 18, the coordinate system is taken so that the rotational axis of symmetry of the waterbomb tube is the X-axis and one of the modules belonging to the mth hoop, whose vertices are denoted as □m,0, is mirror-symmetric with respect to the XZ-plane. In the following, the coordinate components of a vertex V are denoted by Vx, Vy, and Vz, respectively, and Ry(η) and t denote the rotation matrix of angle η about the Y-axis and the translation vector defined as t ≡ [0, 0, xm/tan (π/N)]T. Here, we can represent the coordinates of each vertex as follows, using Eqs. (A1), (A3):
$$[A_{m,0}^{1x},A_{m,0}^{1y},A_{m,0}^{1z}]^T=R_y(\eta)[0,-x_m,0]^T+t$$
$$[B_{m,0}^{1x},B_{m,0}^{1y},B_{m,0}^{1z}]^T=R_y(\eta)[0,x_m,0]^T+t$$
$$[A_{m,0}^{2x},A_{m,0}^{2y},A_{m,0}^{2z}]^T=R_y(\eta)[2\cos\alpha\sin\theta_{m,0}^1,-y_m,0]^T+t$$
$$[B_{m,0}^{2x},B_{m,0}^{2y},B_{m,0}^{2z}]^T=R_y(\eta)[2\cos\alpha\sin\theta_{m,0}^1,y_m,0]^T+t$$
$$[O_{m,0}^{x},O_{m,0}^{y},O_{m,0}^{z}]^T=R_y(\eta)[\sqrt{r_{m,0}^2-x_m^2},0,-h_{m,0}]^T+t$$
$$[C_{m,0}^{1x},C_{m,0}^{1y},C_{m,0}^{1z}]^T=R_y(\eta)[\sqrt{l_1^2-x_m^2}\cos\xi_{m,0}^1,0,\sqrt{l_1^2-x_m^2}\sin\xi_{m,0}^1]^T+t$$
$$[C_{m,0}^{2x},C_{m,0}^{2y},C_{m,0}^{2z}]^T=R_y(\eta)[\sqrt{l_1^2-y_m^2}\cos\xi_{m,0}^2,0,\sqrt{l_1^2-y_m^2}\sin\xi_{m,0}^2]^T+t$$

Note that for the elements of the matrix Ry(η) we have
$$\sin\eta=-\frac{A_{m,0}^{2z}-A_{m,0}^{1z}}{2\cos\alpha\sin\theta_{m,0}^1},\qquad \cos\eta=\sqrt{1-\sin^2\eta}$$

Using the coordinates of these vertices, we can obtain the coordinates of the vertices of the other modules belonging to the mth hoop by rotating the corresponding vertex by 2π/N around the X-axis. For example,
$$[C_{m,1}^{2x},C_{m,1}^{2y},C_{m,1}^{2z}]^T=R_x(2\pi/N)\,[C_{m,0}^{2x},C_{m,0}^{2y},C_{m,0}^{2z}]^T$$

Here, $x_{m+1}=[C_{m,0}^{2x},C_{m,0}^{2y},C_{m,0}^{2z}]^T\cdot[0,\cos\frac{\pi}{N},\sin\frac{\pi}{N}]^T$, which gives the function f (Fig. 6).

Fig. 17

Fig. 18

Next, we derive the function g by finding ym+1 (Fig. 6). To do so, we first find the coordinates of vertex Om+1,1, and then the coordinates of vertex $B_{m+1,1}^2$. From Fig. 6, vertex Om+1,1 is located at distances 1, 1, l2 from the vertices $C_{m,0}^2$, $C_{m,1}^2$, $C_{m+1,1}^1$, respectively. Vertices $C_{m,0}^2$ and $C_{m,1}^2$ are mirror-symmetric across the plane $P$ obtained by rotating the XZ-plane by π/N around the X-axis. Hence, vertex Om+1,1 is contained in the intersection of the two circles on $P$ centered at P and $C_{m+1,1}^1$, whose radii are $|PO_{m+1,1}|=\sqrt{1-|PC_{m,0}^2|^2}=\sqrt{1-x_{m+1}^2}$ and l2, respectively.
Using x1 : = [Px, Py, Pz]T, $x2:=[Cm+1,11xCm+1,11y,Cm+1,11z]T,$$R1:=1−xm+12,R2:=l2$, and n : = Rx(π/N)[0, 1, 0]T which is the normal vector of $P$, we can represent the intersection of the two circles as follows: $x1+R(n,±arccos(R12+‖x1−x2‖2−R222R1‖x1−x2‖))R1(x2−x1)‖x2−x1‖$ (A4) Here, $R(v,θ)(v∈R3,θ∈R)$ is the rotation matrix in which rotation axis and rotation angle are v and θ, respectively. The signs +, − of the angle of rotation results crease $Cm+1,11Om+1,1$ to mountain-folded and valley-folded, respectively; therefore, in this case, + sign is consistent with the prescribed MV-assignment. Next, we derive the coordinates of vertex $Bm+1,12$. Vertex $Bm+1,12$ is located at a distance 1, 1, 2 cos α from vertices $Om+1,0,Om+1,1,Cm,02$, respectively. Here, since the vertices Om+1,0, Om+1,1 are the mirror-reflection with respect to XZ-plane, the vertex $Bm+1,12$ is contained in the intersection of a circle of radius $1−(Om+1,1y)2$ centered at the vertex Q on the XZ-plane and a circle of radius 2cosα centered at the vertex $Cm,02$. We can obtain the intersection of the circles by Eq. (A4) where $x1=[Cm,02x,Cm,02y,Cm,02z]T,R1=2cosα,x2=[Qx,Qy,Qz]T,R2=$$1−(Om+1,1y)2,n=[0,1,0]T$. In this case, the positive sign is consistent with the prescribed MV-assignment of $Cm,02Bm+1,12$. Therefore, we obtain the coordinates of vertex $Bm+12$. Since $ym+1=[Bm+12x,Bm+12y,Bm+12z]T⋅[0,cosπN,sinπN]T$, the function g is obtained. ## Conflict of Interest There are no conflicts of interest. ## Data Availability Statement The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request. ## References 1. Callens , S. J. , and , A. A. , 2018 , “ From Flat Sheets to Curved Geometries: Origami and Kirigami Approaches ,” Mater. Today , 21 ( 3 ), pp. 241 264 . 2. Schenk , M. , and Guest , S. D. , 2011 , “ Origami Folding: A Structural Engineering Approach ,” The 5th International Meeting on Origami in Science, Mathematics and Education (5OSME) , Singapore , July . 3. Nassar , H. , Lebée , A. , and Monasse , L. , 2017 , “ Curvature, Metric and Parametrization of Origami Tessellations: Theory and Application to the Eggbox Pattern ,” Proc. R. Soc. A: Math. Phys. Eng. Sci. , 473 ( 2197 ), p. 20160705 . 4. Nassar , H. , Lebée , A. , and Monasse , L. , 2018 , “ Fitting Surfaces With the Miura Tessellation ,” The 7th International Meeting on Origami in Science, Mathematics and Education (7OSME) , Oxford, UK , Sept. 5–7 . 5. Fujimoto , S. , 1976 , Sōzō sei wo kaihatsu suru rittai Origami , Shin-Shashoku-Shuppan , Osaka, Japan . 6. Kuribayashi , K. , Tsuchiya , K. , You , Z. , Tomus , D. , Umemoto , M. , Ito , T. , and Sasaki , M. , 2006 , “ Self-deployable Origami Stent Grafts As a Biomedical Application of Ni-rich Tini Shape Memory Alloy Foil ,” Mater. Sci. Eng. A , 419 ( 1–2 ), pp. 131 137 . 7. Fang , H. , Zhang , Y. , and Wang , K. , 2017 , “ Origami-Based Earthworm-Like Locomotion Robots ,” Bioinspiration Biomimetics , 12 ( 6 ), p. 065003 . 8. Li , S. , Stampfli , J. J. , Xu , H. J. , Malkin , E. , Diaz , E. V. , Rus , D. , and Wood , R. J. , 2019 , “ A Vacuum-Driven Origami ‘Magic-Ball’ Soft Gripper ,” 2019 International Conference on Robotics and Automation (ICRA) , , May 20–24 , IEEE, pp. 7401 7408 . 9. Lee , D.-Y. , Kim , J.-K. , Sohn , C.-Y. , Heo , J.-M. , and Cho , K.-J. , 2021 , “ ,” Sci. Robot. , 6 ( 53 ). 10. Rodrigues , G. V. , Fonseca , L. M. , Savi , M. A. , and Paiva , A. 
, 2017 , “ Nonlinear Dynamics of an Adaptive Origami-Stent System ,” Int. J. Mech. Sci. , 133 , pp. 303 318 . 11. Fonseca , L. M. , Rodrigues , G. V. , Savi , M. A. , and Paiva , A. , 2019 , “ Nonlinear Dynamics of an Origami Wheel With Shape Memory Alloy Actuators ,” Chaos Solitons Fractals , 122 , pp. 245 261 . 12. Feng , H. , Ma , J. , Chen , Y. , and You , Z. , 2018 , “ Twist of Tubular Mechanical Metamaterials Based on Waterbomb Origami ,” Sci. Rep. , 8 ( 1 ), pp. 1 13 . 13. Ma , J. , Feng , H. , Chen , Y. , Hou , D. , and You , Z. , 2020 , “ Folding of Tubular Waterbomb ,” Research , 2020 , p. 1735081 . 14. Fonseca , L. M. , and Savi , M. A. , 2021 , “ On the Symmetries of the Origami Waterbomb Pattern: Kinematics and Mechanical Investigations ,” Meccanica , 56 ( 10 ), pp. 2575 2598 . 15. Feng , H. , 2018 , Kinematics of Spatial Linkages and its Applications to Rigid Origami , PhD thesis , Université Clermont Auvergne , Clermont-Ferrand . 16. , T. , Ma , J. , Feng , H. , Hou , D. , Gattas , J. M. , Chen , Y. , and You , Z. , 2020 , “ Programmable Stiffness and Shape Modulation in Origami Materials: Emergence of a Distant Actuation Feature ,” Appl. Mater. Today , 19 , p. 100537 . 17. Chen , Y. , Feng , H. , Ma , J. , Peng , R. , and You , Z. , 2016 , “ Symmetric Waterbomb Origami ,” Proc. R. Soc. A: Math. Phys. Eng. Sci. , 472 ( 2190 ), p. 20150846 . 18. Kawasaki , T. , 1994 , “ R (γ)= 1 ,” The 2nd International Meeting of Origami Science and Scientific Origami (2OSME) , Otsu, Japan , November–December , Vol. 3, pp. 31 40 . 19. Belcastro , S.-M. , and Hull , T. C. , 2001 , “ A Mathematical Model for Non-flat Origami ,” The 3rd International Meeting of Origami Mathematics, Science, and Education (3OSME) , Monterey, CA , Mar. 9–11 , pp. 39 51 . 20. Abel , Z. , Cantarella , J. , Demaine , E. D. , Eppstein , D. , Hull , T. C. , Ku , J. S. , Lang , R. J. , and Tachi , T. , 2016 , “ Rigid Origami Vertices: Conditions and Forcing Sets ,” J. Comput. Geom. , 7 ( 1 ), pp. 171 184 . 21. Roberts , J. A. , and Quispel , G. , 1992 , “ Chaos and Time-Reversal Symmetry. Order and Chaos in Reversible Dynamical Systems ,” Phys. Rep. , 216 ( 2–3 ), pp. 63 177 . 22. Sevryuk , M. B. , 1986 , Reversible Systems , Springer , Berliln/Heidelberg .
2022-06-28 09:06:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 115, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.689695417881012, "perplexity": 1225.9462900586675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103360935.27/warc/CC-MAIN-20220628081102-20220628111102-00206.warc.gz"}
https://datascience.stackexchange.com/questions/11218/data-visualization-of-frequencies-of-state-transitions-possibly-in-r
# Data visualization of frequencies of state transitions (possibly in R?)

I am working on some experimental data, which can be of types A, B and C. Now I observe this data for 5 time points, and I can see them move between A to B, B to C,... etc. I see such transitions for a number of independent data points, and I have the cumulative frequencies from all data. For example, I have:

| Period | A | B | C |
| --- | --- | --- | --- |
| 1 | 4 | 4 | 2 |
| 2 | 1 | 2 | 7 |
| 3 | 0 | 1 | 9 |
| 4 | 10 | 0 | 0 |
| 5 | 8 | 1 | 1 |

I DO know the transitions from one state to another, for example from A->B, B->C and so on and so forth. For example I know that from Period 1, (all A's went to C. Among the missing B's one went to A, and the rest to C.) I was thinking of what would be the best way to visually represent these time-wise transitions from one state to another. I was thinking that there might be some better way than just having a transition matrix, maybe something that looks like a Markov Chain but which could accommodate all the 5 periods of transitions in a succinct way? I myself work on a statistical software called STATA, which has limited graphical applications. Is there something in other software packages (R maybe?) which can help me with this?

• Sorry for the hack representation of the data matrix.
• Is the first line correct or should that also add up to 10? And is my understanding correct that for example in line 3 you don't know where the singleton B came from? Apr 19 '16 at 7:42
• I'm not clear on your data, so it's hard to suggest a solution. I understand you have 5 "snapshots in time". So do you have, say 20 items that you are observing and from the first line, 4 are in state A, 4 are in state B, 3 are in state C? Then for period 2, only 1 is in state A, 2 are in state B and 7 in state C? If this is true, do you have more granular data? Do you know the order that states change from and to, is the state transition matrix well established? Apr 19 '16 at 11:16
• @JanvanderVegt Yes, I will edit to make it add up to 10. Also, I DO know what transitioned where, so I know the flow from A->B, B->C etc Apr 19 '16 at 18:43
• This post I made in stack overflow some time ago may be of interest to you: stackoverflow.com/questions/32633507/… Apr 19 '16 at 21:14
• What sort of analysis do you want to do in the end? I have an idea, but it could be completely off track, depending on what analysis you are doing. Apr 19 '16 at 21:15

How about a Sankey diagram with time on the x-axis and flow width representing state transition frequency. Here is a SO discussion on implementing Sankey diagrams in R. One possible R package is {riverplot}... here is code showing the first transition in your data:

```r
library(riverplot)

nodes <- as.character(sapply(1:2, FUN = function(n){paste0(LETTERS[1:3], n)}))
edges <- list(A1 = list(C2 = 4),
              B1 = list(A2 = 1, C2 = 1, B2 = 2),
              C1 = list(C2 = 2))
r <- makeRiver(nodes, edges,
               node_xpos   = c(1, 1, 1, 2, 2, 2),
               node_labels = c(A1 = "A", B1 = "B", C1 = "C",
                               A2 = "A", B2 = "B", C2 = "C"))
plot(r)
```

Will produce this:

If you have the data in the form of a table of transition counts:

| Transition | Period 1 | Period 2 | Period 3 | Period 4 |
| --- | --- | --- | --- | --- |
| A->A | 0 | 0 | 0 | 8 |
| A->B | 0 | 0 | 0 | 1 |
| A->C | 4 | 1 | 0 | 1 |
| B->A | 1 | 0 | 1 | 0 |
| B->B | 2 | 0 | 0 | 0 |
| B->C | 1 | 1 | 0 | 0 |
| C->A | 0 | 0 | 9 | 0 |
| C->B | 0 | 0 | 0 | 0 |
| C->C | 2 | 7 | 0 | 0 |

Then a possible visualization is an area plot. The following chart was produced in Excel (use the Charts/Area button on the Insert ribbon). This chart accurately captures all transitions that occurred in each period. Shaded areas of different colors represent the relative frequencies of transitions by origin-destination pair.
• This looks very promising! Could you kindly tell me what software+command you used to generate this, and help me in how to read the plot. For example, what does the orange/ green band mean? Apr 22 '16 at 0:16 • I have added more detail to my answer. Apr 22 '16 at 1:45 I'm not sure if this is the type of analysis you are after, but you mention that the visual side is restricted in STATA. A colleague wrote a blog that utilised neo4j to read web data into a graph database, and d3js to display the data graphically. I realise you don't have web data as such, but your data can be stored in a graph database, but I guess when I was asking about what types of analysis you were planning on doing, I was asking were you needing a qualitative or quantitative direction. But it seems like you are still in the process of working that out. The nice thing with neo4j is that you can pull the data into R and do any sort of analytics you want on it. • I am not looking for a quantitative or qualitative direction per se. I would know what to do with the data, and what regressions to run. I am looking for a good way to show graphically how the state transitions differ across treatments. For example, Tguzella's comment on my post is the closest I have received to what I am looking for. Apr 20 '16 at 18:18 • I'd certainly look at my suggestion of neo4j/d3js then, as it will show graphically how your various states differ. Apr 20 '16 at 18:26
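If the same picture is wanted from Python instead of R, plotly's Sankey trace is one option. Below is a minimal sketch, assuming plotly is installed; the node labels and flow counts are taken from the Period 1 transitions in the question, and everything else (node ordering, layout title) is an arbitrary illustration.

```python
# Sketch: Period-1 transitions (A->C: 4, B->A: 1, B->B: 2, B->C: 1, C->C: 2)
# drawn as a Sankey diagram with plotly. Node indices 0-2 are the states at
# period 1, indices 3-5 are the states at period 2.
import plotly.graph_objects as go

labels = ["A (t=1)", "B (t=1)", "C (t=1)", "A (t=2)", "B (t=2)", "C (t=2)"]
source = [0, 1, 1, 1, 2]   # A1, B1, B1, B1, C1
target = [5, 3, 4, 5, 5]   # C2, A2, B2, C2, C2
value  = [4, 1, 2, 1, 2]   # transition counts from the table

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20, thickness=15),
    link=dict(source=source, target=target, value=value),
))
fig.update_layout(title_text="State transitions, period 1 to 2")
fig.show()
```

Extending this to all five periods only means adding three more node groups and one source/target/value triple per non-zero cell of the transition table.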
2021-10-25 18:01:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3928194046020508, "perplexity": 555.9850232860585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587719.64/warc/CC-MAIN-20211025154225-20211025184225-00555.warc.gz"}
http://math.stackexchange.com/questions/208917/how-to-find-lim-x-rightarrow-n-j-a-1n-j-n
# How to find $\lim_{a \rightarrow -N} J_{a} = (-1)^N J_{N}$?

So far I have $$\lim_{a \rightarrow -N} J_{a} = \lim_{a \rightarrow -N} \left| \frac{x}{2} \right|^a \sum_{k=0}^{\infty} \frac{(-x^2/4)^k}{k! \; \Gamma(a+k+1)} = \left| \frac{x}{2} \right|^{-N} \sum_{k=0}^{\infty} \frac{(-x^2/4)^k}{k! \; \Gamma(-N+k+1)}$$ I don't see how the $(-1)^N$ gets factored out to get the desired result of $(-1)^N J_{N}$. Here $J_{a}$ is the Bessel function of the first kind. I am an undergrad student learning Bessel and Legendre ODEs. I know only basic calculus and have a reference sheet of gamma and beta function properties. I don't have graduate school background on their meaning etc. so explaining to that detail really won't help me because of my background. I am looking more for mathematical manipulation at this point. Thanks.

- The reflection formula for the Gamma function might help. – marty cohen Oct 7 '12 at 21:28

Use the fact that the reciprocal of Euler's $\Gamma$-function vanishes at non-positive integers, that is, for $k\in \mathbb{Z}_{\leqslant 0}$ $$\lim_{x \to -k} \frac{1}{\Gamma(x)} = 0$$ Thus, all the terms where $0 \leqslant k < N$ vanish in the limit: $$\begin{eqnarray} \lim_{a \to -N} J_a(x) &=& \sum_{k=N}^\infty \left(\frac{x}{2}\right)^{2k-N} \frac{(-1)^k }{k! \cdot \Gamma(1-N +k)} = \sum_{k=0}^\infty \left(\frac{x}{2}\right)^{2k+N} \frac{(-1)^{k+N}}{\Gamma(k+N+1) \cdot k!} \\ &=& (-1)^N J_N(x) \end{eqnarray}$$

- I don't understand how you get the last step in your final result. How does -N become +N ? I initially thought you added k+N whenever there was k, but then I noticed that didn't work. Thanks! – renagade629 Oct 7 '12 at 22:27

I made a shift of the summation variable $k=N+m$. Then $k! = \Gamma(k+1) = \Gamma(m+N+1)$, $\Gamma(1-N+k) = \Gamma(1+m) = m!$ and $2k-N = 2(m+N) -N = 2m + N$. Now relabel $m$ as $k$. – user40314 Oct 9 '12 at 21:55
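A quick numerical sanity check of the identity $J_{-N}(x) = (-1)^N J_N(x)$ is easy if SciPy happens to be available; `scipy.special.jv(v, x)` evaluates the Bessel function of the first kind of real order v. A minimal sketch:

```python
# Numerically verify J_{-N}(x) = (-1)^N * J_N(x) for a few integer orders.
import numpy as np
from scipy.special import jv

x = np.linspace(0.1, 20.0, 200)
for N in range(1, 6):
    lhs = jv(-N, x)
    rhs = (-1) ** N * jv(N, x)
    assert np.allclose(lhs, rhs, atol=1e-10), f"identity fails for N={N}"
print("J_{-N}(x) == (-1)^N J_N(x) holds numerically for N = 1..5")
```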
2015-11-30 22:59:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8450133204460144, "perplexity": 220.11000130868587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464253.80/warc/CC-MAIN-20151124205424-00002-ip-10-71-132-137.ec2.internal.warc.gz"}
http://latex.org.uk/FAQ-labelformat
# How to change the format of labels

• latex • macros

By default, when a label is created, it takes on the appearance of the counter labelled, so the label appears as \the<counter> - what would be used if you asked to typeset the counter in your text. This isn't always what you need: for example, if you have nested enumerated lists with the outer numbered and the inner labelled with letters, one might expect to want to refer to items in the inner list as "2(c)". (Remember, you can change the structure of list items.)

The change is of course possible by explicit labelling of the parent and using that label to construct the typeset result - something like

```latex
\ref{parent-item}(\ref{child-item})
```

which would be both tedious and error-prone. What's more, it would be undesirable, since you would be constructing a visual representation which is inflexible (you couldn't change all the references to elements of a list at one fell swoop).

LaTeX in fact has a label-formatting command built into every label definition; by default it's null, but it's available for the user to program. For any label <counter> there's a LaTeX internal command \p@<counter>; for example, a label definition on an inner list item is supposedly done using the command \p@enumii{\theenumii}. Unfortunately, the internal workings of this aren't quite right, and you need to patch the \refstepcounter command:

```latex
\renewcommand*\refstepcounter[1]{\stepcounter{#1}%
  \protected@edef\@currentlabel{%
    \csname p@#1\expandafter\endcsname
    \csname the#1\endcsname
  }%
}
```

With the patch in place you can now, for example, change the labels on all inner lists by adding the following code in your preamble:

```latex
\makeatletter
\renewcommand{\p@enumii}[1]{\theenumi(#1)}
\makeatother
```

This would make the labels for second-level enumerated lists appear as "1(a)" (and so on). The analogous change works for any counter that gets used in a \label command.

In fact, the fncylab package does all the above (including the patch to LaTeX itself). With the package, the code above is (actually quite efficiently) rendered by the command:

```latex
\labelformat{enumii}{\theenumi(#1)}
```

In fact, the above example, which we can do in several different ways, has been rendered obsolete by the appearance of the enumitem package, which is discussed in the answer about decorating enumeration lists.
2021-01-23 02:31:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8299609422683716, "perplexity": 1460.1734166834458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531702.36/warc/CC-MAIN-20210123001629-20210123031629-00178.warc.gz"}
https://brilliant.org/problems/something-interesting-on-cubes/
# Something interesting in cubes

Algebra Level 4

$\large 8x^{3}-12x^{2}-6x-1=0$

If the value of a real root that satisfies the equation above can be expressed as $\large \dfrac{\sqrt[3]{a}+\sqrt[3]{b}+1}{c}$ where $a$, $b$ and $c$ are positive integers, find the value of $a+b+c$.
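Once a candidate $(a, b, c)$ has been derived by hand, a short numerical check confirms it without further algebra. The sketch below assumes only NumPy; the placeholder values are deliberately not the answer and must be replaced by whatever the algebra suggests.

```python
# Check a candidate closed form (cbrt(a) + cbrt(b) + 1) / c against the
# real root of 8x^3 - 12x^2 - 6x - 1 = 0.
import numpy as np

roots = np.roots([8.0, -12.0, -6.0, -1.0])
real_root = roots[np.isreal(roots)].real[0]

def candidate(a, b, c):
    return (np.cbrt(a) + np.cbrt(b) + 1.0) / c

# Plug in your own (a, b, c) here; these are placeholders, not the answer.
a, b, c = 1, 1, 1
print("real root      :", real_root)
print("candidate value:", candidate(a, b, c))
print("match          :", np.isclose(real_root, candidate(a, b, c)))
```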
2017-05-30 10:55:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9318976998329163, "perplexity": 250.04256940453234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463615093.77/warc/CC-MAIN-20170530105157-20170530125157-00430.warc.gz"}
https://computergraphics.stackexchange.com/questions/6301/whats-wrong-with-my-computation-of-the-intersection-of-a-ray-with-a-sphere
# What's wrong with my computation of the intersection of a ray with a sphere

I am learning GLSL and trying to raytrace a sphere. Here is a fragment shader. It correctly discards fragments which are not on the sphere, but when I try to calculate the point of intersection (and hence, the normal), I get nonsense.

```glsl
#version 300 es
precision mediump float;

out mediump vec4 out_colour;

in mediump vec3 X;       // intersection of ray with polygon

uniform mediump vec3 C;  // camera position
uniform mediump vec3 CS; // sphere position - camera position
uniform float r;         // sphere radius

void main ()
{
    vec3 CX = X - C;
    // |t.CX-CS|=r,  CX.CX.t.t - 2.CS.CX.t + CS.CS - r.r = 0
    float a = dot (CX, CX);
    float b = -2.0 * dot (CX, CS);
    float c = dot (CS, CS) - r * r;
    float det = b * b - 4.0 * a * c;
    if (det < 0.0)
        discard;
    float t = (-b - sqrt (det)) / 2.0 * a;
    if (t <= 0.0)
        discard;
    vec3 N = normalize (t * CX - CS);
    out_colour = vec4 (N, 1);
}
```

Here's an illustration of the variable names

I was expecting the normals to be illustrated with a wide-ranging rgb colour across the sphere, but as you can see...

What went wrong?

    float t = (-b - sqrt (det)) / 2.0 * a;

You're missing parentheses around (2.0 * a). Remember that multiplication and division have equal operator precedence, so the expression as it's currently written means ((-b - sqrt (det)) / 2.0) * a, putting the a in the numerator instead of the denominator.

(Oh and this is a nitpick, but the thing you named det is called a "discriminant", not a determinant.)
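One way to debug this kind of problem is to transcribe the same math into plain NumPy, where intermediate values can be printed. The sketch below follows the shader's variable names and uses the corrected, parenthesized denominator; the example camera and sphere positions at the bottom are made up for illustration only.

```python
# CPU reference for the ray-sphere intersection used in the shader.
# Ray: P(t) = C + t*(X - C); sphere: |P - S| = r, with CS = S - C.
import numpy as np

def intersect(C, S, X, r):
    CX = X - C
    CS = S - C
    a = np.dot(CX, CX)
    b = -2.0 * np.dot(CX, CS)
    c = np.dot(CS, CS) - r * r
    disc = b * b - 4.0 * a * c            # discriminant
    if disc < 0.0:
        return None                       # ray misses the sphere
    t = (-b - np.sqrt(disc)) / (2.0 * a)  # note the parentheses around 2*a
    if t <= 0.0:
        return None                       # intersection behind the camera
    hit = C + t * CX
    normal = (hit - S) / r                # same direction as normalize(t*CX - CS)
    return t, hit, normal

# Example: camera at the origin, unit sphere centred 5 units down -Z.
C = np.array([0.0, 0.0, 0.0])
S = np.array([0.0, 0.0, -5.0])
X = np.array([0.0, 0.0, -1.0])            # a point on the screen-space polygon
print(intersect(C, S, X, r=1.0))          # expect t = 4, hit (0,0,-4), normal (0,0,1)
```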
2019-04-21 14:04:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5001576542854309, "perplexity": 6330.16593281165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578531984.10/warc/CC-MAIN-20190421140100-20190421162100-00321.warc.gz"}
http://www.emis.de/classics/Erdos/cit/13718101.htm
## Zentralblatt MATH

Publications of (and about) Paul Erdös

Zbl.No: 137.18101
Autor: Erdös, Pál
Title: On an extremal problem in graph theory (In English)
Source: Colloq. Math. 13, 251-254 (1965).
Review: Let l and p be integers such that l > p. It is shown that there exists a constant $\gamma_{p,l}$ such that if $n > n_0(p,l)$ then every graph with n vertices and $[\gamma_{p,l}\, n^{2-1/p}]$ edges contains a subgraph H with the following property: the vertices of H may be labelled $x_1,\ldots,x_l$ and $y_1,\ldots,y_l$ so that every edge $(x_i,y_j)$, where not both i and j exceed p, is in H.
Reviewer: J.W.Moon
Classif.: * 05C35 Extremal problems (graph theory)
Index Words: topology

© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
2018-01-20 13:26:51
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9487418532371521, "perplexity": 3456.949138306639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889617.56/warc/CC-MAIN-20180120122736-20180120142736-00338.warc.gz"}
https://xamajabozyg.soundsofgoodnews.com/convergence-of-mixed-methods-in-continuum-mechanics-and-finite-element-analysis-book-4993ei.php
# Convergence of mixed methods in continuum mechanics and finite element analysis. by Farooque Aquil Mirza Publisher: National Library of Canada in Ottawa Written in English ## Edition Notes Ph.D. thesis, University of British Columbia, 1977. The Physical Object ID Numbers Series Canadian theses on microfilm, 32528 Pagination 3 microfiches ; Open Library OL20910970M Fundamentals of Finite Element Analysis: Linear Finite Element Analysis is an ideal text for undergraduate and graduate students in civil, aerospace and mechanical engineering, finite element software vendors, as well as practicing engineers and anybody with an Author: Ioannis Koutromanos. Convergence of the Adaptive Finite Element Method Carsten Carstensen W. Dahmen, and R. DeVore: Adaptive Finite Element methods with Convergence Rates. Num. Math., 97(2) –, (). R. Stevenson: Optimality of AFEM, preprint Convergence analysis of . Element Matrices in Two-dimensional Problems 90 2 A SUMMARY OF THE THEORY Basis Functions for the Finite Element Spaces Sh Rates of Convergence Galerkin’s Method, Collocation, and the Mixed Method Systems of Equations; Shell Problems; Variations on the Finite Element Method 3 APPROXIMATION File Size: 28KB. In this paper, we propose an adaptive finite element algorithm for the numerical solution of a class of nonlocal models which correspond to nonlocal diffusion equations and linear scalar peridynamic models with certain nonintegrable kernel functions. The convergence of the adaptive finite element algorithm is rigorously derived with the help of several basic ingredients, such as the upper Cited by: ON FINITE ELEMENT METHODS FOR NONLINEAR DYNAMIC RESPONSE Klaus-Jürgen Bathe Massachusetts Institute of Technology Cambridge, MA , U.S.A. ABSTRACT In this paper we briefly focus on the nonlinear analysis of solids and structures when these undergo large deformations, possibly over long time durations, and perhaps subjected to fluid-File Size: KB. 1. Reddy, J.N. () An Introduction to Nonlinear Finite Element Analysis by J. N. Reddy, Oxford University Press, ISBN X.; 2. Zienkiewicz and R.L. Taylor () The Finite Element Method for Solid and Structural Mechanics, Sixth 2/11 linearized continuum mechanics and linear elasticity , notes 2/16 Review of second-order File Size: KB. Get this from a library! The finite element method for mechanics of solids with ANSYS applications. [Ellis Harold Dill] -- "The finite element method (FEM) has become the standard method used by engineers for the solution of static and dynamic problems for elastic and inelastic structures and machines. This volume. Appendix O: THE ORIGINS OF THE FINITE ELEMENT METHOD • In his studies leading to the creation of variational calculus, Euler divided the interval of definition of a one-dimensional functional intofinite intervals and assumed a linear variation over each, defined by end values [, p. 53]. Passing to the limit he obtained what is nowFile Size: 29KB. Keywords: finite element; Kriging; convergence. 1. Introduction In the past two decades various mesh-free methods have been developed and applied to solve problems in continuum mechanics [e.g., see Liu (); Gu ()]. These methods have drawn attentions of many researchers partly due to . 
The Finite Element Method for Boundary Value Problems: Mathematics and Computations - CRC Press Book Written by two well-respected experts in the field, The Finite Element Method for Boundary Value Problems: Mathematics and Computations bridges the gap between applied mathematics and application-oriented computational studies using FEM. course Nonlinear Continuum Mechanics for Finite Element Analysis at Swansea Univer-sity, which he originally developed at the University of Arizona. He has also taught at IIT Roorkee, India, and the Institute of Structural Engineering at the Technical University in Graz. An invaluable tool to help engineers master and optimize analysis, The Finite Element Method for Mechanics of Solids with ANSYS Applications explains the foundations of FEM in detail, enabling engineers to use it properly to analyze stress and interpret the output of a finite element computer program such as ANSYS. ## Convergence of mixed methods in continuum mechanics and finite element analysis. by Farooque Aquil Mirza Download PDF EPUB FB2 Some types of finite element methods (conforming, nonconforming, mixed finite element methods) are particular cases of the gradient discretisation method (GDM). Hence the convergence properties of the GDM, which are established for a series of problems (linear and non linear elliptic problems, linear, nonlinear and degenerate parabolic problems. Title: Convergence of mixed methods in continuum mechanics and finite element analysis: Creator: Mirza, Farooque Aguil: Date Issued: Description: The energy convergence of mixed methods of approximate analysis for problems involving linear self-adjoint operators is by: 5. This is a very good introductory book to the subject of nonlinear continuum mechanics focusing on finite element applications. It fills the gap existing among different books treating this subject. The approach to Directional Derivative is quite general and very by: If the mesh is refinement in a uniform manner, then the finite element approximations converge in the energy norm with rate $1/2$ (all convergence rates in the mesh size), which is optimal as you can get with a gradient $\nabla u \in W^{1/2}(U)$. The Finite Element Method for Solid and Structural Mechanics is the key text and reference for engineers, researchers and senior students dealing with the analysis and modeling of structures, from large civil engineering projects such as dams to aircraft structures and small engineered components. N.H c-s^r. ELSEVIER Comput. Methods Appl. Mech. Engrg. () Computer methods in applied mechanics and engineering Convergence analysis for an element-by-element finite element method Zhiping Li3, M.B. Reed1' * "Department of Mathematics, Peking University, BeijingChina ^Department of Mathematics & Statistics, Brunei University, Oxbridge UB83PH, UK Received 4 March Cited by: 7. Nonlinear continuum mechanics for finite element analysis / Javier Bonet, Richard D. Wood. ISBN X 1. Materials – Mathematical models. Continuum mechanics. Nonlinear mechanics. Finite element method. Wood. Richard D. Title. TAB 01 – dc21 CIP A catalog record for this book. The finite element method in structural and continuum mechanics: numerical solution of problems in structural and continuum mechanics, Volume 1 O. Zienkiewicz, Y. Cheung McGraw-Hill, - Science - pages5/5(1). Finite Element Method in Structural & Continuum Mechanics Hardcover – January 1, by O. Zienkiewicz (Author), Y. Cheung (Author)5/5(1). Search Continuum Mechanics Website. 
Finite Element Coordinate Mapping home > deformation & strain > finite elements Introduction. The finite element (FE) method is such an important part of most any mechanical analysis that it justifies a review of how to compute deformation gradients from FE results. Finite element methods have become ever more important to engineers as tools for design and optimization, now even for solving non-linear technological : Peter Wriggers. Finite Element Method (FEM) - Finite Element Analysis (FEA): Easy Explanation - Duration Sobolev Estimates and Convergence of the Finite Element Method. Energetic Convergence of a New Hybrid Mixed Finite Element M.R.T. Arruda a*, P.F.T. Arruda b and B.J.F. Lopes b a CERIS-Civil Engineering Research and Innov ation for Sustainability, Instituto. I don't understand "compare and contrast" because finite element analysis and continuum mechanics are not on the same level of concepts, but I will try to make connection between these two. According to Wikipedia, > Continuum mechanics is a branch. Continuum mechanics is a “generic” framework that deals with the behavior of continua under the influence of forces and certain constraints. It broadly branches out into solid mechanics and fluid mechanics. For any system, we start with the object. The assumptions on the finite element triangulation are reasonable and practical. In this paper, we consider the finite element methods for solving second order elliptic and parabolic interface problems in two-dimensional convex polygonCited by: PROFESSOR: Ladies and gentlemen, welcome to this lecture on non-linear finite element analysis. In the previous lectures, I introduced you to non-linear finite element analysis, to the solution methods that we're using in non-linear finite element analysis, using primarily physical concepts. We are now ready to discuss the mathematical basis of. Nonlinear Continuum Mechanics for Finite Element Analysis, 2nd Edition | Javier Bonet, Richard D. Wood | download | B–OK. Download books for free. Find books. Books shelved as continuum-mechanics: Nonlinear Solid Mechanics: A Continuum Approach for Engineering by Gerhard A. Holzapfel, First Course in Continuum. element for geometric and material non-linear analysis is presented. The element is formulated using three-dimensional continuum mechanics theory and it is applic­ able to the analysis of thin and thick shells. The for­ mulation of the element and the solutions to various test and demonstrative example problems are presented and discussed. This book provides a look at the theory behind the programs engineers use for the computer simulation of nonlinear structural behaviour. It establishes the mathematical foundations for the development of computer programs that can predict the behaviour of mechanical and structural components. After a thorough but succinct introduction, the book delves into mathematical preliminaries 5/5(1). Convergence: Mesh convergence determines how many elements are required in a model to ensure that the results of an analysis are not affected by changing the size of the mesh. System response (stress, deformation) will converge to a repeatable solution with decreasing element size. Mesh Independence: Following convergence, additional mesh. Search Tips. Phrase Searching You can use double quotes to search for a series of words in a particular order. For example, "World war II" (with quotes) will give more precise results than World war II (without quotes). 
An introductory textbook covering the fundamentals of linear finite element analysis (FEA): this book constitutes the first volume in a two-volume set that introduces readers to the theoretical foundations and the implementation of the finite element method (FEM). The first volume focuses on the use of the method for linear problems. A general procedure is presented for the finite element … Author: Ioannis Koutromanos.

Computer Methods in Applied Mechanics and Engineering: Analysis of a combined mixed finite element and discontinuous Galerkin method for incompressible two-phase flow in porous media. SIAM Journal on Numerical Analysis.

Nonlinear continuum mechanics for finite element analysis, Javier Bonet and Richard D. Wood: this book provides a look at the theory behind the programs engineers use for the computer simulation of nonlinear structural behavior.

The Finite Element Method in Engineering Science (the second, expanded and revised edition of The Finite Element Method in Structural and Continuum Mechanics), Zienkiewicz, O., published by McGraw-Hill Publishing Company, Limited, London, England.

By presenting the topics nonlinear continuum analysis and associated finite element techniques in the same book, Bonet and Wood provide a complete, clear, and unified treatment of these important … © Cambridge University Press - Nonlinear Continuum Mechanics for Finite Element Analysis.

Convergence Analysis of Adaptive Mixed and Nonconforming Finite Element Methods, Ronald H.W. Hoppe, Dept. of Math., Univ. of Houston, Houston, TX, U.S.A. Abstract: We are concerned with a convergence analysis of adaptive mixed and non-conforming finite element methods for second order elliptic boundary value problems.

The object of this book is to provide an introduction to finite element methods, particularly those applicable to continuum mechanics problems of stress analysis, fluid mechanics and heat transfer. For the most part, only the simplest of such methods are described in detail.

Schematic picture of the finite element method (analysis of discrete systems): consider a complicated boundary value problem. 1) In a continuum, we have an infinite number of unknowns. System idealization: 2) To get a finite number of unknowns, we divide the body into a number of sub-domains (elements) with nodes at corners or along the element …

… this condition, we are able to verify the stability, and optimal order of convergence, of several known mixed finite element methods. 1. Introduction. The mixed finite element method, based on the velocity-pressure formulation, is being increasingly used for the numerical solution of …
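To make "optimal order of convergence" concrete, here is a minimal, self-contained mesh-refinement experiment for the 1D Poisson problem with linear elements; it is an illustration of the general idea, not an excerpt from any of the books listed above, and it assumes only NumPy.

```python
# Mesh-convergence study for -u'' = pi^2 sin(pi x), u(0)=u(1)=0,
# whose exact solution is u(x) = sin(pi x). Linear (P1) elements.
import numpy as np

def solve_p1(n):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Stiffness matrix (tridiagonal) for the interior nodes.
    main = 2.0 / h * np.ones(n - 1)
    off = -1.0 / h * np.ones(n - 2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    # Load vector: simple nodal (lumped) quadrature of f over each hat function.
    f = np.pi**2 * np.sin(np.pi * x[1:-1]) * h
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    return x, u

for n in (8, 16, 32, 64, 128):
    x, u = solve_p1(n)
    err = np.max(np.abs(u - np.sin(np.pi * x)))
    print(f"n = {n:4d}   max nodal error = {err:.3e}")
# The error should drop by roughly a factor of 4 per refinement, i.e. O(h^2),
# which is the optimal rate for linear elements on this smooth problem.
```

The same pattern (refine the mesh, measure the error against a known solution, read off the slope) is what "mesh convergence" and "convergence rate" refer to throughout the texts quoted in this listing.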
2021-04-23 05:36:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3964727222919464, "perplexity": 1386.8213832532008}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039601956.95/warc/CC-MAIN-20210423041014-20210423071014-00233.warc.gz"}
https://www.scientificlib.com/en/Physics/LM/GeneralizedForces.html
# Generalized forces

Generalized forces are defined via coordinate transformation of applied forces, $\mathbf{F}_i$, on a system of n particles, i. The concept finds use in Lagrangian mechanics, where it plays a conjugate role to generalized coordinates.

A convenient equation from which to derive the expression for generalized forces is that of the virtual work, $\delta W_a$, caused by applied forces, as seen in D'Alembert's principle in accelerating systems and the principle of virtual work for applied forces in static systems. The subscript a is used here to indicate that this virtual work only accounts for the applied forces, a distinction which is important in dynamic systems.[1]:265

$\delta W_a = \sum_{i=1}^n \mathbf {F}_{i} \cdot \delta \mathbf r_i$

$\delta \mathbf r_i$ is the virtual displacement of the system, which does not have to be consistent with the constraints (in this development).

Substitute the definition for the virtual displacement (differential):[1]:265

$\delta \mathbf{r}_i = \sum_{j=1}^m \frac {\partial \mathbf {r}_i} {\partial q_j} \delta q_j$

$\delta W_a = \sum_{i=1}^n \mathbf {F}_{i} \cdot \sum_{j=1}^m \frac {\partial \mathbf {r}_i} {\partial q_j} \delta q_j$

Using the distributive property of multiplication over addition and the associative property of addition, we have[1]:265

$\delta W_a = \sum_{j=1}^m \sum_{i=1}^n \mathbf {F}_{i} \cdot \frac {\partial \mathbf {r}_i} {\partial q_j} \delta q_j.$

By analogy with the way work is defined in classical mechanics, we define the generalized force as:[1]:265

$Q_j = \sum_{i=1}^n \mathbf {F}_{i} \cdot \frac {\partial \mathbf {r}_i} {\partial q_j}.$

Thus, the virtual work due to the applied forces is[1]:265

$\delta W_a = \sum_{j=1}^m Q_j \delta q_j.$

References

^ a b c d e Torby, Bruce (1984). "Energy Methods". Advanced Dynamics for Engineers. HRW Series in Mechanical Engineering. United States of America: CBS College Publishing. ISBN 0-03-063366-4.

See also: Lagrangian mechanics, Generalized coordinates, Degrees of freedom (physics and chemistry), Virtual work
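As a worked illustration of the definition, consider a planar pendulum of length L with gravity as the only applied force and θ as the single generalized coordinate; the generalized force then follows directly from $Q_j = \sum_i \mathbf F_i \cdot \partial \mathbf r_i / \partial q_j$. The sketch below assumes SymPy is available.

```python
# Generalized force on a planar pendulum: r = (L sin(theta), -L cos(theta)),
# applied force F = (0, -m g). Expect Q_theta = -m g L sin(theta).
import sympy as sp

theta, L, m, g = sp.symbols('theta L m g', positive=True)

r = sp.Matrix([L * sp.sin(theta), -L * sp.cos(theta)])  # particle position
F = sp.Matrix([0, -m * g])                              # applied force

Q_theta = F.dot(r.diff(theta))                          # F . dr/dtheta
print(sp.simplify(Q_theta))   # -> -L*g*m*sin(theta)
```

The printed result is the familiar restoring torque of the pendulum, which is exactly what a generalized force conjugate to an angle coordinate should be.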
2023-04-02 06:42:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7752347588539124, "perplexity": 929.4959396455258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950383.8/warc/CC-MAIN-20230402043600-20230402073600-00160.warc.gz"}
https://www.christopherlovell.co.uk/research/
Below are some brief summaries of research areas I am currently working in, or have recently worked on. ## Galaxy Protoclusters Galaxy clusters are the largest collapsed objects in the universe, comprising of a highly evolved galaxy population embedded in a hot, rarefied InterCluster Medium (ICM). Their pre-collapse progenitors, known as galaxy protoclusters, are host to some of the most extreme objects (in terms of mass, star formation rate and nuclear activity) at these early times. Protoclusters are of significant interest for understanding the environmental dependence of galaxy evolution at early times, as well as the build-up, enrichment and heating of the ICM. The dark matter distribution in a Protocluster at $z \sim 5$ simulated with the EAGLE code Protoclusters do not yet host an X-ray emitting ICM, and so are primarily identified through 3D galaxy overdensities. In a recently accepted paper (Lovell et al. 2018) I studied in detail the relationship between galaxy overdensity and the presence and descendant mass of protoclusters in the L-galaxies semi-analytic model. The motivation for this work was to explore the systematic issues that have the greatest impact on protocluster identification. Surface overdensities of galaxies seen in narrow band photometric surveys are typically compared to simulations in order to evaluate their protocluster probability and estimate their descendant mass. I developed a more rigorous method for generating these statistics that takes in to account the completeness and purity of the protocluster galaxy population, the galaxy distribution shape, redshift space distortions and redshift uncertainties, as well as the coincidence of AGN with protoclusters. ## Numerical Simulations Cosmological hydrodynamic simulations have, in recent years, become capable of matching key distribution functions in the local universe, such as those of stellar mass and star formation rate. However, high resolution, large volume simulations have rarely been tested in the high redshift ($z > 5$) regime, particularly in the most overdense environments. Creating models that fit both high redshift and low redshift observables self consistently is a significant challenge, but key to understanding the properties of galaxies in the first billion years of the universe’s history, and how this affects their latter evolution. Such models are also necessary to make detailed predictions, and plan observations, for upcoming space based instruments, such as JWST, WFIRST and Euclid. The distribution of gas in a Protocluster at $z \sim 5$, simulated with the EAGLE code I have worked extensively on EAGLE, a state-of-the-art cosmological hydrodynamic simulation that has been tuned to a small number of distribution functions in the local universe; results at high redshifts represent predictions of the model. I have led a new simulation project during this period, First Light And Reionisation Epoch Simulations (FLARES), a suite of ‘zoom’ simulations using a modified version of the EAGLE physics code, of regions selected at high redshift. ## Spectral Energy Distribution Modelling Since hydrodynamic simulations do not resolve individual stars or HII regions a number of subgrid models and assumptions must be employed to accurately determine the galaxy SED, which can have a significant impact on the predicted emission. 
One example is the choice of stellar population synthesis (SPS) model, which links the initial mass, age and metallicity of a star particle in the simulation ($M_{*} \sim 10^{6} \, M_{\odot}$) to its intrinsic SED. In recent years a number of advanced SPS models have been developed, including the effects of binary interactions, post-AGB stars and nebular emission. We demonstrated in Wilkins et al. 2016 that the production efficiency of ionising radiation can vary by up to a factor of 4 due to the choice of SPS model, and it can also have a significant effect on predicted magnitudes in the rest-frame UV for high-$z$ objects. Median SED of galaxies in the EAGLE simulation at $z = 8$, both intrinsic and dust attenuated, with JWST NIRCAM filters overlayed The dust contribution at high redshift is also highly uncertain, but is key for predicting realistic observed spectra. Dust modelling can vary significantly in sophistication, from simple screen models linked to the mass and metallicity of star forming gas, to full radiative transfer solutions taking account of the spatial distribution of dust and the orientation of the observer. Nebular emission is another important component in the SED of high-$z$ galaxies. It is obviously necessary for predicting the presence and strength of individual emission lines, but such lines can also have a significant impact on broad band magnitudes. I have recently been performing detailed modelling of galaxy SEDs in hydrodynamic simulations in order to carry out close comparisons with HST observations of the rest-frame UV luminosity, and make predictions for JWST. ## Machine Learning & Astronomy I am keenly interested in the interface between simulations and machine learning methods. Whilst numerical models obviously do not represent the true universe, they do model the complex non-linear spatial and time dependent interactions of populations of objects. This can be important for accurately predicting intrinsic properties, something that traditional spectral energy distribution (SED) fitting techniques do not take into account. Training machines to learn these relationships, then applying these to observations, can provide unique predictions that complement existing techniques. I recently worked with Prof. Viviana Acquaviva at City University New York applying this method to the prediction of Star Formation Histories (SFH) in the SDSS catalogue. We trained a Convolutional Neural Network to learn the relationship between spectra and SFH in the EAGLE and Illustris simulations.
2018-12-17 08:54:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5074741840362549, "perplexity": 1524.3557823909719}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828448.76/warc/CC-MAIN-20181217065106-20181217091106-00081.warc.gz"}
https://www.physicsforums.com/threads/pullback-a-mathematical-puzzler.216138/
# Pullback - a mathematical puzzler 1. Feb 18, 2008 ### carltrembath Hi y'all. Carlo T here with a little mathematical puzzler. Here's the big ol' question, are you folks ready? let phi:(0, infty)x(0, 2*pi)x(-pi/2, pi/2)->R^3 be defined by phi(r,alpha,beta)=(r*cos(alpha)*cos(beta), r*sin(alpha)*cos(beta), r*sin(beta)). Calculate the pullback phi*(xdy) I've calculated phi*(dx), phi*(dy) and phi*(dz) but am not sure where to go from here. For example, is there a way of rewriting phi*(xdy) using one of the pullback properties? Get your noggins round that one brothers. Peace out, from the C to the T to the remBath!!!
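One relevant property: the pullback is compatible with multiplication by functions, so $\phi^*(x\,dy) = (x\circ\phi)\,\phi^*(dy)$, i.e. the composed function $r\cos\alpha\cos\beta$ times the $\phi^*(dy)$ already computed. A minimal SymPy sketch of that product (assuming nothing beyond SymPy itself) is:

```python
# phi(r, a, b) = (r cos a cos b, r sin a cos b, r sin b)   with a=alpha, b=beta.
# Compute phi^*(x dy) = (x o phi) d(y o phi) and print its dr, dalpha, dbeta parts.
import sympy as sp

r, a, b = sp.symbols('r alpha beta')

x = r * sp.cos(a) * sp.cos(b)   # x o phi
y = r * sp.sin(a) * sp.cos(b)   # y o phi

# d(y o phi) has components (dy/dr, dy/dalpha, dy/dbeta) in the basis (dr, dalpha, dbeta).
dy = [sp.diff(y, v) for v in (r, a, b)]

pullback = [sp.simplify(x * comp) for comp in dy]
for name, comp in zip(('dr', 'dalpha', 'dbeta'), pullback):
    print(f"coefficient of {name}: {comp}")
```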
2017-05-26 17:03:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8619201183319092, "perplexity": 9965.036664732279}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608669.50/warc/CC-MAIN-20170526163521-20170526183521-00302.warc.gz"}
https://docs.dwavesys.com/docs/latest/c_gs_3.html
# Solving Problems with Quantum Samplers¶ The What is Quantum Annealing? chapter explained how the D-Wave QPU uses quantum annealing to find the minimum of an energy landscape defined by the biases and couplings applied to its qubits in the form of a problem Hamiltonian. As described in the Workflow: Formulation and Sampling chapter, to solve a problem by sampling, you formulate an objective function such that when the solver finds its minimum, it is finding solutions to your problem. This chapter shows how you formulate your objective as the problem Hamiltonian of a D-Wave quantum computer by defining the linear and quadratic coefficients of a binary quadratic model (BQM) that maps those values to the qubits and couplers of the QPU. For the QPU, two formulations for objective functions are the Ising Model and QUBO. Both these formulations are binary quadratic models and conversion between them is trivial[1]. [1] Chapter Appendix: Next Learning Steps provides information on the differences and conversion between the two formulations. ### Ising Model¶ The Ising model is traditionally used in statistical mechanics. Variables are “spin up” ($\uparrow$) and “spin down” ($\downarrow$), states that correspond to $+1$ and $-1$ values. Relationships between the spins, represented by couplings, are correlations or anti-correlations. The objective function expressed as an Ising model is as follows: $\text{E}_{ising}(\vc s) = \sum_{i=1}^N h_i s_i + \sum_{i=1}^N \sum_{j=i+1}^N J_{i,j} s_i s_j$ where the linear coefficients corresponding to qubit biases are $h_i$, and the quadratic coefficients corresponding to coupling strengths are $J_{i,j}$. ### QUBO¶ QUBO problems are traditionally used in computer science, with variables taking values 1 (TRUE) and 0 (FALSE). A QUBO problem is defined using an upper-diagonal matrix $Q$, which is an $N$ x $N$ upper-triangular matrix of real weights, and $x$, a vector of binary variables, as minimizing the function $f(x) = \sum_{i} {Q_{i,i}}{x_i} + \sum_{i<j} {Q_{i,j}}{x_i}{x_j}$ where the diagonal terms $Q_{i,i}$ are the linear coefficients and the nonzero off-diagonal terms $Q_{i,j}$ are the quadratic coefficients. This can be expressed more concisely as $\min_{{x} \in {\{0,1\}^n}} {x}^{T} {Q}{x}.$ In scalar notation, used throughout most of this document, the objective function expressed as a QUBO is as follows: $\text{E}_{qubo}(a_i, b_{i,j}; q_i) = \sum_{i} a_i q_i + \sum_{i<j} b_{i,j} q_i q_j.$ Note Quadratic unconstrained binary optimization problems—QUBOs—are unconstrained in that there are no constraints on the variables other than those expressed in Q. ## Minor Embedding¶ A graph comprises a collection of nodes and edges, which can be used to represent an objective function’s variables and the connections between them, respectively. For example, to represent a quadratic equation, $H(a,b) = 5a + 7ab - 3b,$ you need two nodes, $a$ and $b$, with biases of $5$ and $-3$, and an edge between them with a strength of 7, as shown in Figure 24. Fig. 24 Two-variable objective function. This graphic representation means you can map a BQM representing your objective function to the QPU: • Nodes that represent the objective function’s variables such as $s_i$ (Ising) or $q_i$ (QUBO) are mapped to qubits on the QPU. • Edges that represent the objective function’s quadratic coefficients such as $J_{i,j}$ (Ising) and $b_{i,j}$ (QUBO) are mapped to couplers. The process of mapping variables in the problem formulation to qubits on the QPU is known as minor embedding. 
The Constraints Example: Minor-Embedding chapter demonstrates minor embedding with an example; typically Ocean software handles it automatically. ## Problem-Solving Process¶ In summary, to solve a problem on quantum samplers, you formulate the problem as an objective function, usually in Ising or QUBO format. Low energy states of the objective function represent good solutions to the problem. Because you can represent the objective function as a graph, you can map it to the QPU:[2] linear coefficients to qubit biases and quadratic coefficients to coupler strengths. The QPU uses quantum annealing to seek the minimum of the resulting energy landscape, which corresponds to the solution of your problem. [2] Classical solvers might not require minor-embedding and quantum-classical hybrid solvers might embed parts of the problem on the QPU while solving other parts with classical algorithms on CPUs or GPUs.
2021-10-20 20:45:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8563123345375061, "perplexity": 803.778676469421}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585348.66/warc/CC-MAIN-20211020183354-20211020213354-00272.warc.gz"}
https://www.edaboard.com/threads/transient-simulation-termination-by-veriloga-model-in-cadence-virtuoso.399898/
# [SOLVED] Transient simulation termination by VerilogA model in Cadence Virtuoso

Status: Not open for further replies.

#### Junus2012

Hello,

As I am simulating and optimizing a VCO circuit whose generated clock frequency changes considerably over the optimization space, I fixed the stop time in the transient simulation to cover the lowest possible generated frequency. While this stop time causes unnecessarily long simulations for the high-frequency cases, a method advised on the Cadence forum is to create a VerilogA counter model, shown below, that counts the pulses of the generated signal for a reasonable number of periods and then terminates the simulation regardless of the signal frequency. Of course, the stop time in the transient simulation still needs to fit the lowest expected signal frequency. In this module I am counting 10 periods, which is more than enough to calculate the signal frequency using the "frequency" function from the calculator.

I have used the module in the transient simulation testbench and it successfully terminates the simulation after the predefined number of periods (the count in this module). The ADE Assembler was also able to plot the signal after the simulation termination. However, I have found that the calculator function results are no longer computed after termination; it becomes similar to the case when you manually stop the simulation, and gives "sim error". Nevertheless, if I send the plotted signal to the calculator and apply the functions, then it works. But this does not solve my problem, as I am running tens of corners and need to print the results automatically.

Kindly, I need your help to make the ADE Assembler compute the calculator functions automatically after simulation termination. I am using Cadence Virtuoso IC6.1.8-64b.500.6.

I hope I explained the problem clearly. Regards

Code:

```verilog
// VerilogA for indirect_measurement, counter, veriloga

`include "constants.vams"
`include "disciplines.vams"

`define SIZE 4

module counter (out, clk);

inout clk;
electrical clk;
output [`SIZE-1:0] out;
electrical [`SIZE-1:0] out;

parameter integer setval = 0 from [0:(1<<`SIZE)-1];
parameter real vtrans_clk = 0.6;
parameter real vtol = 0;              // signal tolerance on the clk
parameter real ttol = 0;              // time tolerance on the clk
parameter real vhigh = 1.2;
parameter real vlow = 0;
parameter real tdel = 30p;
parameter real trise = 30p;
parameter real tfall = 30p;
parameter integer up = 0 from [0:1];  // 0=increasing 1=decreasing
parameter integer stepsize = 1;

integer outval;
integer cnt = 0;

analog begin
    @(initial_step("static","ac")) outval = setval;

    // on each rising crossing of the clock threshold, advance the counter
    @(cross(V(clk)-vtrans_clk, 1, vtol, ttol)) begin
        outval = (outval + (+up - !up)*stepsize) % (1<<`SIZE);
        cnt = cnt + 1;
        if (cnt == 10) begin
            $finish(0);   // command to stop the current simulation
        end
    end

    // drive the output bus with the counter value
    generate j (`SIZE-1, 0) begin
        V(out[j]) <+ transition(!(!(outval&(1<<j)))*vhigh + !(outval&(1<<j))*vlow, tdel, trise, tfall);
    end
end

endmodule
```

Last edited by a moderator:

#### dick_freebird ##### Advanced Member level 5

What about scripting yourself a "deck builder" that generates Ocean scripts of the same circuit, changing only the stimulus frequency and the end timepoint?
I've done this with shell scripts, and had it done for me by a CAD support lady who happened to be good at PERL scripting - header, body and footer files concatenated with only a couple of lines changed at all, to build veriloga stimulus and output-check "widgets" from customers' VCD files and a simulation "save Ocean script" dumped and chopped up. Then use the calculator on the read-back-in run data?

#### Dominik Przyborowski ##### Advanced Member level 4

Use $finish_current_analysis instead of $finish.

#### Junus2012 ##### Advanced Member level 4

Thank you freebird and Dominik for your answers. The use of $finish_current_analysis instead of $finish solved the problem perfectly, and the simulation is now able to calculate the function results after termination. This issue is solved after your kind help. Thank you once again. Regards

Status: Not open for further replies.
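The "deck builder" suggested above can be sketched in a few lines. The snippet below is only an illustration, not Cadence-specific code: the template file names (`header.ocn`, `body.ocn`, `footer.ocn`) and the `%FREQ%`/`%TSTOP%` placeholders are hypothetical, and you would substitute whatever your own saved Ocean script actually contains.

```python
from pathlib import Path

def build_deck(freq_hz, n_periods=10):
    """Concatenate header/body/footer templates into one Ocean deck,
    changing only the stimulus frequency and the transient stop time."""
    tstop = n_periods / freq_hz
    header = Path("header.ocn").read_text()
    body = (Path("body.ocn").read_text()
            .replace("%FREQ%", f"{freq_hz:g}")
            .replace("%TSTOP%", f"{tstop:g}"))
    footer = Path("footer.ocn").read_text()
    Path(f"run_{freq_hz:g}.ocn").write_text(header + body + footer)

# one deck per expected VCO frequency corner
for f in (1e6, 10e6, 100e6):
    build_deck(f)
```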
2021-10-17 23:08:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4703335762023926, "perplexity": 4751.728043744457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585183.47/warc/CC-MAIN-20211017210244-20211018000244-00143.warc.gz"}
https://proofwiki.org/wiki/Basel_Problem/Proof_2
# Basel Problem/Proof 2

## Theorem

$\displaystyle \zeta(2) = \sum_{n \mathop = 1}^{\infty} {\frac 1 {n^2}} = \frac {\pi^2} 6$

where $\zeta$ denotes the Riemann zeta function.

## Proof

Let:

$\displaystyle P_k = x \prod_{n \mathop = 1}^k \left({1 - \frac {x^2} {n^2 \pi^2}}\right)$

We note that:

$\displaystyle P_k - P_{k - 1} = \left({- \frac {x^3} {k^2 \pi^2} }\right) \prod_{n \mathop = 1}^{k - 1} \left({1 - \frac {x^2} {n^2 \pi^2} }\right) = - \frac {x^3} {k^2 \pi^2} + O \left({x^5}\right)$ (Big-O Notation)

By Telescoping Series we find that the coefficient of $x^3$ in $P_k$ is given by:

$(1): \quad \displaystyle - \frac 1 {\pi^2} \sum_{i \mathop = 1}^k \frac 1 {i^2}$

From the Euler product formula for the sine function:

$\displaystyle \sin x = x \prod_{n \mathop = 1}^\infty \left({1 - \frac {x^2} {n^2 \pi^2}}\right)$

So by taking the limit as $k \to \infty$ in $(1)$ and equating with the coefficient of $x^3$ in the Power Series Expansion for Sine Function, we can deduce:

$\displaystyle - \frac 1 {\pi^2} \sum_{i \mathop = 1}^{\infty} \frac 1 {i^2} = - \frac 1 {3!}$

hence:

$\displaystyle \sum_{i \mathop = 1}^{\infty} \frac 1 {i^2} = \frac {\pi^2} 6$

$\blacksquare$

## Historical Note

The Basel Problem was first posed by Pietro Mengoli in $1644$. Its solution is generally attributed to Leonhard Euler, who solved it in $1734$ and delivered a proof in $1735$. However, it has also been suggested that it was in fact first solved by Nicolaus I Bernoulli. Jacob Bernoulli had earlier established that the series was convergent, but had failed to work out what it converged to. The problem is named after Basel, the home town of Euler as well as of the Bernoulli family. If only my brother were alive now. -- Johann Bernoulli
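As a quick numerical sanity check on the statement (not part of the ProofWiki page), the partial sums can be compared against $\pi^2/6$:

```python
import math

# partial sum of 1/n^2 over the first million terms
partial = sum(1.0 / n**2 for n in range(1, 1_000_001))
print(partial)            # ~ 1.6449330668
print(math.pi**2 / 6)     # ~ 1.6449340668
# the gap is about 1/N after N terms, consistent with convergence to pi^2/6
```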
2019-09-22 12:35:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9510874152183533, "perplexity": 476.18200213728676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575513.97/warc/CC-MAIN-20190922114839-20190922140839-00249.warc.gz"}
https://trac.sagemath.org/ticket/19019
Opened 5 years ago Closed 5 years ago Very careless typoes in strongly_regular_db Reported by: Owned by: ncohen major sage-6.9 graph theory dimpase Nathann Cohen Dima Pasechnik N/A cb0b3fb (Commits) #19018 Description This branch fixes very bad typoes in that file, that made several constructions useless or broken. Mostly missing characters (emacs macros...) Nathann comment:1 Changed 5 years ago by ncohen • Branch set to u/ncohen/19019 • Status changed from new to needs_review comment:2 Changed 5 years ago by git • Commit set to 05b2fa885325eab312bd8380f22fe8690b14afae Branch pushed to git repo; I updated commit sha1. New commits: ​a414c36 trac #19018: More SRGs using Regular Symmetric Hadamard matric with Constant Diagonal ​05b2fa8 trac #19019: Very careless typoes in strongly_regular_db comment:3 follow-up: ↓ 5 Changed 5 years ago by jmantysalo Not relating to just this patch, but why functions like SRG_100_44_18_20(), SRG_100_45_20_20() etc. instead of something like SRG_from_db(100, 44, 18, 20)? There might be some rationale explained in some other ticket. comment:4 Changed 5 years ago by git • Commit changed from 05b2fa885325eab312bd8380f22fe8690b14afae to 4abbce4ea5a32d1b82eae7de85fa3dc5f695e5da Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits: ​c430b4b trac #19018: Link the new constructions with graphs.strongly_regular_graph ​0e533bb trac #19018: Bibliography ​4abbce4 trac #19019: Very careless typoes in strongly_regular_db comment:5 in reply to: ↑ 3 Changed 5 years ago by ncohen Not relating to just this patch, but why functions like SRG_100_44_18_20(), SRG_100_45_20_20() etc. instead of something like SRG_from_db(100, 44, 18, 20)? There might be some rationale explained in some other ticket. I do not understand what you mean, and I do not want the conversation to happen on this unrelated ticket. Please send me an email or write to sage-devel. Last edited 5 years ago by ncohen (previous) (diff) comment:6 Changed 5 years ago by git • Commit changed from 4abbce4ea5a32d1b82eae7de85fa3dc5f695e5da to c5e24be70ec523d6c91af7c7a0664cce9459e2cf Branch pushed to git repo; I updated commit sha1. New commits: ​c5e24be trac #19019: Missing space comment:7 Changed 5 years ago by git • Commit changed from c5e24be70ec523d6c91af7c7a0664cce9459e2cf to 3f76966eaf187f22485471c04f6227f22a024efe Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits: ​6893ad1 trac #19018: Typo ​e18bee7 trac #19019: Very careless typoes in strongly_regular_db ​3f76966 trac #19019: Missing space comment:8 Changed 5 years ago by git • Commit changed from 3f76966eaf187f22485471c04f6227f22a024efe to 19b40d996b2770b58bfefbea5bb8d3c0bded6047 Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits: ​d66f0b6 trac #19018: Specify which graph is built ​1cae2dc trac #19028: Table 8.1 ​7045124 trac #19018: Merged with 6.9.beta2 ​99fa6ce trac #19019: Very careless typoes in strongly_regular_db ​cb071e2 trac #19019: Missing space ​19b40d9 trac #19019: Sort the list of SRGs comment:9 Changed 5 years ago by dimpase • Branch changed from u/ncohen/19019 to public/19019 • Commit changed from 19b40d996b2770b58bfefbea5bb8d3c0bded6047 to 778556f1afddb74933575180e36018f0608fba2a • Status changed from needs_review to positive_review rebased over the latest developments on #19018. LGTM comment:10 Changed 5 years ago by jmantysalo Volker will reject this if you don't add your name to Reviewers-field. 
comment:11 follow-up: ↓ 12 Changed 5 years ago by git • Commit changed from 778556f1afddb74933575180e36018f0608fba2a to 323416f9a5a46f1cc6765ef54906938353eee856 • Status changed from positive_review to needs_review Branch pushed to git repo; I updated commit sha1 and set ticket back to needs_review. New commits: ​323416f added a doctest comment:12 in reply to: ↑ 11 Changed 5 years ago by dimpase • Status changed from needs_review to positive_review Branch pushed to git repo; I updated commit sha1 and set ticket back to needs_review. New commits: ​323416f added a doctest rebased over the latest #19018 branch comment:13 Changed 5 years ago by ncohen What you did is not a rebase, nor a merge. You apparently cherry-picked your commit on top of that branch (note that its hash changed). As a result your commit appears twice in the history. Please remove this latest commit: it is not because the branch are not exactly the same that they do not merge trivially. I don't think that your last commit from #19018 caused the slightest problem with this branch. Nathann comment:14 Changed 5 years ago by vbraun • Status changed from positive_review to needs_work Reviewer name missing comment:15 Changed 5 years ago by dimpase • Reviewers set to Dima Pasechnik • Status changed from needs_work to positive_review I cannot find the extra commit in question. comment:16 Changed 5 years ago by ncohen It is the last commit of this branch. comment:17 Changed 5 years ago by dimpase In the local branch I do not have 2 copies of this commit. Naturally, if I were to remove it and push the result then it would not be there at all. comment:18 Changed 5 years ago by ncohen In the local branch you have this commit. In the branch of #19018 you have another copy of it. Because you cherry-picked it, when the two branches will be merged you will have two copies of it. This commit belongs to the other branch, not to that one. If you remove it from this branch, it will be in Sage when #19018 will be merged. You should have merged the branches, not cherry-picked the commit. Nathann comment:19 Changed 5 years ago by git • Commit changed from 323416f9a5a46f1cc6765ef54906938353eee856 to cb0b3fbd221fb01c0e392d141ba50427b0a0294f • Status changed from positive_review to needs_review Branch pushed to git repo; I updated commit sha1 and set ticket back to needs_review. This was a forced push. New commits: ​6ed716d added a doctest ​cb0b3fb Merge branch 'reg_symm_hadamard' into t19019 comment:20 Changed 5 years ago by dimpase please review these git things. I'll be offline for the coming 10 hours. comment:21 Changed 5 years ago by ncohen • Status changed from needs_review to positive_review Looks good, thanks. Nathann comment:22 Changed 5 years ago by vbraun • Branch changed from public/19019 to cb0b3fbd221fb01c0e392d141ba50427b0a0294f • Resolution set to fixed • Status changed from positive_review to closed comment:23 follow-up: ↓ 24 Changed 5 years ago by dimpase • Commit cb0b3fbd221fb01c0e392d141ba50427b0a0294f deleted I overlooked + When \epsilon\in\{-1,+1\}, we say that M is a (n,\epsilon)-RSHCD if + M is a regular symmetric Hadamard matrix with constant diagonal + \delta\in\{-1,+1\} and row values all equal to \delta \epsilon + \sqrt(n). For more information, see [HX10]_ or 10.5.1 in + [BH12]_. • what are "row values"? row sums? • the definition of RSHCD here does not match one in [BH12], where the diagonal entries are all 1, and (n,e)-RSHCD has row sums e\sqrt{n}. 
• it should be \sqrt{n}, not \sqrt(n) And another thing - RSHCD should be mentioned in the module description in the beginning of the corr. file. comment:24 in reply to: ↑ 23 ; follow-ups: ↓ 25 ↓ 27 Changed 5 years ago by ncohen • what are "row values"? row sums? Arg... Yes. Row sums. • the definition of RSHCD here does not match one in [BH12], where the diagonal entries are all 1, and (n,e)-RSHCD has row sums e\sqrt{n}. So what do we do? With two papers that say contradictory things? • it should be \sqrt{n}, not \sqrt(n) Right. And another thing - RSHCD should be mentioned in the module description in the beginning of the corr. file. I don't understand. Nathann comment:25 in reply to: ↑ 24 ; follow-up: ↓ 26 Changed 5 years ago by jmantysalo So what do we do? With two papers that say contradictory things? IMO select one and (almost) always document these cases. See chain_polynomial() on posets: "Warning: This is not what has been called the chain polynomial in [St1986]. The latter is - -". comment:26 in reply to: ↑ 25 Changed 5 years ago by ncohen IMO select one and (almost) always document these cases. See chain_polynomial() on posets: "Warning: This is not what has been called the chain polynomial in [St1986]. The latter is - -". Jori, this was a rethoric question. The two definitions agree with each other. comment:27 in reply to: ↑ 24 ; follow-up: ↓ 28 Changed 5 years ago by dimpase • the definition of RSHCD here does not match one in [BH12], where the diagonal entries are all 1, and (n,e)-RSHCD has row sums e\sqrt{n}. So what do we do? With two papers that say contradictory things? I was comparing your definition with the one from http://homepages.cwi.nl/~aeb/math/ipm/ipm.pdf and your definition is more complicated. I did not check the other one. And another thing - RSHCD should be mentioned in the module description in the beginning of the corr. file. I don't understand. If look at combinat/matrices/hadamard_matrix.py there is a paragraph saying The module below implements the Paley constructions (see for example [Hora]_) and the Sylvester construction. It also allows you to pull a But there is no word about RSHCDs. Should it instead get a list of methods, etc, as more modern Sage modules have? As well, you're not in the list of authors. As I am gearing up for skew-Hadamard matrices, it can be done on the corresponding ticket. comment:28 in reply to: ↑ 27 ; follow-up: ↓ 29 Changed 5 years ago by ncohen I was comparing your definition with the one from http://homepages.cwi.nl/~aeb/math/ipm/ipm.pdf and your definition is more complicated. I did not check the other one. I don't see how. In the paper you give the matrix is requested to have 1 on the diagonal, that's the only difference I see. If look at combinat/matrices/hadamard_matrix.py there is a paragraph saying But there is no word about RSHCDs. Oh, right. I'll do that. Should it instead get a list of methods, etc, as more modern Sage modules have? Probably. I am working on a patch related to index of functions at the moment, though. Something to make building a thematic index easier. As well, you're not in the list of authors. How I hate this "I am the one who did it" bullshit... As I am gearing up for skew-Hadamard matrices, it can be done on the corresponding ticket. Skew-hadamard? *sigh*.... Someday we will have all kind of designs in Sage. But today is not this day. 
Nathann comment:29 in reply to: ↑ 28 ; follow-up: ↓ 30 Changed 5 years ago by dimpase I was comparing your definition with the one from http://homepages.cwi.nl/~aeb/math/ipm/ipm.pdf and your definition is more complicated. I did not check the other one. I don't see how. In the paper you give the matrix is requested to have 1 on the diagonal, that's the only difference I see. yes, this is the difference that makes your definition different. I suppose this is very close to the original one, as all is needed is multiplication by -1 to convert one into the other. But still not the same... As well, you're not in the list of authors. How I hate this "I am the one who did it" bullshit... well, if only for uniformity... As I am gearing up for skew-Hadamard matrices, it can be done on the corresponding ticket. Skew-hadamard? *sigh*.... Someday we will have all kind of designs in Sage. But today is not this day. For all practical purposes it is the same. One says that the diagonal has to be 1, the other says that it can be -1 but adapts the value of row sums. If you get one which does not have a 1 on the diagonal, then -your_matrix does the job, and all results of existence/non-existence apply.
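For readers landing on this ticket, the individual SRG_* helpers discussed above are normally reached through the single public constructor rather than called directly. A minimal usage sketch, assuming a Sage version that includes this database, is:

```python
# Sage session
G = graphs.strongly_regular_graph(100, 44, 18, 20)
print(G.is_strongly_regular(parameters=True))   # expected: (100, 44, 18, 20)
```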
2020-03-28 18:53:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2676199674606323, "perplexity": 4898.651601909062}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370492125.18/warc/CC-MAIN-20200328164156-20200328194156-00388.warc.gz"}
http://www.numdam.org/item/ITA_1998__32_1-3_1_0/
On some path problems on oriented hypergraphs RAIRO - Theoretical Informatics and Applications - Informatique Théorique et Applications, Tome 32 (1998) no. 1-3, pp. 1-20.

@article{ITA_1998__32_1-3_1_0, author = {Nguyen, S. and Pretolani, D. and Markenzon, L.}, title = {On some path problems on oriented hypergraphs}, journal = {RAIRO - Theoretical Informatics and Applications - Informatique Th\'eorique et Applications}, pages = {1--20}, publisher = {EDP-Sciences}, volume = {32}, number = {1-3}, year = {1998}, mrnumber = {1657503}, language = {en}, url = {http://www.numdam.org/item/ITA_1998__32_1-3_1_0/} }

TY - JOUR AU - Nguyen, S. AU - Pretolani, D. AU - Markenzon, L. TI - On some path problems on oriented hypergraphs JO - RAIRO - Theoretical Informatics and Applications - Informatique Théorique et Applications PY - 1998 DA - 1998/// SP - 1 EP - 20 VL - 32 IS - 1-3 PB - EDP-Sciences UR - http://www.numdam.org/item/ITA_1998__32_1-3_1_0/ UR - https://www.ams.org/mathscinet-getitem?mr=1657503 LA - en ID - ITA_1998__32_1-3_1_0 ER -

Nguyen, S.; Pretolani, D.; Markenzon, L. On some path problems on oriented hypergraphs. RAIRO - Theoretical Informatics and Applications - Informatique Théorique et Applications, Tome 32 (1998) no. 1-3, pp. 1-20. http://www.numdam.org/item/ITA_1998__32_1-3_1_0/

1. H. El-Rewini and T. G. Lewis, Scheduling parallel program tasks onto arbitrary target machines. Journal of Parallel and Distributed Computing, 9 (1990), pp. 138-153.
2. H. N. Gabow, S. N. Maleshwari and L. J. Osterweil, On two problems in the generation of program test paths, IEEE Transactions on Software Engineering, SE-2 (1976), pp. 227-231. | MR 426484
3. G. Gallo, G. Longo, S. Nguyen and S. Pallottino, Directed hypergraphs and applications. Discrete Applied Mathematics, 42(2-3) (1993), pp. 177-201. | MR 1217096 | Zbl 0771.05074
4. M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, San Francisco, Ca., 1979. | MR 519066 | Zbl 0411.68039
5. A. Kapelnikov, R. R. Muntz and M. D. Ercegovac, A modeling methodology for the analysis of concurrent systems and computations. Journal of Parallel and Distributed Computing, 6 (1989), pp. 568-597.
6. L. Markenzon and J. L. Szwarcfiter, Dois problemas de caminhos com restrições. In: XX Simpósio Brasileiro de Pesquisa Operacional, page 148, 1987.
7. S. C. Ntafos and T. Gonzalez, On the computational complexity of path cover problems. Journal of Computer and System Science, 29 (1984), pp. 225-242. | MR 773424 | Zbl 0547.68044
8. S. C. Ntafos and S. L. Hakimi, On path cover problems in digraphs and applications to program testing. IEEE Transactions on Software Engineering, SE-5 (1979), pp. 520-529. | MR 545530 | Zbl 0412.68052
9. S. C. Ntafos and S. L. Hakimi, On structured digraphs and program testing. IEEE Transactions on Computers, C-30 (1981), pp. 67-77. | MR 600768 | Zbl 0454.68005
10. R. A. Sahner and K. S. Trivedi, Performance and reliability analysis using directed acyclic graphs. IEEE Transactions on Software Engineering, SE-13 (1987), pp. 1105-1114.
2022-01-25 03:07:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2141590416431427, "perplexity": 9132.921370106345}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304749.63/warc/CC-MAIN-20220125005757-20220125035757-00256.warc.gz"}
https://physics.stackexchange.com/questions/292058/why-no-need-to-consider-collision-in-kinetic-theory
# Why no need to consider collision in kinetic theory?

In standard elementary thermodynamics textbooks, in the kinetic theory it is assumed that the gas particles have no interactions except during collisions. However, in the subsequent derivation of $$pV=\frac{2}{3}U$$ only the collisions of the gas particles with the container walls are considered, but not the collisions between gas particles. Why can we ignore them?

We can ignore the collisions between particles when deriving that equation because we are assuming that we are dealing with a rarefied gas, i.e. the average distance between two molecules is large compared to their size. When calculating pressure, we are mainly concerned with the interaction with the walls, so this approximation is quite good; actually, an ideal gas is formally made of point particles, so the probability of a particle-particle collision would be rigorously zero in the ideal gas approximation. But of course, to compute other quantities such as the mean free path we must consider the particles to be of finite size, i.e. we have to give up the ideal gas approximation. However, in such calculations, the ideal gas law (IGL) is used: this apparently makes no sense, because the IGL is derived under the assumption that we are dealing with point particles! The catch is that, even if the IGL is only rigorously valid for a gas of point particles (which by definition cannot collide with each other), it is a quite good approximation also for a gas of finite-size, but small particles: this is why we can use the IGL in the derivation of quantities such as the mean free path. And this is also why you can find in many books (or on Wikipedia) the somewhat misleading definition "An ideal gas is a theoretical gas composed of point particles that do not interact except when they collide elastically".

There is a more fundamental problem about the usual textbook derivation of $pV = \frac 23 U$. The mass of the particles appears in the derivation but not in the end result; it cancels out exactly - if all energies are equal. Since they are not equal, the derivation just inserts "mean energy" instead of "energy" and is done. Now it is obvious that a gas of identical particles will have a mean energy, and since the contribution to $p$ is proportional to the energy, the derivation is perfectly accurate, see below. What is not obvious is why the mean energies of sets of different particles should coincide. If they don't, then the whole derivation gets much more complicated (the collisions with the walls and between particles are not "elastic" any more), and collisions matter. But an elastic collision between identical particles is very simple: they swap their momenta. In other words, the particle just gets exchanged, and continues as if there had been no collision.

• Originally I also thought they just swap their momenta. But think more carefully, it isn't true. Say, in the CM frame, they move directly towards each other along a certain line with equal speed $v$. But after the collision, by conservation of momentum and KE, they must have the same speed $v$, true, but they can be moving directly opposite to each other along a line making an arbitrary angle with the original line. – velut luna Nov 17 '16 at 13:02
• Oh no! You are right, how could I mistake this :( it's only true in 1 dimension. Well, you are right, the direction after the collision can be arbitrary. But I still disagree that the collisions have an influence, I just can't find a "proof" – Ilja Nov 17 '16 at 14:08
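For reference, the textbook derivation under discussion (sketched here from standard kinetic theory, not quoted from the thread) obtains the pressure from wall collisions alone:

$$p = \frac{1}{3}\,\frac{N}{V}\, m \langle v^2 \rangle \quad\Longrightarrow\quad pV = \frac{2}{3}\, N \left\langle \tfrac{1}{2} m v^2 \right\rangle = \frac{2}{3}\,U ,$$

where $N$ is the number of particles and $U$ is the total translational kinetic energy; nothing in this argument refers to particle-particle collisions.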
2019-11-18 06:54:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9228050112724304, "perplexity": 232.32011028735195}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669454.33/warc/CC-MAIN-20191118053441-20191118081441-00471.warc.gz"}
http://mathhelpforum.com/trigonometry/207926-need-help-double-angle-question.html
# Math Help - Need help with a double angle question 1. ## Need help with a double angle question If cos2x=12/13 and 2x is in the first quadrant, evaluate the six trigonometric functions for x, without your calculator. The answer is: cos x= 5/√26; sec x=√26/5 sinx=1/ √26; csc x=√26 tanx=1/5; cot x=5 How do I get these answers? Any help would be greatly appreciated. 2. ## Re: Need help with a double angle question Do you know either the "double angle formulas" or the "half angle formulas"? 3. ## Re: Need help with a double angle question since $0 < 2x < \frac{\pi}{2}$ , then $0 < x < \frac{\pi}{4}$ $\cos{x} = \sqrt{\frac{1 + \cos(2x)}{2}}$ $\sin{x} = \sqrt{\frac{1 - \cos(2x)}{2}}$ sub in the value given for $\cos(2x)$ ... 4. ## Re: Need help with a double angle question Thanks guys, I got it now.
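Filling in the arithmetic behind the half-angle identities quoted above (this step is not spelled out in the thread):

$\cos x = \sqrt{\frac{1 + \frac{12}{13}}{2}} = \sqrt{\frac{25}{26}} = \frac{5}{\sqrt{26}}, \qquad \sin x = \sqrt{\frac{1 - \frac{12}{13}}{2}} = \sqrt{\frac{1}{26}} = \frac{1}{\sqrt{26}}, \qquad \tan x = \frac{\sin x}{\cos x} = \frac{1}{5},$

and taking reciprocals gives $\sec x = \frac{\sqrt{26}}{5}$, $\csc x = \sqrt{26}$, $\cot x = 5$, matching the stated answers.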
2016-07-30 20:15:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6419531106948853, "perplexity": 2476.0961646159835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258937291.98/warc/CC-MAIN-20160723072857-00212-ip-10-185-27-174.ec2.internal.warc.gz"}
https://au.costsproject.org/536-electric-potential.html
# Electric potential

Imagine an electric field generated by a charge Q. When a test charge q is placed in the region where this field acts, we can see that, according to the combination of the signs of the two charges, the charge q will be attracted or repelled, acquiring motion and consequently kinetic energy. Recalling the kinetic energy studied in mechanics, we know that for a body to acquire kinetic energy there must be a potential energy stored in some way. When this energy is linked to the action of an electric field, it is called Electric (or Electrostatic) Potential Energy. The unit used for it is the joule (J).

It can be said that the generating charge produces an electric field that can be described by a quantity called the electric (or electrostatic) potential. Similarly to the electric field, the potential can be described as the quotient between the electric potential energy and the test charge q. That is: [defining formula shown as an image in the original page]. Hence:

The unit adopted in the SI for the electric potential is the volt (V), in honor of the Italian physicist Alessandro Volta; it corresponds to joule per coulomb (J/C).

When there is more than one electrified particle generating electric fields, at a point P subject to all these fields the electric potential is equal to the sum of the potentials created by each charge, i.e.: [formula shown as an image in the original page].

A widely used way to represent potentials is through equipotentials, which are lines or surfaces perpendicular to the lines of force, that is, lines along which the potential is the same. For the particular case where the field is generated by only one charge, these equipotential lines are circumferences, since the potential decreases uniformly as the distance increases (in a two-dimensional representation; in a three-dimensional representation the equipotentials would be hollow spheres, which constitutes the so-called onion-peel effect, where the more internal the shell, the greater its potential).
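The formulas referred to above were images in the original page and did not survive extraction. The standard expressions they almost certainly showed, given the surrounding text, are:

$$V = \frac{E_p}{q}, \qquad V = \frac{kQ}{d}, \qquad V_P = V_1 + V_2 + \dots + V_n = \sum_{i=1}^{n} \frac{kQ_i}{d_i},$$

where $E_p$ is the electric potential energy of the test charge, $k$ is the electrostatic constant, and $d$ is the distance from the generating charge to the point considered.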
2021-09-17 15:31:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8184534907341003, "perplexity": 568.4766878934341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055684.76/warc/CC-MAIN-20210917151054-20210917181054-00426.warc.gz"}
http://zbmath.org/?q=an:1079.39005
# zbMATH — the first resource for mathematics

Global behavior of solutions of the nonlinear difference equation $x_{n+1}=p_{n}+x_{n-1}/x_{n}$. (English) Zbl 1079.39005

The trichotomy results concerning the difference equation $x_{n+1}=p+x_{n-1}/x_{n}$ are considered for the equation $x_{n+1}=p_{n}+x_{n-1}/x_{n}$ with the initial conditions $x_{-1}\ge 0$, $x_{0}>0$ and $\{p_{n}\}_{n}$ a positive sequence with $\liminf_{n\to\infty} p_{n}=p\ge 0$, $\limsup_{n\to\infty} p_{n}=q<\infty$. If $p>0$ then $\{x_{n}\}_{n}$ is persistent and if $p>1$ then $\{x_{n}\}_{n}$ is bounded from above. Moreover, if $1<p\le q$ then the interval $\left[(pq-1)/(q-1),\,(pq-1)/(p-1)\right]$ is a positive invariant set of the equation. If either $0<p_{2n+1}<1$ and $\lim_{n\to\infty} p_{2n+1}=0$ or $0<p_{2n}<1$ and $\lim_{n\to\infty} p_{2n}=0$ then there exist unbounded solutions to the equation. Some special cases of the equation are considered as applications.

##### MSC:
39A11 Stability of difference equations (MSC2000)
39A20 Generalized difference equations
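A quick numerical illustration of the recurrence in this review (not part of the zbMATH entry; the choices of $p_n$ and initial values below are arbitrary):

```python
def orbit(p, x_prev=0.5, x_curr=1.0, steps=200):
    """Iterate x_{n+1} = p(n) + x_{n-1}/x_n and return the trajectory."""
    xs = [x_prev, x_curr]
    for n in range(steps):
        xs.append(p(n) + xs[-2] / xs[-1])
    return xs

print(orbit(lambda n: 2.0)[-3:])   # constant p = 2 > 1: this run stays bounded (equilibrium p + 1 = 3)
print(orbit(lambda n: 1.0)[-3:])   # p = 1: the boundary case of the trichotomy
```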
2014-03-10 01:35:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 18, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8311946392059326, "perplexity": 8586.290077806923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010527022/warc/CC-MAIN-20140305090847-00000-ip-10-183-142-35.ec2.internal.warc.gz"}
http://chemistry.stackexchange.com/questions/9272/why-is-phenolphthalein-an-appropriate-indicator-for-titration-of-a-strong-acid-w
# Why is phenolphthalein an appropriate indicator for titration of a strong acid with a strong base?

Please explain based on the pH at the equivalence point and the transition range for phenolphthalein. Thanks for the help.

## 1 Answer

If you view the titration curve of $pH$ vs $V_{added}$, you can see that the equivalence point occurs at $pH$ = 7. Phenolphthalein is fuchsia at $pH$s roughly between 8.2 and 12, and is colorless below 8.2. When the number of moles of added base is equal to the number of moles of added acid (or vice versa; this example is only valid for strong monoprotic acids/bases assuming 100% dissociation), the $pH$ is equal to 7. You might say: if the pH needed is 7, and phenolphthalein changes at $pH$s around 8.2, how can you use this indicator? Well, again looking at the curve, from $pH$ = 11 to about $pH$ = 4, the pH changes very rapidly with an infinitesimally small change in $V_{added}$. Since one drop of added titrant will cause this large change, even though the change in color of phenolphthalein does not occur right on the equivalence point, it is within approximately one drop. This kind of uncertainty is "acceptable uncertainty" in using titration to volumetrically determine concentrations. To clarify what I mean by "acceptable uncertainty", you should realise that each of your measurements has some kind of uncertainty attached to it:

• When you weighed out the primary standard to titrate against, was the balance perfect?
• Was your solution made up precisely to the graduation in the volumetric flask?
• Did you pipette the exact volume of the aliquot or were you off by a drop or two?
• Are you able to state the volume added from the burette to arbitrary precision or is there some uncertainty beyond the two decimal places given by the graduated lines?

In the scheme of things, $\pm$1-2 drops will not be a significant factor in getting an accurate result, but you should most definitely acknowledge that there is uncertainty in your answer.

- This is true for a strong acid and a strong base that have no buffering capacity. It will, of course, not be nearly as clean when one more drop of titrant is pH-buffered. – Uncle Al Mar 18 '14 at 0:59
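To make the steepness of the curve concrete, here is a small numerical sketch (the 25 mL / 0.1 M figures are arbitrary, not taken from the question): the computed pH jumps by several units within a hundredth of a millilitre on either side of the equivalence point, which is far less than one drop.

```python
import numpy as np

Va, Ca, Cb = 25.0, 0.10, 0.10   # mL of strong acid, mol/L acid, mol/L base (arbitrary example)

for Vb in (24.90, 24.99, 25.00, 25.01, 25.10):       # mL of base added
    excess_acid = (Va * Ca - Vb * Cb) / 1000.0        # mol of H+ left over (negative = excess OH-)
    V_tot = (Va + Vb) / 1000.0                        # total volume in litres
    if excess_acid > 0:
        pH = -np.log10(excess_acid / V_tot)
    elif excess_acid < 0:
        pH = 14 + np.log10(-excess_acid / V_tot)
    else:
        pH = 7.0
    print(f"{Vb:6.2f} mL NaOH -> pH {pH:5.2f}")
```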
2015-05-29 00:07:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5246143341064453, "perplexity": 1366.3601965325447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929803.61/warc/CC-MAIN-20150521113209-00074-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.rexygen.com/doc/ENGLISH/MANUALS/BRef/VIN.html
### VIN – Validation of the input signal

Function Description

The VIN block can be used for verification of the input signal quality in the REXYGEN system. Further details about quality flags can be found in chapter 1.4 of this manual. The block continuously separates the quality flags from the input u and feeds them to the iqf output. Based on these quality flags and the GU parameter (Good if Uncertain), the input signals are processed in the following manner:

• For GU = off, the output QG is set to on if the quality is GOOD. It is set to QG = off in case of BAD or UNCERTAIN quality.
• For GU = on, the output QG is set to on if the quality is GOOD or UNCERTAIN. It is set to QG = off only in case of BAD quality.

The output yg is equal to the u input if QG = on. Otherwise it is set to yg = sv (substitution variable).

Inputs
u – Input signal whose quality is assessed; the type is determined by the connected signal (Unknown)
sv – Substitute value for an error case (Unknown)

Outputs
yg – Validated output signal: yg = u for QG = on, or yg = sv for QG = off (Unknown)
QG – Indicator of input signal acceptability (Bool)
iqf – Complete quality flag separated from the u input signal (Long (I32))

Parameter
GU – Acceptability of UNCERTAIN quality (Bool): off = Uncertain quality unacceptable, on = Uncertain quality acceptable

2022 © REX Controls s.r.o., www.rexygen.com
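The selection logic of the block can be summarized in a few lines of Python (an illustration only, not REXYGEN code):

```python
def vin(u, sv, quality, gu=False):
    """Mirror of the VIN block's selection logic.
    quality is one of "GOOD", "UNCERTAIN", "BAD"."""
    qg = (quality == "GOOD") or (gu and quality == "UNCERTAIN")
    yg = u if qg else sv
    return yg, qg

print(vin(3.7, 0.0, "UNCERTAIN", gu=False))  # -> (0.0, False)
print(vin(3.7, 0.0, "UNCERTAIN", gu=True))   # -> (3.7, True)
```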
2022-12-05 01:42:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3579464554786682, "perplexity": 2374.349929559508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711001.28/warc/CC-MAIN-20221205000525-20221205030525-00029.warc.gz"}
https://indico.jlab.org/event/282/contributions/4127/
# 8th Workshop of the APS Topical Group on Hadronic Physics 10-12 April 2019 Denver, CO US/Mountain timezone 10 Apr 2019, 15:00 20m ### Director's Row J #### Denver, CO Sheraton Denver Downtown Hotel, 1550 Court Pl. lobby level of the Plaza building contributed talk ### Speaker Dr Veronica Canoa Roman (Stony Brook University) ### Description The study of anisotropic flow provides strong constraints to the evolution of the medium produced in heavy ion collisions and its event-by-event geometry fluctuations. The strength and predominance of these observables have long been related to collective behavior in the formed medium. Recent results in small systems both at RHIC and LHC provide strong arguments for the formation of such medium at those scales. However, for a complete characterization of the phenomena, additional differential measurements are desirable. PHENIX recorded data from d+Au collisions at 200GeV, 62GeV and 39GeV in 2016 using a special trigger which enriches the data size for collisions that are very central. In this talk, I will show our recent anisotropic flow measurements for fully reconstructed $\pi^0$ at $-0.35<\eta<+0.35$ for d+Au. ### Primary author Dr Veronica Canoa Roman (Stony Brook University)
2022-07-06 17:07:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43530505895614624, "perplexity": 12637.24232563073}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104675818.94/warc/CC-MAIN-20220706151618-20220706181618-00635.warc.gz"}
https://ask.sagemath.org/questions/7826/revisions/
### latex typesetting for derivatives like g'

When I try:

f(x) = function('f',x)
g = diff(f(x),x)
latex(g)

I get:

D[0]\left(f\right)\left(x\right)

But I would like to get something like:

g'\left(x\right)

How can I do this with Sage?
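One pragmatic workaround is to post-process the generated LaTeX string. This is only a sketch: it assumes only first derivatives of f appear in the expression, and it produces f' rather than a separately named g'.

```python
# Sage session
f(x) = function('f', x)
g = diff(f(x), x)
s = str(latex(g))
s = s.replace(r"D[0]\left(f\right)", "f'")
print(s)   # f'\left(x\right)
```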
2021-10-24 20:51:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9391584396362305, "perplexity": 12395.161280884839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587606.8/warc/CC-MAIN-20211024204628-20211024234628-00415.warc.gz"}
http://clay6.com/qa/tag/p195
# Recent questions tagged p195

### Find the equation of the curve through the point (1,0) if the slope of the tangent to the curve at any point (x,y) is $\Large \frac{y-1}{x^2+x}.$
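The question is listed without its solution; a worked sketch by separation of variables (added here for completeness, not from the original page) is:

$$\frac{dy}{y-1} = \frac{dx}{x(x+1)} \;\Rightarrow\; \ln|y-1| = \ln\left|\frac{x}{x+1}\right| + C \;\Rightarrow\; y - 1 = A\,\frac{x}{x+1}.$$

Imposing the point $(1,0)$ gives $A = -2$, so $y = 1 - \dfrac{2x}{x+1} = \dfrac{1-x}{1+x}$; differentiating confirms $y' = \dfrac{y-1}{x^2+x}$.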
2019-11-15 13:27:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19962969422340393, "perplexity": 273.7204929339274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668644.10/warc/CC-MAIN-20191115120854-20191115144854-00009.warc.gz"}