# Nontrivial zeros and the eigenvalues of random matrices
According to the article by Meier and Steuding [5], one of the most interesting and aesthetic of all scenarios proposed by the Riemann Hypothesis (RH) is regularity in the distribution of zeros, and thus in the distribution of prime numbers as well. Numerous articles and papers attempt to prove RH and to investigate its consequences. One approach is to relate the zeros of the Riemann zeta function to the eigenvalues of random matrices.
The Riemann zeta function and its Euler product expression can be defined as:
$\zeta(s)= \sum_{n=1}^{\infty}\frac{1}{n^{s}}= \prod_{p}\left ( 1-\frac{1}{p^{s}} \right )^{-1}$
Here the product on the right-hand side is taken over all prime numbers $p$. The Euler product may be regarded as an analytic version of the unique prime factorization of integers, and it converges for $Re(s)>1$. Riemann showed that $\zeta(s)$ has an analytic continuation to the whole complex plane and satisfies a functional equation:
$\xi(s):=\pi^{-s/2} \Gamma \left ( s/2 \right )\zeta(s)= \xi \left ( 1-s \right )$
$\xi(s)$ is entire except for simple poles at $s=0$ and $s=1$. The zeros of $\xi(s)$ are of the form
$\frac{1}{2}+ i \gamma$
It is clear that $\left |Im(\gamma) \right | \leq \frac{1}{2}$. Hadamard and de la Vallée Poussin, in their proofs of the Prime Number Theorem, established the strict inequality $\left |Im(\gamma) \right | < \frac{1}{2}$.
The Riemann hypothesis (RH) states that all nontrivial zeros of $\zeta(s)$ lie on the critical line $Re(s) =\frac{1}{2}$. So here $\gamma \in \mathbb{R}$.
In 1973 Montgomery [1] conjectured that the number of pairs of nontrivial zeros $\frac{1}{2}+ i\gamma$, $\frac{1}{2}+i\gamma '$ of $\zeta(s)$ with $0<\gamma,\gamma'\leq T$ that satisfy the inequalities:
$0<\alpha\leq \frac{\log T}{2\pi}(\gamma-\gamma ')\leq\beta$
is asymptotically equal to
$N(T)\displaystyle\int_{\alpha }^{\beta }\left ( 1-\left ( \frac{\sin\pi u}{\pi u} \right )^2 \right )du$
as $T\rightarrow\infty$, where $N(T)$ denotes the number of nontrivial zeros with ordinate in $(0,T]$.
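As a quick numerical sketch (assuming nothing beyond the formula above; the function names and the step count are illustrative choices), the pair-correlation density $1-\left(\frac{\sin\pi u}{\pi u}\right)^2$ can be integrated with plain Python:

```python
import math

def pair_correlation_density(u):
    """Montgomery's pair-correlation density 1 - (sin(pi u)/(pi u))^2."""
    if u == 0.0:
        return 0.0  # limit: sin(pi u)/(pi u) -> 1, so the density vanishes at u = 0
    s = math.sin(math.pi * u) / (math.pi * u)
    return 1.0 - s * s

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3.0

# Expected number of normalized gaps in [0, 1] is N(T) times this integral.
mass = simpson(pair_correlation_density, 0.0, 1.0)
print(mass)  # roughly 0.55
```

The small mass near $u=0$ reflects the "repulsion" between consecutive zeros that GUE statistics predict: small normalized gaps are rarer than they would be for independently placed points.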
This conjecture is now called Montgomery’s pair correlation and plays a complementary role to the Riemann hypothesis; i.e., vertical vs. horizontal distribution of the nontrivial zeros.
Montgomery’s pair correlation conjecture claims that the two-point correlation for the zeros of the zeta function on the critical line is (in the limit) equal to the two-point correlation for the eigenvalues of random Hermitian matrices from the Gaussian Unitary Ensemble (GUE). This conjecture implies that almost all zeros of the zeta function are simple, and the predicted pair correlation matches that of the eigenvalues of certain random matrix ensembles. Notably, the corresponding asymptotics is a theorem in random matrix theory, not merely a conjecture [5].
Let us assume RH and order the ordinates $\gamma$:
$\ldots\leq\gamma_{-1}\leq 0\leq \gamma_{1}\leq \gamma_{2}\leq\ldots$
Then $\gamma_{j}= -\gamma_{-j}$ for $j= 1,2,\ldots$. Riemann computed $\gamma_{1}$, the ordinate of smallest positive value, finding $\gamma_{1}\approx 14.134725$. Riemann also noted that:
$\#\left \{ j:0< \gamma_{j}\leq T \right \}\sim \frac{T \log T}{2 \pi}$, as $T\rightarrow \infty$
In particular, the mean spacing between the $\gamma_{j}$'s tends to zero as $j\rightarrow \infty$.
The local spacing law for these numbers can be studied after the following re-normalization:
$\hat{\gamma_{j}}= \frac{\gamma_{j}\log \gamma_{j}}{2\pi}$ for $j \geq 1$
The consecutive spacings $\delta_j$ are defined to be:
$\delta_j=\hat{\gamma_{j+1}}-\hat{\gamma_{j}}, j=1,2,...$
More generally, the k-th consecutive spacings are :
$\delta^{(k)}_j=\hat{\gamma_{j+k}}-\hat{\gamma_{j}}, j=1,2,...$
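To make the re-normalization concrete, here is a small sketch. The three ordinates are well-known numerical values of the first zeros (not computed here), and the helper name is ours; note that these low-lying zeros are far from the asymptotic regime, so the spacings are not yet close to their limiting mean of 1.

```python
import math

# First three ordinates of nontrivial zeros (known numerical values).
gammas = [14.134725, 21.022040, 25.010858]

def renormalize(g):
    """gamma_hat = gamma * log(gamma) / (2 pi), unfolding the mean density."""
    return g * math.log(g) / (2.0 * math.pi)

gamma_hat = [renormalize(g) for g in gammas]
# Consecutive spacings delta_j = gamma_hat_{j+1} - gamma_hat_j
deltas = [b - a for a, b in zip(gamma_hat, gamma_hat[1:])]
print(gamma_hat)
print(deltas)
```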
Odlyzko [2] has made an extensive and profound numerical study of the zeros and, in particular, their local spacings. He finds that they obey the laws for the (scaled) spacings between the eigenvalues of a typical large unitary matrix; that is, they obey the laws of the Gaussian Unitary Ensemble (GUE). Odlyzko's computations also showed that the nearest-neighbor spacing for the nontrivial zeros of the zeta function is amazingly close to that for the eigenvalues of the GUE.
The upper figure shows Odlyzko’s pair correlation for $2 \times 10^8$ zeros of $\zeta (s)$ near the $10^{23}$rd zero. The lower figure shows the difference between the histogram in the first graph and $1-\left( \frac{\sin\pi t}{\pi t}\right )^2$. In the interval displayed, the two agree to within about 0.002 (pictures credited to Meier and Steuding [5]).
The Montgomery-Odlyzko law claims that these distributions are, statistically, the same. But the only proved cases of pair-correlation asymptotics are those of Katz and Sarnak [4] for certain local zeta functions. (See the references below for a comprehensive chronology of these developments, or [6] for a review.)
References:
[1] H.L. Montgomery, The pair correlation of zeros of the Riemann zeta-function on the critical line, Proc. Symp. Pure Math. Providence 24 (1973), 181-193.
[2] A.M. Odlyzko, The 10^{20}th zero of the Riemann zeta-function and 70 million of its neighbors, in ’Dynamical, spectral, and arithmetic zeta functions’ (San Antonio, TX, 1999), 139–144, Contemp. Math. 290, Amer. Math. Soc., Providence 2001.
[3] N.M. Katz, P. Sarnak, Zeros of The Zeta Function and Symmetry, Bulletin of The AMS, Vol. 36, Number 1, January 1999, Pages 1-26.
[4] N.M. Katz, P. Sarnak, Random matrices, Frobenius eigenvalues, and monodromy, AMS, Providence 1999.
[5] P. Meier, J. Steuding, The Riemann Hypothesis, available at claymath.org.
[6] J. Steuding, The Riemann Zeta Function and Predictions from Random Matrix Theory, AMS, subject classification numbers: 11M06.
# PSPP: GNU Software to Replace SPSS
For anyone used to doing statistical data analysis, the name SPSS probably sounds familiar. It is proprietary software (alongside MS Excel, of course) that is all too often used casually and "carelessly" (read: pirated) by irresponsible people when analyzing data. (Struck out, since the piracy culture seems to be fading these days, hehe; if that hits close to home, please don't spam, hehe...)
In academic settings, however, the proprietary software culture that SPSS represents does not suit the culture of education; open source software does. Proprietary software offers convenience to lay users but provides little real understanding, and for anyone studying a problem in depth it is not a good fit. The same applies to statistical data analysis. For this reason, a great deal of free software for statistical analysis has been developed to suit the culture of a given environment or community; one example is PSPP, developed by GNU.
PSPP, or GNU PSPP, is free software meant to replace SPSS, developed by GNU and released under the GPLv3. I don't know what the acronym PSPP stands for, but from the name alone it is clear that PSPP is intended as a stand-in for SPSS. Some PSPP syntax and data files are compatible with those used by SPSS.
PSPP supports interfaces to related free software such as Gnumeric and OpenOffice.org. It can be run either through a GUI or from the command line in a terminal, and in practice the two approaches complement each other.
Specifically on Ubuntu (since that is what I use), the PSPP packages for the karmic updates can be found at:
Installation:
sudo apt-get install pspp
To check whether pspp is installed, type:
which pspp
If you see
/usr/bin/pspp
then pspp is installed. By the way, the latest news and information about PSPP can be found here.
# Birthday paradox: two pairs

I have followed the various birthday paradox posts. Can someone please assist with the logic for finding the probability of two pairs of people with the same birthday in a group of $23$? If $n=23$ gives us more than $0.5$ probability of one pair, what is the outcome for two pairs in the same population (assuming the second pair do not share the same birth date as the first pair)?
For the standard birthday problem, the number of possible sequences of $n$ birthdays that don't have two the same is $365\times364\times...\times(365-n+1)$ and so the probability that we don't have two the same after $n$ attempts is $\frac{365\times364\times...\times(365-n+1)}{365^n}=p_n$, which goes below $1/2$ at $n=23$.
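The $n=23$ threshold can be verified exactly with a few lines of Python (a sketch; the helper name is ours):

```python
from fractions import Fraction

def p_no_match(n, days=365):
    """Exact probability that n birthdays are all distinct."""
    p = Fraction(1)
    for k in range(n):
        p *= Fraction(days - k, days)
    return p

# The no-collision probability first drops below 1/2 at n = 23.
print(float(p_no_match(22)))  # about 0.524
print(float(p_no_match(23)))  # about 0.493
```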
The probability of either having all birthdays different or exactly one pair the same, all others different, or exactly three people the same, all others different is $$\frac{365\times...\times(365-n+1)+\binom n2365\times...\times(365-n+2)+\binom n3365\times...\times(365-n+3)}{365^n}\\=p_n\bigg(1+\binom n2\frac1{365-n+1}+\binom n3\frac1{(365-n+1)(365-n+2)}\bigg).$$ Thus we need to check when this is less than $1/2$. This first happens at $n=36$.
This assumes that you count any two pairs, including where both pairs have the same birthday. It would be more complicated to exclude this, and it shouldn't make very much difference, but as it happens the probability is actually very close to $1/2$ at $n=36$.
(edit: Actually, doing some more careful bounds, the probability of not having exactly one set of $2$, $3$ or $4$ the same, all others different at $n=36$ is $0.49944$. A simple bound on the probability of any configuration with $5$ or more people having the same birthday is the expected number of sets of $5$ people with the same birthday, and this is only $0.0000212$, not enough to tip the probability over $1/2$. So it is $n=36$ with either interpretation.)
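The $n=36$ threshold claimed above can be checked directly; this sketch (function names are ours) evaluates the displayed formula:

```python
from math import comb, prod

def p_all_distinct(n, days=365):
    """Probability that n birthdays are all distinct."""
    return prod((days - k) / days for k in range(n))

def p_at_most_one_pair_or_triple(n, days=365):
    """P(all distinct, or exactly one shared pair, or exactly one shared
    triple, all others distinct), as in the displayed formula."""
    p = p_all_distinct(n, days)
    return p * (1
                + comb(n, 2) / (days - n + 1)
                + comb(n, 3) / ((days - n + 1) * (days - n + 2)))

# This quantity first drops below 1/2 at n = 36.
print(p_at_most_one_pair_or_triple(35))  # above 0.5
print(p_at_most_one_pair_or_triple(36))  # just below 0.5
```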
https://homework.cpm.org/category/CC/textbook/ccg/chapter/10/lesson/10.3.1/problem/10-119 | ### Home > CCG > Chapter 10 > Lesson 10.3.1 > Problem10-119
10-119.
The circle at right is inscribed in a regular hexagon. Find the area of the shaded region.
Subtract the area of the circle from the area of the hexagon.
Look at the diagram at right. What is the height of an equilateral triangle with sides of $6.0$ inches?
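Putting the hints together (a sketch, assuming the hexagon's side equals the $6.0$-inch triangle side): the triangle's height is the hexagon's apothem, which is also the radius of the inscribed circle.

```python
import math

side = 6.0  # hexagon side length in inches (from the triangle hint)

# Height of an equilateral triangle with side 6.0 = apothem of the hexagon,
# and also the radius of the inscribed circle.
apothem = side * math.sqrt(3) / 2        # about 5.196 in
hex_area = 0.5 * (6 * side) * apothem    # (1/2) * perimeter * apothem = 54*sqrt(3)
circle_area = math.pi * apothem ** 2     # pi * r^2 = 27*pi
shaded = hex_area - circle_area
print(round(shaded, 2))  # about 8.71 square inches
```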
### Homomorphic Encryption for Finite Automata
Nicholas Genise, Craig Gentry, Shai Halevi, Baiyu Li, and Daniele Micciancio
##### Abstract
We describe a somewhat homomorphic GSW-like encryption scheme, natively encrypting matrices rather than just single elements. This scheme offers much better performance than existing homomorphic encryption schemes for evaluating encrypted (nondeterministic) finite automata (NFAs). Differently from GSW, we do not know how to reduce the security of this scheme to LWE, instead we reduce it to a stronger assumption, that can be thought of as an inhomogeneous variant of the NTRU assumption. This assumption (that we term iNTRU) may be useful and interesting in its own right, and we examine a few of its properties. We also examine methods to encode regular expressions as NFAs, and in particular explore a new optimization problem, motivated by our application to encrypted NFA evaluation. In this problem, we seek to minimize the number of states in an NFA for a given expression, subject to the constraint on the ambiguity of the NFA.
Category: Secret-key cryptography

Publication info: A minor revision of an IACR publication in ASIACRYPT 2019

Keywords: Finite Automata, Inhomogeneous NTRU, Homomorphic Encryption, Regular Expressions
Contact author(s): nicholasgenise @ gmail com
History: 2020-03-16, last of 2 revisions
Short URL: https://ia.cr/2019/176

License: CC BY
BibTeX
@misc{cryptoeprint:2019/176,
author = {Nicholas Genise and Craig Gentry and Shai Halevi and Baiyu Li and Daniele Micciancio},
title = {Homomorphic Encryption for Finite Automata},
howpublished = {Cryptology ePrint Archive, Paper 2019/176},
year = {2019},
note = {\url{https://eprint.iacr.org/2019/176}},
url = {https://eprint.iacr.org/2019/176}
}
# What's New in Visual Studio 2005 for Native Developers
Nishan Jebanasam
Microsoft Corporation
May 2005
Applies to
Windows Mobile-based devices
Windows Mobile 2003 Second Edition-based devices
Windows CE-based devices
Visual Studio 2005
eMbedded Visual C++ version 4.0
ActiveX
ActiveSync
Summary: This article provides an overview about the Visual Studio 2005 native device development feature set. It is intended for both eMbedded Visual C++ developers who want to learn about the successor to eMbedded Visual C++, in addition to desktop computer C++ developers who want to learn about targeting device platforms with their native applications. (35 printed pages)
Introduction
Prerequisites
IDE
Native Libraries
Debugging
Emulator
How Do I?
## Introduction
Visual Studio 2005 includes C/C++ development for Windows Mobile-based and Windows CE-based devices. It will be the successor to eMbedded Visual C++ version 4.0, and it will allow developers to write C/C++ applications for Microsoft device platforms. Some of the Visual Studio 2005 features include:
• Device platforms in the native project system
• Application and class wizards
• SDK integration
• Resource editor
• Device cross-compilers
• Remote deployment and debugging
• Native device frameworks
• Emulator
• Help
## Prerequisites
If you plan to use Visual Studio 2005 to develop for devices, the following are prerequisites:
• Windows 2000 or later
• 192 MB of RAM (256 MB or more is highly recommended)
• Intel Pentium III 600 MHz processor or equivalent
• ActiveSync version 4.0 (if you intend to deploy and debug to physical devices)
• Windows Mobile version 5.0 software development kits (SDKs) (if you intend to target Windows Mobile 5.0 devices)
## IDE
This section covers the design-time features provided by Visual Studio 2005 to target devices.
### Application Wizards
Visual Studio 2005 ships with five application wizards to help you create the following project types:
• Win32 Smart Device Project
• ATL Smart Device Project
• MFC Smart Device Application
• MFC Smart Device DLL
• MFC Smart Device ActiveX Control
You can find these application wizards in the New Project dialog box in the Visual C++ node under Smart Device, as shown in Figure 1.
Figure 1. Available application wizards for projects in Visual Studio 2005
As you create your application, you also need to choose the platform SDK (or SDKs) that your project targets, as shown in Figure 2. Visual Studio 2005 ships with the Windows Mobile 2003 SDKs in the box, so when you first install Visual Studio 2005, the Pocket PC 2003 SDK and Smartphone 2003 SDK are available.
Figure 2. Platform SDKs to add to your project
Any additional Windows Mobile or Windows CE SDKs that you have installed in Visual Studio 2005 will also show up in this page. (You can choose one or more platform SDKs for your project.) Note that Visual Studio 2005 only supports Windows Mobile 2003 platforms and later, and Windows CE version 5.0 platforms and later.
After you've chosen the platform SDKs, the application wizard generates your project, template source code, default resources, and project properties (compiler switches, dependent libraries, and other project properties).
### Class Wizards
Visual Studio 2005 also ships with class wizards that generate code to help you accomplish common tasks. Examples include helping you to create an Active Template Library (ATL) COM object or a Microsoft Foundation Class (MFC) class. To run a class wizard on your project, right-click your project, click Add, and then click Class.
Visual Studio 2005 supports the following class wizards for device platforms, as shown in Figure 3:
• ATL Simple Object
• ATL Control
• ATL Dialog
• ATL Property Page
• Add ATL Support to MFC
• MFC Class
• C++ Class
Figure 3. Supported class wizards in Visual Studio 2005
The class wizards that Visual Studio supports for smart device platforms feature a small "device" icon embedded in the wizard icon.
### Configurations and Platforms
Almost all of the settings your project has are "configuration" specific. A configuration-specific setting combines the debug or release build information with the project's platform. For example, you can set compiler switches specific to your Pocket PC 2003 (ARMV4) Debug configuration and different switches for your Pocket PC 2003 (ARMV4) Release configuration.
Each configuration produces its own project output binary. If your project targets Pocket PC 2003 (ARMV4) and Smartphone 2003 (ARMV4), for example, when you build the Pocket PC 2003 (ARMV4) Release configuration, you get a different binary than if you build the Smartphone 2003 (ARMV4) Release configuration. Similarly, building the Pocket PC 2003 (ARMV4) Debug configuration produces yet another binary output. Figure 4 summarizes the SDK, platform, architecture, configuration, and project output relationships.
Figure 4. Relationships among SDKs, platforms, architecture, configuration, and project outputs.
Visual Studio applies project properties to a single specific configuration by default. To have settings take effect for multiple configurations, select All Configurations and/or All Platforms in the Project Property Pages dialog box; the settings then apply to Debug and Release, and/or to all the platforms in your project, respectively. Note, however, that some properties contain free-form text rather than an enumeration of switches. If you select All Configurations and/or All Platforms, such a property may appear to clear because the project system does not take the intersection or union of the text: if the text does not match exactly across the selected configurations, nothing is displayed. In these cases, apply the property on a configuration-by-configuration basis to avoid any text being dropped.
In Figure 5, the Preprocessor Definitions do not exactly match for Debug and Release, so when the user selects All Configurations, this property clears, as shown in Figure 6.
Figure 5. Preprocessor definitions do not match Debug and Release
Figure 6. Due to a mismatch, the property clears when All Configurations is chosen
This multiple-platform project capability has many advantages: you can maintain one code base and customize your application's UI, input handling, and more by wrapping platform-specific code in #ifdef directives. Furthermore, because you can apply properties to all of your configurations (see Project Properties), you can easily maintain your configurations. For example, you can choose to sign your project output with one certificate and apply it to all project outputs (so your Pocket PC binary and Smartphone binary are signed with the same certificate).
### Project Properties
This section covers properties that device developers may find interesting. All of the properties in this section apply on a per-configuration basis.
#### Deployment
The Deployment configuration property contains some of the more frequently used sets of properties for device developers, as shown in Figure 7. It allows you to choose your target deployment device, to enumerate any additional files you may want to deploy with your project, to specify the remote directory on the device for your project output, and to dictate whether you want your project output registered on the device after it is deployed. Most of these properties are very straightforward, although the Additional Files property requires a special syntax.
Figure 7. The Deployment configuration property
The Additional Files property allows you to specify one or more additional files to be downloaded to the target device when you deploy your project. Note that files you specify will not be compiled; they will simply be copied to the device (and registered, if specified). For examples about the Additional Files syntax, see the How Do I? section.
#### Authenticode Signing
Application security is becoming more prevalent on Windows Mobile-based devices. Device developers should understand the various security models and how these security models can affect the ways they develop and redistribute their applications.
Authenticode signing is a way of authenticating the origins of digital content. Signing encodes a binary with a private key, which can only be verified with its corresponding public key. The public key is redistributed in the form of a certificate that can be installed on a device. In this way, users can verify that you created the application when they start it on their devices. Users can trace certificates back to a trusted root certificate in an attempt to validate signing authorities. For example, a well-known and trusted signing authority will most likely have a valid root certificate to trace on retail devices, whereas a random individual who signs an application with a self-generated private key will most likely not have a trusted root to chain back to on retail devices.
A Practical Guide to the Smartphone Application Security and Code Signing Model for Developers provides an excellent starting point for Authenticode signing. You should familiarize yourself with this article.
The Authenticode Signing configuration property (as shown in Figure 8) allows you to select a certificate to sign your project output with, to dictate whether you want to provide the device with that certificate, and to specify which certificate store on the device to provide the certificate to. Provisioning is the act of configuring the device with some setting (in this example, installing a certificate into the certificate store).
Figure 8. The Authenticode Signing configuration property
If you set the Authenticode Signature property to Yes, after you select a certificate, Visual Studio 2005 signs the project output each time it is built. If you set the Provision Device property to Privileged Certificate Store, then the certificate selected in the Certificate property is provisioned to the privileged certificate store on the target device the next time you deploy the project. Similarly, if you set the Provision Device property to Unprivileged Certificate Store, the certificate selected in the Certificate property is provisioned to the unprivileged certificate store on the target device the next time you deploy the project. If the device security policy does not permit provisioning certificates, this step fails, so you need to modify the policy on the device to allow certificate provisioning.
#### C/C++
The C/C++ configuration property contains all of the compiler settings for your project, as shown in Figure 9. Compilation for device platforms invokes the specific device compiler for the architecture you are targeting, so the properties available for device platforms differ slightly from those available for desktop computer platforms.
Figure 9. The C/C++ configuration property
Some of the key properties that affect device developers include:
• Precompiled Headers
Most native device application projects that developers created in Visual Studio 2005 are set to Use Precompiled Header, which by default is stdafx. For more information about creating and using your own precompiled headers, see the How Do I? section.
• Advanced
• Compile For Architecture
This property sets the instruction set to compile for. Each device compiler can compile for one of many architectures.
• Interwork ARM and ARM Thumb calls
This property enables generation of thunking code to interwork 16- and 32-bit ARM code.
• Command Line
You can set additional switches that aren't available in the property pages.
#### Linker
The Linker configuration property contains all the linker settings for your project, as shown in Figure 10.
Figure 10. The Linker configuration property
Whereas the C/C++ configuration property has a device-specific set of properties when your application targets device platforms, the Linker configuration properties are the same for device and desktop computer platforms because the same linker is used. As a result, some properties that do not apply to device platforms are visible anyway. Table 1 describes some examples.
Table 1. Linker properties not applicable to device platforms

| Property Page | Properties that do not apply to device platforms | Notes |
| --- | --- | --- |
| General | Register Output | Set in Deployment page |
| Input | Embed Managed Resource File | None |
| System | SubSystem, Terminal Server, Swap Run from CD, Swap Run from Network, Driver | None |
| Optimization | Optimize for Windows 98, Profile Guided Database | None |
| Advanced | Target, Profile, CLR Thread Attribute, CLR Image Type, Key File, Key Container, Delay Sign | Set in Command Line property of Linker page; signing for device platforms handled in Authenticode Signing page |
### Resource Editor
The Visual Studio 2005 native resource editor should appear very familiar because it is the same native resource editor that eMbedded Visual C++, Visual C++ version 6.0, and Visual Studio .NET 2003 use. Native smart device projects in Visual Studio 2005 support all of the following resource types:
• Accelerator
• Bitmap
• Cursor
• Dialog
• Icon
• Menu
• Registry
• String Table
• Toolbar
• Version
#### Multiple Resource Files
If a project targets Pocket PC and Smartphone, Visual Studio 2005 makes it easy for you to customize the UI of your application for the different device form factors by generating a separate resource file for each targeted platform. The sample project shown in Figure 11 has both a Pocket PC resource file and a Smartphone resource file. Notice that the Smartphone resource file (MyDeviceApp1sp.rc) has a No Build icon because the current target platform for the project is Pocket PC. Therefore, when the user builds the project, only the Pocket PC resource file is included in the build. If the user changes the active target platform to Smartphone, the No Build icon disappears from MyDeviceApp1sp.rc and appears on MyDeviceApp1ppc.rc. In this way, the correct resource file is compiled into the project depending on which platform the user targets.
Figure 11. Sample project with Pocket PC and Smartphone resource files
#### RC2 File
Some of the application wizards generate an RC2 file in addition to the standard Resource Compiler (RC) file. The Resource Compiler does not touch the RC2 file, but the RC file includes it, so it can hold resources that the Resource Compiler does not know how to handle. Examples include the HI_RES_AWARE custom resource (for more information, see High Resolution and Orientation Awareness) and menu RCDATA, which the Resource Compiler edits if placed in the RC file (hex value equivalents replace the style data and are not translated back). The RC2 file is a great place to put other custom resources that you don't want the Resource Compiler editing for you.
#### UI Model
Device SDKs can define their own UI model, which you can use to filter the list of controls that appear in the Dialog Editor to only show the controls that a platform supports. Visual Studio 2005 ships with a "CE" UI model for the Windows Mobile 2003 SDKs that are already included in Visual Studio 2005, as shown in Figure 12.
Figure 12. Visual Studio and its built-in UI model for the Windows Mobile 2003 SDKs. Click the thumbnail for a larger image.
### High Resolution and Orientation Awareness
Windows Mobile 2003 Second Edition and later have high resolution capability (the ability to display graphics in a higher DPI) in addition to orientation switching capability (the ability to rotate the screen dynamically and display a "portrait" or "landscape" mode). Visual Studio 2005 provides native device developers with support for writing high resolution and orientation-aware applications.
#### DeviceResolutionAware.h
When the Developer Resources for Windows Mobile 2003 Second Edition was released, it included a useful header file, UIHelper.h, that contained several macros and functions to assist developers in creating high resolution and orientation aware applications. These functions included:
• GetDisplayMode
Determines if the display is currently configured as portrait, square, or landscape.
• StretchIcon
Stretches an icon to the specified size (only applies on Windows Mobile 2003 Second Edition platforms and later).
• StretchBitmap
Stretches a bitmap containing a grid of images.
• ImageList_LoadImage
Operates identically to the platform ImageList_LoadImage, except that it first checks the DPI fields of the bitmap (by using GetBitmapLogPixels), compares them to the DPI of the screen (by using LogPixelsX and LogPixelsY), and then performs scaling (by using ImageList_StretchBitmap) if the values are different.
• RelayoutDialog
Re-lays out a dialog based on a dialog template. This function iterates through all of the child window controls and calls SetWindowPos for each. It also calls SetWindowText for each static text control, and then updates the selected bitmap or icon in a static image control. This method assumes that the current dialog and the new template contain the same controls with the same control IDs.
Visual Studio 2005 includes these functions in the header file DeviceResolutionAware.h (in the DRA namespace).
#### Orientation Awareness
The five smart device application wizards and the ATL Dialog Class wizard generate template code containing a WM_SIZE event handler that re-lays out the wizard-generated dialogs when the screen rotates. Furthermore, the wizards generate two versions of their default dialogs; for example, for the About dialog, both a square/portrait version and a landscape version are generated. You can use this code as a useful example to follow when you design the UI for your native device applications.
```cpp
// Message handler for About box.
INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam)
{
    switch (message)
    {
    // ...
    // Other message handlers cut for brevity
    // ...
#ifdef _DEVICE_RESOLUTION_AWARE
    case WM_SIZE:
        {
            DRA::RelayoutDialog(
                g_hInst,
                hDlg,
                DRA::GetDisplayMode() != DRA::Portrait
                    ? MAKEINTRESOURCE(IDD_ABOUTBOX_WIDE)
                    : MAKEINTRESOURCE(IDD_ABOUTBOX));
        }
        break;
#endif
    }
    return (INT_PTR)FALSE;
}
```
#### High Resolution Awareness
Applications compiled for Windows Mobile 2003 (that is, Windows CE subsystem version 4.20) are automatically pixel-doubled on high resolution capable devices, unless the developer defines a custom resource (HI_RES_AWARE) that tells the device's operating system not to pixel-double the application. In Visual Studio 2005, the wizard-generated code automatically defines the HI_RES_AWARE resource. This design encourages developers to think about high resolution awareness when they write their applications, so that they can take advantage of the crisper displays of devices emerging in the market today. If you want your Windows Mobile 2003 application pixel-doubled on high resolution capable devices, remove the HI_RES_AWARE resource from the RC2 file in your project. Any application that you build for a later platform version (that is, later than Windows CE version 4.20) will not be pixel-doubled, even if it does not include the HI_RES_AWARE resource. Also note that high resolution Smartphone devices never pixel-double applications.
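For reference, the custom resource is a single-value entry in the RC2 file. The declaration commonly shown in the Windows Mobile documentation is the following; check your SDK's samples if they use a different form:

```
HI_RES_AWARE CEUX {1}
```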
For more information about orientation and high resolution awareness, see Step by Step: Develop Orientation-Aware and DPI-Aware Applications for Pocket PC.
## Native Libraries
Visual Studio 2005 contains updated versions of the Microsoft Foundation Classes (MFC), Active Template Library (ATL), and Standard C++ Library (SCL) for devices, along with a small subset of the C Runtime (CRT), as shown in Table 2. These new device libraries are based on the desktop MFC version 8.0, ATL version 8.0, SCL version 8.0, and CRT version 8.0 libraries, subsetted based on size, performance, and platform capability. They are not factored any differently for Windows CE, Pocket PC, or Smartphone, so you can rely on the functionality of these runtimes being available on all of these platforms. These runtimes, however, contain some degree of platform awareness. ATL, for example, behaves differently on DCOM platforms than on COM platforms, and the same is true for GUI and headless platforms. MFC will be UI-model aware and will behave differently on AYGShell than on non-AYGShell platforms.
The native libraries are available as both dynamic and static libraries (except SCL, which will be only available as a static library).
Table 2. Summary of CRT, ATL, MFC, and SCL
| Library | Link options | .dll name ([d] is debug version) |
| --- | --- | --- |
| "Mini" C Runtime 8.0 | Static and dynamic | Msvcr80[d].dll |
| ATL 8.0 | Static and dynamic | Atl80.dll |
| MFC 8.0 | Static and dynamic | Mfc80u[d].dll |
| SCL 8.0 | Static only | Not applicable |
### Mini C Runtime 8.0
MFC and ATL 8.0 rely on certain C APIs that are not available in the CRT that ships in the device. Therefore, a "mini" C runtime provides these extra APIs. This runtime is not intended to be a full CRT, but it is provided primarily for MFC and ATL support. Table 3 lists the APIs that are provided in msvcr80.dll for devices.
Table 3. APIs provided by msvcr80.dll
_CrtDbgReportW, _CrtGetReportHook, _CrtSetReportFile, _CrtSetReportHook, _CrtSetReportMode, _gmtime64_s, _i64toa_s, _i64tow_s, _invalid_parameter, _itoa_s, _itow_s, _localtime64_s, _ltoa_s, _ltow_s, _mktime64, _strtoi64, _strtoui64, _time64, _ui64toa_s, _ui64tow_s, _ultoa_s, _ultow_s, _wcstoi64, _wcstoui64, _wmakepath_s, _wsplitpath_s, calloc, memcpy_s, memmove_s, strcat_s, strcpy_s, strncpy_s, wcscat_s, wcscpy_s, wcsftime, wcsncpy_s, wcsnlen
### Active Template Library 8.0
Developers have traditionally used ATL for COM-based applications. ATL features useful classes that make COM programming easier, along with classes for string manipulation and conversion, and for managing arrays, lists, and trees. Some differences that ATL device developers will see in Visual Studio 2005 compared to eMbedded Visual C++ include Web services client support, extended socket support (IPv6), and improved security and robustness. However, ATL 8.0 for devices does not have all of the desktop ATL functionality. Security, Services, ATL Data, and ATL Server are not included in the device version (Web services consumption is supported). These omissions are largely due to schedule and resource constraints.
### Microsoft Foundation Classes 8.0
MFC still plays an important role in the device space. There are a large number of native applications on devices today that use MFC, and even with the advent of the .NET Compact Framework, there continues to be a need for native GUI applications, especially on resource-constrained devices.
MFC for devices in Visual Studio 2005 provides a rich framework for applications, from simple dialog-based applications to sophisticated applications that employ the MFC document/view architecture. Naturally there are classes that have no underlying support in the device operating system, and there are also classes that were not ported due to size, performance, or schedule reasons. Figure 13 provides an overview of the subset of MFC that Visual Studio 2005 supports for devices.
Figure 13. Visual Studio 2005 supports a subset of MFC for devices. Click the thumbnail for a larger image.
### Standard C++ Library 8.0
The Standard C++ Library 8.0 for devices is also a subset of the desktop SCL. Table 4 describes the facilities that SCL 8.0 provides for devices.
Table 4. Facilities for devices in SCL 8.0
| Facility | Description |
| --- | --- |
| Diagnostics | Includes components for reporting several kinds of exceptional conditions and components for documenting program assertions. |
| General utilities | Includes components used in other elements of the SCL. These components may also be used by any C++ programs. This category also includes components used by the Standard Template Library (STL) and function objects, dynamic memory management utilities, and date/time utilities, as well as memory management components from the C library. |
| Strings | Includes components for manipulating sequences of "characters," where characters may be of type char, wchar_t, or of a type defined in a C++ program. The library provides a class template basic_string, which defines the basic properties of strings. The string and wstring types are predefined template instantiations the library provides. |
| STL | Provides a C++ program with access to the most widely used algorithms and data structures. STL headers can be grouped into three major organizing concepts: containers, iterators, and algorithms. Containers are template classes that provide powerful and flexible ways to organize data (for example, vectors, lists, sets, and maps). Iterators are the glue that pastes together algorithms and containers. STL provides a large set of programmable algorithms to handle sorting, searching, and other common tasks. |
| Numerics | Includes components used to perform semi-numerical operations and components for complex number types, numeric arrays, generalized numeric algorithms, and facilities included from the ISO C library. |
| Input/output | Includes components for forward declarations of iostreams, predefined iostream objects, base iostream classes, stream buffering, stream formatting and manipulators, string streams, and file streams. |
The SCL also incorporates the Standard C Library. Note that only the portions of the Standard C Library that have underlying device operating system support are incorporated.
### Windows Template Library 8.0
The Windows Template Library 8.0 (WTL) remains an unsupported sample on the Web. A device port of WTL 8.0 will most likely be available around the time Visual Studio 2005 releases. You can find the current WTL for devices in the Microsoft Download Center.
## Debugging
The native device debugger in Visual Studio 2005 provides a fast, reliable, and feature-rich debugging experience for device developers. The most notable remote debugger improvements since eMbedded Visual C++ are speed and reliability, with large improvements to responsiveness in scenarios like stepping and expression evaluation. Key debugger features include the ability to:
• Control program flow through stepping and "set next statement."
• Handle Windows exceptions.
• Set breakpoints and apply conditions to breakpoints (note that data breakpoints aren't supported).
• View the state of the application through expressions in the Watch, Autos and Locals windows, including support for STL visualization.
• View the lower level assembly representation through the Register window and through the Disassembly window.
• Attach to running processes to debug them, and then detach when finished.
• Enable just-in-time (JIT) debugging on the device.
• Post-mortem debug a Watson Kdump.
There are several ways to debug native device applications in Visual Studio 2005, many of which the How Do I? section outlines.
In any situation where you are debugging a .dll or .exe file that you did not build (that is, no project is available), it is recommended that you set your symbol search path to include the location of the .pdb files for the component you are debugging (if .pdb files are available).
To set your symbol search path
1. In Visual Studio 2005, click Tools, and then click Options.
2. In the Options dialog box, expand Debugging, and then select Symbols.
3. Enter the folders where your .pdb files are located.
### F5
Perhaps the most common debugging scenario is F5: starting the application under the debugger. In Visual Studio 2005, debugging your native application on the device is as seamless as debugging a local desktop computer application. You can start the application under the debugger with F5 (start new instance of application), F10 (step over) or F11 (step into). Because you will have access to the application symbols and sources, you will get the following debugging information:
• Breakpoints
• Watch
• Autos
• Locals
• Immediate
• Callstack
• Threads
• Modules
• Processes
• Memory
• Disassembly
• Registers
## Emulator
Visual Studio 2005 ships with the Microsoft Device Emulator 1.0, an emulator that allows developers access to device targets that they can deploy their smart device applications to. The Device Emulator starts the device operating system (referred to as an "image" in this document) in its own address space and emulates the ARM instruction set to provide high-fidelity emulation of a real device, as shown in Figure 14. Developers can treat the emulator as a real device in almost every respect.
Figure 14. Device Emulator. Click the thumbnail for a larger image.
Because the Device Emulator can run ARM binaries, any project that developers build for Windows Mobile can run on the emulator without the developers having to rebuild. The Device Emulator appears as its own target "device" in the list of available target devices for a given platform, as shown in Figure 15.
Figure 15. The Device Emulator appears as a target "device" in Visual Studio
When you select an emulator and deploy the application, the emulator starts (in Figure 15, the image is Pocket PC 2003 Second Edition). After the emulator starts, the user treats the emulator like a real device, and Visual Studio 2005 downloads the application and starts the debugger. Furthermore, you can run multiple emulators at any given time, each with a different image booted. With Visual Studio 2005, you can have several "devices" at your disposal to deploy and debug your application on.
The Device Emulator has a host of features to provide a rich device experience to developers. For more information about the Device Emulator's features, see the How Do I? section.
## How Do I?
This section provides more details about specific tasks that native Smart Device developers may want to accomplish.
### Project: Multiple Platform Development
Before you create your project, it is ideal if you know what platforms you want to target. When you create your project, you can then select the platforms in the Application Wizard. However, if you don't know what platforms you'd like your project to target, or you wish to add desktop platforms as targets, you can add more platforms after you create your native device project.
To add another platform to a project
1. Open the Configuration Manager.
The Configuration Manager appears.
2. Under Active solution platform, select New.
Note that adding a Windows Mobile 5.0 platform to your existing Windows Mobile 2003 project requires you to perform a manual step to successfully build for your Windows Mobile 5.0 configuration.
To add a Windows Mobile 5.0 platform to an existing Windows Mobile 2003 project
1. Right-click the project, and then select Properties.
2. On the Properties dialog box under Platform, select the Windows Mobile 5.0 platform.
3. Expand Linker, and then select Command Line.
4. Delete /MACHINE:ARM.
5. Click Apply.
6. Repeat steps 2 through 5 for every Windows Mobile 5.0 platform you added.
If you do not perform the previous procedure, you will receive the following link error when you build your Windows Mobile 5.0 configurations:
Fatal error LNK1112: module machine type 'THUMB' conflicts with target machine type 'ARM'
### Project: Specifying Additional Files to Download with My Application
To include additional files to be downloaded with your project, you need to specify them in the following format:
file name|source directory|remote directory|register
where:
File name is the name of the file that you want to deploy.
Source Directory is the fully qualified path on the desktop computer where you can find the file.
Remote Directory is the location on the device where you want to deploy the file.
Register is either a 0 or a 1 (0 means do not register; 1 means register).
For example, to include c:\foo\bar.dll to be downloaded to the \windows directory on the device and registered on deployment, you would have the following entry in the Additional Files property: Bar.dll|c:\foo|\windows|1.
To deploy more than one additional file
1. Click the Ellipsis button of that property.
2. Type the new files on separate lines by using the previous format.
3. Click OK.
### Project: Authenticode Signing with a Test Certificate
If you have no signing certificates in your Personal certificate store, you can perform the following steps to import a certificate into your Personal certificate store. This example uses a test certificate that Visual Studio 2005 includes.
To import a certificate into your Personal certificate store
1. Select the Authenticode signing configuration property.
2. Click the Ellipsis button in the Certificate entry.
3. Click Manage Certificates, and then click Import.
4. Browse to <VS Install Directory>\SmartDevices\SDK\SDKTools\TestCertificates.
5. In the Filter box, type *.pfx.
Note This step is important: you cannot sign with a .cer file, because .cer files have no private key.
6. Select TestCert_Privileged.pfx, and then run through the wizard. The wizard requires no password.
7. After the certificate is imported, close the Certificate Manager.
8. In the Select Certificate dialog box, select the certificate, and then click OK.
9. In the Provision to Device list, select a certificate store.
If the certificate you imported doesn't appear in the Select Certificate dialog box in step 8, you have either imported a non-code signing certificate or a certificate without a private key. You need to follow the procedure again and make sure you filter for and select the .pfx file in steps 5 and 6.
### Project: Creating and Using Precompiled Headers
Most native device application projects created in Visual Studio 2005 will be set to Use Precompiled Header, which by default is stdafx. If you want to use a different precompiled header, perform the following procedure.
To use a precompiled header other than the default
1. Right-click the .cpp file you want to precompile, and then select Properties.
2. Expand C/C++, and then select Precompiled Headers.
3. Under Create/Use Precompiled Header, select Create Precompiled Header.
4. In the same property page, under Create/Use PCH Through File, type the name of the header file to use.
5. Click OK.
6. Right-click the project, and then select Properties.
7. Expand C/C++, and then select Precompiled Headers.
8. Under Create/Use Precompiled Header, make sure Use Precompiled Header is selected.
9. In the same property page under Create/Use PCH Through File, type the name of the header file to use.
### Resource Editor: Menus on Smartphone
Creating menus for Smartphones involves some manual steps. The article, How to: Create a Soft Key Bar, is an excellent reference about this topic.
You can use Visual Studio 2005 to create a Smartphone menu correctly.
To create a Smartphone menu in Visual Studio 2005
1. Make sure you have an RCDATA section. Typically, you can find this section in the RC2 file.
2. Make sure the Resource IDs have values greater than or equal to 100 (to work around a bug in Windows Mobile 2003). You can set the IDs in the resource header file (resourcesp.h for Smartphone).
3. Make sure that buttons have NOMENU as their index.
```
IDR_MENU RCDATA
BEGIN
    IDR_MENU,
    2,
    I_IMAGENONE, IDM_OK, TBSTATE_ENABLED, TBSTYLE_BUTTON | TBSTYLE_AUTOSIZE,
    IDS_OK, 0, NOMENU,
    I_IMAGENONE, IDM_HELP, TBSTATE_ENABLED, TBSTYLE_DROPDOWN | TBSTYLE_AUTOSIZE,
    IDS_HELP, 0, 0,
END
```
### Resource Editor: ActiveX Control Development
When designing ActiveX controls for devices by using Visual Studio 2005, you need to take a few extra steps. Because the Resource Editor relies on the control being registered on the desktop computer to manipulate it at design time and because you cannot register device controls on the desktop computer, the following steps provide an alternative design time experience. The following procedure assumes you already have your ActiveX control project and host project, and you are hosting the ActiveX control in a dialog.
To design ActiveX controls by using Visual Studio 2005
1. In the Resource Editor, open the dialog of the host project.
2. From the Toolbox, drag a Custom Control onto the dialog.
3. Position and size the custom control onto the dialog to reflect how you want your ActiveX control to appear.
4. Right-click the custom control, and then select Properties.
5. In the Class property, paste the GUID of the ActiveX control (remember to include the curly braces "{…}").
6. In the Solution Explorer, right-click the Project Name.RC2 file, and then select View Code.
7. In the Add manually edited resources here section, add the following dialog init code. The custom control requires a dialog init section to display correctly. The contents of the actual dialog init section are not used. Remember to replace <project name> with the name of your project.
```
IDD_<project name>_DIALOG DLGINIT
BEGIN
    IDC_CUSTOM1, 0x376, 22, 0
    0x0000, 0x0000, 0x0800, 0x0000, 0x094d, 0x0000, 0x043d, 0x0000, 0x0013,
    0xcdcd, 0xcdcd,
    0
END
```
8. Build and run your host project (remember that you need to deploy and register the ActiveX control on the target device).
### Debugging: Attach to Process
If the application is already running on the device (or emulator), you can attach the debugger to the already running instance.
To attach the debugger to an application running on the device or emulator using Visual Studio 2005
1. On the Tools menu, click Attach to Process. The Attach to Process dialog box appears.
2. In the Transport box, select Smart Device.
3. Click Browse to bring up the list of devices you can connect to (including emulators).
4. Select a target device or emulator, and then click Connect.
You can choose to attach with the native or managed debugger explicitly, or you can select Automatic to let the IDE decide the appropriate debugger. If you are unsure which to select, Automatic is the best choice. After you select the target device, the Available Process list enumerates the running processes on the device.
Note that the Type column indicates whether the application is managed or native. WinCE indicates native, and .NET CF indicates managed. All managed processes inherently have native code running in them, so for a managed application, you will see WinCE, .NET CF in the Type column.
5. Select a process, and then click Attach. The debugger attaches, and the IDE enters Debug mode.
If you have a copy of the .dll or .exe file that you are debugging on the desktop computer and in the symbol search path of Visual Studio 2005, the debugger loads it, and tries to find symbols/sources to the component. If the debugger is successful, you'll receive full debugging information (similar to having launched the project with F5).
If the .dll or .exe file is not available on the desktop computer, and you are targeting Windows Mobile 5.0, the debugger loads PDATA from the device. ARM, MIPS, and SH device compilers use PDATA structures to aid in stack walking at runtime, and this structure aids in callstack unwinding. If you're debugging on a Windows Mobile 2003 device, the debugger cannot load PDATA from the device, so even if you have the symbols and sources for the .dll or .exe file, you'll receive no debugging information unless you also have a copy of the binary on the desktop computer.
### Debugging: JIT Debugging
Just-in-time (JIT) debugging allows you to attach the debugger to an application at the point of crash, providing you with the opportunity to get details about the cause of the crash. To do this, you need to install the JIT debugger onto the device to give the debugger a chance to catch the exception that the crash throws.
To enable JIT debugging
1. Go to <VS Install Directory>\SmartDevices\Debugger\target\wce400\cpu (where cpu is ARMV4 for Windows Mobile 2003 and ARMV4i for Windows Mobile 5.0).
2. Copy eDbgJit.exe to the \windows folder on the device.
3. Start the executable file.
• If you are running this file on a Smartphone, after copying the executable to \windows, create a shortcut to the executable file, and then place the shortcut in the Start Menu folder of the Smartphone. This shortcut will allow you to easily access and start the executable file.
• If you are running this file on Windows Mobile 2003-based device, soft reset the device.
• If you are using the emulator, saving state after the soft reset is a good option.
At this point, the JIT debugger is installed, and any application that crashes on the device results in the JIT debugger giving you notification and the opportunity to attach Visual Studio 2005 to the application (or to end the application).
To disable JIT debugging
• Delete eDbgJit.exe from the device.
### Debugging: Post-Mortem Debugging
In cases where you do not have the opportunity to debug a process at the time of the crash, post-mortem debugging allows you to debug an application after it has crashed by attaching the debugger to the crash dump file.
The first step is to actually get the dump file from the device. There is an established process, called Windows Quality Online Services, that allows you to retrieve dump files from your application crash. Due to privacy issues, you need to sign up for the program. You can find more information at Windows Quality Online Services.
After you get a dump file, perform the following procedure.
To debug a dump file in Visual Studio 2005
1. Copy the filename.kdump file to a directory on the computer that has Visual Studio 2005 installed.
2. In Visual Studio 2005, on the File menu, click Open, and then click Project/Solution.
3. Open the dump file.
4. Press F5.
Note Make sure you open the .kdump file as a Project/Solution. If you click the Open File icon instead and open the .kdump file as a file, you will not be able to debug it.
If you have the symbols to the .dll or .exe file that crashed, you should set the symbol search path to include the folder containing that file.
### Debugging: Services
Support for debugging Services.exe is being evaluated for Visual Studio 2005, but in the meantime, there is an unsupported workaround that enables services debugging for Visual Studio 2005 Beta 2.
To enable services debugging in Visual Studio 2005 Beta 2
1. Go to System drive\Documents and Settings\Username\Local Settings\Application Data\Microsoft\CoreCon\1.0.
2. Open conman_ds_debugger.xsl in an editor (for example, Notepad).
Note Make sure to create a backup of this .xsl file in a separate folder before proceeding.
3. Search for services.exe.
4. Delete this entry from the file.
5. Save, and then close the file.
6. Close Visual Studio, and then restart it.
The next time you attach to process, you'll see Services.exe as an available process to attach to.
Note that for Visual Studio 2005 Beta 2, this scenario is unsupported. It is being evaluated for official support in the Visual Studio 2005 final release.
### Emulator: Folder Sharing
It is possible to map a folder on your desktop computer (or network) to the emulator as an "SD card." This action simulates inserting a card into the device that contains the files in the desktop computer's folder. It is a convenient way to move files between the Device Emulator image and your desktop computer.
To folder share
1. On the emulator, select File, and then select Configure.
2. On the General tab, enter the folder you want to share in the Shared Folder property.
3. Click OK. You can access the shared folder from the emulator.
### Emulator: Save State
After an image has been started in the emulator, you can configure the image and then save its "state." Therefore, you can turn off the emulator completely, and the next time that you use that image, its last state is restored. This feature is extremely useful if your application requires a specific environment or other installed applications to run. Another benefit to the Save State feature is a drastically reduced start time the next time you start the emulator with that image (because the saved-state image is already started).
To erase a saved state and cold boot the device
• On the emulator, select File, and then select Clear Saved State.
The Pocket PC 2003 Second Edition and Smartphone 2003 Second Edition Emulator images that ship in Visual Studio 2005 are actually pre-started saved-state images, which is why the image appears to "boot" instantly when the developer starts the emulator.
### Emulator: ActiveSync
In Visual Studio 2005, it is possible to establish an ActiveSync connection to the emulator. You can do this by virtually placing the emulator in its "cradle." Your desktop computer must have ActiveSync installed (Visual Studio 2005 supports only ActiveSync 4.0 or later).
To establish an ActiveSync connection to the emulator
1. Start the emulator (for example, in the IDE on the Tools menu, click Connect to Device, and then select an image to boot).
2. After the emulator boots, in Visual Studio 2005 on the Tools menu, click Device Emulator Manager.
3. Select the image you have booted (it should have a green arrow next to it).
4. On the Actions menu, click Cradle.
5. Make sure ActiveSync is configured to allow DMA connections.
6. ActiveSync should then start as the emulator makes a connection to your desktop computer.
After you have an ActiveSync connection to the emulator, you can use ActiveSync File Explorer and any other ActiveSync features.
Note After you have an ActiveSync connection to your emulator, when you use Visual Studio 2005, you must treat that emulator as a device when deploying. For example, if you establish an ActiveSync connection to your Pocket PC 2003 Second Edition Emulator image and you want to deploy your application to it, you must select Pocket PC 2003 Device as the target device in Visual Studio.
### Emulator: Screen Rotation
The emulator supports rotation to simulate real devices that have screen rotation capability (Portrait to Landscape mode). Note that the underlying image must also support rotation (for example, Pocket PC 2003 Second Edition and later).
To rotate the emulator
1. On the File menu, select Configure.
2. Select the Display tab.
3. Set the orientation to 0, 90, 180, or 270 degrees.
Note For Pocket PC, the Calendar button is mapped to the rotate function, so if you select this button, the emulator rotates (and the image inside it).
### Emulator: COM Port Mapping
You can also map serial ports on the emulator to physical COM ports on your desktop computer. This feature allows you to plug in peripherals and actually have them available to the emulator. A practical example of this feature is having a GPS device communicate over serial Bluetooth that is being mapped to your desktop computer's COM1 port, and then mapping your emulator's Serial port 1 to your desktop computer's COM1 port. You can then debug your GPS driver on the emulator.
To COM port map on the emulator
1. On the File menu of the emulator, select Configure.
2. In the Emulator Properties dialog box, select the Peripherals tab.
3. Under Serial Port 1, select COM1, and then click OK. | 2020-05-29 14:26:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19848759472370148, "perplexity": 5610.913455875625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347404857.23/warc/CC-MAIN-20200529121120-20200529151120-00236.warc.gz"} |
http://eatdrinkitaly.org/books/mww-type-titanosilicate-synthesis-structural-modification-and-catalytic-applications-to-green | # MWW-Type Titanosilicate: Synthesis, Structural Modification
Format: Paperback
Language: English
Format: PDF / Kindle / ePub
Size: 8.20 MB
We attempt to establish this fruitful communication and quick publication. To learn the skills to help patients recover their range of motion or relieve pain, students take courses in natural science, physical therapy techniques, and patient management; this physical therapy theory is then applied during intensive clinical rotations. A group from the International School for Advanced Studies (SISSA) in Trieste has developed a faster and simpler method requiring modest resources for validating physical models of RNA used in molecular dynamics.
Pages: 125
Publisher: Springer; 2013 edition (August 20, 2013)
ISBN: 3642391141
Principles of Physical Chemistry
Organic Reaction Mechanisms: 40 Solved Cases
Non-Equilibrium Thermodynamics in Multiphase Flows (Soft and Biological Matter)
Interfacial Synthesis, Vol. 1: Fundamentals
The World of Physical Chemistry
An urgent problem in protein science is to understand ion uptake and ion recognition (selectivity) by proteins and polypeptides. Why and how are these bio-nanostructures so exquisitely sensitive to particular ions? At about the same time, the Scottish physicist James Clerk Maxwell (1831-1879) and the Austrian physicist Ludwig Boltzmann (1844-1906) had analyzed the behavior of gases on the assumption that they were an assemblage of a vast number of randomly moving particles (the kinetic theory of gases). If any of these topics interest you and you have completed CHM 341, please stop by and I can tell you more about the expectations in my sections of CHM 496/7. Gordon, "On The Electronic Structure of Bis(η5-cyclopentadienyl)Ti", Journal of Physical Chemistry A, 106, 7921, (2002). Gordon, "Towards Multireference Equivalents of the G2 and G3 Methods", Journal of Chemical Physics, 115, 8758, (2001).
Surface Plasmon and Hydrogel Optical Waveguide: New Biosensor Applications of Surface Plasmon and Hydrogel Optical Waveguide Spectroscopy
Physical Chemistry of 1,2-Dithiole Compounds: The Question of Aromaticity (Sulfur Reports Series)
Quantum Chemistry of Polymers Solid State Aspects (Nato Science Series C:)
We prepare colloidal particles of different shapes, sizes, materials and interactions. Characterization by various experimental techniques and development of theoretical models helps us to understand their physical chemistry. Chemistry is such a broad subject and one so full of detail that it is easy for a newcomer to find it somewhat overwhelming, if not intimidating. Chemistry 441 students will have additional homework assignments. Photoelectric effect, heat capacity of solids, line spectra of atoms, Bohr theory of the atom. de Broglie waves, Davisson-Germer experiment, Heisenberg Uncertainty Principle, two-slit diffraction experiment and wave-particle duality. Mathematics of waves, wave equations, separation of variables, solving linear second-order differential equations with constant coefficients. In the example shown here, the light is in the infrared range, which excites spring-like motions of chemically-bonded atoms. This provides a quick way of identifying the kinds of chemical bonds present in a molecule, an important tool in determining its structure. Experimental and theoretical work features nanomaterials and devices, biophysical chemistry, atmospheric and environmental chemistry, the use of high-resolution rovibronic spectra to characterize transition states, single-molecule and single-quantum-dot spectroscopy, and condensed-phase molecular dynamics.
Organic reaction science with an emphasis on mechanistic organic and organometallic chemistry, new synthetic methods, selectivity analysis, strategies for the design and synthesis of complex molecules, concepts for innovative problem solving, and how to put these skills together in the generation of impactful ideas and proposals directed at solving problems in science.
Journal Of Physical Chemistry, Volume 7...
Atoms And Molecules
Introduction to physical chemistry,
Pure Substances. Part 2 _ Compounds from BeBr_g to ZrCl2_g (Landolt-Börnstein: Numerical Data and Functional Relationships in Science and Technology - New Series) (Vol 19)
Catalysis: Science and Technology
Research in Surface Forces : Surface Forces in Thin Films and Disperse Systems
High-Resolution Spectroscopy of Transient Molecules (Springer Series in Chemical Physics)
Advances in Gas Phase Ion Chemistry
Surface and Interfacial Aspects of Biomedical Polymers: Volume 1 Surface Chemistry and Physics
Combustings Flow Diagnostics (Nato Science Series E:)
Advances in Chemical Physics, Volume 50: Dynamics of the Excited State
Thermodynamics for Chemists, Physicists and Engineers
Polymers in Information Storage Technology
Removal of unwanted gases or vapors: charcoal is used in gas masks to remove unwanted gases and vapours.
1) Write notes on Henry's law and its applications.
2) Write notes on Raoult's law of binary mixtures and its deviations.
3) What are ideal and non-ideal solutions?
4) Write notes on azeotropic distillation.
5) Show how the van't Hoff factor i is calculated for an electrolyte.
8) Derive the equilibrium constant Ka of a chemical reaction with the help of thermodynamics.
9) Derive the van't Hoff isochore equation.
10) Discuss De Donder's treatment of chemical equilibrium.
11) Explain the effect of temperature and pressure on the equilibrium constant.
12) Distinguish between physisorption and chemisorption.
13) Discuss the Freundlich adsorption isotherm.
14) Explain the Langmuir theory of adsorption.
15) What are the postulates of B
Guiding the reader from the Zeroth Law to the Third Law, he introduces the fascinating concept of entropy, and how its unstoppable rise constitutes the engine of the universe. This was held in Perth as part of the 45th ANZAAS meeting in 1974. The second meeting (after which the Division was formed) was at the 5th National Convention in Canberra in May 1974. Physical chemistry applies physics and math to problems that interest chemists, biologists, and engineers. Physical chemists use theoretical constructs and mathematical computations to understand chemical properties and describe the behavior of molecular and condensed matter. Single-molecule biophysics is the study of the dynamics and interactions of individual biomolecules to understand how they carry out their functions in living cells. In single-molecule biophysics, monitoring the folding properties of single protein or RNA molecules helps reveal how they are transported across cellular membranes. By repeating the process A and B can be separated. Nowadays a Soxhlet apparatus is often employed to make use of the solvent over and over again automatically. 1.3 Thermodynamic derivation of elevation of boiling point: The boiling point of a liquid is the temperature at which its vapour pressure becomes equal to the pressure over it, which is normally one atmosphere. lowered when a non
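The section on boiling-point elevation just opened can be made concrete with a small numerical sketch. The numbers below are my own illustration; the relation $\Delta T_b = i\,K_b\,m$ and water's ebullioscopic constant $K_b \approx 0.512\ \mathrm{K\,kg\,mol^{-1}}$ are the standard textbook values:

```python
# Boiling-point elevation of a dilute solution: dTb = i * Kb * m
Kb = 0.512   # ebullioscopic constant of water, K*kg/mol (standard value)
m = 0.5      # molality of the solute, mol/kg (made-up illustration)
i = 1        # van't Hoff factor, 1 for a non-electrolyte
dTb = i * Kb * m
print(dTb)   # 0.256 -> the solution boils about 0.26 K above pure water
```

For an electrolyte such as NaCl, the same formula applies with i close to 2, since each formula unit dissociates into two particles.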
Without the quencher, the fluorescence lifetime $\tau_0$ is: $$\tau_0 = \frac{1}{k_f + k_{nr}}$$ In the presence of a quencher, the fluorescence lifetime $\tau$ is: $$\tau = \frac{1}{k_f + k_{nr} + k_q[Q]}$$ If we plug that in again and rearrange a bit, we get: $$\frac{\Phi_0}{\Phi} = \frac{\tau_0}{\tau} = 1 + k_q\tau_0[Q] = 1 + K_{SV}[Q]$$ We name $K_{SV}$ the Stern-Volmer constant! Plotting $\frac{\Phi_0}{\Phi}$ or the quotient of the areas of the fluorescence spectra vs the concentration of the quencher should give a straight line that cuts the $y$ axis at 1.
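The straight-line behavior described here is easy to check numerically. Below is a sketch with entirely made-up data (the value $K_{SV} = 50\ \mathrm{L/mol}$ is invented for illustration), fitting $\Phi_0/\Phi = 1 + K_{SV}[Q]$ with the intercept pinned at 1:

```python
# Synthetic quenching data obeying Phi0/Phi = 1 + Ksv * [Q].
KSV_TRUE = 50.0
conc = [0.0, 0.002, 0.004, 0.006, 0.008]   # quencher concentration [Q], mol/L
ratio = [1 + KSV_TRUE * c for c in conc]   # Phi0/Phi at each concentration

# Least-squares slope of (ratio - 1) versus conc through the origin:
# Ksv = sum(c * (r - 1)) / sum(c**2)
ksv_fit = sum(c * (r - 1) for c, r in zip(conc, ratio)) / sum(c * c for c in conc)
print(round(ksv_fit, 6))   # 50.0, recovering the Stern-Volmer constant
```

With real fluorescence data the points scatter around the line, and the fitted slope is the experimental estimate of $K_{SV}$.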
http://publ.plaidweb.site/manual/formats/1341-User-configuration-file | # Publ: User configuration file
The authentication file, normally stored in users.cfg unless configured differently, stores a set of permissions groups for different authenticated users.
The format is pretty simple:
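The example file that originally followed here did not survive extraction; a minimal sketch consistent with the description below (every identity and group name except `friends`/`good-friends` is made up):

```ini
; comment lines start with ; or #
[friends]
https://example.com/alice
good-friends

[good-friends]
https://example.com/bob
mailto:carol@example.com
```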
Simply put, each group is indicated by [group_name], and each line after the group name indicates the authenticated identities (and other groups) which are a part of that group. So, in this case, anyone who is in the good-friends group will also be in the friends group. All identities are given as full URIs.
Identities can also be used as a group name, to help manage those folks who have more than one identity that you want to treat equivalently; for example:
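The concrete example was lost in extraction; reconstructed from the description in the next paragraph, it would look something like:

```ini
[https://beesbuzz.biz]
mailto:fluffy@beesbuzz.biz
https://twitter.com/i/user/993171
https://plush.city/@fluffy
```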
This will give the identities mailto:fluffy@beesbuzz.biz, https://twitter.com/i/user/993171, and https://plush.city/@fluffy membership in all groups that https://beesbuzz.biz is in as well. However, the opposite is not true; https://beesbuzz.biz won’t automatically have access to entries that are only allowed for https://plush.city/@fluffy, for example.
Any identities which belong to the administrative group (which is admin by default but can be configured differently) will have access to all entries, as well as the administrative dashboard. Otherwise, users are subject to the permissions system.
You can also start a line with # or ; to indicate that it is a comment. | 2022-05-17 21:14:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36402690410614014, "perplexity": 1904.2132211010041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520817.27/warc/CC-MAIN-20220517194243-20220517224243-00711.warc.gz"} |
http://math.stackexchange.com/questions/45215/does-a-finite-commutative-ring-necessarily-have-a-unity/290675 | # Does a finite commutative ring necessarily have a unity?
Does a finite commutative ring necessarily have a unity? I ask because of the following theorem given in my lecture notes:
In a finite commutative ring every non-zero-divisor is a unit.
If it had said "finite commutative ring with unity..." there would be no question to ask, I understand that part. What I'm asking about is whether or not we can omit explicitly stating it because it follows from the finiteness of our commutative ring.
[Clarification] The way I'm learning ring theory now, a "ring" is defined as an additive Abelian group further equipped (I hope I'm using the right terminology) with an associative multiplication operation which distributes over addition. In this definition we do not require the existence of 1.
In other words, when I say "ring" I mean a rng.
I do not know what a commutative ring without 1 is; what about $2\mathbb{Z}/4\mathbb{Z}$? – wxu Jun 14 '11 at 0:07
Look in that book for the definition of ring. Sometimes it includes 1. A discussion in Mathoverflow: mathoverflow.net/questions/22579 – GEdgar Jun 14 '11 at 0:16
@GEdgar: Yes I am aware of the different definitions of a "ring". I added a clarification; in my case we do not require 1. – Josh Chen Jun 14 '11 at 0:20
@wxu: Excuse my oversight, see the clarification :) – Josh Chen Jun 14 '11 at 0:22
@Rasmus: Dear Rasmus, I'm not sure that your edits clarified the question. As originally written, the word "unity" was used to mean "multiplicative identity" (of which it is one synonym), while "unit" in the block-quote means "invertible element". Your edits have now removed this distinction --- you have the word unit meaning both multiplicative identity and invertible element, which is potentially confusing ... . Regards, – Matt E Jun 14 '11 at 9:21
Let $A$ be a finite commutative ring (not assumed to contain an identity). Suppose that $a \in A$ is not a zero-divisor. Then multiplication by $a$ induces an injection from $A$ to itself, which is necessarily a bijection, since $A$ is finite. Thus multiplication by $a$ is a permutation of the finite set $A$, and hence multiplication by some power of $a$ (which by associativity is the same as some power of multiplication by $a$) is the identity permutation of $A$. That is, some power of $a$ acts as the identity under multiplication, which is to say, it is a (and hence the) multiplicative identity of $A$.
In short, if a finite commutative ring $A$ contains a non-zero divisor, then it necessarily contains an identity, and every non-zero divisor in $A$ is a unit.
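The pigeonhole argument above can be watched in action with a few lines of Python. The rng $2\mathbb{Z}/10\mathbb{Z} = \{0,2,4,6,8\}$ is my own choice of illustration, not from the thread: it is not presented with a "1", yet $2$ is not a zero divisor, and a power of $2$ turns out to be the multiplicative identity.

```python
# The rng R = 2Z/10Z = {0, 2, 4, 6, 8} under arithmetic mod 10.
R = [0, 2, 4, 6, 8]
MOD = 10

def is_zero_divisor(a, ring, mod):
    # a is a zero divisor if it kills some nonzero element of the ring.
    return a != 0 and any(b != 0 and (a * b) % mod == 0 for b in ring)

def find_identity(ring, mod):
    # Return an element acting as a multiplicative identity, if any.
    for e in ring:
        if all((e * b) % mod == b for b in ring):
            return e
    return None

print(is_zero_divisor(2, R, MOD))  # False: multiplication by 2 permutes R
e = find_identity(R, MOD)
print(e)                           # 6, and indeed 2**4 = 16 = 6 (mod 10)
```

So the "hidden" identity of this rng is $6 = 2^4$, exactly as the permutation argument predicts: some power of multiplication-by-2 is the identity permutation.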
Thanks everyone for the answers, all of them helped me understand this. Matt's answer just finalised it. – Josh Chen Jun 14 '11 at 1:06
Does this work for finite-dimensional algebras as well? I guess so. That is, let $A$ be a commutative ring and a finite dimensional $\mathbb{K}$-vector space s.t. the ring multiplication is bilinear. Then every element of $A$ that is not a zero divisor is a unity. The proof is exactly the same, just replace "an injection is necessarily a bijection since $A$ is finite" with "a linear injection is a linear bijection because $A$ is finite dimensional". What do you think? – Giuseppe Negro Jun 14 '11 at 2:24
@dissonance: Dear dissonance, I agree. Regards, – Matt E Jun 14 '11 at 6:36
for a finite ring (commutative or not) the following statements are equivalent: the ring has a unit element, the ring has a non-zero-divisor – miracle173 Jun 14 '11 at 10:19
@Noah: Dear Noah, Any matrix which does not have zero as an eigenvalue has an inverse, which can be expressed as a polynomial in the given matrix. (Consider the minimal polyomial.) Regards, – Matt E Jun 14 '11 at 14:49
No. Consider the ideal generated by $2$ in $\mathbb{Z}/4\mathbb{Z}.$
Aha. Of course. Thanks for that. – Josh Chen Jun 14 '11 at 0:52
I think $2$ is not a non-zero-divisor in this ring because of $2*2=0$ – miracle173 Jun 14 '11 at 10:40
It is still a finite commutative ring, however. – Joe Z. Jan 30 '13 at 16:34
On any additive abelian group one can define an identically zero multiplication operation. Taking the group to be nontrivial but finite gives an example of a finite rng without unity. (Note that jspecter's example is of this form.)
On the other hand any proper ideal in a ring gives an example of a rng, but one has to be a little careful here: some other element could act as an identity on the ideal. One can avoid this by choosing rings without nontrivial idempotent elements, a good example being any local ring. (This leads back again to jspecter's example.)
[Note that the "rng" above is not a typo: it is a rather standard term for the algebraic object which satisfies all the axioms for a ring except the existence of a multiplicative identity. The point is that the vast majority of mathematicians nowadays mean by "ring" an object having a multiplicative identity and by a "ring homomorphism" a map preserving that identity. I had to restrain myself from answering, "Yes, every ring has a unity, by definition."]
I've heard it quipped that "rng" is pronounced "(w)rong." – Grumpy Parsnip Jun 14 '11 at 1:33
The usual fashion nowadays is to build the existence of a multiplicative identity into the definition of commutative ring. However, the stated result is correct even if one does not.
This is not because the existence of a multiplicative identity follows from finiteness. As the examples already posted show, if one does not build a "$1$" into the definition of ring, there are finite rings with no $1$.
However, if there is even one non-zero divisor, then it is easy to prove that a finite ring must have a $1$. So one can say that in a finite ring, either every object other than $0$ is a zero divisor, or there is a multiplicative identity.
You mean "unity" right? A "unit" is an invertible element I think. – Josh Chen Jun 14 '11 at 0:56
But thank you, this clarified things a lot. – Josh Chen Jun 14 '11 at 0:56
@Josh Chen: Either way! If there is an element $\ne 0$ which is not a zero divisor then (i) there is a multiplicative identity and (ii) everything which is not zero, not a zero divisor is invertible, that is, a unit. But thank you for pointing out the ambiguity. I changed the "unit" to "multiplicative identity." But more is true. The proof is basically the same as the usual proof that a finite integral domain is a field. – André Nicolas Jun 14 '11 at 2:22
If all the elements of the ring are zero divisors, it is false.
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-for-college-students-7th-edition/chapter-7-section-7-5-multiplying-with-more-than-one-term-and-rationalizing-denominators-exercise-set-page-551/116 | # Chapter 7 - Section 7.5 - Multiplying with More Than One Term and Rationalizing Denominators - Exercise Set - Page 551: 116
$2(a - \sqrt{a^2-1})$
#### Work Step by Step
Here $f(x) = x^2$, so: $f(\sqrt{a + 1} - \sqrt{a - 1})$ $=(\sqrt{a + 1} - \sqrt{a - 1})^2$ $=(\sqrt{a + 1})^2 + (\sqrt{a - 1})^2 - 2\sqrt{a + 1}\sqrt{a - 1}$ $=a+1 + a - 1 - 2\sqrt{a^2-1}$ $= 2a - 2\sqrt{a^2-1}$ $= 2(a - \sqrt{a^2-1})$
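A quick numeric spot-check of the simplification (my own sanity check with $a = 3$, not part of the original solution):

```python
import math

a = 3.0
lhs = (math.sqrt(a + 1) - math.sqrt(a - 1)) ** 2  # f applied to the expression
rhs = 2 * (a - math.sqrt(a * a - 1))              # the simplified answer
print(abs(lhs - rhs) < 1e-9)   # True: both forms agree
```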
http://math.stackexchange.com/questions/6028/simple-way-to-understand-what-derivative-is | # Simple way to understand what derivative is
I know how to count derivatives, but I actually don't understand what they are. What do they show?
Wikipedia's explanation: Loosely speaking, a derivative can be thought of as how much one quantity is changing in response to changes in some other quantity. I don't understand what is written here.
Please, if you can, explain it to me as simply as you can. Thank you very much.
How do you count derivatives? – Mariano Suárez-Alvarez Oct 4 '10 at 17:28
Also: there are probably more calculus texts out there than people who regularly visit this site: surely one of them should be of help for you? I am pretty confident your local library has some available... – Mariano Suárez-Alvarez Oct 4 '10 at 17:33
I always thought the most intuitive way to think about the derivative was to think of a particle in motion and that the derivative corresponded to its velocity at a point. – WWright Oct 5 '10 at 5:27
I find the visual way of thinking about it to be the easiest: if you look at the graph of $f$ and zoom in to the point $(x,f(x))$, the graph will eventually start looking very much like a line. That line is the "tangent line" to $f$ at $x$, and its slope is the derivative of $f$ at $x$. (Some functions won't ever start looking like a line, no matter how far you zoom in. One example is $f(x)=\left\vert x\right\vert$, at $(0,0)$. We say that this function isn't differentiable there.)
The formal definition of the derivative, as $$f^\prime(x)=\lim_{a\rightarrow x}\frac{f(x)-f(a)}{x-a},$$ is really just another, more mathematical, way to describe "zooming in" and the construction of a tangent line. If you think about it, the expression inside the limit is just the slope formula for a line going through $(x,f(x))$ and $(a,f(a))$. This line is called a "secant line." If we let $a=x$, then we only have one point and so we don't have a unique line anymore. But if we instead ensure that $a\ne x$ but that $a$ gets closer and closer to $x$, the secant lines approach the tangent line that we saw above. This is just the same "zooming in" I was talking about above.
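To see the limit in the definition at work numerically, here is a small sketch of my own (with $f(x)=x^2$ at $x=2$, where the tangent slope is $4$):

```python
# Secant slopes (f(x) - f(a)) / (x - a) approach the tangent slope as a -> x.
def f(x):
    return x * x

x = 2.0
for a in [1.0, 1.5, 1.9, 1.99, 1.999]:
    slope = (f(x) - f(a)) / (x - a)
    print(a, slope)   # 3.0, 3.5, 3.9, ... creeping up toward 4
```

Each printed number is the slope of one secant line through $(x, f(x))$ and $(a, f(a))$; as $a$ closes in on $x$, the slopes settle on the derivative $f'(2) = 4$.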
If you're less of a visual person, it's often helpful to think of a physical quantity, like velocity. Imagine driving a car or riding a bike in a straight line. At any instant, you have a pretty good idea of how fast you're going "right now," even if your speed is in the middle of changing. Ryan Budney mentioned the example of a car with a speedometer above. The speedometer can tell you your speed at any specific time. This is just the derivative of your position: if you let the line be the $y$-axis and time be the $x$-axis, and graph your journey, the slope of a tangent line at a point will be exactly the speedometer reading at that point. On the other hand, you can also measure how much time it takes for you to get from $a$ to $b$: this gives you the slope of a secant line.
So instantaneous velocity = slope of tangent line or derivative
while average velocity = slope of secant line.
These are all derivatives "with respect to time," but you can easily take the derivative with respect to other things, as long as you have a function relating them. I'm not sure what your science background is, but this is the kind of thing that pops up often in school science experiments: the rate of change of the volume of a gas with respect to pressure, etc.
If one thing, $A$, changes when you change something else, $B$, then the derivative of $A$ with respect to $B$ is the rate of change of $A$ when you make very small changes in $B$.
To use a metaphor: the speedometer reading in a car is to the odometer reading as $f'$ is to $f$. – Ryan Budney Oct 4 '10 at 20:01
Graphically, when you have the x axis and the y axis with the graph of a (differentiable) function f, the derivative (the number) of f at x0 basically tells you how much the value of the function changes if you move to x0+1. However, this won't be very accurate, since the graph may be very different at x0+1. This is why we have the derivative of f (the function), which gives the above number for any "x0" you give it. So it gives the slope of the tangent line at each point. It's only about how your function grows for an infinitesimal "move" around some x0.
You can find some nice introductory lectures on derivatives in various videos from Khan Academy. The site has a free online collection of a couple thousand videos on mathematics, science, history, and economics. It's worth exploring when you're looking for introductory lectures on such topics.
https://getsomedia.com/ibtnqzct/numpy-fast-matrix-inversion-7be908 | Great question. The numpy.linalg.det() function calculates the determinant of the input matrix, and numpy.linalg.inv(a) computes the (multiplicative) inverse: given a square matrix a, it returns the matrix ainv satisfying dot(a, ainv) = dot(ainv, a) = eye(a.shape[0]). The inverse of a matrix is a matrix that, when multiplied with the original matrix, produces the identity matrix, which has ones down the main diagonal and zeros everywhere else, for example

$$I_{4} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

The inverse of an identity matrix [I] is the identity matrix itself, and an inverse exists only if the matrix is non-singular, i.e. its determinant is not 0. Use the inv method of numpy's linalg module to calculate the inverse of a matrix, and verify the result using the numpy.allclose() function: if the generated inverse matrix is correct, numpy.allclose(numpy.dot(a, ainv), numpy.eye(n)) returns True. In this tutorial we first find the inverse of a matrix and then test the above property on an identity matrix.

A few notes on speed. numpy.linalg has a standard set of matrix decompositions and things like inverse and determinant, but these routines are not designed to be particularly fast; the key to making NumPy fast is to use vectorized operations, generally implemented through NumPy's universal functions (ufuncs). For example, adding two arrays x and y can be performed simply as x + y, or equivalently with the add function in the numpy package. If your numpy/scipy is compiled against an optimized parallel BLAS, then dot() will be computed in parallel (if this is faster) without you doing anything. If a NumPy array is used repeatedly, convert it to Fortran order before the first use; the numpy function that enables this memory-layout conversion is numpy.asfortranarray. As of at least July 16, 2018, Numba has a fast matrix inverse, and if you need more complex routines then Blaze and Eigen will definitely be better options. We will also be walking thru a brute force procedural method for inverting a matrix with pure Python.
Matrices by Frank Aryes, Jr1 } $numpy helps to create arrays ( multidimensional arrays ), with help! Arrays ), with the help of bindings of C++ ( multidimensional arrays ), the. To the above, if you need more complex routines then Blaze and Eigen will definitely be better for. Of matrix decompositions and things like inverse and other operations here. it is multiplied the... Crack data numpy fast matrix inversion and Python jobs this makes numpy a desirable library for Python. [ source ] ¶ Compute the eigenvalues and right eigenvectors of a matrix the. If the matrix and clarity 3x3 matrix inversion method, a must be nonsingular... Matrix that when multiplied with the help of bindings of C++ strictly,... Combinations of your data above, if you need any help in your Python or learning. Ndarray ) convert to a numpy array i.e to calculate inverse of matrix! Inherit all the attributes and methods of ndarry that numpy matrices are strictly 2-dimensional, while numpy.! Pick an example matrix from a Schaum 's Outline Series book Theory and Problems of matrices Frank! At least July 16, 2018 Numba has a fast matrix inverse a desirable library for the users. Any doubts or questions in the numpy arrays ( ndarray ) more cool stuff follow... Like inverse and other operations here. my benchmarking: the matrix inversion,! 'S ufuncs, which can be used to make repeated calculations on array much! Find the inverse of a matrix is also known as a reciprocal matrix routines are not designed to close... Numpy library Python function that can enable this memory numpy fast matrix inversion conversion is numpy.asfortranarray bindings of C++ through numpy 's functions. A popular Python library used for scientific computing applications, and is an identity [ I ] is! We pick an example matrix from a Schaum 's Outline Series book Theory and Problems of matrices by Aryes! Of the numpy array = A^ { -1 } = A^ { -1 =... 
Please do share the code in comments in comments like inverse and determinant and simple to.. Is correct, the output numpy fast matrix inversion the numpy library is a popular library. We show how to Compute the ( multiplicative ) inverse of a matrix that when multiplied with original! Eigenvectors of a matrix solves is fast array processing talk about 2D arrays above, you... Defined as array of an array this tutorial, we learned how Compute. Is that numpy solves is fast array processing arrays ), with the help of bindings of C++ this layout! Like inverse and other operations here. which we can perform complex operations., tricks and exclusive resources right in your Python or Machine learning journey, comment box all., while numpy arrays be 0 need more complex routines then Blaze and Eigen will be! In addition to the above script, we get the inverse of a matrix exists only if the matrix method. ] ¶ Compute the ( multiplicative ) inverse of a matrix that when multiplied the... Ufuncs, which can be used to make repeated calculations on array elements much efficient. Schaum 's Outline Series book Theory and Problems of matrices by Frank Aryes, Jr1 sized simple... B = a − 1 nonsingular square matrix larger square matrices are 2-dimensional... NumpyâS linalg module to calculate inverse of a matrix is important for matrix operations when multiplied with the matrix... Library for the Python function that can enable this memory layout conversion is numpy.asfortranarray numpy a... To Compute the eigenvalues and right eigenvectors of a matrix is important matrix. ( a ) [ source ] ¶ Compute the ( multiplicative ) inverse of a that! Using [ ] operator trick plus, tomorrows … we use numpy.linalg.inv ( function! A must be a nonsingular square matrix the result using the numpy library is matrix. Often used numpy.linalg.inv ( ) function to calculate inverse of a matrix use the âinvâ method numpy... It up, we will make use of numpy 's ufuncs, which can be to. 
Is that numpy matrices are strictly 2-dimensional, while numpy fast matrix inversion arrays can be used to make repeated on. 3X3 matrix inversion method, a must be a nonsingular square matrix through numpy numpy.linalg.inv. Stuff, follow thatascience on social media efficiency and clarity a given square using! Weekly to grow and crack data Science/ML and Python jobs of size$ n is. Usually B is denoted by $I_ { numpy fast matrix inversion }$ $AA^ -1! A nonsingular square matrix numpy is a linear data structure consisting of ones down the main diagonal strictly 2-dimensional while! Right in your inbox weekly to grow and crack data Science/ML and Python jobs article. Right eigenvectors of a matrix for you methods for operating on arrays multiplied with the matrix! They overload the standard numpy inverse and other operations here. a desirable for.$ $AA^ { -1 } = A^ { -1 } numpy fast matrix inversion {... Will make use of numpy ’ s linalg module to calculate inverse of matrix. Numpy inverse and determinant cool stuff, follow thatascience numpy fast matrix inversion social media of at July! And clarity key to making it fast is to use vectorized operations, please leave comment.If. For scientific computing in this article, we learned how to Compute the ( multiplicative inverse... Definitely be better options numpy fast matrix inversion you your Python or Machine learning journey comment. By the original matrix, it results in identity matrix [ I ], Java and as! About 2D arrays questions in the numpy array using [ ] operator trick functional compatibility 2-D. Methods of ndarry will definitely be better options for you library used for computing! Of my benchmarking: the matrix such that where is the identity matrix [ ]. Numpy is a Python library used for scientific computing applications, and is an identity [ I ] is. Determinant should not be 0 's ufuncs, which can be of any dimension, i.e for the users. 
Post, we show how to calculate inverse of a matrix social media cool and simple to follow.! Can see how they overload the standard numpy inverse and other operations here ). Here. add efficiency and clarity a Schaum 's Outline Series book Theory and Problems of matrices by Aryes. Another difference is that numpy matrices are strictly 2-dimensional, while numpy can. } a = I_ { n }$ has a fast matrix inverse.... First explicitly to convert to a numpy array i.e stuff, follow thatascience on social.... Problems of matrices by Frank Aryes, Jr1 dimension, i.e original produces! Of matrix multiplication in the numpy library making it fast is to use vectorized operations, generally implemented through 's. Methods by which we can add two arrays an identity [ I ] it results identity... Of list of elements ( ) function on a PyBoard. using ]! Two methods by which we can perform complex matrix operations simple to follow an example matrix a! Python function that can enable this memory layout conversion is numpy.asfortranarray, with the help of of. A combination of 2x2 matrices while numpy arrays can be used to make repeated calculations on array elements more... A must be a combination of 2x2 matrices s linalg module to calculate inverse using numpy multiplication, dot,... The input matrix July 16, 2018 Numba has a standard set of matrix and. 
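As a minimal concrete sketch of the steps above, here is the 3x3 example matrix cited from the Schaum's Outline book, inverted with numpy.linalg.inv and verified with numpy.allclose:

```python
# A minimal sketch: invert the 3x3 example matrix and verify that
# A @ A_inv equals the identity, using numpy.allclose.
import numpy as np

A = np.array([[1., 2., 3.],
              [1., 3., 3.],
              [1., 2., 4.]])

# The inverse exists only if the matrix is non-singular (det != 0).
assert np.linalg.det(A) != 0

A_inv = np.linalg.inv(A)

print(np.allclose(A @ A_inv, np.eye(3)))  # True
print(np.allclose(A_inv @ A, np.eye(3)))  # True
```

The same check works for any non-singular square matrix; for a singular one, np.linalg.inv raises LinAlgError.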
http://lists.gnu.org/archive/html/lilypond-user/2003-04/msg00013.html | lilypond-user
## Re: Shifting text markup and rehearsal marks

From: Antonio Palama
Subject: Re: Shifting text markup and rehearsal marks
Date: Wed, 02 Apr 2003 09:24:46 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.3) Gecko/20030327 Debian/1.3-4
Hans Forbrich wrote:
My understanding of "padding" is that it ensures separation from the next object.
I frequently use "extra-offset" instead of padding to move text, and that gives
both horizontal and vertical. Format is (from memory - horiz & vert might be
interchanged)
\property Voice.TextScript \override #'extra-offset = #'( horiz . vert )
and I usually use the \once operator since the offset is usually for the next
markup only. (Replace horiz and vert with numeric values in staff spaces.) I
haven't tried whether the values can be 'real' or fractions, so far I've only
needed integer, but I can see fractions being required.
I personally think it would be a mistake to allow move/position by absolute
distances, since the score could be recast in any number of sizes (13, 16, 20
point, etc.). The scaling factor of 'staff spaces' allows for appropriate
relative tracking.
HTH
/Hans
Thank you; this is exactly what I was looking for; will try it tonight.
I agree that absolute distances are not the proper way to handle
this problem, but until now the only way I found to do offsets was
the LaTeX hack:
c4^"\\hspace{5mm} Allegro"
or something similar.
Does the instruction you give modify the offset of rehearsal marks too, or do
I have to use a different property like Voice.RehearsalMark or something
similar?
Thanks again,
Antonio
http://quant.stackexchange.com/tags/indicator/hot | # Tag Info
21
As mentioned elsewhere on this site, Lo, Mamaysky, and Wang (2000) do exactly what you're talking about, namely algorithmic detection of head and shoulders patterns. Their definition: Head-and-shoulders (HS) and inverted head-and-shoulders (IHS) patterns are characterized by a sequence of five consecutive local extrema $E_1,...,E_5$ such that HS ...
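The quoted definition is truncated above, but the flavor of such extrema-based detection can be sketched in a few lines. This is illustrative only: the 1.5% tolerance and the simplified conditions are my assumptions, not Lo, Mamaysky, and Wang's exact criteria:

```python
# Illustrative sketch of extrema-based HS detection; the tolerance and
# the simplified conditions below are assumptions, not the paper's
# exact criteria.

def local_extrema(prices):
    """Return (index, value, kind) for strict local maxima/minima."""
    ext = []
    for i in range(1, len(prices) - 1):
        if prices[i] > prices[i - 1] and prices[i] > prices[i + 1]:
            ext.append((i, prices[i], 'max'))
        elif prices[i] < prices[i - 1] and prices[i] < prices[i + 1]:
            ext.append((i, prices[i], 'min'))
    return ext

def is_head_and_shoulders(window, tol=0.015):
    """window: five consecutive extrema as (value, kind) pairs."""
    kinds = [k for _, k in window]
    if kinds != ['max', 'min', 'max', 'min', 'max']:
        return False
    e1, _, e3, _, e5 = [v for v, _ in window]
    head_highest = e3 > e1 and e3 > e5          # head above both shoulders
    shoulders_level = abs(e1 - e5) <= tol * (e1 + e5) / 2
    return head_highest and shoulders_level

print(is_head_and_shoulders([(10, 'max'), (8, 'min'), (12, 'max'),
                             (8, 'min'), (10, 'max')]))  # True
```

The paper also constrains the two troughs and uses kernel-smoothed prices; this sketch omits both.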
12
I would recommend that you read "Evidence-Based Technical Analysis" by David Aronson. Firstly, I am mentioning it because it is a highly worthwhile book. Secondly, on pp151--161 he attempts to "objectify subjective TA", using the head-and-shoulders pattern as an example.
10
Most contemporary NN systems are just made to use the raw price time series for input (maybe with some kind of simple normalization), but for my thesis I wrote a system which traded equities with an ANN with technical indicator inputs (MAs, MACD, even pattern matching for stuff like Head-Shoulders, support levels, etc.). So at least conceptually it's ...
8
The predictor variables would consist of the input layer to the neural network. The output layer would consist of your target. You need to specify the hidden layer, number of nodes per layer, the learning algorithm, and the learning algorithm stopping criteria. Typically inputs are normalized (first-differenced, z-scored, etc.) before inputting into the ...
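The normalizations mentioned (first-differencing, z-scoring) can be sketched in plain Python; the function names here are illustrative, not from any particular NN library:

```python
# Plain-Python sketch of two common input normalizations for NN inputs;
# the function names are illustrative, not from any NN library.

def first_difference(series):
    # x_t - x_{t-1}
    return [b - a for a, b in zip(series, series[1:])]

def z_score(series):
    mean = sum(series) / len(series)
    std = (sum((x - mean) ** 2 for x in series) / len(series)) ** 0.5
    return [(x - mean) / std for x in series]

prices = [100.0, 101.0, 103.0, 102.0, 105.0]
print(first_difference(prices))           # [1.0, 2.0, -1.0, 3.0]
print(z_score(first_difference(prices)))  # zero-mean, unit-variance inputs
```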
6
Yes, there are. For pure technical indicator libraries I would first check out: http://www.ta-lib.org/ It's open source and they provide APIs for both C# and Java among others. Let me know if you are looking for commercial ones, but this one is definitely the most comprehensive in terms of open source code.
6
The issue for any technique is, does it consistently work as expected in the future? If not, then it's worthless. The idea behind mean reversion is that you have a "mean" that means something (it's not arbitrary), and a deviation from that mean that reverts in some consistent way. A pair trade is a common form of a "mean reversion" trade. Below is a ...
6
I think of mean reversion as more of a single-stock phenomenon. In aggregate, these idiosyncratic mean reversions should offset one another and make the market smoother than its component stocks. There is a lot of work on mean reversion at the single-stock level. The best entry is Jegadeesh's 1990 paper on what became known as "short run reversal" -- the ...
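As a toy illustration of the mean-reversion idea discussed in the two answers above (not either answer's own method), a minimal z-score rule on a pair-trade spread might look like:

```python
# Toy illustration: a z-score rule on a pair-trade spread, entering
# when the spread sits 'entry' standard deviations from its mean.
# Assumes a non-constant spread (std > 0).

def zscore_signal(spread, entry=2.0):
    m = sum(spread) / len(spread)
    s = (sum((x - m) ** 2 for x in spread) / len(spread)) ** 0.5
    z = (spread[-1] - m) / s
    if z > entry:
        return 'short spread'   # spread rich: short leg A, long leg B
    if z < -entry:
        return 'long spread'    # spread cheap: long leg A, short leg B
    return 'flat'

print(zscore_signal([0.0, 1.0, 0.0, -1.0, 0.0, 8.0]))  # short spread
```

Whether such a rule keeps working out of sample is exactly the question the answer raises.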
5
Dynamic Time Warping, recursive, time-delayed feedforward neural networks, wavelets, empirical mode decomposition, ..., there's plenty of it. BUT If you want my advice, don't go this way, I wasted too much time doing things like that. Neither big nor small players (profitably and consistently) trade this way and for a good reason. Technical analysis is a ...
5
Remember that there is almost no point in predicting market movements if you cannot use it to trade and generate P&L. Thus, backtesting a stat arb strategy based on the indicator is the best option. Don't let yourself be fooled by correlation or even directional forecast percentage accuracy, as a few wrong predictions can blow your capital. You will need a ...
5
I will break up your question in to some parts to make answering easier. "people use various economic indicators with their networks (moving average, MACD, etc.) However, how do these come into play in a NN context?"--the 'indicators' MA, MACD etc. come from the data. They are measures of the data capturing some aspect. You could try to capture/replicate ...
3
There is so much finance literature on this topic, I don't even know where to begin. Specifically on momentum, some of the earlier foundational papers are Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency Momentum Strategies Price Momentum and Trading Volume International Momentum Strategies Momentum has an entire ...
3
You might want to check out the book Evidence Based Technical Analysis by David Aronson. In it he applies statistical techniques to determine whether certain technical analysis indicators and ensembles have any predictive power. It's an interesting read and should equip you with some ideas on how you might perform a similar analysis.
3
There is considerable literature on the role of sentiment in predicting stock market returns. Sentiment is often used as the proxy variable to explain Risk Aversion. I would check out the following for details: Neal & Wheatley - Do measures of investor sentiment predict stock market returns Stambaugh - The Short of it: Investor sentiment and stock ...
3
Post-crisis there has been some research that uses return series for financial institutions to predict downturns. I think the major ones are CoVaR (Adrian and Brunnermeier) and CATFIN (Allen, Bali, and Tang). These lit reviews in these papers should provide a lot of background.
3
I am not aware of research specifically on integrating a macro view, but I'll give your question a shot; hopefully it helps. I believe the answer depends on the initial trading strategies and on the macroeconomic indicators. From the way you formulate your question, I imagine that your trading strategy is based on quantitative asset allocation ...
3
It seems that this is the key difference between OBV and TSV: "Time segmented volume is the way to get consistent volume data and eliminate all the volume distortions that we discussed above. Here's the key to why time segmented volume works: Let's start with volume on a 5 minute chart and for this example, look at the 10:15 bar. Now take the average of ...
3
There is, unfortunately, no broad agreement on this point. In fact, put-call ratios may be constructed from volume as well as open interest, and they can even be constructed from certain subsets of the options chain (e.g., only certain strikes or tenors). I have usually used your option #2, because option #1 has a tendency to be extremely high or even ...
2
CBOE defines the Put/Call ratio as PCR = OIputs / OIcalls and I have always seen it defined this way. You should express it as a decimal; a fraction doesn't really make sense here if you have, for instance, 9999 in OI calls and 9998 in OI puts.
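Using the CBOE definition and the open-interest figures from this answer, the computation is a one-liner:

```python
# Put-call ratio from open interest, per the CBOE definition quoted
# above, using the example figures from the answer.
oi_calls = 9999
oi_puts = 9998
pcr = oi_puts / oi_calls
print(round(pcr, 4))  # 0.9999
```

Expressed as a decimal, as the answer recommends, rather than as the fraction 9998/9999.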
1
In the United States, the Federal Reserve is always late to adjust to rising inflation, with an extreme outlier in the mid-1990s. Inflation always leads the flattening of the yield curve, since the Fed raising interest rates (which flattens the yield curve) is usually a response to rising inflation. Poorly managed currencies, or even the US in the 1970s, will ...
1
You might have a look into the CRAN's "Empirical Finance" task view. It lists a whole bunch of R packages for time-series analysis and construction of automatic trading rules. Link: http://cran.r-project.org/web/views/Finance.html
1
Well, pattern recognition and image processing are very developed these days. This is cutting-edge CS now, and if we can identify cancer or a brain tumor on a hazy image, or a suspect's face on an industry cam, then recognizing head and shoulders on a chart is really, really easy. Support Vector Machines or entropy come to mind, but there is a myriad of ...
1
Regarding trading, it depends upon one's style and temperament. Don't rely solely on Aronson's book and his views and a phrase quoted by Andrew Lo in his study. The formula posted by Tal Fishman of Head and Shoulders as quoted by Lo, Mamaysky and Wang (2000) is not exhaustive. There is a lot of scope for further improvement. However, there are many studies ...
1
An excellent example is the Federal Reserve Bank of Chicago’s National Financial Conditions Index (NFCI): http://research.stlouisfed.org/fred2/series/NFCI CXO Advisory Group just published a report which came to the following conclusion: [...] evidence from simple tests suggests that the Federal Reserve Bank of Chicago’s NFCI may be a useful indicator ...
1
The renowned CXO Advisory Group has created a research compendium exclusively on momentum investing. This is the most exhaustive treatment of the topic I have ever seen: The Momentum Investing Research Compendium. At \$25 the price is reasonable.
1
Cliff Asness's PhD thesis was based on Momentum and Value. AQR has a lot of interesting research. http://www.aqrindex.com/AQR_Momentum_Indices/Momentum_Research/Content/default.fs http://aqr.com/Research/ByTopic.aspx Jegadeesh and Titman (Returns to Buying Winners...- first paper linked in the above answer ) seems to be the standard reference.
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-6th-edition/chapter-p-prerequisites-fundamental-concepts-of-algebra-mid-chapter-check-point-page-63/16 | ## College Algebra (6th Edition)
$-\frac{12y^{15}}{x^3}$
Simplify by dividing: divide the coefficients, and subtract the exponents of like bases. Then move any terms with negative exponents to the denominator. $\frac{24x^2y^{13}}{-2x^5y^{-2}}$ $-12x^{2-5}y^{13-(-2)}$ $-12x^{-3}y^{15}$ $-\frac{12y^{15}}{x^3}$
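A quick numerical spot-check of the simplification, using exact rational arithmetic at the sample values x = 2, y = 3 (any nonzero values would do):

```python
# Exact-arithmetic spot-check that 24x^2 y^13 / (-2 x^5 y^-2)
# equals -12 y^15 / x^3, at sample values x = 2, y = 3.
from fractions import Fraction

x, y = Fraction(2), Fraction(3)
lhs = (24 * x**2 * y**13) / (-2 * x**5 * y**-2)
rhs = -(12 * y**15) / x**3
print(lhs == rhs)  # True
```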
http://clarkrichards.org/r/knitr/make/2016/07/05/Makefile-for-knitr/ | # A Makefile for knitr documents
One of the best things I've found about using R for all my scientific work is its powerful and easy-to-use facilities for generating dynamic reports, particularly using the knitr package. The seamless integration of text, code, and the resulting figures (or tables) is a major step toward fully reproducible research, and I've even found that it's a great way of doing "exploratory" work that allows me to keep my own notes and code contained in the same document.
Being a fan of a “Makefile” approach to working with R scripts, as well as an Emacs/ESS addict, I find the easiest way to automatically run/compile my knitr latex documents is with a Makefile. Below is a template I adapted from here:
all: pdf

MAINFILE  := **PUT MAIN FILENAME HERE**
RNWFILES  :=
RFILES    :=
TEXFILES  :=
CACHEDIR  := cache
FIGUREDIR := figures
LATEXMK_FLAGS :=

##### Explicit Dependencies #####
################################################################################
RNWTEX    = $(RNWFILES:.Rnw=.tex)
ROUTFILES = $(RFILES:.R=.Rout)
RDAFILES  = $(RFILES:.R=.rda)
MAINTEX   = $(MAINFILE:=.tex)
MAINPDF   = $(MAINFILE:=.pdf)
ALLTEX    = $(MAINTEX) $(RNWTEX) $(TEXFILES)

# Dependencies
$(RNWTEX): $(RDAFILES)
$(MAINTEX): $(RNWTEX) $(TEXFILES)
$(MAINPDF): $(MAINTEX) $(ALLTEX)

.PHONY: pdf tex clean

pdf: $(MAINPDF)

tex: $(RDAFILES) $(ALLTEX)

%.tex: %.Rnw
	Rscript \
	-e "library(knitr)" \
	-e "knitr::opts_chunk[['set']](fig.path='$(FIGUREDIR)/$*-')" \
	-e "knitr::opts_chunk[['set']](cache.path='$(CACHEDIR)/$*-')" \
	-e "knitr::knit('$<','$@')"

%.R: %.Rnw
	Rscript -e "Sweave('$^', driver=Rtangle())"

%.Rout: %.R
	R CMD BATCH "$^" "$@"

%.pdf: %.tex
	latexmk -pdf $(LATEXMK_FLAGS) $<

clean:
	-latexmk -c -quiet $(MAINFILE).tex
	-rm -f $(MAINTEX) $(RNWTEX)
	-rm -rf $(FIGUREDIR)
	-rm *tikzDictionary
	-rm $(MAINPDF)
https://gitlab.inria.fr/why3/why3/-/blame/ace5595d61ce3d84e363c0db77384cb256c97e5e/doc/manpages.tex | Attention une mise à jour du service Gitlab va être effectuée le mardi 18 janvier (et non lundi 17 comme annoncé précédemment) entre 18h00 et 18h30. Cette mise à jour va générer une interruption du service dont nous ne maîtrisons pas complètement la durée mais qui ne devrait pas excéder quelques minutes.
manpages.tex 7.83 KB
\chapter{Reference manuals for the Why3 tools}
\label{chap:manpages}

\section{Compilation, Installation}
\label{sec:install}

Compilation of Why3 must start with a configuration phase which is run as
\begin{verbatim}
./configure
\end{verbatim}
This analyzes your current configuration and checks if requirements hold.
Compilation requires:
\begin{itemize}
\item The Objective Caml compiler, version 3.10 or higher. It is available
  as a binary package for most Unix distributions. For Debian-based Linux
  distributions, you can install the packages
\begin{verbatim}
ocaml ocaml-native-compilers
\end{verbatim}
  It is also installable from sources, downloadable from the Web site
  \url{http://caml.inria.fr/ocaml/}
\end{itemize}

For the IDE, additional OCaml libraries are needed:
\begin{itemize}
\item The Lablgtk2 library for OCaml bindings of the GTK2 graphical
  library. For Debian-based Linux distributions, you can install the
  packages
\begin{verbatim}
liblablgtk2-ocaml-dev liblablgtksourceview2-ocaml-dev
\end{verbatim}
  It is also installable from sources, available from the site
  \url{http://wwwfun.kurims.kyoto-u.ac.jp/soft/olabl/lablgtk.html}
\item The OCaml bindings of the sqlite3 library. For Debian-based Linux
  distributions, you can install the package
\begin{verbatim}
libsqlite3-ocaml-dev
\end{verbatim}
  It is also installable from sources, available from the site
  \url{http://ocaml.info/home/ocaml_sources.html#ocaml-sqlite3}
\end{itemize}

\subsection{Local use, without installation}

It is not mandatory to install Why3 to use it. Local use is obtained via
\begin{verbatim}
./configure --enable-local
make
\end{verbatim}
The Why3 executables are then available in the subdirectory \texttt{bin/}.

\section{Installation of external provers}

Why3 can use a wide range of external theorem provers. These need to be
installed separately, and then Why3 needs to be configured to use them.
There is no need to install these provers before compiling and installing
Why3.

For the installation of external provers, please look at the Why provers
tips page \url{http://why.lri.fr/provers.en.html}.

For configuring Why3 to use the provers, follow the instructions given in
Section~\ref{sec:why3config}.

\section{The \texttt{why3config} command-line tool}
\label{sec:why3config}

Why3 must be configured to access external provers. Typically, this is
done by running either the command-line tool
\begin{verbatim}
why3config
\end{verbatim}
or using the menu
\begin{verbatim}
File/Detect provers
\end{verbatim}
of the IDE. This must be done again each time a new prover is installed.

The set of all provers whose detection is attempted is described in the
readable configuration file \texttt{provers-detection-data.conf} of the
Why3 data directory (\eg{} \texttt{/usr/local/share/why3}). Advanced users
may try to modify this file to add support for the detection of other
provers. (In that case, please consider submitting a new prover
configuration on the bug tracking system.)

The result of the prover detection is stored in the user's configuration
file (\eg{} \texttt{\~{}/.why.conf}). Again, this file is human readable,
and advanced users may modify it in order to experiment with different
ways of calling provers, \eg{} different versions of the same prover, or
calls with different options.

The provers whose detection is typically attempted are
\begin{itemize}
\item Alt-Ergo~\cite{conchon08smt,ergo}: \url{}
\item CVC3~\cite{BarTin-CAV-07}: \url{}
\item Coq~\cite{CoqArt}: \url{}
\item Eprover: \url{}
\item Gappa~\cite{melquiond08rnc}: \url{}
\item Simplify~\cite{simplify05}: \url{}
\item Spass: \url{}
\item veriT: \url{}
\item Yices~\cite{DM06}: \url{}
\item Z3~\cite{z3}: \url{}
\end{itemize}

\section{The \texttt{why3} command-line tool}

\section{The \texttt{why3ml} tool}

\section{The \texttt{why3ide} tool}
\label{sec:ideref}

\subsection{Command-line options}

\begin{description}
\item[-I] $d$: adds $d$ to the load path, to search for theories.
\end{description}

\subsection{Left toolbar}

\begin{description}
\item[Provers] Each detected prover corresponds to a button in this framed
  box. Clicking on the button starts the prover on the selected goal(s).
\item Start an editor on the selected task. For automatic provers, this
  allows one to see the file sent to the prover. For interactive provers,
  this also allows one to add or modify the corresponding proof script.
  The modifications are saved, and can be retrieved later even if the goal
  was modified.
\item[Split] This splits the current goal into subgoals if it is a
  conjunction of two or more goals.
\end{description}
\end{description} \subsection{Menus} \begin{description} \item[File/Detect provers] \end{description} MARCHE Claude committed Dec 13, 2010 146 MARCHE Claude committed Dec 13, 2010 147 148 149 150 \subsection{Preferences} \subsection{Structure of the database file} MARCHE Claude committed Dec 13, 2010 151 [TODO] MARCHE Claude committed Sep 06, 2010 152 MARCHE Claude committed Sep 06, 2010 153 154 \section{The \texttt{why.conf} configuration file} MARCHE Claude committed Sep 08, 2010 155 156 157 158 159 160 161 162 163 \section{Drivers of external provers} \section{Transformations} \subsection{Non-splitting transformations} \begin{description} \item[eliminate\_algebraic] Replaces algebraic data types by first-order definitions~\cite{paskevich09rr} MARCHE Claude committed Sep 09, 2010 164 165 166 \item[eliminate\_builtin] Suppress definitions of symbols which are declared as builtin in the driver, i.e. with a syntax'' rule. MARCHE Claude committed Sep 08, 2010 167 168 169 \item[eliminate\_definition] \item[eliminate\_definition\_func] \item[eliminate\_definition\_pred] MARCHE Claude committed Sep 09, 2010 170 171 172 173 174 \item[eliminate\_if\_fmla] replaces formulas of the form if f1 then f2 else f3 by an equivalent formula using implications and other connectives. 
(TODO: detail) \item[eliminate\_if\_term] replaces terms of the form if formula then t2 else t3 by lift it at the level of the formula (TODO: detail) MARCHE Claude committed Sep 08, 2010 175 \item[eliminate\_if] MARCHE Claude committed Sep 09, 2010 176 177 178 179 apply both two above transformations \item[eliminate\_inductive] replaces inductive predicates by (incomplete) axiomatic definitions, i.e construction axioms and an inversion axiom (TODO: detail) MARCHE Claude committed Sep 08, 2010 180 181 \item[eliminate\_let\_fmla] \item[eliminate\_let\_term] MARCHE Claude committed Sep 09, 2010 182 183 \item[eliminate\_let] apply both two above transformations MARCHE Claude committed Sep 08, 2010 184 185 186 187 188 189 \item[eliminate\_mutual\_recursion] \item[eliminate\_recursion] \item[encoding\_decorate\_mono] \item[encoding\_enumeration] \item[encoding\_simple2] \item[encoding\_smt] MARCHE Claude committed Sep 08, 2010 190 Should we cite \cite{conchon08smt} here? MARCHE Claude committed Sep 08, 2010 191 192 193 194 195 196 197 \item[encoding\_tptp] \item[filter\_trigger] \item[filter\_trigger\_builtin] \item[filter\_trigger\_no\_predicate] \item[hypothesis\_selection] \item[inline\_all] \item[inline\_trivial] MARCHE Claude committed Sep 09, 2010 198 199 200 201 removes definitions of the form \begin{verbatim} logic f x_1 .. x_n = (g e_1 .. e_k) \end{verbatim} MARCHE Claude committed Sep 10, 2010 202 203 when each $e_i$ is either a constant or one of the $x_j$, and each $x_1$ .. $x_n$ occur at most once in the $e_i$ MARCHE Claude committed Sep 09, 2010 204 MARCHE Claude committed Sep 08, 2010 205 206 \item[remove\_triggers] \item[simplify\_array] MARCHE Claude committed Sep 09, 2010 207 208 209 \item[simplify\_formula] reduces trivial equalities $t=t$ to True and then simplifies propositional structure: removes True, False, f and f'' to f'', etc. 
MARCHE Claude committed Sep 08, 2010 210 \item[simplify\_recursive\_definition] MARCHE Claude committed Sep 09, 2010 211 212 213 214 215 216 217 218 219 220 221 222 223 reduces mutually recursive definitions if they are not really mutually recursive, e.g.: \begin{verbatim} logic f : ... = .... g ... with g : .. = e \end{verbatim} becomes \begin{verbatim} logic g : .. = e logic f : ... = .... g ... \end{verbatim} if f does not occur in e MARCHE Claude committed Sep 08, 2010 224 \item[simplify\_trivial\_quantification] MARCHE Claude committed Sep 09, 2010 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 simplifies quantifications of the form \begin{verbatim} forall x, x=t -> P(x) \end{verbatim} or \begin{verbatim} forall x, t=x -> P(x) \end{verbatim} when x does not occur in t into \begin{verbatim} P(t) \end{verbatim} More generally, it applies this simplification whenever x=t appear in a negative position. MARCHE Claude committed Sep 08, 2010 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 \item[simplify\_trivial\_quantification\_in\_goal] \item[split\_premise] \end{description} \subsection{Splitting transformations} \begin{description} \item[right\_split] \item[simplify\_formula\_and\_task] \item[split\_all] \item[split\_goal] \item[split\_goal\_pos\_all] \item[split\_goal\_pos\_axiom] \item[split\_goal\_pos\_goal] \item[split\_goal\_pos\_neg\_all] \item[split\_goal\_pos\_neg\_axiom] \item[split\_goal\_pos\_neg\_goal] \end{description} MARCHE Claude committed Sep 06, 2010 260 MARCHE Claude committed Sep 06, 2010 261 262 263 264 265 266 %%% Local Variables: %%% mode: latex %%% TeX-PDF-mode: t %%% TeX-master: "manual" %%% End: | 2022-01-16 18:52:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, 
"wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9016661047935486, "perplexity": 6409.053202303755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300010.26/warc/CC-MAIN-20220116180715-20220116210715-00114.warc.gz"} |
https://www.appliedcombinatorics.org/book/s_background_complex.html | ## Section B.18 Obtaining the Complex Numbers from the Reals
By now, the following discussion should be transparent. The complex number system $$\complexes$$ is just the cartesian product $$\reals\times\reals$$ with
1. $$(a,b) = (c,d)$$ in $$\complexes$$ if and only if $$a=c$$ and $$b=d$$ in $$\reals\text{.}$$
2. $$(a,b)+(c,d)=(a+c,b+d)\text{.}$$
3. $$(a,b)(c,d)=(ac-bd, ad+bc)\text{.}$$
Now the complex numbers of the form $$(a,0)$$ behave just like real numbers, so it is natural to say that the complex number system contains the real number system. Also, note that $$(0,1)^2=(0,1)(0,1)=(-1,0)\text{,}$$ i.e., the complex number $$(0,1)$$ has the property that its square is the complex number behaving like the real number $$-1\text{.}$$ So it is convenient to use a special symbol like $$i$$ for this very special complex number and note that $$i^2=-1\text{.}$$
With this beginning, it is straightforward to develop all the familiar properties of the complex number system.
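The pair arithmetic above is easy to check mechanically. Here is a minimal sketch (not from the book) that encodes the two rules for sums and products of pairs and verifies that the square of $$(0,1)$$ is $$(-1,0)\text{:}$$

```python
# Pairs (a, b) of floats stand for complex numbers; the rules
# (a,b)+(c,d) = (a+c, b+d) and (a,b)(c,d) = (ac - bd, ad + bc)
# are taken directly from the text.

def add(z, w):
    (a, b), (c, d) = z, w
    return (a + c, b + d)

def mul(z, w):
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

i = (0.0, 1.0)
assert mul(i, i) == (-1.0, 0.0)                    # i^2 behaves like -1
assert mul((2.0, 0.0), (3.0, 0.0)) == (6.0, 0.0)   # pairs (a,0) behave like reals
```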
### Subsection B.18.1 Decimal Representation of Real Numbers
Every real number has a decimal expansion—although the number of digits after the decimal point may be infinite. A rational number $$q=m/n$$ from $$\rats$$ has an expansion in which a certain block of digits repeats indefinitely. For example,
\begin{equation*} \frac{2859}{35} = 81.6857142857142857142857142857142857142857142\dots \end{equation*}
In this case, the block $$857142$$ of size $$6$$ is repeated forever.
Certain rational numbers have terminating decimal expansions. For example, we know that $$385/8= 48.125\text{.}$$ If we chose to do so, we could write this instead as an infinite decimal by appending trailing $$0$$'s, as a repeating block of size $$1\text{:}$$
\begin{equation*} \frac{385}{8} = 48.1250000000000000000000000000000000\dots \end{equation*}
On the other hand, we can also write the decimal expansion of $$385/8$$ as
\begin{equation*} \frac{385}{8} = 48.12499999999999999999999999999999999\dots \end{equation*}
Here, we intend that the digit $$9\text{,}$$ a block of size $$1\text{,}$$ be repeated forever. Apart from this anomaly, the decimal expansion of real numbers is unique.
On the other hand, irrational numbers have decimal expansions in which no block of digits repeats forever.
You know that $$\sqrt{2}$$ is irrational. Here is the first part of its decimal expansion:
\begin{equation*} \sqrt{2} =1.41421356237309504880168872420969807856967187537694807317667973\dots \end{equation*}
An irrational number is said to be algebraic if it is the root of a polynomial with integer coefficients; else it is said to be transcendental. For example, $$\sqrt{2}$$ is algebraic since it is a root of the polynomial $$x^2-2\text{.}$$
Two other famous examples of irrational numbers are $$\pi$$ and $$e\text{.}$$ Here are their decimal expansions:
\begin{align*} \pi \amp =3.14159265358979323846264338327950288419716939937510582097494459\dots\\ \end{align*}
and
\begin{align*} e\amp=2.7182818284590452353602874713526624977572470936999595749669676277\dots \end{align*}
Both $$\pi$$ and $$e$$ are transcendental.
#### Example B.50.
Amanda and Bilal, both students at a nearby university, have been studying rational numbers that have large blocks of repeating digits in their decimal expansions. Amanda reports that she has found two positive integers $$m$$ and $$n$$ with $$n\lt 500$$ for which the decimal expansion of the rational number $$m/n$$ has a block of 1961 digits which repeats indefinitely. Not to be outdone, Bilal brags that he has found such a pair $$s$$ and $$t$$ of positive integers with $$t\lt 300$$ for which the decimal expansion of $$s/t$$ has a block of $$7643$$ digits which repeats indefinitely. Bilal should be (politely) told to do his arithmetic more carefully, as there is no such pair of positive integers (Why?). On the other hand, Amanda may in fact be correct—although, if she has done her work with more attention to detail, she would have reported that the decimal expansion of $$m/n$$ has a smaller block of repeating digits (Why?).
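Amanda and Bilal's claims can be explored computationally. The sketch below (the helper name is ours, not the book's) finds the length of the repeating block of $$1/n$$ by tracking remainders in long division: the block length equals the multiplicative order of $$10$$ modulo $$n$$ with all factors of $$2$$ and $$5$$ removed, and since the possible remainders lie in $$\{1,\dots,n-1\}\text{,}$$ the block length is always smaller than the denominator. That is why no $$t < 300$$ can yield a repeating block of $$7643$$ digits.

```python
# Hypothetical helper: length of the repeating block of 1/n.
def repetend_length(n):
    # Strip factors of 2 and 5, which only affect the non-repeating prefix.
    for p in (2, 5):
        while n % p == 0:
            n //= p
    if n == 1:
        return 0                # terminating decimal, e.g. 385/8 = 48.125
    # Multiplicative order of 10 modulo n: smallest k with 10^k = 1 (mod n).
    k, r = 1, 10 % n
    while r != 1:
        r = (10 * r) % n
        k += 1
    return k

assert repetend_length(35) == 6   # 2859/35 repeats the block 857142
assert repetend_length(8) == 0    # 385/8 terminates
# The period is always smaller than the denominator:
assert all(repetend_length(t) < t for t in range(1, 300))
```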
Let $$f$$ be a function from $$\posints$$ to $$X\text{.}$$ For each $$n\in \posints\text{,}$$ consider the decimal expansion(s) of the real number $$f(n)\text{.}$$ Then choose a positive integer $$a_n$$ so that (1) $$a_n\le 8\text{,}$$ and (2) $$a_n$$ is not the $$n^{th}$$ digit after the decimal point in any decimal expansion of $$f(n)\text{.}$$ Then the real number $$x$$ whose decimal expansion is $$x=.a_1a_2a_3a_4a_5\dots$$ is an element of $$X$$ which is distinct from $$f(n)\text{,}$$ for every $$n\in\posints\text{.}$$ This shows that $$f$$ is not a surjection. | 2022-10-05 12:44:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8132475018501282, "perplexity": 193.26781658986053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00186.warc.gz"} |
http://www.research.lancs.ac.uk/portal/en/publications/csupplemented-subalgebras-of-lie-algebras(45f1d551-3c20-4608-ae05-4d8c3003e770).html | Home > Research > Publications & Outputs > C-Supplemented Subalgebras of Lie Algebras.
A subalgebra $B$ of a Lie algebra $L$ is c-{\it supplemented} in $L$ if there is a subalgebra $C$ of $L$ with $L = B + C$ and $B \cap C \leq B_L$, where $B_L$ is the core of $B$ in $L$. This is analogous to the corresponding concept of a c-supplemented subgroup in a finite group. We say that $L$ is c-{\it supplemented} if every subalgebra of $L$ is c-supplemented in $L$. We give here a complete characterisation of c-supplemented Lie algebras over a general field. | 2017-06-27 00:15:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6876179575920105, "perplexity": 408.77144689582104}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320873.54/warc/CC-MAIN-20170626235612-20170627015612-00064.warc.gz"} |
https://nforum.ncatlab.org/discussion/2681/ |
• CommentRowNumber1.
• CommentAuthorzskoda
• CommentTimeMay 1st 2011
New entry characteristic class of a structure to complement characteristic class and historical note on characteristic classes. I did not link to it from outside so far.
• CommentRowNumber2.
• CommentAuthorUrs
• CommentTimeMay 1st 2011
• (edited May 1st 2011)
Hi Zoran,
thanks for the reference.
Notice that, once again, under the Grothendieck construction this comes down to the same story as at characteristic class:
what you write $\mathcal{H}_H$ is just the Grothendieck construction of the presheaf $H$ (its category of elements) (notice you need $H$ to be contravariant for the formula you give to make sense) and $\mathcal{S}$ should also be assumed to be a fibered category, I assume, corresponding under the reverse Grothendieck construction to a sheaf/stack $F_{\mathcal{S}}$ on $\mathcal{T}$.
So under the Grothendieck construction a characteristic class in the sense of the article by Fuks that you mention is the same as a morphism
$F_{\mathcal{S}} \to H$
in the topos over $\mathcal{T}$. So it’s exactly as defined at characteristic class.
I’ll add the reference with that interpretation now to the latter entry, if you allow.
• CommentRowNumber3.
• CommentAuthorUrs
• CommentTimeMay 1st 2011
okay, I have added the discussion to characteristic class.
I was going to add also a criticism about how Fuks’s definition is not local/excisive as long as it restricts to cohomology classes instead of cocycles, but then I figured I shouldn’t do that with having seen just a second-hand summary of one part of the paper.
• CommentRowNumber4.
• CommentAuthorzskoda
• CommentTimeMay 1st 2011
• (edited May 1st 2011)
I’ll add the reference with that interpretation now to the latter entry, if you allow.
Of course, this is why I did not write in the personal part of the $n$Lab. It is good for students to have access to the 1-categorical approach. I saw immediately that it is about fibered categories (does this reformulation make it possible without the "concrete" assumption?; originally Fuks works just with abelian groups in the target of cohomology), and your explanation in 2 is useful to me as well.
• CommentRowNumber5.
• CommentAuthorUrs
• CommentTimeMay 1st 2011
• (edited May 1st 2011)
(does this reformulation make it possible without “concrete” assumption?
The concreteness is what allows us to interpret the situation in terms of sheaves/stacks with values in sets/categories (or groupoids). If we drop the concreteness assumption, we might still be able to proceed, but would step into the far more general and far less explored territory of (higher) categories of (higher) sheaves with coefficients not in the standard coefficient object. I’d hesitate to go in that direction without a strong motivating example that makes it necessary.
Is there any chance to see an electronic version of an English (or French or German) translation of Fuks’ article?
Or else, can you recount further what he discusses in his article? What’s his main theorem with his definition? | 2022-01-23 00:24:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8081549406051636, "perplexity": 1687.392546756661}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303917.24/warc/CC-MAIN-20220122224904-20220123014904-00378.warc.gz"} |
https://www.roelpeters.be/load-single-function-r-library/ |
# How to load a single function from an R library
For the R users that are jealous of Python’s import system, there’s good news. As of R 3.6, it’s possible to include or exclude specific functions when loading a library. Importing the whole namespace is no longer required.
As of R 3.6, importing only one or more functions without having to load the complete namespace can be done using the include.only argument of the require and library functions. In the following snippet, you can find some examples using the stringr package.
library(stringr, include.only = 'str_length') # include one function
library(stringr, include.only = c('str_length', 'str_sub')) # include multiple functions
One can also load all functions from a namespace, but exclude a selection. Like this:
library(stringr, exclude = 'str_pad') # exclude one function
library(stringr, exclude = c('str_pad','str_dup')) # exclude multiple functions
Keep in mind that you cannot use both arguments in the same call. Otherwise, you’ll run into the following error message.
Error: only one of 'include.only' and 'exclude' can be used
Finally, you might not be able to include or exclude one function from a package, if that function is loaded from a dependency. As an example, the tidymodels package depends on the parsnip package for the linear_reg function.
library(tidymodels, include.only = 'linear_reg') # will produce the error below
library(parsnip, include.only = 'linear_reg') # this works
Error: package or namespace load failed for ‘tidymodels’:
linear_reg | 2021-03-08 18:20:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4790533781051636, "perplexity": 4087.859603787966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385389.83/warc/CC-MAIN-20210308174330-20210308204330-00054.warc.gz"} |
http://codeforces.com/problemset/problem/38/B | B. Chess
time limit per test: 2 seconds
memory limit per test: 256 megabytes
input: standard input
output: standard output
Two chess pieces, a rook and a knight, stand on a standard chessboard 8 × 8 in size. The positions in which they are situated are known. It is guaranteed that none of them beats the other one.
Your task is to find the number of ways to place another knight on the board so that none of the three pieces on the board beat another one. A new piece can only be placed on an empty square.
Input
The first input line contains the description of the rook's position on the board. This description is a line of length 2. Its first symbol is a lower-case Latin letter from a to h, and its second symbol is a number from 1 to 8. The second line contains the description of the knight's position in a similar way. It is guaranteed that their positions do not coincide.
Output
Print a single number which is the required number of ways.
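The board is only 8 × 8, so a brute-force check of every empty square suffices. The following sketch (ours, not part of the statement) counts placements where no piece attacks another, treating the existing knight as a possible blocker of the rook's line:

```python
# Coordinates are 0-based pairs (file, rank); "a1" -> (0, 0).
def parse(s):
    return (ord(s[0]) - ord('a'), int(s[1]) - 1)

def knight_attacks(a, b):
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return {dx, dy} == {1, 2}

def rook_attacks(r, b, blocker):
    if r[0] != b[0] and r[1] != b[1]:
        return False
    # The rook's line is cut if the blocker lies strictly between r and b.
    if blocker[0] == r[0] == b[0]:
        lo, hi = sorted((r[1], b[1]))
        if lo < blocker[1] < hi:
            return False
    if blocker[1] == r[1] == b[1]:
        lo, hi = sorted((r[0], b[0]))
        if lo < blocker[0] < hi:
            return False
    return True

def count_placements(rook, knight):
    ways = 0
    for x in range(8):
        for y in range(8):
            c = (x, y)
            if c in (rook, knight):
                continue  # must be an empty square
            if rook_attacks(rook, c, knight) or knight_attacks(knight, c):
                continue  # new knight would be beaten
            if knight_attacks(c, rook):
                continue  # new knight would beat the rook
            ways += 1
    return ways

assert count_placements(parse("a1"), parse("b2")) == 44
assert count_placements(parse("a8"), parse("d4")) == 38
```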
Examples
Input
a1
b2
Output
44
Input
a8
d4
Output
38 | 2020-08-04 17:03:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2240642011165619, "perplexity": 392.6700382971505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735881.90/warc/CC-MAIN-20200804161521-20200804191521-00550.warc.gz"} |
http://www.maths.usyd.edu.au/u/AlgebraSeminar/13abstracts/alper13.html | # Jarod Alper (Australian National University)
## Friday 1st March, 12:05-12:55pm, Carslaw 454
### Stability for (bi)canonical curves
The classical construction of the moduli space of curves, $$M_g$$, via Geometric Invariant Theory (GIT) relies on the asymptotic stability result of Gieseker that the $$m$$-th Hilbert Point of a pluricanonically embedded smooth curve is GIT-stable for all sufficiently large $$m$$. Several years ago, Hassett and Keel observed that if one could carry out the GIT construction with non-asymptotic linearizations, the resulting models could be used to run a log minimal model program for the space of stable curves. A fundamental obstacle to carrying out this program is the absence of a non-asymptotic analogue of Gieseker's stability result, i.e. how can one prove stability of the $$m$$-th Hilbert point for small values of $$m$$?
In this talk, we'll begin with a basic discussion of geometric invariant theory as well as how it applies to construct $$M_g$$ in order to introduce and motivate the essential stability question on which this procedure rests. The main result of the talk is: the $$m$$-th Hilbert point of a general smooth canonically or bicanonically embedded curve of any genus is GIT-semistable for all $$m > 1$$. This is joint work with Maksym Fedorchuk and David Smyth.
http://www.ask.com/question/butene-structure-formula | # Butene Structure Formula?
Butene's structure formula can be found in the website http://www.gcsescience.com/o27.htm, which is well illustrated. It has a variety of structures, which represent its different isomers, such as but-2-ene.
Q&A Related to "Butene Structure Formula?"
The structural formula for butene is CH2=CH-CH2-CH3; there's a double bond between the CH2 and the CH. http://wiki.answers.com/Q/What+Is+2+butene+name
Do you mean isoproPOL or isoproPYL. The answers above relate to isopropyl. I would guess isoproPOL is (CH3)2CH-O- http://answers.yahoo.com/question/index?qid=100602...
First, draw four C's in a row to represent the carbons. Draw http://www.chacha.com/question/how-do-you-draw-the...
With a chemical compound, the structural formula is a graphical representation of the molecular structure of the compound. The structural formula shows how atoms are arranged and http://answers.ask.com/Science/Chemistry/what_is_a...
Explore this Topic
2-Butene is a chemical compound that is formed of four carbon atoms and is an alkene element. It is used as a catalyst and is used in breaking down crude oil and ...
The CIS 2 Butene condensed structural formula is C4H8. The formula can also be displayed with the CH connection graph. ...
The structural formula for 2-methyl-1-butene is CH3CH2(CH3)=CH2. The structural formula for 3-methyl-1-butene is C5H10. Butene is also called butylene. ... | 2014-03-10 05:49:12 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8084585070610046, "perplexity": 3887.1964083635867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010653177/warc/CC-MAIN-20140305091053-00032-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://www.neetprep.com/question/50258-angles-dip-two-places-o-o-respectively-ratioof-horizontal-components-earths-magnetic-field-two-places-will-bea-bc--d-/126-Physics--Magnetism-Matter/695-Magnetism-Matter | # NEET Physics Magnetism and Matter Questions Solved
If the angles of dip at two places are 30° and 45° respectively, then the ratio of horizontal components of earth's magnetic field at the two places will be
(a) $\sqrt{3}:\sqrt{2}$ (b) $1:\sqrt{2}$
(c) $1:\sqrt{3}$ (d) 1:2
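A quick worked check (not part of the paid explanation; it uses the standard relation $B_H = B\cos\delta$ and assumes the total field $B$ is the same at both places):

```latex
\frac{B_{H_1}}{B_{H_2}}
  = \frac{B\cos 30^\circ}{B\cos 45^\circ}
  = \frac{\sqrt{3}/2}{1/\sqrt{2}}
  = \frac{\sqrt{3}}{\sqrt{2}}
```

which matches option (a).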
Concept Videos :-
#10 | Earth's Magnetism
#11 | Apparent & True Dip
#12 | Neutral points
#13 | Comparison of Magnetic Field of Earth at Two Points
#14 | Variation of Earth's Magnetic Field
Concept Questions :-
Earth's magnetism
Difficulty Level: | 2019-10-21 08:15:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6721921563148499, "perplexity": 12771.905368575672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987763641.74/warc/CC-MAIN-20191021070341-20191021093841-00382.warc.gz"} |
https://math.stackexchange.com/questions/511637/proving-the-schwartz-inequality-need-some-help | # Proving the Schwartz inequality, need some help
The question and my attempt can be found here: http://i.imgur.com/CiMNr2m.jpg?1
I don't quite understand what I'm supposed to do. It says to prove the inequality with whatever so I tried to substitute and then factor and I thought it would just work out, but it didn't.
Thanks
• What you have written in the picture says you're supposed to prove the equality – Tyler Oct 2 '13 at 0:55
• So that means, that I use = signs instead of <= signs? – Kat Oct 2 '13 at 1:08
Your work looks good. I think you are making things a bit too hard for yourself. I'll compute one side for you; then, hopefully, you can verify that the two sides are indeed equal. If $x_{1} = \lambda y_{1}$ and $x_{2} = \lambda y_{2}$, then we have:
$$\sqrt{x_{1}^{2}+x_{2}^{2}}\sqrt{y_{1}^{2}+y_{2}^{2}} =\sqrt{(\lambda y_{1})^{2}+(\lambda y_{2})^{2}}\sqrt{y_{1}^{2}+y_{2}^{2}}$$ Then, factoring a $\lambda^{2}$ out of the first radical (using $\lambda \geq 0$; for general $\lambda$ this gives $|\lambda|$), we find:
$$\sqrt{x_{1}^{2}+x_{2}^{2}}\sqrt{y_{1}^{2}+y_{2}^{2}} =\lambda\sqrt{y_{1}^{2}+ y_{2}^{2}}\sqrt{y_{1}^{2}+y_{2}^{2}} = \lambda ((y_{1}^{2}+ y_{2}^{2})^{1/2})^{2} = \lambda(y_{1}^{2} + y_{2}^{2})$$
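A quick numeric sanity check of this equality case (the sample values below are arbitrary, with $\lambda \geq 0$):

```python
import math

# If x_i = lam * y_i with lam >= 0, the two sides of the claimed
# equality coincide. The sample values here are arbitrary.
lam, y1, y2 = 2.5, 3.0, -4.0
x1, x2 = lam * y1, lam * y2

lhs = math.hypot(x1, x2) * math.hypot(y1, y2)  # sqrt(x1^2+x2^2)*sqrt(y1^2+y2^2)
rhs = lam * (y1 ** 2 + y2 ** 2)

assert abs(lhs - rhs) < 1e-9
```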
• If I understand your question correctly, I am using the general result that for positive $a, b$, we have $\sqrt{a^{2}b} = \sqrt{a^{2}}\sqrt{b} = a\sqrt{b}$. – Alex Wertheim Oct 2 '13 at 1:24 | 2019-05-24 04:57:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9170196652412415, "perplexity": 233.1383094990615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257514.68/warc/CC-MAIN-20190524044320-20190524070320-00491.warc.gz"} |
https://math.stackexchange.com/questions/697154/unique-number-of-numbers-multiplied-together | # Unique number of numbers multiplied together
I'm sure this has been asked before, but how many unique numbers can be made from multiplying $4$ numbers, each between $1$ and $100$?
My guess is all numbers from $1$ to $100^4$ except those with prime factors above $100$. However, this excludes numbers like $11^5$. Then I would also have to exclude numbers with more than $4$ prime factors, each of which is $\ge 11$. I'm probably still missing some though.
Is there a way to find or get an estimate of this number without using a computer? I'm guessing something to do with the prime counting function. Any insight is appreciated.
Edit: Here are some data points (range, unique numbers). Can anyone find a pattern?
10,275
20,2670
30,8679
40,21346
50,49076
60,89247
70,149530
80,253818
90,381413
100,520841
• This question reminds me of Project Euler --- some interesting questions. Can you use programming? – Andrew Kelley Mar 3 '14 at 3:46
• In fact, I imagine a brute force calculation in C++ would take under a minute: just compute all possible products and stuff them in an unordered_set. – Hurkyl Mar 3 '14 at 3:49
• I just checked, and $2(64^{4}) < 100^{4}$. So your initial guess is not correct, but I think this is very similar to what you did notice (about $11^5$). – Andrew Kelley Mar 3 '14 at 3:51
• @Hurkyl Thank you for your unordered_set suggestion, the result was found under a second. – qwr Mar 3 '14 at 5:04
• Excel finds a fit $y=0.1326x^{3.2858}$ that looks good to the eye. For a cubic fit, it finds $y = 0.6849x^3 - 15.791x^2 - 35.111x + 3240.3$ which also looks good. – Ross Millikan Mar 3 '14 at 23:13
You are looking at a four-dimensional analogue of the famous "Erdös multiplication table problem". In that problem, we want to know $N_2(x)$, the number of distinct integers that occur in the form $mn$ where $1\le m\le x$ and $1\le n\le x$. Clearly $N_2(x)$ is less than $x^2$; Erdös was the first to show that $N_2(x)/x^2$ tends to $0$ as $x$ tends to infinity. A series of improvements, culminating in work of Kevin Ford, showed that $N_2(x)$ is about $x^2$ divided by a small power of $\log x$.
You're now asking about $N_4(x)$, defined similarly. I suspect that $N_4(x)$ is about $x^4$ divided by a slightly larger power of $\log x$. In particular, there are probably methods for getting lower bounds for $N_2(x)$ (e.g., showing that $N_2(x)/x^{2-\varepsilon}$ tends to infinity with $x$, for any fixed $\varepsilon>0$) that could be extended to show that $N_4(x)$ is eventually larger than $x^\alpha$ for every $\alpha<4$.
The simple computer route to this is to do four nested loops. You can require that each number be at least as large as the one before, which gives somewhat more than $\frac 1{4!}100^4 \approx 4,200,000$ products (the divisor is smaller when there are duplicates), then sort the products and throw out duplicates. I suspect it is rather close to $4E6$ because there won't be many duplicates, but that is a guess. Even up to $1000$ is easily within desktop computer speed.
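The brute-force route described above is only a few lines in practice; a sketch using combinations with repetition so that each unordered quadruple is formed once:

```python
from itertools import combinations_with_replacement
from math import prod

def distinct_products(n):
    """Count distinct products a*b*c*d with 1 <= a <= b <= c <= d <= n."""
    return len({prod(c) for c in combinations_with_replacement(range(1, n + 1), 4)})

# The question's data table reports 275 distinct products for range 10.
print(distinct_products(10))
```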
• By the way, the actual number is around $520,000$. – qwr Mar 3 '14 at 4:57
• @qwr: I think the first step towards conjecturing approximately how many there should be would be to compute a table and a graph of the actual exact values for the problem, with $100$ replaced by $n$ for many values of $n$; e.g. maybe every small multiple of $10$, or maybe for all $n < 100$. The graph may give useful clues. You might want to plot the log of the number as well. – Hurkyl Mar 3 '14 at 5:12 | 2019-08-23 16:02:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7988572120666504, "perplexity": 222.93719486845018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318894.83/warc/CC-MAIN-20190823150804-20190823172804-00429.warc.gz"} |
https://inmabb.criba.edu.ar/revuma/revuma.php?p=doi/v62n1a03 | Revista de la Unión Matemática Argentina Home Editorial board For authors Latest issue In press MCA 2021 Online first Prize Search OJS
Uniform approximation of Muckenhoupt weights on fractals by simple functions
Volume 62, no. 1 (2021), pp. 57–66
### Abstract
Given an $A_p$-Muckenhoupt weight on a fractal obtained as the attractor of an iterated function system, we construct a sequence of approximating weights, which are simple functions belonging uniformly to the $A_p$ class on the approximating spaces. | 2021-10-21 20:44:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5086263418197632, "perplexity": 2870.3842587931726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585441.99/warc/CC-MAIN-20211021195527-20211021225527-00271.warc.gz"} |
https://testbook.com/question-answer/the-maximum-and-minimum-speeds-of-a-flywheel-are-6--5e6cb20ff60d5d1968dc5178 | # The maximum and minimum speeds of a flywheel are 630 rpm and 600 rpm. The coefficient of fluctuation of speed is
This question was previously asked in
CIL MT Mechanical: 2020 Official Paper
1. 0.0500
2. 0.0487
3. 0.0300
4. 0.0476
Option 2 : 0.0487
## Detailed Solution
Concept:
The difference between the maximum and minimum speeds during a cycle is called the maximum fluctuation of speed. The ratio of the maximum fluctuation of speed to the mean speed is called the coefficient of fluctuation of speed. The reciprocal of the coefficient of fluctuation of speed is known as the coefficient of steadiness.
Let $${N_1}$$ and $${N_2}$$ = Maximum and minimum speeds in r.p.m. during the cycle
Range of speed = N1 - N2
$$N$$ = Mean speed in r.p.m. $$= \frac{{{N_1} + {N_2}}}{2}$$
Coefficient of fluctuation of speed
$${C_s} = \;\frac{{{N_1} - {N_2}}}{N} = \;\frac{{2\left( {{N_1} - {N_2}} \right)}}{{{N_1} + \;{N_2}}}$$
$$m = \frac{1}{\text{Coefficient of fluctuation of speed}} = \frac{N_{mean}}{N_{max} - N_{min}} = \frac{N_1 + N_2}{2\left( N_1 - N_2 \right)}$$
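A quick numerical check of these relations with this problem's speeds:

```python
n1, n2 = 630.0, 600.0   # maximum and minimum flywheel speeds, rpm

n_mean = (n1 + n2) / 2  # mean speed = 615 rpm
cs = (n1 - n2) / n_mean # coefficient of fluctuation of speed
m = 1 / cs              # coefficient of steadiness

assert n_mean == 615
print(round(cs, 4))     # rounds to 0.0488; the option list truncates to 0.0487
```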
Calculation:
Given:
N1 =630 rpm and N2 = 600 rpm
then,
N = Mean speed in r.p.m. $$= \frac{{{N_1} + {N_2}}}{2}=\frac{{{630} + {600}}}{2} =615\ rpm$$
and range of speed = 630 - 600 = 30 rpm
and $${C_s} = \;\frac{{{N_1} - \;{N_2}}}{N} = \;\frac{{2\left( {{N_1} - {N_2}} \right)}}{{{N_1} + {N_2}}}=\frac{30}{615}$$
thus, CS = 0.0487 | 2022-01-21 19:27:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7482122778892517, "perplexity": 1482.2013654600476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303709.2/warc/CC-MAIN-20220121192415-20220121222415-00654.warc.gz"} |
https://stats.libretexts.org/Courses/Lake_Tahoe_Community_College/Support_Course_for_Elementary_Statistics%3A__ISP/01%3A_Decimals_Fractions_and_Percents | Skip to main content
# 1: Decimals Fractions and Percents
• 1.1: Rounding and Scientific Notation
In this section, we will go over how to round decimals to the nearest whole number, nearest tenth, nearest hundredth, etc. In most statistics applications that you will encounter, the numbers will not come out evenly, and you will need to round the decimal.
• 1.2: Converting between Fractions - Decimals and Percents
In this section, we will convert from decimals to percents and back. We will also start with a fraction and convert it to a decimal and a percent. In statistics we are often given a number as a percent and have to do calculations on it. To do so, we must first convert it to a percent. Also, the computer or calculator shows numbers as decimals, but for presentations, percents are friendlier. It is also much easier to compare decimals than fractions, thus converting to a decimal is helpful.
• 1.3: Comparing Fractions, Decimals, and Percents
In this section, we will go over techniques to compare two numbers. These numbers could be presented as fractions, decimals or percents and may not be in the same form. For example, when we look at a histogram, we can compute the fraction of the group that occurs the most frequently. We might be interested in whether that fraction is greater than 25% of the population. By the end of this section we will know how to make this comparison.
• 1.4: Using Fractions - Decimals and Percents to Describe Charts
Charts, such as bar charts and pie charts are visual ways of presenting data. You can think of each slice of the pie or each bar as a part of the whole. The numerical versions of this are a list of fractions, decimals and percents. By the end of this section we will be able to look at one of these charts and produce the corresponding fractions, decimals, and percents.
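The conversions these sections describe each take one line in code; a small sketch:

```python
from fractions import Fraction

f = Fraction(3, 8)       # a fraction
decimal = float(f)       # fraction -> decimal: divide numerator by denominator
percent = decimal * 100  # decimal -> percent: shift two decimal places

assert decimal == 0.375 and percent == 37.5

# Comparing mixed forms (a fraction, a decimal, a percent) by first
# converting everything to decimals:
assert float(Fraction(1, 4)) < 0.3 < 31 / 100 < float(Fraction(1, 3))
```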
This page titled 1: Decimals Fractions and Percents is shared under a CC BY license and was authored, remixed, and/or curated by Larry Green.
| 2023-01-31 06:11:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9172230958938599, "perplexity": 527.1262037466503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499845.10/warc/CC-MAIN-20230131055533-20230131085533-00221.warc.gz"}
https://www.federalreserve.gov/econres/notes/feds-notes/why-are-there-still-bank-branches-20180820.htm | August 20, 2018
### The Branch Puzzle: Why Are there Still Bank Branches?
Elliot Anenberg, Andrew C. Chang, Serafin Grundl, Kevin B. Moore, and Richard Windle*
Over the past 25 years, advancements in information technology have allowed for many banking services to be conducted online instead of in a local bank branch. However, Figure 1 shows that the number of bank branches increased from 1990 to 2008, plateaued between 2008 and 2012, and decreased only slightly since 2013 as branch closures in some locations have been somewhat offset by openings in others. 1 The persistence of bank branches is puzzling because the presence of other types of establishments whose products or services are now available online has declined significantly, even for establishments that, unlike banks, sell physical goods.2 Even long before the Internet era, some observers predicted the death of the bank branch as electronic substitutes for branches began to emerge.3
##### Figure 1: Number of Branches of FDIC-insured Commercial Banks
We provide evidence that the persistence of the large number of local bank branches across the country--even in areas with expensive real estate and in the face of improving information technology--may be due to the fact that both depositors and small businesses continue to value local bank branches.
From the triennial Survey of Consumer Finances (SCF), we show that even though the majority of depositors now use online banking, a majority of depositors that use online banking still visit bank branches, which suggests that online banking is an imperfect substitute for bank branches. We also document broad-based reliance of depositors on bank branches, although this reliance varies with age, wealth, income, and occupation.
For small business lending, using Community Reinvestment Act (CRA) disclosures, we show that among banks that are CRA reporters the share of loans made by lenders without a local branch presence remains quite low. This finding suggests that local branch presence is still important for small business lending. However, we also find that among CRA reporters the non-local share has risen substantially since 2011, suggesting that local branch presence has lost some importance in recent years, at least for some businesses.4
This recent trend towards a growing role for non-local lenders could be caused by changes in underwriting technology or by changes in the composition of small businesses themselves. Underwriting technology is changing as credit scoring gains importance in small business lending, which could allow local lenders, who have access to soft information, to face increased competition from non-local lenders. At the same time, the nature of some types of small businesses themselves is changing, as more small businesses sell their goods and services online, and purchase their inputs online.5 This shift in the nature of small businesses also potentially reduces the advantage of local lenders in screening small businesses.
1. Depositors
We use the SCF to provide three complementary pieces of evidence that households value local bank branches despite the rise of online banking services: (1) questions on the use of online banking, (2) new questions on the propensity for households to visit local branches, and (3) household subjective assessment of the importance of branch location for selecting their primary financial institution.
Who Uses Online Banking? Most people.
The 2016 SCF asked households whether they used online banking in the last 12 months, and over 70 percent of households reported using online banking. Thus, the persistence of bank branches does not appear to be simply explained by a low online banking take-up rate. In comparison, only 4 percent of households reported using online banking in 1995, suggesting that improvements in information technology have increased the availability and usage of online banking services.6
The 2016 SCF also contained new questions asking whether respondents visited a local bank branch. About 84 percent of households with a checking or savings account reported visiting the local branch of the institution of their main savings or their main checking account within the last year, and almost all households who visited their local branch did so to use banking services other than just the ATM.
Although online banking appears to substitute for services provided by local branches, the substitution is far from complete. Column 1 of Table 1 shows that among households with checking accounts, those who reported usage of online banking were only 2 percentage points less likely to report going to a local branch of the institution of their main checking account. Column 2 shows that among households with savings accounts, online banking usage is somewhat more substitutable with going to the local branch of the institution of their main savings account. For these households the difference is 12 percentage points. However, even among the households with savings accounts that reported using online banking, 70 percent of them still reported going to a local branch.
##### Table 1: Online Banking Users are Less Likely to Visit Branches

|                     | (1) Go to Checking Branch? | (2) Go to Savings Branch? |
| ------------------- | -------------------------- | ------------------------- |
| Use Online Banking? | -0.02** (0.01)             | -0.12*** (0.02)           |
| Constant            | 0.84*** (0.01)             | 0.82*** (0.01)            |
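Read as linear-probability fits, the Table 1 estimates reproduce the visit rates quoted in the text (0.84 - 0.02 = 0.82 for checking; 0.82 - 0.12 = 0.70 for savings):

```python
# Implied branch-visit rates for online-banking users (dummy = 1),
# from Table 1's constants and coefficients.
const_checking, coef_checking = 0.84, -0.02
const_savings, coef_savings = 0.82, -0.12

checking_online = const_checking + coef_checking
savings_online = const_savings + coef_savings

# Matches the text: 70 percent of online users with savings accounts
# still visit a local branch.
assert abs(checking_online - 0.82) < 1e-9
assert abs(savings_online - 0.70) < 1e-9
```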
Which Households Visit Local Bank Branches? Older, Wealthier, and Self-Employed.
While the 2016 SCF showed that 84 percent of households reported visiting a local bank branch, certain types of households were more likely to visit branches than others. To further explore these differences, we regress a dummy variable, VisitBranch, for whether a respondent's household visited a local branch, on a variety of household characteristics, shown in equation (1).
\begin{align}(1) \ \ \ \ \ {VisitBranch}_i = &\ {WealthPercentile}_i+{IncomePercentile}_i+\\&\ {AgeCategory}_i+{OccupationCategory}_i+X_i+\epsilon_i \end{align}
In equation (1) WealthPercentile is four dummies for whether respondent i's household was in the 25th-49th, 50th-74th, 75th-89th, or 90th+ percentile of wealth, IncomePercentile is five dummies for whether the respondent's household was in the 20th-39th, 40th-59th, 60th-79th, 80th-89th, or 90th+ percentile of income, AgeCategory is five dummies for whether the respondent was 35-44, 45-54, 55-64, 65-74, or 75+ at the time of the survey, OccupationCategory is three dummies for whether the respondent was Self-Employed, Retired/Disabled/Student/Homemaker, or otherwise out of the labor force (the omitted category is working for someone else), and X is a variety of other control variables.7 We estimate equation (1) using the 2016 SCF after applying survey weights.
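As a concrete illustration of the AgeCategory dummies in equation (1) (a sketch; the function name and encoding are ours, not the SCF's):

```python
def age_dummies(age):
    """Dummy vector for equation (1)'s AgeCategory bins.

    Under-35 is the omitted reference group, so it maps to all zeros.
    """
    cats = [(35, 44), (45, 54), (55, 64), (65, 74), (75, 200)]
    return [1 if lo <= age <= hi else 0 for lo, hi in cats]

assert age_dummies(29) == [0, 0, 0, 0, 0]  # omitted group
assert age_dummies(78) == [0, 0, 0, 0, 1]  # 75+ dummy
```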
Figure 2: Effects of Depositor Characteristics on Branch Use.
##### Figure 2d: Self-Employed People Use Branches More
Older households were more likely to use their local branch. Figure 2a shows that relative to respondents under the age of 35, respondents over the age of 75 were about six percentage points more likely to have visited a local bank branch in the last year. This pattern could reflect that older households have stronger preferences for visiting their local branch than younger households, but an alternative interpretation is that age is proxying for a year-of-birth effect (i.e. a cohort effect).
Higher wealth households were also more likely to visit their local branch. Figure 2b shows that on average, above-median wealth households were about 7.5 percentage points more likely to have visited their local branch. This finding could reflect higher demand for banking services among higher wealth households, as the SCF also shows that, among households with checking accounts, the number of services a household obtains from its main checking institution increases from 2.0 for the lowest wealth group to 3.6 for the top wealth group. This finding could also reflect that higher wealth households demand certain banking products that are better serviced through visiting a local branch.
While higher wealth households were more likely to use their local branch, higher income households were somewhat less likely to use their local branch, as shown in Figure 2c. Households in the top 10 percent of the income distribution were about 6 percentage points less likely to visit a branch than households in the bottom 20 percent. Visiting a branch is time consuming, and so perhaps the lower usage of local branches among higher income households reflects their higher opportunity cost of time.8
Finally, as shown in Figure 2d, we find that self-employed households were especially likely to use their local branch. They were about 6 percentage points more likely to visit a branch than households that work for someone else. This is perhaps because such households demand a set of banking products (e.g. depositing checks with large balances) that are best serviced through visiting a local branch. The 2016 SCF data show that about 35 percent of self-employed households used the same institution for their business as for their main checking account.
Do Households Place Subjective Value on Branches? Yes, and More Value With Age.
The SCF also asks respondents for the most important reason for choosing the financial institution for their main checking account, offering respondents a menu of options. This question is asked in the 2016 SCF, but also in previous survey years as well. Respondents have consistently said that the location of an institution's offices is the most important reason for choosing their financial institution. This reason accounts for over a third of responses to this question in each survey year.9
Figure 3 plots the fraction of SCF respondents that list the location of offices as the most important reason for choosing their main financial institution for their checking account, for three birth cohorts, against the median age of the cohort. Figure 3 shows that older households are much more likely to value brick and mortar locations, with an upward trend in the reported value of location of offices starting at around age 31, consistent with our findings in Figure 2a. However, the difference in responses between cohorts for a given age of the cohort, shown by the areas of Figure 3 where two cohorts overlap in age, are usually small. Thus, our results suggest that the increasing effect of age on branch importance reflects an age effect rather than a cohort effect. This interpretation implies that when currently young depositors transition into old age they will have a stronger preference for visiting their local branch, meaning that branches may remain important in the future. That said, our interpretation is based on comparisons of cohorts from decades ago, before the rapid advancements in information technology in recent years, and it is possible that the cohort effect has become more important.
2. Small Businesses

To study the importance of local branches for small businesses, we merge data on small business loans below $1 million, from CRA disclosures, with data on local branches from the Federal Deposit Insurance Corporation's (FDIC) Summary of Deposits.10 We define local markets either as a metropolitan statistical area (MSA) ("urban" market) or as a county for counties not contained in an MSA ("rural" market). In our merged data, we observe lending volume by bank (but not bank branch) and year for three loan size buckets, and the local market in which the small business is located. Our sample period is 2000 to 2016.11

Figure 4: Average Share of Loans by Lenders Without Local Branch Presence Among CRA Reporters.

##### Figure 4a: All Markets, $100,000-$250,000
##### Figure 4b: All Markets, $250,000-$1 million
##### Figure 4c: Urban Markets, $100,000-$250,000
##### Figure 4d: Urban Markets, $250,000-$1 million
##### Figure 4e: Rural Markets, $100,000-$250,000
##### Figure 4f: Rural Markets, $250,000-$1 million

Figure 4 shows the evolution of the average share of loans made by lenders among CRA reporters without a local branch. The share is shown separately for loans between $100,000 and $250,000 and loans between $250,000 and $1 million.12 The share of lenders without a local branch increased slowly from 2000 to 2008, decreased somewhat during the financial crisis, and increased substantially after 2011. Despite the recent increase, the share of lenders without a branch presence remains below 20 percent for urban markets and below 45 percent for rural markets, which suggests that local branches still play an important role in small business lending.13 One interpretation of the recent increase in the non-local share among CRA reporters is that a local branch is becoming less important for acquiring soft information, possibly caused by changes in underwriting technology (e.g. credit scoring). Alternatively, it could be driven by changes in the nature of small businesses themselves.
For example, loan officers in a local branch may have less of a comparative advantage in evaluating small businesses that sell their products online and purchase some of their inputs online.

An important caveat of our analysis is that we only observe banks that are CRA reporters. Small banks that are not CRA reporters might be lending less outside the area of their branch footprint. Therefore, the share of out-of-market lenders may be smaller than the numbers shown in Figure 4 if all banks are taken into consideration.14 As CRA reporters only account for 60-75 percent of small business lending over the sample period, it is possible that lending is more local than suggested by the graphs in Figure 4. However, as the CRA share increased during the sample period, it is unlikely that the recent increase of out-of-market lending among CRA reporters is driven by a shift towards non-CRA reporters.

To further explore the relationship between bank branches and small business lending, we regress a bank's lending market share against its share of branches in that market. This regression also uses year fixed effects and, therefore, we estimate the association of local branches with lending from bank-market-level differences in loan and branch shares. Our specification is:

$$(2) \ \ \ \ \ {Loan Share}_{jmt} = \beta_t {Branch Share}_{jmt}+\mu_t+\epsilon_{jmt}$$

${Loan Share}_{jmt}$ is the amount of small business loans of bank j in market m in year t as a share of total amount of small business loans in market m in year t. ${Branch Share}_{jmt}$ is the number of bank branches of bank j in market m in year t as a share of total number of bank branches in market m and year t. The coefficient $\beta_t$ is allowed to vary by year and $\mu_t$ are year fixed effects. The loan share is the loan share among banks that are CRA reporters. The branch share is renormalized such that the branch shares of all banks that are CRA reporters also sum up to one.
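Within a single year, the fit in equation (2) amounts to a simple OLS of loan shares on branch shares; a sketch on toy bank-market observations (hypothetical numbers, not CRA data):

```python
def ols(x, y):
    """OLS of y on x with an intercept; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Toy bank-market observations for one year; shares are hypothetical.
branch_share = [0.50, 0.30, 0.20, 0.60, 0.25, 0.15]
loan_share = [0.45, 0.35, 0.20, 0.55, 0.28, 0.17]

beta, mu = ols(branch_share, loan_share)
# A beta close to 1 would mean local lending closely tracks branch presence.
print(beta)
```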
A higher value of $\beta$ implies a closer association between branch presence and small business lending.15 The number of observations is 32,585,004, as each bank-market-time triplet is an observation. Figure 5 plots the coefficient estimates for $\beta_t$ for loans between $100,000 and $250,000. Since the end of the financial crisis, the association between the local branch share and the loan share has decreased. From 2000 to 2010, an increase in the local branch share by 1 percentage point was associated with an increase in the local loan share of 0.67-0.71 percentage points. Between 2011 and 2016, however, this number fell to about 0.60 percentage points. This corroborates the suggestive evidence from the graphs in Figure 4 that non-local lenders are becoming more important for small business lending.16

How does the increasing role of non-local lenders affect small businesses? If the increasing role is driven by improvements in underwriting technology that make non-local lenders more competitive with local ones (e.g. by making more information available at a distance), then small businesses could potentially benefit from increased competition and variety of lenders. However, if the improvements in underwriting technology are accompanied by branch closures, then those small businesses that still rely heavily on local branches, either for borrowing or for other purposes, could be harmed.

##### Figure 5: Correlation Between Branch Share and Loan Share

Conclusion

Antitrust analysis of bank mergers currently defines banking markets to be geographically local, and our findings that depositors and small businesses still rely on bank branches support this market definition.17 However, our data also indicate that non-local lenders are gaining importance in small business lending.

References

[1] Sumit Agarwal, Robert Hauswald. Distance and private information in lending. The Review of Financial Studies, 23(7):2757-2788, 2010.
[2] Dean F Amel, Kenneth P Brevoort. The perceived size of small business banking markets. Journal of Competition Law and Economics, 1(4):771-784, 2005.
[3] Dean F Amel, Arthur B Kennickell, Kevin B Moore, others. Banking Market Definition: Evidence from the Survey of Consumer Finances. Divisions of Research & Statistics and Monetary Affairs, Federal Reserve Board, 2008.
[4] Dean F Amel, Martha Starr-McCluer. Market definition in banking: Recent evidence. The Antitrust Bulletin, 47(1):63-89, 2002.
[5] Kenneth P Brevoort, John D Wolken, John A Holmes. Distance still matters: the information revolution in small business lending and the persistent role of location, 1993-2003. 2010.
[6] Jesse Bricker, Lisa J Dettling, Alice Henriques, Joanne W Hsu, Lindsay Jacobs, Kevin B Moore, Sarah Pack, John Sabelhaus, Jeffrey Thompson, Richard A Windle. Changes in US Family Finances from 2013 to 2016: Evidence from the Survey of Consumer Finances. Federal Reserve Bulletin, 103:1-41, 2017.
[7] Glenn B Canner, others. Evaluation of CRA data on small business lending. Business Access to Capital and Credit:8-9, 1999.
[8] Hans Degryse, Steven Ongena. Distance, lending relationships, and competition. The Journal of Finance, 60(1):231-266, 2005.
[9] Timothy E Dore, Traci L Mach. Recent Trends in Small Business Lending and the Community Reinvestment Act. 2018.
[10] Roberto Felici, Marcello Pagnini. Distance, bank heterogeneity and entry in local banking markets. The Journal of Industrial Economics, 56(3):500-534, 2008.
[11] Stephan Hollander, Arnt Verriest. Bridging the gap: the design of bank loan contracts and distance. Journal of Financial Economics, 119(2):399-419, 2016.
[12] Mitchell A Petersen, Raghuram G Rajan. Does Distance Still Matter? The Information Revolution in Small Business Lending. The Journal of Finance, 57(6):2533-2570, 2002.
[13] Hirofumi Uchida, Gregory F Udell, Nobuyoshi Yamori. Loan officers and relationship lending to SMEs. Journal of Financial Intermediation, 21(1):97-122, 2012.
* We are grateful to Jacob Gramlich and Beth Kiser for helpful comments and to Dean Amel for making the note more entertaining. Thanks to Tim Dore for help with the CRA data. The analysis and conclusions set forth are those of the authors and do not indicate concurrence by other members of the staff, by the Board of Governors, or by the Federal Reserve System. Return to text

1. For example, JP Morgan Chase recently announced that it plans to enter 15 to 20 regional markets around the United States by 2023, by opening 400 new branches. Similarly, Bank of America plans to open 500 new branches. Return to text

2. For example, the number of book store locations declined from about 38,500 in 2004 to about 22,500 in 2018, according to Statista. In some industries that don't sell physical goods, like travel agencies, brick and mortar stores have disappeared almost entirely. Return to text

3. For example, in 1975, the Vice Chairman of the Federal Reserve George Mitchell said: "I have no doubt that some banks now continuously review their branching policies in light of the development of electronic substitutes for branches but the statistical evidence of such policies is hard to find… It appears to me that the continued growth of banking offices indicates a clear misreading of the trend in the banking technology climate, a misreading that is likely to prove costly for some banking enterprises." But since 1975, the number of bank branches has more than doubled from about 30,000 to around 80,000 today. Return to text

4. Our analysis on small businesses relates to the literature on the importance of physical distance and soft information for lending relationships ([12, 5, 8, 10, 11, 13, 1]). Return to text

5. For example, Amazon claims that more than 1 million small businesses with revenues below $7.5 million sell on its site. Return to text
6. This statistic is from the 1995 SCF. Prior to 2016, the SCF asked about online banking in a different way than in 2016. Online banking was one of the options for how a household interacted with a financial institution. Using that measure, the fraction of households using online banking increased from 4 percent in 1995 to 64 percent in 2013. Return to text
7. The control variables are: a dummy for being white non-hispanic, a dummy for living in a metropolitan statistical area, dummies for whether the respondent had a checking or a savings account for non-business use at any financial institution (not necessarily at their main financial institution), and a measure of financial literacy, as defined by [6]. Return to text
8. To examine the interaction between wealth and income's effect on the probability of visiting a branch, we re-run equation (1) by replacing WealthPercentile and IncomePercentile with a full WealthPercentile and IncomePercentile interaction set. This regression shows that the probability of visiting a branch conditioned on wealth falls with income for the top three wealth bins. The probability of visiting a branch conditioned on income rises with wealth for the bottom four income bins. Return to text
9. In contrast, less than five percent of respondents cite the location of an institution's offices as the most important reason for choosing their mortgage lender. The difference in these responses is consistent with evidence that markets for mortgages and other consumer loans are more national in scope than the market for depositors. Return to text
10. The Community Reinvestment Act mandates that banks that exceed an asset threshold of about $1 billion must report small business loans to the Federal Reserve Board. The data can be downloaded at https://www.ffiec.gov/cra/craflatfiles.htm. For a description of the data see [7]. See [9] for recent trends in small business lending and background on the Community Reinvestment Act. We estimate that CRA reporters account for about 60 percent of all small business loans as measured by outstanding loan balances. Return to text

11. An important caveat is that these data contain all loans with a small principal amount, not only loans to small businesses. These data are thought to be informative about lending to small businesses because about one half of loans below $1 million are taken out by small businesses with less than $1 million in revenue ([7]). In 2005 the reporting requirements for CRA loans were relaxed. We treat banks that were CRA reporters prior to 2005, but not afterwards, as non-reporters. Return to text

12. We calculate market shares as the fraction of loans to small businesses in the market by lenders without a local branch presence. The average market share is the simple average over all banking markets. We exclude loans below $100,000 because a large fraction of these loans are likely credit card loans. Return to text
13. The higher share for rural markets partly reflects that MSAs (urban markets) are typically comprised of multiple counties whereas we assume rural markets are single counties, and so it is more likely that we classify a lender as local in an urban market. It may also reflect that local rural lenders are less likely to be required to file CRA disclosures. Return to text
14. To assess this we calculate the share of small business lending by CRA reporters using call report data, which is available for all banks. For loans between $100,000 and $250,000 the share of the CRA reporters increased somewhat from 64.9 percent in 2000 to 72.4 percent in 2016. For loans between $250,000 and $1 million their share also increased from 69.6 percent to 74.0 percent. These shares are based on dollar amounts. Shares based on number of loans are similar.
As CRA reporters only account for 60-75 percent of small business lending over the sample period, it is possible that lending is more local than suggested by the graphs in Figure 4. However, as the CRA share increased during the sample period, it is unlikely that the recent increase of out-of-market lending among CRA reporters is driven by a shift towards non-CRA reporters. Return to text
15. To see this relationship, consider that if all small businesses exclusively borrow locally, then when all loans are of common size and when small businesses are equally likely to choose from any of the local branches, β=1. Return to text
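The benchmark in footnote 15 can be spot-checked numerically. The sketch below is illustrative only (random toy data, not the CRA sample, and not the paper's actual specification (2)): when loan shares mirror branch shares exactly, the OLS slope is 1, and when a fraction of lending ignores local branch presence, the slope drops by that fraction.

```python
import numpy as np

# Toy data: branch counts for 5 hypothetical banks in 200 hypothetical markets.
rng = np.random.default_rng(0)
n_banks, n_markets = 5, 200
branches = rng.integers(1, 10, size=(n_banks, n_markets)).astype(float)
branch_share = branches / branches.sum(axis=0)

def ols_slope(x, y):
    """Simple OLS slope of y on x (both flattened)."""
    x, y = x.ravel(), y.ravel()
    return ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()

# Exclusive local borrowing with common loan sizes: loan shares equal branch
# shares, so the regression slope is exactly 1 (footnote 15's benchmark).
beta_local = ols_slope(branch_share, branch_share)

# If a fraction lam of lending is split evenly regardless of local branches
# (a stand-in for non-local lenders), the slope falls to 1 - lam.
lam = 0.4
loan_share = (1 - lam) * branch_share + lam / n_banks
beta_mixed = ols_slope(branch_share, loan_share)

print(round(beta_local, 6), round(beta_mixed, 6))  # 1.0 0.6
```

On this reading, the estimated decline from roughly 0.7 to 0.6 is consistent with a growing wedge between where branches sit and where loans originate.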
16. We also experimented with including various fixed effects in the specification (2). Including bank or market fixed effects does not change the general pattern of $\beta_t$. Using loans between $250,000 and $1 million also yields similar results. Return to text
17. The Department of Justice defines geographic markets for the purposes of antitrust analysis such that a "hypothetical monopolist… in that region would profitably impose at least a 'small but significant and nontransitory' increase in price" (see https://www.justice.gov/atr/12-geographic-market-definition). Our results have implications for the geographic market definition in banking antitrust policy, because whether such a price increase would be profitable depends crucially on how many customers rely on local bank branches ([4], [2], and [3]). Return to text
https://docs.fires.im/en/latest/Building-a-FireSim-AFI.html | # 4. Building Your Own Hardware Designs (FireSim FPGA Images)¶
This section will guide you through building an AFI image for a FireSim simulation.
## 4.1. Amazon S3 Setup¶
During the build process, the build system will need to upload a tar file to Amazon S3 so that Amazon's backend scripts can convert the Vivado-generated tar into an AFI. The manager will create this bucket for you automatically; you just need to specify a name.
So, choose a bucket name, e.g. firesim-yourname. Bucket names must be globally unique. If you choose one that’s already taken, the manager will notice and complain when you tell it to build an AFI. To set your bucket name, open deploy/config_build.ini in your editor and under the [afibuild] header, replace
s3bucketname=firesim-yournamehere
with your own bucket name, e.g.:
s3bucketname=firesim-sagar
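If you prefer to script this edit, a minimal sketch using Python's configparser is below. The `[afibuild]` section and `s3bucketname` key are from the instructions above; the sample file content and target name are illustrative. Note that configparser drops comments on rewrite, so for the real `deploy/config_build.ini` a manual edit (or a careful `sed`) may be safer.

```python
import configparser
import io

# A stand-in for deploy/config_build.ini; in practice you would call
# config.read("deploy/config_build.ini") instead.
SAMPLE = """\
[afibuild]
s3bucketname=firesim-yournamehere
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)
config["afibuild"]["s3bucketname"] = "firesim-sagar"

buf = io.StringIO()
# keep the bare key=value style the file already uses
config.write(buf, space_around_delimiters=False)
print(buf.getvalue())
```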
## 4.2. Build Recipes¶
In the deploy/config_build.ini file, you will notice that the [builds] section currently contains several lines, which indicates to the build system that you want to run all of these builds in parallel, with the parameters listed in the relevant section of the deploy/config_build_recipes.ini file. Here you can set parameters of the simulated system, and also select the type of instance on which the Vivado build will be deployed. From our experimentation, there are diminishing returns using anything above a z1d.2xlarge, so we default to that. If you do wish to use a different build instance type keep in mind that Vivado will consume in excess of 32 GiB for large designs.
To start out, let’s build a simple design, firesim-rocket-quadcore-no-nic-l2-llc4mb-ddr3. This is a design that has four cores, no nic, and uses the 4MB LLC + DDR3 memory model. To do so, comment out all of the other build entries in deploy/config_build.ini, besides the one we want. So, you should end up with something like this (a line beginning with a # is a comment):
[builds]
# this section references builds defined in config_build_recipes.ini
# if you add a build here, it will be built when you run buildafi
firesim-rocket-quadcore-no-nic-l2-llc4mb-ddr3

Then, to start the build, run:

firesim buildafi
This will run through the entire build process, taking the Chisel RTL and producing an AFI/AGFI that runs on the FPGA. This whole process will usually take a few hours. When the build completes, you will see a directory in deploy/results-build/, named after your build parameter settings, that contains AGFI information (the AGFI_INFO file) and all of the outputs of the Vivado build process (in the cl_firesim subdirectory). Additionally, the manager will print out a path to a log file that describes everything that happened, in-detail, during this run (this is a good file to send us if you encounter problems). If you provided the manager with your email address, you will also receive an email upon build completion, that should look something like this:
https://www.physicsforums.com/threads/spivak-inequality.692956/ | # Homework Help: Spivak inequality
1. May 21, 2013
### Von Neumann
Question:
Find all numbers $x$ for which $\frac{1}{x}+\frac{1}{1-x}>0$.
Solution:
If $\frac{1}{x}+\frac{1}{1-x}>0$,
then $\frac{1-x}{x(1-x)}+\frac{x}{x(1-x)}>0$;
hence $\frac{1}{x(1-x)}>0$.
Now we note that
$\frac{1}{x(1-x)} \rightarrow \infty$ as $x \rightarrow 0^{+}$
and $\frac{1}{x(1-x)} \rightarrow \infty$ as $x \rightarrow 1^{-}$.
Thus, $0<x<1$.
Notes:
Not quite sure if that's the sort of solution Spivak is looking for in Ch.1.
2. May 21, 2013
### Infrared
A non-zero number and its reciprocal will always have the same sign so $\frac{1}{x(1-x)}$ will be positive where $x(1-x)$ is
3. May 21, 2013
### Von Neumann
Ah, I see. Don't know how I didn't see that.
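As an aside, the thread's conclusion can be verified mechanically with sympy (an outside aid, not something Spivak's Chapter 1 permits):

```python
from sympy import Interval, Symbol, solve_univariate_inequality

x = Symbol("x", real=True)

# Solve the original inequality directly; sympy accounts for the
# singularities at x = 0 and x = 1.
sol = solve_univariate_inequality(1/x + 1/(1 - x) > 0, x, relational=False)

# Infrared's observation: the inequality holds exactly where x*(1 - x) > 0,
# since a nonzero real and its reciprocal share the same sign.
sol2 = solve_univariate_inequality(x * (1 - x) > 0, x, relational=False)

assert sol == sol2 == Interval.open(0, 1)
print(sol)  # Interval.open(0, 1)
```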
http://bootmath.com/constructing-a-degree-4-rational-polynomial-satisfying-fsqrt2sqrt3-0.html | # Constructing a degree 4 rational polynomial satisfying $f(\sqrt{2}+\sqrt{3}) = 0$
Goal: Find $f \in \mathbb{Q}[x]$ such that $f(\sqrt{2}+\sqrt{3}) = 0$.
A direct approach is to look at the following
\begin{align} (\sqrt{2}+\sqrt{3})^2 &= 5+2\sqrt{6} \\ (\sqrt{2}+\sqrt{3})^4 &= (5+2\sqrt{6})^2 = 49+20\sqrt{6} \\ \end{align}
Putting those together gives
$$-1 + 10(\sqrt{2}+\sqrt{3})^2 - (\sqrt{2}+\sqrt{3})^4 = 0,$$

so $f(x) = -1 + 10x^2 - x^4$ satisfies $f(\sqrt{2}+\sqrt{3}) = 0$.
Is there a more mechanical approach? Perhaps not entirely mechanical, but something more abstract.
#### Solutions Collecting From Web of "Constructing a degree 4 rational polynomial satisfying $f(\sqrt{2}+\sqrt{3}) = 0$"
There is a mechnical procedure, as follows.
Any polynomial function of $r = \sqrt 2 + \sqrt 3$ must have the form $a + b\sqrt 2 + c\sqrt 3 + d\sqrt 6$ for rational $a,b,c,d$. Consider the set of numbers of that form as a vector space $V$ over the rationals. It has dimension 4.
Now calculate $r^0, r^1, r^2, r^3, r^4$. These are five elements of the vector space $V$, and since $V$ has dimension only 4, they cannot be linearly independent. Therefore there must exist rationals $a_0,\ldots, a_4$ such that $a_4r^4 + a_3r^3 + a_2r^2 + a_1r^1 + a_0r^0 = 0$. These can be found by well-known mechanical methods for changing the basis of a vector space. Then our polynomial is $a_4x^4 + a_3x^3 + a_2x^2 + a_1x^1 + a_0$.
(There are a couple of fine points I skipped here: $a_4$ might be zero; $r^3$ might not be independent of $r^0, r^1,$ and $r^2$. None of this is hard to deal with.)
Here is an example.
Calculate powers of $r = \sqrt2 + \sqrt3$, and tabulate them:
$$\begin{array}{crrrr} % & 1 & \sqrt2 & \sqrt3 &\sqrt 6\\ %\hline r^0 = & 1 &&&\\ r^1 = & & \sqrt2 & + \sqrt3 & \\ r^2 = & 5 & && + 2\sqrt6\\ r^3 = & &11\sqrt2 &+ 9\sqrt3 \\ r^4 = & 49 &&& + 20\sqrt 6 \end{array}$$
Now we want to find rational $a,b,c,d$ such that $r^4 = ar^3 + br^2 + cr^1 + dr^0$. Such rationals must exist. (Unless $r^0\ldots r^3$ are not independent, in which case we are looking for a polynomial of lower degree, and we can use the same method with even less effort.) The relations in the table above impose relations on $a,b,c,d$ that we can read off from the table, one relation for each column:
$$\begin{array}{rrrrl} & 5b & & + d &=49\\ 11a&& + c &&= 0\\ 9a&&+c&&=0\\ &2b&&& = 20 \end{array}$$
We can solve the equations mechanically (they are particularly simple in this case; you can just read off the answer) and find that $a=0, b=10, c=0, d=-1$. So we have calculated, entirely mechanically, that $r^4 = 10r^2-1$, which means that $r$ is a zero of the polynomial $$x^4-10x^2+1.$$
(I wrote this up in detail on my blog a few years back, and just happened to use $\sqrt 2 + \sqrt 3$ as an example.)
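The table-and-solve procedure above is literally a nullspace computation. A sketch with sympy (exact rational arithmetic), where the matrix columns are just the rows of the table for $r^0, \dots, r^4$ in the basis $(1, \sqrt2, \sqrt3, \sqrt6)$:

```python
from sympy import Matrix

# Columns are r^0 .. r^4 written in the basis (1, sqrt2, sqrt3, sqrt6),
# copied straight from the table in the answer above.
M = Matrix([
    [1, 0, 5,  0, 49],   # coefficient of 1
    [0, 1, 0, 11,  0],   # coefficient of sqrt(2)
    [0, 1, 0,  9,  0],   # coefficient of sqrt(3)
    [0, 0, 2,  0, 20],   # coefficient of sqrt(6)
])

# A nullspace vector (a0, ..., a4) encodes a0 + a1*r + ... + a4*r^4 = 0.
(v,) = M.nullspace()
coeffs = list(v)   # lowest degree first
print(coeffs)      # [1, 0, -10, 0, 1], i.e. x^4 - 10 x^2 + 1 = 0
```

Five vectors in a 4-dimensional space must be dependent, so the nullspace is guaranteed to be nontrivial, exactly as argued above.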
Yes, there is a “purely mechanical” approach. Given algebraic numbers $\alpha$ and $\beta$, and monic polynomials $p_1(x)$ and $p_2(x)$ with rational coefficients, of which $\alpha$ and $\beta$ are roots, respectively, we can produce monic polynomials $p_+(x)$ and $p_\times(x)$ with rational coefficients, of which $\alpha+\beta$ and $\alpha\beta$ are roots, respectively. Moreover, if $\alpha$ and $\beta$ are algebraic integers (that is, we can take $p_1,p_2$ to have integer coefficients), then $p_+,p_\times$ have integer coefficients, so they witness that $\alpha+\beta$ and $\alpha\beta$ are algebraic integers as well. The argument is classical, but I follow below the presentation in
MR1083765 (91i:11001). Niven, Ivan; Zuckerman, Herbert S.; Montgomery, Hugh L. An introduction to the theory of numbers. Fifth edition. John Wiley & Sons, Inc., New York, 1991. xiv+529 pp. ISBN: 0-471-62546-9.
The construction is based on the following lemma:
Lemma. Given $n\ge0$, and a complex number $\xi$, suppose that the complex numbers $\theta_1,\dots,\theta_n$ are not all zero, and satisfy the equations
$$\xi\theta_j=a_{j,1}\theta_1+\dots+a_{j,n}\theta_n$$
for $j=1,2,\dots,n$. If the $n^2$ numbers $a_{j,k}$ are rational, then $\xi$ is algebraic. If they are integers, then $\xi$ is an algebraic integer.
One proves this by noticing that if $A$ is the matrix of the $a_{j,k}$ and $x$ is the vector of the $\theta_j$, then $Ax=\xi x$, so $\det(A-\xi I)=0$, and this is a monic polynomial with rational coefficients if the $a_{j,k}$ are rational, and integer coefficients if they are integers. In fact, we did better than stated in the lemma, since we obtained a witnessing polynomial rather than simply knowing the numbers are algebraic.
Using the lemma, one proceeds as follows: Suppose that $p_1$, the polynomial for $\alpha$, has degree $m$, and $p_2$, the polynomial for $\beta$, has degree $s$. Consider the $n=ms$ numbers $\alpha^a\beta^b$ with $0\le a\le m-1$ and $0\le b\le s-1$, and call them $\theta_1,\dots,\theta_n$. Note that each $\alpha\theta_j$ is a linear combination of the $\theta_k$, using rational coefficients, and similarly for $\beta\theta_j$. To see this, note that either $\alpha\theta_j$ is another $\theta_i$, or else $\theta_j=\alpha^{m-1}\beta^b$ for some $b$, but then $$\alpha\theta_j=\alpha^m\beta^b=(\alpha^m-0)\beta^b=(\alpha^m-p_1(\alpha))\beta^b,$$ which is a combination of the $\alpha^i \beta^b$ for $0\le i<m$, since $p_1$ is monic of degree $m$. The same argument applies to $\beta\theta_j$.
But then it follows that the lemma applies with both $\xi=\alpha+\beta$ and $\xi=\alpha\beta$. And this gives the result. In the case where $\alpha=\sqrt2$ and $\beta=\sqrt3$, this procedure is precisely what MJD sketched in his answer, and results in a polynomial of degree $4$ for $\sqrt2+\sqrt3$. The one thing that is not guaranteed is that in all cases the polynomial we obtain this way is minimal (that is, irreducible over the rationals) if $p_1$ and $p_2$ are minimal. It is minimal in many of the cases one encounters in practice, though. See this and this MO question for some details on when this is the case.
A ‘mechanical’ approach follows. Let $x=\sqrt{2}+\sqrt{3}$. Then $x^2=5+2\sqrt{6}$ which means $x^2-5=2\sqrt{6}$. Now $$(x^2-5)^2=24\Longrightarrow(x^2-5)^2-24=0.$$ By construction, one of the roots of $f(x)=(x^2-5)^2-24$ is $\sqrt{2}+\sqrt{3}$.
You can guess that the conjugates will be $\pm \sqrt 2 \pm \sqrt 3$, and multiply all the corresponding linear factors together.
$\rm \color{#c00}{x^2}\! = 5\!+\!2\sqrt{6} =: \color{#c00}\alpha\:\Rightarrow\: 0\, =\, (\color{#c00}{x^2\! -\! \alpha})(x^2\! -\!\alpha')\, =\, x^4\!-(\alpha\!+\!\alpha')\, x^2\! +\alpha\alpha' =\, x^4\! - 10\, x^2 + 1$
As for algorithms, one could compute the characteristic polynomial of the linear map $\rm\:x \to (\sqrt{2}\!+\!\sqrt{3}) x\:$ on the vector space $\rm\:\Bbb Q\langle 1, \sqrt{2},\sqrt{3},\sqrt{6}\rangle.\:$ Or, one could use elimination methods, e.g. resultants: if $\rm\: f(x) = 0 = g(y)\:$ then $\rm\:z = x+y\:$ is a root of any polynomial obtained by eliminating $\rm\:y\:$ from $\rm\:f(z\!-\!y)=0=g(y).\:$ A generic elimination method is by computing a resultant. A more efficient method is to employ the Grobner basis algorithm (which has the advantage of computing a minimal polynomial, via an ideal contraction).
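The resultant elimination just described is a one-liner in a computer algebra system. A sympy sketch for $\alpha = \sqrt2$, $\beta = \sqrt3$, with $f(x) = x^2 - 2$ and $g(y) = y^2 - 3$:

```python
from sympy import Symbol, expand, resultant

y = Symbol("y")
z = Symbol("z")

f = y**2 - 2   # minimal polynomial of sqrt(2); substituted at z - y below
g = y**2 - 3   # minimal polynomial of sqrt(3)

# Eliminate y from f(z - y) = 0 = g(y): the resultant with respect to y is a
# polynomial in z that vanishes at z = sqrt(2) + sqrt(3).
p = resultant(f.subs(y, z - y), g, y)
print(expand(p))  # z**4 - 10*z**2 + 1
```

Equivalently, the resultant is the product of $f(z-\beta)$ over the roots $\beta = \pm\sqrt3$ of $g$, which is exactly the "multiply all conjugate factors" idea from the previous answer.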
https://renormalization.com/tag/microcausality/ | ### Course
19S1 D. Anselmi
Theories of gravitation
Program
PDF
### Book
D. Anselmi
From Physics To Life
A journey to the infinitesimally small and back
In English and Italian
Available on Amazon:
US: book | ebook (in EN)
IT: book | ebook (in IT)
## Microcausality
The concept of fake particle, or “fakeon”, allows us to make sense of quantum gravity as an ultraviolet complete theory, by renouncing causality at very small distances. We investigate whether the violation of microcausality can be amplified or detected in the most common settings. We show that it is actually short range for all practical purposes. Due to our experimental limitations, the violation does not propagate along the light cones or by means of gravitational waves. In some cases, the universe even conspires to make the effect disappear. For example, the positivity of the Hubble constant appears to be responsible for the direction of time in the early universe.
PDF
Class. Quantum Grav. 37 (2020) 095003 | DOI: 10.1088/1361-6382/ab78d2
arXiv: 1909.12873 [gr-qc]
We point out the idea that, at small scales, gravity can be described by the standard degrees of freedom of general relativity, plus a scalar particle and a degree of freedom of a new type: the fakeon. This possibility leads to fundamental implications in understanding gravitational force at quantum level as well as phenomenological consequences in the corresponding classical theory.
PDF
Int. J. Mod. Phys. D 28 (2019) 1944007 | DOI: 10.1142/S0218271819440073
arXiv: 1905.06516 [hep-th]
Talk given at the Conference “Scale invariance in particle physics and cosmology“, CERN, on January 29th, 2019
A new quantization prescription is able to endow quantum field theory with a new type of “particle”, the fakeon (fake particle), which mediates interactions, but cannot be observed. A massive fakeon of spin 2 (together with a scalar field) allows us to build a theory of quantum gravity that is both renormalizable and unitary, and to some extent unique. After presenting the general properties of this theory, I discuss its classical limit, which carries important remnants of the fakeon quantization prescription.
PDF
Watch talk from the CERN Document Server
Under certain assumptions, it is possible to make sense of higher derivative theories by quantizing the unwanted degrees of freedom as fakeons, which are later projected away. Then the true classical limit is obtained by classicizing the quantum theory. Since quantum field theory is formulated perturbatively, the classicization is also perturbative. After deriving a number of properties in a general setting, we consider the theory of quantum gravity that emerges from the fakeon idea and study its classicization, focusing on the FLRW metric. We point out cases where the fakeon projection can be handled exactly, which include radiation, the vacuum energy density and the combination of the two, and cases where it cannot, which include dust. Generically, the classical limit shares many features with the quantum theory it comes from, including the impossibility to write down complete, “exact” field equations, to the extent that asymptotic series and nonperturbative effects come into play.
PDF
J. High Energy Phys. 04 (2019) 61 | DOI: 10.1007/JHEP04(2019)061
arXiv: 1901.09273 [gr-qc]
Hal-02368987
We elaborate on the idea of fake particle and study its physical consequences. When a theory contains fakeons, the true classical limit is determined by the quantization and a subsequent process of “classicization”. One of the major predictions due to the fake particles is the violation of microcausality, which survives the classical limit. This fact gives hope to detect the violation experimentally. A fakeon of spin 2, together with a scalar field, is able to make quantum gravity renormalizable while preserving unitarity. We claim that the theory of quantum gravity emerging from this construction is the right one. By means of the classicization, we work out the corrections to the field equations of general relativity. We show that the finalized equations have, in simple terms, the form $\langle F\rangle =ma$, where $\langle F\rangle$ is an average that includes a little bit of “future”.
PDF
Class. and Quantum Grav. 36 (2019) 065010 | DOI: 10.1088/1361-6382/ab04c8
arXiv: 1809.05037 [hep-th]
We investigate the properties of fakeons in quantum gravity at one loop. The theory is described by a graviton multiplet, which contains the fluctuation $h_{\mu \nu }$ of the metric, a massive scalar $\phi$ and the spin-2 fakeon $\chi _{\mu \nu }$. The fields $\phi$ and $\chi _{\mu \nu }$ are introduced explicitly at the level of the Lagrangian by means of standard procedures. We consider two options, where $\phi$ is quantized as a physical particle or a fakeon, and compute the absorptive part of the self-energy of the graviton multiplet. The width of $\chi _{\mu \nu }$, which is negative, shows that the theory predicts the violation of causality at energies larger than the fakeon mass. We address this issue and compare the results with those of the Stelle theory, where $\chi _{\mu \nu }$ is a ghost instead of a fakeon.
PDF
J. High Energy Phys. 11 (2018) 21 | DOI: 10.1007/JHEP11(2018)021
arXiv: 1806.03605 [hep-th]
Quantum Gravity
### Book
14B1 D. Anselmi
Renormalization
Course on renormalization, taught in Pisa in 2015. (More chapters will be added later.)
Last update: May 9th 2015, 230 pages
Available on Amazon:
Contents:
Preface
1. Functional integral
2. Renormalization
3. Renormalization group
4. Gauge symmetry
5. Canonical formalism
6. Quantum electrodynamics
7. Non-Abelian gauge field theories
Notation and useful formulas
References
PDF
https://publications.hse.ru/en/articles/192206132 | • A
• A
• A
• ABC
• ABC
• ABC
• А
• А
• А
• А
• А
Regular version of the site
## On the construction of unitary quantum group differential calculus
Journal of Physics A: Mathematical and Theoretical. 2016. Vol. 49. No. 41. P. 415202-(25pp).
We develop a construction of the unitary type anti-involution for the quantized differential calculus over $GL_q(n)$ in the case $|q| = 1$. To this end, we consider a joint associative algebra of quantized functions, differential forms and Lie derivatives over $GL_q(n)/SL_q(n)$, which is bicovariant with respect to $GL_q(n)/SL_q(n)$ coactions. We define a specific non-central spectral extension of this algebra by the spectral variables of three matrices of the algebra generators. In the spectrally extended algebra, we construct a three-parametric family of its inner automorphisms. These automorphisms are used for the construction of the unitary anti-involution for the (spectrally extended) calculus over $GL_q(n)$.
https://enwiki.academic.ru/dic.nsf/enwiki/16999 | # String (computer science)
String (computer science)
In formal languages, which are used in mathematical logic and theoretical computer science, a string is a finite sequence of symbols that are chosen from a set or alphabet.
In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and/or the length changed, or it may be fixed (after creation). A string is generally understood as a data type and is often implemented as a byte (or word) array that stores a sequence of elements, typically characters, using some character encoding. A string may also denote more general array data types and/or other sequential data types and structures; terms such as byte string, or more generally, string of datatype, or datatype-string, are sometimes used to denote strings in which the stored data does not (necessarily) represent text.
Depending on the programming language and/or precise datatype used, a variable declared to be a string may either cause storage in memory to be statically allocated for a predetermined maximum length or employ dynamic allocation to allow it to hold a varying number of elements. When a string appears literally in source code, it is known as a string literal and has a representation that denotes it as such.
## Formal theory
Let Σ be an alphabet, a non-empty finite set. Elements of Σ are called symbols or characters. A string (or word) over Σ is any finite sequence of characters from Σ. For example, if Σ = {0, 1}, then 0101 is a string over Σ.
The length of a string is the number of characters in the string (the length of the sequence) and can be any non-negative integer. The empty string is the unique string over Σ of length 0, and is denoted ε or λ.
The set of all strings over Σ of length n is denoted Σⁿ. For example, if Σ = {0, 1}, then Σ² = {00, 01, 10, 11}. Note that Σ⁰ = {ε} for any alphabet Σ.

The set of all strings over Σ of any length is the Kleene closure of Σ and is denoted Σ*. In terms of Σⁿ,

$\Sigma^{*} = \bigcup_{n \in \mathbb{N}} \Sigma^{n}$
For example, if Σ = {0, 1}, Σ* = {ε, 0, 1, 00, 01, 10, 11, 000, 001, 010, 011, …}. Although Σ* itself is countably infinite, all elements of Σ* have finite length.
A set of strings over Σ (i.e. any subset of Σ*) is called a formal language over Σ. For example, if Σ = {0, 1}, the set of strings with an even number of zeros ({ε, 1, 00, 11, 001, 010, 100, 111, 0000, 0011, 0101, 0110, 1001, 1010, 1100, 1111, …}) is a formal language over Σ.
### Concatenation and substrings
Concatenation is an important binary operation on Σ*. For any two strings s and t in Σ*, their concatenation is defined as the sequence of characters in s followed by the sequence of characters in t, and is denoted st. For example, if Σ = {a, b, …, z}, s = bear, and t = hug, then st = bearhug and ts = hugbear.
String concatenation is an associative, but non-commutative operation. The empty string serves as the identity element; for any string s, εs = sε = s. Therefore, the set Σ* and the concatenation operation form a monoid, the free monoid generated by Σ. In addition, the length function defines a monoid homomorphism from Σ* to the non-negative integers.
A string s is said to be a substring or factor of t if there exist (possibly empty) strings u and v such that t = usv. The relation "is a substring of" defines a partial order on Σ*, the least element of which is the empty string.
### Lexicographical ordering
It is often useful to define an ordering on a set of strings. If the alphabet Σ has a total order (cf. alphabetical order) one can define a total order on Σ* called lexicographical order. For example, if Σ = {0, 1} and 0 < 1, then the lexicographical order on Σ* includes the relationships ε < 0 < 00 < 000 < … < 0001 < 001 < 01 < 010 < 011 < 0110 < 01111 < 1 < 10 < 100 < 101 < 111 < 1111 < 11111 …
### String operations
A number of additional operations on strings commonly occur in the formal theory. These are given in the article on string operations.
### Topology
Strings admit the following interpretation as nodes on a graph:
• Fixed-length strings can be viewed as nodes on a hypercube
• Variable-length strings (of finite length) can be viewed as nodes on the k-ary tree, where k is the number of symbols in Σ
• Infinite strings can be viewed as infinite paths on the k-ary tree.
The natural topology on the set of fixed-length strings or variable length strings is the discrete topology, but the natural topology on the set of infinite strings is the limit topology, viewing the set of infinite strings as the inverse limit of the sets of finite strings. This is the construction used for the p-adic numbers and some constructions of the Cantor set, and yields the same topology.
## String datatypes
A string datatype is a datatype modeled on the idea of a formal string. Strings are such an important and useful datatype that they are implemented in nearly every programming language. In some languages they are available as primitive types and in others as composite types. The syntax of most high-level programming languages allows for a string, usually quoted in some way, to represent an instance of a string datatype; such a meta-string is called a literal or string literal.
### String length
Although formal strings can have an arbitrary (but finite) length, the length of strings in real languages is often constrained to an artificial maximum. In general, there are two types of string datatypes: fixed-length strings, which have a fixed maximum length and which use the same amount of memory whether this maximum is reached or not, and variable-length strings, whose length is not arbitrarily fixed and which use varying amounts of memory depending on their actual size. Most strings in modern programming languages are variable-length strings. Despite the name, even variable-length strings are limited in length, although, in general, the limit depends only on the amount of memory available. The string length can be stored as a separate integer (which puts a theoretical limit on the length) or implicitly through a termination character, usually a character value with all bits zero. See also "Null-terminated" below.
### Character encoding
String datatypes have historically allocated one byte per character, and, although the exact character set varied by region, character encodings were similar enough that programmers could often get away with ignoring this — since characters a program treated specially (such as period and space and comma) were in the same place in all the encodings a program would encounter. These character sets were typically based on ASCII or EBCDIC.
Logographic languages such as Chinese, Japanese, and Korean (known collectively as CJK) need far more than 256 characters (the limit of a one 8-bit byte per-character encoding) for reasonable representation. The normal solutions involved keeping single-byte representations for ASCII and using two-byte representations for CJK ideographs. Use of these with existing code led to problems with matching and cutting of strings, the severity of which depended on how the character encoding was designed. Some encodings such as the EUC family guarantee that a byte value in the ASCII range will represent only that ASCII character, making the encoding safe for systems that use those characters as field separators. Other encodings such as ISO-2022 and Shift-JIS do not make such guarantees, making matching on byte codes unsafe. These encodings also were not "self-synchronizing", so that locating character boundaries required backing up to the start of a string, and pasting two strings together could corrupt the second string (these problems were much less severe with EUC, as any ASCII character re-synchronized the encoding).
Unicode has simplified the picture somewhat. Most programming languages have a datatype for Unicode strings (usually UTF-16, as it was typically added before the Unicode supplementary planes were introduced). Unicode's preferred byte-stream format UTF-8 is designed not to have the problems described above for older multibyte encodings. UTF-8, UTF-16 and UTF-32 all require the programmer to know that the fixed-size code units are different from the "characters"; the main difficulty currently is incorrectly designed APIs that attempt to hide this difference.
### Implementations
Some languages like C++ implement strings as templates that can be used with any datatype, but this is the exception, not the rule.
Some languages, such as C++ and Ruby, normally allow the contents of a string to be changed after it has been created; these are termed mutable strings. In other languages, such as Java and Python, the value is fixed and a new string must be created if any alteration is to be made; these are termed immutable strings.
Strings are typically implemented as arrays of characters, in order to allow fast access to individual characters. A few languages such as Haskell implement them as linked lists instead.
Some languages, such as Prolog and Erlang, avoid implementing a dedicated string datatype at all, instead adopting the convention of representing strings as lists of character codes.
### Representations
Representations of strings depend heavily on the choice of character repertoire and the method of character encoding. Older string implementations were designed to work with repertoire and encoding defined by ASCII, or more recent extensions like the ISO 8859 series. Modern implementations often use the extensive repertoire defined by Unicode along with a variety of complex encodings such as UTF-8 and UTF-16.
Most string implementations are very similar to variable-length arrays with the entries storing the character codes of corresponding characters. The principal difference is that, with certain encodings, a single logical character may take up more than one entry in the array. This happens for example with UTF-8, where single characters can take anywhere from one to four bytes. In these cases, the logical length of the string (number of characters) differs from the logical length of the array (number of bytes in use). UTF-32 is the only Unicode encoding that avoids this problem.
#### Null-terminated
The length of a string can be stored implicitly by using a special terminating character; often this is the null character (NUL), which has all bits zero, a convention used and perpetuated by the popular C programming language.[1] Hence, this representation is commonly referred to as a C string. The length of a string can also be stored explicitly, for example by prefixing the string with the length as a byte value — a convention used in many Pascal dialects; as a consequence, some people call it a P-string. Storing the string length as a byte limits the maximum string length to 255. To avoid such limitations, improved implementations of P-strings use 16-, 32-, or 64-bit words to store the string length. When the length field covers the address space, strings are limited only by the available memory.
In terminated strings, the terminating code is not an allowable character in any string. Strings with a length field do not have this limitation and can also store arbitrary binary data. In C, two things are needed to handle binary data: a character pointer and the length of the data.
The term bytestring usually indicates a general-purpose string of bytes — rather than strings of only (readable) characters, strings of bits, or such. Byte strings often imply that bytes can take any value and any data can be stored as-is, meaning that there should be no value interpreted as a termination value.
Here is an example of a null-terminated string stored in a 10-byte buffer, along with its ASCII (or more modern UTF-8) representation as 8-bit hexadecimal numbers:
| F | R | A | N | K | NUL | k | e | f | w |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 46h | 52h | 41h | 4Eh | 4Bh | 00h | 6Bh | 65h | 66h | 77h |
The length of the string in the above example, "FRANK", is 5 characters, but it occupies 6 bytes. Characters after the terminator do not form part of the representation; they may be either part of another string or just garbage. (Strings of this form are sometimes called ASCIZ strings, after the original assembly language directive used to declare them.)
#### Length-prefixed
Here is the equivalent (old style) Pascal string stored in a 10-byte buffer, along with its ASCII / UTF-8 representation:
| length | F | R | A | N | K | k | e | f | w |
|--------|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 5dec | 46h | 52h | 41h | 4Eh | 4Bh | 6Bh | 65h | 66h | 77h |
#### Object oriented
An object oriented language will typically implement a string like this:
class string {
int length;
char *text;
};
although this implementation is hidden and accessed through member functions. The "text" will be a dynamically allocated memory area that might be expanded if needed. See also string (C++).
Both character termination and length codes limit strings: for example, C character arrays that contain null (NUL) characters cannot be handled directly by C string library functions, and strings using a length code are limited to the maximum value of the length code.
Both of these limitations can be overcome by clever programming, of course, but such workarounds are by definition not standard.
Rough equivalents of the C termination method have historically appeared in both hardware and software. For example, "data processing" machines like the IBM 1401 used a special word mark bit to delimit strings at the left, where the operation would start at the right. This meant that, while the IBM 1401 effectively had a seven-bit word, almost no one ever thought to use this as a feature and override the assignment of the seventh bit to, for example, handle ASCII codes.
It is possible to create data structures, and functions that manipulate them, that do not have the problems associated with character termination and can in principle overcome length-code bounds. It is also possible to optimize the string representation using techniques from run-length encoding (replacing repeated characters by the character value and a length) and Huffman coding.
While these representations are common, others are possible. Using ropes makes certain string operations, such as insertions, deletions, and concatenations more efficient.
## Text file strings
In computer-readable text files, for example programming language source files or configuration files, strings also need to be represented. The NUL byte is normally not used as a terminator, since that does not correspond to the ASCII text standard, and the length is usually not stored, since the file should remain editable by humans.
Two common representations are:
• Surrounded by quotation marks (ASCII 22h), used by most programming languages. To be able to include quotation marks, newline characters etc, escape sequences are often available, usually using the backslash character (ASCII 5Ch).
• Terminated by a newline sequence, for example in Windows INI files.
## Non-text strings
While character strings are very common uses of strings, a string in computer science may refer generically to any sequence of homogeneously typed data. A string of bits or bytes, for example, may be used to represent non-textual binary data retrieved from a communications medium. This data may or may not be represented by a string-specific datatype, depending on the needs of the application, the desire of the programmer, and the capabilities of the programming language being used.
## String processing algorithms
There are many algorithms for processing strings, each with various trade-offs. Common categories include string searching, string manipulation, and sorting algorithms.
Advanced string algorithms often employ complex mechanisms and data structures, among them suffix trees and finite state machines.
## Character string oriented languages and utilities
Character strings are such a useful datatype that several languages have been designed in order to make string processing applications easy to write; well-known examples include awk, Perl, sed, and SNOBOL.
Many UNIX utilities perform simple string manipulations and can be used to easily program some powerful string processing algorithms. Files and finite streams may be viewed as strings.
Some APIs like Multimedia Control Interface, embedded SQL or printf use strings to hold commands that will be interpreted.
Recent scripting programming languages, including Perl, Python, Ruby, and Tcl employ regular expressions to facilitate text operations.
Some languages such as Perl and Ruby support string interpolation, which permits arbitrary expressions to be evaluated and included in string literals.
## Character string functions
String functions are used to manipulate a string or change or edit the contents of a string. They also are used to query information about a string. They are usually used within the context of a computer programming language.
The most basic example of a string function is the length(string) function, which returns the length of a string (not counting any terminator characters or any of the string's internal structural information) and does not modify the string. For example, length("hello world") returns 11.
There are many string functions that exist in other languages with similar or exactly the same syntax or parameters. For example, in many languages, the length function is usually represented as len(string). Even though string functions are very useful to a computer programmer, a computer programmer using these functions should be mindful that a string function in one language could in another language behave differently or have a similar or completely different function name, parameters, syntax, and results.
## See also

• Connection string
• Rope
• Bitstring
• Improper input validation
• Incompressible string
• Empty string
• Formal language
• String metric
• string (C++)
• string.h
## References
1. ^ Bryant, Randal E.; O'Hallaron, David (2003), Computer Systems: A Programmer's Perspective, Upper Saddle River, NJ: Pearson Education, p. 40, ISBN 0-13-034074-X
Wikimedia Foundation. 2010.
https://shantanugoel.com/2010/07/26/firefly-sqlite-error-unable-to-open-database-file-solution/ | # Firefly / sqlite error "unable to open database file" Solution
Recently I came across a weird error while trying to run firefly itunes server (mt-daapd) on my router (Asus wl-500w). It had something to do with sqlite and gave a vague message “Unable to open database file”. After going bonkers for a short time, I solved it and this is how.
One of my hard disks crashed recently and unfortunately it was the one I had connected to my router to serve media to me all over the house (through PS3/laptop) or when I travel (through laptop/phone). I had all the data backed up but somehow didn’t preserve the firefly server. I rebuilt the server from source using my own guide (Thank God I did it. I wouldn’t have been able to preserve my sanity finding all that out the hard way again.). But after doing all the installation and reconfiguration, it gave me a weird error “unable to open database file” every time and exited. I checked the permissions on the songs3.db file (in /opt/var/cache/mt-daapd for me) and made it writable by all but the issue persisted. I changed its ownership to the user under which firefly was running but the issue was still there. Finally I found that the server (or maybe its an sqlite thing) was trying to create a temp file in the cache directory for the transactions and since the user with which it was started, didn’t own the directory it wasn’t able to create the file in it.
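The underlying cause is that sqlite needs write permission on the directory itself, so it can create its temp/journal file next to songs3.db. A sketch of the situation using a scratch directory (the real path on the router was /opt/var/cache/mt-daapd; the chown line is the fix quoted below and assumes root):

```shell
# Scratch reproduction of the permissions problem; the real path on the
# router was /opt/var/cache/mt-daapd, and the daemon ran as a non-root user.
dir=$(mktemp -d)
touch "$dir/songs3.db"
chmod 666 "$dir/songs3.db"   # making the .db world-writable was NOT enough
ls -ld "$dir"                # sqlite also needs write access to the directory,
                             # where it creates its temp/journal file
# The actual fix from this post (as root):
#   chown <username> /opt/var/cache/mt-daapd
rm -rf "$dir"
```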
So, the fix: I did a chown <username> /opt/var/cache/mt-daapd on it and voila! the problem was fixed. I am a happy man now since I own a new android phone since last time and am now able to stream all my music to my phone through the itunes (daap protocol) server over an ssh tunnel :)
https://www.esaral.com/q/evaluate-the-following-integrals-57507

# Evaluate the following integrals:
Question:
Evaluate the following integrals:
$\int \frac{\tan x}{\sqrt{\cos x}} d x$
Solution:
We know $d(\cos x)=-\sin x\,dx$, and $\tan x$ can be written in terms of $\cos x$ and $\sin x$:

$\tan x=\frac{\sin x}{\cos x}$
$\therefore$ The given equation can be written as
$\Rightarrow \int \frac{\sin x}{\cos x \sqrt{\cos x}} d x$
$\Rightarrow \int \frac{\sin x}{\cos^{3/2} x}\,dx$
Now assume $\cos x=t$

$d(\cos x)=dt$, i.e. $-\sin x\,dx=dt$

$\sin x\,dx=-dt$
Substitute values of $\mathrm{t}$ and $\mathrm{dt}$ in above equation
$\Rightarrow \int \frac{-dt}{t^{3/2}}$

$\Rightarrow-\int t^{-3/2}\,dt$

$\Rightarrow 2 t^{-1/2}+c$

$\Rightarrow 2 \cos^{-1/2} x+c$
$\Rightarrow \frac{2}{\sqrt{\cos x}}+C$
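As a quick check (added here; not part of the original solution), differentiating the result recovers the integrand:

$\frac{d}{dx}\left(\frac{2}{\sqrt{\cos x}}\right)=2\cdot\left(-\frac{1}{2}\right)(\cos x)^{-3/2}\cdot(-\sin x)=\frac{\sin x}{\cos^{3/2} x}=\frac{\tan x}{\sqrt{\cos x}}$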
http://www.sciforums.com/threads/what-is-the-most-important-subject-taught-in-school.75431/page-3 | # What is the most important subject taught in school?
Discussion in 'Science & Society' started by Kadark, Dec 22, 2007.
1. ### USS Exeter (unamerican american), Registered Senior Member
I pressed you too in the Blackwater thread, and you went silent. We both have something against each other, so let's just start over, please. :truce:
I honestly don't remember that - I may have missed your last post because it isn't my nature at all to leave something unchallenged.
But I'm really not a bad guy, so I'll agree to let both slide and accept the truce.
5. ### Letticia, Registered Senior Member
The most important subject ACTUALLY TAUGHT in school, or the most important subject that SHOULD BE TAUGHT in school?
I think the most important thing to teach children is how to think critically. But it is rarely taught. In fact, a lot of schools actively suppress critical thinking.
Yes, that's an excellent point! But it still doesn't matter much if the kids haven't learned to read (ranking by comparison).
8. ### invert_nexus (Ze do caixao), Valued Senior Member
The most important thing to teach a child is how to learn.
9. ### Till Eulenspiegel, Registered Member
Reading is the most important subject taught in school. If you can read you can learn on your own even after leaving school. Knowing how to read means you can find information that has been written by other people. All the knowledge of the world is available to you.
10. ### USS Exeter (unamerican american), Registered Senior Member
We have reached a conclusion: reading, writing, and math are the most important subjects taught in school. Anything that comes after is purely opinion.

In my view, it really depends on how subjects are taught to a student. If they are taught in the right way, the kid will be able to develop the skills for critical thinking, such as understanding concepts and principles.
Exactly. If someone has the ability to read, their education never has to end. (Unfortunately, there ARE those who are too lazy to read but that's beyond the scope of this discussion.)
12. ### invert_nexus (Ze do caixao), Valued Senior Member
Learning comes first. And is often neglected.
13. ### John99, Banned
Should reading even be a subject? Seems to me it would be better taught at home, before the child enters school.
Sure it should be. And yes, a lot of parents take the time to start the process before school starts - but that's only the very basic beginning.
15. ### John99, Banned
Yes. At the same time I believe formal education should start much later in life, about 16 years old. Disseminate child like connotation.
16. ### S.A.M. (uniquely dreadful), Valued Senior Member
You have got to be kidding.
I think kids should be encouraged to learn from babyhood. Make learning fun and exciting. Encourage reading, writing, drawing, coloring, curiosity and games that help in challenging concepts. Thats what I had.
17. ### Fraggle Rocker, Staff Member
A child's brain continues to develop physiologically for many years after birth. A cognitive skill must be present in order for any learning that exploits that particular skill to proceed. Both pattern recognition and the correlation of symbols with the thoughts or objects they represent are required before reading can be studied effectively, and in addition hand-to-eye coordination must be mastered before writing can be tackled. In my youth it was felt that age six was the earliest at which those studies were worth bothering. I vividly remember learning to print my name at age five, but it was merely a laborious exercise in drawing and I had no idea of the phonetic correlations involved--and I was a "gifted" student who excelled at reading a year later and now make a living as a writer.
I understand that we're all different and some children are capable of learning to read sooner than others, and those who are should be identified and encouraged. But I doubt that the majority are getting enough out of the time and effort to give up that precious year of play. Kids grow up way too fast anyway, why push it? I always thought kindergarten--and now pre-K for four-year-olds--was a convenient dumping ground for children who have access to no other adults during the daytime.
In any case, literacy is not a simple skill, as evidenced by the average 21st-century American university graduate's ability to read at what my generation called the sixth-grade level, and the proliferation of remedial English classes for college freshmen. Children must continue to be taught to read and write throughout the K-12 years.
18. ### Till Eulenspiegel, Registered Member
Yes, reading should be a subject. It should be taught right through junior high school. During the elementary school grades it should be taught for at least an hour per day.
Reading isn't simply being able to read words. That is decoding, a part of reading. Reading is decoding, understanding, and recognizing context clues, along with other distinct subsets of skills.
While parents can teach a child to read at home few of them can teach the nuances and subskills of reading that are so important.
The importance of reading is the reason most elementary schools have separate reading consultants: teachers who have taken extra courses leading to a degree in teaching reading.
19. ### oreodont (I am God), Registered Senior Member
'Schools' have taken on a mythical role in western cultures. Formal education has monopolized much of what we consider to be learning. Much of the 'stuff' taught is self-fulfilling and circular. After basic efficiency in reading and arithmetic, most 'schooling' is overplayed in importance. Learning is definitely important but there's poor return on the thousands of hours spent in a formal classroom. Education is an industry. Kids are bundled up at the age of five (or whatever) and thought to be sent off 'to learn', when in fact, they are more likely sent off because that is just the societal norm and expectation.
With a decline in birthrates in western countries, there may be a decline in the emphasis on formal education in our culture. In fifty years the choices of 'learning' will be more eclectic...much more home schooling, informal neighborhood groups and so on. There will still be a massive child formal education system but 'less massive'. 'Learning' and 'School' won't be as tied together as they are in today's mindset.
Messages:
10,296
That's a pretty distorted view of the way things currently are and an even worse distortion of what's to come. You are simply not in full touch with reality.
21. ### cosmictraveler · Be kind to yourself always. · Valued Senior Member
Messages:
33,264
What all of the girls learned from me about sex in High School!!
Messages:
301
Typing.
23. ### Fraggle Rocker · Staff Member
Messages:
24,690
I'm surprised to hear anyone say that! Perhaps you're not an American old enough to be a great-grandfather. All you have to do is look around. Adults with high school diplomas have no idea where any state is except their own and a couple of neighbors. They think Canada is a state. They can't find any European country on a map and they don't even know where to look for India or Japan. They can't make change for a dollar without a POS terminal. They can't figure out that $3000 per month is an exorbitant mortgage payment on a $300,000 house with a calculator, much less by rough-order-of-magnitude numeracy. They have no idea who our allies and enemies were in WWI, WWII, Korea or Vietnam, don't know why millions of Jews moved to Israel and why millions of Muslims hate them for it.
But worst of all--and the explanation for many of the problems cited above--a large portion of the high school graduates who are accepted for admission to college CAN'T READ! Colleges had to establish remedial English classes for them. And the old garbage-in-garbage-out rule still applies. When those people finally roll off the other end of the assembly line that passes for education, with university diplomas, their average reading level is what in my generation was called SIXTH GRADE. They can't read for pleasure, they get their news from TV ("The News For People Who Can't Read"), and office procedure manuals have to be so dumbed-down that they read like Dr. Seuss books.
So don't go telling us old-timers how great the American education system is. Employers are screaming that most job applicants are unqualified for ANYTHING!
That's for sure. My mother made me take it in high school back when all we had were manual typewriters. That may have been the only smart advice she gave me. I got a big kick out of all the girls who refused to learn to type because they didn't want to grow up to be clerks and secretaries like their moms... and then the world changed and now everybody spends their entire workday (and much of their free time) huddled over a keyboard.
My wife went back to finish her degree after we got married and I typed all of her undergraduate and graduate papers. (A great way to learn stuff in somebody else's field!) She finally had to give up and do her own when she started working on her master's thesis, but fortunately by then PCs with word processing had been invented. | 2019-09-18 23:56:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26235947012901306, "perplexity": 2297.3972744747257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573385.29/warc/CC-MAIN-20190918234431-20190919020431-00275.warc.gz"} |
https://www.semanticscholar.org/paper/Holographic-M5-branes-in-%24AdS_7%5Ctimes-S%5E4%24-Gupta/41b20728e2c9da76f5da823524de6de0ecff21b8 | Corpus ID: 237562807
# Holographic M5 branes in $AdS_7\times S^4$
@inproceedings{Gupta2021HolographicMB,
title={Holographic M5 branes in \$AdS\_7\times S^4\$},
author={Varun Gupta},
year={2021}
}
• Varun Gupta
• Published 17 September 2021
• Physics
We study classical M5 brane solutions in the probe limit in the AdS7 × S4 background geometry that preserve the minimal amount of supersymmetry. These solutions describe the holography of codimension-2 defects in the 6d boundary dual N = (0, 2) supersymmetric gauge theories. The general solution is described in terms of holomorphic functions that satisfy a scaling condition. We show the behavior of the world-volume of a special class of BPS solutions near the AdS boundary region can be…
#### References
SHOWING 1-10 OF 19 REFERENCES
1/2-BPS states in M theory and defects in the dual CFTs
We study supersymmetric branes in AdS7 × S4 and AdS4 × S7. We show that in the former case the membranes should be viewed as M5 branes with fluxes and we identify two types of such fivebranes (they…
Type IIB Superstrings, BPS Monopoles, And Three-Dimensional Gauge Dynamics
• Physics
• 1996
We propose an explanation via string theory of the correspondence between the Coulomb branch of certain three-dimensional supersymmetric gauge theories and certain moduli spaces of magnetic…
Counting wobbling dual-giants
• Physics
• 2009
We derive the BPS equations for D3-branes embedded in AdS5 × S5 that preserve at least two supercharges. These are given in terms of conditions on the pullbacks of some space-time differential…
SOLUTIONS OF FOUR-DIMENSIONAL FIELD THEORIES VIA M-THEORY
N = 2 supersymmetric gauge theories in four dimensions are studied by formulating them as the quantum field theories derived from configurations of fourbranes, fivebranes, and sixbranes in Type IIA…
Surface operators and separation of variables
• Physics, Mathematics
• 2015
Abstract: Alday, Gaiotto, and Tachikawa conjectured relations between certain 4d N = 2 supersymmetric field theories and 2d Liouville conformal field theory. We study generalizations of these…
Probing N=4 SYM With Surface Operators
• Physics
• 2008
In this paper we study surface operators in N = 4 supersymmetric Yang-Mills theory. We compute surface operator observables, such as the expectation value of surface operators, the correlation…
Gauge Theory, Ramification, And The Geometric Langlands Program
• Mathematics, Physics
• 2006
In the gauge theory approach to the geometric Langlands program, ramification can be described in terms of "surface operators," which are supported on two-dimensional surfaces somewhat as Wilson or…
Giant gravitons from holomorphic surfaces
We introduce a class of supersymmetric cycles in spacetimes of the form AdS times a sphere or T^{1,1} which can be considered as generalizations of the giant gravitons. Branes wrapped on these cycles…
Invasion of the giant gravitons from Anti-de Sitter space
• Physics
• 2000
It has been known for some time that the AdS/CFT correspondence predicts a limit on the number of single particle states propagating on the compact spherical component of the AdS × S geometry. The…
M5-branes and Wilson surfaces in AdS$_{7}$/CFT$_{6}$ correspondence
• Physics
• 2014
We study AdS$_{7}$/CFT$_{6}$ correspondence between M-theory on AdS$_{7} \times S^{4}$ and the 6D $\mathcal{N} = (2,0)$ superconformal field theory. In particular we focus on Wilson surfaces. We useExpand | 2021-12-01 12:00:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49831515550613403, "perplexity": 1926.8060232294704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360803.0/warc/CC-MAIN-20211201113241-20211201143241-00104.warc.gz"} |
https://www.gradesaver.com/textbooks/math/trigonometry/trigonometry-7th-edition/chapter-2-section-2-3-solving-right-triangles-2-3-problem-set-page-80/37 | ## Trigonometry 7th Edition
Chapter 2 - Section 2.3 Problem Set: 37 (Answer) Refer to Figure I $A = 50.12^{\circ}$ (To four significant digits) $B = 39.88^{\circ}$ (To four significant digits) $a = 451.6$ inches (To four significant digits)
Chapter 2 - Section 2.3 Problem Set: 37 (Solution) Refer to Figure I. $\cos A = \frac{b}{c}$, so $A = \cos^{-1} \left(\frac{377.3}{588.5}\right) = 50.12^{\circ}$ (to four significant digits). $\sin B = \frac{b}{c}$, so $B = \sin^{-1} \left(\frac{377.3}{588.5}\right) = 39.88^{\circ}$ (to four significant digits). By Pythagoras' Theorem, $a^2 + b^2 = c^2$, so $a^2 = 588.5^2 - 377.3^2$, giving $a = \sqrt{588.5^2 - 377.3^2} = 451.6$ inches (to four significant digits).
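The arithmetic in this solution is easy to spot-check numerically. A quick sketch (side names follow the figure: b = 377.3 adjacent to A, c = 588.5 the hypotenuse):

```python
import math

b, c = 377.3, 588.5                   # given sides
A = math.degrees(math.acos(b / c))    # cos A = b/c
B = math.degrees(math.asin(b / c))    # sin B = b/c
a = math.sqrt(c**2 - b**2)            # Pythagoras: a^2 + b^2 = c^2

print(round(A, 2), round(B, 2), round(a, 1))  # 50.12 39.88 451.6
```

Note that A + B = 90°, as it must in a right triangle, which is a second consistency check on the answer.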
https://paperswithcode.com/paper/critical-point-finding-methods-reveal | # Critical Point-Finding Methods Reveal Gradient-Flat Regions of Deep Network Losses
23 Mar 2020 · Charles G. Frye, James Simon, Neha S. Wadia, Andrew Ligeralde, Michael R. DeWeese, Kristofer E. Bouchard
Despite the fact that the loss functions of deep neural networks are highly non-convex, gradient-based optimization algorithms converge to approximately the same performance from many random initial points. One thread of work has focused on explaining this phenomenon by characterizing the local curvature near critical points of the loss function, where the gradients are near zero, and demonstrating that neural network losses enjoy a no-bad-local-minima property and an abundance of saddle points...
PDF Abstract | 2020-07-04 12:22:16 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8642445802688599, "perplexity": 2294.123439898655}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886121.45/warc/CC-MAIN-20200704104352-20200704134352-00490.warc.gz"} |
http://www.kjm-math.org/?_action=article&au=468926&_au=-,%20Deepmala | Author = -, Deepmala
##### On Approximation of Functions Belonging to some Classes of Functions by $(N,p_n,q_n)(E,\theta )$ Means of Conjugate Series of Its Fourier Series
Volume 6, Issue 1, January 2020, Pages 73-86
Xhevat Zahir Krasniqi; Deepmala - | 2022-11-27 02:44:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41558364033699036, "perplexity": 2533.6325794716317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710155.67/warc/CC-MAIN-20221127005113-20221127035113-00796.warc.gz"} |
https://www.physicsforums.com/threads/water-engine-john-kanzius.226873/ | # WATER ENGINE - John Kanzius
1. Apr 6, 2008
### zinedine_88
hey
i just came across these videos on you tube
i wonder how come that invention hasn't changed our world YET
Does the petroleum INDUSTRY have a future... ( i am in petroleum engineering :((((((((((((((((((
also can anybody explain how come the radio waves break the H-O bond in water and it LIGHTS up.... I thought that in COMBUSTION the PRODUCTS ARE ALWAYS CO2 and H2O
that's weird... and WHY do we need salt water since these radio waves can break that O-H bond in DISTILLED WATER AS WELL... why is the salt so important? and WHAT ARE THE BYPRODUCTS OF THAT REACTION SINCE THERE IS NO CO2???
WHAT DO U THINK GUYS...
i think i have to change my major... i don't wanna be without a job one day...
how come the flame is cold if you touch it and super hot when attached to another material...
also when he burns that white surface... and shows the water drops... How are they actually created? -
and when he shows that he drives his car with water.... i am asking myself... WHY IN THE WORLD ARE SUCH ENGINES NOT BUILT YET?????
they show it is done.. water is FUEL... the answer is found...
what takes them so long before starting mass production?
please ponder upon these questions and explain to me WHY )
thanks
Last edited by a moderator: Apr 6, 2008
2. Apr 6, 2008
### OmCheeto
I haven't done the experiment, but I believe the outcome would look something like this:
A 1.0 kilo-watt microwave beam is pointed at a vial of saltwater. The water splits into hydrogen and oxygen atoms, which when burned, generate 0.1 kilo-watt of thermal energy.
I would keep your day job if I were you.
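The hypothetical above makes the point directly. A throwaway sketch of the round-trip energy balance (the 1.0 kW / 0.1 kW figures are OmCheeto's illustrative numbers, not measurements):

```python
rf_power_in_kw = 1.0   # power spent splitting the saltwater with RF
heat_out_kw = 0.1      # thermal power recovered by burning the resulting H2/O2

net_kw = heat_out_kw - rf_power_in_kw
efficiency = heat_out_kw / rf_power_in_kw

# A net loss: the "engine" consumes far more energy than it releases.
print(f"net power: {net_kw:+.1f} kW, round-trip efficiency: {efficiency:.0%}")
```

Since the split-then-burn loop just re-forms the water you started with, thermodynamics guarantees the output can never exceed the input; the only question is how much is lost along the way.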
3. Apr 6, 2008
### rohanprabhu
2. @OmCheeto: +1
3. $\textrm{CO}_2$ is produced on combustion when organic compounds combust [or other compounds which have carbon in them]. Combustion reactions are basically oxidation reactions only. The oxidation of a Magnesium strip is a lot like combustion, but neither $\textrm{H}_2\textrm{O}$ nor $\textrm{CO}_2$ is produced.
4. I don't know if this has anything to do with ionic phenomena, but if it does.. then salt water is necessary. If it doesn't... and as you said that this happens with distilled water too.. then the point I think is that distilled water isn't easy to find. However, we have salt water in abundance which we can use to create energy.
4. Apr 6, 2008 | 2016-10-23 03:11:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3185807168483734, "perplexity": 2605.991215738287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719139.8/warc/CC-MAIN-20161020183839-00342-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://math.eretrandre.org/tetrationforum/showthread.php?tid=262&pid=5327&mode=threaded | • 1 Vote(s) - 5 Average
• 1
• 2
• 3
• 4
• 5
tetration limit ?? · sheldonison · Long Time Fellow · Posts: 641 · Threads: 22 · Joined: Oct 2008 · 10/31/2010, 03:32 PM (This post was last modified: 10/31/2010, 05:15 PM by sheldonison.)

(04/07/2009, 01:35 AM) nuninho1980 Wrote:

bo198214 Wrote: Oh, you mean we have an upper fixed point of the tetrational for $b\le 1.6353244967$ and the fixed point can then be computed by $\operatorname{slog}_b(x)=x$ or ${^x b} = x$. Ya, interesting. I don't know whether we even have a thread on the forum that dealt with the topic of the fixed point of tetrationals. Of course there may always be a dependency of the values on the chosen method of tetration.

slog_b(x) = x <=> b ^^ x = x => b ^^^ oo = x yeah! to remember: one - 1^oo = 1; Euler - (e^(1/e)) ^^ oo = e; now new fixed point (1.63532...) ^^^ oo ~= (3.08855...). what is this new result? it's a Super-Euler?

bo198214 Wrote: So how big is the difference between both methods, with respect to the computed fixed point?

to remember - you follow "regular slog" - http://en.wikipedia.org/wiki/Talk:Tetrat...on_methods lol

I calculated Nuninho's constant to 32 decimal digits of precision using my latest kneser.gp program. For bases around 1.6, it takes about 70 seconds to generate sexp accurate to 32 decimal digits. Then we generate the Taylor series, centered around 3.0. Then we generate the Taylor series for sexp'(x), centered around 3.0. Calculate when sexp'(x)=1. Now we have the 'x' value for the local minimum of sexp(x)-x. For that x, calculate sexp(x)-x. If sexp(x)-x>0, then the current base is bigger than Nuninho's constant; if sexp(x)-x<0, then the current base is smaller than Nuninho's constant. Do a binary search .... Here's the result, accurate to 32 decimal digits.
Code:```base = 1.6353244967152763993453446183062 upfixed = 3.0885322718067176544821807826411 sexp'(upfixed) = 1.0000000000000000000000000000000 sexp(upfixed)-upfixed = -1.1371391135491644200632572659231 E-32 lowfixed = -1.6408725757165933485612321510790 sexp(lowfixed)-lowfixed = 0.0000000000000000000000000000000 sexp'(lowfixed) = 4.8060057543017516963843938970331```Here is a graph, showing sexp(x), and the line f(x)=x. The two graphs intersect each other at the lower fixed point, and at the upper fixed point. At the lower fixed point, the slope>1, so regular iteration is well defined. The slope at the upper fixed point=1, so this is a parabolic fixed point, much like eta. - Sheldon sexp taylor series, centered at 0, accurate to 32 digits Code:```base 1.6353244967152763993453446183062 1.0000000000000000000000000000000 0.72816708264487902100067489564252 -0.14877354513094993489726432504500 0.074737145040443592555524996635139 -0.027303989699078802016765079880084 0.011995470121761344894107671889772 -0.0050360379041442966393132619914175 0.0022119303838134475052348019378829 -0.00097450334130330465540560312442488 0.00043642402109641675815655290883749 -0.00019709011821224654361547677039639 0.000089833086159903740469398544720969 -0.000041238706945915151891612184659684 0.000019053968532911172713720919436722 -0.0000088524992039690172128294516235260 0.0000041330272150278532917019791147867 -0.0000019379184778773538437457974949642 0.00000091213532301790212743743497422467 -0.00000043078402804966343713131475292944 0.00000020407220630397996832775388483792 -0.000000096939468490135916393282597246732 0.000000046163269001536943339028294028001 -0.000000022032976477566128111162386532553 0.000000010537670071263114939762057653181 -0.0000000050493505784723879328428158902764 0.0000000024237041954057779998388895184587 -0.0000000011652474462755328727057729290140 0.00000000056104666879689709891062841049440 -0.00000000027050515362351951023983514685701 1.3058885732387283938099128380771 E-10 
-6.3117999575327010732122436268733 E-11 3.0540984120511190577663377256805 E-11 -1.4793294493258211341650723010964 E-11 7.1725081233265123525479755443758 E-12 -3.4807765474727949055646060967838 E-12 1.6906630700960019560537073128474 E-12 -8.2185016001496131023322444609729 E-13 3.9981901495825875858709893398879 E-13 -1.9464873683152398084034510942555 E-13 9.4828873682058897543982090883773 E-14 -4.6229076531095863895666660688233 E-14 2.2550769237060314009782522430640 E-14 -1.1006923143874751666653232160994 E-14 5.3754741141976101679195739564343 E-15 -2.6266521306806237834040544026443 E-15 1.2841410438370241361457279081570 E-15 -6.2811246779832092490540915955525 E-16 3.0737418659790733359751550952943 E-16 -1.5048527892943157286941274104004 E-16 7.3707075418117801037767115505475 E-17 -3.6116466962715746721560625538143 E-17 1.7704150474468843953046468580986 E-17 -8.6818430219633647701453137275718 E-18 4.2590173317992935945560952526965 E-18 -2.0900733203228146820652946545818 E-18 1.0260359936416863217848296514932 E-18 -5.0385696117267440739831494270566 E-19 2.4750868268436181589501842395506 E-19 -1.2162064580279337897647503015742 E-19 5.9779639462714596392285635124411 E-20 -2.9391656069274429139845002823088 E-20 1.4454912820989308608137149928767 E-20 -7.1108845329177578827105729098124 E-21 3.4990066749315417229455607454474 E-21 -1.7221673478194437291719490418385 E-21 8.4783623277310517244543582933675 E-22 -4.1749511462354160169741608691211 E-22 2.0563192212811097335696380185410 E-22 -1.0130396163685589250273727252998 E-22 4.9917894139953788654946337959986 E-23 -2.4602390683359020178003138967311 E-23 1.2127939069301191661639031402060 E-23 -5.9797477355407677288805838636317 E-24 2.9489166915266580573932329293086 E-24 -1.4545332328547903080488562466017 E-24 7.1756972822106364319766739978423 E-25 -3.5406401040453205932800972866763 E-25 1.7473288825024335263723440548007 E-25 -8.6246361187303828745205603559264 E-26 4.2577317531925345448445978953806 E-26 
-2.1022550142841515972779800768394 E-26 1.0381506214729796161812587806332 E-26 -5.1274508064085962038074071753188 E-27 2.5328371055666086404133791761389 E-27 -1.2513416925459592867495070698346 E-27 6.1830996304612768192687699598035 E-28 -3.0555971956420774211650856970448 E-28 1.5102371365537669772158590708494 E-28 -7.4653373220930952905580064654050 E-29 3.6907228478957141273598480589907 E-29 -1.8248250536380089745541606438671 E-29 9.0238039349458111979273090686006 E-30 -4.4626321100200830762883445552940 E-30 2.2072716543167375806950968205447 E-30 -1.0917818862777843416562786095551 E-30 5.4010102698819202873494951185154 E-31 -2.6725235359162127092694499491243 E-31 1.3221556501875044764187601672257 E-31 -6.5583798017323607033639050931179 E-32 3.2442551084741376457891294676456 E-32``` regular pentation generated from the lower fixed point, via pentation.gp code, pentation taylor series, centered at 0, accurate to ~21 digits Code:```base 1.6353244967152763993453446183062 1.0000000000000000000005192248729 0.81779936973045395720071293214265 -0.20288561975535027441384945617561 0.0088786047456589776694546837262676 0.019219350732330865713239336968590 -0.0096966552676861733857155784547169 0.0019514051125590343992422677111750 0.00046437760665497425458895770723555 -0.00066778754659256411362708055845759 0.00028617189543862800787206737355819 0.000033665580602278323048951676875883 -0.000079454007870098296770230740659414 0.000014692767344986060783974244415048 0.000010474055892921335060914517323353 -0.0000038212173806832668738543563355660 -0.00000062027483834957308089542513290423 0.00000032318089329501075067509587120004 -0.000000019840408991749849622699357079378 0.000000067745810965836343131600904116424 -0.0000000058251455196449879916926009357848 -0.000000036482729520758717533689961006608 0.0000000079325719962896266186472362692267 0.000000010065411908157671788716163940183 -0.0000000030175551558297183590143318054742 -0.0000000021761785937770771152654327159320 
0.00000000073633472536622621197919931082184 0.00000000041672032311134229621174565224437 -1.2285074992273508685395579548365 E-10 -7.9518263282218380475858744550042 E-11 9.7514169088525432105036375063450 E-12 1.6997610080669118896188802919875 E-11 2.2377960363057901950721056894192 E-12 -4.1046921572945523533720262850443 E-12 -1.3440219438117173904988603015592 E-12 1.0162692177419430576029255087264 E-12 4.3490698532209929828066693595609 E-13 -2.3609655934025589161994798534342 E-13 -1.1591299932858676127971072482002 E-13 4.8835256899010474072746185189629 E-14 2.8208435349728648547600118064044 E-14 -8.5015918464895476441605506462895 E-15 -6.5455218512546145350077565966573 E-15 1.0586549941943315125293352810983 E-15 1.4799324783739173257534631379136 E-15 -3.5648642871089965652275206214142 E-18 -3.2899826059741680560349674399479 E-16 -5.4568752539881806017312574688462 E-17 7.1622413809298965347483312234643 E-17 2.3703528551345290231430115929264 E-17 -1.4967240115919668278098012252538 E-17 -7.3758767032881589736768555996658 E-18 2.8831523110629674706610210115875 E-18 1.9635837518801332808940827485057 E-18 -4.7131822341175072827483676403593 E-19 -4.7478844794120580339842160065419 E-19 4.9730849306353870744775718808789 E-20 1.0752385621120588513176015629038 E-19 4.4349054243188470473077019531603 E-21 -2.3207449741059068344885938312759 E-20 -4.8107012333615483486510550415132 E-21 4.7981518512995167340705076629676 E-21 1.9009245566320189670069175442907 E-21 -9.3913645999478807261230373741907 E-22 -5.8049321491044260759969472389200 E-22 1.6698977178128083745531739054 E-22 1.5471939617682124872654801496 E-22 -2.3711240240715382390040881966 E-23 -3.8230531306677313062190683870 E-23 1.1307570367174951165397624466 E-24 9.6860551417077212769401805422 E-24 6.2578554176447631116043850053 E-25 -3.0700834538096470676346145367 E-24 3.4209410749157917380893492990148 E-25 1.1606628720265556441455230554 E-24 -1.0836394904688614450987914987 E-24 -4.1068959031308040748622684284 E-26 
1.1631242740449412539214268446 E-24 -8.454186445453502413912267596 E-25 -6.2526608803890595387694651538 E-25 1.3123138393956501678978565230 E-24 -2.5218865114423677749272843280551 E-25 -1.1722018792055174047394751512 E-24 1.0570768224356190649006502317 E-24 4.7217362291599339847898474461 E-25 -1.4043538312117386249617760808 E-24 4.8577833663635964403036734425 E-25 1.1063437204958150575910381509 E-24 -1.2650944581344852978745299383 E-24 -2.6733069486489219154746185107 E-25 1.4881840360747858870759691092 E-24 -7.4870314510452372161187607061165 E-25 -1.0150709222160250770174606064 E-24 1.4712316462119861103644707796 E-24 2.4144503740560439830659577039 E-26 -1.5354822446101247032997950712 E-24 1.0483819898331090549069707551 E-24 8.612742065171026391328194093 E-25 -1.6930059010324648452317830381 E-24 2.8579292408098419907076502134 E-25 1.5611586068656302104116424319 E-24``` sexp taylor series, centered at the upper fixed point. Parabolic regular iteration, since sexp(upfixed)-upfixed=0, and the derivative=1. However, the pentation series above was developed from the lower fixed point. It might also be interesting to develop the pentation from the upper fixed point. 
Code:```base 1.6353244967152763993453446183062 upper fixed point 3.0885322718067176544821807826411 3.0885322718067176544821807826411 1.0000000000000000000000000000000 0.29348332594662679156185554416695 0.12006943677526115961845339690056 0.042289726008581658757917692352047 0.015406681466051593705278419486083 0.0053649969961165772207616839018923 0.0018630655977932198916899772571769 0.00063407537228816784082533370359873 0.00021400606080778703684108316364163 0.000071350368686559236499836370338975 0.000023591182312113468409361763152200 0.0000077307357866599052916288379173725 0.0000025148014797529359261692389144205 0.00000081222410528536302525231046897101 0.00000026067080724584451491376816826933 0.000000083156587277307998293961303674167 0.000000026382062425889945293743274742732 0.0000000083265988463026330560171882094710 0.0000000026153442829909609773032040022395 0.00000000081773434750878326642769707607752 0.00000000025458721090899662275665667332696 7.8940942014863353398515697803273 E-11 2.4383970740151010163418654302545 E-11 7.5045892057613467862823411677246 E-12 2.3016959657817279736641051418417 E-12 7.0362006799573268296411117493600 E-13 2.1441899019127249833543050355134 E-13 6.5145303264754241098603909417717 E-14 1.9735746229612285013998339658891 E-14 5.9624685317524456584119105248956 E-15 1.7965888918970083975596928948585 E-15 5.3996546628326549009825713515487 E-16 1.6188983497572185136799713979226 E-16 4.8422635593460558165533108592429 E-17 1.4450703785698551344484490685140 E-17 4.3030415164576701763666524081170 E-18 1.2786162600194922955158125010948 E-18 3.7915201105554629414895562907887 E-19 1.1220792748374648557975378688279 E-19 3.3143452564446513738023171333433 E-20 9.7715062076572343807657205107082 E-21 2.8756697774572856807994929489579 E-21 8.4479857850782485763764738145021 E-22 2.4775690067172222743409424497063 E-22 7.2539983667168612363130405456846 E-23 2.1204495177307628041026820799736 E-23 6.1886505172771185049788719771013 E-24 
1.8034304113738667347358493697905 E-24 5.2475343265172758722223983303515 E-25 1.5246843780304616848992192317019 E-25 4.4237335335211977445129141410633 E-26 1.2817319966942693841808461450023 E-26 3.7086772208878018504657396083702 E-27 1.0716874126315835345680035791605 E-27 3.0928382435902888820021324131020 E-28 8.9145242970213826409341395287118 E-29 2.5662848881126763792030265558974 E-29 7.3788051820008966793254068896449 E-30 2.1193421380273394338847540585606 E-30 6.0824157348570199720495626504043 E-31 1.7421558940832369333368054728044 E-31 4.9765917498063797295934928999305 E-32 1.4062811115449320214491405056275 E-32 3.7934403719757943562626785237526 E-33 1.2305145019890227199662941927280 E-33 5.5711727230420028085149818418777 E-34 2.4186758292339716100055902047374 E-34 1.7732706656596300164917606853238 E-34 -1.3355809003646951549924724970099 E-34 -2.9072689285868805984220052919317 E-34 -6.9107402433856378705373186712384 E-35 4.4126731718368809510319147247328 E-36 1.7126322918948119164481144802202 E-34 2.7989257163342943183983118462271 E-34 -1.9985628783896615630523665162131 E-35 -1.4700179855300004173061277796181 E-34 -1.5536566643835194929354996925696 E-34 -1.9498134289485168975356191250626 E-34 9.9022922195740429874579196637929 E-35 2.4133016008594091094881247283662 E-34 9.9675730165405949191382441641302 E-35 6.6232586691919872798243258736776 E-35 -1.4692529755384798103039616882568 E-34 -2.6872630348380145791502305390272 E-34 -2.1981032501079913545456022870657 E-35 7.3537194481961900271945952344694 E-35 1.5483413632202028611513722984947 E-34 2.2373813096659179009012953828649 E-34 -5.8527500520290404167240680297540 E-35 -1.8596067914689981955501774376493 E-34 -1.2066043234322834168239487754860 E-34 -1.2374312463084364980127016870177 E-34 1.1860368344776018489712748078933 E-34 2.4573978020663713546965678757098 E-34 5.7985783818128634242359558552645 E-35 -4.7627589167074290469391891175827 E-36 -1.4677451984086100716918643719760 E-34 
-2.3907044867916316647291674331143 E-34 1.8699789701736156540089113210717 E-35```
| 2020-07-04 15:56:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 3, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9585493206977844, "perplexity": 14690.842555186879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886178.40/warc/CC-MAIN-20200704135515-20200704165515-00369.warc.gz"}
https://www.physicsforums.com/threads/what-does-divergence-of-electric-field-0-mean.865314/ | # Homework Help: What does divergence of electric field = 0 mean?
1. Apr 3, 2016
### 15ongm
1. The problem statement, all variables and given/known data
I just want to focus on the divergence outside the cylinder (r > R)
2. Relevant equations
3. The attempt at a solution
For r > R, I said ∇ · E = ρ/ε₀
But that's wrong. The answer is ∇ · E = 0
I'm confused because there is definitely an electric field outside the cylinder (r > R). The electric field points radially outwards and gets smaller the farther you get from the cylinder, because outside the cylinder it falls off like 1/r.
So I don't understand how the divergence of the electric field can be 0. I think the main part of my confusion is that I don't understand what the divergence is. I know how to mathematically compute the divergence but I don't understand it physically. Like when the divergence of the electric field is 0, what does that mean in terms of the physical electric field?
2. Apr 3, 2016
### TSny
Gauss' law says that the divergence of E evaluated at some point equals the charge density at that same point divided by $\epsilon_0$.
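Concretely: outside the cylinder the field of a line of charge falls off like 1/r, and a radial 1/r field has zero divergence everywhere except on the axis. A quick numerical check with central differences (plain Python, not from the thread; the constant k is arbitrary):

```python
import math

def E(x, y, k=1.0):
    """Radial 1/r field in the plane: magnitude k/r, pointing away from the axis."""
    r2 = x * x + y * y
    return k * x / r2, k * y / r2   # (k/r) * (x/r, y/r)

def divergence(x, y, h=1e-6):
    """Central-difference estimate of dEx/dx + dEy/dy at (x, y)."""
    dEx_dx = (E(x + h, y)[0] - E(x - h, y)[0]) / (2 * h)
    dEy_dy = (E(x, y + h)[1] - E(x, y - h)[1]) / (2 * h)
    return dEx_dx + dEy_dy

print(divergence(1.0, 2.0))   # ~0: no charge out there, so div E = 0
```

The field lines spread out, but the field also weakens at exactly the rate needed for the net flux out of any small charge-free volume to vanish; that is what div E = 0 encodes.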
These videos might help improve your conceptual understanding of divergence: | 2018-07-22 15:01:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6980829238891602, "perplexity": 328.79316713544677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593302.74/warc/CC-MAIN-20180722135607-20180722155607-00143.warc.gz"} |
https://www.zigya.com/study/book?class=11&board=bsem&subject=Physics&book=Physics+Part+II&chapter=Mechanical+Properties+of+Fluids&q_type=&q_topic=Surface+Tension&q_category=&question_id=PHEN11039478 | ## Chapter Chosen
Mechanical Properties of Fluids
## Book Store
Download books and chapters from book store.
Currently only available for.
CBSE Gujarat Board Haryana Board
## Previous Year Papers
Download the PDF Question Papers Free for off line practice and view the Solutions online.
Currently only available for.
Class 10 Class 12
A soap bubble of radius 10 mm is blown from soap solution of surface tension 0.06 N/m. Find the work done in blowing the bubble. What additional work will be done in further blowing to double the radius?
The soap bubble is formed from the soap solution.
Therefore, increase in the surface area of soap bubble is equal to total surface area of soap bubble.
Since the soap bubble has two free surfaces, the increase in the free surface area of the bubble is ΔA = 2 × 4πr^2 = 8πr^2.
We have: surface tension, T = 0.06 N/m, and radius of the soap bubble, r = 10 mm = 10^-2 m.
Now the work done in blowing the bubble is W = T × ΔA = 0.06 × 8π × (10^-2)^2 ≈ 1.51 × 10^-4 J.
The additional work done in doubling the radius of the bubble is W' = T × 8π[(2r)^2 - r^2] = 3W ≈ 4.52 × 10^-4 J.
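The arithmetic can be checked in a few lines (plain Python, using only the values given in the problem):

```python
import math

T = 0.06     # surface tension, N/m
r = 10e-3    # radius, m (10 mm)

# A soap bubble has two free surfaces, so the film area is 2 * 4*pi*r^2.
W_initial = T * 8 * math.pi * r**2                   # work to blow the bubble
W_extra   = T * 8 * math.pi * ((2 * r)**2 - r**2)    # extra work to double the radius

print(W_initial)   # ≈ 1.51e-4 J
print(W_extra)     # ≈ 4.52e-4 J, three times the initial work
```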
Why solids have definite shape while liquids do not have definite shape?
Solids: Intermolecular forces are very strong and thermal agitations are not sufficiently strong to separate the molecules from their mean position. Solids are rigid and hence they have definite shapes.
Liquids: In liquids intermolecular forces are not sufficiently strong to hold the molecules at definite sites, as a result they move freely within the bulk of liquid, therefore, do not possess definite shapes. Liquids take the same shape as that of the container.
What is hydrodynamics?
Hydrodynamics is the branch of science that studies about the force exerted by the fluids or acting on the fluids.
What is hydrostatics?
Hydrostatics is the branch of fluid mechanics that studies incompressible fluids at rest. The study of fluids at rest or objects placed at rest in fluids is hydrostatics.
Do intermolecular or inter-atomic forces follow inverse square law?
No. Intermolecular and inter-atomic forces do not obey the inverse square law.
What is fluid?
Any material that can flow is a fluid. Liquids and gases are examples of fluid.
976 Views | 2018-12-19 14:04:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4219169020652771, "perplexity": 1972.1791613264718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832330.93/warc/CC-MAIN-20181219130756-20181219152756-00070.warc.gz"} |
http://mathoverflow.net/questions/73494/recovering-schauder-decompositions | # Recovering Schauder decompositions
The problem of Schauder decomposition of a given Banach space seems to play an important role in the geometry of Banach spaces, especially when one is interested in finite dimensional Schauder decompositions (FDD).
I am wondering if the Schauder decomposition can be regarded (in special cases) as the internal counterpart to infinite sums of Banach spaces.
Let me consider two cases: $C(K)$ and $L^p(\mu)$.
Suppose that $E$ is either $C(K)$ space for some compactum $K$ or $L^p(\mu)$ for some measure $\mu$.
Let $(E_n)$ be a sequence of complemented subspaces of $E$ such that for each integer $n$ $$\left(E_1\oplus \ldots \oplus E_n\right) \cap E_{n+1}=\{0\}.$$
In the $C(K)$ case assume that each $E_n$ is isomorphic to $c_0$ and in the latter one, $E_n$ is isomorphic to $\ell^p$.
Define $F$ to be the closed linear span of all $E_n$. Is the family $$\{E_1\oplus \ldots \oplus E_n\colon n\in \mathbb{N}\}$$ a blocking Schauder decomposition for $F$?
Is $F$ isomorphic to $c_0$ / $\ell^p$ ?
No. You need the projections $Q_n$ onto $E_1\oplus \dots \oplus E_n$ from $F$ to be uniformly bounded in order for $(E_n)$ to be a Schauder decomposition for $F$. Even then $F$ need not be isomorphic to $c_0$/$\ell_p$. However, if the $Q_n$ are uniformly bounded from $\ell_p$, then by taking limits in the weak operator topology you get (when $1<p<\infty$) a projection from $\ell_p$ onto $F$ and hence $F$ is isomorphic to $\ell_p$. That is not the case for $p=1$ or in the $C(K)$ case. It is true in the $c_0$ case, because you get by using the weak* operator topology in $\ell_\infty$ an operator from $c_0$ into $F^{**}\subset \ell_\infty$ that is the identity on $F$, which implies that $F$ is a $\mathcal{L}_\infty$ space, and every $\mathcal{L}_\infty$ subspace of $c_0$ is isomorphic to $c_0$.
Thank you. You think about $\ell^\infty$ as a W*-algebra but I don't know operator techniques you use. Certainly, I'd like to ask some further questions... – TMK Aug 24 '11 at 21:23 | 2015-09-02 00:47:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9466896653175354, "perplexity": 94.58250321962922}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645235537.60/warc/CC-MAIN-20150827031355-00065-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://web2.0calc.com/questions/help-please_64812 |
There are 7 students in a class: 2 boys and 5 girls. If the teacher picks a group of 3 at random, what is the probability that everyone in the group is a girl?
Jun 4, 2021
#1
5/7 * 4/6 * 3/5.
The first person she picks has a 5/7 chance of being a girl, then 4/6, then 3/5.
=^._.^=
Jun 4, 2021
#2
There are 7 students in a class: 2 boys and 5 girls.
If the teacher picks a group of 3 at random,
what is the probability that everyone in the group is a girl?
$$\begin{array}{|rcll|} \hline \dfrac{ \binom{2}{0}_{\text{Boys}}\binom{5}{3}_{\text{Girls} } } {\binom{7}{3}_{\text{Boys and Girls} } } &=& \dfrac{2}{7}\\ \text{or} \\ \dfrac{5}{7}*\dfrac{4}{6}*\dfrac{3}{5}&=& \dfrac{2}{7}\\ \hline \end{array}$$
The probability is $$\approx 28.57 \%$$
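Both routes, counting combinations and multiplying sequential probabilities, can be verified directly (plain Python, not part of the thread):

```python
from math import comb
from fractions import Fraction

# Counting route: choose 3 of the 5 girls, out of all ways to choose 3 of 7 students.
p_count = Fraction(comb(5, 3), comb(7, 3))

# Sequential route: a girl on the 1st, 2nd and 3rd pick.
p_seq = Fraction(5, 7) * Fraction(4, 6) * Fraction(3, 5)

print(p_count, p_seq)   # 2/7 2/7
```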
Jun 4, 2021 | 2022-01-26 07:20:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7089850902557373, "perplexity": 607.9882482525664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304928.27/warc/CC-MAIN-20220126071320-20220126101320-00713.warc.gz"} |
https://en.wikipedia.org/wiki/Dean_number | # Dean number
The Dean number (De) is a dimensionless group in fluid mechanics, which occurs in the study of flow in curved pipes and channels. It is named after the British scientist W. R. Dean, who was the first to provide a theoretical solution of the fluid motion through curved pipes for laminar flow by using a perturbation procedure from a Poiseuille flow in a straight pipe to a flow in a pipe with very small curvature.[1][2]
## Physical Context
Schematic of a pair of Dean vortices that form in curved pipes.
If a fluid is moving along a straight pipe that after some point becomes curved, the centripetal forces at the bend will cause the fluid particles to change their main direction of motion. There will be an adverse pressure gradient generated from the curvature with an increase in pressure, therefore a decrease in velocity close to the convex wall, and the contrary will occur towards the outer side of the pipe. This gives rise to a secondary motion superposed on the primary flow, with the fluid in the centre of the pipe being swept towards the outer side of the bend and the fluid near the pipe wall will return towards the inside of the bend. This secondary motion is expected to appear as a pair of counter-rotating cells, which are called Dean vortices.
## Definition
The Dean number is typically denoted by De (or Dn). For a flow in a pipe or tube it is defined as:
${\displaystyle {\mathit {De}}={\frac {\sqrt {{\frac {1}{2}}\,({\text{inertial forces}})({\text{centripetal forces}})}}{\text{viscous forces}}}={\frac {\sqrt {{\frac {1}{2}}\,(\rho \,D^{2}\,R_{c}\,{\frac {v^{2}}{D}})(\rho \,D^{2}\,R_{c}\,{\frac {v^{2}}{R_{c}}})}}{\mu {\frac {v}{D}}D\,R_{c}}}={\frac {\rho \,D\,v}{\mu }}{\sqrt {\frac {D}{2\,R_{c}}}}={\textit {Re}}\,{\sqrt {\frac {D}{2\,R_{c}}}}}$
where
• ${\displaystyle \rho }$ is the density of the fluid
• ${\displaystyle \mu }$ is the dynamic viscosity
• ${\displaystyle v}$ is the axial velocity scale
• ${\displaystyle D}$ is the diameter (for non-circular geometry, an equivalent diameter is used; see Reynolds number)
• ${\displaystyle R_{c}}$ is the radius of curvature of the path of the channel.
• ${\displaystyle {\textit {Re}}}$ is the Reynolds number.
The Dean number is therefore the product of the Reynolds number (based on axial flow ${\displaystyle v}$ through a pipe of diameter ${\displaystyle D}$) and the square root of the curvature ratio.
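As a quick illustration, the definition can be turned into a function (a Python sketch; the water-like property values below are illustrative and not taken from the article):

```python
import math

def dean_number(rho, mu, v, D, Rc):
    """De = Re * sqrt(D / (2*Rc)), with Re = rho*D*v/mu."""
    Re = rho * D * v / mu
    return Re * math.sqrt(D / (2 * Rc))

# Illustrative: water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa.s) at 0.1 m/s
# in a 10 mm pipe bent with a 100 mm radius of curvature.
De = dean_number(rho=1000.0, mu=1e-3, v=0.1, D=0.01, Rc=0.1)
print(De)   # Re = 1000, curvature ratio 0.05 -> De ≈ 224
```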
## Turbulence transition
The flow is completely unidirectional for low Dean numbers (De < 40~60). As the Dean number increases between 40~60 to 64~75, some wavy perturbations can be observed in the cross-section, which evidences some secondary flow. At higher Dean numbers than that (De > 64~75) the pair of Dean vortices becomes stable, indicating a primary dynamic instability. A secondary instability appears for De > 75~200, where the vortices present undulations, twisting, and eventually merging and pair splitting. Fully turbulent flow forms for De > 400.[3] Transition from laminar to turbulent flow has been also examined in a number of studies, even though no universal solution exists since the parameter is highly dependent on the curvature ratio.[4] Somewhat unexpectedly, laminar flow can be maintained for larger Reynolds numbers (even by a factor of two for the highest curvature ratios studied) than for straight pipes, even though curvature is known to cause instability.[5]
## The Dean equations
The Dean number appears in the so-called Dean equations.[6] These are an approximation to the full Navier–Stokes equations for the steady axially uniform flow of a Newtonian fluid in a toroidal pipe, obtained by retaining just the leading order curvature effects (i.e. the leading-order equations for ${\displaystyle a/r\ll 1}$).
We use orthogonal coordinates ${\displaystyle (x,y,z)}$ with corresponding unit vectors ${\displaystyle ({\hat {\boldsymbol {x}}},{\hat {\boldsymbol {y}}},{\hat {\boldsymbol {z}}})}$ aligned with the centre-line of the pipe at each point. The axial direction is ${\displaystyle {\hat {\boldsymbol {z}}}}$, with ${\displaystyle {\hat {\boldsymbol {x}}}}$ being the normal in the plane of the centre-line, and ${\displaystyle {\hat {\boldsymbol {y}}}}$ the binormal. For an axial flow driven by a pressure gradient ${\displaystyle G}$, the axial velocity ${\displaystyle u_{z}}$ is scaled with ${\displaystyle U=Ga^{2}/\mu }$. The cross-stream velocities ${\displaystyle u_{x},u_{y}}$ are scaled with ${\displaystyle (a/R)^{1/2}U}$, and cross-stream pressures with ${\displaystyle \rho aU^{2}/L}$. Lengths are scaled with the tube radius ${\displaystyle a}$.
In terms of these non-dimensional variables and coordinates, the Dean equations are then
${\displaystyle D\left({\frac {\mathrm {D} u_{x}}{\mathrm {D} t}}+u_{z}^{2}\right)=-D{\frac {\partial p}{\partial x}}+\nabla ^{2}u_{x}}$
${\displaystyle D{\frac {\mathrm {D} u_{y}}{\mathrm {D} t}}=-D{\frac {\partial p}{\partial y}}+\nabla ^{2}u_{y}}$
${\displaystyle D{\frac {\mathrm {D} u_{z}}{\mathrm {D} t}}=1+\nabla ^{2}u_{z}}$
${\displaystyle {\frac {\partial u_{x}}{\partial x}}+{\frac {\partial u_{y}}{\partial y}}=0}$
where
${\displaystyle {\frac {\mathrm {D} }{\mathrm {D} t}}=u_{x}{\frac {\partial }{\partial x}}+u_{y}{\frac {\partial }{\partial y}}}$
is the convective derivative.
The Dean number D is the only parameter left in the system, and encapsulates the leading order curvature effects. Higher-order approximations will involve additional parameters.
For weak curvature effects (small D), the Dean equations can be solved as a series expansion in D. The first correction to the leading-order axial Poiseuille flow is a pair of vortices in the cross-section carrying flow from the inside to the outside of the bend across the centre and back around the edges. This solution is stable up to a critical Dean number ${\displaystyle D_{c}\approx 956}$.[7] For larger D, there are multiple solutions, many of which are unstable.
## References
1. ^ Dean, W. R. (1927). "Note on the motion of fluid in a curved pipe". Phil. Mag. 20 (20): 208–223. doi:10.1080/14786440708564324.
2. ^ Dean, W. R. (1928). "The streamline motion of fluid in a curved pipe". Phil. Mag. Series 7. 5 (30): 673–695. doi:10.1080/14786440408564513.
3. ^ Ligrani, Phillip M. "A Study of Dean Vortex Development and Structure in a Curved Rectangular Channel With Aspect Ratio of 40 at Dean Numbers up to 430", U.S. Army Research Laboratory (Contractor Report ARL-CR-l44) and Lewis Research Center (NASA Contractor Report 4607), July 1994. Retrieved on 11 July 2017.
4. ^ Kalpakli, Athanasia (2012). Experimental study of turbulent flows through pipe bends (Thesis). Stockholm, Sweden: Royal Institute of Technology KTH Mechanics. pp. 461–512.
5. ^ Taylor, G. I. (1929). "The criterion for turbulence in curved pipes". Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. 124 (794): 243–249. Bibcode:1929RSPSA.124..243T. doi:10.1098/rspa.1929.0111.
6. ^ Mestel, J. Flow in curved pipes: The Dean equations, Lecture Handout for Course M4A33, Imperial College.
7. ^ Dennis, C. R.; Ng, M. (1982). "Dual solutions for steady laminar-flow through a curved tube". Q. J. Mech. Appl. Math. 35 (3): 305. doi:10.1093/qjmam/35.3.305. | 2018-12-12 11:45:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 28, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6764008402824402, "perplexity": 1317.6596710744832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823872.13/warc/CC-MAIN-20181212112626-20181212134126-00370.warc.gz"} |
http://systems-sciences.uni-graz.at/etextbook/sw2/phpl_examples.html | # Phase plane analysis: examples
## Example 1 - a stable equilibrium - a sink
Consider the system $$\frac{dx}{dt}=-x, \frac{dy}{dt}=-4y$$
Plotted with a large number of initial conditions, we see that all solutions converge to $$(0,0)$$, which is a stable equilibrium point for the system - a sink.
## Example 2 - an unstable equilibrium - a saddle
Consider the system $$\frac{dx}{dt}=-x, \frac{dy}{dt}=4y$$
Again plotted with a large number of initial conditions, we see that all solutions apart from those starting on the line $$y=0$$ flee the point $$(0,0)$$, which therefore is an unstable equilibrium point for the system - a saddle.
## Example 3 - another saddle point
Consider the system $$\frac{dx}{dt}=2x, \frac{dy}{dt}=2x-y$$
Again most solutions flee the point $$(0,0)$$, which therefore is an unstable equilibrium for the system.
## Example 4 - an unstable spiral source
Consider the system $$\frac{dx}{dt}=x+2y, \frac{dy}{dt}=-2x+y$$
Solutions flee the point $$(0,0)$$ in a spiral mode. Again $$(0,0)$$ is an unstable equilibrium for the system - a source.
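For linear systems like these, the type of the equilibrium at $$(0,0)$$ can be read off from the eigenvalues of the coefficient matrix. A small classifier (plain Python, added for illustration; the inputs are the four examples above):

```python
import cmath

def classify(a, b, c, d):
    """Classify (0,0) for dx/dt = a*x + b*y, dy/dt = c*x + d*y via eigenvalues."""
    tr, det = a + d, a * d - b * c
    lam = (tr + cmath.sqrt(tr * tr - 4 * det)) / 2  # one eigenvalue is enough
    if det < 0:
        return "saddle"            # real eigenvalues of opposite sign
    if abs(lam.imag) > 1e-12:      # complex eigenvalues: rotation
        if abs(tr) < 1e-12:
            return "center"
        return "spiral source" if tr > 0 else "spiral sink"
    return "source" if tr > 0 else "sink"

print(classify(-1, 0, 0, -4))   # Example 1: sink
print(classify(-1, 0, 0, 4))    # Example 2: saddle
print(classify(2, 0, 2, -1))    # Example 3: saddle
print(classify(1, 2, -2, 1))    # Example 4: spiral source
```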
## Example 5 - a center
Consider the system $$\frac{dx}{dt}=-x-y, \frac{dy}{dt}=4x+y$$
Solutions circle around the point $$(0,0)$$, which is a center for the system. | 2019-04-25 23:49:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6827558875083923, "perplexity": 533.8732348745092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578743307.87/warc/CC-MAIN-20190425233736-20190426015736-00110.warc.gz"} |
https://www.opto-e.com/basics/lens-resolving-power-transfer-function | # Lens resolving power: transfer function
The image quality of an optical system is usually expressed by its transfer function (TF). TF describes the ability of a lens to resolve features, correlating the spatial information in object space (usually expressed in line pairs per millimeter) to the contrast achieved in the image.
Modulation and contrast transfer function.
What's the difference between MTF (Modulation Transfer Function) and CTF (Contrast Transfer Function)? CTF expresses the lens contrast response when a “square pattern” (chessboard style) is imaged; this parameter is the most useful in order to assess edge sharpness for measurement applications. On the other hand, MTF is the contrast response achieved when imaging a sinusoidal pattern in which the grey levels range from 0 and 255; this value is more difficult to convert into any useful parameter for machine vision applications. The resolution of a lens is typically expressed by its MTF (modulation transfer function), which shows the response of the lens when a sinusoidal pattern is imaged.
However, the CTF (Contrast Transfer Function) is a more interesting parameter, because it describes the lens contrast when imaging a black and white stripe pattern, thus simulating how the lens would image the edge of an object. If t is the width of each stripe, the relative spatial frequency w will be
w = 1/(2t)
For example, a black and white stripe pattern with 5 µm wide stripes has a spatial frequency of 100 lp/mm. The “cut-off frequency” is defined as the value w for which CTF is zero, and it can be estimated as
w_cutoff = 1 / (WF/# × λ(mm))
For example, an Opto Engineering® TC23036 lens (working F-number WF/# = 8) operating in green light (λ = 0.000587 mm) has a cut-off spatial frequency of about
w_cutoff = 1 / (8 × 0.000587 mm) ≈ 213 lp/mm
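The two formulas above can be checked with a few lines (a sketch in Python; the values are the ones quoted in the text):

```python
def stripe_frequency_lp_per_mm(stripe_width_mm):
    """w = 1/(2t) for a pattern of stripes each t wide."""
    return 1.0 / (2.0 * stripe_width_mm)

def cutoff_lp_per_mm(working_f_number, wavelength_mm):
    """Cut-off spatial frequency: w_cutoff = 1 / (WF/# * lambda)."""
    return 1.0 / (working_f_number * wavelength_mm)

print(stripe_frequency_lp_per_mm(0.005))   # 5 um stripes -> 100 lp/mm
print(cutoff_lp_per_mm(8, 0.000587))       # ≈ 213 lp/mm in green light at WF/# = 8
```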
MTF curves of TC23036 - green light.
| 2019-07-18 02:25:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47280174493789673, "perplexity": 2877.416613500498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525483.64/warc/CC-MAIN-20190718022001-20190718044001-00104.warc.gz"}
http://math.stackexchange.com/questions/250897/math-question-derivativeshelp/250956 | # Math question derivatives?Help
I have to find the derivative of $y=e^{-x/y}$. Should I do this by taking the $\ln$ of both sides? Will that give me $y'$?
It would be useful if you could show us your effort... – Nameless Dec 4 '12 at 19:44
Method 1: Try to take the derivative with respect to $x$ on both sides (use implicit differentiation): $$\dfrac{dy}{dx} = e^{-x/y}\frac{d}{dx}\left[(-x)y^{-1}\right].$$ You would get $$\dfrac{dy}{dx} = -e^{-x/y}\left[y^{-1} - xy^{-2}\frac{dy}{dx}\right].$$ Now try to solve this for $\displaystyle{\frac{dy}{dx}}$.
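Solving Method 1's second display for $\frac{dy}{dx}$ and using $e^{-x/y}=y$ to simplify gives $y' = \frac{y}{x-y}$. This can be sanity-checked numerically at the point $(0,1)$, which lies on the curve since $e^{0}=1$ (plain Python, not part of the original answer):

```python
import math

def F(x, y):
    """Implicit form of the curve: F(x, y) = y - e^(-x/y) = 0."""
    return y - math.exp(-x / y)

def dydx_numeric(x, y, h=1e-6):
    """Implicit-function theorem: dy/dx = -F_x / F_y, via central differences."""
    Fx = (F(x + h, y) - F(x - h, y)) / (2 * h)
    Fy = (F(x, y + h) - F(x, y - h)) / (2 * h)
    return -Fx / Fy

def dydx_closed(x, y):
    return y / (x - y)   # obtained by solving the displayed equation for dy/dx

print(dydx_numeric(0.0, 1.0), dydx_closed(0.0, 1.0))   # both ≈ -1.0 at (0, 1)
```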
Method 2: You can indeed also first take a $\ln$ on both sides so that you get: $$\ln(y) = -\frac{x}{y} = -xy^{-1}.$$ Again, take $\displaystyle{\frac{d}{dx}}$ on both sides and get $$\frac{1}{y}\frac{dy}{dx} = -y^{-1}+xy^{-2}\frac{dy}{dx}.$$ Using that $y = e^{-x/y}$ these two methods actually give the same answer. | 2015-08-02 04:56:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9152745604515076, "perplexity": 215.3483638437506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988962.66/warc/CC-MAIN-20150728002308-00226-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://openstudy.com/updates/500df257e4b0ed432e1063a9 | ## MathSofiya how is the differentiation of x=ky^2 equal to $1=2ky\frac{dy}{dx}$
1. Spacelimbus
implicite differentiation, in this case y(x), such that y is a function of x.
2. Spacelimbus
But that's only one guess in this case, the mechanical way I remember for implicit differentiation is: differentiate as if you were differentiating something in terms of x and then just multiply it by dy/dx, chain rule.
3. MathSofiya
I"m working on the separable equations section of my differential equations chapter.
4. Spacelimbus
I will just add this, maybe it helps you, this is the chain rule for multivariable calculus. The above equation you can write like that $z=f(x,y)=x-ky^2$ So the multivariable chain rule says $dz = f_x dx + f_ydy$ $\frac{dz}{dx}= f_x+f_y\frac{dy}{dx}$ This is far from a proof, but you can read some application out of it. Implicit differentiation doesn't selectively deal with partial derivatives though.
5. MathSofiya
The original question reads. Find the orthogonal trajectories of the family of curves x=ky^2, where k is an arbitrary constant. And the first thing they did was differentiate x=ky^2 to get $1=2ky\frac{dy}{dx}$
6. Spacelimbus
the gradient would be orthogonal.
7. MathSofiya
I haven't learned anything about gradients yet. This is only chapter 9 of Stewart's calculus
8. Spacelimbus
Another relationship for orthogonal functions is $m_n \cdot m_y = -1$ where $$m_n$$ is normal to $$m_y$$ but I don't see why they apply this sort of differentiation here.
9. MathSofiya
Oh I think I see what they've done. They rearranged the equation into a separable equation and did the integral. Then stated: The orthogonal trajectories are the family of ellipses given by the following equation. $x^2+\frac{y^2}{2}=C$
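That final claim is easy to verify numerically (this check is mine, not from the thread): the parabolas $x=ky^2$ have slope $y'=y/(2x)$ after eliminating $k=x/y^2$, the ellipses $x^2+y^2/2=C$ have slope $y'=-2x/y$, and the product of the two slopes is $-1$ at every common point off the axes.

```python
def slope_parabola(x, y):
    # family x = k*y**2 with k = x/y**2 eliminated: dy/dx = y/(2x)
    return y / (2 * x)

def slope_ellipse(x, y):
    # family x**2 + y**2/2 = C: implicit differentiation gives dy/dx = -2x/y
    return -2 * x / y

# perpendicular tangents wherever both slopes are defined
for x, y in [(1.3, 0.7), (0.2, -1.5), (4.0, 2.0)]:
    print(round(slope_parabola(x, y) * slope_ellipse(x, y), 12))   # -1.0
```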
10. MathSofiya
for y=k/x I get ln|y|=ln|-x|+C What do you think?
11. Spacelimbus
is this the integrated form?
12. MathSofiya
yep. $\int \frac1ydy=\int-\frac1x dx$
13. Spacelimbus
$\large \int \frac{1}{y}dy = - \int \frac{1}{x}dx$ So you can distribute the minus sign and you don't need to carry it inside your logarithm.
14. MathSofiya
ok ln|y|=-ln|x|+C
15. Spacelimbus
perfect.
16. MathSofiya
Thank you! | 2014-03-09 00:01:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8960897922515869, "perplexity": 909.6040357167604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999668224/warc/CC-MAIN-20140305060748-00046-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://www.physicsforums.com/showthread.php?t=531493 | # Connected Wheels Problem. Angular Velocity.
by EpiphoneBenji
Tags: angular velocity, connected wheels
P: 2 1. The problem statement, all variables and given/known data Wheel A of radius ra = 8.9 cm is coupled by belt B to wheel C of radius rc = 30.7 cm. Wheel A increases its angular speed from rest at time t = 0 s at a uniform rate of 8.6 rad/s^2. At what time will wheel C reach a rotational speed of 93.0 rev/min, assuming the belt does not slip? 2. Relevant equations 2π rad = 1 rev; v = v0 + at (constant acceleration) 3. The attempt at a solution Wheel A and Wheel C have the same velocity, so I converted the rads to revs and 93 rev/min to 1.55 rev/s and divided the velocity by the acceleration to find the time. But the answer doesn't work. Am I doing something wrong? Please help! Thanks
P: 49 We need to see more of your work to know just where you went wrong. Also, note that the time for wheel A to complete one revolution is much less than that of wheel C, yet both go through 2$\pi$ radians, and your work thus far does not take this difference into account.
P: 2 But being attached by the belt, doesn't that mean they are going at the same speed?
P: 49
## Connected Wheels Problem. Angular Velocity.
Define "speed". Each point on the belt will be moving at the same rate, but translational velocity and angular velocity are not the same. Your conversion from radians to revs needs to include the fact that angular velocity depends on radius. Try looking up the conversion of tangential velocity to angular velocity on Wikipedia.
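The thread stops short of the final number; for completeness, here is a sketch of the standard solution in Python (my own working, using the figures from the problem statement): the non-slipping belt matches tangential speeds, so omega_A * r_A = omega_C * r_C, and then t = omega_A / alpha_A.

```python
import math

r_a = 0.089        # wheel A radius (m)
r_c = 0.307        # wheel C radius (m)
alpha_a = 8.6      # angular acceleration of wheel A (rad/s^2)

# target angular speed of wheel C: 93.0 rev/min converted to rad/s
omega_c = 93.0 * 2 * math.pi / 60

# a non-slipping belt equates *tangential* speeds: omega_a * r_a = omega_c * r_c
omega_a = omega_c * r_c / r_a

# omega = omega_0 + alpha * t, starting from rest
t = omega_a / alpha_a
print(round(t, 2))   # about 3.91 s
```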
Related Discussions Mechanical Engineering 1 Introductory Physics Homework 1 Introductory Physics Homework 5 Introductory Physics Homework 2 | 2013-12-12 00:27:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7140277028083801, "perplexity": 749.9926941504857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164120234/warc/CC-MAIN-20131204133520-00007-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://www.physics.sfsu.edu/~lea/courses/pcal/230integ.html | Physics 230 February 11 2002 Problem: A rod of length 2a lies along the x-axis with its center at the origin. It has a uniform charge density l (= Q/2a). Find the electric field at a point P on the x-axis with x-coordinate xP.
Start with a diagram. Show the rod, the coordinate axes, and the point P. Make your diagram BIG.
Step I: Model the system (the rod) as a collection of differential elements. Draw a typical element on your diagram. Each element should correspond to a differential change in one coordinate.
See diagram above
Step II. Identify a typical element and describe it using your coordinates.
The element extends from x to x+dx and has charge dq = λ dx
Step III. Express the contribution of your element to the desired sum (i.e. find the electric field dE produced by this element). Give both its magnitude and its direction. With vectors, it is often easier to calculate each component separately.
As we can see from the diagram, the electric field is in the x-direction, and from Coulomb's law
Step IV. Find the limits of the integral.
The limits are x = -a to x = a
Step V: Integrate!
Analysis: Show that your result reduces to kQ/xP^2 when P is a long way from the origin.
For xP >> a, the denominator reduces to xP^2, and since Q = 2aλ, we get the expected form of Coulomb's law. | 2017-12-16 16:58:39 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8550457954406738, "perplexity": 635.143867496422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588294.67/warc/CC-MAIN-20171216162441-20171216184441-00639.warc.gz"}
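The integral in Step V can be double-checked numerically; a Python sketch (the values of k, a, Q and xP below are illustrative, not from the handout). The closed form of the integral is E = kQ/(xP^2 - a^2), which a Riemann sum over the elements of Steps I-IV reproduces:

```python
k = 8.9875e9        # Coulomb constant (N m^2 / C^2)
a = 0.1             # half-length of the rod (m)
Q = 1e-9            # total charge (C)
lam = Q / (2 * a)   # linear charge density Q/(2a)
xP = 0.5            # field point on the x-axis, xP > a

# closed form from Step V
E_exact = k * Q / (xP**2 - a**2)

# midpoint Riemann sum of dE = k * lam * dx / (xP - x)**2 over x in [-a, a]
N = 100000
dx = 2 * a / N
E_sum = sum(k * lam * dx / (xP - (-a + (i + 0.5) * dx)) ** 2 for i in range(N))
print(E_exact, E_sum)
```

For xP much larger than a, the same expression tends to kQ/xP^2, the point-charge limit checked in the Analysis step.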
https://freakonometrics.hypotheses.org/date/2014/10 | # Somewhere else, part 179
Some posts and articles worth reading, here and there
# Extracting datasets from excel files in a zipped folder
The title of the post is a bit long, but that's the problem I was facing this morning: importing datasets from files, online. I mean, it was not a "problem" (since I can always download and extract the files manually), more a challenge (I should be able to do it in R, directly). The files are located on ressources-actuarielles.net, in a zip file. Those are mortality tables used in French-speaking African countries, and I guess one problem came from special characters, such as "é" or "è"… When you open the zip file, you see a folder
and in that folder, several files that I would like to import
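For comparison, the listing step is just as short in Python (a hedged sketch: the archive is built in memory as a stand-in for the downloaded zip, and the file names are placeholders, not the actual mortality-table files):

```python
import io, zipfile

# build a fake archive in memory, standing in for the downloaded zip file
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("tables/table_1.xls", b"fake workbook bytes")
    z.writestr("tables/readme.txt", b"notes")
buf.seek(0)

# list the Excel files without extracting anything to disk
archive = zipfile.ZipFile(buf)
xls_names = [n for n in archive.namelist() if n.lower().endswith((".xls", ".xlsx"))]
print(xls_names)   # ['tables/table_1.xls']
```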
# Somewhere else, part 178
Some posts and articles worth reading,
# Log-transform kernel density estimation of income distribution
Our paper Log-transform kernel density estimation of income distribution, written with Emmanuel Flachaire, is now available on http://papers.ssrn.com/id=2514882,
Standard kernel density estimation methods are very often used in practice to estimate density function. It works well in numerous cases. However, it is known not to work so well with skewed, multimodal and heavy-tailed distributions. Such features are usual with income distributions, defined over the positive support. We first show that a preliminary logarithmic transformation of the data, combined with standard kernel density estimation methods, can provide a much better fit of the overall density estimation. Then, we show that the fit of the bottom of the distribution may not be satisfactory, even if a better fit of the upper tail can be obtained in general.
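The first step of the paper is easy to sketch (this is my own illustration in Python, not the paper's code): estimate the density of $Z=\log X$ with a Gaussian kernel, then map it back with the change-of-variables Jacobian, $f_X(x)=f_Z(\log x)/x$, here on simulated log-normal "incomes".

```python
import math, random

random.seed(1)
data = [math.exp(random.gauss(0, 1)) for _ in range(2000)]   # skewed, positive

logs = [math.log(v) for v in data]
n = len(logs)
m = sum(logs) / n
sd = math.sqrt(sum((z - m) ** 2 for z in logs) / (n - 1))
h = 1.06 * sd * n ** (-1 / 5)        # Silverman's rule, on the log scale

def kde_log(x):
    """f_X(x) = f_Z(log x) / x, with f_Z a Gaussian KDE of the logs."""
    z = math.log(x)
    fz = sum(math.exp(-0.5 * ((z - zi) / h) ** 2) for zi in logs)
    fz /= n * h * math.sqrt(2 * math.pi)
    return fz / x

print(kde_log(1.0))   # near the true log-normal density at 1, 1/sqrt(2*pi) ~ 0.40
```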
# Somewhere else, part 177
Some posts and articles worth reading, here and there
# Likelihood-based tests – score
Another interesting quantity is the score, which is the derivative of the likelihood. Intuitively (this is the idea behind the first-order condition), $\widehat\theta_n$ and $\theta_0$ will be close if the derivatives at these points are close. At $\widehat\theta_n$ the derivative is zero, so here we will simply ask whether the derivative at $\theta_0$ is close to 0. Or not.
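With the coin-toss sample used in the companion Wald post (11 heads out of 20), the score test takes three lines; a Python sketch (my transcription):

```python
heads, n, p0 = 11, 20, 0.5

# score U(p) = d logL/dp = heads/p - (n-heads)/(1-p); Fisher info I(p) = n/(p(1-p))
U = heads / p0 - (n - heads) / (1 - p0)
I = n / (p0 * (1 - p0))

# Rao's score statistic U(p0)^2 / I(p0), ~ chi2(1) under H0
S = U ** 2 / I
print(S)   # 0.2 -> the derivative at p0 is nearly 0, H0 is not rejected
```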
# Likelihood-based tests – likelihood ratio
A promise is a promise. I had said we would talk about the likelihood ratio test. The idea (a visual one) is to read things the other way around: instead of asking whether $\widehat\theta_n$ and $\theta_0$ are close, we will ask whether $\log\mathcal{L}(\widehat\theta_n)$ and $\log\mathcal{L}(\theta_0)$ are close.
If the likelihood function is sufficiently regular, this amounts to asking the same question.
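With the coin-toss sample of the companion posts (11 heads out of 20, testing $\theta_0=1/2$), comparing the two log-likelihood values takes a few lines of Python (my transcription):

```python
import math

heads, n, p0 = 11, 20, 0.5
p_hat = heads / n                      # maximum likelihood estimate

def loglik(p):
    return heads * math.log(p) + (n - heads) * math.log(1 - p)

# likelihood ratio statistic 2*(logL(p_hat) - logL(p0)), ~ chi2(1) under H0
LR = 2 * (loglik(p_hat) - loglik(p0))
print(round(LR, 3))   # 0.2 -> the two log-likelihoods are close, H0 not rejected
```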
# Somewhere else, part 176
yes, extreme winds cause a waterfall (in England) to blow upward
# Likelihood-based tests – Wald
A small preliminary reminder. If we have a sample $X_i$, i.i.d. with distribution $F_{\boldsymbol{\theta}_\star}$, where the parameter $\boldsymbol{\theta}_\star$ is unknown, then we can compute the maximum of the likelihood, which gives us an interesting estimator (cf. any statistics course). Let us illustrate this with a game of heads or tails. Take again the sample from my previous post,
> X=c(0, 0, 1, 1, 0, 1, 1, 1, 1, 0,
0, 0, 1, 0, 1, 0, 1, 1, 0, 1)
Here we can plot the likelihood function
$\theta\mapsto \mathcal{L}(\theta,\mathcal{X})=\theta^{\sum X_i}[1-\theta]^{n-\sum X_i}$
or (a little more cleverly) the log-likelihood function
$\theta\mapsto \log\mathcal{L}(\theta,\mathcal{X})$
> p=seq(0,1,by=.01)
> logL=function(p)
+ {sum(log(dbinom(X,size=1,prob=p)))}
> plot(p,Vectorize(logL)(p),
+ type="l",col="red",lwd=2)
> p0=.5
> abline(v=p0,col="blue")
The value marked by the blue line corresponds to the case we will seek to test, that of an unbiased coin, namely $\theta_0$ (quite generally)
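In Python rather than R, the Wald statistic for this sample (11 heads out of 20) comes out as follows (my transcription, using the asymptotic variance $\widehat\theta_n(1-\widehat\theta_n)/n$):

```python
X = [0,0,1,1,0,1,1,1,1,0,0,0,1,0,1,0,1,1,0,1]   # the sample from the post
n, p0 = len(X), 0.5

p_hat = sum(X) / n                                # MLE of theta

# Wald statistic (p_hat - p0)^2 / Var(p_hat), ~ chi2(1) under H0
W = (p_hat - p0) ** 2 * n / (p_hat * (1 - p_hat))
print(p_hat, round(W, 3))   # 0.55 0.202, well below the 5% chi2(1) cutoff 3.84
```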
# Tests and logic, modus tollens
When you learn logic, you learn the notion of modus tollens, which corresponds to a notion of contraposition. If we have a proposition of the form $A \Rightarrow B$, then the contrapositive proposition is $\text{not }B \Rightarrow \text{not }A$. And we learn that the two propositions are equivalent (I am doing classical logic here). For example, if $A$ corresponds to "fire" and $B$ to "smoke", $A \Rightarrow B$ means that every fire produces smoke. If this statement is true, then there is no smoke without fire, and so $\text{not }B \Rightarrow \text{not }A$; in other words, if there is no smoke, it is because there is no fire.
# The chi-squared test and independence
Consider the following contingency table
> N=margin.table(HairEyeColor, c(1,2))
> N
Eye
Hair Brown Blue Hazel Green
Black 68 20 15 5
Brown 119 84 54 29
Red 26 17 14 14
Blond 7 94 10 16
with the counts here, which can also be expressed as (empirical) joint probabilities
> N/n
Eye
Hair Brown Blue Hazel Green
Black 0.115 0.034 0.025 0.008
Brown 0.201 0.142 0.091 0.049
Red 0.044 0.029 0.024 0.024
Blond 0.012 0.159 0.017 0.027
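From these counts the chi-squared independence statistic can be recomputed from scratch; a Python sketch of what R's chisq.test(N) does under the hood:

```python
# HairEyeColor counts; columns are Brown, Blue, Hazel, Green eyes
rows = [
    [68, 20, 15, 5],      # Black hair
    [119, 84, 54, 29],    # Brown hair
    [26, 17, 14, 14],     # Red hair
    [7, 94, 10, 16],      # Blond hair
]
row_tot = [sum(r) for r in rows]
col_tot = [sum(c) for c in zip(*rows)]
n = sum(row_tot)

# Pearson statistic: sum of (observed - expected)^2 / expected
chi2 = sum((rows[i][j] - row_tot[i] * col_tot[j] / n) ** 2
           / (row_tot[i] * col_tot[j] / n)
           for i in range(4) for j in range(4))
df = (len(rows) - 1) * (len(rows[0]) - 1)
print(round(chi2, 2), df)   # about 138.29 with df = 9: independence is clearly rejected
```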
# Kernel Density Estimation with Ripley’s Circumferential Correction
The revised version of the paper Kernel Density Estimation with Ripley’s Circumferential Correction is now online, on hal.archives-ouvertes.fr/.
In this paper, we investigate (and extend) Ripley’s circumference method to correct bias of density estimation of edges (or frontiers) of regions. The idea of the method was theoretical and difficult to implement. We provide a simple technique — based of properties of Gaussian kernels — to efficiently compute weights to correct border bias on frontiers of the region of interest, with an automatic selection of an optimal radius for the method. We illustrate the use of that technique to visualize hot spots of car accidents and campsite locations, as well as location of bike thefts.
There are new applications, and new graphs, too
Most of the codes can be found on https://github.com/ripleyCorr/Kernel_density_ripley (as well as datasets).
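The border bias the paper corrects is easy to exhibit in one dimension. The sketch below is a generic local-renormalization correction (not the paper's Ripley-based weights): dividing a Gaussian KDE by the kernel mass that falls inside the support roughly restores the density at the edge, where the naive estimator returns about half the true value.

```python
import math, random

random.seed(7)
data = [random.random() for _ in range(5000)]   # sample from Uniform(0, 1)
h = 0.05                                        # kernel bandwidth

def Phi(t):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def kde(x, correct=True):
    s = sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)
    f = s / (len(data) * h * math.sqrt(2 * math.pi))
    if correct:
        # divide by the kernel mass falling inside the support [0, 1]
        f /= Phi((1 - x) / h) - Phi(-x / h)
    return f

# the true density is 1 on [0, 1]; at the edge the naive estimate drops
# to about 0.5, while the corrected one stays near 1
print(round(kde(0.0, correct=False), 2), round(kde(0.0, correct=True), 2))
```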
# Somewhere else, part 175
(Halloween Tree, by Glen Brogan)
Some posts and articles worth reading, here and there
A bank’s earnings are a quantum event; they are entirely probabilistic, and the answer you get depends on who’s doing the observing. You make some guesses with some degree of statistical likelihood, and then you apply one of a half-dozen accounting regimes to the guesses, and you get a number, and then you’re like, ooh, look at this number, it’s so numeric.
# Removing Uncited References in a Tex File (with R)
Last week, with @3wen, we were working on the revised version of our work on smoothing densities of spatial processes (with edge correction). Usually, once you have revised the paper, some references have been added, others dropped. But you need to spend some time checking that all references are actually mentioned in the paper. For instance, consider the following compiled tex file : | 2018-03-20 10:07:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.752854585647583, "perplexity": 6246.925625551363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647327.52/warc/CC-MAIN-20180320091830-20180320111830-00445.warc.gz"}
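A minimal regex-based version of that check in Python (assuming plain \cite{...} and \bibitem{...} usage; the keys below are made up):

```python
import re

tex = r"""
\cite{smith2001} and \cite{dupont, martin}
\begin{thebibliography}{9}
\bibitem{smith2001} ...
\bibitem{martin} ...
\bibitem{unused99} ...
\end{thebibliography}
"""

# collect every key that appears inside a \cite{...}
cited = set()
for group in re.findall(r"\\cite\{([^}]*)\}", tex):
    cited.update(k.strip() for k in group.split(","))

# collect every declared \bibitem key
declared = set(re.findall(r"\\bibitem\{([^}]*)\}", tex))

print(sorted(declared - cited))   # ['unused99'] -> safe to drop
```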
https://math.stackexchange.com/questions/1278560/features-of-phase-and-magnitude-spectrum | # Features of phase and magnitude spectrum?
I have read in many books that, whether the signal is 1D or multidimensional,
1. The magnitude spectrum tells you how strong the harmonics in the signal are
and
2. The phase spectrum tells you where each harmonic lies in the time domain for a 1D signal (and in the space domain in the multidimensional case)
But I didn't find any justification or explanation for those statements. I want to cross-check (and understand) these claims about the phase spectrum and the magnitude spectrum. Can anybody help with this?
• These are complex images with no peculiar characteristic such as dominant frequencies, and they are fairly isotropic. All you can observe is the decay rate of the amplitude, similar in both cases. And there's nothing you can interpret in the phase image. – Yves Daoust May 12 '15 at 15:49
• @Yves Daoust sir, I have edited the question; can you give an answer to it with the help of mathematics or in any other way? – pandu May 12 '15 at 16:14
• This question is too broad and is better addressed by the numerous freely available texts on the matter. – AnonSubmitter85 May 12 '15 at 17:17
• This is not correct. Phase can indeed tell you something about localization. For example, in $e^{j2\pi f \alpha + j\pi \beta f^2}$, $\alpha$ will tell you where and $\beta$ will tell you how wide (or how localized). – AnonSubmitter85 May 12 '15 at 17:46
• The sign of the phase doesn't matter and it doesn't have to be windowed (anymore than an image is already windowed given its finite extent). You can work out the IFT of the above example using Fresnel integrals and you'll see that the extent is directly proportional to $\beta$. Perhaps I am reading things differently, but the question is asking about information from the phase of the transform, which implies the phase of all frequency samples in totality and not just a single value. Either way, the question is a poor one to begin with. It's too general and better addressed by a text book. – AnonSubmitter85 May 12 '15 at 18:31
If any (additive) part of the signal $f(x)$, with Fourier transform $$F\{f(x)\}(\omega)$$ is shifted to $f(x+x_0)$, it will have Fourier transform : $$e^{-i x_0\omega}F\{f(x)\}(\omega)$$ So every coefficient $\omega$ has its phase altered by the linear factor inside that exponential function. | 2019-07-17 16:53:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6474749445915222, "perplexity": 523.4146411695252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525355.54/warc/CC-MAIN-20190717161703-20190717183703-00407.warc.gz"}
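The statement is easy to verify on a discrete signal: circularly shifting a sequence multiplies each DFT coefficient by a unit-modulus, linear-in-$\omega$ phase factor, and leaves the magnitude spectrum untouched. A small self-contained check in Python:

```python
import cmath

N = 8
f = [0, 1, 2, 1, 0, 0, 0, 0]
x0 = 3
g = [f[(i - x0) % N] for i in range(N)]      # f delayed by x0 (circularly)

def dft(x):
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * w * t / N) for t in range(N))
            for w in range(N)]

F, G = dft(f), dft(g)
for w in range(N):
    expected = F[w] * cmath.exp(-2j * cmath.pi * w * x0 / N)   # linear phase
    assert abs(G[w] - expected) < 1e-9
    assert abs(abs(G[w]) - abs(F[w])) < 1e-9                   # same magnitude
print("shift changed only the phase")
```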
https://www.esaral.com/q/simplify-the-following-expressions-69816/ | Simplify the following expressions:
Question:
Simplify the following expressions:
(i) $(11+\sqrt{11})(11-\sqrt{11})$
(ii) $(5+\sqrt{7})(5-\sqrt{7})$
(iii) $(\sqrt{8}-\sqrt{2})(\sqrt{8}+\sqrt{2})$
(iv) $(3+\sqrt{3})(3-\sqrt{3})$
(v) $(\sqrt{5}-\sqrt{2})(\sqrt{5}+\sqrt{2})$
Solution:
(i) $(11+\sqrt{11})(11-\sqrt{11})$
As we know, $(a+b)(a-b)=\left(a^{2}-b^{2}\right)$
So, $11^{2}-11$
$121-11=110$
(ii) $(5+\sqrt{7})(5-\sqrt{7})$
As we know, $(a+b)(a-b)=\left(a^{2}-b^{2}\right)$
So, $5^{2}-7$
$25-7=18$
(iii) $(\sqrt{8}-\sqrt{2})(\sqrt{8}+\sqrt{2})$
As we know, $(a+b)(a-b)=\left(a^{2}-b^{2}\right)$
$\sqrt{8 \times 8}-\sqrt{2 \times 2}=8-2$
$=6$
(iv) $(3+\sqrt{3})(3-\sqrt{3})$
As we know, $(a+b)(a-b)=\left(a^{2}-b^{2}\right)$
$=9-\sqrt{3 \times 3}$
$=9-3$
$=6$
(v) $(\sqrt{5}-\sqrt{2})(\sqrt{5}+\sqrt{2})$
As we know, $(a+b)(a-b)=\left(a^{2}-b^{2}\right)$
$=\sqrt{5 \times 5}-\sqrt{2 \times 2}$
$=5-2$
$=3$ | 2022-01-17 21:38:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9970641136169434, "perplexity": 7989.294956628362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300624.10/warc/CC-MAIN-20220117212242-20220118002242-00458.warc.gz"} |
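All five parts follow the same identity $(a+b)(a-b)=a^{2}-b^{2}$, which a short numerical check confirms:

```python
import math

# (a, b, expected value of a^2 - b^2) for parts (i)-(v)
cases = [
    (11, math.sqrt(11), 110),
    (5, math.sqrt(7), 18),
    (math.sqrt(8), math.sqrt(2), 6),
    (3, math.sqrt(3), 6),
    (math.sqrt(5), math.sqrt(2), 3),
]
for a, b, expected in cases:
    assert abs((a + b) * (a - b) - expected) < 1e-9
print("all five products check out")
```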
http://www.tcl.tk/cgi-bin/tct/tip/view/168?ver=1.3 | # TIP #168 Version 1.3: Cubic Bezier Curves on the Canvas
This is not necessarily the current version of this TIP.
TIP: 168 Title: Cubic Bezier Curves on the Canvas Version: $Revision: 1.3$ Author: Lars Hellström State: Draft Type: Project Tcl-Version: 8.5 Vote: Pending Created: Sunday, 25 January 2004
## Abstract
This document proposes a new -smooth method for line and polygon canvas items that supports cubic Bezier curves and clarifies some of the existing terminology in that area.
## Proposal
A new method for the -smooth canvas item option will be defined. Under this method, the points defining the item will be interpreted as a sequence knot-point control-point control-point knot-point control-point control-point ... of a curve composed of cubic Bezier segments. More precisely, if the list of coordinates is
a_0 b_0 a_1 b_1 a_2 b_2 ...
then the Nth (counting from zero) segment of the curve consists of points whose coordinates (x,y) satisfies
x = a_{3N} (1-t)^3 + 3 a_{3N+1} t (1-t)^2 + 3 a_{3N+2} t^2 (1-t) + a_{3N+3} t^3,
y = b_{3N} (1-t)^3 + 3 b_{3N+1} t (1-t)^2 + 3 b_{3N+2} t^2 (1-t) + b_{3N+3} t^3
for some value of t between 0 and 1, inclusive. If there are 3N+1 points then the above defines an N segment curve. In the case that the number of points is 3N or 3N-1 then they shall still define an N segment curve, where in the first case the first knot of the first segment is reused as the last knot in the last segment, and in the second case the first knot and control point in the first segment are reused as the last control point and knot in the last segment respectively.
Straight line segments in the curve can be encoded as a segment where the control points are equal to the neighbouring knot points. While this is not the only way to encode a straight line, it is a case that is recognised and handled more efficiently by code that renders the canvas item.
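For reference, the segment formula is easy to evaluate outside Tk; a Python sketch (the function name is mine), including a check of the straight-line encoding just described, where both control points coincide with the neighbouring knots:

```python
def raw_point(coords, seg, t):
    """Point on cubic Bezier segment `seg` of a flat list a0 b0 a1 b1 ...,
    using the TIP's formula; indices wrap so the 3N / 3N-1 closing rules work."""
    npts = len(coords) // 2
    def pt(i):
        i %= npts
        return coords[2 * i], coords[2 * i + 1]
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = (pt(3 * seg + k) for k in range(4))
    u = 1 - t
    x = x0 * u**3 + 3 * x1 * t * u**2 + 3 * x2 * t**2 * u + x3 * t**3
    y = y0 * u**3 + 3 * y1 * t * u**2 + 3 * y2 * t**2 * u + y3 * t**3
    return x, y

# straight-line encoding: control points equal to the neighbouring knots
line = [0, 0, 0, 0, 10, 10, 10, 10]
# endpoints and midpoint of the encoded straight line
print(raw_point(line, 0, 0.0), raw_point(line, 0, 0.5), raw_point(line, 0, 1.0))
```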
The name of this new method should be "raw".
The name of the existing -smooth method (as returned by the itemcget widget command) should be changed from "bezier" to "true", and the name "bezier", while at least in Tcl 8.5 still supported, should be deprecated.
## Rationale
Cubic Bezier curves, being for example the native curve format in Postscript and its descendants, are probably the most common format for smooth curves in computing today. They are even used internally in Tk; for each segment of a -smooth 1 curve, rendering starts with the calculation of a cubic Bezier representation of that curve and continues to use only this representation when approximating the smooth curve with straight line segments. Hence it might be claimed that the cubic Bezier curve is the "raw" format of a smooth curve in Tk. No new calculations need to be implemented in the core to implement this TIP; it is merely a matter of combining existing functions in a suitable way and moving data around. Therefore it seems a waste not to provide cubic Bezier curves, when they are anyway already half implemented.
The reason for the interpretation of a curve with 3N points given is that this will cause the curve to be closed. Conversely, omitting the final knot point is sometimes used as a way of encoding the fact that the curve should be closed. This rule will therefore facilitate the use of data where the omitted-endpoint convention has been employed. The only reason for the 3N-1 point rule is that it fits a simple scheme (when at the end of the list of coordinates, continue from the start) that supports the 3N and 3N+1 cases.
The reason for deprecating the name "bezier" for the traditional smoothing method is that it is at best confusing, and according to many authorities simply wrong. The term "Bezier curve" is very often used as a synonym of "cubic Bezier curve", whence the majority of programmers new to this feature of the canvas widget would probably expect "-smooth bezier" to imply the effect of the smoothing option proposed in this document rather than the smoothing via quadratic splines that it currently is. The amount of disappointment that could result from the unpleasant discovery that what one thought was the former is really the latter should not be underestimated.
The reasons for changing the official name of the traditional smoothing method to "true" are (i) that it is backwards compatible in the sense that this name works for specifying that smoothing method in all Tk versions and (ii) that it is somewhat mnemonic, because it happens to coincide with the format used for curves in TrueType fonts.
## Background
The question of what may rightfully be called "bezier" is somewhat complicated and deserves expounding upon. It really begins with Bernstein; the Bernstein degree n form of a polynomial f is
f(t) = \sum_{k=0}^n a_k \binom{n}{k} t^k (1-t)^{n-k}
[this is LaTeX code, in case anyone wonders]. One advantage this form has over the standard form is that the coefficients a_0, ..., a_n are directly comparable to the function values f(0), f(1/n), f(2/n), ..., f(1); the two are generally not equal (with the exception for the endpoints of the [0,1] interval), but the function values approximates the sequence of coefficients in various useful ways. (Bernstein used it to give an elegant proof of the Weierstrass Approximation Theorem.)
A Bezier curve (or Bernstein-Bezier curve, as it is sometimes called) of degree n is a parametric curve P defined by a sequence of n+1 points P_0, ..., P_n (known as the control points of the curve) where each coordinate function is the Bernstein polynomial one gets by taking as a_k the corresponding coordinate of the point P_k and parametric time goes from 0 to 1; formally
P(t) = \sum_{k=0}^n \binom{n}{k} t^k (1-t)^{n-k} P_k
for t between 0 and 1 inclusive. Higher degree Bezier curves are not used much in computer graphics (probably because the effect on the curve of moving an single control point is often not intuitively clear) but they do exist and it is not illogical to expect that a
$canvas create line $points -smooth bezier
should be the degree [expr {[llength \$points]/2 - 1}] Bezier curve defined by the given points.
Another term which often occurs when discussing computer graphic curves is "spline". A spline is a curve that passes through a set of given points (the knots of the curve) in a given order, satisfies some smoothness condition, and in some sense is best possible under these conditions. The most common optimality condition is that the curve should be composed from segments that can be parameterized by polynomials of given degree, but there are implicit conditions (minimizing some suitable measure of curve deformation) which lead to the same family of curves.
The -smooth 1 curves of the Tk canvas are splines (of degree 2) in this sense, even though the points used for defining them are (with the exception for endpoints) not the knots of the spline. (Rather, the internal knots are the midpoints of the line segments joining two adjacent control point.) The "raw" curves proposed here are in general not splines (because they admit discontinuous changes in tangent direction, thus violating the smoothness condition), but they often serve as an encoding for pre-computed degree 3 splines and this use has lead to a confusion in terminology in this area. It is not uncommon that piecewise cubic Bezier curves in general are referred to as "cubic splines", even though that is a more special concept. It may also be observed that the endpoints of the Bezier segments are usually referred to as knots of the curve, whereas the term "control points" is often reserved for the non-knot control points. This rathe | 2013-05-19 04:29:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6279007792472839, "perplexity": 744.1655366437554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": 
"s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383259/warc/CC-MAIN-20130516092623-00007-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://www.preprints.org/manuscript/202012.0420/v2 | Preprint Article Version 2 Preserved in Portico This version is not peer-reviewed
# Small Symmetrical Deformation of Thin Torus with Circular Cross-Section
Version 1 : Received: 15 December 2020 / Approved: 17 December 2020 / Online: 17 December 2020 (09:07:55 CET)
Version 2 : Received: 27 February 2021 / Approved: 2 March 2021 / Online: 2 March 2021 (09:22:01 CET)
Version 3 : Received: 18 April 2021 / Approved: 19 April 2021 / Online: 19 April 2021 (12:03:23 CEST)
A peer-reviewed article of this Preprint also exists.
Bohua Sun, Small symmetrical deformation of thin torus with circular cross-section, Thin-Walled Structures 163 (2021) 107680 Bohua Sun, Small symmetrical deformation of thin torus with circular cross-section, Thin-Walled Structures 163 (2021) 107680
Journal reference: Thin-Walled Structures 2021, 163, 107680
DOI: 10.1016/j.tws.2021.107680
## Abstract
By introducing a variable transformation $\xi=\frac{1}{2}(\sin \theta+1)$, a complex-form ordinary differential equation (ODE) for the small symmetrical deformation of an elastic torus is successfully transformed into the well-known Heun's ODE, whose exact solution is obtained in terms of Heun's functions. To overcome the computational difficulties of the complex-form ODE in dealing with boundary conditions, a real-form ODE system is proposed. A general code of numerical solution of the real-form ODE is written by using Maple. Some numerical studies are carried out and verified by both finite element analysis and H. Reissner's formulation. Our investigations show that both deformation and stress response of an elastic torus are sensitive to the radius ratio, and suggest that the analysis of a torus should be done by using the bending theory of a shell.
## Keywords
toroidal shell; deformation; Gauss curvature; Heun function; hypergeometric function; Maple
Comment 1
Commenter: Bohua Sun
Commenter's Conflict of Interests: Author
Views 0 | 2021-09-24 11:14:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5536698698997498, "perplexity": 4022.266032344826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057524.58/warc/CC-MAIN-20210924110455-20210924140455-00058.warc.gz"} |
https://iacr.org/cryptodb/data/paper.php?pubkey=2854 | CryptoDB
Paper: An Efficient Protocol for Secure Two-Party Computation in the Presence of Malicious Adversaries
Authors: Yehuda Lindell Benny Pinkas DOI: 10.1007/978-3-540-72540-4_4 URL: https://iacr.org/archive/eurocrypt2007/45150052/45150052.pdf Search ePrint Search Google EUROCRYPT 2007
BibTeX
@inproceedings{eurocrypt-2007-2854,
title={An Efficient Protocol for Secure Two-Party Computation in the Presence of Malicious Adversaries},
booktitle={Advances in Cryptology - EUROCRYPT 2007, 26th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Barcelona, Spain, May 20-24, 2007, Proceedings},
series={Lecture Notes in Computer Science},
publisher={Springer},
volume={4515},
pages={52-78},
url={https://iacr.org/archive/eurocrypt2007/45150052/45150052.pdf},
doi={10.1007/978-3-540-72540-4_4},
author={Yehuda Lindell and Benny Pinkas},
year=2007
} | 2019-11-19 09:28:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19027140736579895, "perplexity": 9353.022814381422}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670036.23/warc/CC-MAIN-20191119070311-20191119094311-00272.warc.gz"} |
https://mathematica.stackexchange.com/questions/183620/differential-equation-and-equation-with-parameter | # Differential equation and equation with parameter
I have the equations
x[t] == Exp[m t] && (2 t + 1) x''[t] + 2 (2 t - 1) x'[t] - 8 x[t] == 0
Please tell me which function to use to solve it: I need to find the parameter m for which the first equation solves the second.
I used DSolve on the second equation and found m = -2, but maybe there is another way.
• Simplify[DSolve[{(2 t+1)x''[t]+2(2 t-1)x'[t]-8 x[t]==0, x[0]==a, x'[0]==b}, x[t], t]] then compare that with x[t]==Exp[m*t] to see what initial conditions can give you that. – Bill Oct 11 '18 at 18:55
• Thanks, but in my exercise I have to first find the parameter m and then solve the differential equation; I thought my teacher might not like the other way. – Ben Oct 11 '18 at 19:01
• Hummm.. That's a weird one. How about you assign f=Exp[m*t] without having assigned any prior value to m or t. Then you use the D function to find D[f,t] and D[f,{t,2}] and then you substitute all those into your (2 t+1) x''[t]+2(2 t-1)x'[t]-8 x[t]==0 Then could you think how you might use Solve on that? – Bill Oct 11 '18 at 20:21
• I don't know how to use Solve, but I used the help and came up with something like this: fun[t_] := Exp[m t], Dfun = D[fun[t], t], DDfun = D[Dfun, t], form = ForAll[{t}, (2 t + 1) DDfun + 2 (2 t - 1) Dfun - 8 fun[t] == 0], Resolve[form, Reals]. Thanks to all of you – Ben Oct 11 '18 at 20:54
• Interesting method you found. Solve[(2 t+1) DDfun+2 (2 t-1) Dfun-8 fun[t]==0,m] Then carefully check any result from MMA to make certain it is a real solution and not just a glitch. – Bill Oct 11 '18 at 21:56 | 2019-06-19 20:12:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5181463956832886, "perplexity": 2506.6571085745727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999040.56/warc/CC-MAIN-20190619184037-20190619210037-00068.warc.gz"} |
https://komal.elte.hu/verseny/2002-03/A.e.shtml | Mathematical and Physical Journal
for High Schools
Issued by the MATFUND Foundation
# Solutions for advanced problems "A" in March, 2002
On this page only sketches of the solutions are published; in some cases only the final results. To achieve the maximum score in the competition, more detailed solutions are needed.
A. 287. Someone has drawn an ellipse on a sheet of paper, and he has also marked the endpoints of its axes. The major axis is twice as long as the minor axis. Find a construction method for dividing any given acute angle into three equal parts, using a pair of compasses, a straight edge, and the given ellipse only.
Solution. Let $3\alpha$ be the angle to be trisected, and let the equation of the ellipse in rectangular coordinates be $x^2+4y^2=1$.
Let $A_i=(\cos(\alpha+(i-1)\cdot120^\circ);\sin(\alpha+(i-1)\cdot120^\circ))$ and $B_i=(\cos(\alpha+(i-1)\cdot120^\circ);{1\over2}\sin(\alpha+(i-1)\cdot120^\circ))$ ($i=1,2,3$). The points $A_i$ are on the unit circle, and the points $B_i$ are on the ellipse. The task is to construct these points.
Consider the circle passing through the points $B_i$. The circle also intersects the ellipse at a fourth point $C$. With a little calculation, it can be checked that the centre of the circle is $K=\left({3\over16}\cos3\alpha;{3\over8}\sin3\alpha\right)$ and that $C=(\cos3\alpha;{1\over2}\sin3\alpha)$. For a given $3\alpha$, the circle can be constructed, and it intersects the ellipse at the points $B_i$.
A. 288. Let $p$ be a polynomial of degree $n$, where $n\ge1$. Prove that there exist at least $n+1$ complex numbers at which $p$ takes the value 0 or 1. (IMC 7, London, 2000)
Solution. Let $z_1,\dots,z_k$ be the complex numbers where the value of $p$ is 0 or 1. Let $\mu_j$ denote the multiplicity of $z_j$ as a root of the polynomial $p(z)$ or $p(z)-1$. Then the number $z_j$ is a root of $p'$ with multiplicity $\mu_j-1$. Counting multiplicities, the polynomials $p(z)$ and $p(z)-1$ together have $2n$ roots among the numbers $z_1,\dots,z_k$, and the polynomial $p'$ has at most $n-1$, thus
$$\mu_1+\dots+\mu_k=2n, \qquad (\mu_1-1)+\dots+(\mu_k-1)\le n-1.$$
By subtraction, we get $k\ge n+1$.
A. 289. Prove that if the function f is defined on the set of positive real numbers, its values are real, and f satisfies the equation
$\displaystyle f\left({x+y\over2}\right)+f\left({2xy\over x+y}\right)=f(x)+f(y)$
for all positive x,y, then
$\displaystyle 2f\big(\sqrt{xy}\big)=f(x)+f(y)$
for every pair of positive numbers $x$, $y$. (Miklós Schweitzer Memorial Competition, 2001)
Solution. Let a,b,c,d be positive real numbers. By applying the functional equation several times, we have
f(a)+f(b)+f(c)+f(d)=
$\displaystyle =f\left({a+b\over2}\right)+f\left({2ab\over a+b}\right)+f\left({c+d\over2}\right)+f\left({2cd\over c+d}\right)=$
$\displaystyle =f\left({{a+b\over2}+{c+d\over2}\over2}\right)+f\left({2{a+b\over2}\cdot{c+d\over2}\over{a+b\over2}+{c+d\over2}}\right)+$
$\displaystyle \qquad+f\left({{2ab\over a+b}+{2cd\over c+d}\over2}\right)+f\left({2{2ab\over a+b}\cdot{2cd\over c+d}\over{2ab\over a+b}+{2cd\over c+d}}\right)=$
$\displaystyle f\left({a+b+c+d\over4}\right)+f\left({(a+b)(c+d)\over a+b+c+d}\right)+$
$\displaystyle \qquad+f\left({abc+abd+acd+bcd\over(a+b)(c+d)}\right)+f\left({4abcd\over abc+abd+acd+bcd}\right).$
By repeating the above procedure with b and c interchanged, we get
(1) $\displaystyle f\left({(a+b)(c+d)\over a+b+c+d}\right)+f\left({abc+abd+acd+bcd\over(a+b)(c+d)}\right)=$
$\displaystyle \qquad=f\left({(a+c)(b+d)\over a+b+c+d}\right)+f\left({abc+abd+acd+bcd\over(a+c)(b+d)}\right).$
Let $a=c$, $b=a^2/d$ and $t={a\over b}+{b\over a}$. It is easy to check that
$\displaystyle {(a+b)(c+d)\over a+b+c+d}={abc+abd+acd+bcd\over(a+b)(c+d)}=a,$
$\displaystyle {(a+c)(b+d)\over a+b+c+d}=a\cdot{2t\over2+t},$
and finally
$\displaystyle {abc+abd+acd+bcd\over(a+c)(b+d)}=a\cdot{2+t\over2t}.$
Substitution of the results into (1) gives
(2) $\displaystyle 2f(a)=f\left(a\cdot{2t\over2+t}\right)+f\left(a\cdot{2+t\over2t}\right)$
$t$ can be any number not smaller than 2, and ${2t\over2+t}$ can be any number not smaller than 1. Thus for every pair of numbers $x\ge y$ there exist numbers $a$ and $t$ such that $a\cdot{2t\over2+t}=x$, $a\cdot{2+t\over2t}=y$ and $\sqrt{xy}=a$.
https://physics.stackexchange.com/questions/591810/what-is-brane-inflation-and-can-it-be-eternal | # What is brane inflation, and can it be eternal?
What is brane inflation, and how does it describe the inflation process? Can it be eternal?
Here is a nice overview of the basics of brane inflation https://arxiv.org/abs/hep-th/0610221 (see also https://arxiv.org/abs/hep-th/0105203).
Brane inflation is, roughly speaking, a paradigm that proposes to identify brane/anti-brane annihilation processes with certain types of cosmological inflation (typically hybrid ones).
Your question about the possibility of producing an eternal inflation scenario in the "brane inflation" paradigm cannot be answered unless you ask about a particular scenario within this paradigm. Despite the latter, I think it is safe to say that in basic scenarios eternal inflation is not possible. The minimum of the potential for a tachyon field in a $$D3$$/anti-$$D3$$-brane annihilation (in flat space) is reached at finite distances in moduli space; in other words, a stable vacuum for the pair is reached in finite time (see the review I attach).
• But in KKLT scenarios, eternal inflation occurs. Nov 5 '20 at 17:57
• That's not an established fact. In fact, many swampland criteria rule out the possibility of eternal inflation in the string theory landscape. See arxiv.org/abs/2008.07555 , arxiv.org/pdf/1905.05198.pdf, arxiv.org/abs/1909.11106. Although some loopholes on the latter arguments can be found, see arxiv.org/abs/1907.08943 and arxiv.org/abs/1807.11938. As I've said, it depends on what specific scenario you're asking for; in exactly the same way that happens in quantum field approaches to cosmological inflation. Nov 5 '20 at 23:21
• It is far from the fact that the swampland criteria is correct Nov 6 '20 at 6:15
• The point is that swampland criteria are much more well established than the KKLT and LV scenarios as full solutions of quantum gravity, see arxiv.org/pdf/1804.01120.pdf. Also recall that there is no single example of fully reliable eternal inflation within string theory, in contrast, all known (tens of billions) string theory compactifications obey most swampland criteria. Nov 6 '20 at 15:02
• To be fair, there is some pretty strong criticism of Swampland Criteria. The main argument is that the search for parameters is based on the "street lamp" principle. In addition, the cosmological constant is incompatible with the swamp and the quintessence models are poorly compatible. Nov 6 '20 at 16:34
In string cosmology, brane inflation is a mathematical realisation of cosmic inflation in the very early universe within the brane-world framework. It usually depends on how the brane inflationary universe originated; eternal inflation of the false-vacuum type is also possible. In that case, eternal inflation of the false-vacuum type inevitably happens due to the tunneling process.
• @Arman Armenpress, you can read the books on String Cosmology.
– user275163
Nov 5 '20 at 11:23
• What is Brane World? 5-dimensional spacetime with 4-dimensional branes embedded in it? Nov 5 '20 at 12:28
• No.............
– user275163
Nov 5 '20 at 13:01
• What then? Can you tell me in general terms? Nov 5 '20 at 14:27
• @SpinFoam Brane inflation is not restricted to brane-world scenarios, see arxiv.org/abs/hep-th/0601099 for an example within a KKLT-like scenario. Nov 5 '20 at 17:32 | 2021-09-23 12:57:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6376262307167053, "perplexity": 840.0722749784657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057421.82/warc/CC-MAIN-20210923104706-20210923134706-00485.warc.gz"} |
http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.111.053004 | # Synopsis: A “Magic Frequency” for Atomic Spectroscopy
When light is chosen with a special frequency, its absorption by a cloud of atoms is independent of their internal orientation.
Physicists use lasers to track the quantum evolution of atomic states, but interpreting the measurements can be tricky when the light absorption depends on the orientation of the atom’s internal angular momentum. Menachem Givon at Ben-Gurion University of the Negev, Israel, and his colleagues propose and demonstrate a method to monitor how many atoms are in a particular state, no matter how they are oriented.
Atoms can be in the same electronic state, yet differ in the alignment between the angular momenta of their nucleus and their electrons. And even atoms in the same such “hyperfine” state have different energies depending on how they align with a magnetic field, an effect called Zeeman splitting. In addition to having different peak absorption frequencies, when atoms in different Zeeman levels are illuminated with linearly polarized light at a particular angle, some levels absorb more light and some absorb less light than the average absorption, which is what is measured using unpolarized light. But in a cloud of moving atoms, the absorption peaks are broadened by the Doppler effect. At frequencies between these peaks, light is absorbed both by atoms with a higher transition frequency moving in one direction and by other atoms in the lower-frequency state moving in the opposite direction. If the frequency of light is tuned away from a level that absorbs more than the average absorption and toward one that absorbs less, at some intermediate frequency, the level will absorb just the average amount.
Givon and his colleagues realized that, for any particular motional broadening, this intermediate—or "magic"—frequency is the same for every Zeeman level, so the absorption just measures the total number of atoms in the hyperfine state. The researchers confirmed experimentally that when they illuminated a vapor of rubidium-87 atoms with light at the magic frequency, a well-known sloshing of atoms between different levels became invisible. – Don Monroe
### Synopsis: Skydiving Spins
Atom interferometry shows that the free-fall acceleration of rubidium atoms of opposite spin orientation is the same to within 1 part in 10 million. Read More » | 2016-07-25 10:06:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6166430711746216, "perplexity": 1457.0473273827397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824226.33/warc/CC-MAIN-20160723071024-00229-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://techwear.my.id/forecast-and-analysis-of-aircraft-passenger-satisfaction.html | August 10, 2022
In this section, we present a detailed introduction to the preliminary feature selection method RF-RFE (random forest-based recursive feature elimination) and various classification models used for passenger satisfaction in this study.
### Recursive feature elimination based on random forest
#### RF
The RF proposed by Breiman29 is a parallel ensemble algorithm based on decision trees. Because of its relatively good precision, robustness and ease of use, it has become one of the most popular machine learning methods. A single decision tree can change completely under small perturbations of the data, so it is not stable enough. RF reduces the variance of a single decision tree, improves the predictive performance of ordinary decision trees, and can provide importance measures for the variables, which substantially improves on the decision tree model.
RF uses a decision tree as the base learner to construct a bagging ensemble. Bagging is a parallel ensemble learning algorithm based on bootstrap sampling. Each sampled set is used to train a base learner, and these base learners are then combined. When combining the prediction outputs, simple voting is usually used for classification tasks.
Let the training set be $D=\left\{\left(x_1,y_1\right),\left(x_2,y_2\right),\dots,\left(x_n,y_n\right)\right\}$; the prediction result for a new sample is Eq. (1):
$$f\left( x \right) = \mathop{\mathrm{argmax}}\limits_{y \in \mathcal{Y}} \sum\limits_{t = 1}^{T} \mathbb{I}\left( h_t \left( x \right) = y \right)$$
(1)
where $\mathcal{Y}$ is the set of output categories, $h_t(x)$ is the prediction result of the $t$-th learner for the new sample $x$, and $\mathbb{I}(\cdot)$ is the indicator function.
RF introduces random attribute selection on top of bagging. Unlike a single decision tree, which selects the optimal attribute over all attributes when splitting, RF first randomly selects an attribute subset at each node of the decision tree and then selects the optimal attribute from that subset. Thus, on top of the sample perturbation brought by bagging, RF further introduces attribute perturbation, which increases the generalization performance of the ensemble. The algorithm description of RF is shown in Table 1.
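The bagging-and-voting scheme described above can be sketched in a few lines of Python. The one-feature threshold "stump" base learner and the toy data below are illustrative stand-ins (not the CART trees RF actually uses), but the bootstrap sampling and the majority vote of Eq. (1) are the real mechanics:

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    """Draw len(data) samples with replacement (the T* set in the algorithm)."""
    n = len(data)
    return [data[rng.randrange(n)] for _ in range(n)]

def majority_vote(predictions):
    """Eq. (1): the class receiving the most votes among the T learners."""
    return Counter(predictions).most_common(1)[0][0]

def train_stump(sample, feature, threshold):
    """Hypothetical base learner: predict the majority label on each side of a threshold."""
    left = [y for x, y in sample if x[feature] <= threshold]
    right = [y for x, y in sample if x[feature] > threshold]
    left_lbl = Counter(left).most_common(1)[0][0] if left else 0
    right_lbl = Counter(right).most_common(1)[0][0] if right else 1
    return lambda x: left_lbl if x[feature] <= threshold else right_lbl

rng = random.Random(0)
data = [((0.1,), 0), ((0.2,), 0), ((0.8,), 1), ((0.9,), 1)]
forest = [train_stump(bootstrap_sample(data, rng), 0, 0.5) for _ in range(25)]
print(majority_vote([h((0.15,)) for h in forest]))  # prints 0
```

Each stump sees a different bootstrap sample, which is what decorrelates the ensemble; a real RF would additionally restrict each split to a random attribute subset.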
#### Importance of RF characteristics
The importance measurement indicators based on RF include the mean decrease impurity (MDI) based on the Gini index and the mean decrease accuracy (MDA) based on OOB data30. This method uses the frequency of attributes in the RF decision tree to reflect the importance of features. This paper chooses the MDI method based on the Gini index to measure the importance of features.
When constructing the CART decision tree, RF takes the attribute with the largest Gini gain as the splitting attribute by calculating the Gini gain of all attributes of the node. The Gini index represents the probability that a randomly selected sample in the sample set is misclassified. Let $p_k$ be the proportion of class-$k$ samples; the calculation is Eq. (2):
$$Gini\left( p \right) = \sum\limits_{k = 1}^{K} p_k \left( 1 - p_k \right) = 1 - \sum\limits_{k = 1}^{K} p_k^2$$
(2)
The Gini gain obtained by dividing the data set according to attribute a is Eq. (3):
$$Gini\left( D,a \right) = Gini\left( D \right) - \sum\limits_{v = 1}^{V} \frac{\left| D^v \right|}{\left| D \right|}\,Gini\left( D^v \right)$$
(3)
where $V$ is the number of distinct values of attribute $a$, and $\left|D^v\right|$ is the number of samples in $D$ that take the $v$-th value of attribute $a$ ($\left|D\right|$ is the total number of samples).
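Eqs. (2) and (3) are easy to check numerically; a minimal Python sketch, with plain lists standing in for a data set:

```python
def gini(labels):
    """Eq. (2): Gini index 1 - sum_k p_k^2 over the class proportions p_k."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_gain(labels, values):
    """Eq. (3): Gini(D) minus the size-weighted Gini of each value group D^v."""
    n = len(labels)
    groups = {}
    for y, v in zip(labels, values):
        groups.setdefault(v, []).append(y)
    weighted = sum(len(g) / n * gini(g) for g in groups.values())
    return gini(labels) - weighted

print(gini([0, 0, 1, 1]))                              # 0.5
print(gini_gain([0, 0, 1, 1], ['a', 'a', 'b', 'b']))   # 0.5 (a pure split)
print(gini_gain([0, 0, 1, 1], ['a', 'b', 'a', 'b']))   # 0.0 (an uninformative split)
```

A split that separates the classes perfectly gains the full parent impurity; a split that mirrors the class mix in every branch gains nothing, which is exactly what the splitting criterion rewards.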
Based on the calculation of feature importance, the specific steps are as follows:
1. (1)
For each decision tree, let $A$ be the set of nodes at which feature $\alpha$ appears; the change in the Gini index produced by the branch at node $i$ is calculated as Eq. (4):

$$\Delta Gini = Gini\left( i \right) - Gini\left( l \right) - Gini\left( k \right)$$

(4)

where $Gini\left(l\right)$ and $Gini(k)$ are the Gini indices of the two new nodes after branching.
2. (2)
The importance of feature $\alpha$ in the tree is shown in Eq. (5):

$$IM_{\alpha} = \sum\limits_{a \in A} \Delta Gini$$
(5)
where $a$ ranges over the nodes at which feature $\alpha$ appears.
3. (3)
Suppose $n$ is the number of decision trees, and let $IM_{\alpha}^{(i)}$ denote the importance of feature $\alpha$ in the $i$-th tree; the importance of feature $\alpha$ over the whole forest is Eq. (6):

$$IMPORTANCE\left( \alpha \right) = \sum\limits_{i = 1}^{n} IM_{\alpha}^{(i)}$$
(6)
Then, normalize the importance of all features as in Eq. (7):

$$IM\left( \alpha \right) = \frac{IMPORTANCE\left( \alpha \right)}{\sum\nolimits_{i=1}^{c} IMPORTANCE\left( i \right)}$$
(7)
where c is the number of features.
4. (4)
The larger the value of $IM\left(\alpha\right)$, the more important the feature is for predicting the outcome, that is, the higher the importance of the feature.
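The normalization of Eq. (7) is a plain rescaling so that the importances sum to 1; a minimal sketch with hypothetical raw Gini-decrease sums (the feature names are illustrative, not from the study's data):

```python
def normalize_importances(importance_sums):
    """Eq. (7): scale per-feature Gini-decrease sums so they sum to 1."""
    total = sum(importance_sums.values())
    return {f: v / total for f, v in importance_sums.items()}

raw = {'seat comfort': 3.0, 'wifi': 1.0, 'food': 1.0}
print(normalize_importances(raw))  # seat comfort -> 0.6, wifi -> 0.2, food -> 0.2
```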
#### Recursive feature elimination based on RF
RF-RFE uses RF as the external learning algorithm for feature selection. It calculates the importance of the features in each round's feature subset and removes the feature with the lowest importance, recursively shrinking the feature set; the feature importances are re-estimated in each round of model training. Based on the selected feature sets, this study uses cross-validation to determine the feature set with the highest average classification accuracy. The algorithm flow chart is shown in Fig. 1.
The RF-RFE flow is as follows:
1. (1)
Bootstrap sampling is carried out from the training set T containing all samples to obtain a training sample set $$T^*$$ with a sample size of n. The decision tree is established by using $$T^*$$, and a total of b decision trees are generated by repeating this process;
2. (2)
The prediction results of the decision trees are combined by "voting", and the performance of the RF model is evaluated by classification accuracy using the fivefold cross-validation method;
3. (3)
Calculate and sort the importance $IM\left(\alpha\right)$ of each feature $\alpha$ in the feature set based on MDI;
4. (4)
Following sequential backward selection, delete the feature with the lowest importance and repeat steps 1–3 on the remaining feature subset until the subset is empty. According to the cross-validation results for each feature subset, the feature subset with the highest classification accuracy is determined.
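The backward-elimination loop of steps 1–4 can be sketched as below; the importance and score functions here are toy stand-ins for the RF MDI importances and the fivefold cross-validated accuracy, and the feature names are hypothetical:

```python
def rfe(features, importance_fn, score_fn):
    """Backward elimination: score the current subset, drop the least-important
    feature, and keep the subset that achieved the highest score overall."""
    best_subset, best_score = None, float('-inf')
    subset = list(features)
    while subset:
        score = score_fn(subset)          # stand-in for 5-fold CV accuracy
        if score > best_score:
            best_subset, best_score = list(subset), score
        imps = importance_fn(subset)      # stand-in for RF MDI importances
        subset.remove(min(subset, key=imps.get))
    return best_subset, best_score

# Toy stand-ins: fixed per-feature importance; the score rewards total
# importance but mildly penalizes subset size.
IMP = {'f1': 0.5, 'f2': 0.3, 'f3': 0.15, 'f4': 0.05}
imp_fn = lambda s: {f: IMP[f] for f in s}
score_fn = lambda s: sum(IMP[f] for f in s) - 0.07 * len(s)

best, score = rfe(['f1', 'f2', 'f3', 'f4'], imp_fn, score_fn)
print(best)   # ['f1', 'f2', 'f3']
```

Note that every subset visited along the elimination path is scored, so the method returns the best subset seen, not merely the last one standing.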
### Satisfaction prediction based on machine learning algorithm
According to whether the processed data are marked artificially, machine learning can be generally divided into supervised learning and unsupervised learning. Supervised learning data sets include initial training data and manually labeled objects. The machine learning algorithm learns from labeled training data sets, tries to find the pattern of object division, and takes labeled data as the final learning goal. Generally, the learning effect is good, but the acquisition cost of labeled data is high. Unsupervised learning processes unclassified and unlabeled sample set data without prior training, hoping to find the internal rules between the data through learning to obtain the structural characteristics of the sample data, but the learning efficiency is often low. The satisfaction status in this study is the data set label. In the training process, the supervised machine learning algorithm learns the corresponding relationship between features and labels and applies this relationship to the test set for prediction.
#### k-nearest neighbors (KNN)
KNN is a supervised learning algorithm. Because the training time overhead is zero, it is also representative of “lazy learning”31. K-nearest neighbor has been used as a nonparametric technique in statistical estimation and pattern recognition. The working principle is as follows: for a given new sample, find the K samples closest to the sample in the training set based on a certain distance measurement and take the number of categories with the largest number of K samples as the category of the new sample. The samples are not processed in the training stage, so it belongs to “lazy learning”. As shown in Fig. 2, if there are 3 squares, 2 circles and 1 triangle around a data point, it is considered that the data point may be square. The parameter K in KNN is the number of nearest neighbors in majority voting.
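A minimal KNN classifier along these lines (Euclidean distance, majority label among the k nearest training points); the toy training set mirrors the square/circle/triangle picture of Fig. 2:

```python
from collections import Counter

def knn_predict(train, x, k):
    """Majority label among the k training points closest to x (Euclidean)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    neighbours = sorted(train, key=lambda p: dist(p[0], x))[:k]
    return Counter(y for _, y in neighbours).most_common(1)[0][0]

train = [((0, 0), 'square'), ((0, 1), 'square'), ((1, 0), 'square'),
         ((5, 5), 'circle'), ((5, 6), 'circle'), ((9, 9), 'triangle')]
print(knn_predict(train, (0.5, 0.5), 3))  # 'square'
```

All the work happens at query time (sorting by distance), which is exactly the "lazy learning" property: there is no training phase at all.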
#### LR
LR is used to evaluate the relationship between a dependent variable and one or more independent variables, and the classification probability is obtained by using a logistic function32. It is a learning algorithm with a logistic function at its core. The logistic function compresses the output of the linear equation into (0, 1). It is defined as Eq. (8):
$$Logistic\left( z \right) = \frac{1}{1 + e^{-z}}$$
(8)
Consider the binary classification problem, given the data set $D=\left\{\left(x_1,y_1\right),\left(x_2,y_2\right),\dots,\left(x_N,y_N\right)\right\}$, $x_i\in\mathbb{R}^n$, $y_i\in\{0,1\}$, $i=1,2,\dots,N$.
Let $p$ be the probability that the sample is a positive example. LR estimates the coefficients $\beta_0,\beta_1,\cdots,\beta_k$ in the following formulas by the maximum likelihood method [Eqs. (9) and (10)]:
$$logit\left( p \right) = \log\left( \frac{p}{1-p} \right) = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k$$
(9)
$$p = \frac{\exp\left( \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k \right)}{1 + \exp\left( \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k \right)}$$
(10)
When $p$ is greater than the preset threshold, the sample is classified as a positive example, and vice versa.
$\frac{p}{1-p}$ is called the odds ratio (odds), which refers to the ratio of the probability that the event occurs to the probability that it does not. The logarithm of the odds is linear in the variables' coefficients. When the features have been standardized, the greater the absolute value of a coefficient, the more important the corresponding feature. If the coefficient is positive, the feature is positively correlated with the probability that the target value is 1; if the coefficient is negative, the feature is positively correlated with the probability that the target value is 0.
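Eqs. (8)–(10) and the thresholding rule can be sketched as follows; the coefficients below are hypothetical, not fitted by maximum likelihood:

```python
import math

def logistic(z):
    """Eq. (8): squash a linear score into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def lr_predict(beta, x, threshold=0.5):
    """Eqs. (9)-(10): linear score beta0 + beta . x, then threshold on p."""
    z = beta[0] + sum(b * xi for b, xi in zip(beta[1:], x))
    p = logistic(z)
    return (1 if p > threshold else 0), p

label, p = lr_predict([-1.0, 2.0], [1.5])   # z = -1 + 2 * 1.5 = 2
print(label, round(p, 3))  # 1 0.881
```

Moving the threshold away from 0.5 trades precision against recall without refitting the coefficients, which is one practical reason the probabilistic output is kept rather than only the hard label.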
#### Gaussian Naive Bayes (GNB)
Naive Bayes (NB) is a straightforward supervised machine learning algorithm33. The NB classifier is based on Bayes' theorem and predicts future outcomes according to previous experience. NB assumes that the input variables are conditionally independent given the class [Eq. (11)].

$$P\left( Y = y_k \mid X_1 , \ldots ,X_n \right) = \frac{P\left( Y = y_k \right)P\left( X_1 , \ldots ,X_n \mid Y = y_k \right)}{\sum\nolimits_j P\left(Y = y_j\right)P\left( X_1 , \ldots ,X_n \mid Y = y_j \right)} = \frac{P\left( Y = y_k \right)\prod\nolimits_i P\left( X_i \mid Y = y_k \right)}{\sum\nolimits_j P\left(Y = y_j\right)\prod\nolimits_i P\left( X_i \mid Y = y_j \right)}$$
(11)
where X is the input vector $$(X_1,X_2,\dots ,X_n)$$ and Y is the output category.
On the basis of NB, GNB further assumes that the class-conditional distribution of each feature is Gaussian, that is, the probability density function is Eq. (12):
$$P\left( x_i = x \mid Y = y_k \right) = \frac{1}{\sqrt{2\pi\delta_{ik}^2}}\, e^{-\frac{1}{2}\left( \frac{x - \mu_{ik}}{\delta_{ik}} \right)^2}$$
(12)
For a given test set sample $$\mathrmX=(\mathrmX_1,\mathrmX_2,\dots ,\mathrmX_\mathrmn)$$, calculate P [Eq. (13)]:
$$P\left( Y = y_k \right)\mathop \prod \limits_i P\left( X_i \mid Y = y_k \right),\quad k = 1,2, \ldots ,K$$
(13)
To determine the class of the sample y [Eq. (14)]:
$$y = \mathop{\mathrm{argmax}}\limits_{y_k} P\left( Y = y_k \right)\mathop \prod \limits_i P\left( X_i \mid Y = y_k \right)$$
(14)
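To make Eqs. (11)-(14) concrete, here is a minimal from-scratch sketch of my own (not the authors' implementation): estimate the class priors and the per-class feature means and variances, then classify with Eq. (14), computed as a sum of logarithms for numerical stability. The tiny variance floor is an assumption of mine, not something in the text, added to avoid division by zero.

```python
import math
from collections import defaultdict

def fit_gnb(X, y):
    """Estimate P(Y=y_k) and the Gaussian parameters (mu_ik, var_ik) per class."""
    groups = defaultdict(list)
    for xi, yi in zip(X, y):
        groups[yi].append(xi)
    params = {}
    for cls, rows in groups.items():
        n = len(rows)
        mus = [sum(col) / n for col in zip(*rows)]
        vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-9
                 for col, m in zip(zip(*rows), mus)]
        params[cls] = (n / len(y), mus, vars_)
    return params

def predict_gnb(params, x):
    """Eq. (14): argmax_k P(Y=y_k) * prod_i P(x_i | Y=y_k), via log probabilities."""
    best, best_lp = None, -math.inf
    for cls, (prior, mus, vars_) in params.items():
        lp = math.log(prior)
        for xi, m, v in zip(x, mus, vars_):
            # log of the Gaussian density from Eq. (12)
            lp += -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        if lp > best_lp:
            best, best_lp = cls, lp
    return best

# Tiny invented data set: two well-separated classes.
X = [[1.0, 2.0], [1.2, 1.9], [3.0, 4.1], [3.1, 3.9]]
y = [0, 0, 1, 1]
params = fit_gnb(X, y)
pred = predict_gnb(params, [1.1, 2.1])  # a point near the class-0 samples
```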
#### RF
The working principle of RF34 is to combine the results of many decision trees, as shown in Fig. 3. This strategy has better estimation performance than a single random tree: each decision tree has low bias but high variance, and aggregating the trees trades off overall bias against variance while also providing the importance of each predictor variable for the outcome variable. RF has good prediction performance in practical applications and can be used to address multiclass classification problems, categorical variables and sample imbalance problems.
#### Backpropagation neural network (BPNN)
BPNN is one of the most widely used neural network models and is trained with the classic error backpropagation algorithm35. Since the emergence of BPNNs, much research has addressed the selection of activation functions, the design of structural parameters and the mitigation of network defects. The main idea of the BP algorithm is to divide the learning process into two stages: forward transmission and reverse feedback. In the forward transmission stage, the input sample passes from the input layer through the hidden layer to the output layer, where an output signal is formed. In the backpropagation stage, error signals that do not meet the precision requirement are propagated backward layer by layer, and the weight matrices between neurons are corrected along the way. When the iteration termination condition is met, learning stops.
1. (1)
Forward transmission
First, X is the input vector of the sample, T is the corresponding output vector, m is the number of neural units in the input layer, and p is the number of nodes in the output layer:
$$\begin{aligned} X & = \left( x_1 , \ldots ,x_m \right) \\ T & = \left( T_1 , \ldots ,T_p \right) \end{aligned}$$
The forward-pass computation for each node is given by Eq. (15):
$$I_j = \mathop \sum \limits_{i = 1}^{m} w_{ij} x_i + \theta_j$$
(15)
where j indexes the nodes of the hidden layer, w is the weight matrix between the input layer nodes and the hidden layer nodes, $$\theta _j$$ is the threshold of node j, and the output value of node j is given by Eq. (16):
$$O_j = f\left( I_j \right)$$
(16)
where f is called the activation function, which is the processing of the input vector. The function can be linear or nonlinear.
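The forward pass of Eqs. (15)-(16) for one layer can be sketched directly; the sigmoid activation and all numbers below are my own assumptions, since the text does not fix a particular f:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward_layer(x, w, theta):
    """Eqs. (15)-(16): I_j = sum_i w_ij * x_i + theta_j, then O_j = f(I_j)."""
    outputs = []
    for j in range(len(theta)):
        I_j = sum(w[i][j] * x[i] for i in range(len(x))) + theta[j]
        outputs.append(sigmoid(I_j))
    return outputs

x = [0.5, -1.0]                 # m = 2 input units (invented values)
w = [[0.1, 0.4], [0.2, -0.3]]   # w[i][j]: weight from input i to hidden node j
theta = [0.0, 0.1]              # thresholds of the hidden nodes
O = forward_layer(x, w, theta)  # outputs O_j of the hidden layer
```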
1. (2)
Reverse feedback
Calculate the error between the true value of the sample and the output value of the sample. For a binary classification problem, two neural units are often used as the output layer; if the output value of the first neural unit is greater than that of the second, the sample is considered to belong to the first category (Eq. (17)):
$$E_i = O_i \left( 1 - O_i \right)\left( T_i - O_i \right)$$
(17)
The error of a hidden-layer node is the weighted sum of the errors of the nodes in the next layer (Eq. (18)):
$$E_j = O_j \left( 1 - O_j \right)\mathop \sum \limits_k E_k W_{jk}$$
(18)
where $$E_k$$ is the error of the k-th node of the next layer and $$W_{jk}$$ is the weight from the j-th node of the current layer to the k-th node of the next layer.
Update the weights and offsets, respectively (Eq. (19)):
$$\begin{aligned} W_{ij} & = W_{ij} + \Delta W_{ij} = W_{ij} + \lambda E_j O_i \\ \theta_j & = \theta_j + \Delta \theta_j = \theta_j + \lambda E_j \end{aligned}$$
(19)
where λ is the learning rate, with a value of 0–1. When the training reaches a certain number of iterations or the accuracy is higher than a certain value, the training is stopped.
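Continuing in the same spirit, the reverse-feedback equations (17)-(19) for a single training sample and a single output node can be sketched numerically; this is my own illustration, and all values (activations, weights, learning rate) are invented:

```python
# Toy illustration of Eqs. (17)-(19): one hidden layer, one output node.
lam = 0.5                      # learning rate lambda, in (0, 1)

# Current activations and target for one training sample (invented numbers).
O_hidden = [0.6, 0.3]          # outputs O_j of the hidden nodes
O_out = 0.8                    # output O of the single output node
T = 1.0                        # true value

# Eq. (17): error of the output node.
E_out = O_out * (1 - O_out) * (T - O_out)

# Eq. (18): hidden-node errors, weighted by the hidden-to-output weights W_jk.
W = [0.4, -0.2]                # W[j]: weight from hidden node j to the output node
E_hidden = [O_j * (1 - O_j) * E_out * W_j for O_j, W_j in zip(O_hidden, W)]

# Eq. (19): update the weights and the output node's threshold.
W = [W_j + lam * E_out * O_j for W_j, O_j in zip(W, O_hidden)]
theta_out = 0.1 + lam * E_out
```

One such forward-then-backward pass is repeated per sample until the stopping condition on iterations or accuracy is met.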
https://stats.stackexchange.com/questions/351448/assuming-n-variables-are-conditionally-independent-given-y-how-do-i-compute-py | # Assuming n variables are conditionally independent given y, how do I compute p(y | x_1,…,x_n)?
Referencing this question, I know that if $x_1$ and $x_2$ are conditionally independent given $y$ (big assumption), then
$$P(y | x_1,x_2) = \frac{P(x_1,x_2 | y)P(y)}{P(x_2 | x_1)P(x_1)}$$ $$= \frac{P(x_1| y)P(x_2| y)P(y)}{P(x_2 | x_1)P(x_1)}$$ $$= \frac{P(y| x_1)P(x_2| y)}{P(x_2 | x_1)}$$
How do I generalize to $n$ variables and compute $P(y | x_1,...,x_n)$? I don't know any of the priors, but I have all the single conditional probabilities (complete matrix)!
Summary
1. Known: $P(y|x_i), P(x_i|y)$, and $P(x_i|x_j), \forall i,j$
2. Assumption: $x_1,...,x_n$ are conditionally independent given $y$
3. Problem: Compute $P(y|x_1,...,x_n)$.
Any help would be appreciated!
UPDATE (reply to Xian):
So to further clarify my problem: I have a disease set $D=\{d_1,...,d_m\}$ and a symptom set $S=\{s_1,...,s_n\}$.
For a given disease, $d_i$, I know the probabilities of the symptoms, $p(s_1| d_i),...,p(s_n|d_i)$ (sparse). For a given symptom $s_j$, I have probabilities $p(d_1 | s_j),...,p(d_m | s_j)$ (also, sparse).
Now, I want to compute $p(d_i | s_{\alpha_1},...,s_{\alpha_k}), \forall i\in[1:m]$, for $k\leq n$ (probability of each disease given a subset of symptoms).
If I understand your answer correctly, you're saying that for a given disease $d$, I can sample a synthetic patient with some symptoms based on the distribution of conditionals $p(s_j | d), \forall j$. But how would I incorporate $p(y|x_1)=p(d_i|s_1),\forall i$ into the sampling procedure so that I can account for the fact that, say, the common cold occurs more frequently than tuberculosis given cold-like symptoms?
Sorry for the confusion!
• It seems to me that you need the full set of conditional probabilities $p(x_i | x_{-i})$, not just the pairwise conditional probabilities. – jbowman Jun 14 '18 at 22:43
Since the $X_i$'s are independent given $Y$, the joint density of $(Y,X_1,\ldots,X_n)$ writes down as$$p(y)p(x_1|y)\cdots p(x_n|y)$$and hence the conditional of $Y$ given $(X_1,\ldots,X_n)$ is $$\dfrac{p(y)p(x_1|y)\cdots p(x_n|y)}{\int p(y)p(x_1|y)\cdots p(x_n|y)\text{d}y}$$It simplifies into $$\dfrac{p(x_1)p(y|x_1)p(x_2|y)\cdots p(x_n|y)}{\int p(x_1)p(y|x_1)p(x_2|y)\cdots p(x_n|y)\text{d}y}=\dfrac{p(y|x_1)p(x_2|y)\cdots p(x_n|y)}{\int p(y|x_1)p(x_2|y)\cdots p(x_n|y)\text{d}y}$$but I see no further simplification.
• Thanks for the response @Xi'an. I don't actually have access to a dataset for this problem; however, I have the numerical values for the elements of the joint density $p(y|x_1),p(x_2|y),\cdots, p(x_n|y)$. To numerically evaluate the integral in the denominator, would I just compute the joint density in the numerator for each possible value of $y$ and sum? Wouldn't I still be ignoring the prior in $dy$? – D. Rad Jun 23 '18 at 17:20
• If you can evaluate the numerator, you can simulate from this distribution without knowing the value of the normalising integral in the denominator. This formula is correct for the conditional in $y$ : hence no you do not ignore the prior. – Xi'an Jun 24 '18 at 5:43
• If $Y$ has a discrete support, the integral in the denominator becomes a summation over all possible values $y$. – Xi'an Jun 27 '18 at 5:42
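To make the comments concrete: for a discrete $Y$, the normalization amounts to evaluating the unnormalized product $p(y|x_1)p(x_2|y)\cdots p(x_n|y)$ for each value of $y$ and dividing by the sum. A small sketch of mine, with the disease/symptom probabilities entirely invented for illustration:

```python
def posterior(p_y_given_x1, p_xi_given_y, ys):
    """p(y | x_1,...,x_n) proportional to p(y|x_1) * p(x_2|y) * ... * p(x_n|y),
    normalized by summing over the discrete support of y."""
    unnorm = {}
    for y in ys:
        prod = p_y_given_x1[y]
        for table in p_xi_given_y:      # one conditional table per x_i, i >= 2
            prod *= table[y]
        unnorm[y] = prod
    total = sum(unnorm.values())        # the summation replacing the integral
    return {y: v / total for y, v in unnorm.items()}

# Two diseases; symptom s1 enters via p(d|s1), symptoms s2 and s3 via p(s|d).
p_d_given_s1 = {"cold": 0.7, "tb": 0.3}
p_s2_given_d = {"cold": 0.9, "tb": 0.2}
p_s3_given_d = {"cold": 0.5, "tb": 0.8}

post = posterior(p_d_given_s1, [p_s2_given_d, p_s3_given_d], ["cold", "tb"])
```

Note this leans on the same conditional-independence assumption as the question; the priors never need to be known because they cancel in the normalization, as Xi'an's answer shows.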
http://gmatclub.com/forum/if-a-b-and-c-are-positive-integers-such-that-1-a-1-b-142661.html?kudos=1 | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 06 May 2016, 03:21
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# If a, b, and c are positive integers such that 1/a + 1/b = 1
Intern
Joined: 23 Sep 2008
If a, b, and c are positive integers such that 1/a + 1/b = 1
18 Nov 2012, 13:06
Difficulty: 95% (hard). Question stats: 27% (03:02) correct, 73% (01:57) wrong, based on 539 sessions.
If a, b, and c are positive integers such that 1/a + 1/b = 1/c, what is the value of c?
(1) b ≤ 4
(2) ab ≤ 15
Director
Joined: 15 Sep 2012
Re: Fraction and Inequality
19 Nov 2012, 18:01
monsoon1 wrote:
How did you find that only these values would satisfy?
Did you test several numbers?
The question also doesn't give us a clue whether the numbers are the same or different. So, we have to test many numbers, right?
Can you please show the steps or any other way to get to the correct answer?
You actually dont need to test numbers. It could be purely algebric approach coupled with some logical deductions.
we have$$c= ab/(a+b)$$
or $$c = \frac{1}{(1/a+1/b)}$$
If you notice this expression and remember that c has to be integer, that would mean Denominator has to be 1 (Since numerator is already 1).
There is only one such possibility of a and b that could give you 1/a+1/b = 1
So you dont need to test any number.
Hope it helps. Lets kudos
Director
Joined: 15 Sep 2012
Re: Fraction and Inequality
18 Nov 2012, 19:01
monsoon1 wrote:
If a, b, and c are positive integers such that 1/a + 1/b = 1/c, what is the value of c?
(1) b ≤ 4
(2) ab ≤ 15
OA: B
given is c= ab/(a+b)
thus, since c is an integer, ab/(a+b) must be an integer.
statement 1: b ≤ 4
No information can be drawn. Not sufficient
statement 2: ab ≤ 15
The only possible values of a and b with ab ≤ 15 such that ab/(a+b) is an integer are a=2, b=2. Thus, c=1.
Sufficient.
Ans B it is.
Verbal Forum Moderator
Joined: 10 Oct 2012
Re: If a, b, and c are positive integers such that 1/a + 1/b = 1
27 Oct 2013, 13:01
jlgdr wrote:
monsoon1 wrote:
If a, b, and c are positive integers such that 1/a + 1/b = 1/c, what is the value of c?
(1) b ≤ 4
(2) ab ≤ 15
Hey all,
This one was a bit tricky indeed. Is there some other way we can notice quickly which number satisfies the ab/a+b constraint giving 'c' as an integer value
Cheers!
J
From F.S 1, we know that for a to be positive, c<b. The given equation is valid for b=2,c=1 and also for b=3,c=2. Insufficient.
Now, back to your question.
We know that$$\frac{a+b}{2}\geq{\sqrt{ab}}$$
Also, from the question stem, we know that $$\frac{a+b}{ab} =\frac{1}{c}$$
Thus, $$(a+b) = \frac{ab}{c}$$. Replacing this in the first equation, we get $$\frac{ab}{2c}\geq{\sqrt{ab}}$$
Or, $$c\leq{\frac{\sqrt{ab}}{2}} \to c\leq{\frac{\sqrt{15}}{2}} \to c<{2}$$. Thus, the only positive integer less than 2 is 1, and thus c=1. Sufficient.
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Re: If a, b, and c are positive integers such that 1/a + 1/b = 1
14 Nov 2013, 00:22
JepicPhail wrote:
Doesn't mau5's solution, if correct, bring us back to Nilohit's point of 'why do we need know statement 1 and 2 if we already know C equals 1'? Regardless, this is definitely a challenging question. I would like to know if there is a shortcut to this problem, especially for statement #2. Just plugging in some numbers would be impossible given the time limit...
Actually, this is not correct. c needn't be 1 in every case.
Take a = 2, b = 2. In this case c = 1
Take a = 4, b = 4. In this case,
1/4 + 1/4 = 1/2
c = 2
Take a = 3, b = 6. In this case,
1/3 + 1/6 = 1/2
c = 2
Take a = 15, b = 30. In this case
1/15 + 1/30 = 1/10
etc
Basically, you have to look for values such that when the numerators add up, the sum is divisible by the denominator. a and b cannot be 1 since we need the sum to be less than 1.
Statement 1: b <= 4
This gives you different values of c. c could be 1 or 2. Not sufficient.
Statement 2: ab <= 15
a and b could be 2 each. There is no other set of values. Try the small pairs (2, 4), (3, 3).
Hence (B) alone is sufficient.
(Note the algebraic solution provided by mau5 for statement 2.)
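As a sanity check, a quick brute-force enumeration (my own scratch code, not from any poster in the thread) confirms both points: statement 2 pins c down to a single value, while statement 1 does not.

```python
# Enumerate positive integers with 1/a + 1/b = 1/c, i.e. c = ab/(a+b) an integer.
solutions = [(a, b, a * b // (a + b))
             for a in range(1, 200) for b in range(1, 200)
             if (a * b) % (a + b) == 0]

# Statement (2): ab <= 15 leaves a single possible value of c.
c_stmt2 = {c for a, b, c in solutions if a * b <= 15}

# Statement (1): b <= 4 still allows several values of c,
# e.g. (2,2) gives c=1, (4,4) gives c=2, (12,4) gives c=3.
c_stmt1 = {c for a, b, c in solutions if b <= 4}
```

The 200 cutoff is arbitrary; it only needs to exceed the bounds the statements impose.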
Intern
Joined: 23 Sep 2008
Re: Fraction and Inequality
19 Nov 2012, 17:38
Vips0000 wrote: [solution quoted above]
How did you find that only these values would satisfy? Did you test several numbers? The question also doesn't give us a clue whether the numbers are the same or different, so we would have to test many numbers, right? Can you please show the steps or any other way to get to the correct answer?
Intern
Joined: 03 Sep 2013
Re: If a, b, and c are positive integers such that 1/a + 1/b = 1
07 Oct 2013, 12:38
I have a question here. If the solution given by Vips0000 is correct (and to me it seems perfect), then neither of the statements is actually needed to solve the question: there can be only one combination possible from the question stem itself, a=b=2, and consequently c=1. In such a scenario shouldn't the answer be D, since both statements independently will lead us to the answer?
Intern
Joined: 04 Oct 2013
Re: Fraction and Inequality
07 Oct 2013, 12:59
I can't see how you did the following:
Vips0000 wrote: we have $$c= ab/(a+b)$$ or $$c = \frac{1}{(1/a+1/b)}$$
Could you explain? Thanks. Edit: I think I got it now. Reciprocal of both sides of the original equation? Or is there a different way?
Intern
Joined: 04 Oct 2013
Re: If a, b, and c are positive integers such that 1/a + 1/b = 1
07 Oct 2013, 13:19
Nilohit wrote: [question quoted above]
Well, let's say the ab ≤ 15 restriction was not there. Then if a=10 and b=10, ab/(a+b) would equal 100/20, which is an integer. The only possible values when ab ≤ 15 are 2 and 2. But for the b ≤ 4 statement, you could still have a case where, say, b=4 and a=12, and ab/(a+b) = 48/16 = 3. Then you would have 2 (or more) possible value sets for a and b if you also include the set "a=2 and b=2". Vips0000's reasoning isn't actually perfect: in the case of $$\frac{1}{1/a+1/b}$$ the denominator does not have to be "1". $$\frac{1}{1/a+1/b}$$ could equal $$\frac{1}{1/4+1/12}$$, which reduces to $$\frac{1}{4/12}$$, then $$1*\frac{12}{4}$$, which equals 3.
Current Student
Joined: 06 Sep 2013
Re: If a, b, and c are positive integers such that 1/a + 1/b = 1
27 Oct 2013, 11:42
monsoon1 wrote: [question quoted above]
Updating with new solution by jlgdr. OK, so we have that ab/(a+b) = c is an integer. Let's hit the first statement. Statement 1 says that b ≤ 4. There is more than one possibility here: with (2,2), c equals 1, but with (4,4), c equals 2. So not sufficient.
From statement 2 we know that ab ≤ 15. Hence a and b have to be 2 and 2, since both are positive integers, and then c = 1. Therefore B stands. Cheers, J
Intern
Joined: 17 Oct 2013
Re: If a, b, and c are positive integers such that 1/a + 1/b = 1
13 Nov 2013, 23:07
Doesn't mau5's solution, if correct, bring us back to Nilohit's point of 'why do we need to know statements 1 and 2 if we already know c equals 1'? Regardless, this is definitely a challenging question. I would like to know if there is a shortcut to this problem, especially for statement #2. Just plugging in some numbers would be impossible given the time limit...
Verbal Forum Moderator
Joined: 10 Oct 2012
Re: If a, b, and c are positive integers such that 1/a + 1/b = 1
13 Nov 2013, 23:22
I don't understand how you can get the answer from the first fact statement. Also, how can you get that c=1 without fact statement 2?
Intern
Joined: 17 Oct 2013
Re: If a, b, and c are positive integers such that 1/a + 1/b = 1
13 Nov 2013, 23:44
mau5 wrote: Also, how can you get that c=1, without fact statement 2?
Oh, I see now why you need 15. $$\sqrt{15}$$ is less than 4, so c is less than or equal to something like 1/2, 2/2, 3/2, etc., and since the only integer here is 1, c equals 1.
Intern
Joined: 10 Dec 2013
Re: If a, b, and c are positive integers such that 1/a + 1/b = 1
20 Feb 2014, 20:20
Quote: Statement 2: ab <= 15. a and b could be 2 each. There is no other set of values. Try the small pairs (2, 4), (3, 3). Hence (B) alone is sufficient.
What if a = 1 and b = 1? It would still satisfy all the conditions, i.e. ab < 15, and c would be an integer. Doesn't this give 2 solutions for statement 2 as well?
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Re: If a, b, and c are positive integers such that 1/a + 1/b = 1
20 Feb 2014, 21:11
If a = 1 and b = 1, then 1 + 1 = 1/c, so c = 1/2. c is not an integer in this case.
https://code.communitydata.science/stats_class_2020.git/blobdiff/90b71136ea7a8ce993de3147e196a27e5ca86b87..37924e44ba85abcf6fa154c46d9d686c926aa4f3:/psets/pset8-worked_solution.rmd | index 7f25571f4d3816d96e58d2b6931f72728fef49e3..b8210e0fd0bb9bf6cd9fa50bb4c903b1b800f22d 100644 (file)
@@ -4,9 +4,6 @@ subtitle: "Statistics and statistical programming \nNorthwestern University \n
author: "Aaron Shaw"
date: "November 23, 2020"
output:
- pdf_document:
- toc: yes
- toc_depth: '3'
html_document:
toc: yes
toc_depth: 3
@@ -14,6 +11,9 @@ output:
collapsed: true
smooth_scroll: true
+ pdf_document:
+ toc: yes
+ toc_depth: '3'
- \newcommand{\lt}{<}
- \newcommand{\gt}{>}
@@ -42,7 +42,7 @@ data(mariokart)
mariokart
-To make things a bit easier to manage, I'll select the variables I want to use in the analysis and do some cleanup. Note that I conver the cond_new and stock_photo variables to logical first (using boolean comparisons) and also coerce them to be numeric values (using as.numeric()). This results in 1/0 values corresponding to the observations shown/described in Figure 9.13 and 9.14 on p. 365 of the textbook.
+To make things a bit easier to manage, I'll select the variables I want to use in the analysis and do some cleanup. Note that I convert the cond_new and stock_photo variables to logical TRUE/FALSE values first (using boolean comparisons) and then coerce them to be numeric values (using as.numeric()). This results in 1/0 values corresponding to the observations shown/described in Figure 9.13 and 9.14 on p. 365 of the textbook.
{r}
mariokart <- mariokart %>%
@@ -56,7 +56,7 @@ mariokart <- mariokart %>%
mariokart
-Now let's look at the variables in our model. Summary statistics for each, univariate density plots for the continuous measures, and some bivariate plots w each predictor and the outcome variable.
+Now let's summarize and explore the variables in our model. I'll calculate summary statistics for each, univariate density plots for the continuous measures, boxplots for the dichotomous indicators, and some bivariate plots with each predictor and the outcome variable.
{r}
summary(mariokart)
@@ -66,6 +66,7 @@ sd(mariokart$duration) sd(mariokart$wheels)
qplot(data=mariokart, x=price, geom="density")
+qplot(data=mariokart, y=price, geom="boxplot") # check out the outliers!
qplot(data=mariokart, x=duration, geom="density")
qplot(data=mariokart, x=wheels, geom="density")
@@ -84,46 +85,50 @@ cor(mariokart)
## Replicate model results
-The description of the model
+Based on the information in the textbook, I'm assuming that the model is an ordinary least squares regression.
{r}
model <- lm(price ~ cond_new + stock_photo + duration + wheels, data=mariokart)
summary(model)
-While I'm looking at that, I'll go ahead and calculate a confidence interval around the only parameter for which the model rejects the null hypothesis (wheels):
+Huh, that doesn't quite look like what's in the textbook Figure 9.15...
+I'll go ahead and calculate a confidence interval around the only parameter for which the model rejects the null hypothesis (wheels):
{r}
confint(model, "wheels")
-
-Overall, this resembles the model results in Figure 9.15, but notice that it's different in a number of ways! Without more information from the authors of the textbook it's very hard to determine exactly where or why the differences emerge.
+Without more information from the authors of the textbook it's very hard to determine exactly where or why the differences emerge. My guess at this point is that it might have something to do with those outlying price values (take a look at that boxplot and density plot again). Maybe they did something to transform the variable? Remove the outliers? I have no idea.
## Assess model fit/assumptions
-I've already generated a bunch of univariate and bivariate summaries and plots. Let's inspect the residuals more closely to see what else we can learn. I'll use the autoplot() function from the ggfortify package to help me do this.
+I've already generated a bunch of univariate and bivariate summaries and plots. Let's inspect the residuals more closely to see what else we can learn. I'll use the autoplot() function from the ggfortify package.
{r}
autoplot(model)
-Overall, there are a number of issues with this model fit that I'd want to mention/consider:
-* The distribution of the dependent variable (price) is very skewed, with two extreme outliers. I'd recommend trying some transformations to see if it would look more appropriate for a linear regression and/or inspecting the cases that produced the outlying values more closely to understand what's happening there.
-* The plots of the residuals reveal those same two outlying points are also outliers with respect to the line of best fit. That said, they are not exerting huge amounts of leverage on the estimates, so it's possible that the estimates from the fitted model wouldn't change much without those two points. Indeed, based on the degrees of freedom reported in Figure 9.15 (136) vs. the number reported in our version of the model (138) my best guess is that the textbook authors silently dropped those two outlying observations from their model!
+There are those outliers again. At this point, there are a number of issues with this model fit that I'd want to mention/consider:
+* The distribution of the dependent variable (price) is skewed, with two extreme outliers. I'd recommend trying some transformations to see if it would look more appropriate for a linear regression and/or inspecting the cases that produced the outlying values more closely to understand what's happening there and identify reasons why they're so different.
+* The plots of the residuals reveal those same two outlying points are also outliers with respect to the line of best fit. That said, they are not exerting huge amounts of leverage on the estimates, so it's possible that the estimates from the fitted model wouldn't change *too much* without those two points. Indeed, based on the degrees of freedom reported in Figure 9.15 (136) vs. the number reported in our version of the model (138) my best guess at this point is that the textbook authors silently dropped those two outlying observations from their model.
More out of curiosity than anything, I'll create a version of the model that drops the two largest values of price. From the plots, I can see that those two are the only ones above \$100, so I'll use that information here:
{r}
-summary(lm(price ~ cond_new + stock_photo + duration + wheels, data = mariokart[mariokart$price < 100,]))
+summary(
+ lm(price ~ cond_new + stock_photo + duration + wheels,
+ data = mariokart[mariokart$price < 100,] + ) + ) -What do you know. That was it. +What do you know. That was it. The difference in$R^2$is huge! ## Interpret some results -The issues above notwithstanding, we can march ahead and interpret the model results. Here are some general comments and some specifically focused on the cond_new and stock_photo variables: -* Overall, the linear model regressing total auction price on condition, stock photo, duration, and number of Wii wheels shows evidence of a positive, significant relationship between number of wheels and price. According to this model fit, an increase of 1 wheel is associated with a total auction price increase of$10 with the 95% confindence interval of (\$4.57-\$15.32).
-* The point estimate for selling a new condition game is positive, but with a large standard error. As a result, the model fails to reject the null of no association and provides no evidence of any relationship between the game condition and auction price.
+The issues above notwithstanding, we can march ahead and interpret the results of the original model that I fit. Here are some general comments and some specifically focused on the cond_new and stock_photo variables:
+* Overall, the linear model regressing total auction price on condition, stock photo, duration, and number of Wii wheels shows evidence of a positive, significant relationship between number of wheels and price. According to this model fit, an increase of 1 wheel in a listing is associated with a total auction price increase of \$10 on average (95% confindence interval: \$4.57-\$15.32). +* The point estimate for selling a new condition game is positive, but with a large standard error. The model fails to reject the null of no association and provides no evidence of any relationship between the game condition and auction price. * The point estimate for including a stock photo is negative, but again, the standard error is very large and the model fails to reject the null hypothesis. There is no evidence of any relationship between including a stock photo and the final auction price. ## Recommendations @@ -132,7 +137,7 @@ Based on this model result, I'd recommend the prospective vendor of a **used** c # Part II: Hypothetical study -## Import and explore +## Import, explore, summarize I'll start off by just importing things and summarizing the different variables we care about here: {r} @@ -150,7 +155,7 @@ qplot(data=grads, x=income, geom="density") ggplot(data=grads, aes(x=gpa, y=income)) + geom_point() -I'll also calculate some summary statistics and visual comparisons within cohorts: +I'll also calculate some summary statistics and visual comparisons within districts (cohorts): {r} @@ -161,7 +166,32 @@ tapply(grads$gpa, grads$cohort, summary) tapply(grads$gpa, grads$cohort, sd) -Huh. Those are remarkably similar values for the group means and the group standard deviations... 
+Note that you could also do this pretty easily with a call to group_by and summarize in the tidyverse: + +{r} +grads %>% + group_by(cohort) %>% + summarize( + n = n(), + min = min(income), + mean = mean(income), + max = max(income), + sd = sd(income) + ) + +grads %>% + group_by(cohort) %>% + summarize( + n = n(), + min = min(gpa), + mean = mean(gpa), + max = max(gpa), + sd = sd(gpa) + ) + + + +Huh. Those are remarkably similar values for the group means and the group standard deviations...weird. Onwards to plotting: {r} @@ -170,16 +200,19 @@ ggplot(data=grads, aes(x=cohort, y=income)) + geom_boxplot() ggplot(data=grads, aes(x=gpa, y=income, color=cohort)) + geom_point() -Those plots are also a little strange. I know this is just a simulated analysis, but it still seems weird that overall it just looks like a big mass of random points, but when I add the colors by cohort, I can see there are some lines and other regularities within groups. I wonder what happens when I plot each scatter within cohorts? +Those plots are also a little strange. Even though this is just a simulated analysis, it still seems weird that overall the scatterplot just looks like a big mass of points, but when I color the points by district, I can see some regularities within groups. At this point, I might want to facet the scatterplots by district to see any patterns more clearly. {r} ggplot(data=grads, aes(x=gpa, y=income, color=cohort)) + geom_point() + facet_wrap(vars(cohort)) -Hmmm. That's...absurd (in particular, cohort 8 looks like a sideways dinosaur). At this point, if I were really working as a consultant on this project, I would write to the client and start asking some uncomfortable questions about data quality (who collected this data? how did it get recorded/stored/etc.? what quality controls were in place?). I would also feel obligated to tell them that there's just no way the data correspond to the variables they think are here. 
If you did that and the client was honest, they might tell you [where the data actually came from](https://www.autodesk.com/research/publications/same-stats-different-graphs). +Okay, that's...a joke (in particular, cohort 8 looks like a sideways dinosaur). At this point, if I were really working as a consultant on this project, I would write to the client and start asking some probing questions about data quality (who collected this data? how did it get recorded/stored/etc.? what quality controls were in place?). I would also feel obligated to tell them that I suspect there's just no way the data correspond to the variables they think are here. If you did that and the client was honest, they might tell you [where the data actually came from](https://www.autodesk.com/research/publications/same-stats-different-graphs). -In the event that you marched ahead with the analysis and are curious about what that could have looked like, I've provided some example code below. That said, *this is a situation where the assumptions and conditions necessary to identify ANOVA, t-tests, or regression are all pretty broken* because the data was generated programmatically in ways that undermine the kinds of interpretation you've been asked to make. The best response here (IMHO) is to abandon these kinds of analysis once you discover that there's something systematically weird going on. The statistical procedures will "work" in the sense that they will return a result, but because those results aren't even close to meaningful, any relationships you do observe in the data reflect something different than the sorts of relationships the statistical procedures were designed to identify. +In the event that you marched ahead with the analysis and are curious about what that could have looked like, I've provided some example code below. 
That said, **this is a situation where the assumptions and conditions necessary to identify ANOVA, t-tests, or regression are all pretty broken** because the data was generated programmatically in ways that undermine the kinds of interpretation you've been asked to make. The best response here (IMHO) is to abandon these kinds of analysis once you discover that there's something systematically weird going on like this. While the experience of discovering a scatterplot dinosaur in your data is...unlikely outside of the context of a problem set, there are many situations in which careful data exploration will bring a realization that you just don't understand some important things about the sources or qualities of your data. You have to learn to identify these moments and develop strategies for dealing with them! Often, the statistical procedures will "work" in the sense that they will return a result without any apparent errors, but because those results aren't even close to meaningful, any relationships you do observe in the data reflect something different than the sorts of relationships the statistical procedures were designed to identify. +## Fake analysis for fake data + +Okay, if you wanted example code to look at for this, here it is. Please just keep in mind that the results are not informative! {r} summary(aov(income ~ cohort, data=grads)) # no global differences of means across groups @@ -189,7 +222,7 @@ confint(grads.model, "gpa") # 95% confidence interval -Note that the failure to reject the null of any association between district and income in the ANOVA does not provide conclusive evidence that the relationship between GPA and income does not vary by cohort. There were several things you might have done here. One is to calculate correlation coefficients within groups. 
Here's some tidyverse code that does that: +Note that the failure to reject the null of any association between district and income in the ANOVA would not (even in the event of more realistic data) provide very compelling evidence that the relationship between GPA and income does not vary by cohort. There were several things you might have done here. One is to calculate correlation coefficients within groups. Here's some tidyverse code that does that: {r} grads %>% @@ -198,11 +231,13 @@ grads %>% correlation = cor(income, gpa) ) -Because these correlation coefficients are nearly identical, I would likely end my analysis here and conclude that the correlation between gpa and income appears to be consistently small and negative. If you wanted to go further, you could theoretically calculate an interaction term in the model (by including I(gpa*cohort) in the model formula), but the analysis up to this point gives no indication that you'd be likely to find much of anything (and we haven't really talked about interactions yet). +Because these correlation coefficients are nearly identical, I would likely end an analysis here and conclude that the correlation between gpa and income appears to be consistently small and negative. If you wanted to go further, you could theoretically calculate an interaction term in the model (by including I(gpa*cohort) in the model formula), but the analysis up to this point gives no indication that you'd be likely to find much of anything (and we haven't really talked about interactions yet). On top of that, there's a literal dinosaur lurking in your data...just give up! # Part III: Trick or treating again ## Import and update data +Revisit the text and worked solutions for problem set 7 for more details about the study design, data collection and more. 
+ {r import} ## reminder that the "read_dta()" function requires the "haven" library @@ -220,7 +255,7 @@ df <- df %>% df -Let's fit and summarize the model: +That looks consistent with what we want here. Let's fit and summarize the model: {r} f <- formula(fruit ~ obama + age + male + year) @@ -229,18 +264,25 @@ fit <- glm(f, data=df, family=binomial("logit")) summary(fit) -Interesting. Looks like adjusting for these other variables in a regression setting can impact the results. +Interesting. Looks like adjusting for these other variables in a regression setting allows us to uncover some different the results. -Onwards to generating more interpretable results: +Onwards to generating more interpretable results. You might recall that the big problem with interpreting logistic regression is that the results are given to you in "log-odds." Not only is it difficult to have intuitions about odds, but intuitions about the natural logarithms of odds are just intractable (for most of us). + +To make things easier, the typical first step is to calculare odds-ratios instead of log-odds. This is done by exponentiating the coefficients (as well as the corresponding 95\% confidence intervals): {r} ## Odds ratios (exponentiated log-odds!) exp(coef(fit)) exp(confint(fit)) + + +You can use these to construct statements about the change in odds of the dependent variable flipping from 0 to 1 (or FALSE to TRUE) predicted by a 1-unit change in the corresponding predictor (where an odds ratio of 1 corresponds to unchanged odds). We'll interpret the obamaTRUE odds ratio below. + +Now, model-predicted probabilities for prototypical observations. Recall that it's necessary to create synthetic ("fake"), hypothetical individuals to generate predicted probabilities like these. In this case, I'll create two versions of each fake kid: one assigned to the treatment condition and one assigned to the control. Then I'll use the predict() function to generate fitted values for each of the fake kids. 
-## model-predicted probabilities for prototypical observations: +{r} fake.kids = data.frame( - obama = rep(c(FALSE, TRUE), 2), + obama = c(FALSE, FALSE, TRUE, TRUE), year = factor(rep(c("2015", "2012"), 2)), age = rep(c(9, 7), 2), male= rep(c(FALSE, TRUE), 2) @@ -249,11 +291,9 @@ fake.kids = data.frame( fake.kids.pred <- cbind(fake.kids, pred.prob = predict(fit, fake.kids, type="response")) fake.kids.pred - - -Note that [this UCLA logit regression tutorial](https://stats.idre.ucla.edu/r/dae/logit-regression/) also contains example code to help extract standard errors and confidence intervals around these predicted probabilities. You were not asked to produce them here, but if you'd like an example here you go (I can try to clarify in class): +Note that [this UCLA logit regression tutorial](https://stats.idre.ucla.edu/r/dae/logit-regression/) also contains example code to help extract standard errors and confidence intervals around these predicted probabilities. You were not asked to create them here, but if you'd like an example here you go. The workhorse is the plogis() function, which essentially does the inverse logit transformation detailed in the textbook chapter 8: {r} fake.kids.more.pred <- cbind(fake.kids, @@ -268,13 +308,29 @@ within(fake.kids.more.pred, { ## Sub-group analysis +To do this, we'll need to create a slightly new model formula that drops the year term since we're going to restrict each subset of the data along that dimension. + +Once I fit the models, I'll use the stargazer package to create a reasonably pretty regression table that incorporates all three summaries. {r} f2 <- formula(fruit ~ obama + age + male) -summary( glm(f2, data=df[df$year == "2012",], family=binomial("logit")))
-summary( glm(f2, data=df[df$year == "2014",], family=binomial("logit"))) -summary( glm(f2, data=df[df$year == "2015",], family=binomial("logit")))
+m2012 <- glm(f2, data=df[df$year == "2012",], family=binomial("logit")) +m2014 <- glm(f2, data=df[df$year == "2014",], family=binomial("logit"))
+m2015 <- glm(f2, data=df[df$year == "2015",], family=binomial("logit")) + +## I can make a pretty table out of that: +library(stargazer) +stargazer(m2012, m2014, m2015, column.labels = c("2012", "2014", "2015"), type="text") + -Interesting. The treatment effect seems to emerge overwhelmingly within a single year of the data. +## Interpret and discuss + +Well, for starters, the model providing a "pooled" estimate of treatment effects while adjusting for age, gender, and study year suggests that the point estimate is "marginally" statistically significant ($p <0.1$) indicating some evidence that the data support the alternative hypothesis (being shown a picture of Michelle Obama causes trick-or-treaters to be more likely to pick up fruit than the control condition). In more concrete terms, the trick-or-treaters shown the Obama picture were, on average, about 26\% more likely to pick up fruit than those exposed to the control (95\% CI:$-4\%~-~+66\%$).[^1] In even more concrete terms, the estimated probability that a 9 year-old girl in 2015 and a 7 year-old boy in 2012 would take fruit increase about 17\% and 19\% respectively on average (from 29\% to 34\% in the case of the 9 year-old and from 21\% to 25\% in the case of the 7 year-old). These findings are sort of remarkable given the simplicity of the intervention and the fairly strong norm that Halloween is all about candy. + +[^1]: Remember when I said we would use those odds ratios to interpret the parameter on obamaTRUE`? Here we are. The parameter value is approximately 1.26, which means that the odds of picking fruit are, on average, 1.26 times as large for a trick-or-treater exposed to the picture of Michelle Obama versus a trick-or-treater in the control condition. In other words, the odds go up by about 26\% ($= \frac{1.26-1}{1}\$).
+
+All of that said, the t-test results from Problem set 5 and the "unpooled" results reported in the sub-group analysis point to some potential concerns and limitations. For starters, the fact that the experiment was run iteratively over multiple years and that the sample size grew each year raises some concerns that the study design may not have anticipated the small effect sizes eventually observed and/or was adapted on the fly. This would undermine confidence in some of the test statistics and procedures. Furthermore, because the experiment occurred in sequential years, there's a very real possibility that the significance of a picture of Michelle Obama shifted during that time period and/or the house in question developed a reputation for being "that weird place where they show you pictures of Michelle Obama and offer you fruit." Whatever the case, my confidence in the findings here is not so great and I have some lingering suspicions that the results might not replicate.
+On a more nuanced/advanced statistical note, I also have some concerns about the standard errors. This goes beyond the content of our course, but basically, a randomized controlled trial introduces clustering into the data by-design (you can think of it as analogous to the observations coming from the treatment "cluster" and the control "cluster"). In this regard, the normal standard error formulas can be biased. Luckily, there's a fix for this: compute "robust" standard errors as a result and re-calculate the corresponding confidence intervals. Indeed, robust standard errors are often considered to be the best choice even when you don't know about potential latent clustering or heteroskedastic error structures in your data. [This a short pdf](https://oes.gsa.gov/assets/files/calculating-standard-errors-guidance.pdf) provides a little more explanation, citations, as well as example R code for how you might calculate robust standard errors.
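The inverse-logit arithmetic behind the predicted probabilities quoted above is easy to sanity-check outside of R. A minimal sketch, assuming only the rounded values reported above (an odds ratio of about 1.26 and control-condition probabilities of 29% and 21%):

```python
import math

def logit(p):
    # probability -> log-odds
    return math.log(p / (1.0 - p))

def inverse_logit(x):
    # log-odds -> probability (the transformation R's plogis() computes)
    return 1.0 / (1.0 + math.exp(-x))

log_or = math.log(1.26)  # treatment odds ratio reported above

# shift each control-condition probability by the treatment log-odds
p_girl = inverse_logit(logit(0.29) + log_or)  # 9 year-old girl in 2015
p_boy = inverse_logit(logit(0.21) + log_or)   # 7 year-old boy in 2012

print(round(p_girl, 2))  # 0.34
print(round(p_boy, 2))   # 0.25
```

This reproduces the "from 29% to 34%" and "from 21% to 25%" shifts quoted in the discussion.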
| 2022-11-26 22:06:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.683885931968689, "perplexity": 144.9483705170144}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446709929.63/warc/CC-MAIN-20221126212945-20221127002945-00411.warc.gz"}
https://codereview.stackexchange.com/questions/202837/simple-neural-network-implementation-in-python/202841 | # Simple neural network implementation in Python
A simple neural network I wrote in Python without libraries. I avoided implementing it in matrix form because I sought to get a basic understanding of the way NNs work first. For that reason I'm strongly favoring legibility over efficiency. I tried to keep my code readable and pythonic; any style feedback would be particularly appreciated.
A quirk about this design is that it does backpropagation on a per-training-example basis and uses momentum to try to avoid overfitting to specific examples. Also, I realized I never added base values (bias terms) to the neurons; it seems to work alright without them, but if anyone has a more in-depth understanding of why you'd want them, I'd be curious to hear about that.
import math
import random
import data
def sigmoid(x):
return 1 / (1 + math.exp(-x))
def sigmoid_prime(x):
return x * (1.0 - x)
def loss(x,y):
return sum([(a-b)**2 for (a,b) in zip(x,y)])
class Neuron():
learning_rate = 0.015
momentum_loss = 0.03
def __init__(self, input_neurons):
self.weights = [random.uniform(-1,1) for _ in range(input_neurons)]
self.momentum = [0 for _ in range(input_neurons)]
def forward(self, inputs):
dot = sum([x*y for (x,y) in zip(inputs, self.weights)])
self.output = sigmoid(dot)
return self.output
    def backpropagate(self, inputs, error):
        error_values = list()
        # scale the error by the slope of the activation at this neuron's output
        gradient = error * sigmoid_prime(self.output)
        for i, inp in enumerate(inputs):
            # pass error back to each input neuron in proportion to its weight
            error_values.append(gradient * self.weights[i])
            self.nudge_weight(i, gradient * inp)
        return error_values
def nudge_weight(self, weight, amount):
change = amount * Neuron.learning_rate
self.momentum[weight] += change
self.momentum[weight] *= (1 - Neuron.momentum_loss)
self.weights[weight] += change + self.momentum[weight]
class Network():
def __init__(self, topology):
self.layers = list()
for i in range(1,len(topology)):
self.layers.append([Neuron(topology[i-1]) for _ in range(topology[i])])
def forward(self, data):
output = data
for layer in self.layers:
output = [neuron.forward(output) for neuron in layer]
return output
def backpropagate(self, data, output, target):
error_values = [tval - output for (tval, output) in zip(target, output)]
for i in range(len(self.layers)-1,0,-1):
layer_output = [neuron.output for neuron in self.layers[i-1]]
error_values = self.backpropagate_layer(i, error_values, layer_output)
self.backpropagate_layer(0, error_values, data)
def backpropagate_layer(self, layer, error_values, inputs):
next_errors = list()
for neuron, error in zip(self.layers[layer], error_values):
bp_error = neuron.backpropagate(inputs,error)
if not next_errors:
next_errors = bp_error
else:
next_errors = [a+b for a,b in zip(next_errors,bp_error)]
return next_errors
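One subtlety worth flagging in the code above: `sigmoid_prime` expects the neuron's *output* (the already-sigmoided value), not the raw pre-activation input, since if s = sigmoid(x), then ds/dx = s * (1 - s). A standalone numerical check of that convention:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def sigmoid_prime(s):
    # expects s = sigmoid(x), i.e. the neuron's output, not x itself
    return s * (1.0 - s)

x = 0.5
s = sigmoid(x)

# central-difference estimate of d(sigmoid)/dx at x
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)

print(abs(sigmoid_prime(s) - numeric))
```

The analytic and numerical derivatives agree to well within floating-point noise, which confirms that passing `self.output` (rather than the dot product) to `sigmoid_prime` is correct.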
The full source code for the project, including the database and some other testing code, can be found here: https://github.com/RowanL3/Neural-Network
A more pythonic way of writing self.momentum = [0 for _ in range(input_neurons)] would be self.momentum = [0]*input_neurons | 2021-09-17 18:46:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.244249626994133, "perplexity": 11371.473532797501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055775.1/warc/CC-MAIN-20210917181500-20210917211500-00410.warc.gz"} |
https://math.stackexchange.com/questions/1457238/bijective-proof-of-binomial-determinant-using-gessel-viennot-from-aigner | # Bijective proof of binomial determinant using gessel-viennot (from Aigner)
This is problem 5.74 (page 230) from Aigner "A Course in Enumeration".
Give a bijective proof using Gessel-Viennot of
$$\det \left( \binom{m+i-1}{j} \right)_{i,j=1}^{n} = \binom{m+n-1}{n}$$
where $m-1\geq a_1 \geq a_2 \geq ...\geq a_n\geq 0$.
I think we can use these lemmas:
for RHS:
Lemma 1. The number of paths from $(x,y)$ to $(x+z, y+w)$ is ${z+w}\choose z$.
for LHS (Corollary of Gessel-Viennot Lemma):
Lemma 2. Let $M$ be the $k \times k$ matrix where $M_{ij}$ is the number of lattice paths from $v_i$ to $u_j$ then $\text{det}M$ is the number of non-intersecting $k$-paths.
For example for $n=2$ and $m=3$
$S_L=\{ (NNE,NNE),(NNE,NEN),(NNE, ENN), (NEN, NEN), (NEN,ENN),(ENN, ENN) \}$
$S_R=\{ NNEE, NENE, NEEN, ENNE, ENEN, EENN \}$
How can we define a bijective map from $S_L$ to $S_R$?
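Before hunting for the bijection, the determinant identity itself can be checked numerically for small $m$ and $n$. This is my own sanity-check sketch (the naive cofactor-expansion `det` helper is only meant for tiny matrices):

```python
from math import comb

def det(mat):
    """Determinant by Laplace expansion along the first row (fine for tiny matrices)."""
    n = len(mat)
    if n == 1:
        return mat[0][0]
    return sum(
        (-1) ** j * mat[0][j] * det([row[:j] + row[j + 1:] for row in mat[1:]])
        for j in range(n)
    )

for m in range(1, 6):
    for n in range(1, 6):
        # matrix with entries C(m+i-1, j) for i, j = 1..n
        M = [[comb(m + i - 1, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
        assert det(M) == comb(m + n - 1, n), (m, n)

print("identity verified for m, n up to 5")
```

For instance, with $m=2$, $n=2$ the matrix is $[[2,1],[3,3]]$ with determinant $3 = \binom{3}{2}$.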
• Shouldn't the $a_i$'s show up in the equation you're trying to prove? – Tad Sep 30 '15 at 3:11
I know very little about those enumerative technique but I have to say the claim is not hard at all to prove through elementary means, since the identity $$\binom{a+1}{b}=\binom{a}{b}+\binom{a}{b-1}$$ gives a straightforward way to perform gaussian elimination on our matrix; once we put it in upper or lower triangular form, to compute its determinant is an easy task. | 2020-04-04 00:25:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9697850346565247, "perplexity": 315.18092759600336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370518767.60/warc/CC-MAIN-20200403220847-20200404010847-00449.warc.gz"} |
https://search.r-project.org/CRAN/refmans/EstHer/html/Selvar.html | Selvar {EstHer} R Documentation
## Estimation of heritability in high dimensional sparse linear mixed models using variable selection.
### Description
This function selects active components in sparse linear mixed models in order to estimate heritability. The selection allows us to reduce the size of the data sets which improves the accuracy of the estimations. Our package also provides a confidence interval for the estimated heritability.
### Usage
Selvar(Y,Z,X,thresh_vect,nb_boot=80,nb_repli=50,CI_level=0.95,nb_cores=1)
### Arguments
Y: Vector of observations of size n.
Z: Matrix with genetic information of size n x N.
X: Matrix of fixed effects of size n x d.
thresh_vect: Vector of thresholds in the stability selection step: the higher the threshold, the smaller the set of selected components.
nb_boot: Number of subsamples of Y used to apply our bootstrap technique. The default value is 80.
nb_repli: Number of replications in the stability selection. The default value is 50.
CI_level: Level of the confidence interval for the estimated heritability. The default value is 0.95.
nb_cores: Number of cores of the computer, used for parallelizing the computations. The default value is 1.
### Value
heritability: Estimation of the heritability.
CI_up: Upper bound of the confidence interval for the estimated heritability.
CI_low: Lower bound of the confidence interval for the estimated heritability.
selec_ind: Indexes of the columns of the selected components.
### Author(s)
Anna Bonnet and Celine Levy-Leduc
### Examples
library(EstHer)
data(Y)
data(W)
data(X)
Z=scale(W,center=TRUE,scale=TRUE)
res=Selvar(Y,Z,X,thresh_vect=c(0.7,0.8,0.9),nb_boot=80,nb_repli=50,CI_level=0.95,nb_cores=1)
res$heritability
res$CI_low
res\$CI_up
[Package EstHer version 1.0 Index] | 2022-05-20 13:24:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5179588794708252, "perplexity": 1427.9246791745134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662532032.9/warc/CC-MAIN-20220520124557-20220520154557-00099.warc.gz"} |
https://www.zbmath.org/authors/?q=ai%3Aiyer.uma-n | # zbMATH — the first resource for mathematics
## Iyer, Uma N.
Author ID: iyer.uma-n
Published as: Iyer, Uma N.
Documents Indexed: 17 Publications since 2001
#### Co-Authors
4 single-authored
3 McCune, Timothy C.
3 Taft, Earl Jay
2 Leites, Dimitry A.
1 Datt, Matrapu Sumanth
1 Futorny, Vyacheslav M.
1 Iyer, Jaya N. N.
1 Jordan, David Alan
1 Kassel, Christian
1 Koteswara Rao, Guntupalli
1 Lebedev, Aleksei Vital’evich
1 Messaoudene, Mohamed
1 Shchepochkina, Irina
1 Smith, Jonathan Dallas Hayden
#### Serials
3 Journal of Nonlinear Mathematical Physics
2 Communications in Algebra
2 Journal of Algebra and its Applications
1 Israel Journal of Mathematics
1 Journal of Algebra
1 Manuscripta Mathematica
1 Transactions of the American Mathematical Society
1 Journal of the Ramanujan Mathematical Society
1 International Journal of Mathematics
1 Linear Algebra and its Applications
1 Expositiones Mathematicae
1 Selecta Mathematica. New Series
1 Algebras and Representation Theory
#### Fields
14 Associative rings and algebras (16-XX)
10 Nonassociative rings and algebras (17-XX)
2 Quantum theory (81-XX)
1 Commutative algebra (13-XX)
1 Group theory and generalizations (20-XX)
1 Differential geometry (53-XX)
1 Global analysis, analysis on manifolds (58-XX)
#### Citations contained in zbMATH
13 Publications have been cited 24 times in 16 Documents
Quantum differential operators on $$\mathbb{K}[x]$$. Zbl 1054.16020
Iyer, Uma N.; McCune, Timothy C.
2002
One-sided Hopf algebras and quantum quasigroups. Zbl 1398.16030
Iyer, Uma N.; Smith, Jonathan D. H.; Taft, Earl J.
2018
Generic base algebras and universal comodule algebras for some finite-dimensional Hopf algebras. Zbl 1338.16037
Iyer, Uma N.; Kassel, Christian
2015
Differential operators on Hopf algebras and some functorial properties. Zbl 1012.16039
Iyer, Uma N.
2002
Differential operators on Azumaya algebras and Heisenberg algebras. Zbl 1028.16014
Iyer, Uma N.
2001
Representations of $$D_q(k[x])$$. Zbl 1348.16020
Futorny, Vyacheslav; Iyer, Uma N.
2016
The dual of a certain left quantum group. Zbl 1343.16023
Iyer, Uma N.; Taft, Earl J.
2016
Noetherian algebras of quantum differential operators. Zbl 1339.16027
Iyer, Uma N.; Jordan, David A.
2015
Examples of simple vectorial Lie algebras in characteristic 2. Zbl 1362.17030
Iyer, Uma N.; Leites, Dimitry; Messaoudene, Mohamed; Shchepochkina, Irina
2010
Prolongs of (ortho-)orthogonal Lie (super)algebras in characteristic 2. Zbl 1362.17029
Iyer, Uma N.; Lebedev, Alexei; Leites, Dimitry
2010
Volichenko algebras as algebras of differential operators. Zbl 1119.16025
Iyer, Uma N.
2006
Differential operators on derivation rings. Zbl 1110.16023
Iyer, Uma N.
2005
Quantum differential operators on the quantum plane. Zbl 1105.17004
Iyer, Uma N.; McCune, Timothy C.
2003
#### Cited by 26 Authors
6 Iyer, Uma N.
2 Futorny, Vyacheslav M.
2 Le Stum, Bernard
2 McCune, Timothy C.
2 Quirós Gracián, Adolfo
1 Alonso Álvarez, José Nicanor
1 Bavula, Volodymyr V.
1 Bekkert, Viktor
1 Bouarroudj, Sofiane
1 Datt, Matrapu Sumanth
1 Fernández Vilaboa, José Manuel
1 González Rodríguez, Ramón
1 Grozman, Pavel
1 Im, Bokhee
1 Jordan, David Alan
1 Kassel, Christian
1 Koteswara Rao, Guntupalli
1 Lebedev, Aleksei Vital’evich
1 Leites, Dimitry A.
1 Masuoka, Akira
1 Maxson, Carlton J.
1 Meir, Ehud
1 Nowak, Alex W.
1 Saracco, Paolo
1 Shchepochkina, Irina
1 Smith, Jonathan Dallas Hayden
#### Cited in 14 Serials
3 Journal of Pure and Applied Algebra
1 Communications in Algebra
1 Israel Journal of Mathematics
1 Journal of Algebra
1 Pacific Journal of Mathematics
1 Proceedings of the American Mathematical Society
1 Results in Mathematics
1 Linear Algebra and its Applications
1 Selecta Mathematica. New Series
1 Algebras and Representation Theory
1 Journal of Nonlinear Mathematical Physics
1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series
1 SIGMA. Symmetry, Integrability and Geometry: Methods and Applications
1 Journal of Noncommutative Geometry
#### Cited in 8 Fields
13 Associative rings and algebras (16-XX) 6 Nonassociative rings and algebras (17-XX) 4 Commutative algebra (13-XX) 3 Category theory; homological algebra (18-XX) 2 Field theory and polynomials (12-XX) 2 Group theory and generalizations (20-XX) 1 Combinatorics (05-XX) 1 Mechanics of particles and systems (70-XX) | 2021-01-25 21:19:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34116610884666443, "perplexity": 13348.46687327187}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703644033.96/warc/CC-MAIN-20210125185643-20210125215643-00642.warc.gz"} |
https://greprepclub.com/forum/very-simple-but-very-confusing-question-8638.html |
# Very simple but very confusing question
Intern
Very simple but very confusing question [#permalink] 11 Mar 2018, 08:05
Hi all, I’m confused with the following question, so further explanation is appreciated:
If 4x-12≥x+9, which of the following must be true?
(A) X>6
(B) X<7
(C) X>7
(D) X>8
(E) X<8
My answer is C and D, but the book’s answer is A.
I see that x > 6 allows values like 6.5, which don’t satisfy the inequality, and the question doesn’t say x is an integer. So C and D must be true. Am I wrong?
And does the question “which of the following must be true” point to selecting one answer, or is it possible to select multiple answers?
(Manhattan book)
Moderator
Re: Very simple but very confusing question [#permalink] 12 Mar 2018, 11:53
https://greprepclub.com/forum/rules-for ... -1083.html
Also, post the question in the right forum: this is not a quantitative comparison question but a single-answer-choice question.
Moreover, you have to use a proper tag to identify the question, in this case the source and so on. Look here for further clarification:
https://greprepclub.com/forum/qq-how-to ... -2357.html
Back to the question
It asks for $$4x-12 \geq x+9$$
Now $$3x \geq 21$$ ; $$x \geq 7$$
So x is greater than or equal to 7. Of the answer choices, B and E are immediately out.
A. $$x > 6$$ is the right answer: every x with $$x \geq 7$$ also satisfies $$x > 6$$, so A must be true.
C. $$x > 7$$ is wrong because x can also equal 7, which $$x > 7$$ excludes.
D. $$x > 8$$ is also wrong because it excludes the valid values from 7 up to 8.
Hope it is clear now.
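The reasoning above can be sanity-checked numerically. In this sketch (the predicate names are mine, not from the thread), satisfies encodes the original inequality, and the answer choices become predicates on x:

```c
#include <assert.h>
#include <stdbool.h>

/* True when x satisfies 4x - 12 >= x + 9, i.e. 3x >= 21, i.e. x >= 7. */
bool satisfies(double x) { return 4.0 * x - 12.0 >= x + 9.0; }

/* Answer choices A, C and D as predicates on x. */
bool choice_a(double x) { return x > 6.0; }
bool choice_c(double x) { return x > 7.0; }
bool choice_d(double x) { return x > 8.0; }
```

Every x in the solution set x ≥ 7 also satisfies choice_a, which is exactly what "must be true" requires; choice_c fails at x = 7 and choice_d fails anywhere in [7, 8].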
Regards
_________________
Manager
Joined: 26 Jun 2017
Posts: 104
Followers: 0
Kudos [?]: 39 [0], given: 38
Re: Very simple but very confusing question [#permalink] 22 Mar 2018, 02:03
But if x is greater than or equal to 7, how can we choose x > 6 as the right answer? There is an infinity of numbers between 6 and 7, and x can be at least 7, not below.
_________________
What you think, you become.
Display posts from previous: Sort by | 2019-03-21 13:35:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5747113823890686, "perplexity": 2707.5310637146945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202525.25/warc/CC-MAIN-20190321132523-20190321154523-00358.warc.gz"} |
https://inconinfire.com/ufupe/2a57eb-another-word-for-roots-in-quadratic-equation | (57+1) 6595616
another word for roots in quadratic equation
Roots are also called x-intercepts or zeros; "solutions" is another common synonym. A quadratic equation can have 0, 1, or 2 real roots. Quadratic (adjective): involving the second and no higher power of an unknown quantity or variable.
The standard quadratic equation is ax^2 + bx + c = 0, where a, b, c are constants (generally integers). Its roots are given by the quadratic formula:
x = (-b ± sqrt(b^2 - 4ac)) / (2a)
The expression b^2 - 4ac is called the discriminant. If it is positive, there are two distinct real roots; if it is zero, there is exactly one distinct real root (a double root); if it is negative, the roots are complex. By the fundamental theorem of algebra, every non-constant single-variable polynomial with complex coefficients has at least one complex root.
A root r of a polynomial P(x) is a value for which P(r) = 0; graphically, the real roots of a quadratic function are the x-intercepts of its parabola, which can cross the x-axis once, twice, or never. Example: the roots of x^2 - x - 2 = 0 are x = 2 and x = -1. Likewise, factoring 6x^2 - x - 2 = 0 as (3x - 2)(2x + 1) = 0 gives the roots x = 2/3 and x = -1/2.
WeâRe looking for is a parabola with vertex located at the origin, the... Least one complex root quadratic function five: Writing a quadratic function is full of x x=r! - 5 ) = 0 * Problem five: Writing a quadratic equation ⦠functions. A, b, c are constants ( generally integers ) roots, itâs word. We just solve the quadratic equation constants ( generally integers ) roots to solve a quadratic equation solutions... Equations definition ax 2 + bx + c = 0. a, b, c are constants generally! The origin, below the x-axis once, twice, another word for roots in quadratic equation 2 roots... X^2-20X-69=0 [ /tex ] in the free English-Japanese dictionary and many other Japanese translations second degree why I! | 2021-07-26 02:30:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7017878293991089, "perplexity": 565.3966775258175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151972.40/warc/CC-MAIN-20210726000859-20210726030859-00213.warc.gz"} |
https://brilliant.org/problems/quadratic-9/ | Algebra Level 2
If $$x^2 + 2ax + 10 - 3a > 0$$ for all real values of $$x$$, then find the range of $$a.$$
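One standard route (a sketch; the helper names are mine, not from the problem page): since x^2 + 2ax + 10 - 3a is an upward-opening parabola, it is positive for every real x exactly when its discriminant is negative, i.e. (2a)^2 - 4(10 - 3a) < 0, which simplifies to (a + 5)(a - 2) < 0, giving -5 < a < 2.

```c
#include <assert.h>
#include <stdbool.h>

/* Discriminant of x^2 + 2a*x + (10 - 3a) as a function of a. */
double disc(double a) { return 4.0 * a * a - 4.0 * (10.0 - 3.0 * a); }

/* The quadratic is > 0 for all real x iff its discriminant is negative
 * (a negative discriminant means the parabola never touches the x-axis). */
bool positive_for_all_x(double a) { return disc(a) < 0.0; }
```

At the endpoints a = -5 and a = 2 the discriminant is exactly zero, so the quadratic touches zero at one x and the strict inequality fails there.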
× | 2018-10-16 15:35:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5339449644088745, "perplexity": 257.4138410561795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510754.1/warc/CC-MAIN-20181016134654-20181016160154-00052.warc.gz"} |
http://tex.stackexchange.com/questions/47285/listings-with-rule-above-caption-to-look-like-floatstyleruled | # Listings with rule above caption to look like \floatstyle{ruled}
I am using the listings package.
I would like my listings to have a heavy rule above the caption, so it looks like the \floatstyle{ruled}.
So it would look like:
a bold line ===
caption
rule ---
code
rule ---
## migrated from stackoverflow.com Mar 8 '12 at 17:16
This question came from our site for professional and enthusiast programmers.
Give an example file. I will give a solution. – Mu30 Mar 9 '12 at 14:28
Welcome to TeX.SE. Please keep in mind that it is always best to compose a fully compilable MWE that illustrates the problem including the \documentclass and the appropriate packages so that those trying to help don't have to recreate it. Basically, show some work by getting the example as far as you can. – Peter Grill Mar 10 '12 at 3:59
The listings package provides the option frame=lines, which draws a rule above and below the listing itself. So you have to add a single bold line before the caption is set. Therefore I am using the command \pretocmd provided by etoolbox.
\documentclass{article}
\usepackage{listings}
\usepackage{etoolbox}
\makeatletter
\lstset{frame=lines}
\pretocmd\lst@makecaption{\noindent{\rule{\linewidth}{2pt}}}{}{}
\makeatother
\begin{document}
\begin{lstlisting}[caption={Some Caption}]
static uint64_t i = 0;
void every_cycle()
{
if (i > 0)
i--;
}
uint64_t next_num()
{
return (i += 0x100);
}
\end{lstlisting}
Text
\begin{lstlisting}
static uint64_t i = 0;
void every_cycle()
{
if (i > 0)
i--;
}
uint64_t next_num()
{
return (i += 0x100);
}
\end{lstlisting}
\end{document}
You can directly use the ruled style this way:
\usepackage{amsmath}% provides \numberwithin
\usepackage{float}
\floatstyle{ruled}
\newfloat{code}{thp}{lop}
\floatname{code}{Listing}
\numberwithin{code}{chapter}
Then do \begin{code} ... \end{code} around the listing instead of \begin{figure} ... \end{figure}.
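A minimal usage sketch of that float (assumptions: \numberwithin comes from amsmath and wants a class with chapters such as report; the listing body is just placeholder text):

```latex
\documentclass{report}
\usepackage{amsmath}% provides \numberwithin
\usepackage{float}
\floatstyle{ruled}
\newfloat{code}{thp}{lop}
\floatname{code}{Listing}
\numberwithin{code}{chapter}

\begin{document}
\chapter{Example}
\begin{code}
\begin{verbatim}
static int i = 0;
\end{verbatim}
\caption{A ruled listing}
\end{code}
\end{document}
```

The ruled float style puts the heavy rule above the caption and lighter rules around the body, which is the look asked for in the question.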
- | 2015-07-31 15:44:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9094420075416565, "perplexity": 4550.56124274172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988308.23/warc/CC-MAIN-20150728002308-00061-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/electric-field-strength-calculating-distance-from-charge.904990/ | # Electric field strength -- calculating distance from charge
## Homework Statement
At a distance D from a very long (essentially infinite) uniform line of charge, the electric field strength is 1000 N/C. At what distance from the line will the field strength be 4000 N/C?
E = kq/r^2
## The Attempt at a Solution
I know that E is inversely proportional like so: E~1/r2
hence by rearranging for 'r' I got: r~sqrt(1/E)
then I plugged in 4E because 4000 N/C is four times 1000 N/C: r~sqrt(q/4E)
and I got r~E/2 but the answer is E/4 and I don't know how?
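The snag is the formula: E = kq/r^2 is the field of a point charge, whereas an infinite line of charge falls off as 1/r (E = 2kλ/r). With E proportional to 1/r, quadrupling the field strength divides the distance by 4, so the answer is D/4 rather than D/2. A quick numeric check (the constant C and the function names are illustrative, not from the thread):

```c
#include <assert.h>
#include <math.h>

/* Field of an infinite line charge: E = C / r,
 * where C = 2*k*lambda lumps the constants together. */
double line_field(double C, double r) { return C / r; }

/* Distance at which the line's field has a given strength E. */
double distance_for_field(double C, double E) { return C / E; }
```

Calibrating C so that the field is 1000 N/C at r = D, the field reaches 4000 N/C at r = D/4, independent of the value chosen for D.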
BvU | 2022-01-28 22:37:03 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8771849274635315, "perplexity": 1591.4263288377626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306346.64/warc/CC-MAIN-20220128212503-20220129002503-00144.warc.gz"} |
http://www.coranac.com/tonc/text/setup.htm | 2. Setting up a development environment
2.1. Introduction
Unless you want to punch in the instructions in binary in a hex editor (“Luxury! When we were young we had to toggle each bit individually with magnets!”), you'll need a development environment to turn human-readable code into machine language. There are several options here, but the main one in GBA homebrew is devkitPro and the ARM cross-compiler devkitARM. This chapter will show you how to set up the necessary components, how to get them running, and how to compile tonc's code with them. I'll also show where you can find some other development packages currently available, but the focus in this and other chapters will be devkitPro/ARM.
The last section explains some of the details about using the command-line and makefiles. It is essentially optional, but for historical reasons I have to cover it before the rest of the chapters instead of putting it in an appendix.
2.2. devkitPro and devkitARM
2.2.1. Installation
Fig 2.1: devkitPro dir tree.
In the last few years, devkitPro (DKP) has become the standard toolchain for GBA homebrew and is available for Windows, Mac and Linux platforms. DevkitPro is actually a package, containing, compilers for a number of systems (including GBA), library and example code and an editor. You can find the actual downloads in the download section of the sourceforge page: http://sourceforge.net/projects/devkitpro/.
For the GBA, you will need:
• devkitARM (DKA). The ARM cross-compiler, based on the GCC toolchain.
• MSys. A shell with basic Unix commands like make and rm. Probably only needed for Windows platforms, which usually lack these tools.
Other recommended items are:
• Programmer's Notepad 2: an advanced plain text editor with code highlighters, code-folding capabilities and shell execution commands. I suppose you could call it a mini-IDE. Even if you had your own editor, it is recommended that you get this one as well because both DKP's and Tonc's examples contain PN2 project files, which makes it easier to build GBA projects.
• libgba: a set of basic types, macros and functions for use in GBA development. While I won't be using it here, it is still worth a look. Currently libgba and tonc's own code library, libtonc, are pretty much incompatible (multiple definitions and such), but I am trying to make sure that there won't be any conflicts.
• GBA examples: a set of example projects using libgba.
For Windows, there is an installer that downloads and installs the components automatically. For Mac and Linux, you'll have to install things yourself. The installation process also creates a number of environment variables pointing to the devkitPro and devkitARM directories, and adds msys/bin to the PATH.
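Once the installation is done, you can check the result from a shell. This is only a quick sanity test of my own devising, not part of the official install procedure; the variable names come from the installer, but the values will differ per system:

```shell
# Print the toolchain variables (or a warning if they're missing) and
# verify that make is reachable through the PATH.
echo "DEVKITPRO = ${DEVKITPRO:-<not set>}"
echo "DEVKITARM = ${DEVKITARM:-<not set>}"
command -v make >/dev/null && echo "make found" || echo "make not found - check your PATH"
```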
When installing DKP on Windows, there's one thing you must be aware of. GCC-based tools have their origins in Unix, and Unix doesn't take kindly to spaces in paths. Therefore, do not install into a directory with spaces (like c:\Program Files) and don't put your projects in a folder with spaces in the name either (like My Documents, which is actually ‘short’ for c:\Documents And Settings\UserName\Blah Blah Blah More Ridiculously Long Directory Names That Never Fit In Textboxes\My Documents\). Basically, don't use the standard Windows directories. My own installation tree looks like fig 2.1, but it's customary to put devkitPro in c:\devkitPro.
Do not use spaces in paths
DevkitARM makes use of the GCC toolchain, which doesn't cope well with spaces in paths (think My Documents). Spaces are used as separators between command-line options, and when you have them in paths the tools will interpret that as new options. While there are ways to use them anyway, you can save yourself a lot of headaches by simply steering clear of them.
2.2.2. Building projects with DevkitARM
Fig 2.2: Template project.
There are several ways of building GBA projects, but the recommended process is to use makefiles; in particular, devkitPro's template makefiles. The GBA template makefiles can be found in $(DEVKITPRO)/examples/gba/template. When creating a new project of your own, base it on this one. You can see the basic structure of the template project in fig 2.2. The build directory is where all the intermediate files go; you'd rarely have to look there. The source directory is where you put the source code: the C, C++ and perhaps assembly files. If you have header files, put those in include. Note that the build and include directories don't actually exist in the template project yet; build is created by the build process itself, and since there are no headers to include, the include folder isn't necessary in this case and has been removed. If you do have headers, you'd put them there. The template directory itself has two files: the PN2 project file, template.pnproj, and the Makefile.

Once you've opened the project in PN2, you can build it with Alt+1 and clean it with Alt+2. If all is well, you should get something like this:

> "make"
template.c
arm-none-eabi-gcc -MMD -MP -MF /e/dev/devkitPro/examples/gba/template/build/template.d -g -Wall -O3 -mcpu=arm7tdmi -mtune=arm7tdmi -fomit-frame-pointer -ffast-math -mthumb -mthumb-interwork -I/e/dev/devkitPro/libgba/include -I/e/dev/devkitPro/examples/gba/template/build -c /e/dev/devkitPro/examples/gba/template/source/template.c -o template.o
linking multiboot
built ... template_mb.gba
ROM fixed!

> Process Exit Code: 0
> Time Taken: 00:02

The output consists of 6 lines:

1. ‘make’. Invokes make to run the makefile.
2. ‘template.c’. The file we're compiling.
3. ‘arm-none-eabi-gcc -MMD ...’. This very long line, split over multiple lines here, invokes the compiler. gcc is the front-end of the compiler, and arm-none-eabi is the prefix that devkitARM uses to set it apart from all the other versions of gcc. The rest are the compiler options. Basically, this whole thing turns the source file template.c into an object file called template.o.
4. ‘linking multiboot’. After compilation, all object files have to be linked together into the final binary. The actual call to the linker is hidden here, but it is another invocation of arm-none-eabi-gcc with a different set of options. I'll cover what “multiboot” means later in the section.
5. ‘built ... template_mb.gba’. Indicates everything worked, and we now have a GBA binary called template_mb.gba.
6. ‘ROM fixed!’. Each GBA ROM starts with a header that the GBA checks to see if it's a valid GBA program. If the header check fails, the GBA will reject the program (even though emulators will still accept it). There is a tool called gbafix that patches the ROM with a valid header, which is what this line is about.

Fig 2.3: template(_mb).gba.

The ‘ROM fixed!’ line means the build has succeeded. You should end up with a template_mb.gba. When you open it in VBA or no$gba you should see something like fig 2.3. If you don't see a .gba file or it shows a white screen, something beyond your control went wrong. But before we get to what could be amiss, I want you to take a look inside the Makefile itself first.
Using other editors to manage projects
Programmer's Notepad 2 is just one of the many editors you can work with. In principle, all you need is an editor capable of running external tools like make. DevkitPro's FAQ has a nice overview of some of the other options.
Even if you do use another editor, it's a good idea to add a pnproj file if you want others to build your project since they may not have the same editor. Even an empty one will suffice.
Prefix changes in devkitARM r41
In devkitARM r41, the common prefix for GCC's tools changed from arm-eabi to arm-none-eabi. This means that all older makefiles won't work anymore (including tonc's). To fix this, just replace the old prefix with the new one.
I could have avoided this by using the standard makefiles, but they didn't exist when I started, and now it's just too late to switch :(.
2.2.3. DKP's makefile
A makefile is a script used to manage the files of a project and the steps necessary to build, clean or install a program. They consist of rules that describe the dependencies between the various files of the project and which commands to use. The devkitPro template makefiles are almost completely automated: all the relevant rules are already in place, and all you have to do to add source files to a project is tell the makefile which directories the sources are in. Basically, they're pretty fucking awesome. They're also pretty fucking mystifying for first-time users. If you stick to the standard procedure everything should work right out of the box, but if you want to tweak how things are done, here are the most important parts from a user's perspective.
The Makefile begins like this:
#---------------------------------------------------------------------------------
# Clear the implicit built in rules
#---------------------------------------------------------------------------------
.SUFFIXES:
#---------------------------------------------------------------------------------
ifeq ($(strip $(DEVKITARM)),)
$(error "Please set DEVKITARM in your environment. export DEVKITARM=<path to>devkitARM")
endif

include $(DEVKITARM)/gba_rules
#---------------------------------------------------------------------------------
# TARGET is the name of the output, if this ends with _mb a multiboot image is generated
# BUILD is the directory where object files & intermediate files will be placed
# SOURCES is a list of directories containing source code
# DATA is a list of directories containing data files
# INCLUDES is a list of directories containing header files
#---------------------------------------------------------------------------------
TARGET := $(shell basename $(CURDIR))
BUILD := build
SOURCES := source
DATA :=
INCLUDES :=
#---------------------------------------------------------------------------------
# options for code generation
#---------------------------------------------------------------------------------
ARCH := -mthumb -mthumb-interwork
CFLAGS := -g -Wall -O3\
-mcpu=arm7tdmi -mtune=arm7tdmi\
-fomit-frame-pointer\
-ffast-math \
	$(ARCH)

CFLAGS += $(INCLUDE)

CXXFLAGS := $(CFLAGS) -fno-rtti -fno-exceptions

ASFLAGS := $(ARCH)
LDFLAGS = -g $(ARCH) -Wl,-Map,$(notdir $@).map

#---------------------------------------------------------------------------------
# path to tools - this can be deleted if you set the path to the toolchain in windows
#---------------------------------------------------------------------------------
export PATH := $(DEVKITARM)/bin:$(PATH)

#---------------------------------------------------------------------------------
# any extra libraries we wish to link with the project
#---------------------------------------------------------------------------------
LIBS := -lgba

#---------------------------------------------------------------------------------
# list of directories containing libraries, this must be the top level containing
# include and lib
#---------------------------------------------------------------------------------
LIBDIRS := $(LIBGBA)
## more ...
This part of the makefile sets up certain variables that are used later. The various -FLAGS variables are compiler, assembly and linker flags. You don't really have to touch those, though you may want to use -O2 instead of -O3 because -O3 tends to bloat code pretty severely. The really important part is this:
#---------------------------------------------------------------------------------
# TARGET is the name of the output, if this ends with _mb a multiboot image is generated
# BUILD is the directory where object files & intermediate files will be placed
# SOURCES is a list of directories containing source code
# DATA is a list of directories containing data files
# INCLUDES is a list of directories containing header files
#---------------------------------------------------------------------------------
TARGET := $(shell basename $(CURDIR))_mb
BUILD := build
SOURCES := source
DATA :=
INCLUDES :=
Like the comments say, the SOURCES variable lists the directories where your code is. In this case, all the code is in source. If you have code in other directories as well, add them here separated by spaces. Yes, spaces; that's what make uses to tell tokens apart (and this is also why you shouldn't put spaces in paths). If you have sub-directories as well, use forward slashes ('/'), not backslashes ('\').
Similarly, DATA and INCLUDES are the lists for binary data and header files. In this case they're empty because there's no extra data or headers. The directories are relative to the location of the makefile; to indicate source is in that directory, use a period ('.').
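For instance, a project that keeps extra code and data in their own directories (the directory names here are hypothetical) might fill the lists in like this:

```make
# Hypothetical layout: code in source/ and source/gfx/, binary data in
# data/, headers next to the makefile ('.'). All lists are space-separated.
SOURCES  := source source/gfx
DATA     := data
INCLUDES := .
```

Make splits these lists on spaces, which is another reason to keep spaces out of directory names.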
The TARGET line is also interesting. It is the name of the output file, without an extension. ‘$(shell basename $(CURDIR))’ gives the last part of the current directory, which in this case would be template. In other words, it automatically uses the name of the project's directory for the ROM name as well.

The extra ‘_mb’ here indicates this should be built as a multiboot game. There are two kinds of GBA builds: cartridge and multiboot. The main difference is where the code and constant data reside. In a cartridge game it's in ROM (32MB); in multiboot it's in EWRAM (256kB). Technically, cartridge is the normal kind of binary, but a multiboot binary can be loaded over a multiboot cable.
Cart vs multiboot builds
There are two different kinds of gba builds: ‘cart’ builds and ‘multiboot’ builds. A cart build puts the main code and data in the 32MB ROM (0800:0000h) of a cart. A multiboot build puts that stuff in the 256kB EWRAM (0200:0000). Commercial games are obviously cart builds, but make use of multiboot builds to make single-cart multiplayer possible.
Other than the maximum size, there is little difference in gameplay between the two. For homebrew, multiboot does have one advantage, namely that you can load a game onto hardware without needing an expensive flashcart; you can build your own PC-GBA cable for peanuts.
Choosing the kind of build is done at link-time through linker specs. For cart-builds use -specs=gba.specs and for multiboot builds use -specs=gba_mb.specs. If the TARGET ends with _mb, the template makefile will link it as a multiboot game.
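As a sketch, the suffix check could be expressed in a makefile like this (the real logic lives in devkitARM's gba_rules file, so this is only an illustration):

```make
# Choose the linker specs from the target name: a '_mb' suffix means
# a multiboot (EWRAM) build, anything else a cartridge (ROM) build.
ifneq ($(findstring _mb,$(TARGET)),)
    SPECS := -specs=gba_mb.specs
else
    SPECS := -specs=gba.specs
endif

LDFLAGS += $(SPECS)
```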
2.2.4. When compilers attack
In most cases, the steps given thus far will ‘Just Work’. However, it is possible that the installation or the build didn't quite go the way it should. Here is a short list of potential errors you may come across when building the template project.
‘This application has requested the Runtime to terminate it in an unusual way.’
This is an error I sometimes get when compiling from the Visual C++ IDE. This is not a DKA error, but more a Windows/MSVC one. The next compilation always works.
Windows Vista
This was a problem before devkitARM r21. Vista and GCC didn't really get along before that.
By default, the GBA screen is white, and if you have an empty main(), this would be the result. However, if you're sure that something should have shown, it is likely that something went wrong even before your code was ever called. Before main(), the ROM's boot code is called ($(DEVKITARM)/arm-none-eabi/lib/gba_crt0.s, if you're curious), which takes care of some house-keeping. Wintermute (the devkitPro maintainer) sometimes tinkers with the bootcode or linkscripts to improve the process, but sometimes things go wrong (sorry, Dave, you know it's true). Case in point: if you build the template project under devkitARM r21 exactly as shown before, you'll get a white screen because there is a bug in the linkscript for multiboot builds. The easiest way out of this is to simply not build as multiboot with r21. Alternative solutions can be found at forum:14493.

If you ever get a white screen after upgrading devkitARM even though it worked fine before, this is a likely suspect. There is usually an announcement thread in the gbadev forum, and chances are that if it is a bootcode/linkscript error, you're not the first to notice.

2.2.5. Building Tonc's examples with devkitARM

All of Tonc's demos and the code library tonclib have PN2 projects, so it's mainly a matter of opening those in Programmer's Notepad 2 and hitting Alt+1. There are also project files for use on Visual C++ 6 and higher. These make use of a master makefile, tonc.mak. This makefile serves as a hub for building and cleaning individual or all projects. For individual projects, set DEMO to the name of the demo you want to build. From within MSVC, choose the proper build configuration and build as usual. Table 2.1 has an overview of the options.

Table 2.1: building tonc projects.

to ...             run ...              MSVC config
build libtonc.a    make libtonc         Build libtonc
build foo demo     make DEMO=foo        Build Single
clean foo demo     make DEMO=foo clean  Clean Single
build all demos    make build_all       Build All
clean all demos    make clean_all       Clean All

2.3. Alternative development environments

DevkitARM is the standard toolchain for GBA homebrew right now and almost the only one still being actively maintained. Developing with DKA means C, C++ or assembly and building up everything from scratch (or at least nearly scratch). If you'd like another language or a richer API, these alternatives may be worth a try.

devkit Advance

I only mention this here because it is still technically an alternative, and most tutorials still refer to it. devkit Advance is another GCC-based toolchain and can be considered the spiritual predecessor to devkitARM. Nowadays, I can't think of any reason to use devkit Advance instead of devkitARM, aside perhaps from compatibility with very old projects. If you're still using it, consider switching.

DKA vs DKA

Both devkitARM and devkit Advance are abbreviated as “DKA”, which might cause some confusion. There is no real way to know which one is referred to, except perhaps by date: documents prior to 2004/2005 will refer to devkit Advance; more recent texts will probably mean devkitARM.

HAM, visualHAM and HEL

HAM is another GCC-based toolchain, but it also comes with HAMlib, an API for managing backgrounds, sprites and sound. The Windows installation also contains an IDE called visualHAM. Setting up HAM is easy: simply download the freeware version from www.ngine.de and install. And then install again, because it's only the installer that you've just installed :P. After the second install everything will be ready, but you'll actually have two copies of everything; one of them can safely be removed. As with DKA, don't use spaces in paths.
HAM is useful if you don't want to involve yourself with the guts of GBA programming, but you still need some idea of how the GBA functions to make use of HAM properly. Hiding the lower levels can be dangerous on systems where resources are sparse, and the GBA certainly qualifies. I should also point out that HAMlib isn't exactly efficient when it comes to speed. If you're using HAM, also get the add-on library called HEL by Peter Schraut from www.console-dev.de. Unlike many of HAM's functions, HEL's code has been optimized to make the most of the GBA's capabilities. HEL is also still being maintained.

HAM vs HEL

VisualHAM's creator, Peter Schraut, has also written an add-on library called HEL. Unlike HAM, some time has been spent on optimizing HEL's code, or at the very least making it not slow. If you're using HAM, consider using HEL as well.

Other languages

There are some non-C/asm environments for GBA out there, but as far as I know these projects have mostly been abandoned by their original authors. Note that my knowledge of these packages is extremely limited, so I can't do much more than link to the sites where you can find them. There is dragonBASIC, which provides a BASIC-like syntax. This should be suitable for small projects, but I'm not sure it can be used for full games like a Mario clone. You can find a FreePascal for GBA/NDS at itaprogaming.free.fr, and instructions for using Forth or Lua at www.torlus.com. Finally, there is (or at least was) something called Catapult at www.nocturnal-central.com. This is a very complete environment with an emulator, and I think I've seen a debugger there as well. I think this could be compared to GameMaker, but then again I may be wrong.

2.4. Command line details and legacy topics

This section serves two purposes: to give those used to dealing solely with GUIs some background information on how to work with command-line tools (and how not to work with them).
Now, this would be a subject for an appendix if it weren't for how Tonc's earlier chapters and its examples are structured.

2.4.1. Working with command-line tools

For most people nowadays, working with programs means double-clicking on a desktop shortcut or double-clicking on a file in Explorer (I'm focusing on Windows here. Sorry, other 10%). For office work this is usually enough, and that'll be the end of it. For development work (particularly console dev), it really pays to have a deeper understanding of what's going on. Most of this subsection will have a high duh!-factor. Feel free to skip it if it gets a little too familiar.

Like any other files, program files (executables) are stored somewhere in the file hierarchy. For example, the main executable of Office Word is called winword.exe and may be found at C:/Program Files/(... More Directories ...)/winword.exe. The pathname is also the command to run the program: simply pass the pathname to the shell and the OS will execute the program. Usually you will do this via shortcuts of some sort: double-clicking on a shortcut tells the GUI to run the associated target. You can also invoke it via the command line. In the Start Menu, you can find Run.... Entering winword there will also launch Word, just as a double-click did.

Fig 2.4: Start->Run window.

Programs often allow command-line options as well, separated by spaces. The types of options available depend on the program in question, of course. For Word, the main option is to pass a filename to open. For example, winword "C:\foo\bar.doc" will open C:\foo\bar.doc (see fig 2.4). The same thing happens when you double-click a Word document: Windows picks up the filename, looks up which application it's associated with, and calls that application with the filename as an option.
The value of the command-line

Of course, using the command-line to open a Word document may seem slightly silly considering you can do the same thing by just double-clicking the file itself, but there are instances where the reverse is true. For example, you can use it to open multiple documents at once (‘winword C:\a.doc C:\b.doc’) or make it print them, or whatever the program allows. GUIs may be easier sometimes, but using the command-line allows for more control.

A second great thing about the command-line is that you can automate processes. This is particularly important in programming, because that generally involves taking multiple steps for each file in the project. Doing all of that manually for each file in the project and each time you rebuild is simply beyond any rational consideration; you'll want a script for that. Batch-files and makefiles are examples of such scripts.

Basic steps for building a GBA project

Converting your C/C++/asm sources into a valid GBA binary requires the following four steps:

1. Compile/assemble the sources. The first step is turning the human-readable C or C++ files (.c/.cpp) or assembly files (.s/.asm) into a binary format known as object files (.o). There is one object file for each source file. The tool for this is called arm-none-eabi-gcc. Actually, this is just a front-end for the real compiler, but that's just details. The arm-none-eabi- here is just a prefix specific to devkitARM; other toolchains or platforms have different prefixes. Note that C++ uses g++ instead of gcc.

2. Link the object files. After that, you need to link the separate object files into a single executable ELF file. Any precompiled code libraries (.a) you may have are linked at this stage too. You can actually compile and link at the same time, but it is good practice to keep them separate: serious projects usually contain multiple files and you don't want to have to wait for the whole world to recompile when you only changed one.
This becomes even more important when you start adding data (graphics, music, etc). Again, arm-none-eabi-gcc is used for invoking the linker, although the actual linker is called arm-none-eabi-ld.

3. Translate/strip to pure executable. The ELF file still contains debug data and can't actually be read by the GBA (though most emulators will accept it). arm-none-eabi-objcopy strips the debug data and makes sure the GBA will accept it. Well, almost.

4. Validate header. Each GBA game has a header with a checksum to make sure it's a valid GBA binary. Normally, compilation doesn't supply one, so we have to use a tool like DarkFader's gbafix to fix the header. This tool comes with DKA, so you don't have to download it separately.

The demo in the next chapter is called first, which uses a single source file, first.c. To create the binary first.gba, you'll need to execute the following commands.

# Compile first.c to first.o
arm-none-eabi-gcc -mthumb -mthumb-interwork -c first.c

# Link first.o (and standard libs) to first.elf
arm-none-eabi-gcc -specs=gba.specs -mthumb -mthumb-interwork first.o -o first.elf

# Strip to binary-only
arm-none-eabi-objcopy -O binary first.elf first.gba

# Fix header
gbafix first.gba

Note that apart from the filenames (bolded), there are also different options for the tools (anything that starts with a hyphen). The options in italics are technically not required, but recommended nonetheless. I've collected a few of the more common flags in the makefile appendix, so look them up if you want to know. You can look up the full list of options in the manuals, though I should warn you that the number of options can be very large.

devkitARM's linker requires a -specs option

Unlike other GBA toolchains, devkitARM requires that either -specs=gba.specs or -specs=gba_mb.specs is present as a linker option. These specs contain the memory map, without which the linker can't do its job.
If you're migrating from an older toolchain and find that suddenly the binary doesn't work anymore, this is a likely cause. It is also a good idea to always have -mthumb -mthumb-interwork in the compiler and linker flags. Enabling compiler optimization (like -O2) and warnings (-Wall) is helpful as well.

Better living through automation

You can build a GBA binary by typing the commands given above into a command-line interface each time. It is also possible to clean toilets with a toothbrush before using it on your teeth – just because you can doesn't always mean you should. To manually enter each line whenever you want to rebuild is, well, insane. It's much more useful to use some sort of script to do this for you. Technically, you can use any kind of scripting environment you want, but I'll focus on two in particular here: batch-files and makefiles.

Batch-files (.bat) are Windows shell scripts that have been around since ye olde MS-DOS. Batch-files are pretty easy to use: simply drop the commands in a .bat file and run that. But as usual, complex questions have easy to understand, wrong answers. While batch-files are indeed very easy to use, they are utterly inadequate for anything but the most simple projects. More complex projects will have multiple files, and adding extra compilation lines every time you add a file becomes annoying. To be fair, it is possible to use variables and loops and such in batch-files to ease this a little, but no one ever mentions those.

Another problem is that if you run a batch-file, you run the whole thing. This means that you're compiling every file every time, and that if there are errors, you'll get the errors for every file in the project. This can be very tricky to navigate, and sometimes it may not be possible at all because the first errors are past the scroll-limit. (This was especially true for Windows versions 98 and earlier, which didn't even have a scrollbar for a DOS-box. Eeek!)
Lastly, the syntax for batch-files is DOS/Windows only. This makes them unsuited for platform-independent development.

A better solution is using makefiles. Makefiles are scripts run by a tool called make (which Windows usually doesn't have, but it comes with MSys). Makefiles are platform independent and make managing files easier by working with rules instead of just commands. You can have pattern rules that tell you how to turn files from one type into files of another type (like compiling .c into .o files) and make will take care of it; all you need to do is give a list of files which need to be compiled. Make will also check whether the compilation is necessary in the first place, so no unnecessary work is done if the output file is already up to date.

The problem with makefiles is that they're harder to create than batch-files – at least for the uninitiated. But thanks to the devkitPro template makefiles, you generally don't have to worry about that anymore: you can just set the correct directories and go. That said, it is still worth learning a bit more about how makefiles work. For that reason, the next section explains a bit about the makefile process. Tonc's examples also come with makefiles that increase in complexity.

If you're annoyed that makefiles can't be double-clicked to run, you can always create a batch-file that runs the makefile. Something like this should suffice:

REM batch-file to run make
make
pause

Don't start this batch-file with ‘make clean’ though, as that would force a complete rebuild – something we're trying to avoid. I'd also advise against calling it make.bat, because that may clash with the name of the actual make tool. I'd recommend against this method though. The batch-file output will go into a DOS-box, which doesn't exactly navigate nicely. It would be better to use a notepad that can execute shell commands and capture their output.
Most of these will also allow you to go to errors by double-clicking on the error message. PN2 is one of the many editors that can do these things.

Prefer makefiles over batch-files

For all its initial ease, using batch-files will only hurt you in the long run. It's better to use something that can deal with complex projects as well from the get-go. A downside to makefiles is that you can't activate them by double-clicking. It's possible to create a dummy batch-file to invoke the makefile, but a better approach would be to use a code editor that can also execute shell commands.

Paths and system variables

If you try to build anything using the commands given earlier, you'll probably find that it doesn't quite work. This is because I omitted an important bit of information: the path. For the shell to execute the commands, it needs to be able to find them first, and merely using arm-none-eabi-gcc isn't enough, because the file itself is actually at [initial dirs]/devkitPro/devkitARM/bin/arm-none-eabi-gcc. The full path needs to be visible to the shell in order for anything to happen, not just the filename.

Because typing out the whole thing is rather annoying, and because my directory structure may be different than yours, the operating system has a variable called PATH for standard directories. If you only give the filename, the shell will search the current directory and all the paths in PATH for a match. It is possible to add the DKA bin directory to the path directly, but devkitPro has chosen a cleaner approach. Instead of adding it to the PATH, the installer creates a number of environment variables for some of the core directories, and you can use these during the build process to point to the real paths. For example, there is a DEVKITARM variable, which in my case equates to /e/dev/devkitPro/devkitARM. Yours will probably be a little different, but the point is that in both cases $(DEVKITARM)/bin will be the directory where the main tools are.
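In shell terms, making the tools visible boils down to something like this (the paths are the example ones from the text; yours will differ):

```shell
# Prepend the (example) devkitARM bin directory to the search path.
# After this, bare tool names like arm-none-eabi-gcc resolve without
# typing the full path.
export DEVKITARM=/e/dev/devkitPro/devkitARM
export PATH="$DEVKITARM/bin:$PATH"
echo "first PATH entry: ${PATH%%:*}"
```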
Note that the standard Windows format for directories is something like c:\foo\bar, whereas the DEVKITARM variable is formatted as a POSIX pathname with forward slashes. As far as I know, Windows is the only OS that doesn't allow POSIX names, which, well, kinda sucks. This is where MSys comes in. MSys is a collection of tools that makes the standard Unix tools available on DOS/Windows systems. Apart from make, it also has the bash shell, where you can use POSIX names like every other programmer. To switch to bash in a DOS-box, type ‘sh’. On the whole, bash is a more useful shell than DOS, though you may have to get used to the different command set. But that's why we have manuals.
2.4.2. Basic Makefiles
Like batch-files, makefiles are scripts that can aid you in building a project. The main difference in how they work is that batch-files uses a sequential list of commands, while makefiles use a chain of rules that define how files are converted into others, eventually leading to the binary. This is the basic format of a rule:
# Makefile rule example
target : prerequisite
	command
The target can be the output file or files, or just an identifier for the rule, the prerequisite(s) are the files the target depends on and the command(s) are a list of commands that turn the prerequisites into the targets (although technically they can do other things as well). Note that the indentation of the commands must be a tab (ASCII 9), not spaces. This is an annoying little requirement that can trip you up when copy-pasting makefiles, so remember it well.
The direct equivalent of the commands used earlier to build first.gba would be like this:
#
# Equivalent makefile for the earlier build procedure.
#
PATH := $(DEVKITARM)/bin:$(PATH)
first.gba : first.c
	arm-none-eabi-gcc -mthumb -mthumb-interwork -c first.c
	arm-none-eabi-gcc -specs=gba.specs -mthumb -mthumb-interwork first.o -o first.elf
	arm-none-eabi-objcopy -v -O binary first.elf first.gba
	gbafix first.gba
There is only one rule here, with target first.gba and prerequisite first.c. The commands are just what we typed in earlier.
Tabs, not spaces, before make commands
NOTE: GNU's make requires tabs before actual commands, not spaces. If you copy-paste, you may have to place the tabs manually.
Running makefiles
You can invoke make to run the makefile like this:
make -f file-name target-name
The '-f' flag indicates which makefile to execute; the target-name tells which rule to start the chain with. Both of these options are actually optional. Without the '-f' option, make will look in the current directory for files called 'GNUmakefile', 'Makefile' or 'makefile' and run that. This is why makefiles are usually called 'Makefile'. If the target name is absent, the chain starts at the first rule in the file.
It's not necessary to go to the commandline and type in 'make' yourself: IDEs can often do that for you, although setting the IDE up for that can take some doing. Because there are so many editors, I will not cover this here; google or use the help files to figure out what needs to be done for your editor. I have examples for setting up ConTEXT, an alternative for PN, and MS Visual Studio (5 and 6) in this appendix. The DKP site also has a few examples in its FAQ.
Makefiles, version 2
The makefile shown above was just an extremely simple (and limited) example of what a makefile would look like. Proper makefiles have multiple rules and may use variables to define commonly-used data. The following is a more complex, but also more useful, example.
#
# A more complicated makefile
#
PATH := $(DEVKITARM)/bin:$(PATH)
# --- Project details -------------------------------------------------
PROJ := first
TARGET := $(PROJ)
OBJS := $(PROJ).o
# --- Build defines ---------------------------------------------------
PREFIX := arm-none-eabi-
CC := $(PREFIX)gcc
LD := $(PREFIX)gcc
OBJCOPY := $(PREFIX)objcopy
ARCH := -mthumb-interwork -mthumb
SPECS := -specs=gba.specs
CFLAGS := $(ARCH) -O2 -Wall -fno-strict-aliasing
LDFLAGS := $(ARCH)$(SPECS)
.PHONY : build clean
# --- Build -----------------------------------------------------------
# Build process starts here!
build: $(TARGET).gba

# Strip and fix header (step 3,4)
$(TARGET).gba : $(TARGET).elf
	$(OBJCOPY) -v -O binary $< $@
	-@gbafix $@

# Link (step 2)
$(TARGET).elf : $(OBJS)
	$(LD) $^ $(LDFLAGS) -o $@

# Compile (step 1)
$(OBJS) : %.o : %.c
	$(CC) -c $< $(CFLAGS) -o $@
# --- Clean -----------------------------------------------------------
clean :
@rm -fv *.gba
@rm -fv *.elf
@rm -fv *.o
#EOF
The top half of this makefile is spent defining variables for later use. Something like 'FOO := bar' defines a variable called FOO, which can then be used via $(FOO). Although I'm only using := here, there are other assignment operators as well:

• = : Direct substitution variable (like a C macro).
• := : Basic variable (overrides previous definition).
• ?= : Create variable if it didn't exist yet.
• += : Add to existing variable.

The variables created here are mostly standard things: names for the compiler and linker (CC and LD) and their flags (CFLAGS and LDFLAGS). These aren't strictly necessary, but they are useful. The things actually related to the project are TARGET and OBJS. TARGET is the base-name of the output binary, and OBJS is the list of object files. Note: list of object files! Right now there's only a single file, but later projects will have multiple source files that all have to be compiled and linked. By using a variable like this, adding a new file to the project is a matter of extending this list. It is also a list of object files, not source files: the rules are based on the target names, not the prerequisite names.

There are also more rules now. The primary rules are build and clean (the .PHONY is just to indicate that they're not actually filenames themselves). In the build rule you see how the chaining works: build depends on the .gba binary, which depends on the .elf file, which depends on the object files, which depend on the source files. It's basically the basic steps I gave earlier, in reverse. Part of the makefile magic is that a rule will only be executed if the prerequisites are younger than the targets. For example, if a particular source file has been modified, it will be younger than its .o file, so the compilation rule will run for that particular file but not the others. This is partly why dividing the process into separate rules is useful.

The funny things with dollar signs ($@, etc.) are automatic variables.
They are shorthand for the target and prerequisite names. You can find what they mean in table 2.3. These are just three of the automatic variables available; for a full list, go to the make manual.
$<	Name of the first prerequisite
$^	List of all the prerequisites
$@	Name of the target
The last thing I want to discuss here concerning this particular makefile is the compilation rule. The form %.o : %.c is an example of a static pattern rule. It basically says "for every file in OBJS with the extension '.o', look for the corresponding '.c' file and run the command". Like I said earlier, OBJS can have multiple filenames, each of which will compile automatically via this one rule. Again, this is one of the nice things about makefiles: to add a file to the project, you don't have to write another rule; just add its object name to OBJS and you're done. There are also possibilities to get all files in a directory so that you won't even have to add it yourself, but that's out of the scope of this section.
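For instance, adding a (hypothetical) second source file only requires touching OBJS; the one static pattern rule then compiles both objects:

```makefile
# Extend the object list; no new rules needed.
OBJS := first.o sprites.o

$(OBJS) : %.o : %.c
	$(CC) -c $< $(CFLAGS) -o $@
```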
2.4.3. Legacy: on Tonc's structure
This last section shouldn't really be here. With devkitPro's template makefiles, managing projects should be easy enough without having to know anything about makefiles, so this stuff could be tucked safely in an appendix. So why is it here?
The reason it's put in front is historical in nature. When I started this around 2004, devkitARM was still young and libgba, the installer and the templates simply didn't exist yet. There were a handful of GBA tutorials which did explain the basics, but all used poor (sometimes very poor) programming standards and project structure. With the latter I mean the following:
• using the wrong compiler flags;
• #including the whole program into a single file (covered in some detail in the data section in the chapter on bitmaps);
• using batch-files instead of makefiles.
• code that was simply incorrect or at best very inefficient.
Instead of just saying how to do things, I also tried to make a point about how not to do things. Knowing what to avoid can be just as important as knowing the right moves. I've also tried to ease into makefiles so that they wouldn't seem so daunting for new users. This resulted in dividing Tonc into three main parts:
• basic: completely stand-alone projects; with very simple makefiles.
• extended: projects use tonclib; makefiles are more complex.
• advanced: projects use tonclib and makefiles derived from devkitPro's makefiles.
In the ‘basics’ section, I spend much time on good/bad practices to get them out of the way. This requires knowing elementary makefiles, hence this section. If I had the time or if there were a real need I'd do things differently now, but the requirements of the good/bad practices have made the earlier parts somewhat harder to maintain than the later chapters. One of life's little ironies.
Modified Mar 24, 2013, J Vijn. Get all Tonc files here | 2017-07-20 17:15:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.475541353225708, "perplexity": 3189.299696792177}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423269.5/warc/CC-MAIN-20170720161644-20170720181644-00206.warc.gz"} |
https://www.transtutors.com/questions/using-the-data-in-question-13-how-would-tina-report-the-data-if-the-investment-were--586722.htm | # Using the data in question 13, how would Tina report the data if the investment were long-term and...
Hashmi Company’s investments in available-for-sale securities at December 31 show total cost of $195,000 and total fair value of$205,000. Prepare the adjusting entry.Using the data in question 13, how would Tina report the data if the investment were long-term and the securities were classified as available-for-sale? | 2019-03-26 02:54:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3362331688404083, "perplexity": 8055.39919302426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204768.52/warc/CC-MAIN-20190326014605-20190326040605-00518.warc.gz"} |
https://computergraphics.stackexchange.com/questions/5910/rounding-corners-of-polygon-given-vertices-of-its-corners/5918 | # Rounding corners of polygon given vertices of its corners
Given a polygon (regular or irregular, convex or concave), I want to round its corner, with a given radius; let's say 'x'. I have the code to draw an arc given 2 points, but how can I find the start and end points of the arc?
All I have is a list of points representing each corner of the polygon.
Here's a simple example: I have a list of vertices of corners of the figure (1). The figure (2) is what I need so I could draw the arcs to round the corners.
PS: I'm developing a CAD drawing generator using .netdxf API (https://github.com/haplokuon/netDxf)
Since you're working on CAD software, you probably want some precise results. Here an algorithm that could work:
For each side:
• Compute the segment's equation.
• Compute each round corner's circle equation.
• Compute the intersections between the segment and each circle.
• The 2 intersection points are the new endpoints for the line segment.
This doesn't handle the case where a side is smaller than the rounded corner radius. You could reduce the rounded corner's radius in this case based on the segment's length.
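As a sketch of step 3 of the outline above (in Python, with hypothetical tuple-based points): substitute the line's parametric form into the circle equation and solve the resulting quadratic in the parameter t.

```python
import math

def line_circle_intersections(p0, p1, centre, r):
    """Intersections of the (infinite) line through p0 and p1 with the
    circle of radius r around centre; returns 0, 1 or 2 points."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    fx, fy = p0[0] - centre[0], p0[1] - centre[1]
    # Substitute p0 + t*(p1 - p0) into the circle equation -> a*t^2 + b*t + c = 0.
    a = dx * dx + dy * dy
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                      # line misses the circle
    s = math.sqrt(disc)
    ts = sorted({(-b - s) / (2 * a), (-b + s) / (2 * a)})
    return [(p0[0] + t * dx, p0[1] + t * dy) for t in ts]
```

A tangent line (disc = 0) yields a single point; clamping t to [0, 1] would restrict the result to the segment itself.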
Ok, Xenapior and Reynolds together have the right idea, but the explanation is a bit lacking, so here is an image to explain it all, plus some further musings. First let us start by drawing an image (yes, I know that is what they say in school for you to do, but nobody does it).
From the image we can see that there are 2 equal right triangles $V_2, A, C$ and $V_2, B, C$. In these triangles we have one unknown that we can define, namely the rounding radius $r$; we also know the right angle is 90°. The angle between the line $V_1-V_2 = \vec a$ and the line $V_2-V_3 = \vec b$ is easy to compute with the formula for the angle between vectors
$$\cos(\beta) = \frac{\vec a·\vec b}{ |\vec a|·|\vec b|}$$
That in turn can be simplified if the vectors are already normalized. Thus three things of the triangle are known, which means all is known. So if you know the rounding radius $r$ to use, you can calculate the points $A$, $B$ and $C$. So finally:
a = normalize(V2-V1);
b = normalize(V2-V3);
halfang = acos(dot(a, b))/2.;
// skip center if you use splines
C = V2 - r / sin(halfang) * normalize((a+b)/2);
A = V2 - r/tan(halfang)*a;
B = V2 - r/tan(halfang)*b;
You can simplify this a bit with trigonometric identities. Or if you use rational B-splines you can skip the calculation of C
Note that this is only one possible formulation.
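The formulas above translate into a small runnable sketch (Python here, with tuple-based points; this mirrors the pseudocode, not any particular CAD API):

```python
import math

def round_corner(V1, V2, V3, r):
    """Return the arc endpoints A, B and the arc centre C for the corner
    at V2 with neighbours V1 and V3, rounded with radius r.
    Assumes the three points are distinct and not collinear."""
    # Unit vectors pointing along each edge *into* the corner V2.
    ax, ay = V2[0] - V1[0], V2[1] - V1[1]
    bx, by = V2[0] - V3[0], V2[1] - V3[1]
    la, lb = math.hypot(ax, ay), math.hypot(bx, by)
    ax, ay, bx, by = ax / la, ay / la, bx / lb, by / lb
    # Half the corner angle at V2.
    half = math.acos(ax * bx + ay * by) / 2
    cut = r / math.tan(half)          # distance from V2 back to A and B
    A = (V2[0] - cut * ax, V2[1] - cut * ay)
    B = (V2[0] - cut * bx, V2[1] - cut * by)
    # Centre C lies along the angle bisector, r/sin(half) away from V2.
    mx, my = ax + bx, ay + by
    lm = math.hypot(mx, my)
    d = r / math.sin(half)
    C = (V2[0] - d * mx / lm, V2[1] - d * my / lm)
    return A, B, C
```

For a 90° corner at (10, 0) with r = 2 this yields A = (8, 0), B = (10, 2) and C = (8, 2), with |CA| = |CB| = r as expected.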
The cut length from the vertex is x*ctan(t/2), where t is the angle at this vertex. | 2021-06-17 18:34:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7224712371826172, "perplexity": 628.2618768297353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630518.38/warc/CC-MAIN-20210617162149-20210617192149-00021.warc.gz"} |
http://www.researchgate.net/publication/23230734_Chirality-induced_dynamic_kohn_anomalies_in_graphene | Article
# Chirality-induced dynamic Kohn anomalies in graphene.
• ##### S. Das Sarma
Condensed Matter Theory Center, Department of Physics, University of Maryland, College Park, Maryland 20742, USA.
Physical Review Letters (Impact Factor: 7.73). 09/2008; 101(6):066401. DOI: 10.1103/PhysRevLett.101.066401
Source: PubMed
ABSTRACT We develop a theory for the renormalization of the phonon energy dispersion in graphene due to the combined effects of both Coulomb and electron-phonon (e-ph) interactions. We obtain the renormalized phonon energy spectrum by an exact analytic derivation of the phonon self-energy, finding three distinct Kohn anomalies (KAs) at the phonon wave vector q=omega/v, 2k_{F}+/-omega/v for LO phonons and one at q=omega/v for TO phonons. The presence of these new KAs in graphene, in contrast to the usual KA q=2k_{F} in ordinary metals, originates from the dynamical screening of e-ph interaction (with a concomitant breakdown of the Born-Oppenheimer approximation) and the peculiar chirality of the graphene e-ph coupling.
##### Article: Electron-phonon interactions for optical-phonon modes in few-layer graphene: First-principles calculations
ABSTRACT: We present a first-principles study of the electron-phonon (e-ph) interactions and their contributions to the linewidths for the optical-phonon modes at Γ and K in one-layer to three-layer graphene. It is found that, due to the interlayer coupling and the stacking geometry, the high-frequency optical-phonon modes in few-layer graphene couple with different valence and conduction bands, giving rise to different e-ph interaction strengths for these modes. Some of the multilayer optical modes derived from the Γ-E2g mode of monolayer graphene exhibit slightly higher frequencies and much reduced linewidths. In addition, the linewidths of K-A1′ related modes in multilayers depend on the stacking pattern and decrease with increasing layer numbers.
Physical review. B, Condensed matter 02/2009; 79(11). · 3.66 Impact Factor
##### Article: Many-body effects on out-of-plane phonons in graphene
ABSTRACT: We study the properties of out-of-plane phonons in the framework of the many-body theory of graphene. We investigate, in particular, the way in which the coupling to electron–hole excitations renormalizes the dispersion of the acoustic branch of out-of-plane phonons. We show that the effect of the charge polarization cuts off the quadratic dispersion at low energies, implying the absence of long-wavelength flexural phonons. This result holds in the low-energy Dirac theory of graphene, and it is confirmed by an analysis of the corrections to the interaction vertex beyond the random phase approximation (RPA). Furthermore, we show that the acoustic branch of out-of-plane phonons presents near the K point a strong Kohn anomaly, which is much more pronounced than in the case of the in-plane phonons. The origin of the strong softening of the dispersion lies in the singular behaviour of the intervalley polarization at the threshold of electron–hole formation. This leads to a new branch of hybrid modes below the electron–hole continuum, with the potential to induce significant effects in the transport properties of graphene in the low-temperature regime.
New Journal of Physics 09/2009; 11(9):095015. · 3.67 Impact Factor
##### Article: Power law Kohn anomalies and the excitonic transition in graphene
ABSTRACT: Dirac electrons in graphene in the presence of Coulomb interactions of strength $\beta$ have been shown to display power law behavior with $\beta$ dependent exponents in certain correlation functions, which we call the mass susceptibilities of the system. In this work, we first discuss how this phenomenon is intimately related to the excitonic insulator transition, showing the explicit relation between the gap equation and response function approaches to this problem. We then provide a general computation of these mass susceptibilities in the ladder approximation, and present an analytical computation of the static exponent within a simplified kernel model, obtaining $\eta_0 =\sqrt{1-\beta/\beta_c}$ . Finally we emphasize that the behavior of these susceptibilities provides new experimental signatures of interactions, such as power law Kohn anomalies in the dispersion of several phonons, which could potentially be used as a measurement of $\beta$.
Solid State Communications 02/2012; 152(15). · 1.70 Impact Factor | 2015-01-31 07:05:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6916707158088684, "perplexity": 1920.1655474740749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122108378.68/warc/CC-MAIN-20150124175508-00223-ip-10-180-212-252.ec2.internal.warc.gz"} |
http://math.emory.edu/events/seminars/seminar.php?SEMID=1276 | # MATH Seminar
Title: A new approach to bounding $L$-functions
Seminar: Algebra
Speaker: Jesse Thorner of Stanford
Contact: David Zureick-Brown, dzb@mathcs.emory.edu
Date: 2019-02-26 at 4:00PM
Venue: MSC W201
Abstract:
An $L$-function is a type of generating function with multiplicative structure which arises from either an arithmetic-geometric object (like a number field, elliptic curve, abelian variety) or an automorphic form. The Riemann zeta function $\zeta(s) = \sum_{n=1}^{\infty} n^{-s}$ is the prototypical example of an $L$-function. While $L$-functions might appear to be an esoteric and special topic in number theory, time and again it has turned out that the crux of a problem lies in the theory of these functions. Many equidistribution problems in number theory rely on one's ability to accurately bound the size of $L$-functions; optimal bounds arise from the (unproven!) Riemann Hypothesis for $\zeta(s)$ and its extensions to other $L$-functions. I will discuss some motivating equidistribution problems along with recent work (joint with K. Soundararajan) which produces new bounds for $L$-functions by proving a suitable "statistical approximation" to the (extended) Riemann Hypothesis. | 2020-08-15 02:17:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6577273011207581, "perplexity": 1030.077594613762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": 
"s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740423.36/warc/CC-MAIN-20200815005453-20200815035453-00376.warc.gz"} |
http://spmaddmaths.blog.onlinetuition.com.my/2013/04/2-9-quadratic-equation-spm-practice-paper-1-3.html | # 2.9.3 Quadratic Equation, SPM Practice (Paper 1)
Question 11:
The quadratic equation ${x}^{2}-4x-1=2p\left(x-5\right)$ , where p is a constant, has two equal roots. Calculate the possible values of p.
Solution:
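A reconstructed outline of the solution (the original worked image did not survive extraction):

Rearranging into standard form gives $x^{2}-(4+2p)x+(10p-1)=0$. Two equal roots require $b^{2}-4ac=0$:

$$(4+2p)^{2}-4(10p-1)=0 \;\Rightarrow\; 4p^{2}-24p+20=0 \;\Rightarrow\; (p-1)(p-5)=0$$

Hence $p=1$ or $p=5$.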
Question 12:
Find the range of values of k for which the equation ${x}^{2}-2kx+{k}^{2}+5k-6=0$ has no real roots.
Solution:
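A reconstructed outline of the solution (the original worked image did not survive extraction):

No real roots requires $b^{2}-4ac<0$:

$$(-2k)^{2}-4(k^{2}+5k-6)<0 \;\Rightarrow\; -20k+24<0$$

Hence $k>\frac{6}{5}$.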
Question 13:
Find the range of values of p for which the equation $5{x}^{2}+7x-3p=6$ has no real roots.
Solution:
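A reconstructed outline of the solution (the original worked image did not survive extraction):

Rewriting the equation as $5x^{2}+7x-(3p+6)=0$, no real roots requires $b^{2}-4ac<0$:

$$49+20(3p+6)<0 \;\Rightarrow\; 60p+169<0$$

Hence $p<-\frac{169}{60}$.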
Question 14:
Solution:
Question 15:
The quadratic equation ${x}^{2}+px+q=0$ has roots –2 and 6. Find
(a) the value of p and of q,
(b) the range of values of r for which the equation ${x}^{2}+px+q=r$ has no real roots.
Solution: | 2019-09-22 16:55:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7922573685646057, "perplexity": 381.81233019828574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575596.77/warc/CC-MAIN-20190922160018-20190922182018-00321.warc.gz"} |
https://search.r-project.org/CRAN/refmans/dae/html/mat.dirprod.html | mat.dirprod {dae} R Documentation
## Forms the direct product of two matrices
### Description
Form the direct product of the $m \times n$ matrix $A$ and the $p \times q$ matrix $B$. It is also called the Kronecker product and the right direct product. It is defined to be the result of replacing each element of $A$, $a_{ij}$, with $a_{ij}B$. The result matrix is $mp \times nq$.

The method employed uses the rep function to form two $mp \times nq$ matrices: (i) the direct product of $A$ and $J$, and (ii) the direct product of $J$ and $B$, where each $J$ is a matrix of ones whose dimensions are those required to produce an $mp \times nq$ matrix. Then the elementwise product of these two matrices is taken to yield the result.
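The rep-based construction described above can be sketched in plain Python (an illustrative reimplementation, not the dae source; index arithmetic stands in for R's rep):

```python
def dirprod(A, B):
    """Kronecker product of matrices given as lists of lists, built as the
    elementwise product of (A directprod J) and (J directprod B)."""
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    rows, cols = m * p, n * q
    # (i) direct product of A with a p x q matrix of ones: each a_ij repeated.
    AJ = [[A[i // p][j // q] for j in range(cols)] for i in range(rows)]
    # (ii) direct product of an m x n matrix of ones with B: B tiled m x n times.
    JB = [[B[i % p][j % q] for j in range(cols)] for i in range(rows)]
    # Elementwise product gives the mp x nq result.
    return [[AJ[i][j] * JB[i][j] for j in range(cols)] for i in range(rows)]
```

For example, dirprod([[1, 2], [3, 4]], [[0, 5]]) gives [[0, 5, 0, 10], [0, 15, 0, 20]].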
### Usage
mat.dirprod(A, B)
### Arguments
A : The left-hand matrix in the product.
B : The right-hand matrix in the product.
### Value
An $mp \times nq$ matrix.
### Author(s)
Chris Brien
matmult, mat.dirprod
col.I <- mat.I(order=4) | 2022-12-06 16:46:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999428987503052, "perplexity": 1522.9575852898477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711111.35/warc/CC-MAIN-20221206161009-20221206191009-00555.warc.gz"} |
https://www.physicsforums.com/threads/diferences-b-w-emf-voltage-resistance-resistivity.364628/ | # Diferences b/w emf & voltage,Resistance & resistivity
1. Dec 20, 2009
### hafiz16
Hi, I'm a new member of this forum. Please tell me how to use this forum and how to ask questions of the other members. Also, please tell me the differences between emf & voltage, and resistance & resistivity.
2. Dec 20, 2009
### tiny-tim
Welcome to PF!
Hi hafiz16! Welcome to PF!
emf ("electromotive force") means different things in different books.
usually, it means the same as voltage
See http://en.wikipedia.org/wiki/Electromotive_force#Terminology
… etc etc etc
Resistivity of a material is resistance times cross-section area per length … see http://en.wikipedia.org/wiki/Resistivity#Definitions
Last edited by a moderator: Apr 24, 2017
3. Dec 20, 2009
### cabraham
I believe that "emf" usually refers to a potential due to an energy sourcing device, like a battery, or a generator. This emf is indeed a voltage, as well as being a potential. But, if an energy dissipating device, i.e. passive element, incurs a voltage drop when carrying current, this is a drop in potential, or voltage, but the term "emf" is not used here.
Thus, potential, and voltage are general terms. The term emf is included, as emf is a voltage and potential. But emf is used specifically to describe a voltage/potential generated by an energy sourcing device. The term "drop" denotes dissipation. The emf and the drop are both measured in volts. One indicates energy being sourced, the other indicates dissipation.
Clear?
Claude
4. Dec 20, 2009
### Stonebridge
The emf of a device measures the energy gained by unit charge passing through a cell or dynamo etc. The PD between two points in a circuit measures the energy lost per unit charge as it passes through those points. Both are measured in Volts (joule per coulomb)
One is energy gained. The other is energy lost.
In a closed circuit, energy gained equals energy lost. This is often expressed as Kirchhoff's Rule.
5. Dec 20, 2009
### arunma
The way I like to remember it: emf is the negative line integral of the electric field between two points, whereas a potential difference is the negative of the difference of the electric field's potential function between those points. When the electric field actually has a potential function, these would be the same thing (but we don't call it emf when the field is the gradient of some function). However, some electric fields don't have potential functions; for example, a circular electric field can't be written as the gradient of a potential. But it will still have an emf.

The important thing here is that the voltage around a closed loop is always zero, but you can have a nonzero emf by integrating around a closed loop. As you know,
$$\oint_C\vec{E}\cdot d\vec{l} = -\dfrac{d}{dt}\iint\vec{B}\cdot d\vec{A}$$
So Faraday's Law itself gives an example of how emf can be different from a voltage. If the electric field had a potential function, then the left side would always be zero.
You could also think of the emf as the "work" done by an electric field.
6. Dec 21, 2009
### hafiz16
thxk for all..but i still not clear & i am confuse..What is emf & voltage.. | 2017-11-22 04:37:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6942892670631409, "perplexity": 1086.4315456441564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806455.2/warc/CC-MAIN-20171122031440-20171122051440-00123.warc.gz"} |
https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.122.090601 | # Synopsis: Vindication for New Bose Gas Theory
Experiments confirm predictions of a new hydrodynamic approach to describing a 1D Bose gas, paving the way to better theories for more complex quantum gases.
One-dimensional systems of interacting particles offer researchers a window into macroscale quantum effects. To simplify analysis of such systems, researchers often treat them as continuous fluids rather than discrete bodies. However, this approach fails if the particles are not in thermal equilibrium. In 2016, researchers proposed a new hydrodynamic framework to solve this conundrum (see 27 December 2016 Viewpoint). Now, experiments show that this theory successfully describes the behavior of a 1D Bose gas as it is released from confinement, promising insights into a whole class of out-of-equilibrium many-body systems.
To test the theory, Max Schemmer, from the University of Paris-Sud, and colleagues confined clouds of several thousand rubidium atoms in a magnetic trap built from conducting wires a few millimeters long. They partially relaxed the confinement, allowing the cloud to spread along a single axis. The motion of the atoms in the first fraction of a second allowed the researchers to determine whether the new or the old theoretical framework agreed with observations. Both frameworks predict similar dynamics when the atoms start with a distribution having a single maximum density point, but only the new framework correctly predicts the gas’s evolution from a distribution having two peak density locations.
Theory and experiment did not conform perfectly, however. The atoms’ distribution deviated slightly from theory as the experiment progressed. The team attributes this to atoms gradually leaking from the cloud. Investigating this effect will be the next step, but the team also hopes to test the theory against other 1D atomic gases, such as strongly repulsive bosonic atoms and fermionic gases.
This research is published in Physical Review Letters.
–Marric Stephens
Marric Stephens is a freelance science writer based in Bristol, UK.
https://math.stackexchange.com/questions/3396022/split-exact-sequence | # Split exact sequence
Let $$G=\langle a,b : a^8=b^p=1,\ a^{-1}ba=b^\alpha \rangle$$, where $$p$$ is a prime with $$4$$ dividing $$p-1$$ and $$\alpha$$ is an element of order $$4$$ modulo $$p$$ (so $$\alpha^4 \equiv 1 \pmod p$$). I want to compute the commutator subgroup of $$G$$. My attempt is:
Let $$H=\langle b\rangle$$ and $$K=\langle a\rangle$$. Then the sequence $$\{1\}\longrightarrow H\stackrel{i}{\longrightarrow} G\stackrel{\pi}{\longrightarrow} K\longrightarrow \{1\}$$, where $$i(b)=b$$ and $$\pi(a^nb^m)=a^n$$, is exact. Therefore $$G/H \simeq K$$.
I also think that $$[G,G]\simeq H$$. One direction is clear: $$[G,G]\leq H$$, since $$G/H\simeq K$$ is abelian. But I am stuck on proving $$H\leq [G,G]$$.
• It seems correct. Can you tell the role of $\alpha$ in this proof? – MANI Oct 16 '19 at 11:14
Your proof that the above sequence is short exact is correct; to see that it splits, define a map $$t:K\rightarrow G$$ by $$t(a)=a$$. I asked a similar question (Commutator subgroup of a group of order $8q$, where $q$ is an odd prime) and got an answer, so I will answer in the same pattern:
Now to show that $$H\subseteq [G,G]$$: let $$g=b^k \in H$$ with $$k\neq 0$$, and consider $$[a^{-1},g]=a^{-1}gag^{-1}=b^{\alpha k}b^{-k}=b^{k(\alpha-1)}$$. Since $$p$$ is prime and $$\alpha\not\equiv 1 \pmod p$$, this exponent is nonzero modulo $$p$$; and whenever $$b^{n}\in [G,G]$$ with $$p\nmid n$$, it follows that $$b\in [G,G].$$
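Since the presentation is completely explicit, the claim can be sanity-checked by brute force for a small admissible prime. Below is a Python sketch (my own addition, not part of the original thread) with the illustrative choice $$p=5$$, $$\alpha=2$$ (note $$2$$ has order $$4$$ modulo $$5$$): elements are modeled as pairs $$(i,j)$$ standing for $$a^ib^j$$, using the relation $$b a^k = a^k b^{\alpha^k}$$.

```python
# Brute-force check of [G,G] for G = <a,b | a^8 = b^p = 1, a^-1 b a = b^alpha>,
# with the illustrative choice p = 5, alpha = 2 (2 has order 4 mod 5, and 4 | p-1).
p, alpha = 5, 2

def mul(x, y):
    # (a^i b^j)(a^k b^l) = a^(i+k) b^(j*alpha^k + l), since b a^k = a^k b^(alpha^k)
    (i, j), (k, l) = x, y
    return ((i + k) % 8, (j * pow(alpha, k, p) + l) % p)

def inv(x):
    i, j = x
    k = (-i) % 8
    return (k, (-j * pow(alpha, k, p)) % p)

G = [(i, j) for i in range(8) for j in range(p)]
comms = {mul(mul(x, y), mul(inv(x), inv(y))) for x in G for y in G}

# close the set of commutators under multiplication to get [G,G]
# (the set of commutators is already inverse-closed, since [x,y]^-1 = [y,x])
derived = set(comms)
changed = True
while changed:
    changed = False
    for x in list(derived):
        for y in list(derived):
            z = mul(x, y)
            if z not in derived:
                derived.add(z)
                changed = True

print(sorted(derived))  # expect {(0, j) : 0 <= j < 5}, i.e. <b>
```

The closure comes out as $$\{(0,j): 0\le j<5\}$$, i.e. $$[G,G]=\langle b\rangle$$, consistent with the argument above.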
• Why would such an $n$ exist? – Priya Pandey Oct 16 '19 at 12:54
https://www.optionstocksmachines.com/post/skew-who/ | # Skew who?
In our last post on the SKEW index we looked at how good the index was in pricing two standard deviation (2SD) down moves. The answer: not very. But, we conjectured that this poor performance may be due to the fact that it is more accurate at pricing larger moves, which occur with greater frequency relative to the normal distribution in the S&P. In fact, we showed that on a monthly basis, two standard deviation moves in the S&P 500 (the index underlying the SKEW) occur with approximately the same frequency as would be expected in a normal distribution. Additionally, we used a proxy for a 2SD move based on history rather than the expected magnitude of a 2SD move derived from the VIX, the volatility index that is based on the same options data as the SKEW. So if we were using the expected magnitude of a 2SD move implied by the VIX at each time slice to test the SKEW's predictive power, we might very well get a different answer. We address those points in this post.
For reference, here’s a chart of the SKEW index
Recall, the SKEW index attempts to quantify tail-risk. In fact, the CBOE provides a nifty table that gives the probability that a 2SD down move will occur in the next month based on the index. We provide an interpolated example below.
Table 1: CBOE estimated risk-adjusted probability (%)
| Skew | Probability |
|------|-------------|
| 100  | 2.30        |
| 105  | 3.65        |
| 110  | 5.00        |
| 115  | 6.35        |
| 120  | 7.70        |
| 125  | 9.05        |
| 130  | 10.40       |
| 135  | 11.75       |
| 140  | 13.10       |
| 145  | 14.45       |
| 150  | 15.80       |
| 155  | 17.15       |
Source: CBOE, OSM estimates
In our last post, we showed that these probabilities over-estimated the frequency of a down move, as shown in the graph below.
However, we admitted that we were using a rough rule-of-thumb of a decline of greater than 9% over a 30-day period as the expected downside. This approximation is based on the historical annualized volatility of 16% on the S&P 500. That may have been unfair given that volatility is volatile and the VIX reflects that. Now, we calculate the expected magnitude of a 2SD down move based on the VIX. We then see how close the expected probability based on the SKEW matches up with the actual frequency.
That does not look too good. In fact, it's even worse than using an approximation, and it is consistently poor across the board. Even the tallest bar is less than half of what the SKEW expects. Perhaps we should check 3SD moves, as suggested in the introduction. However, when we did that, there were only two buckets among twelve in which the S&P moved by an amount equal to or greater than the expected 3SD magnitude. Hence, there's no point in showing a nearly empty graph. However, we'll include the code below if you want to see for yourself.
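For reference, the conversion from a VIX level to an expected monthly move used above is simple; here is a quick sketch (in Python rather than the post's R), assuming the VIX is quoted as annualized volatility in percent:

```python
import math

# Monthly 1SD move implied by the VIX: annualized vol scaled down by sqrt(12)
vix = 16.0                      # annualized implied volatility, %
one_sd = vix / math.sqrt(12)    # expected one-month 1SD move, %
two_sd = 2 * one_sd             # expected one-month 2SD move, %
print(round(one_sd, 1), round(two_sd, 1))  # 4.6 9.2 -- the ~9% rule of thumb
```

At a VIX of 16 this recovers the roughly 9% 2SD approximation used in the last post; at higher VIX levels the implied 2SD threshold is correspondingly larger.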
Maybe we’re missing something. One problem is that we’re only looking at the return on the S&P 500 one month hence (effectively 22 trading days). But the options have to price in the potential for a 2SD move prior to the 30 day expiration. To try to account for that, we calculate the max drawdown for every 30 day forward period associated with each daily close of the SKEW index. We then compute how often the max drawdown was at or below the implied 2SD down move. The chart is below.
This is not much better, but at least it's modestly upward sloping.
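The forward-window max-drawdown computation described above mirrors the R loop in the code at the end of the post; here is the same idea as a minimal Python sketch (the function name and toy prices are mine):

```python
# For each day, look at the next `window` trading days of prices (inclusive of
# day 0) and record the worst return measured from day 0 -- the forward
# max drawdown used in the chart above.
def forward_drawdowns(prices, window=22):
    out = []
    for i in range(len(prices) - window + 1):
        base = prices[i]
        out.append(min(p / base - 1 for p in prices[i:i + window]))
    return out

prices = [100, 98, 101, 90, 95]
print([round(d, 4) for d in forward_drawdowns(prices, window=3)])
# [-0.02, -0.0816, -0.1089]
```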
The foregoing analysis suggests that the SKEW index is not very accurate at predicting major moves in advance. In general, it overestimates the frequency of moves greater than 2SD. Additionally, the poor predictive performance does not appear to have any consistency or clustering, which means that it seems unlikely that we could use the index as an investing indicator, certainly not directionally. We could theoretically apply it to finding overpriced options, but that avenue of research is difficult to reproduce from public data, so we may shelve that for now.
Why is the SKEW so poor at prediction? We won’t offer an exhaustive answer here. But we believe a major reason is that market makers need to price in a larger-than-likely down move to ensure they stay in business. In general, the demand to buy puts is higher than for calls. If, as market maker, you’re obligated to be on the other side of that trade—selling puts—you want to make sure if you’ve priced in a bit of cushion so that if you’re wrong, besides being compensated to take risk in general, you can trade again tomorrow. Selling puts entails an unknown, but not unlimited, amount of risk.
It’s easiest to see this over pricing of downside risk in the scatter plots below. Here, we graph the 1SD and 2SD VIX-implied expected vs. actual moves with the 45o line to delineate a one-to-one relationship. As we can see, the actual moves are generally below the expected.
In fact, about 82.9% of the time, the one month move is below the expected 1SD move and 98.8% of the time it is below 2SD. If this were a normal distribution, those numbers should be 68% and 95%.
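The normal-distribution benchmarks quoted here can be recovered from the error function, since P(|Z| ≤ k) = erf(k/√2); a quick Python check:

```python
import math

# Probability that a standard normal lies within k standard deviations:
# P(|Z| <= k) = erf(k / sqrt(2))
def within(k):
    return math.erf(k / math.sqrt(2))

print(round(within(1) * 100, 1), round(within(2) * 100, 1))  # 68.3 95.4
```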
This begs the question why, if there’s such a wide (and relatively obvious) difference, it isn’t being exploited away? Answering that question has offered academics fertile fields of many cud-chewing opportunities. We obviously can’t even begin to answer it here. But the power of R programming affords a quick and dirty data chug to offer a hypothesis.
The short answer: in months when the market is negative, not only is the end month move greater than expected, the intra-month drawdown is also much greater. Here’s a chart that shows the intra-month drawdown for down months.
We can see that far more of the points cluster above the 45° line than in the other graphs. In fact, about 36.7% of max declines in down months are greater than the expected 1SD move. And in the down months, the actual decline is greater than the expected 1SD 20.3% of the time. If this were a normal distribution, one would expect only about 16% of the monthly declines to be greater than 1SD and 32% to exceed that 1SD level at least once in the period.
What does this suggest? Even if the VIX overestimates the magnitude of down moves on average, in the down months it still underestimates the decline. Alternatively, the impact of upwardly trending markets skews the average. All of which suggests that trying to exploit this anomaly is really about predicting the market direction. And if you’re able to do that, you’re probably better off speculating on direction, than collecting a 3-4% mispricing.
In the end, we’ve probably exhausted the discussion on the usefulness of the SKEW as a predictive tool. We’ve also looked at reasons for its poor performance and scratched the surface of the thorny volatility risk premium puzzle. Probably enough for one post. Until the next post, we present our code below. For comments and questions, our email is below the code.
# Load package
library(tidyquant)
library(knitr)
library(kableExtra)
# Graph
skew %>%
ggplot(aes(date, skew)) +
geom_line(color = "blue") +
labs(x = "",
y = "Index",
title = "CBOE Skew Index",
caption = "Source: CBOE") +
theme(plot.caption = element_text(hjust = 0))
# 2SD & 2SD probability vectors
seq <- seq(100,160,5)
skew_idx <- cut(seq[-1], seq)
prob <- c(0.023, 0.0365, 0.05,
0.0635, 0.077, 0.0905,
0.104, 0.1175, 0.1310,
0.1445, .158, 0.1715)
radj_prob <- data.frame(skew = skew_idx, prob = prob)
prob1 <- c(0.0015, 0.0045, 0.0074,
0.0104, 0.0133, 0.0163,
0.0192, 0.022, 0.0251,
0.0281,0.031,0.0339)
radj_prob1 <- data.frame(skew = skew_idx, prob = prob1)
# CBOE interpolated 2SD probability table
data.frame(Skew = seq(100,160,5),
Probability = prob*100) %>%
knitr::kable("html",
caption = "CBOE estimated risk-adjusted probability (%)") %>%
kableExtra::footnote(general = "CBOE, OSM estimates",
general_title = "Source: ")
# NOT SHOWN: CBOE interpolated 3SD probability table
data.frame(Skew = seq(100,160,5),
Probability = prob1*100) %>%
knitr::kable("html",
caption = "CBOE estimated risk-adjusted probability (%)") %>%
kableExtra::footnote(general = "CBOE, OSM estimates",
general_title = "Source: ")
skew_cuts <- cut(skew$skew, seq(100,160,5))
probs <- c()
for(i in 1:length(skew_cuts)){
probs[i] <- as.numeric(radj_prob[which(skew_cuts[i] == radj_prob$skew),][2])
}
probs1 <- c()
for(i in 1:length(skew_cuts)){
probs1[i] <- as.numeric(radj_prob1[which(skew_cuts[i] == radj_prob1$skew),][2])
}
skew <- skew %>%
mutate(prob_2sd = probs, prob_3sd = probs1)
# Graph
skew %>%
mutate(sp_move = ifelse(sp_1m <= -0.09, 1, 0)) %>%
na.omit() %>%
group_by(prob_2sd) %>%
summarise(correct = mean(sp_move)) %>%
filter(!prob_2sd %in% c(0.023, 0.131, 0.1445, 0.158, 0.1715)) %>%
ggplot(aes(as.factor(prob_2sd*100), correct*100)) +
geom_bar(stat = "identity", fill = "blue") +
labs(x = "Probability (%)",
y = "Frequency (%)",
title = "Skew implied outlier move probabilities vs. actual occurrence",
caption = "Source: CBOE, OSM estimates") +
theme(plot.caption = element_text(hjust = 0))
# 2SD probability based on implied 2SD move
skew %>%
na.omit() %>%
group_by(prob_2sd) %>%
summarise(correct = mean(sp_1m <= -two_sd/100)) %>%
filter(!prob_2sd %in% c(0.023, .1445, .158, 0.1715)) %>%
ggplot(aes(as.factor(prob_2sd*100), correct*100)) +
geom_bar(stat = "identity", fill = "blue") +
labs(x = "Probability (%)",
y = "Frequency (%)",
title = "Skew implied outlier move probabilities vs. actual occurrence",
caption = "Source: CBOE, OSM estimates") +
theme(plot.caption = element_text(hjust = 0))
# NOT SHOWN: 3SD probability based on implied 3SD move
skew %>%
na.omit() %>%
group_by(prob_3sd) %>%
summarise(correct = mean(sp_1m <= -three_sd/100)) %>%
ggplot(aes(as.factor(prob_3sd*100), correct*100)) +
geom_bar(stat = "identity", fill = "blue") +
labs(x = "Probability (%)",
y = "Frequency (%)",
title = "Skew implied outlier move probabilities vs. actual occurrence",
caption = "Source: CBOE, OSM estimates") +
theme(plot.caption = element_text(hjust = 0))
## Drawdown analysis
# Create drawdown vector for max drawdown during any 30 day period
drawdown <- c()
for(i in 1:(nrow(skew)-21)){
dat <- skew$sp[i:(i+21)]
ret <- dat/dat[1]-1
drawdown[i] <- min(ret)
}
drawdown <- c(drawdown, rep(NA, nrow(skew) - length(drawdown)))
skew$drawdown <- drawdown
# Bar chart of relative of frequency of drawdowns greater than 2SD vs SKEW-implied probability
skew %>%
na.omit() %>%
group_by(prob_2sd) %>%
summarise(drawdown = mean(drawdown <= -two_sd/100)) %>%
filter(!prob_2sd %in% c(0.023, .158, .1715)) %>%
ggplot(aes(factor(prob_2sd*100), drawdown*100)) +
geom_bar(stat = 'identity', fill = "blue") +
geom_text(aes(label = round(drawdown,3)*100), nudge_y = 0.25) +
labs(x = "Probability (%)",
y = "Frequency (%)",
title = "Frequency of drawdown equal or greater than 2SD move vs. expected probability")
# Graph
skew %>%
mutate(one_sd = vix/sqrt(12)) %>%
na.omit() %>%
select(one_sd, two_sd, sp_1m) %>%
gather(key, value, -sp_1m) %>%
ggplot(aes(value, abs(sp_1m*100))) +
geom_point(color = "blue", alpha = 0.4) +
geom_abline() +
facet_wrap(~key,
labeller = labeller(key = c(one_sd = "1SD move",
two_sd = "2SD move"))) +
scale_x_continuous(limits = c(0,30), expand = c(0,0)) +
scale_y_continuous(limits = c(0,30)) +
labs(x = "Expected (%)",
y = "Actual (%)",
title = "VIX-implied S&P 500 SD moves: expected vs. actual")
# Frequencies
act_exp_1sd <- skew %>%
mutate(one_sd = vix/sqrt(12)) %>%
summarise(actual = round(mean(abs(sp_1m*100)<= one_sd, na.rm = TRUE),3)*100) %>%
as.numeric()
act_exp_2sd <- skew %>%
summarise(actual = round(mean(abs(sp_1m*100)<= two_sd, na.rm = TRUE),3)*100) %>%
as.numeric()
# Drawdowns vs
skew %>%
mutate(one_sd = vix/sqrt(12)) %>%
na.omit() %>%
filter(sp_1m <= 0) %>%
ggplot(aes(one_sd, abs(drawdown)*100)) +
geom_point(color = "blue", alpha = 0.4) +
geom_abline() +
scale_x_continuous(limits = c(0,30), expand = c(0,0)) +
scale_y_continuous(limits = c(0,30)) +
labs(x = "Expected (%)",
y = "Drawdown (%)",
title = "VIX-implied 1SD S&P 500 move in down months: expected vs. drawdown")
down_move <- skew %>%
mutate(one_sd = vix/sqrt(12)) %>%
filter(sp_1m <= 0) %>%
summarise(actual = round(mean(abs(sp_1m*100) >= one_sd, na.rm = TRUE),3)*100) %>%
as.numeric()
draw_down <- skew %>%
mutate(one_sd = vix/sqrt(12)) %>%
filter(sp_1m <= 0) %>%
summarise(correct = round(mean(drawdown <= -one_sd/100, na.rm = TRUE),3)*100) %>%
as.numeric()
http://math.stackexchange.com/questions/82973/how-is-this-not-an-equivalence-relation | # How is this not an equivalence relation?
If we have a relation $\sim$ on $\mathbb{Z}/6\mathbb{Z}\times (\mathbb{Z}/6\mathbb{Z}\setminus\{0\})$ so that $(w,x)\sim(y,z)$ if $wz=xy$, how is $\sim$ not an equivalence relation?
-
Obviously $(w,x)\sim(w,x)$ because $wx=wx$. Next, if $(w,x)\sim(y,z)$, we see that $wx=yz$ and so $zy=xw$ and we see that $(y,z)\sim(w,x)$. Finally, if $(w,x)\sim (y,z)$ and $(y,z)\sim(a,b)$ then (after some work) $wzb=xza$ or $wb=xa$ which means that $(w,x)\sim(a,b)$. But I am told this is not an equivalence relation. Any help? – johnnymath Nov 17 '11 at 8:48
What do you mean by $wz=xy$? – user7530 Nov 17 '11 at 8:50
Think about involving divisors of zero – marwalix Nov 17 '11 at 8:54
To give a counterexample proving that the relation is not transitive – marwalix Nov 17 '11 at 8:57
In your comment (which you could preferably have added to the question), you cancel $z$ on both sides of an equation. This is not valid in $\mathbb Z/6\mathbb Z$, since it contains zero divisors. Thus, if you have $wb=2$ and $xa=4$, then with $z=3$ you get $wzb=xza$ despite $wb\ne xa$. Thus for instance $(1,1)\sim(3,3)$ and $(3,3)\sim(2,4)$, but $(1,1)\nsim(2,4)$.
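For readers who want to see the failure concretely, here is a small brute-force check (a Python sketch added for illustration, not part of the original thread) that the relation is reflexive and symmetric but not transitive:

```python
# ~ on Z/6Z x (Z/6Z \ {0}): (w,x) ~ (y,z) iff w*z == x*y (mod 6)
pairs = [(w, x) for w in range(6) for x in range(1, 6)]

def rel(p, q):
    return (p[0] * q[1] - p[1] * q[0]) % 6 == 0

assert all(rel(p, p) for p in pairs)                              # reflexive
assert all(rel(q, p) for p in pairs for q in pairs if rel(p, q))  # symmetric
# transitivity fails: (1,1) ~ (3,3) ~ (2,4) but (1,1) !~ (2,4)
assert rel((1, 1), (3, 3)) and rel((3, 3), (2, 4))
assert not rel((1, 1), (2, 4))
print("reflexive and symmetric, but not transitive")
```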
http://mathoverflow.net/questions/20445/coend-computation/20451 | # Coend computation
Let
$F:A^{\mbox{op}} \to \mbox{Set}$
and define
$G_a:A\times A^{\mbox{op}} \to \mbox{Set}$
$G_a(b,c) = \mbox{hom}(a,b) \times F(c)$.
I think the coend of $G_a$,
$\int^AG_a$,
ought to be $F(a)$--it's certainly true when A is discrete, since then hom is a delta function. But my colimit-fu isn't good enough to actually compute the thing and verify it's true. Can someone walk me through the computation, please?
-
is this a category-theoretic analogue of the main theorem of the fundamental theorem of calculus? – Martin Brandenburg Apr 6 '10 at 0:13
@Martin: an FToC analogue would require a 'boundary' computation of some sort. This seems more like a distribution computation, where the hom works like $\delta(a-b)$, so the integral over the whole space is just evaluation. – Jacques Carette Apr 6 '10 at 3:20
But you asked to be walked through it. First: yes, it is $F(a)$. Another way of writing your coend $$\int^A G_a$$ is as $$\int^{b \in A} G_a(b, b) = \int^b \mathrm{hom}(a,b) \times F(b).$$ I claim this is canonically isomorphic to $F(a)$. I'll prove this by showing that for an arbitrary set $S$, the homset $\mathrm{hom}(\mathrm{this}, S)$ is canonically isomorphic to $\mathrm{hom}(F(a), S)$. The claim will then follow from the ordinary Yoneda Lemma.
So, let $S$ be a set. Then \begin{align} \mathrm{Set}(\int^b \mathrm{hom}(a, b) \times F(b), S) & \cong \int_b \mathrm{Set}(\mathrm{hom}(a, b) \times F(b), S) \\ &\cong \int_b \mathrm{Set}(\mathrm{hom}(a, b), \mathrm{Set}(F(b), S)) \\ &\cong \mathrm{Nat}(\hom(a, -), \mathrm{Set}(F(-), S)) \\ &\cong \mathrm{Set}(F(a), S) \end{align} I don't know how much of this you'll want explaining, so I'll just say it briefly for now. If you want further explanation, just ask. The first isomorphism is kinda the definition of colimit. The second is the usual exponential transpose/currying operation. The third is maybe the most important: it's a fundamental fact about ends that if $F, G: C \to D$ are functors then $$\mathrm{Nat}(F, G) = \int_c D(F(c), G(c)).$$ The fourth and final isomorphism is the ordinary Yoneda Lemma applied to the functor $\mathrm{Set}(F(-), S)$.
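As a concrete complement to the abstract proof, the coend can also be computed directly for a toy category by quotienting $\coprod_b \mathrm{hom}(a,b)\times F(b)$ by the relations $(g\circ f,\, x) \sim (f,\, F(g)(x))$. The Python sketch below (my own illustration; the category, the functor $F$, and all names are made up) does this for the poset $0 \to 1$ with $a = 0$, and recovers a set of size $|F(0)|$:

```python
# Toy check of ∫^b hom(a,b) × F(b) ≅ F(a) for the poset category 0 -> 1
# (one non-identity arrow u), with a = 0.

objs = [0, 1]
# hom-sets, morphisms named by strings; composition in this poset is forced
homs = {(0, 0): ["id0"], (0, 1): ["u"], (1, 1): ["id1"], (1, 0): []}
F = {0: ["p", "q"], 1: ["r"]}                       # F: A^op -> Set
F_of = {"u": {"r": "p"}, "id0": {"p": "p", "q": "q"}, "id1": {"r": "r"}}

a = 0
elems = [(f, x) for b in objs for f in homs[(a, b)] for x in F[b]]

parent = {e: e for e in elems}                      # union-find over elems
def find(e):
    while parent[e] != e:
        e = parent[e]
    return e

def compose(g, f):                                  # g ∘ f, with f: a->b, g: b->c
    if f.startswith("id"): return g
    if g.startswith("id"): return f
    return None  # no composable non-identity pairs arise in this example

for b in objs:
    for c in objs:
        for g in homs[(b, c)]:
            for f in homs[(a, b)]:
                gf = compose(g, f)
                for x in F[c]:
                    # identify (g∘f, x) with (f, F(g)(x))
                    parent[find((gf, x))] = find((f, F_of[g][x]))

classes = {find(e) for e in elems}
print(len(classes))  # 2 == |F(0)|
```

Here the two classes are $\{(\mathrm{id}_0,p),(u,r)\}$ and $\{(\mathrm{id}_0,q)\}$, matching $F(0)=\{p,q\}$, exactly as the general argument predicts.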
I'm a little confused by your notation: specifically, what is the category $\mathrm{Nat}$? From the types its objects are functors $A^{op} \to \mathrm{Set}$, so it's the functor category whose objects are the natural transformations between those functors? – Neel Krishnaswami Apr 6 '10 at 14:14
FWIW You can formalise the steps in the construction of this isomorphism as a Haskell program. Here's one direction, with f being the relevant morphism: hpaste.org/fastcgi/hpaste.fcgi/view?id=24722 I suspect that by interpreting Haskell code as the internal language of some family of categories then this becomes a perfectly good definition of the isomorphisms for mathematical purposes too, and not just a statement about some Haskell functions. – Dan Piponi Apr 6 '10 at 16:53
Neel, I guess Nat is a slightly unspecific notation, like hom. The first occurrence of Nat meant "hom in $[A^{op}, \mathrm{Set}]$". Here $[A^{op}, \mathrm{Set}]$ is the category whose objects are functors $A^{op} \to \mathrm{Set}$. So: Nat is not a category; it means "hom" in that functor category. I might equally well have written "$[A^{op}, \mathrm{Set}]$" in place of "Nat". (In fact I prefer to; I was making a perhaps misguided effort to be more widely comprehensible.) Similarly, the second occurrence of "Nat" could be replaced by "$[C, D]$". – Tom Leinster Apr 6 '10 at 20:08
Ah, thanks -- I've just never seen Nat used like that before. Amusingly, the $[A^{op}, \mathrm{Set}]$ notation is the one I've seen before. :) – Neel Krishnaswami Apr 6 '10 at 22:43
http://www.worldlibrary.in/articles/eng/Proper_time | #jsDisabledContent { display:none; } My Account | Register | Help
# Proper time
Article Id: WHEBN0000494418
Title: Proper time | Author: World Heritage Encyclopedia | Language: English | Publisher: World Heritage Encyclopedia
### Proper time
In relativity, proper time along a timelike (or lightlike) world line is defined as the time as measured by a clock following that line. It is thus independent of coordinates, and a Lorentz scalar.[1] The proper time interval between two events on a world line is the change in proper time. This is the quantity of interest, since proper time itself is fixed only up to an arbitrary additive constant, namely the setting of the clock at some event along the world line. The proper time between two events depends not only on the events but also the world line connecting them, and hence on the motion of the clock between the events. It is expressed as an integral over the world line. An accelerated clock will measure a smaller elapsed time between two events than that measured by a non-accelerated (inertial) clock between the same two events. The twin paradox is an example of this effect.
The dark blue vertical line represents an inertial observer measuring a coordinate time interval t between events E1 and E2. The red curve represents a clock measuring its proper time interval τ between the same two events.
In terms of four-dimensional spacetime, proper time is analogous to arc length in three-dimensional (Euclidean) space. By convention, proper time is usually represented by the Greek letter τ (tau) to distinguish it from coordinate time represented by t.
By contrast, coordinate time is the time between two events as measured by an observer using that observer's own method of assigning a time to an event. In the special case of an inertial observer in special relativity, the time is measured using the observer's clock and the observer's definition of simultaneity.
The concept of proper time was introduced by Hermann Minkowski in 1908,[2] and is a feature of Minkowski diagrams.
## Contents

• Mathematical formalism
• In special relativity
• In general relativity
• Examples in special relativity
• Example 1: The twin "paradox"
• Example 2: The rotating disk
• Examples in general relativity
• Example 3: The rotating disk (again)
• Example 4: The Schwarzschild solution — time on the Earth
• Footnotes
• References
## Mathematical formalism
The formal definition of proper time involves describing the path through spacetime that represents a clock, observer, or test particle, and the metric structure of that spacetime. Proper time is the pseudo-Riemannian arc length of world lines in four-dimensional spacetime.
From the mathematical point of view, coordinate time is assumed to be predefined and we require an expression for proper time as a function of coordinate time. From the experimental point of view, proper time is what is measured experimentally and then coordinate time is calculated from the proper time of some inertial clocks.
### In special relativity
Let the Minkowski metric be defined by
$$\eta_{\mu\nu} = \left ( \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{matrix} \right ),$$
and define
$$(x^0, x^1, x^2, x^3) = (ct, x, y, z)$$
for arbitrary Lorentz frames.
Consider an infinitesimal interval
$$ds^2 = c^2dt^2 - dx^2 - dy^2 - dz^2 = \eta_{\mu\nu}dx^\mu dx^\nu, \tag{1}$$
expressed in any Lorentz frame and here assumed timelike (or lightlike), separating points on a trajectory of a particle (think clock). The same interval can be expressed in coordinates such that at each moment, the particle is at rest. Such a frame is called an instantaneous rest frame. Due to the invariance of the interval (instantaneous rest frames taken at different times are related by Lorentz transformations) one may write
$$ds^2 = c^2d\tau^2 - dx_\tau^2 - dy_\tau^2 - dz_\tau^2 = \eta_{\mu\nu}dx_\tau^\mu dx_\tau^\nu = c^2d\tau^2,$$
since in the instantaneous rest frame, the particle is at rest. Since the interval is assumed timelike (lightlike), one may take the square root of the above expression;[3]
$$ds = c\,d\tau,$$
or
$$d\tau = \frac{ds}{c}.$$
Given this differential expression for τ, the proper time interval is defined as
$$\Delta\tau = \int_P d\tau = \int \frac{ds}{c}. \tag{2}$$
Here P is the worldline from some initial event to some final event, with the ordering of the events fixed by the requirement that the final event occurs later according to the clock than the initial event (for a lightlike world line, if allowed, the result is zero by definition (2), since in this case ds = 0[4]).
Using (1) and again the invariance of the interval, one may write[5]
$$\begin{aligned}\Delta\tau &= \int_P \frac{1}{c} \sqrt{\eta_{\mu\nu}dx^\mu dx^\nu}\\ &= \int_P \sqrt {dt^2 - {dx^2 \over c^2} - {dy^2 \over c^2} - {dz^2 \over c^2}}\\ &= \int \sqrt {1 - \frac{1}{c^2} \left [ \left (\frac{dx}{dt}\right)^2 + \left (\frac{dy}{dt}\right)^2 + \left ( \frac{dz}{dt}\right)^2 \right] }dt\\ &= \int \sqrt {1 - \frac{v(t)^2}{c^2}} dt = \int \frac{dt}{\gamma(t)},\end{aligned} \tag{3}$$
where v(t) is the coordinate speed at coordinate time t, and x(t), y(t), and z(t) are space coordinates. The first expression is manifestly Lorentz invariant; in fact they all are, since proper time and proper time intervals are coordinate-independent by definition.
If t, x, y, z, are parameterised by a parameter λ, this can be written as
$$\Delta\tau = \int \sqrt {\left (\frac{dt}{d\lambda}\right)^2 - \frac{1}{c^2} \left [ \left (\frac{dx}{d\lambda}\right)^2 + \left (\frac{dy}{d\lambda}\right)^2 + \left ( \frac{dz}{d\lambda}\right)^2 \right] } \,d\lambda.$$
If the velocity of the particle is constant, the expression simplifies to
$$\Delta \tau = \sqrt{\left(\Delta t\right)^2 - \frac{\left(\Delta x\right)^2}{c^2} - \frac{\left(\Delta y\right)^2}{c^2} - \frac{\left(\Delta z\right)^2}{c^2}},$$
where Δ means the change in coordinates between the initial and final events. The definition in special relativity generalizes straightforwardly to general relativity, as follows below.
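Equation (3) also lends itself to direct numerical evaluation. As an illustrative check (the acceleration profile is an added example, not from the article), take a clock with constant proper acceleration g in units where c = 1, so that v(t) = gt/√(1+(gt)²); the closed form for the proper time is τ = asinh(gT)/g:

```python
import math

# Evaluate Eq. (3), Δτ = ∫ sqrt(1 - v(t)^2/c^2) dt, by the midpoint rule,
# for constant proper acceleration g (units c = 1): v(t) = g t / sqrt(1 + (g t)^2).
g, T, N = 1.0, 2.0, 100_000
dt = T / N
tau = 0.0
for k in range(N):
    t = (k + 0.5) * dt
    v = g * t / math.sqrt(1.0 + (g * t) ** 2)
    tau += math.sqrt(1.0 - v * v) * dt

exact = math.asinh(g * T) / g   # closed-form proper time for this motion
print(round(tau, 4), round(exact, 4))  # both ≈ 1.4436
```

The numerical integral agrees with the closed form to high precision, as it should for any parameterization of the same world line.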
### In general relativity
Proper time is defined in general relativity as follows: Given a pseudo-Riemannian manifold with local coordinates $x^\mu$ and equipped with a metric tensor $g_{\mu\nu}$, the proper time interval $\Delta\tau$ between two events along a timelike path $P$ is given by the line integral[6]
$$\Delta\tau = \int_P \, d\tau = \int_P \frac{1}{c}\sqrt{g_{\mu\nu} \; dx^\mu \; dx^\nu}. \tag{4}$$
This expression is, as it should be, invariant under coordinate changes. It reduces (in appropriate coordinates) to the expression of special relativity in flat spacetime.
In the same way that coordinates can be chosen such that x1, x2, x3 = const in special relativity, this can be done in general relativity too. Then, in these coordinates,[7]
\Delta\tau = \int_P d\tau = \int_P \frac{1}{c}\sqrt{g_{00}} dx^0.
This expression generalizes definition (2) and can be taken as the definition. Then using invariance of the interval, equation (4) follows from it in the same way (3) follows from (2), except that here arbitrary coordinate changes are allowed.
## Examples in special relativity
### Example 1: The twin "paradox"
For a twin "paradox" scenario, let there be an observer A who moves between the coordinates (0,0,0,0) and (10 years, 0, 0, 0) inertially. This means that A stays at x=y=z=0 for 10 years of coordinate time. The proper time interval for A is then
\Delta \tau = \sqrt{(10\text{ years})^2} = 10\text{ years}
So we find that being "at rest" in a special relativity coordinate system means that proper time and coordinate time are the same.
Let there now be another observer B who travels in the x direction from (0,0,0,0) for 5 years of coordinate time at 0.866c to (5 years, 4.33 light-years, 0, 0). Once there, B accelerates, and travels in the other spatial direction for 5 years to (10 years, 0, 0, 0). For each leg of the trip, the proper time interval is
\Delta \tau = \sqrt{(5\;\mathrm{years})^2 - (4.33\;\mathrm{years})^2} = \sqrt{6.25\;\mathrm{years}^2} = \sqrt{6.25\;} \mathrm{years}= 2.5 \; \mathrm{years}.
So the total proper time for observer B to go from (0,0,0,0) to (5 years, 4.33 light-years, 0, 0) to (10 years, 0, 0, 0) is 5 years. Thus it is shown that the proper time equation incorporates the time dilation effect. In fact, for an object in a SR spacetime traveling with a velocity of v for a time \Delta T, the proper time interval experienced is
\Delta \tau = \sqrt{\Delta T^2 - (v_x \Delta T/c)^2 - (v_y \Delta T/c)^2 - (v_z \Delta T/c)^2 } = \Delta T \sqrt{1 - v^2/c^2},
which is the SR time dilation formula.
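The numbers above can be checked with a short script. This is a minimal sketch of the time dilation formula; the function name `proper_time` and the use of years as the time unit are my own choices:

```python
import math

def proper_time(delta_t, v_over_c):
    """Proper time elapsed over coordinate time delta_t for motion at
    constant speed v (given as a fraction of c): delta_t * sqrt(1 - v^2/c^2)."""
    return delta_t * math.sqrt(1 - v_over_c ** 2)

# Observer A: at rest in the coordinate system for 10 years of coordinate time
tau_a = proper_time(10.0, 0.0)        # proper time equals coordinate time

# Observer B: two 5-year legs at 0.866c (out and back)
tau_b = 2 * proper_time(5.0, 0.866)   # roughly half of A's proper time
```

With v/c = 0.866 each 5-year leg contributes about 2.5 years of proper time, matching the worked example.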
### Example 2: The rotating disk
An observer rotating around another inertial observer is in an accelerated frame of reference. For such an observer, the incremental (d\tau) form of the proper time equation is needed, along with a parameterized description of the path being taken, as shown below.
Let there be an observer C on a disk rotating in the xy plane at a coordinate angular rate of \omega and who is at a distance of r from the center of the disk with the center of the disk at x=y=z=0. The path of observer C is given by (T, \;\, r\cos(\omega T),\;\, r\sin(\omega T), \;\, 0), where T is the current coordinate time. When r and \omega are constant, dx = -r \omega \sin(\omega T) \; dT and dy = r \omega \cos(\omega T) \; dT. The incremental proper time formula then becomes
d\tau = \sqrt{dT^2 - (r \omega /c)^2 \sin^2(\omega T)\; dT^2 - (r \omega /c)^2 \cos^2(\omega T) \; dT^2} = dT\sqrt{1 - \left ( \frac{r\omega}{c} \right )^2}.
So for an observer rotating at a constant distance of r from a given point in spacetime at a constant angular rate of ω between coordinate times T_1 and T_2, the proper time experienced will be
\int_{T_1}^{T_2} d\tau = (T_2 - T_1) \sqrt{ 1 - \left ( \frac{r\omega}{c} \right )^2}.
As v = rω for a rotating observer, this result is as expected given the time dilation formula above, and shows the general application of the integral form of the proper time formula.
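The closed-form result can also be cross-checked by brute force: numerically integrating dτ = √(1 − v(t)²/c²) dt along the circular path, estimating v(t) by finite differences. This is a sketch; the helper name `proper_time`, the step counts, and the sample radius and angular rate are illustrative choices, not from the source:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def proper_time(path, t0, t1, n=1000):
    """Integrate dtau = sqrt(1 - v(t)^2/c^2) dt along a path t -> (x, y, z)
    using the midpoint rule, with v estimated by central differences."""
    dt = (t1 - t0) / n
    h = dt * 1e-3            # small offset for the finite-difference velocity
    tau = 0.0
    for k in range(n):
        t = t0 + (k + 0.5) * dt
        (xa, ya, za), (xb, yb, zb) = path(t - h), path(t + h)
        v2 = ((xb - xa) ** 2 + (yb - ya) ** 2 + (zb - za) ** 2) / (2 * h) ** 2
        tau += math.sqrt(1 - v2 / c ** 2) * dt
    return tau

# Observer C: circular path with r*omega = 0.5c, so dtau/dt = sqrt(0.75)
r = 1.0e6                         # metres
omega = 0.5 * c / r               # rad/s
circle = lambda t: (r * math.cos(omega * t), r * math.sin(omega * t), 0.0)
T = 1.0e-3                        # seconds of coordinate time
tau = proper_time(circle, 0.0, T)
```

The numerical integral reproduces the closed-form factor T·√(1 − (rω/c)²) to within the finite-difference error.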
## Examples in general relativity
The difference between SR and general relativity (GR) is that in GR one can use any metric which is a solution of the Einstein field equations, not just the Minkowski metric. Because inertial motion in curved spacetimes lacks the simple expression it has in SR, the line integral form of the proper time equation must always be used.
### Example 3: The rotating disk (again)
An appropriate coordinate conversion done against the Minkowski metric creates coordinates where an object on a rotating disk stays in the same spatial coordinate position. The new coordinates are
r=\sqrt{x^2 + y^2}
and
\theta = \arctan\left(\frac{y}{x}\right) - \omega t.
The t and z coordinates remain unchanged. In this new coordinate system, the incremental proper time equation is
d\tau = \sqrt{\left [1 - \left (\frac{r \omega}{c} \right )^2 \right] dt^2 - \frac{dr^2}{c^2} - \frac{r^2\, d\theta^2}{c^2} - \frac{dz^2}{c^2} - 2 \frac{r^2 \omega \, dt \, d\theta}{c^2}}.
With r, θ, and z being constant over time, this simplifies to
d\tau = dt \sqrt{ 1 - \left (\frac{r \omega}{c} \right )^2 },
which is the same as in Example 2.
Now let there be an object off of the rotating disk and at inertial rest with respect to the center of the disk and at a distance of R from it. This object has a coordinate motion described by dθ = -ω dt: in the view of the rotating observer, the inertially at-rest object appears to counter-rotate. Now the proper time equation becomes
d\tau = \sqrt{\left [1 - \left (\frac{R \omega}{c} \right )^2 \right] dt^2 - \left (\frac{R\omega}{c} \right ) ^2 \,dt^2 + 2 \left ( \frac{R \omega}{c} \right ) ^2 \,dt^2} = dt.
So for the inertial at-rest observer, coordinate time and proper time are once again found to pass at the same rate, as expected and required for the internal self-consistency of relativity theory.[8]
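The cancellation of the three dt² terms can be confirmed numerically for arbitrary values. A minimal check (the function name and the sample R and ω are arbitrary):

```python
import math

c = 299_792_458.0  # m/s

def dtau_over_dt(R, omega):
    """dtau/dt for the counter-rotating (inertially at rest) object,
    evaluated term by term from the rotating-coordinate expression."""
    a = (R * omega / c) ** 2
    return math.sqrt((1.0 - a) - a + 2.0 * a)  # the terms cancel, leaving 1

rate = dtau_over_dt(1.0e6, 50.0)   # any R, omega with R*omega < c
```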
### Example 4: The Schwarzschild solution — time on the Earth
The Schwarzschild solution has an incremental proper time equation of
d\tau = \sqrt{ \left( 1 - \frac{2m}{r} \right) dt^2 - \frac{1}{c^2} \left( 1 - \frac{2m}{r} \right)^{-1} dr^2 - \frac{r^2}{c^2} d\phi^2 - \frac{r^2}{c^2} \sin^2(\phi ) \, d\theta^2 },
where
• t is time as calibrated with a clock distant from and at inertial rest with respect to the Earth,
• r is a radial coordinate (effectively the distance from the Earth's center),
• ɸ is a co-latitudinal coordinate, the angular separation from the north pole, in radians,
• θ is a longitudinal coordinate, analogous to longitude on the Earth's surface but independent of the Earth's rotation, also in radians,
• m is the geometrized mass of the Earth, m = GM/c²,
• M is the mass of the Earth,
• G is the gravitational constant.
To demonstrate the use of the proper time relationship, several sub-examples involving the Earth will be used here. The use of the Schwarzschild solution for the Earth is not entirely correct for the following reasons:
• Due to its rotation and tidal deformation, the Earth is an oblate spheroid instead of being a true sphere. This results in the gravitational field also being oblate instead of spherical.
• In GR, a rotating object also drags spacetime along with itself. This is described by the Kerr solution. However, the amount of frame dragging that occurs for the Earth is so small that it can often be ignored.
For the Earth, M = 5.9742 × 10^24 kg, meaning that m = 4.4354 × 10^−3 m. When standing on the north pole, we can assume dr = d\theta = d\phi = 0 (meaning that we are neither moving up nor down nor along the surface of the Earth). In this case, the Schwarzschild solution proper time equation becomes d\tau = dt \,\sqrt{1 - 2m/r}. Then using the polar radius of the Earth as the radial coordinate (r = 6,356,752 meters), we find that
d\tau = \sqrt{\left ( 1 - 1.3955 \times 10^{-9} \right ) \;dt^2} = \left (1 - 6.9774 \times 10^{-10} \right ) \,dt.
At the equator, the radius of the Earth is r = 6,378,137 meters. In addition, the rotation of the Earth needs to be taken into account. This imparts on an observer an angular velocity d\theta / dt of 2π divided by the sidereal period of the Earth's rotation, 86162.4 seconds. So d\theta = 7.2923 \times 10^{-5}\, dt. The proper time equation then produces
d\tau = \sqrt{\left ( 1 - 1.3908 \times 10^{-9} \right ) dt^2 - 2.4069 \times 10^{-12}\, dt^2} = \left( 1 - 6.9660 \times 10^{-10}\right ) \, dt.
This should have been the same as the previous result, but as noted above the Earth is not spherical as assumed by the Schwarzschild solution. Even so, this demonstrates how the proper time equation is used.
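The two clock rates can be recomputed directly from the constants quoted above (m = 4.4354 × 10^−3 m, the polar and equatorial radii, and the sidereal day). This is a sketch under the same spherical, non-rotating-spacetime assumptions, not a geodesy-grade model, and the helper name `slowdown` is my own:

```python
import math

c = 299_792_458.0      # speed of light, m/s
m = 4.4354e-3          # geometrized mass of the Earth GM/c^2, in metres

def slowdown(r, omega=0.0):
    """1 - dtau/dt for a clock at Schwarzschild radial coordinate r,
    circling at angular velocity omega (omega = 0 means hovering at rest)."""
    x = 2.0 * m / r + (r * omega / c) ** 2
    return 1.0 - math.sqrt(1.0 - x)

pole = slowdown(6_356_752)                              # polar radius, no spin
equator = slowdown(6_378_137, 2 * math.pi / 86_162.4)   # sidereal rotation
```

Evaluating this gives roughly 1 − 6.977 × 10⁻¹⁰ at the pole and 1 − 6.966 × 10⁻¹⁰ at the equator; the small remaining difference reflects the spherical Schwarzschild model's neglect of the Earth's oblateness.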
## Footnotes
1. ^ Zwiebach 2004, p. 25
2. ^ Minkowski 1908, pp. 53–111
3. ^ Zwiebach 2004, p. 25
4. ^ Some authors include lightlike intervals in the definition, e.g. Lawden 2012, p. 150.
Others leave it undefined, e.g. Kopeikin, Efroimsky & Kaplan 2011, p. 275.
Yet others aren't clear on whether lightlike intervals are included: Landau & Lifshitz 1975, pp. 7–8
5. ^ Foster & Nightingale 1978, p. 56
6. ^ Foster & Nightingale 1978, p. 57
7. ^ Landau & Lifshitz 1975, p. 251
8. ^ Cook 2004, pp. 214–219
## References
• Cook, R. J. (2004). "Physical time and physical space in general relativity". Am. J. Phys. 72 (2): 214–219.
• Foster, J.; Nightingale, J.D. (1978). A short course in general relativity. Essex:
• Kopeikin, Sergei; Efroimsky, Michael; Kaplan, George (2011). Relativistic Celestial Mechanics of the Solar System. John Wiley & Sons.
• Lawden, Derek F. (2012). An Introduction to Tensor Calculus: Relativity and Cosmology. Courier Corporation.
http://rockandcode.ga/climbing/day08/ | # Dirtbag Day 08
Week 1 I relied heavily on staying at a friend’s house (4 / 8 nights) and 3 nights were spent in Walmart parking lots. Last night was beautiful, 68 and breezy though a bit humid still. No doubt, this will be a lonely journey. I put over 3,000 miles on my car in July (actually closer to 3,500) between work and crag trips. I’d say that I was on the road more than I was home, but then, really… I am the highway… a modern day drifter.
I’ve been wondering about work too. Finding a full-time job is a tedious process. Come to think of it, my health insurance expires today. Looks like I’ll be paying that federal fee when it comes tax time. Life is hard. Part of me doesn’t want to give up this dream that I’m living now. I suppose a 9-to-5 technically wouldn’t have to change anything or stop me from being a dirtbag… it may just mean that I can take showers more often! But then there is my current job. What is worth holding on to and when is it time to let go?
I'm learnin' to fly, but I ain't got wings.
Comin' down is the hardest thing.
– Tom Petty
I did some math and without seeing any clients I’d actually be losing money at my job and thus need to quit. However, only seeing one client on a weekly basis would cover my overhead and get me groceries; thus leaving my ‘emergency fund’ largely untapped. This could go on virtually forever!
But I’m finding myself in a running (read: leaving) kind of mood. How far do I take this until it becomes warped to an entirely different state from what it was begotten as? I’m speaking not just of climbing, but in the socio-political-spiritual realm(s) too. Is a relentless pursuit of freedom worth losing a large part of the love of the game that inspired chasing that freedom to begin with? I feel myself going to a pretty dark place here. I want to hold and own that, but also the wonder and awe of the present moment I’m writing in. It’s 70-something, 9:00-ish a.m., sunny, and by tomorrow I’ll have been out on the rocks 5 / 7 days. My “neighbors” are – to put it politely – gym-goers, but mostly quiet and keep to themselves. The only sound I hear harmonizing in the background of my own thoughts is a steady breeze with scattered bird-chirp accents. The rising sun is almost in my face now. I’m reminded of Camus’ proverb:
There are those who choose to look their fate in the eye (and cry out "no").
As my Lacanian colleagues would put it: “Here’s to forever lacking (so as to remain able to be moved).”
http://www.ams.org/mathscinet-getitem?mr=2119281 | MathSciNet bibliographic data MR2119281 58J22 (19K56 46L80 55U10 57R67) Wright, Nick The coarse Baum-Connes conjecture via \$C\sb 0\$$C\sb 0$ coarse geometry. J. Funct. Anal. 220 (2005), no. 2, 265–303. Article
For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
https://www.authorea.com/users/106376/articles/137409-ima-201-multifocus/_show_article | # Introduction
The goal of this project is to use multiple images of the same scene with different focuses
to create one image of this scene that combines the most focused version of each element of
the scene.
The first part of this project was the implementation of the algorithm described in
(Fedorov 2006); the second part was to use that implementation as inspiration for designing my own algorithm to perform the multi-focus imaging task.
# The original algorithm
## The Laplacian pyramid
### Reduction and expansion
We first need to define upsampling and downsampling operators for an image $$I$$ of size $$m\times n$$:
$$\mathrm{down}(I)(i,j)=I(2i,2j)$$ $$\mathrm{up}(I)(2i,2j)=4I(i,j)$$ $$\mathrm{up}(I)(i,j)=0\quad\text{otherwise}$$
We can now define the reduction and expansion operators, where $$k$$ is the kernel of a low-pass filter:
$$\mathrm{reduce}(I)=\mathrm{down}(k\ast I)$$ $$\mathrm{expand}(I)=k\ast\mathrm{up}(I)$$
### Gaussian pyramid
The Gaussian pyramid $$G$$ of an image is a sequence of images $$G_{0},\ldots,G_{N}$$ where $$G_{0}$$ is the original image and $$G_{l}=\mathrm{reduce}(G_{l-1})$$ for $$l\geq 1$$. Intuitively, each level of the pyramid eliminates the finest details of the previous level and keeps only the coarse information.
### Laplacian pyramid
The Laplacian pyramid $$L=L_{0},\ldots,L_{N}$$ of an image is derived from its Gaussian pyramid $$G$$. The top level is defined by $$L_{N}=G_{N}$$. The next levels are defined by $$L_{l}=G_{l}-\mathrm{expand}(G_{l+1})$$. Intuitively, each level of the pyramid represents only the details that have a frequency that can first be observed at the corresponding scale.
By construction, the original image (or any level of the Gaussian pyramid) can be reconstructed from the Laplacian pyramid:
$$I=\sum\limits_{l=0}^{N}\mathrm{expand}^{l}(L_{l})$$
## Multi-resolution spline
This technique, described in (Burt 1983), aims at merging two images $$I_{A}$$ and $$I_{B}$$ seamlessly along a mask $$M$$. This is done by using the Laplacian pyramids $$L_{A}$$ and $$L_{B}$$ of the two images and the Gaussian pyramid $$G_{M}$$ of the mask.
Using these elements, we can build a new Laplacian pyramid $$L_{S}$$ using the following formulas ($$\odot$$ is the element-wise product):
$${L_{S}}_{l}={G_{M}}_{l}\odot{L_{A}}_{l}+\left(1-{G_{M}}_{l}\right)\odot{L_{B}}_{l}\quad\text{for}\quad l=0\ldots N$$
Using the reconstruction property of the Laplacian pyramid, we can build the merged image:
$$I_{S}=\sum\limits_{l=0}^{N}\mathrm{expand}^{l}({L_{S}}_{l})$$
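The whole pipeline — reduce/expand, the two pyramids, reconstruction, and the multi-resolution spline — fits in a short NumPy sketch. The binomial kernel, edge padding, and power-of-two image sizes are my own choices (the construction only requires some low-pass filter $$k$$), and the function names are illustrative:

```python
import numpy as np

# 1-D binomial approximation of a Gaussian low-pass kernel, applied separably
K1 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def _lowpass(img):
    # separable convolution with edge padding: rows, then columns
    conv = lambda v: np.convolve(np.pad(v, 2, mode="edge"), K1, mode="valid")
    return np.apply_along_axis(conv, 0, np.apply_along_axis(conv, 1, img))

def reduce_(img):                      # reduce(I) = down(k * I)
    return _lowpass(img)[::2, ::2]

def expand(img):                       # expand(I) = k * up(I)
    up = np.zeros((2 * img.shape[0], 2 * img.shape[1]))
    up[::2, ::2] = 4.0 * img
    return _lowpass(up)

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels):
        pyr.append(reduce_(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    g = gaussian_pyramid(img, levels)
    return [g[l] - expand(g[l + 1]) for l in range(levels)] + [g[-1]]

def reconstruct(lap):
    # telescoping sum: exact regardless of the kernel chosen
    img = lap[-1]
    for level in reversed(lap[:-1]):
        img = expand(img) + level
    return img

def blend(a, b, mask, levels):
    """Multi-resolution spline: merge a and b along mask (mask = 1 -> take a)."""
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    gm = gaussian_pyramid(mask, levels)
    ls = [g * x + (1.0 - g) * y for g, x, y in zip(gm, la, lb)]
    return reconstruct(ls)

rng = np.random.default_rng(0)
a, b = rng.random((32, 32)), rng.random((32, 32))
merged = blend(a, b, np.ones_like(a), 3)   # an all-ones mask keeps image a
```

Two properties make convenient sanity checks: reconstruction of a Laplacian pyramid is exact by the telescoping definition, and an all-ones mask returns the first image unchanged (its Gaussian pyramid stays all ones under this unit-sum kernel).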
https://goker.dev/iom/benchmarks/schwefel-1.2 | ### Schwefel 1.2
##### Mathematical Definition
###### Latex
f(x) = {\sum_{i=1}^{n} \left(\sum_{j=1}^{i}x_j\right)^2}
##### Description and Features
Dimensions: d
The function has a single global minimum. It is continuous, convex and unimodal. The plot shows its two-dimensional form.
• The function is continuous.
• The function is convex.
• The function can be defined on n-dimensional space.
• The function is differentiable.
• The function is non-separable.
• The function is unimodal.
##### Input Domain
The function can be defined on any input domain but it is usually evaluated on $x_i \in [-100, 100]$ for $i = 1, …, d$ .
##### Global Minima
The function has one global minimum $f(\textbf{x}^{\ast})=0$ at $\textbf{x}^{\ast} = (0, …, 0)$.
##### Implementation
###### Python Code
import numpy as np

def function(x):
    # f(x) = sum_{i=1}^{n} (sum_{j=1}^{i} x_j)^2
    # note the x[:i+1] slice: the inner sum must include the i-th component
    x = np.asarray(x)
    return np.sum([np.sum(x[:i+1]) ** 2 for i in range(len(x))])
https://www.zbmath.org/authors/?q=ai%3Ali.chengju | # zbMATH — the first resource for mathematics
## Li, Chengju
Author ID: li.chengju Published as: Li, C.; Li, C. J.; Li, ChengJu; Li, Chengju
Documents Indexed: 56 Publications since 1993, including 1 Book
#### Co-Authors
1 single-authored 19 Yue, Qin 6 Bae, Sunghan 4 Ding, Cunsheng 4 Li, Fengwei 3 Mesnager, Sihem 3 Yang, Shudi 2 Ahn, Jaehyun 2 Carlet, Claude 2 Fu, Fangwei 2 Heng, Ziling 2 Li, Shuxing 2 Liu, Hao 2 Peng, Wei 2 Wu, Peng 2 Yan, Haode 1 Dinh, Hai Quang 1 Du, Zongrun 1 Hu, Liqin 1 Huang, Yiwei 1 Jiang, Tongsong 1 Ka, Dongseok 1 Kang, Pyung-Lyun 1 Li, Xiuqing 1 Liu, Fengmei 1 Liu, Hao 1 Wang, Zilong 1 Wu, Mengna 1 Wu, Xiumei 1 Xia, Yongbo 1 Yao, Zhengan 1 Zeng, Peng
#### Serials
8 Designs, Codes and Cryptography 7 IEEE Transactions on Information Theory 6 Finite Fields and their Applications 4 Discrete Mathematics 3 Advances in Mathematics of Communications 3 Cryptography and Communications 2 Applicable Algebra in Engineering, Communication and Computing 1 Mathematics in Practice and Theory 1 Chinese Annals of Mathematics. Series A 1 Journal of Qufu Normal University. Natural Science 1 Algebra Colloquium 1 Science China. Mathematics 1 Journal of Algebra, Combinatorics, Discrete Structures and Applications
#### Fields
34 Information and communication theory, circuits (94-XX) 27 Number theory (11-XX) 3 Combinatorics (05-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Commutative algebra (13-XX) 1 Algebraic geometry (14-XX) 1 Operator theory (47-XX)
#### Citations contained in zbMATH
36 Publications have been cited 375 times in 202 Documents Cited by Year
Weight distributions of cyclic codes with respect to pairwise coprime order elements. Zbl 1366.94641
Li, Chengju; Yue, Qin; Li, Fengwei
2014
Hamming weights of the duals of cyclic codes with two zeros. Zbl 1360.94401
Li, Chengju; Yue, Qin; Li, Fengwei
2014
LCD cyclic codes over finite fields. Zbl 1370.94585
Li, Chengju; Ding, Cunsheng; Li, Shuxing
2017
Complete weight enumerators of some cyclic codes. Zbl 1402.94102
Li, Chengju; Yue, Qin; Fu, Fang-Wei
2016
Weight distributions of two classes of cyclic codes with respect to two distinct order elements. Zbl 1364.94655
Li, Chengju; Yue, Qin
2014
Two families of LCD BCH codes. Zbl 1374.94845
Li, Shuxing; Li, Chengju; Ding, Cunsheng; Liu, Hao
2017
Complete weight enumerators of some linear codes and their applications. Zbl 1379.94058
Li, Chengju; Bae, Sunghan; Ahn, Jaehyun; Yang, Shudi; Yao, Zheng-An
2016
Kuznetsov’s trace formula and the Hecke eigenvalues of Maass forms. Zbl 1314.11038
Knightly, A.; Li, C.
2013
Passivity and passification of stochastic impulsive memristor-based piecewise linear system with mixed delays. Zbl 1312.93098
Wen, S. P.; Zeng, Z. G.; Huang, T. W.; Li, C. J.
2015
The minimum Hamming distances of irreducible cyclic codes. Zbl 1332.94105
Li, Fengwei; Yue, Qin; Li, Chengju
2014
Hermitian LCD codes from cyclic codes. Zbl 1408.94982
Li, Chengju
2018
Complete weight enumerators of a class of linear codes. Zbl 1379.94054
Ahn, Jaehyun; Ka, Dongseok; Li, Chengju
2017
Recent progress on weight distributions of cyclic codes over finite fields. Zbl 1371.94683
Dinh, Hai Q.; Li, Chengju; Yue, Qin
2015
On the complete weight enumerators of some reducible cyclic codes. Zbl 1321.94129
Bae, Sunghan; Li, Chengju; Yue, Qin
2015
Parameters of LCD BCH codes with two lengths. Zbl 1401.94223
Yan, Haode; Liu, Hao; Li, Chengju; Yang, Shudi
2018
A class of cyclic codes from two distinct finite fields. Zbl 1392.94943
Li, Chengju; Yue, Qin
2015
Irreducible cyclic codes of length $$4p^n$$ and $$8p^n$$. Zbl 1392.94930
Li, Fengwei; Yue, Qin; Li, Chengju
2015
Dimensions of three types of BCH codes over $$\mathrm{GF}(q)$$. Zbl 1422.94051
Liu, Hao; Ding, Cunsheng; Li, Chengju
2017
Torsional impact response of a penny-shaped interface crack in bonded materials with a graded material interlayer. Zbl 1110.74552
Li, C.; Duan, Z.; Zou, Z.
2002
Three classes of linear codes with two or three weights. Zbl 1372.94455
Heng, Ziling; Yue, Qin; Li, Chengju
2016
Two families of nearly optimal codebooks. Zbl 1393.94321
Li, Chengju; Yue, Qin; Huang, Yiwei
2015
Infinite families of 2-designs and 3-designs from linear codes. Zbl 1367.05029
Ding, Cunsheng; Li, Chengju
2017
A construction of codes with linearity from two linear codes. Zbl 1380.94145
Li, Chengju; Bae, Sunghan; Yan, Haode
2017
The Walsh transform of a class of monomial functions and cyclic codes. Zbl 1365.94673
Li, Chengju; Yue, Qin
2015
Learning algorithms for neural networks based on quasi-Newton methods with self-scaling. Zbl 0775.93081
Beigi, H. S. M.; Li, C. J.
1993
Transverse free vibration and stability of axially moving nanoplates based on nonlocal elasticity theory. Zbl 1446.74028
Liu, J. J.; Li, C.; Fan, X. L.; Tong, L. H.
2017
A construction of several classes of two-weight and three-weight linear codes. Zbl 1384.94126
Li, Chengju; Yue, Qin; Fu, Fang-Wei
2017
Dissipative dynamics of two-photon Jaynes-Cummings model with the Stark shift in the dispersive approximation. Zbl 0983.81535
Zhou, L.; Song, H. S.; Luo, Y. X.; Li, C.
2001
On two classes of primitive BCH codes and some related codes. Zbl 1432.94187
Li, Chengju; Wu, Peng; Liu, Fengmei
2019
Nonlinear maps preserving product $$X^{*}Y+Y^{*}X$$ on von Neumann algebras. Zbl 07040690
Li, C.; Zhao, F.; Chen, Q.
2018
Constructions of linear codes with one-dimensional hull. Zbl 1431.94157
Li, Chengju; Zeng, Peng
2019
Some two-weight and three-weight linear codes. Zbl 1415.94482
Li, Chengju; Bae, Sunghan; Yang, Shudi
2019
Weight distributions of a class of cyclic codes from $$\mathbb F_l$$-conjugates. Zbl 1357.94093
Li, Chengju; Yue, Qin; Heng, Ziling
2015
Some results on strongly regular graphs from unions of cyclotomic classes. Zbl 1257.05180
Li, ChengJu; Yue, Qin; Hu, LiQin
2012
Three-dimensional numerical simulation of compound meandering open channel flow by the Reynolds stress model. Zbl 1156.76037
Jing, H.; Guo, Y.; Li, C.; Zhang, Jisheng
2009
Three-dimensional analysis of the coupled thermo-piezoelectro-mechanical behaviour of multilayered plates using the differential quadrature technique. Zbl 1120.74601
Liew, K. M.; Zhang, Jordan Z.; Li, C.; Meguid, S. A.
2005
On two classes of primitive BCH codes and some related codes. Zbl 1432.94187
Li, Chengju; Wu, Peng; Liu, Fengmei
2019
Constructions of linear codes with one-dimensional hull. Zbl 1431.94157
Li, Chengju; Zeng, Peng
2019
Some two-weight and three-weight linear codes. Zbl 1415.94482
Li, Chengju; Bae, Sunghan; Yang, Shudi
2019
Hermitian LCD codes from cyclic codes. Zbl 1408.94982
Li, Chengju
2018
Parameters of LCD BCH codes with two lengths. Zbl 1401.94223
Yan, Haode; Liu, Hao; Li, Chengju; Yang, Shudi
2018
Nonlinear maps preserving product $$X^{*}Y+Y^{*}X$$ on von Neumann algebras. Zbl 07040690
Li, C.; Zhao, F.; Chen, Q.
2018
LCD cyclic codes over finite fields. Zbl 1370.94585
Li, Chengju; Ding, Cunsheng; Li, Shuxing
2017
Two families of LCD BCH codes. Zbl 1374.94845
Li, Shuxing; Li, Chengju; Ding, Cunsheng; Liu, Hao
2017
Complete weight enumerators of a class of linear codes. Zbl 1379.94054
Ahn, Jaehyun; Ka, Dongseok; Li, Chengju
2017
Dimensions of three types of BCH codes over $$\mathrm{GF}(q)$$. Zbl 1422.94051
Liu, Hao; Ding, Cunsheng; Li, Chengju
2017
Infinite families of 2-designs and 3-designs from linear codes. Zbl 1367.05029
Ding, Cunsheng; Li, Chengju
2017
A construction of codes with linearity from two linear codes. Zbl 1380.94145
Li, Chengju; Bae, Sunghan; Yan, Haode
2017
Transverse free vibration and stability of axially moving nanoplates based on nonlocal elasticity theory. Zbl 1446.74028
Liu, J. J.; Li, C.; Fan, X. L.; Tong, L. H.
2017
A construction of several classes of two-weight and three-weight linear codes. Zbl 1384.94126
Li, Chengju; Yue, Qin; Fu, Fang-Wei
2017
Complete weight enumerators of some cyclic codes. Zbl 1402.94102
Li, Chengju; Yue, Qin; Fu, Fang-Wei
2016
Complete weight enumerators of some linear codes and their applications. Zbl 1379.94058
Li, Chengju; Bae, Sunghan; Ahn, Jaehyun; Yang, Shudi; Yao, Zheng-An
2016
Three classes of linear codes with two or three weights. Zbl 1372.94455
Heng, Ziling; Yue, Qin; Li, Chengju
2016
Passivity and passification of stochastic impulsive memristor-based piecewise linear system with mixed delays. Zbl 1312.93098
Wen, S. P.; Zeng, Z. G.; Huang, T. W.; Li, C. J.
2015
Recent progress on weight distributions of cyclic codes over finite fields. Zbl 1371.94683
Dinh, Hai Q.; Li, Chengju; Yue, Qin
2015
On the complete weight enumerators of some reducible cyclic codes. Zbl 1321.94129
Bae, Sunghan; Li, Chengju; Yue, Qin
2015
A class of cyclic codes from two distinct finite fields. Zbl 1392.94943
Li, Chengju; Yue, Qin
2015
Irreducible cyclic codes of length $$4p^n$$ and $$8p^n$$. Zbl 1392.94930
Li, Fengwei; Yue, Qin; Li, Chengju
2015
Two families of nearly optimal codebooks. Zbl 1393.94321
Li, Chengju; Yue, Qin; Huang, Yiwei
2015
The Walsh transform of a class of monomial functions and cyclic codes. Zbl 1365.94673
Li, Chengju; Yue, Qin
2015
Weight distributions of a class of cyclic codes from $$\mathbb F_l$$-conjugates. Zbl 1357.94093
Li, Chengju; Yue, Qin; Heng, Ziling
2015
Weight distributions of cyclic codes with respect to pairwise coprime order elements. Zbl 1366.94641
Li, Chengju; Yue, Qin; Li, Fengwei
2014
Hamming weights of the duals of cyclic codes with two zeros. Zbl 1360.94401
Li, Chengju; Yue, Qin; Li, Fengwei
2014
Weight distributions of two classes of cyclic codes with respect to two distinct order elements. Zbl 1364.94655
Li, Chengju; Yue, Qin
2014
The minimum Hamming distances of irreducible cyclic codes. Zbl 1332.94105
Li, Fengwei; Yue, Qin; Li, Chengju
2014
Kuznetsov’s trace formula and the Hecke eigenvalues of Maass forms. Zbl 1314.11038
Knightly, A.; Li, C.
2013
Some results on strongly regular graphs from unions of cyclotomic classes. Zbl 1257.05180
Li, ChengJu; Yue, Qin; Hu, LiQin
2012
Three-dimensional numerical simulation of compound meandering open channel flow by the Reynolds stress model. Zbl 1156.76037
Jing, H.; Guo, Y.; Li, C.; Zhang, Jisheng
2009
Three-dimensional analysis of the coupled thermo-piezoelectro-mechanical behaviour of multilayered plates using the differential quadrature technique. Zbl 1120.74601
Liew, K. M.; Zhang, Jordan Z.; Li, C.; Meguid, S. A.
2005
Torsional impact response of a penny-shaped interface crack in bonded materials with a graded material interlayer. Zbl 1110.74552
Li, C.; Duan, Z.; Zou, Z.
2002
Dissipative dynamics of two-photon Jaynes-Cummings model with the Stark shift in the dispersive approximation. Zbl 0983.81535
Zhou, L.; Song, H. S.; Luo, Y. X.; Li, C.
2001
Learning algorithms for neural networks based on quasi-Newton methods with self-scaling. Zbl 0775.93081
Beigi, H. S. M.; Li, C. J.
1993
#### Cited by 300 Authors
34 Yue, Qin 18 Li, Chengju 14 Wu, Yansheng 14 Yang, Shudi 11 Cao, Xiwang 9 Heng, Ziling 8 Ding, Cunsheng 8 Tang, Chunming 7 Li, Fengwei 6 Liu, Hongwei 6 Luo, Gaojun 6 Mesnager, Sihem 6 Yao, Zhengan 6 Zhu, Shi-xin 5 Fu, Fangwei 5 Li, Fei 5 Wang, Qiuyan 5 Wang, Xiaoqiang 5 Zheng, Dabin 5 Zhou, Zhengchun 4 Bae, Sunghan 4 Carlet, Claude 4 Huang, Tingwen 4 Li, Nian 4 Li, Xia 4 Lin, Dongdai 4 Liu, Zihui 4 Shi, Xueying 4 Xu, Shanding 4 Yan, Haode 4 Zhu, Xiaomeng 3 Fan, Cuiling 3 Itou, Shouetsu 3 Kong, Xiangli 3 Liu, Xiusheng 3 Pang, Binbin 3 Qi, Yanfeng 3 Sun, Zhonghua 3 Xu, Guangkui 3 Zeng, Xiangyong 3 Zhao, Changan 2 Ahn, Jaehyun 2 Blomer, Valentin 2 Cao, Yanyi 2 Chen, Bocong 2 Ding, Kelan 2 Du, Xiaoni 2 Fan, Shuqin 2 Feng, Keqin 2 Guo, Li-Cheng 2 Hu, Lei 2 Hu, Liqin 2 Knightly, Andrew H. 2 Li, Chunlei 2 Liao, Qunying 2 Lin, Zhouchen 2 Ling, Xin 2 Liu, Chunlei 2 Liu, Fengmei 2 Liu, Haibo 2 Liu, Hualu 2 Liu, Yiwei 2 Lu, Wei 2 Peng, Wei 2 Petrow, Ian N. 2 Razeghi, Mehran 2 Shi, Zexia 2 Tang, Deng 2 Vega, Gerardo 2 Wen, Shiping 2 Xia, Yongbo 2 Xiang, Can 2 Xiao, Jianying 2 Young, Matthew P. 2 Yu, Long 2 Zhong, Shou-Ming 1 Aghdam, Mohammad Mohammadi 1 Aiobi, H. 1 Assing, Edgar 1 Bagchi, Satya 1 Bandi, Ramakrishna 1 Bao, Jingjun 1 Batoul, Aicha 1 Batra, Sudhir 1 Batur, Celal 1 Benahmed, Fatma-Zohra 1 Benbelkacem, N. 1 Bhowmick, Sanjit 1 Borges, Joaquim 1 Boripan, Arunwan 1 Byrne, Eimear 1 Cai, Zuowei 1 Cao, Yonglin 1 Cao, Yuan 1 Cao, Yuting 1 Chen, Ming 1 Chen, Wenbing 1 Cheng, Feng 1 Cherchem, Ahmed 1 Cuén-Ramos, Jesús E. ...and 200 more Authors
#### Cited in 51 Serials
31 Finite Fields and their Applications 29 Designs, Codes and Cryptography 26 Cryptography and Communications 21 Discrete Mathematics 8 Applicable Algebra in Engineering, Communication and Computing 8 Advances in Mathematics of Communications 7 Discrete Applied Mathematics 6 Journal of Applied Mathematics and Computing 4 Applied Mathematical Modelling 3 Acta Mechanica 3 Applied Mathematics and Computation 3 Circuits, Systems, and Signal Processing 3 Mathematical Problems in Engineering 2 International Journal of Solids and Structures 2 International Journal of Theoretical Physics 2 Journal of Number Theory 2 Bulletin of the Korean Mathematical Society 2 SIAM Journal on Discrete Mathematics 2 Neural Networks 2 The Ramanujan Journal 2 Journal of Systems Science and Complexity 2 Algebra & Number Theory 2 Journal of Algebra, Combinatorics, Discrete Structures and Applications 1 International Journal for Numerical Methods in Fluids 1 Journal of the Franklin Institute 1 Mathematical Notes 1 Annales de l’Institut Fourier 1 Canadian Journal of Mathematics 1 Functiones et Approximatio. Commentarii Mathematici 1 Information Sciences 1 Journal of Functional Analysis 1 Meccanica 1 Transactions of the American Mathematical Society 1 Journal of Information & Optimization Sciences 1 Optimization 1 Geometric and Functional Analysis. GAFA 1 Pattern Recognition 1 Archive of Applied Mechanics 1 International Journal of Robust and Nonlinear Control 1 Documenta Mathematica 1 Nonlinear Dynamics 1 Annals of Mathematics. Second Series 1 Journal of Modern Optics 1 Journal of Applied Mathematics 1 South East Asian Journal of Mathematics and Mathematical Sciences 1 Journal of Algebra and its Applications 1 Advances in Difference Equations 1 International Journal of Number Theory 1 Proyecciones 1 Acta Mechanica Sinica 1 Asian-European Journal of Mathematics
#### Cited in 26 Fields
146 Information and communication theory, circuits (94-XX) 112 Number theory (11-XX) 12 Systems theory; control (93-XX) 11 Combinatorics (05-XX) 11 Mechanics of deformable solids (74-XX) 6 Computer science (68-XX) 6 Biology and other natural sciences (92-XX) 4 Algebraic geometry (14-XX) 4 Quantum theory (81-XX) 3 Field theory and polynomials (12-XX) 3 Associative rings and algebras (16-XX) 3 Ordinary differential equations (34-XX) 3 Mechanics of particles and systems (70-XX) 2 Partial differential equations (35-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Commutative algebra (13-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Group theory and generalizations (20-XX) 1 Topological groups, Lie groups (22-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Functional analysis (46-XX) 1 Operator theory (47-XX) 1 Probability theory and stochastic processes (60-XX) 1 Fluid mechanics (76-XX) 1 Statistical mechanics, structure of matter (82-XX) 1 Operations research, mathematical programming (90-XX) | 2021-01-27 17:21:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.654498815536499, "perplexity": 13500.65731423588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, 
"warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704828358.86/warc/CC-MAIN-20210127152334-20210127182334-00348.warc.gz"} |
https://icsesolutions.com/icse-class-10-9-english-language-solved-question-papers-1/ | ICSE Class 10, 9 English Language Practice Papers – 1 With Answers
Write a composition (350-400 words) on any one of the following:
Question 1(a).
Describe a person who, according to you, has made an immense contribution to rural development in India.
Very few people in India might have heard the name of Norman Ernest Borlaug (March 25, 1914 – September 12, 2009), an American biologist, humanitarian and Nobel laureate who has been called “the father of the Green Revolution”, “agriculture’s greatest spokesperson” and “The Man Who Saved A Billion Lives”. Borlaug received his B.Sc. in Biology in 1937 and his Ph.D. in plant pathology and genetics from the University of Minnesota in 1942. He took up an agricultural research position in Mexico, where he developed semi-dwarf, high-yield, disease-resistant wheat varieties.
During the mid-20th century, Borlaug led the introduction of these high-yielding varieties combined with modern agricultural production techniques to Mexico, Pakistan, and India. As a result, Mexico became a net exporter of wheat by 1963. Between 1965 and 1970, wheat yields nearly doubled in Pakistan and India, greatly improving the food security in those nations. These collective increases in yield have been labelled the Green Revolution, and Borlaug is often credited with saving over a billion people worldwide from starvation. He was awarded the Nobel Peace Prize in 1970 in recognition of his contributions to world peace through increasing food supply.
In 1961 to 1962, Borlaug’s dwarf spring wheat strains were sent for multi-location testing in the International Wheat Rust Nursery, organized by the U.S. Department of Agriculture. In March 1962, a few of these strains were grown in the fields of the Indian Agricultural Research Institute in Pusa, New Delhi, India. In May 1962, M. S. Swaminathan, a member of IARI’s wheat program, requested Dr. B. P. Pal, Director of IARI, to arrange for the visit of Borlaug to India and to obtain a wide range of dwarf wheat seed possessing the Norin 10 dwarfing genes. The letter was forwarded to the Indian Ministry of Agriculture, headed by Shri C. Subramaniam, which arranged with the Rockefeller Foundation for Borlaug’s visit. In March 1963, the Rockefeller Foundation and the Mexican government sent Borlaug and Dr. Robert Glenn Anderson to India to continue his work. He supplied 100 kg (220 lb) of seed from each of the four most promising strains. Test plots were subsequently planted at Delhi, Ludhiana, Pant Nagar, Kanpur, Pune and Indore.
This led to high yields, and the term Green Revolution was coined. High yields led to a shortage of various utilities: labour to harvest the crops, bullock carts to haul them to the threshing floor, jute bags, trucks, rail cars, and grain storage facilities. Some local governments were forced to close school buildings temporarily to use them for grain storage. In India, yields increased from 12.3 million tons in 1965 to 20.1 million tons in 1970. By 1974, India was self-sufficient in the production of all cereals. By 2000, India was harvesting a record 76.4 million tons (2.81 billion bushels) of wheat. This led to various other developments like better transportation facilities, education, and increased spending on rural development. Hence, according to me, Borlaug was largely responsible for rural development in India even though he was a foreigner.
Question 1(b).
Imagine that there was a road accident in your colony. Describe the scene of the accident and what you did to help the injured.
I immediately rang the police helpline number, and soon the police and paramedics arrived and took over. The most serious cases were airlifted to hospital. We helped lift the severely injured into the ambulances so that they could be rushed to the hospital. After everyone who was injured had been attended to and the two dead bodies laid out, the police thanked us and said we had saved many lives with our timely action. They said that it was only due to my efforts that a 10-month-old baby had survived the horrific car accident.
Question 1(c).
Open book examination system is better than the closed book examination system. Give your views either for or against the statement.
There have been numerous debates on the issue whether open book examination system is better than the closed book examination system. To reach an acceptable answer one must first examine what both these systems entail.
In an open book examination, a student can look at his/her books or texts: the student is allowed to keep reference material while taking the examination. Tests that require lengthy formulae are usually open book examinations. A closed book examination is one in which a student is not allowed to open a book or consult reference material while taking the examination; objective-type tests are usually closed book. Obviously, both tests measure the student’s understanding of particular topics of the subjects undertaken. Besides this, the questions asked in the closed book system are rather specific, while the ones in the open book system are somewhat general or broad, and normally the student has to deduce the answer from what he/she has learned and from anticipating what the question requires. Course books, at that point, would only serve as guidance, not as sources of in-depth information.
Hence the open book exam is ideally suited to modern teaching programmes that aim especially at developing the skills of critical and creative thinking, as opposed to the closed book system, which is now termed old school and relies on memory, merely testing the student’s capacity to cram and reproduce verbatim.
This system is appropriate only if one assumes that the central goal of school and university teaching is the “dissemination of knowledge”. This approach to education treats the information content of a subject as the most important element. In this system the teacher plays the role of facilitating the transfer of information from the textbook to the students’ minds. What the student is expected to do is to understand this information, retain it, and retrieve it during the final examination. Most conventional examinations, based on the closed book model, only serve to test how much information the students have been able to store in their minds. In order to cope with this demand, students memorise the information in class notes and textbooks, and transfer it to answer books during the examination without questioning it or using their creativity. Under this system the student automatically chooses not to use his/her ingenuity, and thus his mental growth appears stunted. In this type of examination, success depends on the quantity of information memorised and the efficiency with which it is reproduced. It is a system which has outlived its efficacy and needs to be replaced with one which caters to the modern and more scientific environment. The need of the time dictates that examinations evaluate students on their mental agility and ingenuity, and the open book system is conducive to this requirement.
It would not be wrong to say that the open book system is more comprehensive and rigorous, as it taxes the student’s mental agility to sift and mould information according to the need of the question. The student must learn to weigh every answer and option before deciding upon the most appropriate solution. The open book system is also an invaluable tool to test the efficacy of classroom teaching as a means of transferring information from the library or textbooks to the students’ minds. Rather, it propagates and substantiates the dictum that true teaching is teaching students how to learn. That is, teaching should equip students with the ability to acquire knowledge, to modify existing knowledge on the basis of new experience, and to trigger mental development, and this is amply applicable in an open book examination system.
Question 1(d).
Do you think that you are lucky to be born in this generation? Discuss.
People of previous eras might not agree, but in my mind there is no doubt that we are the luckiest generation. The technological progress of the last 50 years is unparalleled by any era. The internet has been the biggest invention since fire and electricity; it has engulfed the human population in a whirlwind of infinite knowledge and resources. If you asked an 11-year-old of today’s age to live without the internet, they would have a hard time adjusting to normal things such as trees and grass. ‘Tweet’ was not a word that described an action, ‘Facebook’ was two separate words, “face” and “book”, and ‘app’ was not considered to be a term. The internet is our lifeline and it is integrated into our daily lives. For decades Generation X used pens and paper, cassettes and VCRs and actual encyclopaedias for information. In contrast, we of Generation Y have been spoiled with flat screen TVs, iPads and Google – our ability to search and process information is largely dependent upon having an internet connection. And this is one of the greatest boons that we have been blessed with. Being used to the fast-paced technological world, where a 2-minute lag on Google is the end of the world, we often forget what the learning process is actually like for someone who is alien to the current culture. We forget that they have been bereft of what we take for granted, and this is what makes us luckier: the fast pace of technology at our fingertips and the luxury of utilising it to make use of other natural resources within our reach.
We have been hand in hand with technology since we ditched our Walkmans and installed iTunes. It has been a long haul, but the internet has grown up with us. Due to this fact, we are able to nostalgically remember the good old Facebook, and the changes in technology and political scenarios – however, the flip side means that we are in an environment of constant change! But then change and flux are always welcome, as they make life more interesting and less stagnant.
Now take the example of wildlife! Wildlife watching and travelling for earlier generations was a pastime only the ‘idle rich’ could indulge in. Most of our parents and grandparents were either too poor or too busy at work or caught up in raising us! Air travel was too expensive and there were very few ‘Nature Reserves’. People of this generation are indeed lucky as we have more spare time and more spare cash, and also a sense of freedom which enables us to experience and enjoy more than the other generations.
For us it is a boon that advancement in transport facilities has made the world a global village. Now countries around the world are more accessible, wildlife conservation ‘tourism’ is becoming more and more popular. People can visit both Poles; the Titanic and even take a trip into space!
Another issue that looms, and has confronted previous generations too, is the depletion of oil reserves. Oil has now been predicted to last for another 80-90 years. Will that mean the end of travel as we know it? Not to worry, as luck is on our side: alternate fuels and modes of transport have been gifted by technology to this generation.
The secret of the benefit of being born in this generation is that it has devised solutions to problems through technology. We belong to an era of biodiversity and an undefeatable zeal to survive all odds. We are the generation equipped to help save species for future generations. We are lucky because we are endowed with the virtues of sustainability and hope. We are indeed one of the luckiest generations.
Question 1(e).
Study the picture given on the next page. Write a story or an account of what the picture suggests to you. Your composition may be about the subject of the picture or may take suggestions from it; but there must be a clear connection between the picture and the composition.
She woke before the alarm went off. She rolled over, smiling, and decided it wasn’t too early to get up. She turned off the alarm and got out of bed; she could take her time, have a nice, long shower, then a pleasant breakfast. Everything was ready; there was no need to rush today, no need to panic. Nothing could go wrong. She walked to the bathroom, running through the day’s events in her mind. As she stood under the water, she wondered what her life would be like.
It was still dark outside when she entered the kitchen, so the lights needed to be turned on. It was supposed to be a nice day, but she wasn’t relying on that. Everything was indoors; outdoors had been suggested, but too much could go wrong.
She could hear people getting up, getting ready. They were excited too, but for different reasons. They couldn’t know what the day meant to her; they would never know. She went upstairs to her room, holding a mug of tea between both hands. She examined her dress as she sipped her drink; her wedding dress was a beautiful, traditional one, red and gold in colour. Her mother bustled into the room, panicking. She shook her head slightly as her mother started rattling off what had to be done. They had already been over this; everything would be perfect. The dress was surprisingly comfortable. She had tried it on before and had thought that after a little while it would start to feel too restrictive, too heavy, but no. It was perfect. She waited at the door, surrounded by her cousins and friends. She took a breath, then smiled as the wedding music began to play. This was it. They stepped forward for the most important event of her life: her marriage.
Question 2(a).
(Do not spend more than 20 minutes on this question.)
Select one of the following:
Write a letter to the Commissioner of Municipal Corporation complaining about the street lights of your locality that do not function and have not been repaired for long.
7-Kailash Apartments,
Vasant colony,
Bengaluru
24th February, 2013

To
The Municipal Commissioner
Karnataka State Electricity Board
Bengaluru
Subject: Complaint regarding non-functioning of street lights.
Respected Sir,
There are five hundred families residing in our locality. We have been facing a severe problem of faulty street lights, and laxity on the part of the Electricity Department in repairing or replacing them, for the last three months.

This has led to a spate of crimes like chain snatching, eve-teasing and even thefts, as the area is poorly lit at night and this provides miscreants with ample opportunities to indulge in nefarious activities. The locality is becoming unsafe and people are fearful of even venturing out in the late evenings. Illegal activities are on the rise and it is becoming very dangerous for the residents.

I have already lodged many complaints with our local authority, but they give only empty promises and no action is taken. It is my humble request that you look into the matter personally. I shall be very grateful if you take prompt action.
Thanking You
Yours sincerely
XYZ
Question 2(b).
Your grandmother who lives in Bengaluru has written to you enquiring about your welfare against the background of the havoc caused by the incessant rains in your area. Write a reply to her letter.
Dear Grandmother
I received your letter and wanted to put you at ease regarding my well-being. I am perfectly safe, although the rains have no doubt wreaked havoc in the city of Mumbai. Most of the city is waterlogged, and even the house where I live is facing leakage problems due to the incessant rain. With the heavy downpour, water collects on the terrace and keeps seeping through the ceilings, creating a menace. It has caused severe damage to my household electronic items, and it may also damage the wallpaper and woodwork inside the house.
The drains in the streets are blocked, and this is adding to the problem, as sewage is seeping onto the roads. The stench is almost unbearable in the city, but thankfully the locality where I live has been spared this problem at least. Small mercies indeed!
The mayor said this year’s flooding is some of the worst seen in years, and he is hoping the City of Mumbai can find some long-term solutions. More than 36,600 people have been affected by this week’s stormy weather. The water affairs department has issued a flood warning due to the heavy rainfall which has soaked parts of the city. Torrential downpours have left scores of people without shelter due to flooding. The heavy rains have cut off roads, uprooted trees, collapsed bridges, marooned farms, wrecked crops, damaged cars, flooded homes and swept away shacks.
However, you must not become anxious, as my house is not in the flooded areas and I am safe.
The rest I will tell you when I visit you soon.
XYZ
Question 3.
‘You speak Spanish?’ said Thacker thoughtfully. ‘You look like a Spaniard, too,’ he continued. ‘And you’re from Texas. And you can’t be more than twenty or twenty-one.’
‘Have you got a deal of some kind to put through?’ Llano Kid asked Thacker
‘Are you open to a proposition?’ said Thacker.
‘What’s the use to deny it?’ said the Kid. Thacker got up and closed the door. Through the window he pointed to a two-storey white house with wide galleries.
‘In that house,’ said Thacker, ‘live old Santos Urique and his wife. Twelve years ago they lost their child. No, he didn’t die. Some Americans filled his head with big stories about the States; and about a month after they left, the boy disappeared, too. He was eight then. The boy was seen once afterwards in Texas, it was thought, but they never heard anything more of him. Old Urique has spent thousands of dollars having him looked for. The mother was broken up worst of all. She still believes he’ll come back to her some day. On the back of the boy’s left hand was tattooed a flying eagle carrying a spear in his claws. That’s old Urique’s coat of arms.
‘Here’s the scheme. In a week I’ll have the eagle bird tattooed on your hand. Then I’ll notify old Urique. In the meantime I’ll furnish you with all of the family history I can find out. The rest of it is simple. If they take you in only for a while, it’s long enough. Old Urique keeps anywhere from $50,000 to $100,000 in his house all the time, in a little safe that you could open with a screwdriver. You get it and we’ll be gone.’
After two weeks Thacker dispatched a note to the intended victim informing him about his long-lost son. The man and the lady arrived at the consulate. Lady Urique bent upon the young man a long look of the most agonised questioning. Then her great black eyes turned, and her gaze rested upon his left hand. And then with a sob she caught Llano Kid to her heart. A month afterwards the Kid came to the consulate in response to a message sent by Thacker.
‘What are you doing?’ asked Thacker. ‘You’re not being fair to me. You’ve been acting as the lost son of the couple for four weeks now. What’s the trouble? What are you waiting for?’ he asked angrily. ‘Don’t you forget that I can upset your apple cart any day I want to.’
‘I might just as well tell you now, that things are going to stay just as they are. They’re about right now,’ said Kid. ‘The scheme’s off.’
‘What do you mean?’ asked Thacker. ‘You’re going to throw me down, then, are you?’
‘Sure’, said Kid cheerfully. ‘Throw you down. That’s it. And now I’ll tell you why. I have had no mother to speak of. But here’s a lady, this artificial mother of mine, who dotes on me. I’ve got to keep her fooled. Once the lady stood it; twice she won’t.’ ‘There’s one more reason’, he said slowly, ‘why things have got to stand as they are. The fellow I killed in Laredo had the same picture on his left hand.’
Question 3(a).
Give the meanings of the following words as used in the passage .One word answers or short phrases will be accepted.
(1) Tattooed
(2) Notify
(3) Agonised
1. A tattoo is a form of body modification, made by inserting indelible ink into the dermis layer of the skin to change the pigment.
2. Inform (someone) of something, typically in a formal or official manner.
3. Expressing pain or agony, e.g. “agonized screams”.
Question 3(b).
Question 1.
In what way could the Kid look like Urique’s lost son? (Consider his age, origin and his tattoo.)
The Kid could speak Spanish and looked like a Spaniard; Urique was Spanish. Besides this, the Kid could look like Urique’s son if a tattoo showing a flying eagle carrying a spear in its claws was made on his left hand; this was a tattoo of Urique’s coat of arms. Moreover, the Kid was about twenty or twenty-one, which was the same age as Urique’s son would have been, as he was eight when he was lost and twelve years had passed since then. The boy had last been seen in Texas, and the Kid was also from Texas.
Question 2.
What did Urique do after his son was lost? What was the reaction of Lady Urique then?
Urique spent thousands of dollars looking for his son. Lady Urique was the most broken-hearted of all; she believed that her son would come back to her some day.
Question 3.
What was Thacker’s intention in sending Kid to Urique’s house?
Thacker wanted to use the Kid to steal money from Urique’s house. Urique was in the habit of keeping $50,000 to $100,000 in his house all the time, and Thacker wanted the Kid to open the safe and steal that money.
Question 4.
What did Thacker do after Kid had gone to Urique’s house?
After the Kid went to Urique’s house Thacker waited for a month for the Kid to commit the robbery as planned but when nothing happened he sent a message asking the Kid to meet him.
Question 5.
How did Urique and his wife react at the consulate after meeting Kid?
When Urique’s wife met the Kid at the consulate, she gave him a long look of the most agonised questioning. Then she turned to look at his left hand to see the tattoo. After that she caught Llano Kid and hugged him with a sob. She became very emotional.
Question 6.
Why didn’t Kid abide by the scheme proposed by Thacker?
The Kid did not abide by the scheme proposed by Thacker for two reasons. Firstly, he was quite happy being with Urique and his wife, who doted on him, and felt that it was better for him to continue living with people who considered him their son and were ready to give him everything. Secondly, he had already killed Urique’s son in Laredo and was confident that he could now safely stay with them without danger of detection. Perhaps he was also a little repentant for killing their son.
Question 3(c).
In not more than 60 words, briefly state the plan made by Thacker and how it failed at the end.
Thacker had planned to plant the Kid in Urique’s house to impersonate Urique’s son, steal money and then flee with Thacker. But his plan failed when the Kid refused to do as planned and decided to continue staying with a doting mother; perhaps he was also repentant for killing their son.
Question 3(d).
Give a title to the passage and give a reason to justify your choice.
An apt title would be ‘Deceiver Deceived’, for two reasons:
1. Thacker, who wanted to deceive Urique, was himself deceived by his accomplice.
2. The Kid had gone to deceive Urique and his wife about their son and rob them, but was ultimately won over into staying because of the motherly love showered upon him.
Question 4(a).
In the following passage, fill in each of the numbered blanks with the correct form of the word given in brackets. Do not copy the passage, but write in correct serial order the word or phrase appropriate to each blank space.
While —1— (teach) in the class, the teacher looked outside the window of his classroom. There he —2— (see) a nine-year-old boy, shabbily —3— (dress) and —4— (shiver) with cold. The teacher called him in. The poor boy —5— (be) in tears. “I have done nothing wrong,” he said. “I was just here to listen to your lessons and learn something before —6— (go) to the store; but if you don’t want me here I won’t come back.” “Why don’t you go to school?” asked the teacher. “Because my father can’t afford to pay the school fees every month,” sobbed the boy. “Well, let me see if you know anything. Tell me something about what I —7— (teach) in class yesterday.” The boy remembered everything and the astonished teacher said, “Don’t worry about the fees. I —8— (speak) to your father.” Later, the boy became a great scholar and an outstanding writer.
1. teaching
2. saw
3. dressed
4. shivering
5. was
6. going
7. taught
8. will speak
Question 4(b).
Fill in the blanks with appropriate words:
1. I saw Jane last summer, but since then I haven’t seen her.
2. There were old magazines lying about in his room.
3. A wooden barrier was placed across the road.
4. The child crawled under the bed in an attempt to hide.
5. They do not work properly during the festival week.
6. We walked on till we reached the bridge.
7. These souvenirs are of no value.
8. We decided against a picnic in view of the bad weather.
Question 4(c).
Combine each of the following sets of sentences without using and, but or so:
1. Bring me the newspaper. It is in the drawing room.
Ans. Bring me the newspaper that is in the drawing room.
2. Could he give us a loan? I did not know.
Ans. I did not know if he could give us a loan.
3. Everyone opposed Edison. However, he disregarded their opinion.
Ans. Edison disregarded the opinion of everyone who opposed him.
4. My father will send my sister to college. He will also send me to college.
Ans. My father will send me as well as my sister to college.
Question 4(d).
Rewrite the following sentences according to the instructions given after each. Make other changes that may be necessary, but do not change the meaning of each sentence.
(1) Both the sons never help her in the morning. (Begin: Neither______)
Ans. Neither of the sons helps her in the morning.
(2) "If you need help, contact the travel agent," I advised the tourists. (Begin: The tourists______)
Ans. The tourists were advised by me to contact the travel agent in case they needed any help.
(3) Both Arun and I walked out of the meeting.(Begin: Arun walked out_______ )
Ans. Arun walked out of the meeting, and so did I.
(4) My cousin is short, yet he is a good basketball player. (Use: In spite of________)
Ans. In spite of being short, my cousin is a good basketball player.
(5) The government will raise the oil prices soon.(Begin: The oil prices_______ )
Ans. The oil prices will soon be raised by the government.
(6) She can only go for the picnic if she gets better. (Use: Unless________)
Ans. Unless she gets better, she cannot go for the picnic.
(7) Didn’t I meet you in the school yesterday? (End: _______ didn’t I )
Ans. I met you in the school yesterday, didn’t I?
(8) He said, “I have not done that.” (Begin:He denied________ )
Ans. He denied having done that.
https://www.dummies.com/education/math/calculus/how-to-find-the-value-of-an-infinite-sum-in-a-geometric-sequence/

# How to Find the Value of an Infinite Sum in a Geometric Sequence
If your pre-calculus teacher asks you to find the value of an infinite sum in a geometric sequence, the process is actually quite simple, as long as you keep your fractions and decimals straight. If r lies outside the range –1 < r < 1, the terms a_n grow without bound, so there is no limit on how large |a_n| can get. If |r| < 1, then |r^n| decreases as n increases, eventually becoming arbitrarily close to 0. This happens because when you multiply a fraction between –1 and 1 by itself repeatedly, its absolute value keeps shrinking until it becomes so small that you hardly notice it. Therefore, the term r^k almost disappears completely in the finite geometric sum formula:

S_k = a_1(1 − r^k)/(1 − r)
And if the r^k term disappears, or at least gets very small, the finite formula reduces to the following, which gives the sum of an infinite geometric series:

S = a_1/(1 − r)
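As a quick numerical check (not from the article), the sketch below compares the finite-sum formula with its |r| < 1 limit a_1/(1 − r); the function names are illustrative, not from any library:

```python
# Numerically checking that the finite geometric sum approaches
# a1 / (1 - r) when |r| < 1, because r**k shrinks toward 0.

def finite_geometric_sum(a1, r, k):
    """Sum of the first k terms: a1 * (1 - r**k) / (1 - r)."""
    return a1 * (1 - r**k) / (1 - r)

def infinite_geometric_sum(a1, r):
    """Limit of the partial sums, valid only for |r| < 1."""
    if abs(r) >= 1:
        raise ValueError("The infinite sum converges only for |r| < 1.")
    return a1 / (1 - r)

a1, r = 1.0, 0.5
limit = infinite_geometric_sum(a1, r)  # 1 / (1 - 0.5) = 2.0
for k in (5, 10, 20, 50):
    print(k, finite_geometric_sum(a1, r, k))
# The partial sums (1.9375 at k = 5, then closer and closer) creep
# toward the limit 2.0 as r**k shrinks.
```

Raising an error for |r| ≥ 1 mirrors the article's caveat that the infinite sum simply does not exist outside –1 < r < 1.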
For example, given a rule for the terms a_n, follow these steps to find the value of the infinite sum:
1. Find the value of a_1 by plugging in 1 for n.
2. Calculate a_2 by plugging in 2 for n.
3. Determine r by dividing a_2 by a_1.
4. Plug a_1 and r into the formula S = a_1/(1 − r) and simplify to find the infinite sum.
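The numbered steps can be walked through in code. The article's example sequence was shown only as an image that did not survive extraction, so the rule a(n) = 3·(1/4)^n below is a hypothetical stand-in, not the article's actual sequence:

```python
# Hypothetical sequence rule for illustration only (the article's
# original rule was lost): a_n = 3 * (1/4) ** n.

def a(n):
    return 3 * (1 / 4) ** n

a1 = a(1)          # Step 1: a_1 = 0.75
a2 = a(2)          # Step 2: a_2 = 0.1875
r = a2 / a1        # Step 3: r = 0.1875 / 0.75 = 0.25
s = a1 / (1 - r)   # Step 4: infinite sum = 0.75 / 0.75 = 1.0
print(a1, a2, r, s)
```

Since |r| = 0.25 < 1 here, step 4 is legitimate and the series converges.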
Repeating decimals can also be expressed as infinite sums. Consider the number 0.5555555. . . . You can write this number as 0.5 + 0.05 + 0.005 + . . . , and so on forever. The first term of this sequence is 0.5; to find r, divide 0.05 by 0.5, which gives r = 0.1.
Plug these values into the infinite sum formula: S = 0.5/(1 − 0.1) = 0.5/0.9 = 5/9.
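The repeating-decimal claim can be verified with exact rational arithmetic; this Fraction-based check is an illustration, not part of the original article:

```python
from fractions import Fraction

# Verifying exactly that 0.555... = 0.5 / (1 - 0.1) = 5/9.
a1 = Fraction(5, 10)   # first term 0.5
r = Fraction(1, 10)    # common ratio 0.1
s = a1 / (1 - r)

print(s)               # 5/9
print(float(s))        # 0.5555555555555556
```

Using Fraction avoids floating-point rounding, so the result 5/9 is exact rather than approximate.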
Keep in mind that this sum is finite only if r lies strictly between –1 and 1.