https://picturethismaths.wordpress.com/tag/applied-topology/

## Understanding the brain using topology: the Blue Brain project
ALERT ALERT! Applied topology has taken the world by storm once more. This time techniques from algebraic topology are being applied to model networks of neurons in the brain, in particular with respect to how the brain processes information when exposed to a stimulus. Ran Levi, one of the 'co-senior authors' of the recent paper published in Frontiers in Computational Neuroscience, is based in Aberdeen and he was kind enough to let me show off their pictures in this post. The paper can be found here.
So what are they studying?
When a brain is exposed to a stimulus, neurons fire seemingly at random. We can detect this firing and create a ‘movie’ to study. The firing rate increases towards peak activity, after which it rapidly decreases. In the case of chemical synapses, synaptic communication flows from one neuron to another and you can view this information by drawing a picture with neurons as dots and possible flows between neurons as lines, as shown below. In this image more recent flows show up as brighter.
Numerous studies have been conducted to better understand the pattern of this build up and rapid decrease in neuron spikes and this study contains significant new findings as to how neural networks are built up and decay throughout the process, both at a local and global scale. This new approach could provide substantial insights into how the brain processes and transfers information. The brain is one of the main mysteries of medical science so this is huge! For me the most exciting part of this is that the researchers build their theory through the lens of Algebraic Topology and I will try to explain the main players in their game here.
Topological players: cliques and cavities
The study used a digitally constructed model of a rat's brain, which reproduced neuron activity from experiments in which the rats were exposed to stimuli. From this model 'movies' of neural activity could be extracted and analysed. The study then compared their findings to real data and found that the same phenomenon occurred.
Neural networks have been previously studied using graphs, in which the neurons are represented by vertices and possible synaptic connections between neurons by edges. This throws away quite a lot of information since during chemical synapses the synaptic communication flows, over a minuscule time period, from one neuron to another. The study takes this into account and uses directed graphs, in which an edge has a direction emulating the synaptic flow. This is the structural graph of the network that they study. They also study functional graphs, which are subgraphs of the structural graph. These contain only the connections that fire within a certain 'time bin'. You can think of these as synaptic connections that occur in a 'scene' of the whole 'movie'. There is one graph for each scene and this research studies how these graphs change throughout the movie.
The main structural objects discovered and consequently studied in these movies are subgraphs called directed cliques. These are graphs for which every vertex is connected to every other vertex. There is a source neuron from which all edges are directed away, and a sink neuron towards which all edges are directed. In this sense the flow of information has a natural direction. Directed cliques consisting of n neurons are called simplices of dimension (n-1). Certain sub-simplices of a directed clique form their own directed cliques, when the vertices in the sub-simplices contain their own source and sink neuron; these are called sub-cliques. Below are some examples of the directed clique simplices.
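To make the definition concrete, here is a minimal sketch (my own toy code, not the authors') of how one might test whether a handful of neurons spans a directed clique. It uses the standard fact that a fully connected directed graph with exactly one edge per pair (a tournament) admits a consistent source-to-sink ordering exactly when its out-degrees are all distinct:

```python
from itertools import combinations

def is_directed_clique(vertices, edges):
    """True if `vertices` span a directed clique (an abstract directed
    simplex): every pair is joined by exactly one directed edge, and the
    orientations are consistent, giving a unique source and sink.
    `edges` is a set of (tail, head) pairs."""
    vs = list(vertices)
    for u, v in combinations(vs, 2):
        if ((u, v) in edges) == ((v, u) in edges):
            return False  # missing edge, or edges in both directions
    # a tournament is acyclic iff its out-degrees are 0, 1, ..., n-1
    out_degrees = sorted(sum((u, v) in edges for v in vs) for u in vs)
    return out_degrees == list(range(len(vs)))

# a 2-simplex: 'a' is the source, 'c' is the sink
print(is_directed_clique("abc", {("a", "b"), ("a", "c"), ("b", "c")}))  # True
# a directed 3-cycle has no source or sink, so it is not a directed clique
print(is_directed_clique("abc", {("a", "b"), ("b", "c"), ("c", "a")}))  # False
```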
And the images below show these simplices occurring naturally in the neural network.
The researchers found that over time, simplices of higher and higher dimension were born in abundance, as synaptic communication increased and information flowed between neurons. Then suddenly all cliques vanished: the brain had finished processing the new information. This relates the neural activity to an underlying structure which we can now study in more detail. It is a very local structure: simplices of up to 7 dimensions were detected, i.e. a clique of 8 neurons in a microcircuit containing tens of thousands. It was the pure abundance of this local structure that made it significant, where in this setting 'local' means concerning a small number of vertices in the structural graph.
As well as considering this local structure, the researchers also identified a global structure in the form of cavities. Cavities are formed when cliques share neurons, but not enough neurons to form a larger clique. An example of this sharing is shown below, though please note that this is not yet an example of a cavity. When many cliques together bound a hollow space, this forms a cavity. Cavities represent homology classes, and you can read my post on introducing homology here. An example of a 2 dimensional cavity is also shown below.
The graph below shows the formation of cavities over time. The x-axis corresponds to the first Betti number, which gives an indication of the number of 1 dimensional cavities, and the y-axis similarly gives an indication of the number of 3 dimensional cavities, via the third Betti number. The spiral is drawn out over time as indicated by the text specifying milliseconds on the curve. We see that at the beginning there is an increase in the first Betti number, before an increase in the third alongside a decrease in the first, and finally a sharp decrease to no cavities at all. Considering the neural movie, we view this as an initial appearance of many 1 dimensional simplices, creating 1 dimensional cavities. Over time, the number of 2 and 3 dimensional simplices increases, by filling in extra connections between 1 dimensional simplices, so the lower dimensional cavities are replaced with higher dimensional ones. When the number of higher dimensional cavities is maximal, the whole thing collapses. The brain has finished processing the information!
The time dependent formation of the cliques and cavities in this model was interpreted to try and measure both local information flow, influenced by the cliques, and global flow across the whole network, influenced by cavities.
So why is topology important?
These topological players provide a strong mathematical framework for measuring the activity of a neural network, and the process a brain undergoes when exposed to stimuli. The framework works without parameters (for example there is no measurement of distance between neurons in the model) and one can study the local structure by considering cliques, or how they bind together to form a global structure with cavities. By continuing to study the topological properties of these emerging and disappearing structures alongside neuroscientists we could come closer to understanding our own brains! I will leave you with a beautiful artistic impression of what is happening.
There is a great video of Kathryn Hess (EPFL) speaking about the project, watch it here.
For those of you who want to read more, check out the following blog and news articles (I’m sure there will be more to come and I will try to update the list)
Frontiers blog
Wired article
Newsweek article
## SIAGA: Topology of Data
### Seven pictures from Applied Algebra and Geometry: Picture #4
The Society for Industrial and Applied Mathematics, SIAM, has recently released a journal of Applied Algebra and Geometry called SIAGA. The poster for the journal features seven pictures. In this blog post I will talk about the fourth picture. See here for my blog posts on pictures one, two and three. And see here for more information on the new journal.
In the first section of this post “The Context”, I’ll set the mathematical scene and in the second section “The Picture” I’ll talk about this particular image, representing Topology of Data.
# The Context
Topology offers a set of tools that can be used to understand the shape of data. The techniques detect intrinsic geometric structures that are robust to many common sources of error including noise and arbitrary choice of metric. For an introduction, see the book “Elementary Applied Topology” by Robert Ghrist (2014), or the article “Topology and Data” by Gunnar Carlsson (2009).
Say we have noisy data points coming from some unknown space $X$ which we believe possesses an interesting shape. We are interested in using the data to capture the topological invariants of the unknown space. These are its holes of different dimensions, unchanged by continuous squeezing and stretching.
The holes of different dimensions are the homology groups of the space $X$. They are denoted by $H_k(X)$ for $k$ some non-negative integer. The zeroth homology group tells us the number of zero-dimensional holes or, more intuitively, the connectedness of the space. For a space $X$ with $n$ connected components, it is
$H_0(X) = \mathbb{Z}^n$,
the free abelian group with $n$ generators. One-dimensional holes are counted by $H_1(X)$. For example, a circle $X = S^1$ has a single one-dimensional hole, so $H_1(S^1) = \mathbb{Z}$.
The connectedness properties of sampled data tell us a lot about the underlying space from which they are sampled. In some situations, such as for structural biological information, it is indispensable to know the structure of the holes too. These features are unchanged no matter which metric we use, or which space we embed the points into. The higher homology groups $H_k(X)$ for $k \geq 2$ similarly give us such summarizing features.
But there’s a problem: sampling $N$ points from a space gives us a collection of zero-dimensional pieces, which – unless two points land in exactly the same place – are all unconnected. Let us call this data space $D_0$. The space $D_0$ has homology groups
$H_k(D_0) = \begin{cases} \mathbb{Z}^N & k = 0 \\ 0 & \text{otherwise.} \end{cases}$
It is usually the case that many points are very close together, and ought to be considered to come from the same connected component. To measure this we use persistent homology. We take balls of increasing size centered at the original data points, and measure the homology groups of the space consisting of the union of these balls. We call this space $D_\epsilon$, where $\epsilon$ is the radius of the balls. The important structural features are those that persist for large ranges of values of $\epsilon$. For some great illustrations of persistent homology, see this post Rachael wrote for our blog in July 2015.
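For the zeroth homology this "balls of increasing size" computation is simple enough to sketch from scratch, because components merge exactly along a minimum spanning tree of the point cloud. Below is a toy version (my own sketch, assuming only numpy; real work would use a library such as GUDHI or Ripser) that returns the $H_0$ barcode of the filtration:

```python
import itertools
import numpy as np

def h0_barcode(points):
    """H_0 persistence of the growing-balls filtration: each point is a
    component born at radius 0; two components merge when the balls
    around their closest pair of points first touch (a union-find
    sweep over edges sorted by length)."""
    pts = np.asarray(points, dtype=float)
    parent = list(range(len(pts)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    edges = sorted((np.linalg.norm(pts[i] - pts[j]), i, j)
                   for i, j in itertools.combinations(range(len(pts)), 2))
    bars = []
    for dist, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, dist / 2))  # balls of radius dist/2 touch
    bars.append((0.0, np.inf))            # one component never dies
    return bars

print(h0_barcode([(0, 0), (0.1, 0), (5, 5)]))
# [(0.0, 0.05), (0.0, 3.50...), (0.0, inf)]: the two nearby points merge early
```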
# The Picture
This picture shows data points sampled from a torus, which we imagine to live in three-dimensional space. It was made by Dmitriy Morozov, who works at the Lawrence Berkeley National Lab. He applies topological methods in cosmology, climate modeling and material science.
The sampled points in the picture lie on the torus, and furthermore in a more specialized slinky-shaped zone of the torus. This is an important feature of the shape which topological methods will capture.
The original data consists of 5000 points, and our persistent homology approach involves taking three-dimensional balls $B_\epsilon(d_i)$ of radius $\epsilon$ centered at each data point $d_i$. When the radius $\epsilon$ is extremely small, none of the balls will be connected, and the shape of our data is indistinguishable from any other collection of 5000 points in space.
Before long, the radius will exceed half the distance to all the points' nearest neighbors. The 5000 balls join together to form a curled up circular piece of string. Topological invariants do not notice the curling, so topologically the shape obtained is a thickened circle with a one-dimensional hole $H_1(D_{\epsilon}) = \mathbb{Z}$. When the radius is large enough for the adjacent curls of the slinky to meet, but not to reach the opposite side of each curl, we get a hollow torus with $H_1(D_{\epsilon}) = \mathbb{Z}^2$ and $H_2(D_\epsilon) = \mathbb{Z}$. Finally, the opposite sides of each curl of the slinky will meet, and they will meet up with the slinky-curls on the opposite side of the torus. Our shape then becomes a three-dimensional shape with no holes, and $H_1(D_\epsilon) = 0$.
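If you want to reproduce this kind of computation yourself, here is a rough sketch (assuming the Python `ripser` package; any Rips-complex library would do) that samples points from a torus and looks for the two long bars in $H_1$ and the one long bar in $H_2$:

```python
import numpy as np
from ripser import ripser  # pip install ripser

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 300)   # angle around the central hole
phi = rng.uniform(0, 2 * np.pi, 300)     # angle around the tube
R, r = 2.0, 0.5                          # major and minor radii
X = np.column_stack([(R + r * np.cos(phi)) * np.cos(theta),
                     (R + r * np.cos(phi)) * np.sin(theta),
                     r * np.sin(phi)])

dgms = ripser(X, maxdim=2)['dgms']
for k, dgm in enumerate(dgms):
    lengths = np.sort(dgm[:, 1] - dgm[:, 0])
    print(f"H_{k}: longest bars {lengths[-3:]}")
# expect two long H_1 bars (Z^2) and one long H_2 bar (Z) for the torus
```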
In this example, the data points can be visualized and we are able to confirm that our intuition for the important structure of the shape agrees with the homological computations. For higher-dimensional examples it is these persistent features that will guide our understanding of the shape of the data.
## Moduli Spaces
What do you think about when you see a circle?
Depending on the context, a mathematician might think of a circle in many ways. One option is to think about a circle as a collection of infinitely many points, rather than to think of it as a line. It is the collection of points that lie a fixed distance away from a central point.
This perspective is important because we can think of each point on the circle as representing something. For example, in the picture below the blue point represents the red arrow (or ray) that starts at the centre of the circle and passes through that blue point:
We can do this for all points and arrows: each point on the circle corresponds to the arrow starting at the centre of the circle and passing through that point, and each arrow corresponds to the point where it meets the circle.
This gives a bijection between points on the circle and arrows from the centre. But our correspondence between points and arrows is better than a bijection: it seems quite natural. The reason for this is that if two arrows are very close together then the corresponding points on the circle are also close together, and vice versa – our bijection has captured information about the topology of the arrows.
In fact, we can find how similar two arrows are by finding the distance between their associated points on the circle.
Why is this useful? It’s much easier to think about a circle than it is to think about a whole collection of arrows. We can see from the fact that a circle is 1-dimensional that all such arrows can be described using one parameter – the angle of the arrow is enough information to define which arrow we’re talking about. We can stratify the space of arrows by subdividing the circle, for example on the picture below the green region corresponds to arrows that point “upwards”:
We say that the circle is a moduli space for our arrows – each point on the circle represents an arrow in the right kind of “natural” way.
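Here is what that "distance between arrows" looks like concretely (a quick sketch of my own): each arrow is recorded by its angle, and the distance between two arrows is the shorter arc between their points on the circle.

```python
import math

def arrow_distance(theta1, theta2):
    """Arc distance on the unit circle between the points representing
    two arrows (angles in radians)."""
    d = abs(theta1 - theta2) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def points_upwards(theta):
    """Membership in the stratum of 'upward' arrows: angle in (0, pi)."""
    return 0 < theta % (2 * math.pi) < math.pi

print(arrow_distance(0.1, 6.2))  # ~0.18: nearly identical arrows
print(points_upwards(2.0))       # True
```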
What if, instead of arrows starting at a given point, we were interested in lines passing through a point? What shape could we use to parametrize these – i.e. what could our new moduli space be? We have a problem because each line passes through two points on the circle instead of one so we no longer have a bijection:
Can we still use a circle as the moduli space? (Answer: yes!)
In the above examples we are looking for ways to classify arrows and lines. It is an important problem in maths to classify more complicated objects for example algebraic curves, or fly wings!
Ezra Miller is interested in understanding the moduli space of fly wings – a space where each point corresponds to a different kind of fly wing, where we say two fly wings are different if the veins make different shapes.
Clearly a much more complicated space than a circle will be needed to describe the different kinds of vein shapes of fly wings. In fact, there are lots of different spaces that we could use. Once we have selected a shape, there are interesting and useful questions we can ask:
1. Distance: in our previous example, we had a clear notion of how far apart two arrows were – we could measure the distance travelled to get from one point to another by travelling along the circle. It’s harder to say how far apart two fly wings are, and we want a measure of distance that agrees with our biological intuition.
2. Stratifying the space: before, we could stratify our space of arrows by selecting some region of the circle (for example, the upper half). What about for the fly wing – are there areas of our moduli space that correspond to biologically significant subsets of flies?
There are also many areas of pure maths where people study moduli spaces, which I will talk about in a future blog post. They are a (fairly) simple but important intuitive concept that plays a role in both pure and applied maths.
## Persistent homology applied to evolution and Twitter
In this post I’ll let you know about an application and a variation of persistent homology I learnt about at the Young Topologists Meeting 2015. You might want to read my post on persistent homology first!
In his talks Gunnar Carlsson (Stanford) gave lots of examples of applications of persistent homology. A really interesting one for me was applying persistent homology to evolution trees. Remember that homology tells us about the shape of the data, and in particular if there are any holes or loops in it. We tend to think of evolution as a tree:
but in reality the reason why all our models for evolution are trees is that we take the data and try to fit it to the best tree we can. We don’t even think that it might have a different shape!
In reality, as well as vertical evolution, where one species becomes another, or two other, distinct species over time, we have something called horizontal or reticulate evolution. This is where two species combine to form a hybrid species. In their paper Topology of viral evolution, Chan, Carlsson and Rabadan show how the homology (think of this as something describing the shape of the data, specifically the holes or loops that appear) of trees may be different if we take into account species merging together:
They go on to show how persistent homology can detect such loops caused by horizontal evolution, in the example of viral evolution. This is a brand new approach and really exciting as we now have a way of finding out how many loops are in a given evolutionary dataset, and which data points they correspond to. This can tell us about horizontal evolution, as well as vertical!
Up next is work from Balchin and Pillin (University of Leicester) on a variation of persistent homology inspired by directed graphs. The images in this section are from the slides of Scott Balchin's talk at the Young Topologists Meeting! The motivation for their variation is: what if you don't simply have data points, but some other information as well? Take this example of people following people on Twitter: draw an arrow from person A to person B if person A follows person B.
We see that Andy follows Cara but Cara does not reciprocate! If you just had Andy and Cara connected by an edge then this information would be lost. Balchin and Pillin looked at a way of encoding this extra information into the complex, taking into account the number of arrows you would need to move along to get from Andy to Cara (1) and also from Cara to Andy (2, via Bill). I will post a link to their paper here as soon as it is released. When the data is considered without this extra information, persistent homology gives a (crazy) barcode that looks like this:
but when you include the directions you get a slightly less mysterious barcode:
which is in a lot of ways more accurate and easy to interpret.
Balchin gave another example of a system where direction mattered: non-transitive dice. If you have a few 6-sided dice, you can represent each one by a circle with 6 numbers in it: the numbers on the sides of the dice! Then put an arrow from dice A to dice B if dice A beats dice B on average.
"Non-transitive" means that sometimes there are loops where dice A beats dice B which beats dice C, but then dice C beats dice A! You can actually buy non-transitive dice and play with them in real life. As you can probably tell, the arrows in this picture are important and so we want to make sure we don't lose the directions when considering the homology!
There are a few more applications of persistent homology I would like to share with you and hopefully I will get the chance some other week!
## Persistent Homology
A sample of the pictures we will look at this week:
Next week I’m going to the Young Topologists Meeting 2015, at EPFL in Lausanne, Switzerland. Over 180 young topologists are going and many of them will give short talks on their research. Alongside this, there are two invited speakers who will give mini lecture series:
• Gunnar Carlsson of Stanford University, lecturing about Methods of applied topology
• Emily Riehl of Harvard University, lecturing about Infinity category theory from scratch
I’ll try to write something about these courses, and this post will be a wee introduction to a tool introduced by Gunnar Carlsson which considers topology of data clouds: persistent homology. The pictures in this post were drawn by Paul Horrocks during our joint dissertation at undergrad: points to Paul!
The idea of persistent homology is to use a tool of topology – homology – to understand something about the structure or shape of a set of data points. But topology is to do with spaces, for example manifolds or surfaces. Therefore we want to make a space out of our data before we can work out the homology.
We do this by plotting our set of points, and around each point we draw a ball. This ball has a radius and we can vary the size of this radius:
Once we have drawn these balls, we join two of the points by a line if their corresponding balls intersect, and colour in triangles formed by three lines if the balls corresponding to the three points of the triangle have a patch where they all intersect. For different radii we get different structures.
here only two of the balls intersected so there is only one line
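The caption above describes exactly the radius regime where only one pair of balls meets. Here is a toy sketch of the construction (my own code; I fill a triangle when the three balls meet pairwise, a standard simplification of the "common patch" rule described above):

```python
import itertools
import numpy as np

def balls_complex(points, radius):
    """Edges and filled triangles for balls of the given radius: two
    balls intersect exactly when their centres are within 2*radius."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)

    def touch(i, j):
        return np.linalg.norm(pts[i] - pts[j]) <= 2 * radius

    edges = [(i, j) for i, j in itertools.combinations(range(n), 2)
             if touch(i, j)]
    triangles = [t for t in itertools.combinations(range(n), 3)
                 if all(touch(i, j) for i, j in itertools.combinations(t, 2))]
    return edges, triangles

pts = [(0, 0), (1, 0), (0.5, 0.9), (4, 4)]
print(balls_complex(pts, 0.51))  # only the two closest balls touch: one edge
print(balls_complex(pts, 0.6))   # the three nearby balls give a filled triangle
```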
https://math.stackexchange.com/questions/2893768/reversing-digits-of-power-of-2-to-yield-power-of-7

# Reversing digits of power of 2 to yield power of 7
Are there positive integers $n$ and $m$ for which reversing the base-10 digits of $2^n$ yields $7^m$?
I've answered this question for powers of 2 and powers of 3 in the negative. Permutations of the digits of a number divisible by 3 yield another number divisible by 3 in base-10. This arises from the fact that the sum of base-10 digits of a number divisible by 3 is itself divisible by 3.
Thus since $2^n$ is not divisible by 3, reversing the digits can't yield another number divisible by 3, and hence no natural number power of 2 when reversed will be a natural power of 3.
I'm currently trying to put limitations on n and m by considering modular arithmetic. Suggestions for techniques in the comments would also be appreciated.
• If the number of digits is even, adding $2^n$ to $7^m$ would give a multiple of 11. By letting 7=-4 modulo 11, some things can be said about $m$ and $n$. – Marco Aug 25 '18 at 3:20
• Also going on with the same idea of remainders modulo 3, you can say $n$ must be even since $2^n=7^m=1$ modulo 3. – Marco Aug 25 '18 at 3:22
• – Misha Lavrov Aug 25 '18 at 3:55
• $1024=2^{10}$ and $2401=7^4$ are close to being reversals of each other.... – Gerry Myerson Aug 25 '18 at 4:23
• This is also related – mbjoe Aug 25 '18 at 18:29
Some basic observations:
$n$ must be even: digit reversal preserves the digit sum, hence the residue modulo 3 and modulo 9. One has $2^n \equiv 7^m \equiv 1 \pmod{3}$ and so $n$ is even.

$n-m$ must be divisible by 3: one has $2^n \equiv 7^m \equiv (-2)^m \pmod{9}$ and so $2^{n-m} \equiv (-1)^m \pmod{9}$; since $2$ has order $6$ modulo $9$, $n-m$ is divisible by 3.

$n-2m$ is divisible by 10, as the following two paragraphs show.

If the number of digits is even, then $2^n+7^m$ is a multiple of 11. Therefore, $2^n+(-4)^m \equiv 0 \pmod{11}$. Since $n$ is even, we let $n=2l$, and so $4^l+(-4)^m \equiv 0 \pmod{11}$. It follows that $m$ is odd and $l-m$ is divisible by 5.

If the number of digits is odd, then $2^n \equiv 7^m \pmod{11}$. Again we have $4^l \equiv (-4)^m \pmod{11}$ and so $4^{l-m} \equiv (-1)^m \pmod{11}$, which implies that $m$ is even and $l-m$ is divisible by 5.
The number of digits of $2^n$ is given by the inequality $10^s<2^n<10^{s+1}$, so $s<n \log_{10} 2<s+1$. Since $7^m$ is the digit reversal of $2^n$, it has the same number of digits, so also $s<m \log_{10} 7<s+1$. In particular $|n\log_{10} 2 - m \log_{10} 7|<1$.
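A quick brute-force pass (a sketch; it proves nothing beyond the range searched) can rule out small exponents:

```python
# digit-reversal check for small exponents; 2^n never ends in 0,
# so reversing its decimal string cannot drop a trailing zero
powers_of_7 = {7**m for m in range(1, 300)}   # covers all digit counts of 2^n below
hits = [n for n in range(1, 700) if int(str(2**n)[::-1]) in powers_of_7]
print(hits)  # expect []: no small solutions are known
```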
http://cerco.cs.unibo.it/changeset/3626

# Changeset 3626
Timestamp:
Mar 6, 2017, 7:22:26 PM
Message:
Changes to main cerco.tex file:
Changed subtitle to something a little more snappy (still not really happy with it),
Rewrote abstract to be less defensive, too.
File:
1 edited
Changes against r3619:

     ... the Seventh Framework Programme for Research of the European Commission, under FET-Open grant number: 243881}}
    -\subtitle{Verified lifting of concrete complexity annotations through a realistic C compiler}
    +\subtitle{}
     \journalname{Journal of Automated Reasoning}
     \titlerunning{Certified Complexity}
     \authorrunning{Boender, Campbell, Mulligan, and Sacerdoti~Coen}
     \institute{Jaap Boender \at
    -  Faculty of Science and Technology,\\ Middlesex University London,\\ United Kingdom.\\
    +  Faculty of Science and Technology, Middlesex University London, United Kingdom.\\
     \email{J.Boender@mdx.ac.uk}
     \and Brian Campbell \at
    -  Department of Informatics,\\ University of Edinburgh,\\ United Kingdom.\\
    +  Department of Informatics, University of Edinburgh, United Kingdom.\\
     \email{Brian.Campbell@ed.ac.uk}
     \and Dominic P. Mulligan \at
    -  Computer Laboratory,\\ University of Cambridge, \\ United Kingdom.\\
    +  Computer Laboratory, University of Cambridge, United Kingdom.\\
     \email{Dominic.Mulligan@cl.cam.ac.uk}
     \and Claudio Sacerdoti~Coen \at
    -  Dipartimento di Informatica---Scienza e Ingegneria (DISI),\\ University of Bologna,\\ Italy.\\
    +  Dipartimento di Informatica---Scienza e Ingegneria (DISI), University of Bologna, Italy.\\
     \email{Claudio.SacerdotiCoen@unibo.it}}
     \begin{abstract}
    -We provide an overview of the FET-Open Project CerCo (`Certified Complexity'). Our main achievement is the development of a technique for analysing non-functional properties of programs (time, space) at the source level with little or no loss of accuracy and a small trusted code base.
    -The core component is a C compiler, verified in the Matita theorem prover, that produces an instrumented copy of the source code in addition to generating object code.
    +Intensional properties of programs---time and space usage, for example---are an important component of the specification of a program, and therefore overall program correctness. Here, intensional properties can be analysed \emph{asymptotically}, or \emph{concretely}, with the latter analyses computing resource bounds in terms of clock cycles, bits transmitted, bytes allocated, or other basal units of resource consumption, for a program execution. For many application domains, for instance libraries exporting cryptographic primitives that must be hardened against timing side-channel attacks, concrete complexity analysis is arguably more important than asymptotic.
    +Traditional static analysis tools for resource analysis suffer from a number of disadvantages. They are sophisticated, complex pieces of software, that must be incorporated into the trusted codebase of an application if their analysis is to be believed. They also reason on the machine code produced by a compiler, rather than at the level of the source-code that the application programmer is familiar with, and understands. More ideal would be a mechanism to `lift' a cost model from the machine code generated by a compiler, back to the source code level, where analyses could be performed in terms of source code, abstractions, and control-flow constructs written and understood by the programmer. However: incorporating the precision of traditional static analyses into a high-level approach is a challenge, and how to do this reliably is not \emph{a priori} clear. In this paper, we describe the scientific achievements of the European Union's FET-Open Project CerCo (`Certified Complexity').
    +CerCo's main achievement is the development of a technique for analysing intensional properties of programs at the source level, with little or no loss of accuracy and a small trusted code base. The core component of the project a C compiler, verified in the Matita theorem prover, that produces an instrumented copy of the source code, in addition to generating object code. This instrumentation exposes, and tracks precisely, the actual (non-asymptotic) computational cost of the input program at the source level. Untrusted invariant generators and trusted theorem provers may then be used to compute and certify the parametric execution time of the code. We describe the architecture of our C compiler, its proof of correctness, the associated toolchain developed around the compiler, as well as a case study in applying this toolchain to the verification of concrete timing bounds on cryptographic code.
     \keywords{Verified compilation \and Complexity analysis \and CerCo (`Certified Complexity')}
     \end{abstract}
http://thepasqualian.com/?m=201004

### Archive
Archive for April, 2010
## On the Beale Cipher, Part II (and Other Book Ciphers)
Last time I talked about extending the usual frequency analysis on the (first) Beale cipher to augment our understanding of the composition of the individual letters the numbers may represent. I have said before that Markov chains are immensely applicable everywhere; the Beale cipher seems no exception. The idea came to me last month, a couple years after reading Singh's book, and while I was happily lazy watching a couple seagulls fish out of Manzanillo's ocean. I had also read Snell's book and a particular description of how Markov himself thought about Markov chains intrigued me: apparently he had counted the transitions of vowels to consonants and consonants to vowels in a book, thought out his theory, and showed that the long-term fractions of vowels and consonants in the book (using a chain) stabilized to the actual ratio.
This same idea can be applied, I think, on the Beale cipher. Suppose we know a priori how transitions occur between the encoding letters of the key, which we surmise are the first letters of its words. Say for example the key is the Declaration of Independence (which in fact is the key for Beale cipher 2), as
"When in the Course of human events it becomes necessary..." and each first letter encodes for a particular number. We can see that W transitions to I, I to T, T to C, and so:
W->I->T->C->O->H->E-> etc. By finding the proportion of time any letter, say, W, transitions to any of the other letters, we've got ourselves a transition probability matrix. In this abbreviated example we see that W transitions to I one hundred percent of the time; its transition probability vector would be represented by a 1 in the position of the letter I and 0 otherwise. Thus, in effect, we are assuming that a random variable can take states represented by the letters of the alphabet, and it can transition to any other letter or stay where it is with a given probability.
Of course, the longer the encoding key is the better; most of the 26x26 transition probability matrix can be filled without a row of zeros, which is in effect what happens with the letter A above, since we count no letter toward which it transitions. In counting the whole of the transitions in the Declaration of Independence, the only letters that don't transition are X, Y, Z; to bypass the difficulty of a transition probability matrix whose rows do not sum to 1, I have made it so that X, Y, and Z's rows do sum to 1 by assuming that the states X, Y, and Z transition to any other letter of the alphabet in equal proportion.
Writing the Declaration of Independence transition probability matrix thusly, it becomes a regular transition probability matrix with stable nth power. In other words, we can take powers of this matrix up to an arbitrary number, such as 3000, 5000, without the fear of it diverging or giving weird numbers. In fact, in particular, the Declaration of Independence transition probability matrix (DOITPM for short) stabilizes to the fourth decimal digit after about 15 powers. The reason we care about the powers of the TPM is that such new matrix represents the probability of transitioning to a particular letter after n transitions! Thus, where the DOITPM alone represents the probability of, say, W transitioning to I at the first step (the next cipher number down), DOITPM to the 3rd power represents the probability of (in particular) W jumping to another letter after three steps (the next three cipher numbers down).
Of course, calculating the DOITPM to any power is taxing or impossible if done by hand: I downloaded Octave (which I earnestly recommend if, like me, you don't have the 5000+ dollars for a Matlab license) and built a few script files for the purpose of doing all this.
The very cool thing about this approach is that virtually all points 1-6 that I commented on in my last post are taken care of. But in particular, if one has a prior belief about the probability that a given number in the cipher is a particular letter (a probability vector), such can be propagated forward by using the TPM and its powers. As an example, we may have the prior belief that the cipher's 1 is a first letter with probability vector equal to the frequencies of first letters in the English language. Such a vector (P) times the DOITPM gives us a probability vector for the cipher's 2 (one-step forward), P*DOITPM^2 power gives the probability vector of the cipher's 3 (two steps forward), etc.
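To make the propagation step concrete, here is a rough sketch in Python of the same pipeline (the ideas only, not the actual Octave script files mentioned elsewhere in this post; the names here are illustrative):

```python
import numpy as np

def first_letter_tpm(key_text):
    """26x26 transition matrix of consecutive word-initial letters.
    Rows with no observed transitions are made uniform so the chain
    is regular, as described above for X, Y, Z."""
    initials = [w[0] for w in key_text.upper().split() if w[0].isalpha()]
    counts = np.zeros((26, 26))
    for a, b in zip(initials, initials[1:]):
        counts[ord(a) - 65, ord(b) - 65] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.where(row_sums > 0,
                    counts / np.where(row_sums == 0, 1, row_sums),
                    1.0 / 26)

key = "When in the Course of human events it becomes necessary"  # full key text here
P = first_letter_tpm(key)

belief = np.full(26, 1.0 / 26)   # prior belief about the current cipher number
for step in range(1, 4):
    belief = belief @ P          # belief about the next cipher number down
    print(step, belief.round(3))
```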
On the other hand, we may surmise that, say, the cipher's 64 has the standard frequencies of letters of the English language. We can propagate such belief forward to 65, 66, 67, etc., until about 15 letters after our original (since the DOITPM stabilizes and all steps after about the fifteenth are the same).
We need not use the above frequencies, although they do seem reasonable "first-guess" beliefs. But if we suspect for any reason that a particular letter, say T, is the cipher's 1, we can propagate such belief forward and get good results for cipher's 2, 3, and even 4. After 4 things begin to stabilize toward the long-term proportions, and this is only natural as our certainty of the next letters increases. If we are good at crosswords and a little bit lucky, we can determine that 2 is a particular letter. We can then modify the probability vector of such and propagate such belief forward... and so on until the whole text is deciphered.
We can also propagate partial beliefs: if we suspect the cipher's 1 is either a T or a W with equal probability, but there is a slight chance it can be any other letter, our probability vector for that number can be something like 0.004 for all letters except T and W which are 0.45. As always this belief can be propagated forward and with luck we can determine more letters based on frequency guesses.
Computationally, with the very inefficient script files I have built (I freely admit I am no programmer, and I wrote the script files somewhat quickly in order that the ideas not die before they could see the light of day, so there are a lot of unnecessary steps and redundancies) and my 2-core machine, it takes something like 20 minutes to process all the cipher and propagate all numbers' initial probability vectors by proceeding in layers. An example of an inefficiency is that, despite the fact the DOITPM stabilizes at about the 15th power, I got ambitious to squeeze even the slightest changes and am calculating it sometimes up to the 2000+ power, rather than caching the 15th power and using that. Thus, every time I want to modify a probability vector for a particular number on the cipher, I have to wait 20+ minutes for it to finish recalculating. It would be nice to have a gazillion computers working in a grid though.
Nevertheless, as I have mentioned, one will have to proceed in layers; by this I mean the following. Say you have an initial belief about the cipher's 1. We can propagate this belief all the way up to the cipher's 2000. But say you also have an initial probability vector of the ciphers 6. At 6 the probability "waves" begin to collide. It would be ideal if there were a manner to combine the probability (discrete) waves to obtain a more certain probability vector (kind of like a Kalman filter for discrete distributions): I could not think of any (although I want this to be the subject of a next post), or at least not any that would give me a more certain wave, so I opted to choose the wave that was more entropically close to 0. Since we have 26 "boxes" with different proportions of "balls" in them (the respective probability of being a particular letter), we can use information entropy to opt for the one vector that carries the most "certain" belief, or the one with the least entropy. Thus, if 6's prior contains less entropy than 1's propagated vector, I would opt to stick with 6's vector, and vice versa. I'd do this with all vectors at every single position (thus why it takes about 20 minutes to process, too).
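The entropy comparison itself is a one-liner; here is a toy sketch of the collision rule (the 0.45 weights from above are nudged to 0.452 so the vector sums to exactly 1):

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

wave_a = np.full(26, 1.0 / 26)            # a fully uncertain belief
wave_b = np.full(26, 0.004)               # a "probably T or W" belief
wave_b[ord('T') - 65] = wave_b[ord('W') - 65] = 0.452

# where two propagated waves collide, keep the one closer to zero entropy
keep = wave_b if entropy(wave_b) < entropy(wave_a) else wave_a
print(entropy(wave_a), entropy(wave_b))   # ~4.70 vs ~1.80 bits
```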
Of course, if Beale number 1 was encoded with a key with similar transition probabilities as the DOI, and this would be a reasonable assumption if we think Beale encoded the cipher using a text from his time (similar stylistically, etc.), then we can use the DOITPM to attempt a decoding of it. If instead we believe that he used a text that he himself wrote, an analysis of Beale number 2 could yield a TPM that could crack it. Lastly, if we believe that Beale number 1 was encoded with a key very much like any other book in the world, we could again examine such a TPM and attempt a decipherment.
In my next post or possibly upon revision of this one, I will post an .xls file containing the DOITPM and the probability vectors of each number in the cipher of Beale 1, having assumed that the cipher's 1 and 71 have the frequencies associated with the first letter of the English language, and all the rest have the standard frequencies of the letters of the English language as prior beliefs. I also intend to post an .xls file except containing a TPM of a typical book.
The new probability vectors obtained in this fashion for each number of the cipher differ markedly from the typical letter frequencies: this makes sense, because the frequencies now depend on the transitions of the letters in the key itself. This may be the reason why some cryptographers have disputed that the cipher is written in English (based on statistical arguments); I think it is written in English, only the proportions of the letters in the cipher changes because of the particular proportions of the first letters of words in the key.
If anyone is interested in the script files, I may post them too: they are .m files, so you can run them on Octave or Matlab both. Leave a message below.
Otherwise if anyone is interested in hooking up computers for the processing of this (or donating money so I can buy several :)), also leave a message below or contact me using the contact form. Thanks!
## On the Beale Cipher, Part I
Ugh, so right now there is construction going on behind my house; an abandoned house got put down and I can only imagine Telecable extending its dominions over. The street is zoned to be residential, not commercial, but the owner seems to be politically well-positioned: he gets to do whatever he wants. In Mexico, political power is concentrated amongst a few, and not necessarily the smartest (quite the contrary in fact, the majority seems pretty dumb); I have often wondered why it is not requisite to have those whose ambition is a political post (in the legislature, in the judiciary and executive branches) take an IQ or aptitude test (... and have the smartest be selected): I'd much rather have smart crooks than stupids.
At any rate, with all that pounding in the background, I thought I'd talk a little bit about my thoughts on the Beale cipher. I started studying it a bit enthusiastic about the prospect of deciphering it completely (in the beginning of April actually), but I have become convinced this is not a task I can accomplish too easily and without any support -- in other words, it's difficult to do on my own. Besides, I am terrible at crossword puzzles, a skill that seems necessary to be able to guess certain words. I can't but remember my friend Seth K who could complete the NY Times Sunday crossword in like five minutes. Anyway: my motivator was the challenge, or the mathematical aspect of it, not the 40 million in US dollars that it is purported to be about (the first cipher). For some background and history, I earnestly recommend Simon Singh's book "The Code Book." It is extremely entertaining -- and it quickly became one of my favorite books. Of course there's also the Wikipedia article which is much more succinct.
When I first looked at it under the chapter of Singh's "Le Chiffre Indechiffrable," I suspected then as I do now that it is not entirely undecipherable. The reasons for this:
*1. The Beale cipher is not a pad cipher, so it may be "breakable" without the specific requirement of the key.
*2. The fact that certain numbers repeat themselves (some 8 times ("18"), others only once) would suggest that the Beale cipher is susceptible to a form of frequency analysis.
*3. Certain sequences, for example, 64 following 18 in two different parts (first Beale cipher letters 124-125 and 187-188) may be important.
*4. The distance between numbers may be significant. It would seem the closer the difference the stronger the relationship between the letters; thus, the closer they are together the more information we could potentially squeeze out of them. Sequence repetitions may also help a lot (64 following 18 twice, for example), and adds credence to number 1 and 2.
5. The letter encoded by 1 has a different probability distribution of being a particular letter than do the others (see Wikipedia article). The same goes for 71, which is the first letter of the first Beale cipher. This may be true of other letters but it's not exactly clear which, because we don't know where a word ends and another begins. We can surely use this to augment our frequency analysis.
6. It cannot escape us then that if the Beale cipher is a book cipher, then each number in the code likely represents the first letter of a word from the key. However, it's not that knowing the first-letter frequencies of the key is likely to be any help. Although there does seem to be another bit of information of the key that would be extremely useful (and we can somewhat extract), which I will explain in my next post.
Later, I will describe how I have incorporated all these pieces of information into an analysis of frequencies, or an augmented form of the usual frequency analysis. For now, suffice it to say that starred statements I realized fairly early; unstarred statements I noticed only after developing the analysis a bit.
## On revolutionizing the whole of Linear Algebra, Markov Chain Theory, Group Theory... and all of Mathematics
I have been so remiss about writing here lately! I'm so sorry! There are several good reasons for this, believe me. Among them: (1) I have been enthralled with deciphering a two hundred-year old code, the Beale cipher part I, with no substantial results except several good ideas that I may yet pursue and expound on soon here. But this post is not intended to be about that. (2) My computer died around December and I got a new one and I hadn't downloaded TEX; I used this as an excuse not to write proofs from Munkres's Topology chapter 1, and so, I have added none. I slap myself for this (some of the problems are really boring, although they are enlightening in some ways, I have to admit, part of the reason why I began doing them in the first place). (3) The drudgery of day to day work, which is soooo utterly boring that it leaves me little time for "fun," or math stuff, and my attention being constantly hogged by every possible distraction, at home, etc. Anyway.
For a few months now I have been reading a lot on Markov chains because they have captured my fancy recently (they are so cool), and in fact they tie in to a couple projects I've been having or been thinking about. I even wrote J. Laurie Snell because a chapter on his book was excellent (the one on Markov chains) with plenty of amazing exercises that I really enjoyed. In looking over that book and a Schaum's outline, a couple questions came to my head and I just couldn't let go of these thoughts; I even sort of had to invent a concept that I want to describe here.
So in my interpretation of what a Markov chain is, and really with zero rigor, consider you have $n < \infty$ states, position yourself at $i$. In the next time period, you are allowed to change state if you want, and you will jump to another state $j$ (possibly $i$) with probability $p_{ij}$ (starting from $i$). These probabilities can be neatly summarized in a finite $n \times n$ matrix, with each row being a discrete distribution of your jumping probabilities, and therefore each row sums to 1 in totality. I think it was Kolmogorov who extended the idea to an infinite matrix, but we must be careful with the word "infinite," as the number of states are still countable, and so they are summarized by an $\infty \times \infty$ countably infinite matrix. Being keen that you are, dear reader, you know I'm setting this question up: What would an uncountably infinite transition probability matrix look like? No one seems to be thinking about this, or at least I couldn't find any literature on the subject. So here are my thoughts:
The easiest answer is to consider a state $i$ to be any of the real numbers in an interval, say $[0,1]$, and to imagine such a state can change to any other state on such a real interval (that is isomorphic to any other connected closed interval of the same type, as we may know from analysis). This is summarized by a continuous probability distribution on $[0,1]$, whose sum is again 1; a good candidate is a beta function, such as $6 x (1-x)$, with parameters (2,2). I think we can "collect" such probability distributions continuously on $[0,1] \times [0,1]$: a transition probability patch, as I've been calling it. It turns out that it becomes important, if patches are going to be of any use in the theory, to be able to raise the patch to powers (akin to raising matrixes to powers), to multiply patches by (function) vectors and other tensors, and to extend the common matrix algebra to conform to patches; but this is merely a mechanical problem, as I describe in the following pdf. (Comments are very welcome, preferably here on the site!).
CSCIMCR
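As a sanity check on the mechanics (a discretised sketch of my own, not the constructions in the pdf), one can grid the beta(2,2) patch and verify that patch "multiplication" behaves like matrix multiplication weighted by the grid spacing:

```python
import numpy as np

n = 200
y = (np.arange(n) + 0.5) / n             # midpoints of a grid on [0, 1]
dy = 1.0 / n

# a toy patch: from every state x, the jump density over y is 6*y*(1-y);
# this particular patch happens not to depend on x
patch = np.tile(6 * y * (1 - y), (n, 1))

print(patch.sum(axis=1)[0] * dy)         # each row integrates to ~1

# composing patches: p2(x, y) = integral of p(x, z) * p(z, y) dz
patch2 = (patch @ patch) * dy
print(patch2.sum(axis=1)[0] * dy)        # rows still integrate to ~1
```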
As you may be able to tell, I've managed to go quite a long ways with this, so that patches conform reasonably to a number of discrete Markov chain concepts, including a patch version of the Chapman-Kolmogorov equations; but having created patches, there is no reason why we cannot extend the idea to "patchixes" or continuous matrixes on $[0,1] \times [0,1]$ without the restriction that each row cross-section sum to 1; in fact it seems possible to define identity patchixes (patches), and, in further work (hopefully I'll be involved in it), kernels, images, eigenvalues and eigenvectors of patchixes, commuting patchixes, commutator patchixes, and a slew of group theoretical concepts.
Having defined a patchix, if we think of the values of the patchix as the coefficients in front of, say, a polynomial, can we not imagine a new "polynomial" object that runs through exponents of, say, $x$ continuously in $[0,1]$, with each term being "added" to another? (Consider for example something like $\sum_i g(i)x^i,\ i \in [0,1]$ — really an integral $\int_0^1 g(i)\,x^i\,di$?) I think these are questions worth asking, even if they are a little bit crazy, and I do intend to explore them some, even if it later turns out it's a waste of time.
https://www.gamedev.net/forums/topic/52417-what-is-an-edge-/
# what is an edge ?
## Recommended Posts
hi, i want to write a simple bsp renderer. i read a tutorial so i only know the theoretical part of bsp-tree generating and bsp rendering. i decided to use quake2-bsp's because i found the tutorial from flipcode about the quake2 bsp-file format. first of all i don't want to render only the cluster i am in, i want to render all faces in the map. but i don't understand what an edge is. if i understood it correctly, an edge has two indices into the vertexarray. but ... aaammm ... ?????!!!???? can someone explain me what an edge is or how can i use it ??
Well think of what an edge is in reality and that's pretty much what it is in 3D theory.
Let's say you have a cube:
    A_________B
    |\        |\
    | \______|_\
    | |H     | |G
    |_|______|C|
    D \|      \|
       \________\
       E        F
A-B is an edge, B-C is an edge, C-D is an edge D-A is an edge
D-E is an edge E-F is an edge, F-G is an edge G-B is an edge, G-H is an edge, etc...
A-C is not an edge, B-D is not an edge. A-F is not an edge, B-E is not an edge, etc...
Seeya
Krippy
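To tie this back to the Quake 2 files: an edge there is just a pair of indices into the vertex array, and each face refers to a run of signed "surfedge" indices whose sign tells you which way to walk the edge. A rough sketch (illustrative names, not the exact on-disk structs from the flipcode article):

```python
# a unit quad: four vertices, four edges, one face
vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # each edge = two vertex indices

# the face lists signed 1-based edge indices ("surfedges"): positive
# means traverse v0 -> v1, negative means the reverse, so a shared edge
# can be stored once and reused by both neighbouring faces
face_surfedges = [1, 2, 3, 4]

def face_outline(surfedges):
    for se in surfedges:
        v0, v1 = edges[abs(se) - 1]
        yield vertices[v0 if se > 0 else v1]

print(list(face_outline(face_surfedges)))  # corners in winding order
```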
https://questions.examside.com/past-years/jee/jee-advanced/physics/simple-harmonic-motion/
## MCQ (More than One Correct Answer)
More
A block with mass M is connected by a massless spring with stiffness constant k to a rigid wall and moves without fricti...
JEE Advanced 2016 Paper 2 Offline
Two independent harmonic oscillators of equal masses are oscillating about the origin with angular frequencies $$\omega... JEE Advanced 2015 Paper 1 Offline ## MCQ (Single Correct Answer) More A small block is connected to one end of a massless spring of un-stretched length 4.9 m. The other end of the spring (se... IIT-JEE 2012 Paper 1 Offline Consider the spring-mass system, with the mass submerged in water, as shown in the figure. The phase space diagram for o... IIT-JEE 2011 Paper 1 Offline The phase space diagram for simple harmonic motion is a circle centred at the origin. In the figure, the two circles rep... IIT-JEE 2011 Paper 1 Offline The phase space diagram for a ball thrown vertically up from ground is IIT-JEE 2011 Paper 1 Offline A wooden block performs$$SHM$$on a frictionless surface with frequency,$${v_0}.$$The block carries a charge$$+Q$$... IIT-JEE 2011 Paper 2 Offline A point mass is subjected to two simultaneous sinusoidal displacements in x-direction,$${x_1}\left( t \right) = A\sin \...
IIT-JEE 2011 Paper 2 Offline
The acceleration of this particle for $$|x| > {X_0}$$ is
IIT-JEE 2010 Paper 1 Offline
For periodic motion of small amplitude A, the time period T of this particle is proportional to
IIT-JEE 2010 Paper 1 Offline
If the total energy of the particle is E, it will perform periodic motion only if
IIT-JEE 2010 Paper 1 Offline
A simple pendulum has time period T1. The point of suspension is now moved upward according to the relation y = Kt2, (K ...
IIT-JEE 2005 Screening
A particle executes simple harmonic motion between x = - A to x = + A. The time taken for it to go from 0 to $${A \over ... IIT-JEE 2001 Screening The period of oscillation of a simple pendulum of length$$L$$suspended from the roof of a vehicle which moves without ... IIT-JEE 2000 Screening A particle free to move along the x-axis has potential energy given by$$U\left( x \right) = k\left[ {1 - \exp \left( { ...
IIT-JEE 1999 Screening
Two bodies M and N of equal masses are suspended from two separate massless springs of spring constant k1 and k2 respect...
IIT-JEE 1988
## Numerical
A 0.1 kg mass is suspended from a wire of negligible mass. The length of the wire is 1 m and its crosssectional area is ...
IIT-JEE 2010 Paper 1 Offline
## Fill in the Blanks
An object of mass 0.2 kg executes simple harmonic oscillation along the x-axis with a frequency of $$\left( {{{25} \over...
IIT-JEE 1994
| 2022-08-08 22:57:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8509979248046875, "perplexity": 2059.311982481734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00326.warc.gz"}
https://www.physicsforums.com/threads/virial-theorem-and-frictional-forces.259199/ | Virial Theorem and Frictional Forces
1. Sep 25, 2008
Old Guy
In the virial theorem, why do velocity-dependent frictional forces disappear? I've seen this stated a number of times, but never any explanation. (And, obviously, haven't been able to figure it out myself!)
2. Sep 25, 2008
olgranpappy
I don't know if it is true in general, but for forces which are directly proportional to the velocity the virial is
proportional to
$$\langle{\vec v}\cdot{\vec r}\rangle\;,$$
where $\langle\ldots\rangle$ is time averaging.
But that virial is the time average of a total derivative with respect to time: since ${\vec v}\cdot{\vec r} = \frac{1}{2}\frac{d}{dt}(r^2)$, it is the time average of the derivative of $r^2/2$. It is therefore zero if the quantity r is bounded and the averaging is done over a long time (or if the motion is periodic and the averaging is done over a period of the motion).
3. Sep 25, 2008
Old Guy
Thanks very much; can you expand a bit on your answer? Specifically, what does it mean for r to be "bounded", and how does it make the time average of a total time derivative go to zero? I understand about the "long time" bit, although even there it would seem to approach a limit of zero, and not actually equalling zero. Am I thinking about this right? Thanks again.
4. Sep 25, 2008
olgranpappy
I'll expand and expound:
First of all, I'm discussing a single classical particle.
The particle travels along a trajectory
$${\vec r}(t)$$
under the influence of some forces.
When I say that r is bounded I mean that I can choose a "bound"--some (possibly large) number (R) such that I always have
$$r(t)<R\;,$$
for all t. Put more simply, the particle never escapes off to infinity and is always within some finite (though perhaps large) distance from the origin. Also, because r is bounded, so too is r^2 bounded.
Next, I define the "time averaging" as
$$\langle\ldots\rangle\equiv \lim_{T\to\infty}\frac{1}{T}\int_0^Tdt(\ldots)$$
So then
$$\langle \frac{d r^2}{dt}\rangle =\lim_{T\to\infty}\frac{1}{T}\int_0^T\frac{d r^2}{dt}\,dt =\lim_{T\to\infty}\frac{r^2(T)-r^2(0)}{T}\;.$$
But that last quantity is necessarily less than
$$\lim_{T\to\infty} \frac{R^2}{T}=0\;.$$
(it's also necessarily greater than -R^2/T)
Thus, because the quantity r^2 was bounded, the time-average of the derivative of r^2 was zero. This is true for any bounded quantity.
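A quick numerical sanity check of this argument (a sketch of my own, not from the thread): simulate a bounded trajectory, here a 1-D damped harmonic oscillator, and watch the time average of d(x^2)/dt shrink roughly like 1/T.

```python
m, k, c = 1.0, 1.0, 0.1          # mass, spring constant, friction coefficient
dt = 1e-3                        # integration step

def time_average_ddt_x2(T):
    """Time average of d(x^2)/dt over [0, T] for m x'' = -k x - c x'."""
    x, v = 1.0, 0.0              # initial position and velocity
    total = 0.0
    for _ in range(int(T / dt)):
        total += 2.0 * x * v * dt          # accumulate d(x^2)/dt = 2 x v
        v += (-k * x - c * v) / m * dt     # symplectic Euler step
        x += v * dt
    return total / T             # equals (x(T)^2 - x(0)^2)/T up to O(dt)

for T in (10.0, 100.0, 1000.0):
    print(T, time_average_ddt_x2(T))       # magnitude decays roughly like 1/T
```
| 2017-06-25 09:04:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8491201400756836, "perplexity": 617.8563977583614}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320476.39/warc/CC-MAIN-20170625083108-20170625103108-00615.warc.gz"}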
https://www.answers.com/Q/Do_nuclear_power_plants_pollute_more_than_coal_power_plants |
# Do nuclear power plants pollute more than coal power plants?
Cyberlord123
Lvl 1
2011-03-07 16:01:11
No.
Nuclear power is more efficient because it works by splitting atoms, producing large bursts of energy, whereas coal power simply burns coal. Nuclear power uses uranium fission to create energy (electricity), whereas coal power burns coal and emits carbon.
(Mind you, nuclear energy leaves behind radioactive waste, though that is arguably easier to deal with for the time being. Not to mention that accidents at nuclear plants can have devastating environmental effects.)
Wiki User
2011-03-07 16:01:11 | 2023-03-25 19:48:18 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8509860634803772, "perplexity": 6067.909474170944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945372.38/warc/CC-MAIN-20230325191930-20230325221930-00610.warc.gz"} |
https://math.stackexchange.com/questions/455620/if-the-characteristic-polynomial-of-matrix-a-has-n-zero-roots-then-a-is-n?noredirect=1 | # If the characteristic polynomial of matrix $A$ has $n$ zero roots, then $A$ is nilpotent.
How can I prove the following statement:
If the characteristic polynomial of matrix $A$ has $n$ zero roots, then $A$ is nilpotent.
Thank you!
• Is $A$ an $n \times n$ matrix? Jul 30 '13 at 13:24
• See the answers to this question.
– jkn
Jul 30 '13 at 13:25
• Have you heard of the Cayley-Hamilton theorem? Jul 30 '13 at 13:26
If the characteristic polynomial has only zero roots, it has the form $\lambda^n$ (or $(-\lambda)^n$, depending on convention), so the characteristic equation is $\lambda^n = 0$. By the Cayley-Hamilton theorem, a square matrix $A$ satisfies its own characteristic equation (even if it is not diagonalizable); therefore $A^n = 0$ (the zero matrix), and hence $A$ is nilpotent.
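A quick numerical illustration of the Cayley-Hamilton argument (my own sketch, not part of the thread): a strictly upper-triangular matrix has all-zero eigenvalues, so its characteristic polynomial is $\lambda^3$, and indeed its third power vanishes.

```python
import numpy as np

# Strictly upper-triangular => all eigenvalues are zero,
# so the characteristic polynomial is lambda^3.
A = np.array([[0., 1., 2.],
              [0., 0., 3.],
              [0., 0., 0.]])

print(np.linalg.eigvals(A))          # [0. 0. 0.] (numerically)
print(np.linalg.matrix_power(A, 3))  # the 3x3 zero matrix: A is nilpotent
```
| 2021-10-23 17:45:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8169057369232178, "perplexity": 182.44072244918513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585737.45/warc/CC-MAIN-20211023162040-20211023192040-00217.warc.gz"}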
https://sites.google.com/site/infiniteanalysis10/ | Developments in Quantum Integrable Systems
~ RIMS Workshop "Developments in Quantum Integrable Systems" ~
June 14 -- 16, 2010
Research Institute for Mathematical Sciences, Room 111, Kyoto Univ.
Program (last update May 17)
June 14 (Mon)
9:30 -- 10:30 Tetsuji Miwa (Kyoto): Omega Trick in XXZ model
10:40 -- 11:40 Christian Korff (Glasgow): Quantum Integrable Systems and Enumerative Geometry I: a free fermion formulation of quantum cohomology
13:30 -- 14:30 Hidetoshi Awata (Nagoya): Five-dimensional AGT Relation, q-W Algebra and Deformed $\beta$-ensemble
14:40 -- 15:40 Yasushi Komori (Rikkyo): Multiple Bernoulli polynomials and multiple $L$-functions of root systems
16:00 -- 17:00 Masato Okado (Osaka): Stable rigged configurations and X=M for sufficiently large rank
June 15 (Tue)
9:30 -- 10:30 Christian Korff (Glasgow): Quantum Integrable Systems and Enumerative Geometry II: crystal limit and the WZNW fusion ring
10:40 -- 11:40 Rinat Kashaev (Geneve): Introduction to quantum Teichmuller theory
13:30 -- 14:30 Tomoki Nakanishi (Nagoya): Dilogarithm identities in conformal field theory and cluster algebras
14:40 -- 15:40 Roberto Tateo (Torino): The ODE/IM correspondence and its applications
16:00 -- 17:00 Tomohiro Sasamoto (Chiba): Height distributions of 1D Kardar-Parisi-Zhang equation
June 16 (Wed)
9:30 -- 10:30 Roberto Tateo (Torino): Thermodynamic Bethe Ansatz and the AdS/CFT correspondence
10:40 -- 11:40 Yuji Satoh (Tsukuba): Gauge/string duality and thermodynamic Bethe ansatz equations
13:30 -- 14:30 Rinat Kashaev (Geneve): Discrete Liouville equation and quantum Teichmuller theory
14:40 -- 15:40 Kentaro Nagao (Nagoya): Instanton counting with adjoint matters via affine Lie algebras (changed)
16:00 -- 17:00 Masahito Yamazaki (IPMU): Dimer, crystal, free fermion and wall-crossing
Abstract
Christian Korff (Glasgow)
(1) Quantum Integrable Systems and Enumerative Geometry I: a free fermion formulation of quantum cohomology
I will present a brief introduction to the quantum cohomology ring of the Grassmannian. It first appeared in works by Gepner, Vafa, Intriligator and Witten and a particular specialisation of it can be identified with the fusion ring of the gauged $\widehat{gl}(n)$-WZNW model. The talk will focus on how one derives known (geometric) results about the quantum cohomology ring in a simple combinatorial setting using well-known techniques from quantum integrable systems. For instance, performing a Jordan-Wigner transformation one derives the Vafa-Intriligator formula for Gromov-Witten invariants. The free fermion formalism also allows one to derive new results such as recursion relations for Gromov-Witten invariants and a fermion product formula.
(2) Quantum Integrable Systems and Enumerative Geometry II: crystal limit and the WZNW fusion ring
The second talk will focus on a closely related ring, the fusion ring of the $\hat{sl}(n)$-WZNW model. It will be discussed how this ring arises from the crystal limit of the $U_q\hat{sl}(2)$-vertex model with "infinite" spin and $n$ lattice sites. Its transfer matrix can be interpreted as the generating function of complete symmetric polynomials in a noncommutative alphabet, the generators of the affine plactic algebra. (The latter is an extension of the finite plactic algebra first introduced by Lascoux and Schützenberger.) Exploiting the Jacobi-Trudy formula one introduces noncommutative Schur polynomials and defines the fusion product in a purely combinatorial manner. In close analogy to the discussion of the quantum cohomology ring one derives the Verlinde formula for the fusion coefficients (the structure constants of the fusion ring) via the algebraic Bethe ansatz. I shall conclude by stating the precise relationship between the quantum cohomology and fusion ring in terms of a simple projection formula which directly relates Gromov-Witten invariants and fusion coefficients. The former count rational curves of finite degree, while the latter are dimensions of spaces of generalized theta functions over the Riemann sphere with three punctures.
For information on access to RIMS, check the RIMS home page.
Organizing committee: Michio Jimbo (Rikkyo), Atsuo Kuniba (Tokyo), Tetsuji Miwa (Kyoto), Tomoki Nakanishi (Nagoya), Masato Okado (Osaka), Yoshihiro Takeyama (Tsukuba) | 2017-10-18 01:02:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5161794424057007, "perplexity": 7669.422769202514}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822625.57/warc/CC-MAIN-20171017234801-20171018014801-00189.warc.gz"}
https://bitbucket.org/arigo/cpython-withatomic/src/eb78658d819f/Doc/libtime.tex?at=v1.5b2 | # cpython-withatomic / Doc / libtime.tex
\section{Built-in Module \sectcode{time}}
\label{module-time}
\bimodindex{time}

This module provides various time-related functions. It is always available.

An explanation of some terminology and conventions is in order.

\begin{itemize}
\item The ``epoch'' is the point where the time starts. On January 1st of that year, at 0 hours, the ``time since the epoch'' is zero. For \UNIX{}, the epoch is 1970. To find out what the epoch is, look at \code{gmtime(0)}.
\item UTC is Coordinated Universal Time (formerly known as Greenwich Mean Time). The acronym UTC is not a mistake but a compromise between English and French.
\item DST is Daylight Saving Time, an adjustment of the timezone by (usually) one hour during part of the year. DST rules are magic (determined by local law) and can change from year to year. The C library has a table containing the local rules (often it is read from a system file for flexibility) and is the only source of True Wisdom in this respect.
\item The precision of the various real-time functions may be less than suggested by the units in which their value or argument is expressed. E.g.\ on most \UNIX{} systems, the clock ``ticks'' only 50 or 100 times a second, and on the Mac, times are only accurate to whole seconds.
\item On the other hand, the precision of \code{time()} and \code{sleep()} is better than their \UNIX{} equivalents: times are expressed as floating point numbers, \code{time()} returns the most accurate time available (using \UNIX{} \code{gettimeofday()} where available), and \code{sleep()} will accept a time with a nonzero fraction (\UNIX{} \code{select()} is used to implement this, where available).
\item The time tuple as returned by \code{gmtime()} and \code{localtime()}, or as accepted by \code{mktime()}, is a tuple of 9 integers: year (e.g.\ 1993), month (1--12), day (1--31), hour (0--23), minute (0--59), second (0--59), weekday (0--6, monday is 0), Julian day (1--366) and daylight savings flag (-1, 0 or 1). Note that unlike the C structure, the month value is a range of 1--12, not 0--11. A year value less than 100 will typically be silently converted to 1900 plus the year value. A -1 argument as daylight savings flag, passed to \code{mktime()}, will usually result in the correct daylight savings state to be filled in.
\end{itemize}

The module defines the following functions and data items:

\renewcommand{\indexsubitem}{(in module time)}

\begin{datadesc}{altzone}
The offset of the local DST timezone, in seconds west of the 0th meridian, if one is defined. Negative if the local DST timezone is east of the 0th meridian (as in Western Europe, including the UK). Only use this if \code{daylight} is nonzero.
\end{datadesc}

\begin{funcdesc}{asctime}{tuple}
Convert a tuple representing a time as returned by \code{gmtime()} or \code{localtime()} to a 24-character string of the following form: \code{'Sun Jun 20 23:21:05 1993'}. Note: unlike the C function of the same name, there is no trailing newline.
\end{funcdesc}

\begin{funcdesc}{clock}{}
Return the current CPU time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of ``CPU time'', depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms.
\end{funcdesc}

\begin{funcdesc}{ctime}{secs}
Convert a time expressed in seconds since the epoch to a string representing local time. \code{ctime(t)} is equivalent to \code{asctime(localtime(t))}.
\end{funcdesc}

\begin{datadesc}{daylight}
Nonzero if a DST timezone is defined.
\end{datadesc}

\begin{funcdesc}{gmtime}{secs}
Convert a time expressed in seconds since the epoch to a time tuple in UTC in which the dst flag is always zero. Fractions of a second are ignored.
\end{funcdesc}

\begin{funcdesc}{localtime}{secs}
Like \code{gmtime} but converts to local time. The dst flag is set to 1 when DST applies to the given time.
\end{funcdesc}

\begin{funcdesc}{mktime}{tuple}
This is the inverse function of \code{localtime}. Its argument is the full 9-tuple (since the dst flag is needed --- pass -1 as the dst flag if it is unknown) which expresses the time in \emph{local} time, not UTC. It returns a floating point number, for compatibility with \code{time.time()}. If the input value can't be represented as a valid time, OverflowError is raised.
\end{funcdesc}

\begin{funcdesc}{sleep}{secs}
Suspend execution for the given number of seconds. The argument may be a floating point number to indicate a more precise sleep time.
\end{funcdesc}

\begin{funcdesc}{strftime}{format, tuple}
Convert a tuple representing a time as returned by \code{gmtime()} or \code{localtime()} to a string as specified by the format argument. The following directives, shown without the optional field width and precision specification, are replaced by the indicated characters:

\begin{tableii}{|c|p{24em}|}{code}{Directive}{Meaning}
\lineii{\%a}{Locale's abbreviated weekday name.}
\lineii{\%A}{Locale's full weekday name.}
\lineii{\%b}{Locale's abbreviated month name.}
\lineii{\%B}{Locale's full month name.}
\lineii{\%c}{Locale's appropriate date and time representation.}
\lineii{\%d}{Day of the month as a decimal number [01,31].}
\lineii{\%H}{Hour (24-hour clock) as a decimal number [00,23].}
\lineii{\%I}{Hour (12-hour clock) as a decimal number [01,12].}
\lineii{\%j}{Day of the year as a decimal number [001,366].}
\lineii{\%m}{Month as a decimal number [01,12].}
\lineii{\%M}{Minute as a decimal number [00,59].}
\lineii{\%p}{Locale's equivalent of either AM or PM.}
\lineii{\%S}{Second as a decimal number [00,61].}
\lineii{\%U}{Week number of the year (Sunday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Sunday are considered to be in week 0.}
\lineii{\%w}{Weekday as a decimal number [0(Sunday),6].}
\lineii{\%W}{Week number of the year (Monday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Monday are considered to be in week 0.}
\lineii{\%x}{Locale's appropriate date representation.}
\lineii{\%X}{Locale's appropriate time representation.}
\lineii{\%y}{Year without century as a decimal number [00,99].}
\lineii{\%Y}{Year with century as a decimal number.}
\lineii{\%Z}{Time zone name (or by no characters if no time zone exists).}
\lineii{\%\%}{\%}
\end{tableii}

Additional directives may be supported on certain platforms, but only the ones listed here have a meaning standardized by ANSI C. On some platforms, an optional field width and precision specification can immediately follow the initial \% of a directive in the following order; this is also not portable. The field width is normally 2 except for \%j where it is 3.
\end{funcdesc}

\begin{funcdesc}{time}{}
Return the time as a floating point number expressed in seconds since the epoch, in UTC. Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second.
\end{funcdesc}

\begin{datadesc}{timezone}
The offset of the local (non-DST) timezone, in seconds west of the 0th meridian (i.e. negative in most of Western Europe, positive in the US, zero in the UK).
\end{datadesc}

\begin{datadesc}{tzname}
A tuple of two strings: the first is the name of the local non-DST timezone, the second is the name of the local DST timezone. If no DST timezone is defined, the second string should not be used.
\end{datadesc}
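A small usage sketch of the functions documented above (my own example, not part of the original LaTeX source; it runs on a modern Python, where these 1.5-era names still exist):

```python
import time

now = time.time()                       # float seconds since the epoch, in UTC
tup = time.localtime(now)               # the 9-field time tuple described above
print(time.asctime(tup))                # 'Sun Jun 20 23:21:05 1993'-style string
print(time.strftime("%Y-%m-%d %H:%M:%S", tup))  # custom formatting directives
print(time.mktime(tup))                 # inverse of localtime(), returns a float
```
| 2016-02-13 16:24:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.519444465637207, "perplexity": 1061.7655822896158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701166739.77/warc/CC-MAIN-20160205193926-00335-ip-10-236-182-209.ec2.internal.warc.gz"}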
https://www.transtutors.com/questions/find-the-latest-10k-of-domino-s-pizza-and-please-answer-the-following-a-compute-the--1099791.htm | Find the latest 10k of Domino's Pizza and please answer the following: a) Compute the debt-to-asset.
Find the latest 10k of Domino's Pizza and please answer the following: a) Compute the debt-to-asset ratio for the last two years and explain its meaning. How did the debt position change over the last year? What were the sources of these changes? Would potential lenders prefer the debt to total assets ratio to be larger or smaller? Why?
| Particulars | Year ending Jan 3, 2010 | Year ending Jan 2, 2011 |
| --- | --- | --- |
| Total Assets | $453,761 | $460,837 |
| Total Liabilities | $1,774,755 | $1,671,488 |
| Debt-to-Asset Ratio = Total Debt / Total Assets | 3.91 times | 3.63 times |

The debt-to-asset ratio shows the proportion of a company’s assets that are financed through debt or borrowings. If the ratio is greater than one, it implies that the company’s assets are financed mostly through debt; if the ratio is less than one, it means that the company’s assets are financed mostly through equity. The current liabilities...
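A quick check of the arithmetic in the table above (a minimal sketch; figures in thousands of dollars, as reported):

```python
total_assets      = {2010: 453_761, 2011: 460_837}
total_liabilities = {2010: 1_774_755, 2011: 1_671_488}

for year in (2010, 2011):
    ratio = total_liabilities[year] / total_assets[year]
    print(year, round(ratio, 2))   # -> 3.91 and 3.63 times
```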
| 2018-09-24 17:08:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3320835828781128, "perplexity": 3883.255748962846}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160620.68/warc/CC-MAIN-20180924165426-20180924185826-00083.warc.gz"}
https://www.biostars.org/p/224440/#224851 | What are the best tools for differential detection between ATAC-seq samples?
6.0 years ago
I'm trying to figure out what tools most people use for differential chromatin detection among samples (condition1 vs condition2). I normally use diffReps for differential detection for ChIP-seq but haven't done it for ATAC. Any suggestions?
ATAC-seq next-gen • 14k views
You may have a look at this publication: Nfib Promotes Metastasis through a Widespread Increase in Chromatin Accessibility (doi:10.1016/j.cell.2016.05.052). They use ATAC-seq to compare chromatin accessibility between metastatic cells and the solid tumor-of-origin, and they use DESeq2.
I'll check it out thanks
6.0 years ago
1. Call the peaks by merging all the replicates (bam files) with a very lenient criterion (p-value of 0.1), such that you end up with the union of regions (peaks) across the two biological conditions that have some signal enrichment.
2. Now either use the entire peak, or center the peaks on the summit with +/- 200bp, and count the number of reads within each peak from all replicates independently using featureCounts. (You can easily convert the above bed file to SAF format:)
awk -v OFS="\t" 'BEGIN { print "PeakID", "Chr", "Start", "End", "Strand" } { print "Peak_"NR, $1, $2, $3, "." }' test.bed > test.saf
3. Feed this matrix to DESeq2 or edgeR and get differential peaks. Do some filtering of the peaks first, like removing peaks that have very few counts across replicates; otherwise, including so many peaks will be a problem for the multiple testing correction (a small filtering sketch follows the BED-to-SAF example below).
Edit: Most of the differential binding tools run either DESeq or edgeR in the back end, and they are suitable for TFs, as the signal is sharp without any noise. But with ATAC, the peaks have the characteristics of both TFs and histone modifications. I would:
1. Run macs2 with a p-value of 0.1.
2. Use the narrowPeaks file, either the entire peak (better) or +/- 200bp from the summit (only if peaks are called with the --summit option in MACS, so that broad peaks are split into multiple sub-peaks).
3. Quantify peaks using featureCounts.
4. Feed the matrix to DESeq2.
You should also upload this file (from step 2) to a genome browser to get a feel for how the regions that you are using for DE analysis look. Most likely they will be okay.
This is a great idea. Did you try this method against some published differential chromatin tools (e.g. DiffBind or diffReps)? This seems like a similar method to what DiffBind does; however, in the past I have noticed that peak-independent differential detection tools performed better with my data. Just curious if you've looked into it?
I updated my answer.
How many replicates do you normally have when doing this? Right now, our pilot data has only one replicate per condition.
What are these "peak independent differential detection tools"?
I used to use diffReps for ChIP data and I thought I could apply it to ATAC when starting out, but it did not work very well for me. I think because ATAC data tends to be much noisier than ChIP.
It's not because ATAC is noisier; it's because the open chromatin is not very dynamic unless the cell environment changes drastically. If you compare the ATAC profiles between two different cell types, you will see differential open chromatin regions, but you may not within the same cell type exposed to different environments. It could be purely biological, as ATAC measures the openness of chromatin but not the binding (as you would expect in ChIP-seq experiments).
Aka more biological noise in the signal than ChIP data.
It's not "noise".
How do you easily convert from BED to SAF?
test.bed:
chr1 123 456 .
awk -v OFS="\t" 'BEGIN {print "GeneID","Chr","Start","End","Strand"} { print "Peak_"NR,$1,$2,$3,"."}' test.bed
Output:
GeneID Chr Start End Strand
Peak_1 chr1 123 456 .
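A minimal sketch of the count-filtering step mentioned in the answer above (my own illustration; it assumes a hypothetical peaks-by-samples matrix counts.tsv exported from featureCounts, with the annotation columns already dropped):

```python
import pandas as pd

counts = pd.read_csv("counts.tsv", sep="\t", index_col=0)  # hypothetical path
min_total = 10                       # drop peaks with fewer than 10 reads overall
kept = counts[counts.sum(axis=1) >= min_total]
print(f"kept {len(kept)} of {len(counts)} peaks")
kept.to_csv("counts.filtered.tsv", sep="\t")  # feed this matrix to DESeq2/edgeR
```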
What if you have only a single replicate for each sample? I think DESeq2 requires replicates for each sample. I have ATAC-seq peaks for 8 time points of 4 conditions --> 32 samples in total. There are no replicates.
Kenneth.
I think that you have to learn about design formulas and design matrices, which DESeq2 and other packages use to describe experimental designs like yours. The topic is covered in the DESeq2 manual and many posts on the Bioconductor support forum, and is a bit too complex for me to introduce briefly here.
Thank you Charles...I'll start there.
Have you tried MANorm?
Hi Goutham,
How could I do your 3rd step: "3. Count the reads for each replicate."?
I have the BAM file and the narrowPeaks from macs2.
Thank you very much!
I use featureCounts from Rsubread, a Bioconductor package. There is a pretty good example on pages 5-7 of the vignette under Section 3, "Counting mapped reads for genomic features". You can also do this via HTSeq-count.
featureCounts should do the trick. It has an R interface and a command line version. You need to convert your peak coordinates to SAF format as given on their website.
Is there normally a lot of variability in your data? When I do a Spearman rank correlation, my samples are not very well correlated with each other (Spearman < 0.5), but the peaks look good among replicates. Is this abnormal?
Did you do a PCA analysis on your replicate samples?
So I started to draft up a response and then thought it might just be easier to send the data. I'll provide a link to a correlation heat map and MDS plot from edgeR. The problem we have had with our data is that our sequencing depth has varied pretty substantially, so this could have some effect (the link also includes read depth plots). Also, I should add that there are two replicates for each sample, but these are TECHNICAL replicates from the same dog. As for the factor we are running differential analysis on, it's gonna be based upon many factors; these dogs are just normal domestic dogs in which we are gonna check differences in sex, breed, etc.
http://rpubs.com/achits/285291
Hi geek_y! I am still a bit confused about how to call differential peaks with DESeq2, since after performing featureCounts I have several matrices and the rows of each matrix will not be the same. How can I combine these matrices into one when DESeq2 only takes one input?
I believe before doing featureCounts, you will somehow have to merge the peak calls, so that you end up with a set of "query" genomic regions (a small merge sketch is below). Then do featureCounts for these "query" regions across samples. This will make it easier to combine counts from all samples in the end... ready for DESeq2.
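A sketch of that merge step (my own illustration; in practice `bedtools merge` on the concatenated, sorted peak files does the same job):

```python
def merge_intervals(peaks):
    """peaks: (chrom, start, end) tuples pooled from every sample."""
    merged = []
    for chrom, start, end in sorted(peaks):
        if merged and merged[-1][0] == chrom and start <= merged[-1][2]:
            merged[-1][2] = max(merged[-1][2], end)   # extend the open region
        else:
            merged.append([chrom, start, end])        # start a new region
    return [tuple(m) for m in merged]

print(merge_intervals([("chr1", 100, 250), ("chr1", 200, 400), ("chr2", 5, 50)]))
# [('chr1', 100, 400), ('chr2', 5, 50)]
```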
You'll find better support on bioconductor for this question
4.5 years ago
Once you have your peaks for each condition, you can try DAStk, a tool recently put out for that purpose: | 2022-11-27 04:22:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23977501690387726, "perplexity": 3874.66143015223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710192.90/warc/CC-MAIN-20221127041342-20221127071342-00872.warc.gz"} |
https://statprep-workshops-2019.netlify.app/presentations/gaise-asa/ | # Guidelines from GAISE/ASA/AMATYC
## The Guidelines for Assessment and Instruction in Statistics Education
The GAISE report (2016) was produced by a committee of leading statistical educators and endorsed by the ASA and AMATYC. It gives specific recommendations and some examples about best practices in teaching statistics.
### Goals for intro stats
• Goal 1. Students should become critical consumers of statistically-based results reported in popular media, recognizing whether reported results reasonably follow from the study and analysis conducted.
• Goal 2. Students should be able to recognize questions for which the investigative process in statistics would be useful and should be able to answer questions using the investigative process.
• Goal 3. Students should be able to produce graphical displays and numerical summaries and interpret what graphs do and do not reveal.
• Goal 4. Students should recognize and be able to explain the central role of variability in the field of statistics.
• Goal 5. Students should recognize and be able to explain the central role of randomness in designing studies and drawing conclusions. Random assignment in comparative experiments allows direct cause-and- effect conclusions to be drawn while other data collection methods usually do not.
• Goal 6. Students should gain experience with how statistical models, including multivariable models, are used. While the details of these more complicated models may be beyond most introductory courses, it is important that students have an appreciation that the relationship between two variables may depend on other variables. Multivariable relationships, illustrating Simpson’s Paradox or investigated via multiple regression, help students discover that a two-way table or a simple regression line does not necessarily tell the entire (or even an accurate) story of the relationship between two variables.
• Goal 7. Students should demonstrate an understanding of, and ability to use, basic ideas of statistical inference, both hypothesis tests and interval estimation, in a variety of settings.
• Goal 8. Students should be able to interpret and draw conclusions from standard output from statistical software packages.
• Goal 9. Students should demonstrate an awareness of ethical issues associated with sound statistical practice.
### Main recommendations
• Recommendation 1: Teach statistical thinking.
• “We should model statistical thinking for our students throughout the course, rather than present students with a set of isolated tools, skills, and procedures.”
• Teach statistics as an investigative process of problem-solving and decision-making
• “Mentioning the investigative process at the beginning of the course but then treating various course topics in a compartmentalized manner does not help students to see the big picture. We recommend that throughout the entire introductory course, instructors illustrate the complete investigative cycle with every example/exercise presented, starting with the motivating question that led to the data collection and ending with the scope of conclusions and directions for future work.”
• Give students experience with multivariable thinking.
• Recommendation 2: Focus on conceptual understanding.
• Focus on students’ understanding of key concepts, illustrated by a few techniques, rather than covering a multitude of techniques with minimal focus on underlying ideas.
• Pare down content of an introductory course to focus on core concepts in more depth.
• Perform most computations using technology to allow greater emphasis on understanding concepts and interpreting results.
• Recommendation 3: Integrate real data with a context and a purpose.
• The report has 15 bullet points under this recommendation.
• Recommendation 4: Foster active learning.
• Recommendation 5: Use technology to explore concepts and analyze data.
• Recommendation 6: Use assessments to improve and evaluate student learning.
• “Assessments need to focus on understanding key ideas, and not just on skills, procedures, and computed answers.”
### What to leave out …
• Probability theory
• Constructing plots by hand
• Basic statistics
• Drills with z-, t-, $$\chi^2$$, and F-tables.
• Advanced training on a statistical software program.
### Technology and GAISE
1. Interactive Applets
• “Interactive applets can be used to emphasize important statistical concepts without being encumbered by lots of calculations.”
• “Applets work well with the query first method. This means that the students try to answer the conceptual questions first on their own and then again after using the applet.”
• Under “Future Direction of Applets and Interactive Visualizations,” the report emphasizes the Shiny system the StatPREP Little Apps use.
2. Statistical Software
3. Accessing Real Data online (Observational, Experimental, Survey)
4. Using Games and Other Virtual Environments
5. Real Time Response Systems
## Changing professional standards about p-values and significance
From Nature, March 2019, “Retire statistical significance”
“How do statistics so often lead scientists to deny differences that those not educated in statistics can plainly see? For several generations, researchers have been warned that a statistically non-significant result does not ‘prove’ the null hypothesis (the hypothesis that there is no difference between groups or no effect of a treatment on some measured outcome). Nor do statistically significant results ‘prove’ some other hypothesis. Such misconceptions have famously warped the literature with overstated claims and, less famously, led to claims of conflicts between studies where none exists.”
The American Statistician March 2019
“The ASA Statement on P-Values and Statistical Significance stopped just short of recommending that declarations of ‘statistical significance’ be abandoned. We take that step here. We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term ‘statistically significant’ entirely. Nor should variants such as ‘significantly different,’ ‘p < 0.05,’ and ‘nonsignificant’ survive, whether expressed in words, by asterisks in a table, or in some other way.
“In sum, ‘statistically significant’ — don’t say it and don’t use it.”
“Statistics education will require major changes at all levels to move to a post ‘p < 0.05’ world. We are excited that, with support from the ASA, the US Conference on Teaching Statistics (USCOTS) will focus its 2019 meeting on teaching inference.”
### What does this mean for teaching?
One area of consensus …
More emphasis on confidence intervals and effect size.
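To make that consensus concrete, here is an illustrative sketch (my own example, not from the cited articles) of reporting an effect size with a confidence interval rather than a bare significance verdict:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(10.0, 2.0, size=40)     # e.g. control group measurements
b = rng.normal(11.0, 2.0, size=40)     # e.g. treatment group measurements

diff = b.mean() - a.mean()
df = len(a) + len(b) - 2
sp = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / df)
se = sp * np.sqrt(1 / len(a) + 1 / len(b))
lo, hi = stats.t.interval(0.95, df, loc=diff, scale=se)  # 95% CI for the difference
d = diff / sp                                            # Cohen's d effect size

print(f"difference = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f}), d = {d:.2f}")
```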
The two articles on education from the TAS March 2019 volume
Content Audit for p-Value Principles in Introductory Statistics, Maurer, K., Hudiburgh, L., Werwinski, L., and Bailer J.
1. Evaluate the coverage of p-value principles in the introductory statistics course using rubrics or other systematic assessment guidelines.
2. Discuss and deploy improvements to curriculum coverage of p-value principles.
3. Meet with representatives from other departments, who have majors taking your statistics courses, to make sure that inference is being taught in a way that fits the needs of their disciplines.
4. Ensure that the correct interpretation of p-value principles is a point of emphasis for all faculty members and embedded within all courses of instruction.
Beyond Calculations: A Course in Statistical Thinking, Steel, A., Liermann, M., and Guttorp, P.
1. Design curricula to teach students how statistical analyses are embedded within a larger science life-cycle, including steps such as project formulation, exploratory graphing, peer review, and communication beyond scientists.
2. Teach the p-value as only one aspect of a complete data analysis.
3. Prioritize helping students build a strong understanding of what testing and estimation can tell you over teaching statistical procedures.
4. Explicitly teach statistical communication. Effective communication requires that students clearly formulate the benefits and limitations of statistical results.
5. Force students to struggle with poorly defined questions and real, messy data in statistics classes.
6. Encourage students to match the mathematical metric (or data summary) to the scientific question. Teaching students to create customized statistical tests for custom metrics allows statistics to move beyond the mean and pinpoint specific scientific questions. | 2021-04-15 13:24:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2957310080528259, "perplexity": 3440.874087114624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038085599.55/warc/CC-MAIN-20210415125840-20210415155840-00220.warc.gz"} |
http://www.journaltocs.hw.ac.uk/index.php?action=browse&subAction=subjects&publisherID=87&journalID=9796&pageb=5 |
Metrika [SJR: 0.605] [H-I: 30] Hybrid journal (It can contain Open Access articles) ISSN (Print) 0026-1335 - ISSN (Online) 1435-926X Published by Springer-Verlag
• Conditional empirical likelihood for quantile regression models
• Authors: Wu Wang; Zhongyi Zhu
Pages: 1 - 16
Abstract: In this paper, we propose a new Bayesian quantile regression estimator using conditional empirical likelihood as the working likelihood function. We show that the proposed estimator is asymptotically efficient and the confidence interval constructed is asymptotically valid. Our estimator has low computation cost since the posterior distribution function has explicit form. The finite sample performance of the proposed estimator is evaluated through Monte Carlo studies.
PubDate: 2017-01-01
DOI: 10.1007/s00184-016-0588-6
Issue No: Vol. 80, No. 1 (2017)
• Robust feature screening for varying coefficient models via quantile partial correlation
• Authors: Xiang-Jie Li; Xue-Jun Ma; Jing-Xiao Zhang
Pages: 17 - 49
Abstract: This article is concerned with feature screening for varying coefficient models with ultrahigh-dimensional predictors. We propose a new sure independence screening method based on quantile partial correlation (QPC-SIS), which is quite robust against outliers and heavy-tailed distributions. Then we establish the sure screening property for the QPC-SIS, and conduct simulations to examine its finite sample performance. The results of simulation study indicate that the QPC-SIS performs better than other methods like sure independent screening (SIS), sure independent ranking and screening, distance correlation-sure independent screening, conditional correlation sure independence screening and nonparametric independent screening, which shows the validity and rationality of QPC-SIS.
PubDate: 2017-01-01
DOI: 10.1007/s00184-016-0589-5
Issue No: Vol. 80, No. 1 (2017)
• Shrinkage estimation of the linear model with spatial interaction
• Authors: Yueqin Wu; Yan Sun
Pages: 51 - 68
Abstract: The linear model with spatial interaction has attracted huge attention in the past several decades. Different from most existing research, which focuses on its estimation, we study its variable selection problem using the adaptive lasso. Our results show that the method can identify the true model consistently, and the resulting estimator can be as efficient as the oracle estimator, which is obtained when the zero coefficients in the model are known. Simulation studies show that the proposed methods perform very well.
PubDate: 2017-01-01
DOI: 10.1007/s00184-016-0590-z
Issue No: Vol. 80, No. 1 (2017)
• On the residual lifetime of coherent systems with heterogeneous components
• Authors: P. Samadi; M. Rezaei; M. Chahkandi
Pages: 69 - 82
Abstract: The residual lifetime is of significant interest in reliability and survival analysis. In this article, we obtain a mixture representation for the reliability function of the residual lifetime of a coherent system with heterogeneous components in terms of the reliability functions of residual lifetimes of order statistics. Some stochastic comparisons are made on the residual lifetimes of the systems. Some examples are also given to illustrate the main results.
PubDate: 2017-01-01
DOI: 10.1007/s00184-016-0591-y
Issue No: Vol. 80, No. 1 (2017)
• Asymptotics of self-weighted M-estimators for autoregressive models
• Authors: Xinghui Wang; Shuhe Hu
Pages: 83 - 92
Abstract: In this paper, we consider a stationary autoregressive AR(p) time series $$y_t=\phi _0+\phi _1y_{t-1}+\cdots +\phi _{p}y_{t-p}+u_t$$ . A self-weighted M-estimator for the AR(p) model is proposed. The asymptotic normality of this estimator is established, which includes the asymptotic properties under the innovations with finite or infinite variance. The result generalizes and improves the known one in the literature.
PubDate: 2017-01-01
DOI: 10.1007/s00184-016-0592-x
Issue No: Vol. 80, No. 1 (2017)
• Fiducial inference in the classical errors-in-variables model
• Authors: Liang Yan; Rui Wang; Xingzhong Xu
Pages: 93 - 114
Abstract: For the slope parameter of the classical errors-in-variables model, existing interval estimations with finite length will have confidence level equal to zero because of the Gleser–Hwang effect. Especially when the reliability ratio is low and the sample size is small, the Gleser–Hwang effect is so serious that it leads to very liberal coverage and unacceptable lengths of the existing confidence intervals. In this paper, we obtain two new fiducial intervals for the slope. One is based on a fiducial generalized pivotal quantity and we prove that this interval has the correct asymptotic coverage. The other fiducial interval is based on the method of the generalized fiducial distribution. We also construct these two fiducial intervals for the other parameters of interest of the classical errors-in-variables model and introduce these intervals to a hybrid model. Then, we compare these two fiducial intervals with the existing intervals in terms of empirical coverage and average length. Simulation results show that the two proposed fiducial intervals have better frequency performance. Finally, we provide a real data example to illustrate our approaches.
PubDate: 2017-01-01
DOI: 10.1007/s00184-016-0593-9
Issue No: Vol. 80, No. 1 (2017)
• Robust Dickey–Fuller tests based on ranks for time series with additive outliers
• Authors: V. A. Reisen; C. Lévy-Leduc; M. Bourguignon; H. Boistard
Pages: 115 - 131
Abstract: In this paper the unit root tests proposed by Dickey and Fuller (DF) and their rank counterpart suggested by Breitung and Gouriéroux (J Econom 81(1): 7–27, 1997) (BG) are analytically investigated under the presence of additive outlier (AO) contaminations. The results show that the limiting distribution of the former test is outlier dependent, while the latter one is outlier free. The finite sample size properties of these tests are also investigated under different scenarios of testing contaminated unit root processes. In the empirical study, the alternative DF rank test suggested in Granger and Hallman (J Time Ser Anal 12(3): 207–224, 1991) (GH) is also considered. In Fotopoulos and Ahn (J Time Ser Anal 24(6): 647–662, 2003), these unit root rank tests were analytically and empirically investigated and compared to the DF test, but with outlier-free processes. Thus, the results provided in this paper complement the studies of the previous works, but in the context of time series with additive outliers. Like the DF and GH unit root tests, the BG test is shown to be sensitive to AO contaminations, but less severely. In practical situations where there would be a suspicion of additive outliers, the general conclusion is that the DF and GH unit root tests should be avoided; however, the BG approach can still be used.
PubDate: 2017-01-01
DOI: 10.1007/s00184-016-0594-8
Issue No: Vol. 80, No. 1 (2017)
• Bayesian estimation based on ranked set sample from Morgenstern type bivariate exponential distribution when ranking is imperfect
• Authors: Manoj Chacko
Abstract: In this paper we consider Bayes estimation based on ranked set sampling when ranking is imperfect, in which units are ranked based on measurements made on an easily and exactly measurable auxiliary variable X which is correlated with the study variable Y. Bayes estimators under squared error loss function and LINEX loss function for the mean of the study variate Y, when (X, Y) follows a Morgenstern type bivariate exponential distribution, are obtained based on both the usual ranked set sample and the extreme ranked set sample. Estimation procedures developed in this paper are illustrated using simulation studies and real data.
PubDate: 2016-12-08
DOI: 10.1007/s00184-016-0607-7
• An ergodic theorem for proportions of observations that fall into random sets determined by sample quantiles
• Authors: Anna Dembińska
Abstract: Assume that a sequence of observations $$(X_n; n\ge 1)$$ forms a strictly stationary process with an arbitrary univariate cumulative distribution function. We investigate almost sure asymptotic behavior of proportions of observations in the sample that fall into a random region determined by a given Borel set and a sample quantile. We provide sufficient conditions under which these proportions converge almost surely and describe the law of the limiting random variable.
PubDate: 2016-12-07
DOI: 10.1007/s00184-016-0606-8
• On the shape of the cross-ratio function in bivariate survival models induced by truncated and folded normal frailty distributions
• Authors: Steffen Unkel
Abstract: In shared frailty models for bivariate survival data the frailty is identifiable through the cross-ratio function (CRF), which provides a convenient measure of association for correlated survival variables. The CRF may be used to compare patterns of dependence across models and data sets. We explore the shape of the CRF for the families of one-sided truncated normal and folded normal frailty distributions.
PubDate: 2016-12-07
DOI: 10.1007/s00184-016-0608-6
• Acceleration of the stochastic search variable selection via componentwise Gibbs sampling
• Authors: Hengzhen Huang; Shuangshuang Zhou; Min-Qian Liu; Zong-Feng Qi
Abstract: The stochastic search variable selection proposed by George and McCulloch (J Am Stat Assoc 88:881–889, 1993) is one of the most popular variable selection methods for linear regression models. Many efforts have been proposed in the literature to improve its computational efficiency. However, most of these efforts change its original Bayesian formulation, thus the comparisons are not fair. This work focuses on how to improve the computational efficiency of the stochastic search variable selection while leaving its original Bayesian formulation unchanged. The improvement is achieved by developing a new Gibbs sampling scheme different from that of George and McCulloch (J Am Stat Assoc 88:881–889, 1993). A remarkable feature of the proposed Gibbs sampling scheme is that it samples the regression coefficients from their posterior distributions in a componentwise manner, so that the expensive computation of the inverse of the information matrix, which is involved in the algorithm of George and McCulloch (J Am Stat Assoc 88:881–889, 1993), can be avoided. Moreover, since the original Bayesian formulation remains unchanged, the stochastic search variable selection using the proposed Gibbs sampling scheme shall be as efficient as that of George and McCulloch (J Am Stat Assoc 88:881–889, 1993) in terms of assigning large probabilities to those promising models. Some numerical results support these findings.
PubDate: 2016-11-22
DOI: 10.1007/s00184-016-0604-x
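The key computational point in the abstract above is that each regression coefficient is updated one at a time from a univariate full conditional, so no information-matrix inversion is needed. The sketch below is a hypothetical minimal illustration for a Gaussian linear model with fixed variances `sigma2` and `tau2` (assumed parameters); it is not the authors' full SSVS algorithm, which also samples the inclusion indicators.

```python
import numpy as np

def componentwise_gibbs(X, y, n_iter=1000, sigma2=1.0, tau2=10.0, seed=0):
    """Gibbs sampling for beta in y = X beta + noise, one coordinate at a time.

    Each beta_j is drawn from its univariate Gaussian full conditional,
    so no p-by-p information-matrix inversion is ever required.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = np.einsum("ij,ij->j", X, X)       # precomputed x_j' x_j
    resid = y - X @ beta                       # running residual
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        for j in range(p):
            resid += X[:, j] * beta[j]         # remove j's current contribution
            v = 1.0 / (col_ss[j] / sigma2 + 1.0 / tau2)  # conditional variance
            m = v * (X[:, j] @ resid) / sigma2           # conditional mean
            beta[j] = rng.normal(m, np.sqrt(v))
            resid -= X[:, j] * beta[j]         # restore with the new value
        draws[it] = beta
    return draws
```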
• Efficient paired choice designs with fewer choice pairs
• Authors: Aloke Dey; Rakhi Singh; Ashish Das
Abstract: For paired choice experiments, two new construction methods of designs are proposed for the estimation of the main effects. In many cases, these designs require about 30–50% fewer choice pairs than the existing designs and at the same time have reasonably high D-efficiencies for the estimation of the main effects. Furthermore, as against the existing efficient designs, our designs have higher D-efficiencies for the same number of choice pairs.
PubDate: 2016-11-17
DOI: 10.1007/s00184-016-0605-9
• Imputation based statistical inference for partially linear quantile
regression models with missing responses
• Authors: Peixin Zhao; Xinrong Tang
Abstract: In this paper, we consider the confidence interval construction for partially linear quantile regression models with missing response at random. We propose an imputation based empirical likelihood method to construct confidence intervals for the parametric components and the nonparametric components, and show that the proposed empirical log-likelihood ratios are both asymptotically Chi-squared in theory. Then, the confidence region for the parametric component and the pointwise confidence interval for the nonparametric component are constructed. Some simulation studies and a real data application are carried out to assess the performance of the proposed estimation method, and simulation results indicate that the proposed method is workable.
PubDate: 2016-11-01
DOI: 10.1007/s00184-016-0586-8
• Qualitative robustness of estimators on stochastic processes
• Authors: Katharina Strohriegl; Robert Hable
Abstract: A lot of statistical methods originally designed for independent and identically distributed (i.i.d.) data are also successfully used for dependent observations. Still most theoretical investigations on robustness assume i.i.d. pairs of random variables. We examine an important property of statistical estimators—the qualitative robustness in the case of observations which do not fulfill the i.i.d. assumption. In the i.i.d. case qualitative robustness of a sequence of estimators is, according to Hampel (Ann Math Stat 42:1887–1896, 1971), ensured by continuity of the corresponding statistical functional. A similar result for the non-i.i.d. case is shown in this article. Continuity of the corresponding statistical functional still ensures qualitative robustness of the estimator as long as the data generating process satisfies a certain convergence condition on its empirical measure. Examples for processes providing such a convergence condition, including certain Markov chains or mixing processes, are given as well as examples for qualitatively robust estimators in the non-i.i.d. case.
PubDate: 2016-11-01
DOI: 10.1007/s00184-016-0582-z
• Tree based diagnostic procedures following a smooth test of
goodness-of-fit
• Authors: Gilles R. Ducharme; Walid Al Akhras
Abstract: This paper introduces a statistical procedure, to be applied after a goodness-of-fit test has rejected a null model, that provides diagnostic information to help the user decide on a better model. The procedure goes through a list of departures, each being tested by a local smooth test. The list is organized into a hierarchy by seeking answers to the questions "Where is the problem?" and "What is the problem there?". This hierarchy allows one to focus on finer departures as the data become more abundant. The procedure controls the family-wise Type 1 error rate. Simulations show that the procedure can succeed in providing useful diagnostic information.
PubDate: 2016-11-01
DOI: 10.1007/s00184-016-0585-9
• A test of linearity in partial functional linear regression
• Authors: Ping Yu; Zhongzhan Zhang; Jiang Du
Abstract: This paper investigates the hypothesis test of the parametric component in partial functional linear regression. We propose a test procedure based on the residual sums of squares under the null and alternative hypothesis, and establish the asymptotic properties of the resulting test. A simulation study shows that the proposed test procedure has good size and power with finite sample sizes. Finally, we present an illustration through fitting the Berkeley growth data with a partial functional linear regression model and testing the effect of gender on the height of kids.
PubDate: 2016-11-01
DOI: 10.1007/s00184-016-0584-x
• AR(1) model with skew-normal innovations
• Authors: M. Sharafi; A. R. Nematollahi
Abstract: In this paper, we consider an autoregressive model of order one with skew-normal innovations. We propose several methods for estimating the parameters of the model and derive the limiting distributions of the estimators. Then, we study some statistical properties and the regression behavior of the proposed model. Finally, we provide a Monte Carlo simulation study for comparing performance of estimators and consider a real time series to illustrate the applicability of the proposed model.
PubDate: 2016-11-01
DOI: 10.1007/s00184-016-0587-7
• Nonparametric estimation in a mixed-effect Ornstein–Uhlenbeck model
• Authors: Charlotte Dion
Abstract: Two adaptive nonparametric procedures are proposed to estimate the density of the random effects in a mixed-effect Ornstein–Uhlenbeck model. First a kernel estimator is introduced with a new bandwidth selection method developed recently by Goldenshluger and Lepski (Ann Stat 39:1608–1632, 2011). Then, we adapt an estimator from Comte et al. (Stoch Process Appl 7:2522–2551, 2013) in the framework of small time interval of observation. More precisely, we propose an estimator that uses deconvolution tools and depends on two tuning parameters to be chosen in a data-driven way. The selection of these two parameters is achieved through a two-dimensional penalized criterion. For both adaptive estimators, risk bounds are provided in terms of integrated $$\mathbb {L}^2$$ -error. The estimators are evaluated on simulations and show good results. Finally, these nonparametric estimators are applied to neuronal data and are compared with previous parametric estimations.
PubDate: 2016-11-01
DOI: 10.1007/s00184-016-0583-y
• A study on the conditional inactivity time of coherent systems
• Authors: S. Goli; M. Asadi
Abstract: The study of inactivity times is useful in evaluating the aging and reliability properties of coherent systems in reliability engineering. In the present paper, we investigate the inactivity time of a coherent system consisting of n i.i.d. components. We derive some mixture representations for the reliability function of conditional inactivity times of coherent systems under two specific conditions on the status of the system components. Some ageing and stochastic properties of the proposed conditional inactivity times are also explored.
PubDate: 2016-10-25
DOI: 10.1007/s00184-016-0600-1
• Reliability parameters estimation for parallel systems under imperfect
repair
• Authors: Soumaya Ghnimi; Soufiane Gasmi; Arwa Nasr
Abstract: We consider in this paper a parallel system consisting of $$\eta$$ identical components. Each component works independently of the others and has a Weibull distributed inter-failure time. When the system fails, we assume that the repair maintenance is imperfect according to the Arithmetic Reduction of Age models ($$ARA_{m}$$) proposed by Doyen and Gaudoin. The purpose of this paper is to generate simulated failure data for the whole system in order to forecast the behavior of the failure process. Besides, we estimate the maintenance efficiency and the reliability parameters of an imperfect repair following $$ARA_{m}$$ models using the maximum likelihood estimation method. Our method is tested with several data sets available from related sources. The real data set corresponds to the times between failures of a compressor, which are tested by the likelihood ratio test (LR). An analysis of the importance and the effect of the memory order of the imperfect repair classes ($$ARA_{m}$$) will be discussed using the LR test.
PubDate: 2016-10-22
DOI: 10.1007/s00184-016-0603-y
| 2017-01-19 10:49:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5433791279792786, "perplexity": 1430.030207901137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00290-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://cs.stackexchange.com/questions/55888/about-the-interpretation-of-the-sos-hardness-results-of-the-planted-max-clique-p | # About the interpretation of the SOS hardness results of the planted Max-Clique problem
One can look at these two papers http://arxiv.org/abs/1502.06590 and http://arxiv.org/abs/1507.05136 and see their main theorems. If I understand right, then both these papers are talking about the planted Max-Clique problem, and the second one improves the detection threshold from $$n^{1/3}$$ in the first paper to $$n^{1/2}$$.
• Does any of these results imply or do they implicitly contain the hardness threshold for say just the Max-Clique problem w.r.t even just the degree-4 SOS?
Or are both these results totally specific to the planted problem and say nothing about the Max-Clique?
If I look at, say, Corollary 2.1 (bottom of page 10) in http://arxiv.org/pdf/1502.06590v1.pdf, then this seems to be a hardness threshold for just the Max-Clique problem, but I am not feeling very sure since the authors never put it in these same words. Am I missing something? The same can be said about Theorem 1.1 (page 2) in the second paper.
If one looks at Section 2.4 in the second paper, where they construct the SDP witness, it feels that their construction would make sense only in the "planted" setting; then, contrary to my impression from Theorem 1.1, I start feeling that the result applies only with planting.
• You might be interested in this: eccc.hpi-web.de/report/2016/058. – Yuval Filmus Apr 12 '16 at 22:39
• Yes. I saw this within minutes of when it was uploaded today afternoon. Wonder what is the next question to answer here! – gradstudent Apr 13 '16 at 2:30
## 1 Answer
The planted clique problem and the maximum clique problem are different problems, though the former was introduced as an average case version of the latter. A paper addressing the planted clique problem doesn't address the maximum clique problem, and vice versa.
In particular, the papers you mention concern the planted clique problem, not the maximum clique problem.
• Yes! But do you see why Theorem 2.1 in the first and Theorem 1.1 in the second look like theorems about Max-Clique rather than about the planted thing? These seem to be pretty general statements with no reference to the planting. For Max-Clique, since the SDP doesn't change from the one in these papers, why doesn't the same certificate as constructed in these papers also imply the same thresholds for the Max-Clique problem? – gradstudent Apr 13 '16 at 2:21
• The results you mention are all probabilistic. They only hold with high probability. – Yuval Filmus Apr 13 '16 at 5:33
• Yes. But isn't that enough? What do you think is missing? – gradstudent Apr 13 '16 at 13:42
• The witnesses only exist with high probability over the choice of the graph. The results don't say anything about arbitrary (adversarial) graphs. – Yuval Filmus Apr 13 '16 at 14:29
• If one knows that with high probability (even just non-zero probability) the hard graphs exist then isn't it immediately obvious that one can always pick those hard graphs adversarially to trip SOS? (...ofcourse I guess what is open is to see if the $\sqrt{n}$ can be improved to the UGC bound of Max-Clique for 4 or higher degree SOS..) – Anirbit Apr 13 '16 at 21:06 | 2020-10-31 14:27:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.822139322757721, "perplexity": 546.4791745733413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107918164.98/warc/CC-MAIN-20201031121940-20201031151940-00583.warc.gz"} |
http://www.gutenberg.us/articles/eng/Finite_difference_method |
Finite difference method
In mathematics, finite-difference methods (FDM) are numerical methods for solving differential equations by approximating them with difference equations, in which finite differences approximate the derivatives. FDMs are thus discretization methods.
Today, FDMs are the dominant approach to numerical solutions of partial differential equations.[1]
Derivation from Taylor's polynomial
First, assuming the function whose derivatives are to be approximated is properly behaved, by Taylor's theorem we can create a Taylor series expansion
$$f(x_0 + h) = f(x_0) + \frac{f'(x_0)}{1!}h + \frac{f^{(2)}(x_0)}{2!}h^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}h^n + R_n(x),$$
where $$n!$$ denotes the factorial of $$n$$, and $$R_n(x)$$ is a remainder term, denoting the difference between the Taylor polynomial of degree $$n$$ and the original function. We will derive an approximation for the first derivative of the function $$f$$ by first truncating the Taylor polynomial:
$$f(x_0 + h) = f(x_0) + f'(x_0)h + R_1(x).$$
Setting $$x_0 = a$$, we have
$$f(a+h) = f(a) + f'(a)h + R_1(x).$$
Dividing across by $$h$$ gives
$$\frac{f(a+h)}{h} = \frac{f(a)}{h} + f'(a) + \frac{R_1(x)}{h}.$$
Solving for $$f'(a)$$:
$$f'(a) = \frac{f(a+h)-f(a)}{h} - \frac{R_1(x)}{h}.$$
Assuming that $$R_1(x)$$ is sufficiently small, the approximation of the first derivative of $$f$$ is
$$f'(a) \approx \frac{f(a+h)-f(a)}{h}.$$
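As a concrete illustration of this last formula, here is a minimal Python sketch (the test function $$\sin x$$, the point $$a = 1$$, and the step sizes are arbitrary choices made for the example):

```python
import math

def forward_diff(f, a, h):
    """Forward-difference approximation of f'(a)."""
    return (f(a + h) - f(a)) / h

# Example: f(x) = sin(x), whose exact derivative at a is cos(a).
a = 1.0
for h in (0.1, 0.01, 0.001):
    approx = forward_diff(math.sin, a, h)
    print(h, approx, abs(approx - math.cos(a)))  # error shrinks roughly like h
```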
Accuracy and order
The error in a method's solution is defined as the difference between the approximation and the exact analytical solution. The two sources of error in finite difference methods are round-off error, the loss of precision due to computer rounding of decimal quantities, and truncation error or discretization error, the difference between the exact solution of the finite difference equation and the exact quantity assuming perfect arithmetic (that is, assuming no round-off).
The finite difference method relies on discretizing a function on a grid.
To use a finite difference method to approximate the solution to a problem, one must first discretize the problem's domain, which is usually done by dividing the domain into a uniform grid. Note that this means that finite-difference methods produce sets of discrete numerical approximations to the derivative, often in a "time-stepping" manner.
An expression of general interest is the local truncation error of a method. Typically expressed using Big-O notation, local truncation error refers to the error from a single application of a method. That is, it is the quantity $$f'(x_i) - f'_i$$ if $$f'(x_i)$$ refers to the exact value and $$f'_i$$ to the numerical approximation. The remainder term of a Taylor polynomial is convenient for analyzing the local truncation error. Using the Lagrange form of the remainder from the Taylor polynomial for $$f(x_0 + h)$$, which is
$$R_n(x_0 + h) = \frac{f^{(n+1)}(\xi)}{(n+1)!} h^{n+1}, \qquad x_0 < \xi < x_0 + h,$$
the dominant term of the local truncation error can be discovered. For example, again using the forward-difference formula for the first derivative, and knowing that $$f(x_i) = f(x_0 + ih)$$,
$$f(x_0 + ih) = f(x_0) + f'(x_0)\,ih + \frac{f''(\xi)}{2!} (ih)^{2},$$
and with some algebraic manipulation, this leads to
$$\frac{f(x_0 + ih) - f(x_0)}{ih} = f'(x_0) + \frac{f''(\xi)}{2!} ih,$$
and further noting that the quantity on the left is the approximation from the finite difference method and that the quantity on the right is the exact quantity of interest plus a remainder, clearly that remainder is the local truncation error. A final expression of this example and its order is
$$\frac{f(x_0 + ih) - f(x_0)}{ih} = f'(x_0) + O(h).$$
This means that, in this case, the local truncation error is proportional to the step size. The quality and duration of a simulated FDM solution depend on the choice of discretization equation and the step sizes (time and space steps). The data quality and the simulation duration increase significantly with smaller step sizes.[2] Therefore, a reasonable balance between data quality and simulation duration is necessary for practical usage. Large time steps are favourable for increasing simulation speed in practice; however, too large a time step may create instabilities and affect the data quality.[3][4]
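This first-order behaviour can also be checked empirically; in the illustrative sketch below, halving $$h$$ should roughly halve the error, i.e. the observed order should be close to 1:

```python
import math

def forward_diff(f, a, h):
    return (f(a + h) - f(a)) / h

a, exact = 1.0, math.cos(1.0)
errs = [abs(forward_diff(math.sin, a, 0.1 / 2**k) - exact) for k in range(6)]
orders = [math.log2(errs[k] / errs[k + 1]) for k in range(5)]
print(orders)  # entries close to 1.0, i.e. first-order accuracy O(h)
```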
The von Neumann method (Fourier stability analysis) is usually applied to determine the numerical model stability.[3][4][5][6]
Example: ordinary differential equation
For example, consider the ordinary differential equation
$$u'(x) = 3u(x) + 2.$$
The Euler method for solving this equation uses the finite difference quotient
$$\frac{u(x+h) - u(x)}{h} \approx u'(x)$$
to approximate the differential equation by first substituting it in for $$u'(x)$$, then applying a little algebra (multiplying both sides by $$h$$, and then adding $$u(x)$$ to both sides) to get
$$u(x+h) = u(x) + h(3u(x)+2).$$
The last equation is a finite-difference equation, and solving this equation gives an approximate solution to the differential equation.
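A minimal sketch of this marching procedure (the initial condition $$u(0) = 1$$ and the step size are assumptions made for the example):

```python
import math

def euler(u0, h, steps):
    """March u(x+h) = u(x) + h*(3*u(x) + 2) forward from u(0) = u0."""
    u = u0
    for _ in range(steps):
        u = u + h * (3 * u + 2)
    return u

# The exact solution of u' = 3u + 2 with u(0) = 1 is u(x) = (5/3)e^{3x} - 2/3.
approx = euler(u0=1.0, h=0.001, steps=1000)   # approximates u(1)
exact = (5.0 / 3.0) * math.exp(3.0) - 2.0 / 3.0
print(approx, exact)                          # close, and closer as h shrinks
```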
Example: The heat equation
Consider the normalized heat equation in one dimension, with homogeneous Dirichlet boundary conditions:
$$U_t = U_{xx},$$
$$U(0,t) = U(1,t) = 0 \quad \text{(boundary condition)},$$
$$U(x,0) = U_0(x) \quad \text{(initial condition)}.$$
One way to numerically solve this equation is to approximate all the derivatives by finite differences. We partition the domain in space using a mesh $$x_0, \dots, x_J$$ and in time using a mesh $$t_0, \dots, t_N$$. We assume a uniform partition both in space and in time, so the difference between two consecutive space points will be $$h$$ and between two consecutive time points will be $$k$$. The quantities
$$u_j^n \approx u(x_j, t_n)$$
will represent the numerical approximations of $$u(x_j, t_n)$$.
Explicit method
The stencil for the most common explicit method for the heat equation.
Using a forward difference at time $$t_n$$ and a second-order central difference for the space derivative at position $$x_j$$ (FTCS) we get the recurrence equation:
$$\frac{u_{j}^{n+1} - u_{j}^{n}}{k} = \frac{u_{j+1}^n - 2u_{j}^n + u_{j-1}^n}{h^2}.$$
This is an explicit method for solving the one-dimensional heat equation.
We can obtain $$u_j^{n+1}$$ from the other values this way:
$$u_{j}^{n+1} = (1-2r)u_{j}^{n} + ru_{j-1}^{n} + ru_{j+1}^{n},$$
where $$r = k/h^2$$.
So, with this recurrence relation, and knowing the values at time $$n$$, one can obtain the corresponding values at time $$n+1$$. The boundary values $$u_0^n$$ and $$u_J^n$$ must be replaced by the boundary conditions; in this example they are both 0.
This explicit method is known to be numerically stable and convergent whenever $$r \le 1/2$$.[7] The numerical errors are proportional to the time step and the square of the space step:
$$\Delta u = O(k) + O(h^2).$$
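A compact sketch of this explicit update (the grid size, the choice $$r = 0.4 \le 1/2$$, and the initial profile are illustrative assumptions):

```python
import numpy as np

def heat_ftcs(u0, r, n_steps):
    """Explicit FTCS: u_j^{n+1} = (1-2r) u_j^n + r (u_{j-1}^n + u_{j+1}^n),
    with homogeneous Dirichlet boundaries u_0 = u_J = 0."""
    u = u0.copy()
    for _ in range(n_steps):
        u[1:-1] = (1 - 2 * r) * u[1:-1] + r * (u[:-2] + u[2:])
        u[0] = u[-1] = 0.0
    return u

J = 50
x = np.linspace(0.0, 1.0, J + 1)
u0 = np.sin(np.pi * x)                 # assumed initial condition U_0(x)
u = heat_ftcs(u0, r=0.4, n_steps=200)  # r = k/h^2 = 0.4 <= 1/2: stable
```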
Implicit method
The implicit method stencil.
If we use the backward difference at time $$t_{n+1}$$ and a second-order central difference for the space derivative at position $$x_j$$ (the Backward Time, Centered Space method, "BTCS") we get the recurrence equation:
$$\frac{u_{j}^{n+1} - u_{j}^{n}}{k} = \frac{u_{j+1}^{n+1} - 2u_{j}^{n+1} + u_{j-1}^{n+1}}{h^2}.$$
This is an implicit method for solving the one-dimensional heat equation.
We can obtain $$u_j^{n+1}$$ from solving a system of linear equations:
$$(1+2r)u_j^{n+1} - ru_{j-1}^{n+1} - ru_{j+1}^{n+1} = u_{j}^{n}.$$
The scheme is always numerically stable and convergent but usually more numerically intensive than the explicit method, as it requires solving a system of linear equations at each time step. The errors are linear over the time step and quadratic over the space step:
$$\Delta u = O(k) + O(h^2).$$
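A sketch of a single BTCS step; it assembles the tridiagonal system above and, purely for clarity, solves it with a dense solver (a banded solver would be preferable in practice):

```python
import numpy as np

def heat_btcs_step(u, r):
    """One implicit BTCS step: solve A u^{n+1} = u^n on the interior points,
    where A is tridiagonal with 1+2r on the diagonal and -r off it."""
    m = len(u) - 2                                   # interior points
    A = (np.diag((1 + 2 * r) * np.ones(m))
         + np.diag(-r * np.ones(m - 1), 1)
         + np.diag(-r * np.ones(m - 1), -1))
    new = u.copy()
    new[1:-1] = np.linalg.solve(A, u[1:-1])          # boundaries stay 0
    return new
```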
Crank–Nicolson method
Finally, if we use the central difference at time $$t_{n+1/2}$$ and a second-order central difference for the space derivative at position $$x_j$$ ("CTCS") we get the recurrence equation:
$$\frac{u_j^{n+1} - u_j^{n}}{k} = \frac{1}{2} \left(\frac{u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1}}{h^2}+\frac{u_{j+1}^{n} - 2u_j^{n} + u_{j-1}^{n}}{h^2}\right).$$
This formula is known as the Crank–Nicolson method.
The Crank–Nicolson stencil.
We can obtain $$u_j^{n+1}$$ from solving a system of linear equations:
$$(2+2r)u_j^{n+1} - ru_{j-1}^{n+1} - ru_{j+1}^{n+1} = (2-2r)u_j^n + ru_{j-1}^n + ru_{j+1}^n.$$
The scheme is always numerically stable and convergent but usually more numerically intensive, as it requires solving a system of linear equations at each time step. The errors are quadratic over both the time step and the space step:
$$\Delta u = O(k^2) + O(h^2).$$
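And a matching sketch of one Crank–Nicolson step, reusing the same tridiagonal structure (again a toy dense-solver version for illustration):

```python
import numpy as np

def heat_cn_step(u, r):
    """One Crank-Nicolson step: (2+2r)I - r*S applied implicitly on the left,
    (2-2r)I + r*S explicitly on the right, where S shifts to the neighbours."""
    m = len(u) - 2
    S = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    A = (2 + 2 * r) * np.eye(m) - r * S      # implicit (left-hand) matrix
    B = (2 - 2 * r) * np.eye(m) + r * S      # explicit (right-hand) matrix
    new = u.copy()
    new[1:-1] = np.linalg.solve(A, B @ u[1:-1])
    return new
```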
Usually the Crank–Nicolson scheme is the most accurate scheme for small time steps. The explicit scheme is the least accurate and can be unstable, but is also the easiest to implement and the least numerically intensive. The implicit scheme works the best for large time steps.
References
7. Crank, J. The Mathematics of Diffusion. 2nd Edition, Oxford, 1975, p. 143.
• K.W. Morton and D.F. Mayers, Numerical Solution of Partial Differential Equations, An Introduction. Cambridge University Press, 2005.
• Autar Kaw and E. Eric Kalu, Numerical Methods with Applications (2008). Contains a brief, engineering-oriented introduction to FDM (for ODEs) in Chapter 08.07.
• Randall J. LeVeque, Finite Difference Methods for Ordinary and Partial Differential Equations, SIAM, 2007. | 2020-08-06 22:13:19 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8240713477134705, "perplexity": 1222.6246612447414}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737039.58/warc/CC-MAIN-20200806210649-20200807000649-00516.warc.gz"} |
https://research.nsu.ru/ru/publications/measurement-of-photonjet-transverse-momentum-correlations-in-502- | # Measurement of photon–jet transverse momentum correlations in 5.02 TeV Pb + Pb and pp collisions with ATLAS
The ATLAS collaboration
Research output: Contribution to journal › article › peer review
22 Citations (Scopus)
## Abstract
Jets created in association with a photon can be used as a calibrated probe to study energy loss in the medium created in nuclear collisions. Measurements of the transverse momentum balance between isolated photons and inclusive jets are presented using integrated luminosities of 0.49 nb$$^{-1}$$ of Pb + Pb collision data at $$\sqrt{s_{NN}} = 5.02$$ TeV and 25 pb$$^{-1}$$ of pp collision data at $$\sqrt{s} = 5.02$$ TeV recorded with the ATLAS detector at the LHC. Photons with transverse momentum $$63.1 < p_T^{\gamma} < 200$$ GeV and $$|\eta^{\gamma}| < 2.37$$ are paired with all jets in the event that have $$p_T^{\text{jet}} > 31.6$$ GeV and pseudorapidity $$|\eta^{\text{jet}}| < 2.8$$. The transverse momentum balance given by the jet-to-photon $$p_T$$ ratio, $$x$$, is measured for pairs with azimuthal opening angle $$\Delta\phi > 7\pi/8$$. Distributions of the per-photon jet yield as a function of $$x$$, $$(1/N_\gamma)(dN/dx)$$, are corrected for detector effects via a two-dimensional unfolding procedure and reported at the particle level. In pp collisions, the distributions are well described by Monte Carlo event generators. In Pb + Pb collisions, the $$x$$ distribution is modified from that observed in pp collisions with increasing centrality, consistent with the picture of parton energy loss in the hot nuclear medium. The data are compared with a suite of energy-loss models and calculations.
Original language: English · Pages: 167–190 · Number of pages: 24 · Journal: Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics · Volume: 789 · DOI: https://doi.org/10.1016/j.physletb.2018.12.023 · Publication status: Published - 10 Feb. 2019
## Fingerprint
Detailed information on the research topics of 'Measurement of photon–jet transverse momentum correlations in 5.02 TeV Pb + Pb and pp collisions with ATLAS'. Together they form a unique semantic fingerprint. | 2023-03-21 02:46:50 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8261217474937439, "perplexity": 7632.585591795432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943589.10/warc/CC-MAIN-20230321002050-20230321032050-00709.warc.gz"} |
https://dsp.meta.stackexchange.com/questions/1389/scipy-questions-support/1390#1390 | # Scipy questions/support?
I assume that some or even many here use SciPy for DSP.
What do you use for support? I've found SciPy-user mailing list, but it doesn't seem that active.
Is dsp.stackexchange.com useful/suitable for SciPy questions?
As @Bookend says, there is an active Stackoverflow tag of scipy that you should use for debugging questions on that site.
If there are lists of resources that we should compile here, then it looks like Stackoverflow itself allows some of them. See this and this for example. However, I don't think scipy quite falls into the category of C and C++.
Generally, programming questions are considered as being off-topic. We are trying to be language-agnostic in here. Strict programming questions belong to Stack Overflow.
Obviously there are some exceptions, when source code is a nice thing to have. Then one might consider writing a working example in the language of their choice (MATLAB/Python/R).
My other answer should shed some light on it. | 2021-11-30 19:24:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.521756112575531, "perplexity": 1280.1860834995152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359065.88/warc/CC-MAIN-20211130171559-20211130201559-00363.warc.gz"} |
https://crypto.stackexchange.com/questions/89497/use-slow-hashing-to-reduce-digest-size/89498#89498 | # Use slow hashing to reduce digest size?
I have seen this question on MD5 replacement for 128 bit digests. It is said numerous times that having a 128 bit digest is impossible today because finding a collision would only require $$2^{64}$$ operations, which is within the means of big organisations nowadays. However, I think that these answers miss a fairly important point.
The time to bruteforce a hash function does not only depend on the number of security bits of its digest, but it also depends on the time needed to compute a single hash. This is why slow hashing functions such as bcrypt (184 bit) can afford to have a shorter digest size compared to fast hashing ones such as SHA-256 (256 bit).
In this case, would it not be possible to have a collision-resistant 128-bit slow-hashing function to replace MD5?
• First of all, don't use any hashes that aren't collision resistant in creating the slow hash; if you want to have fewer bits, you crop the hash after all computations are done. Strengthening doesn't add too many bits to the security - it's just linear after all, and the problem with creating a generic hash function is that once a collision is found it can be reused. Apr 20 at 10:20
• Thank you for your answer. What do you call strengthening? Besides, let's consider a 128 bit fast random oracle $o$ (not MD5 as it has weaknesses, maybe the first 128 bits of SHA-256). Could this slow hashing function with $n$ rounds be considered secure: h=msg; for i in 1..n: h=o(h || i); endfor; return h;? Is it still possible to reuse collisions in this case? And as I said in the comment of kelalaka's answer, I don't think reiterating the same fast hash function multiple times is the only way of creating a slow hash function. Apr 20 at 11:47
Would it not be possible to have a collision-resistant 128-bit slow-hashing function to replace MD5?
That's possible.
We could use Argon2 parameterized for 128-bit output and (say) 10 ms computation on a Raspberry Pi 3. If something could speed this up a hundredfold, and we parallelize on 1 million units, there's <40% chance of finding a collision with $$2^{128/2}/10^{10}/86400/365.25\approx58$$ years of computation.
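Spelling that estimate out (just re-doing the arithmetic of the formula above):

```python
ops = 2 ** 64           # generic birthday attack: ~2^(128/2) evaluations
per_second = 10 ** 10   # 10^6 units x 100x speedup x 100 evals/s (10 ms each)
years = ops / per_second / 86400 / 365.25
print(round(years))     # -> 58
```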
That would replace MD5 in some applications where its 128-bit size matters, collision-resistance is essential, but not speed for small inputs nor 512-bit block size (e.g. HMAC-MD5, but this one is not broken as far as we know).
Not coincidentally, that's current best practice for password hashing.
Note: I have simplified this answer after discovering Argon2 version 0x13 already includes a fast hash as the first step, namely Blake2b.
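For concreteness, here is one way such a parameterization might look with the argon2-cffi Python package; the salt constant, the cost settings, and the function name `slow_hash_128` are illustrative assumptions, not a vetted recommendation:

```python
from argon2.low_level import hash_secret_raw, Type

SALT = b"fixed-public-salt"  # one published constant: we hash messages, not passwords

def slow_hash_128(message: bytes) -> bytes:
    """128-bit Argon2id digest, deliberately expensive to evaluate."""
    return hash_secret_raw(
        secret=message,
        salt=SALT,
        time_cost=4,            # iterations
        memory_cost=64 * 1024,  # in KiB
        parallelism=1,
        hash_len=16,            # 16 bytes = 128 bits
        type=Type.ID,
    )
```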
• Yes, using slow hashes designed for passwords will be the solution for the OP. Apr 20 at 17:54
Bcrypt is a password hashing function like PBKDF2, scrypt, and Argon2, and in password hashing collisions are not important; pre-images are important.
If you just iterate $$\operatorname{MD5}^n(x)=\operatorname{MD5}(\operatorname{MD5}(\cdots(\operatorname{MD5}(x))\cdots))$$ n times, then we will have an already well-known problem. A collision in the inner MD5 is a collision for $$\operatorname{MD5}^n$$, therefore simple iteration is not secure and does not really even slow down collision finding: just find a collision for $$\operatorname{MD5}(x)$$ and you have a collision. In other words, the cost of finding a collision is not affected!
An easy fix for the case $$n=2$$ is $$\operatorname{MD5}^2(x)=\operatorname{MD5}(\operatorname{MD5}(x) \,\|\, x)$$ or similar approaches.
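A sketch of that $$n=2$$ construction with Python's standard hashlib, purely to illustrate the feed-forward of $$x$$ (MD5 itself of course remains broken):

```python
import hashlib

def md5_squared(x: bytes) -> bytes:
    """MD5^2(x) = MD5(MD5(x) || x): feeding x forward means an inner
    collision alone no longer collides the outer hash."""
    inner = hashlib.md5(x).digest()
    return hashlib.md5(inner + x).digest()
```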
We don't need doubling MD5 or SHA-1 to improve security, we just need new hash functions like SHA3 and the very fast one Blake2b.
Finally, we want cryptographic hash functions to be secure and fast, not slow. Slowness is required in password hashing.
update for the comment
It turns out that hashing with MD5 is required for the identification. In this case, the pre-image attack is more important if the public keys are considered to be kept secret. In the pre-image attack, given a hash value $$h$$ we are looking for an $$x'$$ such that $$h = \operatorname{MD5}(x')$$. The $$x'$$ may be the original $$x$$ such that $$h = \operatorname{MD5}(x)$$, or not. If the attackers seek the original, they need to search more. MD5 has only one known pre-image attack, and its practical cost is not faster than the generic pre-image search, which takes $$2^{128}$$ time.
Collisions here are only relevant for you insofar as you don't want two or more users who have the same identity.
So you can use a modern hash function with a trimmed (truncated) output instead of MD5.
• Thank you for your answer. Yes, finding a collision for one of the inner $\operatorname{MD5}$ would result in a collision for $\operatorname{MD5^n}$, but iterating a fast hashing function is not the only way to construct a slow hashing function, you gave an example yourself. SHA3 and Blake2b have a digest size at least equal to 256 bit, I was wondering if we could construct a secure 128 bit hash function by today's standards, I don't really care about the speed. Apr 20 at 10:14
• In theory, you can have one; however, the cost of finding a collision with the generic birthday attack is still $2^{64}$ with 50% probability. Why do you need 128-bit output? Apr 20 at 10:44
• I would like a 128 bit output because user public keys of my system will be hashed to get their identifier, and I want the shortest identifiers possible. Ideally, I would like identifiers to be remembered by humans. Of course, it would not be the raw bit string that would be remembered, but maybe a hex or b64 encoding of it, or a hash visualization. Yes, a birthday attack would only need $2^{64}$ operations on average, but as it will be slow hashing, it would still be computationally infeasible for large entities to break the hash. Apr 20 at 10:46
• Aren't public keys public? Anyway, what you seek is not collision resistance, it is pre-image resistance, so that no one can extract $x$ from $MD5(x)$. The pre-image attack on MD5 is in practice not faster than the generic attack, which has cost $2^{128}$. Apr 20 at 10:53
• Nope! collision is finding arbitrary $x$ and $y$ such that $H(x)=H(y)$. What you describe is a multi-preimage attack. In this case, the parallelized rainbow table is the key to find one. To mitigate just use a random salt per password. Apr 20 at 11:47 | 2021-09-25 03:34:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31911376118659973, "perplexity": 964.7870988438852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057589.14/warc/CC-MAIN-20210925021713-20210925051713-00054.warc.gz"} |
https://www.repository.cam.ac.uk/browse?type=author&sort_by=1&order=ASC&rpp=20&etal=-1&value=Wells%2C+G+N&offset=0 | Now showing items 1-11 of 11
• #### Analysis of a finite element formulation for modelling phase separation
(Springer, 2007)
The Cahn-Hilliard equation is of importance in materials science and a range of other fields. It represents a diffuse interface model for simulating the evolution of phase separation in solids and fluids, and is a nonlinear ...
• #### A $C^0$ discontinuous Galerkin formulation for thin shells
(2007)
This paper presents a $C^0$ discontinuous Galerkin formulation for the simulation of thin shells. The method is based on Koiter's shell model and allows finite element solutions to be obtained by using standard $C^0$ Lagrange ...
• #### Computer code in support of the manuscript "Phase field model for coupled displacive and diffusive microstructural processes under thermal loading"
(2011)
Supporting material for the article: Maraldi, M., Wells, G. N., and Molari, L. (2011). Phase field model for coupled displacive and diffusive microstructural processes under thermal loading. Journal of the Mechanics and ...
• #### Discontinuous modelling of strain localisation and failure
(2001-06-12)
The computational simulation of failure in solids poses many challenges. A proper understanding of how structures respond under loading, both before and past the peak load, is important for safe and economical constructions. ...
• #### DOLFIN: Automated Finite Element Computing
(2009)
We describe here a library aimed at automating the solution of partial differential equations using the finite element method. By employing novel techniques for automated code generation, the library combines a high level ...
• #### An interface element based on the partition of unity
(Delft University of Technology, 2001)
An alternative interface finite element is developed. By using the partition of unity property of finite element shape functions, discontinuous shape functions are added to the standard finite element basis. The interface ...
• #### Representations of finite element tensors via automated code generation
(2009-02-26)
We examine aspects of the computation of finite element matrices and vectors which are made possible by automated code generation. Given a variational form in a syntax which resembles standard mathematical notation, the ...
• #### Supporting material for the paper 'Analysis of an interface stabilised finite element method: The advection-diffusion-reaction equation'
(2010-10-20)
Supporting computer code for the paper 'Analysis of an interface stabilised finite element method: The advection-diffusion-reaction equation', by Garth N. Wells.
• #### Supporting material for the paper 'Analysis of an interface stabilised finite element method: The advection-diffusion-reaction equation'
(2009-10-28)
This solver is in support of the paper 'Analysis of an interface stabilised finite element method: The advection-diffusion-reaction equation', by Garth N. Wells
• #### Supporting material for the paper 'Automated Modelling of Evolving Discontinuities'
(2009-07-23)
This computer code is in support of the paper 'Automated Modelling of Evolving Discontinuities' by Mehdi Nikbakht and Garth N. Wells in the journal Algorithms.
• #### Supporting material for the paper 'Optimisations for quadrature representations of finite element tensors through automated code generation'
(2009-07-20) | 2018-02-26 03:44:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3039734661579132, "perplexity": 3614.8596044760607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891817999.51/warc/CC-MAIN-20180226025358-20180226045358-00630.warc.gz"} |
http://aramram.com/forum/8739c3-propanal-mass-spectrum | If you assume the sample fragments into the same amount of each ion, then the detector is going to show a higher-intensity peak at 29, as two fragments have this $$M_r$$. The infra-red spectrum can also be used to show a chemical reaction has been successful, where a functional group is changed, for example with the oxidation of an alcohol. The populations of the whole range of mass numbers of interest can be determined by plotting the rate of ion collection as a function of the magnetic field of the analyzing magnet. Incorrect molecular weights will be obtained if the positive ion, $$M^+$$, becomes fragmented before it reaches the collector, or if two fragments combine to give a fragment heavier than $$M^+$$. The two spectra are shown below and it is clear that a change of functional group has taken place. We will illustrate this with the following simple example. If the resolution of the instrument is sufficiently high, quite exact masses can be measured, which means that ions with $$m/e$$ values differing by one part in 50,000 can be distinguished. For $$n$$ carbons, we expect, $\frac{\text{abundance of} \left( M + 1 \right)^+}{\text{abundance of} \: M^+} = n \times \% \ce{^{13}C} \: \text{abundance}/100$ If the measured $$\left( M + 1 \right)^+/M^+$$ ratio is 6.6:100, then \begin{align} \frac{6.6}{100} &= n \times 1.1/100 \\ n &= 6 \end{align} This ion is presumably the tert-butyl cation, and the alternate cleavage to the less stable ethyl cation with $$m/e = 29$$ is much less significant. The symmetric stretching in carbon dioxide does not absorb in the infrared region, as the centres of partial positive and negative charge are unaltered. A chemist would look at the key peaks to determine the type of compound, and then refer to the infra-red records of that type of compound to identify the specific compound.
| 2021-03-03 11:42:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8657174110412598, "perplexity": 1680.2379731452359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366959.54/warc/CC-MAIN-20210303104028-20210303134028-00214.warc.gz"} |
https://dsp.stackexchange.com/questions/35191/what-is-the-total-impulse-response-in-a-system-with-feedback-interconnection | # What is the total impulse response in a system with feedback interconnection?
Let's assume that we have a system with a typical feedback interconnection where the output is given by the following equation:
$$y(t) = \left(x(t) - z(t)\right) \star h_{1}(t) \tag{1}$$
where: $$z(t) = y(t) \star h_{2}(t) \tag{2}$$
So, is $h_{1}(t)$ the system's total impulse response or the convolution between $h_{1}(t)$ and $h_{2}(t)$ after replacing $z(t)$ in the equation $(1)$ while using the associative property?
It is easier to work in the $s$-domain:
$$Z=YH_2$$ $$Y=(X-Z)H_1$$ Hence, $$Y=(X-YH_2)H_1=XH_1-YH_1H_2\Rightarrow Y(1+H_1H_2)=XH_1$$ Therefore, $$H(s)=\frac{Y(s)}{X(s)}=\frac{H_1(s)}{1+H_1(s)H_2(s)}$$ which is called the closed-loop transfer function. The closed-loop impulse response can be found by inverse Laplace transform. | 2020-02-25 16:16:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9376111030578613, "perplexity": 166.04197691888768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146123.78/warc/CC-MAIN-20200225141345-20200225171345-00066.warc.gz"} |
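A quick symbolic check of the algebra in the answer above (an illustrative sketch using SymPy):

```python
import sympy as sp

H1, H2, X, Y = sp.symbols('H1 H2 X Y')

# Solve Y = (X - Y*H2)*H1 for Y, then form the closed-loop transfer function.
y_expr = sp.solve(sp.Eq(Y, (X - Y * H2) * H1), Y)[0]
H = sp.simplify(y_expr / X)
print(H)  # H1/(H1*H2 + 1)
```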
http://zbmath.org/?q=an:1190.37022 | # zbMATH — the first resource for mathematics
Algebraic characterization of the isometries of the hyperbolic 5-space. (English) Zbl 1190.37022
The author obtains an algebraic characterization of the dynamical types of the orientation-preserving isometries of the hyperbolic 5-space $$\mathbb{H}^5$$. The characterization is described by using the representation of the isometries of $$\mathbb{H}^5$$ as $$2 \times 2$$ matrices over the quaternions, $$\mathrm{GL}(2, \mathbb{H})$$, together with the natural embedding of $$\mathrm{GL}(2, \mathbb{H})$$ into $$\mathrm{GL}(4, \mathbb{C})$$. The determination of the conjugacy classes and the $$z$$-classes in $$\mathrm{GL}(2, \mathbb{H})$$ is also given.
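For orientation, the natural embedding mentioned above is usually realized by the following standard construction (stated here for context; the precise conventions in the paper may differ). Writing $$A \in \mathrm{GL}(2, \mathbb{H})$$ as $$A = A_1 + A_2 j$$ with complex $$2 \times 2$$ blocks $$A_1, A_2$$, one maps

```latex
\Psi(A_1 + A_2 j) =
\begin{pmatrix}
A_1 & A_2 \\
-\overline{A_2} & \overline{A_1}
\end{pmatrix}
\in \mathrm{GL}(4, \mathbb{C}).
```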
##### MSC:
37C85 Dynamics of group actions other than $$\mathbb{Z}$$ and $$\mathbb{R}$$, and foliations
51M10 Hyperbolic and elliptic geometries (general) and generalizations
15A33 Matrices over special rings
| 2014-04-17 18:49:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 13, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7247741222381592, "perplexity": 5587.951887892097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://openstudy.com/updates/4ff44b44e4b01c7be8c7a128 | ## abdul_shabeer Group Title Does a point have no dimension? 2 years ago 2 years ago
1. mathslover: Yes, a point is dimensionless.
2. zepp: 0D = dot; 1D = line; 2D = plane; 3D = solid; 4D = space-time; 5D = ...idk :P
3. abdul_shabeer: Then how does it make a line?
4. zepp: What do you mean?
5. zepp: . <-- This is not a point, it's just a representation, because a point has no height, no length, no thickness.
6. zepp: One dimension: a line, numbers; they have only a length.
7. zepp: 2D: shapes, length and width. 3D: solids, length, width and thickness.
8. abdul_shabeer: What is the definition of a line?
9. zepp: 4D spacetime: a 3-dimensional object travelling/moving through time and space.
10. zepp: Straight objects without depth and width.
11. abdul_shabeer: Is it made of many points?
12. zepp: Um, you can see it that way :)
13. abdul_shabeer: When a point has no dimension, then how can a collection of points make a line?
14. zepp: A point has no length, depth, width; when you align them, you just created a length.
15. TuringTest: Take a point and send it in some direction through space; the path that it traces out is a line. The real number line is infinitely dense, so now we are getting into some of the trickiness of infinity.
16. TuringTest: [drawing] Here is the real number line (still a line, consisting of infinitely many points). I can ask you to pick out the point 1.
17. TuringTest: [drawing] And there it is. I can also ask you to find numbers between numbers, so if I ask you to find 1.5, it should be on the line.
18. abdul_shabeer: My doubt is: when a point has zero dimension, then how can it make a line that has one dimension?
19. TuringTest: [drawing] Similarly, I can keep asking you to find points between the two we have already found. 1.75 must be on the line; 1.85, 1.8, 1.8243574687654365768..., etc. must all be on the line. Therefore there are infinitely many points on a line.
20. TuringTest: Between any two points there are infinitely many numbers, do you agree?
21. TuringTest: Like between 1 and 2, are there not an infinity of numbers?
22. abdul_shabeer: Though there are infinitely many points, each has zero length, zero breadth, etc.
23. TuringTest: But you are trying to add an infinity of infinitely tiny lengths! $$\infty\cdot\frac{1}{\infty}=\text{undefined}$$ You can't treat infinity like a number and perform addition on it; it is a concept, rather than a number.
24. TuringTest: It would require an infinity of points lined up in a row, and they still would have no length. This may seem paradoxical, but it reflects the infinite density of number distribution on the real line, so it is not trivial.
25. abdul_shabeer: Wouldn't 0+0+0+0... be zero?
26. TuringTest: No, because $$0\cdot\infty$$ is undefined.
27. TuringTest: Again, I know this does not make instinctive sense; infinity is a tricky topic that has driven many mathematicians mad!
29. abdul_shabeer
When 0*infinity is undefined, then how does it produce a length?
30. TuringTest
Because you don't have to build a line one point at a time; it's a mathematical construct that contains infinitely many points.
31. TuringTest
If there were some finite number of points that you could put together to make a line, then the real number line would not be infinitely dense (i.e. some points would be missing from the line, because there would have to be a limit on the number of times I can tell you to find the midpoint between two points).
32. TuringTest
[drawing] Take any line segment; I want the midpoint (whatever it is).
33. TuringTest
[drawing] Now I want the midpoint of the right half.
34. TuringTest
[drawing] Now again, the midpoint of the right half.
35. TuringTest
[drawing] We could do this forever, right? That means there are infinitely many points on that line; I can always ask you to find the midpoint between any two points.
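A tiny sketch of that midpoint game in Python (my illustration, not part of the original thread): every pass produces a brand-new point, and nothing ever stops the process.

```python
# repeatedly take the midpoint of the right half of [0, 1]
a, b = 0.0, 1.0
for _ in range(8):
    mid = (a + b) / 2
    print(mid)      # 0.5, 0.75, 0.875, ... approaching 1 but never reaching it
    a = mid         # the "right half" becomes the new segment
```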
36. TuringTest
Yet even though it is an infinity of points, they only add up to a finite line segment, so an infinity of things can add to something finite. (Reference to Zeno's paradox.)
37. abdul_shabeer
What does "though it is an infinity of points, they only add up to a finite line segment" mean?
38. zepp
TuringTest just showed that there's an infinite amount of points between two points, but when this infinity adds up, it gives something finite, something you can just count on your fingers.
40. abdul_shabeer
How can infinite things add up and give some finite value?
41. nbouscal
It is important to understand here that points and lines are simply geometric realizations of abstract concepts. As mentioned earlier, a . is not a point, nor is a drawing of a line actually a line. A point takes up no space, so it can't be shown; similarly, a line can't be shown because it has no height, and even a plane can't really be shown, because it has no depth. So, when you're asking these questions, it is best to avoid relying too heavily on your geometric interpretation of the concepts. Instead, it is better to look at them using the tools of analysis. TuringTest is referring to some relevant results of analysis, for example, the uncountability of the real line. Not only is the real line infinite in the number of points, it is uncountably infinite. This ends up implying that the number of points between 0 and 1 is actually the same as the number of points on the entire real line. So, to interpret this somewhat geometrically, you can zoom in as close as you would like to the real line and still be looking at the same number of points. Of course, this doesn't work like any real-world object. Another relevant way of looking at the real line is through the lens of topology. We can talk about the real line as a metric space, and we define the metric abstractly: we simply define how distance works. One sees by learning some introductory topology that the real line is actually just one way to look at the set of real numbers, and there are many others. So, this is another way to look at the problem.
42. TuringTest
In answer to how an infinity of things can add up to something finite: do you know what an infinite series is, @abdul_shabeer?
43. abdul_shabeer
No @TuringTest
44. zepp
The sum of all terms of the geometric sequence that halves each time would be the perfect example :D
45. TuringTest
$\sum_{n=1}^\infty(\frac{1}{2})^n=\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+...=1$ When you study them this may make a little more sense; the one above can be said to represent Zeno's paradox.
46. nbouscal
In general, we don't have an infinity of things simply "adding up" to a finite value. We have an infinity of things approaching a limit that has a finite value.
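A quick numerical illustration of that series (my sketch, not from the thread): the partial sums of $\sum_{n\ge 1}(1/2)^n$ climb toward 1 without ever exceeding it.

```python
# partial sums of 1/2 + 1/4 + 1/8 + ... : infinitely many positive terms,
# yet the running total never passes the finite limit 1
total = 0.0
for n in range(1, 21):
    total += 0.5 ** n
    if n % 5 == 0:
        print(n, total)   # 5 0.96875, 10 0.99902..., 15 0.99996..., 20 0.99999...
```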
47. asnaseer
Hey guys - I'm at work at the moment but spotted this discussion and remembered a thread I saw that might help explain this to you @abdul_shabeer: http://mathforum.org/library/drmath/view/55297.html Hope it helps - back to work now... :)
48. UnkleRhaukus
[drawing] | 2014-08-22 00:02:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7149834036827087, "perplexity": 1128.299378039159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500822053.47/warc/CC-MAIN-20140820021342-00108-ip-10-180-136-8.ec2.internal.warc.gz"}
https://crypto.stackexchange.com/questions/42970/nash-cryptosystem/42972 | # Nash cryptosystem
In 1955, Nash proposed a cryptosystem in declassified handwritten letters sent to the National Security Agency. The letters also include a conjecture which is equivalent to the famous $P \ne NP$ conjecture. I am not an expert in cryptography and I want to know the encryption and decryption functions in modern language.
What is the formal description of Nash's cryptosystem according to modern cryptography? What is the implicit hardness assumption used to provide security?
Update: Shamir and Zinger published a paper that describes an efficient plaintext attack on Nash's cryptosystem (pointed out by @deviantfan).
• Did you read the letters? – deviantfan Jan 13 '17 at 23:47
• @deviantfan the letter predates modern cryptography and hence understanding it is very hard for me. – Mohammad Al-Turkistany Jan 13 '17 at 23:58
• ... So binary numbers, addition, and the concept of permutation are hard? Well ... I find Nash's system much easier than any modern cipher. – deviantfan Jan 14 '17 at 0:15
• @deviantfan Please go ahead and post an answer. As I said this is not my area of expertise. – Mohammad Al-Turkistany Jan 14 '17 at 0:20
Two things first:
• Even in 1955, Nash's encryption algorithm (I'll call it NEA) was rejected by the NSA because they deemed it not secure enough. So do not use it in real life.
• Like eg. AES, NEA is not based on any of the usual hard algorithms like factorization etc.
NEA is a symmetric stream cipher, i.e. there's just one key for both encrypting and decrypting, and there's no minimum block size: one input bit becomes one output bit.
NEA needs one (possibly) public parameter, a key with several parts, and an IV (initialization vector) for each message.
The public parameter first:
• A natural number N, larger is better for security. 256 is a usual value.
A key consists of:
• Two random permutations P of N values, i.e. P[0][0] to P[0][N-1] are the numbers from 0 to N-1 in some random order, and P[1][0] to P[1][N-1] too, but in a completely different random order.
• Two random N-bit numbers, B[0][0] to B[0][N-1] and B[1][0] to B[1][N-1]
An IV is a random N-bit number, just like B[0] or B[1] above.
Encrypting/decrypting a message M of L bits to the ciphertext C, as pseudocode:
//N, P, B, and IV are given
//S is an N-bit state memory

Permut(X)
{
    //emit the oldest bit on track X ...
    R = S[P[X][N-1]]
    //... shift the track by one position, XORing in the pad bits ...
    for all i from N-2 down to 0
    {
        S[P[X][i+1]] = S[P[X][i]] xor B[X][i]
    }
    //... and push the decider bit X into the state
    S[P[X][0]] = X
    return R
}

Encrypt(M, L)
{
    S = IV
    C[0] = M[0] xor Permut(0)
    for all i from 1 to L-1
    {
        //each keystream bit is driven by the previous ciphertext bit
        C[i] = M[i] xor Permut(C[i-1])
    }
    return C
}

Decrypt(C, L)
{
    S = IV
    M[0] = C[0] xor Permut(0)
    for all i from 1 to L-1
    {
        //feed back the received ciphertext bit (not the recovered plaintext),
        //so the keystream machine runs in lockstep with the sender's
        M[i] = C[i] xor Permut(C[i-1])
    }
    return M
}
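For concreteness, here is a minimal Python sketch of the scheme described above (my transcription; the key/IV generation helpers and the bit-list packaging are my own assumptions, and this remains a broken toy cipher, per the attack linked below):

```python
import secrets

def make_key(n=256):
    # two independent random permutations of 0..n-1 and two random n-bit pads
    perms = []
    for _ in range(2):
        p = list(range(n))
        for i in range(n - 1, 0, -1):          # Fisher-Yates shuffle
            j = secrets.randbelow(i + 1)
            p[i], p[j] = p[j], p[i]
        perms.append(p)
    pads = [[secrets.randbelow(2) for _ in range(n)] for _ in range(2)]
    return perms, pads

def permut(state, perms, pads, x):
    # one step of the keystream machine: emit the last bit on track x,
    # shift the track (XORing in the pad), then push the decider bit x in
    p, b = perms[x], pads[x]
    n = len(p)
    out = state[p[n - 1]]
    for i in range(n - 2, -1, -1):
        state[p[i + 1]] = state[p[i]] ^ b[i]
    state[p[0]] = x
    return out

def crypt(bits, iv, perms, pads, decrypt=False):
    state, prev, out = list(iv), 0, []
    for bit in bits:
        o = bit ^ permut(state, perms, pads, prev)
        out.append(o)
        prev = bit if decrypt else o           # feedback is always the ciphertext bit
    return out

# round trip
perms, pads = make_key(64)
iv = [secrets.randbelow(2) for _ in range(64)]
msg = [secrets.randbelow(2) for _ in range(200)]
ct = crypt(msg, iv, perms, pads)
assert crypt(ct, iv, perms, pads, decrypt=True) == msg
```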
• Nice. Can you comment on possible weaknesses against attacks? – Mohammad Al-Turkistany Jan 14 '17 at 11:43
• @MohammadAl-Turkistany Eg. a chosen plaintext attack in O(N^2 log^3 N): eprint.iacr.org/2012/339.pdf – deviantfan Jan 14 '17 at 12:22 | 2021-02-25 16:45:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4908861815929413, "perplexity": 3552.357937120773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351374.10/warc/CC-MAIN-20210225153633-20210225183633-00241.warc.gz"} |
https://nybiofuels.info/pics10-2019-36/bracket-latex-65.php | # math mode - Latex Brace Stack - TeX - LaTeX Stack Exchange - bracket latex
## math mode - Big Parenthesis in an Equation - TeX - LaTeX Stack Exchange bracket latex
Here's how to type some common math braces and parentheses in LaTeX.
The usual thing to do is to replace ( with \left( and ) with \right), which automatically expand to fit the material between them. Note that every \left requires a matching \right.
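For instance, a minimal example (mine, not from the quoted answers):

\[ \left( \frac{x}{y} \right) \qquad \left[ \sum_{k=1}^{n} a_k \right] \]

Both delimiter pairs grow to enclose the fraction and the sum, respectively.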
For parentheses and brackets, you can write, for the $1^\text{st}$ bracket (parentheses), the $\LaTeX$ code \left(\frac{x}{y}\right), whose output is the fraction enclosed in parentheses sized to fit.
a big brace on the left and a "column" with several equations, and on the right one number for each equation, added automatically.
You can try the cases env in amsmath:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
$f(x)=\begin{cases} 1, & \text{if $x<0$}. \end{cases}$
\end{document}
http://hackage.haskell.org/package/in-other-words-0.1.0.0/docs/Control-Effect-Type-Unravel.html | in-other-words-0.1.0.0: A higher-order effect system where the sky's the limit
Control.Effect.Type.Unravel
Synopsis
# Documentation
data Unravel p :: Effect where
A primitive effect which allows you to break a computation into layers. This is the primitive effect underlying Intercept and InterceptCont.
Note: ThreadsEff instances are not allowed to assume that p is a functor.
Unravel is typically used as a primitive effect. If you define a Carrier that relies on a novel non-trivial monad transformer t, then you need to make a ThreadsEff t (Unravel p) instance (if possible).
The following threading constraints accept Unravel:
• ReaderThreads
• ErrorThreads
• NonDetThreads
• SteppedThreads
• ContThreads
Constructors
Unravel :: p a -> (m a -> a) -> m a -> Unravel p m a
https://economics.stackexchange.com/tags/corporate-finance/hot | # Tag Info
11
You are right that it seems strange why a cash-rich company is borrowing. In the case of Apple, the money that they are borrowing is being used to pay dividends to shareholders. The reason why they aren't using their \$200 billion is because doing so would cost them tens of billions of dollars in taxes. The current US tax code taxes corporations at 35% when ...

5

People, particularly business leaders, seem to remain confused about this issue even today. At the core of it is the question: is equity finance expensive? We certainly observe in the data that the realized returns on firm debt are much lower than the realized returns on firm equity. Does this mean that firms have too much equity? If equity capital always ...

5

The first equation can be written as: $$r_E(Levered) = \frac{E+D}{E}r_E(Unlevered) - \frac{D}{E}r_D$$ Then, isolating the unlevered return gives: $$r_E(Unlevered) = \frac{E}{E+D}r_E(Levered) + \frac{D}{E+D}r_D$$ And this is the WACC.

4

If you are asking "Is the WACC the amount that the company expects to earn on the stocks and bonds that it holds..." then the answer is no. The WACC, in very simple terms, is the amount of money a company pays to obtain financing for projects. These types of financing are clearly listed in the Wikipedia article and clearly extend beyond stocks and bonds ...

4

I don't know if you refer to the extensive margin (some borrowers not being able to get credit) or to the intensive margin (one borrower not being able to get as much credit as (s)he wants). If you are referring to the former, one of the theoretical papers for borrowing constraints on markets with asymmetric information is the following one: Stiglitz and ...

4

All assets which have a finite useful life are depreciated. For example, your patents or copyright might hold for 5 or 10 years but no more. Thus, it is quite coherent to reflect the loss of value through depreciation and amortization. The same goes for software, for example: in 5 years' time, a piece of software might be obsolete, so we need to reflect this in the ...

4

Debt is cheap. Flexibility is valuable. They hold debt + cash up to the point where the value of flexibility is still greater than the net cost of servicing the debt minus any interest earned on the cash. It saves them the transaction costs of re-raising debt when they need it, had they paid it down early. It's cashflow that typically kills businesses, ...

3

Another key feature of those shell companies is that they hide the ultimate beneficiary of the transactions. Banks, insurance companies and most financial services firms must make enquiries as part of the "Know Your Customer" (KYC) regulations: they should be able to find out who will benefit ultimately from the transactions, or in the name of whom they are ...

3

Considering this is an Economics Stack Exchange site, I'm going to answer in the spirit of Financial Economics. These are the most foundational equations and ideas of Financial Economics needed to understand more complex applied or academic research. 1. Gross yield: the gross yield is the yield on an investment before the deduction of taxes and expenses. 1+R_{t+...

3

A December fiscal year end, which gives a first quarter of three months ending March 31, aligns the fiscal and the tax year. This can be very convenient and in the United States is sometimes required. In addition, some regulated firms like banks are required to prepare documents on calendar quarters regardless of the month of their fiscal year end, and it is ...
3

To illustrate what Tirole has done, let's consider a simpler environment. Consider a utility maximisation problem over two goods, x and y. The consumer has utility function u(x,y) = f(x) + y, where f is strictly increasing and strictly concave. The consumer's problem is thus $$\begin{align} \max_{x,y} &\quad f(x) + y \\ \text{s.t.} &\quad \ldots \end{align}$$

3

It appears to me that it is the other way round: the RBS was running out of cash, which is why the stock price was dropping. Stocks usually don't affect the immediate operation of a company, since they are traded on secondary markets (stock exchanges) among stock owners, not bought from / sold to the actual company which issued the stock.

3

Frequently used corporate finance textbooks: MBA level - Berk and de Marzo (2017) - Corporate Finance, 4th ed. (Stanford); advanced undergrad and corporate-theory-specific - Grinblatt and Titman - Financial Markets & Corporate Strategy; PhD first-year level - strictly articles from the Journal of Finance, Review of Financial Studies, etc., that empirically ...

3

Declaring bankruptcy is a step taken by companies to get protection from creditors. (They cannot seize collateral unilaterally, etc.) The company can still operate, within the legal framework, and with the obvious limitation that nobody wants to be owed money by the firm. Although equity often ends up worthless after all creditors have their claims settled, ...

3

Indeed the debt tax shield is distorting. Any policy that will essentially distort prices to favor A over B will lead to too much A and too little B. So such differentials are (almost) always distorting (unless you actually want less B and more A due to externalities). Here the tax deductibility makes debt cheaper compared to equity. We know this is ...

3

[W]hy do they need to write down "adopted a leniency law at some later point of time"? Because in the Korea case, the phrase "our sample period" already means "1995-2002". Assuming Korea is the early-adopter country, then all countries theretofore untreated before 1997 may serve as a counterfactual. This includes the countries that never ...

2

The point @EnergyNumbers raises is correct, and it's easy to understand from an intuitive standpoint: one of the key roles of financial intermediaries is to match the demand for liabilities of a given tenor to the demand for assets of a given tenor. Financial intermediation allows maturity mismatches to exist in non-finance sectors of the economy by taking ...

2

The first equation is dollars times interest rate over total dollars. For example, if a company wants to finance a project and issues \$1M in equity with an expected ROI to the investors of 6% and \$4M in bonds at 4%, its WACC is: $$\frac{4\% \cdot 4{,}000{,}000 + 6\% \cdot 1{,}000{,}000}{4{,}000{,}000+1{,}000{,}000}$$ which for simplicity we can write as $$\frac{4\% \cdot 4 + 6\% \cdot 1}{4+1}...$$ (a quick numeric check of this appears below)

2

The concept of $\text{WACC}$ seems pretty straightforward... it is a weighted average percentage, calculated in principle as equation $(2)$ in the question shows. If we have two sources of financing, each demanding a different interest rate and each with a given percentage contribution to the total funds we want to borrow, then what would be the single ...

2

The assertion of the book is based on the phenomenon of commercial credit - the fact that business-to-business sales almost always are on credit, and the differences between the terms of credit that a company gives to its customers, compared to the terms of credit that it enjoys from its suppliers. It describes the (short-term) phenomenon, peculiar to some, that "...
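As a quick check of the dollar-weighted WACC example above (my sketch, not from the answers):

```python
# WACC of the example: $4M of bonds at 4% plus $1M of equity at 6%
debt, equity = 4_000_000, 1_000_000
r_debt, r_equity = 0.04, 0.06
wacc = (r_debt * debt + r_equity * equity) / (debt + equity)
print(f"{wacc:.2%}")   # 4.40%
```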
2

Using the Federal Reserve's definition for M1 (warning, M definitions can vary between countries, so always check the local definition): "M1 is defined as the sum of currency held by the public and transaction deposits at depository institutions (which are financial institutions that obtain their funds mainly through deposits from the public, such as ...

2

If I'm reading it correctly, Table X (page 2633) of Schwert (2000) (Journal of Finance, "Hostility in Takeovers: In the Eyes of the Beholder?") says that about 78 percent of deals in 1975-1996 were successful. However, this measure is constructed based on the acquisition of the firm, not the bid of the acquirer, so that if there are multiple bidders this is ...

2

A shell is simply an inactive company - there is a market for shell companies because it allows ordinary persons to buy a ready-to-go business - for example, publicly traded shells with a stock-market ticker allow you to skip all the paperwork. Back to topic: a limited company is a legal person, thus it can buy / sell and hold other companies and assets - sue ...

2

What happens is completely dependent on the owners. They're the ones whose income has been taken anyway. They may be the only ones with the legal power to do anything (depending on the jurisdiction: there may be some countries where the State can intervene in such matters). In some jurisdictions, the directors have a legal obligation to maximise returns to ...

2

They are not the same. Basic accounting equation: Assets = Liabilities + Shareholder Equity. Assets refers to what the company actually owns: cash, property, inventory, etc. Assets are paid for in two major ways: debt (liability) and stock (equity). Essentially, everything a company owns is paid for by a combination of (1) getting loans from other entities ...

2

$\beta$ is the measure of the sensitivity of stock returns to market returns. This has nothing to do with the value of $R^2$. Your results appear to be fine: you can get significant beta estimates but low $R^2$. Why? As measured by $R^2$, 24.56% of the variation in Apple returns is accounted for by the variation in the market index, the S&P 500. Clearly, ...
2
In the link one can very clearly see that the company has no contractual short-term debt, and in the short term (i.e. in the next 12 months) has to pay part of its long-term debt. One can also see that the debt amounts are not included in the line "Accounts payable", as well as its long-term debts. And no, debt is not only bonds.
2
Fama's idea is to look at the level of equity that would have been necessary in the last crisis, multiply it by 1.1, and call that a starting point for a level of equity that makes the company safer from insolvency. He phrased it as adding 10 percent. In Fama's view there will still be a risk of insolvency: "You have to watch them because they are very ...
2
When I was studying for my bachelor's, we learned finance from the textbook Corporate Finance by Jonathan Berk. I think that's a good entry into learning some basic financial analysis.
2
I think the terms "treatment group" and "control group" are at best a loose analogy in an econometric model with two way fixed effects and staggered adoption of the treatment group. In short, I think you are right to be skeptical of the use of the terms "treatment group" and "control group" here. I think the authors ...
| 2021-06-24 09:48:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3730562627315521, "perplexity": 1766.0533344876194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488552937.93/warc/CC-MAIN-20210624075940-20210624105940-00203.warc.gz"}
https://www.gamedev.net/forums/topic/678495-nested-for-loops-hlslsolved/ | • 12
• 14
• 13
• 10
• 11
# Nested for loops hlsl(SOLVED)
## Recommended Posts
Hey guys, so I've been having this issue with compiling my compute shader code for SM5 A* pathfinding, and by issue I mean my D3D11CompileFromFile call has been running for roughly 8 hours. Below is the essence of my code. I wanted to know if anyone has any ways to help me fix the issue. I've done as much as I possibly can to remove branches and for loops, but it still won't compile in a reasonable timespan. The search for the lowest F-value compiles in about two minutes, but once I add the code for the neighbor search, it is stuck compiling. I have gone through many iterations of the neighbor for loop, from linear searching of data to instant access through CPU-side knowledge of the grid layout, from if statements to switch statements, and I still can't seem to fix the issue. Thank you.
#define MAX_NAV_NODE_SEARCH_COUNT (80)
{
int OpenListCount = 0;
//add the start node to the search open list
.....
OpenListCount++;
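//A common fix for multi-hour fxc compiles (suggested addition, not in the
//original post): fxc tries to fully unroll loops by default, which explodes
//for nested searches like this; the [loop] attribute forces real loop codegen.
[loop]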
for(int nodeCheckIndex = 0; (nodeCheckIndex < MAX_NAV_NODE_SEARCH_COUNT && OpenListCount >0); nodeCheckIndex++)
{
//find the lowest F value
....
for(int i = 0; i < MAX_NAV_NODE_COUNT; i++)
{
{
...
}
}
//traverse the neighbors
for(i = 0; i < CurrAreaData.NeighborCount; i++)
{
//get neighbor data from Structured Buffer
//not a walkable neighbor, continue
if(!NeighborStaticData.IsWalkable)
{
continue;
}
//check what state the neighbor is in
switch(NeighborState) //switch keyword was missing above the cases; variable name assumed
{
//it is in the closed list so ignore it
case NAVPATH_CLOSED:
{
break;
}
//is is in the open list, so update it's nav data
case NAVPATH_OPEN:
{
....
}
//it hasn't been reached yet, so add it to the open list
case NAVPATH_INVALID:
{
....
}
}
}
}
}
| 2018-04-24 21:18:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17631570994853973, "perplexity": 3432.6661236803416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947328.78/warc/CC-MAIN-20180424202213-20180424222213-00208.warc.gz"}
https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_An_Introduction_to_Ontology_Engineering_(Keet)/3%3A_Description_Logics/3.0%3A_Prelude_to_Description_Logics | # 3.0: Prelude to Description Logics
Figure 3.1.1 shows a basic overview of the principal components of a DL knowledge base, with the so-called TBox containing the knowledge at the class-level and the ABox containing the data (individuals). Sometimes you will see added to the figure an RBox, which is used to make explicit there are relationships and the axioms that hold for them.
Figure 3.1.1: A Description Logic knowledge base. Sometimes you will see a similar picture extended with an “RBox”, which denotes the DL roles and their axioms.
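To make the TBox/ABox split concrete, here is a small illustrative pair of axioms (an example of mine, not taken from the Primer):

$$\underbrace{\mathsf{Mother} \equiv \mathsf{Woman} \sqcap \exists \mathsf{hasChild}.\mathsf{Person}}_{\text{TBox: class-level knowledge}} \qquad \underbrace{\mathsf{Mother}(\mathsf{mary}),\; \mathsf{hasChild}(\mathsf{mary}, \mathsf{bob})}_{\text{ABox: data about individuals}}$$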
The remainder of this section contains, first, a general introduction to DL (3.1: DL Primer), which are the first five sections of the DL Primer [KSH12], and is reproduced here with permission of its authors Markus Krötzsch, František Simančík, and Ian Horrocks1. (Slightly more detailed introductory notes with examples can be found in the first 8 pages of [Tur10] and the first 10 pages of [Sat07]; a DL textbook is in the pipeline). We then proceed to several important DLs out of the very many DLs investigated, being $$\mathcal{ALC}$$ and $$\mathcal{SROIQ}$$, in 3.2: Important DLs. We then proceed to describing and illustrating the standard reasoning services for DLs in 3.3: Reasoning Services, which essentially applies and extends the tableau reasoning of the previous chapter. Note that DLs and its reasoning services return in Chapter 4 about OWL 2, building upon the theoretical foundations introduced in this chapter.
### Footnotes
1I harmonised the terminology so as to use the same terms throughout the book, cf. adding synonyms at this stage, and added a few references to other sections in this book to integrate the text better. Also, I moved their $$\mathcal{SROIQ}$$ section into 3.2: Important DLs and inserted $$\mathcal{ALC}$$ there, and made the family composition examples more inclusive. | 2019-04-21 14:08:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7822743654251099, "perplexity": 1782.0587071964437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578531984.10/warc/CC-MAIN-20190421140100-20190421162100-00004.warc.gz"} |
https://www.projecteuclid.org/euclid.tjm/1255958187 | ## Tokyo Journal of Mathematics
### On the Hyers-Ulam Stability of Real Continuous Function Valued Differentiable Map
#### Abstract
We consider a differentiable map $f$ from an open interval to a real Banach space of all bounded continuous real-valued functions on a topological space. We show that $f$ can be approximated by a solution to the differential equation $x'(t)=\lambda x(t)$ if $\|f'(t)-\lambda f(t)\|_\infty\leq\varepsilon$ holds.
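A small numerical illustration of the statement (my sketch; the bound $\varepsilon/|\lambda|$ on the distance to a solution is the classical Hyers-Ulam estimate, assumed here):

```python
import numpy as np

lam, eps = -1.0, 0.1
t = np.linspace(0.0, 10.0, 2001)
f = 3.0 * np.exp(lam * t) + 0.05 * np.sin(t)    # a perturbed exponential

defect = np.gradient(f, t) - lam * f            # f'(t) - lam * f(t)
print(np.max(np.abs(defect)) <= eps)            # True: the defect stays within eps

x = 3.0 * np.exp(lam * t)                       # an exact solution of x' = lam x
print(np.max(np.abs(f - x)) <= eps / abs(lam))  # True: f stays eps/|lam| close to x
```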
#### Article information
Source
Tokyo J. Math., Volume 24, Number 2 (2001), 467-476.
Dates
First available in Project Euclid: 19 October 2009
https://projecteuclid.org/euclid.tjm/1255958187
Digital Object Identifier
doi:10.3836/tjm/1255958187
Mathematical Reviews number (MathSciNet)
MR1874983
Zentralblatt MATH identifier
1002.39039
#### Citation
MIURA, Takeshi; TAKAHASI, Sin-ei; CHODA, Hisashi. On the Hyers-Ulam Stability of Real Continuous Function Valued Differentiable Map. Tokyo J. Math. 24 (2001), no. 2, 467--476. doi:10.3836/tjm/1255958187. https://projecteuclid.org/euclid.tjm/1255958187 | 2020-02-24 08:59:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.522456169128418, "perplexity": 1123.4844663077527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145910.53/warc/CC-MAIN-20200224071540-20200224101540-00260.warc.gz"} |
http://scipy.github.io/devdocs/generated/scipy.integrate.simps.html | # scipy.integrate.simps¶
scipy.integrate.simps(y, x=None, dx=1, axis=-1, even='avg')[source]
Integrate y(x) using samples along the given axis and the composite Simpson’s rule. If x is None, spacing of dx is assumed.
If there are an even number of samples, N, then there are an odd number of intervals (N-1), but Simpson’s rule requires an even number of intervals. The parameter ‘even’ controls how this is handled.
Parameters
y : array_like
    Array to be integrated.
x : array_like, optional
    If given, the points at which y is sampled.
dx : int, optional
    Spacing of integration points along axis of y. Only used when x is None. Default is 1.
axis : int, optional
    Axis along which to integrate. Default is the last axis.
even : str {'avg', 'first', 'last'}, optional
    'avg' : Average two results: 1) use the first N-2 intervals with a trapezoidal rule on the last interval, and 2) use the last N-2 intervals with a trapezoidal rule on the first interval.
    'first' : Use Simpson's rule for the first N-2 intervals with a trapezoidal rule on the last interval.
    'last' : Use Simpson's rule for the last N-2 intervals with a trapezoidal rule on the first interval.
#### See Also

quad
romberg
quadrature
fixed_quad
dblquad
    double integrals
tplquad
    triple integrals
romb
    integrators for sampled data
cumtrapz
    cumulative integration for sampled data
ode
    ODE integrators
odeint
    ODE integrators
Notes
For an odd number of samples that are equally spaced the result is exact if the function is a polynomial of order 3 or less. If the samples are not equally spaced, then the result is exact only if the function is a polynomial of order 2 or less.
Examples
>>> from scipy import integrate
>>> x = np.arange(0, 10)
>>> y = np.arange(0, 10)
>>> integrate.simps(y, x)
40.5
>>> y = np.power(x, 3)
>>> integrate.simps(y, x)
1642.5
>>> integrate.quad(lambda x: x**3, 0, 9)[0]
1640.25
>>> integrate.simps(y, x, even='first')
1644.5
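As a quick illustration of the Notes above (my example; output shown up to floating-point rounding): with an odd number of equally spaced samples, a cubic is integrated exactly.

>>> x = np.linspace(0, 9, 7)  # 7 equally spaced samples, odd count
>>> y = x**3
>>> integrate.simps(y, x)     # matches the exact integral 9**4 / 4
1640.25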
| 2020-01-23 23:03:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6398822069168091, "perplexity": 2886.352913860475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250614086.44/warc/CC-MAIN-20200123221108-20200124010108-00130.warc.gz"}
http://mathonline.wikidot.com/equivalent-subnormal-series-in-a-group | Equivalent Subnormal Series in a Group
# Equivalent Subnormal Series in a Group
Recall from the Subnormal Series in a Group page that if $G$ is a group and $1$ is the identity in $G$ then a subnormal series in $G$ of length $k$ is a collection of subgroups of $G$ such that:
(1)
\begin{align} \quad \{ 1 \} = G_0 \trianglelefteq G_1 \trianglelefteq ... \trianglelefteq G_k = G \end{align}
That is, $G_i$ is a normal subgroup of $G_{i+1}$ for all $i \in \{ 0, 1, ..., k-1 \}$.
We now define what it means for two subnormal series in a group to be equivalent.
Definition: Let $G$ be a group and let $1$ denote the identity in $G$. Two subnormal series $\{ 1 \} = G_0 \trianglelefteq G_1 \trianglelefteq ... \trianglelefteq G_k = G$ and $\{ 1 \} = H_0 \trianglelefteq H_1 \trianglelefteq ... \trianglelefteq H_l = G$ are said to be Equivalent if: 1) $k = l$. 2) There exists a bijection $\pi : \{ 0, 1, ..., k-1 \} \to \{ 0, 1, ..., k - 1\}$ such that $G_{i+1} / G_i \cong H_{\pi(i) + 1} / H_{\pi(i)}$ for all $i \in \{ 0, 1, ..., k-1 \}$.
Therefore, two subnormal series are equivalent if they have the same length and if every factor in the set of factors for the first subnormal series is isomorphic to one of the factors in the set of factors for the second subnormal series.
For example, consider the following subnormal series:
(2)
\begin{align} \quad \{ 0 \} \trianglelefteq \mathbb{Z}/2\mathbb{Z} \trianglelefteq \mathbb{Z}/6\mathbb{Z} \quad (*) \end{align}
(3)
\begin{align} \quad \{ 0 \} \trianglelefteq \mathbb{Z}/3\mathbb{Z} \trianglelefteq \mathbb{Z}/6\mathbb{Z} \quad (**) \end{align}
We claim that $(*)$ and $(**)$ are equivalent subnormal series. First observe that $(*)$ and $(**)$ have the same length. We now find the factors of both series. For the subnormal series $(*)$ we have that the factors are:
(4)
\begin{align} (\mathbb{Z}/2\mathbb{Z})/\{0\} \cong \mathbb{Z}/2\mathbb{Z} \\ (\mathbb{Z}/6\mathbb{Z})/(\mathbb{Z}/2\mathbb{Z}) \cong \mathbb{Z}/3\mathbb{Z} \\ \end{align}
And for the subnormal series $(**)$ we have that the factors are:
(5)
\begin{align} (\mathbb{Z}/3\mathbb{Z})/\{0\} \cong \mathbb{Z}/3\mathbb{Z} \\ (\mathbb{Z}/6\mathbb{Z})/(\mathbb{Z}/3\mathbb{Z}) \cong \mathbb{Z}/2\mathbb{Z} \end{align}
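As a quick sanity check of the factor computation above (a sketch of mine; for these cyclic groups, equal factor orders already pin down the isomorphism classes, since groups of prime order are cyclic):

```python
# subgroup orders along each subnormal series of Z/6Z
star  = [1, 2, 6]   # |{0}|, |Z/2Z|, |Z/6Z|
star2 = [1, 3, 6]   # |{0}|, |Z/3Z|, |Z/6Z|

def factor_orders(series):
    # order of each factor group G_{i+1}/G_i is |G_{i+1}| / |G_i|
    return sorted(b // a for a, b in zip(series, series[1:]))

print(factor_orders(star), factor_orders(star2))  # [2, 3] [2, 3] -> same multiset
```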
Therefore the subnormal series $(*)$ and $(**)$ are equivalent. | 2020-02-29 07:28:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997127652168274, "perplexity": 318.00835580414844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148671.99/warc/CC-MAIN-20200229053151-20200229083151-00077.warc.gz"} |
https://physics.stackexchange.com/questions/580694/proof-that-an-unconstrained-rigid-body-rotates-about-its-centre-of-mass | # Proof that an unconstrained rigid body rotates about it's centre of mass [closed]
I have seen a lot of questions here which ask why a free rigid body always rotates about its centre of mass. The answer in most cases is a kind of "thought experiment". First, we show that when a force is applied to a rigid body, it behaves like a point object with the entire mass of the object concentrated at one point called the "centre of mass". Then we transfer our attention to a coordinate system at the centre of mass (so that the centre of mass is relatively at rest). Then we say that the definition of a rigid body is that the distance between the particles of the rigid body always remains constant. This means that the distance between the centre of mass and any point in the rigid body also remains constant, so the only possible motion of any point is a circular path around the centre of mass: hence, the only possible motion of a rigid body about the centre of mass is a rotation. Also, since the distance between any points in the rigid body must be constant, particles inside the rigid body cannot rotate in opposite directions or about different axes, as this would change the distances.
Now, I have also been taught this way. In school and university, even in our Dynamics textbook (Meriam & Kraige), the concepts of "rotation" and "moment" were just introduced... like it's common sense. There was no "mathematical proof" that rotation is the motion around the centre of mass (CM). Rotation and translation are always treated separately, even though it's taught that the net motion will be a sum of the two.
I have been wondering whether you can prove that the motion of a particle in a rigid body with respect to the centre of mass is a rotation. I have come up with a kind of half-baked derivation below:
First, as always we consider a rigid body as a system of particles connected with massless rigid rods. For simplicity I have considered only the 2D case. In the figure below, I have considered a 3 particle system, with all relevant variables marked.
The red point is the centre of mass (CM) of the system. Here a force $$\vec f$$ is applied to mass $$m_1$$ which does not pass through the CM. So, this system would rotate.
To apply the principles of dynamics, we first isolate all masses and draw the free body diagram
Here $$\vec f_{12}$$ and $$\vec f_{13}$$ are the reaction forces on $$m_1$$ from $$m_2$$ and $$m_3$$. Applying newtons second law to $$m_1$$ we have $$\vec f + \vec f_{12} + \vec f_{13} = m_1\ddot{\vec r_1}$$
For mass $$m_3$$
we have $$\vec f_{31} + \vec f_{32} = m_3\ddot{\vec r_3}$$
and for mass $$m_2$$
we have $$\vec f_{21} + \vec f_{23} = m_2\ddot{\vec r_2}$$
Now adding up all the above equations and noting that $$\vec f_{12}=-\vec f_{21}$$ and $$\vec f_{13}=-\vec f_{31}$$ and $$\vec f_{32}=-\vec f_{23}$$, we have $$\vec f=m_1\ddot{\vec r_1}+m_2\ddot{\vec r_2}+m_3\ddot{\vec r_3}$$ Introducing the position of centre of mass as $$\vec r_{cm}=\frac{m_1\vec r_1+m_2\vec r_2+m_3\vec r_3}{m_1+m_2+m_3}$$ and differentiating $$\ddot{\vec r_{cm}}=\frac{m_1\ddot{\vec r_1}+m_2\ddot{\vec r_2}+m_3\ddot{\vec r_3}}{m_1+m_2+m_3}$$ Now we can substitute for $$m_1\ddot{\vec r_1}+m_2\ddot{\vec r_2}+m_3\ddot{\vec r_3}$$ in the dynamic equation to get $$\vec f=(m_1+m_2+m_3)\ddot{\vec r_{cm}}$$ This is nothing but the equation of motion of a point particle whose mass is $$m_1+m_2+m_3$$ situated at position $$\vec r_{cm}$$. Thus, the rigid body behaves like the entire mass is concentrated at the centre of mass. Now we turn our attention to the centre of mass co-ordinate system $$x_{cm} - y_{cm}$$. To do this we note that $$\vec r_1=\vec r_{cm}+\vec r_{1c}$$ and $$\vec r_2=\vec r_{cm}+\vec r_{2c}$$ and $$\vec r_3=\vec r_{cm}+\vec r_{3c}$$ Substituting for $$\vec r_1$$, $$\vec r_2$$ and $$\vec r_3$$ in the dynamic equation for each mass, we have $$\vec f + \vec f_{12} + \vec f_{13} - m_1\ddot{\vec r_{cm}}=m_1\ddot{\vec r_{1c}}\\\vec f_{31} + \vec f_{32} - m_3\ddot{\vec r_{cm}}= m_3\ddot{\vec r_{3c}}\\\vec f_{21} + \vec f_{23} - m_2\ddot{\vec r_{cm}}= m_2\ddot{\vec r_{2c}}$$ Again adding all of the above up, we have $$\vec f-(m_1+m_2+m_3)\ddot{\vec r_{cm}}=m_1\ddot{\vec r_{1c}}+m_2\ddot{\vec r_{2c}}+m_3\ddot{\vec r_{3c}}$$ Now we invoke the definition of rigid body. This means that the distance between any 2 masses is constant. This may be written for our case as $$\frac {d}{dt}\left(\vec r_{12}\cdot\vec r_{12}\right)=0$$ since the magnitude of the vector between any 2 masses is constant. However $$\vec r_{12}=\vec r_{2c}-\vec r_{1c}$$. So we have $$\frac {d}{dt}\left[\left(\vec r_{2c}-\vec r_{1c}\right)\cdot\left(\vec r_{2c}-\vec r_{1c}\right)\right]=0\\\frac {d}{dt}\left[{\vert\vec r_{2c}\vert}^2+{\vert\vec r_{1c}\vert}^2-2\vec r_{2c}\cdot\vec r_{1c}\right]=0$$ This essentially means that $$\frac {d}{dt}\left[\vec r_{2c}\cdot\vec r_{1c}\right]=0$$ Applying product rule $$\vec r_{2c}\cdot\dot{\vec r_{1c}}+\vec r_{1c}\cdot\dot{\vec r_{2c}}=0$$ Differentiating once again, $$\vec r_{2c}\cdot\ddot{\vec r_{1c}}+\vec r_{1c}\cdot\ddot{\vec r_{2c}}+2\dot{\vec r_{1c}}\cdot\dot{\vec r_{2c}}=0$$ Since the last term is a product of derivatives, we say that it is infinitesimally small, and ignore it. This gives $$\vec r_{2c}\cdot\ddot{\vec r_{1c}}+\vec r_{1c}\cdot\ddot{\vec r_{2c}}=0$$ Applying same treatment for $$\vec r_{13}$$, we have $$\vec r_{1c}\cdot\ddot{\vec r_{3c}}+\vec r_{3c}\cdot\ddot{\vec r_{1c}}=0$$ From the above 2 equations, we can write $$\ddot{\vec r_{2c}}=\frac{-\vec r_{1c}\cdot\vec r_{2c}\cdot\ddot{\vec r_{1c}}}{{\vert\vec r_{1c}\vert}^2}\\\ddot{\vec r_{3c}}=\frac{-\vec r_{1c}\cdot\vec r_{3c}\cdot\ddot{\vec r_{1c}}}{{\vert\vec r_{1c}\vert}^2}$$ Substituting for $$\ddot{\vec r_{2c}}$$ and $$\ddot{\vec r_{3c}}$$ in the summed up dynamic equation, we get $$\vec f-(m_1+m_2+m_3)\ddot{\vec r_{cm}}=m_1\ddot{\vec r_{1c}}+m_2\frac{-\vec r_{1c}\cdot\vec r_{2c}\cdot\ddot{\vec r_{1c}}}{{\vert\vec r_{1c}\vert}^2}+m_3\frac{-\vec r_{1c}\cdot\vec r_{3c}\cdot\ddot{\vec r_{1c}}}{{\vert\vec r_{1c}\vert}^2}$$ Now we focus on the term $$(m_1+m_2+m_3)\ddot{\vec r_{cm}}$$. 
From the definition of the centre of mass, we have $$(m_1+m_2+m_3)\ddot{\vec r_{cm}}=m_1\ddot{\vec r_1}+m_2\ddot{\vec r_2}+m_3\ddot{\vec r_3}$$ We will now proceed to invoke the rigid body condition the same way we did above, by noting that $$\vec r_{12}=\vec r_2-\vec r_1$$ and that $$\vec r_{13}=\vec r_1-\vec r_3$$. After applying the same treatment as above, we get $$\ddot{\vec r_2}=\frac{-\vec r_1\cdot\vec r_2\cdot\ddot{\vec r_1}}{{\vert\vec r_1\vert}^2}\\\ddot{\vec r_3}=\frac{-\vec r_1\cdot\vec r_3\cdot\ddot{\vec r_1}}{{\vert\vec r_1\vert}^2}$$ Substituting these into the centre of mass definition above, we have $$(m_1+m_2+m_3)\ddot{\vec r_{cm}}=m_1\ddot{\vec r_1}+m_2\frac{-\vec r_1\cdot\vec r_2\cdot\ddot{\vec r_1}}{{\vert\vec r_1\vert}^2}+m_3\frac{-\vec r_1\cdot\vec r_3\cdot\ddot{\vec r_1}}{{\vert\vec r_1\vert}^2}$$. Now, if we take the common term $$\ddot{\vec r_1}$$ apart, all the other terms on RHS are scalar products. So we may write $$(m_1+m_2+m_3)\ddot{\vec r_{cm}}=K_1\ddot{\vec r_1}$$ where $$K_1=\frac{m_1{\vert\vec r_1\vert}^2-m_2\vec r_1\cdot\vec r_2-m_3\vec r_1\cdot\vec r_3}{{\vert\vec r_1\vert}^2}$$ Now we make the observation $$\vec r_{2c}-\vec r_{1c}=\vec r_2-\vec r_1=\vec r_{12}$$Differentiating twice, we have $$\ddot{\vec r_{2c}}-\ddot{\vec r_{1c}}=\ddot{\vec r_2}-\ddot{\vec r_1}$$ Substituting for $$\ddot{\vec r_{2c}}$$ in terms of $$\ddot{\vec r_{1c}}$$ and $$\ddot{\vec r_2}$$ in terms of $$\ddot{\vec r_1}$$ as derived above, we have $$\frac{-\vec r_{1c}\cdot\vec r_{2c}\cdot\ddot{\vec r_{1c}}}{{\vert\vec r_{1c}\vert}^2}-\ddot{\vec r_{1c}}=\frac{-\vec r_1\cdot\vec r_2\cdot\ddot{\vec r_1}}{{\vert\vec r_1\vert}^2}-\ddot{\vec r_1}$$ Again, we can notice that after taking term $$\ddot{\vec r_{1c}}$$ on LHS as common and taking term $$\ddot{\vec r_1}$$ on RHS as common, what left inside the brackets will be a scalar term. So we write $$\ddot{\vec r_1}=K_2\ddot{\vec r_{1c}}$$ So finally we may write $$(m_1+m_2+m_3)\ddot{\vec r_{cm}}=K_3\ddot{\vec r_{1c}}$$ where $$K_3=K_1K_2$$ Now we can substitute for the term $$(m_1+m_2+m_3)\ddot{\vec r_{cm}}$$ in the summed dynamic equation which becomes $$\vec f-K_3\ddot{\vec r_{1c}}=m_1\ddot{\vec r_{1c}}+m_2\frac{-\vec r_{1c}\cdot\vec r_{2c}\cdot\ddot{\vec r_{1c}}}{{\vert\vec r_{1c}\vert}^2}+m_3\frac{-\vec r_{1c}\cdot\vec r_{3c}\cdot\ddot{\vec r_{1c}}}{{\vert\vec r_{1c}\vert}^2}$$ Now I am going to do what is called a "pro gamer move". Since scalar product is commutative, I will group the terms in RHS as $$\vec f-K_3\ddot{\vec r_{1c}}=m_1\ddot{\vec r_{1c}}+m_2\frac{-\vec r_{1c}\cdot\left(\vec r_{2c}\cdot\ddot{\vec r_{1c}}\right)}{{\vert\vec r_{1c}\vert}^2}+m_3\frac{-\vec r_{1c}\cdot\left(\vec r_{3c}\cdot\ddot{\vec r_{1c}}\right)}{{\vert\vec r_{1c}\vert}^2}$$ Now, the terms in the brackets are scalar products; which means that the second and third terms in the RHS are vectors in the direction of $$\vec r_{1c}$$ Now to remove those additional terms, I take a cross product with $$\vec r_{1c}$$ on both LHS and RHS. $$\vec r_{1c}\times\vec f-K_3\vec r_{1c}\times\ddot{\vec r_{1c}}=m_1\vec r_{1c}\times\ddot{\vec r_{1c}}+m_2\frac{-\vec r_{1c}\times\vec r_{1c}\cdot\left(\vec r_{2c}\cdot\ddot{\vec r_{1c}}\right)}{{\vert\vec r_{1c}\vert}^2}+m_3\frac{-\vec r_{1c}\times\vec r_{1c}\cdot\left(\vec r_{3c}\cdot\ddot{\vec r_{1c}}\right)}{{\vert\vec r_{1c}\vert}^2}$$ In this case, since the second and third terms in RHS before cross product where vectors in the direction of $$\vec r_{1c}$$, taking cross product means that these terms will be $$0$$. 
Thus finally we have $$\vec r_{1c}\times\vec f=\left(m_1+K_3\right)\left(\vec r_{1c}\times\ddot{\vec r_{1c}}\right)$$ Which is nothing but $$\tau=I\alpha$$ where I call the bracketed term as $$I$$ (moment of inertia). So I have obtained the moment equation in the centre of mass co-ordinate system. I have the following questions :
1. Even though I set out to prove that the motion of $$m_1$$ will be circular, I didn't quite reach there. Does the moment equation prove that $$m_1$$ will have circular motion ?
2. Is what I have done correct ?
The answer that satisfied me is the one written by Claudio Saspinski below (thank you very much). Rotation is a basic type of motion, like translation, in which one point of the rigid body is (relatively) fixed. So there is no need to "prove" that the motion of a rigid body is rotation; all bodies rotate about some point on them. As stated in the comment, any point on the rigid body can be taken as the centre of rotation of the body. It is proved, by applying the conditions of a rigid body, that no matter what this point is, the angular velocity is the same. So we can take any point other than the centre of mass as the rotation reference. The reason we take the centre of mass as the centre of rotation, in my current understanding, is ease of analysis of motion. This is because the motion of the CM is just like that of a point object under the applied force and is easy to handle. If we take the centre of rotation as another point, then to get the configuration of the body at a later time t, we need the position of this point at that time, which is more involved, as this point does not behave like a point body under applied force.
So, in conclusion, my question is not quite well posed: the centre of rotation of a body is any point you choose it to be.
• Finding the momentum expressions for a rigid body as a collection of particles is far simpler than your effort, and is also standard course material for college level dynamics class. Sep 20, 2020 at 3:38
• Thanks for the reference, but, as I said, it doesn't "prove" that the motion of a rigid body about the centre of mass is a rotation. It just introduces the equations as a "given" entity without any consideration of the philosophy behind it Sep 23, 2020 at 18:28
• I disagree. The equations are not just given but derived from 1st principles and they prove exactly what you ask for. I am not sure what the difficulty here is, but this is nothing new here. Sep 23, 2020 at 21:16
• In the material, rotation and translation are treated differently..just that. If I say that I define angular momentum in a certain way, then I can go ahead and differentiate..that's not the issue here. I was just saying that there is no "proof" that the only motion possible about the centre of mass is a rotation Oct 1, 2020 at 18:31
The point is not that an unconstrained rigid body rotates around its COM. In reality, at any time $$t$$, a rigid body (constrained or not) rotates around any of its points with an angular velocity $$\boldsymbol \omega (t)$$.
Suppose $$3$$ generic points $$P_0, P_1, P_2$$ belong to the rigid body, and let $$P_0$$ be, by hypothesis, the center of rotation. Let $$\mathbf {r_1}$$ and $$\mathbf {r_2}$$ be the position vectors of $$P_1$$ and $$P_2$$ in a non-rotating frame where $$P_0$$ is the origin.
As the distances between the points don't change, the moduli of the position vectors are constant. That leads to: $$\mathbf {r_1}\cdot\mathbf {v_1} = \mathbf {r_2}\cdot\mathbf {v_2} = 0$$

The distance between points $$P_1$$ and $$P_2$$ also doesn't change: $$(\mathbf {r_1} - \mathbf {r_2})\cdot(\mathbf {v_1} - \mathbf {v_2}) = 0$$

From these equations we get: $$\mathbf {r_1}\cdot\mathbf {v_2} + \mathbf {r_2}\cdot\mathbf {v_1} = 0$$

We can define $$3$$ vectors $$\boldsymbol \omega_1$$, $$\boldsymbol \omega_2$$, and $$\boldsymbol \omega_3$$ such that: $$\mathbf {v_1} = \boldsymbol \omega_1 \times \mathbf {r_1}$$, $$\mathbf {v_2} = \boldsymbol \omega_2 \times \mathbf {r_2}$$, and $$\mathbf {v_1} - \mathbf {v_2} = \boldsymbol \omega_3 \times (\mathbf {r_1} - \mathbf {r_2})$$

$$\mathbf {r_2}\cdot\mathbf {v_1} = \mathbf{r_2}\cdot(\boldsymbol \omega_1 \times \mathbf {r_1}) = \boldsymbol \omega_1 \cdot (\mathbf {r_1}\times\mathbf {r_2})$$

$$\mathbf {r_1}\cdot\mathbf {v_2} = \mathbf{r_1}\cdot(\boldsymbol \omega_2 \times \mathbf {r_2}) = \boldsymbol \omega_2 \cdot (\mathbf {r_2}\times\mathbf {r_1}) = -\,\boldsymbol \omega_2 \cdot (\mathbf {r_1}\times\mathbf {r_2})$$

$$\mathbf {r_2}\cdot\mathbf {v_1} + \mathbf {r_1}\cdot\mathbf {v_2} = 0 \implies (\boldsymbol \omega_1 - \boldsymbol \omega_2) \cdot (\mathbf {r_1}\times\mathbf {r_2}) = 0 \implies \boldsymbol \omega_1 = \boldsymbol \omega_2$$

$$\mathbf {v_1} - \mathbf {v_2} = \boldsymbol \omega_1 \times \mathbf {r_1} - \boldsymbol \omega_2 \times \mathbf {r_2} = \boldsymbol \omega_1 \times \mathbf {r_1} - \boldsymbol \omega_1 \times \mathbf {r_2} = \boldsymbol \omega_1 \times (\mathbf {r_1} - \mathbf {r_2})$$

So, $$\boldsymbol \omega_1 = \boldsymbol \omega_2 = \boldsymbol \omega_3 = \boldsymbol \omega$$

The choice of $$P_1$$ and $$P_2$$ was arbitrary, so the conclusion is valid for the whole rigid body. The existence of a unique angular velocity implies that at each time $$t$$, all points of the rigid body rotate around a generic point of it, chosen arbitrarily as the origin of a non-rotating frame.
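A quick numerical check of these constraint equations (my sketch, not part of the original answer):

```python
import numpy as np

w  = np.array([0.0, 0.0, 1.7])    # a fixed angular velocity about P0
r1 = np.array([1.0, 0.0, 0.0])    # P1 relative to P0
r2 = np.array([0.3, 2.0, 0.0])    # P2 relative to P0

v1 = np.cross(w, r1)              # rigid-body velocity field: v = w x r
v2 = np.cross(w, r2)

print(np.dot(r1, v1), np.dot(r2, v2))   # 0.0 0.0  (each |r| is constant)
print(np.dot(r1, v2) + np.dot(r2, v1))  # 0.0      (distance P1-P2 is constant)
```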
If $$P_0$$ is the center of mass: $$m_1\mathbf {r_1} + m_2\mathbf {r_2} + ... + m_n\mathbf {r_n} = 0$$
Taking the derivative with respect to time: $$m_1\mathbf {v_1} + m_2\mathbf {v_2} + ... + m_n\mathbf {v_n} = 0$$. So the momentum of the body in a non-rotating frame attached to the COM is always zero. If the body is unconstrained, that means without external forces applied, its total momentum must be constant for any inertial frame:
$$\mathbf p_{tot} = m_1(\mathbf {v_1 + u}) + m_2(\mathbf {v_2 + u}) + ... + m_n(\mathbf {v_n + u}) = \sum m_i\mathbf v_i + \sum m_i\mathbf u = 0 + M\mathbf u$$
where $$\mathbf u$$ is the velocity of the COM with respect to the inertial frame, and $$M$$ the total mass of the body.
Conclusion: in that case the momentum of the rigid body can be taken as the product of its mass by the velocity of the COM.
• Thanks! So it means that we do not have to prove that the motion around centre of mass is a rotation right? We can take any point on the rigid body as the centre of rotation, But we stick to the centre of mass as we can decouple the translation of the body from its rotation if we do so. Jan 23 at 11:39
• @S_holmes Exactly. Jan 23 at 13:08
• How does $[(\boldsymbol{\omega_{2}} - \boldsymbol{\omega_{3}})\times \mathbf{r_{2}}]\cdot \mathbf{r_{1}} = 0$ imply that $\boldsymbol{\omega_{2}} = \boldsymbol{\omega_{3}}$? Using the scalar triple product identity, I see that the expression can be simplified to $(\boldsymbol{\omega_{2}} - \boldsymbol{\omega_{3}})\cdot(\mathbf{r_{2}} \times \mathbf{r_{1}})$ So the omegas are equal iff no three particles of the rigid body are collinear( which is obviously false) and the two operands of the dot product are not orthogonal for all $P_{0}, P_{1} , P_{2}$ in the rigid body. What am I missing here? Feb 22 at 17:56
• @Curious I edited the answer to be more clear. Feb 22 at 22:31
• @ClaudioSaspinski Where is the edit? I have a problem with the line “Making the dot product with $\mathbf{r_{1}}$, the LHS is zero which implies that $\boldsymbol{\omega_{2}} = \boldsymbol{\omega_{3}}$” Can you explicitly write the edit at the bottom of the answer as I can’t see any edits regarding that? Feb 23 at 5:59
For the rotation, I will take the sum of the torques about the center of mass; you obtain:
$$\vec{r}_{1c}\times \sum \vec F_1+\vec{r}_{2c}\times \sum \vec F_2+\vec{r}_{3c}\times \sum \vec F_3=\frac{d}{dt}\left(I\,\vec{{\omega}}\right)$$
with
$$\sum \vec F_1=\vec f+\vec f_{12}+\vec{f}_{13}$$ $$\sum \vec F_2=-\vec f_{12}+\vec{f}_{23}$$ $$\sum \vec F_3=-\vec f_{23}-\vec{f}_{13}$$
for a rigid body, the internal force terms $$~\vec{f}_{ij}$$ cancel in the torque sum (each action-reaction pair acts along the line joining the particles and contributes zero net torque)
you obtain
$$\vec{r}_{1c}\times \vec{f}=\vec\tau_{\text{CM}}=\frac{d}{dt}\left(I\,\vec{{\omega}}\right)$$
where $$I$$ is the inertia tensor of the rigid body taken at the COM and $$\vec\omega$$ the angular velocity of the body about the COM
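The cancellation of the internal terms can also be checked numerically. In the sketch below (toy positions and central pair forces, my own assumption rather than anything from the answer), every action-reaction pair acts along the line joining the two particles, so its two torque contributions cancel exactly:
import numpy as np

rng = np.random.default_rng(2)
r = rng.normal(size=(3, 3))               # particle positions relative to the COM
tau_internal = np.zeros(3)
for i in range(3):
    for j in range(i + 1, 3):
        f_ij = 1.7 * (r[j] - r[i])        # central force on particle i from particle j
        # torque of the pair about the COM: r_i x f_ij + r_j x (-f_ij)
        tau_internal += np.cross(r[i], f_ij) + np.cross(r[j], -f_ij)

print(tau_internal)                       # ~[0 0 0]: only external torques survive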
• I don't understand how you combined the terms in LHS to eliminate other force values. Like how can you combine cross products ? Sep 20, 2020 at 17:51
• I think that $a\times (b+c)=a\times b+a\times c$
– Eli
Sep 20, 2020 at 19:21
• But how did you apply it? I know I seem dumb, but I don't get it. The cross products are with $\vec r_{2c}$ and $\vec r_{3c}$, right? How did those terms disappear? Sep 23, 2020 at 18:21
• They disappear because, for a rigid body, the $~\vec{f}_{ij}$ terms cancel in action-reaction pairs
– Eli
Sep 23, 2020 at 18:31
• But how did you combine the LHS terms to get $\vec f_{ij}$ in the LHS? Could you please show me the intermediate steps? Oct 1, 2020 at 18:32
In some sense, the center of mass is defined as the point about which a pure torque will force a body to rotate, just as a force through the center of mass (and hence no net torque) forces the body to purely translate. You can see that those two statements are equivalent to each other, and proving one proves the other.
The root of all of this is the definition of momentum and angular momentum of a rigid body as a collection of particles that are fixed to each other. The center of mass is exactly the only point in space which de-couples the linear from the rotational momentum such that momentum describes the motion of the center of mass, and angular momentum the motion about the center of mass.
In this answer to Why does a body not rotate if force is applied on the centre of mass? I describe how the decomposition of the position (and hence motion) of each particle $$i$$ into the position of the center of mass $$\boldsymbol{r}_{\rm COM}$$ plus the relative position from the center of mass $$\boldsymbol{d}_i$$ allows us to use the simplification $$\sum_i m_i \boldsymbol{d}_i = \boldsymbol{0}$$ as the definition for the center of mass, and how this leads to the following expressions for momenta
• Linear Momentum $$\boldsymbol{p} = m \, \boldsymbol{v}_{\rm COM} \tag{1}$$
• Angular Momentum $$\boldsymbol{L}_{\rm COM} = \mathbf{I}_{\rm COM} \boldsymbol{\omega} \tag{2}$$
The important point from the above is that they are completely decoupled in the sense that momentum $$\boldsymbol{p}$$ does not depend on the rotation $$\boldsymbol{\omega}$$ and that angular momentum $$\boldsymbol{L}_{\rm COM}$$ does not depend on the motion of the center of mass $$\boldsymbol{v}_{\rm COM}$$.
Now forces and torques are the time derivatives of momentum and angular momentum, and they too are completely decoupled between linear and rotational motion only when expressed at the center of mass.
To see this mathematically, consider a short-lived strong force that causes an impulse in vector form $$\boldsymbol{J}= \int \boldsymbol{F} \, {\rm d}t$$ applied at some location $$\boldsymbol{r}$$ not at the center of mass. The effect is going to be an instantaneous change in motion in terms of $$\Delta \boldsymbol{v}_{\rm COM}$$ and $$\Delta \boldsymbol{\omega}$$ as a result of this impulse directly changing the momenta of the body.
• Linear motion $$\Delta \boldsymbol{v}_{\rm COM} = \tfrac{1}{m} \boldsymbol{J} \tag{3}$$
• Rotational motion $$\Delta \boldsymbol{\omega} = \mathbf{I}_{\rm COM}^{-1} (\boldsymbol{r} \times \boldsymbol{J}) \tag{4}$$
Notice that (3) is the inverse of (1) and (4) is the inverse of (2), since $$(\boldsymbol{r} \times \boldsymbol{J})$$ is the net moment of impulse about the center of mass, arising because the impulse is applied away from the center of mass.
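Here is a short numeric sketch of (3) and (4) with an assumed toy inertia tensor (the numbers are arbitrary; only the structure matters):
import numpy as np

m = 2.0
I_com = np.diag([0.4, 0.5, 0.6])          # inertia tensor at the COM (toy values)
J = np.array([0.0, 1.0, 0.0])             # impulse vector
r = np.array([0.5, 0.0, 0.0])             # point of application, measured from the COM

dv = J / m                                    # eq. (3): change of COM velocity
dw = np.linalg.solve(I_com, np.cross(r, J))   # eq. (4): change of angular velocity
print(dv, dw)

# The same impulse through the COM (r = 0): the rotational change vanishes.
print(np.linalg.solve(I_com, np.cross(np.zeros(3), J)))   # [0. 0. 0.]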
So to answer your question, when a force is applied away from the center of mass it causes a change in both linear and rotational motion, but if the same force goes through the center of mass (and hence $$\boldsymbol{r}=\boldsymbol{0}$$) then only the linear motion is affected.
Now consider a different case where the force is zero, $$\boldsymbol{J}=\boldsymbol{0}$$, but there is still a non-zero net moment of impulse $$\boldsymbol{\Gamma} \neq \boldsymbol{0}$$, making (3) read $$\Delta \boldsymbol{v}_{\rm COM} = \boldsymbol{0}$$ and (4) read $$\Delta \boldsymbol{\omega} = \mathbf{I}_{\rm COM}^{-1} \boldsymbol{\Gamma} \neq \boldsymbol{0}$$.
This is the case where the body starts to rotate while the center of mass does not change its motion. This can happen only when the net force is zero and the net torque is not.
http://math.stackexchange.com/questions/90967/question-about-estimating-a-summation | Let $t$ be a positive real number. With $x$ running over the standard lattice points in $\mathbb{R}^{2}$, is it true that $\sum_{|x| > t} |x|^{-5} = O(t^{-3})$? If so, why?
Is $x$ an integer? A positive integer? – Henry Dec 12 '11 at 23:46
I consider $x$ to be a lattice point in the standard lattice. I edited the question slightly so that it should make sense now. – THK Dec 12 '11 at 23:53
what dimension is your "standard lattice?" – yoyo Dec 12 '11 at 23:57
If $x$ ranges over the integers, the sum is $0$. – Brian M. Scott Dec 12 '11 at 23:58
@yoyo: The lattice points should be in $\mathbb{R}^{2}$, I have edited the question to reflect this. Thanks! – THK Dec 13 '11 at 0:03
If we suppose that $t$ is bigger than $1$, say, then we can approximate half of our sum with the integral $\displaystyle \int_t^\infty \frac{dx}{x^5} = \frac{1}{4 t^4}$, and $\displaystyle\frac{1}{t^4} \in O\left(\frac{1}{t^3}\right)$.
Do what I did for one dimension above, but in two dimensions. $\displaystyle \int_0^{2 \pi} \int_t^\infty \frac{1}{r^5} r dr d\theta$ will do if you like polar coordinates.
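Spelling out that polar integral (a one-line computation added for completeness): $$\int_0^{2\pi}\int_t^\infty \frac{1}{r^5}\,r\,dr\,d\theta = 2\pi\int_t^\infty r^{-4}\,dr = \frac{2\pi}{3}\,t^{-3} = O(t^{-3}).$$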
The sum is over a lattice in $\mathbb R^2$, so you need a double integral to approximate it. – stopple Dec 13 '11 at 0:08
https://puzzling.stackexchange.com/questions/52767/mindless-counting/52784 | # Mindless counting
* image courtesy of wikipedia
Normal-punctuation-man walked into a bar where exclamation-point-man was serving:
- Fast, give that man a tea!
- I'm too hot. I need a cool drink, relax at the sea.
- For goodness sake, warm, cool, the two are the same! Drink up!
- No. Not similar.
- Hush! Before I tie a knot on your tongue!
- I see, great pick, sure!
The discussion strikes you as infinitely interesting. You sit there at the next table over and ponder the following:
$$X = \frac{\#oronym^{\#synonym} + \#homophone + \#homograph }{\#heterograph + \#heteronym + \#homonym} + \#contronym$$
* # counted in pairs, except contronyms and oronyms (oronyms only count in the direction multiple words -> single word)
** Edited riddle to remove a word I hadn't thought of. There may be more things I missed
What is X?
• Is "godness" simply a typo? – Gareth McCaughan Jun 19 '17 at 14:27
• Thanks for the "counted in pairs" clarification but it still isn't perfectly clear. Are we meant to count matches with words/phrases not actually present in the dialogue? E.g., "pick sure" ~ "picture" but the word "picture" doesn't appear here; should we be counting this as an oronym? – Gareth McCaughan Jun 19 '17 at 14:29
• (My feeling is that if we aren't restricting ourselves to pairs that both occur in the dialogue then the answer isn't going to be very well defined because it may depend on what obscure words one counts.) – Gareth McCaughan Jun 19 '17 at 14:30
• Clarified some more! Hm the restriction is that both the words exist in the dialogue e.g. the synonyms 'sky' and 'heaven' would both have to be included for them to add one to #synonyms – Adam Jun 19 '17 at 14:36
Answer in progress. First, you can simplify the calculations, because...
Some of the equation terms are actually reused. Homophones include homonyms and heterographs. Homographs include homonyms and heteronyms. (Source)
Next we start counting:
Oronyms (3):
(Must be multiple words from the source text that sound like a single word, so examples like "Before"→"be for" and "relax"→"real axe" are not counted.)
1. "man a tea" (manatee)
2. "I see" (icy)
3. "pick, sure" (picture)
Contronyms (1):
1. Fast ("moving quickly" versus "non-moving")
Synonym pairs (3):
1. sea, drink
2. hot, warm
3. tie, knot
Heterograph pairs (3):
1. too, two
2. sea, see
3. not, knot
Heteronym pairs (1):
1. sake, sake ("rice wine" versus "consideration")
Homonym pairs (1):
1. cool, cool ("relaxing and pleasant" versus "cold", although this is unclear from context)
Finally we perform calculations:
# Homophones = (1 Homonyms) + (3 Heterographs) = 4
# Homographs = (1 Homonyms) + (1 Heteronyms) = 2
$$X = \frac{3^{3} + 4 + 2 }{3 + 1 + 1} + 1 = \frac{38}{5} = 7.6$$
That's all I have for now. My answer for X is not an integer, so I doubt that it is the value that the author expected. It is possible that I misunderstood some parts of the puzzle.
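For anyone who wants to double-check the arithmetic, here is a throwaway snippet (the counts are the ones tallied above):
oronym, synonym, homophone, homograph = 3, 3, 4, 2
heterograph, heteronym, homonym, contronym = 3, 1, 1, 1
X = (oronym ** synonym + homophone + homograph) / (heterograph + heteronym + homonym) + contronym
print(X)   # 7.6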
• I'll give you the tick for that one, good job! The only synonyms I had were: hot-warm, and same-similar. That would've given X = 4. Otherwise it was exactly what I intended. – Adam Jun 20 '17 at 7:20
https://greprepclub.com/forum/25-percent-of-30-is-75-percent-of-what-number-9581.html
# 25 percent of 30 is 75 percent of what number?
25 percent of 30 is 75 percent of what number?
OA: 10
25%*30 = 75%*X --> X = 25*30/75 = 10
Explanation
Translate the question as 0.25(30) = 0.75x and solve on the calculator: x = 10.
Alternatively, write the percents in simplified fraction form and solve on paper:
$$\frac{1}{4}\cdot 30=\frac{3}{4}x$$
Multiplying both sides by $$4$$ gives $$3x=30$$, so $$x=10.$$
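A one-line check of the same translation in Python (a hypothetical snippet, not from the original post):
x = 0.25 * 30 / 0.75
print(x)   # 10.0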
https://pypsa-eur.readthedocs.io/en/latest/index.html | # PyPSA-Eur: An Open Optimisation Model of the European Transmission System
PyPSA-Eur is an open model dataset of the European power system at the transmission network level that covers the full ENTSO-E area.
It contains alternating current lines at and above 220 kV voltage level and all high voltage direct current lines, substations, an open database of conventional power plants, time series for electrical demand and variable renewable generator availability, and geographic potentials for the expansion of wind and solar power.
The model is suitable both for operational studies and generation and transmission expansion planning studies. The continental scope and highly resolved spatial scale enables a proper description of the long-range smoothing effects for renewable power generation and their varying resource availability.
The restriction to freely available and open data encourages the open exchange of model data developments and eases the comparison of model results. It provides a full, automated software pipeline to assemble the load-flow-ready model from the original datasets, which enables easy replacement and improvement of the individual parts.
PyPSA-Eur is designed to be imported into the open toolbox PyPSA for which documentation is available as well.
This project is currently maintained by the Department of Digital Transformation in Energy Systems at the Technische Universität Berlin. Previous versions were developed within the IAI at the Karlsruhe Institute of Technology (KIT) and by the Renewable Energy Group at FIAS to carry out simulations for the CoNDyNet project, financed by the German Federal Ministry for Education and Research (BMBF) as part of the Stromnetze Research Initiative.
A version of the model that adds building heating, transport and industry sectors to the model, as well as gas networks, is currently being developed in the PyPSA-Eur-Sec repository.
Getting Started
Configuration
Rules Overview
References
# Warnings
Please read the limitations section of the documentation and paper carefully before using the model. We do not recommend to use the full resolution network model for simulations. At high granularity the assignment of loads and generators to the nearest network node may not be a correct assumption, depending on the topology of the underlying distribution grid, and local grid bottlenecks may cause unrealistic load-shedding or generator curtailment. We recommend to cluster the network to a couple of hundred nodes to remove these local inconsistencies.
# Learning Energy System Modelling
If you are (relatively) new to energy system modelling and optimisation and plan to use PyPSA-Eur, the following resources are one way to get started in addition to reading this documentation.
• Documentation of PyPSA, the package for simulating and optimising modern power systems which PyPSA-Eur uses under the hood.
• Course on Energy System Modelling, Karlsruhe Institute of Technology (KIT), Dr. Tom Brown
# Citing PyPSA-Eur
If you use PyPSA-Eur for your research, we would appreciate it if you would cite the following paper:
@article{PyPSAEur,
author = "Jonas Hoersch and Fabian Hofmann and David Schlachtberger and Tom Brown",
title = "PyPSA-Eur: An open optimisation model of the European transmission system",
journal = "Energy Strategy Reviews",
volume = "22",
pages = "207 - 215",
year = "2018",
issn = "2211-467X",
doi = "10.1016/j.esr.2018.08.012",
eprint = "1806.01613"
}
If you want to cite a specific PyPSA-Eur version, each release of PyPSA-Eur is stored on Zenodo with a release-specific DOI. This can be found linked from the overall PyPSA-Eur Zenodo DOI:
# Pre-Built Networks as a Dataset
There are pre-built networks available as a dataset on Zenodo as well for every release of PyPSA-Eur.
The included .nc files are PyPSA network files which can be imported with PyPSA via:
import pypsa
filename = "elec_s_1024_ec.nc" # example
n = pypsa.Network(filename)
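Once loaded, the network's components are ordinary pandas DataFrames. The following is a minimal sketch of how one might inspect such a pre-built network; the column names follow general PyPSA conventions and are an assumption here, not taken from this page:
import pypsa

n = pypsa.Network("elec_s_1024_ec.nc")              # example file, as above
print(n.buses.shape)                                 # number of network nodes
print(n.lines.s_nom.sum())                           # total transmission capacity (MVA)
print(n.generators.groupby("carrier").p_nom.sum())   # installed capacity per technology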
# Licence
PyPSA-Eur is released under multiple licences:
• All original source code is licensed as free software under MIT.
• Configuration files are mostly licensed under CC0-1.0.
• Data files are licensed under CC-BY-4.0.
See the individual files and the dep5 file for license details.
Additionally, different licenses and terms of use also apply to the various input data, which are summarised below. More details are included in the description of the data bundles on zenodo.
Files | BY | NC | SA | Mark Changes | Detail
---|---|---|---|---|---
eez/* | x | x | x | | http://www.marineregions.org/disclaimer.php
https://mathspace.co/textbooks/syllabuses/Syllabus-408/topics/Topic-7234/subtopics/Subtopic-96661/?activeTab=interactive | # Simplify numerical square and cube root expressions
## Interactive practice questions
What is the largest square number that divides exactly into $75$?
Easy
Approx a minute
Consider the expression $\sqrt{19}$.
https://www.beatthegmat.com/data-sufficiency-f7.html
## Data Sufficiency
Topics Posts Last Post
### Sticky:WANT BeatTheGMAT TO PAY FOR YOUR GMAT?
by: beatthegmat
0 Wed Sep 20, 2017 9:25 am
by: beatthegmat
### Sticky:OFFICIAL GUIDE QUESTION LIBRARY
by: beatthegmat
0 Wed Sep 20, 2017 9:23 am
by: beatthegmat
### Is ∣x∣<1 ?
by: M7MBA
2 Fri Apr 20, 2018 6:29 am
by: Vincen
### Does x = 3?
by: M7MBA
2 Fri Apr 20, 2018 6:19 am
by: Vincen
### If x and y are integers, is x+y an even
1 Fri Apr 20, 2018 5:56 am
### What is the value of y?
by: M7MBA
1 Fri Apr 20, 2018 5:17 am
### If k is a positive two-digit integer, what is the tens
by: M7MBA
1 Fri Apr 20, 2018 4:42 am
by: GMATGuruNY
### If a > 0 and b > 0, is a/b > b/a ?
by: M7MBA
1 Fri Apr 20, 2018 4:30 am
by: GMATGuruNY
### Is kw>0?
by: M7MBA
1 Fri Apr 20, 2018 4:22 am
by: GMATGuruNY
### Each of three consecutive positive integers is less than 100
1 Fri Apr 20, 2018 12:40 am
### When a and b are positive integers, what is the greates
1 Thu Apr 19, 2018 2:49 am
by: GMATGuruNY
### Is x>0?
2 Thu Apr 19, 2018 1:39 am
### If q is a member of the set {21, 22, 24, 25, 26, 27}
by: VJesus12
2 Wed Apr 18, 2018 7:14 am
by: Vincen
### What was the percent increase in the value of a certain
by: VJesus12
2 Wed Apr 18, 2018 5:27 am
### Is the total of the sales prices of 3 products greater than
1 Wed Apr 18, 2018 3:45 am
### In a certain coding scheme, each word is encoded by
by: VJesus12
1 Wed Apr 18, 2018 3:40 am
by: GMATGuruNY
### What is the value of the sum of a list of n odd integers?
by: VJesus12
1 Wed Apr 18, 2018 3:08 am
by: GMATGuruNY
### What is the remainder when k^2 is divided by 8?
by: Gmat_mission
3 Tue Apr 17, 2018 3:40 pm
### How many miles long is the route from Danton to Griggston?
by: M7MBA
2 Mon Apr 16, 2018 4:19 pm
### Students in a class are arranged to form groups of 4
by: M7MBA
2 Mon Apr 16, 2018 4:18 pm
### Is x>0?
4 Mon Apr 16, 2018 3:44 pm
### On Wednesday morning a fortune cookie machine ran
by: M7MBA
2 Mon Apr 16, 2018 9:00 am
### When a positive integer n is divided by 5, the remainder is
3 Sun Apr 15, 2018 5:30 pm
### If the elements of set X are a, b, c and d, is the average (
2 Sun Apr 15, 2018 5:29 pm
### Is standard deviation of Set A > standard deviation of Se
by: GMATinsight
2 Sun Apr 15, 2018 8:08 am
### What is the area of the circle circumscribing triangle ABC?
by: GMATinsight
1 Sun Apr 15, 2018 8:04 am
by: deloitte247
### Is x > y?
by: VJesus12
2 Sat Apr 14, 2018 4:48 pm
### Is positive integer n - 1 a multiple of 3?
by: Gmat_mission
1 Sat Apr 14, 2018 7:16 am
### What is the sum of five numbers?
by: Gmat_mission
1 Sat Apr 14, 2018 6:17 am
by: Vincen
### Is k the square of an integer?
by: Gmat_mission
1 Sat Apr 14, 2018 6:10 am
by: Vincen
### Does ab = a^2
by: Gmat_mission
1 Sat Apr 14, 2018 3:40 am
by: GMATGuruNY
### p>q, Is q negative?
by: VJesus12
2 Sat Apr 14, 2018 2:56 am
by: GMATGuruNY
### If mn < np < 0, is n < 1?
by: M7MBA
1 Fri Apr 13, 2018 5:28 pm
### Each of the 50 students participating in a
by: NandishSS
1 Fri Apr 13, 2018 8:34 am
by: GMATGuruNY
### In a survey of 200 college graduates, 30 percent said
by: VJesus12
2 Fri Apr 13, 2018 7:30 am
### If x > y, is x > 6 ? (1) (x - 7)(y - 7) = 0 (2) x >
by: M7MBA
3 Fri Apr 13, 2018 7:19 am
### If a, x, y, and z are integers greater than zero, are
by: VJesus12
1 Fri Apr 13, 2018 5:16 am
by: Vincen
### Events A, B, C and D are all possible outcomes of an experim
1 Fri Apr 13, 2018 1:28 am
### Is ab < 0 ?
by: VJesus12
1 Thu Apr 12, 2018 9:38 pm
### CO ordiates !
by: vishubn
6 Thu Apr 12, 2018 3:48 pm
### If 2^(x + y) = 4^8, then
by: jjjinapinch
3 Thu Apr 12, 2018 3:41 pm
### Is x^2-y^2>x+y?
7 Wed Apr 11, 2018 1:59 am
by: GMATGuruNY
### Set X consists of 100 numbers. The average (arithmetic mean)
by: gmatquant25
3 Tue Apr 10, 2018 4:37 pm
### Last school year, each of the 200 students
by: jjjinapinch
4 Tue Apr 10, 2018 2:19 pm
### By what percent was the price of a certain television set di
by: NandishSS
6 Tue Apr 10, 2018 10:26 am
### OG In a certain classroom, there are 80 books
1 Mon Apr 09, 2018 3:20 pm
### Help - Tired of missing this type of question
8 Mon Apr 09, 2018 3:06 pm
### xy plane
by: beater
3 Mon Apr 09, 2018 2:59 pm
### OG2015 DS If n + k = m
by: lionsshare
4 Mon Apr 09, 2018 5:44 am
### What is the units digit of 3^n?
by: M7MBA
2 Mon Apr 09, 2018 2:49 am
http://research.microsoft.com/apps/catalog/default.aspx?p=1&sb=no&ps=25&t=publications&sf=&s=&r=&vr=&ra=47204 | Our research
Publications in Software development: 1–25 of 449
Geographically distributed systems often rely on replicated eventually consistent data stores to achieve availability and performance. To resolve conflicting updates at different replicas, researchers and practitioners have proposed specialized consistency protocols, called replicated data types, that implement objects such as registers, counters, sets or lists. Reasoning about replicated data types has however not been on par with comparable work on abstract data types and concurrent data types, lacking...
##### Publication details
Date: 22 January 2014
Type: Inproceedings
Publisher: ACM SIGPLAN
It is often the case that increasing the precision of a program analysis leads to worse results. It is our thesis that this phenomenon is the result of fundamental limits on the ability to use precise abstract domains as the basis for inferring strong invariants of programs. We show that bias-variance tradeoff, an idea from learning theory, can be used to explain why more precise abstractions do not necessarily lead to better results and also provides practical techniques for coping with such limitations....
##### Publication details
Date: 1 January 2014
Type: Inproceedings
Publisher: ACM
Symbolic Automata extend classical automata by using symbolic alphabets instead of finite ones. Most of the classical automata algorithms rely on the alphabet being finite, and generalizing them to the symbolic setting is not a trivial task. In this paper we study the problem of minimizing symbolic automata. We formally define and prove the basic properties of minimality in the symbolic setting, and lift classical minimization algorithms (Huffman-Moore’s and Hopcroft’s algorithms) to symbolic automata....
##### Publication details
Date: 1 January 2014
Type: Inproceedings
Publisher: ACM
The program verification tool SLAyer uses abstractions during analysis and relies on a solver for reachability to refine spurious counterexamples. In this context, we extract a reachability benchmark suite and evaluate methods for encoding reachability properties with heaps using Horn clauses over linear arithmetic. The benchmarks are particularly challenging and we describe and evaluate pre-processing transformations that are shown to have significant effect.
##### Publication details
Date: 15 December 2013
Type: Inproceedings
Publisher: Springer
Contracts are a simple yet very powerful form of specification. They consist of method preconditions and postconditions, of object invariants, and of assertions and loop invariants. Ideally, the programmer will annotate all of her code with contracts which are mechanically checked by some static analysis tool. In practice, programmers only write few contracts, mainly preconditions and some object invariants. The reason for that is that other contracts are "clear from the code": Programmers do not like...
##### Publication details
Date: 1 November 2013
Type: Inproceedings
Publisher: ACM
As online services become more and more popular, incident management has become a critical task that aims to minimize the service downtime and to ensure high quality of the provided services. In practice, incident management is conducted through analyzing a huge amount of monitoring data collected at runtime of a service. Such data-driven incident management faces several significant challenges such as the large data scale, complex problem space, and incomplete ...
##### Publication details
Date: 1 November 2013
Type: Inproceedings
Publisher: 28th IEEE/ACM International Conference on Automated Software Engineering
In this tutorial I will introduce CodeContracts, the .NET solution for contract specifications. CodeContracts consist of a language and compiler-agnostic API to express contracts, and of a set of tools to automatically generate the documentation and to perform dynamic and static verification. The CodeContracts API is part of .NET since v4, the tools are available for download on the Visual Studio Gallery. To date, they have been downloaded more than 100,000 times.
##### Publication details
Date: 1 November 2013
Type: Inproceedings
Publisher: ACM
In this paper, we present the results from two surveys related to data science applied to software engineering. The first survey solicited questions that software engineers would like to ask data scientists to investigate about software, software processes and practices, and about software engineers. Our analysis resulted in a list of 145 questions grouped into 12 categories. The second survey asked a different pool of software engineers to rate the 145 questions and identify the most important ones to...
##### Publication details
Date: 28 October 2013
Type: TechReport
Publisher: Microsoft Research
Number: MSR-TR-2013-111
We present a new semantics sensitive sampling algorithm for probabilistic programs, which are “usual” programs endowed with statements to sample from distributions, and condition executions based on observations. Since probabilistic programs are executable, sampling can be performed by repeatedly executing them. However, in the case of programs with a large number of random variables and observations, naive execution does not produce high quality samples, and it takes an intractable number of samples in...
##### Publication details
Date: 1 October 2013
Type: TechReport
Number: MSR-TR-2013-109
Three-valued models, in which properties of a system are either true, false or unknown, have recently been advocated as a better representation for reactive program abstractions generated by automatic techniques such as predicate abstraction. Indeed, for the same cost, model checking three-valued abstractions, also called may/must abstractions, can be used to both prove and disprove any temporal-logic property, whereas traditional conservative abstractions can only prove universal properties. Also,...
##### Publication details
Date: 1 October 2013
Type: TechReport
Publisher: NATO
Number: MSR-TR-2013-104
A great deal of effort has been spent on both trying to specify software requirements and on ensuring that software actually matches these requirements. A wide range of techniques that includes theorem proving, model checking, type-based analysis, static analysis, runtime monitoring, and the like have been proposed. However, in many areas adoption of these techniques remains spotty. In fact, obtaining a specification or a precise notion of correctness is in many cases quite elusive. For many tasks, even...
##### Publication details
Date: 26 September 2013
Type: TechReport
Number: MSR-TR-2013-94
In order to understand the questions that software engineers would like to ask data scientists about software, the software process, and about software engineering practices, we conducted two surveys: the first survey solicited questions and the second survey ranked a set of questions. Our analysis resulted in a catalog of 145 questions grouped into 12 categories as well as a ranking of the importance of each question. This technical report contains the survey text as well as the complete list of 145...
##### Publication details
Date: 14 September 2013
Type: TechReport
Publisher: Microsoft Research
Number: MSR-TR-2013-84
We present a method for the analysis of functional properties of large-scale DNA strand displacement (DSD) circuits based on Satisfiability Modulo Theories that enables us to prove the functional correctness of DNA circuit designs for arbitrary inputs, and provides significantly improved scalability and expressivity over existing methods. We implement this method as an extension to the Visual DSD tool, and use it to formalize the behavior of a 4-bit square root circuit, together with the components used...
##### Publication details
Date: 1 September 2013
Type: Inproceedings
Publisher: Springer
Program verifiers that attempt to verify programs automatically pose the verification problem as the decision problem: "Does there exist a proof that establishes the absence of errors?" In this paper, we argue that program verification should instead be posed as the following decision problem: "Does there exist an execution that establishes the presence of an error?" We formalize the latter problem as Reachability Modulo Theories (RMT) using an imperative programming language parameterized by a...
##### Publication details
Date: 1 September 2013
Type: Inproceedings
Note: This is an updated article: a previous version of this article contained a wrong lemma and corresponding mistakes in various proofs of Section 5. We propose a programming model where effects are treated in a disciplined way, and where the potential side-effects of a function are apparent in its type signature. The type and effect of expressions can also be inferred automatically, and we describe a polymorphic type inference system based on Hindley-Milner style inference. A novel feature is that we...
##### Publication details
Date: 28 August 2013
Type: TechReport
Number: MSR-TR-2013-79
We present a new algorithm for Bayesian inference over probabilistic programs, based on data flow analysis techniques from the program analysis community. Unlike existing techniques for Bayesian inference on probabilistic programs, our data flow analysis algorithm is able to perform inference directly on probabilistic programs with loops. Even for loop-free programs, we show that data flow analysis offers better precision and better performance benefits over existing techniques. We also describe heuristics...
##### Publication details
Date: 1 August 2013
Type: Inproceedings
Publisher: ACM
Static analysis tools have experienced a dichotomy over the span of the last decade. They have proven themselves to be useful in many domains, but at the same time have not (in general) experienced any notable concrete integration into a development environment. This is partly due to the inherent complexity of the tools themselves, as well as due to other intangible factors. Such factors usually tend to include questions about the return on investment of the tool and the value the tool provides in a...
##### Publication details
Date: 1 August 2013
Type: Inproceedings
Publisher: ACM
We show how a test suite for a sequential program can be profitably used to construct a termination proof. In particular, we describe an algorithm TpT for constructing a termination proof of a program based on information derived from testing it. TpT iteratively calls two phases: (a) an infer phase, and (b) a validate phase. In the infer phase, machine learning, in particular, linear regression is used to efficiently compute a candidate loop bound for every loop in the program. These loop bounds are...
##### Publication details
Date: 1 August 2013
Type: Inproceedings
Publisher: ACM
Model checking and testing have a lot in common. Over the last two decades, significant progress has been made on how to broaden the scope of model checking from finite-state abstractions to actual software implementations. One way to do this consists of adapting model checking into a form of systematic testing that is applicable to industrial-size software. This chapter presents an overview of this strand of software model checking.
##### Publication details
Date: 1 August 2013
Type: TechReport
Number: MSR-TR-2013-80
Previous versions of a program can be a powerful enabler for program analysis by defining new relative specifications and making the results of current program analysis more relevant. In this paper, we describe the approach of differential assertion checking (DAC) for comparing versions of a program with respect to a set of assertions. DAC provides a natural way to write relative specifications over two programs. We introduce a novel modular approach to DAC by reducing it to single program checking...
##### Publication details
Date: 1 August 2013
Type: Inproceedings
Publisher: ACM
This paper describes a cross-version compiler validator and measures its effectiveness on the CLR JIT compiler. The validator checks for semantically equivalent assembly language output from various versions of the compiler, including versions across a seven-month time period, across two architectures (x86 and ARM), across two compilation scenarios (JIT and MDIL), and across optimizations levels. For month-to-month comparisons, the validator achieves a false alarm rate of just 2.2%. To help understand...
##### Publication details
Date: 1 August 2013
Type: Inproceedings
Publisher: ACM
One of the goals of software engineering research is to achieve generality: Are the phenomena found in a few projects reflective of others? Will a technique perform as well on projects other than the projects it is evaluated on? While it is common sense to select a sample that is representative of a population, the importance of diversity is often overlooked yet as important. In this paper, we combine ideas from representativeness and diversity and introduce a measure called sample coverage, defined as the...
##### Publication details
Date: 1 August 2013
Type: Inproceedings
Publisher: ACM
Program verification relies heavily on induction, which has received decades of attention in mechanical verification tools. When program correctness is best described by infinite structures, program verification is usefully aided also by co-induction, which has not benefited from the same degree of tool support. Co-induction is complicated to work with in interactive proof assistants and has had no previous support in dedicated program verifiers. This paper shows that an SMT-based program verifier can...
##### Publication details
Date: 12 July 2013
Type: TechReport
Publisher: Microsoft Research
Number: MSR-TR-2013-49
Symbolic Finite Transducers augment classic transducers with symbolic alphabets represented as parametric theories. Such extension enables succinctness and the use of potentially infinite alphabets while preserving closure and decidability properties. Extended Symbolic Finite Transducers further extend these objects by allowing transitions to read consecutive input elements in a single step. While when the alphabet is finite this extension does not add expressiveness, it does so when the alphabet is...
##### Publication details
Date: 1 July 2013
Type: Inproceedings
Publisher: Springer
Symbolic automata theory lifts classical automata theory to rich alphabet theories. It does so by replacing an explicit alphabet with an alphabet described implicitly by a Boolean algebra. How does this lifting affect the basic algorithms that lay the foundation for modern automata theory and what is the incentive for doing this? We investigate these questions here. In our approach we use state-of-the-art constraint solving techniques for automata analysis that are both expressive and efficient, even for...
##### Publication details
Date: 1 July 2013
Type: Inproceedings
Publisher: Springer
https://support.bioconductor.org/p/98173/ | oligo and GenericPDInfo
Aedin Culhane
Hi
I have an old array that is not annotated within Bioc annotation. I tried the affy and oligo packages, but am running into issues applying a non-standard (invariantset) normalization. We have good biological reasons to believe that total transcriptional counts should vary between samples, so I want to compare rma to other normalization approaches.
affy::expresso with invariantset falls over:
expresso(abatch,bgcorrect.method="none",normalize.method="invariantset",pmcorrect.method="pmonly",summary.method="liwong")
background correction: none
normalization: invariantset
PM/MM correction : pmonly
expression values: liwong
background correcting...done.
normalizing...
Error in smooth.spline(ref[i.set], data[i.set]) :
missing or infinite values in inputs are not allowed
I know the affy package is older, so I tried oligo. I built my own custom pd package. Both my own package and the brainarray MBNI annotation fail. Both of these have class GenericPDInfo, and the vignette and man pages do not address this case in much detail.
After searching this message board, I used target="mps1" (not included in the package documentation). Both oligo::rma and oligo::fitProbeLevelModel return a result; however, I wish to compare rma to non-standard approaches. oligo::summarize will not accept a "GenericFeatureSet", only a matrix or ff_matrix.
eSetTest <- oligo::summarize(eNormTest, method="medianpolish", verbose=TRUE)
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function 'summarize' for signature '"GenericFeatureSet"'
The following works:
eNormTest <- oligo::normalize(obatch_bgcorrected, method="invariantset", target="mps1")
eSetTest <- oligo::summarize(exprs(eNormTest), method="medianpolish", verbose=TRUE)
However, of course, exprs(eNormTest) discards the probe-level information, so I need to provide summarize with the probe-to-probeset mapping from the pd package (MBNI annotation).
I can start reading the code of the pd package and AnnotationDbi, but I am spending more time than I wanted trying to do something very simple. Maybe I am missing a really easy workaround and am wasting time needlessly.
Thanks
Aedin
James W. MacDonald (@james-w-macdonald-5106)
I don't think there is an easy way to do this (where 'easy' means 'a function exists that you can just use'), particularly since rma has everything hard-coded. But it wouldn't be that hard to roll your own. Something like this?
myrma <- function (object, background = TRUE, normalize = TRUE,
                   subset = NULL, target = "mps1")
{
    pmi <- pmindex(object, subset = subset, target = "mps0")
    pm0 <- exprs(object)[pmi, , drop = FALSE]
    if (background)
        pm0 <- backgroundCorrect(pm0, method = "rma")
    if (normalize)
        pm0 <- normalize(pm0, method = "invariantset")
    rownames(pm0) <- as.character(pmi)
    targetInfo <- oligo:::getMPSInfo(get(annotation(object)),
                                     substr(target, 4, 4), "fid", type = "pm")
    exprs <- basicRMA(pm0[as.character(targetInfo$fid), , drop = FALSE],
                      pnVec = targetInfo$man_fsetid, normalize = FALSE,
                      background = FALSE, verbose = TRUE)
    colnames(exprs) <- sampleNames(object)
    out <- new("ExpressionSet")
    slot(out, "assayData") <- assayDataNew(exprs = exprs)
    slot(out, "phenoData") <- phenoData(object)
    slot(out, "featureData") <- basicAnnotatedDataFrame(exprs, byrow = TRUE)
    slot(out, "protocolData") <- protocolData(object)
    slot(out, "annotation") <- slot(object, "annotation")
    if (!validObject(out))
        stop("Resulting object is invalid.")
    return(out)
}
Thanks a million, Jim. oligo:::getMPSInfo is what I was looking for. Since it's not exported in the NAMESPACE, there is no help page. Where can I find some additional info, for example a description of mps1, mps0, etc. (I see you use both below), and of the 'fields' parameter ("fid", etc.)? Aedin
There isn't at present much help. The GenericArray infrastructure is supposed to be a generalization of the code to allow things like the MBNI arrays (and any random future Array that Affy might foist upon us) to automatically be accommodated.
The old school way of doing things was to take all of Affy's library files and then generate tables in the DB that were named according to the data from Affy. As an example:
> dbListTables(db(pd.moex.1.0.st.v1))
[1] "chrom_dict" "core_mps" "extended_mps" "featureSet" "full_mps"
[6] "level_dict" "mmfeature" "pmfeature" "table_info" "type_dict"
Where all those xxx_mps tables map the probes to probesets, depending on the summarization level you wanted to use. But this requires each array type to have its own methods, so we have a profusion of methods for e.g., rma:
> showMethods(rma)
Function: rma (package oligo)
object="ExonFeatureSet"
object="ExpressionFeatureSet"
object="GeneFeatureSet"
object="GenericFeatureSet"
object="HTAFeatureSet"
object="SnpCnvFeatureSet"
The tables in a GenericPDInfoPackage are generic:
> dbListTables(con)
[1] "featureSet1" "mmfeature" "mps1mm" "mps1pm" "pmfeature"
[6] "table_info"
and if there are multiple summarization levels, you can simply add more featureSetK/mpsKpm/mpsKmm triplets to define the probe -> probeset mappings. So this stops the profusion of array types and methods, but it adds the inevitable question of 'so how are the mps1 and mps2 targets different for this array?', which will obviously have a profusion of its own, perhaps even worse?
The function getMPSInfo is intended to do a join between the featureSetK and mpsKpm tables (in your case featureSet1 and mps1pm) and return a data.frame that maps the fid (or what I conventionally call the probe ID) to the fsetid and man_fsetid, which are the probeset level IDs, in order to do the summarization.
And the mps0 target is sort of a joke, I imagine. Or maybe it's just a placeholder that has no inherent meaning as yet. The target argument isn't used, so you could put whatever you want and still get the right thing.
> showMethods(pmindex, class = "DBPDInfo", includeDefs=T)
Function: pmindex (package oligo)
object="DBPDInfo"
function (object, ...)
{
.local <- function (object, subset = NULL, target = NULL)
{
if (!is.null(subset))
warning("Subset not implemented (yet). Returning everything.")
tmp <- dbGetQuery(db(object), "select fid from pmfeature")[[1]]
sort(tmp)
}
.local(object, ...)
}
Thanks! I am very glad I didn't spend the next few hours trying to work this out by myself; I don't think I would have got too far. Thanks for being so responsive. If the oligo and brainarray folks could get together to agree on some standards, these great resources would be more accessible to the rest of us ;-) I'd happily buy the coffee/beer ;-) Aedin
Hi Jim,

To make your function "happy", I need to specify the oligo version of the functions: backgroundCorrect was calling a limma function, and normalize was calling an igraph function. It couldn't find basicAnnotatedDataFrame, as it's not exported. When I modified the following lines, it worked:

pm0 <- oligo::backgroundCorrect(pm0, method = "rma")
pm0 <- oligo::normalize(pm0, method = "invariantset")
slot(out, "featureData") <- oligo:::basicAnnotatedDataFrame(exprs, byrow = TRUE)

However, I have a second question: if I set background = FALSE in the function call, it seems to still perform background correction, which seems to happen as part of oligo:::normalize. What background correction is performed by oligo::normalize, and can I modify it?

Thanks, Aedin

On 7/17/17 13:54, James W. MacDonald [bioc] wrote:

> I don't think there is an easy way to do this (where 'easy' means 'a function exists that you can just use'), particularly since rma has everything hard-coded. But it wouldn't be that hard to roll your own. Something like this?
>
> myrma <- function (object, background = TRUE, normalize = TRUE,
>     subset = NULL, target = "mps1")
> {
>     pmi <- pmindex(object, subset = subset, target = "mps0")
>     pm0 <- exprs(object)[pmi, , drop = FALSE]
>     if (background)
>         pm0 <- backgroundCorrect(pm0, method = "rma")
>     if (normalize)
>         pm0 <- normalize(pm0, method = "invariantset")
>     rownames(pm0) <- as.character(pmi)
>     targetInfo <- oligo:::getMPSInfo(get(annotation(object)),
>         substr(target, 4, 4), "fid", type = "pm")
>     exprs <- basicRMA(pm0[as.character(targetInfo$fid), ,
>         drop = FALSE], pnVec = targetInfo$man_fsetid, normalize = FALSE,
>         background = FALSE, verbose = TRUE)
>     colnames(exprs) <- sampleNames(object)
>     out <- new("ExpressionSet")
>     slot(out, "assayData") <- assayDataNew(exprs = exprs)
>     slot(out, "phenoData") <- phenoData(object)
>     slot(out, "featureData") <- basicAnnotatedDataFrame(exprs,
>         byrow = TRUE)
>     slot(out, "protocolData") <- protocolData(object)
>     slot(out, "annotation") <- slot(object, "annotation")
>     if (!validObject(out))
>         stop("Resulting object is invalid.")
>     return(out)
> }
I don't understand your question. Are you saying that these lines:
if (normalize)
    pm0 <- normalize(pm0, method = "invariantset")
run regardless of normalize being TRUE or FALSE? That doesn't seem likely - a bug of that magnitude (e.g., R totally ignoring that if statement) would have been caught by now.
Or do I misunderstand your question?
Hi Jim, ignore the last question; I was incorrect. Aedin | 2021-09-16 18:19:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41048234701156616, "perplexity": 14255.4897885506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053717.37/warc/CC-MAIN-20210916174455-20210916204455-00187.warc.gz"}
https://jns.usst.edu.cn/html/2022/4/20220410.html | Journal of University of Shanghai for Science and Technology, 2022, Vol. 44, Issue (4): 381-387
Matching of heating water tank and thermal storage tank in photovoltaic thermal-heat pump system
QU Minglu, YAN Nannan, WANG Haiyang, LU Mingqi
School of Environment and Architecture, University of Shanghai for Science and Technology, Shanghai 200093, China
Abstract: To study the size matching of the heating water tank and the thermal storage tank in a photovoltaic thermal-heat pump system, a simulation model of the system was established using the TRNSYS software. The model was tested against measured data, and the errors were within reasonable bounds, indicating that the simulation model is accurate and dependable. Thermal storage tanks sized 30, 60, and 90 L/m2 were each matched with heating water tanks of 100, 200, and 300 L capacity, and an annual operating-performance simulation was conducted with the system model. The annual exergy loss was chosen as the evaluation index. Simulation results show that the annual exergy loss of the system is smallest when the storage tank capacity is 60 L/m2, for all three heating-tank capacities. As a result, a thermal storage tank sized 60 L/m2 is advised to match each capacity of heating water tank.
Key words: solar energy; photovoltaic thermal; heat pump; TRNSYS software; exergy loss; performance simulation
1 Experimental system
Fig. 1 Photovoltaic thermal-heat pump system diagram
2 Analysis of the comprehensive efficiency of the system
$E_{\text {l}}=\left(P_{{\rm{W}}}+P_{{\rm{H}}}+E_{\text {s}}\right)-\left(P_{0}+E_{{\rm{X}}, {\rm{C}}}\right)$ (1)
$E_{{\rm{X}}}=Q_{{\rm{H}}}\left(1-\frac{T_{0}}{T}\right)$ (2)
$E_{{\rm{C}}}=Q_{{\rm{e}}}\left(1-\frac{T_{0}}{T_{{\rm{c}}}}\right)$ (3)
$E_{\text {s}}=Q_{\text {s}}\left[1-\frac{4}{3} \frac{T_{0}}{T_{{\rm{s}}}}+\frac{1}{3}\left(\frac{T_{0}}{T_{{\rm{s}}}}\right)^{4}\right]$ (4)
The photoelectric conversion efficiency of the PV/T module, $n_{\rm{P}}$, is the ratio of the photoelectric output power to the effective solar thermal energy; the heating capacity of the heat pump, $Q_{\rm{H}}$, is the heat gained by the circulating water between the inlet and outlet pipes on the heating side.
$n_{{\rm{P}}}=W_{{\rm{P}}} / Q_{\text {s}}$ (5)
$Q_{{\rm{H}}}=m_{{\rm{L}}} c\left(T_{{\rm{out}}}-T_{{\rm{in}}}\right)$ (6)
$C O P=Q_{{\rm{H}}} /W$ (7)
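As a numeric illustration of Eqs. (6)-(7), here is a short R sketch; the flow rate, temperatures, and power input are assumptions for illustration, not data from the paper, and c is the specific heat of water:

m_L   <- 0.2    # circulating water mass flow rate, kg/s (assumed)
c_w   <- 4.186  # specific heat of water, kJ/(kg.K)
T_out <- 45     # heating-side outlet water temperature, deg C (assumed)
T_in  <- 40     # heating-side inlet water temperature, deg C (assumed)
W     <- 1.0    # heat pump power input, kW (assumed)
Q_H <- m_L * c_w * (T_out - T_in)  # heating capacity, kW, Eq. (6)
COP <- Q_H / W                     # Eq. (7)
c(Q_H = Q_H, COP = COP)            # Q_H ~ 4.19 kW, COP ~ 4.19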
3 System performance simulation and analysis of results
Fig. 2 Simulation model of the photovoltaic thermal-heat pump system
3.1 Validation of the system simulation
Fig. 3 Meteorological parameters of the experiment
Fig. 4 Comparison of experimental and simulated data for the temperature of two water tanks
Fig. 5 Comparison of experimental and simulated data of power generation
3.2 Simulated operating conditions
3.3 Analysis of simulation results
Fig. 6 Monthly change in electrical efficiency of the system under different capacities
Fig. 7 Monthly operating energy consumption of heat pump units under different capacities
Fig. 8 Annual COP change of the system under different capacities
4 Conclusions
a. The larger the heating-tank capacity, the higher the system's annual average electrical efficiency and the larger the heat pump unit's annual average COP; however, in direct-supply mode, a larger heating tank takes longer to reach the set temperature.
b. Taking the annual exergy loss as the evaluation index of system operating performance, calculation of the system's annual exergy loss under heating tanks of different capacities shows that the best-matched thermal storage tank size is 60 L/m2 for all of the 100, 200, and 300 L heating-tank capacities; the system then performs best. | 2022-11-26 22:01:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5707815885543823, "perplexity": 14893.87556502989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446709929.63/warc/CC-MAIN-20221126212945-20221127002945-00740.warc.gz"}
http://mathoverflow.net/feeds/question/79420 | # Generalising Gelfand's spectral theory

Asked by Salvo Tringali on 2011-10-28 (last edited 2011-10-29); http://mathoverflow.net/questions/79420/generalising-gelfands-spectral-theory

This is primarily a request for references and advice.

**Question (edited on 10/29/2011).** What's known about comprehensive generalisations of Gelfand's spectral theory for unital [associative] normed algebras [over the real or complex field] (*)?

Here, a *generalisation* should be meant as a framework, say, with the following distinctive features (among others):

1. It should be founded on somehow different bases than the classical theory, especially to the extent that the notion of spectrum itself is no longer defined in terms of, and cannot be reduced to, the existence of inverses in some unital algebra.
2. It should recover (at least basic) notions and results from the classical theory for unital *Banach* algebras in some appropriate incarnation (more details on this point are given below), for which the "generalised spectrum" does reproduce the classical one.
3. It should be *insensitive* to completeness [under suitable mild hypotheses] in any setting where completeness is a well-defined notion (**), so yielding as a particular outcome that an element of a unital normed algebra $\mathfrak{A}$ shares the same spectrum as its image in the Banach completion of $\mathfrak{A}$.

(*) If useful to know, my absolute reference here is the (let me say) wonderful book by Charles E. Rickart: *General Theory of Banach Algebras* (Academic Press, 1970).

(**) At least in principle, the kind of generalisation that I have in mind is tailored to the properties of topological vector spaces, though I've worked it out only in the restricted case of *normed* spaces.

---

**Historical background.**

As acknowledged by Jean Dieudonné in his *History of Functional Analysis*, the notion of spectrum (along with the foundation of modern spectral theory) was first introduced by David Hilbert in a series of articles inspired by Fredholm's celebrated work on integral equations (the word *spectrum* seems to have been borrowed by Hilbert from an 1897 article by Wilhelm Wirtinger), in the effort of lifting properties and notions from matrix theory to the broader (and more abstract) framework of linear operators. Especially, this led Hilbert to the discovery of complete inner product spaces (what we call, today, Hilbert spaces in his honour). In 1906, Hilbert himself extended his previous analysis and discovered the continuous spectrum (already present, but not fully recognised, in earlier work by George William Hill in connection with his own study of periodic Sturm-Liouville equations).

A few years later, Frigyes Riesz introduced the concept of an algebra of operators in a series of articles culminating in a 1913 book, where Riesz studied, among other things, the algebra of bounded operators on the separable Hilbert space. In 1916 Riesz himself created the theory of what we nowadays call compact operators.
Riesz's spectral theorem was the basis for the definitive discovery of the spectral theorem for self-adjoint (and more generally normal) operators, which was simultaneously accomplished by Marshall Stone and John von Neumann in 1929-1932.

The year 1932 is another important date in this story, as it saw the publication of the very first monograph on operator theory, by Stefan Banach. The systematic work of Banach gave new impulse to the development of the field and almost surely influenced the later work of von Neumann on the theory of operator algebras (developed, partly with Francis Joseph Murray, in a series of articles starting from 1935). Then came the seminal work of Israil Gelfand (partly in collaboration with Georgi E. Shilov and Mark Naimark), who introduced Banach algebras (under the naming of *normed rings*) and elaborated the corresponding notion of spectrum, starting with a 1941 article in *Matematicheskii Sbornik* (http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=sm&paperid=6046&option_lang=eng).

Now, it is undoubtable that Gelfand's work has deeply influenced the subsequent development of spectral theory (and, accordingly, functional analysis). Yet, as far as I can understand in my own small way, something is still missing: something which may still be done, on the one hand, to clean up some inherent "defects" (or better, *fragilities*) of the classical theory and, on the other, to make it more abstract and, then, portable to different contexts.

**Naïve stuff.**

As I learned from an anonymous user on MO (http://mathoverflow.net/questions/24090/what-is-so-spectral-about-spectra/24107#24107), the term *spectrum*, in operator theory as well as in the context of normed algebras, is seemingly derived from the Latin verb *spècere* ("to see"), from which the root *spec-* of the Latin word spectrum ("something that appears, that manifests itself; a vision"). Furthermore, the suffix *-trum* in *spec-trum* may come from the Latin verb *instruo* (as in the English word "instrument", which follows in turn the Latin noun *instrumentum*). So the classical (or, herein, Gelfand) spectrum may really be considered, even from an etymological perspective, as a tool to inspect (or get improved knowledge of) some properties. I like to think of it as a sort of magnifying glass; we can move it through the algebra, zoom in and out on its elements, and get local information about them and/or global information about the whole structure.

Now, bearing in mind (some parts of) another thread on this board about "wrong" definitions in mathematics (http://mathoverflow.net/questions/31358/can-a-mathematical-definition-be-wrong), we are likely to agree that the worth of a notion is also measured by its *sharpness* (let me be vague on this point for the moment). And the classical notion of spectrum is, in fact, so successful because it is sharp in an appropriate sense, to the extent that it reveals deep underlying connections, say, between the algebraic and topological structures of a complicated object such as a *Banach* algebra (which is definitely magic, at least in my view).
On the other hand, what struck my curiosity is the observation that the same conclusion doesn't hold (at least not with the same consistency) if *Banach* algebras are replaced by arbitrary (i.e., possibly incomplete) *normed* algebras, where the spectrum of a given element $\mathfrak{a}$ can be scattered through the whole complex plane (in the complete case, as is well known, the spectrum is bounded by the norm of $\mathfrak{a}$, and indeed compact). So the question is: why does this happen? And my answer is: essentially because the classical notion of spectrum is *too algebraic*, though completeness can actually conceal its true nature and make us even forgetful of it, or at least convinced that it must not really be so algebraic (despite its own definition!) if it can dialogue so well with the topological structure. Yes, any normed algebra can be isometrically embedded (as a dense subalgebra) into a Banach one, but I don't think this makes a difference in what I'm trying to say, and it does not seriously explain anything. Clearly enough, the problem stems from the general failure of convergence of the Neumann series $\sum_{n=0}^\infty (k^{-1}\mathfrak{a})^n$ for $k$ an arbitrary scalar with modulus greater than the norm of $\mathfrak{a}$. And why is this? Because the convergence of such a Neumann series follows from the Cauchyness of its partial sums, which is not a sufficient condition for convergence when the algebra is incomplete. In my humble opinion, this is something like a "bug" in the classical vision, but above all an opportunity to get a better understanding of some facts.

**Motivations.**

In the end, my motivation for this long post is that I've seemingly developed (the basics of) something resembling a spectral theory for linear (possibly unbounded) operators between *different* normed spaces. To me, this stuff looks like a sharpening of the classical theory in that it removes some of its "defects" (including the one addressed above), and also as an abstraction since, on the one hand, it puts standard notions from the operator setting (such as those of eigenvalue, continuous spectrum, and approximate spectrum) on a somewhat different ground (so possibly foreshadowing further generalisations) and, on the other, it recovers familiar results (such as the closedness, boundedness, and compactness of the spectrum, as well as the fact that all points of the boundary are approximate eigenvalues) as special cases (while revealing some (unexpected?) dependencies).

Then, I'd really like to know what has already been done in these respects before putting everything in an appropriate form, submitting the results to any reasonable journal, and being answered, possibly after several months, that I've just reinvented the wheel. It would be really frustrating... Yes, of course, I've already asked around here (in Paris), but I've got nothing more concrete than contrasting (i.e., negative and positive) *feelings*. Also, I was advised to contact a few people, and I've done so with one of them some weeks ago (sending him something like a ten-page summary after checking his availability by an earlier email), but I've got no reply so far and indeed he seems to have disappeared... So I resolved to come here and consult the "oracle of MO" (as I enjoy calling this astounding place).
:-)

Thank you in advance for any help.

## Answer by Anatoly Kochubei (2011-10-29)

There exists an approach by Berezansky and Brasche to define a kind of generalized selfadjointness and to develop some eigenfunction expansions for operators between different Hilbert spaces. This is done within the theory of rigged Hilbert spaces. Their motivation was apparently different from yours: they wanted to cover the case of Schroedinger operators with distribution potentials. Nevertheless there can be some connection. See:

Yu. M. Berezansky and J. Brasche, Generalized selfadjoint operators and their singular perturbations, Methods Funct. Anal. Topol. 8, No. 4, 1-14 (2002);

Yu. M. Berezansky, J. Brasche, and L. P. Nizhnik, On generalized selfadjoint operators on scales of Hilbert spaces, Methods Funct. Anal. Topol. 17, No. 3, 193-196 (2011).

The second paper is available online: http://www.imath.kiev.ua/~mfat/ | 2013-05-24 13:44:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7759422659873962, "perplexity": 1144.382661700957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704662229/warc/CC-MAIN-20130516114422-00026-ip-10-60-113-184.ec2.internal.warc.gz"}
http://mathhelpforum.com/trigonometry/200775-vector-problem.html | 1. ## Vector problem
Hello,
I'm having trouble answering the following question. The answer(s) is given, but I still can't derive the end result. Given where this question sits in my book, I think the author may want you to return to it with more knowledge after the next few chapters, but I am too stubborn to move on until I have solved it.
Anyway:
A load of 4.32 N is lifted by two strings making angles of $18^{\circ}$ and $30^{\circ}$ with the vertical. If for this system the vectors representing the forces form a closed triangle when in equilibrium, calculate the tensions in the strings.
Answer: [1.8 N, 2.91 N]
Now, I've drawn this to scale and it is a scalene triangle inside two right-angled triangles. I've made the assumption that the vertical line will have a tension of 4.32 N because $\sin (90) = 1$. However, I can't see how to set up the equations in order to use the cosine or, conversely, the sine rule. My best attempt is the following (using trigonometric ratios in order to solve for one of the tensions in the ropes, the one corresponding to 18 degrees):
Let x, y and z equal a decrease in the adjacent, hypotenuse and opposite side respectively.
Then:
$\cos (\theta) = \frac {A}{H}$
$\cos (18) = \frac {4.32 - x}{\left (\frac {4.32}{\cos (18)}\right) - y}$
$4.32 - 0.951(y) = 4.32 - x$
$x = 0.951(y)\ or\ y= \frac{x}{0.951}$
$\sin (\theta) = \frac{O}{H}$
$\sin (18) = \frac {1.4 - z}{\left (\frac{4.32}{\cos(18)} - y \right)}$
$4.32\tan(18) - \sin (18)y = 1.4 - z$
$z = 0.309(y)$
$\tan (\theta) = \frac{O}{A}$
$\tan(18) = \frac {1.4 - z}{4.32 - x}$
Substituting for x and z in terms of one variable only:
$\tan(18) = \frac{1.4 - 0.309y}{4.32 - 0.951y}$
$1.4 - 0.309y = 1.4 - 0.309y$
$0 = 0.618y\ ?$
Sidenote: I can see how this end result looks - everything just cancelled.
Any help would be brilliant.
2. ## Re: Vector problem
vertical components of the two tensions are in equilibrium with the load's weight ...
$T_1 \cos(18) + T_2 \cos(30) = 4.32$
horizontal components of tension are equal and opposite in direction
$T_1 \sin(18) - T_2 \sin(30) = 0$
solve the system of equations ...
$T_1 = \frac{T_2 \sin(30)}{\sin(18)}$
$\frac{T_2 \sin(30)}{\sin(18)} \cdot \cos(18) + T_2 \cos(30) = 4.32$
$T_2 \left[\sin(30) \cot(18) + \cos(30) \right] = 4.32$
$T_2 = \frac{4.32}{\sin(30) \cot(18) + \cos(30)} \approx 1.8 N$
$T_1 \approx 2.91 N$
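For anyone who wants a quick numeric check of skeeter's system, here is a small R sketch (my own illustration, not from the thread; it solves the same two equilibrium equations as a 2x2 linear system, with angles converted to radians):

W  <- 4.32
a1 <- 18 * pi / 180   # angle of string 1 from the vertical
a2 <- 30 * pi / 180   # angle of string 2 from the vertical
A  <- matrix(c(cos(a1),  cos(a2),    # vertical equilibrium
               sin(a1), -sin(a2)),   # horizontal equilibrium
             nrow = 2, byrow = TRUE)
round(solve(A, c(W, 0)), 2)          # T1 = 2.91 N, T2 = 1.80 N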
3. ## Re: Vector problem
@ Skeeter
Aha, I see. I think my mechanics needs a service.
Thanks! | 2016-10-28 02:17:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6868829727172852, "perplexity": 555.6111328794151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721415.7/warc/CC-MAIN-20161020183841-00257-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/partitioning-an-infinite-set.794216/ | # Partitioning an infinite set
## Main Question or Discussion Point
A theorem on equivalence relations states that for any set S, the set of equivalence classes of S under an equivalence relation R constitutes a partition of S. Moreover, given any partition of a set, one can define an equivalence relation on the set.
What allows you to "create" a partition of a set, say the set of its equivalence classes? If the set is finite, then it is intuitively easy to see that a partition can be created, since the elements of the set must eventually be exhausted. But if the set is infinite, say Z, then what guarantees that we can create a partition of, say, the cosets of some subgroup of Z?
lavinia
Not sure what you mean by 'create', so here is an example for you.
Say that two integers are equivalent if their difference is an even number. This defines two equivalence classes, the odd and the even integers. Would you say that these two equivalence classes were "created"?
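(A quick R sketch of this partition, purely illustrative: split() groups the integers by their remainder mod 2, i.e. by equivalence class.)

x <- -5:5
split(x, x %% 2)   # $`0` = the even integers, $`1` = the odd integers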
Fredrik
The axiom schema of separation. It's an axiom schema of ZFC set theory. It says (roughly) that for each set S and each property P, there's a set $\{x\in S|P(x)\}$ whose elements are precisely those x that are elements of S and have property P.
The string P(x) that we interpret as "x has property P" is a statement about x, or to be more precise, about the set that the symbol "x" represents. That statement may be true for some values of x and false for other values of x. The elements of $\{x\in S|P(x)\}$ are those elements of S for which P(x) is a true statement. P(x) can for example be the statement that x is an equivalence class of elements of S.
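In symbols, one instance of the schema for each formula P (a standard rendering; the official statement also allows parameters in P):

$$\forall S \; \exists B \; \forall x \, \bigl( x \in B \leftrightarrow ( x \in S \wedge P(x) ) \bigr)$$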
WWGD
But didn't Russell show that this did not always create a set with his "Barber that shaves others only if he shaves himself"?
Fredrik
Russell's paradox shows up if we allow the set $\{x|P(x)\}$ for all properties P. (This is called the principle of comprehension). If we do, then $\{x|x\notin x\}$ is a set. Let's call it $R$. Now we can show that $R\in R\Leftrightarrow R\notin R$.
The axiom schema I mentioned doesn't have this problem. Define $R=\{x\in S|x\notin x\}$. Now $R\notin R$ doesn't imply that $R\in R$, because it's possible that $R\notin S$.
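Making that explicit (the standard argument): with $R=\{x\in S\mid x\notin x\}$, separation gives $$R\in R \;\Leftrightarrow\; \bigl(R\in S \wedge R\notin R\bigr).$$ The case $R\in R$ is immediately contradictory, so $R\notin R$; plugging that back into the equivalence then forces $R\notin S$. In other words, separation turns Russell's argument into a proof that no set contains every set.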
Right, the Russell paradox does not immediately show up, so ZFC has no known problems. However, we can never be sure there aren't any problems: maybe there's some contradiction lurking inside ZFC that we don't yet know about. In any case, we can never prove that no contradictions will ever show up (Gödel's theorem). So whether a set theory like ZFC is a good system to work with is an open question (and one that will likely remain open forever). For now it suffices, though.
WWGD
Ah, so we use type theory to disallow certain statements, so that $R \notin R$?
WWGD
So do we use type theory so that $R\in R\Leftrightarrow R\notin R$ cannot be derived?
Not at all. That is not the approach of ZFC. Statements like these are perfectly allowed. In other axiom systems, such as type theory or NF, we do not allow such statements. But in ZFC, this is perfectly fine. | 2020-04-01 09:37:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8402750492095947, "perplexity": 235.98838866325252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505550.17/warc/CC-MAIN-20200401065031-20200401095031-00384.warc.gz"}
http://physics.stackexchange.com/questions/29016/cyclists-electrical-tingling-under-power-lines/29036 | # Cyclist's electrical tingling under power lines
It's been happening to me for years. I finally decided to ask users who are better with "practical physics" when I was told that my experience – which I am going to describe momentarily – proves that I am a diviner, a psychic, a "sensibil" as we call it. The right explanation clearly needs some electrodynamics, although it's "everyday electrodynamics", and theoretical physicists are not trained to answer such questions quickly, even though each of us has probably solved many exercises that depend on the same principles.
When I am biking under power lines – which probably carry a high voltage – I feel clear tingling shocks near my buttocks and related parts of the body for a second or so when I am under a critical point of the power lines. It is a strong sensation, not a marginal one: it feels like a dozen ants stinging me at the same moment. It seems almost clear that some current is running through my skin at 50 Hz. I would like an estimate (with a calculation or justification) of the voltage, currents etc. going through my skin, and some comparison with the shock one gets from touching a power outlet.
Now,
• my bike that makes this effect particularly strong is a mountain bike, Merida;
• the speed is about 20 km/h and the velocity is perpendicular to the direction of the current in the power line;
• the seat has a hole in it and there is some metal – probably a conducting one – just a few centimeters away from the center of my buttocks. It's plausible that I am in touch with the metal – or near touch;
• my skin is kind of sweating during these events and the liquid isn't pure water so it's probably much more conductive than pure water;
• the temperature was 22 °C today, the humidity around 35%, clear skies, 10 km/h wind;
• the power lines may be between 22 kV and 1 MV, at 50 Hz; the height is tens of meters, but I don't really know exactly.
What kind of approximation for the electromagnetic waves is relevant? What is the field strength? How high a current does one need?
Does one need some amplification from interference etc. (special places) to make the effect detectable? (I only remember experiencing this effect at two places around Pilsen; the most frequent place where I feel it is near Druztová, Greater Pilsen, Czechia.)
Is the motion of the wheels or even its frequency important? Is there some resonance?
Does the hole in the seat and the metal play any role? Just in case you think that I am crazy: other people experience the effect too (although with different body parts), see e.g. here and here. This PDF file seems to suggest that the metal parts and electromagnetic induction are essential for the effect, but the presentation looks neither particularly comprehensive nor impartial enough.
An extra blog discussion on this topic is here:
http://motls.blogspot.com/2012/05/electric-shocks-under-high-voltage.html
-
What do you mean " the velocity is perpendicular to the current"? That you are crossing the line under the high tension line? Up to that point I thought you were biking in parallel. – anna v May 27 '12 at 4:40
Had this happened to me, I would have a) taken a small lamp, of the kind used in torches, and made a circuit from a metal part to my insulated hand, and watched whether there was light when crossing. If yes, I would put my voltmeter on its AC setting and measure again between the metal part and my body. If no light or current was seen, I would presume that the effect was on the physiology of my body (water mainly) and read up on that. Too many unknown parameters in the problem; it has to be sliced down. – anna v May 27 '12 at 4:46
p.s. does the effect stop if you stop at that point? – anna v May 27 '12 at 5:01
Have a look at this stopgeek.com/richard-boxs-light-field.html . also youtube.com/watch?v=cXhZvyGtMrk – anna v May 27 '12 at 7:37
Yes, Anna, it appears when I am crossing, but I suspect that if I were riding in parallel at the right place, it could be the same effect. And maybe not. Maybe there's some current running around the bike and the polarizations matter. ... I should make an experiment, like stopping at that point. But it has happened to me about 5 times in my life - although it's pretty safely guaranteed and regular with that bike - and it's unpleasant enough a feeling that I just don't want to repeat it! But maybe I will make the sacrifice at some point haha. – Luboš Motl May 27 '12 at 18:08
## 7 Answers
First, Field strength.
This calculation is strictly an electric potential calculation; radiation and induction are safely ignored at 50Hz.*
For a 200kV transmission line 20m above ground, the max electric field at ground level is about 1.2 kV/m.** This number is reduced from the naive 200kV/20m=10 kV/m calculation by two effects:
1) The ~1/r variation in the electric field (reduction to 3 kV/m). I used the method of images to calculate this field, with a 10 cm conductor diameter to keep the peak field below the 1MV/m breakdown field.
2) Cancellation from the other two power lines in this 3-phase system, which are at +/-120 degree electrical phases with respect to the first, and are physically offset in a horizontal line per the photo. I estimated 7m spacings between adjacent lines. The maximum E-field actually occurs roughly twice as far out as the outermost line; the field under the center conductor is lower.
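A rough numeric reconstruction of this estimate in R (my own sketch, not Art Brown's actual calculation: one conductor-plus-image pair per phase, 200 kV taken as the conductor-to-ground voltage as in the naive estimate above, h = 20 m, 7 m spacings, 10 cm conductor diameter):

eps0 <- 8.854e-12
V <- 200e3; h <- 20; a <- 0.05
lam <- 2 * pi * eps0 * V / log(2 * h / a)               # line charge, C/m
Ez  <- function(x) lam / (pi * eps0) * h / (x^2 + h^2)  # ground-level field of one phase
ph  <- exp(1i * c(0, -2, 2) * pi / 3)                   # electrical phases of the 3 conductors
xs  <- seq(-40, 40, by = 0.5)
E   <- sapply(xs, function(x0) abs(sum(ph * Ez(x0 - c(-7, 0, 7)))))
c(max_E = max(E), at_x = xs[which.max(E)])  # roughly 1 kV/m, peaking outside the outer line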
Next, Can you feel it?
1) The human body circuit model for electrostatic discharge is 100pF+1.5kohm; that's a gross simplification but better than nothing. If one imagined a 2m high network, the applied voltage results in a 50Hz current of about 70uA ($C \omega V$). Very small.
2) There will be an AC voltage difference between the (insulated) human and (insulated) bicycle. A 1m vertical separation between their centers of gravity would yield roughly 1200V. This voltage is rather small compared to some car-door-type static discharges, but it would still be sufficient to break down a short air gap (but not a couple cm), and would repeat at 100Hz. I imagine it would be noticeable in a sensitive part of the anatomy.
If the transmission voltage is actually 400 kV, all the field strengths and voltages would of course double.
(*) In response to a comment, here's an estimate of the neglected induction and radiation effects, courtesy of Maxwell 4 and 3:
Induction: Suppose a power line is carrying a healthy 1000A AC current (f=50 Hz). Then by Ampere's law, there is a circumferential AC magnetic field; at the wire-to-ground distance of 20 meters that field's amplitude is $10 \mu T$. (Compare with the earth's DC field of approximately 0.5 gauss, or $50 \mu T$.)
The flux of this magnetic field through a $1 m^2$ area loop (with normal parallel to the ground and perpendicular to the wire) is $\Phi = 10 \mu Wb$ AC. Then from Faraday's law, the voltage around the loop is $d \Phi /dt = 2 \pi f \Phi = 3 mV$ (millivolts). So much for induction.
One can also estimate the magnetic field resulting from the $1200 V/m$ ground-level AC electric field, which has an electric flux density $D =\epsilon_0 E = 10.6 nC/m^2$ and a displacement current density $\partial D / \partial t = 2 \pi f D = 3.3 \mu A/m^2$. The flux of this field through a $1 m$ square loop (parallel to the ground) is $3.3 \mu A$, so the average magnetic field around the square is $0.8 \mu A/m$, for a ridiculously small magnetic flux density of $1 pT$.
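(The same induction numbers, checked numerically in R under the assumptions above: 1000 A, 50 Hz, 20 m distance, a 1 m² loop.)

mu0 <- 4 * pi * 1e-7
B   <- mu0 * 1000 / (2 * pi * 20)   # ~1e-5 T = 10 uT
emf <- 2 * pi * 50 * B * 1          # flux through 1 m^2 gives ~3.1 mV
c(B_tesla = B, emf_volts = emf)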
(**) 1 Sep 2014 update. Dmytry very astutely points out in a comment that there will be local electric field intensification effects from conductive irregularities in the otherwise flat ground surface, such as our cyclist (who, being somewhat sweaty, will have a conductive surface). The same principle applies to lightning rods.
For the proverbial spherical cyclist, the local field will be increased by a factor of 3, independent of the sphere's size, as long as it's much less than the distance to the power line. It turns out that it doesn't matter whether the sphere is grounded or insulated, since its total charge remains 0.
For more elongated shapes the intensification can be much higher: for a grounded prolate spheroid with 10:1 dimensions, the multiplication factor is 50. This intensification of course enhances any sensation one might feel.
-
Excellent, you got a +1 already for your first sentence, a very useful first step... Can a millimeter air gap be replaced by a centimeter of slightly wet shorts? Or is the air gap needed literally, i.e. in between body hair? I am still not getting why 700 V is safe. Why is it unsafe to touch 230 V or 120 V power outlets then? – Luboš Motl May 28 '12 at 8:15
@Luboš Motl, 1) With a closed circuit (bridged by an ionic conductive liquid), you get continuous current, which I don't think would be as noticeable as a sudden spark across an air gap. 2) The difference from a power outlet is the energy (or current, or power) available. Again a gross simplification, but the 100pF human capacitor will only supply 10mW under these conditions, while, once current from a wall outlet has started to flow, it will supply $>10A$. I think about 10mA continuously along the right path through the body can be fatal. – Art Brown May 28 '12 at 17:06
Oh, I see, so it's really about a finite charge in a "capacitor" that limits how much I can get out of it... Thanks. I will actually vote your answer as the "real answer to my question" although there have obviously been many other, sometimes even relevant ideas here... – Luboš Motl May 28 '12 at 17:31
@Luboš Motl, Thanks, but I'm frankly puzzled about the actual sensation. I believe you feel it, but am not sure my answer is sufficient to explain it. Maybe someone else will have an idea... I found an error in my superposition of the other phases that raises the field strength to 800 V/m, and will update the answer accordingly. – Art Brown May 28 '12 at 17:37
@ArtBrown besides discharges there's another little-known effect: variation in friction of moving charged surfaces. This phenomenon was used in the Marconi era as a radio detector: a sort of motorized sliding capacitor drum. If pants and bicycle seat are sliding, and if metal in the seat+bike has significant AC volts relative to the human body, then a 2F (100Hz) vibration would be felt whenever the surfaces were sliding across each other. Test: does the odd sensation always vanish when gliding below the power lines wo/pedaling? – wbeaty Jul 5 '13 at 9:17
If the power line is 20 m high and has a voltage of 1 MV, then the electric field near the ground, very roughly, is on the order of 1000/30 kV/m ~ 30,000 V/m (the numbers are very approximate, and the field is complicated because it is a wire-near-a-plate scenario; the wire diameter is unknown, but not too small, or else the air would break down, i.e. spark over, near the wire).
You get charged to several tens of kilovolts relative to the bike, then you discharge through clothing, again and again: if the line is AC, because the voltage is alternating; if the line is DC, because the field changes magnitude as you move.
The fluorescent lights light up under power lines; the field is this strong.
http://www.doobybrain.com/2008/02/03/electromagnetic-fields-cause-fluorescent-bulbs-to-glow/
With regards to the current: as the current is pulsed (you get charged, then rapidly discharge through the air gap), it can be strong enough to be felt even if the average current is extremely small. The pulse current is the same as when you get zapped taking off clothing, or the like.
-
Right, thanks, +1, exactly, those 30 kV per meter which is huge even if one only gets a small remnant of it is something I am thinking about. Surprising that not too many people get killed in various situations under the power lines... – Luboš Motl May 27 '12 at 19:31
@LubošMotl Hi Lubos. I think that just kilovolts are not enough to kill you, there has to be enough current. I suspect that the 1/r^2 drop of radiated power is enough to avoid deadly currents, it must be the reason the lines are so high. – anna v May 27 '12 at 20:34
+1 Yes, this is the correct answer: eta.co.uk/2010/11/29/… – John McVirgo May 27 '12 at 20:41
Luboš Motl: There would not be enough zap. The physiological zap is a complicated function of pulse duration, current, and voltage. In this scenario the total charge that goes through the body on each zap is no bigger than if you get zapped taking off a sweater or stroking a cat (which can also generate several kilovolts), and the pulse duration is so short and energy so low that neither the current nor the voltage are relevant, but the total charge (integral of current by time). – Dmytry May 27 '12 at 21:41
When calculating the volts per meter of the static field, it's important to assume that the bicycle is conductive (presumably an aluminum frame).
Without the bicyclist, one would use image charges to calculate the electric field at the bicycle. The three phases should partially cancel, and Art Brown's calculation seems reasonable, around 1200 volts per meter.
By the way, there's an additional DC contribution; the atmosphere (on a fair-weather day) carries a field of about 60 to 100 volts per meter in summer and 300 to 500 volts per meter in winter. On days when this effect is large it may be possible to see more of an effect.
When you insert a vertical conductor into the electric field of 1200 volts per meter, the electric field near the ends of the conductor is much larger. To estimate the effect you need to guess the radius of the top end of the conductor. This depends on the seat construction; if the seat itself is metal then its radius is on the order of 0.1 meter.
To first order, a vertical pole placed in an electric field will end up with charges at its two ends. For a bicycle frame of height 1 m, the charges will be separated by about 1 m. Of course the charge required to cancel the background potential depends on the radii of the ends of the pole. (An infinitely sharp pole would create an infinite electric field, before taking into account dielectric breakdown of the air.)
To compute the electric field due to the bicycle frame, let's first say that the frame is 1m in height. Thus the two ends of the frame will have to carry voltages of +-600 volts with respect to the field produced by the overhead wires.
The actual electric field depends on how sharp the conductor is. Very sharp conductors have very large electric fields. Let's suppose that the bicycle seat has an effective radius of around 0.1 meters. What is the electric field at the seat?
Suppose that you have a point charge and that it produces a voltage of 600 volts at a radius of 0.1 meters, with 0 volts at infinity. What is the electric field at 0.1 meters? This is a question about the relationship between charge, potential and field. Some equations: $$V = \frac{1}{4\pi\epsilon_0}\frac{q}{r}$$ $$E = \frac{1}{4\pi\epsilon_0}\frac{q}{r^2}$$ From these, we see that $E = V/r$, so the field exceeds the potential by a factor of $1/r$ = 10/meter. Thus the field in the immediate vicinity of the bicycle seat is around $$600\;\; \textrm{volts} \times 10/\textrm{meter} = 6000 \;\;\textrm{volts / meter}.$$
It wouldn't surprise me that a sensitive part of the human anatomy could detect this electric field; it amounts to 60 volts per cm.
Most people have verified that if you touch your tongue to a 9 volt battery you can feel the shock. Now imagine a 50 volt battery jammed into your perspiring nether regions. This might very well feel like a lot of ants in your pants.
-
My approach would be to treat yourself like the plate of a parallel-plate capacitor. Make the following assumptions:
eps = 9e-12
A = surface area of you + bike ~ 1 square meter
d = distance to power line ~20 meters
V = 1000 kV
Then the current is I = C*dV/dt = (eps*A/d)*(2*pi*50)*V = 140 microamps.
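(The same estimate as a one-line R check, with the assumed numbers above:)

eps <- 9e-12; A <- 1; d <- 20; V <- 1e6; f <- 50
(eps * A / d) * (2 * pi * f) * V    # ~1.4e-4 A, i.e. about 140 microamps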
Now is it really possible to feel 140uA? According to the OSHA website, 1mA is the minimum current you can feel from your hand to your foot (http://www.osha.gov/SLTC/etools/construction/electrical_incidents/eleccurrent.html). So 140uA isn't that far off, and maybe you can make some argument about the current density being higher where it's funneled through the seat. More likely, your nerves are more sensitive in some areas of the body than others.
I highly doubt that at biking velocities there is any significant current from motion through the magnetic fields of the lines.
-
what about treating the metal parts of the bike as an antenna and the body shorting it? – anna v May 27 '12 at 9:23
Dear anna, something like what you say has to be right. Can one estimate it? What is the voltage that may be in the antenna? What is the electric field in the electromagnetic wave? When one multiplies it by one meter, one has to get the voltage that may be attached to the body. I am pretty sure that Brian's estimate is smaller by many, many orders of magnitude. – Luboš Motl May 27 '12 at 17:09
Lubos, if you think 100uA is many, many orders of magnitude below what you're feeling under power lines, then please stay away from physics labs! A quarter amp could kill you under the right circumstances. As for the antenna model, look up the 'Hertzian' or 'short dipole' formula, which in the near field (under the power lines) reduces simply to the Laplace equation of electrostatics - i.e. you are a capacitor. See antenna-theory.com/antennas/shortdipole.php – BrianC May 28 '12 at 2:49
I am not sure that the following is relevant, but maybe what you feel is caused by the action of electric field on the hair on your skin. I wrote elsewhere on this web-site about this effect: "the electric field polarizes, rather than charges, hair, and then acts on the resulting electric dipoles, judging by the formulas in: "Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, September 1-4, 2005", p. 4266. "Analysis of Body Hair Movement in ELF Electric Field Exposure", H. O. Shimizu, K. Shimizu. According to the formulas, it is essential that the electric field is not uniform. The authors claim good agreement with experimental results." It is also possible that, as others wrote here, metal parts of the bike modify the electric field, enhancing the effect.
-
It's very interesting but from my basic school years, I became convinced that the effect of static electricity on hair is only relevant if the hair is dry etc. This sensation on the bike only occurs near the buttocks and groin area, I don't have so much hair to rely upon, and they're wet because I kind of sweat, anyway. So I don't believe the static electricity is really too relevant here. – Luboš Motl May 27 '12 at 18:48
I don't know. Maybe we are talking about different phenomena: you are talking about effects related to charging of hair, whereas there is no charging in the mechanism described in the quoted article. – akhmeteli May 27 '12 at 18:56
Oh, I see, so this could also depend on my having body hair? ;-) – Luboš Motl May 27 '12 at 19:20
Well, body hair is ubiquitous anyway :-) – akhmeteli May 27 '12 at 19:43
As Dmytry & BrianC said, you are spanning about 2 m of a field gradient of about 5e4 V/m, i.e. roughly 10% of the line-to-ground drop.
What's more, most of you and the bike are practically shorting out that 10%, since you are either metal or brine. So what voltage there is drops across fairly thin insulators - tires & clothing.
The current might be in the range of 1e-6 amps, and if that were going through the salt water of your body, you might not feel it. But if it hits your skin as a spark, you probably will feel it.
-
Without calculating anything I can say that you are actually conducting electricity at 60 Hz. The current is too small to do harm, because the combined resistance of your body, the tires, and the air keeps it down. The salt in your perspiration does increase conductivity, and the metal bike in an alternating magnetic field does have a voltage induced in it, much as in a transformer. I have felt the same effects when working near 345 kV power lines and handling any metal object. If you held a metal pole in the air high enough on a wet day near a power line, it would kill you.
-
| 2014-11-26 11:40:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6271359324455261, "perplexity": 907.2305330014545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931006747.51/warc/CC-MAIN-20141125155646-00090-ip-10-235-23-156.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/finding-the-inverse-of-a-nasty-1-to-1-function.230343/
# Finding the inverse of a nasty 1-to-1 function.
jforeman83
Hey, everybody.
I have a function:
$$\int\limits_{x}^{x+c} \exp(-t^2)\, dt = y$$
c is a known constant here.
I am beating my head against the wall trying to find a good way to numerically evaluate the inverse here, i.e. I have y and c and I want to know x. I know that erf^-1 is readily available in Mathematica and Maple and the like, but the limits of integration here make this a bit nastier. Any ideas? I don't need a perfect evaluation; just a moderately good approximation will work.
Homework Helper
Gold Member
If you're interested only in values of x smaller than 1, then you can always Taylor expand exp(-t²) and integrate term by term and keep only the first 2 terms. I get y=c-xc+c²/2.
lzkelley
I think the rigorous method would be to use a Fourier inversion expansion; can't remember the details of how to do that - but with a smooth Gaussian function I think it's not that bad.
jforeman83
If you're interested only in values of x smaller than 1, then you can always Taylor expand exp(-t²) and integrate term by term and keep only the first 2 terms. I get y=c-xc+c²/2.
I wish I could say with certainty that this was the case, because you're right, that would be a good idea. Unfortunately, I think I'll need something that covers a wider range of values.
jforeman83
I think the rigorous method would be to use a Fourier inversion expansion; can't remember the details of how to do that - but with a smooth Gaussian function I think it's not that bad.
Do you have any recommendations on sources where I might read up on this?
ObsessiveMathsFreak
This function does not have a proper inverse as y(x)=y(-x-c)
I believe that the inverse may exist if you restrict x to be greater than zero.
Homework Helper
I think we can view this problem as a differential equation, and use some method of numerically solving ODE's such as the midpoint method or Runge-Kutta? Not 100% sure how it'll work out though.
daudaudaudau
If you only want to evaluate it numerically, then why don't you just use Newton-Raphson to determine when y(x) = K ?
Homework Helper
That involves evaluating the integral, which can't be done analytically, though I do know there are many tables of data for that particular function (the error function). So yes, I guess that method can do it numerically. Good idea =]
jforeman83
If you only want to evaluate it numerically, then why don't you just use Newton-Raphson to determine when y(x) = K ?
This is a fantastic idea. Thanks!
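For later readers, here is a minimal sketch of the Newton-Raphson approach suggested in this thread, using the closed form of the integral in terms of the error function. The starting guess, tolerance, and iteration cap are arbitrary choices.

```python
import numpy as np
from scipy.special import erf

def invert(y_target, c, x0=0.1, tol=1e-12, max_iter=100):
    """Solve integral_x^{x+c} exp(-t^2) dt = y_target for x (one branch only,
    since y(x) = y(-x-c) as noted above)."""
    # y(x) in closed form: (sqrt(pi)/2) * (erf(x+c) - erf(x))
    f = lambda x: 0.5 * np.sqrt(np.pi) * (erf(x + c) - erf(x)) - y_target
    # derivative via the fundamental theorem of calculus
    fprime = lambda x: np.exp(-(x + c) ** 2) - np.exp(-x ** 2)
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Round-trip check: recover x = 0.7 for c = 1
c, x_true = 1.0, 0.7
y = 0.5 * np.sqrt(np.pi) * (erf(x_true + c) - erf(x_true))
print(invert(y, c))  # ~0.7
```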
https://juliaapproximation.github.io/FastTransforms.jl/dev/generated/padua/
This demonstrates the Padua transform and inverse transform, explaining precisely the normalization and points.
using FastTransforms
We define the Padua points and extract Cartesian components:
N = 15
pts = paduapoints(N)  # matrix of Padua points (this definition was missing above)
x = pts[:,1]
y = pts[:,2];
nothing #hide
We take the Padua transform of the function:
f = (x,y) -> exp(x + cos(y))
f̌ = paduatransform(f.(x, y))  # Padua coefficients of f (this definition was missing above)
nothing #hide
and use the coefficients to create an approximation to the function $f$:
f̃ = (x,y) -> begin
j = 1
ret = 0.0
for n in 0:N, k in 0:n
ret += f̌[j]*cos((n-k)*acos(x)) * cos(k*acos(y))
j += 1
end
ret
end
#3 (generic function with 1 method)
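In mathematical form, the loop above evaluates the bivariate Chebyshev expansion (with the coefficients $\check{f}_j$ consumed in the same order as the double loop):

$$\tilde{f}(x,y) = \sum_{n=0}^{N} \sum_{k=0}^{n} \check{f}_j \cos\big((n-k)\arccos x\big)\, \cos\big(k \arccos y\big)$$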
At a particular point, is the function well-approximated?
f̃(0.1,0.2) ≈ f(0.1,0.2)
true
Does the inverse transform bring us back to the grid?
ipaduatransform(f̌) ≈ f̃.(x,y)
true
https://hyperedge.tech/2022/10/24/reduce-deep-learning-training-time-and-cost-with-mosaicml-composer-on-aws/
In the past decade, we have seen Deep learning (DL) science adopted at a tremendous pace by AWS customers. The plentiful and jointly trained parameters of DL models have a large representational capacity that brought improvements in numerous customer use cases, including image and speech analysis, natural language processing (NLP), time series processing, and more. In this post, we highlight challenges commonly reported specifically in DL training, and how the open-source library MosaicML Composer helps solve them.
## The challenge with DL training
DL models are trained iteratively, in a nested for loop. A loop iterates through the training dataset chunk by chunk and, if necessary, this loop is repeated several times over the whole dataset. ML practitioners working on DL training face several challenges:
• Training duration grows with data size. With permanently-growing datasets, training times and costs grow too, and the rhythm of scientific discovery slows down.
• DL scripts often require boilerplate code, notably the aforementioned double for loop structure that splits the dataset into minibatches and the training into epochs.
• The paradox of choice: several training optimization papers and libraries are published, yet it’s unclear which one to test first, and how to combine their effects.
In the past few years, several open-source libraries such as Keras, PyTorch Lightning, Hugging Face Transformers, and Ray Train have been attempting to make DL training more accessible, notably by reducing code verbosity, thereby simplifying how neural networks are programmed. Most of those libraries have focused on developer experience and code compactness.
In this post, we present a new open-source library that takes a different stand on DL training: MosaicML Composer is a speed-centric library whose primary objective is to make neural network training scripts faster via algorithmic innovation. In the cloud DL world, it’s wise to focus on speed, because compute infrastructure is often paid per use—even down to the second on Amazon SageMaker Training—and improvements in speed can translate into money savings.
Historically, speeding up DL training has mostly been done by increasing the number of machines computing model iterations in parallel, a technique called data parallelism. Although data parallelism sometimes accelerates training (not guaranteed because it disturbs convergence, as highlighted in Goyal et al.), it doesn’t reduce overall job cost. In practice, it tends to increase it, due to inter-machine communication overhead and higher machine unit cost, because distributed DL machines are equipped with high-end networking and in-server GPU interconnect.
Although MosaicML Composer supports data parallelism, its core philosophy is different from the data parallelism movement. Its goal is to accelerate training without requiring more machines, by innovating at the science implementation level. Therefore, it aims to achieve time savings which would result in cost savings due to AWS’ pay-per-use fee structure.
## Introducing the open-source library MosaicML Composer
MosaicML Composer is an open-source DL training library purpose-built to make it simple to bring the latest algorithms and compose them into novel recipes that speed up model training and help improve model quality. At the time of this writing, it supports PyTorch and includes 25 techniques (called methods in the MosaicML world) along with standard models, datasets, and benchmarks.
Composer is available via pip:
pip install mosaicml
Speedup techniques implemented in Composer can be accessed with its functional API. For example, the following snippet applies the BlurPool technique to a TorchVision ResNet:
import logging

import torchvision.models as models
from composer import functional as CF

logging.basicConfig(level=logging.INFO)
model = models.resnet50()
CF.apply_blurpool(model)
Optionally, you can also use a Trainer to compose your own combination of techniques:
from composer import Trainer
from composer.algorithms import LabelSmoothing, CutMix, ChannelsLast

trainer = Trainer(
    model=...,           # must be a composer.ComposerModel
    train_dataloader=...,
    max_duration="2ep",  # can be a time, a number of epochs or batches
    algorithms=[
        LabelSmoothing(smoothing=0.1),
        CutMix(alpha=1.0),
        ChannelsLast(),
    ],
)
trainer.fit()
## Examples of methods implemented in Composer
Some of the methods available in Composer are specific to computer vision, for example image augmentation techniques ColOut, Cutout, or Progressive Image Resizing. Others are specific to sequence modeling, such as Sequence Length Warmup or ALiBi. Interestingly, several are agnostic of the use case and can be applied to a variety of PyTorch neural networks beyond computer vision and NLP. Those generic neural network training acceleration methods include Label Smoothing, Selective Backprop, Stochastic Weight Averaging, Layer Freezing, and Sharpness Aware Minimization (SAM).
Let’s dive deep into a few of them that were found particularly effective by the MosaicML team:
• Sharpness Aware Minimization (SAM) is an optimizer that minimizes both the model loss function and its sharpness by computing a gradient twice for each optimization step. To limit the extra compute penalty on throughput, SAM can be run periodically.
• Attention with Linear Biases (ALiBi), inspired by Press et al., is specific to Transformers models. It removes the need for positional embeddings, replacing them with a non-learned bias to attention weights.
• Selective Backprop, inspired by Jiang et al., allows you to run back-propagation (the algorithm that improves model weights by following the error slope) only on records with a high loss. This method helps you avoid unnecessary compute and helps improve throughput; a minimal sketch of the idea follows this list.
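To make the idea concrete, here is a minimal sketch of Selective Backprop in plain PyTorch. This is not Composer's implementation: the function name, the keep_frac value, and the requirement that the loss use reduction="none" are all illustrative assumptions.

```python
import torch

def selective_backprop_step(model, loss_fn, optimizer, x, y, keep_frac=0.5):
    # Forward pass with per-sample losses (loss_fn must use reduction="none").
    logits = model(x)
    per_sample_loss = loss_fn(logits, y)
    # Keep only the highest-loss samples for the backward pass.
    k = max(1, int(keep_frac * x.shape[0]))
    top_loss, _ = torch.topk(per_sample_loss, k)
    optimizer.zero_grad()
    top_loss.mean().backward()  # gradients flow only through the kept samples
    optimizer.step()
```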
Having those techniques available in a single compact training framework is a significant value added for ML practitioners. What is also valuable is the actionable field feedback the MosaicML team produces for each technique, tested and rated. However, given such a rich toolbox, you may wonder: what method shall I use? Is it safe to combine the use of multiple methods? Enter MosaicML Explorer.
## MosaicML Explorer
To quantify the value and compatibility of DL training methods, the MosaicML team maintains Explorer, a first-of-its-kind live dashboard picturing dozens of DL training experiments over five datasets and seven models. The dashboard pictures the Pareto-optimal frontier in the cost/time/quality trade-off, and allows you to browse and find top-scoring combinations of methods (called recipes in the MosaicML world) for a given model and dataset. For example, the following graphs show that for a 125M-parameter GPT2 training, the cheapest training maintaining a perplexity of 24.11 is obtained by combining ALiBi, Sequence Length Warmup, and Scale Schedule, reaching a cost of about $145.83 in the AWS Cloud! However, please note that this cost calculation and the ones that follow in this post are based on EC2 on-demand compute only; other cost considerations may be applicable, depending on your environment and business needs.
Screenshot of MosaicML Explorer for GPT-2 training
## Notable achievements with Composer on AWS
By running the Composer library on AWS, the MosaicML team achieved a number of impressive results. Note that the cost estimates reported by the MosaicML team consist of on-demand compute charges only.
## Conclusion
You can get started with Composer on any compatible platform, from your laptop to large GPU-equipped cloud servers. The library features intuitive Welcome Tour and Getting Started documentation pages. Using Composer in AWS allows you to cumulate Composer cost-optimization science with AWS cost-optimization services and programs, including Spot compute (Amazon EC2, Amazon SageMaker), Savings Plan, SageMaker automatic model tuning, and more. The MosaicML team maintains a tutorial of Composer on AWS. It provides a step-by-step demonstration of how you can reproduce MLPerf results and train ResNet-50 on AWS to the standard 76.6% top-1 accuracy in just 27 minutes.
If you’re struggling with neural networks that are training too slow, or if you’re looking to keep your DL training costs under control, give MosaicML on AWS a try and let us know what you build! | 2022-11-28 04:47:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18058083951473236, "perplexity": 3804.13419440121}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710473.38/warc/CC-MAIN-20221128034307-20221128064307-00242.warc.gz"} |
https://en.wikipedia.org/wiki/Statistical_coupling_analysis
# Statistical coupling analysis
Statistical coupling analysis or SCA is a technique used in bioinformatics to measure covariation between pairs of amino acids in a protein multiple sequence alignment (MSA). More specifically, it quantifies how much the amino acid distribution at some position i changes upon a perturbation of the amino acid distribution at another position j. The resulting statistical coupling energy indicates the degree of evolutionary dependence between the residues, with higher coupling energy corresponding to increased dependence.[1]
## Definition of statistical coupling energy
Statistical coupling energy measures how a perturbation of amino acid distribution at one site in an MSA affects the amino acid distribution at another site. For example, consider a multiple sequence alignment with sites (or columns) a through z, where each site has some distribution of amino acids. At position i, 60% of the sequences have a valine and the remaining 40% of sequences have a leucine, at position j the distribution is 40% isoleucine, 40% histidine and 20% methionine, k has an average distribution (the 20 amino acids are present at roughly the same frequencies seen in all proteins), and l has 80% histidine, 20% valine. Since positions i, j and l have an amino acid distribution different from the mean distribution observed in all proteins, they are said to have some degree of conservation.
In statistical coupling analysis, the conservation (ΔGstat) at each site (i) is defined as: ${\displaystyle \Delta G_{i}^{stat}={\sqrt {\sum _{x}(\ln P_{i}^{x})^{2}}}}$.[2]
Here, Pix describes the probability of finding amino acid x at position i, and is defined by a function in binomial form as follows:
${\displaystyle P_{i}^{x}={\frac {N!}{n_{x}!(N-n_{x})!}}p_{x}^{n_{x}}(1-p_{x})^{N-n_{x}}}$,
where N is 100, nx is the percentage of sequences with residue x (e.g. methionine) at position i, and px corresponds to the approximate distribution of amino acid x in all positions among all sequenced proteins. The summation runs over all 20 amino acids. After ΔGistat is computed, the conservation for position i in a subalignment produced after a perturbation of amino acid distribution at j (ΔGi | δjstat) is taken. Statistical coupling energy, denoted ΔΔGi, jstat, is simply the difference between these two values. That is:
${\displaystyle \Delta \Delta G_{i,j}^{stat}=\Delta G_{i|\delta j}^{stat}-\Delta G_{i}^{stat}}$, or, more commonly, ${\displaystyle \Delta \Delta G_{i,j}^{stat}={\sqrt {\sum _{x}(\ln P_{i|\delta j}^{x}-\ln P_{i}^{x})^{2}}}}$
Statistical coupling energy is often systematically calculated between a fixed, perturbated position, and all other positions in an MSA. Continuing with the example MSA from the beginning of the section, consider a perturbation at position j where the amino acid distribution changes from 40% I, 40% H, 20% M to 100% I. If, in a subsequent subalignment, this changes the distribution at i from 60% V, 40% L to 90% V, 10% L, but does not change the distribution at position l, then there would be some amount of statistical coupling energy between i and j but none between l and j.
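As an illustration of the formulas above, the following sketch computes the conservation score ΔGistat for a single MSA column. The background frequencies used here are made-up values for a handful of residues; in practice the sum runs over all 20 amino acids with empirical frequencies.

```python
import numpy as np
from scipy.stats import binom

def delta_g_stat(column_freqs, background_freqs, N=100):
    """Conservation of one MSA column (site i), per the definition above.

    column_freqs:     dict amino_acid -> fraction of sequences with that residue at site i
    background_freqs: dict amino_acid -> overall frequency p_x among all proteins
    """
    total = 0.0
    for aa, p_x in background_freqs.items():
        n_x = round(N * column_freqs.get(aa, 0.0))  # count of residue x at site i
        P = binom.pmf(n_x, N, p_x)                  # binomial probability P_i^x
        if P > 0:
            total += np.log(P) ** 2
    return np.sqrt(total)

# Example: site i with 60% valine, 40% leucine (illustrative background values)
site_i = {"V": 0.6, "L": 0.4}
background = {"V": 0.066, "L": 0.097, "I": 0.059, "H": 0.023, "M": 0.024}
print(delta_g_stat(site_i, background))
```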
## Applications
Ranganathan and Lockless originally developed SCA to examine thermodynamic (energetic) coupling of residue pairs in proteins.[3] Using the PDZ domain family, they were able to identify a small network of residues that were energetically coupled to a binding site residue. The network consisted of both residues spatially close to the binding site in the tertiary fold, called contact pairs, and more distant residues that participate in longer-range energetic interactions. Later applications of SCA by the Ranganathan group on the GPCR, serine protease and hemoglobin families also showed energetic coupling in sparse networks of residues that cooperate in allosteric communication.[4]
Statistical coupling analysis has also been used as a basis for computational protein design. In 2005, Russ et al.[5] used an SCA for the WW domain to create artificial proteins with similar thermodynamic stability and structure to natural WW domains. The fact that 12 out of the 43 designed proteins with the same SCA profile as natural WW domains properly folded provided strong evidence that little information—only coupling information—was required for specifying the protein fold. This support for the SCA hypothesis was made more compelling considering that a) the successfully folded proteins had only 36% average sequence identity to natural WW folds, and b) none of the artificial proteins designed without coupling information folded properly. An accompanying study showed that the artificial WW domains were functionally similar to natural WW domains in ligand binding affinity and specificity.[6]
In de novo protein structure prediction, it has been shown that, when combined with a simple residue-residue distance metric, SCA-based scoring can fairly accurately distinguish native from non-native protein folds.[7]
http://latex.org/forum/viewtopic.php?f=27&t=23027&p=78115&sid=ff16189478628c8fe2557f0181c8e854
## LaTeX forum ⇒ Others ⇒ LaTeX Code Highlighting for Notepad++
Information and discussion about other LaTeX editors not listed above
Djabo
Posts: 1
Joined: Sun Apr 14, 2013 10:07 pm
### LaTeX Code Highlighting for Notepad++
Notepad++ is an awesome editor. I love programming with it.
Unfortunately the built-in code highlighting for (La)TeX doesn't recognize and color math environments, such as $M$, $\exp(i z) = \cos(z) + i \,\sin(z)$ or
\begin{align} \exp(z) := \sum_{n = 0}^{\infty} \frac{z^n}{n!}\end{align}
In fact it only colors keywords, delimiters and operators.
Notepad++ however provides for easily creating your own language definitions, which I did.
So this is my ad hoc definition for (La)TeX code highlighting, with emphasis on some math environments and certain keywords. For Code-Folding I'm using \section %\endsection.
Note that it isn't a complete or proper code highlighting scheme reflecting all of LaTeX grammar. It certainly features a good number of bugs and was tailored to my own personal use, but you're welcome to change it and then repost your version!
Happy coding!
Attachments
NuTeX.zip
LaTeX Code Highlighting Scheme for Notepad++
https://davidthemathstutor.com.au/2019/04/21/logarithms-part-1/
# Logarithms, Part 1
Logarithms confuse many of my students so I thought it is time to explain these. I touched on these before on a post about inverse operations, but let’s add some more detail.
Let’s first define some terms here. Consider the expression $x^2$. Here, $x$ is raised to the power of 2: $x$ is the base and 2 is the exponent, power, order, or index. There are lots of different terms for the exponent – I will mostly use the term exponent. So the exponent defines what to do with the base.
Now before I talk about logarithms specifically, I want to review what various kinds of exponents mean. I have talked about this before, but these concepts should be fully understood if logarithms are to make sense to you.
Now $x^2$ means $x \times x$. A positive integer exponent tells you how many times to multiply the base by itself. So in general, for a positive integer m,

$x^m = x \times x \times x \times \cdots \times x$, where $x$ is listed $m$ times.

The special case of $m = 0$ is defined as $x^0 = 1$ for any nonzero $x$, no matter how small or how large $x$ is. Now what about negative integers?
$x^{-1} = \frac{1}{x^{1}}; \quad x^{-2} = \frac{1}{x^{2}}; \quad \frac{1}{x^{-2}} = x^{2}$

$x^{-m} = \frac{1}{x^{m}}; \quad \frac{1}{x^{-m}} = x^{m}$
So a negative exponent is the same as the positive one except it and its base is in the denominator or vice versa. You can freely move a factor that is a base and its exponent between the numerator and the denominator, as long as you change the sign of the exponent.
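A quick numeric check of this rule:

$2^{-3} = \frac{1}{2^{3}} = \frac{1}{8}$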
What about fractional exponents? Let’s start with fractions where “1” is in the numerator. The denominator in a fraction exponent refers to the root of the number. For example,
$x^{\frac{1}{2}} = \sqrt[2]{x} = \sqrt{x}$
The “2” for the square root is usually assumed if it is not there. However, for other roots (like cube roots), the index must be there to indicate the kind of root it is. Other examples:
$x^{\frac{1}{3}} = \sqrt[3]{x} \qquad x^{\frac{1}{6}} = \sqrt[6]{x} \qquad x^{\frac{1}{n}} = \sqrt[n]{x}$
The numerator in a fractional exponent means the same as if it wasn’t in a fraction. so we can combine these two definitions for more general fractions:
$x^{\frac{2}{3}} = \sqrt[3]{x^{2}} \qquad x^{\frac{5}{6}} = \sqrt[6]{x^{5}} \qquad x^{\frac{m}{n}} = \sqrt[n]{x^{m}}$
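A quick worked example with actual numbers:

$8^{\frac{2}{3}} = \sqrt[3]{8^{2}} = \sqrt[3]{64} = 4$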
Now we have not covered irrational exponents like $x^{\pi}$. The development of these is a bit more complex, so I'll just say "use your calculator".
Indeed, you can use your calculator to calculate a number raised to a power if it has a key labelled "$y^x$" or a key with the "^" symbol on it. I will leave it to you to find out how to use these keys. If you do not have a fancy calculator, there is always the all-knowing internet.
So we have talked before about how to solve equations like $x^2 = 16$ by taking the square root of both sides of the equation. But how do you solve $2^x = 16$? Notice that $x$ is now in the exponent. That changes everything, as you can't take the $x$th root of a number on your calculator… but can you?
In the next post on this topic, I'll introduce you to logarithms and then, later, how they are used.
https://www.physicsforums.com/threads/trig-word-prob.57968/
# Trig word prob
1. Dec 28, 2004
### aisha
Jennie stands 30 m from the base of a high rise building. The angle of elevation from her eyes to the top of the tower is 70°. How high is the tower if her eyes are 1.8 m above the ground?
Is the answer to this 82.42 m + 1.8 m = 84.22 m?
2. Dec 28, 2004
### phreak
tan(70°) = x/30 ⇒ x = 30·tan(70°) ≈ 82.42
82.42 + 1.8 = 84.22
Yup
3. Dec 28, 2004
### dextercioby
Your first equation doesn't make any logical sense... :grumpy:
$$\tan 70^\circ=\frac{h}{30}\approx 2.747 \Rightarrow h\approx 2.747\cdot 30\approx 82.42\,\mathrm{m}$$
Total height: approx. 84.2 m.
Daniel.
PS.1.80m to her eyes,almost 1.90m in all:that's a giant *****...!! :tongue2:
4. Dec 28, 2004
### The Bob
$$\tan \Psi = \frac{\text{Opp}}{\text{Adj}}$$
$$\tan 70^\circ = \frac{x}{30}$$
$$x = \tan 70^\circ \times 30$$
$$x = 82.42$$
Then add 1.8 m because of her eye level:
$$y = 82.42 + 1.8$$
$$y = 84.22\ \text{m}$$
Therefore I say you are right in what you originally said.
P.S. Well my eye level is about 1.95m :tongue2:
5. Jan 22, 2005
### qweretyq
1.9 meters is not a giant...
it's about 6 feet 3 inches...
I'm 6' exactly
6. Jan 23, 2005
### The Bob
If you check my post above I said that my eye level was about 1.95m (probably more now :tongue2:) and I am not a giant. Just rather tall.
7. Jan 23, 2005
### dextercioby
Since I'm only 1.71, I feel quite uncomfortable when a lady next to me is towering 20 cm more... Bob, if u were near a girl at 2.15, u'd feel the same way... :tongue2:
Daniel.
8. Jan 23, 2005
### iodmys
1.90 meters is very tall for a girl, too tall, in fact. I'd say that she's "deformed."
Last edited: Jan 23, 2005
9. Jan 23, 2005
### The Bob
I most likely would but you find a female 2.15m :tongue2:
I have met a female almost as tall as me, two in fact, and you do wonder why or how. Then you think why or how you are that tall as well.
https://greprepclub.com/forum/in-the-figure-above-lmno-and-ghjk-are-rectangles-where-14758.html?sort_by_oldest=true
In the figure above, LMNO and GHJK are rectangles where
In the figure above, LMNO and GHJK are rectangles where [#permalink] 20 Aug 2019, 07:55
Attachment: #GREpracticequestion figure: rectangle GHJK drawn inside rectangle LMNO, with GHJK shaded.
In the figure above, $$LMNO$$ and $$GHJK$$ are rectangles where $$GH = \frac{1}{2}LM$$ and $$HJ= \frac{1}{2} MN$$. What fraction of the region bounded by $$LMNO$$ is NOT shaded?
(A) $$\frac{1}{4}$$
(B) $$\frac{1}{3}$$
(C) $$\frac{1}{2}$$
(D) $$\frac{2}{3}$$
(E) $$\frac{3}{4}$$
Re: In the figure above, LMNO and GHJK are rectangles where [#permalink] 07 Sep 2019, 19:54
Carcass wrote: [question quoted above]
Here,
Let LM = 4 and MN = 8
therefore, GH = $$\frac{1}{2} * LM = \frac{1}{2} * 4 = 2$$
and HJ = $$\frac{1}{2}* MN = \frac{1}{2} * 8 = 4$$
Area of GHJK = $$2*4 = 8$$ [ Area of Rectangle = length * breadth]
and Area of LMNO = $$4 *8 = 32$$
Area of the unshaded region = $$32 -8 = 24$$
Therefore, the fraction of the region bounded by $$LMNO$$ that is NOT shaded is $$\frac{24}{32} = \frac{3}{4}$$
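Note that the specific numbers do not matter. Since

$$GH \times HJ = \frac{1}{2}LM \times \frac{1}{2}MN = \frac{1}{4}\, LM \times MN,$$

the shaded rectangle is always one quarter of $$LMNO$$, so the unshaded fraction is $$1 - \frac{1}{4} = \frac{3}{4}$$ for any choice of dimensions.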
https://tex.stackexchange.com/questions/402303/smash-subscript-in-sum
# smash subscript in sum
I found a solution (3rd \sqrt), but am curious as to why the \smash{} in \sum_{\smash{ij}} (2nd \sqrt) is ignored.
## Code:
\documentclass{article}
\usepackage{mathtools}
\begin{document}
$\sqrt{\sum_{ij} f(i,j)} \sqrt{\sum_{\smash{ij}} f(i,j)} \sqrt{\vphantom{\sum}\smash{\sum_{ij}} f(i,j)}$
\end{document}
You get the same as in the second example with an empty subscript, because TeX reserves the space and it forces the next size, which happens to be the same as when the subscript is present. The only difference is that in the first case the radical is lowered because of the depth of j.
The “real” solution is
$\vphantom{\sum_{ij}}\sqrt{\vphantom{\sum}\mathop{\smash{\sum_{ij}}} f(i,j)}$
The outer phantom is to ensure the real depth is taken care of.
However, I'd suggest one of the following two realizations.
\documentclass{article}
\usepackage{mathtools}
\begin{document}
$\Bigl(\,\sum_{ij}f(i,j)\Bigr)^{1/2} \qquad \biggl(\,\sum_{ij}f(i,j)\biggr)^{\!1/2}$
\end{document}
Here's a visual proof of the top statement.
\documentclass{article}
\usepackage{mathtools}
\begin{document}
\fbox{$\displaystyle{\mathop{{}{=}}_{}}$}
\fbox{$\displaystyle=$}
\end{document}
The {}{=} is to ensure the Op atom is not centered with respect to the math axis and not to add spaces around =.
With \showlists we see the first formula is
\displaystyle
\mathord
.\mathop
..\mathord
...{}
..\mathord
...\mathrel
....\fam0 =
._{}
and the empty subscript is clearly visible and adds the vertical space.
• I would like to know why in the "real'' solution it is necessary to have both \vphantom{\sum} and \mathop{\smash{\sum_{ij}}} at once; after some tests, the latter seems to yield pretty much the same as just \vphantom{\sum}\smash{\sum_{ij}}, i.e., the OP's own suggestion. – Fitzcarraldo Feb 1 '19 at 1:11
• @Fitzcarraldo You get different spacing. Slightly different, but not negligible. – egreg Feb 1 '19 at 7:46
• So, in the case of the square root of a product of two families or series, namely, \sqrt{\Bigl(\sum_{i\in I} a_i^2\Bigr) \Bigl(\sum_{i\in I} b_i^2\Bigr)} or \sqrt{\biggl(\sum_{i=1}^\infty a_i^2\biggr) \biggl(\sum_{i=1}^\infty b_i^2\biggr)}, what would you suggest in the same vein? Would you treat the entire inner bunch as a whole, and would you use \mathord instead of \mathop? – Fitzcarraldo Feb 1 '19 at 8:52
• @Fitzcarraldo Not sure what you mean. \mathop{\smash{...}} is necessary to get the proper spacing around the summation sign. – egreg Feb 1 '19 at 8:57
• That part is clear to me; what I wanted to know is what to do when you don't have one summation sign, but two, and even more, surrounded both by parentheses. Would you apply \mathop{\smash{...}} twice only to each such sign, or would you put the whole brick I wrote, parentheses and all, inside just one \mathop{\smash{...}}, which in that case might turn into \mathord{\smash{...}}, since its content wouldn't be, strictly speaking, an operator anymore? – Fitzcarraldo Feb 1 '19 at 9:10
If using \sqrt I propose this as a nicer looking (in my view) way:
\documentclass{article}
% \usepackage{mathtools}
\begin{document}
$\sqrt{\sum\nolimits_{ij} f(i,j)}$
$\sqrt{\,\sum\nolimits_{ij} f(i,j)}$
\end{document}
I find the square root too close to the top angle of the sum symbol, hence the second line, which adds a bit of space.
FWIW, the nath package ignores the subscripts when calculating the height of the square root:
\documentclass{article}
\usepackage{nath}
\begin{document}
$\sqrt{\sum_{ij} f(i,j)}$
\end{document}
which gives:

[image: output with the nath package; the radical height ignores the subscript]
Note that nath is incompatible with the display math environments of amsmath, which can severely limit its usability.
https://www.springerprofessional.de/empirical-modeling-of-the-economy-and-the-environment/14366512
## About this Book
ZhongXiang Zhang (East-West Center, Honolulu) uses a global model based on marginal abatement cost curves for 12 world regions to estimate the contributions of the three flexibility mechanisms under the Kyoto Protocol, i.e. emissions trading, joint implementation, and the clean development mechanism. He shows how the reduction in compliance costs of industrialized regions depends on the extent to which the flexibility mechanisms will be available. Not surprisingly, the fewer the restrictions on the use of flexibility mechanisms will be, the greater the gains from their use. These gains are unevenly distributed, however, with industrialized regions that have the highest autarkic marginal abatement costs tending to benefit the most. Restrictions on the use of flexibility mechanisms not only reduce the potential of the industrialized regions' efficiency gains, but are also not beneficial to developing countries since they restrict the total financial flows to developing countries under the clean development mechanism. Christoph Böhringer (ZEW, Mannheim), Glenn W. Harrison (University of South Carolina, Columbia), and Thomas F. Rutherford (University of Colorado, Boulder) evaluate the welfare implications of alternative ways in which the EU could distribute its aggregate emission reduction commitment under the Kyoto Protocol across member states. Using a large-scale CGE model, they compare a uniform proportional cutback in emissions and the actual EU burden sharing agreement with an equitable allocation scheme derived from an endogenous burden sharing calculation. The latter equalizes the relative welfare cost across member states.
## Table of Contents
### Introduction
Abstract
This book contains the proceedings of an international workshop "Empirical Modeling of the Economy and the Environment" held at the Centre for European Economic Research (ZEW, Mannheim) in June 2001. The workshop was organized on the occasion of ZEW's 10th anniversary and in honor of the 60th birthday of Klaus Conrad (University of Mannheim), who has been affiliated with the Centre since its foundation. The papers presented by internationally reputed experts in the field of environmental economics cover a wide spectrum of issues in environmental regulation.
Christoph Böhringer, Andreas Löschel
### Environmental Regulation and Productivity Growth: An Analysis of U.S. Manufacturing Industries
Abstract
We show that traditional measures of productivity change that ignore the unproductive nature of pollution abatement capital within the production process are likely to underestimate the true productivity gains that most manufacturing industries are able to generate in any given year. While the average bias of traditional measures is not large in absolute terms, the bias can be substantial for industries with relatively large pollution abatement capital expenditures. We also find that environmental regulation has a non-trivial adverse effect on productivity change, lowering productivity growth by roughly 0.3% across all industries, and by more than 1% for some industries.
Daniel L. Millimet, Thomas Osang
### Environmental Regulation and Competitiveness: An Exploratory Meta-Analysis
Abstract
The relationship between domestic environmental regulation and international competitiveness has evoked various speculations. The common neoclassical train of thought is that strict environmental regulation is detrimental to the competitiveness of industry, and that it induces phenomena such as ecological dumping, ecological capital flight, and 'regulatory chill' in environmental standards. A different view is that strict environmental regulation triggers industry's innovation potential, and subsequently increases its competitiveness. The impact of environmental regulation on competitiveness has been analyzed in terms of international capital movements, new firm formation, and international trade. The paper presents a statistically supported evaluation of the literature, in order to assess what the main conclusions regarding the relationship between environmental regulation and competitiveness are when it comes to studies on international trade flows. The synthesis of the literature is subsequently used to present guidelines for future primary research in this area.
Abay Mulatu, Raymond J. G. M. Florax, Cees A. Withagen
### Trade, Technology, and Carbon Emissions: A CGE Analysis for West Germany
Abstract
This paper examines the determinants of the substantial decline of West German production-related carbon intensity in the face of falling energy prices. A computable general equilibrium model is used to determine the simulated effects of observed changes of world energy prices and domestic energy policy on the sectoral patterns of carbon emissions, energy consumption, output, value added, and other indicators of structural change. The structural changes not accounted for by energy prices and energy policy are attributed to changing patterns of productivity growth in Germany and the rest of the world (ROW) and changing patterns of ROW demand. Weights on these driving forces are selected by least squares. One key finding is that the contribution of ROW productivity and demand patterns to emission-relevant structural change unaccounted for by energy prices and energy policy is just under 30%. The remainder is split almost equally between patterns of domestic autonomous energy efficiency improvement and domestic labor efficiency patterns.
Heinz Welsch
### Environmental Policies in Open Economies and Leakage Problems
Abstract
Pollution leakage is an important issue when countries address international environmental problems unilaterally. Leakage in this context is the increase in foreign emissions after a reduction in domestic emissions which results from international market interdependencies. In this paper, I consider two markets, the first one for a polluting input (e.g. energy) and the second one for internationally mobile capital. Leakage is decomposed into its two components and the magnitudes of the effects are determined for a calibrated version of the model. Towards the end of the paper, normative questions related to the optimality of environmental policies are addressed.
Michael Rauscher
### Pollution Charges and Incentives
Abstract
Most investigations of the consequences of environmental policies and Pigouvian taxes (pollution charges, in particular carbon taxes) using CGE models reveal an only modest impact on economic activity; some even report positive economic side effects. Yet the slowdown in economic growth, and in particular of productivity (see Conrad and Wastl (1995)), suggests a stronger and negative influence, in particular if one accounts for the fortunately very low energy prices that accompanied the ambitious goals of environmental policies in the late 1980s and the 1990s. This paper attempts to explain this apparent difference. In particular, it will be shown that environmental policies, here investigated for pollution charges, call for a modification of incentives such that the power of optimal incentives is reduced, to which workers will respond with less effort. This friction, which may add up to significant numbers for the economy at large, is neglected both in the theoretical literature and in the application of CGE models. Accounting for this friction, the empirical evidence so far suggests taking the optimistic findings concerning the consequences of environmental policies on economic growth at least cum grano salis.
Franz Wirl
### An Economic Assessment of the Kyoto Protocol Using a Global Model Based on the Marginal Abatement Costs of 12 Regions
Abstract
The Kyoto Protocol incorporates emissions trading, joint implementation, and the clean development mechanism to help Annex 1 countries meet their Kyoto targets at a lower overall cost. Using a global model based on the marginal abatement costs of 12 countries and regions, this paper estimates the contributions of the three Kyoto flexibility mechanisms to meeting the total greenhouse gas emissions reductions required of Annex 1 countries under the three trading scenarios respectively. Our results clearly demonstrate that the fewer the restrictions on the use of flexibility mechanisms, the greater the gains from their use. The gains are unevenly distributed, however, with Annex 1 countries that have the highest autarkic marginal abatement costs tending to benefit the most. Our results also indicate that restrictions on the use of flexibility mechanisms not only reduce the potential of the Annex 1 countries' efficiency gains, but also are not beneficial to developing countries because they restrict the total financial flows to developing countries under the clean development mechanism.
ZhongXiang Zhang
### Sharing the Burden of Carbon Abatement in the European Union
Abstract
We evaluate the welfare implications of alternative ways in which the EU could distribute its aggregate emission reduction commitment under the Kyoto Protocol across member states. An endogenous burden sharing calculation, in which the welfare costs across member states are equalized, differs substantially from uniform proportional cutbacks as well as the specific burden sharing rule actually adopted by the EU.
Christoph Böhringer, Glenn W. Harrison, Thomas F. Rutherford
### Banking and Trade of Carbon Emission Rights: A CGE Analysis
Abstract
This paper analyses trading and banking of carbon emission rights. Within the framework of a modestly simple, integrated assessment model that breaks the world economy into just two regions, North and South, it can be shown: (1) There exists separability between environmental targets and the choice of instruments. Increasing the "when and where" flexibility in greenhouse gas abatement, either through banking or trading of carbon emission permits or both, positively affects global welfare. It has, however, almost no impact on global climate change. (2) Depending upon the choice of instruments, there are significant distributional effects across regions. Both regions can improve welfare simultaneously if carbon emission rights are traded on open international markets. But if it were feasible to bank or borrow carbon permits, then, independent of whether there is trading of carbon rights or not, the South suffers welfare losses compared to a No-Trade-No-Banking situation.
Gunter Stephan, Georg Müller-Fürstenberger
### Cost-Efficiency Methodology for the Selection of New Car Emission Standards in Europe
Abstract
In the Auto-Oil Programme, the European Commission looks for emission limits for cars such that the urban air quality targets are reached at minimum cost. This optimization problem was solved by Degraeve et al. (1998). In this paper we deal with two methodological problems in this cost-efficiency approach. We first study what is known as the overachievement problem in cost-effectiveness analysis. In a pure cost-efficiency approach, there is a tendency to understate the merits of federal regulatory measures: because these measures are uniform, they will always do more than required in some regions. We prove this and show how this problem can be solved using minimum information on the benefits of environmental improvements. The second problem we study is the implementation problem of local measures. From a European-wide perspective, it may be cost-efficient that some regions take local measures, but this is not necessarily in the interest of these regions when there is transfrontier pollution. When this behavioral constraint is taken into account, the cost-efficient bundle will change. We show how these two considerations affect the selection of optimal emission standards for cars in Europe.
Zeger Degraeve, Stef Proost, Gunther Wuyts
### Commitment and Time Consistency of Environmental Policy and Incentives for Adoption and R&D
Abstract
In this paper we survey recent developments on the incentives through environmental policy instruments to adopt advanced abatement technology. We further investigate repercussions on R&D. First, we study the case where the regulator makes long-term commitments to policy levels and does not anticipate the arrival of new technology. We show that taxes provide stronger incentives than permits, auctioned and free permits offer identical incentives, and standards may give stronger incentives than permits. Second, we investigate scenarios where the regulator anticipates new technologies. We show that with taxes and permits the regulator can induce first-best outcomes if he moves after firms have invested, whereas this does not always hold if he moves first. Third, we consider a model where a polluting downstream industry is regulated either by emission taxes or by tradable permits. A separate monopolistic or duopolistic upstream industry engages in R&D and, in case of R&D success, sells an advanced abatement technology to the downstream firms. We study three different timings of environmental policy: ex post taxation (or issuing permits), ex interim commitment to a tax rate (a quota of permits) after observing R&D success but before adoption, and finally ex ante commitment before and independent of R&D success. We show that ex interim commitment always dominates ex post environmental policy. Moreover, ex interim second-best taxation dominates ex interim second-best optimal permit policy. There is no unique ranking, however, between ex ante and ex interim commitment. Finally, we sketch ways to generalize the model to upstream duopoly.
Till Requate
### Ecological Tax Reform and Efficiency of Taxation: A Public Good Perspective
Abstract
Revenue-neutral ecological tax reforms are known to yield a green dividend, but not an efficiency dividend, in general. This result is shown to be not counterintuitive when one looks at such reforms as providing more of the costly public good 'environmental quality' financed by distortionary taxes. In that perspective, the double dividend conjecture is tantamount to assuming that more environmental quality can be bought for less money, which is rather unlikely even if general equilibrium interdependencies are accounted for.
Rüdiger Pethig
### Optimal Intertemporal Pricing of Resource Stocks: The Case of Fossil Fuel Extraction and Atmospheric CO2 Deposits
Abstract
The purpose of this paper is to extend the dynamic resource allocation problem by including stock externalities like accumulated CO2 and SO2 emissions as well as flow externalities like pollutants which can be abated (SO2). The objective is to examine how the evolution of energy, CO2, or SO2 tax rates can address these problems in an optimal way. The concern about the time profile of an energy tax arises from the fact that fossil fuels are an exhaustible resource and that global warming, being a consequence of carbon accumulation in the atmosphere, is a stock externality problem. We use a micro model of a firm which maximizes profits, uses energy as one of its inputs, and is confronted with a varying energy tax. It reacts by substitution, by changing its output level, or by purchasing abatement equipment. The government is well aware of firms' reactions to price signals. It maximizes a stream of social welfare by choosing an optimal path of its instrument, an energy tax. Our analysis supports the idea of a tax first rising and later falling over time.
https://www.biostars.org/p/9493330/ | Forum: Non-biology & computer professional interested in bioinformatics
2 days ago
ASU • 0
I am from a non-biology background. I was in the IT profession for 3 years working with databases, and then moved into the teaching profession in colleges under the computer science department. For my PhD work, can I go with bioinformatics (as I am a non-biology student but interested in it)? Is it a good option? Can you please help me out on this? Also, what level of knowledge is required to pursue a PhD in bioinformatics with machine learning?
If you are joining a PhD program then no doubt your department/mentor/PhD committee will make sure that you take appropriate courses to build a strong basic understanding of principles. Depending on your background/departmental regulations this may require more or fewer formal courses. You will probably learn more from interacting with biologists as you work with them to solve (immediate) problems than in formal classes (at least the biology part).
You will probably learn more from interacting with biologists as you work with them to solve problems than in formal classes (at least the biology part).
The poster will probably learn the biology needed to solve the immediate problem from those they interact with. I seriously doubt that they will learn through collaborative personal interactions about chemical principles of life, molecular structure, biochemistry, molecular genetics, or any of the foundational biological principles. There is a reason why people take formal classes.
One would assume that some of these fundamentals would have been covered in secondary education. I did say in my comment that OP's mentors/department will require them to take formal courses as needed. My comment was indeed about acquiring the immediate knowledge needed to solve the biological problems OP may be working on. I have added a word to clarify that above.
2 days ago
An interest in the topic combined with your computer expertise might be enough to start off.
but, you will (whether you like it or not) have to increase your biological knowledge over the years after that. Bioinformatics is and remains a science that tries to solve BIOLOGICAL problems (and thus not solely computational problems), so a growing knowledge of the biology will only increase your "value" and will make things clearer along the way.
(just my personal two cents of course :) )
2 days ago
There are two groups of people in bioinformatics: the biologists who learn some programming, and the programmers who learn some biology. They have different views and solutions and are both very important. Don't worry about not having the background yet; learning and interest are all you need to get up to speed.
You will need to understand the state of the art, to ensure your thesis is not redundant or tried-and-failed already. Try to leverage your skills, of course; there's plenty of database work in bioinformatics.
This is not an uncommon question by the way.
There are two groups of people in bioinformatics: the biologists who learn some programming, and the programmers who learn some biology.
This is like saying there are 10 types of people: those who understand binary and those who don't. And then a bunch of others who are somewhere on the spectrum. First, there are world-class bioinformaticians who don't program at all, not to mention the whole spectrum of programmers who don't understand any biology beyond what is needed to solve 1-2 problems.
People call themselves bioinformaticians - and legitimately so - for reasons that have little to do with the biology:programming ratio. Some of the best bioinformaticians I know are by education physicists, virologists, crystallographers, mathematicians. In general, these are very bright people who are driven to learn new things and contribute to solving problems, rather than being biologists or programmers.
You're absolutely right. I was grossly categorizing the mathematician as a programmer, and the virologist as a biologist. From the perspective of an I.T. professional (the original poster), it's hard to know the difference between crystallography and genetics, that sort of thing.
2 days ago
Mensur Dlakic ★ 14k
I don't mean to categorize people in what I write below. It is just a personal opinion and an attempt to lay out different ways of becoming a bioinformatician that are rooted in our individual preferences.
In simple terms, one could argue that becoming a bioinformatician is simply a matter of adding some biology to an existing IT/programming/informatics knowledge base, or vice versa. One can do a whole RNAseq experiment, from the wet lab experiments to data analysis, with limited biology knowledge other than RNA expression, and with just enough ability to run prepackaged scripts and make tiny modifications to them, such as file locations or p-value cutoffs. I have interacted with such people frequently, and they do just fine for themselves even with the aforementioned limitations. I wouldn't want to train students with such a limited outlook on research, but there is no doubt that ultra-specialization works.
A bigger question for me is why anyone wants to become a bioinformatician, and from there it becomes easier to decide what the missing ingredients are. Some people want to get into bioinformatics because it is hot and pays well - or at least they think so. As I said above, there is nothing wrong with earning a living by knowing just enough of biology and these other fields. In fact, some people who are in this group are so good that they become irreplaceable.
Then there is a bioinformatician who is an excellent collaborator, and is stronger on one side than the other. These are in general very capable and professional, and can get away with being much stronger in biology or in everything else because their expertise is complementary to the whole team rather than driving the whole project.
The most difficult to become, and thus rarest, is a bioinformatician who can lead a group and drive the whole research endeavor. To achieve this means spending a lot of time learning about science and its process in general, and not just biology or just computers/programming/math. It means being able to talk with people who know more in their area of expertise but cannot necessarily synthesize all the information. It involves life-long learning to stay abreast of developments in all areas. Not very many people would have this as their goal early in their careers, because hardly anyone decides when they are younger than 30 that life-long learning will be what they do. It is usually something that develops later, so I think this group for practical purposes can be eliminated from the early consideration of what kind of bioinformatician one wants to be.
https://mathoverflow.net/questions/235869/easier-girards-paradox-in-a-circular-pure-type-system-pts | # Easier Girard's paradox in a circular pure type system (PTS)
System U is an inconsistent PTS in that one has a term of type $\bot = \forall p\colon \ast \ldotp p$, and such a term is explicitly constructed in Hurkens' A Simplification of Girard's Paradox.
One-sorted circular PTS $\lambda\ast$ ($S = \{\ast \}, A=\{\ast\colon \ast\}, R=\{(\ast,\ast)\}$) (Geuvers' Logics and Type Systems, p.78) is also inconsistent since a term of type $\bot$ in System U can be translated into $\lambda\ast$ by mapping $\ast, \square$ and $\triangle$ to $\ast$.
Is there any easier way to construct a term of type $\bot$ in $\lambda\ast$? Smaller terms, or constructions easier to understand, are appreciated.
## 1 Answer
I feel like most of my posts on mathoverflow and cstheory.stackexchange consist of this answer, but the most perspicuous (in my opinion) proof of inconsistency of U and $*:*$ is a construction by Alexandre Miquel, given in his PhD dissertation. Tragically, it is in French, so I'll summarize the idea below, and maybe you'll be able to fill in the remainder using his other paper lambda-Z: Zermelo's Set Theory as a PTS with 4 Sorts.
The idea is to build a model of naive set theory in $*:*$. The naive theory allows for unrestricted comprehension, and in particular the paradox falls out quite easily. The proof term of $\bot$ itself is longer than in Hurkens' presentation, but I think it's safe to say that the process is at least more straightforward.
The crucial idea is to model a set as a directed pointed graph. The point represents the set, the graph contains all the members of the set, and the membership relation is represented by the edges. We rely on the encoding of $\Sigma$ types in the impredicative theories.
A set $S$ is thus an element of
$$\mathrm{Set}=\Sigma (A:*)(x :A)(R:A\rightarrow A\rightarrow *)$$
where the carrier is $A$, the base is $x$ and the edge relation is $R$. What's a bit counter-intuitive is that sets are not necessarily well-founded. To get sane notions of equality and membership, we need to define equality as bisimilarity $\simeq$ of pointed graphs.
Then it's pretty easy to define membership: $S\in T$ if
• $T = (A, x, R)$
• There is some $y$ with $R\ y\ x$ ($y$ is an element of $x$)
• $S\simeq (A, y, R)$ ($S$ is the "same set" as $y$)
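Putting the three clauses together in one line (my own restatement of the above, not from Miquel's text): for $T = (A, x, R)$,

$$S \in T \iff \exists y.\; R\ y\ x \;\wedge\; S \simeq (A, y, R).$$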
Then we can show:
For each predicate $P:\mathrm{Set}\rightarrow *$ that respects bisimilarity there is a set $$\{\ x\ |\ P(x)\ \}$$
It's then easy to show that $P(x)=x\notin x$ respects bisimilarity.
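From here the paradox falls out by the usual Russell argument (spelled out here for convenience): let $\Delta = \{\ x\ |\ x\notin x\ \}$, which exists by the comprehension principle above since the predicate respects bisimilarity. Then

$$\Delta \in \Delta \iff \Delta \notin \Delta,$$

and either direction of this equivalence yields a term of type $\bot$.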
• One can construct the type $U := \Pi (X: \Box_2) (X \to X \to \ast) \to X \to \ast$ of the universe and the relation $\mathsf{elt}\colon U \to U \to \ast$ over $U$, and construct arbitrary sets by the axiom schema of separation. By adding $(\Box_3, \Box_2, \Box_2) \in R$ to $\lambda$Z, one has $U \colon \Box_2$, which leads to the inconsistency. – H Koba Apr 16 '16 at 19:20
• By mapping $* \mapsto *, \Box_1 \mapsto \Box, \Box_2 \mapsto \Box, \Box_3 \mapsto \triangle$, one can show that System U (and hence $\lambda\ast$) is also inconsistent. The product rule $(\triangle, \Box, \Box)$ corresponds to $(\Box_3, \Box_2, \Box_2)$ in $\lambda$Z. – H Koba Apr 16 '16 at 19:23
• I think I understand how this works with * : *, but could you elaborate how this works in System U? I'm assuming that the construction of {x | P x} relies on instantiating A : * with Set itself somehow? Is that not disallowed in System U, since Set : □ and not Set : *? Or do we have A : □ but still R : A -> A -> *? – Jules Sep 20 '19 at 14:39
• @Jules: $\mathrm{Set}$ has type $*$, just like in system F, $\Pi A:*.A$ has type $*$. This is the impredicative nature of system F, which is even more pronounced in system U, since $\Sigma$ types can be expressed. It's surprising that system F is consistent! – cody Sep 21 '19 at 20:23
• I was worried about the ∗ at the end of R : A -> A -> ∗. What about that one? Doesn't this make Set : □, which means that you can't instantiate A with Set? – Jules Sep 23 '19 at 9:33
https://electronics.stackexchange.com/questions/432954/how-does-a-computer-know-when-to-get-the-output-from-the-alu | # How does a computer know when to "get" the output from the ALU?
Basically what the title says; I guess I've got some sort of misconception or something. The ALU can have, say, a ripple-carry adder, which doesn't produce its entire output all at the same time (it ripples). So how does a computer know when the output is ready? Does it have to do with the clock speed and propagation delay?
• Given the need for addition in many computer operations, such as computing array element locations in memory, or relative-register addressing, the Clock period is usually somewhat larger than the ADDER propagation delay. Apr 17, 2019 at 1:22
• the "computer" does not know ..... the person that designed the cpu knows and has designed the circuitry in a way that allows the output of the ALU to be read at the correct time Apr 17, 2019 at 1:30
Let's say that you work out the worst-case propagation delays through the ripple-carry adder and all associated logic as being $178\:\text{ns}$ (this includes both subtraction and addition, let's say.) Then you might choose to arrange the minimum clock period to be $250\:\text{ns}$ so that you are certain there is enough time. If so, then you might latch the inputs to the ALU on the prior clock period (assuming you can latch both inputs on the same clock, which may not be true) and latch the output of the ALU on the current (following) clock period. That's fine, because you know for sure that there has been enough time for the ALU output to stabilize.

That's not the only way, though. You might have a system which, for other design reasons, has a minimum clock period of $100\:\text{ns}$. So, to be safe you arrange things so that you latch the inputs on $c_0$ and then only latch the output of the ALU on $c_3$ ($300\:\text{ns}$ later.) The faster clock may serve other uses well, despite causing the addition/subtraction to take even longer than it might had the minimum clock period been longer. Of course, you could also consider latching the output on $c_2$ in this case because $200\:\text{ns}$ is longer than the required $178\:\text{ns}$.
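To make the arithmetic explicit (my own restatement of the numbers above):

$$N = \left\lceil \frac{178\:\text{ns}}{100\:\text{ns}} \right\rceil = 2,$$

so $c_2$, i.e. $200\:\text{ns}$ after the inputs are latched at $c_0$, is the earliest clock edge at which the ALU output is guaranteed to have stabilized.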
The above is for clocked CPUs. There are also asynchronous CPUs, where the reasoning and methods mentioned above are handled differently. That's a whole different area of research, and the answer would be very different from the one I gave above. But I don't know much about these, so I'll hold short of saying more about it.

These are all design choices, though. It's not left to the electronics itself to "guess at." (Not in my experience, anyway.) The designer works out the details and makes decisions. Where I've been involved, the design is simulated first. Then it is placed on a huge cube of FPGAs that can run the design almost at full speed. A little "pod" reaches out from the cube to plug into a CPU socket on a real board for further testing. All this before committing to an ASIC and an IC FAB run.
• Suppose that we talk about the 8086, and the 16-bit register DIV instruction. It takes 144 to 162 clock cycles ( oocities.org/mc_introtocomputers/Instruction_Timing.PDF ). A few questions: (1) does the 8086 simply "stall" for so many clock cycles?, (2) if so, do these cycles count as part of the Execution step?, (3) how does the CPU decide how many clock cycles (i.e. between 144 and 162) to actually wait?
– obe, Sep 1, 2019 at 0:55
• @obe I don't know the exact design internals of the 8086 (it has the same 40-pin package as the 8088 except that the 8086 shares 16 bits of the 20 bit address bus for data instead of just sharing 8 bits.) The DIV instruction does operate as a state-machine in the execution unit, but it obviously includes some "early-out" tests (an all-0 or all-1 test for some part of a word is easily implemented.) Perhaps the best way to see how that might work would be to examine the ADSP-2100 Family manual and study their DIV instruction (1 bit at a time) and how to perform division over long word lengths.
– jonk, Sep 1, 2019 at 3:46
• @obe (Both the DIVS and DIVQ provide good details about division -- look for their behavioral schematic in the manual.)
– jonk, Sep 1, 2019 at 4:12
• thanks! i'm using DIV as an example. So if I understand you correctly, the 8086 (likely) has some logic outside the ALU to examine the numbers and in some cases deduce the length of the division information, but once "submitted" to the ALU - it simply waits out the number of cycles it had previously determined the operation would take, but doesn't communicate with the ALU (e.g. to get completion signal or more accurate estimations). Is that it?
– obe, Sep 1, 2019 at 10:46
• @obe Division logic is expensive. The more bits per clock or cycle time, the more expensive it is. It's likely that their division algorithm produced one bit per internal cycle time (3 clocks each?) That's why I referred you to the ADSP-2100 family -- they tell you everything you need to understand about that process. But it also likely they included some cheap logic to detect special cases that could be used to shorten the process by a few cycles (up to 6*3?) The state machine is part of the exec unit and controls the ALU logic process.
– jonk, Sep 1, 2019 at 18:15
It's a fixed number of clock cycles after the data is put in the ALU. The exact number depends on the processor, and sometimes also on the specific instruction. You might find it useful to look up the term "pipelining" as it applies to computing.
http://www.euro-math-soc.eu/job/research-fellow-mathematical-physics | RESEARCH FELLOW IN MATHEMATICAL PHYSICS
Department of Mathematics and Statistics
Faculty of Science
Salary: AUD$61,138* - AUD$82,963 p.a. (Level A) or AUD$87,334 - AUD$103,705 p.a. (Level B), plus 9.25% superannuation. Level of appointment is subject to qualifications and experience.
(*PhD entry level AUD$77,290 p.a.)
The Australian Research Council has funded the research project 'A synthesis of random matrix theory for applications in mathematics, physics and engineering'. Random matrix theory enjoys an ever-expanding range of applications, and the aim of this project is to draw together seemingly disparate techniques to tackle problems from different applications.
Applicants must have: either a PhD in mathematics, or a PhD in physics; a strong publication record in the field of random matrices; and the ability and desire to work collaboratively.
The position tenure is full-time (fixed term), and is available for three years (Level A), or two years (Level B).
Close date: 24 March 2014
For position information and to apply online go to http://hr.unimelb.edu.au/careers, click on ‘Search for Jobs’ and search under the job title or job number 0032901.
Job location: Parkville, Australia
Contact and application information
Monday, March 24, 2014
Contact name: Department of Mathematics and Statistics, Faculty of Science
Category: Postdoctoral fellowships
https://www.cnblogs.com/devos/p/4396773.html | # Java and the ABA Problem
* This is a modification of the Michael & Scott algorithm,
* adapted for a garbage-collected environment, with support for
* interior node deletion (to support remove(Object)). For
* explanation, read the paper.
*
* Note that like most non-blocking algorithms in this package,
* this implementation relies on the fact that in garbage
* collected systems, there is no possibility of ABA problems due
* to recycled nodes, so there is no need to use "counted
* pointers" or related techniques seen in versions used in
* non-GC'ed settings.
2. It says that the implementation of this class relies on the fact that "in GC systems, there is no possibility of ABA problems due to recycled nodes". The question is: what exactly are "recycled nodes"?

There are two scenarios in which a CAS can be fooled:

1. We compare references to nodes in the CAS & we reuse nodes. Suppose the queue's initial state is A -> E, and the state after some changes is A -> X -> E. Then when the CAS compares the reference to A, it cannot see that the state has changed. "Reuse" means, as in this example, adding the same node to the queue again.

2. We compare references to nodes in the CAS & some newly allocated node A2 happens to end up at the same address as A1.

The difference between a GC environment and a non-GC environment (such as C++) lies in the second scenario; that is, in Java the second scenario cannot happen. The reason is that when we compare the two references A1 and A2 in a CAS, the implicit fact is that both references exist, so the objects they refer to are both reachable from GC roots. Then, as long as the object referenced by A1 has not yet been garbage collected, a newly allocated object cannot have the same address as A1's object. Therefore A1 != A2.
A common case of the ABA problem is encountered when implementing a lock-free data structure. If an item is removed from the list, deleted, and then a new item is allocated and added to the list, it is common for the allocated object to be at the same location as the deleted object due to optimization. A pointer to the new item is thus sometimes equal to a pointer to the old item which is an ABA problem.
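To make the "counted pointers" idea concrete, here is a minimal, hypothetical Java sketch (my own illustration, not from the original post) using java.util.concurrent.atomic.AtomicStampedReference, which pairs a reference with an integer stamp so that reuse of the same node is still detected:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    static class Node {
        final String name;
        Node(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        Node a = new Node("A");
        Node e = new Node("E");
        // A "counted pointer": the head reference is paired with an int stamp.
        AtomicStampedReference<Node> head = new AtomicStampedReference<>(a, 0);

        int[] stampHolder = new int[1];
        Node observed = head.get(stampHolder); // observe (A, stamp 0)

        // Meanwhile, another thread swings head from A to E, then reuses node A:
        head.compareAndSet(a, e, 0, 1); // A -> E
        head.compareAndSet(e, a, 1, 2); // back to A, the same node reused

        // A plain reference CAS would succeed here (head is A again), but the
        // stamped CAS fails because the stamp has moved on from 0 to 2.
        boolean swapped = head.compareAndSet(observed, new Node("B"),
                stampHolder[0], stampHolder[0] + 1);
        System.out.println(swapped); // prints false
    }
}

With a plain AtomicReference the final compareAndSet would have succeeded even though the state changed in between, which is exactly the ABA scenario of case 1 above. For reference, here is the JDK's ConcurrentLinkedQueue.offer, which relies on garbage collection instead of stamps: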
public boolean offer(E e) {
checkNotNull(e);
final Node<E> newNode = new Node<E>(e);
for (Node<E> t = tail, p = t;;) {
Node<E> q = p.next;
if (q == null) {
// p is last node
if (p.casNext(null, newNode)) {
// Successful CAS is the linearization point
// for e to become an element of this queue,
// and for newNode to become "live".
if (p != t) // hop two nodes at a time
casTail(t, newNode); // Failure is OK.
return true;
}
}
else if (p == q)
// We have fallen off list. If tail is unchanged, it
// will also be off-list, in which case we need to
// jump to head, from which all live nodes are always
// reachable. Else the new tail is a better bet.
p = (t != (t = tail)) ? t : head;
else
// Check for tail updates after two hops.
p = (p != t && t != (t = tail)) ? t : q;
}
}
https://www.projecteuclid.org/euclid.dmj/1383760702 | ## Duke Mathematical Journal
### Tangent lines, inflections, and vertices of closed curves
#### Abstract
We show that every smooth closed curve $\Gamma$ immersed in Euclidean space $\mathbf {R}^{3}$ satisfies the sharp inequality $2(\mathcal{P}+\mathcal{I})+\mathcal{V}\geq6$ which relates the numbers $\mathcal{P}$ of pairs of parallel tangent lines, $\mathcal{I}$ of inflections (or points of vanishing curvature), and $\mathcal{V}$ of vertices (or points of vanishing torsion) of $\Gamma$. We also show that $2(\mathcal{P^{+}}+\mathcal{I})+\mathcal{V}\geq4$, where $\mathcal{P}^{+}$ is the number of pairs of concordant parallel tangent lines. The proofs, which employ curve-shortening flow with surgery, are based on corresponding inequalities for the numbers of double points, singularities, and inflections of closed curves in the real projective plane $\mathbf {RP}^{2}$ and the sphere $\mathbf {S}^{2}$ which intersect every closed geodesic. These findings extend some classical results in curve theory from works of Möbius, Fenchel, and Segre, including Arnold’s “tennis ball theorem.”
#### Article information
Source
Duke Math. J., Volume 162, Number 14 (2013), 2691-2730.
Dates
Revised: 16 February 2013
First available in Project Euclid: 6 November 2013
https://projecteuclid.org/euclid.dmj/1383760702
Digital Object Identifier
doi:10.1215/00127094-2381038
Mathematical Reviews number (MathSciNet)
MR3127811
Zentralblatt MATH identifier
1295.53002
#### Citation
Ghomi, Mohammad. Tangent lines, inflections, and vertices of closed curves. Duke Math. J. 162 (2013), no. 14, 2691--2730. doi:10.1215/00127094-2381038. https://projecteuclid.org/euclid.dmj/1383760702
#### References
• [1] S. Alexander, M. Ghomi, and J. Wong, Topology of Riemannian submanifolds with prescribed boundary, Duke Math. J. 152 (2010), 533–565.
• [2] S. Angenent, Parabolic equations for curves on surfaces. II. Intersections, blow-up and generalized solutions, Ann. of Math. (2) 133 (1991), 171–215.
• [3] S. Angenent, “Inflection points, extatic points and curve shortening” in Hamiltonian Systems with Three or More Degrees of Freedom (S’Agaró, 1995), NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci. 533, Kluwer, Dordrecht, 1999, 3–10.
• [4] V. I. Arnold, Topological Invariants of Plane Curves and Caustics, Univ. Lecture Ser. 5, Amer. Math. Soc., Providence, 1994.
• [5] V. I. Arnold, “On the number of flattening points on space curves” in Sinaĭ’s Moscow Seminar on Dynamical Systems, Amer. Math. Soc. Transl. Ser. 2 171, Amer. Math. Soc., Providence, 1996, 11–22.
• [6] V. I. Arnold “Topological problems in wave propagation theory and topological economy principle in algebraic geometry” in The Arnoldfest (Toronto, ON, 1997), Fields Inst. Commun. 24, Amer. Math. Soc., Providence, 1999, 39–54.
• [7] K.-S. Chou and X.-P. Zhu, The Curve Shortening Problem, Chapman & Hall/CRC, Boca Raton, Fla, 2001.
• [8] D. DeTurck, H. Gluck, D. Pomerleano, and D. S. Vick, The four vertex theorem and its converse, Notices Amer. Math. Soc. 54 (2007), 192–207.
• [9] M. P. do Carmo and F. W. Warner, Rigidity and convexity of hypersurfaces in spheres, J. Differential Geom. 4 (1970), 133–144.
• [10] H. G. Eggleston, Convexity, Cambridge Tracts in Math. Math. Phys. 47, Cambridge Univ. Press, New York, 1958.
• [11] F. Fabricius-Bjerre, On the double tangents of plane closed curves, Math. Scand. 11 (1962), 113–116.
• [12] W. Fenchel, Über Krümmung und Windung geschlossener Raumkurven, Math. Ann. 101 (1929), 238–252.
• [13] W. Fenchel, On the differential geometry of closed space curves, Bull. Amer. Math. Soc. 57 (1951), 44–54.
• [14] M. Ghomi, Shadows and convexity of surfaces, Ann. of Math. (2) 155 (2002), 281–293.
• [15] M. Ghomi, Tangent bundle embeddings of manifolds in Euclidean space, Comment. Math. Helv. 81 (2006), 259–270.
• [16] M. Ghomi, $h$-principles for curves and knots of constant curvature, Geom. Dedicata 127 (2007), 19–35.
• [17] M. Ghomi, Topology of surfaces with connected shades, Asian J. Math. 11 (2007), 621–634.
• [18] M. Ghomi, A Riemannian four vertex theorem for surfaces with boundary, Proc. Amer. Math. Soc. 139 (2011), 293–303.
• [19] M. Ghomi, Vertices of closed curves in Riemannian surfaces, Comment. Math. Helv. 88 (2013), 427–448.
• [20] M. Ghomi and B. Solomon, Skew loops and quadric surfaces, Comment. Math. Helv. 77 (2002), 767–782.
• [21] M. Ghomi and S. Tabachnikov, Totally skew embeddings of manifolds, Math. Z. 258 (2008), 499–512.
• [22] M. A. Grayson, Shortening embedded curves, Ann. of Math. (2) 129 (1989), 71–111.
• [23] B. Halpern, Global theorems for closed plane curves, Bull. Amer. Math. Soc. 76 (1970), 96–100.
• [24] J. Hass and P. Scott, Shortening curves on surfaces, Topology, 33 (1994), 25–43.
• [25] E. Heil, Some vertex theorems proved by means of Moebius transformations, Ann. Mat. Pura Appl. (4) 85 (1970), 301–306.
• [26] S. B. Jackson, Vertices for plane curves, Bull. Amer. Math. Soc. 50 (1944), 564–578.
• [27] S. B. Jackson, Geodesic vertices on surfaces of constant curvature, Amer. J. Math. 72 (1950), 161–186.
• [28] F. Klein, Eine neue Relation zwischen den Singularitäten einer algebraischen Curve, Math. Ann. 10 (1876), 199–209.
• [29] F. Klein, Elementarmathematik vom höheren Standpunkte aus. Dritter Band: Präzisions- und Approximationsmathematik, 3rd ed., Grundlehren Math. Wiss. 16, Springer, Berlin, 1968.
• [30] A. Kneser, “Bemerkungen über die anzahl der extrema des krümmung auf geschlossenen kurven und über verwandte fragen in einer night eucklidischen geometrie” in Festschrift Heinrich Weber, Teubner, Leipzig, 1912, 170–180.
• [31] A. F. Möbius, “Über die grundformen der linien der dritten ordnung” in Gesammelte Werke II, Hirzel, Leipzig, 1886, 89–176.
• [32] S. Mukhopadhyaya, New methods in the geometry of a plane arc, Bull. Calcutta Math. Soc. 1 (1909), 31–37.
• [33] R. Osserman, The four-or-more vertex theorem, Amer. Math. Monthly 92 (1985), 332–337.
• [34] V. Ovsienko and S. Tabachnikov, Projective Differential Geometry Old and New, Cambridge Tracts in Math. 165, Cambridge Univ. Press, Cambridge, 2005.
• [35] G. Panina, Singularities of piecewise linear saddle spheres on $S^{3}$, J. Singul. 1 (2010), 69–84.
• [36] H. Rosenberg, Hypersurfaces of constant curvature in space forms, Bull. Sci. Math. 117 (1993), 211–239.
• [37] M. C. Romero-Fuster and V. D. Sedykh, A lower estimate for the number of zero-torsion points of a space curve, Beiträge Algebra Geom. 38 (1997), 183–192.
• [38] R. Schneider, Convex Bodies: The Brunn-Minkowski Theory, Encyclopedia Math. Appl. 44, Cambridge Univ. Press, Cambridge, 1993.
• [39] V. D. Sedykh, Four vertices of a convex space curve, Bull. London Math. Soc. 26 (1994), 177–180.
• [40] V. D. Sedykh, “Discrete versions of the four-vertex theorem” in Topics in Singularity Theory, Amer. Math. Soc. Transl. Ser. 2 180, Amer. Math. Soc., Providence, 1997, 197–207.
• [41] B. Segre, Global differential properties of closed twisted curves, Rend. Sem. Mat. Fis. Milano 38 (1968), 256–263.
• [42] B. Segre, “Sulle coppie di tangenti fra loro parallele relative ad una curva chiusa sghemba” in Hommage au Professeur Lucien Godeaux, Librairie Universitaire, Louvain, 1968, 141–167.
• [43] B. Solomon, Central cross-sections make surfaces of revolution quadric, Amer. Math. Monthly 116 (2009), 351–355.
• [44] G. Thorbergsson and M. Umehara, “A unified approach to the four vertex theorems. II” in Differential and Symplectic Topology of Knots and Curves, Amer. Math. Soc. Transl. Ser. 2 190, Amer. Math. Soc., Providence, 1999, 229–252.
• [45] G. Thorbergsson and M. Umehara, Inflection points and double tangents on anti-convex curves in the real projective plane, Tohoku Math. J. (2) 60 (2008), 149–181.
• [46] B. Totaro, Space curves with nonzero torsion, Internat. J. Math. 1 (1990), 109–117.
• [47] R. Uribe-Vargas, On 4-flattening theorems and the curves of Carathéodory, Barner and Segre, J. Geom. 77 (2003), 184–192.
• [48] J. L. Weiner, A theorem on closed space curves, Rend. Mat. (6) 8 (1975), 789–804.
• [49] J. L. Weiner, Global properties of spherical curves, J. Differential Geom. 12 (1977), 425–434.
• [50] J. L. Weiner, A spherical Fabricius-Bjerre formula with applications to closed space curves, Math. Scand. 61 (1987), 286–291.
• [51] Y.-Q. Wu, Knots and links without parallel tangents, Bull. London Math. Soc. 34 (2002), 681–690.
• [52] S.-T. Yau, "Open problems in geometry" in Differential Geometry: Partial Differential Equations on Manifolds (Los Angeles, Calif., 1990), Proc. Sympos. Pure Math. 54, Amer. Math. Soc., Providence, 1993, 1–28.
https://saadquader.wordpress.com/2013/02/07/kleenes-recursion-theorem/ | Here we give an illustrative proof of Kleene's recursion theorem, a fundamental theorem in computability/recursion theory. The proof is so simple it can be stated in a few lines. On the other hand, the proof is a beauty in itself (once you get hold of it). Without further ado, we will first state the preliminary definitions/lemmas/theorems and then give the formal statement.
Preliminaries
The words recursive and computable mean the same thing, and can be used interchangeably.
A partial function is defined on only some inputs. A total function is defined on all inputs. Here, inputs are natural numbers. If a function $\phi$ is not defined on some input $x$, we say that $\phi(x)$ diverges, denoted by $\phi(x)\uparrow$. Otherwise, if we can compute $y=\phi(x)$, we say that $\phi(x)$ converges, denoted by $\phi(x)\downarrow$.
Every partial computable function is computable by a Turing Machine (TM); this is the famous Church-Turing Thesis. On the inputs where the function is undefined, the Turing Machine will produce no answer. Hence there exists a TM program for every partial function. The entire description of this program can be uniquely converted to a natural number. Thus the partial computable function which is computed by the TM program with description $x$ is denoted by $\phi_x$, where $x$ is a natural number.
Consider any partial recursive function $\phi_x(m,n)$. The S-m-n theorem (in its simplest form) tells us that there exists a total computable function $f(x,m)$ such that $\phi_x(m,n) = \phi_{f(x,m)}(n)$ for all $m$ and $n$. In other words, it is possible to find (by a TM) a partial recursive function $\phi_{f(x,m)}(n)$ which has the same input-output mapping as $\phi_x(m,n)$ for all inputs $n$. You can think about it as hard-wiring one or more input-arguments of any function into its index.
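As a quick illustration (our own, not part of the theorem's statement): if $\phi_x(m,n) = m + n$, then hard-wiring $m=3$ yields an index $f(x,3)$ with

$\phi_{f(x,3)}(n) = 3 + n$ for all $n$.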
Now we are ready to state the recursion theorem.
The Statement
Kleene’s Recursion Theorem tells us that for every total computable function $f$ which takes a natural number as input and gives another natural number as output, there exists a particular input $n$ such that the two partial computable functions $\phi_n$ and $\phi_{f(n)}$ have the same input-output characteristics.
In other words,
1. Pick any computable function $f$ that is total.
2. Then, there exists some natural number $n$, such that
3. If we grab the two partial functions $\phi_n$ and $\phi_{f(n)}$, we will see that
4. Wow! They have the same input-output mapping for all inputs, that is,
1. If $\phi_n(x)$ diverges for some $x$, so does $\phi_{f(n)}(x)$.
2. If $\phi_n(x)$ converges for some $x$, so does $\phi_{f(n)}(x)$ and moreover, $\phi_n(x)=\phi_{f(n)}(x)$.
This is the same as saying that every total computable function $f$ has an input element $n$ such that the two partial functions indexed by $n$ and $f(n)$ will behave identically on all inputs. In this sense, $n$ is called the fixed-point for the total computable function $f$.
The Proof
The proof is elegant and short. However, it contains (like most recursion theory proofs) self-references and is therefore sometimes hard to visualize for a beginner. Below we will outline the proof presented in our class lecture by Professor Johanna Franklin.
Figure 1: Components for the proof of Kleene’s recursion theorem.
Step 1. Consider a total, one-to-one function $d$ with one input. We will define this function later. This function maps the given natural number to another natural number.
Step 2. Consider any total computable function $f$, which also maps a given natural number to another natural number.
Step 3. Now consider the composition of $f\circ d$, that is, the result of successively applying $d$ and $f$ on some input. Clearly, since both $f$ and $d$ are total, the composition will also be total. Let $\phi_v$ be the total function that computes this composition. Therefore, for all $x, \phi_v(x)=f(d(x))$. Note that $v$ is a fixed number which must exist although we do not know its exact value. Also note that since $\phi_v$ is total, it follows that $\phi_v(v)\downarrow$.
Step 4. Let $n=d(v)$. Since $d$ is a total function, such an $n$ must exist. Now we have $\phi_v(v)=f(d(v))=f(n)$.
Step 5. Now we will define a partial function $\psi(u,z)$ on two inputs, as follows. First, we will evaluate $\phi_u(u)$ (remember that $u$ is an input to our function $\psi$). If $\phi_u(u)$ converges to some value, say, $y=\phi_u(u)$, then $\psi(u,z)$ gives the same output as $\phi_y(z)$. Note that since $\phi_y$ is partial, this output may or may not converge. Otherwise, if $\phi_u(u)\uparrow$, then our function $\psi$ diverges. This description is given as follows:
$\psi(u,z)=\left\{\begin{array}{ll}\phi_{\phi_u(u)}(z) & \text{if }\phi_u(u)\downarrow\\\uparrow & \text{otherwise}\end{array}\right.$
Step 6. Now it is time to define the total one-to-one function $d$ mentioned at Step 1. We will do it as follows. We will take the partial function $\psi(u,z)$ defined in previous step, then use the S-m-n theorem to show the existence of another partial function $\phi_{d(u)}(z)$ which has identical input-output behavior as $\psi(u,z)$.
Therefore, $d$ is a function which maps a natural number to another natural number. Since the parameter $u$ of the function $\psi(u,z)$ can be any natural number, $d$ is defined for all inputs, and hence total. Moreover, as S-m-n theorem tells us, $d$ is computable as well.
Step 7. Now we have all the pieces of the puzzle. Here we are interested in only one particular number, namely $v$ from Step 3. At Step 3 we showed that $\phi_v$ is total, and hence $\phi_v(v)\downarrow$. Therefore, for any input parameter $z$, we have the following.
$\begin{array}{rcl}\phi_n(z)&=&\phi_{d(v)}(z)\ \ \ \ \text{(Step 4)}\\&=&\psi(v,z)\ \ \ \ \text{(Step 6)}\\&=&\phi_{\phi_v(v)}(z)\ \ \ \ \text{since }\phi_v(v)\downarrow\text{ (Step 5)}\\&=&\phi_{f(n)}(z)\ \ \ \ \text{(Step 4)}\\\end{array}$

Thus $\phi_n$ and $\phi_{f(n)}$ agree on every input $z$, so $n=d(v)$ is the promised fixed point of $f$.

$\square$
https://www.storyofmathematics.com/system-of-linear-inequalities/ | # Solving Systems of Linear Inequalities – Technique & Examples
Before solving systems of linear inequalities, let’s look at what inequality means. The word inequality means a mathematical expression in which the sides are not equal to each other.
Basically, there are five inequality symbols used to represent equations of inequality.
These are less than (<), greater than (>), less than or equal (≤), greater than or equal (≥), and the not equal symbol (≠). Inequalities are used to compare numbers and determine the range or ranges of values that satisfy the conditions of a given variable.
## What is a System of Linear Inequalities?
A system of linear inequalities is a set of equations of linear inequalities containing the same variables.
Several methods for solving systems of linear equations translate to systems of linear inequalities. However, solving a system of linear inequalities is somewhat different from solving linear equations because the inequality signs prevent us from solving by the substitution or elimination methods. Perhaps the best method to solve systems of linear inequalities is by graphing the inequalities.
## How to Solve Systems of Linear Inequalities?
Previously, you learned how to solve a single linear inequality by graphing. In this article, we will learn how to find solutions for a system of linear inequalities by graphing two or more linear inequalities simultaneously.
The solution to a system of linear inequalities is the region where the graphs of all linear inequalities in the system overlap.
To solve a system of inequalities, graph each linear inequality in the system on the same x-y axis by following the steps below:
• Isolate the variable y in each linear inequality.
• Draw and shade the area above the borderline using dashed and solid lines for the symbols > and ≥ respectively.
• Similarly, draw and shade the area below the borderline using dashed and solid lines for the symbols < and ≤ respectively.
• Shade the region where all the inequalities overlap or intersect. If there is no intersection region, then we conclude that the system of inequalities has no solution.
Let’s go over a couple of examples to understand these steps.
Example 1
Graph the following system of linear inequalities:
y ≤ x – 1 and y < –2x + 1
Solution
Graph the first inequality y ≤ x − 1.
• Because of the “less than or equal to” symbol, we will draw a solid border and do the shading below the line.
• Also, graph the second inequality y < –2x + 1 on the same x-y axis.
• In this case, our borderline will be dashed or dotted because of the less than symbol. Shade the area below the borderline.
Therefore, the solution to this system of inequalities is the darker shaded region extending forever in a downward direction, as shown below.
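As a quick sanity check (our own addition, not in the original article): the point (0, −2) lies in this region, since −2 ≤ 0 − 1 and −2 < −2(0) + 1, so it satisfies both inequalities.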
Example 2
Solve the following system of inequalities:
x – 5y ≥ 6
3x + 2y > 1
Solution
• First, isolate the variable y to the left in each inequality.
For x – 5y ≥ 6;
=> x ≥ 6 + 5y
=> 5y ≤ x – 6
=> y ≤ 0.2x – 1.2
And for 3x + 2y > 1;
=> 2y > 1 – 3x
=> y > 0.5 – 1.5x
• We’ll graph y ≤ 0.2x – 1.2 and y > 0.5 – 1.5x using a solid line and a broken line, respectively.
The solution of the system of inequalities is the darker shaded area, which is the overlap of the two individual solution regions.
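Again, as a quick check (our own addition): the point (10, 0) lies in the overlap, since 10 − 5(0) ≥ 6 and 3(10) + 2(0) > 1.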
Example 3
Graph the following system of linear inequalities.
y ≤ (1/2) x + 1,
y ≥ 2x – 2,
y ≥ -(1/2) x – 3.
Solution
This system has three inequalities that are all connected by an “equal to” component. This tells us that all the borderlines will be solid. The graph of the three inequalities is shown below.
The shaded region of the three equations overlaps right in the middle section. Therefore, the solutions of the system lie within the bounded region, as shown on the graph.
Example 4
Graph the following system of linear inequalities:
x + 2y < 2, y > –1,
x ≥ –3.
Solution
Isolate the variable y in the first inequality to get;
y < –x/2 + 1

You should note that the inequalities y > –1 and x ≥ –3 will have horizontal and vertical boundary lines, respectively. Let’s graph the three inequalities as illustrated below.
The darker shaded region enclosed by two dotted line segments and one solid line segment give the three inequalities.
Example 5
Solve the following system of linear inequalities:
–2x -y < -1
4x + 2y ≤ -6
Solution
Isolate the variable y in each inequality.
–2x -y < -1 => y > –2x + 1
4x + 2y ≤ -6 => y ≤ -2x -3
Let’s go ahead and graph y > –2x + 1 and y ≤ -2x -3:
Since the shaded areas of the two inequalities don’t overlap, we can conclude that the system of inequalities has no solution.
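We can confirm this algebraically (our own check, consistent with the graph): for every x we have −2x − 3 < −2x + 1, so no y can satisfy both y > −2x + 1 and y ≤ −2x − 3 at once.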
### Practice Questions
1. Which of the following shows the following system of linear inequalities’ graph:
$y \leq x – 2$ and $y < –3x + 2$
2. Solve the following system of inequalities:
$x – 4y \geq 3$
$2x + 4y > 1$
3. True or False: Using the result from the previous question, we can see that $(1, 1)$ is a solution to the system of linear inequalities.
4. Graph the following system of linear inequalities.
$y \leq (1/4) x + 2$,
$y \geq x – 1$,
$y \geq -(1/4) x – 1$.
5. True or False: Using the result from the previous question, we can see that $(-1, 1)$ is a solution to the system of linear inequalities.
6. Graph the following system of linear inequalities:
$x + y > 1$,
$y < –2$,
$x \leq –2$.
https://math.stackexchange.com/questions/1637489/difference-between-a-limit-and-accumulation-point | # Difference between a limit and accumulation point?
What is the exact difference between a limit point and an accumulation point?
An accumulation point of a set is a point, every neighborhood of which has infinitely many points of the set. Alternatively, it has a sequence of DISTINCT terms converging to it?
Whereas a limit point simply has a sequence which converges to it? i.e. something like $(1)^n$ which is a constant sequence.
Is this the right idea? As much detail and intuition as possible would be greatly appreciated.
The difference is very simple.
1) As you wrote: an accumulation point of a set is a point, every neighborhood of which contains infinitely many points of the set.
2) But a limit point is a special accumulation point. No matter how small a neighborhood you choose, all members $a_n$ (after a certain $n$) are in that neighborhood of the limit point.

The requirement for all members (after a certain $n$) is obviously stronger than the requirement for infinitely many points/members.

So every limit point is an accumulation point, but not every accumulation point is a limit point. Also note that: (1) if a sequence has a limit point, then that is the only accumulation point of the sequence; (2) if a sequence has more than one accumulation point, then the sequence has no limit point. Try to prove these two; it will clear up your confusion.
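A standard example illustrating (2) (our addition): the sequence $a_n = (-1)^n$ has two accumulation points, $1$ and $-1$, and therefore has no limit point.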
Unfortunately, there doesn't seem to be one standard definition for these terms. One author may say that accumulation points are limit points; others do not. The best thing to do is to check the definition given in the book, paper, etc. that you are reading.
• So it isn't a big deal to treat them somewhat interchangeably? – Aaron Zolotor Feb 2 '16 at 15:14
• It might, depending on how these terms are defined in whatever source you're reading. – Tim Raczkowski Feb 2 '16 at 15:16
https://www.aimsciences.org/article/doi/10.3934/krm.2012.5.639 | # Finite element method with discrete transparent boundary conditions for the time-dependent 1D Schrödinger equation
• We consider the time-dependent 1D Schrödinger equation on the half-axis with variable coefficients becoming constant for large $x$. We study a two-level symmetric in time (i.e. the Crank-Nicolson) and any order finite element in space numerical method to solve it. The method is coupled to an approximate transparent boundary condition (TBC). We prove uniform in time stability with respect to initial data and a free term in two norms, under suitable conditions on an operator in the approximate TBC. We also consider the corresponding method on an infinite mesh on the half-axis. We derive explicitly the discrete TBC allowing us to restrict the latter method to a finite mesh. The operator in the discrete TBC is a discrete convolution in time; in turn its kernel is a multiple discrete convolution. The stability conditions are justified for it. The accomplished computations confirm that high order finite elements coupled to the discrete TBC are effective even in the case of highly oscillating solutions and discontinuous potentials.
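• For orientation (a standard form, not quoted from the paper): for a Schrödinger-type equation $i\,\partial_t u = Hu$, a two-level symmetric in time (Crank-Nicolson) discretization with step $\Delta t$ reads $$i\,\frac{u^{n+1}-u^{n}}{\Delta t} = H\,\frac{u^{n+1}+u^{n}}{2},$$ to which the paper couples a finite element discretization in space and the approximate/discrete TBC.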
Mathematics Subject Classification: 65M60, 65M12, 65M06, 35Q40.
https://gamedev.stackexchange.com/questions/192101/what-is-causing-my-wheel-colliders-raycast-to-fail | # What is causing my wheel collider's raycast to fail?
I'm making a custom car suspension (custom wheel collider, etc.). Now I've encountered a problem: if the car gets dropped from a certain height, the raycast fails to detect the collision and the wheels intersect with the terrain.
My raycast starts from the center of a wheel and goes down.
Example of what happens for ~1 frame when the car is dropped:
Then it jumps back up to the normal state.
I know that Physics.Raycast detection happens every FixedUpdate, which can run less often than Update (and I update my wheel positions in Update). But since all physics happens every FixedUpdate, doesn't that mean rigidbodies move every fixed update too?
I don't understand where the conflict happens, or why the raycast doesn't detect that ground hit for that one frame.
Wheel collider's code:
void FixedUpdate()
{
if (Physics.Raycast(transform.position, -transform.up, out hit, springLength))
{
// Debug.Log(transform.name + " has hit");
contactSpeed = (hit.distance - hitDistance) / Time.deltaTime;
// Debug.Log(contactSpeed);
hitDistance = Mathf.Clamp(hit.distance, maxCompression, hit.distance);
// forceDirection = (hit.point - transform.position).normalized;
}
else
{
//resting position = springlen
hitDistance = springLength;
}
}
And code that modifies wheel model's transform.position:
void Update()
{
for (int i = 0; i < wheels.Count; i++)
{
UpdateWheel(i);
}
}
void UpdateWheel(int i)
{
Transform wheelModel = wheelModels[i];
PhysicsWheel wheel = wheels[i];
Vector3 wheelPos = wheelModel.localPosition;
//springStretch = just hitDistance's getter.
wheelModel.localPosition = wheelPos;
Vector3 wheelRot = wheelModel.localRotation.eulerAngles;
// wheelRot
wheelModel.Rotate(Vector3.right * wheel.RPM * 6 * Time.deltaTime, Space.Self);
// wheelModel.transform.localRotation = Quaternion.Euler(wheelRot);
}
Physics settings are default:
Clarifications:
hitDistance is used to find where the raycast actually hit the ground (by subtracting the wheel radius from it, I can find out where the wheel should be); contactSpeed is the delta between two hitDistances, used to dampen the spring.
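One mitigation, suggested in the comments below, is to repeat the raycast in Update as a last-chance fix-up before the current state is rendered. A minimal, hypothetical sketch (it reuses springLength, maxCompression, hitDistance and the wheels list from the snippets above, and would replace the Update method shown further down):
void Update()
{
    // Re-run the ground query against the latest physics state so the
    // rendered wheel position is not one physics step out of date.
    RaycastHit freshHit;
    if (Physics.Raycast(transform.position, -transform.up, out freshHit, springLength))
    {
        hitDistance = Mathf.Clamp(freshHit.distance, maxCompression, freshHit.distance);
    }
    for (int i = 0; i < wheels.Count; i++)
    {
        UpdateWheel(i); // place the wheel model using the refreshed hitDistance
    }
}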
• There could be numerous reasons for this behavior. I think we need to see more of your code and inspector settings. May 6 at 8:39
• @Philipp Edited accordingly.
– Nick
May 6 at 8:44
• FixedUpdate runs before the physics step. So by the time you get to Update, the raycast result you've got is old news, representing the depth one physics step in the past, not the current physics state. May 6 at 10:53
• @DMGregory Thanks. I understand now, but I can't think of a solution except some very complex ones, which are probably a bad idea.
– Nick
May 6 at 12:26
• Did you consider just repeating your raycast in Update to get a last-chance fix-up before the current state is rendered? May 6 at 12:27 | 2021-09-28 18:48:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40269166231155396, "perplexity": 7719.862420507231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060882.17/warc/CC-MAIN-20210928184203-20210928214203-00085.warc.gz"} |
https://easystats.github.io/bayestestR/reference/area_under_curve.html | Based on the DescTools AUC function. It can calculate the area under the curve with a naive algorithm or a more elaborated spline approach. The curve must be given by vectors of xy-coordinates. This function can handle unsorted x values (by sorting x) and ties for the x values (by ignoring duplicates).
area_under_curve(x, y, method = c("trapezoid", "step", "spline"), ...)
auc(x, y, method = c("trapezoid", "step", "spline"), ...)
## Arguments
x: Vector of x values.
y: Vector of y values.
method: Method to compute the Area Under the Curve (AUC). Can be "trapezoid" (default), "step" or "spline". If "trapezoid", the curve is formed by connecting all points by a direct line (composite trapezoid rule). If "step" is chosen then a stepwise connection of two points is used. For calculating the area under a spline interpolation the splinefun function is used in combination with integrate.
...: Arguments passed to or from other methods.
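For sorted x values, the default "trapezoid" method amounts to the composite trapezoid rule, i.e. $AUC \approx \sum_{i=1}^{n-1} \tfrac{1}{2}\,(x_{i+1}-x_i)\,(y_i+y_{i+1})$.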
See also: DescTools.
## Examples
library(bayestestR)
posterior <- distribution_normal(1000)
dens <- estimate_density(posterior)
dens <- dens[dens$x > 0, ]
x <- dens$x
y <- dens$y
area_under_curve(x, y, method = "trapezoid")
#> [1] 0.4980976
area_under_curve(x, y, method = "step")
#> [1] 0.4992463
area_under_curve(x, y, method = "spline")
#> [1] 0.4980982 | 2021-06-20 12:01:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5353970527648926, "perplexity": 4112.160848694617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487662882.61/warc/CC-MAIN-20210620114611-20210620144611-00231.warc.gz"} |
https://jquery-angular.wekeepcoding.com/article/10001317/How+to+draw+a+circle+embedded+a+plane+in+a+3d+figure+using+pgfplots%3F | ## How to draw a circle embedded in a plane in a 3d figure using pgfplots? - 3d
### Godot triangle tile map for 2D game. UV map for tiles
I want to create a triangle-based tile map for my minecraft-like Godot pixel-art game project (triangles are, in my opinion, the best solution for a real-time projection of a planet onto a 2D plane). The main problem is that Godot tile maps have no way to edit the UV map of a tile, so if I make a (64)x(64*0.866) texture of an equilateral triangle, two of its sides will have VERY VISIBLE PIXELS, and the tilemap ends up looking ugly to me.
My solution is: make an equilateral-triangle Polygon2D (with 3 polygons, one for each side of the triangle) and use a UV-mapped square texture (a square can be separated into 4 isosceles triangles, 3 of which are used for the equilateral-triangle texture). But Godot 3.5 doesn't have a way to edit the UV map of a tileset or to build one from a Polygon2D. I tried different texture types (large texture, camera texture, viewport texture, atlas texture) but nothing helped. (I know I can make my own tool that copies the Polygon2D for each tilemap cell, but this sounds like hell... 64 triangles in a chunk means 64 Polygon2Ds for each chunk [goodbye my poor CPU].)
I even tried to turn the square into a triangle by making a 2D VERTEX shader... but it became even uglier than before...
Here it is (((
uniform sampler2D tex : hint_albedo;
uniform float deform_amt : hint_range(-1,1);
uniform float deform_speed : hint_range(0.5,4);
uniform bool animate;
const float PI = 3.1415;
void vertex(){
if (true){
if ((UV.x == 1. && UV.y == 0.) && true)
VERTEX = vec2(0.0,0.866*(VERTEX.y*2.0)-VERTEX.y);
if (UV.x == 0.0 && UV.y == 0.0)
VERTEX = vec2(VERTEX.x/2.0,0.866*(VERTEX.y*2.0)/2.0-VERTEX.y);
}
}
// not my part but good way to deform texture
void fragment() {
float def_amt = deform_amt;
vec4 cl = texture(tex, UV);
if (animate)
def_amt *= sin(TIME * deform_speed);
COLOR = texture(TEXTURE, clamp(UV + cl.r * def_amt / 2.0, vec2(0.0), vec2(1.0)));
}
I need a solution that will make my tile look like the UV-mapped triangle Polygon2D. Maybe it's a fragment shader... but I am new to shaders and have no idea how to edit the UV the way I want...
### 3 axis Coordinate system in processing 3.2.4
I want to make a program that draws a three-axis coordinate system in Processing and takes a point's coordinates A(x,y,z) as input and displays the point in that coordinate system. Can anyone here provide me with some code I could start with?
You can easily draw an axis using the line() function and passing two pairs of x,y,z coordinates (the "from" and "to" points of the line in 3D).
Drawing 3 lines and colouring each axis with a colour (e.g. X,Y,Z as R,G,B) should do:
void drawAxes(float size){
//X - red
stroke(192,0,0);
line(0,0,0,size,0,0);
//Y - green
stroke(0,192,0);
line(0,0,0,0,size,0);
//Z - blue
stroke(0,0,192);
line(0,0,0,0,0,size);
}
If you plan to use multiple coordinate systems, it's worth reading the 2D transformations tutorial. The same concepts apply to 3D as well in terms of isolating and nesting coordinate systems using pushMatrix()/popMatrix() calls:
PVector a = new PVector(100,50,20);
void setup(){
size(400,400,P3D);
strokeWeight(3);
}
void draw(){
background(255);
//draw original coordinate system
drawAxes(100);
//draw from centre and rotate with mouse
translate(width * 0.5, height * 0.5, 0);
rotateX(map(mouseY,0,height,-PI,PI));
rotateY(map(mouseX,0,width,PI,-PI));
//draw centred coordinate system
drawAxes(100);
//isolate coordinate system for A point
pushMatrix();
translate(a.x,a.y,a.z);
//draw translated A point
drawAxes(50);
popMatrix();
}
void drawAxes(float size){
//X - red
stroke(192,0,0);
line(0,0,0,size,0,0);
//Y - green
stroke(0,192,0);
line(0,0,0,0,size,0);
//Z - blue
stroke(0,0,192);
line(0,0,0,0,0,size);
}
You can run a p5.js preview here: https://alpha.editor.p5js.org/embed/HkQoQTAvl
I had a play around with it; it seems to be switching and reversing axes between the first and second given points on a line, god knows why, but that's what you have to do to make it work as expected. So a line on the x axis is line(100, 0, 0, 0, 100, 0); and the y axis is line(0, 100, 0, 0, 0, 0);. So it kind of reverses everything each time between declared points from where you'd expect them to be from the Processing axis diagram.
### Sphere is stretched when translated
When I do this: (translate sphere to center)
translate(width/2, height/2);
noStroke();
fill(255, 0, 0);
lights();
sphere(60);
I get the result I should:
However, when I try to move the sphere away, say to the bottom –
translate(width/2, height - 120);
I get an oval shape:
How can I get the sphere to stay round no matter how I move/translate it?
It's simply stretched because of perspective projection, to give the illusion of depth.
To get it to appear as a circle you'll need to use an Orthographic projection.
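For instance, a minimal sketch (assuming the same sphere-drawing code as in the question; ortho() removes the perspective foreshortening):
void setup(){
  size(400, 400, P3D);
}

void draw(){
  background(255);
  // orthographic projection: parallel projection rays, so the sphere
  // stays round wherever it is placed on screen
  ortho(-width/2, width/2, -height/2, height/2);
  translate(width/2, height - 120);
  noStroke();
  fill(255, 0, 0);
  lights();
  sphere(60);
}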
### Fragment shader does not behave the same between WebGL and Three.js => migration failure
I have a big issue trying to "port" a fragment shader from WebGL to Three.JS. First, I managed to apply a concentric halo effect within a square by using a fragment shader in plain WebGL (a unit test successfully handled by WebGL). To summarize, I use a basic square based on plain JS vertex arrays, along with the following texture coordinates:
vertices = [
    2.0,  2.0, 0.0,
   -2.0,  2.0, 0.0,
    2.0, -2.0, 0.0,
   -2.0, -2.0, 0.0
];
texvert = [
   1.0, 0.0,
   0.0, 0.0,
   1.0, 1.0,
   0.0, 1.0
];
Here is the source code for my fragment shader :
varying vec2 vTextureCoord;
void main(void) {
float distance = distance(vTextureCoord, vec2(0.5, 0.5));
float att = 1.8 * (distance - 0.005) ;
gl_FragColor = vec4(0.0, 0.0, 0.0, att);
}
So far, so good. However, migrating this "raw" shader to Three.JS led to unexpected behavior (the unit test is handled badly by Three.JS). So what's wrong with my use case?
By the way, here is the way I proceeded in Three.JS to create the mesh and material:
var squareGeometry = new THREE.Geometry();
squareGeometry.vertices.push(new THREE.Vector3(2.0, 2.0, 0.0));
squareGeometry.vertices.push(new THREE.Vector3(-2.0, 2.0, 0.0));
squareGeometry.vertices.push(new THREE.Vector3(-2.0, -2.0, 0.0));
squareGeometry.vertices.push(new THREE.Vector3(2.0, -2.0, 0.0));
squareGeometry.faces.push(new THREE.Face4(0, 1, 2, 3));
var myMaterial = new THREE.ShaderMaterial({ // constructor line missing from the original snippet, reconstructed here
    attributes: attributes,
    side: THREE.DoubleSide
});
myMaterial.transparent = true;
----------------------
1. Your three.js example is throwing console errors. You need to fix those. 2. You should update to the current version of three.js. 3. The current version of three.js does not support quads. Use PlaneGeometry instead.
– WestLangley, 2 days ago
The example has been updated: no more console errors (apart from one log: THREE.WebGLRenderer 66).
Three.JS has been upgraded to the current version => release 66.
new THREE.PlaneGeometry(20,20) has been used according to the expert advice.
But the result remains the same: a black screen, instead of the expected concentric HALO effect. I keep wondering why the 'att' parameter diverges between the WebGL and Three.JS shaders...
### Need help for proper rectilinear projection of equirectangular panoramic image
With the algorithm below, when the projection plane is tangent to the equator (the center line of the equirectangular image), the projected image looks rectilinear.
But when the projection plane is tilted (py0 != panorama.height/2), lines are warped.
The two last "lines" in the algorithm below need to be "rectified" in order to adjust px and/or py when the center line of the destination plane is not at the same level as the center line of the equirectangular image.
// u,v,w :
// Normalized 3D coordinates of the destination pixel
// elevation, azimuth:
// Angles between the origin (sphere center) and the destination pixel
// px0, py0 :
// 2D coordinates in the equirectangular image for the
// the destination plane center (long*scale,lat*scale)
// px, py:
// 2D coordinates of the source pixel in the equirectangular image
// (long*scale,lat*scale)
angularStep=2*PI/panorama.width;
elevation=asin(v/sqrt(u*u+v*v+w*w));
azimuth=-PI/2+atan2(w,u);
px=px0+azimuth/angularStep;
py=py0+elevation/angularStep;
I can compute the intersection p between the normal of each destination pixel and the sphere, then convert cartesian coordinates to long/lat using available C code.
But I know there's a simpler and much less time-consuming method, involving adjusting the source pixel coordinates in the equirectangular image (px,py), knowing the longitude/latitude (px0,py0) at which the center of the projection plane intersects the "sphere".
I managed to get this to work using the formula for gnomonic projection in a webgl shader http://mathworld.wolfram.com/GnomonicProjection.html
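Concretely, with $p=\sqrt{x^2+y^2}$ and $c=\tan^{-1}p$ (the shader below generalizes this to $c=\tan^{-1}(p/\mathrm{angleOfView})$ to control the field of view), the inverse gnomonic mapping is
$$\phi=\sin^{-1}\!\left(\cos c\,\sin\phi_1+\frac{y\,\sin c\,\cos\phi_1}{p}\right),\qquad \lambda=\lambda_0+\tan^{-1}\!\left(\frac{x\,\sin c}{p\,\cos\phi_1\,\cos c-y\,\sin\phi_1\,\sin c}\right).$$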
float angleOfView;
float phi1;
float lambda0; //centre of output projection
float x = PI2*(vTextureCoord.s - 0.5); //input texture coordinates
float y = PI2*(vTextureCoord.t - 0.5);
float p = sqrt(x*x + y*y);
float c = atan(p, angleOfView);
float phi = asin( cos(c)*sin(phi1) + y*sin(c)*cos(phi1)/p );
float lambda = lambda0 + atan( x*sin(c), (p*cos(phi1)*cos(c) - y*sin(phi1)*sin(c)));
vec2 tc = vec2(lambda/(PI*2.0) + 0.5, (phi/PI) + 0.5); //reprojected texture coordinates
vec4 texSample = texture2D(tEqui, tc); //sample using new coordinates | 2023-03-29 10:00:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2016773521900177, "perplexity": 6520.570296976697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948965.80/warc/CC-MAIN-20230329085436-20230329115436-00743.warc.gz"} |
http://www.jiskha.com/display.cgi?id=1364399894 | # Geometry
The value of y that minimizes the sum of the two distances from (3,5) to (1,y) and from (1,y) to (4,9) can be written as \frac{a}{b} where a and b are coprime positive integers. Find a + b
• Geometry -
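One way to see it: reflect (3,5) across the line x = 1 to get (-1,5); the sum of the two distances is minimized when (1,y) lies on the straight segment from (-1,5) to (4,9), whose slope is \frac{9-5}{4-(-1)} = \frac{4}{5}. Hence y = 5 + 2 \cdot \frac{4}{5} = \frac{33}{5}, so a + b = 33 + 5 = 38.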
38 | 2017-06-25 22:52:05 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9635511040687561, "perplexity": 645.0492072438444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320593.91/warc/CC-MAIN-20170625221343-20170626001343-00344.warc.gz"} |
https://www.anarchi.cc/archiprint/ | Archiprint is AnArchi’s architectural journal, edited by our Archiprint committee. The journal contains articles from both theorists and practitioners in the field of architecture. Each issue addresses a specific theme and aims to provide actualities and food for thought.
The latest edition, De Stijl, can be collected at AnArchi, floor 2 of Vertigo, for as long as stock lasts.
#### All editions can be read here:
De Stijl
No. 10 \\ Volume 6 \ Issue 1
The Architecture of Design
No. 9 \\ Volume 5 \ Issue 1
The Ideal Profession
No. 8 \\ Volume 4 \ Issue 2
Influence & Inspiration
No. 7 \\ Volume 4 \ Issue 1
Creating & Experiencing Identity
No. 6 \\ Volume 3 \ Issue 2
Movement in Architecture
No. 5 \\ Volume 3 \ Issue 1
Show us what you have got!
No. 4 \\ Volume 2 \ Issue 2
Because it is always a competition!
No. 3 \\ Volume 2 \ Issue 1
The Research Issue
No. 2 \\ Volume 1 \ Issue 2
Archiprint
No. 1 \\ Volume 1 \ Issue 1 | 2017-06-23 06:54:49 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000091791152954, "perplexity": 9191.29283649878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320023.23/warc/CC-MAIN-20170623063716-20170623083716-00229.warc.gz"} |
http://math.stackexchange.com/questions/73202/translation-of-nabla-modality-with-box-and-diamond-modalities/73209 | # Translation of nabla modality with box and diamond modalities
I got an exercise from my teacher to translate formulas of modal logic with modal operator $\nabla$ into formulas with operators $\Box$ and $\Diamond$.
If the set of possible worlds is $X$, the accessibility relation is $R$ and the semantics of $\nabla$ are as follows:
$x \models \nabla\{\phi_1,...,\phi_n \}$ iff $(\forall y \in X:xRy \to (\exists\phi_k: y \models \phi_k)) \wedge (\forall \phi_k\exists y\in X:xRy \wedge y \models \phi_k)$,
what is the corresponding formula with the same meaning without the $\nabla$ operator?
I think that the result is $\nabla\{\phi_1,\dots,\phi_n\} = \Box(\phi_1\lor\dots\lor\phi_n)\wedge(\Diamond\phi_1\wedge\dots\wedge\Diamond\phi_n)$
Is that correct? Thank you.
Looks okay to me. – Henning Makholm Oct 17 '11 at 0:47
I haven't done modal logic in a while, but I believe your translation is correct. The definition you were provided states that
$$x \vDash \nabla \Phi \equiv (\forall y \in W . R(x,y) \Rightarrow \exists \phi \in \Phi . y \vDash \phi) \wedge (\forall \phi \in \Phi . \exists y \in W . R(x,y) \wedge y \vDash \phi)$$
To see that your translation is correct, consider the following equivalence:
$$x \vDash \bigwedge \{ \Diamond \phi : \phi \in \Phi \}\equiv \forall \phi \in \Phi. \exists y \in W. R(x,y) \wedge y \vDash \phi$$
This gives you the second conjunct of the definition of the nabla operator. Now you merely need to secure the first conjunct. Again, consider the equivalence $$x \vDash \Box \left ( \bigvee \Phi \right ) \equiv \forall y \in W. R(x,y) \Rightarrow y \vDash \bigvee \Phi$$
In order for the consequent of the implication of the right-hand side of the bi-implication to be true, we must have $y \vDash \phi$ for some $\phi \in \Phi$ and all $y \in W$ such that $R(x,y)$. This gives the first conjunct of the definition of the nabla operator. Hence,
$$x \vDash \nabla \Phi \equiv x \vDash \Box \left ( \bigvee \Phi \right ) \wedge \bigwedge \{ \Diamond \phi : \phi \in \Phi\}$$
Unless I'm missing something, the proof should follow straightforwardly from expanding the definitions on the right-hand side of your translation.
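As a quick sanity check, note that for a singleton $\Phi = \{\phi\}$ the translation collapses to $x \vDash \nabla \{\phi\} \equiv x \vDash \Box \phi \wedge \Diamond \phi$: every successor satisfies $\phi$, and $\phi$ holds at at least one successor.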
Thank you very much for your answer. If I understand it right, shouldn't there be "we must have y⊨ϕ for some ϕ∈\Phi" instead of "y∈\Phi"? – Matej Oct 17 '11 at 11:13
Yeah, thanks for catching that. – danportin Oct 17 '11 at 22:40 | 2014-09-24 02:44:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.950247585773468, "perplexity": 168.82484387011442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657141651.17/warc/CC-MAIN-20140914011221-00175-ip-10-234-18-248.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/816281/why-there-are-no-constant-functions-on-mathbbr-with-compact-support | # Why there are no constant functions on $\mathbb{R}$ with compact support?
To some, the question might seem trivial, but the concept is new to me and I have been wondering: why are there no constant functions on $\mathbb{R}$ with compact support?
Following wiki:
Def.1) Functions with compact support on a topological space $$X$$ are those whose support is a compact subset of $$X$$.
Def.2) $$\operatorname{supp}f=:\overline{\{x\in X\;,f(x)\neq 0\}}$$
What if we take such function $$f:\mathbb{R}\to\mathbb{R}$$ defined as $$f(x)=0\;,\forall x\in\mathbb{R}$$ then $$\operatorname{supp}f=\overline{\{x\in\mathbb{R}\;,f(x)\neq 0\}}=\overline{\emptyset}=\emptyset$$ but $$\emptyset$$ is a compact subset of $$\mathbb{R}$$ I think....
Could someone enlighten me? Thank you.
• Where were you told that no constant functions on $\mathbb{R}$ have compact support? – Hayden May 31 '14 at 18:28
• well, anyway that's the only one there is, $f \equiv 0$ – mm-aops May 31 '14 at 18:30
• @Hayden: here for example www-personal.umich.edu/~wangzuoq/437W13/Notes/Lec%2030.pdf (12th line) – user124471 May 31 '14 at 18:32
• Well, I'd beg to differ with the claim, for precisely the example you gave. But as mm-aops pointed out, it's the only example. – Hayden May 31 '14 at 18:35
• the statement is very clear : ''there are NO functions'', (such functions don't exists). The real question here is if my counter-example is really correct (i.e that I followed all definitions correctly and so on). – user124471 May 31 '14 at 18:37
## 1 Answer
The author in the article you link to (www-personal.umich.edu/~wangzuoq/437W13/Notes/Lec%2030.pdf) is just being sloppy. He really should have said that there are no non-zero constant functions with compact support on $\mathbb R$ (and thus $H_c^0(\mathbb R) = 0$).
So yes, your example is correct. The empty set is (tautologically) compact.
• Hi Fredrik. Thank you. Unfortunately it is not the first time I have seen it....Exactly the same statement I found in Bott, Tu, ''Differential forms in algebraic topology''. Cheers – user124471 May 31 '14 at 18:44
• @user124471 Yep, I've seen it as well. In fact, I was just reading in the book by Bott & Tu a week ago, and had to think about the exact same question as you asked. – Fredrik Meyer May 31 '14 at 18:47
• @user124471 I was wondering the same thing! Now I know. Can you include Bott Tu in your question? – user636532 May 29 '19 at 11:27 | 2020-01-17 19:30:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7760120034217834, "perplexity": 391.05891677772047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250590107.3/warc/CC-MAIN-20200117180950-20200117204950-00209.warc.gz"} |
https://moorishwanderer.wordpress.com/tag/economic/ | # The Moorish Wanderer
## Inflation and Household Consumption Distribution
Posted in Dismal Economics, Flash News, Moroccan Politics & Economics, Morocco by Zouhair ABH on December 5, 2012
In the fall of 1972 President Nixon announced that the rate of increase of inflation was decreasing. This was the first time a sitting president used the third derivative to advance his case for re-election.
Hugo Rossi
The premise of government subsidies on fuel and other goods is to keep inflation low – more specifically, to keep prices low. This policy was sustainable as long as commodity prices were low enough, and they were, up to 2008, at levels manageable enough to avoid large price increases and preserve budgetary balances.
Junior minister Najib Boulif provided a figure of 5% inflation rate if the subsidies provided by the Compensation Fund were lifted overnight – a figure I still grapple with (how did he get 5%?) as it is far above the present level of inflation, as captured by the Consumer Price Index.
Since the new CPI designed by the Moroccan statistics bureau (HCP) came into use in 2006 (effectively 2008-2009), inflation has established itself around 1.9% on average – meaning weighted prices for some 387 products have increased by almost 2% each year. And there lies a fact many households might find hard to understand: their perceived inflation seems higher. "I mean, look at the price of vegetables and bread!" And they are right. Prices of vegetables have increased 7.5% on average, and bread and other wheat-based goods 3.1% each year – and both have been more volatile than other components of the CPI aggregate. However, as mentioned before, the method weights these goods into an aggregate, so households do not all experience the same CPI – especially at the top and bottom of the distribution.
This is just simple math; the average ICV index is $\mathbb{E}(ICV)_j=\sum_{p=1}^{385}\mathbb{E}(W_{p,j})Q_{p,j}$ where $\mathbb{E}(W_{p,j})$ is the average weight of good $p$ across all cities.
On the other hand, $\mathbb{E}(ICV)_{d_i}=\sum_{p=1}^{385}W_{p,d_i}Q_p$ provides different weights for each decile ($d_i$), and these tell a different story. How come the national average weighting is so far off that of perhaps the majority of Moroccan households? It has a lot to do with consumption distribution: the top 10% concentrate a third of household final consumption, so their low marginal propensity to consume pulls the average weight for food consumption down, below, say, the median food share per household: the weighted average of food-related consumption is 40.6% (close enough to the 39% to 41% the ICV allocates to this consumption category), which puts those with higher consumption levels relative to their income at a disadvantage. The median food share, on the other hand, is 47%, which would indeed put the ICV a full percentage point above the present index.
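A stylized two-good illustration (hypothetical numbers) makes the mechanism concrete: if food prices rise 5% a year and everything else 1%, a household with the median 47% food share experiences $0.47 \times 5\% + 0.53 \times 1\% \approx 2.9\%$ inflation, while one with a 33% food share experiences $0.33 \times 5\% + 0.67 \times 1\% \approx 2.3\%$ – the same prices, but a different experienced CPI.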
Poorer households' CPI is more sensitive than the official ICV-CPI suggests
Poorer households tend to weight food and related items a lot more than affluent households, and thus tend to feel the sting of inflation more acutely: these households spend a larger fraction of their income on food (almost 44% more than the top decile) and are thus subject to higher inflation.
The graph opposite shows Morocco's CPI; the ICV tends to underestimate price levels for the poorest households. No news there of course, but a simple analysis shows poorer households have an "ICV effect" 62% higher than the top income earners. So this is not just a matter of weighting averages per product class: the bottom 10% of households spend 44% more on food than the top 10%, but they experience a 62% higher inflation factor.
In terms of inflation levels, the "poor inflation" establishes itself around 2.18% – versus a "rich inflation" of 1.4%, a difference of 18% – with a social loss weighting skewed toward the poorer: in essence, 63% of the resulting inflation is shouldered by the bottom 10%. (One could finesse the analysis a bit by including the remaining deciles, but the weight placed on households below the median would be larger still.)
So not only does the Compensation Fund benefit mainly those who do not need it, it even fails to protect these populations from the effects of inflation. It also shows that government transfers to businesses – not households directly – only needlessly distort market prices, to the tune of a compensation cost worth 6% of GDP.
## The Worst of Trickle-down, or Zombie Keynesianism?
Posted in Dismal Economics, Flash News, Moroccan Politics & Economics, Morocco, Read & Heard by Zouhair ABH on April 19, 2012
There is enough evidence to state that, more than ever, big government is alive and kicking in Morocco, and not in a nice way; it broke with a 30-year trend in 2010, meaning that some additional 5.8Bn have been spent above the 50-year trend, adding to the 20Bn-worth exponential break that started 3 years ago. 5Bn might not be a lot relative to the Budget (1.67%), but it does account for 4% of government expenditure in real terms, and so do the matching resources accounted for in the Budget as a whole.
So here we are with a government that has not broken with the fateful decision to increase government expenditure dramatically in 2010; this is, quite simply, Zombie Keynesianism: the government puts on (some) welfare programs, increases recruitment 40%, and comes up with an effective 21Bn expenditure package no government had prepared for but which it finds itself actually spending. Most importantly however, the Compensation Fund takes a large bite out of government expenditure – the World Bank Open Data defines it as:
“General government final consumption expenditure (formerly general government consumption) includes all government current expenditures for purchases of goods and services (including compensation of employees). It also includes most expenditures on national defence and security, but excludes government military expenditures that are part of government capital formation. Data are in current local currency.”
HP-detrended aggregates. Government expenditure at its highest level away from the 50-year trend since 1976
And considering the available data on the subject, the negative effects of the present course of action are showing in the short run as well as the long run: depending on how the economy fares in 2013, the combined effects of the generous increases in the public service payroll and the Compensation Fund will deteriorate an already compromised budget balance, and later on the government will have to increase taxes, or cut spending, or both.
It seems the government is at this moment trying for some shadow stimulus package, and it shows: the latest Treasury monthly survey points out that the structure of the Government Budget has been markedly altered compared to 2011: investment represents only 14.6% compared to 23.2% in 2011 (and there goes the government's boasting about the 188Mds committed to investment), while payroll and subsidies increased their contributions from 36.7% and 12.1% to 41.2% and 18.2%, respectively. And contrary to the government's claim, the Compensation Fund does not benefit the middle class as much as a few wealthy households.
General budget expenditure reached MAD 68.3Bn at the end of March 2012, a slight 0.8% increase over its level at the end of March 2011, explained by a 17.6% rise in current expenditure combined with falls in investment and in budgeted debt charges of 34.2% and 11.1%,
respectively.
So basically the government has put a lot of money into stabilizing prices – while at the same time transferring generous sums back to the privileged few – and into recruiting many more civil servants – who might not be needed, or might not have what it takes. The result is indeed a stimulus package, and it might even be working by providing a boost to GDP growth, but it will not last long, and the benefits of such an overkill are not that obvious.
           |    σ    |  σj/σy  | Corr(y,j)
-----------+---------+---------+----------
Y_GDP      | 0.08030 | 1       | 1
Con        | 0.07013 | 0.87339 | 0.82150
Investment | 0.24127 | 3.00463 | 0.83690
Government | 0.22035 | 2.74415 | 0.49970
$\sigma =\left (\sum_{i=1955}^{2011}\left ( \mu -x_{i} \right )^{2} \right )^{1/2}\\ \rho _{x,y}=\frac{\sigma_{xy}}{\sigma_{y}\sigma_{x}}$
The table above shows some evidence that government expenditure does not influence GDP the way other aggregates do, and that its effects can indeed be random: government expenditure is almost as volatile as the most volatile aggregate in the economy (investment), yet it is also the least correlated with GDP. The only way the generous increase in government expenditure can pick up growth is through the subsidy to household consumption. In an ideal world, the Finance Ministry would provide us with a technical note explaining and illustrating the model it uses to forecast growth and, more importantly, the contribution to growth per aggregate. One thing is sure though: the present increase in expenditure doesn't help, and the boomerang effect will be painful.
## Business Cycles – A Story of Our Economy
Posted in Dismal Economics, Moroccan Politics & Economics, Morocco, Read & Heard by Zouhair ABH on February 18, 2012
An interesting discussion with my students (yes, I have recently been assigned TA duties, so kudos to me!) about debt, pensions and demographic growth – more specifically a simplified version of the Ramsey-Cass-Koopmans model – prompted me to think about why none of our economics-oriented institutions, namely Bank Al Maghrib, HCP or MINEFI, engage in more meaningful activities and produce high-quality papers more often. It seems to me one paper on core inflation, one on business cycles (up to 2007) and a couple of unfortunately very brief reports of macro-econometric surveys are not enough to justify the total human resources spending of 2Bn shared by HCP and MINEFI alone. Crack some whips, and lemme read something substantial to bitch about!
Business Cycles per 3 filtering processes - HP seems to be good compromise for good smoothing
But I digress; back to the original subject. I posted a very raw estimate of business cycles – one that basically estimates long-run growth by means of a straight line, a simplistic assumption with no obvious advantages; as I have (finally) managed to unlock and master an interesting little device in Stata, I can now compute cycles with the Hodrick-Prescott filter procedure. The dataset was extracted from the UPenn World Table, augmented with 2009-2011 log GDP from the World Bank Open Data website.
The graph plots three possible filtering procedures for cycles; the final result tends to favour HP, since it does away with much of the high-volatility segments that linear smoothing, in contrast, tends to exacerbate. From then on, a clear-cut graph depicting our economy's history over the last half-century can provide informative insight about past events and, more importantly, a prediction of what is yet to come.
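For reference, the HP trend $\tau_t$ is the series that solves $\min_{\tau}\sum_{t=1}^{T}(y_t-\tau_t)^2+\lambda\sum_{t=2}^{T-1}\left[(\tau_{t+1}-\tau_t)-(\tau_t-\tau_{t-1})\right]^2$, with the smoothing parameter conventionally set to $\lambda=1600$ for quarterly data and $\lambda=6.25$ for annual series like the one used here.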
Let’s stick with H-P filter; what does it tell us? First off, that the party held since the early 2000s is bound to stop, or at least to slow down somehow; in any case, the next dozen of quarters would tell us exactly what it is all about. It is therefore up to a sound policy design to stabilize output, rather than force an artificial short-term and short-lived expansion.
Why does it seem to differ from earlier computations? First off, the initial data is slightly different; the U-Penn PWT table tends to use normalized data, better suited for international comparisons, and includes some specific PPP-related computations like the Geary-Khamis method. Furthermore, the selected trend dates back to 1955 (instead of 1960), which means the average trend growth rate is around 6.8% – which might also explain why the PJD was so keen to promote a 7% annual growth rate, although the retained growth is real, PPP-adjusted and per capita. Further computations still bring the (ideal) potential growth rate to around 5% (accounting for an average 2% annual demographic growth), hence further vindicating my claims that: a/ the ideal GDP growth rate is 5% and b/ stabilized growth brings a lot more benefits in terms of growth gains, as higher rates are usually linked to high volatility.
(one last note perhaps: data is annual, and that means I might have missed some short-term cycles within)
are we heading to a 1980s redux?
Turbulent independence: 1955-1962
For such a short period of time, fluctuations sure have been important; a newly independent nation like Morocco had to deal with hostile conditions due to a drain of foreign capital – which is why pre-existing legislation was heavily upgraded with the establishment of the Office des Changes in 1958. On the other hand, massive transfers fell into the lap of the successive new governments, which also had the task of building a whole set of legislation from scratch (including labour, currency and budget legislation) as well as engaging in long-haul public expenditure – for lack of any sizeable private sector initiative.
Boom & Bust: 1962-1973
A very short shoot-up was observed during the period considered, although it seems to be due to the particularly high government expenditure observed in 1962 and 1963; that shock was rather short-lived, and it failed to prevent a deep recession that extended all the way to the early 1970s.
[…] How did it come to this? "You cannot explain these events while leaving out the history of Morocco at the time," grumbles Simon Lévy. Quite right. Let us start with the social context. Several strikes marked the end of 1964. La Promotion Nationale, defined by the authorities in 196[0] as an instrument of economic development, was a complete fiasco. Meat and sugar prices climbed by more than 40%. Casablanca counted 600,000 unemployed and suffered an avalanche-like exodus: 36,000 people were thrown into the city every year. A third of its inhabitants had less than 40 dirhams a month to survive on.
The big-state approach embodied in programs such as La Promotion Nationale ultimately failed for various reasons, leading to the observed trough for most of the mid-1960s.
Euphoric Borrow-and-Spend: 1973-1981
The commodity price shoot-up in 1973 provided Morocco with a lot of resources to spend: phosphate prices tripled between 1973 and 1974, from $13/ton to $42/ton, and then reached $68/ton in 1975. Furthermore, large transfers in government expenditure during the mid-1970s, in the aftermath of the Madrid Treaty, boosted the economy to very high growth rates, 7.5% on average between 1974 and 1977. In a particular fashion, these years were boosted mainly by government expenditure (including military spending) and receipts from phosphate prices;
Boom & Bust, Redux: 1981-1991
Following the 1983 Structural Adjustment Program (SAP), the Moroccan economy was on a bumpy road, save perhaps for the immediate impact of a drastic reduction in food subsidies:
In the early years of adjustment, Morocco made considerable progress. Between 1983 and 1988, Bank programs focused primarily on sectoral reform, while the IMF took the lead in stabilization efforts. In addition, the Bank addressed long-term structural change through project lending, economic and sector work (ESW), and dialogue with the government. As reducing the deficit was very urgent, it was decided that structural fiscal problems would be addressed at a later stage. Four adjustment loans (in trade, agriculture, education, and public enterprises) were approved during this time. With the exception of education sector reform, which encountered political resistance, and irrigation, which faced shortages of public funds, all were successful.
During this period, overall GDP rose by almost 5 percent a year, manufacturing exports grew rapidly, the deficit was halved, and the balance of payments current account reached a surplus.
Depressing 1990s:
The conjugated effects of the SAP aftermath and a series of droughts ground the economy to a halt, as reports from the World Bank and the OECD pointed out at the time; average growth for most of the 1990s was null or slightly negative; between 1992 and 1997, average growth was about 1.45%. Indeed:
Toward the end of the 1980s, the Bank was excessively bullish it its assessments of Morocco’s economic future. Progress in public enterprise and financial sector reforms was considered excellent. Two adjustment loans approved during this period carried relatively light conditions for disbursement because they were considered a continuation of a smoothly running reform program. The Bank’s ESW, which was of very high quality, was not sufficiently used and disseminated.
The Bank’s overoptimism continued through 1993, despite the fact that there had been hardly any economic growth since 1990. Growth slowed from almost 5 percent a year in the second half of the 1980s to 2 percent in the early 1990s.
Booming 2000s and aftermath: despite what one might think, the last decade was comparatively the best expansion cycle ever observed: government spending and debt were steadily halved, proceeds from privatization were injected back into the economy, and the private sector benefited from strong foreign demand for exports. By 2007, average growth was about 5% and set to recover the accrued losses of the 1990s.
The cycle was somewhat broken in 2009 – a halt rather than a breakdown, since the trend has not, as in typical past occurrences, gone below the potential output line. But based on past occurrences, it seems we are about to witness the end of the longest expansionary cycle of the last half-century and, with it, the end of the only recipe for development past governments have been so keen to promote: growth. Insofar as growth was 'sustainable', the proceeds were high enough to allow for a certain (albeit very unjust) distribution of wealth. Slow growth would invariably mean less for the many, hence increasing the risk of genuine social resentment – and considering the current set of events, no one in government or elsewhere is keen on that scenario happening.
## [Sneak Peek] 2012 Budget Bill
Posted in Dismal Economics, Flash News, Moroccan Politics & Economics, Morocco, Tiny bit of Politics by Zouhair ABH on November 4, 2011
In early October 2011, the budget bill was (finally) delivered to the representatives at parliament house to debate, amend and vote on. Not surprisingly, the budget is so consensual that the next government will most certainly not reconvene to introduce a rectification bill next session. As for the first draft that was hurriedly withdrawn, we will never know its content (perhaps we will, but later. Much later).
Figures out of a 325Bn budget
In general terms, this is not a bad bill, in the sense that it does not depart from previous, more or less successful, attempts at reining in the budget deficit to the region of 3% of GDP; the bill brings a moderate 20Bn net borrowing requirement package, in line with previous bills.
But behind seemingly "business as usual" figures hides the harsh truth of financial mismanagement; we can expect the debt service to go up in the next couple of years from the current 42Bn to match the 400Bn stock of debt, courtesy of that courageous decision to avoid social unrest with the most simplistic policy a government can come up with, i.e. increasing subsidies on strategic goods – the Compensation Fund was indeed the main reason why the 2011 deficit worsened from 13Bn to almost 34Bn.
We can also expect the pay-wage bill to rise at a higher trend, thanks to the additional 25,000 new civil service positions provisioned for in this draft bill. Ironically, a third of them go to the Interior Ministry – in fairness, about the same number of positions was created for the Education department as well. Whatever the expected benefits of having extra local civil servants – or, more importantly, more teachers in classrooms – the drain of the civil service pay-wage will keep increasing, and there is a simple explanation for that: whenever a group of productive civil servants (teachers, police force members, local functionaries, doctors, etc.) is recruited, another batch of bureaucratic staff is recruited as well.
Humphrey Appleby's shopping list - Budget Bill 2012
For every dirham collected in government receipts, 38 cents go to paying civil servants alone. Even more concerning is how one can easily equate government receipts from income taxes (28.5Bn) with the interest paid on debt (20.2Bn), and these numbers have held for some time now. The trouble is, it is not as if the government's hands are tied and it cannot raise more revenue from taxpayers (corporations or otherwise).
The fiscal exonerations report points to a sustained yearly increase of 7.63%, thus breaking through the symbolic ceiling of 30Bn in tax deductions, exonerations, loopholes and credits that would more than make up for the deficit. Real-estate developers still make up the bulk of tax-deduction recipients, to the tune of almost 5.5Bn (a 22% increase from the 4.4Bn offered last year), while other sectors that would benefit from these deductions and generate some income, like the tourism, automotive and chemical industries, do not enjoy a third of what RE tycoons get in terms of VAT and corporate tax breaks.
It seems the spirit embodied in these deductions is to slow down the growth rate of fiscal receipts (currently at 9.8% against a projected 5% real GDP growth), and that is where a dangerous contradiction lies: the 2012 exonerations reverse the VAT deduction from 13.7Bn in 2011 to 13.2Bn in 2012, even though domestic consumption still contributes around 1 percentage point to GDP growth. If the tax deductions were really geared toward sustaining growth, then investment and exports are the ones needing them, and real-estate development does not account for its 5.5Bn compared to, for instance, the 3Bn for exports.
On a lighter note: Moroccan taxpayers are the proud recipients of 10,000 in receipts from trials and experimental farms managed by the Agriculture ministry; and the PJD caucus will have a field day with the alcohol tax receipts, raising them from the current 1.1Bn, i.e. 0.3% of the total Budget, to whatever level both satisfies their moral crusade (sorry, Jihad) against that devilish beverage and, at the same time, destroys a domestic industry and compels consumers to choose contraband or imported products.
## 600,000 Beamte Und Ein Befehl
Let us consider one particular aspect of the Finance Ministry's now-stated policy, i.e. its commitment that "all budget entities have been requested to economize 10 percent of their budget allocations for some non-essential current expenditure items". Now, either ministerial departments will cut 10% of their non-essential expenses – in which case total savings will amount, at best, to a few hundred million – or all departmental bodies will have to cut 10% of their total current expenditure, with cuts justified as such. The latter scenario means a package of MAD 22Bn in cuts uniformly distributed across ministries, a bad policy, government-wise, since the largest (and most important) departments will be hit hardest: Education, Health and Law & Order. 600,000 public servants are therefore held at gunpoint by that one order: "cut 10%".
The caring government
The cuts are scheduled to target current expenditure, which means the civil service pay-wage, purchases of (non-investment) hardware, and other current expenses like electricity bills and rent. But let us not be deluded on that point: current expenses are mainly made up of pay-wage, and depending on the ministry these can make up to 94% of total current expenses (Education); each department has its own ratio, and a flat cut across departments will inevitably cause great harm to those employing large staffs. Whether one believes in small government in Morocco or not, there will be few dissenting voices regarding the reduction of teachers and police force members just to achieve MAD 4.1Bn in savings.
Of course, clerical and non-essential staff could be laid off, though this means a renewed struggle with "ghost civil servants", a fight long lost even before it had begun. So this is not a cost-killing operation, but a genuine austerity plan: the blindness of the plan, the size of the cuts and the timing all point to a difficult 2012 Budget bill and years of near-stagnation ahead. But let us first consider the impact of that 10% cut on human resources: 600,000 civil servants (from local and central services) will no doubt see their pay frozen, or cut. And contrary to the Intilaka program enacted in 2006, 39,000 departures are not going to be enough to balance the books (as a matter of fact, they make up only 3/4 of the expected savings on wages).
Now, a 10% cut in the 6 largest departments (Education, Interior, Health, Justice, Finance and Equipment, which account for 89.6% of the total workforce) means that some 51,800 positions will have to be economized one way or the other – unless each department manages to strike a deal with the unions to cut wages by some MAD 7,000 per annum per civil servant, which would save some MAD 4.2Bn in salaries, i.e. two-thirds of the targeted costs. But then again, this modus operandi assumes unions and government will be reasonable in their negotiations to freeze pay over the next couple of years, and that is very unlikely, given the sad history of horse-trading between the two parties. The other alternative is to exhort civil servants to retire well before the 65-year age limit, thus minimizing payroll. The remaining third alternative might not be considered unless things get very desperate: in short, making people redundant with limited or no pay.
Early retirement is a good policy: regulations specify that any civil servant willing to retire early will receive a reduced part of their wage until they reach 65. Now, considering that a large chunk of the workforce is 50-ish years old, the 15-year gap can be used to reach the average 7,000 annual pay-cut target. But the point is, these retired schoolteachers, policemen and attorneys will no longer be working for the public sector, thus inflicting great damage upon a public service that could do without it. What is worse, these cuts/early retirements cannot, yet again, be uniformly distributed. The trouble is, large-staffed departments will bear the brunt of the cuts not because they are the ones with the largest bureaucracies, but because of the nature of their operations: you need to take on more teachers to keep teacher/pupil ratios low, more healthcare officers and workers to reach similar ratios, more policemen to ensure neighbourhoods are quiet... the error of an indiscriminate budget cut is that it overlooks such details, and ends up hurting essential, front-line services.
Quadruple whammy (sectors make up for 3/4 of job losses)
Could things be done some other way? It seems not. Cutting wages accounts for a third of the overall planned cuts. Other than that, departments will need to cancel orders for hardware. There is no way only secondary expenses will be cut; eventually, essential services will be in the ministry's sights. Under the assumption of a uniform distribution of total pay-wage per department, teachers and Education staff, for instance, receive MAD 11,000 a month – not an unreasonable mean, considering the ageing structure of the teaching corps. Healthcare workers are slightly better off, with an average monthly wage of MAD 12,000. Staff from the Interior, finally, receive 6,000 monthly on average. Hardly high-income earners indeed.
Now, in his briefing before the Cabinet, minister Mezouar:
a mis l’accent sur la nécessité de préserver les acquis relatifs aux équilibres macroéconomiques et de garantir les conditions de poursuite de l’élan de développement que connaît le pays.
and that means the stated policy of his budget cut is not to harm public services. That also means he needs to be more specific about the 10% cut, and exempt departments from what would otherwise be a sure blow to the standard of their services to Moroccan users. If the ministry is serious about putting together a stimulus package – especially in these trying economic times – then it should consider carefully the budget cuts it is planning, for fear they might take the country into recession rather than stabilize macroeconomic balances.
Budget cuts are not pure evil: it is a given that government debt is too large, and its foreign-held, short-term maturity weighs a great deal on the dwindling foreign reserves. Cutting expenses -as well as raising receipts- is the way to go. But instead of targeting blindly departments, the government needs to put into practise its pledge to engage in “structural reforms”, i.e. to end up exemption and fiscal niches that benefit only to the wealthiest.
The VAT and Income Tax reforms need, in effect, to be seriously considered in that spirit. As for expenses, the Audit Court has pointed out numerous dysfunctional items within the public sector; addressing them would save capital and expenses over the years. Then, dysfunctional payroll regulations can be addressed as well.
But I digress. The minister obviously knows his job better than I do. | 2020-07-05 20:46:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3689010739326477, "perplexity": 4649.923418382576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655888561.21/warc/CC-MAIN-20200705184325-20200705214325-00255.warc.gz"} |
https://www.imrpress.com/journal/CEOG/48/1/10.31083/j.ceog.2021.01.2209/htm | NULL
Open Access Original Research
Is visual inspection with acetic acid (VIA) a useful method of finding pre-invasive cervical cancer?
1 Gynecology-Oncology, Cancer Research Center, Shahid Beheshti University of Medical Sciences, 1983963113 Tehran, Iran
2 Preventative Gynecology Research Center, Shahid Beheshti University of Medical Sciences, 1983963113 Tehran, Iran
3 Department of Developmental Biology, Science and Research University of Tehran, 1417466191 Tehran, Iran
4 Department of Biostatistics, Tarbiat Modares University, 14117-13116 Tehran, Iran
5 Obstetrics & Gynecology, Arak University of Medical Sciences, 38156879 Arak, Iran
6 Obstetrician & Gynecologist, Imam Hossein Medical Center, Shahid Beheshti University of Medical Sciences, 1983963113 Tehran, Iran
7 Developmental biology, Science and Research University of Tehran, 1417466191 Tehran, Iran
Clin. Exp. Obstet. Gynecol. 2021, 48(1), 128–131; https://doi.org/10.31083/j.ceog.2021.01.2209
Submitted: 4 July 2020 | Revised: 15 October 2020 | Accepted: 16 November 2020 | Published: 15 February 2021
Abstract
Purpose of investigation: The prospects for effective cytological screening in the majority of developing countries are limited. This study evaluated visual inspection with acetic acid (VIA) for the diagnosis of CIN 2-3 in case findings. Materials and Methods: In this prospective study, 1437 asymptomatic women were screened using VIA followed by colposcopy-biopsy of all cases. The predictive efficacy of VIA was determined based on pathological confirmation of CIN 2-3 on colposcopy-directed biopsy. Results: In 388 (27%) out of 1437 cases, the VIA was positive. In 31 (2.1%) cases, CIN 2-3 was confirmed on cervical biopsy. The sensitivity, specificity and negative predictive value (NPV) of the VIA were 41.9%, 73.3%, and 98.2%, respectively. Conclusion: The present study confirmed the value of VIA for screening, with sensitivity of 41.9% and NPV of 98.2%. Future studies should determine the best place for VIA testing alongside other types of testing in screening algorithms.
Keywords
Screening
Cervical cancer
Cervical dysplasia
Colposcopy
Visual inspection with acetic acid
Iran
1. Introduction
Over 80% of new cervical cancer cases globally are diagnosed in developing countries. In some countries, cervical cancer is the leading cause of cancer death in women and the second most frequent malignancy after breast cancer. Systematic screening guidelines in developed countries have changed the epidemiology of cervical cancer. In many underdeveloped regions, the lack of such organized screening programs has led to the late diagnosis of this malignancy in its advanced stages [1-3]. The development of new screening methods, alongside appropriate use of cytological screening, has been encouraging in developing countries [4-11].
The World Health Organization has approved visual inspection of the cervix using acetic acid (VIA) as a screening method for cervical cancer. Studies have shown comparable efficiency of VIA and cytology, with the former being simpler and easier to perform by paramedical nurses in developing countries [1,3,12]. VIA is reported to be highly sensitive and specific (80% and 92%, respectively) in detection of high grade CIN [13-15]. In a study in India, VIA revealed sensitivity of 67.9% for CIN 3 detection [16]. In a meta-analysis, sensitivity of 73.2% was observed for detection of CIN 2 and more lesions by VIA [17]. The goal of this study was to evaluate the accuracy of VIA for diagnosis of CIN 2-3 in case findings.
2. Material and methods
This prospective cross-sectional study assessed sexually active women in all age groups, referred by a charity center to a tertiary medical center located in Tehran for cervical screening from 2011 to 2014. Participants’ socio-economic status ranged from low to moderate. The subjects were enrolled into the study and demographic information was collected after informed consent was obtained. Pregnant women, hysterectomized women and patients with known invasive or pre-invasive cervical lesions were excluded from the study. All 1437 cases underwent screening by visual inspection with acetic acid (VIA).
A trained nurse carried out the VIA. Acetic acid (3%) was applied to the cervix for about one minute and the cervix was observed without magnification in white light. After noting the position of the squamocolumnar junction (SCJ) (satisfactory or unsatisfactory), the VIA results were reported as negative, positive (fine or dense) or strongly positive (very opaque). In a two-month training course before taking part in the study, the nurse checked her VIA test reports with an expert gynecologic oncologist who was involved in the research. In all cases, colposcopy-biopsy was carried out for the purpose of comparison and the results were recorded.
In the colposcopic report sheet, findings were categorized as satisfactory examination, leukoplakia or acetowhite lesion. The location of lesions as well as the presence of punctation, mosaicism or abnormal vessels was recorded. A high-score colposcopy was defined by the presence of dense acetowhite, lesions in more than one quadrant, peeling, punctation, mosaicism or abnormal vessels. Biopsy samples were preserved in formalin and delivered to a pathology laboratory located in an academic medical center. Samples were reported by one of three highly experienced pathologists, each with more than 15 years’ work experience. A pathology report positive for CIN 2-3 on the colposcopic biopsy was regarded as the gold standard.
Descriptive statistics were used to represent the data: mean and standard deviation for continuous data, frequency and proportion for categorical data. Predictive values, namely sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV), were calculated. R version 3.0.1 was used for data analysis [18]. Subgroup analyses of strongly VIA-positive and VIA-positive cases were also carried out.
3. Results
A total of 1437 asymptomatic subjects were screened (VIA followed by colposcopy). The mean age of the cases was 45.41 $\pm$ 10.84 years, with a range of 18 to 86 years. The age distribution of study subjects is presented in Fig. 1. Of these, 388 (27%) were VIA positive and 126 (8.8%) were strongly VIA positive. In 145 out of 388 (37.4%) VIA-positive cases, the SCJ was observed and in 243 (62.6%) it was not. In 64 out of 126 (50.8%) strongly VIA-positive cases, the SCJ was observed and in 62 (49.2%) it was not (the SCJ is not usually seen in postmenopausal women due to lack of hormones). Table 1 shows the predictive values of the VIA for CIN 2-3.
Fig. 1.
Histogram of age distribution of study subjects.
Table 1. Sensitivity, specificity, positive and negative predictive value of the VIA test in CIN 2 or more case finding.

| Test | Accuracy rate (95% CI) | Sensitivity (95% CI) | Specificity (95% CI) | PPV${}^{1}$ (95% CI) | NPV${}^{2}$ (95% CI) |
| --- | --- | --- | --- | --- | --- |
| Strong positive VIA | 89.77 (88.06-91.27) | 16.13 (5.45-33.73) | 91.39 (89.8-92.81) | 3.97 (1.29-9.04) | 98.02 (97.11-98.7) |
| Positive VIA | 72.65 (70.25-74.93) | 41.94 (24.55-60.92) | 73.33 (70.93-75.63) | 3.35 (1.8-5.66) | 98.28 (97.3-98.98) |

${}^{1}$Positive predictive value. ${}^{2}$Negative predictive value.
The positive VIA test diagnosed 13 out of 31 CIN 2-3 cases. The strongly positive VIA diagnosed 5 out of 31 CIN 2-3 cases.
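The positive-VIA row of Table 1 can be reproduced from these counts (a minimal sketch; the false-positive and true-negative counts below are derived from the reported totals rather than stated explicitly in the text):

```python
# Reconstruct the "Positive VIA" row of Table 1 from the reported counts.
total = 1437        # women screened
test_pos = 388      # VIA-positive results
disease = 31        # CIN 2-3 confirmed on colposcopy-directed biopsy
tp = 13             # CIN 2-3 cases that were VIA positive

fn = disease - tp           # 18 missed cases
fp = test_pos - tp          # 375 VIA positives without CIN 2-3
tn = total - disease - fp   # 1031 true negatives

sensitivity = tp / (tp + fn)    # 13/31     -> 41.9%
specificity = tn / (tn + fp)    # 1031/1406 -> 73.3%
ppv = tp / (tp + fp)            # 13/388    -> 3.4%
npv = tn / (tn + fn)            # 1031/1049 -> 98.3%
accuracy = (tp + tn) / total    # 1044/1437 -> 72.7%
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
print(f"PPV {ppv:.1%}, NPV {npv:.1%}, accuracy {accuracy:.1%}")
```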
4. Discussion
The present study revealed that strongly positive and positive VIA detected CIN 2-3 cases with sensitivities of 16.13% and 41.94%, respectively. In both groups, the negative predictive value was 98%. The accuracy of both was considerable, at 89.7% for strongly positive VIA and 72.6% for positive VIA results (Table 1).
The design of this study was cross-sectional; treatment was performed, but treatment information was not included in the design. In this project, women who had started sexual activity, regardless of their age, were included in the study group. Since the women in the study came from a charity organization and had no previous screening history, no age-related limit was set and all women were tested.
A comparison of the results of this study with other studies that used VIA for screening and colposcopy-biopsy as the gold standard for CIN 2-3 case finding is shown in Table 2.
Table 2. Comparison of results of different studies regarding VIA test screening in CIN 2 or more case finding.

| Study | No. | Positive VIA N (%) | Positive CIN${}_{2}$ N (%) | Sensitivity | Specificity | PPV${}^{1}$ | NPV${}^{2}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Arab et al. (Strong VIA), present study | 1437 | 126 (8.8) | 31 (2.1) | 16.13 | 91.39 | 3.97 | 98.02 |
| Arab et al. (VIA), present study | 1437 | 388 (27) | 31 (2.1) | 41.94 | 73.23 | 3.35 | 98.28 |
| Zong et al. [22] | 3763 | 698 (18.5) | 22 (0.59) | 77.3 | 81.2 | 24.4 | 99.8 |
| Fokom-domgue et al. [23] | 47782 | 8314 (17.4) | 1577 (3.3) | 82.4 | 87.4 | | |
| Apollinair et al. [4] | 640 | 38 (5.9) | 8 (1.2) | 72.9 | 95.2 | | |
| Poomtavorn et al. [14] | 200 | 59 (29.5) | 32 (16) | 59.4 | 76.2 | 32.2 | 90.8 |
| Gaffikin et al. [24] | | | | 66-96 | 64-98 | | |
| Basu et al. [25] | | | | 55.7 | 82.1 | | |

${}^{1}$Positive predictive value. ${}^{2}$Negative predictive value.
In the present study, the influence of subjectivity on the VIA test was strong: 8.8% of tests were reported as strongly positive and 27% as positive overall.
If more objective testing criteria are implemented, as in a study in Thailand [14], the efficacy could improve. There, the criteria for a positive VIA were lesions near the SCJ, well defined and opaque. VIA was considered negative in the presence of lesions that were faint or translucent, acetowhite over a polyp or Nabothian cyst, a narrow acetowhite rim around the SCJ, or lesions distant from the SCJ.
One limitation of the present study was the lack of information on the exact location of lesions relative to the SCJ and on lesion size; this could make the VIA more subjective. Another important clinical aspect is the level of experience of the paramedical nurse who reports the VIA: a longer training session and a period of supervised performance could improve the quality of the test. Other methods, such as VIA with magnification (VIAM), could upgrade the efficiency of the test. Another important clinical aspect is the pre-test probability in the study population.
The high negative predictive value in the present study, for both strongly positive and overall positive VIA, was 98%. This is comparable to the negative predictive values of VIA reported by Li-Ju Zong et al. (99.8%) and by Poomtavorn et al. (90.8%) [14,15].
VIA could be valuable in conjunction with other screening tests for the diagnosis of CIN 2-3 in case finding. VIA and HPV testing have been suggested in combination, instead of the Pap smear and HPV testing, in developing countries. VIA is easy and can be performed during gynecologic visits without extra charge and without high-technology requirements; the HPV test was not available to this population due to cost. VIA has been shown to reduce women’s mortality from cervical cancer in developing countries [19,20]. VIA in combination with HPV testing could be an indicator of the need for colposcopy. VIA and pelvic exams should be repeated annually [21].
5. Conclusions
VIA is a simple procedure that can be carried out by a paramedical nurse. It is inexpensive and worked well for screening in the present study (sensitivity of 41.9% and NPV of 98.2%). An accuracy of 89.7% and a specificity of 91.3% were recorded for strongly positive VIA results, with lower accuracy and specificity for positive VIA results (Table 1). These tests could be included in screening algorithms in combination with cytology or HPV testing. A future study could determine the best place for VIA in screening programs and algorithms alongside other types of tests.
Limitation: in the present study, the trained nurse was supervised very carefully by an expert gynecologic oncologist, which is not offered in normal teaching courses. Therefore, the results of VIA reported here might partly reflect this supervision.
Author contributions
Study concept and design: M.A*, A.M; Acquisition of data: M.A*, A.M, GH.F; Analysis and interpretation of data: M.A , A.M, GH.F, R.GH; Drafting of the manuscript: M.A*, A.M, GH.F, M.M; Critical revision of the manuscript for important intellectual content M.A*, A.M, M.M, S.S, M.S; Statistical analysis: M.A*, R.GH; Administrative, technical, and material support: M.A*, A.M, S.S; Study supervision: M.A*, A.M; Editing and submit articles: M.A*, GH.F.
Acknowledgment
Thanks to all colleagues of the gynecology clinic and Imam Hossein Research Center.
Conflict of interest
The author declares that she has no conflicting interest.
Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
| 2022-10-03 08:17:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 10, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3023446500301361, "perplexity": 5874.937387775072}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00738.warc.gz"}
https://www.ibpsonline.in/2017/05/ibps-po-2017-quantitative-aptitude_98.html | Sunday, 21 May 2017
1 .
A. 280 B. 216 C. 243 D. none of these
2 . In 2007, from all the colleges together, an overall 40% of the students enrolled for a computer course. In total, how many students enrolled for this course?
A. 700 B. 600 C. 800 D. 900
3 . What is the ratio of the average number of students enrolled with all colleges together in 2009 to that in 2010?
A. 117 : 108 B. 111 : 113 C. 110 : 113 D. 105 : 108
4 . The average number of students enrolled from college Q for all the years together is approximately what percent of the average number of students enrolled from college R for all the years together?
A. 83 B. 85 C. 87 D. none of these
5 . In 2008, from all colleges together, 10% of students enrolled went abroad. Approximately how many students went abroad?
A. 311 B. 211 C. 321 D. none of these
6 . What is the average number of students enrolled for college R for all the years?
A. 302 B. 502 C. 402 D. none of these
7 .
A. 4000 B. 5000 C. 3000 D. none of these
8 . The sale of Hindi newspaper in locality P is approximately what percent of the total sale of Hindi newspaper in all the localities together ?
A. 20% B. 15% C. 18% D. 24%
9 . What is the ratio of the sale of Hindi newspaper in locality R to the sale of English newspaper in locality T ?
A. 12:7 B. 11:7 C. 7:12 D. none of these
10 . The sale of English and Hindi newspapers in locality P is what percent of the sale of English and Hindi newspapers in locality S?
A. 220 B. 125 C. 120 D. none of these
1 .
Answer : Option C Explanation : $360 \times \frac{9}{10} \times \frac{3}{4} = 243$
2 .
Answer : Option D Explanation : (480 + 350 + 380 + 500 + 540) × 40 % = 900
3 .
Answer : Option C Explanation :
4 .
Answer : Option A Explanation : $\frac{320+350+300+360+340}{400+380+410+430+390} \times 100 = \frac{1670}{2010} \times 100 = 83.08 \approx 83\%$
5 .
Answer : Option B Explanation : $(420 + 380 + 410 + 520 + 460) \times 10\% = 2110 \times \frac{10}{100} = 211$
6 .
Answer : Option C Explanation : $\frac{400+380+410+430+390}{5} = \frac{2010}{5} = 402$
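As a quick sanity check of the arithmetic in explanations 4-6 (a sketch; the enrollment figures are taken from the explanations above, since the original data chart is not reproduced here):

```python
# Check explanation 4: college Q vs college R total enrolment.
q = [320, 350, 300, 360, 340]
r = [400, 380, 410, 430, 390]
print(round(sum(q) / sum(r) * 100, 2))   # 83.08 -> ~83%

# Check explanation 5: 10% of all 2008 enrolments went abroad.
enrol_2008 = [420, 380, 410, 520, 460]
print(sum(enrol_2008) * 0.10)            # 219.0 (the explanation quotes a total of 2110,
                                         # hence 211; the addends as printed sum to 2190,
                                         # so one figure there is likely a typo)

# Check explanation 6: average enrolment for college R.
print(sum(r) / len(r))                   # 402.0
```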
7 .
Answer : Option C Explanation : (23000 – 20000) = 3000
8 .
Answer : Option D Explanation : $\frac{5500}{23000} \times 100 = 23.91 \approx 24\%$
9 .
Answer : Option A Explanation : $\frac{6000}{3500} = \frac{12}{7} = 12:7$
10 .
Answer : Option C Explanation : $9000\over7500$x100=120 | 2021-06-23 01:52:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6476626992225647, "perplexity": 5202.891438997275}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488528979.69/warc/CC-MAIN-20210623011557-20210623041557-00572.warc.gz"} |
https://fboehm.us/software/gemma2/index.html | ## Overview
gemma2 is an R implementation of the GEMMA v0.97 EM algorithm, which is part of the GEMMA algorithm for REML estimation of multivariate linear mixed effects models of the form:
$$\mathrm{vec}(Y) = X\,\mathrm{vec}(B) + \mathrm{vec}(G) + \mathrm{vec}(E)$$

where $G$ is an $n \times 2$ matrix of random effects that follows the matrix-variate normal distribution

$$G \sim MN(0, K, V_g)$$

where $K$ is a relatedness matrix and $V_g$ is a 2 by 2 covariance matrix for the two traits of interest.

Additionally, the random errors matrix $E$ follows the distribution:

$$E \sim MN(0, I_n, V_e)$$

and $G$ and $E$ are independent.
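As an aside, the model is straightforward to simulate once one notes that $G \sim MN(0, K, V_g)$ is equivalent to $\mathrm{vec}(G) \sim N(0, V_g \otimes K)$. A minimal sketch (in Python with NumPy rather than R; all sizes and covariance values are illustrative, not part of gemma2):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 2                      # individuals, traits

# Illustrative inputs: relatedness matrix K and trait covariances V_g, V_e.
A = rng.normal(size=(n, n))
K = A @ A.T / n                    # a positive-definite stand-in for kinship
V_g = np.array([[1.0, 0.3], [0.3, 0.5]])
V_e = np.array([[0.8, 0.1], [0.1, 0.6]])

# G ~ MN(0, K, V_g): if Z is iid standard normal, L_K @ Z @ L_g.T has
# cov(vec(G)) = V_g kron K, i.e. the matrix-variate normal above.
L_K, L_g = np.linalg.cholesky(K), np.linalg.cholesky(V_g)
G = L_K @ rng.normal(size=(n, d)) @ L_g.T

# E ~ MN(0, I_n, V_e): rows are iid N(0, V_e).
E = rng.normal(size=(n, d)) @ np.linalg.cholesky(V_e).T

X = np.ones((n, 1))                # intercept-only design
B = np.array([[0.5, -0.2]])        # 1 x d fixed effects
Y = X @ B + G + E                  # vec(Y) = X vec(B) + vec(G) + vec(E)
print(Y.shape)                     # (100, 2)
```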
## Installation
To install gemma2, use the devtools R package from CRAN. If you haven’t installed devtools, please run this line of code:
install.packages("devtools")
Then, run this line of code to install gemma2:
devtools::install_github("fboehm/gemma2")
## References
X. Zhou & M. Stephens. Efficient multivariate linear mixed model algorithms for genome-wide association studies. Nature Methods volume 11, pages 407–409 (2014). https://www.nature.com/articles/nmeth.2848
https://github.com/genetics-statistics/GEMMA | 2021-07-27 17:34:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4856237769126892, "perplexity": 6602.941367699859}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153474.19/warc/CC-MAIN-20210727170836-20210727200836-00713.warc.gz"} |
https://math.stackexchange.com/questions/2319039/stable-numerical-partial-fraction-decomposition-for-large-scale-problem | # Stable numerical partial fraction decomposition for large scale problem
Given a rational polynomial $\frac{f(x)}{g(x)}$ with $f$ and $g$ having the same degree and real coefficients. The degree of $f$ and $g$ is a few thousand. I would like to compute the partial fraction decomposition: $$\frac{f(x)}{g(x)} = \sum_i \frac{r_i}{x - p_i},$$ where $p_i$ and $r_i$ are the poles and residues of the rational polynomial (since $f$ and $g$ have equal degree, there is also an additive constant $\lim_{x \to \infty} f(x)/g(x)$ on the right-hand side).
Unfortunately, MATLAB's residuez breaks down numerically at degrees of a few hundred. Is there any way to perform the partial fraction decomposition in a numerically more stable way? What is the proper way to assess the difficulty of the problem?
I am aware that the partial fraction decomposition is an ill-posed problem. Some more information on my problem: the poles lie on the unit circle and have multiplicity 1. The poles can be accurately computed for a degree of a few thousand. I only need the absolute values of the residues. None of the zeros and poles cancel each other. All coefficients lie between -1 and 1, and most of the coefficients are zero.
If these are simple poles, the residues are $$r_j = \lim_{x \to p_j} \frac{(x - p_j) f(x)}{g(x)} = \frac{f(p_j)}{g'(p_j)}$$ If the coefficients of $f$ and $g$ (and thus of $g'$) are large, accurate computation of $f(p_j)$ and $g'(p_j)$ will be difficult. I suggest you use a Computer Algebra System such as Maple or Mathematica that can use higher precision.
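For simple poles this is direct to evaluate numerically. A minimal sketch (NumPy; the degree-8 polynomials are toy stand-ins for the degree-few-thousand case in the question):

```python
import numpy as np

# Toy example with simple poles on the unit circle: g(x) = x^8 - 1.
# Coefficients are ordered highest degree first, as numpy.roots expects.
g = np.zeros(9); g[0], g[-1] = 1.0, -1.0              # x^8 - 1
f = np.zeros(9); f[0], f[4], f[-1] = 1.0, 0.5, 0.25   # same degree as g

poles = np.roots(g)                 # p_j (or substitute more accurate poles if known)
dg = np.polyder(g)                  # g'(x)
residues = np.polyval(f, poles) / np.polyval(dg, poles)   # r_j = f(p_j) / g'(p_j)
print(np.abs(residues))             # the question only needs |r_j|
```

Since the question says the poles are already computed accurately, evaluating $f(p_j)/g'(p_j)$ at those poles avoids the ill-conditioned linear system behind general-purpose routines; for large degrees, Horner evaluation in extended precision (e.g. with mpmath) is a common remedy.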
• I don't know what the Matlab algorithm does. If the coefficients are all between $-1$ and $1$, and $p_j$ are on the unit circle, the absolute error for direct computation of $g'(p_j)$ shouldn't be too bad, though if the actual value should be very close to $0$ the relative error will be large. – Robert Israel Jun 12 '17 at 2:03 | 2019-12-13 17:06:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8124545216560364, "perplexity": 65.3530546048087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540564599.32/warc/CC-MAIN-20191213150805-20191213174805-00079.warc.gz"} |
https://freedocs.mi.hdm-stuttgart.de/sd1Collection4Exercise.html | ## Collections IV, Exercises
No. 196
### Implementing a set of strings
Q: We want to partly implement a simplified version of Set:

```java
package de.hdm_stuttgart.mi.sd1.stringset;

/**
 * A collection of Strings that contains no duplicate elements.
 * More formally, sets contain no pair of Strings s1 and s2 such that
 * s1.equals(s2), and no null elements. As implied by its name,
 * this class models the mathematical set abstraction.
 *
 * The StringSet class places stipulations on the contracts of the add,
 * equals and hashCode methods.
 *
 * The stipulation on constructors is, not surprisingly, that all constructors
 * must create a set that contains no duplicate elements (as defined above).
 */
public interface Set_String {

    /**
     * Returns the number of strings in this set (its cardinality).
     *
     * @return the number of elements in this set (its cardinality)
     */
    public int size();

    /**
     * Returns true if this set contains no elements.
     *
     * @return true if this set contains no elements
     */
    public boolean isEmpty();

    /**
     * Returns true if this set contains the specified element. More
     * formally, returns true if and only if this set contains an
     * element e such that (o==null ? e==null : o.equals(e)).
     *
     * @param o element whose presence in this set is to be tested
     * @return true if this set contains the specified element.
     *         A null value will be treated as "not in set".
     */
    public boolean contains(Object o);

    /**
     * Returns an array containing all strings in this set.
     *
     * The returned array will be "safe" in that no references to it are
     * maintained by this set. (In other words, this method allocates
     * a new array). The caller is thus free to modify the returned array.
     *
     * @return an array containing all strings in this set.
     */
    public String[] toArray();

    /**
     * Adds the specified element to this set if it is not already present.
     * More formally, adds the specified element e to this set if the set
     * contains no element e2 such that (e==null ? e2==null : e.equals(e2)).
     * If this set already contains the element, the call leaves the set
     * unchanged and returns false. In combination with the restriction on
     * constructors, this ensures that sets never contain duplicate elements.
     *
     * null values will be discarded
     *
     * @param s string to be added to this set
     *
     * @return true if this set did not already contain the specified element.
     *         The attempted insert of a null value will return false.
     */
    public boolean add(String s);

    /**
     * Removes the specified string from this set if it is present
     * (optional operation). More formally, removes a string s
     * such that (o==null ? s==null : o.equals(s)), if this set
     * contains such a string. Returns true if this set contained
     * the string (or equivalently, if this set changed as a result
     * of the call). (This set will not contain the string once the
     * call returns.)
     *
     * @param s String to be removed from this set, if present.
     * @return true if this set contained the specified string.
     */
    public boolean remove(Object s);

    /**
     * Removes all of the strings from this set (optional operation).
     * The set will be empty after this call returns.
     */
    public void clear();
}
```

Implement this interface:

```java
public class MySet_String implements Set_String {

    /**
     * Constructs a new, empty set.
     */
    public MySet_String() { ... }

    /**
     * Copy array values into this set excluding duplicates.
     *
     * @param source The array to copy values from
     */
    public MySet_String(final String[] source) { ... }

    @Override
    public int size() { ... }

    ...
}
```
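One possible shape of the solution, previewing the hints below (a minimal sketch in Python rather than Java, so as not to give away the exercise verbatim; all names are illustrative): keep the strings and their cached hash codes in two parallel arrays, grow by doubling, and compare hashes before falling back to full equality.

```python
class StringSetSketch:
    """Parallel-array set: compare cached hashes before full equality."""

    def __init__(self):
        self.strings = [None] * 4    # element storage, grown by doubling
        self.hashes = [0] * 4        # cached hash code of each stored string
        self.count = 0

    def _index_of(self, s):
        h = hash(s)
        for i in range(self.count):
            # Cheap integer comparison first; full equality only on a hash match.
            if self.hashes[i] == h and self.strings[i] == s:
                return i
        return -1

    def contains(self, s):
        return s is not None and self._index_of(s) >= 0

    def add(self, s):
        if s is None or self._index_of(s) >= 0:
            return False                          # nulls discarded, duplicates rejected
        if self.count == len(self.strings):       # amortized doubling
            self.strings = self.strings + [None] * len(self.strings)
            self.hashes = self.hashes + [0] * len(self.hashes)
        self.strings[self.count] = s
        self.hashes[self.count] = hash(s)
        self.count += 1
        return True
```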
Hints:

- Store strings along with their corresponding hash code values in two separate arrays.
- You may use the “amortized doubling” strategy from the exercise “Allow for variable capacity holding integer values” to accommodate arbitrary numbers of instances.
- On lookup, use hash code values prior to comparing via equals() in order to gain performance.
- Write appropriate tests to assure a sound implementation.

A: Maven module source code available at sub directory P/CollectionImplement/StringSet/Solution below lecture notes' source code root, see hints regarding import. Online browsing of API and implementation. | 2020-04-06 05:29:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4794968068599701, "perplexity": 1988.7018819066036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371618784.58/warc/CC-MAIN-20200406035448-20200406065948-00210.warc.gz"}
https://www.ideals.illinois.edu/handle/2142/91323 |
## Description
Title: Spectroscopy of the X$^1\Sigma^+$, A$^1\Pi$ and B$^1\Sigma^+$ electronic states of MgS

Author(s): Tokaryk, Dennis W.

Contributor(s): Linton, Colan; Adam, Allan G.; Caron, Nicholas

Subject(s): Small molecules

Abstract: The spectra of some astrophysical sources contain signatures from molecules containing magnesium or sulphur atoms. Therefore, we have extended previous studies of the diatomic molecule MgS, which is a possible candidate for astrophysical detection. Microwave spectra of X$^1\Sigma^+$, the ground electronic state, were reported in 1989 [S. Takano, S. Yamamoto and S. Saito, Chem. Phys. Lett. 159, 563-566 (1989)] and 1997 [K. A. Walker and M. C. L. Gerry, J. Mol. Spectrosc. 182, 178-183 (1997)], and the B$^1\Sigma^+$–X$^1\Sigma^+$ electronic absorption spectrum in the blue was last studied in 1970 [M. Marcano and R. F. Barrow, Trans. Faraday Soc. 66, 2936-2938 (1970)]. We have investigated the B$^1\Sigma^+$–X$^1\Sigma^+$ 0-0 spectrum of MgS at high resolution under jet-cooled conditions in a laser-ablation molecular source, and have obtained laser-induced fluorescence spectra from four isotopologues. Dispersed fluorescence from this source identified the low-lying A$^1\Pi$ state near 4520 cm$^{-1}$. We also created MgS in a Broida oven, with the help of a stream of activated nitrogen, and took rotationally resolved dispersed fluorescence spectra of the B$^1\Sigma^+$–A$^1\Pi$ transition with a grating spectrometer by laser excitation of individual rotational levels of the B$^1\Sigma^+$ state via the B$^1\Sigma^+$–X$^1\Sigma^+$ transition. These spectra provide a first observation and analysis of the A$^1\Pi$ state.

Issue Date: 2016-06-22
Publisher: International Symposium on Molecular Spectroscopy
Genre: Conference Paper/Presentation
Type: Text
Language: En
URI: http://hdl.handle.net/2142/91323
Rights Information: Copyright 2016 by the authors
Date Available in IDEALS: 2016-08-22
| 2016-10-22 21:32:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3500824570655823, "perplexity": 14640.422176763757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719045.47/warc/CC-MAIN-20161020183839-00012-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://homework.zookal.com/questions-and-answers/there-are-two-goods-good-a-and-good-b-for-593387850 |
# Question: There are two goods, Good A and Good B, for...
###### Question details
There are two goods, good A and good B. For each question, select the correct answer from the dropdown box.
(a) Suppose that when the price of Good A increases from $4 to $5, the demand for Good B decreases from 55 units to 40 units. Using the midpoint formula, the cross-price elasticity of demand for Good B with respect to the price of Good A is
i) 1.42
ii) 1.50
iii) -1.50
iv) -1.42
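A quick check of the part (a) arithmetic (a sketch; the midpoint formula divides each change by the average of its two endpoint values):

```python
# Cross-price elasticity of B with respect to the price of A (midpoint formula).
p0, p1 = 4.0, 5.0        # price of Good A
q0, q1 = 55.0, 40.0      # quantity demanded of Good B

pct_q = (q1 - q0) / ((q0 + q1) / 2)   # -15 / 47.5
pct_p = (p1 - p0) / ((p0 + p1) / 2)   #   1 / 4.5
print(round(pct_q / pct_p, 2))        # -1.42 -> option iv
```

The negative sign indicates the two goods are complements.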
(b) Of the following, which is the best example of what Good A and Good B may be?
i) butter and margarine
ii) printer and ink | 2021-04-13 08:02:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4457204043865204, "perplexity": 1638.8496034864863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072175.30/warc/CC-MAIN-20210413062409-20210413092409-00103.warc.gz"} |
https://www.nature.com/articles/s41586-020-2828-1/?error=cookies_not_supported&code=b9de48e7-2983-444b-b5af-f5c5b6e0fedd |
# The physical mechanisms of fast radio bursts
## Abstract
Fast radio bursts are mysterious millisecond-duration transients prevalent in the radio sky. Rapid accumulation of data in recent years has facilitated an understanding of the underlying physical mechanisms of these events. Knowledge gained from the neighbouring fields of gamma-ray bursts and radio pulsars has also offered insights. Here I review developments in this fast-moving field. Two generic categories of radiation model invoking either magnetospheres of compact objects (neutron stars or black holes) or relativistic shocks launched from such objects have been much debated. The recent detection of a Galactic fast radio burst in association with a soft gamma-ray repeater suggests that magnetar engines can produce at least some, and probably all, fast radio bursts. Other engines that could produce fast radio bursts are not required, but are also not impossible.
## Data availability
The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
## References
1. Lorimer, D. R., Bailes, M., McLaughlin, M. A., Narkevic, D. J. & Crawford, F. A bright millisecond radio burst of extragalactic origin. Science 318, 777–780 (2007). This discovery paper marks the birth of the FRB research field.
2. Thornton, D. et al. A population of fast radio bursts at cosmological distances. Science 341, 53–56 (2013).
3. Petroff, E. et al. Identifying the source of perytons at the Parkes radio telescope. Mon. Not. R. Astron. Soc. 451, 3933–3940 (2015).
4. Spitler, L. G. et al. A repeating fast radio burst. Nature 531, 202–205 (2016). This paper reports the discovery of the first repeating FRB source: FRB 121102.
5. Chatterjee, S. et al. A direct localization of a fast radio burst and its host. Nature 541, 58–61 (2017).
6. Marcote, B. et al. The repeating fast radio burst FRB 121102 as seen on milliarcsecond angular scales. Astrophys. J. 834, L8 (2017).
7. Tendulkar, S. P. et al. The host galaxy and redshift of the repeating fast radio burst FRB 121102. Astrophys. J. 834, L7 (2017). This paper reports the discovery of the first host galaxy and redshift of an FRB source: FRB 121102.
8. Loeb, A., Shvartzvald, Y. & Maoz, D. Fast radio bursts may originate from nearby flaring stars. Mon. Not. R. Astron. Soc. 439, L46–L50 (2014).
9. Platts, E. et al. A living theory catalogue for fast radio bursts. Phys. Rep. 821, 1–27 (2019).
10. Kulkarni, S. R. From gamma-ray bursts to fast radio bursts. Nat. Astron. 2, 832–835 (2018).
11. The CHIME/FRB Collaboration. A bright millisecond-duration radio burst from a Galactic magnetar. Nature http://doi.org/10.1038/s41586-020-2863-y (2020). This paper reports the discovery of an FRB associated with a Galactic SGR, establishing the magnetar origin of at least some FRBs.
12. Bochenek, C. D. et al. A fast radio burst associated with a Galactic magnetar. Nature https://doi.org/10.1038/s41586-020-2872-x (2020). This paper also reports the discovery of an FRB associated with a Galactic SGR, establishing the magnetar origin of at least some FRBs.
13. Li, C. K. et al. Identification of a non-thermal X-ray burst with the Galactic magnetar SGR 1935+2154 and a fast radio burst with Insight-HXMT. Preprint at https://arxiv.org/abs/2005.11071 (2020).
14. Ridnaia, A. et al. A peculiar hard X-ray counterpart of a Galactic fast radio burst. Preprint at https://arxiv.org/abs/2005.11178 (2020).
15. Mereghetti, S. et al. INTEGRAL discovery of a burst with associated radio emission from the magnetar SGR 1935+2154. Astrophys. J. 898, L29 (2020).
16. Tavani, M. et al. An X-ray burst from a magnetar enlightening the mechanism of fast radio bursts. Preprint at https://arxiv.org/abs/2005.12164 (2020).
17. Petroff, E., Hessels, J. W. T. & Lorimer, D. R. Fast radio bursts. Astron. Astrophys. Rev. 27, 4 (2019). This paper is a comprehensive review of the FRB field summarizing observational properties of FRBs as of 2019.
18. Cordes, J. M. & Chatterjee, S. Fast radio bursts: an extragalactic enigma. Annu. Rev. Astron. Astrophys. 57, 417–465 (2019). This paper is another comprehensive review of the FRB field summarizing the observational properties of FRBs as of 2019.
19. Lorimer, D. R. A decade of fast radio bursts. Nat. Astron. 2, 860–864 (2018).
20. Katz, J. I. Fast radio bursts. Prog. Part. Nucl. Phys. 103, 1–18 (2018).
21. Popov, S. B., Postnov, K. A. & Pshirkov, M. S. Fast radio bursts. Phys. Uspekhi 61, 965 (2018).
22. CHIME/FRB Collaboration. A second source of repeating fast radio bursts. Nature 566, 235–238 (2019).
23. The CHIME/FRB Collaboration. CHIME/FRB detection of eight new repeating fast radio burst sources. Astrophys. J. 885, L24 (2019).
24. Kumar, P. et al. Faint repetitions from a bright fast radio burst source. Astrophys. J. 887, L30 (2019).
25. Luo, R. et al. Diverse polarisation angle swings from a repeating fast radio burst source. Nature (in the press).
26. Ravi, V. The prevalence of repeating fast radio bursts. Nat. Astron. 3, 928–931 (2019).
27. Lu, W., Piro, A. L. & Waxman, E. Implications of CHIME repeating fast radio bursts. Preprint at https://arxiv.org/abs/2003.12581 (2020).
28. Petroff, E. et al. A survey of FRB fields: limits on repeatability. Mon. Not. R. Astron. Soc. 454, 457–462 (2015).
29. Palaniswamy, D., Li, Y. & Zhang, B. Are there multiple populations of fast radio bursts? Astrophys. J. 854, L12 (2018).
30. Caleb, M., Stappers, B. W., Rajwade, K. & Flynn, C. Are all fast radio bursts repeating sources? Mon. Not. R. Astron. Soc. 484, 5500–5508 (2019).
31. Zhang, Y. G. et al. Fast radio burst 121102 pulse detection and periodicity: a machine learning approach. Astrophys. J. 866, 149 (2018).
32. The CHIME/FRB Collaboration. Periodic activity from a fast radio burst source. Nature 582, 351–355 (2020).
33. Rajwade, K. M. et al. Possible periodic activity in the repeating FRB 121102. Mon. Not. R. Astron. Soc. 495, 3551–3558 (2020).
34. Ioka, K. & Zhang, B. A binary comb model for periodic fast radio bursts. Astrophys. J. 893, L26 (2020).
35. Lyutikov, M., Barkov, M. V. & Giannios, D. FRB periodicity: mild pulsars in tight O/B-star binaries. Astrophys. J. 893, L39 (2020).
36. Dai, Z. G. & Zhong, S. Q. Periodic fast radio bursts as a probe of extragalactic asteroid belts. Astrophys. J. 895, L1 (2020).
37. Levin, Y., Beloborodov, A. M. & Bransgrove, A. Precessing flaring magnetar as a source of repeating FRB 180916.J0158+65. Astrophys. J. 895, L30 (2020).
38. Zanazzi, J. J. & Lai, D. Periodic fast radio bursts with neutron star free precession. Astrophys. J. 892, L15 (2020).
39. Yang, H. & Zou, Y.-C. Orbit-induced spin precession as a possible origin for periodicity in periodically repeating fast radio bursts. Astrophys. J. 893, L31 (2020).
40. Luan, J. & Goldreich, P. Physical constraints on fast radio bursts. Astrophys. J. 785, L26 (2014).
41. Cordes, J. M., Wharton, R. S., Spitler, L. G., Chatterjee, S. & Wasserman, I. Radio wave propagation and the provenance of fast radio bursts. Preprint at https://arxiv.org/abs/1605.05890 (2016).
42. Xu, S. & Zhang, B. On the origin of the scatter broadening of fast radio burst pulses and astrophysical implications. Astrophys. J. 832, 199 (2016).
43. Hessels, J. W. T. et al. FRB 121102 bursts show complex time-frequency structure. Astrophys. J. 876, L23 (2019).
44. Petroff, E. et al. FRBCAT: the fast radio burst catalogue. Publ. Astron. Soc. Aust. 33, 45 (2016).
45. Bannister, K. W. et al. A single fast radio burst localized to a massive galaxy at cosmological distance. Science 365, 565–570 (2019).
46. Ravi, V. et al. A fast radio burst localized to a massive galaxy. Nature 572, 352–354 (2019).
47. Marcote, B. et al. A repeating fast radio burst source localized to a nearby spiral galaxy. Nature 577, 190–194 (2020).
48. Prochaska, J. X. et al. The low density and magnetization of a massive galaxy halo exposed by a fast radio burst. Science 366, 231–234 (2019).
49. Macquart, J. P. et al. A census of baryons in the Universe from localized fast radio bursts. Nature 581, 391–395 (2020).
50. Li, Z. et al. Cosmology-insensitive estimate of IGM baryon mass fraction from five localized fast radio bursts. Mon. Not. R. Astron. Soc. 496, L28–L32 (2020).
51. Zhang, B. Fast radio burst energetics and detectability from high redshifts. Astrophys. J. 867, L21 (2018).
52. Lin, L. et al. No pulsed radio emission during a bursting phase of a Galactic magnetar. Nature https://doi.org/10.1038/s41586-020-2839-y (2020). This paper reports the non-detection of FRBs from many SGR bursts, suggesting that the FRB–SGR associations are rather rare.
53. Kellermann, K. I. & Pauliny-Toth, I. I. K. The spectra of opaque radio sources. Astrophys. J. 155, L71 (1969).
54. Chawla, P. et al. Detection of repeating FRB 180916.J0158+65 down to frequencies of 300 MHz. Astrophys. J. 896, L41 (2020).
55. Gajjar, V. et al. Highest frequency detection of FRB 121102 at 4–8 GHz using the Breakthrough Listen digital backend at the Green Bank Telescope. Astrophys. J. 863, 2 (2018).
56. Law, C. J. et al. A multi-telescope campaign on FRB 121102: implications for the FRB population. Astrophys. J. 850, 76 (2017).
57. Karastergiou, A. et al. Limits on fast radio bursts at 145 MHz with ARTEMIS, a real-time software backend. Mon. Not. R. Astron. Soc. 452, 1254–1262 (2015).
58. Michilli, D. et al. An extreme magneto-ionic environment associated with the fast radio burst source FRB 121102. Nature 553, 182–185 (2018).
59. Cho, H. et al. Spectropolarimetric analysis of FRB 181112 at microsecond resolution: implications for fast radio burst emission mechanism. Astrophys. J. 891, L38 (2020).
60. Day, C. K. et al. High time resolution and polarisation properties of ASKAP-localised fast radio bursts. Mon. Not. R. Astron. Soc. 497, 3335–3350 (2020).
61. Lorimer, D. R. & Kramer, M. Handbook of Pulsar Astronomy (Cambridge Univ. Press, 2012). This is a comprehensive book on pulsar astronomy, enabling comparison of FRB phenomenology with pulsar phenomenology.
62. Radhakrishnan, V. & Cooke, D. J. Magnetic poles and the polarization structure of pulsar radiation. Astrophys. Lett. 3, 225 (1969).
63. Ravi, V. et al. The magnetic field and turbulence of the cosmic web measured using a brilliant fast radio burst. Science 354, 1249–1252 (2016).
64. Margalit, B. & Metzger, B. A concordance picture of FRB 121102 as a flaring magnetar embedded in a magnetized ion-electron wind nebula. Astrophys. J. 868, L4 (2018).
65. Yang, Y.-P., Li, Q.-C. & Zhang, B. Are persistent emission luminosity and rotation measure of fast radio bursts related? Astrophys. J. 895, 7 (2020).
66. Petroff, E. et al. A real-time fast radio burst: polarization detection and multiwavelength follow-up. Mon. Not. R. Astron. Soc. 447, 246–255 (2015).
67. Yi, S.-X., Gao, H. & Zhang, B. Multi-wavelength afterglows of fast radio bursts. Astrophys. J. 792, L21 (2014).
68. Bannister, K. W., Murphy, T., Gaensler, B. M. & Reynolds, J. E. Limits on prompt, dispersed radio pulses from gamma-ray bursts. Astrophys. J. 757, 38 (2012).
69. DeLaunay, J. J. et al. Discovery of a transient gamma-ray counterpart to FRB 131104. Astrophys. J. 832, L1 (2016).
70. Cunningham, V. et al. A search for high-energy counterparts to fast radio bursts. Astrophys. J. 879, 40 (2019).
71. Metzger, B. D., Berger, E. & Margalit, B. Millisecond magnetar birth connects FRB 121102 to superluminous supernovae and long-duration gamma-ray bursts. Astrophys. J. 841, 14 (2017). This paper proposes that young magnetars born in extreme explosions such as GRBs and superluminous supernovae are the engines of repeating FRBs.
72. Law, C. J. et al. A search for late-time radio emission and fast radio bursts from superluminous supernovae. Astrophys. J. 886, 24 (2019).
73. Men, Y. et al. Non-detection of fast radio bursts from six gamma-ray burst remnants with possible magnetar engines. Mon. Not. R. Astron. Soc. 489, 3643–3647 (2019).
74. Wang, X.-G. et al. Is GRB 110715A the progenitor of FRB 171209? Astrophys. J. 894, L22 (2020).
75. Bhandari, S. et al. The SUrvey for Pulsars and Extragalactic Radio Bursts—II. New FRB discoveries and their follow-up. Mon. Not. R. Astron. Soc. 475, 1427–1446 (2018).
76. Luo, R., Lee, K., Lorimer, D. R. & Zhang, B. On the normalized FRB luminosity function. Mon. Not. R. Astron. Soc. 481, 2320–2337 (2018).
77. Lu, W. & Piro, A. L. Implications from ASKAP fast radio burst statistics. Astrophys. J. 883, 40 (2019).
78. Luo, R. et al. On the FRB luminosity function. II. Event rate density. Mon. Not. R. Astron. Soc. 494, 665–679 (2020).
79. Lu, W., Kumar, P. & Zhang, B. A unified picture of Galactic and cosmological fast radio bursts. Mon. Not. R. Astron. Soc. 498, 1397–1405 (2020).
80. Nicholl, M. et al. Empirical constraints on the origin of fast radio bursts: volumetric rates and host galaxy demographics as a test of millisecond magnetar connection. Astrophys. J. 843, 84 (2017).
81. Bhandari, S. et al. The host galaxies and progenitors of fast radio bursts localized with the Australian Square Kilometre Array Pathfinder. Astrophys. J. 895, L37 (2020).
82. Li, Y. & Zhang, B. A comparative study of host galaxy properties between fast radio bursts and stellar transients. Astrophys. J. 899, L6 (2020).
83. Totani, T. Cosmological fast radio bursts from binary neutron star mergers. Publ. Astron. Soc. Jpn. 65, L12 (2013).
84. Zhang, B. A possible connection between fast radio bursts and gamma-ray bursts. Astrophys. J. 780, L21 (2013).
85. Wang, J.-S., Yang, Y.-P., Wu, X.-F., Dai, Z.-G. & Wang, F.-Y. Fast radio bursts from the inspiral of double neutron stars. Astrophys. J. 822, L7 (2016).
86. Margalit, B., Berger, E. & Metzger, B. D. Fast radio bursts from magnetars born in binary neutron star mergers and accretion induced collapse. Astrophys. J. 886, 110 (2019).
87. Wang, F. Y. et al. Fast radio bursts from activity of neutron stars newborn in BNS mergers: offset, birth rate, and observational properties. Astrophys. J. 891, 72 (2020).
88. Zhang, B. Fast radio bursts from interacting binary neutron star systems. Astrophys. J. 890, L24 (2020).
89. Zhang, B. The Physics of Gamma-Ray Bursts (Cambridge Univ. Press, 2018). This is a comprehensive book on GRB phenomenology and theoretical models, enabling cross-comparison of the FRB and the GRB fields.
90. Popov, S. B. & Postnov, K. A. Hyperflares of SGRs as an engine for millisecond extragalactic radio bursts. In Evolution of Cosmic Objects through their Physical Activity (eds Harutyunian, H. A., Mickaelian, A. M. & Terzian, Y.) 129–132 (2010). This paper was the first to propose that SGRs are the sources of FRBs, an idea recently proved by the FRB 200428–SGR 1935+2154 association.
91. Kulkarni, S. R., Ofek, E. O., Neill, J. D., Zheng, Z. & Juric, M. Giant sparks at cosmological distances? Astrophys. J. 797, 70 (2014).
92. Katz, J. I. How soft gamma repeaters might make fast radio bursts. Astrophys. J. 826, 226 (2016).
93. Lyubarsky, Y. A model for fast extragalactic radio bursts. Mon. Not. R. Astron. Soc. 442, L9–L13 (2014). This paper first proposes the synchrotron maser coherent mechanism to interpret FRBs.
94. Beloborodov, A. M. A flaring magnetar in FRB 121102? Astrophys. J. 843, L26 (2017).
95. Kumar, P., Lu, W. & Bhattacharya, M. Fast radio burst source properties and curvature radiation model. Mon. Not. R. Astron. Soc. 468, 2726–2739 (2017).
96. Yang, Y.-P. & Zhang, B. Bunching coherent curvature radiation in three-dimensional magnetic field geometry: application to pulsars and fast radio bursts. Astrophys. J. 868, 31 (2018).
97. Wadiasingh, Z. et al. The fast radio burst luminosity function and death line in the low-twist magnetar model. Astrophys. J. 891, 82 (2020).
98. Nemiroff, R. J. A century of gamma ray burst models. AIP Conf. Proc. 307, 730 (1994).
99. Waxman, E. On the origin of fast radio bursts (FRBs). Astrophys. J. 842, 34 (2017).
100. Plotnikov, I. & Sironi, L. The synchrotron maser emission from relativistic shocks in fast radio bursts: 1D PIC simulations of cold pair plasmas. Mon. Not. R. Astron. Soc. 485, 3816–3833 (2019).
101. Metzger, B. D., Margalit, B. & Sironi, L. Fast radio bursts as synchrotron maser emission from decelerating relativistic blast waves. Mon. Not. R. Astron. Soc. 485, 4091–4106 (2019).
102. Beloborodov, A. M. Blast waves from magnetar flares and fast radio bursts. Astrophys. J. 896, 142 (2020).
103. Melrose, D. B. Coherent emission mechanisms in astrophysical plasmas. Rev. Mod. Plasma Phys. 1, 5 (2017). This is a comprehensive review for coherent radio emission models for the sources in the Universe other than FRBs.
104. Harding, A. K. Gamma-ray pulsar light curves as probes of magnetospheric structure. J. Plasma Phys. 82, 635820306 (2016).
105. Rankin, J. M. Toward an empirical theory of pulsar emission. VI. The geometry of the conal emission region. Astrophys. J. 405, 285 (1993).
106. Ruderman, M. A. & Sutherland, P. G. Theory of pulsars—polar caps, sparks, and coherent microwave radiation. Astrophys. J. 196, 51–72 (1975).
107. Camilo, F. et al. The magnetar XTE J1810–197: variations in torque, radio flux density, and pulse profile morphology. Astrophys. J. 663, 497–504 (2007).
108. Zhang, B. Mergers of charged black holes: gravitational-wave events, short gamma-ray bursts, and fast radio bursts. Astrophys. J. 827, L31 (2016).
109. Levin, J., D’Orazio, D. J. & Garcia-Saenz, S. Black hole pulsar. Phys. Rev. D 98, 123002 (2018).
110. Long, K. & Pe’er, A. Synchrotron maser from weakly magnetized neutron stars as the emission mechanism of fast radio bursts. Astrophys. J. 864, L12 (2018).
111. Katz, J. I. Coherent emission in fast radio bursts. Phys. Rev. D 89, 103009 (2014).
112. Lu, W. & Kumar, P. On the radiation mechanism of repeating fast radio bursts. Mon. Not. R. Astron. Soc. 477, 2470–2493 (2018). This paper is a comprehensive survey of many coherent emission models and a critical assessment of the validity of these models for FRBs.
113. Ghisellini, G. Synchrotron masers and fast radio bursts. Mon. Not. R. Astron. Soc. 465, L30–L33 (2017).
114. Lu, W., Kumar, P. & Narayan, R. Fast radio burst source properties from polarization measurements. Mon. Not. R. Astron. Soc. 483, 359–369 (2019).
115. Lyubarsky, Y. Induced scattering of short radio pulses. Astrophys. J. 682, 1443–1449 (2008).
116. Murase, K., Kashiyama, K. & Mészáros, P. A burst in a wind bubble and the impact on baryonic ejecta: high-energy gamma-ray flashes and afterglows from fast radio bursts and pulsar-driven supernova remnants. Mon. Not. R. Astron. Soc. 461, 1498–1511 (2016).
117. Kumar, P. & Lu, W. Radiation forces constrain the FRB mechanism. Mon. Not. R. Astron. Soc. 494, 1217–1228 (2020).
118. Piro, A. L. The impact of a supernova remnant on fast radio bursts. Astrophys. J. 824, L32 (2016).
119. Yang, Y.-P. & Zhang, B. Dispersion measure variation of repeating fast radio burst sources. Astrophys. J. 847, 22 (2017).
120. Yang, Y.-P., Zhang, B. & Dai, Z.-G. Synchrotron heating by a fast radio burst in a self-absorbed synchrotron nebula and its observational signature. Astrophys. J. 819, L12 (2016).
121. Goldreich, P. & Julian, W. H. Pulsar electrodynamics. Astrophys. J. 157, 869 (1969).
122. Lyubarsky, Y. Fast radio bursts from reconnection in a magnetar magnetosphere. Astrophys. J. 897, 1 (2020).
123. Melrose, D. B. Amplified linear acceleration emission applied to pulsars. Astrophys. J. 225, 557–573 (1978).
124. Melikidze, G. I., Gil, J. A. & Pataraya, A. D. The spark-associated soliton model for pulsar radio emission. Astrophys. J. 544, 1081–1096 (2000).
125. Yang, Y.-P., Zhu, J.-P., Zhang, B. & Wu, X.-F. Pair separation in parallel electric field in magnetar magnetosphere and narrow spectra of fast radio bursts. Astrophys. J. 901, L13 (2020).
126. Kumar, P. & Bošnjak, Ž. FRB coherent emission from decay of Alfvén waves. Mon. Not. R. Astron. Soc. 494, 2385–2395 (2020).
127. Zhang, B. A “cosmic comb” model of fast radio bursts. Astrophys. J. 836, L32 (2017).
128. Wang, W., Zhang, B., Chen, X. & Xu, R. On the time-frequency downward drifting of repeating fast radio bursts. Astrophys. J. 876, L15 (2019).
129. Usov, V. V. & Katz, J. I. Low frequency radio pulses from gamma-ray bursts? Astron. Astrophys. 364, 655–659 (2000).
130. Sagiv, A. & Waxman, E. Collective processes in relativistic plasma and their implications for gamma-ray burst afterglows. Astrophys. J. 574, 861–872 (2002).
131. Kaspi, V. M. & Beloborodov, A. M. Magnetars. Annu. Rev. Astron. Astrophys. 55, 261–301 (2017).
132. Thompson, C. & Duncan, R. C. Neutron star dynamos and the origins of pulsar magnetism. Astrophys. J. 408, 194–217 (1993).
133. Beniamini, P., Hotokezaka, K., van der Horst, A. & Kouveliotou, C. Formation rates and evolution histories of magnetars. Mon. Not. R. Astron. Soc. 487, 1426–1438 (2019).
134. Vink, J. & Kuiper, L. Supernova remnant energetics and magnetars: no evidence in favour of millisecond proto-neutron stars. Mon. Not. R. Astron. Soc. 370, L14–L18 (2006).
135. Tendulkar, S. P., Kaspi, V. M. & Patel, C. Radio nondetection of the SGR 1806–20 giant flare and implications for fast radio bursts. Astrophys. J. 827, 59 (2016).
136. Li, Y., Zhang, B., Nagamine, K. & Shi, J. The FRB 121102 host is atypical among nearby FRBs. Astrophys. J. 884, L26 (2019).
137. Thompson, C. & Duncan, R. C. The soft gamma repeaters as very strongly magnetized neutron stars—I. Radiative mechanism for outbursts. Mon. Not. R. Astron. Soc. 275, 255–300 (1995).
138. Margalit, B., Beniamini, P., Sridhar, N. & Metzger, B. D. Implications of a “fast radio burst” from a Galactic magnetar. Astrophys. J. 899, L27 (2020).
139. Katz, J. I. The FRB-SGR connection. Preprint at https://arxiv.org/abs/2006.03468 (2020).
140. Yu, Y.-W., Zou, Y.-C., Dai, Z.-G. & Yu, W.-F. Revisiting the confrontation of the shock-powered synchrotron maser model with the Galactic FRB 200428. Preprint at https://arxiv.org/abs/2006.00484 (2020).
141. Connor, L., Sievers, J. & Pen, U.-L. Non-cosmological FRBs from young supernova remnant pulsars. Mon. Not. R. Astron. Soc. 458, L19–L23 (2016).
142. Cordes, J. M. & Wasserman, I. Supergiant pulses from extragalactic neutron stars. Mon. Not. R. Astron. Soc. 457, 232–257 (2016).
143. Katz, J. I. Are fast radio bursts made by neutron stars? Mon. Not. R. Astron. Soc. 494, L64–L68 (2020).
144. Gu, W.-M., Dong, Y.-Z., Liu, T., Ma, R. & Wang, J. A neutron star-white dwarf binary model for repeating fast radio burst 121102. Astrophys. J. 823, L28 (2016).
145. Zhang, B. FRB 121102: a repeatedly combed neutron star by a nearby low-luminosity accreting supermassive black hole. Astrophys. J. 854, L21 (2018).
146. Katz, J. I. Searching for Galactic micro-FRB with lunar scattering. Mon. Not. R. Astron. Soc. 494, 3464–3468 (2020).
147. Dai, Z. G., Wang, J. S., Wu, X. F. & Huang, Y. F. Repeating fast radio bursts from highly magnetized pulsars traveling through asteroid belts. Astrophys. J. 829, 27 (2016).
148. Smallwood, J. L., Martin, R. G. & Zhang, B. Investigation of the asteroid-neutron star collision model for the repeating fast radio bursts. Mon. Not. R. Astron. Soc. 485, 1367–1376 (2019).
149. Dai, Z. G. A magnetar-asteroid impact model for FRB 200428 associated with an X-ray burst from SGR 1935+2154. Astrophys. J. 897, L40 (2020).
150. Falcke, H. & Rezzolla, L. Fast radio bursts: the last sign of supramassive neutron stars. Astron. Astrophys. 562, A137 (2014).
151. Ai, S., Gao, H. & Zhang, B. On the true fractions of repeating and non-repeating FRB sources. Preprint at https://arxiv.org/abs/2007.02400 (2020).
152. Wang, M.-H. et al. Testing the hypothesis of a compact-binary-coalescence origin of fast radio bursts using a multimessenger approach. Astrophys. J. 891, L39 (2020).
## Acknowledgements
I thank P. Kumar, W. Lu, J. I. Katz, Y.-P. Yang and Z.-G. Dai for comments.
## Author information
### Corresponding author
Correspondence to Bing Zhang.
## Ethics declarations
### Competing interests
The author declares no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Zhang, B. The physical mechanisms of fast radio bursts. Nature 587, 45–53 (2020). https://doi.org/10.1038/s41586-020-2828-1
| 2022-08-09 12:12:43 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8137508034706116, "perplexity": 10215.066810507202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570921.9/warc/CC-MAIN-20220809094531-20220809124531-00483.warc.gz"}
https://math.stackexchange.com/questions/245141/do-the-real-numbers-and-the-complex-numbers-have-the-same-cardinality | # Do the real numbers and the complex numbers have the same cardinality?
So it's easy to show that the rationals and the integers have the same size, using everyone's favorite spiral-around-the-grid.
Can the approach be extended to say that the set of complex numbers has the same cardinality as the reals?
• One can show that $|\mathbb R| = |\mathbb R^2| = |\mathbb C|$ Nov 26, 2012 at 19:05
• It's quite sad, but it's easier to write an answer than finding the duplicate. And I am sure this question has been asked before. Nov 26, 2012 at 19:08
• The best treatment of this in an existing answer is probably here. Nov 26, 2012 at 19:31
Yes.
$$|\mathbb R|=2^{\aleph_0}; |\mathbb C|=|\mathbb{R\times R}|=|\mathbb R|^2.$$
If so, we have:
$$|\mathbb C|=|\mathbb R|^2 =(2^{\aleph_0})^2 = 2^{\aleph_0\cdot 2}=2^{\aleph_0}=|\mathbb R|$$
If one wishes to write down an explicit function, one can use a bijection $\mathbb{N\times 2\to N}$, and combine it with a bijection between $2^\mathbb N$ and $\mathbb R$.
• What is $2^\mathbb{N}$? Jun 10 at 18:14
Of course. I will show it on numbers in $$[0,1)$$ and $$[0,1)\times[0,1)$$. Consider $$z=x+iy$$ with $$x=0.x_1x_2x_3\ldots$$ and $$y=0.y_1y_2y_3\ldots$$ their decimal expansions (the standard, greedy ones with no $$9^\omega$$ as a suffix). Then the number $$f(z)=0.x_1y_1x_2y_2x_3y_3\ldots$$ is real and this map is clearly injective on the above mentioned sets. Generalization to the whole $$\mathbb C$$ is straightforward. This gives $$\#\mathbb C\leq\#\mathbb R$$; the other way around is obvious.
• This requires a bit more work. The map isn’t well-defined until you deal with the $0.4999\dots=0.5000\dots$ issue; if you deal with that straightforwardly, it’s not surjective. Nov 26, 2012 at 19:14
• Yes, you are right. However, they are all (complex) rational, hence of no interest for the sets of continuum cardinality. I'll add a comment.
– yo'
Nov 26, 2012 at 19:16
• And btw, usually a string with suffix $9^\omega$ is not considered to be an expansion (it is only a representation), in the usual greedy expansions as defined by Rényi in 1957.
– yo'
Nov 26, 2012 at 19:22
• I’ve never seen anyone make a distinction between representation and expansion, and I very much doubt that the distinction can be considered standard; certainly it does not qualify as well-known, so if you use it, you need to explain it. Nov 26, 2012 at 19:29
• And yes, I know that only countably many numbers are affected and that this does not affect the result, but I don’t know that the OP knows this. Nov 26, 2012 at 19:33
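To make the digit-interleaving idea above concrete, here is a minimal Python sketch; it necessarily works on truncated digit strings, since floats cannot hold infinite expansions, and the function name and example values are illustrative rather than part of the original answer:

```python
# Sketch of f(x + iy) = 0.x1 y1 x2 y2 x3 y3 ... on truncated decimal strings.
def interleave(x_digits: str, y_digits: str) -> str:
    # x_digits, y_digits: decimal digits of x, y in [0, 1), e.g. "1415" for 0.1415...
    return "0." + "".join(a + b for a, b in zip(x_digits, y_digits))

print(interleave("1415", "2718"))  # -> 0.12471118
```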
A straightforward bijection $$B : \mathbb{R}^2 \rightarrow \mathbb{C}$$ is: $$B(a,b) = a + bi$$. I omit the verification of injectivity and surjectivity. Then $$|\mathbb{C}| = |\mathbb{R}^2|$$. The separate result that $$|\mathbb{R}^k| = |\mathbb{R}| \; \forall \; k \in \mathbb{N}$$ implies $$|\mathbb{R}^2| = |\mathbb{R}|$$. Altogether, $$|\mathbb{C}| = |\mathbb{R}^2| = |\mathbb{R}|$$.
One particularly nice class of bijections from $\Bbb R$ to $\Bbb C = \Bbb R^2$, which is in my opinion a little bit similar to the spiral around the grid, is given by the space-filling curves.
• This is incorrect. Space filling curves are not injective. Aug 8, 2014 at 15:01
• @DanRust: Can you explain why? Oct 15, 2018 at 14:58
• If an injective and surjective curve in the square existed, then this would imply that such a curve is in fact a homeomorphism, as the interval is compact, and the square is Hausdorff. We know they are not homeomorphic as the interval has a cut point and the square does not. Oct 15, 2018 at 15:05 | 2022-09-28 01:17:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8196757435798645, "perplexity": 252.612925433927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00343.warc.gz"} |
https://forum.allaboutcircuits.com/threads/planning-for-application-program.186025/ | # Planning for application program
#### Sparsh45
Joined Dec 6, 2021
143
What I don't understand is how planning is done to develop application programs that run on an operating system. Obviously my question is related to the microcontroller on which the operating system runs. Right now I can't name a specific operating system; it can be any that runs on a microcontroller.
I have heard that people who develop application programs for operating systems use paper and pencil before writing code. Do they design state diagrams on paper? I do not know how many people here develop application programs, but if someone does, please share your strategy.
#### MrChips
Joined Oct 2, 2009
27,670
What I don't understand is how planning is done to develop application programs that run on an operating system. Obviously my question is related to the microcontroller on which the operating system runs. Right now I can't name a specific operating system; it can be any that runs on a microcontroller.
I have heard that people who develop application programs for operating systems use paper and pencil before writing code. Do they design state diagrams on paper? I do not know how many people here develop application programs, but if someone does, please share your strategy.
This is done everyday by application developers.
Writing code is far down on the list of tasks to be performed.
If you are designing an application program or any type of commercial product, your first steps ought to be:
1) What is the function of the product?
2) Is there a market for the product?
3) Has a survey of the market been conducted?
4) What is the perception by the market of your product?
5) What is competition to your product?
6) What does the cost/price analysis of the product reveal?
7) What are the risk factors in developing/manufacturing/maintaining the product?
Next steps into the product development:
1) Write the user manual for the product.
2) Produce mock-up and conceptual design of the product.
3) Produce a simulation of the product.
4) Have the design and user interface tested by persons outside of the development team and by potential clients.
And we have not even started looking at code design!
By this stage you would have the functional specifications of the product well defined.
#### KeithWalker
Joined Jul 10, 2017
2,609
As Mr. Chips stated, first the application must be well defined. Once a full functional specification has been produced and decisions have been made on what platform, language, firmware and interfaces will be used, the design of the software can begin. There are many different graphic tools available to make this task a little simpler. Basically, the application is divided up into several large functional sections such as: operator interface, data collection, data manipulation and output. These are drawn up on the top sheet of the planner. On the second level down, each functional section is divided into smaller interrelated tasks. The next level down divides these tasks into smaller related tasks, and so on until each task is reduced to functions that can no longer be divided. This is called Top-Down design.
The actual implementation of the software is done from the Bottom-Up. Each individual low level task is written and tested to ensure that it does exactly what it is supposed to. Then, on the next level up, the low level tasks are tested together to ensure that they interact correctly. This is repeated until the top level of the design has been tested. This is called Bottom-Up software development, and you can see that if the job is a very big one, and has been designed correctly, low level tasks can be tackled by a whole team of software specialists, each writing their own assigned function. It is the task of the project manager to ensure that each module performs and interacts correctly with all the other modules.
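A toy Python sketch of this top-down/bottom-up split might look as follows; the three-level decomposition and all function names are invented for illustration, not taken from the thread:

```python
# Top-down design: the top level is written first in terms of named sub-tasks.
# Bottom-up implementation: the leaves are coded and tested first.

def read_sensor():
    """Lowest-level task: implemented and unit-tested first."""
    return 42  # stand-in for real hardware access

def collect_data():
    """Mid-level task, built only from already-tested low-level tasks."""
    return [read_sensor() for _ in range(3)]

def present_results(data):
    """Output section from the top sheet of the planner."""
    print("mean =", sum(data) / len(data))

def application():
    """Top level: the large functional sections, wired together."""
    present_results(collect_data())

application()  # prints: mean = 42.0
```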
#### BobTPH
Joined Jun 5, 2013
6,073
Edited to add: I wrote this drawing on my experience with highly innovative projects. Every application is different, and requires different approaches; I am presenting one of these. The lovely wife Morticia worked in more traditional applications, and she pointed out that in her field, the user manual was the specification; it was all about user interaction.
With all due respect, I think the posts by MrChips and KeithWalker reflect corporate propaganda and are nonsense. I worked in the software industry for over 40 years and rarely saw a successful project follow the structured approach they present, while I saw many failures of such projects.
The successful projects I have worked on were usually the result of a single person or small group, often hiding their efforts from management. There was nothing like a spec, let alone a user manual, before coding started.
Exploratory code was written without consideration of user interface details or complete functionality. This code was modified rapidly, with large pieces scrapped when they did not work out, and the functional details worked out by trial and error.
When a successful prototype was done, then it was presented to management and a more traditional process put in place. Much of the code was rewritten with the knowledge gained from the prototype.
The most commercially successful project I worked on in my career produced $100M in sales in the first year, because we solved a problem for a customer we were unable to win without this innovation.
It was proposed as an idea that could be expressed in a single sentence by a very talented consulting engineer. He came to me with the question of whether it was possible, because it involved my area of expertise. After an initial feeling of no, I found that all of my gut-feeling objections turned out to be wrong, and it was quite doable.
I worked on a prototype without telling my management what I was doing. Once the prototype was done, we brought it to the customer (management was informed before that). In two weeks we brought up their million-line application on our company’s hardware, which they had considered impossible before.
After that, the project was given to another team who made it a product.
Bob
Last edited:
#### MrChips
Joined Oct 2, 2009
27,670
The successful projects I have worked on were usually the result of a single person or small group, often hiding their efforts from management. There was nothing like a spec, let alone a user manual, before coding started.
Exploratory code was written without consideration of user interface details or complete functionality. This code was modified rapidly, with large pieces scrapped when they did not work out, and the functional details worked out by trial and error.
When a successful prototype was done, then it was presented to management and a more traditional process put in place. Much of the code was rewritten with the knowledge gained from the prototype.
After that, the project was given to another team who made it a product.
Bob
With all due respect, you created a prototype and demonstrated proof of concept albeit by trial and error. You had unknowingly created the user manual and laid out the specifications of the product. The prototype was alpha tested by the client. The final development was actually performed by another team based on your prototype.
One of the objectives of laying out the specifications in detail is to avoid returning to the drawing board and having to do a major rework.
This actually fits in perfectly with the scheme I had already presented.
#### GetDeviceInfo
Joined Jun 7, 2009
2,125
When I was young, I would start a project by coding; now that I am old, I end a project with coding. Can't speak for others, but I suspect much has to do with where one is in their travels.
#### MrChips
Joined Oct 2, 2009
27,670
The TS is asking how application program development proceeds. I have stated that code writing does not begin until the groundwork has been laid, i.e. the functionality, user interface and specifications are fully developed and presented. With this covered, the actual software team has well-defined specifications to follow.
@KeithWalker explains the concepts of Top-Down Design and Bottom-Up Implementation very well. Both top-down and bottom-up coding can proceed in parallel.
I am an Apple Developer and created CAD applications for Apple Macintosh when it first came out. My main program was laid out like this:
Code:
Initialize
REPEAT
    GetNextEvent
    DoEvent
UNTIL done
The beauty of this is that the program is running from day one. The details of each function can be implemented as work progresses. This is Top-Down implementation with a clear road map.
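For readers who want to see that skeleton in runnable form, here is a minimal Python sketch of the same loop; the event source and handler are placeholders, not MrChips' original Macintosh code:

```python
def initialize():
    pass  # set up state, windows, devices (stub filled in as work progresses)

def get_next_event():
    return input("event> ")  # placeholder event source

def do_event(event):
    print("unhandled event:", event)  # stub handler, refined function by function

def main():
    initialize()
    done = False
    while not done:                  # REPEAT ... UNTIL done
        event = get_next_event()
        if event == "quit":
            done = True
        else:
            do_event(event)

if __name__ == "__main__":
    main()
```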
#### MrChips
Joined Oct 2, 2009
27,670
When I was young, I would start a project by coding; now that I am old, I end a project with coding. Can't speak for others, but I suspect much has to do with where one is in their travels.
"With age comes wisdom but sometimes age comes alone."
#### KeithWalker
Joined Jul 10, 2017
2,609
In the many years I worked for Hewlett Packard as a project engineer and consultant, I developed many software driven automated hardware systems for customers. Each one used the methods I described and each one was completed and signed off to the customer on time, within the quoted budget and with customer satisfaction.
During that time I also worked on renovating a number of older systems with software that had been constructed in the way BobTPH described. We called it "spaghetti coding" because it had just "grown" without any structure. Because of that, if a small change was made to correct a problem in one place, it caused all kinds of unexpected things to happen in other parts of the program. It was a nightmare to support and in most cases, it was more economical to just re-write it.
One of the most important features of top-down development with bottom-up implementation is that it is modular. Each module is well defined and self-contained and independent of the other modules. A module can be modified or replaced without it causing unexpected results in other places. That makes for robust, supportable software.
#### BobTPH
Joined Jun 5, 2013
6,073
During that time I also worked on renovating a number of older systems with software that had been constructed in the way BobTPH described. We called it "spaghetti coding" because it had just "grown" without any structure.
You are mis-characterizing what I said. I did not describe growing a product without structure. I described a very loose and free approach to getting to a prototype, followed by designing the final product with the learnings from the prototyping phase.
Spaghetti code results when an existing product is updated to learn new tricks that the original designers did not foresee. These eventually must be rewritten from scratch.
I think we are talking about different activities. I am talking more about innovation than product development.
It is not clear whether either of these were what the TS was asking.
Bob
#### Sparsh45
Joined Dec 6, 2021
143
It is not clear whether either of these were what the TS was asking.
Have you written any embedded programs that run on an operating system? If you have written such programs, I can ask further questions. Generally I divide programming into two categories: first, bare-metal programming, where we write code directly for the microcontroller; and second, embedded-OS programming, where we write application programs that run on an operating system, with the OS running on the microcontroller/processor.
I don't have much difficulty designing code in the bare-metal style; I can design it with little effort. What my brain doesn't understand at all is how to do the planning while writing code for an application program that would run on an operating system.
In my view, to develop an application program it is very important to know about the tasks, the task states, timing and task order.
Many people say that learning to develop application programs for an operating system makes your life easier, but for me it is more difficult because I don't really understand how they make a plan.
#### KeithWalker
Joined Jul 10, 2017
2,609
Have you written any embedded programs that run on an operating system? If you have written such programs, I can ask further questions. Generally I divide programming into two categories: first, bare-metal programming, where we write code directly for the microcontroller; and second, embedded-OS programming, where we write application programs that run on an operating system, with the OS running on the microcontroller/processor.
I don't have much difficulty designing code in the bare-metal style; I can design it with little effort. What my brain doesn't understand at all is how to do the planning while writing code for an application program that would run on an operating system.
In my view, to develop an application program it is very important to know about the tasks, the task states, timing and task order.
Many people say that learning to develop application programs for an operating system makes your life easier, but for me it is more difficult because I don't really understand how they make a plan.
Since the early 1980s I have written programs that run on 4-bit microprocessors, DOS, Windows 98, 2000, XP, etc., Basic Stamp, PIC, Arduino and just about anything else that would run a program, using many different languages and assemblers.
There is very little difference in planning and writing an application program whether it will run on the bare microcontroller or under an operating system on the controller. The only difference is in the choice of tools available for the task.
Either way, a functional specification is absolutely necessary because that is the definition of what the application will do. It will define input and output parameters and function and will include program sequence and flow.
Once the task has been defined, the choice is made on which tools to use to write the program. This can be anything from a single pass assembler to a full object oriented high level language.
The software can then be designed using whatever design tools you prefer, bearing in mind the characteristics and limitations of the platform that you have chosen.
Writing and testing the software are the very last steps in the process.
Last edited:
#### BobTPH
Joined Jun 5, 2013
6,073
Agreed, the planning has nothing to do with whether it is bare metal or running on an OS.
I think perhaps your use of language is confusing us. What do you mean by "planning for an application program"? Specifically, what "planning" do you do when writing your bare-metal applications?
Also, you keep getting hung up on task execution. I spent my whole life writing programs that run on an operating system. Most were a single task, a few were multi-task. But in no case did I ever plan out how the tasks run. That is the purview of the OS. The application programmer plans any necessary cooperation and communication between tasks, not how they are run.
Bob
#### MrChips
Joined Oct 2, 2009
27,670
Have you written any embedded programs that run on an operating system? If you have written such programs, I can ask further questions. Generally I divide programming into two categories: first, bare-metal programming, where we write code directly for the microcontroller; and second, embedded-OS programming, where we write application programs that run on an operating system, with the OS running on the microcontroller/processor.
I don't have much difficulty designing code in the bare-metal style; I can design it with little effort. What my brain doesn't understand at all is how to do the planning while writing code for an application program that would run on an operating system.
In my view, to develop an application program it is very important to know about the tasks, the task states, timing and task order.
Many people say that learning to develop application programs for an operating system makes your life easier, but for me it is more difficult because I don't really understand how they make a plan.
What is your definition of an Operating System?
As others have said, an OS is a set of software tools and libraries that perform various functions so that the SW developer does not have to create them. An OS defines a set of SW layers on top of which the application is laid.
#### Sparsh45
Joined Dec 6, 2021
143
What do you mean by "planning for an application program"? Specifically, what "planning" do you do when writing your bare-metal applications?
Bob
If I'm working on a complex project, I design a state machine or flowchart on paper before I write the code. I call this process planning.
#### KeithWalker
Joined Jul 10, 2017
2,609
If I'm working on a complex project, I design a state machine or flowchart on paper before I write the code. I call this process planning.
That is only a small part of process planning. It is designing the functional specification of the project. Once that has been done, the software design and the choice of platform, peripherals and tools begin.
#### rpiloverbd
Joined Dec 21, 2021
27
Everyone has said enough already. Long story short, they think about the product's features first, then work on circuits, code, etc. Then, after the initial prototyping, they decide to add more features. This cycle continues until management finally gives the green signal for product release.
#### click_here
Joined Sep 22, 2020
545 | 2023-01-30 18:50:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2896209955215454, "perplexity": 1126.851855444932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499826.71/warc/CC-MAIN-20230130165437-20230130195437-00431.warc.gz"} |
https://tr.overleaf.com/blog/25-beautiful-graphs-how-to-turn-an-inkscape-drawing-into-an-editable-latex-document | ## Beautiful Graphs: How to turn an Inkscape drawing into an editable LaTeX document
Posted on May 24, 2013

Guest post by Yetish Joshi

I'm a PhD student and recently had to write a couple of papers for a conference. Despite emailing the pdfs to my supervisors for review, things still got missed after submitting. I'm hoping writeLaTeX will enable a more collaborative means of writing documents and ensure the latest version is the one worked upon. I use pdf_tex files for my graphs, and since the writeLaTeX team added support so I could use them in my documents here, I wanted to share how this style of graph is achieved so others can produce beautiful vector based graphics.

### Beautiful graphs with pdf_tex

The pdf_tex file is a tex wrapper produced by Inkscape as a means to strip the text out of an SVG diagram when saving to PDF. Hence, the graphic is stored in the PDF and the text is rendered by LaTeX. Click on the example below to open it in the writeLaTeX editor. You can see in the files menu the two uploaded files which make up the figure.

### Step-by-step guide

Let's say you have your data. One could analyse it with R, preferably with a front-end like RStudio. However, as I have lots of data I prefer filtering down to a subset, and need interactive flexibility beyond R's script-based manner. Hence, for my graphs I use Veusz -- it's a great multi-platform free tool by Jeremy Sanders (see his twenty-minute video if you've got some time!). Remember you have to think a bit differently from spreadsheet-based graphs, but the flexibility is great. However, after you first install it, please note that within Preferences you need to tick a check box to enable "save text as text" for SVG. Then you're safe to save as SVG and thus use Inkscape's PDF+LaTeX option.

Now something different (well, slightly at least). There is an extension for Inkscape to save as tikz, which is actually quite cool. I believe it to be multi-platform; I have only used it in Windows and Ubuntu. Ubuntu may be a bit odd, but it gets there. The reason for this other workflow of tikz export is to use tikz (and possibly PGF) to draw line drawings; see e.g. the examples at TeXample.net. Let's say you use something like yEd (just watch the video -- 90 seconds and you're in love). It's a cross of VB6 and Visio -- weird, but it works. Best of all it's free and supports SVG. Equally you can use a web-based solution like draw.io, which is very cool. Personally, I look at the code produced by the tikz export extension for length/complexity, and for line diagrams/flowcharts it is nice. Below is the output using the tikz export extension from Inkscape -- click to open in the writeLaTeX editor. The file can be tidied up by including the picture as an attachment, rather than directly in the main tex file. Click to open in writeLaTeX -- you can still scale the figure as required with commands in the main tex file, and add captions. Either way, the lines are pin sharp.

If you would like diagram software with native support for tex then look no further than classic Dia. It is a bit of a pain, but its support is great, and it is the only one to support isometric graph guidelines, which can aid in creating a 3D diagram (Dia -> Diagram Properties -> Hex Grid). Below is a Cube (3D) drawn in Dia and exported as tex with PGF -- click to open in writeLaTeX. For scaling within this file I play with \setlength{\du}{15\unitlength}, while for line widths I play with \pgfsetlinewidth{0.100000\du}.
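As a general reference for the figure workflows above, including a pdf_tex file in the main tex document typically looks like the following; this is a minimal sketch in which the filename and the chosen width are placeholders, and the graphicx and color packages are assumed to be loaded in the preamble:

```latex
% Minimal sketch of including an Inkscape PDF+LaTeX export.
% "image.pdf_tex" and 0.8\columnwidth are placeholders.
\begin{figure}
  \centering
  \def\svgwidth{0.8\columnwidth} % set the figure width before inputting it
  \input{image.pdf_tex}          % wrapper that overlays LaTeX text on image.pdf
  \caption{A vector graphic whose labels are typeset by \LaTeX.}
\end{figure}
```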
Finally, tables. These can be a bit of a nightmare, and a simple solution is to produce them in LibreOffice Calc, then open them in Gnumeric and save as a LaTeX table fragment. I hope this helps! Thanks for reading -- Yetish Joshi
| 2019-03-24 18:17:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6853926777839661, "perplexity": 1951.868901748797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203464.67/warc/CC-MAIN-20190324165854-20190324191854-00288.warc.gz"}
https://electronicsreference.com/analog/op_amps/ | # Operational Amplifier
The operational amplifier is a DC voltage amplifier with high gain. Op amps are generally used in the form of an integrated circuit although they can also be constructed using transistors.
Several of the most popular integrated circuits of all time are op amps, including the 741 and 358 series op amps. Historically, op amps were one of the first examples of popular integrated circuits. For decades they have been widely produced, studied, and improved upon.
As a result, op amps can be obtained at extremely low cost, with high reliability, and they are widely used within the industry. They are versatile and considered foundational circuit elements of many electronics systems, useful to professionals and hobbyists alike.
## What is An Operational Amplifier?
An operational amplifier is a type of voltage amplifier. They are extremely versatile circuits that are used to perform a wide variety of functions.
Op amps have their name because they can perform mathematical operations. They are capable of performing addition, subtraction, integration, and differentiation as well as many other functions.
An op amp is a voltage amplifier that is designed to use negative feedback.
Passive elements like resistors and capacitors are used to form complete circuits that include feedback loops to improve performance.
The feedback circuit, in large part, determines the operation (i.e. the ‘op’) of the op amp. By carefully designing the circuit, we can use the op-amp to do many different things.
## The Problem With Transistor Amplifiers
In order to understand how op amps work, it’s a worthwhile exercise to review why op amps are so popular.
When amplifiers based on active components (i.e. transistors) were first produced, they had a few problems. They needed to be adjusted because gain tolerance was poor, and they also behaved erratically in the field. The gain tended to drift significantly, making early amplifiers difficult and costly to deal with.
This problem was solved by op amps, which use the concept of feedback to drastically improve an amplifier’s performance. From the very beginning, op amps have been associated with the first (intentional) feedback circuits; when you think ‘op amp’, you should also think ‘feedback’.
The underlying reason that op amps are more stable than transistor amplifiers, is that the total gain relies on a feedback circuit that consists of stable, reliable, predictable passive components rather than unstable, finicky active components (like the transistors in the amplifier itself).
## How Op Amps Work
### Structure of an op amp
Op amps can be considered three terminal devices, with two inputs and one output. These can be seen on the schematic symbol shown below:
The three terminals are:
(1) Inverting input (V−)
(2) Non-inverting input (V+)
(3) Output (Vout)
The output of the op amp is determined by the difference between the two inputs. Since the two inputs are being compared with each other to produce the output, they are collectively known as a differential input.
Like other amplifiers, op amps are characterized by their gain, A. In simple amplifiers, the open loop voltage gain AOL is the ratio of the output to the input voltage:
A_{OL}=\frac{V_{out}}{V_{in}}
Note: The open loop gain AOL is sometimes written as AV or G. These are usually interchangeable.
At this point, you might notice that we have used a simple definition for the gain AV that uses one input and one output. But we’ve already seen that there are actually two inputs to the op amp, V+ and V−.
This is because op amps use a differential input, meaning that they use the difference between two inputs. In other words:
V_{in}=V_+-V_-
So the open loop gain AOL is the output voltage divided by the differential input:
A_{OL}=\frac{V_{out}}{V_+-V_-}
Solving, the output of the circuit is equal to the gain times the differential:
V_{out}=A_{OL}(V_+-V_-)
Equivalently, the gain is the ratio of the output to the difference between the non-inverting and inverting inputs:
A_{OL}=\frac{V_{out}}{V_+-V_-}
In op amps, the voltage gain AV is also commonly known as the open loop differential gain, with the symbol AO. The gain of op amps is very large and is commonly of the order of 10^6 (about 1,000,000x).
### Op Amps Need Power to Provide Gain
Op amps can’t break the laws of physics and create energy, so where does all that gain come from?
The extra energy must come from another source that acts as the main power source of the amplifier. We call the positive terminal VCC and the negative terminal -VCC.
Sometimes VCC and -VCC will be included in a drawing for completion but in other cases they may be omitted for simplicity. Just keep in mind that amplification can’t take place without an external power source.
## Ideal Op-Amps
### The Golden Rules of Operational Amplifiers
There are a few qualities of ideal op amps that are useful to assume even though real life op amps aren’t ideal. These are also known as the ‘Golden Rules of Op Amps’.
1. Infinite open loop gain. Open loop gain is the gain of the amplifier without any feedback (i.e. without a feedback circuit). Real op amps have high open loop gain (20,000-200,000) but we often just estimate that the gain is infinite unless designing a high precision circuit.
2. Infinite input resistance. This means that current does not flow through the inputs of the op amp. In other words, I+ = I− = 0
3. When the op amp is used with negative feedback, it will attempt to keep the difference between V+ and V− equal to zero.
### Behavior of Ideal Op-Amps
An ideal operational amplifier will behave reliably depending on the inputs V+ and V−.
It’s helpful to create a new term VΔ, which is equal to the difference between V+ and V−:
V_{\Delta}=V_+-V_-
When VΔ is positive, Vout will tend to positive infinity:
for\: V_{\Delta}=V_+-V_->0:V_{out}\rightarrow \infty
When VΔ is negative, Vout will tend to negative infinity:
for\: V_{\Delta}=V_+-V_-<0:V_{out}\rightarrow -\infty
When VΔ is zero, Vout will also tend toward zero:
for\: V_{\Delta}=V_+-V_-=0:V_{out}\rightarrow 0
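As a rough illustration of this behaviour, here is a tiny Python sketch; in a real circuit the output clips at the supply rails rather than literally reaching infinity, and the ±15 V rails are an assumed example, not a value from this article:

```python
VCC = 15.0  # assumed symmetric supply rails, for illustration only

def ideal_opamp_out(v_plus, v_minus):
    v_delta = v_plus - v_minus
    if v_delta > 0:
        return +VCC   # "tends to +infinity", clipped at the positive rail
    if v_delta < 0:
        return -VCC   # "tends to -infinity", clipped at the negative rail
    return 0.0

print(ideal_opamp_out(1.0, 0.0))  # 15.0
print(ideal_opamp_out(0.0, 1.0))  # -15.0
```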
## Positive and Negative Feedback
An op amp can be used with positive or negative feedback, depending on which terminal is connected to the output.
Positive feedback is when the non-inverting input (V+) is connected with the output. This can be seen in the following circuit:
Negative feedback occurs when the inverting input (V−) is connected with the output.
## Negative Feedback is Standard
When it comes to op amps, negative feedback is the standard. This configuration is known as an inverting operational amplifier.
## Op Amps Are Integrated Circuits (ICs)
Internally, op amps are made up of components like transistors, resistors, and capacitors. They are complex enough that companies produce them in pre-packaged forms called integrated circuits.
Integrated circuits (ICs) contain the entire circuit inside of a single component.
The most popular op amp of all time, the 741, contains 20 transistors and 11 resistors. It can be purchased at very low cost, has a small footprint, and high reliability. It’s hard to beat the integrated circuit design.
Their popularity as integrated circuits is one of the reasons op amps are still so popular. The convenience of being able to design with them means that they will continue to be used in many applications.
## Op-Amp Circuits
Op amps can be used in various ways to produce a wide variety of circuits. Each circuit has a unique gain and properties that make it useful in different situations.
### Op-Amp Voltage Follower
The op-amp voltage follower is a unity-gain amplifier, meaning that it has a gain of 1. It is the simplest op-amp circuit with feedback and is therefore an excellent circuit to learn in order to understand op-amp functionality.
The op-amp voltage follower is created by connecting the output of the op-amp directly to the inverting input with a bare wire:
The op-amp voltage follower produces an output that is the same as the signal on the non-inverting input. In other words, it produces a gain of one:
Gain = A_v = \frac{V_{out}}{V_{in}}=1
Although this may seem to be a trivial function, voltage followers are incredibly useful as buffer circuits.
This is because the op-amp itself has high input impedance and low output impedance. These helpful impedance properties allow the op-amp voltage follower to be used as a buffer between stages that would otherwise have a loss of signal strength. They are therefore also known as op-amp buffers.
## Non-Inverting Op Amp
A non-inverting op amp is a circuit that uses an op amp to produce a positive gain that is greater than one. The term ‘non-inverting’ in the name refers to the fact that in this configuration, the output is the same phase as the input.
We can see this reflected in its gain. The non-inverting op amp has a positive gain, whereas an inverting amplifier has a negative gain because it inverts the signal.
Gain = A_v = \frac{V_{out}}{V_{in}}=1+\frac{R_F}{R_1}
A non-inverting op amp uses two resistors (R1 and RF) that form a voltage divider between the op amp output and ground. The output of the voltage divider is fed into the inverting input to provide negative feedback to the circuit:
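As a quick numeric check of the gain formula above, consider this short Python sketch; the resistor values are arbitrary examples, not taken from the article:

```python
def noninverting_gain(r_f, r_1):
    return 1 + r_f / r_1  # ideal closed-loop gain of the non-inverting stage

print(noninverting_gain(90e3, 10e3))  # 10.0 -> gain of +10 with RF = 90k, R1 = 10k
```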
## Inverting Op Amp
The inverting op amp reverses the polarity of the input signal as it amplifies. It is similar in construction to the non-inverting op amp. The primary difference in construction between the non-inverting op amp and inverting op amp circuits is the reversal of the input and ground connections.
An inverting op amp uses a negative feedback loop, with one resistor directly connecting the output to the inverting input, and another connecting the feedback node with the main input. These two resistors form a kind of voltage divider. However, the non-inverting input is grounded (V+ = 0); if the op amp is able to stabilize the circuit, then V− will also equal zero (0).
The result of this configuration is that the polarity of the input signal is reversed; a positive input voltage results in a negative output (and vice versa). This is expressed by the negative sign in the gain formula for the inverting op amp:
Gain=A_v=-\frac{R_2}{R_1} | 2022-12-07 03:17:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6018345355987549, "perplexity": 1203.9078928791698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711126.30/warc/CC-MAIN-20221207021130-20221207051130-00121.warc.gz"} |
http://math.stackexchange.com/questions/142467/an-ordered-field-has-no-smallest-positive-element | # An ordered field has no smallest positive element
Prove that every ordered field has no smallest positive element.
What have you tried? – Jonas Meyer May 8 '12 at 3:02
@Ross: I just removed it. – Brian M. Scott May 8 '12 at 3:04
Welcome to math.SE: since you are new, I wanted to let you know a few things about the site. In order to get the best possible answers, it is helpful if you say in what context you encountered the problem, and what your thoughts on it are; this will prevent people from telling you things you already know, and help them give their answers at the right level. If this is homework, please add the [homework] tag; people will still help, so don't worry. Also, many find the use of imperative ( "Use", "Prove") to be rude when asking for help; please consider rewriting your post. – Arturo Magidin May 8 '12 at 3:05
## 1 Answer
Hint: Suppose that $x$ is the smallest positive element. Show that $1/2$ is positive, and thus $x/2$ is positive. Thus $0<x/2<x$, a contradiction.
A few questions about this. What if the field is finite or has finite characteristic? – nullUser May 8 '12 at 3:10
A field with finite characteristic can never be ordered, as we would then have $$1<1+1<1+1+1<\cdots<0$$ – Alex Becker May 8 '12 at 3:11
I understand that we are trying to prove this by a contradiction. With that I understand why we are letting $x$ be the smallest positive element. What I don't understand is how you show $1/2$ is positive and then from that show $x/2$ is positive. Can you expand a little more, please? – walter miller May 8 '12 at 4:26
@waltermiller: $1$ must be positive, since $1=1^2$; since $1$ is positive, $1+1$ is positive. Since $1+1$ is positive, $\frac{1}{1+1}$ is positive. – Arturo Magidin May 8 '12 at 5:09
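Putting the hint and the comments together, the full argument can be assembled as follows (an editorial sketch, not posted verbatim by anyone in the thread):

```latex
% Sketch: an ordered field has no smallest positive element.
\begin{proof}
Suppose $x$ were the smallest positive element. Since $1 = 1^2$, we have $1 > 0$,
so $1 + 1 > 0$ and therefore $(1+1)^{-1} > 0$. Then
\[ 0 < \tfrac{x}{2} = x \cdot (1+1)^{-1} < x, \]
since $\tfrac{x}{2} > 0$ as a product of positive elements, and
$x - \tfrac{x}{2} = \tfrac{x}{2} > 0$ gives $\tfrac{x}{2} < x$.
This contradicts the minimality of $x$.
\end{proof}
```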
| 2014-03-10 10:47:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8291608095169067, "perplexity": 652.3322134387158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010746376/warc/CC-MAIN-20140305091226-00003-ip-10-183-142-35.ec2.internal.warc.gz"}
https://puzzling.stackexchange.com/questions/3257/how-do-we-get-those-dang-illuminati/3294 | # How do we get those dang Illuminati?
The Illuminati just stole our office. They shot JFK and have committed numerous crimes against humanity (not really), but we usually let that slide. But for stealing our offices, we have decided to get them arrested for those crimes, take over both offices, and get our revenge. In order to do this, though, we need their names.
To state the rules here:
• Congressmen are either Knights (R), Illuminati (L), Both (B), or Citizens (C)
• No one else belongs to either group
• We are not supposed to know who any member of either group is. So no more members of either group will willing reveal themselves.
• When attending joint meetings all of us are disguised but we can count how many of each of us there are
• After the plan is put into action, there will be no communication between us
• Citizens can't join either group by any means during this time frame. If you say their name and repeatedly tap your shoulders, they just start googling mental health centers to recommend to you.
• If a person (in this case always Bob) touches his left shoulder after saying a person's name, that person becomes an Illuminati (B) if he was only a Knight (R) or loses Illuminati status (R) if he was both (B).
• If a person (in this case always Bob) touches his right shoulder after saying a person's name, that person becomes a Knight (B) if he was only Illuminati (L) or loses Knight status (L) if he was both (B).
Senator Bob has been identified as a member of only our group and will not betray us. This is certain. We have agreed to hold a meeting every day with the Illuminati where every member of each group must be present in the correct uniform. If they are in both, they must wear the robes of one and the mask of the other. Bob will hold a prayer in front of congress between each meeting in which he can touch his shoulders as much as he likes and everyone will know.
We know there are only 100 congressmen whose status we don't already know. We know only 32 of these are in the Illuminati and 32 are our men. None of those unidentified are in both. Bob and Peter are currently known to be Knight Templar and Illuminati respectively because they shoulder tapped with obvious motives.
Is there a way we can determine the names of every one of those 32 congressmen who are right now Illuminati without revealing any of our names to them?
I am willing to lie and cheat as long as they don't know we are doing it. I am a politician like you after all.
Let's assume that we have ensured Peter will not shoulder tap again and that they will not sabotage us unless they know for a fact we have identified them (for now let's assume they are dumb enough that they can't do that). They are smart enough, however, to try and catch us breaking meeting rules.
Bonus points if you can do any of the following:
• Minimize the number of meetings/shoulder tapping sessions
• Compute a plan for an arbitrary number of congressmen in each group
• Have the plan allow for the possibility that an unidentified congressman is a member of both.
• Sorry, but I forgot a necessary assumption: their group as a whole does not know anything we don't. – kaine Oct 24 '14 at 19:43
• If Bob changes an Illuminati to a Templar (and we figure out who he is) should he change him back to Illuminati before the arrest happens? Or can we arrest him as a Templar? – Peter Oct 24 '14 at 22:53
• Arrest is by name, not team. They don't have to be on that team at the end. – kaine Oct 24 '14 at 22:54
• "without revealing any of our names" - "our" means the cabal planning Bob's strategy, or all Knights? – aschepler Oct 25 '14 at 2:58
• This is not related to puzzling but I have to say that the deleted answers are fantastic. I guess any mention of "Illuminati" draw the spammers. (Fun fact: My dictionary wants "Illuminati" to be capitalized. Respect.) Can anybody figure out the phone number +2348076826545 ? It's all over the web. – Engineer Toast Apr 23 '15 at 16:01
As a first solution that doesn't out any Knights:
Each day, Bob names one congressman and touches his right shoulder.
If at the next joint meeting one more person is wearing a split uniform (B), the just-named person is outed as an Illuminati. If a citizen or Knight was named, nothing happens.
This takes at most 99 days to uncover all Illuminati. (After 99 days, if we have only changed the costumes of 31 people, the unnamed congressman must be the 32nd unknown Illuminati.)
• Shouldn't we be touching our left shoulder upon every outing to bolster our numbers permanently? – Weckar E. Mar 22 '17 at 9:09
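For the curious, the one-name-per-day strategy above can be sanity-checked with a toy Python simulation; the status encoding and the random assignment are an editorial illustration, not part of the answer:

```python
import random

# 'L' = Illuminati, 'R' = Knight, 'C' = Citizen; a right-shoulder tap turns L into B.
statuses = ['L'] * 32 + ['R'] * 32 + ['C'] * 36
random.shuffle(statuses)

found = []
for i in range(99):                      # name congressmen 0..98, one per day
    b_before = statuses.count('B')       # split uniforms counted at the meeting
    if statuses[i] == 'L':
        statuses[i] = 'B'                # right-shoulder tap: L becomes B
    if statuses.count('B') > b_before:   # one more split uniform appeared
        found.append(i)

if len(found) == 31:                     # 31 found among 99 names: the 100th is the last
    found.append(99)
print(f"identified {len(found)} Illuminati")  # always 32
```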
To speed things up, we can use the fact that none of the unidentified are B.
Let Bob execute RLR for congressman 1, LRLR for congressman 2 and RLRLR for congressman 3.
For all possibilities, this gives us the following table:
before after change
1 2 3 R B L
L L L | R R B | +2 -2
L L R | R R L | +1 +1 -2
L R L | R B B | +1 -1
L R R | R B L | -1 +2 -1
R L L | L R B |
R L R | L R L | -1 +1
R R L | L R L | -2 +1 +1
R R R | L B L | -3 +2 +1
We can see that each assignment of memberships corresponds to a unique change in frequencies at the next secret meeting. The next day Bob does the same for congressmen 4, 5 and 6 and so on. Since we're dealing with groups of three congressmen per meeting, we can work through all hundred in 34 days, and have the Illuminati behind bars before Christmas.
Perhaps this scheme can actually be extended a little further, but not to the full 100 congressmen. Since there are about $100^2$ unique frequencies, the best you can do with this strategy is to expose the Illuminati in groups of 13 (i.e. in 8 days). I'll leave that for another poster to work out.
• Some of the 100 are mere Citizens. – aschepler Oct 25 '14 at 13:00
• Oh snap, you're right. That affects the frequencies. At the very least R, L, L will cause the same frequency change as C, C, C. Back to the drawing board. – Peter Oct 25 '14 at 13:12
• This might identify us too, but we might be able to cheat so their numbers are off (dress one of our own differently). – kaine Oct 25 '14 at 20:28
We won't be using the left shoulder, because that will out the Templars. The right shoulder is fine though.
Improving on aschepler's solution, we name 2 people each day while touching the right shoulder. If the number of B is unchanged at the next meeting, neither was illuminati. If B has increased by 2, they were both illuminati. If only 1, then we spend the next day identifying which: name one of the two while touching the left shoulder. If the number of B has dropped at the next meeting, this person was originally illuminati, otherwise it was the other person.
This method has a worst case when we never eliminate 2 Illuminati in one day and still have 1 left among congressmen 99–100, so we spend 49 days testing pairs and 32 days identifying which member of each pair was Illuminati, giving 81 days.
Ok, let's start with an inefficient one, to set up the principle (and check that I understand the rules).
We number the unknowns 1 through 100 (or 1 through 98 if we're counting Bob and this Peter fellow). First, we call a meeting, and make a note of how many people turn up in each group.
At his first session of congress, Bob says the name of congressman 1, and taps his right shoulder, says the name and taps his left shoulder, and then says the name and taps his right shoulder (RLR). This turns Illuminati into Templar and vice versa, while members of both and citizens are unaffected.
We call another meeting and see if the numbers have changed. If we have an extra Templar, we add congressman 1 to the list.
Bob goes through all congressmen in this fashion, and after only 100 days we know whom to arrest.
• Pretty much correct. Peter and Bob are not in the 100. This should work with no cheating. – kaine Oct 24 '14 at 23:21
• Unfortunately, this answer was posted by Peter, who is a known Illuminati, so... well, you distract him and I'll call the cops – Joe Oct 25 '14 at 0:36
• Doesn't this change the 32 Knights into Illuminati, outing them? – aschepler Oct 25 '14 at 3:02
• R》B》L; L》L》B; B》R》R; something is off with this method for the both and knight cases. Thank you for pointing that out. – kaine Oct 25 '14 at 5:00
• Hmm. The illuminati "made a mistake" and tried to mislead us with a false solution... Tricks! – Joe Oct 25 '14 at 11:55
| 2019-10-19 16:22:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4637288749217987, "perplexity": 1936.8204566849186}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986696339.42/warc/CC-MAIN-20191019141654-20191019165154-00347.warc.gz"}
http://sigma-delta.de/documentation.html
# Documentation
For a detailed description of the theoretical aspects see Related Publications.
## Input Signal
In the current implementation each individual modulator is optimized via single sine wave inputs at eight different amplitudes but with the same input frequency. These amplitudes are linearly spaced between selectable minimum and maximum values $$A_\mathrm{min}$$ and $$A_\mathrm{max}$$. The frequency can be set in percent of the inband.
| Parameter | Description |
| --- | --- |
| Amplitude | Input amplitude range |
| Frequency | Input frequency |
### Amplitude
The eight input amplitudes of the signals are defined by \(A_\mathrm{min} \in [0.001;A_\mathrm{max})\) and \(A_\mathrm{max} \in (A_\mathrm{min};2]\). For the final optimized result, a complete amplitude sweep is performed in order to show the complete dynamic range performance.
• Hints for the optimization:
The covered range of the amplitudes can be utilized in order to aim for a certain maximum stable amplitude (MSA). E.g., when selecting 0.7 for $$A_\mathrm{min}$$ and 0.8 for $$A_\mathrm{max}$$, only modulators which are stable in that range are evaluated as good. However, the tougher the constraints the higher the chance that no modulator can be found at all.
• Examples:
Selecting for the minimum and maximum amplitude values:
• $$A_\mathrm{min} = 0.3$$
• $$A_\mathrm{max} = 1.0$$
results in the test amplitudes:
| Test signal # | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Test amplitude | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
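This amplitude grid is easy to reproduce; the following is a minimal sketch (our own illustration, not part of the tool):

```python
import numpy as np

def test_amplitudes(a_min: float, a_max: float, n: int = 8) -> np.ndarray:
    """Return n linearly spaced test amplitudes between a_min and a_max."""
    return np.linspace(a_min, a_max, n)

print(test_amplitudes(0.3, 1.0))
# [0.3 0.4 0.5 0.6 0.7 0.8 0.9 1. ]
```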
### Frequency
The normalized input frequency \(f_\mathrm{norm}\) can be set between \([0.01;1]\), relative to the inband edge \(\frac{f_\mathrm{s}}{2\cdot\mathrm{OSR}}\).
• Hints for the optimization:
Choosing a frequency lower than \(\frac{1}{3}\) of the inband keeps the third harmonic produced by the quantizer inside the band, so it is included in the evaluation and the more robust modulators are favoured.
• Examples:
Selecting a frequency of
• $$f_\mathrm{norm} = 0.3$$
results in a tone inside the band of interest at \(0.3\cdot\frac{f_\mathrm{s}}{2\cdot\mathrm{OSR}}\). Increasing or decreasing the value shifts the signal peak, indicated by the arrows in the following figure.
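For reference, the absolute tone frequency implied by the formula above can be computed as follows; the sampling rate and OSR values are assumptions for illustration only:

```python
def tone_frequency(f_norm: float, f_s: float, osr: float) -> float:
    """Absolute input tone frequency for a normalized inband position f_norm."""
    return f_norm * f_s / (2.0 * osr)

# Assumed example values: f_s = 10 MHz, OSR = 32, f_norm = 0.3
print(tone_frequency(0.3, 10e6, 32))  # 46875.0 Hz
```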
## Integrators/Resonators
The current implementation allows selecting one out of seven integrator and resonator types, each of them providing its individual set of parameters. Depending on the chosen type, the tool offers different design goals and constraints. Moreover, the selection affects the available settings in the D/A converter and coefficient blocks.
| Parameter | Description |
| --- | --- |
| Model Type | Integrator/resonator model type. |
| Dynamics | Optimize for a defined minimum and maximum integrator output swing. |
| DC gain | DC gain of the integrator/resonator. |
| GBW/Bandwidth | Represents the GBW or the location of the first pole, depending on the model type. |
| Resonance Frequency | Resonance frequency, if supported by the model. |
| Prop. path | High-level coefficient representing a proportional path, if supported by the model. |
### Model type
#### Integrators
##### Ideal
An ideal integrator is ideal in every respect: there are no limitations in DC gain, GBW, etc. Each ideal integrator represents one state in the state-space model of the overall modulator model, which is used for optimization. A circuit-level model is shown in the figure below. Here, the circuit-level model utilizes an ideal OpAmp with a transfer characteristic of \(A(s)= \infty\). Feedback paths of the modulator are represented by the ideal current source with \(V_j\) as input.
##### OpAmp Integrator
In the OpAmp based integrator model, the dominant pole of the OpAmp is accounted for in order to investigate the most dominant non-ideal behavior. Via this pole, finite DC gain and finite GBW can be modeled. Each OpAmp based integrator represents two states in the state-space model of the overall modulator model used for optimization, which leads to lower simulation throughput than with the ideal models. The model supports resistive and capacitive inputs represented by \(V_i\) and \(V_k\) in the circuit-level diagram below. Feedback paths of the modulator are represented by the ideal current source with \(V_j\) as input. During the optimization, gain errors due to multiple resistive inputs are accounted for automatically.
##### OpAmp Integrator with Proportional Path
As in the OpAmp based integrator model without proportional path, the dominant pole of the OpAmp is accounted for in order to investigate the most dominant non-ideal behavior. Via this pole, finite DC gain and finite GBW can be modeled. Each OpAmp based integrator with proportional path also represents two states in the state-space model of the overall modulator model used for optimization, which leads to lower simulation throughput than with the ideal models. The proportional path is formed by the resistor \(R_p\). During the optimization, gain errors due to multiple resistive inputs are accounted for automatically. The model is implemented as shown in the circuit diagram below. Therefore, possible resonators feed back not only the integrated but also the proportionally scaled signal. The model supports resistive inputs represented by \(V_i\) in the circuit-level diagram below. Feedback paths of the modulator are represented by the ideal current source with \(V_j\) as input.
##### gm-C Integrator
As an alternative, an OTA based integrator model is available. Each input path is formed by its own OTA with input voltage \(V_i\) and the frequency-dependent gain \(g_\mathrm{m}(s)\), which shows one-pole behaviour. The pole location specifies the bandwidth of the OTA. A finite output transconductance is accounted for by \(g_\mathrm{out}\). The model is implemented as shown in the circuit diagram below. The model is represented in the state-space model of the overall modulator model by two states.
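To illustrate the effect of finite DC gain and GBW on an integrator, here is a small numerical sketch. It uses a textbook inverting RC integrator around a one-pole OpAmp, not the tool's internal state-space model, and all component values are assumptions:

```python
import numpy as np

# Node analysis of an inverting RC integrator around a one-pole OpAmp gives
#   H(s) = -A(s) / (1 + (1 + A(s)) * s * R * C),
# which reduces to the ideal -1/(s*R*C) as A(s) goes to infinity.

A_DC = 1000.0   # assumed finite DC gain
f_gbw = 100e6   # assumed gain-bandwidth product in Hz
RC = 1e-6       # assumed integrator time constant in seconds

def opamp_gain(s):
    """One-pole OpAmp gain; the pole sits at f_gbw / A_DC."""
    wp = 2 * np.pi * f_gbw / A_DC
    return A_DC / (1 + s / wp)

def integrator_gain(f):
    """Closed-loop integrator gain at frequency f in Hz."""
    s = 2j * np.pi * f
    a = opamp_gain(s)
    return -a / (1 + (1 + a) * s * RC)

for f in (10.0, 1e3, 1e5):
    ideal = 1 / (2 * np.pi * f * RC)
    print(f"{f:8.0f} Hz: non-ideal {abs(integrator_gain(f)):10.1f} vs ideal {ideal:10.1f}")
```

At low frequencies the magnitude saturates near \(A_\mathrm{DC}\) instead of growing without bound; this finite-DC-gain leakage is exactly the non-ideal behavior the optimizer has to cope with.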
#### Resonators
##### Ideal
An ideal resonator is ideal in every respect: there are no limitations in DC gain, GBW, Q, etc. Each ideal resonator represents two states in the state-space model of the overall modulator model, which is used for optimization. A circuit-level model incorporating an LC resonator is shown in the figure below. Here, the circuit-level model utilizes an ideal OpAmp with a transfer characteristic of \(A(s)= \infty\). Feedback paths of the modulator are represented by the ideal current source with \(V_j\) as input.
##### OpAmp LC Resonator
In the OpAmp based resonator model, the dominant pole of the OpAmp is accounted for in order to investigate the most dominant non-ideal behavior. Via this pole, finite DC gain and finite GBW can be modeled. Each OpAmp based resonator represents three states in the state-space model of the overall modulator model used for optimization, which leads to lower simulation throughput than with the ideal models. The model supports resistive inputs represented by \(V_i\) in the circuit-level diagram below. Feedback paths of the modulator are represented by the ideal current source with \(V_j\) as input. The finite resistance \(R_f\) is used to model a finite quality factor Q.
##### gm-LC Resonator
An OTA based resonator model is also available. Each input path is formed by its own OTA with input voltage \(V_i\) and the frequency-dependent gain \(g_\mathrm{m}(s)\), which shows one-pole behaviour. The pole location specifies the bandwidth of the OTA. The output transconductance \(g_\mathrm{out}\) is used to model a finite quality factor Q. The model is implemented as shown in the circuit diagram below. The model is represented in the state-space model of the overall modulator model by three states.
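As a rough aid to intuition for both lossy resonator models, the following sketch computes the resonance frequency and quality factor of a plain parallel RLC tank, where the parallel loss resistance plays the role of \(R_f\) (or of \(1/g_\mathrm{out}\)); all component values are assumptions:

```python
import math

L = 10e-9   # assumed inductance in henry
C = 1e-12   # assumed capacitance in farad
R = 5e3     # assumed parallel loss resistance in ohm

f0 = 1 / (2 * math.pi * math.sqrt(L * C))  # resonance frequency
Q = R * math.sqrt(C / L)                   # quality factor of the tank

print(f"f0 = {f0 / 1e9:.2f} GHz, Q = {Q:.0f}")  # f0 = 1.59 GHz, Q = 50
```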
### Dynamics
For each of the integrator and resonator models, an optimization goal for the output swing \(V_\mathrm{out}\) can be set. Modulators with swings lower than \(V_\mathrm{out,min}\) or higher than \(V_\mathrm{out,max}\) are rated worse than ones within these boundaries.
### DC gain
For the non-ideal OpAmp based models, a DC gain parameter \(A_\mathrm{DC}\) can be set. It can either be minimized or fixed to a given value.
• Minimize:
Allows the genetic algorithm to change the DC gain in order to find a minimal acceptable value. This option enables the choice of $$A_\mathrm{DC,min}$$ and $$A_\mathrm{DC,max}$$ boundaries for a defined, allowed range.
• Fixed:
Fixes the DC gain to a defined value during optimization. This option allows choosing a value in the range \(A_\mathrm{DC} \in [10;100000]\) on an absolute scale.
• Hints for Optimization:
Low DC gain values will result in poor linearity, which is not investigated during the optimization!
### GBW/Bandwidth
Depending on the type of the non-ideal models, GBW or bandwidth is available as a parameter: GBW for OpAmp based models, bandwidth for OTA based models. Both \(f_\mathrm{gbw},f_\mathrm{b} \in [0.01;10]\) are normalized to \(f_\mathrm{s}\). Values higher than 10 times \(f_\mathrm{s}\) are considered ideal and, thus, not supported. Both can be minimized or fixed to a given value.
• Minimize:
Allows the genetic algorithm to change \(f_\mathrm{gbw}/f_\mathrm{b}\) in order to find a minimal acceptable value. This option enables the choice of \(f_\mathrm{min}\) and \(f_\mathrm{max}\) boundaries for an allowed range.
• Fixed:
Fixes $$f_\mathrm{gbw}/f_\mathrm{b}$$ to a defined value during optimization.
### Resonance Frequency
For resonator models, the resonance frequency parameter $$f_0$$ is shown. It is connected to the design goal that tries to place this frequency in the inband of the modulator. $$f_0$$ is normalized to $$f_\mathrm{s}$$.
• Optimize:
Allows the genetic algorithm to change \(f_0\) in order to find an optimal value. This option enables the choice of \(f_\mathrm{min}\) and \(f_\mathrm{max}\) boundaries for an allowed range.
• Fixed:
Fixes $$f_0$$ to a defined value during optimization.
### Proportional Path
This parameter is only available for the model type OpAmp Integrator with Proportional Path. Possible options are:
• Optimize:
Optimize enables the genetic algorithm to change the value of this coefficient in order to find a better modulator. This option enables the choice of $$k_\mathrm{min}$$ and $$k_\mathrm{max}$$ boundaries for a defined, allowed range.
• Fixed:
Fixes the coefficient $$k \in [0.00001;10)$$ to a defined value during optimization.
## High-Level Coefficients
In the high-level block diagram, the modulator coefficients are ideal signal-scaling blocks. Nevertheless, in order to also account for low-level behavior, some additional options are implemented if certain integrator or resonator models are selected. E.g., if the OpAmp Integrator model is selected, the designer can choose between resistive or capacitive coupling of the feed-forward paths targeting that integrator, which affects the output swings and the integrator gain error.
For the optimization, all coefficients of the modulator can be configured individually. The designer can set fixed coefficient values or optimize the values, with or without previously set upper and lower boundaries as optimization goals.
| Parameter | Description |
| --- | --- |
| Activate/Deactivate | Enable/disable the path during optimization. |
| Type | Path realization via resistor. |
| Value | Optimize the coefficient value. |
### Activate/Deactivate
This option adds or removes the coefficient from the block diagram. It is available for all coefficients but \(b_1\), since this one is considered mandatory.
### Coupling Type
If the OpAmp Integrator model is selected, there are two options for the coupling of the coefficient with the integrator. For all other models, the respective modelled coupling type is used.
• Resistive:
Resistive input means the input to the integrator is formed by a resistor as shown below. This path affects the integrator gain, which is accounted for automatically.
• Capacitive:
Capacitive input means the input to the integrator is formed by a capacitor, as shown below. In this case, a high-level path according to the model in the main window is formed across an integrator, even though in the low-level domain the signal is forwarded through the integrator by its proportional behavior. This path does not affect the integrator gain, but it does affect the integrator swing. Capacitive paths with ideal models can be investigated by inserting appropriate ideal paths.
### Coefficient Value
The coefficient value is the main parameter of this block. It represents the high-level coefficient itself. It can be either fixed to a certain value or optimized.
• Optimize:
This option enables the genetic algorithm to change the value of this coefficient in order to find a better modulator. It also enables the choice of minimum and maximum boundaries defining the range allowed for the coefficient.
• Fixed:
It is possible to set the coefficient to a defined value during optimization. Possible values are: $$b_x,c_x,d_x,e_x \in [0.00001;10)$$.
• Same as $$a_1$$:
Fixes the coefficient \(b_1\) to the same value as \(a_1\) during optimization. This option is set by default for lowpass modulators. Modulators which have no inputs to the first summing node other than the paths through \(a_1\) and \(b_1\) exhibit an STF of 0 dB at low frequencies if \(b_1\) equals \(a_1\).
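A brief sketch of why this holds (a simplification that ignores any other feed-in and feedback paths): writing \(G(f)\) for the forward-path gain from the first summing node to the modulator output, the signal transfer function is approximately

\[
\mathrm{STF}(f) = \frac{b_1\,G(f)}{1 + a_1\,G(f)} \longrightarrow \frac{b_1}{a_1} \quad \text{for } |G(f)| \to \infty,
\]

and since the loop gain is large at low frequencies, \(b_1 = a_1\) yields a low-frequency STF of 1, i.e. 0 dB.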
## Internal Quantizer
The quantizer is considered an ideal element; optionally, dithering can be applied.
| Parameter | Description |
| --- | --- |
| Quantizer levels | Number of quantizer levels |
| Dithering | Add random noise onto the quantizer input |
| Dynamics | Optimize for a defined quantizer input swing |
### Quantizer Levels
This parameter defines the number of levels \(N_\mathrm{levels} \in [2;64]\) in the quantizer. An even number results in a mid-rise, while an odd number results in a mid-tread quantizer characteristic.
### Dithering
This option adds noise onto the quantizer input at each sampling instant. The noise is uniformly distributed and the interval is defined by +/- the chosen value. Possible values for dithering are in the range \([0;1]\); the value is normalized to one quantizer step width and thus scales down automatically with an increasing number of quantizer levels.
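A minimal sketch of such a quantizer with optional dithering (our own illustration, with full scale assumed to be \([-1;1]\)):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, n_levels: int, dither: float = 0.0):
    """Ideal quantizer: even n_levels -> mid-rise, odd -> mid-tread.

    dither adds uniform noise of +/- dither step widths to the input.
    """
    x = np.asarray(x, dtype=float)
    step = 2.0 / (n_levels - 1)                      # assumed full scale [-1, 1]
    x = x + rng.uniform(-dither, dither, size=x.shape) * step
    if n_levels % 2 == 0:                            # mid-rise: no level at 0
        q = (np.floor(x / step) + 0.5) * step
    else:                                            # mid-tread: level at 0
        q = np.round(x / step) * step
    return np.clip(q, -1.0, 1.0)

print(quantize([0.02, -0.6], n_levels=2))  # [ 1. -1.]
print(quantize([0.02, -0.6], n_levels=3))  # [ 0. -1.]
```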
### Dynamics
For the quantizer, an optimization goal for the input swing \(V_\mathrm{in}\) can be set. Modulators with swings lower than \(V_\mathrm{in,min}\) or higher than \(V_\mathrm{in,max}\) are rated worse than ones within these boundaries.
## D/A Converter
The D/A converters (DACs) of the block diagram form the feedback paths and can account for some low-level behavior. Different input coupling options are available if certain integrator or resonator models are selected. E.g., if the OpAmp Integrator model is selected, the designer can choose between resistive and current-source coupling, which affects the output swings and the integrator gain error; moreover, the last feedback DAC can be coupled capacitively.
For the optimization, all DACs of the modulator can be configured individually. The designer can set fixed coefficient values or optimize the values, with or without previously set upper and lower boundaries as optimization goals. Further, the output waveform and a local delay can be set.
| Parameter | Description |
| --- | --- |
| Activate/Deactivate | Enable/disable the path during optimization. |
| Type | Path realization via resistor. |
| Value | Optimize the coefficient value. |
| D/A settings | Local delay of the signal. |
### Activate/Deactivate
This option adds or removes the DAC from the block diagram. It is available for all DACs but \(a_1\), since this one is considered mandatory.
### Type
There are up to three options for the coupling of the DAC with the integrator or resonator, depending on the selected model. The last DAC has more options, as its signal is added to the output of the last integrator.
• Resistive:
Resistive input means the input is formed by a resistor as seen below. This path affects the integrator gain, which is accounted for automatically. OpAmp based non-ideal integrators allow this option.
• Capacitive:
Capacitive input is only available for the last DAC in combination with the non-ideal OpAmp Integrator model. It means that the path is formed by a capacitor to the last integrator, as seen below. In the block diagram this path is represented by a summation of the signals after the integrator. In this case, a high-level path according to the model in the main window is formed across an integrator, even though in the low-level domain the signal is forwarded through the integrator by its proportional behavior. This path does not affect the integrator gain, but it does affect the integrator swing. The capacitive path with the ideal model can be investigated by inserting the appropriate ideal path.
• Current Source:
Current source means the input is formed by a current source. The integrator/resonator gain is not affected.
### Coefficient Value
The DAC coefficient value is the main parameter of this block. It represents the high-level coefficient itself. It can be either fixed to a certain value or optimized.
• Optimize:
This option enables the genetic algorithm to change the value of this coefficient in order to find a better modulator. It also enables the choice of minimum and maximum boundaries defining the range allowed for the coefficient.
• Fixed:
It is possible to set the coefficient to a defined value during optimization. Possible values are: $$a_x \in [0.00001;10)$$.
### D/A settings
• Local ELD:
The value for the local excess loop delay can be chosen as ELD \(\in [0;1]\). In contrast to the global ELD, it is an individual value for each DAC. The sum of global and local ELD has to be in the range \([0;1]\). Here, the value is normalized to the duration of one sampling clock cycle \(T_\mathrm{s} = \frac{1}{f_\mathrm{s}}\).
• Waveform:
The DAC output waveform can be chosen individually for each feedback DAC. Non-return-to-zero (NRZ), return-to-zero (RZ), raised cosine (RCOS) and exponential decay (EXP) are available.
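The four waveforms can be pictured as pulse shapes over one sampling period \(T_\mathrm{s}\). The sketch below uses assumed pulse definitions (e.g., a 50% duty cycle for RZ and a free decay constant for EXP) purely for illustration, with the local ELD shifting the pulse:

```python
import numpy as np

def dac_pulse(kind: str, t, eld: float = 0.0, tau: float = 0.2):
    """Pulse value at time t (in units of Ts), delayed by a local ELD."""
    t = np.asarray(t, dtype=float) - eld
    if kind == "NRZ":
        return np.where((t >= 0) & (t < 1.0), 1.0, 0.0)
    if kind == "RZ":    # assumed 50% duty cycle
        return np.where((t >= 0) & (t < 0.5), 1.0, 0.0)
    if kind == "RCOS":  # assumed raised-cosine shape over one period
        return np.where((t >= 0) & (t < 1.0), 0.5 * (1 - np.cos(2 * np.pi * t)), 0.0)
    if kind == "EXP":   # assumed exponential decay with time constant tau
        return np.where(t >= 0, np.exp(-t / tau), 0.0)
    raise ValueError(f"unknown waveform: {kind}")

t = np.linspace(0, 1, 5, endpoint=False)
print(dac_pulse("RZ", t, eld=0.25))  # [0. 0. 1. 1. 0.]
```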
## Global Excess-Loop-Delay
The Global Excess-Loop-Delay block adds a global ELD to all feedback paths.
| Parameter | Description |
| --- | --- |
| Excess-loop-delay | Combined delay of the quantizer and DAC. |
### Excess-loop-delay
The value for the ELD can be chosen as ELD \(\in [0;1]\). The sum of global and local ELD has to be in the range \([0;1]\). Here, the value is normalized to the duration of one sampling clock cycle \(T_\mathrm{s} = \frac{1}{f_\mathrm{s}}\).
https://labs.tib.eu/arxiv/?author=L.%20Del%20Frate
### The MEG detector for $\mu^+ \to e^+ \gamma$ decay search (arXiv:1303.2348)
April 10, 2013 hep-ex, physics.ins-det
The MEG (Mu to Electron Gamma) experiment has been running at the Paul Scherrer Institut (PSI), Switzerland since 2008 to search for the decay $\mu^+ \to e^+ \gamma$ by using one of the most intense continuous $\mu^+$ beams in the world. This paper presents the MEG components: the positron spectrometer, including a thin target, a superconducting magnet, a set of drift chambers for measuring the muon decay vertex and the positron momentum, a timing counter for measuring the positron time, and a liquid xenon detector for measuring the photon energy, position and time. The trigger system, the read-out electronics and the data acquisition system are also presented in detail. The paper is completed with a description of the equipment and techniques developed for the calibration in time and energy and the simulation of the whole apparatus.
https://math.stackexchange.com/questions/1285867/minimal-value-polynomials-with-integer-coefficients
# minimal value polynomials with integer coefficients
Let $D$ be the set of polynomials with integer coefficients $f\in\mathbb{Z}[x]$ such that $f(x)\ge 0$ for $x\in[-2,2]$, where the zero polynomial $f=0$ is excluded. Can I find a finite "minimal" set $F\subseteq D$ on $[-2,2]$? That is, $F$ is finite and for each $f\in D$, there exists $g\in F$ such that $f(x)\ge g(x)$ for every $x\in[-2,2]$. If not, can I at least find a finite minimal set for each degree? I don't even know how to approach this problem, except for the degree 1 case.
• Why not $F:=\{0\}$? – Guest May 17 '15 at 5:35
• Ah, it's my mistake. We must trivially exclude the zero. – D. Lee May 17 '15 at 5:37
Let $n$ be an even natural number greater than the degrees of every polynomial in $F$, and consider $x^n$ (which lies in $D$ since $n$ is even). We must have $0 \le g(x) \le x^n$ on $[-2,2]$ for some $g\in F$. Note that in particular we have $g(0) = 0$, and since $g(x) \ge 0$ in a neighborhood of $0$, the nonzero coefficient of lowest degree must be positive. However, $x^n < g(x)$ for $x > 0$ suitably close to $0$ for such polynomials $g$, since that lowest coefficient necessarily sits in degree less than $n$. This contradiction shows that no such $F$ can exist.
If you want counterexamples in a specified degree, note that each such polynomial intersects the $x$-axis finitely often, and therefore the set of those $x$ coordinates such that $g(x) = 0$ is finite. You can choose a rational $p/q$ that is not in this set, and then set $f(x) = (qx-p)^2h(x)$, where $h(x)$ is any polynomial that is positive on $[-2,2]$. Then $f(p/q) = 0 < g(p/q)$ for all $g$ in your finite set $F$. This covers every case except the case of linear functions.
In that case choose a point $(r_1,r_2)$ lying below the graphs of every element of $F$ (such a point exists if you take $r_1 = p/q$ as above, and then set $r_2$ less than all values of $g(p/q)$). Then the line between the points $(r_1, r_2)$ and $(-2, 0)$ will provide the counterexample.
https://community.developer.atlassian.com/t/parameterized-ao-test-is-it-possible/39986
# Parameterized AO test - is it possible?
Hi all!
I am trying to write Parameterized tests for AO.
Like basic manual, I took this.
My test class looks like:
```java
@Jdbc
@Data(TestDatabaseUpdater.class)
@RunWith(ActiveObjectsJUnitRunner.class)
public class MyClassAOTest {}
```
From the manual I know I need to use the runner:
```java
@RunWith(Parameterized.class)
```
And the problem is that @RunWith can't take two parameters like this:
```java
@RunWith({ActiveObjectsJUnitRunner.class, Parameterized.class})
```
and it can't be duplicated in the test class like this:
```java
@RunWith(ActiveObjectsJUnitRunner.class)
@RunWith(Parameterized.class)
```
Therefore, I realized that it is impossible to write parameterized tests for AO. Or am I mistaken and is there another way?
https://eprint.iacr.org/2010/363
An Analysis of Affine Coordinates for Pairing Computation
Kristin Lauter, Peter L. Montgomery, and Michael Naehrig
Abstract
In this paper we analyze the use of affine coordinates for pairing computation. We observe that in many practical settings, for example when implementing optimal ate pairings in high security levels, affine coordinates are faster than using the best currently known formulas for projective coordinates. This observation relies on two known techniques for speeding up field inversions which we analyze in the context of pairing computation. We give detailed performance numbers for a pairing implementation based on these ideas, including timings for base field and extension field arithmetic with relative ratios for inversion-to-multiplication costs, timings for pairings in both affine and projective coordinates, and average timings for multiple pairings and products of pairings.
Category
Implementation
Publication info
Published elsewhere. Pairing 2010
Keywords
Pairing computation, Miller's algorithm, affine coordinates, optimal ate pairing, finite field inversions, pairing cost, multiple pairings, pairing products.
Contact author(s)
mnaehrig @ microsoft com
History
2010-10-12: last of 2 revisions
Short URL
https://ia.cr/2010/363
CC BY
BibTeX
@misc{cryptoeprint:2010/363,
author = {Kristin Lauter and Peter L. Montgomery and Michael Naehrig},
title = {An Analysis of Affine Coordinates for Pairing Computation},
howpublished = {Cryptology ePrint Archive, Paper 2010/363},
year = {2010},
note = {\url{https://eprint.iacr.org/2010/363}},
url = {https://eprint.iacr.org/2010/363}
}
https://rdrr.io/cran/label.switching/man/aic.html
aic: Artificial Identifiability Constraints
Description
This function relabels the MCMC output by simply ordering a specific parameter. Let m, K and J denote the number of simulated MCMC samples, number of mixture components and different parameter types, respectively.
Usage
```r
aic(mcmc.pars, constraint)
```
Arguments
| Argument | Description |
| --- | --- |
| mcmc.pars | \(m\times K\times J\) array of simulated MCMC parameters. |
| constraint | An integer between 1 and J corresponding to the parameter that will be used to apply the identifiability constraint. In this case, the MCMC output is reordered according to the constraint mcmc.pars[i,1,constraint] < … < mcmc.pars[i,K,constraint], for all i = 1, …, m. If constraint = "ALL", all J identifiability constraints are applied. |
Value
| Value | Description |
| --- | --- |
| permutations | an \(m\times K\) array of permutations. |
Author(s)
Panagiotis Papastamoulis
Examples
```r
# load a toy example: MCMC output consists of the random beta model
# applied to a normal mixture of K=2 components. The number of
# observations is equal to n=5. The number of MCMC samples is
# equal to m=300. The generated MCMC samples are stored
# to array mcmc.pars.
data("mcmc_output")
mcmc.pars <- data_list$"mcmc.pars"
# mcmc parameters are stored to array mcmc.pars
# mcmc.pars[,,1]: simulated means of the two components
# mcmc.pars[,,2]: simulated variances of the two components
# mcmc.pars[,,3]: simulated weights of the two components
# We will apply AIC by ordering the means,
# which corresponds to value constraint=1
run <- aic(mcmc = mcmc.pars, constraint = 1)
# apply the permutations returned by typing:
reordered.mcmc <- permute.mcmc(mcmc.pars, run$permutations)
# reordered.mcmc[,,1]: reordered means of the two components
# reordered.mcmc[,,2]: reordered variances of the components
# reordered.mcmc[,,3]: reordered weights
```
http://physics.stackexchange.com/questions?page=799&sort=newest&pagesize=50
# All Questions
182 views
### Fluid to particles under newtonian gravity
How do I start with the concept of a perfect fluid and reach (by approximations through certain mathematically well-defined assumptions) the concept of a particle? Here Newtonian gravitation is being ...
4k views
### Spectral Line Width and Uncertainty principle
So I've been at this for about 3-4 hours now. It is a homework assignment (well, part of a question which I've already completed). We did not learn this in class. All work is shown below. An ...
202 views
### Does the recent re-count of stars in elliptical gallaxies affect our understanding of the universal mass balance?
I've seen several popular reports of a new count of low-mass stars in elliptical galaxies (here's one). Edit: Pursuant to several correct comments I've changed the title to agree with the actual ...
4k views
### Applications of Algebraic Topology to physics
I have always wondered about applications of algebraic topology to physics, seeing as I am studying algebraic topology and physics is cool and pretty. My initial thoughts would be that since most ...
3k views
### Classical mechanics without coordinates book
I am a math grad student who would like to learn some classical mechanics. The caveat is I am not too interested in the standard coordinate approach. I can't help but think of the fields that arise in ...
284 views
### Can a web community write papers? [closed]
The internet has changed science drastically, not only in terms of distributing knowledge, e.g. via online encyclopedias such as Wikipedia and freely available sources of publications such as arXiv, but also as a ...
612 views
### “Natural units” of mass
Gravitational attraction is given by $\frac{GMm}{r^2}$ while attraction due to electric charge is given by $\frac{q_1 q_2}{r^2}$. Why does gravity need a constant while electric charge doesn't? ...
604 views
### What if physical constants were increased or decreased? [closed]
(Probably related to this one, and probably should be CW.) A very long time ago, I had the good fortune to read George Gamow's excellent series of Mr. Tompkins books. That introduced me to the idea ...
1k views
### Deriving the speed of the propagation of a change in the Electromagnetic Field from Maxwell's Equations
I've been told that, from Maxwell's equations, one can find that the propagation of change in the Electromagnetic Field travels at a speed $\frac{1}{\sqrt{\mu_0 \epsilon_0}}$ (the values of which can ...
411 views
### How to determine the (n,m) dimensions of a carbon nanotube?
I've been reading about nanotubes lately, and I keep seeing the $(n,m)$ notation. How does this describe a nanotube's structure? How do I determine which is $n$ and which is $m$ ? I'm familiar with ...
2k views
### Calculating de Broglie wavelength
Hey, trying to finish an assignment but having some trouble with it. I will show all my work. The topic is on wave/particle dualty, uncertainty principle (second year modern physics course). So the ...
4k views
### Which experiments prove atomic theory?
Which experiments prove atomic theory? Sub-atomic theories: atoms have: nuclei; electrons; protons; and neutrons. That the number of electrons atoms have determines their relationship with other ...
6k views
### Accelerating particles to speeds infinitesimally close to the speed of light?
I'm in a freshmen level physics class now, so I don't know much, but something I heard today intrigued me. My TA was talking about how at the research facility he worked at, they were able to ...
301 views
### Creation of the Electromagnetic Spectrum [closed]
After seeing this image: http://mynasadata.larc.nasa.gov/images/EM_Spectrum3-new.jpg And reading this: "The long wavelength limit is the size of the universe itself, while it is thought that the ...
375 views
### Magnetism-Related Terminology
A few questions about a magnet and a paperclip: What do you call a material that attracts another material via magnetism? (i.e. the magnet) What do you call the material that is attracted in #1? ...
405 views
### CPT and heat equation
I haven't understood this thing: physics is invariant under CPT transformations... But the heat or diffusive equation $\nabla^2 T=\partial_t T$ is not invariant under time reversal... but it's P invariant... So CPT ...
690 views
### Nonlinear optics as gauge theory
The widely used approach to nonlinear optics is a Taylor expansion of the dielectric displacement field $\mathbf{D} = \epsilon_0\cdot\mathbf{E} + \mathbf{P}$ in a Fourier representation of the ...
571 views
### An iPhone falling on carpet is fine, is it true? [closed]
I heard that an iPhone falling on carpet would not suffer damage, either internal or external. Any physical explanation for this?
804 views
### Relation of angular speed of a rigid body to Euler's Angles
My question was like this, and I have realised a few things but still have some doubts. I have a book in which a paragraph goes like this: Now, $\dot\phi$, $\dot \theta$, $\dot\psi$ are respectively ...
879 views
### Separation of variables, eigenfunctions of the Dirac operator
Disclaimer: I am not a physicist; I am a geometer (and a student!) trying to learn some physics. Please be gentle. Thanks! When solving the Schrödinger equation for a particle in a spherical ...
1k views
### Why are there 3 quarks in proton?
A few quark-related questions (I don't know much about them other than there are 2 flavours concerning protons and neutrons): Why are there 3 quarks in a proton or neutron? Why not 2 or 4? Is there an ...
216 views
### Is it possible to determine timescales of electron dynamics from the natural linewidth of an electronic transition?
A lot of work has been done recently on electron dynamics using attosecond pump-probe techniques; for instance in this paper. In this particular paper, the authors photoionized the neutral ...
5k views
### Is it possible to obtain gold through nuclear decay?
Is there a series of transmutations through nuclear decay that will result in the stable gold isotope ${}^{197}\mathrm{Au}$ ? How long will the process take?
2k views
### Why are quark types known as flavors?
There are six types of quarks, known as flavors. Why were these types called flavors? Why do the flavors have such odd names (up, down, charm, strange, top, and bottom)?
2k views
### Spherical wave as sum of plane waves
How can we do this computation? $\iiint_{\mathbb{R}^3} \frac{e^{ik'r}}{r}\, e^{i(k_1x+k_2y+k_3z)}\,dx\,dy\,dz$ where $r=\sqrt{x^2+y^2+z^2}$? I think we must use distributions... Physically, it's equivalent to ...
1k views
### Ising model for dummies
I am looking for some literature on the Ising model, but I'm having a hard time doing so. All the documentation I seem to find is way over my knowledge. Can you direct me to some documentation on it ...
542 views
### Light emission spectrum units
Does someone know the units of the spectra provided here? It seems obvious enough that it's said nowhere, but even Wikipedia and other sites are quite blurry on this point. So, is it power ($W$), ...
2k views
### Why does my watch act like a mirror under water?
I have a digital watch, rated to go underwater to $100 \rm m$. When it is underwater it can be read normally, up until you reach a certain angle, then suddenly, it becomes almost like a mirror, ...
385 views
### What is an analog to QM's Hilbert space in GR?
I've read that QM operates in a Hilbert space (where the state functions live). I don't know if its meaningful to ask such a question, what are the answers to an analogous questions on GR and ...
976 views
### How many Onsager's solutions are there?
Update: I provided an answer of my own (reflecting the things I discovered since I asked the question). But there is still lot to be added. I'd love to hear about other people's opinions on the ...
844 views
### Does the energy of a magnetic field decrease when it moves a conductor carrying a current?
When a charged particle moves in an electric field, the field performs work on the particle. Thus, the energy of the field decreases, turning into kinetic energy of the particle. Does the magnetic ...
10k views
### How efficient is an electric heater?
How efficient is an electric heater? My guess: greater than 95%. Possibly even 99%. I say this because most energy is converted into heat; some is converted into light and kinetic energy, and ...
155 views
### Can I parameterize the state of a quantum system given reduced density matrices describing its subparts?
As the simplest example, consider a set of two qubits where the reduced density matrix of each qubit is known. If the two qubits are not entangled, the overall state would be given by the tensor ...
605 views
### Is there a conserved quantity that enforces planar orbits in central force motion?
From what I remember, one of the first steps in finding the equations of motion for an orbiting body is to argue that the body's motion has to be restricted to a plane, because the central force has ...
5k views
### Why is the mapped universe shaped like an hourglass?
I've watched a video from the American National History Museum entitled The Known Universe. The video shows a continuous animation zooming out from earth to the entire known universe. It claims to ...
1k views
### How far does a trampoline vertically deform based on the mass of the object?
If a baseball is dropped on a trampoline, the point under the object will move a certain distance downward before starting to travel upward again. If a bowling ball is dropped, it will deform further ...
3k views
### Relation between density and pressure for a perfect fluid
What is the relation between mass density $\rho$ and pressure $P$ for a perfect fluid?
140 views
### Behaviour of mass and momentum distributions under Newtonian Gravity
In the context of this question should mass distribution $\rho(r,t)$ and momentum distribution $p(r,t)$ be well behaved ? By 'well behaved' it is meant that derivatives of all orders exist everywhere. ...
434 views
### If time standard clocks and any memories about the time standard are destroyed, can we recover the time standard again?
Assume the time standard clocks and any memories about the time standard are destroyed. Can we recover the time standard again exactly? Recovering the time standard again means we can determine the ...
1k views
### Why some nuclei with “magic” numbers of neutrons have a half-life less than their neighbor isotopes?
It's easy to find the "magic" numbers of neutrons on the diagrams of alpha-decay energy: 82, 126, 152, 162. Such "magic" nuclei should be more stable than their neighbors. But why some nuclei ...
3k views
### Reciprocal Lattices
Is there an easy way to understand and/or visualize the reciprocal lattice of a two or three dimensional solid-state lattice? What is the significance of the reciprocal lattice, and why do solid ...
1k views
### Appearance of atoms
I was watching a documentary entitled "The Atom" and one of the statements made was that atoms behave differently when we look at them. I wasn't too sure about the reasoning behind this and I'm hoping ...
182 views
### Evolution of mass and velocity distributions under newtonian gravitation
Let $\rho(r,t)$ and $v(r,t)$ be mass and velocity distributions. Given $\rho(r,0)$ and $v(r,0)$ (initial conditions) what is the differential equation that describes the evolution of $\rho(r,t)$ and ...
6k views
### Is there a name for the derivative of current with respect to time, or the second derivative of charge with respect to time?
This measurement comes up a lot in my E&M class, in regards to inductance and inductors. Is there really no conventional term for this? If not, is there some historical reason for this omission? ...
354 views
### What is the definition of momentum when a mass distribution $\rho(r,t)$ is given?
This question is Edited after recieving comments. What is the definition of momentum when a mass distribution $\rho(r,t)$ is given? Assuming a particle as a point mass we know the definition of ...
661 views
### General relativity (gravitation) in time and one spatial dimension
I don't have any idea of general relativity but intend to learn. Is it a good idea to study general relativity in two dimensions (time and single spatial dimension) in the begining to get good idea on ...
4k views
### Why is Physics so hard? [closed]
I'm in a classical mechanics class now. On our exams, most questions are quantitative. And in general, besides the theory part, all physics problems just require you to gather formulas, manipulate ...
647 views
### 2nd Law of Thermodynamics
I understand that the 2nd law of thermodynamics roughly states that, if you have a body (or a gas in a chamber) that is hot at one end and cold on the other, the heat will always flow from the hot to ...
https://www.scielosp.org/article/csp/2013.v29n6/1195-1204/en/ | ARTIGO ARTICLE
Risk factor control in hypertensive and diabetic subjects attended by the Family Health Strategy in the State of Pernambuco, Brazil: the SERVIDIAH study
Controle dos fatores de risco em hipertensos e diabéticos acompanhados pela Estratégia Saúde da Família no Estado de Pernambuco, Brasil: estudo SERVIDIAH
Control de los factores de riesgo en hipertensos y diabéticos seguidos por la Estrategia Salud de la Familia en el estado de Pernambuco, Brasil: estudio SERVIDIAH
Annick FontbonneI; Eduarda Ângela Pessoa CesseII; Islândia Maria Carvalho de SousaII; Wayner Vieira de SouzaII; Vera Lúcia de Vasconcelos ChavesII; Adriana Falangola Benjamin BezerraIII; Eduardo Freese de CarvalhoII
IUMR 204 Nutripass, Institut de Recherche pour le Développement, Montpellier, France
IICentro de Pesquisas Aggeu Magalhães, Fundação Oswaldo Cruz, Recife, Brasil
IIICentro de Ciências da Saúde, Universidade Federal de Pernambuco, Recife, Brasil
ABSTRACT
The SERVIDIAH study (Evaluation of Health Services for Diabetic and Hypertensive Subjects) was conducted in 2010 in the State of Pernambuco, Brazil. A multi-stage random sample of 785 hypertensive and 823 diabetic patients was drawn from 208 Family Health Strategy (FHS) units selected throughout 35 municipalities. Patients underwent a structured interview and weight, height, blood pressure and HbA1c levels (for diabetic patients) were measured. Mean age was approximately 60 years, and women were overrepresented in the sample (70%). 43.7% of hypertensive subjects and 25.8% of diabetic subjects achieved adequate blood pressure control and 30.5% of diabetic subjects had HbA1c levels below 7%. Despite 70% of the patients being overweight or obese, few had adhered to a weight-loss diet. The study of this representative sample of hypertensive and diabetic patients attended by the FHS in the State of Pernambuco shows that improvements in the management of hypertension and diabetes are needed in order to prevent the occurrence of serious and costly complications, especially given the context of increasing incidence of these two conditions.
Hypertension; Diabetes Mellitus; Family Health Program; Risk Factors
Introduction
The rapid increase in prevalence of common nutrition-related chronic diseases, such as arterial hypertension and type 2 diabetes, is a worldwide phenomenon and poses one of the major challenges to public health in the 21st century 1. The diagnosis of these diseases is often delayed due to lack of symptoms and the resulting complications are costly for the patient, in terms of health and quality of life, and for society 2. The prevention of these complications requires early diagnosis and interventions that favorably influence prognosis, as well as regular monitoring to control blood pressure, blood glucose levels and other cardiovascular risk factors 3,4. The systematic, long-term surveillance of patients with chronic illnesses to avoid later life-threatening or incapacitating conditions is not symptom-driven and represents a shift in approach for most health care systems that were originally organized to react to acute illnesses by diagnosing the problem, initiating treatment and dismissing the patient until another acute episode requires renewed medical attention 5,6.
In 2001, the Brazilian Ministry of Health launched a plan to improve primary care for hypertensive and diabetic individuals which focuses on three key areas: continuous professional education, promotion of healthy lifestyle habits and registration of hypertensive and diabetic patients in primary care units 7. Primary care in Brazil, particularly in the North-east Region, is increasingly provided through the Family Health Strategy (FHS). An FHS team comprises at least one physician, one nurse, one nurse technician and a lay Community Health Worker (CHW), and is responsible for the primary care of the population of a specific geographical area. This strategy has been shown to have a positive impact on various health and social indicators 8,9. However, to date, few studies have assessed the implementation and real results of the plan to improve primary care for hypertensive and diabetic individuals within the FHS. This was the main aim of the SERVIDIAH study (Evaluation of Health Services for Diabetic and Hypertensive Subjects), conducted between 2009 and 2010, which analyzed a representative sample of hypertensive and diabetic individuals registered in the FHS in the State of Pernambuco, Brazil.
One of the objectives of the plan to improve primary care for hypertensive and diabetic subjects is to achieve control of hypertension and diabetes and other cardiovascular risk factors to prevent potential future complications. The objective of the present analysis of the SERVIDIAH database was to describe and compare the degree of control of these disorders among subjects attended by the FHS in the State of Pernambuco by municipality size.
Methodology
The SERVIDIAH study is an epidemiological survey of a representative sample of patients with hypertension or type 2 diabetes registered with FHS units in the State of Pernambuco conducted between November 2009 and December 2010. The study covered 35 municipalities (Figure 1), of which 16 were small (less than 20,000 inhabitants), 16 were medium-sized (between 20,000 and 100,000 inhabitants), and three were large (over 100,000 inhabitants). The small and medium-sized municipalities were randomly selected from the 2000 Brazilian census using the random command of Epi Info software (Centers for Disease Control and Prevention, Atlanta, USA), while the three large municipalities (Recife, Caruaru and Petrolina) were chosen because they are the capital cities of the three macroregions of the State of Pernambuco (Zona da Mata, Agreste, Sertão). One out of ten FHS units was randomly selected from the FHS units listed in August 2008 on the National Register of Health Establishments (CNES), the most recently consolidated database available at the time of study design.
Selection of subjects
One CHW from each selected FHS unit (a total of 37 in small municipalities, 98 in medium-sized municipalities and 73 in large municipalities) was selected to participate in the study. Subjects were randomly selected from the patient list compiled by each CHW since this record is considered the most accurate and up to date patient registry. The following numbers of hypertensive and diabetic subjects over 20 years of age were selected from these lists: small municipalities (six hypertensive and six diabetic subjects); medium-sized municipalities (three hypertensive and three diabetic subjects); large municipalities (four hypertensive and four diabetic subjects). The aim was to ensure an even balance of around 300 subjects in each municipality size category, allowing detection of a 10% difference in control between municipality sizes, with a statistical power of 80% and a 5% significance level.
Subjects registered by the CHW as having arterial hypertension but not diabetes were considered hypertensive and those registered as having type 2 diabetes were considered diabetic, regardless of the presence or absence of associated arterial hypertension. Sample members were selected randomly from the list until the desired final number of hypertensive and diabetic subjects for each FHS unit was obtained. All efforts were made to include all designated subjects. Those subjects that were unable to participate in an interview at the FHS unit itself, or some local public facility nearby or on a weekday, were interviewed at home or at the workplace or on weekends. In order to maintain representativeness, subjects were not substituted when it was not possible to interview the individual. The general response rate was 86.7%.
The study was approved by the Ethics Committee of the Aggeu Magalhães Research Centre, Oswaldo Cruz Foundation (CPqAM/Fiocruz) and the Brazilian National Commission of Ethics in Research (CONEP). All participating subjects were duly informed of the objectives and procedures of the study, and all signed an informed consent form.
Data collection
Data was collected using a structured questionnaire and face-to-face interviews conducted by trained investigators. Information was gathered about demographic and socio-economic characteristics, physical activity (no/yes), tobacco consumption, adhesion to salt-free, sugar-free, and/or weight-loss diets, hypertension or diabetes care received at the FHS unit, type of treatment and clinical control of hypertension or diabetes and of other cardiovascular risk factors, as well as satisfaction with the care received and potential expenses related to health care.
Systolic and diastolic blood pressures were measured to the nearest mmHg at the left wrist three times during the interview, with the subject seated for at least 10 minutes prior to measurement. The mean of the three values was used for the analysis. Blood pressure was considered well controlled if mean systolic and mean diastolic blood pressure were below 140mmHg and 90mmHg, respectively, in hypertensive subjects, and below 130mmHg and 80mmHg, respectively, in diabetic subjects 3.
Glycemic control was assessed by measuring HbA1c levels from a capillary blood sample using the in2it point-of-care analyzer (Bio-Rad Laboratories Inc., Berkeley, USA). In accordance with the recommendations of the American Diabetes Association 4, diabetes was considered as well controlled if HbA1c was below 7%.
Study subjects were weighed wearing light clothing using a Tanita BC553 electronic scale (Tanita Corp., Tokyo, Japan), which is accurate to the nearest 0.1 kg, while height was measured using a portable stadiometer (Alturaexata, Belo Horizonte, Brazil), which is accurate to the nearest mm. Body mass index (BMI) was calculated as weight divided by squared height (kg/m²). Subjects with a BMI between 25 and 29.9 were considered overweight and subjects with a BMI equal to or greater than 30 were classified as obese (World Health Organization criteria) 10.
Statistical analysis
Hypertensive and diabetic subjects were grouped into independent samples and analyzed separately. Continuous variables are expressed as mean ± standard deviation (SD), while categorical variables are presented as percentages. Comparisons by municipality size and by category of blood pressure or glycemic control were performed using standard parametric tests (ANOVA for continuous variables and chi-square for categorical variables).
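As a hedged illustration of the categorical comparisons mentioned above, the following sketch computes a generic Pearson chi-square statistic for a contingency table (this is not the SPSS routine actually used in the study; names and counts are hypothetical):

#include <cstdio>
#include <vector>
using namespace std;

// Pearson chi-square statistic for an r x c table of observed counts.
// Compare against a chi-square distribution with (r-1)(c-1) degrees of freedom.
double chiSquare(const vector<vector<double>>& obs) {
    size_t r = obs.size(), c = obs[0].size();
    vector<double> rowSum(r, 0.0), colSum(c, 0.0);
    double total = 0.0;
    for (size_t i = 0; i < r; i++)
        for (size_t j = 0; j < c; j++) {
            rowSum[i] += obs[i][j];
            colSum[j] += obs[i][j];
            total += obs[i][j];
        }
    double stat = 0.0;
    for (size_t i = 0; i < r; i++)
        for (size_t j = 0; j < c; j++) {
            double expected = rowSum[i] * colSum[j] / total;
            double d = obs[i][j] - expected;
            stat += d * d / expected;
        }
    return stat;
}

int main() {
    // hypothetical counts: controlled / not controlled by municipality size
    vector<vector<double>> table = {{50, 60, 70}, {100, 90, 80}};
    printf("chi-square = %.3f\n", chiSquare(table));
}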
Since the sampling procedure gave equal weighting to municipalities regardless of size, crude means and percentages of the total sample were corrected to make them representative of the whole state. The distribution of hypertensive and diabetic subjects from the SERVIDIAH sample in small, medium-sized and large municipalities was 25.3%, 36.3% and 38.4%, respectively. The distribution of FHS registered hypertensive and diabetic subjects in small, medium-sized and large municipalities in the state of Pernambuco, reported on the website of the Brazilian Ministry of Health in December 2009, was 16.9%, 43.6% and 39.5%, respectively. The correction coefficients were calculated by dividing the municipality size percentage given for the entire State of Pernambuco by the relevant percentage given by the SERVIDIAH sample, resulting in the following values: 0.169/0.253 = 0.668 for small municipalities, 0.436/0.363 = 1.201 for medium-sized municipalities, and 0.395/0.384 = 1.029 for large municipalities. These coefficients were used to calculate the means and percentages of the entire state using the "weight" command of the SPSS statistics module (SPSS Inc., Chicago, USA).
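The arithmetic behind these correction coefficients is simple enough to verify directly; a minimal sketch (the percentages are the ones quoted above):

#include <cstdio>

int main() {
    // FHS-registered distribution in the state vs. distribution in the SERVIDIAH sample
    double state[3]  = {0.169, 0.436, 0.395};  // small, medium-sized, large
    double sample[3] = {0.253, 0.363, 0.384};
    const char* label[3] = {"small", "medium-sized", "large"};
    for (int k = 0; k < 3; k++) {
        // each subject is weighted by (state share) / (sample share)
        printf("%s: %.3f\n", label[k], state[k] / sample[k]);
    }
    // prints 0.668, 1.201 and 1.029, matching the coefficients in the text
}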
All statistical analyses were performed using SPSS version 19.
Results
A total of 785 hypertensive and 823 diabetic subjects were interviewed across the 35 municipalities included in the SERVIDIAH study. Mean age was around 60 years in both samples, and women represented more than two-thirds of each. Almost 40% of the sample had no formal schooling and received an income below the minimum wage (R$545 at the time of the study, equivalent to approximately US$310) (Table 1). Most subjects had no formal employment: 942 were retired or had a pension, and 294 were housewives or students. The following statistically significant differences between municipalities of different sizes were found: the education level of both hypertensive and diabetic subjects was highest in large municipalities and lowest in small municipalities; the proportion of diabetic subjects reporting an income below the minimum wage was greater in large municipalities; and the proportion of hypertensive subjects with no formal employment was lowest in large municipalities.
The clinical characteristics of the diseases are shown in Table 2. For both hypertensive and diabetic subjects, the known duration of disease was around nine years. Over 90% of subjects were using some form of pharmacological treatment for the disease; the proportion of hypertensive subjects using some form of pharmacological treatment was significantly greater in large municipalities, and significantly smaller in small municipalities. Insulin treatment, alone or in combination, was used by 15.6% of the diabetic subjects. Adherence to a low-salt diet among hypertensive subjects and diabetic subjects with hypertension was high (88.1%) and adherence rates increased with decreasing municipality size. Adherence to a low-sugar diet in diabetic subjects was also high (87.3%), and adherence rates also increased with decreasing municipality size. On the other hand, the adherence rate for weight-loss diets was low (16.1% in hypertensive subjects and 13.6% among diabetic subjects). Adherence to weight-loss diets among diabetic subjects appeared to be greater in large municipalities.
With regard to the control of cardiometabolic risk factors, Table 3 shows that blood pressure was below 140/90mmHg in 43.7% of the hypertensive subjects, and over the recommended threshold of 130/80mmHg in 74.2% of the diabetic subjects. Control of blood pressure appeared to improve with increasing municipality size among both hypertensive and diabetic subjects. HbA1c levels were below the recommended threshold in 30.5% of diabetic subjects, indicating good glycemic control; no relation was found between glycemic control and municipality size. The prevalence of overweight and obesity was extremely high in both samples (74.7% in hypertensive subjects and 73.5% in diabetic subjects). Mean BMI in hypertensive and diabetic subjects was 28.8 ± 5.6 and 28.4 ± 5.3kg/m2, respectively. Approximately 70% of the total sample did not engage in any physical activity; the proportion of physically inactive individuals appeared to be slightly lower in large municipalities (64%). Smoking was reported by 13% of hypertensive and diabetic subjects.
Discussion
There is little research on the extent of risk factor control among hypertensive and diabetic patients in Brazil, whether at the primary care level or at more specialized levels. The SERVIDIAH study gives a picture of risk factor control at the primary care level in a representative sample of hypertensive and diabetic subjects registered with the FHS in the State of Pernambuco. As in similar studies 11,12,13, the participants' income and education levels were generally low. The low rate of formal employment can be partly accounted for by the high mean age of the sample, with its high proportion of retirees, and by the over-representation of women, many of whom declared themselves housewives. Although common in studies at the primary care level 11,12,13,15,16,17,18,19, the over-representation of women in this sample is contrary to expectations, since the prevalence of hypertension and diabetes does not differ greatly between men and women 14. The greater number of women can be explained by the fact that overweight and obesity, prominent risk factors for both conditions, are generally more prevalent in women, especially those with low socio-economic status 20. Another possible explanation is that women are known to use medical services more frequently than men 21, so the under-representation of men may indicate the lack of an active search for male cases of hypertension and diabetes in the community.
The proportion of hypertensive subjects with blood pressure below 140/90mmHg found by our study (43.7%) is somewhat higher than in most other studies of samples in a primary care setting, which observed rates of around 30% 11,18,22. Only Sousa et al. 13 found a similar rate (40.9%) in a study carried out in a small municipality in the South of Brazil. Nobre et al. 23 found a slightly higher proportion of patients with well-controlled hypertension (46.5%); however, this study was conducted in specialized cardiological centers. The proportion of diabetics with blood pressure below the recommended threshold for type 2 diabetes was 25.8%, showing that blood pressure control was much less satisfactory in diabetic subjects. We could not find other studies which investigated the extent of blood pressure control in diabetic subjects at the primary care level. In a study of elderly subjects attending specialized centers, Silva et al. 24 found that 46% of type 2 diabetics had blood pressure below 130/85mmHg. The findings of Gomes et al. 25 were similar to ours (28.5% of subjects with systolic blood pressure below 130mmHg and 19.3% with diastolic blood pressure below 90mmHg). Outside Brazil, the few studies that have assessed blood pressure control in type 2 diabetic subjects show diverging results. In a sample of subjects attending general practices in Belgium, only 13% of the diabetic patients were found to have well-controlled blood pressure (< 130/80mmHg) 26, whereas in the 1999-2000 US NHANES Study 35.8% of subjects achieved adequate blood pressure control 27.
Blood glucose control in diabetic subjects from the SERVIDIAH study was unsatisfactory, since only 30.5% of subjects were found to have HbA1c levels below the 7% threshold. The proportion of diabetic patients with well-controlled glucose varies greatly from study to study. In a study of subjects attending a primary care health unit in São Paulo, Brazil, Silva 18 found that 42% of diabetic patients had HbA1c levels below the 6.5% threshold, which is similar to the rate found by a national survey of specialized centers (46% of patients with HbA1c levels below the 7% threshold) 25. On the other hand, another national survey of specialized centers found that only 27% of patients were below the 7% threshold 28, and in a study of a sample attending a rehabilitation center, 24% of diabetic patients had HbA1c levels below 8% 24. The results of studies outside Brazil vary greatly, ranging from a minimum of 16.7% of patients below the 7% threshold in a study of primary care units in Tunisia 29, to a maximum of 55.7% in the 2003-2004 US NHANES Study 30, with intermediate values observed in British 31, French 32 and the above-mentioned Belgian 26 studies (24%, 40% and 49%, respectively).
Since hypertensive and diabetic subjects are at very high risk of cardiovascular complications, control of other cardiovascular risk factors should be an important part of healthcare. However, the SERVIDIAH study shows that control of other cardiovascular risk factors is less than satisfactory. The high prevalence of overweight and obesity is a general finding in most studies of hypertensive and/or type 2 diabetic patients 15,16,18,28,33. It is also striking to note the low proportion of subjects that adhere to weight-loss diets (16.1% of hypertensive subjects and 13.6% of diabetic subjects). In a study of a sample of hypertensive and diabetic subjects from five municipalities located in the Baixada Santista in the South-east Region of Brazil, Bersusa et al. 34 found that more than 80% of subjects made no efforts to maintain or lose weight and that around 70% of subjects engaged in no leisure-time physical activity. This common finding in studies of this age group 11,34,35,36 highlights the need for measures to protect against cardiovascular disease, such as incentives to participate in community programs like the well-known Academia da Cidade program in Brazil 37, and the inclusion of physical educators and physiotherapists in primary care settings.
Diabetes control did not seem to be related to municipality size. However, control of blood pressure among both hypertensive and diabetic subjects improved with increasing municipality size. This may be related to other significant differences between large, medium-sized and small municipalities, such as education level, a factor known to influence health outcomes 38. The management of hypertension itself also differed slightly by municipality size, with use of antihypertensive medication among hypertensive subjects increasing with municipality size. The proportion of subjects who adhered to weight-loss diets and were physically active was also higher in large municipalities. On the other hand, adherence to low-salt diets increased with decreasing municipality size. This may reflect a real difference in quality of care depending on municipality size, pointing to a possible problem related to the decentralization of the health service, whereby small municipalities find it more difficult to organize an adequate response to the health challenges facing their populations 39,40.
One of the main limitations of the SERVIDIAH study is the fact that much of the data collection depended on the subjects' own declarations. Although the questionnaire was tested to ensure correct understanding of the questions and reliability of the answers, the existence of biases or uncertainties cannot be ruled out. The main strength of the study is that every effort was made to make the sample representative of hypertensive and diabetic users of the FHS in the State of Pernambuco, and the high response rate and absence of substitution of dropouts suggest that this was achieved.
In conclusion, this analysis of the SERVIDIAH database highlights a number of important aspects of the control of hypertension, diabetes and other cardiovascular risk factors in subjects attending the FHS in the State of Pernambuco. These include the following: the low proportion of men in the sample may reflect insufficient efforts to find male cases of hypertension and diabetes in the community; on the whole, the results regarding control are comparable to the findings of other studies inside and outside Brazil, in primary care and more specialized care settings, showing that control is unsatisfactory; and the extent of control may be influenced by municipality size. Further research is required to identify which elements of care have the greatest impact on the control of risk factors and on the health status of hypertensive and diabetic patients attended by the FHS, in order to guide actions to improve patient care and prevent complications.
Contributors
A. Fontbonne, E. A. P. Cesse and E. F. Carvalho were responsible for study design, organized and supervised data collection, analysis and interpretation, and drafted this article. I. M. C. Sousa, A. F. B. Bezerra and V. L. V. Chaves contributed substantially to study conception, data collection, and interpretation of the results presented in this article, critically revised the paper for important intellectual content, and approved the final version of this article. W. V. Souza contributed substantially to study conception and statistical analysis, critically revised the paper for important intellectual content, and approved the final version of this article.
Acknowledgments
The authors wish to thank the institutions which funded the SERVIDIAH study, the administrators and technicians of the Family Health Strategy in the participating municipalities in the State of Pernambuco, the Public Health Residents and Master's students, and all personnel who contributed to data collection, data management and statistical analysis. We are also grateful to FACEPE, CNPq, Fiocruz and IRD for their financial support.
References
1. Yach D, Kellogg M, Voute J. Chronic diseases: an increasing challenge in developing countries. Trans R Soc Trop Med Hyg 2005; 99:321-4.
2. Williams R, Van Gaal L, Lucioni C; CODE-2 Advisory Board. Assessing the impact of complications on the costs of type II diabetes. Diabetologia 2002; 45:S13-7.
3. Sociedade Brasileira de Cardiologia; Sociedade Brasileira de Hipertensão; Sociedade Brasileira de Nefrologia. VI diretrizes brasileiras de hipertensão. Arq Bras Cardiol 2010; 95 Suppl 1:1-51.
4. American Diabetes Association. Standards of medical care in diabetes 2011. Diabetes Care 2011; 34 Suppl 1:S11-61.
5. Bodenheimer T, Wagner EH, Grumbach K. Improving primary care for patients with chronic illness: the chronic care model, Part 2. JAMA 2002; 288:1909-14.
6. Rothman AA, Wagner EH. Chronic illness management: what is the role of primary care? Ann Intern Med 2003; 138:256-61.
7. Secretaria de Políticas Públicas, Ministério da Saúde. Plano de reorganização da atenção à hipertensão arterial e ao diabetes mellitus. Rev Saúde Pública 2001; 35:585-8.
8. Rocha A, Soares R. Evaluating the impact of community-based health interventions: evidence from Brazil's Family Health Program. Health Econ 2010; 19:126-58.
9. Paim J, Travassos C, Almeida C, Bahia L, Macinko J. The Brazilian health system: history, advances, and challenges. Lancet 2011; 377:1778-97.
10. World Health Organization. Report of a WHO consultation on obesity: preventing and managing the global epidemic. Geneva: World Health Organization; 1998.
11. Santa Helena ET, Nemes MIB, Eluf-Neto J. Avaliação da assistência a pessoas com hipertensão arterial em unidades de Estratégia de Saúde da Família. Saúde Soc 2010; 19:614-26.
12. Paiva DCP, Bersusa AAS, Escuder MML. Avaliação da assistência ao paciente com diabetes e/ou hipertensão pelo Programa Saúde da Família do Município de Francisco Morato, São Paulo, Brasil. Cad Saúde Pública 2006; 22:377-85.
13. Sousa LB, Souza RKT, Scochi MJ. Hipertensão arterial e saúde da família: atenção aos portadores em município de pequeno porte na Região Sul do Brasil. Arq Bras Cardiol 2006; 87:496-503.
14. Sandberg K, Ji H. Sex differences in primary hypertension. Biol Sex Differ 2012; 3:7.
15. Borba TB, Muniz RM. Sobrepeso em idosos hipertensos e diabéticos cadastrados no Sistema HiperDia da unidade básica de saúde do Simões Lopes, Pelotas, RS, Brasil. Journal of Nursing and Health 2011; 1:69-76.
16. Ferreira CLRA, Ferreira MG. Características epidemiológicas de pacientes diabéticos da rede pública de saúde: análise a partir do Sistema HiperDia. Arq Bras Endocrinol Metab 2009; 53:80-6.
17. Carlos PR, Palha PF, Veiga EV, Beccaria LM. Perfil de hipertensos em um núcleo de saúde da família. Arq Ciênc Saúde 2008; 15:176-81.
18. Silva RT. Controle de diabetes mellitus e hipertensão arterial com grupos de intervenção educacional e terapêutica em segmento ambulatorial de uma unidade básica de saúde. Saúde Soc 2006; 15:180-9.
19. Borges PCS, Caetano JC. Abandono do tratamento da hipertensão arterial sistêmica dos pacientes cadastrados no Hiperdia/MS em uma unidade de saúde do município de Florianópolis-SC. ACM Arq Catarin Med 2005; 34:45-50.
20. Monteiro CA, Moura EC, Conde WL, Popkin BM. Socioeconomic status and obesity in adult populations of developing countries: a review. Bull World Health Organ 2004; 82:940-6.
21. Figueiredo W. Assistência à saúde dos homens: um desafio para os serviços de atenção primária. Ciênc Saúde Coletiva 2005; 10:105-9.
22. Araújo JC, Guimarães AC. Controle da hipertensão arterial em uma unidade de saúde da família. Rev Saúde Pública 2007; 41:368-74.
23. Nobre F, Ribeiro AB, Mion Jr. D. Control de la presión arterial en pacientes bajo tratamiento antihipertensivo en Brasil: Controlar Brasil. Arq Bras Cardiol 2010; 94:645-52.
24. Silva RCP, Simões MJS, Leite AA. Fatores de risco para doenças cardiovasculares em idosos com diabetes mellitus tipo 2. Rev Ciênc Farm Básica Apl 2007; 28:113-21.
25. Gomes MB, Gianella D, Faria M, Tambascia M, Fonseca RM, Réa R, et al. Prevalence of type 2 diabetic patients within the targets of care guidelines in daily clinical practice: a multi-center study in Brazil. Rev Diabetic Stud 2006; 3:82-7.
26. Wens J, Gerard R, Vandenberghe H. Optimizing diabetes care regarding cardiovascular targets at general practice level: Direct@GP. Prim Care Diabetes 2011; 5:19-24.
27. Saydah SH, Fradkin J, Cowie CC. Poor control of risk factors for vascular disease among adults with previously diagnosed diabetes. JAMA 2004; 291:335-42.
28. Mendes ABV, Fittipaldi JAS, Neves RCS, Chacra AR, Moreira Jr. ED. Prevalence and correlates of inadequate glycaemic control: results from a nationwide survey in 6,671 adults with diabetes in Brazil. Acta Diabetol 2010; 47:137-45.
29. Ben Abdelaziz A, Soltane I, Gaha K, Thabet H, Tlili H, Ghannem H. Predictive factors of glycemic control in patients with type 2 diabetes mellitus in primary health care. Rev Epidémiol Santé Publique 2006; 54:443-52.
30. Hoerger TJ, Segel JE, Gregg EW, Saaddine JB. Is glycemic control improving in U.S. adults? Diabetes Care 2008; 31:81-6.
31. Fox KM, Gerber Pharmd RA, Bolinder B, Chen J, Kumar S. Prevalence of inadequate glycemic control among patients with type 2 diabetes in the United Kingdom general practice research database: a series of retrospective analyses of data from 1998 through 2002. Clin Ther 2006; 28:388-95.
32. Marant C, Romon I, Fosse S, Weill A, Simon D, Eschwège E, et al. French medical practice in type 2 diabetes: the need for better control of cardiovascular risk factors. Diabetes Metab 2008; 34:38-45.
33. Gomes MB, Giannella Neto D, Mendonça E, Tambascia MA, Fonseca RM, Réa RR, et al. Prevalência de sobrepeso e obesidade em pacientes com diabetes mellitus do tipo 2 no Brasil: Estudo Multicêntrico Nacional. Arq Bras Endocrinol Metab 2006; 50:136-44.
34. Bersusa AAS, Pascalicchio AE, Pessoto UC, Escuder MML. Acesso a serviços de saúde na Baixada Santista de pessoas portadoras de hipertensão arterial e/ou diabetes. Rev Bras Epidemiol 2010; 13:513-22.
35. Alves JGB, Siqueira FV, Figueiroa JN, Facchini LA, Silveira DS, Piccini RX, et al. Prevalência de adultos e idosos insuficientemente ativos moradores em áreas de unidades básicas de saúde com e sem Programa Saúde da Família em Pernambuco, Brasil. Cad Saúde Pública 2010; 26:543-56.
36. Secretaria de Gestão Estratégica e Participativa, Secretaria de Vigilância em Saúde, Ministério da Saúde. Vigitel Brasil 2010: vigilância de fatores de risco e proteção para doenças crônicas por inquérito telefônico. Brasília: Ministério da Saúde; 2011.
37. Simões EJ, Hallal P, Pratt M, Ramos L, Munk M, Damascena W, et al. Effects of a community-based, professionally supervised intervention on physical activity levels among residents of Recife, Brazil. Am J Public Health 2009; 99:68-75.
38. Cowell AJ. The relationship between education and health behavior: some empirical evidence. Health Econ 2006; 15:125-46.
39. Collins C, Green A. Decentralization and primary health care: some negative implications in developing countries. Int J Health Serv 1994; 24:459-75.
40. Atkinson S, Cohn A, Ducci ME, Fernandes L, Smyth F. Promotion and prevention within a decentralized framework: changing health care in Brazil and Chile. Int J Health Plann Manage 2008; 23:153-71.
Correspondence
E. A. P. Cesse
Departamento de Saúde Coletiva, Centro de Pesquisas Aggeu Magalhães, Fundação Oswaldo Cruz
Av. Moraes Rego s/n
Recife, PE 50670-420, Brasil
educesse@cpqam.fiocruz.br
Submitted on 10/Sep/2012
Final version resubmitted on 03/Jan/2013
Approved on 31/Jan/2013
Escola Nacional de Saúde Pública Sergio Arouca, Fundação Oswaldo Cruz, Rio de Janeiro, RJ, Brazil
https://dsp.stackexchange.com/questions/46389/second-moment-properties-covariance-of-lag-window-estimator | # Second moment properties (covariance) of lag window estimator
We know that the lag window estimator of the spectral density is defined as: $$S^{(lw)}(f)=\int_{-f_{(N)}}^{f_{(N)}}W_m(f-\phi)\hat{S}^{(p)}(\phi)\,d\phi=\sum\limits_{\tau=-(N-1)}^{N-1}w_{\tau,m}\,s^{(p)}_{\tau}\,e^{-i2\pi f\tau},$$ where $W_m(f)$ (the spectral window) is the Fourier transform of $w_{\tau,m}$ (the lag window), and $\hat{S}^{(p)}$ (the periodogram) is the Fourier transform of $s^{(p)}_{\tau}$ (the sample autocovariance sequence).
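For concreteness, assuming the usual conventions for these quantities, $s^{(p)}_{\tau}$ is the biased estimator of the autocovariance sequence, $$s^{(p)}_{\tau}=\frac{1}{N}\sum\limits_{t=1}^{N-|\tau|}\left(x_{t+|\tau|}-\bar{x}\right)\left(x_{t}-\bar{x}\right),\qquad |\tau|\le N-1,$$ so that $\hat{S}^{(p)}(f)=\sum_{\tau=-(N-1)}^{N-1}s^{(p)}_{\tau}\,e^{-i2\pi f\tau}$.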
The covariance between $S^{(lw)}(f)$ and $S^{(lw)}(f')$ is approximately $0$ if $|f-f'|$ is larger than the width (or bandwidth) of the spectral window $W_m$. But suppose we write the following: $$cov\{S^{(lw)}(f),S^{(lw)}(f')\}=E\{S^{(lw)}(f)S^{(lw)}(f')\}-S(f)S(f'),$$ where $S(f)$ is the true spectral density at frequency $f$. We can then write: $$E\{S^{(lw)}(f)S^{(lw)}(f')\}=\frac{1}{N}\sum\limits_{\phi,\phi'}W_m(f-\phi)W_m(f'-\phi')E\{\hat{S}^{(p)}(\phi)\hat{S}^{(p)}(\phi')\}$$ $$=\frac{1}{N}\sum\limits_{\phi,\phi'}W_m(f-\phi)W_m(f'-\phi')E\{\hat{S}^{(p)}(\phi)\}E\{\hat{S}^{(p)}(\phi')\}=S(f)S(f'),$$ because we know that the periodogram ordinates are uncorrelated for $|\phi-\phi'|>\frac{1}{N}$. From what I have just written, the ordinates of the lag window estimator are uncorrelated only if the periodogram ordinates are uncorrelated ($|f-f'|>\frac{1}{N}$), which is inconsistent with the result that we already know. Where am I mistaken?
• Hi Tony: I'm not clear on how the last line is obtained, but I think you are assuming that the lag window estimator is an unbiased estimator of the true spectral density, which might not be an OK assumption. Similarly, the equation two lines up from the last also assumes the same thing, namely that the expected value of the lag window estimator is equal to the true spectral density. I haven't entered the spectral world much yet (I'm self-studying DSP), but I know that time domain estimates of the spectral density are prone to problems, so the unbiasedness assumption may not be valid. – mark leeds Jan 18 '18 at 9:09
https://www.tutorialspoint.com/revolving-door-in-cplusplus | # Revolving Door in C++
Suppose we have a list of requests, where requests[i] contains [t, d], indicating that at time t a person arrived at the door and either wanted to go inside (d = 1) or go outside (d = 0).
There is only one door and it takes one time unit to use it, and there are a few rules we have to follow −
• The door starts in the 'in' position, and afterwards it stays in the direction used by the last participant.
• If there is only one participant at the door at a given time t, they can use the door.
• If two or more participants want to use the door, the earliest participant goes first; among participants arriving at the same time, the direction previously used takes precedence.
• If no one uses the door for one time unit, it reverts to its initial ('in') state.
So, we have to find the sorted list where each element contains [t, d], indicating that at time t a person either went inside or outside.
So, if the input is like [[2,0],[3,1],[6,0],[6,1],[3,0]], then the output will be [[2,0],[3,0],[4,1],[6,1],[7,0]]. To see why: at time 3 one person wants to go out and one wants to go in; the door was last used to go out (at time 2), so the exit is served at time 3 and the entry at time 4. The door then sits idle at time 5 and reverts to 'in', so of the two arrivals at time 6 the entry is served first (at time 6) and the exit at time 7.
To solve this, we will follow these steps −
• sort the array v
• create one empty list ret
• curr := 1, i := 0, j := 0, n := size of v
• while i < n, do:
   • if ret is not empty and v[i, 0] - (time of last element of ret) > 1, then curr := 1 (the door has been idle, so it reverts to 'in')
   • j := i + 1
   • define an array arr of size 2, initialized to zero, and increase arr[v[i, 1]] by 1
   • while j < n and v[j, 0] is same as v[i, 0], do: increase arr[v[j, 1]] by 1 and increase j by 1
   • t := maximum of (0 if ret is empty, otherwise time of last element of ret plus 1) and v[i, 0]
   • if arr[1] and arr[0] are both non-zero (requests in both directions at the same time), then:
      • while arr[curr] is non-zero (decreasing arr[curr] by one in each step): insert {t, curr} at the end of ret and increase t by 1
      • curr := curr XOR 1
      • while arr[curr] is non-zero (decreasing arr[curr] by one in each step): insert {t, curr} at the end of ret and increase t by 1
   • otherwise:
      • curr := v[i, 1]
      • while arr[curr] is non-zero (decreasing arr[curr] by one in each step): insert {t, curr} at the end of ret and increase t by 1
   • curr := direction of last element of ret
   • i := j
• return ret
Let us see the following implementation to get a better understanding −
## Example
#include <bits/stdc++.h>
using namespace std;

// Helper that prints a vector of vectors in the [[a, b, ],...] format shown below
void print_vector(const vector<vector<int>> &v) {
   cout << "[";
   for (int i = 0; i < v.size(); i++) {
      cout << "[";
      for (int j = 0; j < v[i].size(); j++) {
         cout << v[i][j] << ", ";
      }
      cout << "],";
   }
   cout << "]" << endl;
}

class Solution {
public:
   vector<vector<int>> solve(vector<vector<int>> &v) {
      sort(v.begin(), v.end());   // process requests in time order
      vector<vector<int>> ret;
      int curr = 1;               // the door starts in the 'in' position
      int i = 0;
      int j = 0;
      int n = v.size();
      while (i < n) {
         // if at least one time unit passed with no one using the door,
         // the door reverts to the 'in' position
         if (!ret.empty() && v[i][0] - ret.back()[0] > 1) {
            curr = 1;
         }
         j = i + 1;
         vector<int> arr(2);      // arr[d] = number of requests in direction d at this time
         arr[v[i][1]]++;
         while (j < n && v[j][0] == v[i][0]) {
            arr[v[j][1]]++;
            j++;
         }
         // earliest time this group of requests can start using the door
         int t = max((ret.empty() ? 0 : ret.back()[0] + 1), v[i][0]);
         if (arr[1] && arr[0]) {
            // requests in both directions: the current direction goes first
            while (arr[curr]--) {
               ret.push_back({t, curr});
               t++;
            }
            curr = curr ^ 1;      // then serve the opposite direction
            while (arr[curr]--) {
               ret.push_back({t, curr});
               t++;
            }
         } else {
            // requests in one direction only
            curr = v[i][1];
            while (arr[curr]--) {
               ret.push_back({t, curr});
               t++;
            }
         }
         curr = ret.back()[1];    // the door keeps the direction last used
         i = j;
      }
      return ret;
   }
};

int main() {
   vector<vector<int>> v = {{2, 0}, {3, 1}, {6, 0}, {6, 1}, {3, 0}};
   Solution ob;
   print_vector(ob.solve(v));
}
## Input
{{2, 0},{3, 1},{6, 0},{6, 1},{3, 0}}
## Output
[[2, 0, ],[3, 0, ],[4, 1, ],[6, 1, ],[7, 0, ],]
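As a quick sanity check of the rule that the door starts in the 'in' position (a hypothetical extra test, not part of the original exercise), one can replace the input in main with:

vector<vector<int>> v = {{1, 1}, {1, 0}};  // both people arrive at time 1

Here the person going in is served first, so the program prints [[1, 1, ],[2, 0, ],]: the entry at time 1 and the exit at time 2.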